| column         | type   | min length | max length |
|----------------|--------|------------|------------|
| id             | string | 10         | 10         |
| title          | string | 7          | 231        |
| abstract       | string | 3          | 2.43k      |
| authors        | string | 5          | 21.5k      |
| published_date | string | 20         | 20         |
| link           | string | 33         | 34         |
| markdown       | string | 133        | 1.92M      |
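The records below follow this schema, one arXiv paper per row. As a quick illustration of how a dataset with these columns could be consumed, here is a minimal sketch using the Hugging Face `datasets` library; the repository path `user/arxiv-markdown` is a placeholder, not the actual dataset name.

```python
# Minimal sketch: stream a few records from an arXiv-papers dataset with the
# columns listed above. "user/arxiv-markdown" is a placeholder repository path.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("user/arxiv-markdown", split="train", streaming=True)

for record in islice(ds, 3):
    print(record["id"], "-", record["title"])
    print(record["published_date"], record["link"])
    print(record["abstract"][:200])                      # abstracts range from 3 to ~2.43k chars
    print("markdown length:", len(record["markdown"]))   # full text, up to ~1.92M chars
```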
2309.10424
Functional requirements to mitigate the Risk of Harm to Patients from Artificial Intelligence in Healthcare
The Directorate General for Parliamentary Research Services of the European Parliament has prepared a report to the Members of the European Parliament where they enumerate seven main risks of Artificial Intelligence (AI) in medicine and healthcare: patient harm due to AI errors, misuse of medical AI tools, bias in AI and the perpetuation of existing inequities, lack of transparency, privacy and security issues, gaps in accountability, and obstacles in implementation. In this study, we propose fourteen functional requirements that AI systems may implement to reduce the risks associated with their medical purpose: AI passport, User management, Regulation check, Academic use only disclaimer, data quality assessment, Clinicians double check, Continuous performance evaluation, Audit trail, Continuous usability test, Review of retrospective/simulated cases, Bias check, eXplainable AI, Encryption and use of field-tested libraries, and Semantic interoperability. Our intention here is to provide specific high-level specifications of technical solutions to ensure continuous good performance and use of AI systems to benefit patients in compliance with the future EU regulatory framework.
Juan M. García-Gómez, Vicent Blanes-Selva, José Carlos de Bartolomé Cenzano, Jaime Cebolla-Cornejo, Ascensión Doñate-Martínez
2023-09-19T08:37:22Z
http://arxiv.org/abs/2309.10424v1
Functional requirements to mitigate the Risk of Harm to Patients from Artificial Intelligence in Healthcare ###### Abstract The Directorate General for Parliamentary Research Services of the European Parliament has prepared a report to the Members of the European Parliament where they enumerate seven main risks of Artificial Intelligence (AI) in medicine and healthcare: patient harm due to AI errors, misuse of medical AI tools, bias in AI and the perpetuation of existing inequities, lack of transparency, privacy and security issues, gaps in accountability, and obstacles in implementation. In this study, we propose fourteen functional requirements that AI systems may implement to reduce the risks associated with their medical purpose: AI passport, User management, Regulation check, Academic use only disclaimer, Data quality assessment, Clinicians double check, Continuous performance evaluation, Audit trail, Continuous usability test, Review of retrospective/simulated cases, Bias check, eXplainable AI, Encryption and use of field-tested libraries, and Semantic interoperability. Our intention here is to provide specific high-level specifications of technical solutions to ensure continuous good performance and use of AI systems to benefit patients in compliance with the future EU regulatory framework. Keywords: Artificial Intelligence in Healthcare; Regulation; Risk of Patients' Harm; Lack of Transparency; Automatic auditing; Functional requirements of software ## 1 Introduction The European AI Strategy aims at ensuring that AI is human-centric and trustworthy. The EU has been actively working on a regulatory framework that treats AI as a tool to benefit people and society. In this line, the European Commission proposed in 2021 the AI Act[1] for regulating artificial intelligence applications based on their risk of causing harm, with the spirit of strengthening rules around data quality, transparency, human oversight, and accountability of products based on AI. The AI Act includes as its main goal addressing ethical questions and implementation challenges in the healthcare sector. Added to this regulation, the 2022 proposals of the AI Liability Directive[2] and the Product Liability Directive[3] aim to establish a harmonised regime for dealing with consumer claims for damage caused by AI products and services. Some authors criticise that this set of regulations impacts the liability for patient harm due to AI-based medical devices, so _patients may not be able to successfully sue manufacturers or healthcare providers for some injuries caused by black-box medical AI systems under either EU Member States' strict or fault-based liability_ laws[4]. Taking into consideration the novelty that AI may bring to healthcare and the potential gaps in current and future legislation in Europe, the Directorate General for Parliamentary Research Services of the European Parliament has prepared a report[5] to the Members of the European Parliament where they enumerate seven main risks of AI in medicine and healthcare: 1) patient harm due to AI errors, 2) the misuse of medical AI tools, 3) bias in AI and the perpetuation of existing inequities, 4) lack of transparency, 5) privacy and security issues, 6) gaps in accountability, and 7) obstacles in implementation. In this document, the European Parliament Research Service (EPRS) associated each risk with a set of specific sources of uncertainty or deterioration of AI performance that would require mitigation actions to avoid potential harm to patients. 
In this study, we have evaluated the functional requirements that AI systems may implement to reduce the risks associated with their medical purpose as defined by the Directorate General for Parliamentary Research Services[5]. Our intention here is to provide specific high-level specifications of technical solutions to ensure continuous good performance and use of AI systems to benefit patients in compliance with the future EU regulatory framework. In brief, we have conceived a set of fourteen risk-mitigation functional requirements that an implementation of an AI system may comply with. In the next section, we define these functional requirements and link them to the mitigation actions to reduce the risk of patients' harm. ## 2 Functional requirements for Artificial Intelligence systems to mitigate risks of patients' harm In this section, we map the proposed functional requirements to the mitigation actions proposed by the Directorate General for Parliamentary Research Services[5] to reduce the seven risks of patient harm. To do that, we reviewed the sources of uncertainty and deterioration of AI systems for healthcare associated with each risk, justifying how the functional requirements may help to reduce them. Figure 1 shows a graphical summary of the mapping between the risks of harm (in red) and risk-mitigation functional requirements (in pink and violet), through sources of uncertainty (in orange) and their mitigation actions (in blue). In the figure, we link each of the sources of risk to the set of functional requirements as well as to the mitigation actions. As a result, 1) the potential harm due to AI errors can be mitigated by implementing data quality assessment and continuous evaluation of the AI models; the results of these evaluations can be included in the AI passport so that they can be checked during the clinicians' double-check before using the model; 2) the misuse of medical AI tools can be tackled by using the AI passport to describe the tool appropriately, supplying user management, and monitoring how the tools are used through the continuous usability test; 3) to control the bias present in the models, a thorough dataset description is needed within the AI passport and the bias check, and both sources, as declared by the developers of the tools, should be available during the clinicians' double-check; 4) the lack of transparency within the AI models can be addressed through a combination of AI passport, User management, Academic use only disclaimer, Clinicians double check, Audit trail, Review of retrospective/simulated cases, Bias check, and AI explainability; 5) privacy and security issues can be addressed by User management and by using Encryption and field-tested libraries; 6) the gaps in accountability produced by the use of these AI tools can be mitigated by using AI passport, User management with roles, Regulation check, Academic use only disclaimer, Clinicians double check, Audit trail, Bias check, eXplainable Artificial Intelligence (XAI), and Encryption and field-tested libraries; 7) finally, the obstacles in implementation can be overcome by Data quality assessment, Clinicians double check, Continuous performance evaluation, Continuous usability testing, Bias check, Semantic interoperability and XAI. The long rationale for the mapping between sources of risk and functional requirements can be found in the Supplementary Material (Appendix A). 
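The seven mappings above can be written down compactly; the following minimal sketch restates them as a lookup table (the dictionary is our own summary of the text, not an artifact from the paper):

```python
# Risk-to-requirement mapping as enumerated above; a convenience summary only.
RISK_MITIGATIONS = {
    "Patient harm due to AI errors": [
        "Data quality assessment", "Continuous performance evaluation",
        "AI passport", "Clinicians double check",
    ],
    "Misuse of medical AI tools": [
        "AI passport", "User management", "Continuous usability test",
    ],
    "Bias in AI and perpetuation of inequities": [
        "AI passport", "Bias check", "Clinicians double check",
    ],
    "Lack of transparency": [
        "AI passport", "User management", "Academic use only disclaimer",
        "Clinicians double check", "Audit trail",
        "Review of retrospective/simulated cases", "Bias check", "eXplainable AI",
    ],
    "Privacy and security issues": [
        "User management", "Encryption and field-tested libraries",
    ],
    "Gaps in accountability": [
        "AI passport", "User management with roles", "Regulation check",
        "Academic use only disclaimer", "Clinicians double check", "Audit trail",
        "Bias check", "eXplainable AI", "Encryption and field-tested libraries",
    ],
    "Obstacles in implementation": [
        "Data quality assessment", "Clinicians double check",
        "Continuous performance evaluation", "Continuous usability test",
        "Bias check", "Semantic interoperability", "eXplainable AI",
    ],
}

def requirements_for(risk: str) -> list[str]:
    """Look up the functional requirements mapped to a given risk."""
    return RISK_MITIGATIONS[risk]
```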
Next, we define the fourteen functional requirements we propose to mitigate the risk of harm to patients when using AI systems in healthcare. ### AI passport A complete statement, called the AI Passport, detailing the AI system's purpose, ethical declarations, context of use, training, and evaluation details, including potential biases due to the training datasets. Although its first version is created from the manufacturer's statements, it should dynamically include the results of the Continuous performance evaluation and Usability test over the system's lifetime. Figure 1: Graphical summary of the mapping (green balls) between the risks of harm (in red) and risk-mitigation functional requirements of AI in healthcare (in pink and violet), through the sources of uncertainty (in orange) and their mitigation actions (in blue). ### User management A software control based on accounts and roles to allow or deny access to different sections of the system and audit every action of the users when interacting with the AI system. Users should be logged in to access the functions of the AI system for which they have permission. ### Regulation check The AI system may actively check that it holds the certifications required to operate on a patient under the legislation applicable to its use. For example, AI systems may hold CE mark certification based on regulation EU 2017/745[6] to be used for clinical purposes in countries of the European Union. ### Academic use only disclaimer For those AI systems not certified for clinical purposes, an explicit disclaimer should be shown to the users to clearly state that they are using software that is not certified for medical purposes, so it is for academic use only. The user's agreement to this condition of use should be saved for future audits. ### Data quality assessment Data quality assessment of the training datasets and operational cases may prevent low performance or improper use of AI systems. This assessment may include, but is not limited to, completeness, consistency, uniqueness, correctness, temporal stability, multi-source stability, contextualisation, predictive value and reliability[7]. ### Clinicians double-check This consists of a confirmation by the user before sending a clinical case to an AI system. By this confirmation, the user agrees to use an AI-based system with its limitations of performance and interpretability. ### Continuous performance evaluation Continuous evaluation of the AI system's performance once it is deployed, as part of its post-market surveillance. This may require a mechanism to receive the ground truth of the case to be compared against the AI's prediction. ### Audit Trail A chronological record of all actions carried out by the users, including user id, timestamp, input, output, and unique identification of the version of the AI system. ### Continuous usability test Correct use of the AI system should be guaranteed along its full life cycle. This can be checked with periodic usability and user experience (UX) tests[8], using short questionnaires such as the System Usability Scale (SUS)[9] or the User Experience Questionnaire - Short version (UEQ-S)[10]. ### Review of cases Promoting continual learning[11] of users is key to correctly using AI systems. The display, prediction and comparison of retrospective and simulated cases may help healthcare professionals to keep their technical skills sharp and adapt to the AI's capabilities. 
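To make two of these requirements more tangible, the sketch below shows how an AI passport and an audit-trail entry might be represented as data structures, together with a simple input check against the units and ranges declared in the passport. All field names, types and the validation logic are our own illustrative assumptions; the paper specifies the requirements only at the functional level.

```python
# Illustrative sketch only: field names and validation logic are assumptions,
# not a specification taken from the paper.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIPassport:
    """Manufacturer statement, extended over the system's lifetime with evaluation results."""
    system_name: str
    version: str
    intended_purpose: str
    context_of_use: str
    input_ranges: dict[str, tuple[float, float, str]]  # variable -> (min, max, unit)
    training_data_summary: str
    known_biases: list[str]
    performance_reports: list[dict] = field(default_factory=list)  # from Continuous performance evaluation
    usability_reports: list[dict] = field(default_factory=list)    # from Continuous usability test

@dataclass
class AuditTrailEntry:
    """One chronological record per user action (Audit trail requirement)."""
    user_id: str
    timestamp: str
    action: str
    model_version: str
    input_payload: dict
    output_payload: dict

def check_inputs(passport: AIPassport, case: dict[str, float]) -> list[str]:
    """Flag variables outside the range/unit declared in the AI passport
    (e.g. creatinine reported in micromoles/L instead of mg/dL)."""
    issues = []
    for name, value in case.items():
        if name not in passport.input_ranges:
            issues.append(f"{name}: not declared in the AI passport")
            continue
        lo, hi, unit = passport.input_ranges[name]
        if not (lo <= value <= hi):
            issues.append(f"{name}={value}: outside declared range [{lo}, {hi}] {unit}")
    return issues

def log_action(trail: list[AuditTrailEntry], user_id: str, action: str,
               model_version: str, inputs: dict, outputs: dict) -> None:
    """Append a timestamped entry for every user interaction with the AI system."""
    trail.append(AuditTrailEntry(
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        action=action,
        model_version=model_version,
        input_payload=inputs,
        output_payload=outputs,
    ))
```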
### Bias check Avoiding biased decisions when using AI systems is conditioned on the training datasets used to create the AI models. Two options may apply here: 1) the manufacturer of the AI system states the potential bias of the predictions due to the training dataset limitations, or 2) the manufacturer reports the evaluation results of an AI bias test. ### eXplainable Artificial Intelligence A mechanism to explain how AI predictive models generate the output for a certain patient. This mechanism may reduce the lack of transparency, gaps in accountability and obstacles to a successful implementation in clinical scenarios[12, 13]. ### Encryption and field-tested libraries The use of cybersecurity best practices and worldwide accepted programming libraries is required to ensure a secure execution environment for the AI system. ### Semantic interoperability Integrating AI systems in clinical pathways may require exchanging health data with Electronic Health Records. Using open standards from health informatics, such as openEHR[14] and HL7[15], may simplify the semantic interoperability of the AI systems with the Health Information Systems of the healthcare organisation.

## 3 Use case: Risk-mitigation functional requirements for an AI system to assist in palliative care interventions

In this section, we review the risk-mitigation functional requirements for an AI system to assist decision-making on including non-cancer inpatients in palliative care interventions. The AI system predicts one-year quality of life and survival based on variables collected during the admission of patients over 65 years to the hospital. Our analysis is based on real experience developing AI-based solutions in projects with multidisciplinary EU consortia[16]. Table 1 describes eight potential situations where patients may be at risk due to the use of an AI system to assist with palliative care interventions. For each situation, we identify the functional requirements and the associated mitigation actions to reduce the risk.

Table 1: Risk-mitigation functional requirements for an AI system to assist decision-making on including non-cancer inpatients in palliative care interventions. The first and second columns describe a potential situation where a patient may come to harm, the third column lists the functional requirements that the AI system may implement to mitigate the risk, and the fourth column describes the mitigation action.

| Potential situation | Risk | Risk-mitigation functional requirement | Mitigation action |
|---|---|---|---|
| Laboratory results for creatinine are expressed in micromoles/L instead of mg/dL | Patient harm due to AI errors, because input variables are expressed in other units, so values are incorrect | AI passport | The AI passport states the range and unit for each input and output variable of the AI models |
| The AI system is used in primary care whereas it was designed for inpatients | Misuse of medical AI tools | AI passport + Continuous usability test | The AI passport describes how the system is intended to be used. Also, a continuous usability test may be useful to know whether the system is being used properly |
| The ethnicity data is not available in the training dataset | Bias in AI & inequities | Bias check | If the cases selected to train the AI algorithm do not include ethnicity data, the situation should be reported as a limitation in the Bias check |
| Healthcare professionals do not understand how the Quality of Life model operates on the input variables, so they are not confident in its performance | Lack of transparency | XAI + Continuous performance evaluation | The evaluation of each case should explain how the input variables have affected the model. Also, the report on the Continuous performance evaluation may increase the confidence of healthcare professionals in the AI system |
| A set of clinicians are afraid that clinical data from their patients are shared with other entities without acknowledgement | Privacy & security issues | User management + Audit trail | Every user is registered and has a set of permissions. An audit trail keeps every action and section accessed within the platform. This will help prevent data leakage and ensure accountability of data use |
| A clinician wants to use a model that is part of a new prototype with no CE mark | Gaps in accountability | Academic use only disclaimer | Upon accessing the AI system, a persistent warning stating 'Only for academic purposes' will be shown as a banner |
| A healthcare professional requests a prediction for a patient whose data is not available | Obstacles in implementation | Data quality assessment | The lack of data availability triggers quality control mechanisms, and the predictive process is stopped |
| The system produces a prediction of Quality of Life that the healthcare professional considers incorrect or inaccurate | Patient harm due to AI errors | XAI + AI passport + Data quality assessment + Clinicians double check | Clinicians can check the explanation of the AI output, check the data quality assessment and review in the AI passport which kind of data the model was trained on. With the Clinicians double check, the professionals acknowledge that they use the AI system as it is, with the limitations reported in its AI passport |

## 4 Discussion ### Significance Fortunately, we can expect that the development of AI systems for healthcare will follow the highest standards for medical devices in the European market. Nevertheless, the complexity of the technology underlying AI systems makes it necessary to adapt norms and regulations for the correct deployment and use of this promising breakthrough in our healthcare systems. The set of fourteen functional requirements presented here is designed to explicitly implement the twenty-two mitigation actions proposed by the Directorate General for the European Parliamentary Research Services for the sources of uncertainty and deterioration of AI systems that may cause harm to patients. These functionalities can be implemented with current technology and do not require waiting for inherent solutions from AI technology. De-risking has been identified as a central challenge in the development of foundation models[17], from which next-generation AI systems for healthcare will be derived. Given that flaws in the foundation model are inherited by adapted models, we consider it sensible to implement active risk-mitigation functionalities once the models are deployed for medical use. Hence, the fourteen mitigation functionalities presented here are intended to help during the execution of the models for decision-making on patients. Our approach is based on empowering healthcare professionals through access to transparent information about the AI system. This will help them make correct use of the system. We also provide a complete audit trail to clarify 
the accountability and liability of manufacturers and healthcare providers of AI systems. As a result, we expect our functionalities to reduce potential harm to patients and increase the adoption of AI for healthcare in Europe. ### Meeting the fundamental rights The bioethical principles of autonomy, beneficence, non-maleficence and justice applied in medical practice and research can also be extrapolated to AI systems used in this context. Considering the principles of beneficence and non-maleficence, AI systems in the healthcare sector should be developed with the ultimate aim of improving human health and well-being. But at the same time, it is necessary to identify, prevent and minimise the risks of causing harm. AI has been used to promote patients' health by providing clinical decision support based on evidence. As a result, great opportunities have arisen, improving clinical capabilities in diagnosis, drug discovery, epidemiology, personalized medicine and operational efficiency (reviewed by Morley _et al._ in 2020[18]). But these systems can also cause harm. Therefore, the necessity to develop ethical, regulatory and legal frameworks for the safe use of AI in clinical practice has been stressed[19]. Among other causes, this harm can arise from AI errors and limitations, misuse of the system, or malfunctioning caused by hacking. AI errors must be clearly identified, and the capabilities and limitations of the system should be clearly stated. To avoid them, the systems should be explainable in order to attain certification by regulators. To avoid misuse, it is necessary to train the clinicians working with the system. It is also important to avoid a negative effect on the AI system's efficiency due to biased data. Training datasets should not be biased, to ensure maximum benefit for all and to avoid inequalities and discrimination. In this context, it is also important to include underrepresented groups in the training datasets, thus promoting the principle of justice. On the other hand, the clinician should be informed by the system when introducing data representing individuals or situations not sufficiently represented in the training dataset. Additionally, to prevent misuse, the system should also be able to identify input data that exceeds normal limits, to flag possible errors in the entered data. In order to make a good evaluation of the risks and benefits derived from the use of AI systems, and to promote accountability in the case of harm, it is necessary to promote transparency, traceability and explainability. By promoting those aspects, it will be possible to know how decisions are made in AI systems. This is essential during development, when the system is examined by research ethics committees, and during its approval, ensuring compliance with regulations. Traceability not only promotes accountability but also enables the diagnosis of problems that arise in the use of AI systems and the subsequent refinement of algorithms and training data, leading to an improvement in system performance. On the other hand, transparency and accountability are key elements that eventually lead to the development of trustworthy systems. Autonomy also presents an important issue in the development, deployment and use of such systems. 
First, during development, it is necessary to confirm that the training dataset has been obtained with the informed consent of the patients. Also, during the deployment and use of the systems, human oversight should prevail, and it should be ensured that the clinicians are aware of using an AI system and that they know all its capabilities and limitations. From an alternative perspective, it can be argued that patients should have the right to provide consent for the utilization of AI systems in their medical diagnoses or treatments[20]. However, it is important to acknowledge that recent reviews have indicated that the existing legal framework of informed consent does not adequately address the obligation to disclose the use of medical AI/ML[21]. Whatever the pros and cons of including the use of AI systems in the informed consent, given the novelty of such systems it would be advisable to disclose this information. Nonetheless, the opacity behind some AI systems poses a significant challenge to informed consent. In fact, if healthcare professionals themselves do not comprehend how the system arrives at a diagnosis or recommends a treatment, it becomes impossible to effectively communicate this information to the patient[22]. This perspective underscores the crucial need to enhance transparency in order to facilitate patient autonomy. The traceability of the system should ensure that both clinicians and patients give their informed consent to its use. Autonomy should also be preserved from the point of view of the use of personal data. AI systems should strive to ensure privacy and confidentiality, protecting the rights of patients and complying with data regulations. This leads to the necessity of developing secure systems that prevent data breaches. All these ethical issues are considered in the numerous ethical frameworks developed in different areas. For example, the framework of ethical aspects of AI, robotics and related technology proposed by the European Parliament[23] in 2020 considers that AI systems should fully respect the EU Charter of Fundamental Rights and that AI development, deployment and use should respect human dignity, autonomy and self-determination of the individual, prevent harm, promote fairness, inclusion and transparency, eliminate biases and discrimination, limit negative externalities, and ensure explainability. It seems that these requirements would, in fact, be global, as the analysis of the global corpus of principles and guidelines on ethical AI revealed a convergence around five ethical principles, including transparency, justice and fairness, non-maleficence, responsibility and privacy[24]. A recent review of the literature also focused on the main ethical debates on data privacy and security, trust in AI, accountability and responsibility, and bias[25]. Despite the considerable amount of discussion regarding the ethics of AI in health care, there have been few conversations or recommendations as to how to practically address these concerns[26]. The risk-mitigation functional requirements presented here address these risks, ensuring the development of reliable, robust and trustworthy systems that correctly address the ethical requirements applicable to this type of system. Nonetheless, the need to integrate ethics and ethics specialists throughout the whole development process seems evident[27]. 
In this context, it is not only necessary to embed ethics in the development process; an external ethical review also seems crucial. Although the involvement of an ethics board (Institutional Review Board, IRB, in the USA, and Research Ethics Committee, REC, in the UK and European Union) is already required in the development of certified health products, an objective peer-review process is not a universal requirement[28]. In any case, apart from the need for AI healthcare systems to be evaluated by an ethics committee, it is necessary to address the lack of training of these committees in aspects related to AI, even though some consider this lack of expertise unproblematic, as the ethical issues raised are non-exceptional compared to other technologies[29]. In fact, it has been proposed that external experts and ad hoc boards can complement the work of ethics committees[30]. ### Meeting current and future legislation So far, international bodies have focused on creating guidelines and recommendations for good practices, while binding regulation on the use of AI has been scarce. In February 2020, the European Commission published a white paper on AI with proposals for European Union actions or public policies in the field of AI. In October 2020, it approved three reports making explicit how to regulate AI to boost innovation, respect for ethical standards and trust in the technology. In June 2021, the WHO published the first report on AI applied to health, in which it defined six principles related to its conception and usability[31]. In November 2021, the 193 member countries of UNESCO signed the Recommendation on the Ethics of AI[32]. It is basically a non-binding normative framework of a programmatic nature, intended to build the legal superstructure. It contains a set of values and principles to develop healthy and non-invasive practices of this technology. Subsequently, the European Commission proposed a legal framework to address risks and ensure fundamental rights and safety. It is a regulation binding on its member states and should contain the following requirements: 1) systematise the risks that could arise from the application of AI, 2) enumerate a list of high-risk AI applications, 3) list the requirements of AI systems for high-risk applications, 4) require a comprehensive conformity assessment before the AI system is put into service or placed on the market, 5) require assessment after the AI system is placed on the market, and 6) establish multilevel, European and national governance. In addition, the Commission is proposing two directives: a directive on product liability and a proposal for a Directive of the European Parliament and of the Council on the adaptation of the rules on non-contractual civil liability to AI (AI Liability Directive). The implementation of the requirements listed in this paper follows the ethical guidelines published by different international institutions. On the one hand, the description of AI models through the AI Passport, the XAI methods, the bias check and the continuous evaluation allows the operation of AI models to be as transparent as possible. On the other hand, the incorporation of the clinical double-check and the audit trail makes it possible to ensure that the physician uses the AI consciously, thus delimiting the responsibilities of the healthcare provider and the manufacturer. Finally, the continuous evaluations (of usability and performance) of the models make it possible to comply with post-market surveillance requirements. 
### Towards an integrated platform to deploy AI for Healthcare The proposed requirements are intended to help AI systems comply with future EU legislation. However, they impose an extra burden on the creators of these tools. None of the proposed requirements is inherent to the AI models; rather, they ensure the models' correct performance and use. This makes it possible to rely on platforms that implement the risk-mitigation functionalities and accept subscriptions of AI predictive models as services for clinical purposes. This solution could also benefit manufacturers, since the time from modelling to prototyping could be shortened. This approach has already been explored in The Aleph platform ([https://thealeph.upv.es](https://thealeph.upv.es), last accessed 15/09/2023), which was used for palliative care interventions[33]. At the time, we aimed to provide a common entry point and Graphical User Interface (GUI) for various AI predictive models. The original platform supported multiple predictive models per registered service and XAI graphs to explain individual case predictions using SHAP[34]. The GUI and its usability were also validated through interviews with healthcare professionals after asking them to identify which patients might benefit from inclusion in palliative care[8]. Given the legal requirements arising around the world in response to the advances of AI, we plan to release a new version of The Aleph platform including the following functionalities: 1) an AI Passport file describing the models of the service to be deployed in The Aleph, including all details of their training and hyper-parameters; 2) enabling of the user management system that is already implemented in the platform; 3) a regulation check included in the platform service registration; 4) a banner displaying 'Only for academic use' in the services or predictive models that are certified neither by the FDA nor with the CE mark; 5) the clinical double-check to confirm the request of the predictive task; 6) a view including the jobs sent by the user and/or their organisations, where the actual ground truth can be added to obtain continuous performance checks; 7) an audit trail system that captures every action and section of the application accessed by the users, as well as the content of the predictive jobs and their results; 8) a little chatbot called Alf in the bottom-right corner of the website, which will ask the user the UEQ and SUS questions to obtain continuous usability measurements; 9) the possibility of registering training services in a secondary menu; and 10) keeping the use of XAI for individual predictions at the service level, despite the current discussion about its utility[35]. ## 5 Conclusions AI has great potential to help healthcare in the management, diagnosis, prognosis, and treatment of human beings. Governments, manufacturers and healthcare providers worldwide aim to profit from this technology while remaining in compliance with human rights. In agreement with this approach, we respond to the risks of harm identified by the European Parliament Research Service by proposing fourteen functional requirements that allow healthcare professionals to be aware of the performance and limitations of the AI systems they are using. This approach may fill the gap in accountability for harm caused by medical AI systems, so that patients can be protected under their EU Member States' laws from injuries caused by medical practice using AI systems. 
## 6 Declaration ### Funding This study was partially funded by the Agencia Valenciana de la Innovacio [SINUE - INNEST/2022/87], Agencia Estatal de Investigacion (NextGenerationEU) [MAGICA - TED2021-129579B-I00], the European Union's Horizon 2020 research and innovation programme [INADVANCE - 825750] and the Agencia de Investigacion de Espana [ALBATROSS - PID2019-104978RB-I00/AEI/10.13039/501100011033]. The funders played no role in the study design, data collection, analysis and interpretation of data, or the writing of this manuscript. ### Author contribution statement JMGG, VBS and ADM conceptualized the idea of implementing specific functionalities for AI regulation in medicine. VBS, JMGG and ADM mapped the potential harm in healthcare to specific functional requirements and prepared the use case. JCBC reviewed the current and future legislation applicable to AI in healthcare and JCC reviewed the implications of AI for fundamental rights. All authors wrote, read, and approved the final manuscript. ### Conflict of interest All authors declare that they do not have any financial or personal relationships with other people or organizations that could inappropriately influence (bias) their work.
2309.04451
A Comparative Study of Coherent and Incoherent Drives in Four-Level Quantum Dot-Based Spaser
In this article, we theoretically investigate a spaser (surface plasmon amplification by stimulated emission of radiation), which consists of a spherical silver nanoparticle surrounded by a four-level gain medium of quantum dots (QDs). The spaser system is pumped coherently and incoherently with the same excitation rate, and the characteristics of the coherent localized surface plasmon (LSP) mode thus produced are compared for the two pumping scenarios. We provide a detailed analytical expression for the steady state and show that the incoherent pump is more suitable for the continuous spaser mode. The reason is better understood by studying the temporal evolution of the number of LSPs (N_n), where the oscillation of LSPs starts earlier for the incoherent drive and relaxes to a steady state with a large value of N_n. At a large pump rate, the spaser curve shows saturation. In addition, we have found that the resonance peak of the spaser field is independent of coherent as well as incoherent pumping, while the peak amplitude of the field depends on the pump rate.
Ankit Purohit, Akhilesh Kumar Mishra
2023-09-08T17:19:51Z
http://arxiv.org/abs/2309.04451v2
# A Comparative Study of Coherent and Incoherent Drives in Four-Level Quantum Dot Based Spaser ###### Abstract In this article, we theoretically investigate a spaser (surface plasmon amplification by stimulated emission of radiation), which consists of a spherical silver nanoparticle surrounded by a four-level gain medium of quantum dots (QDs). The spaser system is pumped coherently and incoherently with the same excitation rate, and the characteristics of the coherent localized surface plasmon (LSP) mode thus produced are compared for the two pumping scenarios. We provide a detailed analytical expression for the steady state and show that the incoherent pump is more suitable for the continuous spaser mode. The reason is better understood by studying the temporal evolution of the number of LSPs (\(N_{n}\)), where the oscillation of LSPs starts earlier for the incoherent drive and relaxes to a steady state with a large value of \(N_{n}\). At a large pump rate, the spaser curve shows saturation. In addition, we have found that the resonance peak of the spaser field is independent of coherent as well as incoherent pumping, while the peak amplitude of the field depends on the pump rate. Keywords: spaser, localized surface plasmon mode, quantum dots, coherent and incoherent drives ## 1 Introduction Quantum optics and nano-plasmonics are rapidly expanding fields of study that offer new ways to generate and control light at sub-wavelength dimensions, utilize plasmons in quantum applications, and make miniaturized active plasmonic devices [1-5]. The confinement of light to a smaller volume can realize strong light-matter interaction. Localized surface plasmon (LSP) resonance is one such phenomenon, wherein, in the presence of an external time-dependent field, the free electrons in a nano-sized metal particle oscillate coherently. The LSP is useful only when the resonance overcomes the metal's intrinsic losses. One way to deal with the material losses is to transfer energy from external sources to sustain these resonances. The emission from these resonances can be intensified to behave like a laser, since it exhibits characteristics analogous to those of laser light [6-7]. A plasmonic nano-laser is based either on a surface plasmon polariton mode or on a localized surface plasmon (LSP) mode [8]. But a true spaser relies on an LSP mode, which confines the field in all three dimensions, e.g., in a nanosized metal structure. The idea of the spaser was first proposed theoretically in 2003 by D. Bergman and M.I. Stockman [6]. The spaser is a nanoscale laser that typically comprises a subwavelength plasmonic cavity (a metal nanoparticle (NP)) and a gain medium, which is usually semiconductor
2309.14642
Editing Motion Graphics Video via Motion Vectorization and Transformation
Motion graphics videos are widely used in Web design, digital advertising, animated logos and film title sequences to capture a viewer's attention. But editing such video is challenging because the video provides a low-level sequence of pixels and frames rather than higher-level structure such as the objects in the video with their corresponding motions and occlusions. We present a motion vectorization pipeline for converting motion graphics video into an SVG motion program that provides such structure. The resulting SVG program can be rendered using any SVG renderer (e.g., most Web browsers) and edited using any SVG editor. We also introduce a program transformation API that facilitates editing of an SVG motion program to create variations that adjust the timing, motions and/or appearances of objects. We show how the API can be used to create a variety of effects including retiming object motion to match a music beat, adding motion textures to objects, and collision-preserving appearance changes.
Sharon Zhang, Jiaju Ma, Jiajun Wu, Daniel Ritchie, Maneesh Agrawala
2023-09-26T03:49:24Z
http://arxiv.org/abs/2309.14642v3
# Editing Motion Graphics Video via Motion Vectorization and Transformation ###### Abstract Motion graphics videos are widely used in Web design, digital advertising, animated logos and film title sequences to capture a viewer's attention. But editing such video is challenging because the video provides a low-level sequence of pixels and frames rather than higher-level structure such as the objects in the video with their corresponding motions and occlusions. We present a motion vectorization pipeline for converting motion graphics video into an SVG motion program that provides such structure. The resulting SVG program can be rendered using any SVG renderer (e.g., most Web browsers) and edited using any SVG editor. We also introduce a program transformation API that facilitates editing of an SVG motion program to create variations that adjust the timing, motions and/or appearances of objects. We show how the API can be used to create a variety of effects including retiming object motion to match a music beat, adding motion textures to objects, and collision-preserving appearance changes. Additional Key Words and Phrases: vector graphics, motion vectorization, scalable vector graphics, SVG, visual programs **ACM Reference Format:** Sharon Zhang, Jiaju Ma, Jiajun Wu, Daniel Ritchie, and Maneesh Agrawala. 2023. Editing Motion Graphics Video via Motion Vectorization and Transformation. _ACM Trans. Graph._ 42, 6, Article 229 (December 2023), 13 pages. [https://doi.org/10.1145/3618316](https://doi.org/10.1145/3618316) ## 1. Introduction Programs have proven to be useful in many areas of computer graphics. 
The structure and repetition found naturally in our surroundings, combined with the symbolic reasoning that humans use to describe objects, can make programs particularly effective in representing visual content. For instance, biologists use _L-systems_ to model plant structures [21]; digital artists use _shader graphs_ to generate materials and textures [13]; data analysts use _grammar-based APIs_ to create visualizations [25, 26]; and SVG is a widely adopted _declarative program_ format for vector graphics [26]. There are several benefits to representing visual content with a program rather than working directly in the output space of pixels and frames. For one, programming languages often provide meaningful abstractions and concepts (i.e., language primitives) that operate at a higher level than pixels and align better with the ways that humans think about the underlying content. With SVG, for example, we can describe an animation as a collection of object primitives moving in time, instead of specifying individual pixel colors over time (Figure 1). Another benefit is that programs provide meaningful control parameters. SVG programs can describe the motions of objects using a sequence of affine transforms and editing the small set of transform parameters can generate a wide range of motions. In this work we focus on a particular domain of visual content-namely, _motion graphics_-which are essentially animated graphic designs usually consisting of shapes and typography in choreographed motions. Such motion graphics are ubiquitous in Web design, digital advertising, animated logos, and film title sequences. Yet, creating effective motion graphics requires expertise in crafting eye-catching motions and skill with animation software. Moreover, once they have been rendered as video--the most common format for motion graphics on the Web--they become very difficult to edit. Creating variations of a motion graphics video (e.g., swapping out objects, changing the text, or retiming motions of individual objects to music) is impractical without access to a higher level representation. We present tools for editing a motion graphics video by first converting it into an SVG motion program. Our _motion vectorization_ pipeline identifies objects, tracks their motions and occlusion relationships across the video, and generates an SVG motion program (Figure 1 top row). Our approach adapts the differentiable image compositing optimization method of Reddy et al. (2020) to our tracking problem. The resulting motion program can be rendered using an SVG renderer (e.g., most Web browsers) and edited using an SVG animation editor. To take further advantage of our representation, we introduce a _program transformation_ API that allows users to programmatically create variations of the SVG motion program. Our approach is to treat the SVG motion program as a scene graph composed of objects and their motions. We demonstrate how our API can be used to create a variety of effects, including retiming object motion to match music beats, adding motion textures (e.g., pulsing, wobbling) to objects and programmatically changing the appearance of objects (Figure 1 middle, bottom rows). In summary, we make two main contributions: 1. A _motion vectorization_ pipeline that converts a motion graphics video into an SVG motion program. 2. A _program transformation_ API for programmatically editing SVG motion programs to create variations. ## 2. 
_Recovering programs from visuals._ Because programs are such a useful representation for visual data, graphics and vision researchers have investigated how to automatically infer such programs from raw visual data. This problem has been explored in multiple visual domains, including 3D shape modeling, among others. Our work is inspired by Bregler et al. (2002), who motion capture and retarget the exaggerated deformations of cartoon characters. However, they require manually annotated object contours as input, whereas our goal is to further automate the object detection and motion tracking process and to recover an SVG motion program that we can retarget via a program transformation API.

_Layered video decomposition._ Our work enables object-level manipulation of motion graphics video, which calls for an object-centric layered decomposition.
Prior work in decomposing natural videos uses motion cues to generate layers based on relative depth from the camera (Brostow and Essa, 1999; Wang and Adelson, 1994) or on coherent camera motion (Fradet et al., 2008). More recent neural methods decompose video into layers represented as frame sequences (Lu et al., 2020, 2021) or as neural atlases (Kasten et al., 2021; Ye et al., 2022). Such outputs can support appearance editing but do not enable motion editing. Zhang et al. (2022) generate sprite decompositions of cartoon videos, where each sprite is a sequence of frames and a corresponding sequence of homographies that map between sprite and frame coordinates. Since the appearance of each sprite can change from frame to frame, the corresponding homographies do not fully characterize the sprite motion. They also assume a fixed depth ordering of the layers, which results in artifacts when objects change in relative depth. Our pipeline adapts Reddy et al.'s (2020) differentiable compositing method to compute relative depth (and motion parameters) as a function of time, allowing for dynamic object occlusion relationships.

## 3. Background

_Characteristics of motion graphics video._ Motion graphics videos are commonly composed of a set of foreground objects, including basic shapes (e.g., rectangles, discs, etc.) and typography moving over a static background. The objects may occlude one another as well as split into separate objects, or merge together into a single object. In general, motion graphics videos may use textures and gradients to color both the foreground objects and the background, and foreground objects may move and deform non-rigidly. But we have found that in many contexts where motion graphics are prevalent--e.g., Web design, animated logos, digital advertising, film title sequences--a common stylistic choice is to use mostly solid-colored foreground objects undergoing affine motions over a static background. Sparing use of texture and photographic elements in combination with simpler motions can improve legibility and make it easier to guide the viewer's gaze through the video, which is crucial in contexts such as advertising. We focus on converting this important class of motion graphics video into SVG programs.

_Structure of SVG motion programs._ Scalable vector graphics (SVG) is a declarative programming format for vector graphics that is widely implemented in Web browsers across a variety of devices (W3C, 2018). To convert a motion graphics video into an SVG motion program we can represent each foreground object as an SVG group <g> containing its appearance <image> and a sequence of per-frame motion transforms. SVG natively supports affine transforms for warping elements with separate parameters for scale, translate, rotate, skewX, and skewY1. Each object also includes a per-frame \(z\)-index depth ordering. Finally, a static background lies at the lowest depth. Figure 1 shows an example of our SVG representation where we have elided some detail to highlight the per-frame sequence of transform parameter values (vals=\(\ldots\)) for one of the objects in the scene. Footnote 1: The scale and translate parameters allow separate control over \(x\) and \(y\).

## 4. Motion Vectorization

The goal of our motion vectorization pipeline is to recover an SVG motion program from an input motion graphics video. The primary challenge is to identify and track each of the objects in the input video as they appear, move, occlude one another and disappear.
We use a four stage pipeline: (1) we segment frames into regions (e.g. potential objects), (2) we generate candidate mappings explaining how objects might move from frame-to-frame, (3) we select the best collection of mappings explaining the frame-to-frame movements of the objects and finally (4) we write an SVG motion program. Our motion vectorization pipeline builds on Reddy et al.'s (Reddy et al., 2020) differentiable compositing optimization technique. We first describe how we adapt differentiable compositing to our problem setting in Section 4.1; we then present each stage of our pipeline in Sections 4.2 to 4.5. ### Differentiable image compositing Differentiable image compositing (Reddy et al., 2020) is an optimization technique originally designed to decompose a graphic pattern comprised of discrete elements (which may partially occlude one another) into a layered representation (Figure 2). It takes in a _target_ pattern image \(T\) and a set of _source_ element images \(\mathcal{S}=\{S_{1},...,S_{N}\}\) that appear in the pattern and optimizes a similarity transform (translation, rotation, and uniform scale) for each source element. It also computes a depth ordering so that when the transformed elements are rendered in back-to-front order they reproduce the target pattern. That is, \[\mathrm{DC}(\mathcal{S},T)=\{(S_{i},\Theta_{S_{i}},\Lambda_{S_{i}})|S_{i}\in \mathcal{S}\}, \tag{1}\] Figure 2. Differentiable image compositing (Reddy et al., 2020), takes a set of sources \(\mathcal{S}=\{S_{1},...,S_{N}\}\) and a target image \(T\) as input and computes a set of layering placement tuples \(\mathcal{S}^{*}=\{(S_{i},\Theta_{S_{i}},\Lambda_{S_{i}})\}\) such that the composite image \(C(\mathcal{S}^{*})\) matches \(T\). \(M^{\mathrm{disc}}(S_{i},C(\mathcal{S}^{*}))\) is a binary mask of the visible pixels of \(S_{i}\) after compositing. We extend Reddy et al.’s technique to generate affine transforms \(\Theta_{S_{i}}\) rather than similarity transforms. where \(\Theta_{S_{i}}\) is the transform that places \(S_{i}\) in \(T\), and \(\Lambda_{S_{i}}\) is the layer z-ordering for \(S_{i}\) in \(T\) with respect to the other elements in \(\mathcal{S}\) after transforming by their \(\Theta\)'s. We refer to the resulting set of layering placement tuples as \(\mathcal{S}^{*}=\{(S_{i}\,\Theta_{S_{i}},\Lambda_{S_{i}})\}_{i=1}^{N}\). With this information, we can define two additional image operators: (1) a _compositing operator_\(\mathcal{C}(\mathcal{S}^{*})\) composites all of the transformed source elements \(\Theta_{S_{i}}(S_{i})\) in back-to-front order according to their \(\Lambda\)'s; (2) a _visibility mask operator_\(M^{\text{vis}}(S_{i},I)\) produces a binary mask of the pixels of image \(I\) where \(S_{i}\) is visible. Importantly, \(M^{\text{vis}}\) always operates in the frame space represented by \(I\). For example, \(M^{\text{vis}}(S_{i},\mathcal{C}(\mathcal{S}^{*}))\) is the set of pixels of the transformed \(\Theta_{S_{i}}(S_{i})\) that are visible in \(\mathcal{C}(\mathcal{S}^{*})\). See Figure 2 for examples of both of these operators. To apply differentiable compositing to the context of tracking objects in motion graphics video, we have extended the optimization to compute an affine transformation \(\Theta_{S_{i}}\) (translation, rotation, non-uniform scale and skew) rather than a similarity transform. Specifically, we add scaleX, scaleY, skewX, and skewY as independent parameters in the optimization. 
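To make this extended parameterization concrete, the short sketch below composes translation, rotation, non-uniform scale and skew into a single homogeneous affine matrix of the kind the modified optimization solves for. It is a minimal NumPy illustration, not the authors' implementation, and the composition order (skew, then scale, rotate and translate) is our assumption.

```python
import numpy as np

def affine_matrix(tx, ty, theta, scale_x, scale_y, skew_x, skew_y):
    """Build a 3x3 affine transform from the independent parameters
    (translation, rotation, non-uniform scale, skew); angles in radians."""
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    S = np.diag([scale_x, scale_y, 1.0])
    K = np.array([[1.0, np.tan(skew_x), 0.0],
                  [np.tan(skew_y), 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    return T @ R @ S @ K  # assumed order: skew, scale, rotate, then translate

# Place a corner of a source element in the target frame (homogeneous coords).
M = affine_matrix(tx=10, ty=5, theta=0.1, scale_x=1.2, scale_y=0.8,
                  skew_x=0.05, skew_y=0.0)
print(M @ np.array([1.0, 1.0, 1.0]))
```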
### Stage 1: Region extraction

The first stage of our vectorization pipeline is to segment each input frame \(F_{t}\) into regions. Since we focus on motion graphics with mostly solid-colored objects, as a default we use color clustering in LAB colorspace and mark the pixels in the cluster of the mode color as background. Alternatively, users can specify a background image if the video has a photograph, texture or colored gradient as background. To separate the remaining foreground pixels into regions, as a default we construct an edge map for the frame (Canny, 1986) and then apply Zhang _et al._'s (2009) trapped-ball segmentation. This gives us a set of regions \(\mathcal{R}_{t}=\{R_{1},\dots,R_{N}\}\) for each frame \(F_{t}\). If the foreground is textured, users can choose to skip edge detection and apply connected-components segmentation on the foreground pixels to form regions. Finally, we let users manually specify pixel-level region boundaries if necessary, as noted in Section 5.

### Stage 2: Generate candidate mapping types

Given a set of regions for every input frame, our goal is to identify unique foreground objects and track them between frames. We initialize this process at the first frame \(F_{1}\) by treating each region \(R_{i}\in\mathcal{R}_{1}\) as an object \(O_{i}\) so that \(\mathcal{O}_{1}:=\mathcal{R}_{1}\). For each subsequent frame \(F_{t}\), our task is to determine how objects in the previous frame _map_ to regions in the current frame \(\mathcal{R}_{t}\) under affine transformations. Figure 3 shows the eight types of mappings that can occur between objects and regions. To determine which of these mapping types best matches objects in \(F_{t-1}\) with regions in \(F_{t}\), we construct an initial set of the likeliest mapping types in the form of two bipartite graphs: (1) the forward candidate mapping graph \(\mathcal{B}_{\text{fwd}}\) holds likely mappings taking objects to regions; (2) the backward candidate mapping graph \(\mathcal{B}_{\text{bwd}}\) holds likely mappings taking regions to objects. We first describe how we build the graphs and then explain how they encode likely mappings.

_Build candidate mapping graphs._ Figures 4 and 5 show how we build \(\mathcal{B}_{\text{fwd}}\) and \(\mathcal{B}_{\text{bwd}}\). For \(\mathcal{B}_{\text{fwd}}\), we first apply differentiable compositing as \(\operatorname{DC}(O_{t-1},F_{t})=\mathcal{O}^{*}\), treating \(O_{t-1}\) as the set of source elements and the current frame \(F_{t}\), including all of its regions \(\mathcal{R}_{t}\), as the target image. Then, for each object \(O_{i}\in O_{t-1}\), we consider each region \(R_{j}\in\mathcal{R}_{t}\) and compute a _source_ coverage weight as \[W^{\text{cov}}_{\text{src}}(O_{i},R_{j})=\frac{|M^{\text{vis}}(O_{i},C(\mathcal{O}^{*}))\cap M^{\text{vis}}(R_{j},F_{t})|}{|M^{\text{vis}}(O_{i},C(\mathcal{O}^{*}))|}. \tag{2}\]

Fig. 3. Eight types of mappings that can occur between objects \(O_{i}\) in frame \(F_{t-1}\) and regions \(R_{j}\) in frame \(F_{t}\). **(1) One-to-one.** A single object \(O_{1}\) maps to all pixels in a single region \(R_{1}\) under a single affine transform \(\Theta_{O_{1}}\) from the object to the region or vice versa under transform \(\Theta_{R_{1}}\). **(2) One-to-many (no split).** A single object \(O_{2}\) maps to multiple regions under a single affine transform \(\Theta_{O_{2}}\) from object to regions.
Since a single transform explains how the object moves to match all of the regions, we consider all of them to be part of the same object (i.e., the object does not split). **(3) Many-to-one (no merge).** Two or more objects map to a single region, but require different affine transforms (e.g. \(\Theta_{O_{1}},\Theta_{O_{2}},\dots\)) from each object to the region. Since multiple transforms are needed, we consider the objects as remaining distinct in frame \(F_{t}\) (i.e., the objects do not merge). **(4) Many-to-one (merge).** Two or more objects map to a single region under a single affine transform \(\Theta_{R_{1}}\) from the region to the objects. Since a single affine motion explains how the region moves to match all of the objects, we consider this a merge of the distinct objects. **(5) One-to-many (split).** A single object maps to two or more regions but requires a different affine transformation to map each region to the object (e.g. \(\Theta_{R_{1}},\Theta_{R_{2}},\dots\)). Since multiple transforms are required, we consider the object splitting into new distinct objects. **(6) Many-to-many (split and merge).** Multiple objects map to multiple regions under differing motions. Object(s) are splitting and simultaneously merging, and the transforms needed to explain how such object(s) map to regions are ambiguous. **(7) Unmapped object (disappear).** When an object does not map to any region in the current frame \(F_{t}\), we consider the object to have disappeared. **(8) Unmapped region (appear).** When a region does not map to any object in the previous frame \(F_{t-1}\), we consider it a new object appearing for the first time.

This weight measures the visible overlap between the transformed object and the region as a percentage of the visible area of the transformed object (Figure 5). We add the highest non-zero weighted edge \((O_{i},R_{j})\) to the forward graph \(\mathcal{B}_{\text{fwd}}\) (top left weight matrix in Figure 4). Similarly, for each region \(R_{j}\), we consider each object \(O_{i}\) and compute a _target_ coverage weight as \[W^{\text{cov}}_{\text{tgt}}(O_{i},R_{j})=\frac{|M^{\text{vis}}(O_{i},C(\mathcal{O}^{*}))\cap M^{\text{vis}}(R_{j},F_{t})|}{|M^{\text{vis}}(R_{j},F_{t})|}. \tag{3}\] This weight measures the visible overlap between the transformed object and the region as a percentage of the visible area of the region (Figure 5). We add the highest non-zero weighted edge \((O_{i},R_{j})\) to \(\mathcal{B}_{\text{fwd}}\) if it has not already been added to the graph (top right weight matrix, Figure 4). The backward graph is built in exactly the same way except that we treat the regions \(\mathcal{R}_{t}\) as source elements and the previous frame \(F_{t-1}\) as the target in the differentiable compositing optimization to compute \(\text{DC}(\mathcal{R}_{t},F_{t-1})=\mathcal{R}^{*}\). For the coverage weight computations (Equations 2 and 3), we similarly flip the computation, treating regions \(R_{j}\) as sources and objects \(O_{i}\) as targets, and replace \(F_{t}\) with \(F_{t-1}\) (bottom row, Figure 4). In practice, we have found that DC is sensitive to the initial placement of source elements. Therefore, we initialize the source placement using shape context [1], optical flow [16] and RANSAC to estimate how each object (or region) moves to \(F_{t}\) (or \(F_{t-1}\)). Note also that when we use DC, we save the resulting sets of layering placement tuples \(\mathcal{O}^{*}\) and \(\mathcal{R}^{*}\) for use in later stages of our pipeline.
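The two coverage weights in Equations 2 and 3 are simple ratios of mask overlaps. The sketch below assumes the visibility masks are available as boolean NumPy arrays over the frame's pixels; it illustrates only the ratios, not the graph construction around them.

```python
import numpy as np

def coverage_weights(obj_vis, region_vis):
    """Source and target coverage weights of Eqs. (2) and (3).

    obj_vis    -- boolean mask: pixels where the transformed object is visible
                  in the composite C(O*)
    region_vis -- boolean mask: pixels where the region is visible in F_t
    """
    overlap = np.logical_and(obj_vis, region_vis).sum()
    w_src = overlap / obj_vis.sum() if obj_vis.any() else 0.0
    w_tgt = overlap / region_vis.sum() if region_vis.any() else 0.0
    return w_src, w_tgt

# Toy 8x8 frame: a 4x4 object overlapping a 4x4 region on a 2x2 patch.
obj = np.zeros((8, 8), dtype=bool); obj[0:4, 0:4] = True
reg = np.zeros((8, 8), dtype=bool); reg[2:6, 2:6] = True
print(coverage_weights(obj, reg))  # (0.25, 0.25)
```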
_Extract candidate mappings._ The forward and backward candidate mapping graphs encode multiple candidate mappings. To extract the individual candidate mappings from either of these graphs, we first consider each connected component of the graph. We treat any such component that is one-to-one, one-to-many, or many-to-one (i.e., the component contains exactly one object or exactly one region) as a candidate mapping. If the component forms a many-to-many graph, we further break it into pieces (see inset) as follows. For each node (object or region) in the component, we form a subgraph that includes all edges the node is part of. Each resulting subgraph is then either a one-to-one, one-to-many, or many-to-one mapping candidate. As shown in Figure 3, many-to-many mappings are ambiguous because they require object(s) to simultaneously split and merge. In practice, we have found that such split-merges are rare for the kind of motion graphics videos we focus on in this work. Thus, our approach is to force our algorithm to explain many-to-many mappings as a combination of one-to-one, one-to-many, or many-to-one mappings. Figure 7 shows the complete set of mappings we extract from \(\mathcal{B}_{\text{fwd}}\) and \(\mathcal{B}_{\text{fwd}}\) for the example in Figure 4. ### Stage 3: Select best collection of mappings To select a set of mappings that best explain how objects move from frame \(F_{t-1}\) to \(F_{t}\) we first score each candidate mapping we obtain in stage 2 using a visibility-based penalty loss. Suppose \(H\) is a candidate mapping type extracted from the forward graph, and \(\mathcal{O}^{H}_{t-1}\) and \(\mathcal{R}^{H}_{t}\) are the set of object(s) and region(s) in \(H\). We define the visibility loss \(\mathcal{L}^{\text{vis}}\) as a masked \(L_{2}\)-norm of color differences between the composite image \(\mathcal{C}(\mathcal{O}^{*})\) of the transformed and layered objects, and the current frame \(F_{t}\). That is, \[\mathcal{L}^{\text{vis}}(\mathcal{O}^{H}_{t-1},\mathcal{R}^{H}_{t})=||(\mathcal{ C}(\mathcal{O}^{*})-F_{t})\otimes M^{\text{all}}||_{2}, \tag{4}\] where \(\otimes\) denotes pixel-wise multiplication and \(M^{\text{all}}\) is a mask \[M^{\text{all}}=\left(\bigcup_{O_{i}\in O^{H}_{t-1}}M^{\text{vis}}(O_{i},C( \mathcal{O}^{*}))\right)\cup\left(\bigcup_{R_{j}\in\mathcal{R}^{H}_{t}}M^{ \text{vis}}(R_{j},F_{t})\right), \tag{5}\] consisting of the union of the visible pixels of all of the transformed objects \(O_{i}\in O^{H}_{t-1}\) (first term) with the union of all of the regions \(R_{j}\in\mathcal{R}^{H}_{t}\) (second term). This loss is minimized when the pixels of the transformed objects in \(H\) match those of corresponding regions in \(H\) and there are no mismatched pixels. Similarly, if \(H\) is a candidate mapping type from the backward graph, we compute the penalty score as \(\mathcal{L}^{\text{vis}}(\mathcal{R}^{H}_{t},\mathcal{O}^{H}_{t-1})\), while replacing \(\mathcal{O}^{*}\) with \(\mathcal{R}^{*}\) and \(F_{t}\) with \(F_{t-1}\) in Equations 4 and 5. In particular, the visibility loss Fig. 4: To build the forward candidate mapping graph \(\mathcal{B}_{\text{fwd}}\) (top row), we consider each edge \((O_{i},R_{j})\) from object \(O_{i}\) to region \(R_{j}\) and compute coverage weights \(W^{\text{cov}}_{\text{src}}(O_{i},R_{j})\) and \(W^{\text{cov}}_{\text{tgt}}(O_{i},R_{j})\). We retain only highest non-zero weighted edges in the graph for each object – highlighted in green in the matrices, one per row. 
We similarly build the backward mapping graph \(\mathcal{B}_{\text{fwd}}\) (bottom row), but flip the direction of the edges \((R_{j},O_{i})\) to run from region \(R_{j}\) to object \(O_{i}\) with the coverage weights similarly inverted \(W^{\text{cov}}_{\text{src}}(R_{j},O_{i})\) and \(W^{\text{cov}}_{\text{tgt}}(R_{j},O_{i})\) (bottom row). Fig. 5: For edge \((O_{i},R_{j})\), we compute coverage weights \(W^{\text{cov}}_{\text{fwd}}\) and \(W^{\text{cov}}_{\text{tgt}}\) by first transforming the source object \(O_{i}\) to form \(\Theta_{O_{i}}(O_{i})\). \(W^{\text{cov}}_{\text{src}}\) is the area of the visible overlap between \(\Theta_{O_{i}}(O_{i})\) and \(R_{j}\) (purple) as a percentage of the visible area of the transformed object \(\Theta_{O_{i}}(O_{i})\) (pink or purple). \(W^{\text{cov}}_{\text{tgt}}\) is the area of the overlap (purple) as a percentage of the visible area of the target region \(R_{j}\) (cyan or purple). differs from the coverage weights (Section 4.3) as it evaluates the _color_ appearance of an entire mapping rather than object-region alignment. Figure 7 shows the penalty scores for the mappings we extracted for the example in Figure 4. We next select a set of conflict-free mappings from our set of candidates that collectively best explain how objects move, appear, or disappear between frames \(F_{t-1}\) and \(F_{t}\). A pair of candidate mappings are in conflict if they include the same object or region (Figure 7). Starting with the complete set of candidate mappings, we repeatedly select the candidate with the lowest penalty score and remove all conflicting candidates from the set. We stop when the candidate mapping set is empty, or the lowest score of the remaining candidates is greater than a threshold \(\epsilon\). We have found that \(\epsilon=0.1\) gives good results across all our examples. Finally, we propagate object IDs from the previous frame objects \(\mathcal{O}_{t-1}\) to current frame regions \(\mathcal{R}_{t}\) based on the selected mappings as shown in Figure 8. Anytime an object disappears we do not propagate its ID to any subsequent regions. Thus, objects which become completely occluded will re-appear with a new ID by default, though this can be easily changed with user input (Section 5). During this process we also keep track of a _canonical image_ for each object. When an object first appears, we save its labeled pixels as its canonical image. Every time an object appears unoccluded and covers a larger region of pixels in a subsequent frame, we update that canonical appearance by replacing the entire canonical image. Thus we maintain a high-resolution appearance for each object. ### Stage 4: Write an SVG motion program In the final stage, we refactor the frame-to-frame affine motion transforms for each object into an affine transform mapping the object's canonical image to each frame. This motion refactorization could be obtained by multiplying the frame-to-frame transforms or their inverses. In practice, we have found that we can further increase motion accuracy by re-running the DC optimization using the canonical images as the source and the corresponding labeled pixels in each frame as the target. Finally, we write out a SVG motion program with a static background image and a set of foreground objects, each represented by a canonical image, a per-frame sequence of affine transforms placing the canonical image in the frame, and a per-frame z-index depth for the object. ## 5. 
Results: Motion Vectorization Figure 1 shows an abstracted example of the SVG motion program our vectorization pipeline recovers from an input motion graphics video. We apply our motion vectorization pipeline on a test set of 38 motion graphics videos sourced from the Web, with many containing occlusions or fast object motion. A few videos include textures, photographic elements or color gradients in the foreground or background. Table 1 (Appendix A) gives more detail about these videos and the supplemental website provides complete running SVG motion programs for all of them. We first consider the reconstruction error between frames of the input motion graphics videos and corresponding frames produced by the SVG motion programs. Overall, the average \(L_{2}\) RGB error across our test set is 0.0086. Slight reconstruction errors appear mostly at edges of objects due to small inaccuracies in transform parameters, noise, compression or anti-aliasing (Figure 9 left). As a comparison we also use the sprite-from-sprite decomposition method (Zhang et al., 2022). Sprite-from-sprite successfully decomposes the 30 test videos and runs out of memory on the rest. The average \(L_{2}\) RGB reconstruction error for sprite-from-sprite on this subset of videos is 0.018, compared to 0.0079 using our method. See supplemental materials A for a more detailed discussion of this comparison. We also compute the number of tracking errors in each video. We define a tracking error as any time a mapping from objects in frame \(F_{t-1}\) to regions in \(F_{t}\) is incorrect with respect to a manually annotated set of ground truth mappings. Table 1 (Appendix A) shows the total number of such tracking errors as well as the count of errors amongst each mapping type for all the videos in our test set. We find that 24 videos in our test set contain no tracking errors at all, even as some of them contain fast motion, occlusions, or both. The remaining 14 videos all contain 15 errors or fewer. Across all the videos, 75% of the tracking errors occur in one-to-one mappings. Such errors are often due to fast motion and occlusions when objects enter or exit the frame (Figure 9 right top). The next most common tracking error type, at 21%, is incorrect one-to-many (no-split) mappings. Such errors often occur when objects occlude one another and the mapping is misidentified as a one-to-many (split) (Figure 9 right bottom). Two of the three remaining tracking errors occur when many-to-one (no merge) mappings are misidentified as many-to-one (merge) mappings. In these cases the video contains Figure 8. Propagating IDs based on mapping type. For one-to-one and one-to-many (no split) mappings, we assign all pixels of the corresponding region(s) the ID of the object. For one-to-many (split) and unmapped region (appear) mappings, we create new IDs and label the pixels of each region with a different ID. For many-to-one (merge) mappings, we create a new ID to assign to the pixels of the region and then relabel all previous instances of the corresponding objects in the mapping to this new ID. For many-to-one (no merge) mappings, we assign the IDs of each object \(O_{i}\) in the mapping to the corresponding pixels in \(\Theta_{O_{i}}(O_{i})\). Figure 7. We compute penalty scores \(\mathcal{L}^{\text{vis}}\) for each candidate mapping and then select the best conflict-free set of mappings using a greedy approach. similarly colored overlapping objects that move in unison, so our pipeline merges them into one object. 
The final tracking error occurs when a newly appearing region is incorrectly mapped to an existing object. The unmatched region (appear) mapping is misidentified as a one-to-one mapping (example in Figure 9 top right). Our test set did not produce errors of the other four mapping types. _Correcting tracking errors._ Most tracking errors are easily fixed by reassigning object IDs to regions. For instance if a region was assigned object ID 3 but should have been assigned object ID 7, we can manually relabel it. We provide a programmatic interface for such reassignment. An error in a many-to-one (no merge) mapping can require breaking the pixel mask of a region into multiple regions. In this case users can manually specify the pixel boundaries of each region in the frame where the error appears in Stage 1 of our pipeline to enforce the correct region boundaries. We found this correction to only be necessary for two videos (_shapeman_, _confetti_) in our test set. In general however, because our pipeline produces relatively few tracking errors they can often be corrected very quickly. _Discussion._ The SVG motion programs produced by our vectorization pipeline provide a representation of motion graphics videos that can be rendered using a SVG renderer, including most Web browsers. In addition, the motion programs can be edited using a SVG animation editor. We have built SVG motion program importers for Adobe After Effects [11] and Blender [12]. Such editors allow users to manually customize the motion and appearance of the objects using a graphical interface they may already be familiar with (see supplementary video). ## 6. Motion Program Transformation Our _program transformation_ API lets users programmatically express different ways of manipulating an SVG motion program to generate variations of it. Our approach is to treat the SVG motion program as a scene graph that describes the motions of objects over time. Our API adopts a well known-design pattern for working with a scene graph via two types of methods; (1) _state queries_ that look up information about the objects and events in the scene, and Figure 9. **Left:** Reconstruction errors (\(L_{2}\) RGB difference) between frames of input motion graphics videos (_Sk_, _avoidido_) and the corresponding frames rendered with the SVG motion program generated by our vectorization pipeline. **Right:** Tracking errors due to fast object motions and occlusions. **Right:** The _kapprituate_ input video contains characters translating quickly right to right. In frame 13 the ‘a’ is correctly assigned object ID 3, but in frame 14 it is incorrectly assigned a new object ID 8. This occurs because the leftmost ‘p’ in frame 14 is the closest similar looking region to the ‘a’ in frame 13 but the candidate mapping between the ‘a’ and the ‘p’ is rejected as being too low quality. The ‘p’ in frame 13 is also incorrectly mapped to the rightmost ‘p’ in frame 14 for similar reasons, while the leftmost ‘p’ in frame 14 is incorrectly assigned a new object ID 7 since it remains unmatched. Thus this example yields 2 one-to-one mapping errors and 1 unmatched region (appear) error. **Right Bottom:** In the _lucy_ video object 22 is correctly tracked before frame 76 (we visualize it in frame 69 to show the complete unoccluded object). In frame 76 occlusions alter the visibility of the corresponding region so much that a one-to-many (no split) mapping is misidentified as a one-to-many (split) mapping and the additional regions are given brand new IDs 27, 29 and 31. 
(2) _operators_ that modify the appearance or motion of objects. A transformation program typically starts by querying for a set of objects based on their properties (e.g. red colored objects) or the events they participate in (e.g. collisions) and then applies one or more operators to modify the selected objects. This design pattern of querying and then modifying a scene graph is often used in game engines (e.g., Unity [Unity Technologies 2023]) as well as Web APIs (e.g. JQuery [OpenJS Foundation 2023], D3 [Bostock et al. 2011], CSS [Mozilla 2023] and Chickenfoot [Bolin et al. 2005]) that treat the DOM as a scene graph. We describe the methods of our program transformation API (Sections 6.1 and 6.2) and briefly describe how we can use them to build a variety of higher-level transformation effects (Section 6.3). The supplemental materials B provides additional details about our API as well as multiple code examples. While our proof-of-concept implementation of the API enables all of the examples that follow, it is meant to minimally demonstrate our approach. In practice, it could be extended to include additional state queries and operators as necessary.

### Program Transformation API: _State Queries_

State queries retrieve properties or events for a specific object, over a range of frames:

```
propQuery(obj, propType, [frmA, frmB]): Returns a property of obj for each frame
    in [frmA, frmB] based on propType. Property types include: all, color,
    position, size, velocity, etc.

eventQuery(obj, eventType, [frmA, frmB]): Returns a list of events obj is
    involved in over the range of frames [frmA, frmB] based on eventType.
    Event types include: heldFrames, collisionFrames, motionCycleframes, etc.
```

To handle property queries, our API internally computes the chosen property for the object from our motion program representation. For example, to compute the color property of an object it clusters the pixels of the canonical image in color space and returns the color of the largest cluster for each frame in the frame range. Properties that vary based on the motion (e.g., position, size, velocity) are computed using the object's motion transform and reported in the global coordinates of the video frame. The all property type returns all objects that appear in the motion program over the frame range.

To handle event queries, our API internally processes the motion of the object to find frames when the chosen event type occurs. For example, to identify heldFrames we look for successive frames of the object where the motion transform from the canonical image to the frame placement remains fixed and return a list of all such frames. To identify collisionFrames we look for frames where the closest distance between the object boundary and another object boundary is below a threshold (e.g. the objects touch) and at least one of the objects experiences a large change in velocity. The API returns a list of collisions including the other object(s) involved and the points of contact on each object. To identify motionCycleframes we look for peaks in the autocorrelation of motion parameters (translation, rotation, scale, skew) of the object and return a list of the corresponding frames.

### Program Transformation API: _Operators_

Our API provides operators to modify the appearance or motion of a specific object over a range of frames, including:

```
retime(obj, [sFrmA, sFrmB], [frmA, frmB], easeFn[t]): Linearly remap motion
    transforms in source frame range [sFrmA, sFrmB] to target frame range
    [frmA, frmB].
    Then resample the transforms in the target frame range using easing
    function easeFn[t].

adjLocalMotion(obj, xforEn[t], [frmA, frmB]): Adjust motion of obj in the local
    coordinate frame (i.e., of the canonical image), over the range of frames
    [frmA, frmB] based on affine transforms generated by linearly sampling
    xforEn[t] in the range [0,1]. This method post-multiplies the
    canonical-to-frame transform of obj.

adjGlobalMotion(obj, xforEn[t], [frmA, frmB]): Adjust motion of obj in the global
    coordinate frame (i.e., of the video frame), over the range of frames
    [frmA, frmB] based on affine transforms generated by linearly sampling
    xforEn[t] in the range [0,1]. This method pre-multiplies the
    canonical-to-frame transform of obj.

changeAppearance(obj, newAppearance, [frmA, frmB]): Set canonical image of obj
    to newAppearance for frames in [frmA, frmB].
```

In addition to the operators listed here, our API provides basic operators for creating new objects, deleting objects, copying motions, setting the motion transforms (rather than adjusting them via pre- or post-multiplication), etc.

Figure 10 shows the general pattern of a motion program transformer, written with our API. An objSelector code block (or function) selects one or more objects for transformation using a propQuery or eventQuery. An objTransformer code block (or function) then applies one or more operators to change the timing, motion or appearance of the selected object(s). For example, to transform all of the red colored objects to blue, the objSelector function would run a propQuery to obtain the color of each object and then select out the red ones. Then the objTransformer code block would use changeAppearance to set the color of the selected objects to blue.

### Higher-level object transformer effects

Using our motion program transformation API we have built a variety of objTransformer functions that each produce a different, higher-level effect on the timing, motion or appearance of objects (e.g. anticipation/follow-through, motion textures). Several of these transformers implement motion adjustments commonly found in other animation editing systems [Kazi et al. 2014, 2016; Ma et al. 2022]. Importantly, the functions in our API are designed to compose with one another and facilitate the creation of many variations of a motion graphic, thereby supporting iterative design and exploration. Figure 10 provides code for a few object transformers, and supplemental materials B includes code for all of them. The supplemental website also includes multiple example SVG motion programs transformed by each of the higher-level effects described here that can be executed in a Web browser. The following sections give a brief overview of the types of object transformers.

_Retiming._ These object transformers manipulate an individual object timeline. This includes functions that linearly stretch or shrink the time scale of an object, apply slow in/out easing, re-time object motions to reference audio beats, etc. See Figure 10(c) and 10(d) for examples.

_Spatial motion adjustment._ These object transformers manipulate how an individual object moves across the frame. This includes functions that add anticipation/follow-through (Figure 10e) and functions that apply motion textures (e.g. wobbling or pulsing) to an existing motion.

_Appearance adjustment._ The changeAppearance object transformer updates the appearance of a given object by replacing the canonical appearance of the object with a new image.
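As a concrete illustration of this query-then-operate pattern, the following sketch re-creates the red-to-blue example with a small self-contained mock: the names propQuery and changeAppearance follow the API described above, but the MockObject class, the is_red helper and the mock function bodies are our own stand-ins, not the published implementation.

```python
from dataclasses import dataclass

@dataclass
class MockObject:
    """Hypothetical stand-in for one object in an SVG motion program."""
    name: str
    color: tuple  # dominant RGB color of the canonical image, in [0, 1]

def propQuery(obj, propType, frames):
    """Mock state query; only the 'color' property is supported here."""
    assert propType == 'color'
    return [obj.color for _ in range(frames[0], frames[1] + 1)]

def changeAppearance(obj, newColor, frames):
    """Mock appearance operator: replace the dominant color of the object."""
    obj.color = newColor

def is_red(rgb):
    r, g, b = rgb
    return r > 0.6 and g < 0.3 and b < 0.3

def red_to_blue(objects, n_frames):
    # objSelector: keep objects whose color is red in every queried frame.
    selected = [o for o in objects
                if all(is_red(c) for c in propQuery(o, 'color', [1, n_frames]))]
    # objTransformer: recolor the selected objects to blue.
    for o in selected:
        changeAppearance(o, (0.0, 0.0, 1.0), [1, n_frames])
    return selected

scene = [MockObject('disc', (0.9, 0.1, 0.1)), MockObject('text', (0.2, 0.2, 0.2))]
print([o.name for o in red_to_blue(scene, n_frames=30)])  # prints ['disc']
```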
One unintended consequence of an appearance change is that collisions between objects may be affected. For instance, naively changing the dark blue circle in Figure 11 to a smaller-sized coin would not maintain collisions between the smaller coin and the yellow circle. Since collisions are often important events in a video, we also allow for _collision-preserving_ appearance changes. This type of appearance change uses event queries to find collisionframes and then applies local motion adjustments to best preserve the original collisions at those frames. ## 7. Results: Motion Program Transformation By combining objSelector and objTransformer blocks, we can create a variety of motion graphic variations. Figure 1, Figure 11 and Figures 5-6 in supplemental materials A show examples where we have composed multiple objSelector and objTransformer blocks to generate complex variations of retiming, spatial motion adjustment and appearances changes. Executable SVG motion programs and program transformer code for other additional examples with retiming, spatial motion adjustments and appearance adjustments are provided in the supplemental website. We encourage readers to browse the examples to see the breadth of different transformations and variations that can be achieved with our motion program transformation API. _Usability evaluation._ To further evaluate the usability of our program transformation API, we asked 10 people (all experienced Python coders, 5 familiar with query-then-operate design pattern) to use the API to programmatically create a variation of an SVG motion program (Figure 12). We first gave each participant a 30 minute tutorial (a combination of oral instruction and a Colab notebook) explaining how to use the API. We then gave them 15 minutes to write their own program transforming an animated digital card into one suitable for a different occasion. All participants successfully wrote a transformation program containing two or more object queries and transformations. On a 5 point Likert scale (1 = _very hard_, 5 = _very easy_) they all rated the query-then-operate pattern as _easy_ or _very easy_ to understand. Two participants who were familiar with the design pattern compared the structure of our API to SQL and other scene-graph based content creation APIs like Maya [14] and MotionBuilder [14]. Multiple participants stated in free-response feedback that the API was "intuitive to understand," "lightweight and natural," and "easy to use." Many participants liked the expressivity of the API. Nine participants noted that the API was flexible enough to accomplish the edits they wanted to make. One participant liked "how powerful the API is while still being easy to use," further commenting that "it covered a lot of possible transformations within relatively simple operations." Another wrote that the programmatic approach of our Figure 10. The general structure of motion program transformer (a) takes an SVG motion program P as input and alternates object selector blocks with object transformer blocks to modify the SVG program. The object selector function objSelector (b) selects one or more objects for transformation. It first runs queryfn (i.e., either propQuery or eventQuery) using the specified queryType (i.e., color, collisionframes) and then filters the objects to only those that match the specified criteria. The object transformers adjust the timing (c, d) motion (e) or appearance of a set of selected objects selObj. 
See the supplemental material B for additional examples of object transformers we have built to achieve a variety of effects. API "would be especially useful for mass producing animations or images that still look customized" and "[they] would welcome [the] programmatic approach compared to painful and arduous manual process of doing it through interfaces like InDesign." Overall, this feedback suggests that users familiar with programming are able to use our transformation API to easily produce variations of a SVG motion program. ## 8. Limitations and Future Work Our work enables editing of motion graphics video by first converting the video into an SVG motion program and then using motion program transformers programmatically create variations. However there are a few limitations that warrant future work. _Lifting assumptions on input video._ Our work focuses on motion graphics video with a static background and solid-colored, lightly textured or gradient-filled objects undergoing affine motions. Extending our approach to handle natural video containing moving backgrounds with highly textured, photographic foreground objects undergoing deformable motions, may be possible using recent video matting techniques [Kasten et al., 2021; Lu et al., 2021]. Handling non-affine motions within our pipeline would require modifications to the differentiable compositing optimization (Section 4.1) to account for the deformations. _Vectorizing canonical images._ Our SVG motion programs represent the appearance of each object using a canonical image. Converting these canonical images into a vector representation (e.g., composed of paths, shapes, gradients, etc.) would bring the benefits of a higher-level abstraction to the appearance of the objects in addition to their motions. Techniques for converting images into vector representations [Orzan et al., 2008; Reddy et al., 2021] is an active area of work that might be adapted to this context. _Higher-level program abstraction based on gestalt principles._ Our SVG motion programs represent motion graphics video using abstractions (e.g., objects) and controls (e.g., affine transform parameters) that are more meaningful than pixels and frames of video. One way to provide further meaningful abstraction might be to group objects based on perception and gestalt principles. For example if a motion graphic contains objects (e.g., letters) that move together and are near one another, they might be grouped together to form a higher-level composite object (e.g., a word). Such higher-level grouping could further facilitate program transformation as changes and adjustments could be applied to the composite objects. _GUI for motion editing._ Our system enables users to work with a programmatic representation of motion graphics video rather than pixels and frames. However, we have not developed a graphical user interface for editing the resulting SVG motion programs. Indeed, we believe many different GUIs could be built using our motion program representation and our program transformation API. One approach that may be especially fruitful is to extend the bidirectional SVG editing interface of Sketch-n-Sketch [Hempel et al., 2019], so Figure 11. Changing appearance while preserving collisions. This input video contains two balls that interact with one another with the dark blue ball bouncing around outside and inside the yellow ball. The program transformer changes the blue ball into a coin that is smaller than the blue ball. 
It then uses the collisionPreserveObjTransformer to adjust the motion of the smaller coin so that the collision points are maintained with the yellow ball. Finally it changes the appearance of the yellow ball to a piggy bank with the body of the bank the same size as the yellow ball. Figure 12. We asked user study participants to use our transformation API to repurpose a digital card with confetti falling down (top row). One participant created a happy holidays card with falling snow (middle). Another created a new years card reversing the falling motion to create streamers and stars. that direct manipulation changes to the graphics are immediately reflected in the SVG representation and vice versa. Inferring how direct, graphical manipulations should affect an underlying motion program is an important direction for future work. ## 9. Conclusion While motion graphics videos are prevalent on the Web today, they are difficult to edit because they are simply a collection of pixels and frames. We have presented a motion vectorization pipeline that converts such video into a SVG motion program that represents the video as objects moving over time. We further provide a motion program transformation API that enables programmatic editing of the resulting SVG programs to create variations of the timing, motions and object appearance. We believe that these tools can allow users to more easily explore motion graphics design options by borrowing from widely-available motion graphics video examples and that they open the door to dynamically adapting the graphics to the preferences of the viewer. ###### Acknowledgements. We thank Lvmin Zhang for valuable discussions on sprite-from-sprite. We would also like to thank the reviewers for their feedback. This research is supported by NSF Award #2219864, the Brown Institute for Media Innovation and the Stanford Institute for Human-Centered AI (HAI).
2303.18023
Multiple Hankel matrix rank minimization for audio inpainting
Sasaki et al. (2018) presented an efficient audio declipping algorithm, based on the properties of Hankel-structure matrices constructed from time-domain signal blocks. We adapt their approach to solving the audio inpainting problem, where samples are missing in the signal. We analyze the algorithm and provide modifications, some of them leading to an improved performance. Overall, it turns out that the new algorithms perform reasonably well for speech signals but they are not competitive in the case of music signals.
Pavel Záviška, Pavel Rajmic, Ondřej Mokrý
2023-03-31T13:02:02Z
http://arxiv.org/abs/2303.18023v2
# Multiple Hankel matrix rank minimization for audio inpainting

###### Abstract

Sasaki et al. (2018) presented an efficient audio declipping algorithm, based on the properties of Hankel-structured matrices constructed from time-domain signal blocks. We adapt their approach to solve the audio inpainting problem, where samples are missing in the signal. We analyze the algorithm and provide modifications, some of them leading to an improved performance. Overall, it turns out that the new algorithms perform reasonably well for speech signals but they are not competitive in the case of music signals.

audio inpainting; audio declipping; rank minimization; Hankel matrix; autoregression

## I Introduction

Audio declipping and audio inpainting are two closely related inverse problems. In the inpainting case, some audio samples are missing and the need for a means of reconstruction naturally arises. A number of successful algorithms have been proposed, based on different signal models. These include assumptions on the autoregressivity of the audio waveform [2, 3] or its smoothness [4], sparsity of the time-frequency audio representation [5, 6, 7, 8] and low-rank expansions of the spectrogram [9]. Other methods rely on copying non-local audio information into the gap [10, 11]. A special class of methods uses deep neural networks to learn the signal reconstruction [12, 13]. The case of audio declipping differs from inpainting solely by additional amplitude-based constraints stemming from the clipping process [14]. Thus, if the respective model allows it, declipping methods can be basically identical with their inpainting variants, only with additional requirements on the feasible set [15, 16, 17, 18, 19]. In some cases, however, such a modification is not possible and therefore there exists a variety of algorithms designed specifically for declipping [1, 20, 21].

In the present paper, the fundamental assumption is that a segment of audio can be effectively approximated using an autoregressive (AR) process. This assumption is nevertheless not utilized directly by modeling the latent AR coefficients. Rather, as in [1], we exploit the fact that the Hankel matrix constructed from a block of an AR signal is low-rank. While [1] treats clipped audio, we propose an optimization problem dealing with missing samples and solve it by an algorithm similar to the referenced one. Since missing audio usually appears in compact blocks of samples, in the experiments we focus on this particular case.

Our first contribution is the introduction of a novel method into the context of audio inpainting. Second, while [1] evaluates their respective algorithm solely on speech, our approach, which includes several proposed modifications, is tested against the state-of-the-art methods also on a standard music dataset.

The main idea of [1] is actually not brand new. Optimization involving Hankel matrices has been proposed in [22] and [23] for the case of audio declipping, and in [24] for audio inpainting. Compared to these works, [1] proposes to involve _multiple_ matrices in the optimization, leading to an improved reconstruction efficiency.

## II Method

The method is based on the assumption that audio signals \(\mathbf{x}=\{x_{t}\}\) can be modeled as autoregressive (AR). An AR model of order \(r\) characterizes the signal samples as depending on \(r\) preceding ones, \[x_{t}=\sum_{k=1}^{r}a_{k}x_{t-k}+\varepsilon_{t}, \tag{1}\] where \(a_{k}\) are the AR coefficients and \(\varepsilon_{t}\) denotes noise.
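For intuition, a block obeying the AR model (1) is easy to simulate; the sketch below is a minimal NumPy illustration with arbitrarily chosen coefficients and noise level, and such a block is what the Hankel matrices discussed next are built from.

```python
import numpy as np

def simulate_ar(coeffs, n_samples, noise_std=0.01, seed=0):
    """Simulate x_t = sum_k a_k * x_{t-k} + eps_t, as in Eq. (1)."""
    rng = np.random.default_rng(seed)
    r = len(coeffs)
    x = np.zeros(n_samples + r)          # r zero samples start the recursion
    for t in range(r, n_samples + r):
        past = x[t - r:t][::-1]          # x_{t-1}, x_{t-2}, ..., x_{t-r}
        x[t] = np.dot(coeffs, past) + rng.normal(0.0, noise_std)
    return x[r:]

# A stable AR(2) example; the coefficients are an arbitrary illustrative choice.
block = simulate_ar([1.5, -0.9], n_samples=512)
print(block[:4])
```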
Given a signal \(\mathbf{x}\), elements of the Hankel-structured matrix \(\mathbf{X}\in\mathbb{R}^{M\times N}\) are defined as \[X_{i,j}=x_{i+j-1}, \tag{2}\] where \(M>N\). Rows are indexed by \(i\) and columns by \(j\). Note that this way, each antidiagonal of \(\mathbf{X}\) is constant. Let \(H\subset\mathbb{R}^{M\times N}\) denote the set (actually a vector space) of Hankel matrices of size \(M\times N\). The key observation is that when noise is ignored in (1), the rank of the corresponding Hankel matrix is equal to the order of the AR process, \(r\). This motivates the actual formulation of the inpainting problem, where the rank is being minimized, i.e., the minimum-order AR model of the signal is searched for, given the constraints. Let the set \(\Gamma\) summarize the inpainting conditions, using the reliable samples from the observed signal: \[\Gamma=\left\{\mathbf{X}\in\mathbb{R}^{M\times N}\mid\mathbf{X}_{i,j}=\mathbf{Y} _{i,j}\text{ for }i+j-1\in R\right\}, \tag{3}\] where \(\mathbf{Y}\) represents a Hankel matrix created from the depleted input signal and \(R\) denotes the set of indexes corresponding to the _reliable_ audio samples. The basic optimization problem then reads \[\min_{\mathbf{X}}\,\operatorname{rank}(\mathbf{X})\text{ s.t. }\mathbf{X}\in \Gamma\cap H, \tag{4}\] i.e., search for a matrix with minimum rank among all matrices satisfying the two feasible conditions. Such a problem is NP-hard, and therefore the solution must be only approximated. Often, the nuclear norm \(\|\mathbf{X}\|_{\star}\) is utilized in the literature as a convex surrogate to the non-convex \(\operatorname{rank}(\mathbf{X})\) function [25, 26, 27]. This makes the problem computationally affordable. However, the performance of such a basic form of the reconstruction approach is reported as poor in the context of audio declipping [1], and the same was observed by us in the field of audio inpainting. To improve the performance, authors of [1] shift to a modified problem, called _multiple_ matrix problem: \[\min_{\mathbf{X},\mathbf{D}_{i}}\sum_{i=1}^{L}\operatorname{rank }(\mathbf{D}_{i}\mathbf{X}) \tag{5}\] \[\text{s.t. }\sum_{i=1}^{L}\mathbf{D}_{i}=\mathbf{I},\ \mathbf{D}_{i}\in D,\ \mathbf{X}\in \Gamma\cap H.\] Here, \(L\) is a constant representing the number of matrices \(\mathbf{D}_{i}\), \(D\) denotes the set of diagonal matrices whose elements are either 0 or 1, and \(\mathbf{I}\) represents the identity matrix. In words, (5) splits the processed signal into \(L\) parts, and the rank of each of the parts is minimized separately. Yet, the parts together have to comply with the restrictions given by both \(\Gamma\) and \(H\). The division of each audio sample into one of the \(L\) blocks, coded by the diagonal matrices \(\mathbf{D}_{i}\), is optimized jointly with \(\mathbf{X}\). An approximate numerical solution to problem (5) can be obtained using the iterative partial matrix shrinkage (IPMS) algorithm [28]. It is a heuristic algorithm with roots in proximal splitting [29, 30]. The IPMS algorithm can be decomposed into three fundamental steps corresponding to the requirements of problem (5), see Alg. 1. 
``` Input:\(\mathbf{Y},H\cap\Gamma,\{\mathbf{D}_{i}\}_{i=1}^{L}\) Parameters:\(\alpha,\alpha_{\text{min}},\eta_{\alpha},\lambda,\tau,t_{\text{max}}\) Initialization:\(t=0,\mathbf{X}=\mathbf{Y}\) 1repeat\(\alpha\leftarrow\max(\alpha/\eta_{\alpha},\alpha_{\text{min}})\)for\(i=1,\ldots,L\)do 2\([\mathbf{U},\sigma_{1},\sigma_{2},\ldots,\sigma_{N},\mathbf{V}]\leftarrow \operatorname{svd}(\mathbf{D}_{i}\mathbf{X})\)\(r_{i}\leftarrow\arg\min_{\tau}\sigma_{\tau}\) s.t. \(\sigma_{\bar{\tau}}\geq\alpha\sigma_{1}\) 3\(\mathbf{Z}_{i}\leftarrow\mathcal{T}_{r_{i},\lambda\sigma_{\tau_{i}}}( \mathbf{D}_{i}\mathbf{X})\) 4for\((i,j)\in\{1,2,\ldots,L\}\times\{1,2,\ldots,N\}\)do 5\((\mathbf{d}^{(i)})_{j}\leftarrow\max\left(0,\frac{1}{L}\left(1-\sum_{k=1}^{L} \frac{\langle\mathbf{z}_{j}^{(k)},\mathbf{x}_{j}\rangle}{\langle\mathbf{x}_{j },\mathbf{x}_{j}\rangle}\right)+\frac{\langle\mathbf{z}_{j}^{(i)},\mathbf{x}_{ j}\rangle}{\langle\mathbf{x}_{j},\mathbf{x}_{j}\rangle}-\tau\right)\) 6\((\mathbf{d}^{(i)})_{j}\leftarrow(\mathbf{d}^{(i)})_{j}/\sum_{k=1}^{L}(\mathbf{d }^{(k)})_{j}\)\(\forall\ i,j\) 7\(\mathbf{X}\leftarrow\left(\sum_{k=1}^{L}\mathbf{D}_{i}^{2}\right)^{-1}\left( \sum_{k=1}^{L}\mathbf{D}_{i}\mathbf{Z}_{i}\right)\) 8\(\mathbf{X}\leftarrow\operatorname{proj}_{H\cap\Gamma}(\mathbf{X})\)\(t\gets t+1\) 9until\(t_{\text{max}}<t\) return\(\mathbf{X}\) ``` **Algorithm 1**Inpainting algorithm based on IPMS The first step consists of enforcing the low rank of each \(\mathbf{D}_{i}\mathbf{X}\). This is achieved by thresholding the singular values of \(\mathbf{D}_{i}\mathbf{X}\). Ignoring for the moment the index \(i\), the singular values \(\sigma_{n}\) are obtained via the classic singular value decomposition (SVD): \([\mathbf{U},\sigma_{1},\ldots,\sigma_{N},\mathbf{V}]=\operatorname{svd}( \mathbf{D}_{i}\mathbf{X})\). While the usual way of processing the singular values would be to apply a soft thresholding [31] to all of them, the authors of [1] utilize the _partial_ soft thresholding operator \(\mathcal{T}_{r,\lambda\sigma_{r}}\). This operator performs the soft thresholding on the \(N-r\) smallest singular values among \(\sigma_{1},\ldots,\sigma_{N}\), with the adaptively derived threshold \(\lambda\sigma_{r}\). The largest \(r\) singular values are kept unchanged [28]. This way, matrices \(\mathbf{Z}_{i}\) are obtained. The second step includes an update of \(\mathbf{D}_{i}\). The preceding thresholding step does not provide matrices \(\mathbf{D}_{i}\) that would obey the constraints \(\mathbf{D}_{i}\in D\). Therefore, the update of \(\mathbf{D}_{i}\) is done by approximating the solution of the problem \[\min_{\mathbf{D}_{1},\ldots,\mathbf{D}_{L}}\sum_{i=1}^{L}\|\mathbf{Z}_{i}- \hat{\mathbf{D}}_{i}\mathbf{X}\|_{\text{F}}^{2} \tag{6}\] using a heuristic shrinkage technique, pushing the values of the diagonal elements \((\mathbf{d}^{(i)})_{j}\) closer to 0 or 1. The updated matrices \(\mathbf{D}_{i}\) satisfy the condition \(\sum_{i=1}^{L}\mathbf{D}_{i}=\mathbf{I}\). Finally, the third principal step involves the update of \(\mathbf{X}\) such that it minimizes the distance between \(\mathbf{Z}_{i}\) and \(\mathbf{D}_{i}\mathbf{X}\), followed by the projection of \(\mathbf{X}\) onto the intersection \(H\cap\Gamma\), which enforces the other two simultaneous feasible conditions. 
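To make the first of these steps concrete, the snippet below is an illustrative NumPy version of the partial soft-thresholding operator \(\mathcal{T}_{r,\lambda\sigma_{r}}\) described above: soft-threshold the \(N-r\) smallest singular values with threshold \(\lambda\sigma_{r}\) and keep the largest \(r\) untouched. The test matrix and parameter values are arbitrary and not taken from the paper.

```python
import numpy as np

def partial_soft_threshold(A, r, lam):
    """Keep the r largest singular values of A unchanged and soft-threshold the
    remaining ones with threshold lam * sigma_r (assumes 1 <= r <= min(A.shape))."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    thr = lam * s[r - 1]                      # sigma_r; singular values are sorted descending
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - thr, 0.0)  # shrink only the N - r smallest values
    return U @ np.diag(s_new) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
Z = partial_soft_threshold(A, r=2, lam=0.5)
print(np.round(np.linalg.svd(A, compute_uv=False), 2))  # original singular values
print(np.round(np.linalg.svd(Z, compute_uv=False), 2))  # two largest unchanged, rest shrunk
```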
It is not hard to show that \(\operatorname{proj}_{H\cap\Gamma}\) can be computed as a composite projection \(\operatorname{proj}_{\Gamma}\circ\operatorname{proj}_{H}\), where the projection on the Hankel space is evaluated by computing the averages of the antidiagonals, and \(\operatorname{proj}_{\Gamma}\) simply replaces the samples at reliable indexes by the respective audio samples from the observation matrix \(\mathbf{Y}\). Note that opposite to audio declipping, in the case of inpainting the order of the two projections does not matter but computing the \(\operatorname{proj}_{H}\) first is more computationally effective, since the consecutive \(\operatorname{proj}_{\Gamma}\) can be done on the signal \(\mathbf{x}\), rather than on the corresponding Hankel matrix. Nonetheless, in the provided source codes the authors compute only the projection onto the reliable set in each iteration, while the projection onto the Hankel space is done only once at the very end of iterations, before the Hankel matrix \(\mathbf{X}\) is converted back to the time-domain vector. In the experiments in Sec. IV we include both variants of the algorithm denoted as IPMS\({}_{\Gamma}\) and IPMS\({}_{H\cap\Gamma}\) to evaluate the influence of \(\operatorname{proj}_{H}\) in each iteration on the performance of the algorithm. As a final note, the algorithm provided in Alg. 1 requires an initial setting of \(\mathbf{D}_{i}\). The idea is that since audio signals do not often switch the AR model, matrices \(\mathbf{D}_{i}\) are initialized such that the values of \(\mathbf{D}_{i}\) blend smoothly from one AR process to another and that it holds \(\sum_{i=1}^{L}(\mathbf{d}^{(i)})_{j}=1\). The paper [1] provides an explicit initialization formula, however the implementation provided by its authors slightly differs. The above-described character of the initialization is nevertheless preserved. ## III Block processing The original IPMS audio declipping method [1] was designed to process the signal by short, overlapping windows. To avoid breaking the AR signal assumption, rectangular analysis window is used by the authors; commonly used non-rectangular windows would lead to weakening the assumption of the deterministic part of (1). However, in the signal synthesis (i.e., after a processed block is restored) the authors _replace_ the overlapping samples with the currently processed block. This may lead to waveform discontinuities in the transitions between blocks, which may cause undesirable artifacts. To cope with this issue, we propose two approaches for smoothing out the block transitions. The first approach utilizes crossfading--a commonly-known technique to smoothly progress from one signal segment to another [32]. The exploited crossfading function was based on the squared sine wave \[c_{k}=\sin^{2}\left(\frac{k\pi}{2(K+1)}\right),\quad k=1,\ldots,K, \tag{7}\] where \(K\) represents the length of the crossfaded section. The second approach is based on the standard synthesis overlap-add (OLA) technique. A rectangular window is still used for analysis; however, for the signal synthesis a smooth window is used. The currently processed block is weighted by the synthesis window and added to the already-processed part of the signal. This ensures smooth blending of the currently processed block into the rest of the signal without discontinuities. For this application we chose the Hann window, which satisfies the important partition-of-unity property in the case of a 75% overlap. 
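A minimal NumPy sketch of the two transition-smoothing options follows: the squared-sine crossfade of (7) and an overlap-add synthesis with a Hann window. The block length, hop size, and the exact blending convention are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def crossfade(prev_tail, new_head):
    """Blend two overlapping segments with the squared-sine fade of (7)."""
    K = len(prev_tail)
    k = np.arange(1, K + 1)
    c = np.sin(k * np.pi / (2 * (K + 1))) ** 2    # ramps from ~0 up to ~1
    return (1.0 - c) * prev_tail + c * new_head

def ola_add(signal, block, start, window):
    """Overlap-add a restored block, weighted by a synthesis window."""
    signal[start:start + len(block)] += window * block

# periodic Hann window; with a 75 % overlap (hop = length / 4) the shifted
# windows sum to a constant, i.e., the partition-of-unity property holds
blk_len = 1024
hop = blk_len // 4
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(blk_len) / blk_len)
```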
## IV Experiments and results This section describes the experiments designed to evaluate the performance of the proposed method and its variants, and presents the numerical results of the restoration. ### _Data_ The experiments were performed on a dataset from the Audio Inpainting Toolbox1 accompanying the seminal article on audio inpainting by Adler _et al._[5]. The toolbox contains 10 musical and 10 speech (5 male and 5 female) uncompressed monophonic audio excerpts sampled at 16 kHz with a duration of 5 seconds and a bit-rate of 256 kbps. Footnote 1: [http://small.inria.fr/keysresults/audio-inpainting/](http://small.inria.fr/keysresults/audio-inpainting/) To simulate the loss of the time-domain samples, we generated ten gaps randomly distributed across the length of the signals, and zeroed the audio samples belonging to these intervals. To ensure a fair evaluation and comparison, positions of the gaps remained fixed for all tested signals and methods. The performed experiments utilized gaps ranging from 10 ms (160 samples) up to 50 ms (800 samples) with the step of 10 ms. ### _Metrics_ As the measure of restoration quality, we use signal-to-noise ratio (SNR), which evaluates the physical similarity of waveforms in decibels such that \[\mathrm{SNR}(\mathbf{y},\hat{\mathbf{y}})=10\cdot\log_{10}\frac{\|\mathbf{y}\| _{2}^{2}}{\|\mathbf{y}-\hat{\mathbf{y}}\|_{2}^{2}}, \tag{8}\] where \(\mathbf{y}\) represents the original, in real situations unknown, signal and \(\hat{\mathbf{y}}\) denotes the restored signal. Since the physical similarity of waveforms does not necessarily mean the most auditory pleasant result, we use also two perceptually motivated metrics--PEMO-Q [33] for music and Perceptual Evaluation of Speech Quality (PESQ) [34] for speech audio excerpts. PEMO-Q has been originally developed for rating audio quality degraded by compression algorithms, nevertheless, it is commonly used also for evaluating the performance of various audio restoration tasks, such as inpainting, declipping, dequantization, etc. Its output is a number called Objective Difference Grade (ODG), rating the severity of audio degradation in the range from \(-4\) (very annoying) to 0 (imperceptible). PESQ is a family of standards developed to model subjective tests commonly used in telecommunications. It evaluates the quality of speech signal on a Mean Opinion Score (MOS) scale ranging from 1 (bad) to 5 (excellent). The experiments include four different variants of the proposed inpainting algorithm. Two of them differ in the projection step--IPMS\({}_{\Gamma}\) projects only on the feasible set \(\Gamma\), while IPMS\({}_{H\cap\Gamma}\) computes the projection on the intersection of the Hankel space \(H\) and set of feasible solutions \(\Gamma\) in each iteration. The other two variants are based on IPMS\({}_{H\cap\Gamma}\) and utilize an additional step of smoothing the transitions between signal blocks: IPMS\({}_{\text{xfade}}\) uses crossfade governed by the squared sine wave (7) and IPMS\({}_{\text{OLA}}\) exploits the traditional OLA approach with the Hann window. The parameters of the proposed method are summarized in Table I. 
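For reference, here is a short NumPy sketch of the SNR evaluation in (8) and of a random-gap degradation similar to the one described above; the gap placement and the helper names are our own illustrative choices.

```python
import numpy as np

def snr_db(y_true, y_hat):
    """Signal-to-noise ratio of (8), in decibels."""
    err = y_true - y_hat
    return 10.0 * np.log10(np.sum(y_true ** 2) / np.sum(err ** 2))

def make_gaps(x, n_gaps=10, gap_len=320, seed=0):
    """Zero out n_gaps randomly placed gaps (320 samples = 20 ms at 16 kHz)
    and return the degraded signal plus the reliable-sample mask.
    For simplicity, possible overlaps between gaps are not prevented."""
    rng = np.random.default_rng(seed)
    reliable = np.ones(len(x), dtype=bool)
    for start in rng.integers(0, len(x) - gap_len, size=n_gaps):
        reliable[start:start + gap_len] = False
    return np.where(reliable, x, 0.0), reliable
```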
### _Algorithms_
The performance of the proposed inpainting algorithms was compared with several top-performing audio inpainting methods, namely the nonnegative matrix factorization (NMF)-based EM1 [9], Janssen's method [2], interpolation based on the left-sided and right-sided AR-parameter vectors (LR) [3], the analysis sparse audio inpainter (A-SPAIN) [7], and the A-SPAIN variant utilizing a dictionary learning approach (A-SPAIN-learned) [8]. The algorithms used in the experiments, including the proposed Hankel-based method, were implemented and tested in MATLAB 2022a. Except for the LR method, they all exploit block-wise processing with 1024-sample-long overlapping windows and a 256-sample window shift (75% overlap). Note that the experiments were also performed using the recently introduced NMF-based methods AM and AMtoEM1 [9], the modified version of Janssen's method [9], A-SPAIN-mod [8], and the weighted variant of the \(\ell_{1}\)-minimization approach [6]. Nevertheless, these methods were omitted from the presented figures for clarity; their respective results were neither among the best nor the worst.

### _Results_
The results for the music dataset are illustrated in Fig. 1. The results show that performing the projection onto the Hankel space significantly improves the inpainting results according to both the SNR and PEMO-Q. Smoothing the transitions between the individual signal blocks using crossfading or OLA increases the performance even more, although in these cases the difference is less significant than in the case of the Hankel-space projection. Still, the overall results for music signals indicate that the proposed method is not competitive with the current state-of-the-art methods. Nonetheless, the situation is different for speech signals. The results in Fig. 2 show that while IPMS\({}_{\Gamma}\) loses to the other methods, the other variants of IPMS provide very good SNR results that are comparable to Janssen's method. Surprisingly, the best option for speech seems to be the plain IPMS\({}_{H\cap\Gamma}\)_without_ transition smoothing. The PESQ results are, however, more critical and show that while the proposed method outperforms all but one method for shorter gaps (10 and 20 ms), its performance drops significantly for longer gaps. We offer an explanation for why the algorithm performed better on speech than on music: speech of a single speaker can be efficiently modeled by an AR process, a fact that is exploited in speech coders, for instance. On the other hand, AR modeling of musical instruments is in general not that efficient. On top of that, when polyphonic/multi-instrument pieces are considered, the signal is formed as a sum of components. These may be AR-modelable separately, but the AR properties of the individual components do not transfer to their sum. Loosely speaking, a sum of AR processes is not an AR process.

Fig. 1: SNR and PEMO-Q audio inpainting results evaluated on the "music" dataset.

Fig. 2: SNR and PESQ audio inpainting results evaluated on the "speech" dataset.

Finally, let us comment on the weights \(\mathbf{D}_{i}\), \(i=1,\ldots,L\) of the sub-processes. As described at the end of Sec. II, these weights are initialized by smooth windows, which seems to be a reasonable choice, allowing "switching" from one AR process to another, adaptively to the signal contents. Our observation is that at convergence, the profiles of \((\mathbf{d}^{(i)})_{j}\) do not form compact groups. This is at least surprising.
It means that the optimization of (5) via the IPMS algorithm leads to a more or less random partition of the processed blocks into sub-blocks, which in a sense contradicts the basic AR modeling idea.

## V Conclusion
This paper presented a novel audio inpainting algorithm based on Hankel-structured matrix rank minimization, formerly applied to the problem of audio declipping. The results of the experiments have shown that an extra projection onto the Hankel space in each iteration significantly improves the performance. The algorithm turned out to perform quite well for speech signals but not that well for music signals. Furthermore, we proposed and examined two possibilities for smoothing the transitions between individual signal blocks--one based on time-domain crossfading and the other on the (related) overlap-add approach. These techniques slightly improved the results for the music signals; however, in the case of speech signals they did not improve the performance according to PESQ and even slightly worsened the results according to the SNR. The results highlighted the potential of the proposed algorithm, especially in the case of speech inpainting. However, the obtained results did not meet the expectations stemming from the audio declipping performance reported in [1].
2309.13139
Exposing the Unseen: Exposure Time Emulation for Offline Benchmarking of Vision Algorithms
Visual Odometry (VO) is one of the fundamental tasks in computer vision for robotics. However, its performance is deeply affected by High Dynamic Range (HDR) scenes, omnipresent outdoor. While new Automatic-Exposure (AE) approaches to mitigate this have appeared, their comparison in a reproducible manner is problematic. This stems from the fact that the behavior of AE depends on the environment, and it affects the image acquisition process. Consequently, AE has traditionally only been benchmarked in an online manner, making the experiments non-reproducible. To solve this, we propose a new methodology based on an emulator that can generate images at any exposure time. It leverages BorealHDR, a unique multi-exposure stereo dataset collected over 10 km, on 55 trajectories with challenging illumination conditions. Moreover, it includes lidar-inertial-based global maps with pose estimation for each image frame as well as Global Navigation Satellite System (GNSS) data, for comparison. We show that using these images acquired at different exposure times, we can emulate realistic images, keeping a Root-Mean-Square Error (RMSE) below 1.78 % compared to ground truth images. To demonstrate the practicality of our approach for offline benchmarking, we compared three state-of-the-art AE algorithms on key elements of Visual Simultaneous Localization And Mapping (VSLAM) pipeline, against four baselines. Consequently, reproducible evaluation of AE is now possible, speeding up the development of future approaches. Our code and dataset are available online at this link: https://github.com/norlab-ulaval/BorealHDR
Olivier Gamache, Jean-Michel Fortin, Matěj Boxan, Maxime Vaidis, François Pomerleau, Philippe Giguère
2023-09-22T18:48:54Z
http://arxiv.org/abs/2309.13139v3
# Exposing the Unseen: Exposure Time Emulation for Offline Benchmarking of Vision Algorithms ###### Abstract Visual Odometry (VO) is one of the fundamental tasks in computer vision for robotics. However, its performance is deeply affected by High Dynamic Range (HDR) scenes, omnipresent outdoor. While new Automatic-Exposure (AE) approaches to mitigate this have appeared, their comparison in a reproducible manner is problematic. This stems from the fact that the behavior of AE depends on the environment, and it affects the image acquisition process. Consequently, AE has traditionally only been benchmarked in an online manner, making the experiments non-reproducible. To solve this, we propose a new methodology based on an emulator that can generate images at any exposure time. It leverages BorealHDR, a unique multi-exposure stereo dataset collected over 8.4 km, on 50 trajectories with challenging illumination conditions. Moreover, it contains pose ground truth for each image and a global 3D map, based on lidar data. We show that using these images acquired at different exposure times, we can emulate realistic images keeping a Root-Mean-Square Error (RMSE) below 1.78 % compared to ground truth images. To demonstrate the practicality of our approach for offline benchmarking, we compared three state-of-the-art AE algorithms on key elements of Visual Simultaneous Localization And Mapping (VSLAM) pipeline, against four baselines. Consequently, reproducible evaluation of AE is now possible, speeding up the development of future approaches. Our code and dataset are available online at this link: [https://github.com/norlab-ulaval/BorealHDR](https://github.com/norlab-ulaval/BorealHDR) ## I Introduction Cameras can capture high-resolution details of a scene, at high frame rates, and in a cost-effective manner. Because of this, they are used in many robotic applications, ranging from object detection to localization. One such application is VO, which is the task of predicting the displacement of a camera between consecutive images. It is the basis of many vision-based localization algorithms, such as VSLAM, and improving such a task is still an active field of research [1, 2]. In outdoor settings, the quick variations in illumination level and HDR scenes can severely compromise the performances of VO algorithms [3]. In one second, a car leaving a tunnel can experience an illumination variation over \(120\,\mathrm{dB}\)[4], while a standard 12-bit channel camera has a theoretical dynamic range of around \(72\,\mathrm{dB}\). A boreal forest in winter is another example of such HDR environments, where the sun reflected on the snow is highly contrasted with the darkness of the trees. This contrast inevitably leads to saturated pixels on both ends of the spectrum, as highlighted by blue and red colorized hues in Figure 1, thus resulting in valuable information loss for VO algorithms. Accordingly, researchers developed AE approaches to modify the camera exposure parameters, such as exposure time and gain, during operation to reduce the impact of the dynamic range on VO [5, 6, 7]. Unfortunately, given the online nature of exposure control, comparing each method is very challenging [8, 9], as one needs to exactly replicate test conditions for each run, including environment lighting and camera poses. One of the developed comparison methods for AE algorithms is based on installing on an acquisition platform as many cameras as the number of benchmarked methods. 
With the increasing number of tested approaches, the experimental setup quickly becomes challenging and costly. Another way is to repeat the exact same trajectory multiple times, once for each benchmarked AE. In an uncontrolled environment, such as the one illustrated in Figure 1, the dynamic illumination changes and the length of the trajectory would make it impossible to obtain a replicable benchmark.

Fig. 1: _Upper image_: Overhead view of a trajectory from the BorealHDR dataset, taken in Montmorency Forest, Québec, Canada. The traveled trajectory is shown in purple. Possible emulated exposure times \(\Delta t_{e}\) along the trajectory are depicted in orange. _Lower image_: Acquired brackets (6), with one used to generate \(\Delta t_{e}=9\,\mathrm{ms}\) from the upper image. The bracket exposure times \(\Delta t_{b}\) are \(\{1,2,4,8,16,32\}\,\mathrm{ms}\), increasing the dynamic range of our capture by 30 dB or 5 stops. Blue and red colorized pixels are over- and under-exposed, respectively.

In this work, we propose a novel method for an equitable comparison between AE algorithms applied to robot localization, based on the bracketing technique [10] and realistic image emulation. As illustrated by \(\Delta t_{e}\) in Figure 1, our emulator framework allows generating images at any desired exposure time for recorded trajectories. This is possible due to our multi-exposure dataset, called BorealHDR, recorded in winter conditions in Montmorency Forest, Québec, Canada. It is composed of 50 trajectories, totaling more than \(4\,\mathrm{h}\) of recordings over \(8.4\,\mathrm{km}\). It also includes 3D lidar scans and inertial measurements. The novelty of our dataset and our emulation technique allows us to benchmark, in an offline manner, AE algorithms, feature extraction, and VO pipelines, as opposed to single-exposure datasets. In short, our contributions are the following:
1. A new comparison method for AE algorithms based on an emulation framework that allows generating realistic images at any plausible exposure time;
2. A multimodal dataset that provides stereo images at multiple exposures, and 3D map ground truths, in kilometer-scale HDR environments; and
3. The first offline benchmarking of seven AE algorithms on VO-based experiments.

## II Related Work
We first review the existing methodologies for comparison of AE algorithms. Then, we go over available datasets for VO algorithms and their limitations in the context of testing AE, highlighting the advantages of our dataset. Finally, we discuss important state-of-the-art AE algorithms that will be used for the first benchmarking on a large VO dataset.

### _Methodologies for AE Algorithms Comparison_
The nature of AE algorithms makes their comparison very challenging, as they actively change camera parameters during execution. Here, we present state-of-the-art AE benchmarking methods that address this issue, and categorize them into static-camera and moving-camera approaches. **Static Cameras -** One way to evaluate AE methods is to fix a camera and acquire multiple pictures at different exposure times. Shim _et al._[11] used a surveillance camera to collect images with 210 combinations of exposure parameters, six times a day, to compare their new AE method with others. Zhang _et al._[5] developed a dataset of 18 real-world static scenes, where images at varying exposure times were collected in each scene. Kim _et al._[12] collected a dataset made of 1000 images with multiple different exposures, covering indoor and outdoor scenes.
Shin _et al._[8] were the only ones using a stereo camera to collect a static scene dataset. Multi-exposed static scene images allow comparing some proxies of VO such as single image feature detection. However, complete trajectories, as proposed in our multi-exposure dataset, are required to benchmark AE algorithms on complete VO pipelines. **Moving Cameras -** We mainly observe three trends using cameras in motion to compare AE algorithms: simulation-based methods, multi-camera methods, and multi-trajectory methods. An example of _simulation-based methods_ was used by Zhang _et al._[5], which used the Multi-FoV synthetic dataset [13], to simulate multiple versions of an image, but with different exposition levels. Similarly, Gomez-Ojeda _et al._[14] trained a neural network to produce images with higher gradient information using a synthetic dataset, where they simulated 12 different exposures. Although these techniques reduce dataset development time, they face the _Sim2Real_ gap [15], in opposition to our approach, which relies only on real-world images. _Multi-camera methods_ are the most widespread for AE algorithms comparison, leveraging a hardware solution for effective data collection. In this setup, two or three cameras are typically installed on a mobile platform [5, 6, 9, 16]. This allows exact comparison since all AE methods are executed simultaneously, thus facing the same conditions. Wang _et al._[2] used four stereo cameras, facilitating VO comparison, as the stereo depth estimation provides scaling information. An important drawback for these multi-camera methods is that the hardware complexity grows with the number of AE methods tested, rapidly becoming impractical. On the contrary, our approach can be used to compare on an unlimited number of AE methods, at a fixed acquisition cost. _Multi-trajectory methods_ consist of repeating multiple times the exact same trajectory using a different exposure control scheme at each iteration. It was used by Begin _et al._[7] to develop a gradient AE technique, which was tested using two cameras installed on a motorized \(0.16\,\mathrm{m}^{2}\) motion table. This allowed for exact repetitions of the same trajectories multiple times, with a ground truth precision near \(0.1\,\mathrm{mm}\). To evaluate AE methods in the presence of motion blur, Han _et al._[1] acquired images from three cameras in motion on a motorized rail. While providing precise ground truth, the rail approach severely restrains the total area covered by the trajectories. It also limits the number of observable environments. A variation of the multi-trajectory method was developed by Kim _et al._[6]. They drove using stop-and-go maneuvers on a single trajectory, where they stopped six times in total. At each stop, they collected images at multiple exposure times. This technique restricts the number of images that can be acquired, since any changes in the environment would corrupt the benchmark. In our case, we do not make any static assumption, and we are able to compare AE methods using standard VO pipelines. Our dataset has the advantage of being versatile and independent of the tested solutions, resulting in better replicability, even if new algorithms are developed. ### _Datasets for VO_ Several datasets were collected aiming at improving VO against challenging illumination conditions. 
The KITTI [17] and the Oxford [18] datasets both acquired stereo cameras, lidars, Inertial Measurement Unit (IMU), and GPS data, with Oxford offering different weathers, seasons, and illuminations. The North Campus dataset [19] also acquired urban stereo images, changing between indoors and outdoors, on a 15-month range, using a segway robotic platform. The UMA-VI dataset [20] had for main purpose to acquire HDR images with a large number of low-textured scenes. They used a handheld custom rig equipped with cameras and IMU, but they only provided ground truth through loop closure error. Closer to our dataset environment, TartanDrive [21] and FinnForest [22] both acquired off-road data. The TartanDrive dataset contains seven proprioceptive and exteroceptive modalities used for reinforcement learning purposes, including stereo images. Although simpler, the FinnForest dataset is also composed of stereo images in summer and winter, showing the same forest landscapes under multiple conditions. From the papers described in Section II-A, only [8] and [7] published their dataset. While the presented datasets allowed great improvements of VO algorithms, ours is complementary by providing the full dynamic range of scenes through exposure times cycling. Combined with our emulation framework, we unlock the possibility to select the exposure time _during playback_, expanding the realm of camera parameters algorithms evaluation. ### _AE Algorithms_ Most vision-based localization algorithms rely on image gradient information to localize. For instance, Shim _et al._[11] designed an image quality metric based on gradient magnitude. Their exposure control scheme generates seven synthetic versions of the latest acquired image, simulating different exposure levels, to identify the next exposure value maximizing their metric. The AE algorithm proposed by Zhang _et al._[5] sorts the gradient level of each pixel, and applies a weight factor based on their percentile value. By combining their quality metric and the Camera Response Function (CRF), they predict the best next exposure value. Kim _et al._[12] developed an image quality metric based on the gradient level and Shannon entropy [23], used to detect saturation. To demonstrate the benchmarking capabilities of our approach, we provide an implementation of the above methods, which are often used for comparison [2, 7], since they cover main aspects of image quality for localization algorithms. Other techniques exist, such as Shin _et al._[8], who takes into account the Signal-to-Noise Ratio (SNR), but we did not implement it, since it is mostly based on camera gain. ## III Theory In this section, we detail the selected approaches for the development of our system, namely the emulation technique, the AE implementations, and the lidar ground truth. Details on how our system can be used for benchmarking are presented in Section IV. ### _Emulation Technique_ From the real images acquired using the bracketing technique, we are able to emulate an image at any other realistic exposure time. Our emulation method is based on the image acquisition process, which maps the scene radiance \(E\) to the image pixel values \(I(x)\), using the vignetting \(V(x)\), the exposure time \(\Delta t\), and the CRF \(f(\cdot)\). This process can be expressed using the following equation [24]: \[I(x)=f\left(\Delta tV(x)E\right). 
\tag{1}\] From Equation 1, the relationship between two images \(I_{\text{source}}\) and \(I_{\text{target}}\) and their respective exposure times \(\Delta t_{\text{source}}\) and \(\Delta t_{\text{target}}\) can be defined as \[I_{\text{target}}=f\left(\frac{\Delta t_{\text{target}}}{\Delta t_{\text{ source}}}\cdot f^{-1}\left(I_{\text{source}}\right)\right), \tag{2}\] where \(f^{-1}(\cdot)\) is the inverse CRF [25]. To estimate \(f(\cdot)\) and \(f^{-1}(\cdot)\), we take multiple images of a static scene at several exposure times, allowing to capture the whole dynamic range at fixed radiance, following [26]. With \(f^{-1}(\cdot)\) and Equation 2, it is thus possible to emulate any targeted exposure times \(I_{\text{target}}\) by using a known image with an exposure time \(\Delta t_{\text{source}}\). Note that, for the sake of simplicity, an image taken from one exposure time in the bracketing cycle will be called _bracket_. Considering that the data is acquired while moving, the brackets are not taken at the same time nor position. Therefore, we decided not to interpolate between the brackets, since it would create artifacts in the resulting image. Instead, we select the most appropriate \(I_{\text{source}}\) from the available brackets to emulate \(I_{\text{target}}\). The selection process takes into account the distance between \(\Delta t_{\text{source}}\) and \(\Delta t_{\text{target}}\), and the amount of saturated pixels in \(I_{\text{source}}\), since these do not contain any usable information. Also, if we select a bracket such as \(\Delta t_{\text{target}}/\Delta t_{\text{source}}>1\), the result will be an image with higher pixel values, which means that the noise in the signal will also increase. Therefore, it is preferable to select \(I_{\text{source}}\) that has a higher exposure time compared to our desired \(I_{\text{target}}\) to maximize the SNR. Based on these considerations, we developed a simple selection method named HigherNoSat. The first step is to find the two closest brackets \(\Delta t_{\text{bl}}\) and \(\Delta t_{\text{fbh}}\), that bound \(\Delta t_{\text{target}}\), such that: \[\Delta t_{\text{bl}}<\Delta t_{\text{target}}<\Delta t_{\text{fbh}}. \tag{3}\] Then, for each of the two brackets, we calculate the amount of image saturation, which corresponds to the number of pixels with values of 0 and 4095, for our 12-bit channel images. If the saturation level of the higher bracket is below \(1\,\%\), we select it as the best candidate, otherwise, we pick the lower one. Finally, if \(\Delta t_{\text{target}}\) is outside the range of available \(\Delta t_{\text{source}}\), we select the closest bracket. ### _Implementation Details of Compared AE Algorithms_ Based on Section II-C, we implement and benchmark three state-of-the-art AE methods: \(M_{\text{Shim}}\)[11], \(M_{\text{Zhang}}\)[5], and \(M_{\text{Kim}}\)[12]. These methods are not open-source, thus they were all implemented based on our understanding of the papers. \(M_{\text{Zhang}}\) suffers from instabilities coming from the CRF estimation, which were partly resolved using the description in [8]. The exposure control scheme of \(M_{\text{Kim}}\) used a Gaussian Process (GP), which sparsely sweep the camera exposure parameters until convergence. In our implementation, we use a sliding window on the trajectory to only consider the most recent images in the GP training. 
This can cause steep changes in the desired exposure time, since the exploration algorithm of the GP needs to cover a wide range of values to converge. We also implemented four baseline methods, corresponding to typical AE algorithms. The first baseline algorithm is a fixed exposure time approach \(M_{\text{fixed}}\). The exposure time is selected once, at the beginning of each sequence, by using a brightness target of \(50\,\%\). The three other baselines are variable-exposure algorithms, seeking to keep the average brightness at \(30\,\%\), \(50\,\%\), and \(70\,\%\) of the 12-bit depth range. They are named \(M_{30\,\%}\), \(M_{50\,\%}\), and \(M_{70\,\%}\), respectively. These percentages were chosen so as to cover a wide range of exposures.

### _Ground Truth Generation_
To generate the ground truth trajectories, we employed a low-drifting variant of the Iterative Closest Point (ICP) algorithm, as described by Kubelka _et al._[27]. The method registers deskewed point clouds at \(10\,\mathrm{Hz}\) into a simultaneously built dense 3D map of the environment. It relies on the on-board lidar and is illumination-independent, with median error values around \(1\,\%\) of the total trajectory length. The ICP and VO trajectories were time-synchronized using GPs.

## IV Results
This section first describes our acquisition platform, before evaluating the performance of the emulation method. Then, we elaborate on our data collection pipeline, and finally, conduct the first offline AE benchmarking on large VO-based trajectories.

### _Experimental Setup_
The BorealHDR dataset was collected using a homemade water-resistant acquisition backpack, depicted in Figure 2. The enclosure is a Pelican case 1510 containing a battery and a Jetson AGX Xavier Development Kit embedded computer. The camera sensors are two Basler a2A1920-51gcPRO, in a stereo-calibrated configuration with a baseline of \(18\,\mathrm{cm}\), and hardware-triggered by an STM32 microcontroller. In addition, the platform is equipped with a Velodyne VLP-16 lidar, an Xsens MTI-30 IMU, and an Emlid Reach RS+ GPS receiver. The images are acquired at a rate \(r_{\text{real}}\) of 22 frames per second (FPS) in 12-bit color, using lossless compression. The image acquisition process cycles through six exposure time values, i.e., \(\Delta T_{\text{bracket}}=\{1,2,4,8,16,32\}\,\mathrm{ms}\), yielding an effective emulation rate of \(r_{\text{emul}}=3.66\) FPS. This number of brackets is a compromise between the offline \(r_{\text{emul}}\) and the emulation error, detailed in Section IV-B. The small footprint of the acquisition platform allows collecting data in narrow spaces that are hard to access for robotic vehicles, which was fundamental for our \(8.4\,\mathrm{km}\) dataset in HDR environments.

### _Emulation_
To evaluate the performance of our image emulator, we collect five test sequences of 1000 ground truth images with exposure times ranging from \(20\,\mu\mathrm{s}\) to \(50\,\mathrm{ms}\), in static scenes, both indoor and outdoor. Then, we emulate the same images following the method described in Section III-A, and calculate the RMSE between \(I_{\text{emul}}\) and \(I_{\text{GT}}\) for each ground truth exposure. To account for the camera's intrinsic noise, we average the error between 25 consecutive images at the same exposure time, over the whole spectrum, and subtract this noise from our results.
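As a rough illustration of this validation, the following Python sketch applies the emulation step of (2) and computes a per-image RMSE. The CRF and inverse CRF are assumed to be available as callables (e.g., interpolated lookup tables estimated beforehand), and expressing the RMSE as a percentage of the 12-bit range is our assumption, not a statement of the authors' exact normalization.

```python
import numpy as np

def emulate(img_src, dt_src, dt_tgt, crf, inv_crf):
    """Emulate an image at exposure dt_tgt from a bracket taken at dt_src,
    following (2): I_tgt = f( (dt_tgt / dt_src) * f^-1(I_src) ).
    crf / inv_crf are assumed callables mapping pixel value <-> irradiance."""
    irradiance = inv_crf(img_src) * (dt_tgt / dt_src)
    return np.clip(crf(irradiance), 0, 4095)       # 12-bit output range

def rmse_percent(img_emul, img_gt, max_val=4095.0):
    """Root-mean-square error as a percentage of the 12-bit range."""
    diff = img_emul.astype(float) - img_gt.astype(float)
    return 100.0 * np.sqrt(np.mean(diff ** 2)) / max_val
```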
The five test sequences serve to validate our bracket selection method HigherNoSat, which does not have access to ground truth images in real-world settings. Overall, our tests show that the emulation method maintains a median RMSE of \(0.21\,\%\) and never exceeds \(1.78\,\%\), emphasizing its consistency and accuracy. One of the test sequences is detailed in Figure 3, showing emulation performance in controlled lab settings. On the left side, the RMSE curve from HigherNoSat, in black, is compared to the error obtained by selecting a single bracket as \(I_{\text{source}}\) for the whole range, in colors. It shows that our bracket selection maintains the error close to the lowest value of the six colored curves. The right-side plots present a qualitative evaluation of the emulated images from the acquired brackets. Each column shows the distribution of pixel intensities for one target exposure time, while each row emulates this image from a different bracket. Red markers represent which exposure time in \(\Delta T_{\text{bracket}}\) was chosen by our emulator via HigherNoSat, for this image. We observe that our bracket selector consistently picks the closest or second-closest emulated image to the ground truth distribution, overlaid in gray.

Fig. 2: Picture of the developed backpack for the dataset acquisition. Main components are identified as follows: (1) two Basler a2A1920-51gcPRO cameras, (2) Xsens MTI-30 IMU, (3) VLP-16 3D lidar, (4) Emlid Reach RS+ GPS receiver, and (5) Ubiquiti UniFi UAP-AC-M wifi antenna.

### _Dataset Gathering_
Our BorealHDR dataset comprises 50 sequences totaling around \(8.4\,\mathrm{km}\) and 4 h of data. Most of them are loops, which is a common practice for VSLAM datasets. The relatively low \(r_{\text{emul}}\) implies that the data should be collected at a low walking speed to avoid large displacements between each acquisition cycle. Hence, our average pace is around \(2\,\mathrm{km}/\mathrm{h}\), which increases the data collection time but does not impact the users of our approach. In total, \(333\,813\) images were gathered on two separate days in April 2023 in the Montmorency boreal forest near Quebec City, Canada, described in [28]. The multi-session acquisitions allowed us to cover multiple meteorological and illumination conditions. We concentrated the acquisitions on HDR scenes by capturing trees and snow in the camera frame. A variety of natural environments was also targeted during the acquisitions, such as 14 open spaces, 16 narrow tree corridors, 15 forest trails, and 5 larger forest roads. From these, 20 sequences were acquired in bright sunlight, increasing the HDR conditions. Using snowshoes to get to distant locations, we were able to acquire images not only with different scene illuminations, but also with different 3D structures, allowing for a richer collection of test scenarios for VSLAM algorithms.

### _Benchmark_
The main contribution of our paper is to unlock _offline_ testing of active vision methods that could previously only be tested in an _online_ manner. To this effect, we conduct two series of tests to benchmark the methods explained in Section III-B, namely _(1) feature tracking and uniformity_, and _(2) stereo VO_, both highlighting the impact of each AE algorithm in key aspects of a VSLAM pipeline.

#### IV-D1 Feature Tracking and Uniformity
For feature-based VO algorithms, the performance is dependent on the quality and quantity of detected keypoints.
Therefore, we evaluate feature detection for each implemented AE algorithm on our recorded dataset, over three distinct tests. We selected the Scale-Invariant Feature Transform (SIFT) [29] feature detector, since it is the most accurate according to [30]. First, we estimate the uniformity of the detected keypoints, by dividing each image into a \(20\times 20\) grid and assessing its occupancy rate. A uniform distribution of features is presumed to be a good proxy for VO algorithm performance. The results are displayed in Figure 4(a), which shows that most implemented AE methods yield a similar grid coverage. We observe that \(M_{70\,\%}\) and \(M_{\text{Kim}}\) provide slightly reduced uniformity, with median values \(1.2\,\%\) and \(5.2\,\%\) below the average of the other methods combined, respectively. In Figure 4(b) we show that we can also obtain the number of detected features that are matched between consecutive images, again for all implemented AE methods. \(M_{\text{fixed}}\) is the one tracking the highest number of features between two images, with a median of \(810\) matches. This makes sense, since exposure is calibrated when starting each sequence and the lack of exposure change keeps a better flow between consecutive images.

Fig. 3: Validation of our emulation framework using 1000 ground truth images captured with constant illumination, but varying exposure time. _Left:_ RMSE curves showing the emulation error if a single bracket was always selected, in colors, compared to our bracket selection method HigherNoSat, in black. Red symbols correspond to the emulated exposure times displayed on the right. _Right:_ Qualitative comparison of the distributions of five emulated exposure times (columns) from six different bracket images (rows). The markers are placed next to the bracket selected by HigherNoSat. Ground truth distributions are overlaid in gray for each column.

Fig. 4: SIFT features analysis. (a) Uniformity of detected keypoints for an image divided into a \(20\times 20\) grid. (b) Number of matched features between a pair of images for all AE methods. The gray shading in (a) and (b) highlights the state-of-the-art AE methods. (c) Percentage of successful trajectories based on the number of matches detected. If one image contains fewer matches than \(\tau_{\text{matches}}\), the sequence is marked as not successful.

Now, looking at full trajectories, we define a success criterion corresponding to a minimal number of matches \(\tau_{\text{matches}}\) between each pair of consecutive images. If, for a given AE algorithm and trajectory, this criterion is always true, then the sequence is considered successful. Accordingly, we evaluate the percentage of successful trajectories with \(\tau_{\text{matches}}\) starting at 5, the minimum number of matches for motion estimation [31], and the results are shown in Figure 4(c). We observe that the overall robustness of the AE algorithms decreases rapidly. For \(\tau_{\text{matches}}=100\), which is well below any first quartile from Figure 4(b), \(M_{30\,\%}\) shows the poorest performance, completing \(24\,\%\) of the dataset sequences successfully, while the best method, \(M_{\text{fixed}}\), achieves \(56\,\%\) of success. This clearly shows the challenge that our dataset represents for VO. Many of the developed benchmarking techniques for AE algorithms would have been able to conduct our first two experiments.
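For concreteness, a small OpenCV/NumPy sketch of the grid-occupancy uniformity measure used in the first test is given below; the \(20\times 20\) grid matches the text, while the function name and typing details are illustrative and not taken from the authors' code.

```python
import cv2
import numpy as np

def grid_occupancy(gray, grid=20):
    """Fraction of cells in a grid x grid partition containing at least
    one SIFT keypoint (a proxy for feature uniformity).
    gray: single-channel uint8 image."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    h, w = gray.shape
    occupied = np.zeros((grid, grid), dtype=bool)
    for kp in keypoints:
        x, y = kp.pt
        occupied[min(int(y / h * grid), grid - 1),
                 min(int(x / w * grid), grid - 1)] = True
    return occupied.mean()
```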
However, the third one could only have been done using methodologies based on moving cameras, since we evaluate the features on trajectories. More importantly, our approach enables entirely reproducible testing, offline. Consequently, we can always add more AE methods to the benchmarks, without having to collect new data.

#### IV-D2 Stereo Visual Odometry (VO)
We also investigate the impact of AE methods on VO, which is a key component of VSLAM. We implemented a simple stereo VO pipeline based on _OpenCV_[32] using SIFT [29] for feature detection. The _evo_ library1 is used to calculate the Relative Pose Error (RPE) [33] for each AE algorithm. Our ground truth trajectories are generated using the lidar data and ICP, as explained in Section III-C. An example using one trajectory from our dataset is illustrated in Figure 5, along with the 3D map built with the lidar data. The RPE is displayed in Figure 6, where we mainly observe three clusters. The low-tier one contains \(M_{\text{Kim}}\), which has a minimal error of \(8.3\,\%\), making it the worst-performing method. This is caused by the GP's exploration step, as explained in Section III-B. The mid-tier cluster includes three methods with minimal translation errors averaging \(4.0\,\%\). It is composed of \(M_{\text{fixed}}\), \(M_{30\,\%}\), and \(M_{70\,\%}\). As highlighted in Section IV-D1, \(M_{30\,\%}\) and \(M_{70\,\%}\) produce more saturated pixels, decreasing the number of detected features. Even though \(M_{\text{fixed}}\) was among the best methods for feature tracking, its unresponsiveness to illumination changes results in an error of \(4.1\,\%\) when applied to VO. Finally, the top-tier results are obtained by \(M_{\text{Shim}}\), \(M_{50\,\%}\), and \(M_{\text{Zhang}}\), which have minimal translation errors of \(3.0\,\%\), \(2.9\,\%\), and \(2.6\,\%\), respectively. \(M_{50\,\%}\) shows a similar error to \(M_{\text{Shim}}\) and \(M_{\text{Zhang}}\). We conjecture that this is caused by the snow floor creating a textureless environment, reducing the overall gradient. All in all, our newly developed approach allows for the first comparison of AE methods directly on a large VO dataset, in an offline manner.

Footnote 1: [https://github.com/MichaelGrupp/evo](https://github.com/MichaelGrupp/evo)

## V Conclusion And Future Work
In this work, we proposed an emulator framework based on a multi-exposure dataset, which allows comparing AE algorithms in an offline and reproducible manner. Our dataset, BorealHDR, contains 50 trajectories totaling \(8.4\,\mathrm{km}\), focusing on collecting images in HDR scenes taken in a snowy boreal forest environment. In addition to the 12-bit color stereo images, we also provide pose ground truth for each image and a 3D map for each sequence, based on lidar data. We have shown the versatility of our methodology by benchmarking seven AE algorithms, of which three were from recent works. We concluded that our emulation approach is an efficient solution for offline benchmarking of active algorithms, such as AE, by conducting experiments on multiple key elements affecting VSLAM pipelines. In the future, we plan to extend our dataset to be multi-seasonal, and to contain more drastic illumination changes and scenarios. We also plan to design novel AE approaches that take into account the pose uncertainty estimates obtained by the VSLAM pipeline, which were previously difficult to develop without a flexible testing methodology.
Fig. 5: Example of a collected trajectory from our BorealHDR dataset. The ground truth trajectory and one VO result are illustrated superimposed on the generated lidar 3D map. Black color represents the ground, and other colors highlight structures in the environment.

Fig. 6: Averaged translation error over all the trajectories as a function of the trajectory length, for the implemented AE algorithms. The line styles depict three performance clusters, going from the top with solid, dashed, and then solid again. The legend is ordered based on performance.
2310.20490
Long-Tailed Learning as Multi-Objective Optimization
Real-world data is extremely imbalanced and presents a long-tailed distribution, resulting in models that are biased towards classes with sufficient samples and perform poorly on rare classes. Recent methods propose to rebalance classes but they undertake the seesaw dilemma (what is increasing performance on tail classes may decrease that of head classes, and vice versa). In this paper, we argue that the seesaw dilemma is derived from gradient imbalance of different classes, in which gradients of inappropriate classes are set to important for updating, thus are prone to overcompensation or undercompensation on tail classes. To achieve ideal compensation, we formulate the long-tailed recognition as an multi-objective optimization problem, which fairly respects the contributions of head and tail classes simultaneously. For efficiency, we propose a Gradient-Balancing Grouping (GBG) strategy to gather the classes with similar gradient directions, thus approximately make every update under a Pareto descent direction. Our GBG method drives classes with similar gradient directions to form more representative gradient and provide ideal compensation to the tail classes. Moreover, We conduct extensive experiments on commonly used benchmarks in long-tailed learning and demonstrate the superiority of our method over existing SOTA methods.
Weiqi Li, Fan Lyu, Fanhua Shang, Liang Wan, Wei Feng
2023-10-31T14:30:31Z
http://arxiv.org/abs/2310.20490v2
# Long-Tailed Learning as Multi-Objective Optimization

###### Abstract
Real-world data is extremely imbalanced and presents a long-tailed distribution, resulting in models that are biased towards classes with sufficient samples and perform poorly on rare classes. Recent methods propose to rebalance classes but they run into the seesaw dilemma (increasing performance on tail classes may decrease that of head classes, and vice versa). In this paper, we argue that the seesaw dilemma is derived from the gradient imbalance of different classes, in which the gradients of inappropriate classes are treated as important for updating, and are thus prone to overcompensation or undercompensation on tail classes. To achieve ideal compensation, we formulate long-tailed recognition as a multi-objective optimization problem, which fairly respects the contributions of head and tail classes simultaneously. For efficiency, we propose a Gradient-Balancing Grouping (GBG) strategy to gather the classes with similar gradient directions, thus approximately making every update follow a Pareto descent direction. Our GBG method drives classes with similar gradient directions to form a more representative gradient and provide ideal compensation to the tail classes. Moreover, we conduct extensive experiments on commonly used benchmarks in long-tailed learning and demonstrate the superiority of our method over existing SOTA methods.

## Introduction
Deep learning has made significant progress and been widely applied in many applications [11, 13]. Most of these excellent achievements rely on large and relatively balanced datasets, such as ImageNet [1] and MS-COCO [12]. However, real-world data is often extremely imbalanced, presenting a long-tailed distribution. Training on long-tailed data usually results in serious bias towards classes with sufficient samples (head classes) and poor performance on rare classes (tail classes), giving rise to the field of long-tailed learning. To address the problem of learning from a long-tailed distribution, recent progress on long-tailed learning can be categorized into three groups. First, the class-rebalancing methods [15] directly increase the importance of tail classes via resampling or reweighting. Second, the decoupling methods [14] use a two-stage training scheme to balance the classifier after a pre-training phase. Third, the representation methods [1] design specific loss functions to achieve inter-class sparsity and a more balanced feature distribution. To sum up, the key consensus of these methods is to improve the importance of tail classes in long-tailed training. However, the existing rebalancing methods [15, 16, 17], aiming to increase the importance of tail-class gradients, may suffer from the _seesaw dilemma_. That is, increasing performance on tail classes may decrease that of head classes, and vice versa.

Figure 1: Three gradient compensation scenarios in reweighting for two-class imbalanced training: (a) undercompensation; (b) overcompensation; (c) ideal compensation. By optimizing two classes simultaneously at each step, an ideal gradient should step towards the Pareto front without harming either class, which means the loss descent (\(b_{\text{final}}\)) should be situated between the two class-independent loss descent directions (\(b_{\text{head}}\) and \(b_{\text{tail}}\)).

In this paper, we study the seesaw dilemma from the perspective of gradient imbalance in long-tailed learning, and we observe that the tail-class gradients are suppressed by those of head classes.
Under this observation, an inappropriate weighting scheme may lead to _overcompensation_ or _undercompensation_ of the tail-class gradients. In general, undercompensation refers to a bias towards head-class learning, and overcompensation refers to an over-bias towards learning tail classes. Taking an imbalanced two-class classification as an example, we illustrate the effects of different compensations in Fig. 1. By projecting from the parameter space to the loss space, we find that undercompensation may result in insufficient learning of tail classes (Fig. 1(a)), while overcompensation may hinder the learning of head classes (Fig. 1(b)). Ideally, _a feasible compensation to the gradients in a long-tailed problem should maintain a Pareto descent direction Harada et al. (2006), which should never damage any class in the imbalanced distribution_, as demonstrated in Fig. 1(c). To achieve feasible compensations in the seesaw dilemma, we propose to formulate long-tailed learning as a multi-objective optimization (MOO) problem, where each class holds its own class-level empirical training loss. In this way, our goal is to find, from a set of class-level gradients, a compromise gradient that does not damage any of these losses at each update. Furthermore, it is impractical to extract gradients for every class independently in each mini-batch for two reasons. On one hand, more classes lead to more computation time and may also cause out-of-memory problems. On the other hand, a limited batch size cannot guarantee access to each class, especially tail classes, leading to an incomplete set of objectives in each optimization step. Accordingly, in this paper, we develop a Gradient-Balancing Grouping (GBG) algorithm to achieve batch-level gradient balance in long-tailed learning. Specifically, we first compute the gradients of all classes and obtain the gradient similarity between classes to build a similarity matrix. Then, we learn to group the classes with similar gradients according to the similarity matrix. To obtain a balanced gradient that guarantees a Pareto descent, inspired by the classic multiple-gradient descent algorithm Sener and Koltun (2018), we combine the bundled gradients from the groups via a min-norm optimization, which can be solved easily via quadratic programming. Our main contributions are three-fold:
* To the best of our knowledge, it is the first time that long-tailed recognition is formulated as a multi-objective optimization problem, to address the seesaw dilemma between the head and tail classes in previous methods.
* We propose a grouping method based on gradient similarity to solve the multi-objective optimization efficiently without compromising accuracy.
* Our method has been validated to outperform state-of-the-art works on the broadly used benchmarks including CIFAR10/100-LT, ImageNet-LT and iNaturalist2018, which demonstrates its capability in solving long-tailed problems efficiently.

## Related Work
**Long-tailed Learning via Class-Rebalancing.** Class-rebalancing includes resampling and reweighting. Resampling strategies aim to attain a balanced training data distribution. They use over-sampling Buda et al. (2018) to enlarge the instance number of tail classes or use under-sampling He and Garcia (2009) to decrease that of head classes. However, they carry risks of overfitting tail classes or impairing model generalization.
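As a minimal illustration of such resampling strategies (not the method proposed in this paper), the sketch below builds class-balanced sampling weights inversely proportional to class frequency, realized here with PyTorch's WeightedRandomSampler; the 1/n_c weighting is one common, assumed choice.

```python
import numpy as np
from torch.utils.data import WeightedRandomSampler

def class_balanced_sampler(labels):
    """Oversample tail classes by drawing each sample with probability
    proportional to 1 / (frequency of its class)."""
    labels = np.asarray(labels)
    counts = np.bincount(labels)                 # per-class instance counts
    weights = 1.0 / counts[labels]               # per-sample weights
    return WeightedRandomSampler(weights.tolist(),
                                 num_samples=len(labels),
                                 replacement=True)
```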
Class-reweighting methods assign weights to the loss functions of each class that are negatively correlated with their sample sizes, aiming to balance the gradient contribution of different classes Jamal et al. (2020). However, inappropriate weights used in reweighting methods may cause problems such as underfitting or overfitting of the model. **Long-tailed Learning via Grouping Strategy.** Grouping strategies decompose the long-tailed classification problem into a multi-task problem or a multi-level recognition problem by grouping the label sets according to certain rules Yang et al. (2022), such as grouping based on the instance numbers of classes Li et al. (2020). Though current grouping strategies can avoid tail categories being suppressed to some extent, they cannot solve the problem of blocked knowledge interaction between different groups Yang et al. (2022). **Multi-Objective Optimization in Deep Learning.** Multi-Objective Optimization (MOO) refers to optimizing multiple objective functions, which may be conflicting, in optimization problems. The target of MOO is to find a set of optimal solutions that can simultaneously optimize multiple objectives Lyu et al. (2023). MOO can be applied to fields that require simultaneously optimizing multiple targets, such as multi-task learning Sener and Koltun (2018); Lyu et al. (2021); Chen et al. (2023) and recommendation systems Geng et al. (2015). In this paper, we use MOO to balance the learning of head classes and tail classes.

## The Proposed Method
### Gradient Imbalance Problem in LT Learning
Let \(\mathcal{D}=\{(x_{1},y_{1}),\cdots,(x_{N},y_{N})\}\) denote a long-tailed training set with \(N\) samples and \(K\) classes in total. Long-tailed classification aims to learn a function \(f\left(\mathbf{\theta}\right)\) with parameters \(\mathbf{\theta}\) to predict each test sample correctly. For a data point \((x_{i},y_{i})\), \(x_{i}\) represents the \(i\)-th data point in the training set and \(y_{i}\) represents its ground-truth label. Usually, the model will be trained using an empirical risk loss as follows: \[L\left(\mathbf{x},\mathbf{y}\right)=\frac{1}{N}\sum_{i=1}^{N}L\left(x_{i},y_{i}\right)=-\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{e^{z_{y_{i}}}}{\sum_{j=1}^{K}e^{z_{j}}}\right), \tag{1}\] where \(z_{j}\) is the predicted logit of class \(j\) and \(z_{y_{i}}\) is the logit of the corresponding ground-truth class. To explore the gradient imbalance in long-tailed learning, we split the training loss into head and tail losses as follows: \[L\left(\mathbf{x},\mathbf{y}\right)=\frac{1}{N}\left[\sum_{i=1}^{N_{\text{tail}}}L\left(x_{i},y_{i}\right)+\sum_{j=1}^{N_{\text{head}}}L\left(x_{j},y_{j}\right)\right], \tag{2}\] where we use \(N_{\text{head}}\) and \(N_{\text{tail}}\) to represent the numbers of head-class samples and tail-class samples in a mini-batch. Thus, the gradient of the parameters \(\mathbf{\theta}\) can be denoted as: \[\nabla_{\mathbf{\theta}}=\frac{\partial L\left(\mathbf{x},\mathbf{y}\right)}{\partial\mathbf{\theta}}=\nabla_{\mathbf{\theta}}^{\text{tail}}+\nabla_{\mathbf{\theta}}^{\text{head}}, \tag{3}\] where \(\nabla_{\mathbf{\theta}}^{\text{tail}}\) and \(\nabla_{\mathbf{\theta}}^{\text{head}}\) are the gradients of \(\mathbf{\theta}\) generated by the tail-class and head-class instances in the mini-batch. In Fig. 2, we measure the mean similarity between each class-level gradient and the whole-batch gradient across different epochs. The similarity measures the contribution of gradients from different classes to the gradient descent process.
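To make this measurement concrete, the following PyTorch-style sketch computes per-class gradients within a mini-batch and their similarity to the whole-batch gradient; cosine similarity is our assumed similarity measure, and the model, data, and class count are placeholders.

```python
import torch
import torch.nn.functional as F

def class_gradient_similarity(model, x, y, num_classes):
    """Cosine similarity between each class-level gradient and the
    full-batch gradient, computed over all trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    logits = model(x)

    def flat_grad(loss):
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    allow_unused=True)
        return torch.cat([(g if g is not None else torch.zeros_like(p)).flatten()
                          for g, p in zip(grads, params)])

    g_batch = flat_grad(F.cross_entropy(logits, y))       # whole-batch gradient
    sims = torch.full((num_classes,), float('nan'))
    for c in range(num_classes):
        mask = (y == c)
        if mask.any():
            g_c = flat_grad(F.cross_entropy(logits[mask], y[mask]))
            sims[c] = F.cosine_similarity(g_c, g_batch, dim=0)
    return sims
```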
A larger similarity means a larger contribution. It is easy to observe that _the gradients of head and tail classes, presented as dots in the figure, are significantly imbalanced_: we have \(\nabla_{\mathbf{\theta}}^{\text{tail}}<\nabla_{\mathbf{\theta}}^{\text{head}}\) in every epoch. The reason for the gradient imbalance under a long-tailed distribution, we argue, is that head-class samples make up the majority of most batches, so that head-class gradients dominate tail-class gradients in both magnitude and direction, which can be expressed as \(\nabla_{\mathbf{\theta}}^{\top}\nabla_{\mathbf{\theta}}^{\text{head}}>\nabla_{\mathbf{\theta}}^{\top}\nabla_{\mathbf{\theta}}^{\text{tail}}\). As a result, the model cannot obtain enough knowledge from the tail-class data. One might suspect that the imbalance in Fig. 2(a) is merely caused by the imbalanced data distribution within each batch. We therefore conduct a similar experiment with a resampling strategy. As Fig. 2(b) shows, the gradient imbalance becomes even worse when resampling is used, which indicates that the gradient imbalance is caused not by the imbalanced distribution within each batch but by the imbalanced distribution of the dataset itself. To address this problem, previous methods compensate tail-class gradients by raising the weight of the tail-class loss through rebalancing strategies Lin et al. (2017); Wu et al. (2020). However, as shown in Fig. 1, intuitive rebalancing methods encounter the _seesaw dilemma_, where the solutions may suffer from either overcompensation or undercompensation. Overcompensation means that the tail classes are overemphasized while the head classes are underestimated, so that learning of the head classes is excessively inhibited. Undercompensation means that the compensation is too weak, which is effectively equivalent to applying no compensation at all. In this paper, we seek an ideal compensation at each training iteration in long-tailed learning, where the update should not damage any class in the long-tailed distribution. To achieve this, for the first time to the best of our knowledge, we formulate long-tailed learning as a multi-objective optimization problem, as illustrated in the next subsection.

### LT Problem as Multi-Objective Optimization

Multi-Objective Optimization (MOO) means optimizing multiple objectives simultaneously. Given \(T\) different objectives, a deep model with MOO yields the following multi-objective empirical risk minimization formulation: \[\min_{\mathbf{\theta}}\ \left\{L_{1}\left(\mathcal{D}_{1}\right),\cdots,L_{T}\left(\mathcal{D}_{T}\right)\right\}, \tag{4}\] where \(\mathcal{D}_{i}\) is the data of objective \(i\). Because of the conflicts among objectives, the goal of MOO is to achieve Pareto optimality via training.

**Definition 1** (Pareto Optimality).: _(1) (Pareto Dominance) Let \(\mathbf{\theta}_{a}\), \(\mathbf{\theta}_{b}\) be two solutions of Problem (4); \(\mathbf{\theta}_{a}\) is said to dominate \(\mathbf{\theta}_{b}\) (\(\mathbf{\theta}_{a}\prec\mathbf{\theta}_{b}\)) if and only if \(L_{i}(\mathbf{\theta}_{a})\leq L_{i}(\mathbf{\theta}_{b})\), \(\forall i\in\{1,2,\cdots,T\}\) and \(L_{i}(\mathbf{\theta}_{a})<L_{i}(\mathbf{\theta}_{b})\), \(\exists i\in\{1,2,\cdots,T\}\)._

_(2) (Pareto Critical) \(\mathbf{\theta}\) is called Pareto critical if no other solution in its neighborhood can achieve better values in all objective functions._

_(3) (Pareto Descent Direction) Suppose \(\mathbf{\theta}_{1}\) is not Pareto critical and can be updated to \(\mathbf{\theta}_{2}\) by a gradient \(\mathbf{g}\).
If \(\mathbf{\theta}_{2}\prec\mathbf{\theta}_{1}\), then \(\mathbf{g}\) is said to be a Pareto descent direction._

An MOO problem may have multiple solutions, which form a Pareto set; the projection of this set into loss space is called the _Pareto front_. To approach the Pareto front in loss space for all classes of a long-tailed distribution, each update must follow a Pareto descent direction that damages no class's performance. To this end, we convert the single-objective loss function in Eq. (1) into a multi-objective optimization problem, which yields a collection of class-level loss functions \[\mathcal{L}\left(\mathbf{\theta};\mathcal{D}\right)=\left\{L_{1}\left(\mathbf{\theta};\mathcal{D}_{1}\right),\cdots,L_{K}\left(\mathbf{\theta};\mathcal{D}_{K}\right)\right\}, \tag{5}\] where \(L_{k}\left(\mathbf{\theta};\mathcal{D}_{k}\right)\) and \(\mathcal{D}_{k}\subseteq\mathcal{D}\) denote the loss function of class \(k\) and the samples of class \(k\), respectively. Let us review the seesaw dilemma under the MOO setting. After splitting the training loss, we have \(K\) different losses with respect to the \(K\) classes. We then obtain the task-specific gradients \(\{\nabla_{1},\cdots,\nabla_{K}\}\) via differentiation, where \(\nabla_{i}=\nabla_{\mathbf{\theta}}^{L_{i}}\), yet the parameters are updated only once per iteration. That is, we need to aggregate all gradients into one. A simple aggregation is to set weights and sum the class-level gradients. At iteration \(n\), the problem can be reformulated as follows: \[\min_{\{\alpha_{1},\cdots,\alpha_{K}\}}\ \ \left\{L_{i}\left(\mathbf{\theta}^{(n-1)}-\tau\sum\nolimits_{k=1}^{K}\alpha_{k}\nabla_{k}\;;\;\mathcal{D}_{i}\right)\;\middle|\;\forall i\right\}. \tag{6}\]

Figure 2: Gradient imbalance in long-tailed learning. The bars denote the mean similarity between class-level and batch-level gradients in each batch. The dots represent the normalized mean gradients of the classes over epochs. We conduct the experiment on CIFAR10-LT, where (a) uses only the cross-entropy loss and (b) uses a resampling strategy. We show the results for the top two head classes and the last two tail classes.

To avoid damaging any class, we prefer to have \(L_{i}\left(\mathbf{\theta}^{(n)};\mathcal{D}_{i}\right)\leq L_{i}\left(\mathbf{\theta}^{(n-1)};\mathcal{D}_{i}\right)\) for every \(i\in[1,K]\). _However, the multiple gradients may conflict strongly in both magnitude and direction_. Inappropriate weighting may result in overcompensation or undercompensation, so that some classes degrade. In contrast, the goal of multi-objective optimization is to achieve a Pareto descent direction at each step, which damages no class. Intuitively, one might expect that using the loss of every category as a separate optimization objective achieves better performance. In fact, however, more objectives do not necessarily mean better performance. Multi-objective optimization problems can pose significant challenges because the dimensionality of the search space and the complexity of the Pareto front grow as the number of objectives increases. Therefore, it is impractical to solve Problem (6) directly to obtain an accurate Pareto descent direction, especially when the label space has a large dimension. Furthermore, the batch size is limited by hardware memory, so it is difficult to cover all classes in a batch and to store the gradients of all classes in memory. In the next subsection, we propose a simple yet effective gradient-balancing grouping strategy to obtain an approximate Pareto descent direction.
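To make the over-/undercompensation issue in Eq. (6) concrete, the following hedged numpy sketch (our illustration, with invented toy gradients) aggregates two class-level gradients with fixed weights and checks whether the resulting direction is, to first order, a descent direction for every class, which is the requirement a Pareto descent step must satisfy.

```python
import numpy as np

def aggregate(grads, weights):
    """Weighted sum of class-level gradients, in the spirit of Eq. (6)."""
    return sum(w * g for w, g in zip(weights, grads))

def harms_some_class(grads, direction):
    """True if the update direction is misaligned with any class gradient,
    i.e. that class's loss increases to first order (over- or undercompensation)."""
    return any(float(g @ direction) < 0.0 for g in grads)

# Toy conflicting head/tail gradients (illustrative only).
g_head = np.array([1.0, 0.2])
g_tail = np.array([-0.6, 1.0])

for w_tail in (0.1, 0.5, 0.9):            # three reweighting choices
    d = aggregate([g_head, g_tail], [1.0 - w_tail, w_tail])
    print(w_tail, "harms a class:", harms_some_class([g_head, g_tail], d))
# w_tail = 0.1 harms the tail class, w_tail = 0.9 harms the head class,
# while w_tail = 0.5 damages neither in this toy example.
```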
### Gradient-based Class Grouping

Class grouping [10] is one of the effective solutions in long-tailed learning. However, most existing grouping schemes rely on heuristics and cannot guarantee a good compensation. In this paper, we propose a Gradient-Balancing Grouping (GBG) strategy to resolve the gradient conflict and obtain an approximate Pareto descent direction. GBG assigns classes with similar gradient directions to the same group so that their gradients form a resultant force, which approximates a conflict-free direction for all of the corresponding classes in the group. Specifically, consider the class gradients in a batch, \(\{\nabla_{1},\cdots,\nabla_{c}\}\), where \(c\) denotes the number of classes contained in the batch, and let the category set be \(\mathcal{C}=\{1,\cdots,K\}\). We first compute a similarity matrix \(\mathbf{A}\) to measure the similarity between any two gradients, whose element \(\mathbf{A}_{i,j}\) is computed by \[\mathbf{A}_{i,j}=\frac{{\nabla_{\mathbf{\theta}}^{i}}^{\top}\nabla_{\mathbf{\theta}}^{j}}{\|\nabla_{\mathbf{\theta}}^{i}\|\,\|\nabla_{\mathbf{\theta}}^{j}\|}. \tag{7}\] According to the similarity matrix \(\mathbf{A}\), we then build a graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\). \(\mathcal{V}\) denotes the set of nodes in the graph, and each node represents a class. \(\mathcal{E}=\{\mathbf{A}_{i,j}\},\,1\leq i\leq j\leq K\) denotes the set of edges, where each edge represents the gradient similarity between class \(i\) and class \(j\). Our target is to group the categories so that categories with highly similar update behaviour are placed in the same group. We therefore define the affinity between groups as follows: \[a\left(\mathcal{V}_{m},\mathcal{V}_{n}\right)=\sum\nolimits_{i\in\mathcal{V}_{m},j\in\mathcal{V}_{n}}\mathbf{A}_{i,j}, \tag{8}\] where \(\mathcal{V}_{m},\mathcal{V}_{n}\subset\mathcal{V}\) represent two different groups and \(\mathcal{V}_{m}\cap\mathcal{V}_{n}=\emptyset,\forall m\neq n\). Inspired by spectral clustering [22], our target is equivalent to finding a graph cut \(\mathcal{P}=\{\mathcal{V}_{1},\mathcal{V}_{2},\cdots,\mathcal{V}_{G}\}\) that minimizes the summed affinity between groups, where \(G\) is the number of groups. We formulate the problem as follows: \[\min\nolimits_{\mathcal{P}\in\mathbb{P}}\quad\sum\nolimits_{\mathcal{V}^{\prime}\in\mathcal{P}}a\left(\mathcal{V}^{\prime},\mathcal{V}\right)-a\left(\mathcal{V}^{\prime},\mathcal{V}^{\prime}\right), \tag{9}\] where \(\mathbb{P}\) is the search space of possible groupings.

Figure 3: Illustration of our proposed method. At the first stage, we use GBG to gather the classes with high gradient similarity together. At the second stage, we use an averaging strategy to merge the gradients of the categories in the same group, and then solve an MOO problem to obtain an approximate Pareto descent direction in each iteration.

We then use NCut [20] to transform the optimization problem into the minimization of a Rayleigh quotient and obtain the partitioning result \(\mathcal{P}\). The grouping result \(\mathcal{P}\) obtained through the above procedure ensures that the categories in the same group have similar update behaviour. During model training, the gradient of each group in each batch is then taken as the average of the gradients of the categories in that group, as shown in Fig. 3 (Stage 2).
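As a hedged illustration of this grouping stage (our sketch, not the authors' implementation), the cosine similarities of Eq. (7) can be assembled into an affinity matrix and fed to an off-the-shelf spectral clustering routine standing in for the NCut step; `class_grads` is assumed to be a (K, P) array of per-class average gradient vectors.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def group_classes_by_gradient(class_grads, n_groups=4):
    """Approximate Stage 1 of GBG: cluster classes whose gradients point alike."""
    G = np.asarray(class_grads, dtype=float)             # shape (K, P)
    unit = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-12)
    A = unit @ unit.T                                     # cosine similarity matrix, Eq. (7)
    A = np.clip(A, 0.0, None)                             # spectral clustering expects non-negative affinities
    labels = SpectralClustering(
        n_clusters=n_groups,
        affinity="precomputed",
        assign_labels="discretize",
        random_state=0,
    ).fit_predict(A)
    return [np.flatnonzero(labels == g).tolist() for g in range(n_groups)]
```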
In other words, the gradients obtained from each group are as similar as possible to the gradients of the categories within the group, which enables the gradients within each group to work together and implicitly increases the contribution of the tail classes during training. The overall class grouping procedure of GBG for the LT problem is summarized in Algorithm 1. First, we fix the parameters of the initialized or pretrained model \(f\left(\mathbf{\theta}\right)\). Next, we calculate the average gradient of each class and obtain the gradient similarity between classes through the similarity function in Eq. (7) to build a symmetric gradient similarity matrix \(\mathbf{A}\). Finally, we solve the graph cutting problem of Eq. (9) and obtain the final grouping result \(\mathcal{P}\). However, gradient conflicts between groups still exist. In the following, we show how to solve the group-level MOO problem.

```
Require: Training set \(\mathcal{D}\), group results \(\mathcal{P}^{*}\), model parameters \(\mathbf{\theta}\), learning rate \(\eta\)
1: Sample a mini-batch \(\mathcal{B}=\left\{\mathcal{B}_{1},\cdots,\mathcal{B}_{G}\right\}\) from the training set \(\mathcal{D}\)
2: for \(i=1\to G\) do
3:    Compute the group-level loss \(L_{i}=L\left(\mathbf{\theta};\mathcal{B}_{i}\right)\)
4:    Back-propagate and compute the gradients \(\nabla_{\mathbf{\theta}}L_{i}\)
5: end for
6: \(\alpha_{1}^{*},\cdots,\alpha_{G}^{*}\leftarrow\) solve Eq. (11)
7: \(\mathbf{\theta}\leftarrow\mathbf{\theta}-\eta\sum_{i=1}^{G}\alpha_{i}^{*}\nabla_{\mathbf{\theta}}L_{i}\)
8: Output: updated parameters \(\mathbf{\theta}\)
```
**Algorithm 2** Update model by grouping MOO

### Solving Group-level MOO Problem

Following previous studies [18], our method is split into two stages, as shown in Fig. 3. At the first stage, we calculate the average gradient of each class and form a gradient similarity matrix; we then divide the categories into \(G\) groups according to their gradient similarity. At the second stage, we bundle the gradients of each group in each batch to form an MOO problem; we then solve the MOO problem to approximate a Pareto descent direction, achieving optimization of all groups at each iteration. Based on the class grouping \(\mathcal{P}^{*}=\left\{\mathcal{V}_{1},\mathcal{V}_{2},\cdots,\mathcal{V}_{G}\right\}\), the optimization goal of the long-tailed problem can be converted into the training losses of the \(G\) groups \[\min_{\mathbf{\theta}}\left(L_{1}\left(\mathbf{\theta}\right),\cdots,L_{G}\left(\mathbf{\theta}\right)\right). \tag{10}\] To solve the MOO problem efficiently, we adopt the Multiple Gradient Descent Algorithm (MGDA) [17], which leverages the Karush-Kuhn-Tucker (KKT) conditions and transforms the MOO problem into a min-norm single-objective optimization: \[\min_{\alpha_{1},\cdots,\alpha_{G}}\ \left\|\sum\nolimits_{i=1}^{G}\alpha_{i}\nabla_{\mathbf{\theta}}L_{i}\left(\mathbf{\theta}\right)\right\|^{2},\quad\text{s.t.}\ \sum\nolimits_{i=1}^{G}\alpha_{i}=1\ \ \text{and}\ \ \alpha_{i}\geq 0,\ \forall i. \tag{11}\] As a min-norm single-objective optimization, this problem can be solved easily by quadratic programming. With the solution to this optimization problem, we obtain the final gradient for long-tailed learning \[\mathbf{d}^{*}=\sum_{i=1}^{G}\alpha_{i}\nabla_{\mathbf{\theta}}L_{i}. \tag{12}\] According to [17], the vector \(\mathbf{d}^{*}\) is either zero or a feasible Pareto descent direction for all groups. We show the multi-objective-optimization-based gradient descent steps in Algorithm 2.
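A minimal sketch of the min-norm step in Eq. (11) is given below; it uses SciPy's SLSQP solver as a generic stand-in for the quadratic programming mentioned above (the specific solver is our assumption), and `group_grads` is assumed to be a list of flattened group-level gradient vectors.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_weights(group_grads):
    """Solve Eq. (11): min ||sum_i a_i g_i||^2  s.t.  sum_i a_i = 1, a_i >= 0."""
    Gmat = np.stack(group_grads)            # shape (G, P)
    M = Gmat @ Gmat.T                       # Gram matrix of the group gradients
    G = M.shape[0]

    res = minimize(
        lambda a: float(a @ M @ a),
        x0=np.full(G, 1.0 / G),
        bounds=[(0.0, 1.0)] * G,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# The balanced direction d* of Eq. (12), given per-group gradients:
# alphas = min_norm_weights(group_grads)
# d_star = sum(a * g for a, g in zip(alphas, group_grads))
```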
Specifically, in every iteration we compute the loss of each group and back-propagate each loss through the model parameters to get \(\nabla_{\mathbf{\theta}}L_{i}\left(\mathbf{\theta}\right)\). Then we acquire the weights \(\left\{\alpha_{1},\cdots,\alpha_{G}\right\}\) by solving Eq. (11) and use them to carry out the weighted summation \(\sum_{i=1}^{G}\alpha_{i}\nabla_{\mathbf{\theta}}L_{i}\left(\mathbf{\theta}\right)\), which yields the final gradient used to update the model parameters. Moreover, we propose a simple but effective resampling method, the Group-Aware Completion (GAC) sampler, to guarantee that each batch contains samples from all groups, _i.e._, we have data from all groups in every mini-batch. In each iteration, if the data of a group is missing from the mini-batch, we sample from the training data of the missing group with probability based on the class-balanced term [16], so that the number of its samples reaches 1/10 of the batch size. This ensures that each batch contains samples from all groups and, to some extent, improves the contribution of tail-class samples. Compared with the original optimization problem containing a large number of objectives, our method greatly reduces the number of optimization objectives and is able to complete the training relatively efficiently.

## Experiment

### Datasets

**CIFAR10/100-LT.** CIFAR10 with 10 classes and CIFAR100 (Krizhevsky, Hinton et al., 2009) with 100 classes are balanced datasets, each containing 50,000 training images and 10,000 validation images. CIFAR10/100-LT are the long-tailed versions of CIFAR10/100. Specifically, they are generated by downsampling CIFAR10/100 with different Imbalance Factors (IF) \(\hat{\beta}=N_{\text{max}}/N_{\text{min}}\), where \(N_{\text{max}}\) and \(N_{\text{min}}\) are the instance counts of the most frequent and least frequent classes in the training set (Cui et al., 2019; Cao et al., 2019). The validation set of CIFAR10-LT has 1,000 images per class and that of CIFAR100-LT has 100 images per class.

**ImageNet-LT.** Similar to long-tailed CIFAR, Liu et al. (Liu et al., 2019) proposed ImageNet-LT as the long-tailed version of the original ImageNet. ImageNet-LT is sampled from vanilla ImageNet following a Pareto distribution with power value \(\alpha=6\). It contains 115.8K training images of 1,000 categories in total, with \(N_{max}=1,280\) and \(N_{min}=5\). We use the balanced validation set of vanilla ImageNet, which contains 50 images per class.

**iNaturalist 2018.** iNaturalist 2018 is a large-scale real-world dataset that naturally exhibits a long-tailed distribution. It consists of 437.5K images from 8,142 classes with \(\beta=512\). The validation set contains 24.4K images, with 3 images per class, to test our method.

For the grouping-strategy comparison in Table 5, we follow Li et al. (2020) to divide all categories into 4 groups according to their instance numbers, which means that classes with similar instance numbers are placed in the same group. The results show that our grouping strategy achieves the best performance among all grouping strategies, which proves the effectiveness of grouping via gradients.

### Analysis on Different Group Numbers

In Fig. 4, we present the impact of different group numbers within our gradient-based class grouping mechanism. We conduct experiments across the CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets, varying the number of groups to determine the optimal configuration. The outcomes of these experiments reveal that the use of four groups yields the most favorable results.
This finding suggests that employing an excessive number of objectives does not necessarily enhance performance in MOO problems. Moreover, the strategy of assigning each class as an individual objective on CIFAR10-LT significantly undermines performance. This outcome is likely attributable to the heightened complexity of the optimization space when confronted with numerous objectives: the resulting increased likelihood of encountering local optima can lead to performance degradation.

### Ablation Study

We perform several experiments to demonstrate the compatibility and validity of our method. In Table 6, we split our method into two parts: the gradient-balancing grouping (GBG) strategy and the multi-objective optimization (MOO). We choose CIFAR10-LT (IF=100) for these experiments and use ResNet-32 as the backbone. We set the number of groups to four and use the averaging strategy to merge the gradients within groups. To validate the compatibility of the proposed method, we add it to the naive cross-entropy (CE) baseline. The results show a significant improvement when our method is applied to CE, indicating its compatibility. By utilizing GBG, we implicitly enhance the impact of tail-class gradients; accordingly, GBG achieves a modest accuracy improvement of 0.13% when added to BCL. On the other hand, directly employing the MOO method sets the loss of each category as an optimization objective. Due to the limited batch size, however, data from tail classes are often absent from batches, resulting in the absence of certain optimization objectives in each iteration. Consequently, relying solely on MOO significantly decreases performance, to only 74.58%. In other words, the optimization objectives of the MOO problem are not fixed, which considerably affects its resolution. Our method combines grouping and MOO: through GBG, we put categories with high gradient similarity into the same group, which fixes the optimization objectives of the MOO problem, and we then resolve the gradient conflicts between the groups through MOO, obtaining approximate Pareto descent directions at every descent step. Building on this, we achieve an improvement of 0.85% compared with using grouping alone.

## Conclusion

In this paper, we found that gradient imbalance during training is a significant issue leading to poor performance in long-tailed learning. We analyzed in depth how inappropriate compensation of the gradients of different classes results in the seesaw dilemma of previous methods. To solve this problem, we formulated long-tailed recognition as an MOO problem and proposed a GBG algorithm to balance the gradient contributions of head and tail classes. GBG makes classes with similar gradient directions form more representative gradients. With GBG, every update of the model parameters approximately follows a Pareto descent direction, providing suitable compensation to the tail classes. The experimental results on commonly used benchmarks show that our method achieves new state-of-the-art performance, demonstrating the superiority of the proposed method. In the future, we plan to further study how to adaptively adjust the number of groups and the class grouping during training, to reduce the time of hyperparameter tuning and enable our method to be applied more efficiently to different datasets.

\begin{table}
\begin{tabular}{l|c}
\hline
Method & Accuracy \\
\hline
CE & 71.36 \\
CE+GBG+MOO & 75.53 \\
BCL & 84.07 \\
BCL+GBG & 84.20 \\
BCL+MOO & 74.58 \\
BCL+GBG+MOO & **85.05** \\
\hline
\end{tabular}
\end{table}
Table 6: Ablation study for our proposed method with BCL as the baseline on CIFAR10-LT (IF = 100).
\begin{table}
\begin{tabular}{l|c c c|c}
\hline
Methods (ResNet-50) & Many & Medium & Few & All \\
\hline
\(\tau\)-norm Kang et al. (2019) & 59.1 & 46.9 & 30.7 & 49.4 \\
Balanced Softmax Ren et al. (2020) & 62.2 & 48.8 & 29.8 & 51.4 \\
Decoupling-LWS Kang et al. (2019) & 60.2 & 47.2 & 30.3 & 49.9 \\
RIDE (4 Experts) Wang et al. (2020) & 68.2 & 53.8 & 36.0 & 56.8 \\
LADE Hong et al. (2021) & 62.3 & 49.3 & 31.2 & 51.9 \\
DisAlign Zhang et al. (2021) & 62.7 & 48.8 & 31.6 & 51.8 \\
FDC Ma et al. (2023) & 65.5 & 51.9 & 37.8 & 55.3 \\
BCL1\({}^{\prime}\) Zhu et al. (2022) & 66.9 & 54.3 & 37.6 & 56.9 \\
\hline
GBG (Ours) & **69.6** & **55.8** & **38.1** & **58.7** \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison on three subsets of ImageNet-LT.

Figure 4: Comparison results of different group numbers.

\begin{table}
\begin{tabular}{l|c|c}
\hline
Strategy & \multicolumn{2}{c}{ImageNet-LT} \\
\hline
Backbone & ResNet-50 & ResNet-50 \\
\hline
Random & 56.37\(\pm\)0.41 & 56.96\(\pm\)0.24 \\
Instance number & 56.61\(\pm\)0.27 & 57.44\(\pm\)0.38 \\
GBG & 57.49\(\pm\)0.29 & 58.61\(\pm\)0.37 \\
\hline
\end{tabular}
\end{table}
Table 5: Grouping strategy comparison on ImageNet-LT (Avg\(\pm\)Std over 5 fixed seeds).
2303.18037
Traffic Sign Recognition Dataset and Data Augmentation
Although there are many datasets for traffic sign classification, there are few datasets collected for traffic sign recognition, and few of them contain enough instances, especially for training a model with deep learning. Deep learning is almost the only way to train a model for real-world use that covers numerous highly similar classes, compared with traditional approaches based on color, shape, etc. Also, certain sign classes, owing to the meanings of those signs, can never accumulate enough instances in a dataset. To solve this problem, we propose a unique data augmentation method for traffic sign recognition datasets that takes advantage of the standardized design of traffic signs; we call it TSR dataset augmentation. We use the benchmark Tsinghua-Tencent 100K (TT100K) dataset to verify the unique data augmentation method: we applied the method to four main iteration versions of the TT100K dataset, and the experimental results show that our method is efficacious. The iteration datasets based on TT100K, the source code of the data augmentation method, and the training results introduced in this paper are publicly available.
Jingzhan Ge
2023-03-31T13:14:36Z
http://arxiv.org/abs/2303.18037v1
# Traffic Sign Recognition Dataset and Data Augmentation

###### Abstract

Although there are many datasets for traffic sign classification, there are few datasets collected for traffic sign recognition, and few of them contain enough instances, especially for training a model with deep learning. Deep learning is almost the only way to train a model for real-world use that covers numerous highly similar classes, compared with traditional approaches based on color, shape, etc. In addition, because different classes of traffic signs appear with very different frequencies in the real world, the imbalance between the instances of different classes in the datasets makes the training results even worse. Also, certain sign classes, owing to the meanings of those signs, can never accumulate enough instances in a dataset. To solve this problem, we propose a unique data augmentation method for traffic sign recognition datasets that takes advantage of the standardized design of traffic signs; we call it TSR dataset augmentation. We use the benchmark Tsinghua-Tencent 100K (TT100K) dataset to verify the unique data augmentation method: we applied the method to four main iteration versions of the TT100K dataset, and the experimental results show that our method is efficacious. The iteration datasets based on TT100K, the source code of the data augmentation method, and the training results introduced in this paper are publicly available.

## 1 Introduction

Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. In deep learning, computer models learn to perform tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art precision, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data [1]. The dataset is therefore vitally important for deep learning, and the first step was to choose a suitable traffic sign recognition dataset. Well-known TSR datasets such as the DFG Traffic Sign Dataset, the Swedish Traffic Signs Dataset, the German Traffic Sign Detection Benchmark, and the LISA Traffic Sign Dataset were collected outside mainland China. Chinese TSR datasets are much larger: CCTS TSDD, CCTSDB and Tsinghua-Tencent 100K (TT100K). However, these datasets suffer, to varying degrees, from shortcomings such as excessively subdivided or overly coarse classification classes, too few instances, and problematic collection methods. We think the best way to collect such a dataset is to capture individual photographs with a camera. If a dataset is instead collected through video recording, with one picture extracted every 5 frames from the video clips containing traffic signs, the result is a large number of nearly identical scenes, especially when the dataset is small, which is unfavorable both for model training and for the data augmentation method for traffic sign recognition datasets proposed in this paper. The DFG Traffic Sign Dataset was collected in Slovenia and labeled by DFG Consulting d.o.o. It contains 6957 images and 200 classes, each containing at least 20 instances. At least 20 instances per class is clearly not enough for training [2]. The German Traffic Sign Detection Benchmark features a single-image detection problem with 900 images (divided into 600 training images and 300 evaluation images) and a total of 1206 instances divided into only 4 classes; this class granularity is clearly too coarse [3]. The Swedish Traffic Signs Dataset was collected in Sweden. It contains
2309.09161
Generalized Finsler geometry and the anisotropic tearing of skin
A continuum mechanical theory with foundations in generalized Finsler geometry describes the complex anisotropic behavior of skin. A fiber bundle approach, encompassing total spaces with assigned linear and nonlinear connections, geometrically characterizes evolving configurations of a deformable body with microstructure. An internal state vector is introduced on each configuration, describing subscale physics. A generalized Finsler metric depends on position and the state vector, where the latter dependence allows for both direction (i.e., as in Finsler geometry) as well as magnitude. Equilibrium equations are derived using a variational method, extending concepts of finite-strain hyperelasticity coupled to phase-field mechanics to generalized Finsler space. For application to skin tearing, state vector components represent microscopic damage processes (e.g., fiber rearrangements and ruptures) in different directions with respect to intrinsic orientations (e.g., parallel or perpendicular to Langer's lines). Nonlinear potentials, motivated from soft-tissue mechanics and phase-field fracture theories, are assigned with orthotropic material symmetry pertinent to properties of skin. Governing equations are derived for one- and two-dimensional base manifolds. Analytical solutions capture experimental force-stretch data, toughness, and observations on evolving microstructure, in a more geometrically and physically descriptive way than prior phenomenological models.
John D. Clayton
2023-09-17T05:22:28Z
http://arxiv.org/abs/2309.09161v2
# Generalized Finsler geometry and the anisotropic tearing of skin

###### Abstract

A continuum mechanical theory with foundations in generalized Finsler geometry describes the complex anisotropic behavior of skin. A fiber bundle approach, encompassing total spaces with assigned linear and nonlinear connections, geometrically characterizes evolving configurations of a deformable body with microstructure. An internal state vector is introduced on each configuration, describing subscale physics. A generalized Finsler metric depends on position and the state vector, where the latter dependence allows for both direction (i.e., as in Finsler geometry) as well as magnitude. Equilibrium equations are derived using a variational method, extending concepts of finite-strain hyperelasticity coupled to phase-field mechanics to generalized Finsler space. For application to skin tearing, state vector components represent microscopic damage processes (e.g., fiber rearrangements and ruptures) in different directions with respect to intrinsic orientations (e.g., parallel or perpendicular to Langer's lines). Nonlinear potentials, motivated from soft-tissue mechanics and phase-field fracture theories, are assigned with orthotropic material symmetry pertinent to properties of skin. Governing equations are derived for one- and two-dimensional base manifolds. Analytical solutions capture experimental force-stretch data, toughness, and observations on evolving microstructure, in a more geometrically and physically descriptive way than prior phenomenological models.

**Key Words**: anisotropy; biological materials; continuum mechanics; Finsler geometry; nonlinear elasticity; orthotropic symmetry; skin; soft condensed matter

**Mathematics Subject Classification (MSC) 2020**: 53Z05 (primary), 53B40, 74B20

###### Contents

* 1 Introduction
  * 1.1 Background
  * 1.2 Prior work
  * 1.3 Purpose and scope
    * 1.3.1 Soft tissue and skin mechanics
    * 1.3.2 Overview of the current work
* 2 Generalized Finsler space
  * 2.1 Reference configuration
    * 2.1.1 Basis vectors and nonlinear connections
    * 2.1.2 Length, area, and volume
    * 2.1.3 Covariant derivatives
    * 2.1.4 A divergence theorem
    * 2.1.5 Pseudo-Finsler and Finsler spaces
  * 2.2 Spatial configuration
    * 2.2.1 Basis vectors and nonlinear connections
    * 2.2.2 Length, area, and volume
    * 2.2.3 Covariant derivatives
    * 2.2.4 A divergence theorem
* 3 Finsler-geometric continuum mechanics
  * 3.1 Motion and deformation
  * 3.2 Particular assumptions
    * 3.2.1 Director fields
    * 3.2.2 Connections and metrics
  * 3.3 Energy and equilibrium
    * 3.3.1 Variational principle
    * 3.3.2 General energy density
    * 3.3.3 Euler-Lagrange equations
    * 3.3.4 Spatial invariance and material symmetry
* 4 One-dimensional base manifold
  * 4.1 Geometry and kinematics
  * 4.2 Governing equations
    * 4.2.1 Energy density
    * 4.2.2 Linear momentum
    * 4.2.3 Micro-momentum
  * 4.3 General solutions
    * 4.3.1 Homogeneous fields
    * 4.3.2 Stress-free states
  * 4.4 Constitutive model
    * 4.4.1 Metrics
    * 4.4.2 Nonlinear elasticity
  * 4.5 Specific solutions
    * 4.5.1 Homogeneous fields
    * 4.5.2 Stress-free states
* 5 Two-dimensional base manifold
  * 5.1 Geometry and kinematics
  * 5.2 Governing equations
    * 5.2.1 Energy density
    * 5.2.2 Linear momentum
    * 5.2.3 Micro-momentum
  * 5.3 General solutions
    * 5.3.1 Homogeneous fields
    * 5.3.2 Stress-free states
  * 5.4 Constitutive model
    * 5.4.1 Metrics
    * 5.4.2 Nonlinear elasticity
  * 5.5 Specific solutions
    * 5.5.1 Uniaxial extension
    * 5.5.2 Biaxial extension
    * 5.5.3 Stress-free states
* 6 Conclusion
* 7 Appendix A: Variational derivatives
  * A.1
Deformation gradient and director gradient * A.2 Volume form * 8 Appendix B: Toward residual stress and growth * B.1 Macroscopic momentum * B.2 Micro-momentum and growth ## 1 Introduction Finsler geometry and its generalizations suggest the possibility of enriched descriptions of numerous phenomena in mathematical physics, albeit at the likely expense of greater complexity of analysis and calculations compared to Riemannian geometry. Fundamentals of Finsler geometry, aptly credited to Finsler [1], are discussed in the classic monograph of Rund and the more recent text of Bao et al. [2, 3]. See also the overview article by Eringen [4]. Extensions to pseudo- and generalized Finsler geometries are described in the monograph of Bejancu [5] and research cited therein [6, 7, 8], as well as several more recent works [9, 10]. Generalized Finsler geometry is predominantly used herein, since strict classical Finsler geometry is insufficient to describe all phenomena pertinent to the present class of materials physics. Applications of (generalized) Finsler geometry in the broad physical sciences are vast and diverse; a thorough review is outside the scope of the present article. Available books discuss applications in optics, thermodynamics, and biology [11] and spinor structures and other topics in modern physics [12]. Finsler geometry and its generalizations have also been used for describing anisotropic spacetime, general relativity, quantum fields, gravitation, electromagnetism, and diffusion [13, 14, 15, 16, 17, 18, 19]. The current work implements a continuum mechanical theory for the behavior of solid materials. ### Background Classical continuum mechanics, encompassing nonlinear elasticity and plasticity theories as example constitutive frameworks, is couched in the context of Riemannian-Cartan manifolds [20, 21, 22]. Non-vanishing torsion and/or curvature tensors may emerge depending on linear connections introduced to describe various incompatibilities and possible sources of residual stresses including dislocations and disclinations in crystals [21, 23, 24, 25], inhomogeneous temperature distributions [26, 27], or biological growth processes [28, 29, 30]. In the classical Riemannian context, a continuous material body is viewed as a differentiable manifold \(\mathcal{M}\) of dimension \(n\), parameterized by coordinate chart(s) \(\{X^{A}\}\) (\(A=1,\ldots,n\)). A Riemannian metric is introduced on \(\mathcal{M}\): in components, \(G_{AB}=G_{AB}(X)\) populate a symmetric and positive-definite \(n\times n\) tensor field, where argument \(X\) implies functional dependence on the \(X^{A}\)[22]. A covariant derivative \(\nabla\) (i.e., the linear connection on \(\mathcal{M}\)) completes the geometric description. The corresponding linear connection coefficients most generally consist of \(n^{3}\) independent field components \(\Gamma^{A}_{BC}=\Gamma^{A}_{BC}(X)\). For usual solid bodies, \(n=3\), but other dimensions are permissible. Coordinate descriptions may be framed in terms of holonomic or anholonomic bases, where the latter do not correspond to continuous curves parameterizing the body [31, 32]; anholonomic coordinates emerge for a multiplicative decomposition of the deformation gradient [33, 34]. In Finsler geometry and its generalizations, a base manifold \(\mathcal{M}\), parameterized by one or more charts \(\{X^{A}\}\) (\(A=1,\ldots,n\)), is again introduced. 
A fiber bundle \((\mathcal{Z},\mathcal{M},\Pi,\mathcal{U})\) of total (generalized Finsler) space \(\mathcal{Z}\) of dimension \(n+m\) is constructed. The total space in Finsler geometry is typically identified with the slit tangent bundle, i.e., \(\mathcal{Z}\to T\mathcal{M}\backslash 0\)[3] with \(m=n\), but this is not essential in more general formulations [5, 9, 10]. Auxiliary coordinates \(\{D^{K}\}\) (\(K=1,\ldots,m\)) cover each fiber \(\mathcal{U}\), such that \(\mathcal{Z}\) is parameterized by \(\{X^{A},D^{K}\}\). Particular transformation laws are assigned for changes of coordinates associated with \(X\) and \(D\). Nonlinear connection coefficients define nonholonomic bases that transform conventionally on \(T\mathcal{M}\) under \(X\)-coordinate changes. Furthermore, at least two, and up to four [5, 35], additional fields of connection coefficients are needed to enable horizontal and vertical covariant derivatives with respect to \(X\) and \(D\) on \(\mathcal{Z}\)[2, 5, 10]. The generalized pseudo-Finsler metric tensor is of the form \(G_{AB}=G_{AB}(X,D)\), always symmetric. The metric is positive definite for Finsler geometry, but not always so for the pseudo-Finsler case [10]. For strict Finsler geometry (but not necessarily its generalizations [5, 6, 18, 36]), \(G_{AB}\) are second derivatives of a (squared) fundamental function \(\frac{1}{2}\mathcal{F}^{2}\) with respect to \(D^{A}\)[2, 3]. In Finsler geometry, \(\mathcal{F}\) is positively homogeneous of degree one in \(D\); the resulting metric is homogeneous of degree zero in \(D\)[2, 3] and thus should not depend only on the magnitude of a vector comprised of components \(\{D^{A}\}\). In (generalized) Finsler space, the \((X,D)\)-coordinate dependencies of the metric and various linear and nonlinear connections enter other geometric objects and tensorial operations: torsion and curvature forms, volume and area forms, the divergence theorem [37], etc. Motivation for Finsler geometry is description of detailed physics via the set of auxiliary coordinates \(\{D^{A}\}\) attached to each position \(X\) in real physical space. In solid mechanics, the idea can be interpreted as an extension of micropolar, micromorphic, or Cosserat-type theory [38, 39, 40, 41, 42, 43] from Riemannian (and more often, Euclidean) space to generalized Finsler space. However, in classical micromorphic theories, a Riemannian, rather than Finsler, metric is used, the material domain with microstructure is fully parameterized by the \(\{X^{A}\}\); basis vectors and coordinate transformation laws are those of classical continuum mechanics, as are integral theorems. The director triads of micromorphic theories enter the balance laws and constitutive functions, but they do not affect geometric objects and covariant derivatives in the same way as \(D\) of generalized Finsler space. ### Prior work The first application of Finsler geometry in the context of continuum mechanics of solids appears to be the treatment of ferromagnetic elastic-plastic crystals of Amari [44]. Conservation laws and field theories, with application to ferromagnetics, were further developed by Ikeda [45, 46, 47, 48]. Bejancu [5] gives a generalized Finsler treatment of kinematics of deformable bodies. More contemporary theories includes those of Saczuk and Stumpf [49, 50, 51], with underpinnings in a monograph [52]. 
Different physical phenomena (e.g, different physical meanings of \(\{D^{K}\}\)[51]) are encompassed by their models that include kinematics, balance laws, and thermodynamics, but their focus is most often on mechanics of elastic-plastic crystals and dislocations [49, 50, 52]. See also recent theory [53] that applies generalized Finsler geometry to topological defects and the comprehensive review [54] of prior works on generalized Finsler geometry in continuum physics. A new theory of Finsler-geometric continuum mechanics was developed for nonlinear elastic solids with evolving microstructure, first published in the article [55] with a preliminary version in a technical report [56]. This variational theory was extended to allow for explicit inelastic deformation and applied to study phase transitions and shear localization in crystalline solids [55, 57]. The theory has also been broadened for dynamics and shock waves [58, 59], and most recently has been used to describe ferromagnetic solids [54], enriching the governing equations of Maugin and Eringen [60, 61] with pertinent aspects arising from Finsler geometry [44, 48]. Prior to this theory [54, 55], pragmatic solutions of boundary value problems using continuum mechanical models incorporating generalized Finsler geometry appeared intractable due to complexity of governing equations and unwieldy parameterizations (e.g., uncertain constitutive functions and material constants). Most aforementioned work [5, 44, 45, 46, 47, 51, 53] presented purely theoretical constructions without attempt to formulate or solve physical boundary value problems. A material response was calculated by Saczuk and Stumpf [50, 52], but motion and internal state coordinates were prescribed a priori, without apparent solution of governing conservation laws for macroscopic and microscopic momentum and energy. In contrast, the present theory [55, 56] appears to be the first Finsler geometry-based continuum mechanics theory for which analytical and numerical solutions to the governing equations have been found, as evidenced by solutions to numerous problems for (non)linear elastic materials with evolving microstructure (e.g, fractures, twinning, phase transitions, dislocations), as evidenced in those and subsequent works [54, 55, 56, 57, 58, 59, 62]. All prior applications considered stiff crystalline solids or generic materials. The current research newly applies the theory to soft biological tissues, specifically the skin. Furthermore, prior applications in fracture and cavitation [54, 55, 59, 62] were limited to either locally isotropic damage or to local material separation on a single cleavage plane. The current treatment advances the description to anisotropic fractures or ruptures on multiple material surfaces at a single point \(X\). Most cited prior applications invoked only a single non-trivial state vector component in \(D\) (an exception being a multi-component \(D\) for twinning and fracture [59]) and most often conformal Weyl-type rescaling of \(G_{AB}\) with canonically vanishing nonlinear connection (with a few exceptions studied [62, 57]). The current research incorporates an anisotropic generalized Finsler metric for multi-dimensional problems and non-trivial nonlinear connections to show utility by example. 
### Purpose and scope The scope of this paper covers two primary purposes: * Demonstration of utility of the generalized Finsler geometric theory for describing anisotropic elasticity and anisotropic structural rearrangements in soft biological tissue; * Consolidation and refinement of the theory for the equilibrium (i.e., quasi-static) case. The first item furnishes the first known application of Finsler geometry-based continuum theory to analyze finite-strain mechanics of soft biological tissue. Prior work of others [63, 64] used ideas from Finsler geometry to reproduce nonlinear stress-strain to failure responses of biologic solids, but that work used a discrete, rather than continuum, theory with material points represented as vertices linked by bonds; interaction potentials comprised bonding energies within a Hamiltonian. In that approach [65, 66, 67], a Finsler metric for bond stretch depends on orientation of local microstructure entities (e.g., molecular chains or collagen fibers) described by the Finsler director vector field \(D\). Instead, the current continuum theory considers, in a novel way, effects of microstructure on anisotropy (elastic and damage-induced) in both a geometric and constitutive sense. The second item includes a renewed examination of Rund's divergence theorem [37] in the context of an osculating Riemannian metric. It is shown that certain choices of metric and connection coefficients, with possible addition of a source term to the energy conservation law, can recover governing equations for biologic tissue growth [30] in the quasi-static limit (Appendix B). #### 1.3.1 Soft tissue and skin mechanics Most soft tissues have inherent directionality due to their collagen fiber-based and/or aligned cellular microstructures [68, 69], toward which tools of analysis from Finsler geometry might be anticipated to aptly apply. The mechanics of skin deformation [70, 71, 68], degradation [72, 73], and tearing [74, 73] are investigated herein. Like most biological materials, microstructure of skin is complex. The respective middle and outer layers of skin are the dermis and epidermis, with elastin and collagen fibers and cells embedded in a ground matrix. Underlying hypodermis (i.e., adipose) can be labeled an inner layer of the skin. The microstructure dictates nonlinear, anisotropic, viscoelastic, and tearing behaviors [74, 75, 76]. Mechanical behavior at small strains is primarily controlled by the elastin and ground substance, whereby the collagen fibers are coiled or slack [75]. Under increasing tensile stretch, the collagen fibers straighten and tighten, supporting most of the load, and compliance decreases. Under more severe stretching, fibers slide, delaminate, and rupture, leading to reduced stiffness, strain softening, and material failure [72, 73, 74, 77]. Experiments indicate that skin elasticity has orthotropic symmetry [75, 76, 68, 70]. Orthotropy arises from preferred arrangements of the collagen fibers, leading to greater stiffness along direc tions along which more fibers are aligned. In the plane of the dermis, fibers tend to be dispersed about a primary axis along which stiffness is greatest. In vivo, resting skin tension is greatest along this axis, parallel to Langer's lines [75]. In typical uniaxial and biaxial tests [68, 70, 71, 74], extracted skin is unstretched initially, but the greater stiffness along the primary direction persists, with differences in stiffness also emerging between orthogonal in-plane and out-of-plane directions [70, 75]. 
As might be expected, damage processes are also anisotropic due to fiber degradation that differs with respect to direction of loading relative to the microstructure [73, 74]. Skin, as is most biological tissue, is simultaneously nonlinear elastic, viscoelastic, and poroelastic [68, 76, 78, 79]; pertinence of mechanisms depends on the time scale of loading. The present application considers only monotonic loading at a constant rate (e.g, no cycling or rate fluctuations). Loading rates are assumed much slower or faster than viscous relaxation times. Thus, the pseudo elastic approach is justified to study these experiments [68], whereby hyperelastic models are deemed reasonable [71, 80, 81, 82, 83], albeit noting that different elastic constants (e.g., static and dynamic moduli) are needed to fit data at vastly different limiting low and high loading rates [84, 85]. In future applications to problems with time dependence, internal state variables can be extended, leading to kinetic laws with explicit viscous dissipation [78, 86]. The current study is limited to relatively small samples, tested in vitro, under uniaxial or biaxial extension [68, 70, 74, 87]. The material is modeled as unstressed initially and homogeneous with regard to elastic properties. In the future, the current theory can be extended to study residual stress due to growth or heterogeneous material features, as well as heterogeneous elastic properties. Residual stresses can be addressed, in the context of Riemannian manifolds, using a material metric having a non-vanishing Riemann-Christoffel curvature of its Levi-Civita connection [27, 30] or an anholonomic multiplicative term in the deformation gradient [29, 88]. These ideas may be extended to generalized Finsler space (e.g., invoking the current fiber bundle approach) in future. An early nonlinear elastic model described orthotropic symmetry using a phenomenological pseudo-strain energy potential [89]. Another early model delineated contributions of elastin and collagen fibers [79]. More recently, a class of nonlinear elastic models accounting for anisotropy from fiber arrangements using structure tensors has been successful for representing many soft tissues, including arterial walls [80, 90], myocardium [82, 91], and skin [71]. Polyconvex energy potentials can be incorporated for stability and to facilitate existence of (unique) solutions to nonlinear elastic problems [81, 90]. Fiber dispersion can be incorporated to modulate the degree of anisotropy [71, 92]. To date, most damage models accounting for softening and failure have been phenomenological, whether implemented at the macroscopic scale (either isotropic or along preferred fiber directions) or at the scale of individual fibers and their distributions [73, 77, 90, 93]. These damage models, with a basis in continuum damage mechanics [94], are thermodynamically consistent in the sense that damage is dissipative, but their particular kinetic laws and (often numerous) parameters are calibrated to experimental data without much physical meaning. In contrast, the phase-field approach has been recently implemented for soft-tissue fracture or rupture, incorporating relatively few parameters with physical origin (e.g., surface energy) and regularization facilitating unique solutions to problems involving material softening [95, 96]. 
The kinetic law or equilibrium equation for damage is derived from fundamental principles [97] and drives the material to a local minimum-energy state, in contrast to ad hoc equations simply selected to match data.

#### 1.3.2 Overview of the current work

Implementation of the present generalized Finsler theory consists of four key elements: definition of the internal state \(D\), assignment of the metric tensor, assignment of the linear and nonlinear connections, and prescription of the local free energy potential. For soft tissue mechanics, the state vector represents the fiber rearrangements. Damage anisotropy is monitored via its direction, with different components of \(D\) reflecting fiber reorganization and rupture with respect to orientations of microstructure features [73, 74]; the magnitude of each component of \(D\) measures the local intensity of damage in a given material direction. The metric tensor with components \(G_{AB}(X,D)\) depends on position \(X\) as well as direction and magnitude of \(D\) in generalized Finsler space; the novel \(D\)-dependence captures rescaling of the material manifold as damage entities open, close, or rearrange in different directions [54, 62]. The preferred linear connection is that of Chern and Rund [3], ensuring compatibility with the divergence theorem used to derive the Euler-Lagrange equations [54, 55]. The generalized Finslerian \(D\)-dependence of both the metric and the linear connection explicitly affects the governing equations. Roles of nonlinear connections are newly examined; a non-trivial prescription is shown to influence the fracture energy and stress-strain response. The free energy density consists of a nonlinear elastic contribution and an internal structure contribution. The nonlinear elastic potential enriches the orthotropic theory of Holzapfel, Ogden, Gasser, and others [71, 80, 82, 83, 92] with implicit contributions from the generalized Finsler metric as well as anisotropic degradation from \(D\). The structural contribution is motivated from phase-field mechanics [95, 98]. A previous model for arterial dissection [95] accounted for fiber-scale damage anisotropy using a scalar order parameter. The current theory invokes a more physically descriptive, vector-valued order parameter (i.e., normalized \(D\)) of generalized Finsler type. With regard to skin experiments, solutions obtained for the current model are shown to admirably match extension and failure data, including stress-strain behavior and fracture toughness [73, 74, 99], with parameters having physical or geometric origins. The general theory is thus potentially more physically realistic, and considered more descriptive from a geometric perspective, than past models based on phenomenological damage mechanics [90, 94, 100, 101].

This paper is organized as follows. Mathematical preliminaries (e.g., notation and definitions for objects in referential and spatial configurations) are provided in §2. The Finsler-geometric theory of continuum mechanics is presented in §3, including kinematics of finite deformation and equilibrium equations derived with a variational approach. The next two sections specialize the theory to model soft tissue, specifically skin. In §4, a one-dimensional (1-D) model for the base manifold \(\mathcal{M}\) is formulated. Analytical and semi-numerical solutions are obtained for uniaxial extension and compared to experimental data.
In §5, a two-dimensional (2-D) model for \(\mathcal{M}\) is formulated, whereby the skin has orthotropic symmetry; solutions are obtained for biaxial extension with anisotropic damage in orthogonal material directions. Conclusions follow in §6.

## 2 Generalized Finsler space

The content of §2 consolidates a more thorough exposition given in a recent review [54], from which notation is adopted. Other extensive texts include those of Rund, Bejancu, and Bao et al. [2, 3, 5]. A new contribution in the present §2 is interpretation of the divergence theorem [37, 54] using an osculating Riemannian metric, whereby for the further simplifying assumption of vanishing nonlinear connection, a representation akin to that of classical Riemannian geometry is obtained.

### 2.1 Reference configuration

The very general fiber bundle approach of Bejancu [5] encompasses geometric fundamentals of the theory. The reference configuration is linked to a particular instant in time at which a deformable solid body is undeformed relative to some intrinsic state. A differential manifold \(\mathcal{M}\) of dimension \(n\) is physically identified with a body embedded in ambient Euclidean space of dimension \(N\geq n\).

**Remark 2.1.1**.: Such an embedding only applies to base manifold \(\mathcal{M}\). Neither the total space of the fiber bundle \(\mathcal{Z}\), to be introduced in what follows, nor its specialization to a Finsler space \(F_{n}\) discussed in §2.1.5, can generally be embedded in Euclidean space [2, 102, 103].

Let \(X\in\mathcal{M}\) denote a material point or particle, and let \(\{X^{A}\}(A=1,2,\ldots,n)\) denote a coordinate chart that may partially or completely cover \(\mathcal{M}\). Attached to each material point is a vector \(\mathbf{D}\); chart(s) of secondary coordinates \(\{D^{K}\}(K=1,2,\ldots,m)\) are assigned over \(\mathcal{M}\). Fields \(\{D^{K}\}\) are smooth over \(\mathcal{M}\): \(\mathbf{D}\) is as many times continuously differentiable with respect to \(\{X^{A}\}\) as needed. Define \(\mathsf{Z}=(\mathcal{Z},\Pi,\mathcal{M},\mathcal{U})\) as a fiber bundle of total space \(\mathcal{Z}\) (dimension \(n+m\)), where \(\Pi:\mathcal{Z}\to\mathcal{M}\) is the projection and \(\mathcal{U}=\mathcal{Z}_{X}=\Pi^{-1}(X)\) is the fiber at \(X\). A chart over a region of \(\mathcal{Z}\) is \(\{X^{A},D^{K}\}\). Each fiber is a vector space of dimension \(n\); \((\mathcal{Z},\Pi,\mathcal{M})\) constitutes a vector bundle. Let \(\mathcal{M}^{\prime}\subset\mathcal{M}\) be an open neighborhood of any \(X\in\mathcal{M}\), \(\Phi\) an isomorphism of vector spaces, and \(P_{1}\) a projection operator onto the first factor. Then the following diagram is commutative [5]:

#### 2.1.1 Basis vectors and nonlinear connections

Coordinate transformations from \(\{X,D\}\) to another chart \(\{\tilde{X},\tilde{D}\}\) on \(\mathcal{Z}\) are of the form [5, 10] \[\tilde{X}^{A}=\tilde{X}^{A}(X),\qquad\tilde{D}^{J}(X,D)=Q^{J}_{K}(X)D^{K}, \tag{2.1}\] where \(Q^{J}_{K}\) is non-singular and differentiable, with inverse obeying \(\tilde{Q}^{I}_{K}Q^{K}_{J}=\delta^{I}_{J}\). As usual, \(\delta^{I}_{J}=1\,\forall\,I=J,\delta^{I}_{J}=0\,\forall\,I\neq J\). The holonomic basis for the tangent bundle \(T\mathcal{Z}\) is the field of frames \(\{\frac{\partial}{\partial X^{A}},\frac{\partial}{\partial D^{K}}\}\). The holonomic basis for the cotangent bundle \(T^{*}\mathcal{Z}\) is \(\{dX^{A},dD^{K}\}\).
Under a change of coordinates \((X,D)\to(\tilde{X},\tilde{D})\) on \(\mathcal{Z}\) induced by \(X\to\tilde{X}\) on \(\mathcal{M}\), holonomic basis vectors on \(T\mathcal{Z}\) transform from (2.1) as [5, 10] \[\frac{\partial}{\partial\tilde{X}^{A}}=\frac{\partial X^{B}}{\partial\tilde{X }^{A}}\frac{\partial}{\partial X^{B}}+\frac{\partial D^{K}}{\partial\tilde{X }^{A}}\frac{\partial}{\partial D^{K}}=\frac{\partial X^{B}}{\partial\tilde{X }^{A}}\frac{\partial}{\partial X^{B}}+\frac{\partial\tilde{Q}^{K}_{J}}{ \partial\tilde{X}^{A}}\tilde{D}^{J}\frac{\partial}{\partial D^{K}}, \tag{2.2}\] \[\frac{\partial}{\partial\bar{D}^{J}}=\frac{\partial X^{B}}{\partial\bar{D}^{J}} \frac{\partial}{\partial X^{B}}+\frac{\partial D^{K}}{\partial\bar{D}^{J}}\frac {\partial}{\partial D^{K}}=\bar{Q}^{K}_{J}\frac{\partial}{\partial D^{K}}. \tag{2.3}\] Similarly, for the holonomic basis on \(T^{*}\mathscr{Z}\), \[d\tilde{X}^{A}=\frac{\partial\tilde{X}^{A}}{\partial X^{B}}dX^{B}+\frac{ \partial\tilde{X}^{A}}{\partial D^{K}}dD^{K}=\frac{\partial\tilde{X}^{A}}{ \partial X^{B}}dX^{B}, \tag{2.4}\] \[d\bar{D}^{J}=\frac{\partial\bar{D}^{J}}{\partial X^{B}}dX^{B}+\frac{\partial \bar{D}^{J}}{\partial D^{K}}dD^{K}=\frac{\partial Q^{J}_{K}}{\partial X^{B}}D ^{K}dX^{B}+Q^{J}_{K}dD^{K}. \tag{2.5}\] Given (2.1), \(\{\frac{\partial}{\partial X^{A}}\}\) and \(\{dD^{K}\}\) do not transform as conventional basis vectors on \(\mathscr{Z}\). Define [5, 9] \[\frac{\delta}{\delta X^{A}}=\frac{\partial}{\partial X^{A}}-N^{K}_{A}\frac{ \partial}{\partial D^{K}},\qquad\delta D^{K}=dD^{K}+N^{K}_{B}dX^{B}. \tag{2.6}\] Non-holonomic basis vectors \(\{\frac{\delta}{\delta X^{A}}\}\) and \(\{\delta D^{K}\}\) obey [10] \[\frac{\delta}{\delta\bar{X}^{A}}=\frac{\partial X^{B}}{\partial\bar{X}^{A}} \frac{\delta}{\delta X^{B}},\quad\delta\bar{D}^{J}=Q^{J}_{K}\delta D^{K}; \qquad\big{\langle}\frac{\delta}{\delta X^{B}},dX^{A}\big{\rangle}=\delta^{A }_{B},\quad\big{\langle}\frac{\partial}{\partial D^{K}},\delta D^{J}\big{\rangle} =\delta^{J}_{K}. \tag{2.7}\] The set \(\{\frac{\delta}{\delta X^{A}},\frac{\partial}{\partial D^{K}}\}\) are used as a convenient local basis on \(T\mathscr{Z}\), and the dual set \(\{dX^{A},\delta D^{K}\}\) on \(T^{*}\mathscr{Z}\)[3, 9]. The \(N^{K}_{B}(X,D)\) are the nonlinear connection coefficients; \(N^{K}_{B}\) are presumed differentiable with respect to \((X,D)\). These do not obey coordinate transformation rules for linear connections nor always correspond to a covariant derivative with the properties of a linear connection. For (2.7) to hold under coordinate transformations \(X\to\bar{X}\)[3, 5], \[\bar{N}^{J}_{A}=\left(Q^{J}_{K}N^{K}_{B}-\frac{\partial Q^{J}_{K}}{\partial X^ {B}}D^{K}\right)\frac{\partial X^{B}}{\partial\bar{X}^{A}}. \tag{2.8}\] **Remark 2.1.2**.: The geometry of tangent bundle \(T\mathscr{Z}\) with nonlinear connection admits an orthogonal decomposition \(T\mathscr{Z}=V\mathscr{Z}\oplus H\mathscr{Z}\) into a vertical vector bundle \(V\mathscr{Z}\) with local field of frames \(\{\frac{\partial}{\partial D^{A}}\}\) and a horizontal distribution \(H\mathscr{Z}\) with local field of frames \(\{\frac{\delta}{\delta X^{A}}\}\)[5]. Fibers of \(V\mathscr{Z}\) and \(H\mathscr{Z}\) are of respective dimensions \(m\) and \(n\). Henceforth, vertical and horizontal subspaces are of the same dimension: \(m=n\). Indices \(J,K,\ldots\) can thus be replaced with \(A,B,\ldots\) in the summation convention, which runs from \(1\) to \(n\). 
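As a purely illustrative check of the nonholonomic derivative in (2.6), the following sympy sketch applies \(\delta/\delta X^{A}=\partial/\partial X^{A}-N^{B}_{A}\,\partial/\partial D^{B}\) to a toy scalar function on a two-dimensional example; the function and the nonlinear connection coefficients are invented for illustration and are not taken from the present theory.

```python
import sympy as sp

# Toy 2-D example: coordinates (X^1, X^2) on the base and (D^1, D^2) on the fiber.
X1, X2, D1, D2 = sp.symbols("X1 X2 D1 D2")
Xs, Ds = [X1, X2], [D1, D2]

# Hypothetical nonlinear connection coefficients N[B][A] = N^B_A(X, D) and scalar f(X, D).
N = [[D1 * X2, 0], [0, D2 * X1]]
f = X1**2 * D1 + sp.sin(X2) * D2**2

def delta(f, A):
    """delta f / delta X^A = d f/d X^A - N^B_A * d f/d D^B, as in Eq. (2.6)."""
    return sp.simplify(
        sp.diff(f, Xs[A]) - sum(N[B][A] * sp.diff(f, Ds[B]) for B in range(2))
    )

print(delta(f, 0))   # horizontal derivative along X^1
print(delta(f, 1))   # horizontal derivative along X^2
```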
In (2.1), let \[Q^{A}_{B}=\frac{\partial\bar{D}^{A}}{\partial D^{B}}=\frac{\partial\bar{X}^{A} }{\partial X^{B}}. \tag{2.9}\] A formal way of achieving (2.9) via soldering forms is given by Minguzzi [10]. Coordinate differentiation operations are expressed as follows, with \(f\) a differentiable function of arguments \((X,D)\): \[\partial_{A}f(X,D)=\frac{\partial f(X,D)}{\partial X^{A}},\quad\bar{\partial} _{A}f(X,D)=\frac{\partial f(X,D)}{\partial D^{A}};\quad\delta_{A}(\cdot)=\frac{ \delta(\cdot)}{\delta X^{A}}=\partial_{A}(\cdot)-N^{B}_{A}\bar{\partial}_{B}( \cdot). \tag{2.10}\] The special cases \(f\to X\) and \(f\to D\) are written [54, 55] \[\partial_{B}X^{A}=\frac{\partial X^{A}}{\partial X^{B}}=\delta^{A}_{B},\quad \bar{\partial}_{B}X^{A}=0;\qquad\partial_{B}D^{A}=\frac{\partial D^{A}}{\partial X ^{B}},\quad\bar{\partial}_{B}D^{A}=\delta^{A}_{B}. \tag{2.11}\] #### 2.1.2 Length, area, and volume The Sasaki metric tensor [3, 35, 104] enables a natural inner product of vectors over \(\mathcal{Z}\): \[\mathcal{G}(X,D)=\mathbf{G}(X,D)+\check{\mathbf{G}}(X,D)=G_{AB}(X,D)dX^{A}\otimes dX^{B }+\check{G}_{AB}(X,D)\delta D^{A}\otimes\delta D^{B}; \tag{2.12}\] \[\mathcal{G}_{AB}=G_{AB}=\mathbf{G}\left(\frac{\delta}{\delta X^{A}},\frac{\delta}{ \delta X^{B}}\right)=\check{G}_{AB}=\tilde{\mathbf{G}}\left(\frac{\partial}{ \partial D^{A}},\frac{\partial}{\partial D^{B}}\right)=\check{G}_{BA}=G_{BA}= \mathcal{G}_{BA}. \tag{2.13}\] Components of \(\mathbf{G}\) and \(\check{\mathbf{G}}\) are equal and simply hereafter referred to as \(G_{AB}\), but their bases span orthogonal subspaces. Components \(G_{AB}\) and inverse components \(G^{AB}\) lower and raise indices in the usual manner, and \(G\) denotes the determinant of the \(n\times n\) non-singular matrices of components of \(\mathbf{G}\) or \(\check{\mathbf{G}}\): \[G^{AB}G_{BC}=\delta^{A}_{C};\qquad G(X,D)=\det[G_{AB}(X,D)]=\det[\check{G}_{AB }(X,D)]. \tag{2.14}\] **Remark 2.1.3**.: Let \(\mathbf{V}=V^{A}\frac{\delta}{\delta X^{A}}\in H\mathcal{Z}\) be a vector field over \(\mathcal{Z}\); its magnitude at point \((X,D)\) is \(|\mathbf{V}|=\langle\mathbf{V},\mathcal{G}\mathbf{V}\rangle^{1/2}=\langle\mathbf{V},\mathbf{G}\bm {V}\rangle^{1/2}=|\mathbf{V}\cdot\mathbf{V}|^{1/2}=|V^{A}G_{AB}V^{B}|^{1/2}=|V^{A}V_{ A}|^{1/2}\geq 0\), where \(V^{A}\) and \(G_{AB}\) are evaluated at \((X,D)\). When interpreted as a block diagonal \(2n\times 2n\) matrix, the determinant of \(\mathcal{G}\) is [49, 50, 52] \[\mathcal{G}(X,D)=\det[G_{AB}(X,D)]\det[\check{G}_{AB}(X,D)]=|\det[G_{AB}(X,D)] |^{2}=|G(X,D)|^{2}. \tag{2.15}\] Let \(\mathrm{d}\mathbf{X}\) denote a differential line element of \(\mathcal{M}\) referred to non-holonomic horizontal basis \(\{\frac{\delta}{\delta X^{A}}\}\), and let \(\mathrm{d}\mathbf{D}\) denote a line element of \(\mathcal{U}\) referred to vertical basis \(\{\frac{\partial}{\partial D^{A}}\}\). Squared lengths are \[|\mathrm{d}\mathbf{X}|^{2}=\langle\mathrm{d}\mathbf{X},\mathcal{G}\mathrm{d}X\rangle=G _{AB}\mathrm{d}X^{A}\mathrm{d}X^{B},\qquad|\mathrm{d}\mathbf{D}|^{2}=\langle \mathrm{d}\mathbf{D},\mathcal{G}\mathrm{d}\mathbf{D}\rangle=G_{AB}\mathrm{d}D^{A} \mathrm{d}D^{B}. 
\tag{2.16}\] The respective volume element \(\mathrm{d}V\) and volume form \(d\Omega\) of the \(n\)-dimensional base manifold \(\mathcal{M}\), and the area form \(\Omega\) for its boundary \(\partial\mathcal{M}\), are defined by [37] \[\mathrm{d}V=\sqrt{G}\,\mathrm{d}X^{1}\mathrm{d}X^{2}\ldots\mathrm{d}X^{n}, \qquad d\Omega=\sqrt{G}\,dX^{1}\wedge dX^{2}\wedge\ldots\wedge dX^{n}, \tag{2.17}\] \[\Omega=\sqrt{B}\,dU^{1}\wedge\ldots\wedge dU^{n-1}. \tag{2.18}\] Local coordinates on the \((n-1)\)-dimensional oriented hypersurface \(\partial\mathcal{M}\) are given by parametric equations \(X^{A}=X^{A}(U^{\alpha})\,(\alpha=1,\ldots,n-1)\), \(B^{A}_{\alpha}=\frac{\partial X^{A}}{\partial U^{\alpha}}\), and \(B=\det(B^{A}_{\alpha}G_{AB}B^{B}_{\beta})\).

#### 2.1.3 Covariant derivatives

Horizontal gradients of basis vectors are determined by generic affine connection coefficients \(H^{A}_{BC}\) and \(K^{A}_{BC}\), where \(\nabla(\cdot)\) is the covariant derivative: \[\nabla_{\delta/\delta X^{B}}\frac{\delta}{\delta X^{C}}=H^{A}_{BC}\frac{ \delta}{\delta X^{A}},\qquad\nabla_{\delta/\delta X^{B}}\frac{\partial}{ \partial D^{C}}=K^{A}_{BC}\frac{\partial}{\partial D^{A}}. \tag{2.19}\] Analogously, vertical gradients employ generic connection coefficients \(V^{A}_{BC}\) and \(Y^{A}_{BC}\): \[\nabla_{\partial/\partial D^{B}}\frac{\partial}{\partial D^{C}}=V^{A}_{BC}\frac{ \partial}{\partial D^{A}},\qquad\nabla_{\partial/\partial D^{B}}\frac{\delta}{ \delta X^{C}}=Y^{A}_{BC}\frac{\delta}{\delta X^{A}}. \tag{2.20}\] For example, let \(\mathbf{V}=V^{A}\frac{\delta}{\delta X^{A}}\in H\mathcal{Z}\) be a vector field. Then the (total) covariant derivative of \(\mathbf{V}\) is \[\nabla\mathbf{V} =\nabla_{\delta/\delta X^{B}}\mathbf{V}\otimes dX^{B}+\nabla_{\partial /\partial D^{B}}\mathbf{V}\otimes\delta D^{B} \tag{2.21}\] \[=(\delta_{B}V^{A}+H^{A}_{BC}V^{C})\frac{\delta}{\delta X^{A}} \otimes dX^{B}+(\bar{\partial}_{B}V^{A}+Y^{A}_{BC}V^{C})\frac{\partial}{ \partial D^{A}}\otimes\delta D^{B}\] \[=V^{A}_{|B}\frac{\delta}{\delta X^{A}}\otimes dX^{B}+V^{A}|_{B} \frac{\partial}{\partial D^{A}}\otimes\delta D^{B}.\] Denoted by \((\cdot)_{|A}\) and \((\cdot)|_{B}\) are horizontal and vertical covariant derivatives with respect to \(\{X^{A}\}\) and \(\{D^{B}\}\). **Remark 2.1.4**.: The sequence of covariant indices on connections follows some works [4, 22, 31, 98] and is the transpose of others [2, 5, 34]. For symmetric connections, it is inconsequential. Components of the horizontal covariant derivative of metric tensor \(\mathbf{G}=G_{AB}\,dX^{A}\otimes dX^{B}\) (i.e., the horizontal part of \(\mathbf{\mathcal{G}}\)) are \[G_{AB|C}=\delta_{C}G_{AB}-H^{D}_{CA}G_{DB}-H^{D}_{CB}G_{AD}=\partial_{C}G_{AB}- N^{D}_{C}\bar{\partial}_{D}G_{AB}-H^{D}_{CA}G_{DB}-H^{D}_{CB}G_{DA}. \tag{2.22}\] The following identity is also noted for \(G=\det(G_{AB})\), a scalar density [37]: \[(\sqrt{G})_{|A}=\partial_{A}(\sqrt{G})-N^{B}_{A}\bar{\partial}_{B}(\sqrt{G})- \sqrt{G}H^{B}_{AB}. \tag{2.23}\] Christoffel symbols of the second kind for the Levi-Civita connection are \(\gamma^{A}_{BC}\), Cartan's tensor is \(C^{A}_{BC}\), and horizontal coefficients of the Chern-Rund and Cartan connections are \(\Gamma^{A}_{BC}\).
All are torsion-free (i.e., symmetric): \[\gamma^{A}_{BC} =\tfrac{1}{2}G^{AD}(\partial_{C}G_{BD}+\partial_{B}G_{CD}- \partial_{D}G_{BC})=G^{AD}\gamma_{BCD}, \tag{2.24}\] \[C^{A}_{BC} =\tfrac{1}{2}G^{AD}(\bar{\partial}_{C}G_{BD}+\bar{\partial}_{B}G_{ CD}-\bar{\partial}_{D}G_{BC})=G^{AD}C_{BCD},\] (2.25) \[\Gamma^{A}_{BC} =\tfrac{1}{2}G^{AD}(\delta_{C}G_{BD}+\delta_{B}G_{CD}-\delta_{D}G_ {BC})=G^{AD}\Gamma_{BCD}. \tag{2.26}\] **Remark 2.1.5**.: Chern-Rund-Cartan coefficients are metric-compatible for horizontal covariant differentiation of \(\mathbf{G}=G_{AB}\,dX^{A}\otimes dX^{B}\) since \(H^{A}_{BC}=\Gamma^{A}_{BC}\Rightarrow G_{AB|C}=0\) in (2.22). Similarly, Cartan's tensor is metric-compatible for vertical covariant differentiation of \(\mathbf{G}\): \(Y^{A}_{BC}=C^{A}_{BC}\Rightarrow\tilde{G}_{AB}|_{C}=0\). From direct calculations with respective (2.24), (2.25), and (2.26), traces of linear connections are related to partial gradients of \(G=\det(\mathbf{G})\): \[\partial_{A}(\ln\sqrt{G})=\gamma^{B}_{AB},\qquad\bar{\partial}_{A}(\ln\sqrt{G })=C^{B}_{AB},\qquad\delta_{A}(\ln\sqrt{G})=\tfrac{1}{2}G^{BC}\delta_{A}G_{CB} =\Gamma^{B}_{AB}. \tag{2.27}\] **Remark 2.1.6**.: \(H^{A}_{BC}=\Gamma^{A}_{BC}\Rightarrow G_{|A}=2G(\ln\sqrt{G})_{|A}=0\) and \(Y^{A}_{BC}=C^{A}_{BC}\Rightarrow G|_{A}=2G(\ln\sqrt{G})|_{A}=0\). Nonlinear connection coefficients \(N^{A}_{B}(X,D)\) admissible under (2.1) and (2.8) can be obtained in several ways. When \(T\mathcal{Z}\) is restricted to locally flat sections [3, 10], \(N^{A}_{B}=0\) in a preferred coordinate chart \(\{X,D\}\), but \(\tilde{N}^{A}_{B}\) in (2.8) do not vanish for heterogeneous transformations under which \(\partial_{B}Q^{\prime}_{K}\) is nonzero. A differentiable real Lagrangian function \(\mathsf{L}(X,D)\) can be introduced, from which \(N^{A}_{B}=\mathsf{G}^{A}_{B}\), where [5] \[\mathsf{G}^{A}_{B}=\bar{\partial}_{B}\mathsf{G}^{A}=\bar{\partial}_{B}[G^{AE}( D^{C}\bar{\partial}_{E}\partial_{C}\mathsf{L}-\partial_{E}\mathsf{L})]. \tag{2.28}\] **Remark 2.1.7**.: Let \(G_{AB}(X,D)\) be positively homogeneous of degree zero in \(D\). Then \(G^{A}\) below are components of a spray [3, 10], and canonical nonlinear connection coefficients \(N^{A}_{B}=G^{A}_{B}\) that obey (2.8) are \[G^{A}=\tfrac{1}{2}\gamma^{A}_{BC}D^{B}D^{C},\qquad G^{A}_{B}=\bar{\partial}_{B }G^{A}. \tag{2.29}\] For classification, let \(K^{A}_{BC}=H^{A}_{BC}\) and \(Y^{A}_{BC}=V^{A}_{BC}\). Then a complete generalized Finsler connection is the set \((N^{A}_{B},H^{A}_{BC},V^{A}_{BC})\). The Chern-Rund connection is \((G^{A}_{B},\Gamma^{A}_{BC},0)\). The Cartan connection is \((G^{A}_{B},\Gamma^{A}_{BC},C^{A}_{BC})\). The Berwald connection is \((G^{A}_{B},G^{A}_{BC},0)\), where \(G^{A}_{BC}=N^{A}_{BC}=\bar{\partial}_{B}N^{A}_{C}=\bar{\partial}_{B}\bar{ \partial}_{C}G^{A}\). #### 2.1.4 A divergence theorem Let \(\mathcal{M}\) be a manifold of dimension \(n\) having \((n-1)\)-dimensional boundary \(\partial\mathcal{M}\) of class \(C^{1}\), a positively oriented hypersurface. Stokes' theorem for a \(C^{1}\) differentiable \((n-1)\) form \(\boldsymbol{\alpha}\) on \(\mathcal{M}\) is \[\int_{\mathcal{M}}d\boldsymbol{\alpha}=\int_{\partial\mathcal{M}}\boldsymbol{ \alpha}. \tag{2.30}\] **Theorem 2.1.1**.: _Let \(\mathcal{M}\), \(\dim\mathcal{M}=n\), be the base manifold of a generalized Finsler bundle of total space \(\mathcal{Z}\) with positively oriented boundary \(\partial\mathcal{M}\) of class \(C^{1}\) and \(\dim\partial\mathcal{M}=n-1\). 
Let \(\boldsymbol{\alpha}(X,D)=V^{A}(X,D)N_{A}(X,D)\Omega(X,D)\) be a differentiable \((n-1)\)-form, and let \(V^{A}\) be contravariant components of vector field \(\boldsymbol{V}=V^{A}\frac{\delta}{\delta X^{A}}\in H\mathcal{Z}\). Denote the field of positive-definite metric tensor components for the horizontal subspace by \(G_{AB}(X,D)\) with \(G=\det(G_{AB})>0\). Assign a symmetric horizontal linear connection \(H^{A}_{BC}=H^{A}_{CB}\) such that \((\sqrt{G})_{|A}=0\), and assume that \(C^{1}\) functional relations \(D=D(X)\) exist for representation of the vertical fiber coordinates at each \(X\in\mathcal{M}\). Then in a coordinate chart \(\{X^{A}\}\), (2.30) is explicitly, with volume and area forms given in the second of (2.17) and (2.18),_ \[\int_{\mathcal{M}}[V^{A}_{|A}+(V^{A}C^{C}_{BC}+\bar{\partial}_{B}V^{A})D^{B}_{ ;A}]\,d\Omega=\int_{\partial\mathcal{M}}V^{A}N_{A}\,\Omega, \tag{2.31}\] _where the horizontal covariant derivative is \(V^{A}_{|A}=\delta_{A}V^{A}+H^{B}_{BA}V^{A}\), the definition \(D^{B}_{;A}=\partial_{A}D^{B}+N^{B}_{A}\) with \(\partial_{A}D^{B}=\partial D^{B}/\partial X^{A}\), and \(N_{A}\) is the unit outward normal component of \(\boldsymbol{N}=N_{A}\,dX^{A}\) to \(\partial\mathcal{M}\)._ **Proof.** The proof, not repeated here, is given in the review article [54], implied but not derived explicitly in an earlier work [55]. The proof of (2.31) [54] extends that of Rund [37], who specified a Finsler space \(F_{n}\) with Cartan connection \((G^{A}_{B},\Gamma^{A}_{BC},C^{A}_{BC})\) and metric acquired from a Finsler (Lagrangian) function \(\mathcal{F}\) (§2.1.5), to a generalized Finsler space with arbitrary positive-definite metric \(G_{AB}(X,D)\) and arbitrary nonlinear connection \(N^{A}_{B}(X,D)\). \(\square\) **Remark 2.1.8**.: Under the stipulations of Stokes' theorem, (2.31) holds when \(\mathcal{M}\) and \(\partial\mathcal{M}\) are replaced with any compact region \(\mathcal{M}^{\prime}\subset\mathcal{M}\) and the positively oriented boundary of that region. **Remark 2.1.9**.: The Chern-Rund-Cartan horizontal connection coefficients, \(H^{A}_{BC}=\Gamma^{A}_{BC}\), uniquely fulfill symmetry and metric-compatibility requirements. **Remark 2.1.10**.: A different basis and its dual over \(\mathcal{M}\) could be prescribed for \(\mathbf{V}\) and \(\mathbf{N}\) given certain stipulations [54]. However, geometric interpretation of covariant differentiation on the left side of (2.31) suggests \(\{\frac{\delta}{\delta X^{A}}\}\) should be used for \(\mathbf{V}\), by which dual basis \(\{dX^{B}\}\) should be used for \(\mathbf{N}\) to ensure invariance: \(\langle\mathbf{V},\mathbf{N}\rangle\to V^{A}N_{B}\langle\frac{\delta}{\delta X^{A}}, dX^{B}\rangle\). If instead \(\mathbf{V}\) is referred to the holonomic basis \(\{\frac{\partial}{\partial X^{A}}\}\), then \(N^{A}_{B}=0\) should be imposed for invariance with \(N_{B}dX^{B}\). As noted prior to (2.28), this choice would restrict (2.31) to homogeneous transformations of coordinates \(\{X,D\}\). As assumed in Theorem 2.1.1 [37, 54, 55], \(C^{1}\) functions \(D=D(X)\) must exist over all \(X\in\mathcal{M}\). Relations of generalized Finsler geometry [5] still apply, but additional relations emerge naturally when metric \(G_{AB}\) is interpreted as an osculating Riemannian metric [2, 44]. Specifically, an alternative representation of (2.31) is newly proven in the following.
**Corollary 2.1.1**.: _Given \(C^{1}\) functions \(D=D(X)\), let \(\tilde{G}_{AB}(X)=G_{AB}(X,D(X))\) be components of the osculating Riemannian metric derived from \(\mathbf{G}=G_{AB}dX^{A}\otimes dX^{B}\). Then (2.31) is equivalent to_ \[\int_{\mathcal{M}}\tilde{V}^{A}_{:A}\,d\Omega=\int_{\partial\mathcal{M}}\tilde {V}^{A}\tilde{N}_{A}\,\Omega, \tag{2.32}\] _where the vector \(\tilde{V}^{A}(X)=V^{A}(X,D(X))\), unit normal \(\tilde{N}_{A}(X)=N_{A}(X,D(X))\), and covariant derivative \(\tilde{V}^{A}_{:A}=\partial_{A}\tilde{V}^{A}+\tilde{\gamma}^{B}_{BA}\tilde{V} ^{A}\) with connection \(\tilde{\gamma}^{B}_{BA}(X)=\partial_{A}(\ln\sqrt{\tilde{G}(X)})=\tilde{\gamma }^{B}_{AB}(X)\) and \(\tilde{G}=\det(\tilde{G}_{AB})\)._ **Proof.** The right of (2.32) is identical to the right of (2.31) given the change of variables. In the left of (2.32), from chain-rule differentiation, vanishing (2.23), and (2.27), \[\partial_{A}\tilde{V}^{A}=\partial_{A}V^{A}+\bar{\partial}_{B}V^{A}\partial_{ A}D^{B}, \tag{2.33}\] \[\begin{split}\tilde{V}^{A}\tilde{\gamma}^{B}_{BA}& =\tilde{V}^{A}\partial_{A}(\ln\sqrt{\tilde{G}})=V^{A}[\partial_{ A}(\ln\sqrt{G})+\bar{\partial}_{B}(\ln\sqrt{G})\partial_{A}D^{B}]\\ &=V^{A}[\delta_{A}(\ln\sqrt{G})+C^{C}_{BC}(N^{B}_{A}+\partial_{ A}D^{B})]=V^{A}[H^{B}_{AB}+C^{C}_{BC}D^{B}_{;A}]=V^{A}[H^{B}_{BA}+C^{C}_{BC}D^{B}_{;A} ].\end{split} \tag{2.34}\] Adding (2.33) to (2.34) with canceling \(\pm N^{B}_{A}\bar{\partial}_{B}V^{A}\) terms then produces \[\begin{split}\tilde{V}^{A}_{:A}&=\{\partial_{A}V^ {A}+\bar{\partial}_{B}V^{A}\partial_{A}D^{B}-N^{B}_{A}\bar{\partial}_{B}V^{A} \}+\{N^{B}_{A}\bar{\partial}_{B}V^{A}+V^{A}[H^{B}_{BA}+C^{C}_{BC}D^{B}_{;A}]\} \\ &=\delta_{A}V^{A}+V^{A}H^{B}_{BA}+\bar{\partial}_{B}V^{A}(\partial _{A}D^{B}+N^{B}_{A})+V^{A}C^{C}_{BC}D^{B}_{;A}=V^{A}_{|A}+(\bar{\partial}_{B}V ^{A}+V^{A}C^{C}_{BC})D^{B}_{;A}.\end{split} \tag{2.35}\] Integrands on the left sides of (2.31) and (2.32) are thus verified to match, completing the proof. \(\square\) **Remark 2.1.11**.: Coefficients of the Levi-Civita connection of \(\tilde{G}_{AB}\) satisfy the symmetry and metric-compatibility requirements used to prove (2.32): \[\tilde{\gamma}^{A}_{BC}=\tfrac{1}{2}\tilde{G}^{AD}(\partial_{C}\tilde{G}_{BD}+ \partial_{B}\tilde{G}_{CD}-\partial_{D}\tilde{G}_{BC})=\tilde{G}^{AD}\tilde{ \gamma}_{BCD}. \tag{2.36}\] **Remark 2.1.12**.: Given (2.36), the form of the divergence theorem in (2.32) appears analogous to that of a Riemannian manifold with boundary. It is not identical, however, since the non-holonomic basis \(\{\frac{\delta}{\delta X^{A}}\}\) is used for \(\mathbf{V}\). As in Remark 2.1.3, the holonomic basis \(\{\frac{\partial}{\partial X^{A}}\}\) could be used in a preferred chart \(\{X,D(X)\}\) wherein \(N^{A}_{B}=0\); under such special conditions the distinction vanishes. #### 2.1.5 Pseudo-Finsler and Finsler spaces Preceding developments hold for generalized Finsler geometry, by which the metric tensor components need not be derived from a Lagrangian [5, 6, 36]. Subclasses of generalized Finsler geometry do require such a Lagrangian function, denoted by \(\mathcal{L}\). Let \(\mathcal{Z}=T\mathcal{M}\backslash 0\) (i.e., the tangent bundle of \(\mathcal{M}\) excluding zero section \(D=0\)). Let \(\mathcal{L}(X,D):\mathcal{Z}\to\mathbb{R}\) be positive homogeneous of degree two in \(D\), and as many times differentiable as needed with respect to \(\{X^{A}\}\) and \(\{D^{A}\}\) (\(C^{\infty}\) is often assumed [3], but \(C^{5}\) is usually sufficient [10]). 
Then \((\mathcal{M},\mathcal{L})\) is a pseudo-Finsler space when the \(n\times n\) matrix of components \(G_{AB}\) is both non-singular over \(\mathcal{Z}\) and obtained from Lagrangian \(\mathcal{L}\): \[G_{AB}(X,D)=\bar{\partial}_{A}\bar{\partial}_{B}\mathcal{L}(X,D),\qquad \mathcal{L}=\tfrac{1}{2}G_{AB}D^{A}D^{B}. \tag{2.37}\] A Finsler space \((\mathcal{M},\mathcal{F})\), also denoted by \(F_{n}\) where \(n=\dim\mathcal{M}\), is a pseudo-Finsler space for which \(G_{AB}(X,D)\) is always positive definite over \(\mathcal{Z}\). For a Finsler space \(F_{n}\)[2, 3], the fundamental scalar Finsler function \(\mathcal{F}(X,D)\) is introduced, positive homogeneous of degree one in \(D\): \[\begin{split}\mathcal{F}(X,D)=\sqrt{2\mathcal{L}(X,D)}& =|G_{AB}(X,D)D^{A}D^{B}|^{1/2}\\ &\leftrightarrow\quad\mathcal{L}(X,D)=\tfrac{1}{2}\mathcal{F}^{2} (X,D);\qquad\mathcal{F}(X,D)>0\,\forall D\neq 0.\end{split} \tag{2.38}\] In Finsler geometry [2, 3, 5], it follows that \(\mathsf{L}=\mathcal{L}\) and \(\mathsf{G}^{A}=G^{A}\) in (2.28) and (2.29), and that \[G_{AB}=\tfrac{1}{2}\bar{\partial}_{A}\bar{\partial}_{B}(\mathcal{F}^{2}),\quad G _{B}^{A}=\gamma_{BC}^{A}D^{C}-C_{BC}^{A}\gamma_{DE}^{C}D^{D}D^{E}=\Gamma_{BC}^ {A}D^{C},\quad C_{ABC}=\tfrac{1}{4}\bar{\partial}_{A}\bar{\partial}_{B}\bar{ \partial}_{C}(\mathcal{F}^{2}). \tag{2.39}\] Reductions and embeddings for Finsler spaces are discussed elsewhere [2, 3, 10, 54, 102, 103]. ### Spatial configuration A description on a fiber bundle analogous to that of SS2.1 is used for the spatial configuration (i.e., current configuration) of a body. A differential manifold \(\mathfrak{m}\) of dimension \(n\) represents a (deformed) physical body, with base space embedded in ambient Euclidean space of dimension \(N\geq n\). **Remark 2.2.1**.: Definitions in SS2.2 parallel those of SS2.1, where lower-case indices and symbols, with the exception of connections, distinguish current-configurational quantities. Let \(x\in\mathfrak{m}\) denote the spatial image of a body particle or point with \(\{x^{a}\}(a=1,2,\ldots,n)\) being a coordinate chart on \(\mathfrak{m}\). At each spatial point is a vector \(\boldsymbol{d}\), and chart(s) of secondary coordinates \(\{d^{k}\}(k=1,2,\ldots,m)\) are assigned over \(\mathfrak{m}\). Define \(\mathsf{z}=(\mathfrak{z},\pi,\mathfrak{m},\mathfrak{u})\) as a fiber bundle of total space \(\mathfrak{z}\) (dimension \(n+m\)), where \(\pi:\mathfrak{z}\to\mathfrak{m}\) is the projection and \(\mathfrak{u}=\mathfrak{z}_{x}=\pi^{-1}(x)\) is the fiber at \(x\). A chart covering a region of \(\mathfrak{z}\) is \(\{x^{a},d^{k}\}\). Each fiber is an \(n\)-dimensional vector space, so \((\mathfrak{z},\pi,\mathfrak{m})\) constitutes a vector bundle. The global mapping from referential to spatial base manifolds is \(\varphi\), referred to herein as the motion. The global mapping from referential to current total spaces is the set \(\Xi=(\varphi,\theta)\), where in general \(\varphi(X,D):\mathcal{M}\to\mathfrak{m}\) and \(\Xi(X,D):\mathcal{Z}\to\mathfrak{z}\). Functional forms of \(\varphi(X,D)\) and \(\Xi(X,D)\) vary in the literature [54]; details are discussed in SS3.1. Mappings and field variables can be made time \((t)\) dependent via introduction of independent parameter \(t\)[50, 58, 59]. Explicit time dependence is excluded from the current theoretical presentation that focuses on equilibrium configurations [55, 62]. 
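Before specializing the spatial description, the pseudo-Finsler relations (2.37)-(2.39), which carry over verbatim to the spatial metric \(g_{ab}\) introduced below, can be checked symbolically. The following sketch assumes sympy and a hypothetical quartic Finsler function with no \(X\) dependence; it is illustrative only and is not the constitutive metric adopted later for skin.

```python
import sympy as sp

D1, D2 = sp.symbols('D1 D2', positive=True)
D = [D1, D2]

# Hypothetical fundamental Finsler function, positive homogeneous of degree one in D
# (a quartic-root example chosen only for illustration).
F = (D1**4 + D2**4)**sp.Rational(1, 4)

# Metric components G_AB = (1/2) d^2(F^2)/dD^A dD^B, cf. (2.37) and (2.39).
G = sp.Matrix(2, 2, lambda A, B: sp.Rational(1, 2) * sp.diff(F**2, D[A], D[B]))

# Euler relation G_AB D^A D^B = F^2 = 2L, cf. (2.37) and (2.38).
quad = sum(G[A, B] * D[A] * D[B] for A in range(2) for B in range(2))
assert sp.simplify(quad - F**2) == 0

# Degree-zero homogeneity of G_AB in D, illustrated with scale factor 2.
G_scaled = G.subs({D1: 2 * D1, D2: 2 * D2})
assert (G_scaled - G).applyfunc(sp.simplify) == sp.zeros(2, 2)

# Cartan tensor C_ABC = (1/4) d^3(F^2)/dD^A dD^B dD^C, cf. (2.39); its contraction
# with D vanishes as a consequence of the homogeneity of F.
C = [[[sp.Rational(1, 4) * sp.diff(F**2, D[a], D[b], D[c]) for c in range(2)]
      for b in range(2)] for a in range(2)]
assert all(sp.simplify(sum(C[a][b][c] * D[c] for c in range(2))) == 0
           for a in range(2) for b in range(2))
```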
A local trivialization diagram analogous to that of the reference configuration commutes [5].

#### 2.2.1 Basis vectors and nonlinear connections

Coordinate transformations from \(\{x,d\}\) to \(\{\tilde{x},\tilde{d}\}\) on \(\mathfrak{z}\) are of the same general form as (2.1): \[\tilde{x}^{a}=\tilde{x}^{a}(x),\qquad \tilde{d}^{j}(x,d)=q_{k}^{j}(x)d^{k}, \tag{2.40}\] where \(q_{k}^{j}\) is non-singular and differentiable, with inverse obeying \(\tilde{q}_{k}^{i}q_{j}^{k}=\delta_{j}^{i}\). The holonomic basis for the tangent bundle \(T\mathfrak{z}\) is \(\{\frac{\partial}{\partial x^{a}},\frac{\partial}{\partial d^{k}}\}\), and the holonomic basis for the cotangent bundle \(T^{*}\mathfrak{z}\) is \(\{dx^{a},dd^{k}\}\). Non-holonomic basis vectors \(\{\frac{\delta}{\delta x^{a}}\}\) and \(\{\delta d^{k}\}\) that transform traditionally under \(x\to\tilde{x}\) are \[\frac{\delta}{\delta x^{a}}=\frac{\partial}{\partial x^{a}}-N_{a}^{k}\frac{ \partial}{\partial d^{k}},\qquad\delta d^{k}=dd^{k}+N_{b}^{k}dx^{b}. \tag{2.41}\] The set \(\{\frac{\delta}{\delta x^{a}},\frac{\partial}{\partial d^{k}}\}\) is used as a local basis on \(T\mathfrak{z}\), and \(\{dx^{a},\delta d^{k}\}\) on \(T^{*}\mathfrak{z}\). Tangent bundle \(T\mathfrak{z}\) with nonlinear connection admits an orthogonal decomposition into vertical vector bundle and horizontal distribution: \(T\mathfrak{z}=V\mathfrak{z}\oplus H\mathfrak{z}\). The transformation law of the spatial nonlinear connection is \[\tilde{N}_{a}^{j}=\left(q_{k}^{j}N_{b}^{k}-\frac{\partial q_{k}^{j}}{\partial x ^{b}}d^{k}\right)\frac{\partial x^{b}}{\partial\tilde{x}^{a}}. \tag{2.42}\] Subsequently, take \(m=n\). Indices \(j,k,\ldots\) are replaced by \(a,b,\ldots\), summation over duplicate indices runs from \(1\) to \(n\), and in (2.40) the \(d^{a}\) transform like components of a contravariant vector field over \(\mathfrak{m}\): \[q_{b}^{a}=\frac{\partial\tilde{d}^{a}}{\partial d^{b}}=\frac{\partial\tilde{x }^{a}}{\partial x^{b}}. \tag{2.43}\] Spatial coordinate differentiation is described by the compact notation \[\partial_{a}f(x,d)=\frac{\partial f(x,d)}{\partial x^{a}},\quad\bar{\partial} _{a}f(x,d)=\frac{\partial f(x,d)}{\partial d^{a}};\qquad\delta_{a}(\cdot)= \frac{\delta(\cdot)}{\delta x^{a}}=\partial_{a}(\cdot)-N_{a}^{b}\bar{\partial }_{b}(\cdot); \tag{2.44}\] \[\partial_{b}x^{a}=\frac{\partial x^{a}}{\partial x^{b}}=\delta_{b}^{a},\quad \bar{\partial}_{b}x^{a}=0;\qquad\partial_{b}d^{a}=\frac{\partial d^{a}}{ \partial x^{b}},\quad\bar{\partial}_{b}d^{a}=\delta_{b}^{a}. \tag{2.45}\]

#### 2.2.2 Length, area, and volume

The Sasaki metric tensor that produces an inner product of vectors over \(\mathfrak{z}\) is [104] \[\mathfrak{g}(x,d)=\boldsymbol{g}(x,d)+\check{\boldsymbol{g}}(x,d)=g_{ ab}(x,d)dx^{a}\otimes dx^{b}+\check{g}_{ab}(x,d)\delta d^{a}\otimes\delta d^{b}; \tag{2.46}\] \[\mathfrak{g}_{ab}=g_{ab}=\boldsymbol{g}\left(\frac{\delta}{ \delta x^{a}},\frac{\delta}{\delta x^{b}}\right)=\check{g}_{ab}=\check{ \boldsymbol{g}}\left(\frac{\partial}{\partial d^{a}},\frac{\partial}{\partial d ^{b}}\right)=\check{g}_{ba}=g_{ba}. \tag{2.47}\] Denote by \(\mathrm{d}\boldsymbol{x}\) a differential line element of \(\mathfrak{m}\) referred to non-holonomic horizontal basis \(\{\frac{\delta}{\delta x^{a}}\}\) and \(\mathrm{d}\boldsymbol{d}\) a differential line element of \(\mathfrak{u}\) referred to vertical basis \(\{\frac{\partial}{\partial d^{a}}\}\).
Their squared lengths are \[|\mathrm{d}\boldsymbol{x}|^{2}=\langle\mathrm{d}\boldsymbol{x},\mathfrak{g} \mathrm{d}\boldsymbol{x}\rangle=g_{ab}\mathrm{d}x^{a}\mathrm{d}x^{b},\qquad| \mathrm{d}\boldsymbol{d}|^{2}=\langle\mathrm{d}\boldsymbol{d},\mathfrak{g} \mathrm{d}\boldsymbol{d}\rangle=g_{ab}\mathrm{d}d^{a}\mathrm{d}d^{b}. \tag{2.48}\] The scalar volume element and volume form of \(\mathfrak{m}\), where \(\dim\mathfrak{m}=n\), and the area form of \(\partial\mathfrak{m}\), the \((n-1)\)-dimensional boundary of a compact region of \(\mathfrak{m}\), are respectively \[\mathrm{d}v=\sqrt{g}\,\mathrm{d}x^{1}\mathrm{d}x^{2}\ldots\mathrm{d}x^{n}, \quad d\omega=\sqrt{g}\,dx^{1}\wedge dx^{2}\wedge\ldots\wedge dx^{n},\quad \omega=\sqrt{b}\,du^{1}\wedge\ldots\wedge du^{n-1}. \tag{2.49}\] The surface embedding in \(\mathfrak{m}\) is \(x^{a}=x^{a}(u^{\alpha})\,(\alpha=1,\ldots,n-1)\), \(b^{a}_{\alpha}=\frac{\partial x^{a}}{\partial u^{\alpha}}\), and \(b=\det(b^{a}_{\alpha}g_{ab}b^{b}_{\beta})\).

#### 2.2.3 Covariant derivatives

Denote by \(\nabla\) the covariant derivative. Horizontal gradients of basis vectors are determined by coefficients \(H^{a}_{bc}\) and \(K^{a}_{bc}\), and vertical gradients by \(V^{a}_{bc}\) and \(Y^{a}_{bc}\): \[\nabla_{\delta/\delta x^{b}}\frac{\delta}{\delta x^{c}}=H^{a}_{bc}\frac{\delta }{\delta x^{a}},\qquad\nabla_{\delta/\delta x^{b}}\frac{\partial}{\partial d^ {c}}=K^{a}_{bc}\frac{\partial}{\partial d^{a}}; \tag{2.50}\] \[\nabla_{\partial/\partial d^{b}}\frac{\partial}{\partial d^{c}}=V^{a}_{bc} \frac{\partial}{\partial d^{a}},\qquad\nabla_{\partial/\partial d^{b}}\frac{ \delta}{\delta x^{c}}=Y^{a}_{bc}\frac{\delta}{\delta x^{a}}. \tag{2.51}\] By example, covariant derivative operations over \(\mathfrak{z}\) are invoked like (2.21) for \(\boldsymbol{V}=V^{a}\frac{\delta}{\delta x^{a}}\in H\mathfrak{z}\): \[\nabla\boldsymbol{V}=\nabla_{\delta/\delta x^{b}}\boldsymbol{V}\otimes dx^{b} +\nabla_{\partial/\partial d^{b}}\boldsymbol{V}\otimes\delta d^{b}=V^{a}_{|b} \frac{\delta}{\delta x^{a}}\otimes dx^{b}+V^{a}|_{b}\frac{\partial}{\partial d ^{a}}\otimes\delta d^{b}. \tag{2.52}\] Herein \((\cdot)_{|a}\) and \((\cdot)|_{b}\) denote horizontal and vertical covariant differentiation with respect to coordinates \(x^{a}\) and \(d^{b}\). Let \(\gamma^{a}_{bc}\) be coefficients of the Levi-Civita connection on \(\mathfrak{z}\), \(C^{a}_{bc}\) coefficients of the Cartan tensor on \(\mathfrak{z}\), and \(\Gamma^{a}_{bc}\) horizontal coefficients of the Chern-Rund and Cartan connections on \(\mathfrak{z}\): \[\gamma^{a}_{bc}=\tfrac{1}{2}g^{ad}\left(\partial_{c}g_{bd}+\partial_{b}g_{cd} -\partial_{d}g_{bc}\right)=g^{ad}\gamma_{bcd}, \tag{2.53}\] \[C^{a}_{bc}=\tfrac{1}{2}g^{ad}\left(\bar{\partial}_{c}g_{bd}+\bar{\partial}_{b}g_{cd}- \bar{\partial}_{d}g_{bc}\right)=g^{ad}C_{bcd}, \tag{2.54}\] \[\Gamma^{a}_{bc}=\tfrac{1}{2}g^{ad}\left(\delta_{c}g_{bd}+\delta_{b}g_{cd}- \delta_{d}g_{bc}\right)=g^{ad}\Gamma_{bcd}. \tag{2.55}\]

#### 2.2.4 A divergence theorem

Let \(\mathfrak{m}\), \(\dim\mathfrak{m}=n\), be the base manifold of a generalized Finsler bundle of total space \(\mathfrak{z}\) with positively oriented \((n-1)\)-dimensional \(C^{1}\) boundary \(\partial\mathfrak{m}\). Let \(\boldsymbol{\alpha}(x,d)=V^{a}(x,d)n_{a}(x,d)\omega(x,d)\) be a differentiable \((n-1)\)-form, and let \(V^{a}\) be contravariant components of vector field \(\boldsymbol{V}=V^{a}\frac{\delta}{\delta x^{a}}\in H\mathfrak{z}\).
Denote the field of components of the positive-definite metric tensor on the horizontal subspace by \(g_{ab}(x,d)\) with \(g=\det(g_{ab})>0\). Assign horizontal connection \(H^{a}_{bc}=H^{a}_{cb}\) such that \((\sqrt{g})_{|a}=0\) (e.g., \(H^{a}_{bc}=\Gamma^{a}_{bc}\)), and assume that \(C^{1}\) functional relations \(d=d(x)\) exist for representation of the vertical fiber coordinates \(\forall x\in\mathfrak{m}\). Then in a chart \(\{x^{a}\}\), with volume and area forms given in (2.49), (2.30) is \[\int_{\mathfrak{m}}[V^{a}_{|a}+(V^{a}C^{c}_{bc}+\bar{\partial}_{b}V^{a})d^{b}_ {;a}]\,d\omega=\int_{\partial\mathfrak{m}}V^{a}n_{a}\,\omega, \tag{2.56}\] with \(n_{a}\) the unit outward normal on \(\partial\mathfrak{m}\), \(V^{a}_{|a}=\delta_{a}V^{a}+V^{a}H^{b}_{ba}\), and \(d^{b}_{;a}=\partial_{a}d^{b}+N^{b}_{a}\). The proof matches that of Theorem 2.1.1 upon changes of variables; a corollary akin to Corollary 2.1.1 also holds.

## 3 Finsler-geometric continuum mechanics

The original theory of Finsler-geometric continuum mechanics [55, 56] accounts for finite deformations under conditions of static equilibrium for forces conjugate to material particle motion and state vector evolution. Subtle differences exist among certain assumptions for different instantiations, incrementally revised in successive works. Most differences are explained in a review [54].

### Motion and deformation

Particle motion \(\varphi:\mathcal{M}\to\mathfrak{m}\) and its inverse \(\Phi:\mathfrak{m}\to\mathcal{M}\) are the one-to-one and \(C^{3}\)-differentiable functions \[x^{a}=\varphi^{a}(X),\qquad X^{A}=\Phi^{A}(x),\qquad(a,A=1,2,\ldots,n) \tag{3.1}\] with \((\Phi\circ\varphi)(X)=X\). Total motion is \(\Xi:\mathcal{Z}\to\mathfrak{z}\), where \(\Xi=(\varphi,\theta)\). Refer to Fig. 1. **Remark 3.1.1**.: Vector field \(\boldsymbol{D}\) and its spatial counterpart \(\boldsymbol{d}\) are referred to as internal state vector fields or director vector fields, but neither vector need be of unit length. These are assigned physical interpretations pertinent to the specific class of mechanics problem under consideration [54]. Motions of state vectors are defined as \(C^{3}\) functions: \[d^{a}=\theta^{a}(X,D),\qquad D^{A}=\Theta^{A}(x,d),\qquad(a,A=1,2,\ldots,n). \tag{3.2}\] **Remark 3.1.2**.: Fiber dimensions are \(m=\dim\mathcal{U}=\dim\mathfrak{u}=\dim\mathfrak{m}=\dim\mathcal{M}=n\). Extension for \(m\neq n\) is conceivable [5, 45]. However, setting \(m=n\) enables a more transparent physical interpretation of the vertical vector bundle, and it allows use of (2.9) and (2.43) that simplify notation and calculations. For usual three-dimensional solid bodies, \(n=3\) as implied in parts of prior work [54], but other dimensions are permissible (e.g., two-dimensional membranes (\(n=2\)) and one-dimensional rods (\(n=1\))). From (3.1) and (3.2), transformation formulae for partial differentiation operations between configurations of a differentiable function \(h(x,d):\mathfrak{z}\to\mathbb{R}\) are \[\frac{\partial(h\circ\Xi)}{\partial X^{A}}=\frac{\partial h}{\partial x^{a}} \frac{\partial\varphi^{a}}{\partial X^{A}}+\frac{\partial h}{\partial d^{a}} \frac{\partial\theta^{a}}{\partial X^{A}},\qquad\frac{\partial(h\circ\Xi)}{ \partial D^{A}}=\frac{\partial h}{\partial d^{a}}\frac{\partial\theta^{a}}{ \partial D^{A}}. \tag{3.3}\] **Remark 3.1.3**.: Unlike Chapter 8 of Bejancu [5], basis vectors need not convect from \(T\mathcal{Z}\) to \(T\mathfrak{z}\) with the motion \(\Xi\).
Rather, as in classical continuum field theories of mechanics [20, 105], basis vectors, as well as metric tensors and connection coefficients, can be assigned independently for configuration spaces \(\mathcal{Z}\) and \(\mathfrak{z}\). As such, \((\frac{\delta}{\delta x^{a}},\frac{\partial}{\partial d^{a}},g_{ab},H^{a}_{ bc},K^{a}_{bc},V^{a}_{bc},Y^{a}_{bc},N^{a}_{b})\) need not be obtained from \((\frac{\delta}{\delta X^{A}},\frac{\partial}{\partial D^{A}},G_{AB},H^{A}_{ BC},K^{A}_{BC},V^{A}_{BC},Y^{A}_{BC},N^{A}_{B})\) via push-forward operations by \(\Xi\). But choosing \(N^{a}_{b}\) as the push-forward of \(N^{A}_{B}\)[5] is beneficial since \[N^{b}_{a}\frac{\partial\varphi^{a}}{\partial X^{A}}=N^{B}_{A}\frac{\partial \theta^{b}}{\partial D^{B}}-\frac{\partial\theta^{b}}{\partial X^{A}}\quad \Rightarrow\quad\frac{\delta(h\circ\Xi)}{\delta X^{A}}=\frac{\delta h}{ \delta x^{a}}\frac{\partial\varphi^{a}}{\partial X^{A}}=\frac{\delta h}{ \delta x^{a}}\frac{\delta\varphi^{a}}{\delta X^{A}}=\frac{\delta h}{\delta x^ {a}}F^{a}_{A}, \tag{3.4}\] by which \(\delta_{A}(\cdot)=F^{a}_{A}\delta_{a}(\cdot)\) simply relates the delta derivative across configurations. As implied in (3.4), deformation gradient field \(\mathbf{F}:H\mathcal{Z}\to H\mathfrak{z}\) is defined as the two-point tensor field \[\mathbf{F}=\frac{\delta\mathbf{\varphi}}{\delta X}=\frac{\delta\varphi^{a}}{\delta X ^{A}}\frac{\delta}{\delta x^{a}}\otimes dX^{A}=\frac{\partial\varphi^{a}}{ \partial X^{A}}\frac{\delta}{\delta x^{a}}\otimes dX^{A}, \tag{3.5}\] with (3.1) used in the rightmost equality. The inverse deformation gradient \(\mathbf{f}:H\mathfrak{z}\to H\mathcal{Z}\) is defined as the following: \[\mathbf{f}=\frac{\delta\mathbf{\Phi}}{\delta x}=\frac{\delta\Phi^{A}}{\delta x^{a}} \frac{\delta}{\delta X^{A}}\otimes dx^{a}=\frac{\partial\Phi^{A}}{\partial x ^{a}}\frac{\delta}{\delta X^{A}}\otimes dx^{a}. \tag{3.6}\] **Remark 3.1.4**.: Accordingly, \(F^{a}_{A}(X)f^{A}_{b}(x(X))=\delta^{a}_{b}\) and \(F^{a}_{A}(X)f^{B}_{a}(x(X))=\delta^{B}_{A}\). Usual stipulations on regularity [22] of motions (3.1) apply such that \(\det(F^{a}_{A})>0\) and \(\det(f^{A}_{a})>0\). Transformation equations relating differential line elements of (2.16) and (2.48) follow: \[\mathrm{d}\mathbf{x}=\mathrm{d}x^{a}\frac{\delta}{\delta x^{a}}=F^{a}_{A}\mathrm{ d}X^{A}\frac{\delta}{\delta x^{a}}=\mathbf{F}\mathrm{d}\mathbf{X},\qquad\mathrm{d}\mathbf{X}= \mathrm{d}X^{A}\frac{\delta}{\delta X^{A}}=f^{A}_{a}\mathrm{d}x^{a}\frac{ \delta}{\delta X^{A}}=\mathbf{f}\mathrm{d}\mathbf{x}. \tag{3.7}\] Combining (3.7) with the definition of the determinant, (2.17), and (2.49), volume elements and volume forms transform between reference and spatial representations on \(\mathcal{M}\) and \(\mathfrak{m}\), with \(J=\det(F^{a}_{A})\sqrt{g/G}\) and \(j=1/J=J^{-1}>0\), via (e.g., [22, 98]) \[\mathrm{d}v=J\mathrm{d}V=[\det(F^{a}_{A})\sqrt{g/G}]\mathrm{d}V,\qquad\mathrm{ d}V=j\mathrm{d}v=[\det(f^{A}_{a})\sqrt{G/g}]\mathrm{d}v, \tag{3.8}\] \[\varphi^{*}d\omega=Jd\Omega,\qquad\Phi^{*}d\Omega=jd\omega. \tag{3.9}\] Strain can be quantified using symmetric Lagrangian deformation tensor \(\mathbf{C}=C_{AB}dX^{A}\otimes dX^{B}\): \[|\mathrm{d}\mathbf{x}|^{2}=F^{a}_{A}g_{ab}F^{b}_{B}\mathrm{d}X^{A}\mathrm{d}X^{B}=C _{AB}\mathrm{d}X^{A}\mathrm{d}X^{B}=\langle\mathrm{d}\mathbf{X},\mathbf{C}\mathrm{d}\mathbf{X} \rangle,\quad C_{AB}=F^{a}_{A}g_{ab}F^{b}_{B}=G_{AC}C^{C}_{B}=C_{BA}. \tag{3.10}\] From (3.8), \(\det(C_{AB})=\det(C^{C}_{A}G_{CB})=J^{2}G\).
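A minimal symbolic check of the kinematic relations (3.5), (3.8), and (3.10) is given below, assuming sympy, Euclidean metrics \(g_{ab}=G_{AB}=\delta_{AB}\) (so delta- and partial-derivatives coincide), and a hypothetical 2-D simple-shear motion with illustrative parameter \(k\); it is a sketch, not part of the constitutive development.

```python
import sympy as sp

X1, X2, k = sp.symbols('X1 X2 k', real=True)

# Hypothetical 2-D motion (simple shear): x = varphi(X), cf. (3.1).
phi = sp.Matrix([X1 + k * X2, X2])

# Deformation gradient F^a_A = d varphi^a / d X^A, cf. (3.5), with Euclidean metrics.
F = phi.jacobian(sp.Matrix([X1, X2]))
g = sp.eye(2)
G = sp.eye(2)

# Deformation tensor C_AB = F^a_A g_ab F^b_B and Jacobian J, cf. (3.8) and (3.10).
C = F.T * g * F
J = F.det() * sp.sqrt(g.det() / G.det())

# Check det(C_AB) = J^2 det(G_AB), the relation quoted after (3.10).
assert sp.simplify(C.det() - J**2 * G.det()) == 0
print(C, J)
```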
Then from the first of (2.50) and (3.4) [56, 62], \[\nabla_{\delta/\delta X^{A}}\frac{\delta}{\delta x^{c}}=\frac{\delta x^{a}}{ \delta X^{A}}\nabla_{\delta/\delta x^{a}}\frac{\delta}{\delta x^{c}}=\delta_{A} \varphi^{a}H^{b}_{ac}\frac{\delta}{\delta x^{b}}=F^{a}_{A}H^{b}_{ac}\frac{\delta }{\delta x^{b}}. \tag{3.11}\] Similarly, the second of (2.50) gives \(\nabla_{\delta/\delta X^{A}}\frac{\partial}{\partial d^{c}}=F^{a}_{A}K^{b}_{ac} \frac{\partial}{\partial d^{b}}\), though this is not needed later. ### Particular assumptions #### 3.2.1 Director fields The divergence theorem (2.31) is used to derive Euler-Lagrange equations for equilibrium of stress and state vector fields in SS3.3.3. Its derivation [37, 54] requires existence of functional relations \[D^{A}=D^{A}(X),\qquad d^{a}=d^{a}(x), \tag{3.12}\] where the second of (3.12) is implied by the first under a consistent change of variables per (2.56) and (3.1). Existence of the following functional forms emerges from (3.1), (3.2), and (3.12): \[d^{a}=\theta^{a}(X,D)=\hat{\theta}^{a}(X,D(X))=\bar{\theta}^{a}(X),\qquad D^{A} =\Theta^{A}(x,d)=\hat{\Theta}^{A}(x,d(x))=\bar{\Theta}^{A}(x). \tag{3.13}\] **Remark 3.2.1**.: In some prior work [55, 56], alternative representations of particle motions incorporating state vector fields as arguments have been posited. These likely more complex alternatives are admissible but inessential [54]. The current theory, like some others [44, 50, 52], does not always require \(\theta\) or \(\Theta\) be specified explicitly, though use of the former is implied later in SS5. The canonical, and pragmatic, choice for \(\theta(X,D)\), given field \(D^{A}(X)\), is [62] \[d=D\circ\Phi\quad\Leftrightarrow\quad d(x)=D(\Phi(x))\quad\Rightarrow\quad \theta^{a}(D(X))=D^{A}(X)\langle\delta d^{a},\frac{\partial}{\partial D^{A}} \rangle=D^{A}(X)\delta^{a}_{A}, \tag{3.14}\] where \(\delta^{a}_{A}\) is viewed as a shifter between \(V_{\mathfrak{Z}}\) and \(V\mathcal{Z}\). Accordingly, \(\delta^{a}_{A}=1\,\forall\,a=A,\,\delta^{a}_{A}=0\,\forall\,a\neq A\). **Remark 3.2.2**.: Invoking (3.14), \(\partial_{A}\theta^{a}(D(X))=0\) by definitions of \(\theta^{a}=\theta^{a}(D(X))\) and \(\partial_{A}(\cdot)=(\partial(\cdot)/\partial X^{A})|_{D=\mathrm{const}}\) in (2.10). Also, \(\bar{\partial}_{A}\theta^{a}(D(X))=\delta^{a}_{A}\) by (2.11) and (3.14). Then (3.4) reduces to \(N^{a}_{b}=N^{A}_{B}f^{B}_{b}\delta^{a}_{A}\), and conveniently for the degenerate case: \(N^{A}_{B}=0\Leftrightarrow N^{a}_{b}=0\). **Figure 1** Total deformation \(\Xi=(\varphi,\theta):\mathcal{Z}\to\mathfrak{z}\) of material manifold \(\mathcal{M}\) (dim \(\mathcal{M}=n=m=2\)) with base-space coordinates \(\{X^{A}\}\) to spatial representation \(\mathfrak{m}\) with base-space coordinates \(\{x^{a}\}\). Internal structure fields are \((D,d)\) on total spaces \((\mathcal{Z},\mathfrak{z})\); arrows depict local components of state vectors \(\mathbf{D}\) and \(\mathbf{d}\) for neighborhoods centered at \(X\) and \(x\). #### 3.2.2 Connections and metrics Use of (2.31) for any admissible \(G_{AB}(X,D)\) necessitates a symmetric linear connection horizontally compatible with \(G_{AB}\), meaning \(H^{A}_{BC}=\Gamma^{A}_{BC}\), with \(\Gamma^{A}_{BC}\) Chern-Rund-Cartan coefficients of (2.26). The simplest admissible choice of vertical coefficients is \(V^{A}_{BC}=0\), corresponding to the Chern-Rund connection [2, 3, 106]. 
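To make the role of these connection choices concrete, the following sketch (assuming sympy, a hypothetical diagonal \(D\)-dependent metric, and symbolic nonlinear-connection coefficients) evaluates the Chern-Rund horizontal coefficients of (2.26) and confirms the horizontal metric compatibility noted in Remark 2.1.5. It is illustrative only; the metric is not the one used for skin later.

```python
import sympy as sp

n = 2
X = sp.symbols('X1 X2', real=True)
D = sp.symbols('D1 D2', real=True)

# Hypothetical D-dependent metric G_AB (diagonal, positive definite) and
# arbitrary nonlinear-connection symbols N^A_B.
G = sp.diag(1 + D[0]**2, 1 + D[1]**2)
Ginv = G.inv()
N = sp.Matrix(n, n, lambda A, B: sp.Symbol(f'N{A + 1}{B + 1}'))

def delta(expr, C):
    # delta-derivative of (2.10): partial/partialX^C - N^E_C partial/partialD^E
    return sp.diff(expr, X[C]) - sum(N[E, C] * sp.diff(expr, D[E]) for E in range(n))

# Gamma_BCD of (2.26) (all indices lowered), then Gamma^A_BC = G^{AD} Gamma_BCD.
Gam_low = [[[sp.Rational(1, 2) * (delta(G[B, E], C) + delta(G[C, E], B) - delta(G[B, C], E))
             for E in range(n)] for C in range(n)] for B in range(n)]
Gam = [[[sum(Ginv[A, E] * Gam_low[B][C][E] for E in range(n))
         for C in range(n)] for B in range(n)] for A in range(n)]

# Metric compatibility of Remark 2.1.5: with H = Gamma, G_AB|C of (2.22) vanishes.
for A in range(n):
    for B in range(n):
        for C in range(n):
            GABC = delta(G[A, B], C) \
                   - sum(Gam[E][C][A] * G[E, B] + Gam[E][C][B] * G[A, E] for E in range(n))
            assert sp.simplify(GABC) == 0
```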
The canonical choice \(N^{A}_{B}=G^{A}_{B}\) of (2.29) also corresponds to the Chern-Rund connection, but it is inessential for generalized Finsler geometry. Choices \(K^{A}_{BC}=H^{A}_{BC}\)[55, 56] and \(Y^{A}_{BC}=V^{A}_{BC}\) are logical given (2.9), but these are not mandatory. Setting \(K^{A}_{BC}=0\), providing compatibility with Cartesian metric \(\delta_{AB}\), may also be of utility [54]. Given a Sasaki metric \(\boldsymbol{\mathcal{G}}\) of (2.12) with \(G_{AB}\) of (2.13), pragmatic connection coefficients over \(\mathcal{Z}\) are summarized in (3.15); complementary connections over \(\mathfrak{z}\) given Sasaki metric \(\boldsymbol{g}\) with \(g_{ab}(x,d)\) of (2.47) follow thereafter: \[H^{A}_{BC}=\Gamma^{A}_{BC},\quad V^{A}_{BC}=Y^{A}_{BC}=0;\quad H^{a}_{bc}= \Gamma^{a}_{bc},\quad V^{a}_{bc}=Y^{a}_{bc}=0;\quad N^{a}_{b}=N^{A}_{B}f^{B}_ {b}\delta^{a}_{A}. \tag{3.15}\] **Remark 3.2.3**.: Note \(K^{A}_{BC}\) and \(K^{a}_{bc}\) are left arbitrary to admit mathematical descriptions of different physics, in contrast to \(Y^{A}_{BC}\) and \(Y^{a}_{bc}\) set equal to their purely vertical counterparts for simplicity. Since nonlinear connection \(N^{A}_{B}\) is also not explicitly chosen in (3.15) but is left general to admit more physics than considered previously [54], (2.8) is not necessarily ensured for arbitrary changes of coordinates, so transformation properties of \(N^{A}_{B}\) should be verified. Once the former \(N^{A}_{B}\) is chosen, \(N^{a}_{b}\) in (3.15) presumes (3.14) is invoked with (3.4). **Remark 3.2.4**.: If the fields \(G_{AB}(X,D)\) and \(g_{ab}(x,d)\) are known, linear connection coefficients in (3.15) can be calculated from definitions in SS2.1 and SS2.2. Metric \(G_{AB}\) need not be homogeneous of degree zero with respect to \(D\), but it can be. Components \(G_{AB}\) need not be derived, as in SS2.1.5, from a Lagrangian \(\mathcal{L}\) or more specifically a fundamental Finsler function \(\mathcal{F}\), but they can be. Dependence of \(\boldsymbol{\mathcal{G}}\) on \(X\) and \(D\) is based on symmetry and physics pertinent to the particular problem of study. Similar statements describe the spatial metric \(\boldsymbol{\mathfrak{g}}\) and components \(g_{ab}\). A decomposition of \(G_{AB}\) into a Riemannian part \(\bar{G}_{AC}\) and a director-dependent part \(\hat{G}^{C}_{B}\) is useful for describing fundamental physics and for solving boundary value problems [55, 56, 57, 62]: \[\begin{split}\boldsymbol{G}&=\bar{\boldsymbol{G}} \hat{\boldsymbol{G}};\qquad G_{AB}(X,D)=\bar{G}_{AC}(X)\hat{G}^{C}_{B}(X,D); \\ \bar{\boldsymbol{G}}&=\bar{G}_{AB}\,dX^{A}\otimes dX ^{B};\qquad\hat{\boldsymbol{G}}=\hat{G}^{A}_{B}\frac{\delta}{\delta X^{A}} \otimes dX^{B}.\end{split} \tag{3.16}\] More specific functional forms in (3.16) are advocated herein, as implied by past applications [54]: \[\begin{split} G_{AB}(X,D)&=\bar{G}_{AC}(X)\hat{G}^{C }_{B}(D(X))=\hat{G}^{C}_{A}(D(X))\bar{G}_{CB}(X);\\ \bar{G}_{AB}&=\bar{G}_{BA},\qquad\hat{G}^{C}_{A}G_{BC }=\hat{G}^{C}_{B}G_{CA}.\end{split} \tag{3.17}\] **Remark 3.2.5**.: Components of \(\bar{G}_{AB}\) are chosen to best represent symmetry of the physical body; in elasticity, often a Riemannian metric for rectilinear, cylindrical, or spherical coordinates on \(\mathcal{M}\). 
Components of \(\hat{G}^{C}_{B}\) are assigned based on how microstructure \(D\) affects measured lengths of material elements with respect to an observer in total generalized Finsler space \(\mathcal{Z}\) (i.e., the space of the physical body enriched with microstructure geometry) [54, 55, 62]. Ideas apply analogously to spatial metric \(g_{ab}(x,d)\) with \(X\) replaced by \(x\), and with \(D\) replaced by \(d\). For example, the spatial analog of (3.17) is \[g_{ab}(x,d)=\bar{g}_{ac}(x)\hat{g}^{c}_{b}(d(x))=\hat{g}^{c}_{a}(d(x))\bar{g}_{cb }(x);\qquad\bar{g}_{ab}=\bar{g}_{ba},\qquad\qquad\hat{g}^{c}_{a}g_{bc}=\hat{g}^ {c}_{b}g_{ca}. \tag{3.18}\] All metrics in (3.17) and (3.18) are assumed invertible with positive determinants. A symmetric tensor \(\bar{\mathbf{C}}\)[62] and volume ratio \(\bar{J}>0\) are defined to exclude internal state-dependence of strain: \[\bar{\mathbf{C}}(X)=\bar{C}_{AB}(X)\,dX^{A}\otimes dX^{B},\qquad\bar{C}_{AB}=F^{a}_ {A}\bar{g}_{ab}F^{b}_{B},\quad\bar{C}^{A}_{B}=\bar{G}^{AC}\bar{C}_{CB}; \tag{3.19}\] \[\bar{J}(X)=\sqrt{\det(\bar{C}^{A}_{B}(X))};\qquad\bar{J}=J\sqrt{\hat{G}/\hat{ g}},\quad\hat{G}=\det(\hat{G}^{A}_{B}),\quad\hat{g}=\det(\hat{g}^{a}_{b}). \tag{3.20}\] ### Energy and equilibrium #### 3.3.1 Variational principle A variational principle [54, 55, 56] is implemented. Let \(\Psi\) denote the total energy functional for a compact domain \(\mathcal{M}^{\prime}\subset\mathcal{M}\) with positively oriented boundary \(\partial\mathcal{M}^{\prime}\), and let \(\psi\) be the local free energy density per unit reference volume of material: \[\Psi[\mathbf{\varphi},\mathbf{D}]=\int_{\mathcal{M}}\psi(F^{a}_{A},D^{A},D^{A}_{|B},X ^{A})\,d\Omega. \tag{3.21}\] Denote surface forces as \(\mathbf{p}=p_{a}dx^{a}\), a mechanical load vector (force per unit reference area), and \(z=z_{A}\delta D^{A}\), a thermodynamic force conjugate to the internal state vector. Denote a generic local, vector-valued volumetric source term conjugate to structure variations by \(\mathbf{R}=R_{A}\delta D^{A}\), extending prior theory [54, 55, 56] to accommodate more possible physics [30, 107] (Appendix B). A variational principle for Finsler-geometric continuum mechanics, holding \(X\) fixed but with \(x=\varphi(X)\) and \(D\) variable, is \[\delta\Psi[\mathbf{\varphi},\mathbf{D}]=\oint_{\partial\mathcal{M}^{\prime}}(\langle \mathbf{p},\delta\mathbf{\varphi}\rangle+\langle\mathbf{z},\delta\mathbf{D}\rangle)\Omega+ \int_{\mathcal{M}^{\prime}}\langle\mathbf{R},\delta\mathbf{D}\rangle\,d\Omega. \tag{3.22}\] In coordinates, with variation of \(\mathbf{D}\) in parentheses to distinguish from non-holonomic basis \(\{\delta D^{A}\}\), \[\delta\int_{\mathcal{M}^{\prime}}\psi\,d\Omega=\oint_{\partial\mathcal{M}^{ \prime}}\{p_{a}\delta\varphi^{a}\}\Omega+\oint_{\partial\mathcal{M}^{\prime}} \{z_{C}\delta(D^{C})\}\Omega+\int_{\mathcal{M}^{\prime}}\{R_{C}\delta(D^{C}) \}d\Omega. \tag{3.23}\] Results used in SS3.3.3 are now noted, with \(\alpha=1\) or \(\alpha=2\) (derived in Appendix A using (3.15)): \[\delta F^{a}_{A}=\delta_{A}(\delta\varphi^{a}),\qquad\delta D^{A}_{|B}=[ \delta(D^{A})]_{|B}-(\bar{\partial}_{C}N^{A}_{B}-\bar{\partial}_{C}K^{A}_{BD}D ^{D})\delta(D^{C}), \tag{3.24}\] \[\delta(d\Omega)=\tfrac{1}{2}\alpha G^{AB}\bar{\partial}_{C}G_{AB}\delta(D^{C}) d\Omega=\alpha\bar{\partial}_{C}(\ln\sqrt{G})\delta(D^{C})d\Omega=\alpha C^{A}_{CA} \delta(D^{C})d\Omega. 
\tag{3.25}\]

#### 3.3.2 General energy density

As evident in (3.22), the independent variables entering the total free energy density function \(\psi\), measured per unit reference volume, are the deformation gradient, the internal state vector, the horizontal gradient of the internal state vector, and the reference position of the material particle: \[\psi=\psi(\mathbf{F},\mathbf{D},\nabla\mathbf{D},\mathbf{X})=\psi(F^{a}_{A},D^{A},D^{A}_{|B },X^{A}). \tag{3.26}\] Dependence on \(\mathbf{F}\) accounts for bulk elastic strain energy. Dependence on \(\mathbf{D}\) generally accounts for effects of microstructure on stored energy. Energy from heterogeneity of microstructure (e.g., internal material surfaces) is captured by dependence on the internal state gradient: \[\nabla\mathbf{D}=D^{A}_{|B}\frac{\partial}{\partial D^{A}}\otimes dX^{B}+D^{A }|_{B}\frac{\partial}{\partial D^{A}}\otimes\delta D^{B}; \tag{3.27}\] \[D^{A}_{|B}=\delta_{B}D^{A}+K^{A}_{BC}D^{C}=\partial_{B}D^{A}-N^{A}_ {B}+K^{A}_{BC}D^{C},\qquad D^{A}|_{B}=\bar{\partial}_{B}D^{A}+ V^{A}_{BC}D^{C}=\delta^{A}_{B}. \tag{3.28}\] Dependence on \(\mathbf{X}\) permits heterogeneous properties. Prior work [54, 55] adds motivation for (3.26). **Remark 3.3.1**.: Vertical gradient \(D^{A}|_{B}=\delta^{A}_{B}\), calculated from \(V^{A}_{BC}=0\) by (3.15), provides no information, so it is excluded from the arguments of energy density in (3.26). Expansion of the integrand on the left in (3.23), with \(\delta X^{A}=0\) by definition, is \[\begin{split}&\delta\psi=\frac{\partial\psi}{\partial F^{a}_{A}} \delta F^{a}_{A}+\frac{\partial\psi}{\partial D^{A}}\delta(D^{A})+\frac{ \partial\psi}{\partial D^{A}_{|B}}\delta D^{A}_{|B}=P^{A}_{a} \delta F^{a}_{A}+Q_{A}\delta(D^{A})+Z^{B}_{A}\delta D^{A}_{|B};\\ & P^{A}_{a}=\frac{\partial\psi}{\partial F^{a}_{A}},\qquad Q_{A} =\frac{\partial\psi}{\partial D^{A}},\qquad Z^{A}_{B}=\frac{\partial \psi}{\partial D^{B}_{|A}}.\end{split} \tag{3.29}\] Denoted by \(\mathbf{P}\) is the mechanical stress tensor (i.e., the first Piola-Kirchhoff stress, a two-point tensor, generally non-symmetric), \(\mathbf{Q}\) an internal force vector conjugate to \(\mathbf{D}\), and \(\mathbf{Z}\) a micro-stress tensor conjugate to the horizontal gradient of \(\mathbf{D}\).

#### 3.3.3 Euler-Lagrange equations

Connection coefficients in (3.15) are employed along with (3.1), (3.11), (3.12), (3.24), and (3.25).
Insertion of (3.29) into the left side of (3.23), followed by integration by parts and use of (2.31) of Theorem 2.1.1, produces \[\begin{split}&\delta\int_{\mathcal{M}^{\prime}}\!\!\psi\,d\Omega= \int_{\mathcal{M}^{\prime}}\{P^{A}_{a}\delta F^{a}_{A}+Q_{A}\delta(D^{A})+Z^{B }_{A}\delta D^{A}_{|B}\}d\Omega+\int_{\mathcal{M}^{\prime}}\psi\delta(d \Omega)\\ =&-\int_{\mathcal{M}^{\prime}}\{\partial_{A}P^{A}_{a}+\bar{ \partial}_{B}P^{A}_{a}\partial_{A}D^{B}+P^{B}_{a}\Gamma^{A}_{AB}-P^{A}_ {c}\Gamma^{c}_{ba}F^{b}_{A}+P^{A}_{a}C^{C}_{BC}(\partial_{A}D^{B}+N^{B}_{A}) \}\delta\varphi^{a}d\Omega\\ &\quad-\int_{\mathcal{M}^{\prime}}\{\partial_{A}Z^{A}_{C}+\bar{ \partial}_{B}Z^{A}_{C}\partial_{A}D^{B}+Z^{B}_{C}\Gamma^{A}_{AB}-Z^{A}_{ B}K^{B}_{AC}-Q_{C}\\ &\quad\quad\quad\quad\quad+Z^{B}_{A}[\bar{\partial}_{C}N^{A}_{ B}-\bar{\partial}_{C}K^{A}_{BD}D^{D}+\delta^{A}_{C}C^{D}_{ED}( \partial_{B}D^{E}+N^{E}_{B})]-\alpha\psi C^{A}_{CA}\}\delta(D^{C})d \Omega\\ &+\oint_{\partial\mathcal{M}^{\prime}}\{P^{A}_{a}\delta\varphi^{a }\}N_{A}\Omega+\oint_{\partial\mathcal{M}^{\prime}}\{Z^{A}_{C}\delta(D^{C})\} N_{A}\Omega.\end{split} \tag{3.30}\] Euler-Lagrange equations consistent with any admissible variations \(\delta\mathbf{\varphi}\) and \(\delta\mathbf{D}\) locally at each \(X\in\mathcal{M}^{\prime}\), as well as natural boundary conditions on \(\partial\mathcal{M}^{\prime}\), are obtained as follows. Steps follow those outlined in the original works [55, 56] with minor departures [54]. The first of the resulting Euler-Lagrange equations is the macroscopic balance of linear momentum, derived by setting the first integral on the right-hand side of (3.30) equal to zero, consistent with the right side of (3.23). Localizing the outcome and presuming the result must hold for any admissible variation \(\delta\varphi^{a}\), \[\partial_{A}P^{A}_{a}+\bar{\partial}_{B}P^{A}_{a}\partial_{A}D^{B}+P^{B}_{a} \Gamma^{A}_{AB}-P^{A}_{c}\Gamma^{c}_{ba}F^{b}_{A}=-P^{A}_{a}C^{C}_{BC}( \partial_{A}D^{B}+N^{B}_{A}). \tag{3.31}\] The second Euler-Lagrange equation is the balance of micro-momentum (i.e., director momentum or internal state equilibrium). It is derived by setting the second integral on the right side of (3.30) equal to the rightmost term in (3.23) and then localizing, giving for any admissible variation \(\delta(D^{C})\), \[\begin{split}\partial_{A}Z^{A}_{C}+\bar{\partial}_{B}Z^{A}_{C} \partial_{A}D^{B}&+Z^{B}_{C}\Gamma^{A}_{AB}-Z^{A}_{B}K^{B}_{AC}- (Q_{C}-R_{C})\\ &=\alpha\psi C^{A}_{CA}-Z^{B}_{A}[\bar{\partial}_{C}N^{A}_{B}- \bar{\partial}_{C}K^{A}_{BD}D^{D}+\delta^{A}_{C}C^{D}_{ED}(\partial_{B}D^{E}+ N^{E}_{B})].\end{split} \tag{3.32}\] Natural boundary conditions on \(\partial\mathcal{M}^{\prime}\) are derived by setting the second-to-last and last boundary integrals in (3.30) equal to the remaining, respective first and second boundary integrals on the right side of (3.23) and localizing the results, yielding for any admissible variations \(\delta\varphi^{a}\) and \(\delta(D^{C})\), \[p_{a}=P^{A}_{a}N_{A},\qquad z_{A}=Z^{B}_{A}N_{B}.
\tag{3.33}\] **Remark 3.3.2**.: With natural boundary conditions (3.33) or essential boundary conditions (i.e., prescribed \(\mathbf{\varphi}(X)\) and \(\mathbf{D}(X)\) for \(X\in\partial\mathcal{M}^{\prime}\)) and local force density vector \(\mathbf{R}(X)\) for each \(X\in\mathcal{M}^{\prime}\), (3.31) and (3.32) comprise \(2n\) coupled PDEs in \(2n\) degrees-of-freedom \(x^{a}=\varphi^{a}(X)\) and \(D^{A}(X)\) at any \(X\in\mathcal{M}^{\prime}\), and by extension, any \(X\in\mathcal{M}\). **Remark 3.3.3**.: Consider the simplified case when Riemannian metrics are used: no \(D\)-dependence of \(\mathbf{G}\) and no \(d\)-dependence of \(\mathbf{g}\). Then \(\Gamma^{A}_{BC}=\gamma^{A}_{BC}\), \(\Gamma^{a}_{bc}=\gamma^{a}_{bc}\), and \(C^{A}_{BC}=0\). The right side of (3.31) vanishes, and (3.31) is of the form of the static momentum balance of classical continuum mechanics with null body force [22, 23, 33]. Further taking \(N^{A}_{B}\) and \(K^{A}_{BC}\) independent of \(D\), (3.32) is similar to equilibrium equations for gradient materials [108] as in phase-field mechanics [97, 109]. **Remark 3.3.4**.: In some prior work [55], \(G_{AB}(X,D)\) was an argument of \(\psi\), extending (3.26), and \(D\)-dependence of the metric manifested in a distinct thermodynamic force, rather than entering implicitly in \(Q_{A}\). The present approach is favored for brevity [54], but the former is admissible. **Proposition 3.3.1**.: _Euler-Lagrange equations can be expressed in the following alternative way:_ \[\partial_{A}P^{A}_{a}+\bar{\partial}_{B}P^{A}_{a}\partial_{A}D^{B}+P^{B}_{a} \gamma^{A}_{AB}-P^{A}_{c}\Gamma^{c}_{ba}F^{b}_{A}=-P^{A}_{a}C^{C}_{BC}\partial _{A}D^{B}, \tag{3.34}\] \[\begin{split}\partial_{A}Z^{A}_{C}+\bar{\partial}_{B}Z^{A}_{C} \partial_{A}D^{B}&+Z^{B}_{C}\gamma^{A}_{AB}-Z^{A}_{B}K^{B}_{AC}-(Q _{C}-R_{C})\\ &=\alpha\psi C^{A}_{CA}-Z^{B}_{A}(\bar{\partial}_{C}N^{A}_{B}- \bar{\partial}_{C}K^{A}_{BD}D^{D}+\delta^{A}_{C}C^{D}_{ED}\partial_{B}D^{E}). \end{split} \tag{3.35}\] **Proof.** From (2.10) and (2.27), \[\Gamma^{A}_{AB}=\Gamma^{A}_{BA}=\partial_{B}(\ln\sqrt{G})-N^{A}_{B}\tilde{ \partial}_{A}(\ln\sqrt{G})=\gamma^{A}_{BA}-N^{A}_{B}C^{C}_{AC}. \tag{3.36}\] Substitution of (3.36) with symmetry \(\gamma^{A}_{BC}=\gamma^{A}_{CB}\) into (3.31) and (3.32) yields (3.34) and (3.35). \(\square\) **Remark 3.3.5**.: Notably, (3.34) and (3.35) show how the nonlinear connection terms \(N^{A}_{B}\) cancel, simplifying calculations. Nonlinear connection \(N^{A}_{B}\) still ultimately affects governing equations via influence on \(D^{A}_{|B}=\partial_{B}D^{A}-N^{A}_{B}+K^{A}_{BC}D^{C}\), thus affecting \(Z^{B}_{A}=\partial\psi/\partial D^{A}_{|B}\), and through \(\tilde{\partial}_{C}N^{A}_{B}\) in (3.35). Spatial \(N^{a}_{b}\) can enter \(\Gamma^{a}_{bc}\) in (3.34). The linear connection \(K^{A}_{BC}\) and its gradient \(\tilde{\partial}_{D}K^{A}_{BC}\) in (3.35) are somewhat unique to Finsler-geometric continuum mechanics. The trace of Cartan's tensor, \(C^{A}_{BA}\), in all forms of the Euler-Lagrange equations is also a distinctive feature. This term, of course, vanishes when \(G_{AB}\) is independent of \(D\) (i.e., a Riemannian rather than Finslerian metric). #### 3.3.4 Spatial invariance and material symmetry First consider rotations of the spatial frame of reference, given by orthonormal transformation \(q^{a}_{b}\) in (2.40) whereby \(\det(q^{a}_{b})=1\) and \(\tilde{q}^{a}_{b}=g^{ac}q^{d}_{c}g_{bd}\) (i.e., \(\boldsymbol{q}^{-1}=\boldsymbol{q}^{\mathrm{T}}\)[22]). 
Since \(\boldsymbol{F}\to\boldsymbol{q}\boldsymbol{F}\) under such coordinate changes, \(\psi\) in (3.26) should obey more restricted forms to maintain proper observer independence. Two possibilities are \[\psi=\hat{\psi}[\boldsymbol{C}(\boldsymbol{F},\boldsymbol{g}),\boldsymbol{D}, \nabla\boldsymbol{D},\boldsymbol{X}]=\hat{\psi}(C_{AB},D^{A},D^{A}_{|B},X^{A}), \tag{3.37}\] \[\psi=\bar{\psi}[\tilde{\boldsymbol{C}}(\boldsymbol{F},\bar{\boldsymbol{g}}), \boldsymbol{D},\nabla\boldsymbol{D},\boldsymbol{X}]=\bar{\psi}(\tilde{C}_{ AB},D^{A},D^{A}_{|B},X^{A}), \tag{3.38}\] noting that (3.26) can be consistently expressed from (3.1), (3.2), (3.17), and (3.18) as \[\psi(\boldsymbol{F},\boldsymbol{D},\nabla\boldsymbol{D},\boldsymbol{X})=\bar{ \psi}(\boldsymbol{F},\boldsymbol{D},\bar{\boldsymbol{G}}(\boldsymbol{X}), \hat{\boldsymbol{G}}(\boldsymbol{D}),\bar{\boldsymbol{g}}(\varphi(\boldsymbol{ X})),\hat{\boldsymbol{g}}(\theta(\boldsymbol{X},\boldsymbol{D})),\nabla \boldsymbol{D},\boldsymbol{X}). \tag{3.39}\] From (3.10), (3.19), (3.37) and (3.38), first Piola-Kirchhoff stress \(P^{A}_{a}\) of (3.29) is calculated using the chain rule: \[P^{A}_{a}=\frac{\partial\psi}{\partial F^{a}_{A}}=2g_{ab}F^{b}_{B}\frac{ \partial\hat{\psi}}{\partial C_{AB}}=2\bar{g}_{ab}F^{b}_{B}\frac{\partial\bar {\psi}}{\partial\bar{C}_{AB}}. \tag{3.40}\] The resulting Cauchy stress tensors with spatial components \(\sigma^{ab}\) and \(\bar{\sigma}^{ab}\) obey symmetry rules consistent with the classical local balance of angular momentum [20, 22, 33]: \[\sigma^{ab}=\frac{1}{J}g^{ac}P^{A}_{c}F^{b}_{A}=\frac{2}{J}F^{a}_{A}F^{b}_{B} \frac{\partial\hat{\psi}}{\partial C_{AB}}=\sigma^{ba},\quad\bar{\sigma}^{ab} =\frac{1}{\bar{J}}\bar{g}^{ac}P^{A}_{c}F^{b}_{A}=\frac{2}{\bar{J}}F^{a}_{A}F^{ b}_{B}\frac{\partial\bar{\psi}}{\partial\bar{C}_{AB}}=\bar{\sigma}^{ba}. \tag{3.41}\] Now consider changes of the material frame of reference, given by transformation \(Q^{A}_{B}\) of (2.1) and (2.9) with inverse \(\tilde{Q}^{B}_{A}\). Under affine changes of coordinates \(X^{A}\to Q^{C}_{A}X^{A}\), it follows that \(dX^{A}\to Q^{C}_{A}dX^{A}\), \(F^{a}_{A}\to\tilde{Q}^{A}_{C}F^{a}_{A}\), \(G_{AB}\to\tilde{Q}^{A}_{C}\tilde{Q}^{B}_{D}G_{AB}\), \(C_{AB}\to\tilde{Q}^{A}_{C}\tilde{Q}^{B}_{D}C_{AB}\), \(\bar{G}_{AB}\to\tilde{Q}^{A}_{C}\tilde{Q}^{B}_{D}\tilde{G}_{AB}\), \(\bar{C}_{AB}\to\tilde{Q}^{A}_{C}\tilde{Q}^{B}_{D}\tilde{C}_{AB}\), \(D^{A}\to Q^{C}_{A}D^{A}\), \(\delta D^{A}\to Q^{C}_{A}\tilde{\delta}D^{A}\), and \(D^{A}_{|B}\to Q^{C}_{A}\tilde{Q}^{B}_{D}D^{A}_{|B}\). Energy densities \(\psi\), \(\hat{\psi}\), and \(\bar{\psi}\) should be invariant under all transformations \(\tilde{Q}^{A}_{B}\) (e.g., rotations, reflections, inversions) belonging to the symmetry group \(\mathbb{Q}\) of the material [33, 61, 81, 110] (e.g., \(\psi\to\psi\)). The present focus is on polynomial invariants [81, 110] with basis \(\mathcal{P}\) of invariant functions with respect to \(\hat{\mathbf{Q}}\in\mathbb{Q}\) and energy offsets \(\hat{\psi}_{0}=\text{constant}\), \(\bar{\psi}_{0}=\text{constant}\): \[\hat{\mathcal{P}}=\{I_{1},I_{2},\ldots,I_{\upsilon}\};\qquad I_{ \alpha}=I_{\alpha}(\mathbf{C},\mathbf{D},\nabla\mathbf{D}),\qquad\hat{\psi}=\hat{\psi}(I_{ 1},I_{2},\ldots,I_{\upsilon},X)+\hat{\psi}_{0}; \tag{3.42}\] \[\bar{\mathcal{P}}=\{\bar{I}_{1},\bar{I}_{2},\ldots,\bar{I}_{\zeta }\};\qquad\bar{I}_{\alpha}=\bar{I}_{\alpha}(\bar{\mathbf{C}},\mathbf{D},\nabla\mathbf{D}), \qquad\bar{\psi}=\bar{\psi}(\bar{I}_{1},\bar{I}_{2},\ldots,\bar{I}_{\zeta},X)+ \bar{\psi}_{0}. 
\tag{3.43}\] The total number of applicable invariants is \(\upsilon\) or \(\zeta\) for (3.37) or (3.38). Stress of (3.40) becomes \[P^{A}_{a}=2g_{ab}F^{b}_{B}\sum_{\alpha=1}^{\upsilon}\hat{\psi}_{\alpha}\frac{ \partial I_{\alpha}}{\partial C_{AB}}=2\bar{g}_{ab}F^{b}_{B}\sum_{\alpha=1}^{ \zeta}\bar{\psi}_{\alpha}\frac{\partial\bar{I}_{\alpha}}{\partial\bar{C}_{AB} };\quad\hat{\psi}_{\alpha}=\frac{\partial\hat{\psi}}{\partial I_{\alpha}}, \quad\bar{\psi}_{\alpha}=\frac{\partial\bar{\psi}}{\partial\bar{I}_{\alpha}}. \tag{3.44}\] **Remark 3.3.6**.: A thorough and modern geometric treatment of material symmetry, uniformity, and homogeneity in continuous media is included in a recent monograph [111]. ## 4 One-dimensional base manifold The framework of SS2 and SS3 is applied for \(n=1\): a 1-D base manifold \(\mathcal{M}\). In SS4.1, geometry and kinematics are presented, including assumptions that enable tractable solutions to several classes of boundary value problems while at the same time maintaining sufficient generality to address broad physical behaviors. Resulting 1-D governing equations are derived in SS4.2. General solutions are obtained for two problem classes in SS4.3. Constitutive functions for a soft biological tissue, namely a 1-D strip of skin under axial extension, are given in SS4.4. Model parameters and analytical solutions for 1-D skin stretching and tearing are reported in SS4.5. ### Geometry and kinematics Let \(X=X^{1}\). Considered is a reference domain \(\{\mathcal{M}:X\in[-L_{0},L_{0}]\}\), where the total length relative to a Euclidean metric is \(2L_{0}\), and boundary \(\partial\mathcal{M}\) is the endpoints \(X=\pm L_{0}\). The referential internal state vector reduces to the single component \(D=D^{1}\), which is assumed to have physical units, like \(X\), of length. The spatial coordinate is \(x=x^{1}\), and the spatial state component is \(d=d^{1}\). A normalization constant (i.e., regularization length) \(l\) is introduced, and the physically meaningful domain for internal state is assumed as \(D\in[0,l]\). The associated order parameter is \[\xi(X)=\frac{D(X)}{l}=\frac{d(\varphi(X))}{l},\qquad l>0, \tag{4.1}\] with meaningful domain \(\xi\in[0,1]\), and where (3.12) and (3.14) are invoked. For generic \(f\) and \(h\) differentiable in their arguments, let \[f^{\prime}(X)=\frac{\mathrm{d}f(X)}{\mathrm{d}X},\quad f^{\prime\prime}(X)= \frac{\mathrm{d}^{2}f(X)}{\mathrm{d}X^{2}};\qquad\dot{h}(\xi)=\frac{\mathrm{d }h(\xi)}{\mathrm{d}\xi},\quad\ddot{h}(\xi)=\frac{\mathrm{d}^{2}h(\xi)}{ \mathrm{d}\xi^{2}}. \tag{4.2}\] For 1-D manifolds, the following metrics apply from (3.17) and (3.18): \[G_{11}(X,D)=G(X,D)=\bar{G}(X)\hat{G}(D)=\hat{G}(D),\quad g_{11}(x,d)=g(x,d)= \bar{g}(x)\hat{g}(d)=\hat{g}(d). \tag{4.3}\] Since \(\check{g}=\check{G}=1\) for isometric 1-D Riemannian spaces, setting \[\hat{g}(d(\varphi(X)))=\hat{G}(D(X))\leftrightarrow g(\xi)=G(\xi) \tag{4.4}\] renders \(\mathfrak{m}\) and \(\mathcal{M}\) isometric when \(\phi(X)=X+c_{0}\Leftrightarrow F(X)=1\), regardless of local values of \(D\), \(d\), or \(\xi\) at corresponding points \(x=\varphi(X)\). **Remark 4.1.1**.: This assumption (4.4), used henceforth in SS4, may be relaxed in future applications to address residual stress (e.g., from growth [30]; see Appendix B), especially for \(n=\dim\mathcal{M}>1\). Henceforth in SS4, functional dependence on \(D\) or \(d\) is replaced with that on \(\xi\). 
Then \[D^{\prime}=l\,\xi^{\prime},\qquad\frac{\partial f(X,D)}{\partial D}=\frac{1}{l}\frac{\partial f(X,\xi(D))}{\partial\xi}. \tag{4.5}\] The following functional forms are assumed for referential nonlinear connection \(N_{B}^{A}\) and linear connection \(K_{BC}^{A}\), with \(N_{0}=\) constant and \(\hat{K}(X)\) both dimensionless: \[N_{B}^{A}\to N_{1}^{1}=N=-N_{0}l\xi^{\prime},\qquad K_{BC}^{A}\to K_{11}^{1}(X,\xi)=K(X,\xi)=\frac{\hat{K}(X)}{l\xi}\,\Rightarrow\,\bar{\partial}_{1}K_{11}^{1}D=-K_{11}^{1}. \tag{4.6}\] Spatial coefficients \(K_{bc}^{a}\) do not affect the governing equations and thus are left unspecified. Conditions (3.15) apply in 1-D, leading to, with (4.1)-(4.6), \[\Gamma_{BC}^{A}\to\Gamma_{11}^{1}=\frac{1}{2G}\delta_{1}G=\frac{1}{2G}(\partial_{1}G-N_{1}^{1}\bar{\partial}_{1}G)=-N\bar{\partial}_{1}(\ln\sqrt{G})=-NC_{11}^{1}=-N\frac{\chi}{l}, \tag{4.7}\] \[\chi(\xi)=\frac{\dot{G}(\xi)}{2G(\xi)}=\frac{\dot{g}(\xi)}{2g(\xi)}=lC_{11}^{1}(\xi), \tag{4.8}\] \[N_{b}^{a}\to\frac{N}{F}=-\frac{N_{0}l\xi^{\prime}}{F}=-N_{0}l\frac{\mathrm{d}\xi}{\mathrm{d}x},\qquad\Gamma_{bc}^{a}\to-\frac{N}{F}\frac{\dot{g}}{2g}=-\frac{N}{F}\frac{\chi}{l}. \tag{4.9}\] The deformation gradient, deformation tensor, Jacobian determinant, and director gradient are \[F_{A}^{a}\to F_{1}^{1}=F=\frac{\mathrm{d}\varphi}{\mathrm{d}X}=\varphi^{\prime},\quad C_{B}^{A}\to C_{1}^{1}=C=G^{11}g_{11}F_{1}^{1}F_{1}^{1}=F^{2}=(\varphi^{\prime})^{2},\quad J=F=\sqrt{C}, \tag{4.10}\] \[D_{|B}^{A}\to D_{|1}^{1}=\frac{\mathrm{d}D}{\mathrm{d}X}-N+KD=(1+N_{0})l\xi^{\prime}+\hat{K}. \tag{4.11}\] From (4.4), \(\bar{C}=\bar{C}_{1}^{1}=C_{1}^{1}=C\) and \(\bar{J}=J\) in 1-D reductions of (3.19) and (3.20).

### Governing equations

A generic energy density is assigned and equilibrium equations are derived for the 1-D case given prescriptions of §4.1.

#### 4.2.1 Energy density

In 1-D, \(C_{AB}\) consists of a single invariant \(C\), and \(D^{A}\) and \(D^{A}_{|B}\) likewise. Dependencies in (3.26) are suitably represented by \(F\), \(\xi\), and \((\xi^{\prime},X)\) with (4.1) and (4.11). Since \(\bar{C}=C=F^{2}\), all energy densities \(\psi\) of (3.26) in (3.37) - (3.39) are expressed simply as \[\psi=\psi(C,\xi,\xi^{\prime},X). \tag{4.12}\] Denote by \(\mu_{0}\) a constant, later associated to an elastic modulus, with units of energy density.

**Remark 4.2.1**.: For comparison with data from experiments in ambient Euclidean 3-space, \(\mu_{0}\) can be given units of energy per unit (3-D) volume, such that \(\Psi=\int_{\mathcal{M}}\psi d\Omega\) is energy per unit cross-sectional area normal to \(X\). For a 1-D \(\mathcal{M}\), this cross-sectional area is, by definition, constant.

Denote by \(\Upsilon_{0}\) a constant, related to surface energy, with units of energy per unit (2-D fixed cross-sectional) area. Let \(W\) be strain energy density and \(\Lambda\) energy density associated with microstructure. Denote by \(w\) a dimensionless strain energy function, \(y\) a dimensionless interaction function (e.g., later representing elastic degradation from microstructure changes), \(\lambda\) a dimensionless phase energy function, and \(\iota\) a dimensionless gradient energy function assigned a quadratic form.
Free energy density (4.12) is then prescribed in intermediate functional form as follows: \[\psi(C,\xi,\xi^{\prime},X)=W(C,\xi)+\Lambda(\xi,\xi^{\prime},X)=\frac{\mu_{0} }{2}w(C)y(\xi)+\frac{\Upsilon_{0}}{l}[\lambda(\xi)+\iota(\xi^{\prime},X)], \tag{4.13}\] \[\iota=|D^{1}_{|1}|^{2}-\hat{R}^{2}=D^{1}_{|1}G_{11}G^{11}D^{1}_{|1}-\hat{R}^{ 2}=[(1+N_{0})l\xi^{\prime}+\hat{R}]^{2}-\hat{R}^{2},\quad(N_{0}=\text{const}, \hat{R}=\hat{R}(X)). \tag{4.14}\] Note \(\iota(0,X)=0\). For null ground-state energy and stress, \(\psi(1,0,0,X)=0\) and \(\frac{\partial\psi}{\partial C}(1,0,0,X)=0\): \[w(1)=0,\qquad\frac{\text{d}w}{\text{d}C}(1)=0,\qquad\frac{\text{d}^{2}w}{ \text{d}C^{2}}\geq 0;\qquad\lambda(0)=0. \tag{4.15}\] The third of (4.15) ensures convexity of \(w\). Thermodynamic forces originating in (3.29) are derived as \[P=P^{1}_{1}=\frac{\partial\psi}{\partial F}=2\frac{g}{G}F\frac{\partial\psi}{ \partial C}=2\sqrt{C}\frac{\partial\psi}{\partial C}=\mu_{0}y\sqrt{C}\frac{ \text{d}w}{\text{d}C}, \tag{4.16}\] \[Q=Q_{1}=\frac{\partial\psi}{\partial D}=\frac{1}{l}\frac{\partial\psi}{ \partial\xi}=\frac{\mu_{0}}{2l}w\frac{\text{d}y}{\text{d}\xi}+\frac{\Upsilon_ {0}}{l^{2}}\frac{\text{d}\lambda}{\text{d}\xi}=\frac{\Upsilon_{0}}{l^{2}} \left(A_{0}w\dot{y}+\lambda\right),\quad A_{0}=\frac{\mu_{0}l}{2\Upsilon_{0}}, \tag{4.17}\] \[Z=Z^{1}_{1}=\frac{\partial\psi}{\partial D^{1}_{|1}}=\frac{\Upsilon_{0}}{l} \frac{\partial\iota}{\partial D^{1}_{|1}}=2\frac{\Upsilon_{0}}{l}D^{1}_{|1}=2 \frac{\Upsilon_{0}}{l}[(1+N_{0})l\xi^{\prime}+\hat{R}]. \tag{4.18}\] The volumetric source term in (3.22) is prescribed as manifesting from changes in energy density proportional to changes of the local referential volume form (e.g., physically representative of local volume changes from damage/tearing, similar to effects of tissue growth on energy (Appendix B)): \[R=R_{1}=\beta\psi\bar{\partial}_{1}(\ln\sqrt{G})=\frac{\beta}{l}\psi\chi, \qquad(\beta=\text{constant}). \tag{4.19}\] #### 4.2.2 Linear momentum The macroscopic momentum balance, (3.31) or (3.34) is, upon use of relations in SS4.1 and SS4.2.1, \[\frac{\mathrm{d}P}{\mathrm{d}X}=P(N_{0}-1)\chi\frac{\mathrm{d}\xi}{\mathrm{d}X}= -(1-N_{0})\frac{P}{2G}\frac{\mathrm{d}G}{\mathrm{d}X}. \tag{4.20}\] This is a separable first-order ordinary differential equation (ODE) that can be integrated directly: \[\int_{P_{0}}^{P}\mathrm{d}(\ln P)=-(1-N_{0})\int_{G_{0}}^{G}\mathrm{d}(\ln\sqrt {\mathrm{G}})\quad\Rightarrow\quad P=P_{0}\left(\sqrt{G_{0}/G}\right)^{1-N_{0 }}. \tag{4.21}\] The integration limit on \(G(\xi(X))\) is \(G_{0}=G(0)\), and \(P_{0}\) is a constant stress corresponding to \(\xi=0\). **Remark 4.2.2**.: If \(G\) is Riemannian, then \(G=G_{0}\) and \(P=P_{0}=\mathrm{constant}\). In the Finslerian setting, \(P\) can vary with \(X\) if \(\xi\) varies with \(X\) and \(N_{0}\) differs from unity. However, if \(P\) vanishes on \(\partial\mathcal{M}\) (i.e., at \(X=\pm L_{0}\)), then \(P_{0}=0\) necessarily, so \(P(X)=0\forall X\in\mathcal{M}\), meaning this 1-D domain cannot support residual stress. The same assertion applies when (4.4) is relaxed and \(N_{0}\) vanishes. From (4.16) and (4.21), when \(\mu_{0}\) is nonzero, \[\sqrt{C(X)}\frac{\mathrm{d}w(C(X))}{\mathrm{d}C}y(\xi(X))\left[\frac{G(\xi(X)) }{G_{0}}\right]^{(1-N_{0})/2}=\frac{P_{0}}{\mu_{0}}=\mathrm{constant}, \tag{4.22}\] where the value of \(P_{0}\), constant for a given static problem, depends on the boundary conditions. #### 4.2.3 Micro-momentum Define \(\bar{K}(X)=l\hat{K}(X)\). 
Then the microscopic momentum balance, (3.32) or (3.35), is, upon use of relations in SS4.1 and SS4.2.1 and dividing by \(2\varGamma_{0}(1+N_{0})\), \[\begin{split}\frac{\mathrm{d}^{2}\xi}{\mathrm{d}X^{2}}& +\chi(\xi)\left[1-\frac{(1+N_{0})(\alpha-\beta)}{2}\right]\left( \frac{\mathrm{d}\xi}{\mathrm{d}X}\right)^{2}+\frac{\bar{K}(X)}{l^{2}}\chi(\xi )\left[\frac{1}{1+N_{0}}-(\alpha-\beta)\right]\frac{\mathrm{d}\xi}{\mathrm{d }X}\\ &+\frac{\mathrm{d}\bar{K}(X)}{\mathrm{d}X}\frac{1}{l^{2}(1+N_{0})} -\frac{1}{2l^{2}(1+N_{0})}\left[\frac{\mathrm{d}\lambda(\xi)}{\mathrm{d}\xi}+ (\alpha-\beta)\chi(\xi)\lambda(\xi)\right]\\ &=\frac{A_{0}w(C(X))}{2l^{2}(1+N_{0})}\left[\frac{\mathrm{d}y(\xi )}{\mathrm{d}\xi}+(\alpha-\beta)\chi(\xi)y(\xi)\right].\end{split} \tag{4.23}\] This is a nonlinear and non-homogeneous second-order ODE with variable coefficients. General analytical solutions are not feasible. However, the following assumption is made henceforth in SS4 to reduce the nonlinearity (second term on left side) and render some special solutions possible: \[\beta=\alpha-2/(1+N_{0}). \tag{4.24}\] **Remark 4.2.3**.: Assumption (4.24) generalizes, yet is consistent with, physically realistic choices for fracture, shear bands, cavitation, and phase transitions [55, 56, 62]: \(\alpha=2,\beta=0,N_{0}=0\). Applying (4.24) with notation of (4.2), (4.23) reduces to the form studied in the remainder of SS4: \[l^{2}(1+N_{0})\xi^{\prime\prime}-\frac{\dot{\lambda}}{2}-\frac{\chi\lambda}{1+N_{ 0}}-\bar{K}\chi\xi^{\prime}+\bar{K}^{\prime}=\frac{A_{0}w}{2}\left[\dot{y}+ \frac{2\chi y}{1+N_{0}}\right]. \tag{4.25}\] This is a linear second-order ODE, albeit generally non-homogeneous with variable coefficients. For the special case that \(\Upsilon_{0}(1+N_{0})=0\), terms on the left of (4.23) all vanish, and equilibrium demands \[\mu_{0}w(C(X))\left[\frac{\mathrm{d}y(\xi)}{\mathrm{d}\xi}+\frac{2\chi(\xi)y( \xi)}{1+N_{0}}\right]=0. \tag{4.26}\] ### General solutions #### 4.3.1 Homogeneous fields Consider cases wherein \(\xi(X)\to\xi_{\mathrm{H}}=\mathrm{constant}\,\forall\,X\in[-L_{0},L_{0}]\). Assign the notation \(f_{\mathrm{H}}(X)=f(X,\xi_{\mathrm{H}})\). Then stress and momentum conservation in (4.16) and (4.21) combine to \[P_{\mathrm{H}}=\mu_{0}\sqrt{C}\frac{\mathrm{d}w}{\mathrm{d}C}y_{\mathrm{H}}=P _{0}\left(\frac{G_{0}}{G_{\mathrm{H}}}\right)^{(1-N_{0})/2}=\mathrm{constant}. \tag{4.27}\] If \(\mu_{0}\), \(y_{\mathrm{H}}\), and \(\mathrm{d}w/\mathrm{d}C\) are nonzero, convexity of \(w\) suggests \(C=C_{\mathrm{H}}=F_{\mathrm{H}}^{2}=\mathrm{constant}\). Accordingly, \(\varphi_{\mathrm{H}}(X)=F_{\mathrm{H}}X+c_{0}\). If \(\mu_{0}=0\), \(y_{\mathrm{H}}=0\), or \(\mathrm{d}w/\mathrm{d}C=0\), then \(P_{\mathrm{H}}=0\) and \(\varphi_{\mathrm{H}}(X)\) is arbitrary. Assume now that none of the former are zero, such that \(F=F_{\mathrm{H}}\), \(C=C_{\mathrm{H}}\), \(w=w_{\mathrm{H}}=w(C_{\mathrm{H}})\) are constants. Then equilibrium equation (4.25) becomes, with \(\bar{K}_{\mathrm{H}}^{\prime}=K_{0}^{\prime}\) a dimensionless constant, \[-\frac{\dot{\lambda}_{\mathrm{H}}}{2}-\frac{\chi_{\mathrm{H}}\lambda_{ \mathrm{H}}}{1+N_{0}}+K_{0}^{\prime}=\frac{A_{0}w_{\mathrm{H}}}{2}\left[\dot{ y}_{\mathrm{H}}+\frac{2\chi_{\mathrm{H}}y_{\mathrm{H}}}{1+N_{0}}\right]. \tag{4.28}\] **Remark 4.3.1**.: If \(\varphi_{\mathrm{H}}\) is imposed by displacement boundary conditions, then \(C_{\mathrm{H}}\) is known, as is \(w_{\mathrm{H}}\). 
In that case, (4.28) is an algebraic equation that can be solved implicitly for \(\xi_{\mathrm{H}}\), the value of which is substituted into (4.27) for stress \(P_{\mathrm{H}}\). If \(P_{\mathrm{H}}\) is imposed by traction boundary conditions, then (4.27) and (4.28) are to be solved simultaneously for \(C_{\mathrm{H}}\) and \(\xi_{\mathrm{H}}\). #### 4.3.2 Stress-free states Now consider cases wherein \(P=0\,\forall\,X\in[-L_{0},L_{0}]\). Relation (4.20) is trivially satisfied. Assume \(\mu_{0}\) is nonzero. Then (4.22) requires, since \(C>0\), \(G>0\), \[\frac{\mathrm{d}w(C(X))}{\mathrm{d}C}y(\xi(X))=0. \tag{4.29}\] This is obeyed for any \(y(\xi)\) at \(C=1\) (i.e., rigid-body motion) via (4.15). Assume further that \(w=0\), again satisfied at \(C=1\) via (4.15). Then the right side of (4.25) vanishes, leaving \[\xi^{\prime\prime}-\frac{\bar{K}\chi}{l^{2}(1+N_{0})}\xi^{\prime}-\frac{\lambda }{2l^{2}(1+N_{0})}-\frac{\chi\lambda}{l^{2}(1+N_{0})^{2}}+\frac{\bar{K}^{ \prime}}{l^{2}(1+N_{0})}=0, \tag{4.30}\] with functional dependencies \(\xi(X)\), \(\chi(\xi)\), \(\bar{K}(X)\), and \(\lambda(\xi)\). The ODE is linear or nonlinear depending on forms of \(\lambda\) and \(\chi\); analytical solutions can be derived for special cases. If \(\bar{K}=\text{constant}\), (4.30) is autonomous. If \(\bar{K}=0\), then (4.30) is \[\frac{\mathrm{d}^{2}\xi}{\mathrm{d}X^{2}}=\zeta\frac{\mathrm{d}\zeta}{\mathrm{ d}\xi}=\frac{1}{2l^{2}(1+N_{0})}\left[\frac{\mathrm{d}\lambda}{\mathrm{d} \xi}+\frac{2\chi(\xi)\lambda(\xi)}{1+N_{0}}\right], \tag{4.31}\] where \(\zeta=\xi^{\prime}\Rightarrow\xi^{\prime\prime}=\zeta\mathrm{d}\zeta/\mathrm{ d}\xi\). The right equation can be separated and integrated as \[\begin{split}\frac{1}{2}\zeta^{2}&=\frac{1}{2l^{2} (1+N_{0})}\int\left[\frac{\mathrm{d}\lambda}{\mathrm{d}\xi}+\frac{2\chi(\xi) \lambda(\xi)}{1+N_{0}}\right]\mathrm{d}\xi+c_{1}\\ &\Rightarrow\frac{\mathrm{d}\xi}{\mathrm{d}X}=\pm\frac{1}{l \sqrt{1+N_{0}}}\left(\int\left[\frac{\mathrm{d}\lambda}{\mathrm{d}\xi}+\frac{2 \chi(\xi)\lambda(\xi)}{1+N_{0}}\right]\mathrm{d}\xi+c_{1}\right)^{1/2}.\end{split} \tag{4.32}\] This first-order ODE can be separated and solved for \(\xi=\arg[X(\xi)]\), where \[X(\xi)=\pm l\sqrt{1+N_{0}}\int\frac{\mathrm{d}\xi}{\{\int[\mathrm{d}\lambda/ \mathrm{d}\xi+2\chi(\xi)\lambda(\xi)/(1+N_{0})]\,\mathrm{d}\xi+c_{1}\}^{1/2}}+ c_{2}. \tag{4.33}\] Integration constants are \(c_{1}\) and \(c_{2}\), determined by boundary conditions. Now allow arbitrary \(\bar{K}(X)\) but restrict \(\chi=0\) (e.g., \(G=G_{0}\)). Assume \(\lambda(\xi)\) is quadratic such that \(\dot{\lambda}=2\omega_{0}+2\omega_{1}\xi\). Now (4.30) is linear: \[\frac{\mathrm{d}^{2}\xi}{\mathrm{d}X^{2}}-\frac{\omega_{1}}{l^{2}(1+N_{0})} \xi=\frac{1}{l^{2}(1+N_{0})}\left(\omega_{0}-\frac{\mathrm{d}\bar{K}}{\mathrm{ d}X}\right). \tag{4.34}\] This ODE is non-homogeneous but has constant coefficients. Assume \(\omega_{1}>0\) and \(N_{0}>-1\). Then \[\xi(X)=c_{1}\exp\left[(X/l)\sqrt{\omega_{1}/(1+N_{0})}\right]+c_{2}\exp\left[ -(X/l)\sqrt{\omega_{1}/(1+N_{0})}\right]+\xi_{\mathrm{p}}(X), \tag{4.35}\] where \(c_{1}\) and \(c_{2}\) are new constants and \(\xi_{\mathrm{p}}\) is the particular solution from \(\omega_{0}\) and \(\bar{K}(X)=l\hat{K}(X)\). ### Constitutive model The framework is applied to a strip of skin loaded in tension along the \(X\)-direction. **Remark 4.4.1**.: A 1-D theory cannot distinguish between uniaxial strain or uniaxial stress conditions, nor can it account for anisotropy. 
Thus, parameters entering the model (e.g., \(\mu_{0}\), \(\gamma_{0}\)) are particular to those loading conditions and material orientations from experiments to which they are calibrated (e.g., uniaxial stress along a preferred fiber direction). The nonlinear elastic potential of SS4.4.2 specializes a 3-D model [71, 82, 83, 92] to 1-D. The internal structure variable \(\xi=D/l\) accounts for local rearrangements that lead to softening and degradation under tensile load [72, 73, 74, 77]: fiber sliding, pull-out, and breakage of collagen fibers, as well as rupture of the elastin fibers and ground matrix. **Remark 4.4.2**.: Specifically, \(D\) is a representative microscopic sliding or separation distance among microstructure constituents, and \(l\) is the value of this distance at which the material can no longer support tensile load. In the context of cohesive theories of fracture [73, 112, 113], \(D\) can be interpreted as a crack opening displacement. **Remark 4.4.3**.: Some physics represented by the present novel theory, not addressed by nonlinear elastic-continuum damage [73, 90] or phase-field [95, 114] approaches, are summarized as follows. The Finslerian metrics \(G(\xi)=g(\xi)\) account for local rescaling of material and spatial manifolds \(\mathcal{M}\) and \(\mathfrak{m}\) due to microstructure changes (e.g, expansion due to tearing or cavitation). Nonlinear connection \(N_{0}\) rescales the quadratic contribution of the gradient of \(\xi\) to surface energy by a constant, and linear connection \(\hat{K}\) rescales the linear contribution of the gradient of \(\xi\) to surface energy by a continuous and differentiable function of \(X\), enabling a certain material heterogeneity. #### 4.4.1 Metrics From (2.16), (2.48), (3.10), (4.3), (4.4), and (4.10), the difference in squared lengths of line elements \(\mathrm{d}\boldsymbol{x}\) and \(\mathrm{d}\boldsymbol{X}\) is \[(|\mathrm{d}\boldsymbol{x}|^{2}-|\mathrm{d}\boldsymbol{X}|^{2})(C,\xi)=G(\xi )(C-1)\mathrm{d}X\,\mathrm{d}X. \tag{4.36}\] Herein, the metric is assigned an exponential form frequent in generalized Finsler geometry [7, 55] and Riemannian geometry [27, 30]: \[G(\xi)=\exp\left(\frac{2k}{r}\xi^{r}\right)\quad\Rightarrow\quad\chi(\xi)= \frac{\dot{G}}{2G}=\frac{\dot{g}}{2g}=k\xi^{r-1}. \tag{4.37}\] For \(\xi\in[0,1]\), two constants are \(k\), which is positive for expansion, and \(r>0\). **Remark 4.4.4**.: Local regions of \(\mathcal{M}\) at \(X\) and \(\mathfrak{m}\) at \(x=\varphi(X)\) are rescaled isometrically by \(G(\xi(X))\). Physically, this rescaling arises from changes in structure associated with degradation, to which measure \(\frac{1}{2}\ln G(\xi)\) is interpreted as a contributor to remnant strain. For Riemannian metrics, \(G=\bar{G}=\bar{g}=g=1\), in which case (4.36) is independent of \(\xi\) and this remnant strain always vanishes. The ratio of constants is determined by the remnant strain contribution at failure: \(\hat{\epsilon}=\frac{k}{r}=\frac{1}{2}\ln G(1)\). Since \(\xi\in[0,1]\), smaller \(r\) at fixed \(\frac{k}{r}\) gives a sharper increase in \(\frac{1}{2}\ln G\) versus \(\xi\). Values of \(k\) and \(r\) are calibrated to data in SS4.5; choices of \(N_{0}\) and \(\bar{K}\) are explored parametrically therein. Nonlinear connection \(N_{0}=\mathrm{constant}\) and linear connection \(\hat{K}(X)=\bar{K}(X)/l\) affect the contribution of state gradient \(\xi^{\prime}\) to surface energy \(\iota\) via (4.13) and (4.14). 
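As a small numerical aside, the metric prescription (4.37) and its remnant-strain interpretation are easy to tabulate. The Python fragment below is purely illustrative and not part of the model; the values of \(k\) and \(r\) anticipate the baseline calibration collected later in Table 1 and are assumptions at this point.

```python
import numpy as np

# Finsler metric G(xi) = exp(2k/r * xi^r) of (4.37), the Cartan-type measure
# chi(xi) = k * xi^(r-1), and the remnant-strain contribution (1/2) ln G.
# k and r are illustrative values (cf. the baseline calibration in Table 1).
k, r = 0.2, 2.0

xi = np.linspace(0.0, 1.0, 6)            # order parameter D/l in [0, 1]
G = np.exp(2.0 * k / r * xi**r)          # metric component, (4.37)
chi = k * xi**(r - 1.0)                  # chi = G'/(2G), cf. (4.37)
remnant = 0.5 * np.log(G)                # remnant-strain measure (1/2) ln G

for x, g, c, e in zip(xi, G, chi, remnant):
    print(f"xi={x:.1f}  G={g:.4f}  chi={c:.3f}  (1/2)lnG={e:.4f}")
# At xi = 1 the remnant strain equals k/r, i.e., 0.1 for these illustrative values.
```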
Constraint \(N_{0}>-1\) is applied to avoid model singularities and encompass trivial choice \(N_{0}=0\). The value of \(N_{0}\) uniformly scales the contribution of \((\xi^{\prime})^{2}\) to \(\iota\) and \(\psi\). Function \(\hat{K}\) scales, in a possibly heterogeneous way, the contribution of \(\xi^{\prime}\) to \(\iota\) and \(\psi\). Even when \(\xi^{\prime}\) vanishes, \(N_{0}\) and \(\bar{K}\) can affect solutions. #### 4.4.2 Nonlinear elasticity Strain energy density \(W\) in (4.13) is dictated by the normalized (dimensionless) function \(w(C)\): \[w(C)=(\sqrt{C}-1)^{2}+\frac{a_{1}}{2b_{1}}\left[\exp\{b_{1}(C-1)^{2}\}-1 \right], \tag{4.38}\] where dimensionless constants are \(a_{1}\geq 0\) and \(b_{1}>0\), and \(\mu_{0}>0\) is enforced along with \(\Gamma_{0}>0\) in (4.13). This adapts prior models for collagenous tissues [71, 82, 83, 92] to the 1-D case. The first term on the right, linear in \(C\), accounts for the ground matrix and elastin. The second (exponential) term accounts for the collagen fibers, which, in the absence of damage processes, stiffen significantly at large \(C\). Such stiffening is dominated by the parameter \(b_{1}\), whereas \(a_{1}\) controls the fiber stiffness at small stretch \(\sqrt{C}\approx 1\)[71]. The elastic degradation function \(y(\xi)\) and independent energy contribution \(\lambda(\xi)\) in (4.13) are standard from phase-field theories [95, 114], where \(\vartheta\in[0,\infty)\) is a constant with \(\vartheta=2\) typical for brittle fracture and \(\vartheta=0\mapsto y=1\) for purely elastic response: \[y(\xi)=(1-\xi)^{\vartheta},\quad\dot{y}(\xi)=-\vartheta(1-\xi)^{\vartheta-1} ;\qquad\lambda(\xi)=\xi^{2},\quad\dot{\lambda}(\xi)=2\xi. \tag{4.39}\] When \(\vartheta>0\), \(y(1)=0\): no strain energy \(W\) or tensile load \(P\) are supported at \(X\) when \(D(X)=l\). Verification of (4.15) for prescriptions (4.38) and (4.39) is straightforward [81, 82]. Stress \(P\) conjugate to \(F=\sqrt{C}\) and force \(Q\) conjugate to \(D=l\xi\) are, from (4.16), (4.17), (4.38), and (4.39): \[P(C,\xi)=\mu_{0}(1-\xi)^{\vartheta}\left[(\sqrt{C}-1)+a_{1}\sqrt{C}(C-1){\rm exp }\{b_{1}(C-1)^{2}\}\right], \tag{4.40}\] \[Q(C,\xi)=\frac{2\Gamma_{0}}{l^{2}}\left[\xi-\frac{A_{0}\vartheta}{2}(1-\xi)^{ \vartheta-1}\left((\sqrt{C}-1)^{2}+\frac{a_{1}}{2b_{1}}\left[{\rm exp}\{b_{1} (C-1)^{2}\}-1\right]\right)\right]. \tag{4.41}\] **Remark 4.4.5**.: Ideal elasticity (i.e., no structure-mediated metric variation or degradation), is obtained when \(k=0\Rightarrow G=1\Rightarrow\chi=0\), \(\vartheta=0\leftrightarrow y=1\Rightarrow\dot{y}=0\), and \(\bar{K}^{\prime}=0\). In this case, as \(\dot{\lambda}(0)=0\) by (4.39), trivial solutions to (4.21) and (4.23) are \(P(X)=P_{0}=\) constant, \(\xi(X)=0\,\forall\,X\in\mathscr{M}\). ### Specific solutions Inputs to the model are nine constants \(l>0\), \(k\), \(r>0\), \(N_{0}>-1\), \(\mu_{0}>0\), \(a_{1}\geq 0\), \(b_{1}>0\), \(\vartheta\geq 0\), \(\Upsilon_{0}>0\), and the function \(\bar{K}(X)\). These are evaluated for stretching and tearing of skin [73, 74, 113] by applying the constitutive model of SS4.4 to the general solutions derived in SS4.3. #### 4.5.1 Homogeneous fields Here the skin specimen is assumed to degrade homogeneously in a gauge section of initial length \(2L_{0}\) (i.e., diffuse damage), an idealization fairly characteristic of certain experiments [64, 68, 72, 74, 87]. Per SS4.3.1, assume deformation control, with \(F=F_{\rm H}=\sqrt{C_{\rm H}}\geq 1\) increased incrementally from unity. 
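This incremental, deformation-controlled procedure can be sketched numerically. The Python fragment below is a minimal illustration rather than part of the theory: it assumes SciPy is available, adopts the baseline 1-D parameter values collected later in Table 1 (an assumption here), computes \(A_{0}\) from its definition in (4.17), solves the homogeneous micro-momentum balance (4.28) for \(\xi_{\rm H}\) at each imposed stretch by bracketed root finding, and evaluates the stress from (4.40).

```python
import numpy as np
from scipy.optimize import brentq

# Baseline-style 1-D parameters (cf. Table 1); units: mu0 in N/mm^2, l in mm,
# Ups0 in mJ/mm^2.  A0 = mu0*l/(2*Ups0) per (4.17).
mu0, l, Ups0 = 0.2, 0.04, 0.47
a1, b1 = 2.8, 0.055
k, r, theta = 0.2, 2.0, 2.0              # metric scaling and degradation exponent
N0, K0p = 0.0, 0.0                       # connection constants (baseline choices)
A0 = mu0 * l / (2.0 * Ups0)

def w(C):                                # dimensionless strain energy, (4.38)
    return (np.sqrt(C) - 1.0)**2 + a1/(2.0*b1)*(np.exp(b1*(C - 1.0)**2) - 1.0)

def residual(xi, C):                     # homogeneous micro-momentum balance, (4.28)
    chi = k * xi**(r - 1.0)
    lhs = -xi - chi*xi**2/(1.0 + N0) + K0p
    rhs = 0.5*A0*w(C)*(-theta*(1.0 - xi)**(theta - 1.0)
                       + 2.0*chi*(1.0 - xi)**theta/(1.0 + N0))
    return lhs - rhs

def stress(C, xi):                       # first Piola-Kirchhoff stress, (4.40)
    rC = np.sqrt(C)
    return mu0*(1.0 - xi)**theta*((rC - 1.0) + a1*rC*(C - 1.0)*np.exp(b1*(C - 1.0)**2))

for stretch in np.linspace(1.05, 2.0, 20):   # deformation control, F = sqrt(C)
    C = stretch**2
    xiH = brentq(residual, 0.0, 1.0 - 1e-9, args=(C,))
    print(f"sqrt(C)={stretch:.3f}  xi_H={xiH:.4f}  P={stress(C, xiH):.4f} MPa")
```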
The analytical solution for \(\xi=\xi_{\rm H}\) is then the implicit solution of (4.28) upon substitution of (4.37), (4.38) and (4.39), here for \(\vartheta>0\): \[\begin{split}\xi_{\rm H}+[k/(1+N_{0})]&\xi_{\rm H }^{1+r}=\tfrac{1}{2}A_{0}\vartheta(1-\xi_{\rm H})^{\vartheta-1}\{(\sqrt{C_{ \rm H}}-1)^{2}+[a_{1}/(2b_{1})]\\ &\times\left[{\rm exp}\{b_{1}(C_{\rm H}-1)^{2}\}-1\right]\}\{1-2 k\xi_{\rm H}^{r-1}(1-\xi_{\rm H})/[(1+N_{0})\vartheta]\}+K_{0}^{\prime}. \end{split} \tag{4.42}\] This dimensionless solution does not depend on \(\mu_{0}\), \(\Upsilon_{0}\), or \(l\) individually, but only on dimensionless ratio \(A_{0}=\tfrac{\mu_{0}l}{2\Gamma_{0}}\). However, stress \(P_{\rm H}=P(C_{\rm H},\xi_{\rm H})\) is found from (4.40), which does depend on \(\mu_{0}\). Stress \(P\) is shown in Fig. 2(a), first assuming \(N_{0}=0\) and \(K_{0}^{\prime}=0\) for simplicity. The Finsler model, with \(A_{0}=8.5\times 10^{-2}\) corresponding to baseline parameters given in Table 1, successfully matches experimental1 data [74]. The value of \(\mu_{0}\) is comparable to the low-stretch tensile modulus in some experiments [71, 75], acknowledging significant variability in the literature. Footnote 1: Stretch corresponding to Fig. 5(e) in experimental work [74] is defined as engineering strain plus 1.2 in Fig. 2(a) of §4.5.1 and Fig. 4(a) of §5.5.1 to account for pre-stress (\(\approx 0.7\) MPa) and pre-strain (\(\approx 0.2\)), so \(\sqrt{C}=1\) consistently for stress-free reference states among models and experiments. Stress-free states at null strain are consistent with data in Fig. 3(a) of that work [74]. Alternatively, \(2\frac{\sigma_{0}}{\mu_{0}}(\sqrt{C}-1)\), with \(\sigma_{0}=\text{constant}\), can be added to \(w\) of (4.38) giving a pre-stress of \(P_{(C=1,\xi=0)}=\sigma_{0}\) to fit data with pre-stress; this, however, would require relaxation of the second of (4.15). **Remark 4.5.1**.: The ideal elastic solution (\(\xi=0\)) is shown for comparison. Excluding structure evolution corresponding to collagen fiber rearrangements, sliding, and breakage, the model is too stiff relative to this data for which such microscopic mechanisms have been observed [74]. The ideal elastic model is unable to replicate the linearizing, softening, and failure mechanisms with increasing stretch \(\sqrt{C}\) reported in experiments on skin and other soft tissues [63, 64, 68, 74, 87]. In Fig. 2(b), effects of \(\vartheta\) on \(P\) are revealed for \(\hat{\epsilon}=0.1\), \(r=2\), \(N_{0}=0\), and \(K_{0}^{\prime}=0\), noting \(\vartheta=0\) produces the ideal nonlinear elastic solution \(\xi_{\text{H}}=0\) in (4.42). Peak stress increases with decreasing \(\vartheta\); the usual choice from phase-field theory \(\vartheta=2\) provides the close agreement with data in Fig. 2(a). In Fig. 2(c), effects of Finsler metric scaling factors \(\hat{\epsilon}=\frac{k}{r}\) and \(r\) on stress \(P\) are demonstrated, where at fixed \(r\), peak stress increases (decreases) with increasing (decreasing) \(\hat{\epsilon}\) and \(k\). Baseline choices \(\hat{\epsilon}=0.1\) and \(r=2\) furnish agreement with experiment in Fig. 2(a). A remnant strain of \(0.1\) is the same order of magnitude observed in cyclic loading experiments [72, 78]. Complementary effects on evolution of structure versus stretch are shown in Fig. 2(e): modest changes in \(\xi\) produce significant changes in \(P\). In Fig. 
2(d), effects of connection coefficients \(N_{0}\) and \(K_{0}^{\prime}\) are revealed, holding material parameters at their baseline values of Table 1. For this homogeneous problem, maximum \(P\) decreases with increasing \(N_{0}\) and \(K_{0}^{\prime}\). Corresponding evolution of \(\xi\) is shown in Fig. 2(f). When \(K_{0}^{\prime}<0\), a viable solution \(\xi_{\text{H}}\in[0,1]\) exists only for \(\sqrt{C}>1\).

Figure 2: Extension and tearing of skin for imposed axial stretch ratio \(\sqrt{C}\), 1-D model: (a) stress \(P\) comparison with data [74] (see text §4.5.1 for definition of experimental stretch ratio) of Finsler model (baseline) and ideal nonlinear elasticity (null structure change) (b) effect on stress \(P\) of energy degradation exponent \(\vartheta\) with \(\hat{\epsilon}=0.1\), \(r=2\), \(N_{0}=0\), and \(K_{0}^{\prime}=0\) (c) effect on stress \(P\) of Finsler metric scaling \(\hat{\epsilon}=\frac{k}{r}\) and \(r\) with \(\vartheta=2\), \(N_{0}=0\), and \(K_{0}^{\prime}=0\) (d) effect on stress \(P\) of nonlinear connection \(N_{0}\) and linear connection \(K_{0}^{\prime}\) with \(\vartheta=2\), \(\hat{\epsilon}=0.1\), and \(r=2\) (e) effect on internal structure \(\xi=D/l\) of Finsler metric scaling \(\hat{\epsilon}=\frac{k}{r}\) and \(r\) with \(\vartheta=2\), \(N_{0}=0\), and \(K_{0}^{\prime}=0\) (f) effect on internal structure \(\xi=D/l\) of nonlinear connection \(N_{0}\) and linear connection \(K_{0}^{\prime}\) with \(\vartheta=2\), \(\hat{\epsilon}=0.1\), and \(r=2\).

The total energy per unit cross-sectional area of the specimen is \(\bar{\Psi}\), found upon integration of \(\psi(C_{\text{H}},\xi_{\text{H}})\) in (4.13) over \(\mathcal{M}\) with local volume element \(\text{d}V=\sqrt{G(\xi_{\text{H}})}\,\text{d}X\): \[\frac{\bar{\Psi}}{L_{0}}=\mu_{0}\left[\left(1-\xi_{\text{H}}\right)^{\vartheta}\{\left(\sqrt{C_{\text{H}}}-1\right)^{2}+\frac{a_{1}}{2b_{1}}\left[\exp\{b_{1}(C_{\text{H}}-1)^{2}\}-1\right]\}+\frac{\xi_{\text{H}}^{2}}{A_{0}}\right]\exp\left(\frac{k}{r}\xi_{\text{H}}^{r}\right). \tag{4.43}\]

#### 4.5.2 Stress-free states

The stress-free solutions of §4.3.2 are applied to evaluate the remaining unknown parameters \(l\) and \(\Upsilon_{0}\), given \(\mu_{0}\) and \(A_{0}\) from §4.5.1. Assume the specimen tears completely at its midpoint at \(X=0\), such that \(\xi(0)=1\). No load is supported anywhere, and only rigid body motion is possible at other locations \(X\) where \(\xi(X)>0\). Assume the specimen is clamped at its ends where it is gripped, such that \(\xi(-L_{0})=\xi(L_{0})=0\). Symmetry conditions \(\xi(-X)=\xi(X)\) are imposed, with \(\xi^{\prime}(0)\) discontinuous, such that a solution need be calculated only for the half-space \(X\in[0,L_{0}]\). First take \(\bar{K}=l\hat{K}=0\) so that (4.33) holds. Assume \(c_{1}=0\) corresponding to \(\xi^{\prime}=0\) where \(\xi=0\) since the anti-derivative in (4.32) vanishes at \(\xi=0\) when \(\lambda=\xi^{2},\chi=k\xi^{r-1},r>0\). It is verified a posteriori [55, 59, 62] that this closely approximates true boundary conditions \(\xi(\pm L_{0})=0\) as well as \(\xi^{\prime}(\pm L_{0})=0\) for \(L_{0}\gg l\). Then the physically valid (negative) root for the half-domain giving \(X\geq 0\) in (4.33) becomes, with (4.37) and (4.39), \[\frac{X(\xi)}{L_{0}}=-\frac{l}{L_{0}}\sqrt{1+N_{0}}\int_{z=1}^{z=\xi}\frac{\mathrm{d}z}{z\sqrt{1+2kz^{r}/[(1+N_{0})(2+r)]}}. \tag{4.44}\] The lower limit follows from \(X(1)=0\), obviating \(c_{2}\) in (4.33).
Analytical solution \(\xi=\arg X(\xi)\) is exact, but it is most easily evaluated by quadrature when \(k\) is nonzero, decrementing \(z\) from \(1\) to \(0\) in small negative steps. The profile of \(\xi(X)\) depends on \(X/L_{0}\) and \(l/L_{0}\), but not \(l\) or \(L_{0}\) individually.

**Remark 4.5.2**.: This new 1-D solution, (4.44), agrees with more specific solutions derived in past work: \(N_{0}=0\) and \(r=1\)[55, 56] with slight correction [59] and \(N_{0}=0\) and \(r=2\)[59].

Normalized surface energy per two-sided cross-sectional area, \(\bar{\gamma}\), is obtained by integration of \(\psi=\Lambda\) in (4.13) over \(\mathcal{M}\): \[\bar{\gamma}=\frac{1}{2\Upsilon_{0}}\int_{-L_{0}}^{L_{0}}\psi\sqrt{G}\,\mathrm{d}X=\frac{1}{2l}\int_{-L_{0}}^{L_{0}}\{\xi^{2}+(1+N_{0})l\xi^{\prime}[2\hat{K}+(1+N_{0})l\xi^{\prime}]\}\exp[(k/r)\xi^{r}]\,\mathrm{d}X. \tag{4.45}\] This energy likewise depends on \(l/L_{0}\) but not \(l\) or \(L_{0}\) individually. Baseline values of \(k\) and \(r\) are now taken from Table 1.

\begin{table}
\begin{tabular}{l l l r r} \hline Parameter & Units & Definition & Value (1-D) & Value (2-D) \\ \hline \(l\) & mm & length scale & 0.04 & 0.04 \\ \(k\) & \(\cdots\) & metric scaling factor & 0.2 & 0.2 \\ \(m\) & \(\cdots\) & metric scaling factor & \(\cdots\) & 0.3 \\ \(r\) & \(\cdots\) & metric scaling exponent & 2 & 2 \\ \(\mu_{0}\) & N/mm\({}^{2}\) & shear modulus (axial 1-D) & 0.2 & 0.2 \\ \(\kappa_{0}\) & N/mm\({}^{2}\) & bulk modulus (\(\kappa_{0}=k_{0}\mu_{0}\)) & \(\cdots\) & 1.2 \\ \(a_{1}\) & \(\cdots\) & nonlinear elastic constant & 2.8 & 2.8 \\ \(a_{2}\) & \(\cdots\) & nonlinear elastic constant & \(\cdots\) & 6 \\ \(b_{1}\) & \(\cdots\) & nonlinear elastic constant & 0.055 & 0.055 \\ \(b_{2}\) & \(\cdots\) & nonlinear elastic constant & \(\cdots\) & 0.17 \\ \(\vartheta\) & \(\cdots\) & degradation exponent & 2 & 2 \\ \(\varsigma\) & \(\cdots\) & degradation exponent & \(\cdots\) & 2 \\ \(\Upsilon_{0}\) & mJ/mm\({}^{2}\) & isotropic surface energy & 0.47 & 0.47 \\ \(\gamma_{\xi}\) & \(\cdots\) & anisotropic energy factor & \(\cdots\) & 1 \\ \(\gamma_{\eta}\) & \(\cdots\) & anisotropic energy factor & \(\cdots\) & 0.84 \\ \hline \end{tabular}
\end{table}
Table 1: Baseline model parameters for rabbit skin tissue: 1-D and 2-D theories

The solution (4.44) is shown for \(N_{0}=0\) and different \(l/L_{0}\) in Fig. 3(a). The smaller (larger) the regularization length ratio \(l/L_{0}\), the sharper (more diffuse) the zone centered at the midpoint of the domain over which prominent structure changes occur. Normalized energy density (4.45) is shown in Fig. 3(b) versus \(l/L_{0}\) for several \(N_{0}\). Increasing \(N_{0}\) increases this energy, as might be anticipated from (4.13) with (4.14) when \(\hat{K}=0\). A stress-free ruptured state is energetically favorable to a stressed homogeneous state (§4.5.1) from applied deformation \(C_{\rm H}\) when \(\bar{\Psi}>2\bar{\gamma}\Omega_{0}\), with \(\bar{\Psi}\) given by (4.43). Ratio \(\bar{\Psi}/(2\bar{\gamma}\Omega_{0})\) is shown in Fig. 3(c) versus \(\sqrt{C}=\sqrt{C_{\rm H}}\) with \(l/L_{0}=10^{-2}\) and several \(N_{0}\), recalling \(K_{0}^{\prime}=0\). Increasing \(N_{0}\) increases \(\bar{\gamma}\), reducing \(\bar{\Psi}/(2\bar{\gamma}\Omega_{0})\).
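The quadrature evaluation of (4.44) indicated above, together with the surface-energy integral (4.45), can be sketched in a few lines of Python. The fragment is illustrative only: it assumes the baseline \(k\) and \(r\) of Table 1, \(N_{0}=0\), \(\hat{K}=0\), symmetry about \(X=0\), and simple trapezoidal sums in place of a formal quadrature rule.

```python
import numpy as np

# Baseline parameters (Table 1) and regularization ratio; K_hat = 0 is assumed.
k, r, N0 = 0.2, 2.0, 0.0
lam = 1.0e-2                        # l / L0

# Quadrature for (4.44): decrement z from 1 toward 0 and accumulate X(z)/L0.
z = np.logspace(0.0, -8.0, 4001)    # z plays the role of xi, from 1 down to 1e-8
dXhat_dz = -lam*np.sqrt(1.0 + N0)/(z*np.sqrt(1.0 + 2.0*k*z**r/((1.0 + N0)*(2.0 + r))))
Xhat = np.concatenate(([0.0], np.cumsum(0.5*(dXhat_dz[1:] + dXhat_dz[:-1])*np.diff(z))))

# Normalized surface energy (4.45) on the half-domain, doubled by symmetry and
# divided by 2, i.e., (1/l) times the integral over [0, L0]; contributions from
# X beyond the computed grid are negligible since xi <= 1e-8 there.
dxi_dXhat = 1.0/dXhat_dz
integrand = (z**2 + (1.0 + N0)**2*lam**2*dxi_dXhat**2)*np.exp((k/r)*z**r)
gamma_bar = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(Xhat))/lam

print(f"profile computed down to xi = {z[-1]:.1e} at X/L0 = {Xhat[-1]:.3f}")
print(f"normalized surface energy gamma_bar ~ {gamma_bar:.3f}")
```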
For cases in Fig. 3(a), Fig. 3(b), and Fig. 3(c), \(\xi\left(\pm L_{0}\right)<10^{-8}\) and \(|l\xi^{\prime}(\pm L_{0})|<10^{-8}\) are observed for \(l/L_{0}\leq 0.03\), verifying \(c_{1}=0\) in (4.33) and (4.44) under this length constraint. The remaining parameters \(l\) and \(\Upsilon_{0}\) are now quantified. To match the measured energy release rate \(J_{\rm C}\) (i.e., toughness) of skin, \(2\bar{\gamma}\Omega_{0}\approx J_{\rm C}\). Take \(L_{0}=4\) mm, the span of specimens [74] whose data are represented in Fig. 2(a). Then \(l/L_{0}=10^{-2}\Rightarrow l=40\,\mu\)m is more than sufficiently small to adhere to the aforementioned boundary constraints (i.e., \(c_{1}=0\)) while providing a damage profile of intermediate diffusivity in Fig. 3(a). This value of \(l\) then gives \(\Upsilon_{0}=\frac{\mu_{0}l}{2A_{0}}=0.47\) kJ/m\({}^{2}\) (Table 1).

**Remark 4.5.3**.: Along with the choice \(N_{0}=0\), the Finsler model with full set of baseline parameters in Table 1 produces \(\bar{\gamma}\approx 1\) in Fig. 3(b) and \(2\bar{\gamma}\Omega_{0}=1.0\) kJ/m\({}^{2}\), in concurrence with experimental data: \(0.5\lesssim J_{\rm C}\lesssim 2.5\) kJ/m\({}^{2}\)[73, 99, 113]. Value \(l=40\,\mu\)m is between \(4\times\) and \(40\times\) the collagen fiber diameter [68, 69, 74].

Figure 3: Extension and tearing of skin, 1-D model: (a) stress-free solution for internal state profile (baseline parameters, \(N_{0}=0\)) (b) normalized surface energy for rupture versus regularization length (c) ratio of homogeneous energy to energy for stress-free localized rupture (d) stress-free solution, \(\hat{\epsilon}=0,l/L_{0}=10^{-2}\), heterogeneous connection \(\bar{K}(X)\).

Though not shown in Fig. 3(b), increasing \(\hat{\epsilon}=\frac{k}{r}\) from \(0.1\) to \(0.2\) at \(\vartheta=r=2\) with \(N_{0}=K_{0}^{\prime}=0\) and \(l/L_{0}=10^{-2}\) increases effective toughness to \(2\bar{\gamma}\Omega_{0}=1.02\) kJ/m\({}^{2}\). Under the same conditions, reducing \(\hat{\epsilon}\) to \(0\) diminishes the predicted toughness to \(2\bar{\gamma}\Omega_{0}=0.94\) kJ/m\({}^{2}\).

Finally, take \(k=0\) but permit nonzero \(\bar{K}(X)=l\hat{K}(X)\), such that (4.35) applies. As an example, let \(\bar{K}=-K_{0}^{\prime}l\cdot(1-X/L_{0})\) for \(X\in[0,L_{0}]\) and \(\bar{K}=K_{0}^{\prime}l\cdot(1+X/L_{0})\) for \(X\in[-L_{0},0)\). Boundary conditions \(\xi(0)=1\) and \(\xi(\pm L_{0})=0\) still apply, as does symmetry relation \(\xi(X)=\xi(-X)\). From (4.39), \(\omega_{1}=1\) and \(\omega_{0}=0\). For the whole domain \(X\in[-L_{0},L_{0}]\), \(\bar{K}^{\prime}=K_{0}^{\prime}l/L_{0}=\text{constant}\), and simply \(\xi_{\text{p}}=K_{0}^{\prime}l/L_{0}\). Then (4.35) gives \[\xi(X)=c_{1}\exp[X/\{l\sqrt{1+N_{0}}\}]+c_{2}\exp[-X/\{l\sqrt{1+N_{0}}\}]+K_{0}^{\prime}l/L_{0}, \tag{4.46}\] \[c_{1}=1-c_{2}-K_{0}^{\prime}l/L_{0}=\frac{-K_{0}^{\prime}l/L_{0}-[1-K_{0}^{\prime}l/L_{0}]\exp[-L_{0}/\{l\sqrt{1+N_{0}}\}]}{\exp[L_{0}/\{l\sqrt{1+N_{0}}\}]-\exp[-L_{0}/\{l\sqrt{1+N_{0}}\}]}.\] Profiles of \(\xi(X)\) are shown in Fig. 3(d) for \(K_{0}^{\prime}\geq 0\) with baseline \(l/L_{0}=10^{-2}\). Normalized surface energy \(\bar{\gamma}\) from (4.45) is reported in Fig. 3(d) for each case, recalling \(\hat{\epsilon}=k=0\) produces Riemannian (Euclidean) metric \(G=1\). Setting \(K_{0}^{\prime}>0\) increases \(\bar{\gamma}\) for this problem. Setting \(K_{0}^{\prime}<0\) reduces \(\bar{\gamma}\) and produces a physically invalid solution (not shown in Fig. 3(d)) in (4.46): \(\xi<0\) on part of \(\mathscr{M}\).
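A direct evaluation of (4.46) is equally straightforward. The short Python fragment below is illustrative only: the value of \(K_{0}^{\prime}\) is an arbitrary trial choice (a hypothetical input, not taken from the text), while \(L_{0}=4\) mm and \(l=40\,\mu\)m follow the calibration above; it confirms that \(\xi(0)=1\) and \(\xi(L_{0})=0\) hold by construction of \(c_{1}\) and \(c_{2}\).

```python
import numpy as np

# Evaluate the closed-form solution (4.46) on the half-domain X in [0, L0]
# for the piecewise-linear connection K_bar(X) of the text (k = 0 case).
l, L0, N0 = 0.04, 4.0, 0.0           # mm; L0 = 4 mm as quoted for the skin specimens
K0p = 0.5                            # trial value of K0'; illustrative only
a = l*np.sqrt(1.0 + N0)

xi_p = K0p*l/L0                      # particular solution (constant over the domain)
c1 = (-xi_p - (1.0 - xi_p)*np.exp(-L0/a))/(np.exp(L0/a) - np.exp(-L0/a))
c2 = 1.0 - c1 - xi_p                 # from the condition xi(0) = 1

X = np.linspace(0.0, L0, 9)
xi = c1*np.exp(X/a) + c2*np.exp(-X/a) + xi_p
for Xi, v in zip(X, xi):
    print(f"X = {Xi:5.2f} mm   xi = {v: .6f}")
# Expected: xi(0) = 1 and xi(L0) = 0, both exact up to roundoff by construction.
```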
## 5 Two-dimensional base manifold The framework of SS2 and SS3 is applied for \(n=2\): a 2-D base manifold \(\mathscr{M}\). In SS5.1, geometry and kinematics are presented. Governing equations are derived in SS5.2. Solutions are considered for general problem classes in SS5.3. Constitutive functions for an orthotropic 2-D patch of skin under planar deformations are assigned in SS5.4. Solutions for stretching and tearing follow in SS5.5. ### Geometry and kinematics Reference coordinates are Cartesian (orthogonal): \(\{X^{1},X^{2}\}\). Considered is a reference domain \(\{\mathscr{M}:X^{1}\in[-L_{0},L_{0}],X^{2}\in[-W_{0},W_{0}]\}\), where the total area relative to a Euclidean metric is \(4L_{0}W_{0}\), and boundary \(\partial\mathscr{M}\) is the edges \((X^{1},X^{2})=(\pm L_{0},\pm W_{0})\). The referential internal state vector has coordinates \(\{D^{1},D^{2}\}\), both with physical units of length. Spatial coordinates are Cartesian \(\{x^{1},x^{2}\}\) and \(\{d^{1},d^{2}\}\). A normalization constant (i.e., regularization length) is \(l\), with physically meaningful domain assumed as \(D^{A}\in[0,l]\) (\(A=1,2\)). With notation \(f(X,D)=f(X^{A},D^{B})\), dimensionless order parameters are, with (3.12) and (3.14) invoked, \[\xi(X)=\frac{D^{1}(X)}{l}=\frac{d^{1}(\varphi(X))}{l},\qquad\eta(X)=\frac{D^{ 2}(X)}{l}=\frac{d^{2}(\varphi(X))}{l},\qquad l>0. \tag{5.1}\] Physically meaningful domains are \(\xi\in[0,1]\) and \(\eta\in[0,1]\). For 2-D manifolds with Cartesian base coordinates \(\{X^{1},X^{2}\}\) and \(\{x^{1},x^{2}\}\), the following metrics apply from (3.17) and (3.18): \[\bar{G}_{AB}=\delta_{AB},\quad\bar{g}_{ab}=\delta_{ab};\quad G_{AB}(X,D)=\hat {G}_{AB}(D),\quad g_{ab}(x,d)=\hat{g}_{ab}(d). \tag{5.2}\] Herein, the following constraint is imposed: \[\hat{g}_{ab}(d(\varphi(X)))=\delta^{A}_{a}\delta^{B}_{b}\hat{G}_{AB}(D(X)) \leftrightarrow g_{ab}(\xi,\eta)=\delta^{A}_{a}\delta^{B}_{b}G_{AB}(\xi,\eta), \tag{5.3}\] making \(\mathfrak{m}\) and \(\mathcal{M}\) isometric when \(\phi^{a}(X)=\delta^{a}_{A}X^{A}+c^{a}_{0}\Leftrightarrow F^{a}_{A}=\delta^{a} _{A}\) regardless of \(\{\xi,\eta\}\) at \(x=\varphi(X)\). **Remark 5.1.1**.: Equation (5.3) may be removed in other settings to directly model residual stress (e.g., Appendix B), but all residual stresses are not necessarily eliminated with (5.3) in place. Though other non-trivial forms are admissible (e.g., SS4.1), assume nonlinear \(N^{A}_{B}\) and linear \(K^{A}_{BC}\) connections vanish: \[N^{A}_{B}=0\Rightarrow N^{a}_{b}=\delta^{a}_{A}N^{A}_{B}(F^{-1})^{B}_{b}=0, \qquad K^{A}_{BC}=0. \tag{5.4}\] The \(K^{a}_{bc}\) do not affect the governing equations to be solved later, so they are unspecified. Applying (3.15) and (5.1)-(5.4), \[\delta_{A}G_{BC}=\partial_{A}G_{BC}-N^{D}_{A}\bar{\partial}_{D}G_{BC}=0 \Rightarrow\Gamma^{A}_{BC}=0,\qquad\delta_{a}g_{bc}=0\Rightarrow\Gamma^{a}_{ bc}=0, \tag{5.5}\] \[\chi_{A}(\xi,\eta)=lC^{B}_{AB}(\xi,\eta)=l\bar{\partial}_{A}\{\ln\sqrt{G(\xi, \eta)}\};\quad l\bar{\partial}_{1}(\cdot)=\partial(\cdot)/\partial\xi,\quad l \bar{\partial}_{2}(\cdot)=\partial(\cdot)/\partial\eta. 
\tag{5.6}\] The deformation gradient, deformation tensor, Jacobian determinant, and director gradient are \[F^{a}_{A}=\frac{\partial\varphi^{a}}{\partial X^{A}},\quad C^{A}_{B}=G^{AC}g_{ bc}F^{b}_{B}F^{c}_{C}=G^{AC}F^{c}_{C}\delta^{F}_{c}G_{FE}\delta^{E}_{b}F^{b}_{B}, \quad J=\det(F^{a}_{A})=\sqrt{\det(C^{A}_{B})}, \tag{5.7}\] \[D^{A}_{|B}=\delta_{B}D^{A}+K^{A}_{BC}D^{C}=\partial_{B}D^{A};\qquad D^{1}_{|A} =l\partial_{A}\xi,\quad D^{2}_{|A}=l\partial_{A}\eta. \tag{5.8}\] Unless \(F^{a}_{A}\) and \(G_{AB}\) are diagonal, \(\mathbf{C}\) and \(\bar{\mathbf{C}}\) can differ. From (3.19) and (3.20), \[\bar{C}^{A}_{B}=\delta^{AC}\bar{C}_{CB}=\delta^{AC}\delta_{bc}F^{b}_{B}F^{c}_{ C},\qquad\bar{J}=\sqrt{\det(\bar{C}^{A}_{B})}=J. \tag{5.9}\] ### Governing equations A generic energy density is chosen and equilibrium equations are derived for the 2-D case of SS5.1. #### 5.2.1 Energy density For the present case, dependencies on \(D^{A}\) and \(D^{A}_{|B}\) are suitably represented by \((\xi,\eta)\) and \((\partial_{A}\xi,\partial_{A}\eta)\) of (5.1) and (5.8). The functional form of (3.38) is invoked without explicit \(X\) dependency, whereby \[\psi=\bar{\psi}(\bar{C}_{AB},\xi,\eta,\partial_{A}\xi,\partial_{A}\eta). \tag{5.10}\] Henceforth in SS5, the over-bar is dropped from \(\psi\) to lighten the notation. Denote by \(\mu_{0}\) a constant, later associated to a shear modulus, with units of energy density. **Remark 5.2.1**.: For comparison with experiments in ambient 3-space, \(\mu_{0}\) has units of energy per unit 3-D volume, so \(\Psi=\int_{\mathcal{M}}\psi\,d\Omega\) is energy per unit thickness normal to the \(X^{1}\) and \(X^{2}\). Denote by \(T_{0}\) a constant related to surface energy with units of energy per unit (e.g., 2-D fixed cross-sectional) area, and by \(\gamma_{\xi}\) and \(\gamma_{\eta}\) two dimensionless constants. Let \(W\) be strain energy density and \(\Lambda\) energy density associated with microstructure. Denote by \(w\) a dimensionless strain energy function (embedding possible degradation), \(\lambda\) and \(\nu\) dimensionless phase energy functions, \(\iota\) a dimensionless gradient energy function assigned a sum of quadratic forms, and \(\nabla_{0}(\cdot)=\frac{\partial}{\partial\mathbf{X}}(\cdot)\) the partial material gradient. Free energy (5.10) is prescribed in intermediate functional form as \[\begin{split}\psi(\bar{\mathbf{C}},\xi,\eta,\nabla_{0}\xi,\nabla_{0} \eta)&=W(\bar{\mathbf{C}},\xi,\eta)+\Lambda(\xi,\eta,\nabla_{0}\xi, \nabla_{0}\eta)\\ &=\frac{\mu_{0}}{2}w(\bar{\mathbf{C}},\xi,\eta)+\frac{\Upsilon_{0}}{l} [\gamma_{\xi}\lambda(\xi)+\gamma_{\eta}\nu(\eta)+\iota(\nabla_{0}\xi,\nabla_{0 }\eta)],\end{split} \tag{5.11}\] \[\iota=\gamma_{\xi}|\Gamma\nabla_{0}\xi|^{2}+\gamma_{\eta}l^{2}|\Gamma\nabla_{0 }\eta|^{2}=l^{2}\delta^{AB}(\gamma_{\xi}\partial_{A}\xi\partial_{B}\xi+\gamma_ {\eta}\partial_{A}\eta\partial_{B}\eta). \tag{5.12}\] Note \(\iota(0,0)=0\). Therefore, for null ground-state energy density \(\psi\) and stress \(P_{a}^{A}\), \[w(\delta_{AB},\xi,\eta)=0,\qquad\frac{\partial w}{\partial\bar{\mathcal{C}}_{ AB}}(\delta_{AB},\xi,\eta)=0;\qquad\lambda(0)=\nu(0)=0. \tag{5.13}\] Convexity and material symmetry are addressed in SS5.4.2. 
Thermodynamic forces of (3.29) are, applying (3.40), \[P_{a}^{A}=\frac{\partial\psi}{\partial F_{A}^{a}}=2\delta_{ab}F_{B}^{b}\frac{ \partial\psi}{\partial\bar{\mathcal{C}}_{AB}}=\mu_{0}\delta_{ab}F_{B}^{b} \frac{\partial w}{\partial\bar{\mathcal{C}}_{AB}}, \tag{5.14}\] \[Q_{1}=\frac{1}{l}\frac{\partial\psi}{\partial\xi}=\frac{\Upsilon_{0}}{l^{2}} \left(A_{0}\frac{\partial w}{\partial\xi}+\gamma_{\xi}\frac{\mathrm{d}\lambda }{\mathrm{d}\xi}\right),\quad Q_{2}=\frac{1}{l}\frac{\partial\psi}{\partial \eta}=\frac{\Upsilon_{0}}{l^{2}}\left(A_{0}\frac{\partial w}{\partial\eta}+ \gamma_{\eta}\frac{\mathrm{d}\nu}{\mathrm{d}\eta}\right);\quad A_{0}=\frac{\mu _{0}l}{2\Upsilon_{0}}, \tag{5.15}\] \[Z_{1}^{A}=\frac{\partial\psi}{\partial D_{|A}^{1}}=\frac{\Upsilon_{0}}{l^{2}} \frac{\partial\iota}{\partial(\partial_{A}\xi)}=2\Upsilon_{0}\gamma_{\xi} \delta^{AB}\partial_{B}\xi,\quad Z_{2}^{A}=\frac{\partial\psi}{\partial D_{|A }^{2}}=\frac{\Upsilon_{0}}{l^{2}}\frac{\partial\iota}{\partial(\partial_{A} \eta)}=2\Upsilon_{0}\gamma_{\eta}\delta^{AB}\partial_{B}\eta\,. \tag{5.16}\] The source term in (3.22) manifests from changes in energy proportional to changes of the local referential volume form (e.g., local volume changes from damage, treated analogously to an energy source from tissue growth (Appendix B)): \[R_{A}=\beta\psi\bar{\partial}_{A}(\ln\sqrt{G})=\frac{\beta}{l}\psi\chi_{A}, \qquad(\beta=\text{constant};A=1,2). \tag{5.17}\] #### 5.2.2 Linear momentum Linear momentum balance (3.31) or (3.34) is, invoking relations in SS5.1 and SS5.2.1, \[\begin{split}\mu_{0}\delta_{ab}&\left[\frac{ \partial^{2}\varphi^{b}}{\partial X^{A}\partial X^{B}}\frac{\partial w}{ \partial\bar{\mathcal{C}}_{AB}}+\frac{\partial\varphi^{b}}{\partial X^{B}} \left(\frac{\partial^{2}w}{\partial\bar{\mathcal{C}}_{AB}\partial X^{A}}+ \frac{\partial^{2}w}{\partial\bar{\mathcal{C}}_{AB}\partial\xi}\frac{\partial \xi}{\partial X^{A}}+\frac{\partial^{2}w}{\partial\bar{\mathcal{C}}_{AB} \partial\eta}\frac{\partial\eta}{\partial X^{A}}\right)\right]\\ &=-\mu_{0}\delta_{ab}\frac{\partial\varphi^{b}}{\partial X^{B}} \frac{\partial w}{\partial\bar{\mathcal{C}}_{AB}}\left[\frac{\partial}{\partial \xi}\left(\ln\sqrt{G}\right)\frac{\partial\xi}{\partial X^{A}}+\frac{\partial} {\partial\eta}\left(\ln\sqrt{G}\right)\frac{\partial\eta}{\partial X^{A}} \right].\end{split} \tag{5.18}\] **Remark 5.2.2**.: For nonzero \(\mu_{0}\), (5.18) is two coupled nonlinear PDEs (\(a=1,2\)) in four field variables: \(\varphi^{1}(X)\), \(\varphi^{2}(X)\), \(\xi(X)\), and \(\eta(X)\). 
#### 5.2.3 Micro-momentum State-space equilibrium (3.32) or (3.35) is, using relations of SS5.1 and SS5.2.1 and dividing by \(2\Gamma_{0}\), the two equations \[\begin{split}&\gamma_{5}\delta^{AB}\frac{\partial^{2}\xi}{ \partial X^{A}\partial X^{B}}+\left(1-\frac{\alpha-\beta}{2}\right)\gamma_{5} \delta^{AB}\frac{\partial}{\partial\xi}\left(\ln\sqrt{G}\right)\frac{\partial \xi}{\partial X^{A}}\frac{\partial\xi}{\partial X^{B}}-\frac{\gamma_{5}}{2l^{2 }}\frac{\mathrm{d}\lambda}{\mathrm{d}\xi}\\ &\quad+\gamma_{5}\delta^{AB}\frac{\partial}{\partial\eta}\left( \ln\sqrt{G}\right)\frac{\partial\xi}{\partial X^{A}}\frac{\partial\eta}{ \partial X^{B}}-\left(\frac{\alpha-\beta}{2}\right)\gamma_{\eta}\delta^{AB} \frac{\partial}{\partial\xi}\left(\ln\sqrt{G}\right)\frac{\partial\eta}{ \partial X^{A}}\frac{\partial\eta}{\partial X^{B}}\\ &\quad-\left(\frac{\alpha-\beta}{2l^{2}}\right)\frac{\partial}{ \partial\xi}\left(\ln\sqrt{G}\right)\left[\gamma_{5}\lambda+\gamma_{\eta}\nu \right]=\frac{A_{0}}{2l^{2}}\left[\frac{\partial w}{\partial\xi}+(\alpha- \beta)\frac{\partial}{\partial\xi}\left(\ln\sqrt{G}\right)w\right],\end{split} \tag{5.19}\] \[\begin{split}&\gamma_{\eta}\delta^{AB}\frac{\partial^{2}\eta}{ \partial X^{A}\partial X^{B}}+\left(1-\frac{\alpha-\beta}{2}\right)\gamma_{ \eta}\delta^{AB}\frac{\partial}{\partial\eta}\left(\ln\sqrt{G}\right)\frac{ \partial\eta}{\partial X^{A}}\frac{\partial\eta}{\partial X^{B}}-\frac{\gamma _{\eta}}{2l^{2}}\frac{\mathrm{d}\nu}{\mathrm{d}\eta}\\ &\quad+\gamma_{\eta}\delta^{AB}\frac{\partial}{\partial\xi} \left(\ln\sqrt{G}\right)\frac{\partial\eta}{\partial X^{A}}\frac{\partial\xi} {\partial X^{B}}-\left(\frac{\alpha-\beta}{2}\right)\gamma_{5}\delta^{AB} \frac{\partial}{\partial\eta}\left(\ln\sqrt{G}\right)\frac{\partial\xi}{ \partial X^{A}}\frac{\partial\xi}{\partial X^{B}}\\ &\quad-\left(\frac{\alpha-\beta}{2l^{2}}\right)\frac{\partial}{ \partial\eta}\left(\ln\sqrt{G}\right)\left[\gamma_{5}\lambda+\gamma_{\eta}\nu \right]=\frac{A_{0}}{2l^{2}}\left[\frac{\partial w}{\partial\eta}+(\alpha- \beta)\frac{\partial}{\partial\eta}\left(\ln\sqrt{G}\right)w\right].\end{split} \tag{5.20}\] **Remark 5.2.3**.: For nonzero \(\Gamma_{0}\), (5.19) and (5.20) are two coupled nonlinear PDEs in four field variables: \(\varphi^{1}(X)\), \(\varphi^{2}(X)\), \(\xi(X)\), and \(\eta(X)\), where derivatives of \(\varphi^{1}(X)\) and \(\varphi^{2}(X)\) enter \(w\) on the right sides via \(\bar{C}_{AB}=\partial_{A}\varphi^{a}\delta_{ab}\partial_{B}\varphi^{b}\). For the special case \(\Gamma_{0}=0\), the left sides of (5.19) and (5.20) vanish, whereas for \(\mu_{0}=0\), the right sides vanish. ### General solutions #### 5.3.1 Homogeneous fields Examine cases for which \(\xi(X)\to\xi_{\mathrm{H}}=\mathrm{constant}\) and \(\eta(X)\to\eta_{\mathrm{H}}=\mathrm{constant}\) at all points \(X\in\mathcal{M}\); the constants may differ: \(\xi_{\mathrm{H}}\neq\eta_{\mathrm{H}}\) in general. Apply the notation \(f_{\mathrm{H}}(X)=f(X,\xi_{\mathrm{H}},\eta_{\mathrm{H}})\). Restrict \(\mu_{0}>0\). Then (5.14) and (5.18) reduce to \[\frac{\partial P_{a}^{A}}{\partial X^{A}}=\mu_{0}\delta_{ab}\left[\frac{ \partial^{2}\varphi^{b}}{\partial X^{A}\partial X^{B}}\frac{\partial w}{ \partial\bar{C}_{AB}}+\frac{\partial\varphi^{b}}{\partial X^{B}}\frac{ \partial^{2}w}{\partial\bar{C}_{AB}\partial X^{A}}\right]=0\Rightarrow(P_{ \mathrm{H}})_{a}^{A}=\frac{\mu_{0}}{2}\left(\frac{\partial w}{\partial F_{A}^{a }}\right)_{\mathrm{H}}=\mathrm{constant}. 
\tag{5.21}\] This should be satisfied for any homogeneous \(F_{A}^{a}=(F_{\mathrm{H}})_{A}^{a}\) for which \(\partial^{2}\varphi^{a}/\partial X^{A}\partial X^{B}=0\). Micro-momentum conservation laws (5.19) and (5.20) become \[-\gamma_{5}\frac{\mathrm{d}\lambda}{\mathrm{d}\xi}-(\alpha-\beta)\frac{\partial }{\partial\xi}\left(\ln\sqrt{G}\right)\left[\gamma_{5}\lambda+\gamma_{\eta}\nu \right]=A_{0}\left[\frac{\partial w}{\partial\xi}+(\alpha-\beta)\frac{\partial }{\partial\xi}\left(\ln\sqrt{G}\right)w\right], \tag{5.22}\] \[-\gamma_{\eta}\frac{\mathrm{d}\nu}{\mathrm{d}\eta}-(\alpha-\beta)\frac{\partial }{\partial\eta}\left(\ln\sqrt{G}\right)\left[\gamma_{5}\lambda+\gamma_{\eta}\nu \right]=A_{0}\left[\frac{\partial w}{\partial\eta}+(\alpha-\beta)\frac{ \partial}{\partial\eta}\left(\ln\sqrt{G}\right)w\right], \tag{5.23}\] wherein \(\lambda=\lambda_{\rm H}\), \(\nu=\nu_{\rm H}\), \((\frac{\partial}{\partial\xi}\ln\sqrt{G})_{\rm H}\), and \((\frac{\partial}{\partial\eta}\ln\sqrt{G})_{\rm H}\) are all algebraic functions of \((\xi_{\rm H},\eta_{\rm H})\), while \(w=w_{\rm H}\), \((\frac{\partial}{\partial\xi}w)_{\rm H}\), and \((\frac{\partial}{\partial\eta}w)_{\rm H}\) are algebraic functions of of \((\xi_{\rm H},\eta_{\rm H},(F_{\rm H})_{A}^{a})\). **Remark 5.3.1**.: Homogeneous equilibrium is satisfied by the six algebraic equations (5.21) (\(a,A=1,2\)), (5.22), and (5.23) in ten unknowns \((P_{\rm H})_{a}^{A}\), \((F_{\rm H})_{A}^{a}\), \(\xi_{\rm H}\), \(\eta_{\rm H}\). Given \((P_{\rm H})_{a}^{A}\) or \((F_{\rm H})_{A}^{a}\) as mechanical loading, the remaining six unknowns can be obtained from a simultaneous solution. If \((F_{\rm H})_{A}^{a}\) is imposed, (5.22) and (5.23) are two equations in \(\xi_{\rm H}\), \(\eta_{\rm H}\). Then (5.21) yields the remaining \((P_{\rm H})_{a}^{A}\). **Remark 5.3.2**.: Essential boundary conditions for homogeneous states are \(\xi=\xi_{\rm H}\) and \(\eta=\eta_{\rm H}\), both \(\forall X\in\partial\mathcal{M}\). Since \(\xi\) and \(\eta\) are constants, \(Z_{A}^{B}=0\) by (5.16), so corresponding natural boundary conditions for forces conjugate to internal structure parameters in (3.33) are \(z_{A}=Z_{A}^{B}N_{B}=0\). #### 5.3.2 Stress-free states Consider cases whereby \(P_{a}^{A}=0\,\forall X\in\mathcal{M}\). Linear momentum conservation laws (3.31), (3.34), and (5.18) are trivially satisfied. Restrict \(\mu_{0}>0\). Since \(F_{A}^{a}\) is non-singular, (5.14) requires \(\partial w/\partial\bar{C}_{AB}=0\). This is obeyed at \(\bar{C}_{AB}=\delta_{AB}\) via (5.13); thus assume rigid body motion (i.e., \(\varphi^{a}=Q_{A}^{a}X^{A}+c_{0}^{a}\), with \(Q_{A}^{a}\) constant and proper orthogonal and \(c_{0}^{a}\) constant) whereby \(w=0\) vanishes as well by (5.13). **Remark 5.3.3**.: General analytical solutions for stress-free states are not apparent without particular forms of functions \(w(\bar{C}_{AB},\xi,\eta)\), \(G(\xi,\eta)\), \(\lambda(\xi)\), \(\nu(\eta)\), and values of \(\gamma_{\xi}\), \(\gamma_{\eta}\), \(\alpha\), \(\beta\), and \(l\). **Remark 5.3.4**.: If \(\partial w/\partial\xi=\partial w/\partial\eta=0\) for \(\bar{C}_{AB}=\delta_{AB}\), then right sides of (5.19) and (5.20) vanish. Whether or not stress-free deformation states with \(\bar{C}_{AB}\neq\delta_{AB}\) (e.g., locally) exist depends on \(w\). ### Constitutive model The framework is applied to a rectangular patch of skin loaded in the \(X^{1}\)-\(X^{2}\) plane. 
A 2-D theory (i.e., membrane theory) cannot distinguish between plane stress and plane strain conditions [115], nor can it account for out-of-plane anisotropy. Nonetheless, 2-D nonlinear elastic models are widely used to represent soft tissues, including skin [68, 89]. Thus, parameters entering the model (e.g., \(\mu_{0}\), \(\mathcal{I}_{0}\)) are particular to loading conditions and material orientations from experiments to which they are calibrated (e.g., here, plane stress). **Remark 5.4.1**.: In a purely 2-D theory, incompressibility often used for 3-D modeling of biological tissues [68, 71, 80, 82], cannot be assumed since contraction under biaxial stretch is not quantified in a 2-D theory. Incompressibility is also inappropriate if the material dilates due to damage. The skin is treated as having orthotropic symmetry, with two constant orthogonal directions in the reference configuration denoted by unit vectors \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\): \[\mathbf{n}_{1}=n_{1}^{A}\frac{\delta}{\delta X^{A}},\quad\mathbf{n}_{2}=n_{2}^{A}\frac {\delta}{\delta X^{A}};\quad n_{i}^{A}\delta_{AB}n_{j}^{B}=\delta_{ij}\quad(i,j=1,2). \tag{5.24}\] **Remark 5.4.2**.: The collagen fibers in the plane of the skin need not all align with \(\mathbf{n}_{1}\) or \(\mathbf{n}_{2}\), so long as orthotropic symmetry is respected. For example, each \(\mathbf{n}_{i}\) can bisect the alignments of two equivalent primary families of fibers in the skin whose directions are not necessarily orthogonal [71, 92]. In such a case, \(\mathbf{n}_{1}\) is still a unit vector orthogonal to \(\mathbf{n}_{2}\); planar orthotropy is maintained with respect to reflections about both unit vectors \(\mathbf{n}_{i}\). **Remark 5.4.3**.: The internal structure variables \(\xi=D^{1}/l\) and \(\eta=D^{2}/l\) account for mechanisms that lead to softening and degradation under tensile load: fiber sliding, pull-out, and breakage of collagen fibers, and rupture of the elastin fibers and ground matrix. Each \(D^{A}\) (\(A=1,2\)) is a representative microscopic sliding or separation distance in the \(\mathbf{n}_{i}\delta_{A}^{i}\) direction, with \(l\) the distance at which the material can no longer support tensile load along that direction. **Remark 5.4.4**.: In the cohesive zone interpretation, each \(D^{A}\) is viewed as a crack opening displacement for separation on a material surface (line in 2-D) normal to \(\mathbf{n}_{i}\delta_{A}^{i}\). Finslerian metrics \(G_{AB}(\xi,\eta)=\delta_{A}^{a}\delta_{B}^{b}g_{ab}(\xi,\eta)\) of SS5.4.1 anisotropically rescale material and spatial manifolds \(\mathcal{M}\) and \(\mathfrak{m}\) due to microstructure changes in different directions. In the absence of damage, the nonlinear elastic potential of SS5.4.2 specializes a 3-D model [71, 82, 83, 92] to 2-D. #### 5.4.1 Metrics From (2.16), (2.48), (5.2), (5.3), and (5.7), the difference in squared lengths of line elements \(\mathrm{d}\mathbf{x}\) and \(\mathrm{d}\mathbf{X}\) is \[(|\mathrm{d}\mathbf{x}|^{2}-|\mathrm{d}\mathbf{X}|^{2})(\mathbf{F},\xi,\eta)=[\delta_{a}^{ E}\delta_{b}^{F}G_{EF}(\xi,\eta)F_{A}^{a}F_{B}^{b}-G_{AB}(\xi,\eta)]\mathrm{d}X^{A} \,\mathrm{d}X^{B}. \tag{5.25}\] **Remark 5.4.5**.: Local regions of \(\mathcal{M}\) at \(X\) and \(\mathfrak{m}\) at \(x=\varphi(X)\) are rescaled isometrically by components \(G_{AB}(\xi(X),\eta(X))\). When \(F_{A}^{a}=\delta_{A}^{a}\), \(|\mathrm{d}\mathbf{x}|=|\mathrm{d}\mathbf{X}|\) regardless of \(G_{AB}\), \(\xi\), or \(\eta\). 
For degenerate Riemannian metrics \(G_{AB}=\bar{G}_{AB}=\delta_{AB}\) and \(g_{ab}=\bar{g}_{ab}=\delta_{ab}\), (5.25) becomes independent of \((\xi,\eta)\). The Cartesian coordinate chart \(\{X^{A}\}\) is prescribed such that \(n_{i}^{A}=\delta_{i}^{A}\) in (5.24); thus \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are parallel to respective \(X^{1}\)- and \(X^{2}\)-directions on \(\mathcal{M}\). Rescaling arises from changes in structure associated with degradation and damage in orthogonal directions, to which remnant strain contributions \(\frac{1}{2}\ln[G_{11}(\xi)]\) and \(\frac{1}{2}\ln[G_{22}(\eta)]\) can be linked. The metric tensor \(G_{AB}\) is hereafter assigned specific exponential terms, generalizing the 1-D form of SS4.4.1 to an anisotropic 2-D form appropriate for orthotropic symmetry: \[[G_{AB}(\xi,\eta)]=\begin{bmatrix}\exp\big{(}\frac{2k}{r}\xi^{r}\big{)}&0\\ 0&\exp\big{(}\frac{2m}{r}\eta^{r}\big{)}\end{bmatrix}\,\Rightarrow\,G(\xi,\eta )=\det[G_{AB}(\xi,\eta)]=\exp\big{(}\frac{2}{r}[k\xi^{r}+m\eta^{r}]\big{)}\,. \tag{5.26}\] For \(\xi\in[0,1]\) and \(\eta\in[0,1]\), two constants in (5.26) are \(k\) and \(m\), positive for expansion. A third constant \(r>0\) modulates rates of change of \(G_{11}(\xi)\) and \(G_{22}(\eta)\) with respect to their arguments. Ratios are determined by remnant strain contributions at failure: \(\hat{\epsilon}_{\xi}=\frac{k}{r}\) and \(\hat{\epsilon}_{\eta}=\frac{m}{r}\). Values of \(k\), \(m\), and \(r\) are calibrated to data in SS5.5.1. Isotropy arises in (5.26) when \(\eta=\xi\) and \(m=k\). **Remark 5.4.6**.: More general forms of \(G_{AB}(\xi,\eta)\), likely with more parameters, are possible; (5.26) is a simple form sufficient to address experimental observations for extension and tearing of skin. From (5.26), non-vanishing components of Cartan's tensor in (2.25) and (5.6) are \[lC_{11}^{1}=\chi_{1}=\frac{\partial}{\partial\xi}(\ln\sqrt{G})=k\xi^{r-1}, \qquad lC_{22}^{2}=\chi_{2}=\frac{\partial}{\partial\eta}(\ln\sqrt{G})=m\eta^{ r-1}. \tag{5.27}\] #### 5.4.2 Nonlinear elasticity The nonlinear elasticity model generalizes that of SS4.4.2 to a 2-D base space \(\mathcal{M}\) with anisotropic Finsler metric depending on two structure variable components, \(\xi\) and \(\eta\) in normalized dimensionless form. For the 2-D case, material symmetry of SS3.3.4 requires careful consideration. Here, the skin is treated as a planar orthotropic solid [68, 75, 89]. Viewing the \(D^{A}\) as components of a material vector field, orthotropic symmetry suggests invariants \(\xi^{2}\) and \(\eta^{2}\). For physically admissible ranges \(\xi\in[0,1]\) and \(\eta\in[0,1]\), these can be replaced with \(\xi\) and \(\eta\). Viewing the \(D^{A}_{|B}\) similarly, orthotropic symmetry permits a more general functional dependence than the sum of quadratic forms in \(\iota\) of (5.11) and (5.12). However, the chosen form of \(\iota\) in (5.12) allows for partial anisotropy, not inconsistent with orthotropy, when \(\gamma_{\xi}\) and \(\gamma_{\eta}\) differ. Thus, the structure-dependent contribution to \(\psi\), \(\Lambda l=\Upsilon_{0}(\gamma_{\xi}\lambda+\gamma_{\eta}\nu+\iota)\), more specifically here \[\lambda(\xi)=\xi^{2},\quad\nu(\eta)=\eta^{2};\qquad\iota(\nabla_{0}\xi,\nabla _{0}\eta)=l^{2}\delta^{AB}(\gamma_{\xi}\partial_{A}\xi\partial_{B}\xi+\gamma_{ \eta}\partial_{A}\eta\partial_{B}\eta), \tag{5.28}\] is consistent with material symmetry requirements. Strain energy density \(W\) in (5.11) is dictated by dimensionless function \(w(\bar{C}_{AB},\xi,\eta)\). 
Per the above discussion, \(\xi\) and \(\eta\) are treated as scalar invariant arguments. A partial list of remaining invariants [82, 91] of (3.43) for orthotropic symmetry of a 2-D material entering \(w\) (and thus \(\psi=\bar{\psi}\)) is then, applying \(n_{i}^{A}=\delta_{i}^{A}\) in (5.24), \[\bar{I}_{1}=\text{tr}\bar{\mathbf{C}}=\delta^{AB}\bar{C}_{AB},\quad\bar{I}_{2}=J^{2}=\det\bar{\mathbf{C}},\quad\bar{I}_{3}=\bar{C}_{AB}n_{1}^{A}n_{1}^{B}=\bar{C}_{11},\quad\bar{I}_{4}=\bar{C}_{AB}n_{2}^{A}n_{2}^{B}=\bar{C}_{22}. \tag{5.29}\] **Remark 5.4.7**.: As \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are orthonormal, \(\bar{I}_{1}=\bar{I}_{3}+\bar{I}_{4}\), so one of \(\bar{I}_{1},\bar{I}_{3},\bar{I}_{4}\) in (5.29) is redundant. Since \(J\geq 1\), dependence on \(\bar{I}_{2}=\bar{C}_{11}\bar{C}_{22}-(\bar{C}_{12})^{2}\) can be replaced by \(J\) (or by \((\bar{C}_{12})^{2}\) given \(\bar{I}_{3},\bar{I}_{4}\)). The Euclidean metric \(\bar{G}^{AB}=\delta^{AB}\), rather than Finsler metric \(G^{AB}\), is used for scalar products in (5.28) and (5.29), consistent with (5.9). In 2-space, \(\bar{I}_{1}\) and \(\bar{I}_{2}\) are the complete set of isotropic invariants of \(\bar{\mathbf{C}}\). Two orthotropic invariants are \(\bar{I}_{3}\) and \(\bar{I}_{4}\); several higher-order invariants are admissible [82, 91] but excluded here since (5.29) is sufficient for the present application. The dimensionless strain energy function entering (5.11) is prescribed specifically as \[w(\bar{C}_{AB},\xi,\eta) =\left[\frac{1}{J}(\bar{C}_{11}+\bar{C}_{22})+k_{0}(J-1)^{2}-2\right]y_{\mu}(\xi,\eta) \tag{5.30}\] \[+\left[\frac{a_{1}}{2b_{1}}\left(\exp\{b_{1}(\bar{C}_{11}-1)^{2}\}-1\right)\right]\text{H}(\bar{C}_{11}-1)y_{\xi}(\xi)\] \[+\left[\frac{a_{2}}{2b_{2}}\left(\exp\{b_{2}(\bar{C}_{22}-1)^{2}\}-1\right)\right]\text{H}(\bar{C}_{22}-1)y_{\eta}(\eta).\] Dimensionless constants are \(k_{0}>0\), \(a_{1}\geq 0\), \(b_{1}>0\), \(a_{2}\geq 0\), and \(b_{2}>0\). Right-continuous Heaviside functions obey \(\mathrm{H}(f)=1\,\forall f\geq 0\), \(\mathrm{H}(f)=0\,\forall f<0\). Also, \(\mu_{0}>0\) and \(\Upsilon_{0}>0\) are enforced in (5.11). **Remark 5.4.8**.: Potential \(w\) in (5.30) extends prior models for collagenous tissues [71, 82, 83, 92] to include anisotropic structure changes. The first term on the right, linear in \(\bar{I}_{1}/J\) and independent of volume change, accounts for isotropic shearing resistance of ground matrix and elastin. The second term on the right accounts for resistance to volume (area) change, \(k_{0}\) being a dimensionless bulk (area) modulus finite for a 2-D model; the dimensional bulk modulus \(\kappa_{0}=k_{0}\mu_{0}\). Exponential terms account for stiffening from collagen fibers in orthogonal directions \(\mathbf{n}_{i}\). Heaviside functions prevent fibers from supporting compressive load [82, 116] since they would likely buckle. Degradation functions are \(y_{\mu}(\xi,\eta)\), \(y_{\xi}(\xi)\), and \(y_{\eta}(\eta)\), where for the anisotropic theory, \[y_{\mu}=(1-\xi)^{\vartheta}(1-\eta)^{\varsigma}=y_{\xi}y_{\eta},\qquad y_{\xi}=(1-\xi)^{\vartheta},\qquad y_{\eta}=(1-\eta)^{\varsigma}. \tag{5.31}\] Corresponding material constants are \(\vartheta\in[0,\infty)\) and \(\varsigma\in[0,\infty)\). Notice that matrix strain energy degrades equivalently with increasing \(\xi\) and \(\eta\) via \(y_{\mu}\), maintaining isotropy of the first term in (5.30). As collagen fibers debond, the ground matrix and elastin simultaneously weaken.
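A minimal numerical sketch of these ingredients—the metric (5.26), the Cartan-tensor components (5.27), and the degraded strain energy (5.30)–(5.31)—is given below. The exponents \(k\), \(m\), \(r\) and the degradation powers \(\vartheta\), \(\varsigma\) follow values quoted in the text; \(k_{0}\), \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\) are placeholder values for illustration only, not the calibrated entries of Table 1.

```python
import numpy as np

# Illustrative constants: k, m, r, vartheta, varsigma follow values quoted in the text;
# the remaining constants are placeholders, not the calibrated Table 1 entries.
k, m, r = 0.2, 0.3, 2.0
k0 = 10.0                     # dimensionless areal bulk modulus (placeholder)
a1, b1 = 1.0, 1.0             # fiber stiffening, n1-direction (placeholders)
a2, b2 = 2.0, 2.0             # fiber stiffening, n2-direction (placeholders)
vartheta, varsigma = 2.0, 2.0

def metric(xi, eta):
    """Finsler metric components G11, G22 and determinant G, eq. (5.26)."""
    G11 = np.exp(2.0 * k / r * xi**r)
    G22 = np.exp(2.0 * m / r * eta**r)
    return G11, G22, G11 * G22

def cartan(xi, eta):
    """Non-vanishing Cartan-tensor components chi_1, chi_2, eq. (5.27)."""
    return k * xi**(r - 1.0), m * eta**(r - 1.0)

def degradation(xi, eta):
    """Degradation functions y_mu, y_xi, y_eta, eq. (5.31)."""
    y_xi = (1.0 - xi)**vartheta
    y_eta = (1.0 - eta)**varsigma
    return y_xi * y_eta, y_xi, y_eta

def w(C11, C22, C12, xi, eta):
    """Dimensionless strain energy, eq. (5.30), for a 2-D state with C_bar components."""
    J = np.sqrt(C11 * C22 - C12**2)
    y_mu, y_xi, y_eta = degradation(xi, eta)
    iso = ((C11 + C22) / J + k0 * (J - 1.0)**2 - 2.0) * y_mu
    fib1 = a1 / (2.0 * b1) * (np.exp(b1 * (C11 - 1.0)**2) - 1.0) * (C11 >= 1.0) * y_xi
    fib2 = a2 / (2.0 * b2) * (np.exp(b2 * (C22 - 1.0)**2) - 1.0) * (C22 >= 1.0) * y_eta
    return iso + fib1 + fib2

# Undeformed, undamaged state: w = 0 and det[G_AB] = 1, consistent with (5.13) and (5.26)
print(w(1.0, 1.0, 0.0, 0.0, 0.0), metric(0.0, 0.0)[2])
```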
**Remark 5.4.9**.: Choices \(\vartheta=\varsigma=2\) are typical for phase-field fracture [98], though other values are possible for soft biologic tissues [95]. Setting \(\vartheta=\varsigma=0\) implies null degradation (i.e., ideal elastic stress-stretch response). **Remark 5.4.10**.: When \(\xi=\eta=0\), \(w\) of (5.30) is polyconvex [81, 90], facilitating existence and uniqueness of solutions. Also, \(\psi\) with (5.28), (5.30), and (5.31) obeys (5.13). Stress components \(P_{a}^{A}\) conjugate to \(F_{A}^{a}=\partial_{A}\varphi^{a}\) are found from (3.44), (5.14), (5.30), and (5.31), while forces \(Q_{1,2}\) conjugate to \(\xi,\eta\) are found from (5.15), (5.28), (5.30), and (5.31): \[\begin{split} P_{a}^{A}/\mu_{0}=J^{-1}&[\delta_{ab}\delta^{AB}F_{B}^{b}-\tfrac{1}{2}\bar{C}_{BC}\delta^{BC}(F^{-1})_{a}^{A}+k_{0}J^{2}(J-1)(F^{-1})_{a}^{A}](1-\xi)^{\vartheta}(1-\eta)^{\varsigma}\\ &+[a_{1}(\bar{C}_{11}-1)\mathrm{exp}\{b_{1}(\bar{C}_{11}-1)^{2}\}\delta_{ab}\delta_{1}^{A}F_{1}^{b}](1-\xi)^{\vartheta}\mathrm{H}(\bar{C}_{11}-1)\\ &+[a_{2}(\bar{C}_{22}-1)\mathrm{exp}\{b_{2}(\bar{C}_{22}-1)^{2}\}\delta_{ab}\delta_{2}^{A}F_{2}^{b}](1-\eta)^{\varsigma}\mathrm{H}(\bar{C}_{22}-1),\end{split} \tag{5.32}\] \[\begin{split} Q_{1}l^{2}/(2\Upsilon_{0})&=\gamma_{\xi}\xi-A_{0}\vartheta\left[(1-\xi)^{\vartheta-1}(1-\eta)^{\varsigma}\{J^{-1}(\bar{C}_{11}+\bar{C}_{22})+k_{0}(J-1)^{2}-2\}\right]\\ &\quad-A_{0}\vartheta(1-\xi)^{\vartheta-1}\left[\tfrac{1}{2}(a_{1}/b_{1})\left(\mathrm{exp}\{b_{1}(\bar{C}_{11}-1)^{2}\}-1\right)\right]\mathrm{H}(\bar{C}_{11}-1),\end{split} \tag{5.33}\] \[\begin{split} Q_{2}l^{2}/(2\Upsilon_{0})&=\gamma_{\eta}\eta-A_{0}\varsigma\left[(1-\eta)^{\varsigma-1}(1-\xi)^{\vartheta}\{J^{-1}(\bar{C}_{11}+\bar{C}_{22})+k_{0}(J-1)^{2}-2\}\right]\\ &\quad-A_{0}\varsigma(1-\eta)^{\varsigma-1}\left[\tfrac{1}{2}(a_{2}/b_{2})\left(\mathrm{exp}\{b_{2}(\bar{C}_{22}-1)^{2}\}-1\right)\right]\mathrm{H}(\bar{C}_{22}-1).\end{split} \tag{5.34}\] **Remark 5.4.11**.: An ideal elastic response is obtained when \(k=m=0\Rightarrow G_{AB}=\delta_{AB}\Rightarrow\chi_{A}=0\), and \(\vartheta=\varsigma=0\Rightarrow\frac{\partial w}{\partial\xi}=\frac{\partial w}{\partial\eta}=0\). Then since \(\frac{\mathrm{d}\lambda}{\mathrm{d}\xi}(0)=0\) and \(\frac{\mathrm{d}\nu}{\mathrm{d}\eta}(0)=0\) by (5.28), the right side of (5.18) vanishes identically, and the (trivial) solutions to (5.19) and (5.20) are \(\xi(X)=\eta(X)=0\,\forall\,X\in\mathscr{M}\). **Remark 5.4.12**.: An isotropic version of the theory can be obtained, if along with \(m=k\) in (5.26), the following choices are made instead of (5.31): \[y_{\mu}=\tfrac{1}{2}[(1-\xi)^{\vartheta}+(1-\eta)^{\vartheta}],\quad\varsigma=\vartheta,\quad y_{\xi}=y_{\eta}=0;\qquad\gamma_{\xi}=\gamma_{\eta}=\tfrac{1}{2}\gamma_{\mu}\geq 0. \tag{5.35}\] Collagen fiber contributions to strain energy are removed such that \(w\) now only depends on isotropic invariants of \(\bar{\mathbf{C}}\). Equilibrium equations (5.18), (5.19), and (5.20) are identical under the change of variables \(\xi\leftrightarrow\eta\), implying \(\eta(X)=\xi(X)\) if identical boundary conditions on \(D^{A}\) or \(z_{A}\) are applied for each field on \(\partial\mathcal{M}\). In this case, one of (5.19) and (5.20) is redundant and replaced with \(\eta=\xi\).
### Specific solutions Possible inputs to the 2-D model are seventeen constants \(l>0\), \(k\), \(m\), \(r>0\), \(\mu_{0}>0\), \(k_{0}>0\), \(a_{1}\geq 0\), \(b_{1}>0\), \(a_{2}\geq 0\), \(b_{2}>0\), \(\vartheta\geq 0\), \(\varsigma\geq 0\), \(\gamma_{0}>0\), \(\gamma_{\xi}\), \(\gamma_{\eta}\), \(\alpha\), and \(\beta\). Values of \(l\) and \(\gamma_{0}\) are taken from the analysis in SS4.5.2 of complete tearing of a 1-D specimen of skin to a stress-free state. This is appropriate given that 1-D and 2-D theories are applied to describe surface energy and material length scale pertinent to the same experiments [73, 74, 99, 113], and since stress-free solutions in SS5.5.3 perfectly parallel those of SS4.5.2. The remaining parameters are evaluated, in SS5.5.1, by applying the constitutive model of SS5.4, via the general solutions for homogeneous fields derived in SS5.3.1, to uniaxial-stress extension of 2-D skin specimens along the material \(X^{1}\)- and \(X^{2}\)-directions, respectively aligned perpendicular and parallel to Langer's lines. **Remark 5.5.1**.: Collagen fibers of the microstructure in the dermis are aligned predominantly along Langer's lines and are more often pre-stretched in vivo along these directions [75]. In vivo or in vitro, elastic stiffness at finite stretch tends to be larger in directions along Langer's lines (i.e., parallel to \(X^{2}\) and \(\mathbf{n}_{2}\)) than in orthogonal directions (e.g., parallel to \(\mathbf{n}_{1}\)). Degradation and failure behaviors are also anisotropic: rupture stress tends to be larger, and failure elongation lower, for stretching in the stiffer \(\mathbf{n}_{2}\)-direction [74, 75, 87]. In SS5.5.2, model outcomes are reported for planar biaxial extension [68, 70, 115] of 2-D specimens, highlighting simultaneous microstructure degradation perpendicular and parallel to Langer's lines. Lastly, in SS5.5.3, stress-free states analogous to those modeled in a 1-D context in SS4.5.2 are evaluated for the 2-D theory. In SS5.5.1 and SS5.5.2, equilibrium solutions of SS5.3.1 hold. Invoking (5.27), (5.28), (5.30), (5.31), and (5.32), and dropping \((\cdot)_{\rm H}\) notation for brevity, (5.21), (5.22), and (5.23) comprise the algebraic system \[\begin{split} P^{A}_{a}=\mu_{0}& J^{-1}[\delta_{ab}\delta^{AB}F^{b}_{B}-\tfrac{1}{2}\bar{C}_{BC}\delta^{BC}(F^{-1})^{A}_{a}+k_{0}J^{2}(J-1)(F^{-1})^{A}_{a}](1-\xi)^{\vartheta}(1-\eta)^{\varsigma}\\ &+[a_{1}(\bar{C}_{11}-1)\text{exp}\{b_{1}(\bar{C}_{11}-1)^{2}\}\delta_{ab}\delta^{A}_{1}F^{b}_{1}](1-\xi)^{\vartheta}\text{H}(\bar{C}_{11}-1)\\ &+[a_{2}(\bar{C}_{22}-1)\text{exp}\{b_{2}(\bar{C}_{22}-1)^{2}\}\delta_{ab}\delta^{A}_{2}F^{b}_{2}](1-\eta)^{\varsigma}\text{H}(\bar{C}_{22}-1)\\ &=\text{constant},\end{split} \tag{5.36}\] \[\gamma_{\xi}\xi+k\xi^{r-1}[\gamma_{\xi}\xi^{2}+\gamma_{\eta}\eta^{2}]=-\frac{A_{0}}{2}\left[\frac{\partial w(\bar{C}_{AB},\xi,\eta)}{\partial\xi}+2k\xi^{r-1}w(\bar{C}_{AB},\xi,\eta)\right], \tag{5.37}\] \[\gamma_{\eta}\eta+m\eta^{r-1}[\gamma_{\xi}\xi^{2}+\gamma_{\eta}\eta^{2}]=-\frac{A_{0}}{2}\left[\frac{\partial w(\bar{C}_{AB},\xi,\eta)}{\partial\eta}+2m\eta^{r-1}w(\bar{C}_{AB},\xi,\eta)\right]. \tag{5.38}\] Consistent with (4.24) for \(N_{0}=0\)[55, 56, 62], \(\beta=\alpha-2\) is assumed in (5.37) and (5.38), reducing the number of requisite parameters to fifteen; \(\alpha\) and \(\beta\) enter the governing equations only through their difference.
Boundary conditions on the internal state are, for homogeneous conditions, \[\xi(X^{1}=\pm L_{0},X^{2}=\pm W_{0})=\xi_{\rm H},\qquad\eta(X^{1}=\pm L_{0},X^{2}=\pm W_{0})=\eta_{\rm H}. \tag{5.39}\] Alternative conditions to (5.36)-(5.39) are considered for heterogeneous stress-free states in SS5.5.3. #### 5.5.1 Uniaxial extension First consider homogeneous uniaxial-stress extension in either the \(X^{1}\)- or \(X^{2}\)-direction. From symmetry of the loading mode and material model, shear stresses vanish identically: \(P^{1}_{2}=0\), \(P^{2}_{1}=0\). Similarly, \(F^{1}_{2}=0\), \(F^{2}_{1}=0\), and \(\bar{C}_{12}=\bar{C}_{21}=0\). The homogeneous deformation fields are \[\varphi^{1}=\lambda_{1}X^{1},\quad\varphi^{2}=\lambda_{2}X^{2};\quad F^{1}_{1}=\lambda_{1},\quad F^{2}_{2}=\lambda_{2};\quad\bar{C}_{11}=(\lambda_{1})^{2},\quad\bar{C}_{22}=(\lambda_{2})^{2};\quad J=\lambda_{1}\lambda_{2}. \tag{5.40}\] At any single given load increment, stretch ratios are the constants \(\lambda_{1}>0\) and \(\lambda_{2}>0\). Mechanical boundary conditions are, for extension along \(X^{1}\) with \(\lambda_{1}\geq 1\), \[\varphi^{1}(X^{1}=\pm L_{0})=\pm\lambda_{1}L_{0},\qquad p_{2}(X^{2}=\pm W_{0})=P^{2}_{2}(X^{2}=\pm W_{0})=0. \tag{5.41}\] In this case, \(P^{2}_{2}=0\,\forall\,X\in\mathcal{M}\), and the sole non-vanishing stress component in (5.36) is \(P^{1}_{1}\). Note that \(\lambda_{2}\) is unknown a priori. Given \(\lambda_{1}\) from the first of (5.41), values consistent with (5.39) are obtained by solving (5.36) for \(a=A=2\) with \(P^{2}_{2}=0\), (5.37), and (5.38) simultaneously for \(\lambda_{2}\), \(\xi\), and \(\eta\) as functions of \(\lambda_{1}\). Axial stress \(P^{1}_{1}\) is then found afterwards using (5.36) with \(a=A=1\). For axial loading along \(X^{2}\) with \(\lambda_{2}\geq 1\), \[\varphi^{2}(X^{2}=\pm W_{0})=\pm\lambda_{2}W_{0},\qquad p_{1}(X^{1}=\pm L_{0})=P^{1}_{1}(X^{1}=\pm L_{0})=0. \tag{5.42}\] Now \(P^{1}_{1}=0\,\forall\,X\in\mathcal{M}\), and the sole non-vanishing stress component in (5.36) is \(P^{2}_{2}\). Given \(\lambda_{2}\) from the first of (5.42), values consistent with (5.39) are obtained by solving (5.36) for \(a=A=1\) with \(P^{1}_{1}=0\), (5.37), and (5.38) simultaneously for \(\lambda_{1}\), \(\xi\), and \(\eta\) as functions of \(\lambda_{2}\). Axial stress \(P^{2}_{2}\) is found afterwards using (5.36) with \(a=A=2\). Values of all baseline parameters are listed in Table 1. Identical values of those constants shared among 1-D and 2-D theories are found to aptly describe the experimental data for stretching along \(\mathbf{n}_{1}\), in conjunction with the natural choice \(\gamma_{\xi}=1\). The 2-D theory features additional parameters to account for orthotropic anisotropy (e.g., stiffer response along \(\mathbf{n}_{2}\), with peak stress occurring at lower stretch) as well as an areal bulk modulus \(\kappa_{0}\) absent in the 1-D theory. **Remark 5.5.2**.: Adherence to physical observations dictates \(a_{2}>a_{1}\), \(b_{2}>b_{1}\), and \(\kappa_{0}>\mu_{0}\). Since degradation is more severe, and toughness lower, for stretching along \(\mathbf{n}_{2}\), \(m>k\) and \(\gamma_{\eta}<\gamma_{\xi}\). The standard choice [95, 98] \(\varsigma=\vartheta=2\) in (5.31) was found sufficient to describe test data. Model outcomes for non-vanishing stress components and internal state vector components are presented in respective Fig. 4(a) and Fig. 4(b).
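The uniaxial solution procedure just described—given \(\lambda_{1}\), solve \(P^{2}_{2}=0\) together with (5.37) and (5.38) for \(\lambda_{2}\), \(\xi\), \(\eta\), then evaluate \(P^{1}_{1}\)—reduces to root finding on a three-equation algebraic system. Below is a minimal sketch of one way to set this up numerically; only \(k\), \(m\), \(r\), \(\gamma_{\xi}\), \(\gamma_{\eta}\), \(\vartheta\), \(\varsigma\) follow values quoted in the text, while \(k_{0}\), \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\) and the dimensionless coupling \(A_{0}\) are placeholder values, and the partial derivatives \(\partial w/\partial\xi\), \(\partial w/\partial\eta\) are approximated by central differences.

```python
import numpy as np
from scipy.optimize import root

# Illustrative constants only: k, m, r, g_xi, g_eta, vt, vs follow values quoted in the
# text; k0, a1, b1, a2, b2 and the coupling A0 are placeholders, not Table 1 values.
k, m, r = 0.2, 0.3, 2.0
g_xi, g_eta = 1.0, 0.84
vt, vs = 2.0, 2.0
k0, a1, b1, a2, b2 = 10.0, 1.0, 1.0, 2.0, 2.0
A0 = 1.0

def w(l1, l2, xi, eta):
    """Dimensionless strain energy (5.30) for a diagonal homogeneous state."""
    C11, C22, J = l1**2, l2**2, l1 * l2
    val = ((C11 + C22) / J + k0 * (J - 1.0)**2 - 2.0) * (1 - xi)**vt * (1 - eta)**vs
    val += a1 / (2 * b1) * (np.exp(b1 * (C11 - 1)**2) - 1.0) * (C11 >= 1.0) * (1 - xi)**vt
    val += a2 / (2 * b2) * (np.exp(b2 * (C22 - 1)**2) - 1.0) * (C22 >= 1.0) * (1 - eta)**vs
    return val

def stress(l1, l2, xi, eta):
    """Diagonal stresses P^1_1/mu0 and P^2_2/mu0 from (5.36) with C_bar_12 = 0."""
    C11, C22, J = l1**2, l2**2, l1 * l2
    y_mu = (1 - xi)**vt * (1 - eta)**vs
    P11 = (l1 - 0.5 * (C11 + C22) / l1 + k0 * J**2 * (J - 1) / l1) / J * y_mu \
        + a1 * (C11 - 1) * np.exp(b1 * (C11 - 1)**2) * l1 * (1 - xi)**vt * (C11 >= 1.0)
    P22 = (l2 - 0.5 * (C11 + C22) / l2 + k0 * J**2 * (J - 1) / l2) / J * y_mu \
        + a2 * (C22 - 1) * np.exp(b2 * (C22 - 1)**2) * l2 * (1 - eta)**vs * (C22 >= 1.0)
    return P11, P22

def micro(l1, l2, xi, eta, h=1e-6):
    """Residuals of the micro-momentum balances (5.37) and (5.38);
    dw/dxi and dw/deta are approximated by central differences."""
    dwdxi = (w(l1, l2, xi + h, eta) - w(l1, l2, xi - h, eta)) / (2 * h)
    dwdeta = (w(l1, l2, xi, eta + h) - w(l1, l2, xi, eta - h)) / (2 * h)
    quad = g_xi * xi**2 + g_eta * eta**2
    r1 = g_xi * xi + k * xi**(r - 1) * quad + 0.5 * A0 * (dwdxi + 2 * k * xi**(r - 1) * w(l1, l2, xi, eta))
    r2 = g_eta * eta + m * eta**(r - 1) * quad + 0.5 * A0 * (dwdeta + 2 * m * eta**(r - 1) * w(l1, l2, xi, eta))
    return r1, r2

def uniaxial_x1(l1, guess=(0.9, 0.05, 0.02)):
    """Given lambda_1, solve {P^2_2 = 0, (5.37), (5.38)} for (lambda_2, xi, eta),
    then return the axial stress P^1_1/mu0 and the solved state."""
    def res(v):
        l2, xi, eta = v
        r1, r2 = micro(l1, l2, xi, eta)
        return [stress(l1, l2, xi, eta)[1], r1, r2]
    sol = root(res, guess)
    l2, xi, eta = sol.x
    return stress(l1, l2, xi, eta)[0], l2, xi, eta

print(uniaxial_x1(1.2))
```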
Experimental \(P_{1}^{1}\) versus \(\lambda_{1}\) data for loading along \(\mathbf{n}_{1}\), with \(\lambda_{1}\geq 1\) prescribed in the corresponding model calculations, are identical to \(P\) versus \(\sqrt{C}\) data depicted using the 1-D theory in SS4.5.1. These data [74] are for relatively high-rate extension of rabbit skin along a longitudinal direction, parallel to the backbone of the torso and perpendicular to Langer's lines. Nonlinear elastic parameters should be viewed as instantaneous dynamic moduli in a pseudo-elastic representation [68, 84, 85], since loading times are brief relative to stress relaxation times [74]. Single-experiment data of similar fidelity for transverse extension, parallel to Langer's lines, taken to complete load drop were not reported, but a range of maximum stress and strain were given for extension along \(\mathbf{n}_{2}\)[74]. A representative peak stress \(P_{2}^{2}\) and corresponding stretch \(\lambda_{2}\) based on such data [74] are included in Fig. 4(a). According to such data [74], the material is stiffer, and ruptures at a higher stress (\(\approx\frac{4}{3}\times\)) but lower strain (\(\approx\frac{2}{3}\times\)), in the transverse \(\mathbf{n}_{2}\)-direction. **Remark 5.5.3**.: For loading along \(\mathbf{n}_{1}\), \(\xi\to 1\) and \(\eta\to 0\) for \(\lambda_{1}\gtrsim 3.5\), meaning most internal structure evolution correlates with degradation in this direction, with small transverse effects of \(\eta\). Analogously, loading along \(\mathbf{n}_{2}\) gives \(\eta\to 1\) and \(\xi\to 0\) for \(\lambda_{2}\gtrsim 3\). The rate of increase of \(\eta\) with \(\lambda_{2}>1\) is more rapid than the rate of increase of \(\xi\) with \(\lambda_{1}>1\), since the skin degrades sooner and fails at a lower strain for stretching parallel to Langer's lines. The present diffuse model is an idealization characteristic of experiments when there is no sharp pre-crack [64, 68, 72, 74, 87]. Shown in Fig. 4(c) and Fig. 4(d) are predictions at modest stretch along \(\mathbf{n}_{1}\) or \(\mathbf{n}_{2}\) under uniaxial stress conditions identical to those of Fig. 4(a) as well as uniaxial strain, whereby \(\lambda_{2}=1\) or \(\lambda_{1}=1\) is enforced using the scheme of SS5.5.2 rather than respective \(P_{2}^{2}=0\) or \(P_{1}^{1}=0\). Predictions for the ideal elastic case (\(\vartheta=\varsigma=0\Rightarrow\xi=\eta=0\)) are shown for comparison. Results are stiffer for the ideal elastic case since degradation commensurate with structure change is omitted. In agreement with other data [70], skin is elastically stiffer in uniaxial strain relative to uniaxial stress. Choosing a higher value of \(k_{0}=\kappa_{0}/\mu_{0}>1\) in (5.30) would further increase this difference if merited. #### 5.5.2 Biaxial extension Now consider homogeneous biaxial-stress extension in the \(X^{1}\)- and \(X^{2}\)-directions. From symmetry, \(P_{2}^{1}=0\), \(P_{1}^{2}=0\), \(F_{2}^{1}=0\), \(F_{1}^{2}=0\), and \(\bar{C}_{12}=\bar{C}_{21}=0\). The homogeneous deformation fields are \[\varphi^{1}=\lambda_{1}X^{1},\ \varphi^{2}=\lambda_{2}X^{2};\quad F_{1}^{1}=\lambda_{1},\ F_{2}^{2}=\lambda_{2};\quad\bar{C}_{11}=(\lambda_{1})^{2},\ \bar{C}_{22}=(\lambda_{2})^{2};\quad J=\lambda_{1}\lambda_{2}. \tag{5.43}\] Stretch ratios are \(\lambda_{1}>0\) and \(\lambda_{2}>0\), constants over \(\mathcal{M}\). Mechanical boundary conditions are \[\varphi^{1}(X^{1}=\pm L_{0})=\pm\lambda_{1}L_{0},\qquad\varphi^{2}(X^{2}=\pm W_{0})=\pm\lambda_{2}W_{0}.
\tag{5.44}\] With \(\lambda_{1}\) and \(\lambda_{2}\) prescribed by (5.44), equilibrium equations (5.37) and (5.38) are solved simultaneously for \(\xi\) and \(\eta\) as functions of \(\lambda_{1},\lambda_{2}\), giving homogeneous values of fields consistent with (5.39). Then \(P_{1}^{1}\) and \(P_{2}^{2}\) are obtained afterwards with (5.36) for \(a=A=1\) and \(a=A=2\). Model predictions for equi-biaxial stretching, \(\lambda_{1}=\lambda_{2}\), are produced using the baseline material parameters of Table 1, obtained for the 2-D theory in SS5.5.1.

Figure 4: Uniaxial extension and tearing of skin for imposed axial stretch \(\lambda_{1}\geq 1\) or \(\lambda_{2}\geq 1\), 2-D model: (a) stress \(P_{1}^{1}\) or \(P_{2}^{2}\) (baseline parameters, Table 1) with representative experimental data [74] (see text §4.5.1 for consistent definition of experimental stretch accounting for pre-stress) for straining perpendicular or parallel to Langer’s lines (b) normalized internal structure components \(\xi\) and \(\eta\) (baseline parameters) (c) stress \(P_{1}^{1}\) for moderate extension \(\lambda_{1}\leq 2.1\) under uniaxial stress (\(P_{2}^{2}=0\)) or uniaxial strain (\(\lambda_{2}=1\)) conditions for Finsler model (baseline parameters) and ideal elastic model (\(\vartheta=\varsigma=0\)) (d) stress \(P_{2}^{2}\) for moderate extension \(\lambda_{2}\leq 2.0\) under uniaxial stress (\(P_{1}^{1}=0\)) or uniaxial strain (\(\lambda_{1}=1\)) conditions for Finsler model (baseline) and ideal elastic model (\(\vartheta=\varsigma=0\))

In Fig. 5(a), stresses also include those for the ideal elastic case (\(\vartheta=\varsigma=0\Rightarrow\xi=\eta=0\)), which are noticeably higher for \(\lambda_{1}>1.5\) and increase monotonically with stretch. For the Finsler theory, under this loading protocol (\(\lambda_{1}=\lambda_{2}\)), \(P_{2}^{2}\) increases more rapidly than \(P_{1}^{1}\) with increasing \(\lambda_{1}\), reaching a slightly lower peak value at significantly lower stretch. Elastic stiffness during the lower-stretch loading phase is higher in the \(\mathbf{n}_{2}\)-direction due to the preponderance of aligned collagen fibers, but degradation associated with internal structure evolution is more rapid due to the lower toughness of skin when torn in this direction. The latter phenomenon is evident in Fig. 5(b), wherein \(\eta(\lambda_{1})>\xi(\lambda_{1})\) for \(\lambda_{1}\in[1.1,3.9]\).

Figure 5: Equi-biaxial extension and tearing of skin, 2-D model: (a) stress components from Finsler model (baseline parameters, \(\vartheta=\varsigma=2\)) and ideal elastic model (\(\vartheta=\varsigma=0\)) (b) normalized internal structure components \(\xi\) and \(\eta\)

**Remark 5.5.4**.: Experimental data on skin failure focus on uniaxial extension [74, 75]. Known biaxial data (e.g., [68, 70]) do not report stretch magnitudes sufficient to cause tearing, so direct validation does not appear possible. Should skin prove to be more stiff and damage tolerant in equi-biaxial stretch experiments, \(w\) of (5.30) can be modified so the tangent bulk modulus proportional to \(k_{0}\) increases more strongly with \(J\) and does not degrade so severely with structure evolution. #### 5.5.3 Stress-free states Protocols of SS5.3.2 now apply. Two boundary value problems are addressed that parallel the 1-D analysis of SS4.5.2. External boundary conditions are \(\xi=0\) and \(\eta=0\) everywhere along \(\partial\mathcal{M}\). Stress \(P_{a}^{A}=0\) everywhere in \(\mathcal{M}\), so mechanical traction \(p_{a}=P_{a}^{A}N_{A}=0\) over \(\partial\mathcal{M}\). For the generalized Finsler metric in (5.26), restrict \(r>1\Rightarrow\chi_{1}(\xi=0)=\chi_{2}(\eta=0)=0\) in (5.27).
In the first problem, assume the specimen is stretched uniaxially along the \(\mathbf{n}_{1}\)-direction (i.e., along \(X^{1}\), perpendicular to Langer's lines) until localized failure occurs. The skin ruptures completely across the midspan at \(X^{1}=0\), such that \(\xi(0,X^{2})=1\). In this ruptured state, \(\bar{C}_{AB}=\delta_{AB}\) everywhere on \(\mathcal{M}\) for all components except \(\bar{C}_{11}\), which can differ from \(\delta_{AB}\) only along the line \(X^{1}=0\). The solution for \(\eta(X^{1},X^{2})\) is \(\eta(X^{1},X^{2})=0\), for which (5.20) is trivially satisfied. From symmetry, the remaining unknown field \(\xi\) depends only on \(X=X^{1}\), and \(\xi(-X)=\xi(X)\). With this partial solution, the remaining governing equation (5.19) has vanishing right side and reduces to the generally nonlinear but autonomous second-order ODE \[\gamma_{\xi}\,\frac{\mathrm{d}^{2}\xi}{\mathrm{d}X^{2}}=\frac{\gamma_{\xi}\xi}{l^{2}}\left(1+k\xi^{r}\right). \tag{5.45}\] Dividing by \(\gamma_{\xi}>0\), (5.45) is identical to (4.31) with \(N_{0}=0\), \(\lambda=\xi^{2}\), and \(\chi=\chi_{1}=k\xi^{r-1}\). Solutions (4.33) and (4.44) hold verbatim. Normalized energy per unit area normal to the \(X^{1}\)-direction is \[\tilde{\gamma}_{1}=\frac{1}{2\gamma_{0}}\int_{-L_{0}}^{L_{0}}\psi\sqrt{G}\,\mathrm{d}X=\frac{\gamma_{\xi}}{2l}\int_{-L_{0}}^{L_{0}}\{\xi^{2}+l^{2}(\mathrm{d}\xi/\mathrm{d}X)^{2}\}\exp[(k/r)\xi^{r}]\,\mathrm{d}X, \tag{5.46}\] identical to (4.45) when \(\gamma_{\xi}=1\) and \(N_{0}=0\). Given \(\gamma_{\xi}=1\), \(k=0.2\), \(r=2\), and \(\gamma_{0}=0.47\) kJ/m\({}^{2}\) (Table 1), outcomes of the 2-D theory here match those of the 1-D theory in Fig. 3(a) and Fig. 3(b) with \(N_{0}=0\) and \(\tilde{\gamma}_{1}=\tilde{\gamma}\). Toughness \(2\tilde{\gamma}_{1}\gamma_{0}=1.0\) kJ/m\({}^{2}\) is consistent with experiment [73, 99, 113]. In the second problem, assume the specimen is stretched along \(\mathbf{n}_{2}\) (i.e., along \(X^{2}\), parallel to Langer's lines). The skin ruptures completely across the midspan at \(X^{2}=0\), with \(\eta(X^{1},0)=1\). Now, \(\bar{C}_{AB}=\delta_{AB}\) everywhere for all components except \(\bar{C}_{22}\), which can differ from \(\delta_{AB}\) only along \(X^{2}=0\). The solution for \(\xi(X^{1},X^{2})\) is \(\xi=0\), for which (5.19) is trivially obeyed. From symmetry, \(\eta\) depends only on \(X=X^{2}\), and \(\eta(-X)=\eta(X)\). Balance law (5.20) reduces to \[\gamma_{\eta}\,\frac{\mathrm{d}^{2}\eta}{\mathrm{d}X^{2}}=\frac{\gamma_{\eta}\eta}{l^{2}}\left(1+m\eta^{r}\right). \tag{5.47}\] Dividing by \(\gamma_{\eta}>0\), (5.47) matches (4.31) with \(N_{0}=0\), \(\nu=\eta^{2}\), \(\chi=\chi_{2}=m\eta^{r-1}\), and the obvious change of variables. Solutions (4.33) and (4.44) hold. Normalized energy per unit area is \[\tilde{\gamma}_{2}=\frac{1}{2\gamma_{0}}\int_{-L_{0}}^{L_{0}}\psi\sqrt{G}\,\mathrm{d}X=\frac{\gamma_{\eta}}{2l}\int_{-L_{0}}^{L_{0}}\{\eta^{2}+l^{2}(\mathrm{d}\eta/\mathrm{d}X)^{2}\}\exp[(m/r)\eta^{r}]\,\mathrm{d}X \tag{5.48}\] for free surfaces normal to the \(X^{2}\)-direction, matching (4.45) if \(\gamma_{\eta}=1\) and \(N_{0}=0\).
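The profile \(\xi(X)\) and the surface energy (5.46) can also be evaluated by direct numerical integration; an analogous computation with \(\eta\), \(m\), and \(\gamma_{\eta}\) applies to the second problem. Below is a minimal sketch using a two-point boundary value solver, with \(k\), \(r\), and \(\gamma_{\xi}\) as quoted in the text and placeholder values for the length scale \(l\) and half-length \(L_{0}\) (only the ratio \(L_{0}/l\) matters for the profile).

```python
import numpy as np
from scipy.integrate import solve_bvp, trapezoid

# Sketch of the first stress-free problem: rupture surface at X = 0 with xi(0) = 1,
# xi -> 0 at the outer boundary.  k, r, g_xi follow the text; l and L0 are placeholders.
k, r, g_xi = 0.2, 2.0, 1.0
l, L0 = 0.1, 2.0          # same length units; only the ratio L0/l matters

def rhs(X, y):
    xi, dxi = y
    return np.vstack([dxi, xi * (1.0 + k * xi**r) / l**2])   # ODE (5.45)

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0]])                    # xi(0) = 1, xi(L0) = 0

X = np.linspace(0.0, L0, 200)
y0 = np.vstack([np.exp(-X / l), -np.exp(-X / l) / l])        # decaying initial guess
sol = solve_bvp(rhs, bc, X, y0, max_nodes=10000)

# Normalized surface energy (5.46): the integrand is symmetric in X, so integrate over
# [0, L0] and keep the two-sided prefactor gamma_xi/l.
xi, dxi = sol.y
integrand = (xi**2 + l**2 * dxi**2) * np.exp((k / r) * xi**r)
gamma_tilde_1 = g_xi / l * trapezoid(integrand, sol.x)
print(sol.status, gamma_tilde_1)
```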
Given \(\gamma_{\eta}=0.84\), \(m=0.3\), \(r=2\), and \(\gamma_{0}=0.47\) kJ/m\({}^{2}\) (Table 1), profiles of \(\eta(X)\) for this problem are very similar to those of \(\xi(X)\) from the 1-D theory in Fig. 3(a). Energy for \(N_{0}=0\) in Fig. 3(b) transforms as \(\tilde{\gamma}_{2}\approx\gamma_{\eta}\tilde{\gamma}\), and \(2\tilde{\gamma}_{2}\gamma_{0}=0.85\) kJ/m\({}^{2}\) is within experimental ranges of 0.5 to 2.5 kJ/m\({}^{2}\)[73, 99, 113]. **Remark 5.5.5**.: Since \(2\tilde{\gamma}_{2}\gamma_{0}<2\tilde{\gamma}_{1}\gamma_{0}\), the model predicts that skin is more brittle in directions parallel to Langer's lines than perpendicular to Langer's lines, in concurrence with experiment [74, 87]. Collagen fibers are less coiled initially in directions parallel to Langer's lines [75], giving the skin lower compliance and less potential strain accommodation at rupture in those directions. **Remark 5.5.6**.: All parameters in Table 1 have clear physical or geometric origins; none are ad hoc. Constant \(l\) is the critical fiber sliding distance or crack opening displacement for rupture. Ratios \(\frac{k}{r}\) and \(\frac{m}{r}\) are associated with remnant strain contributions in orthogonal \(\mathbf{n}_{1}\)- and \(\mathbf{n}_{2}\)-directions along primary initial fiber directions (e.g., perpendicular and parallel to Langer's lines). The isotropic shear modulus and bulk modulus for the matrix, consisting of ground substance and elastin, are \(\mu_{0}\) and \(\kappa_{0}\). Nonlinear elastic constants \(a_{1}\) and \(b_{1}\) control stiffening due to collagen fiber elongation in the \(\mathbf{n}_{1}\)-direction, while \(a_{2}\) and \(b_{2}\) control stiffening due to fiber elongation in the \(\mathbf{n}_{2}\)-direction. Loss of elastic stiffness due to fiber rearrangements and damage processes in matrix, fibers, and their interfaces, in respective \(\mathbf{n}_{1}\)- and \(\mathbf{n}_{2}\)-directions, is modulated by \(\vartheta\) and \(\varsigma\). Isotropic surface energy is \(\gamma_{0}\), with factors \(\gamma_{\xi}\) and \(\gamma_{\eta}\) scaling the fracture toughness in respective \(\mathbf{n}_{1}\)- and \(\mathbf{n}_{2}\)-directions.

## 6 Conclusion

A theory of finite-deformation continuum mechanics with a basis in generalized Finsler geometry has been developed and refined. Elements of an internal state vector represent evolving microstructure features and can be interpreted as order parameters. Dependence of the material metric on internal state affects how distances are measured in the material manifold and how gradients (i.e., covariant derivatives) are resolved. A new application of the theory to anisotropic soft-tissue mechanics has been presented, whereby the internal state is primarily associated with collagen fiber rearrangements and breakages. The material metric contains explicit contributions from sliding or opening modes in different material directions. Solutions to boundary value problems for tensile extension with tearing in different directions agree with experimental data and microscopic observations on skin tissue, providing physical and geometric insight into effects of microstructure. **Funding:** This research received no external funding. **Conflicts of interest:** The author declares no conflicts of interest. **Data availability statement:** Not applicable; this research produced no data.
## 7 Appendix A: Variational derivatives The variational derivative \(\delta(\cdot)\) of SS3.3.1 invokes \((\varphi^{a},D^{A})\) with \(a,A=1,2,\ldots,n\) as the total set of \(2n\) varied independent parameters or degrees-of-freedom. ### Deformation gradient and director gradient The first of (3.24) follows from (3.1), (3.5), and commutation of \(\delta(\cdot)\) and \(\partial_{A}(\cdot)\) operators since the variation is performed at fixed \(X^{A}\): \[\delta F^{a}_{A}(\varphi(X),X)=\delta(\partial_{A}\varphi^{a}(X))=\partial_{A}(\delta\varphi^{a}(X))=\delta_{A}(\delta\varphi^{a}(X))=(\delta\varphi^{a}(X))_{|A},\] (A.1) with \(\boldsymbol{F}\) treated as a two-point tensor. **Remark A.1.1**.: The third equality in (A.1) follows from \(N^{B}_{A}(X,D)\bar{\partial}_{B}\varphi^{a}(X)=0\). The leftmost and rightmost equalities interpret \(\varphi^{a}(X)\) and \(\delta\varphi^{a}(X)\), respectively, as point functions rather than vector fields [20, 22]. Denote by \(f(X,D)\) a generic differentiable function of arguments \(\{X^{A},D^{A}\}\) in a coordinate chart on \(\mathscr{Z}\). The variation of \(f(X,D)\) is defined by the first of the following: \[\delta f(X,D)=f(X,D)|_{A}\delta(D^{A})=\bar{\partial}_{A}f(X,D)\delta(D^{A}),\] (A.2) where \((\cdot)|_{A}\) is the vertical covariant derivative (e.g., as in (2.21)). For the choices \(V^{A}_{BC}=0\) and \(Y^{A}_{BC}=0\) of (3.15), \(f(X,D)|_{A}=\bar{\partial}_{A}f(X,D)\) and the rightmost form is obtained, consistent with prior definitions [54, 55]. This is used with (3.28) to obtain the second of (3.24): \[\begin{split}\delta D^{A}_{|B}&=\delta(\partial_{B}D^{A})-\delta N^{A}_{B}+\delta(K^{A}_{BC})D^{C}+K^{A}_{BC}\delta(D^{C})\\ &=[\partial_{B}\delta(D^{A})-N^{C}_{B}\bar{\partial}_{C}\delta(D^{A})+K^{A}_{BC}\delta(D^{C})]-\bar{\partial}_{C}N^{A}_{B}\delta(D^{C})+\bar{\partial}_{D}K^{A}_{BC}D^{C}\delta(D^{D})\\ &=[\delta(D^{A})]_{|B}-(\bar{\partial}_{C}N^{A}_{B}-\bar{\partial}_{C}K^{A}_{BD}D^{D})\delta(D^{C}),\end{split}\] (A.3) where it is assumed per (3.12) that \(\bar{\partial}_{C}\delta[D^{A}(X)]=\bar{\partial}_{C}[\delta(D^{A})(X)]=0\) on \(\mathscr{M}\) and \(\mathscr{Z}\). ### Volume form Two definitions have been set forth in prior work for the variation of the volume form \(d\Omega(X,D)\). The first quoted here sets [54] \[\begin{split}\delta(d\Omega)&=[\delta\sqrt{G}/\sqrt{G}]d\Omega=(\ln\sqrt{G})|_{A}\delta(D^{A})d\Omega=\tfrac{1}{2}G^{BC}G_{CB}|_{A}\delta(D^{A})d\Omega\\ &=(C^{B}_{AB}-Y^{B}_{AB})\delta(D^{A})d\Omega=C^{B}_{AB}\delta(D^{A})d\Omega\,,\end{split}\] (A.4) where the first equality is a definition and (2.27) and (A.2) have been used subsequently. **Remark A.2.1**.: According to (A.4), the magnitude of the volume form is varied locally over \(n\)-dimensional base space in (3.25) with \(\alpha=1\) prior to application of the divergence theorem (2.31) used to procure (3.31) and (3.32) from (3.30) of SS3.3.3. The choice (A.4) was used in the most recent theory [54] and implied in a prior numerical implementation [59].
The second definition quoted here was used in the original theoretical derivations [55, 56]: \[\begin{split}\delta(d\Omega)&=[\delta\sqrt{\mathcal{G}}/\sqrt{\mathcal{G}}]d\Omega=(\ln\sqrt{\mathcal{G}})|_{A}\delta(D^{A})d\Omega=(\ln\sqrt{G^{2}})|_{A}\delta(D^{A})d\Omega\\ &=G^{BC}G_{CB}|_{A}\delta(D^{A})d\Omega=2(C^{B}_{AB}-Y^{B}_{AB})\delta(D^{A})d\Omega=2C^{B}_{AB}\delta(D^{A})d\Omega\,.\end{split}\] (A.5) In derivation of (A.5), the determinant of the Sasaki metric as defined in (2.15) has been used along with (2.27) and (A.2). **Remark A.2.2**.: The definition given by the first equality in (A.5) is notionally consistent with other earlier theory [49, 50, 52]. In the present viewpoint with (A.5), the magnitude of the volume form is varied locally in \(2n\)-dimensional total space \(\mathscr{Z}\) via (3.25) with \(\alpha=2\) before integrating over base \(n\)-dimensional space \(\mathscr{M}\) in (3.30) of SS3.3.3. **Remark A.2.3**.: Definition (A.4) corresponds to \(\alpha=1\) and definition (A.5) to \(\alpha=2\) in (3.25). The only ramification in the governing Euler-Lagrange equations is scaling of local free energy density by a factor of one or two through \(\alpha\psi C^{A}_{CA}\) in the micro-momentum balance, in either form (3.32) or (3.35). Macroscopic momentum is unaffected by the definition of \(\delta(d\Omega)\).

## 8 Appendix B: Toward residual stress and growth

### Macroscopic momentum Consideration of residual stress begins with examination of the balance of linear momentum in the form (3.34), repeated and reorganized for convenience: \[\partial_{A}P^{A}_{a}+P^{B}_{a}\gamma^{A}_{AB}-P^{A}_{c}\gamma^{c}_{ba}F^{b}_{A}=-\{[\bar{\partial}_{B}P^{A}_{a}+P^{A}_{a}\bar{\partial}_{B}(\ln\sqrt{G})]\partial_{A}D^{B}+P^{A}_{c}(\gamma^{c}_{ba}-\Gamma^{c}_{ba})F^{b}_{A}\}.\] (B.1) **Remark B.1.1**.: Terms on the left side of (B.1) are standard for nonlinear elasticity theory [22]. If the free energy \(\psi\) does not depend on \(D^{A}\) or \(D^{A}_{|B}\), then the stress \(P^{A}_{a}=\partial\psi/\partial F^{a}_{A}\) is also conventional, presuming \(\psi\) is such that in the undeformed state \(C_{AB}=G_{AB}\Rightarrow P^{A}_{a}=0\). In that case, when the right side of (B.1) vanishes, the body manifold \(\mathcal{M}\) should not contain residual stresses when \(F^{a}_{A}=\partial_{A}\varphi^{a}\) for regular motions \(\varphi^{a}(X)\) (e.g., in the absence of topological changes). **Remark B.1.2**.: Departures from classical nonlinear elasticity arise when (i) \(\psi\) has dependencies on \(D^{A}\) or \(D^{A}_{|B}\), (ii) when \(P^{A}_{a}\) or \(G\) depends on \(D^{A}\) along with heterogeneous state field \(\partial_{A}D^{B}\neq 0\), or (iii) when a different connection than the Levi-Civita connection is used for \(\Gamma^{c}_{ba}\) (i.e., \(\Gamma^{c}_{ba}\neq\gamma^{c}_{ba}\) due to \(d\)-dependence of spatial metric \(g_{ab}\)). Each of these departures could potentially induce stresses \(P^{A}_{a}\neq 0\) in a simply connected body externally unloaded via \(p_{a}=P^{A}_{a}N_{A}=0\) everywhere on its oriented boundary \(\partial\mathcal{M}\) (i.e., residual stresses). Analysis of a particular version of the general theory offers more insight. First assume in (3.18) that \(\hat{g}^{a}_{b}\to\delta^{a}_{b}\) such that \(g_{ab}(x,d)\to g_{ab}(x)=\bar{g}_{ab}(x)\): the spatial metric tensor \(\boldsymbol{g}\) is Riemannian rather than Finslerian. Then \(\gamma^{a}_{bc}=\Gamma^{a}_{bc}\).
Now use the osculating Riemannian interpretation of the Finslerian material metric \(\boldsymbol{G}\) offered by Corollary 2.1.1 manifesting from (3.12): \[\tilde{G}_{AB}(X)=G_{AB}(X,D(X)),\qquad\tilde{G}(X)=\det(\tilde{G}_{AB}(X)),\] (B.2) \[\tilde{\gamma}^{A}_{BA}=\partial_{B}(\ln\sqrt{\tilde{G}})=\partial_{B}(\ln\sqrt{G})+\tilde{\partial}_{A}(\ln\sqrt{G})\partial_{B}D^{A}=\gamma^{A}_{BA}+\tilde{\partial}_{A}(\ln\sqrt{G})\partial_{B}D^{A},\] (B.3) \[\tilde{P}^{A}_{a}(X)=P^{A}_{a}(X,D(X)),\qquad\partial_{B}\tilde{P}^{A}_{a}=\partial_{B}P^{A}_{a}+\partial_{C}P^{A}_{a}\partial_{B}D^{C}.\] (B.4) Substituting (B.3) and (B.4) into (B.1) gives, with \(\gamma^{c}_{ba}=\Gamma^{c}_{ba}\), \[\partial_{A}\tilde{P}^{A}_{a}+\tilde{P}^{B}_{a}\tilde{\gamma}^{A}_{AB}-\tilde{P}^{A}_{c}\gamma^{c}_{ba}F^{b}_{A}=0.\] (B.5) **Remark B.1.3**.: Expression (B.5) has the standard appearance for static equilibrium in classical continuum mechanics, but stress \(\tilde{P}^{A}_{a}\) and connection \(\tilde{\gamma}^{A}_{BC}\) both implicitly depend on internal state \(D^{A}\), and the former possibly by its gradient \(D^{A}_{|B}\) as well if appearing in \(\psi\). Coefficients \(\tilde{\gamma}^{A}_{BC}\) are those of the Levi-Civita connection of \(\tilde{G}_{AB}\) via (2.36). Now neglect dependence on internal state gradient in the energy density, require \(D\)-dependence to arise only through \(G_{AB}\), and assume the body is homogeneous (with mild abuse of notation): \[\psi=\psi(F^{a}_{A},D^{A})=\psi(F^{a}_{A},G_{AB}(X,D))=\tilde{\psi}(F^{a}_{A},\tilde{G}_{AB}(X))=\tilde{\psi}(C_{AB}(F^{a}_{A},g_{ab}),\tilde{G}_{AB}(X)).\] (B.6) Recall from (3.10) that \(C_{AB}=F^{a}_{A}g_{ab}F^{b}_{B}\). As a simple example, take, where \(n=\dim\mathcal{M}\), \[\bar{\psi}=\frac{\mu_{0}}{2}(C_{AB}\tilde{G}^{AB}-n)\Rightarrow\tilde{P}^{A}_{a}=\frac{\partial\bar{\psi}}{\partial F^{a}_{A}}=\mu_{0}g_{ab}\tilde{G}^{AB}F^{b}_{B},\quad\frac{\partial\bar{\psi}}{\partial\tilde{G}_{AB}}=-\frac{\mu_{0}}{2}\tilde{G}^{AC}\tilde{G}^{BD}C_{CD},\] (B.7) and where \(\mu_{0}>0\) is a constant (e.g., an elastic shear modulus). Now assume that spatial manifold \(\mathfrak{m}\) is Euclidean [27, 30] such that the Riemann-Christoffel curvature tensor from \(\gamma^{a}_{bc}\) (and thus derived from \(g_{ab}\)) vanishes identically. **Remark B.1.4**.: In this case, (B.5), the last of (B.6), and the example (B.7) are consistent with the geometric theory of growth mechanics of Yavari [30] in the setting of quasi-statics. Incompressibility can be addressed by augmenting the linear momentum balance to include a contribution from an indeterminate pressure to be determined by boundary conditions under the isochoric constraint \(J=1\)[22]. Otherwise, \(\bar{\psi}\) can be augmented with term(s) to ensure \(C^{A}_{B}\to\delta^{A}_{B}\Rightarrow\tilde{P}^{A}_{a}=0\) (e.g., (4.38) for \(n=1\)). The Riemann-Christoffel curvature tensor from \(\tilde{\gamma}^{A}_{BC}\) (and thus \(\tilde{G}_{AB}\)) need not vanish in general: \[\tilde{\mathcal{R}}^{A}_{BCD}=\partial_{B}\tilde{\gamma}^{A}_{CD}-\partial_{C}\tilde{\gamma}^{A}_{BD}+\tilde{\gamma}^{A}_{BE}\tilde{\gamma}^{E}_{CD}-\tilde{\gamma}^{A}_{CE}\tilde{\gamma}^{E}_{BD}.\] (B.8) **Remark B.1.5**.: In Riemannian geometry, \(\tilde{\gamma}^{A}_{BC}\) are symmetric, differentiable, and obey (2.36); (B.8) has \(\frac{1}{12}n^{2}(n^{2}-1)\) independent components [31].
For \(n=3\), \(\tilde{\mathcal{R}}^{A}_{BCD}\) contains six independent components, determined completely by the metric and Ricci curvature \(\tilde{\mathcal{R}}^{A}_{ABC}\)[30, 98]. For \(n=2\), \(\tilde{\mathcal{R}}^{A}_{BCD}\) contains only one independent component, determined completely by the scalar curvature \(\tilde{\kappa}=\frac{1}{2}\tilde{\mathcal{R}}_{AB}\tilde{G}^{AB}\). For \(n=1\), \(\tilde{\mathcal{R}}^{A}_{BCD}\) always vanishes (i.e., a 1-D manifold is always flat in this sense). When \(\tilde{\mathcal{R}}^{A}_{BCD}\) is nonzero over a region of \(\mathcal{M}\), then no compatible deformation \(\tilde{F}^{A}_{a}(X)\) exists that can push-forward \(\tilde{G}_{AB}\) to match the Euclidean metric \(g_{ab}(\phi(X))\) that would render corresponding regions of \(\mathcal{M}\) and \(\mathfrak{m}\) isometric. In other words, the push-forward \(g_{ab}=\tilde{F}^{A}_{a}\tilde{G}_{AB}\tilde{F}^{B}_{b}\) where \(\tilde{F}^{A}_{a}=\partial_{a}\zeta^{A}\) does not exist, \(\zeta^{A}\) being (nonexistent) Euclidean coordinates on \(\mathcal{M}\). In such cases, \(\mathcal{M}\) would always have to be deformed (e.g., strained) to achieve its spatial representation \(\mathfrak{m}\), since no isometry exists between the two configurations. **Remark B.1.6**.: If an intrinsically curved body manifold in the reference state \(\mathcal{M}\) is stress-free per the constitutive prescription (e.g., (B.7) or any other standard elasticity model), then the intrinsically flat body in the current state \(\mathfrak{m}\) would be necessarily strained and stressed, even if external traction \(p_{a}\) vanishes along its boundary. Thus, this particular rendition of the generalized Finsler theory supplies residual stress from a non-Euclidean material metric tensor \(\tilde{G}_{AB}\) in a manner matching other works that use Riemannian geometry [27, 30]. In the full version of the generalized Finsler theory [54, 55], as discussed following (B.1), residual stresses could emerge from sources additional to those discussed under the foregoing assumptions of a Euclidean spatial metric, a conventional hyperelastic energy potential, and an osculating Riemannian material metric with non-vanishing curvature. A number of different curvature forms can be constructed from the various connections and derivatives of Finsler geometry and its generalizations [3, 5]. Further analysis, beyond the present scope, is needed to relate these geometric objects to physics in the continuum mechanical setting, including residual stresses. **Remark B.1.7**.: Deformation gradient \(F^{a}_{A}\) could be decomposed into a product of two mappings [62]: \(F^{a}_{A}(X)=\partial_{A}\varphi^{a}(X)=(F^{E})^{a}_{\alpha}(X)(F^{D})^{\alpha}_{A}(D(X))\). In this case, the strain energy potential is written to emphasize the elastic deformation \(\mathbf{F}^{E}\), with the state-dependent deformation \(\mathbf{F}^{D}\) accounting explicitly for inelastic deformation mechanisms, including growth [29, 107]. In this setting, residual stresses can arise if \((\mathbf{F}^{E})^{-1}\) and thus \(\mathbf{F}^{D}\) do not fulfill certain integrability conditions: neither two-point tensor \((\mathbf{F}^{E})^{-1}\) nor \(\mathbf{F}^{D}\) is always integrable to a vector field [98]. ### Micro-momentum and growth Now consider the internal state-space equilibrium equation, (3.35), first under the foregoing assumptions used to derive (B.5). Furthermore, take \(N^{A}_{B}=N^{A}_{B}(X)\), \(K^{A}_{BC}=\gamma^{A}_{BC}(X)\), and \(\alpha=1\).
Then, with these assumptions, in the osculating Riemannian interpretation of Corollary 2.1.1, (3.35) is \[\partial_{A}\tilde{Z}^{A}_{C}+\tilde{Z}^{B}_{C}\tilde{\gamma}^{A}_{AB}-\tilde{Z}^{A}_{B}\gamma^{B}_{AC}-Q_{C}=\psi\tilde{\partial}_{C}(\ln\sqrt{G})-R_{C},\] (B.9) \[\tilde{Z}^{A}_{B}(X)=Z^{A}_{B}(X,D(X))=\frac{\partial\psi}{\partial D^{B}_{|A}}(X,D(X)),\qquad Q_{A}(X,D(X))=\frac{\partial\psi}{\partial D^{A}}(X,D(X)),\] (B.10) where (B.10) follows from (3.29). Use energy density \(\psi\) of (B.6), so \(\tilde{Z}^{A}_{B}=0\) identically. Choose the volumetric source term \(R_{C}=\psi\tilde{\partial}_{C}(\ln\sqrt{G})\), which here represents the local change in energy density per unit reference volume due to effects of growth on the local volume form \(d\Omega(X,D)\), since now, per (A.4) of Appendix A, \(\psi\delta(d\Omega)=\psi[\tilde{\partial}_{C}(\ln\sqrt{G})\delta(D^{C})]d\Omega=R_{C}\delta(D^{C})d\Omega\). **Remark B.2.1**.: Physical justification exists in the context of growth mechanics for biological systems: \(R_{C}\) can account for the effect on energy density from changes in mass due to tissue growth [30, 107]. Thus (B.9), with (B.6), further reduces to a form very similar to the equilibrium case of Yavari [30] (e.g., matching equation (2.73) of ref. [30] with vanishing time derivative, if here \(\tilde{\partial}_{A}G_{BC}\) is arbitrary): \[Q_{A}=\frac{\partial\psi}{\partial D^{A}}=\frac{\partial\psi}{\partial G_{BC}}\frac{\partial G_{BC}}{\partial D^{A}}=\frac{\partial\tilde{\psi}}{\partial\tilde{G}_{BC}}\frac{\partial G_{BC}}{\partial D^{A}}=0.\] (B.11) To see how internal state components \(\{D^{A}\}\) can represent growth, consider the case \(n=2\) (i.e., 2-D \(\mathcal{M}\) such as a biological membrane), by which \(\{D^{A}\}\to(D^{1},D^{2})=(l_{1}\xi^{1},l_{2}\xi^{2})\), where \(l_{1,2}>0\) are normalization constants that render the \(\xi^{A}\) dimensionless. Choose a polar (i.e., cylindrical \(\{X^{A}\}\to(R,\Theta)\)) coordinate system on a region of \(\mathcal{M}\) with (3.17) applying, such that \(\bar{\mathbf{G}}=\text{diag}(1,R^{2})\). Assume a generalized Finslerian contribution \(\hat{\mathbf{G}}=\text{diag}(\exp(h_{1}(\xi^{1})),\exp(h_{2}(\xi^{2})))\), where \(h_{1}(D(X))=h_{1}(D^{1}(R,\Theta)/l_{1})\) and \(h_{2}(D(X))=h_{2}(D^{2}(R,\Theta)/l_{2})\) are differentiable functions of their arguments. In matrix form, the second of (3.17) becomes, in this example of anisotropic growth, \[[G_{AB}]=\begin{bmatrix}G_{RR}&0\\ 0&G_{\Theta\Theta}\end{bmatrix}=[\hat{G}^{C}_{A}][\bar{G}_{CB}]=\begin{bmatrix}\exp(h_{1}(\xi^{1}))&0\\ 0&R^{2}\exp(h_{2}(\xi^{2}))\end{bmatrix}.\] (B.12) A more specific case is now studied in detail. Denote by \(\chi(R)\) a radial growth function. Then set \[\xi=\xi^{1}=\frac{D^{1}}{l_{1}}=\frac{D^{2}}{l_{2}}=\xi^{2},\quad\xi=\xi(R);\qquad h=h_{1}=-h_{2}=2\chi,\quad h=h(\xi(R))=2\chi(R).\] (B.13) This produces metric \(\tilde{G}_{AB}(X)\) of Yavari (ref. [30], eq. (2.101)) for anisotropic growth of an annulus: \[\begin{split}[G_{AB}(R,\xi)]&=\begin{bmatrix}\exp(h(\xi(R)))&0\\ 0&R^{2}\exp(-h(\xi(R)))\end{bmatrix}\\ \Rightarrow&[\tilde{G}_{AB}(R)]&=\begin{bmatrix}\exp(2\chi(R))&0\\ 0&R^{2}\exp(-2\chi(R))\end{bmatrix}.\end{split}\] (B.14) **Remark B.2.2**.: In this special case given by (B.14), internal state changes preserve volume via \(\det(G_{AB}(X,D))=R^{2}\) being independent of \(\chi\), \(\xi\), and \(D\), so \(C_{AB}^{B}=0\).
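The volume-preserving property noted in Remark B.2.2 can be confirmed symbolically; a short sketch (symbol names are illustrative):

```python
import sympy as sp

# Symbolic check of (B.12)-(B.14): with h1 = -h2 = h(xi), det[G_AB] reduces to R**2,
# independent of the internal state, so the Cartan contraction C^B_{AB} vanishes.
R, xi = sp.symbols('R xi', positive=True)
h = sp.Function('h')(xi)
G = sp.Matrix([[sp.exp(h), 0],
               [0, R**2 * sp.exp(-h)]])
detG = sp.simplify(G.det())
print(detG)                                     # -> R**2
print(sp.diff(sp.log(sp.sqrt(detG)), xi))       # -> 0, i.e. C^B_{AB} = 0
```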
Now apply the energy potential (B.7), such that for internal state equilibrium, (B.11) becomes, defining \(\dot{h}(\xi)=\mathrm{d}h(\xi)/\mathrm{d}\xi\), \[Q_{A}=-\frac{\mu_{0}}{2}\tilde{G}^{BD}\tilde{G}^{CE}C_{DE}\bar{\partial}_{A}G_{BC}=0\,\Rightarrow\,\begin{cases}2l_{1}Q_{1}(\xi,R)=-\mu_{0}\mathrm{exp}(-h(\xi))C_{RR}\dot{h}(\xi)=0,\\ 2l_{2}Q_{2}(\xi,R)=\mu_{0}R^{-2}\mathrm{exp}(h(\xi))C_{\Theta\Theta}\dot{h}(\xi)=0.\end{cases}\] (B.15) Thus, equilibrium of internal state is only ensured for this particular strain energy function and material metric when \(\dot{h}=0\). A sample function with three equilibrium states at \(\xi=0,\frac{1}{2},1\) is the double well: \[h=\xi^{2}(1-\xi)^{2},\qquad\dot{h}=2\xi(1-\xi)(1-2\xi).\] (B.16) Now revisit the Levi-Civita connection and curvature for the metric \(\tilde{\mathbf{G}}\) in (B.14). Denote \(h^{\prime}(\xi(R))=[\mathrm{d}h(\xi(R))/\mathrm{d}\xi][\mathrm{d}\xi(R)/\mathrm{d}R]=\dot{h}(\xi)\xi^{\prime}(R)\). From (2.36), the \(\tilde{\gamma}_{BC}^{A}\) have the non-vanishing components \[\tilde{\gamma}_{RR}^{R}=\frac{h^{\prime}}{2},\quad\tilde{\gamma}_{\Theta\Theta}^{R}=\exp(-2h)\left(\frac{R^{2}h^{\prime}}{2}-R\right),\quad\tilde{\gamma}_{R\Theta}^{\Theta}=\tilde{\gamma}_{\Theta R}^{\Theta}=\frac{1}{R}-\frac{h^{\prime}}{2}.\] (B.17) Recalling \(\tilde{\kappa}\) is the scalar curvature, the non-vanishing covariant components of \(\tilde{\mathcal{R}}_{BCDE}=\tilde{\mathcal{R}}_{BCD}^{A}\tilde{G}_{AE}\) are, from (B.8), \[\begin{split}\tilde{\mathcal{R}}_{R\Theta R\Theta}&=\tilde{\mathcal{R}}_{\Theta R\Theta R}=-\tilde{\mathcal{R}}_{R\Theta\Theta R}=-\tilde{\mathcal{R}}_{\Theta RR\Theta}=-R^{2}\tilde{\kappa}=-[\partial_{R}\tilde{\gamma}_{\Theta\Theta}^{R}+\tilde{\gamma}_{\Theta\Theta}^{R}(\tilde{\gamma}_{RR}^{R}-\tilde{\gamma}_{R\Theta}^{\Theta})]\tilde{G}_{RR}\\ &=-\frac{\mathrm{d}}{\mathrm{d}R}\left[\exp(-2h)\left(\frac{R^{2}h^{\prime}}{2}-R\right)\right]\exp(h)+\left(\frac{R^{2}h^{\prime}}{2}-R\right)\left(\frac{1}{R}-h^{\prime}\right)\exp(-h)\\ &=-\frac{R}{2}\mathrm{exp}(-h)\left[R\{h^{\prime\prime}-(h^{\prime})^{2}\}+3h^{\prime}\right]\\ &=-\frac{R}{2}\mathrm{exp}(-h)\left[R\left(\frac{\mathrm{d}^{2}\xi}{\mathrm{d}R^{2}}+\frac{\mathrm{d}\xi}{\mathrm{d}R}\frac{\mathrm{d}}{\mathrm{d}R}\right)\frac{\mathrm{d}h}{\mathrm{d}\xi}-R\left(\frac{\mathrm{d}h}{\mathrm{d}\xi}\frac{\mathrm{d}\xi}{\mathrm{d}R}\right)^{2}+3\frac{\mathrm{d}h}{\mathrm{d}\xi}\frac{\mathrm{d}\xi}{\mathrm{d}R}\right].\end{split}\] (B.18) Take the annular material manifold \(\{\mathcal{M}:R\in[R_{0},R_{1}],\Theta\in[0,\Theta_{1}]\}\), \(R_{1}>R_{0}>0\), \(\Theta_{1}<2\pi\). Since \(R>0\) and for bounded \(h\), the local flatness condition from (B.18) is \[R\{h^{\prime\prime}-(h^{\prime})^{2}\}+3h^{\prime}=0\quad\leftrightarrow\quad R\left(\frac{\mathrm{d}^{2}\xi}{\mathrm{d}R^{2}}+\frac{\mathrm{d}\xi}{\mathrm{d}R}\frac{\mathrm{d}}{\mathrm{d}R}\right)\frac{\mathrm{d}h}{\mathrm{d}\xi}-R\left(\frac{\mathrm{d}h}{\mathrm{d}\xi}\frac{\mathrm{d}\xi}{\mathrm{d}R}\right)^{2}+3\frac{\mathrm{d}h}{\mathrm{d}\xi}\frac{\mathrm{d}\xi}{\mathrm{d}R}=0.\] (B.19) **Remark B.2.3**.: The first of (B.19) is a second-order nonlinear ODE for radial distribution of the generic function \(h=h(R)\). The second is a second-order nonlinear ODE for \(\xi=\xi(R)\) that could be solved if intermediate functional form \(h(\xi)\) is known a priori (e.g., (B.16)). Trivial solutions are \(h(R)=\text{constant}\) and \(\xi(R)=\text{constant}\).
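The curvature entering (B.18) can be checked symbolically from the osculating metric (B.14), using the standard Gaussian-curvature formula for an orthogonal 2-D metric; a brief sketch follows (the overall sign of the component depends on the Riemann-tensor convention):

```python
import sympy as sp

R = sp.symbols('R', positive=True)
h = sp.Function('h')(R)
E = sp.exp(h)               # G_RR of (B.14), with h = 2*chi(R)
G = R**2 * sp.exp(-h)       # G_ThetaTheta of (B.14)

# Gaussian curvature K for an orthogonal 2-D metric ds^2 = E dR^2 + G dTheta^2
# whose coefficients depend on R only.
sqrtEG = sp.simplify(sp.sqrt(E * G))                       # = R
K = -sp.diff(sp.diff(G, R) / sqrtEG, R) / (2 * sqrtEG)

# Covariant component proportional to K * det(G_tilde); compare with (B.18).
R_comp = sp.simplify(K * E * G)
print(sp.factor(R_comp))

# A profile of the form h = 2*ln(R) + const renders the metric flat (zero curvature):
print(sp.simplify(R_comp.subs(h, 2 * sp.log(R)).doit()))
```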
General non-trivial analytical solutions are not obvious. Given appropriate boundary conditions, determination of particular non-trivial solutions for flatness, if they exist, would appear to require numerical methods.
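As one illustration of such a numerical treatment, the first (radial) form of the flatness condition (B.19) can be integrated as an initial value problem for \(h(R)\) over the annulus; a minimal sketch with illustrative values of \(R_{0}\), \(R_{1}\), and initial data (a flat profile of the form \(h=2\ln R+\text{const}\) provides a convenient cross-check):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the radial flatness condition (B.19), written as h'' = (h')**2 - 3*h'/R,
# over an annulus R in [R0, R1].  R0, R1 and the initial data are illustrative choices.
R0, R1 = 1.0, 2.0

def rhs(R, y):
    h, dh = y
    return [dh, dh**2 - 3.0 * dh / R]

sol = solve_ivp(rhs, (R0, R1), [0.0, 0.5], dense_output=True, rtol=1e-10, atol=1e-12)

# Cross-check: h(R) = 2*ln(R/R0) satisfies (B.19) exactly, and the integrator
# reproduces it when started from matching initial data h(R0) = 0, h'(R0) = 2/R0.
sol2 = solve_ivp(rhs, (R0, R1), [0.0, 2.0 / R0], dense_output=True, rtol=1e-10, atol=1e-12)
Rg = np.linspace(R0, R1, 5)
print(sol.sol(Rg)[0])                              # numerically obtained h(R)
print(sol2.sol(Rg)[0] - 2.0 * np.log(Rg / R0))     # ~0: agreement with the flat profile
```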
2302.14524
Swing Amplification and the Gaia Phase Spirals
We explore the interplay between in-plane and vertical dynamics in stellar discs within the framework of the shearing box approximation. Julian and Toomre used the shearing sheet to show that leading density waves are amplified as they swing into trailing ones. We extend their formalism into the dimension perpendicular to the disc and obtain explicit solutions for the response of a disc to an impulsive, external excitation. An excitation that is symmetric about the mid plane produces a density/breathing wave as well as two-armed phase spirals in the vertical phase space plane. On the other hand, an excitation that is antisymmetric about the mid plane leads to a bending wave and single-armed phase spirals. In either case, self-gravity plays a crucial role in driving the evolution of the disturbance and determining the amplitude and pitch angle of the ensuing spirals. We also show that when the disc is excited by a co-rotating cloud, it develops stationary phase spirals in the wake of the cloud. The results call into question simple kinematic arguments that have been used to determine the age of the phase spirals seen in the Gaia survey.
Lawrence M. Widrow
2023-02-28T12:37:37Z
http://arxiv.org/abs/2302.14524v1
# Swing Amplification and the Gaia Phase Spirals ###### Abstract We explore the interplay between in-plane and vertical dynamics in stellar discs within the framework of the shearing box approximation. Julian and Toomre used the shearing sheet to show that leading density waves are amplified as they swing into trailing ones. We extend their formalism into the dimension perpendicular to the disc and obtain explicit solutions for the response of a disc to an impulsive, external excitation. An excitation that is symmetric about the mid plane produces a density/breathing wave as well as two-armed phase spirals in the vertical phase space plane. On the other hand, an excitation that is antisymmetric about the mid plane leads to a bending wave and single-armed phase spirals. In either case, self-gravity plays a crucial role in driving the evolution of the disturbance and determining the amplitude and pitch angle of the ensuing spirals. We also show that when the disc is excited by a co-rotating cloud, it develops stationary phase spirals in the wake of the cloud. The results call into question simple kinematic arguments that have been used to determine the age of the phase spirals seen in the Gaia survey. keywords: Galaxy:kinematics and dynamics - Solar Neighborhood - Galaxy: disc - Galaxy: structure ## 1 Introduction One of the most intriguing discoveries from Gaia Data Release 2 (Gaia Collaboration et al., 2018, 2018) is the existence of spirals in the vertical, or \(z-w\), phase space distribution function (DF) of Solar Neighbourhood stars (Antoja et al., 2018). The phase spirals are easily seen in maps of the \(z-w\) DF once the smooth background distribution has been removed. They have a fractional density contrast of a few percent and display a rich morphology that depends on the properties of the stars under consideration such as Galactocentric radius, Galactic azimuth, angular momentum, and epicyclic energy (Laporte et al., 2019; Widmark, 2019; Bland-Hawthorn et al., 2019; Li and Shen, 2020; Hunt et al., 2022; Frankel et al., 2022; Antoja et al., 2022). For example, the spirals tend to be two-armed in the inner galaxy and one-armed in the outer galaxy (Hunt et al., 2022). They also appear in \(z-w\) maps of the mean azimuthal and radial velocities. The most natural explanation for the spirals is that they are disturbances in the \(z-w\) DF from some past event or events that have undergone phase mixing (Tremaine, 1999). For example, if a local patch of the disc experiences a "kick" perpendicular to the mid plane, stars will be displaced in the \(w\)-direction. The perturbed DF will then shear into a one-armed spiral since stars with low vertical energy rotate about the origin of the \(z-w\) plane at a higher frequency than stars with high vertical energy. On the other hand, one might imagine a breathing mode perturbation where the DF is squeezed in \(z\) and/or stretched in \(w\). Over time, this perturbation will shear into a two-armed spiral. In either case, if the evolution in phase space is purely kinematic, then the pitch angle of the phase spiral will depend on the time since the initial perturbation and the anharmonicity of the vertical potential. Indeed, the simplest approach to understanding the Gaia phase spirals is to model the DF as test particles in a fixed one-dimensional potential, introduce an ad hoc perturbation, and evolve the system until a spiral pattern matching the one seen in the data is reached.
This approach leads to an estimate of \(300-900\,\mathrm{Myr}\) for the age of the spiral (Antoja et al., 2018). When the test-particle analysis is extended to three dimensions, it can help elucidate the origin of the mean \(v_{\phi}\) and \(v_{R}\) spirals (Binney and Schonrich, 2018; Darling and Widrow, 2019). Similar estimates for the age of the spirals can be obtained by transforming the data into action-angle-frequency or (\(J_{z}\), \(\theta_{z}\), \(\Omega_{z}\)) coordinates. If the spirals are created by a single event and if the potential is time-independent, then \(J_{z}\) will be constant and \(\theta_{z}=\Omega_{z}t+\theta_{0}\) along a star's orbit. Thus, in the \(\theta_{z}-\Omega_{z}\) plane, the spirals should appear as parallel ridges whose slope is proportional to the inverse age of the spiral (Li and Widrow, 2021; Frankel et al., 2022; Li and Widrow, 2023; Tremaine et al., 2022). Note that both methods require a model for the background gravitational potential. An alternative approach is to use the shape of the spirals to constrain the gravitational potential (Widmark et al., 2021, 2021). A promising candidate for the origin of the spirals is the passage of a dwarf galaxy or dark matter subhalo through the Galactic disc with the Sagittarius dwarf galaxy (Sgr) considered a prime suspect (Laporte et al., 2019; Bennett and Bovy, 2021; Bennett et al., 2022). Sgr is the nearest known neighbor to the Milky Way (Ibata et al., 1994) and has an orbit that has likely taken it through the mid plane of the Galaxy several times over the last few Gyr (Johnston et al., 1995). Purcell et al. (2011) argued that Sgr was crucial in shaping the Milky Way's bar and spiral arms while Gomez et al. (2012) suggested that it could have also generated the vertical bending and breathing waves seen in both pre-Gaia surveys and Gaia (Widrow et al., 2012; Williams et al., 2013; Carlin et al., 2013; Xu et al., 2015; Bennett and Bovy, 2019). Several groups have found vertical phase spirals in high-resolution simulations of a Sgr-Milky Way encounter, though none of these have managed to reproduce the morphology of the spirals found in the Gaia data (Laporte et al., 2019; Bennett and Bovy, 2021). Simulations have also been used to explore other origins of the phase spirals such as the vertical waves generated by a buckling event in the Galactic bar (Khoperskov et al., 2019). It is worth noting that the spirals are subtle (few percent) phase space features at a scale of \(200\,\mathrm{pc}\) by \(10\,\mathrm{km}\,\mathrm{s}^{-1}\) and therefore push the resolution limits of the simulations. The general conclusion from these investigations is that the simple picture of a kinematic spiral generated from a single event is incomplete, if not incorrect. For example, the transition from two-armed to one-armed spirals as one moves out in Galactocentric radius may require multiple events (Hunt et al., 2022). In addition, when the spirals are transformed from \(z-w\) to \(\theta_{z}-\Omega_{z}\) coordinates, they appear as curved rather than parallel bands (Frankel et al., 2022; Tremaine et al., 2022). Finally, the phase spirals found in simulations are not as tightly wound as the ones seen in the data (Laporte et al., 2019; Bennett and Bovy, 2021; Bennett et al., 2022). These results may indicate that self-gravity is essential for modelling the evolution of the spirals. This point was stressed in Darling and Widrow (2019), who compared N-body simulations of a test-particle disc with fully self-consistent ones.
In both cases, a bend at the solar circle was introduced into the disc. The spirals that developed in the test-particle case were easy to detect and had the expected pitch angle. On the other hand, the spirals in the live disc were less tightly wound and more difficult to discern. Recently, Tremaine et al. (2022) proposed an alternative scenario in which phase spirals are generated by a continual sequence of weak perturbations and erased by phase space diffusion due to the graininess of the gravitational potential. In this picture, the pitch angle of the spirals reflects the diffusion time scale rather than the elapsed time from an initial perturbation. As a proof of concept, they presented simulations in which test particles in a fixed potential were subjected to a stochastic sequence of kicks and were able to reproduce key features of the Gaia spirals. In this paper we explore the connection between in-plane perturbations and phase spirals within the framework of the shearing box. In their classic paper on swing amplification, Julian and Toomre (1966) (hereafter, JT66) followed the evolution of a plane wave perturbation in a razor-thin disc by integrating the linearized equations for the surface density and gravitational potential. They found that in a marginally stable disc (Toomre parameter \(Q\ga 1\)) with surface density \(\Sigma_{0}\) and epicyclic frequency \(\kappa\), waves with wavelength close to \(\lambda_{\mathrm{crit}}=4\pi^{2}G\Sigma_{0}/\kappa^{2}\) were amplified by one or two orders of magnitude as they swung from leading to trailing. Our first task will be to extend the JT66 formalism into the dimension perpendicular to the disc, that is, to go from a shearing sheet to a shearing box. The shearing box approximation has been used in a variety of problems in theoretical astrophysics such as the study of accretion discs and, most notably, the magnetorotational instability (Hawley et al., 1995). There, the evolution of the system is driven by gasdynamics and magnetic fields. In our case, the system is collisionless and the evolution is driven entirely by gravity. As in JT66, we solve the linearized collisionless Boltzmann equation for a disc that is perturbed by an external excitation. The approach has some commonalities with the formalism developed in Banik et al. (2022) who also considered the response of an isothermal slab to an external potential in linear theory. However, their analysis did not include self-gravity, epicyclic motions, or shear, all of which play important roles in the our analysis. We also build on early studies of self-gravitating modes in plane-symmetric systems by Kalnajs (1973); Araki (1985); Mathur (1990); Weinberg (1991); Widrow and Bonner (2015). Though this work focuses on the \(z-w\) phase spirals, the shearing box machinery can be used to investigate more general questions about the interplay between in-plane and vertical dynamics. In particular, one can study the relationship between in-plane disturbances such as spiral arms and the vertical waves seen throughout the Milky Way's disc (Widrow et al., 2012; Carlin et al., 2013; Williams et al., 2013; Yanny and Gardner, 2013; Xu et al., 2015; Schonrich and Dehnen, 2018; Bennett and Bovy, 2019; Widmark et al., 2022). The connection between vertical breathing waves and spiral arms was investigated with N-body simulations by Debattista (2014); Ghosh et al. (2022); Kumar et al. (2022) and in linear perturbation theory by Moari et al. (2015, 2016). 
These studies focused on moments of the DF rather than the DF itself. And though the latter treated the full 3D geometry of the disc, it did not include self-gravity of the perturbations. Our treatment has the advantages of analytic methods while still including self-gravity. The price we pay is that the treatment of epicyclic motion and shear is only approximate. An outline of the paper is as follows. In Section 2, we present the formalism for calculating the response of a disc to external excitations within a shearing box. In Section 3, we consider the case where a single wave with a well-defined wave vector is excited impulsively. In particular, we study perturbations that generate either breathing waves or bending waves. In Section 4, we compute the stationary response of the disc to a co-rotating mass. We discuss the implications of our results, limitations of the shearing box approximation, and avenues for extending this work in Section 5. We conclude with a summary of our results in Section 6. ## 2 Shearing box equations ### particle orbits Shearing box coordinates are a local Cartesian approximation to cylindrical coordinates in a rotating frame. They were devised by Hill (1878) to study the three-body problem and used in early studies of galactic dynamics by Goldreich and Lynden-Bell (1965), JT66, and Goldreich and Tremaine (1978). A more recent discussion of the shearing sheet in the context of galactic dynamics and swing amplification can be found in Fuchs (2001). Here we follow the pedagogical flow and notation of Binney (2020) (hereafter B20) who provided a particularly clear and accessible treatment of the JT66 formalism for the shearing sheet. Let (\(R\), \(\phi\), \(z\)) be inertial cylindrical coordinates for a rotating stellar disc and consider a patch of the disc centered on \(R=R_{0}\), \(\phi=\Omega t\) and \(z=0\) where \(\Omega\) is the angular frequency of a circular orbit at \(R=R_{0}\). The shearing box coordinates are \(x=R-R_{0}\), \(y=R_{0}(\phi-\Omega t)\), and \(z\) and the Lagrangian is given by \[\mathcal{L}=\frac{1}{2}\left(\dot{x}^{2}+\left(1+\frac{x}{R_{0}}\right)^{2} \left(\Omega R_{0}+\dot{y}\right)^{2}+\dot{z}^{2}\right)-\Phi_{0}(x,\,z). \tag{1}\] Formally, we assume \(\mathbf{x}\ll R_{0}\) and \(\dot{\mathbf{x}}\ll R_{0}\Omega\) though these inequalities are only marginally satisfied for the size of the patch that we will consider. The relationships between \(\dot{\mathbf{x}}\) and the conjugate momenta \(\mathbf{p}\) are given by \[\dot{x}=p_{x}\equiv u, \tag{2}\] \[\dot{y}=\frac{p_{y}}{\left(1+x/R_{0}\right)^{2}}-R_{0}\Omega\equiv v-2Ax, \tag{3}\] and \[\dot{z}=p_{z}\equiv w. \tag{4}\] The quantities \((u,v,w)\) correspond to the radial, azimuthal, and vertical components of a particle's velocity relative to the local circular orbit and are thus the shearing box analogues to the \((U,V,W)\) velocity components often used to study the dynamics of the solar neighborhood. Since the potential is independent of \(y\), \(p_{y}\) is a constant of motion. It is, however, \(\mathcal{O}(R_{0}\Omega)\). Following B20, we introduce \(\Delta_{y}\equiv p_{y}-R_{0}\Omega\), which is the same order as \(\dot{\mathbf{x}}\). The Hamiltonian is given by \[H=\frac{1}{2}\left(p_{x}^{2}+\frac{p_{y}^{2}}{\left(1+x/R_{0}\right)^{2}}+p_{z }^{2}\right)-\Omega R_{0}p_{y}+\Phi_{0}(x,\,z). \tag{5}\] We assume that the potential is additively separable in \(x\) and \(z\) and write \[\Phi_{0}(x,\,z)=\xi(x)+\chi(z).
\tag{6}\] Expanding \(\xi\) in a Taylor series about \(x=0\) we find \[\xi(x)=R_{0}\Omega^{2}x+\frac{1}{2}\left(\Omega^{2}-4A\Omega\right)x^{2}, \tag{7}\] where \(A\) is Oort's first constant and, without loss of generality, we take \(\xi(0)=0\). To quadratic order in small quantities, the Hamiltonian can be written as \[H=H_{x}(p_{x},x)+H_{y}(\Delta_{y})+H_{z}(z,p_{z})+\text{constant} \tag{8}\] where \(H_{x}\equiv\frac{1}{2}\left(p_{x}^{2}+\kappa^{2}\left(x-\bar{x}\right)^{2}\right)\) and \(H_{z}=\frac{1}{2}p_{z}^{2}+\chi(z)\) are constants of motion and \(\bar{x}\equiv 2\Omega\Delta_{y}/\kappa^{2}\) is the shearing box analogue of the guiding radius. (See B20 for a more detailed calculation.) In general, a particle will move along an elliptical orbit about the point \((x,\,y)=(\bar{x},\,y_{0}-2A\bar{x}t)\) and execute anharmonic oscillations in \(z\) about the mid plane. We can therefore write \[x(t)=X\cos\theta_{r}+\bar{x} \tag{9}\] where \(\theta_{r}=\kappa t+\theta_{0}\) and \(X\) and \(\theta_{0}\) are constants. It follows that \[p_{x}=\dot{x}=-\kappa X\sin\theta_{r}, \tag{10}\] and \[\dot{y}=-2A\bar{x}-2\Omega X\cos\theta_{r}, \tag{11}\] or equivalently \[v=2B\left(x-\bar{x}\right) \tag{12}\] where \(B\) is Oort's second constant. In addition, we have \[y(t)=y_{0}-2A\bar{x}t-\frac{2\Omega}{\kappa}X\sin\theta_{r}. \tag{13}\] ### plane wave perturbations We consider a density perturbation of the form \[\rho_{1}(\mathbf{x},\,t)=e^{i\mathbf{k}_{p}\cdot\mathbf{x}_{p}}\tilde{\rho}_{1}(z,\,t) \tag{14}\] where \(\mathbf{x}_{p}\) and \(\mathbf{k}_{p}\) are the position vector and wavenumber in the plane of the disc. Here and throughout, the over-tilde denotes the coefficient of \(e^{i\mathbf{k}_{p}\cdot\mathbf{x}_{p}}\). This perturbation represents a spiral wave in the plane of the disc. If the shearing box is centered on the corotation radius of the wave, then \(\mathbf{x}_{p}\cdot\mathbf{k}_{p}\) must be constant for particles on circular orbits and therefore \[k_{x}(t_{0})x+k_{y}y_{0}=k_{x}(t)x+k_{y}y(t). \tag{15}\] For circular orbits, \(x\) is constant, \(y(t)-y_{0}=-2A(t-t_{0})x\), and therefore \[k_{x}(t)=k_{x0}+2k_{y}A(t-t_{0}) \tag{16}\] where \(k_{x0}=k_{x}(t_{0})\). Without loss of generality, we can set \(t_{0}=0\). Then \[k_{p}=k_{y}\left(1+4A^{2}t^{2}+\alpha^{2}+4At\alpha\right)^{1/2}=k_{y}/\beta(t) \tag{17}\] where \(\alpha\equiv k_{x0}/k_{y}\) and \(\beta\equiv k_{y}/k_{p}\). For a single mode, we can further set \(k_{x0}=0\) so that \(t=0\) corresponds to the time when wave crests are aligned with the \(y\) axis. For general orbits we have \[\mathbf{k}_{p}\cdot\mathbf{x}_{p} =k_{x0}\left(\bar{x}+X\cos\theta_{r}\right)\] \[+k_{y}\left[y_{0}+2X\left(At\cos\theta_{r}-\frac{\Omega}{\kappa} \sin\theta_{r}\right)\right] \tag{18}\] and therefore \[\mathbf{k}_{p}\cdot\mathbf{x} _{p}|_{t^{\prime}}=\mathbf{k}_{p}\cdot\mathbf{x}_{p}|_{t}+\psi(t^{\prime})-\psi(t) \tag{19}\] where \[\psi(t)=k_{x0}X\cos\theta_{r}+2k_{y}X\left(At\cos\theta_{r}-\left(\Omega/ \kappa\right)\sin\theta_{r}\right). \tag{20}\] ### equilibrium model By the Jeans theorem, \(f_{0}\) can be written as a function of the integrals of motion \(H_{x}\), \(\Delta_{y}\), and \(H_{z}\) (Binney & Tremaine, 2008). Here, we assume that it is independent of \(\Delta_{y}\) and separable in \(H_{x}\) and \(H_{z}\). Following B20, we further assume that the in-plane factor of the DF is given by the Maxwell-Boltzmann distribution. For the vertical factor, we use the DF for the lowered isothermal plane (Weinberg, 1991).
Putting these together, we have \[f_{0}(\mathbf{x},\,\mathbf{p})=\frac{\Omega\Sigma_{0}}{\left(2\pi\right)^{3/2} \kappa z_{0}\sigma_{x}^{2}\sigma_{z}}e^{-H_{x}/\sigma_{x}^{2}}F_{z}(H_{z}) \tag{21}\] where \(\Sigma_{0}\) is the surface density, \(z_{0}\equiv\sigma_{z}^{2}/\pi G\Sigma_{0}\) is the characteristic thickness of the system, and \[F_{z}(H_{z})=\begin{cases}N_{z}\left(e^{-H_{z}/\sigma_{z}^{2}}-e^{-E_{0}/\sigma_ {z}^{2}}\right)&0<H_{z}<E_{0}\\ 0&\text{otherwise.}\end{cases} \tag{22}\] The constant \(N_{z}\) is defined so that \[\frac{1}{\sqrt{8\pi}\sigma_{z}z_{0}}\int dzdp_{z}F_{z}(H_{z})=1 \tag{23}\] and \[\Sigma_{0}=\int d^{3}\mathbf{p}\,dzf_{0}(\mathbf{x},\,\mathbf{p}). \tag{24}\] Note that our definition of the in-plane DF differs from the one in B20 since we use \(p_{y}\) as the azimuthal velocity coordinate rather than \(v\). The vertical potential \(\chi\) and density are determined by solving Poisson's equation \[\frac{d^{2}\chi}{dz^{2}}=4\pi G\rho_{0}(z) \tag{25}\] where \[\rho_{0}(z) =\int d^{3}\mathbf{p}f_{0} \tag{26}\] \[=\frac{N_{z}\Sigma_{0}}{2z_{0}}\left(e^{-\chi(z)/\sigma_{z}^{ 2}}\operatorname{erf}(t)-\frac{2}{\sqrt{\pi}}te^{-E_{0}/\sigma_{z}^{2}}\right) \tag{27}\] with \(t=\left(E_{0}-\chi(z)\right)^{1/2}/\sigma_{z}\). Note that for a self-consistent model, that is, one where the potential is determined entirely from the disc itself, we recover the result for the isothermal plane in the limit \(E_{0}\rightarrow\infty\): \[\rho(z)=\frac{\Sigma_{0}}{2z_{0}}\mathrm{sech}^{2}(z/z_{0}) \tag{28}\] \[\chi(z)=2\sigma_{z}^{2}\log\cosh(z/z_{0}) \tag{29}\] and \(N_{z}=1\) (Spitzer, 1942; Camm, 1950). The lowered isothermal plane is the one-dimensional analogue of the lowered isothermal sphere or King model (King, 1966) and provides a model in which the density is identically zero for \(|z|\) greater than some finite truncation length. ### gravitational potential The contribution to the gravitational potential from the density perturbation in equation 14 can be written \[\Phi_{1}(\mathbf{x},\,t)=e^{i\mathbf{k}_{p}\cdot\mathbf{x}_{p}}\tilde{\Phi}_ {1}(z,t) \tag{30}\] where \[\frac{\partial^{2}\tilde{\Phi}_{1}}{\partial z^{2}}-k_{p}^{2}\tilde{\Phi}_{1} =4\pi G\tilde{\rho}_{1}(z,\,t). \tag{31}\] The solution is given by \[\tilde{\Phi}_{1}(z,t)=-\frac{2\pi G}{k_{p}}P(z,t) \tag{32}\] where the Green's function integral \[P(z,t)=\int_{-\infty}^{\infty}\tilde{\rho}_{1}(\zeta,t)e^{-k_{p}|z-\zeta|}\,d\zeta \tag{33}\] has dimensions of surface density. The result for a razor-thin disc, \[\tilde{\Phi}_{1}(z,t)=-\frac{2\pi G}{k_{p}}\tilde{\Sigma}_{1}(t)e^{-k_{p}|z|}, \tag{34}\] is recovered by setting \(\tilde{\rho}_{1}(\zeta,t)=\tilde{\Sigma}_{1}(t)\delta(\zeta)\). Similarly, the \(z\)-derivative of the potential is given by \[\frac{\partial\tilde{\Phi}_{1}}{\partial z}=2\pi GQ(z,t) \tag{35}\] where \[Q(z,t)\equiv\int_{-\infty}^{\infty}\tilde{\rho}_{1}(\zeta,t)e^{-k_{p}|z-\zeta|} \mathrm{sgn}(z-\zeta)d\zeta. \tag{36}\] ### linearized distribution function In linear theory, we write \(f(\mathbf{x},\,\mathbf{p},\,t)=f_{0}(\mathbf{x},\,\mathbf{p})+f_{1}(\mathbf{x},\,\mathbf{p}, \,t)\) and \(H=H_{0}+\Phi_{1}(\mathbf{x},\,t)\). The collisionless Boltzmann equation is then \[\frac{df_{1}}{dt}\equiv\frac{\partial f_{1}}{\partial t}+[f_{1},\,H_{0}]=[ \Phi_{1},\,f_{0}] \tag{37}\] where \([\,,\,]\) are the usual Poisson brackets (Binney & Tremaine, 2008).
This equation admits the following integral expression for \(f_{1}\): \[f_{1}(\mathbf{x},\mathbf{p},t)=J_{p}+J_{z} \tag{38}\] where \[J_{p}(\mathbf{x},\,\mathbf{p})\equiv\int_{t_{i}}^{t}dt^{\prime}\,\frac{ \partial\Phi_{1}}{\partial\mathbf{x}_{p}}\cdot\frac{\partial f_{0}}{ \partial\mathbf{p}_{p}}=i\int_{t_{i}}^{t}dt^{\prime}\,\mathbf{k}_{p}\cdot \frac{\partial f_{0}}{\partial\mathbf{p}_{p}}\,\Phi_{1} \tag{39}\] and \[J_{z}(\mathbf{x},\,\mathbf{p})\equiv\int_{t_{i}}^{t}dt^{\prime}\,\frac{ \partial\Phi_{1}}{\partial z}\frac{\partial f_{0}}{\partial p_{z}}. \tag{40}\] The lower bounds for the integrals represent an initial time when perturbations are first introduced while the integrands are evaluated along unperturbed orbits as given in Section 2.1. For \(J_{p}\), we use the fact that \[\frac{\partial f_{0}}{\partial p_{y}}=\frac{\partial f_{0}}{\partial\Delta_{y}} =\frac{2\Omega}{\kappa^{2}}\frac{\partial f_{0}}{\partial\bar{x}} \tag{41}\] to find \[\mathbf{k}_{p}\cdot\frac{\partial f_{0}}{\partial\mathbf{p}_{p}}=-\frac{f_{0}}{ \sigma_{x}^{2}}\left(k_{x}p_{x}-2\Omega k_{y}(x-\bar{x})\right). \tag{42}\] Since this expression is evaluated along an unperturbed orbit, we have \[\frac{\mathbf{k}_{p}}{k_{p}}\cdot\frac{\partial f_{0}}{\partial\mathbf{p}_{p}}= \frac{f_{0}\beta}{\sigma_{x}^{2}}\left(\left(\alpha+2At^{\prime}\right)\kappa X \sin\theta_{r}+2\Omega X\cos\theta_{r}\right). \tag{43}\] For a plane wave perturbation \[\rho_{1}(\mathbf{x}^{\prime},t^{\prime})=e^{i\mathbf{k}_{p}^{\prime}\cdot \mathbf{x}_{p}^{\prime}}\tilde{\rho}_{1}(z^{\prime},\,t^{\prime})=e^{i\mathbf{k}_{p}\cdot\mathbf{x}_{p}}e^{i(\psi^{\prime}-\psi)}\,\tilde{\rho}_{1}(z^{ \prime},t^{\prime}) \tag{44}\] and therefore \[\tilde{J}_{p}(\mathbf{x},\,\mathbf{p},\,t) =-\frac{iG\Sigma_{0}\Omega}{\left(2\pi\right)^{1/2}\kappa z_{0}\sigma_{x}^{4}\sigma_{z}}e^{-H_{x}/\sigma_{x}^{2}}F_{z}(H_{z})\] \[\times\int_{t_{i}}^{t}dt^{\prime}\,\beta^{\prime}e^{i(\psi^{\prime}- \psi)}P(z^{\prime},t^{\prime})\] \[\times\left(\left(\alpha+2At^{\prime}\right)\kappa\,X\sin\theta_{r}+2 \Omega\,X\cos\theta_{r}\right). \tag{45}\] We recover equation 38 of B20 by setting \(\alpha=0\), \(\tilde{\rho}_{1}=\tilde{\Sigma}_{1}\delta(z)\), and \(F_{z}=\sqrt{8\pi}z_{0}\sigma_{z}\delta(z)\delta(p_{z})\) and then integrating over \(z\) and \(w\). A similar calculation leads to the following expression for \(J_{z}\): \[\tilde{J}_{z}(\mathbf{x},\,\mathbf{p},\,t) =-\frac{G\Sigma_{0}\Omega N_{z}}{\sqrt{2\pi}\kappa z_{0}\sigma_{x}^{2} \sigma_{z}^{3}}e^{-H_{x}/\sigma_{x}^{2}}e^{-H_{z}/\sigma_{z}^{2}}\] \[\times\int_{t_{i}}^{t}dt^{\prime}\,w^{\prime}e^{i(\psi^{\prime}- \psi)}Q(z^{\prime},t^{\prime}). \tag{46}\] ### vertical phase space DF We obtain the DF in the \(z-w\) plane by integrating \(f_{1}\) over \(\mathbf{p}_{p}\). Following equation 38 we write \[f_{1z}(z,\,p_{z})=e^{i\mathbf{k}_{p}\cdot\mathbf{x}_{p}}\left( \tilde{\mathcal{J}}_{p}+\tilde{\mathcal{J}}_{z}\right) \tag{47}\] where \(\tilde{\mathcal{J}}_{p,z}\equiv\int d^{2}\mathbf{p}_{p}\,\tilde{J}_{p,z}\). To carry out the integral we use the following change of variables from B20. Since \(dp_{y}=d\Delta_{y}=\left(\kappa^{2}/2\Omega\right)d\bar{x}\), we have \(dp_{x}dp_{y}=\left(\kappa/2\Omega\right)dU_{x}^{\prime}dU_{y}^{\prime}\) where \[U_{x}^{\prime}=p_{x}=-\kappa X\sin\theta_{r} \tag{48}\] and \[U_{y}^{\prime}=\kappa\left(x-\bar{x}\right)=\kappa X\cos\theta_{r}. \tag{49}\] In the \((U_{x}^{\prime},\,U_{y}^{\prime})\) system, \(\theta_{r}\) is a polar angle.
We can therefore use \((U_{x},\,U_{y})\) coordinates that are obtained from \((U_{x}^{\prime},\,U_{y}^{\prime})\) by rotating \(\theta_{r}\) into \(\theta_{0}\). The result is \[\tilde{\mathcal{J}}_{p}(z,w,t) =-\frac{iG\Sigma_{0}F_{z}(H_{z})}{\sqrt{2\pi z_{0}}\sigma_{z}^{4} \sigma_{z}}\int_{t_{i}}^{t}dt^{\prime}\beta^{\prime}P(z^{\prime},t^{\prime})\] \[\times\int d^{2}U\mathbf{c}\cdot\mathbf{U}e^{-U^{2}/2\sigma_{z}^{ 2}+2i\mathbf{b}\cdot\mathbf{U}} \tag{50}\] where the vectors \(\mathbf{b}\) and \(\mathbf{c}\) are defined in B20 and the Appendix. The integral over \(\mathbf{U}\) can be done analytically to yield \[\tilde{\mathcal{J}}_{p}(z,w,t) =\frac{\sqrt{8\pi}G\Sigma_{0}F_{z}(H_{z})}{z_{0}\sigma_{z}} \tag{51}\] \[\int_{t_{i}}^{t}dt^{\prime}\,\beta^{\prime}\mathbf{c}\cdot \mathbf{b}e^{-2\sigma_{z}^{2}b^{2}}P(z^{\prime},t^{\prime}). \tag{52}\] This expression can be written in terms of the Toomre parameter \[Q\equiv\frac{\kappa\sigma_{x}}{3.36G\Sigma_{0}} \tag{53}\] and the critical wavenumber for axisymmetric perturbations \[k_{\rm crit}\equiv\frac{\kappa^{2}}{2\pi G\Sigma_{0}}. \tag{54}\] The result is \[\tilde{\mathcal{J}}_{p}(z,w,t)=\frac{\kappa F_{z}(H_{z})}{\sqrt{8\pi z_{0}} \sigma_{z}}\int_{t_{i}}^{t}dt^{\prime}K_{p}(t,t^{\prime})P(z^{\prime},t^{ \prime}) \tag{55}\] where \[K_{p}(t,t^{\prime})=4\beta^{\prime}\mathbf{c}\cdot\hat{\mathbf{b}}\exp(-0.572Q ^{2}\hat{b}^{2}) \tag{56}\] and \(\hat{\mathbf{b}}=(\kappa/k_{\rm crit})\mathbf{b}\). This equation is an example of a Volterra integral and extends the well-known result from JT66 into the dimension perpendicular to the mid plane of the disc. The function \(K_{p}\), which B20 referred to as the JT kernel, is the same as in the case of a razor thin disc. The difference here is that it is multiplied by the Green's function integral, which is also a function of \(t^{\prime}\). Thus, the effective kernel is \(K_{p}P\). A similar calculation leads to \[\tilde{\mathcal{J}}_{z}=-\frac{N_{z}e^{-H_{z}/\sigma_{z}^{2}}}{\sqrt{8\pi}z_{0} ^{3}\sigma_{z}}\int_{t_{i}}^{t}dt^{\prime}p_{z}^{\prime}K_{z}(t,t^{\prime})Q(z ^{\prime},t^{\prime}) \tag{57}\] where \[K_{z}(t,t^{\prime})=\exp(-0.572Q^{2}\hat{b}^{2}). \tag{58}\] The integral of this term over \(z\) and \(w\) is zero and therefore it doesn't contribute directly to \(\Sigma_{1}\). In short, \(\mathcal{J}_{p}\) describes the redistribution of mass in the plane of the disc while \(\mathcal{J}_{z}\) describes the redistribution of mass in the \(z-w\) plane. ### physical parameters For definiteness we set physical quantities for our calculations as follows. We take the angular frequency of the shearing box to be \(\Omega=V_{c}/R_{0}=(230{\rm km\,s^{-1}})/8\,{\rm kpc}\simeq 28.8\,{\rm km\,s^{-1} \,kpc^{-1}}\) and Oort's first constant to be \(A=\Omega/2\simeq 14.4\,{\rm km\,s^{-1}\,kpc^{-1}}\). The choice of \(A=-B=\Omega/2\) corresponds to a Mestel disc, which has a flat rotation curve. The epicycle frequency is then \(\kappa=\sqrt{2}\Omega\simeq 41\,{\rm km\,s^{-1}\,kpc^{-1}}\). Following B20, we use \(\kappa t/\pi\) as a dimensionless time variable when plotting the time evolution of various quantities. For reference, \(\pi/\kappa\simeq 77\,{\rm Myr}\). We assume a surface density for the equilibrium system of \(\Sigma_{0}=7\times 10^{7}\,M_{\odot}{\rm kpc}^{-2}\), which gives a volume density in the mid plane of \(\rho_{0}\simeq 0.16\left(\sigma_{z}/15\,{\rm km\,s^{-1}}\right)^{-2}M_{\odot}\,{ \rm pc}^{-3}\). 
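For reference, the characteristic scales implied by these choices can be checked with a few lines of Python. The sketch below is an illustrative back-of-the-envelope calculation, not code from the paper; the results agree with the values quoted in this section at the \(\sim\)10 per cent level, depending on the adopted gravitational constant and rounding.

```python
import numpy as np

# Back-of-the-envelope check of the adopted shearing-box parameters
# (illustrative; exact figures depend on the adopted constants and rounding).
G = 4.301e-6                      # kpc (km/s)^2 / Msun
KPC_PER_KMS_MYR = 1.0 / 977.8     # 1 km/s expressed in kpc/Myr

Vc, R0 = 230.0, 8.0               # km/s, kpc
Sigma0 = 7.0e7                    # Msun / kpc^2
sigma_z = 15.0                    # km/s

Omega = Vc / R0                   # angular frequency, km/s/kpc
A = Omega / 2.0                   # Oort's first constant (Mestel disc, A = -B)
kappa = np.sqrt(2.0) * Omega      # epicyclic frequency, km/s/kpc
t_unit = np.pi / kappa / KPC_PER_KMS_MYR   # pi/kappa expressed in Myr

z0 = sigma_z**2 / (np.pi * G * Sigma0)     # characteristic thickness, kpc
rho0 = Sigma0 / (2.0 * z0) / 1.0e9         # mid-plane density, Msun/pc^3

print(f"Omega = {Omega:.1f}, A = {A:.1f}, kappa = {kappa:.1f} km/s/kpc")
print(f"pi/kappa = {t_unit:.0f} Myr, z0 = {1e3 * z0:.0f} pc, rho0 = {rho0:.2f} Msun/pc^3")
```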
With these parameters, the critical wavelength is \(\lambda_{\rm crit}\simeq 8.2\,{\rm kpc}\). These values are roughly consistent with values for the Solar neighbourhood. Of course, a more realistic model would include a multi-component disc along with other contributions to the gravitational potential such as the gas disc and dark halo. ## 3 Impulsive excitations ### breathing wave excitation We first consider the response of the disc to a plane-wave impulsive excitation. We begin by assuming that the excitation is localized in the mid plane and symmetric in \(z\). The density in equations 33 and 36 is then given by \[\tilde{\rho}_{1}(z,t)=\frac{\Sigma_{e}}{\kappa}\delta\left(t-t_{i}\right)\delta( z)+\tilde{\rho}_{s}(z,t) \tag{59}\] where \(\tilde{\rho}_{s}\) is the density perturbation in the disc itself, i.e., the self-gravity term. Since this is a single wave with well-defined \(k_{y}\), we can set \(k_{x}(t=0)=k_{x0}=0\). In the absence of self-gravity, the vertical DF is given by \[\tilde{f}_{1z}(z,w,t)=\Sigma_{e}e^{-k_{y}|z_{i}|}\] \[\times\left(K_{p}(t,t_{i})F_{z}(H_{z})-\frac{w_{i}N_{z}}{\kappa h_ {z}}{\rm sgn}(z_{i})K_{z}(t,t_{i})e^{-H_{z}/\sigma_{z}^{2}}\right). \tag{60}\] Note that \(z_{i}=z(t_{i})\) and \(w_{i}=w(t_{i})\) are implicit functions of \(z\) and \(w\). In Fig. 1 we show the evolution of \(\Sigma_{1}/\Sigma_{e}\) for the case when \(t_{i}=-1.5\pi/\kappa\), \(Q=1.2\), and \(k_{y}=k_{\rm crit}/2\). The timescale between successive peaks corresponds to the period for epicyclic motions, \(2\pi/\kappa\). The figure illustrates the effect the disc's thickness has on the effective kernel since, in the absence of self-gravity, \(\Sigma_{1}/\Sigma_{e}\) is proportional to \(K_{p}P\). In Fig. 2 we plot the reduction in the peak value of \(\Sigma_{1}\) relative to what the peak value would be in a razor-thin disc. Toomre (1964) and JT66 suggested that one could account for finite thickness effects by multiplying \(K\) by \((1-e^{-\gamma})/\gamma\) where \(\gamma\equiv hk_{y}(1+(2At_{i})^{2})^{1/2}\) and \(2h\) is the effective thickness of the disc. This expression can be derived from equations 32 and 33 by taking the disc to be a uniform-density slab of thickness \(2h\). For the fit in Fig. 2 we use \(hk_{y}=0.9z_{0}k_{\rm crit}/2=0.9\,(0.53Q\sigma_{z}/\sigma_{x})^{2}\) where the numerical factor of \(0.9\) is obtained via chi-by-eye and accounts for the fact that the density in our truncated isothermal model is not uniform. In Fig. 3 we show the vertical DF at nine different times for the case where \(\sigma_{z}=15\,{\rm km\,s^{-1}}\). The DF winds up with a pitch angle that increases linearly with time (see below). The time-dependence of the amplitude of the perturbation is consistent with what we showed in Fig. 1. In particular, the amplitude reaches local maxima at \(\kappa t/\pi\simeq-1.2\), \(1.5\), \(3.5\) and local minima at \(\kappa t/\pi\simeq 1,3.1\). As discussed above, the \(\mathcal{J}_{p}\) and \(\mathcal{J}_{z}\) terms in equation 47 involve very different aspects of mass redistribution. The \(\mathcal{J}_{p}\) term describes mass redistribution in the plane and therefore changes the local surface density. Conversely, \(\mathcal{J}_{z}\) describes a redistribution of mass in the \(z-w\) plane, but leaves the surface density unchanged. This difference is illustrated in Fig. 4 where we show separate contributions to the vertical DF from \(\mathcal{J}_{p}\) and \(\mathcal{J}_{z}\) at two different epochs.
For \(\kappa t/\pi=-1\), shortly after the disc has been excited, the contribution from \(\mathcal{J}_{p}\) is proportional to \(F_{z}(H_{z})\Phi_{1}\), which is symmetric in \(z\). This contribution changes the shape of the \(z-w\) DF while preserving the \(z\to-z\) and \(w\to-w\) symmetries. We can therefore view it as a breathing wave along with mass redistribution in the \(x-y\) plane. Over time, the pattern winds up into a two-armed phase spiral. With \(\mathcal{J}_{z}\), the perturbation due to the excitation is initially proportional to \(wF_{z}(H_{z})\partial\Phi_{1}/\partial z\). Thus, the amplitude of the perturbation is maximal along the diagonals of the \(z-w\) plane with a sign that alternates as one circles the origin. This pattern is evident in the lower middle panel of Fig. 3. The perturbation winds up into a two armed spiral with alternating positive and negative bands. These points are further illustrated in the right-most panels where we show the density as a function of \(z\). While both \(\mathcal{J}_{p}\) and \(\mathcal{J}_{z}\) lead to density perturbations that are symmetric in \(z\), the variations with \(z\) for the latter are more prominent due to the geometry of the phase space perturbation. Figure 1: Amplitude of an impulsively-excited plane wave as a function of time in the absence of self-gravity. The amplitude has been normalized by the amplitude of the impluse. Shown are results for discs of various thicknesses as set by the vertical velocity dispersion where \(\sigma_{z}=5-30\,\mathrm{km}\,\mathrm{s}^{-1}\) in steps of \(5\,\mathrm{km}\,\mathrm{s}^{-1}\) for orange, red, purple, cyan, green, and brown. The black curve is the result for a razor thin disk. Figure 4: Separate contribution to the vertical DF from the \(\mathcal{J}_{p}\) (left column) and \(\mathcal{J}_{z}\) (right column). The upper row is for \(\kappa t/\pi=1\) while the lower row is for \(\kappa t/\pi=3.4\). The sum of the two contributions yields the total DF as seen in the corresponding panels of Fig.3. The rightmost column shows the contributions from \(\mathcal{J}_{p}\) and \(\mathcal{J}_{z}\) to the vertical density as dotted and dashed lines, respectively. The solid line shows the number density for the equilibrium model. Figure 3: Vertical (\(z-w\)) phase space DF when self-gravity is ignored at nine different epochs for the case where \(\sigma_{z}=15\mathrm{km}\,\mathrm{s}^{-1}\). The numbers in the upper right corners of each panel give the time in units of \(\pi/\kappa\). The numbers in the upper left corners indicate the maximum value for the phase space density as indicated by the color scale in the sense that one integrates the map over \(z\) and \(w\) to obtain the value of \(\Sigma_{1}/\Sigma_{e}\) in Fig.1. Figure 2: Reduction in amplitude of a kinematic wave as a function of \(\sigma_{z}\). The blue points give the reduction in amplitude relative the result for the razor thin disc. The black curve is the phenomenological formula due to Toomre (1964) and JT66 In Fig.5 we show the surface density as a function of time for the case when self-gravity, and hence swing amplification, are included. For a razor thin disc, the surface density is amplified by a factor of \(\sim 35\). The amplification factor decreases with increasing disc thickness. For example, when \(\sigma=20\,\mathrm{km}\,\mathrm{s}^{-1}\), the peak is only \(13\%\) of what it is for the razor thin case even though the kernel is only reduced by \(20\%\). 
Nevertheless, the perturbation is still amplified by a factor of \(4.7\). In Fig.6 we plot the vertical DF for the same nine epochs as in Fig.3. The effects of self-gravity are striking. In the absence of self-gravity, the contributions from \(\mathcal{J}_{p}\) and \(\mathcal{J}_{z}\) are comparable and so we have a phase spiral combined with a modulation of the local surface density. With self-gravity, there is a rapid amplification of the surface density and compression in the \(z-w\) phase space at \(t\sim 1.5\pi/\kappa\). This in-plane compression of the disc sets off new vertical phase spirals so that by \(t\sim 4\pi/\kappa\) the spirals are less tightly wound but stronger than in the case without self gravity. These points are further illustrated in Fig.7 where we show the DF as a function of \(\Omega_{z}\) and \(\theta_{z}\). Plots of Gaia data in this space were made by Li and Widrow (2021); Frankel et al. (2022); Li and Widrow (2023) and Tremaine et al. (2022). In the absence of self-gravity, the phase spiral is transformed into parallel, diagonal bands with a slope proportional to the reciprocal of the age of the spiral. With self-gravity, the bands are steeper suggesting a younger age. In fact, the slope is related to the time between the swing-amplification peak and the observation time. In addition, the bands appear to bend upward in a manner similar to what is seen in Frankel et al. (2022) and Tremaine et al. (2022). ### bending wave excitation Our previous examples focused on breathing waves and two-armed vertical spirals. The excitation of bending waves and one-armed spirals requires an external density that is antisymmetric in \(z\). As a toy model for this process, we take the external mass distribution to be \[\tilde{\rho_{e}}=\frac{\Sigma_{b}}{\kappa\Delta^{2}}\delta(t-t_{i})ze^{-z^{2}/2 \Delta^{2}}. \tag{61}\] The evolution of the system in the absence of self gravity is shown in Fig. 8. The initial anti-symmetric perturbation winds up just as one expects. On the other hand, when self gravity is included (Fig. 9) the DF is amplified during the early stages of evolution and the rate at which the system phase mixes is significantly slower. Without self gravity the system undergoes \(\sim 3\) phase wrappings by \(t\simeq 5\pi/\kappa\) as compared with just a single phase wrapping in the self-gravitating case. The importance of self gravity for an anti-symmetric disturbance may seem surprising since the total surface density is identically zero. However, one can think of the external density \(\rho_{e}\) as the combination of two parallel discs, one with positive surface density and the other with negative surface density. Since the time scale for swing amplification is comparable to the time scale for mixing between the upper and lower components of the disc, it is able to amplify these components separately before mixing takes hold. Only after one or two wrappings is achieved does phase mixing proceed at the rate expected from pure kinematics. Figure 5: Surface density a function of time for the case where self-gravity is included. Physical conditions and line colors as the same as in Fig. 1. Figure 6: Vertical DF when self gravity is included. The physical conditions and epochs are the same as in Fig. 3. Figure 7: Vertical phase space DF in the angle-frequency (\(\theta_{z}-\nu_{z}\)) plane. The left panel shows the total DF for \(\kappa t/\pi=4.1\) (lower-middle panel of Fig. 3 mapped onto the \(\theta_{z}-\nu_{z}\) plane. 
The black line-segment is the expected slope from kinematic phase mixing. It corresponds to an age of the disturbance of \(t-t_{i}=5.6\pi/\kappa\) and lines up with the ridges. The right panel shows the same plot for the case when self-gravity is included (lower-middle panel of Fig. 6. In this case, the black line segment corresponds to \(t-t_{sw}\simeq 2.2\pi/\kappa\) where \(t_{sw}\) is roughly the time of the swing-amplification peak. ## 4 Excitation of the disc by a cloud Next, we consider the disc's response to a massive cloud on a circular orbit, that is, a cloud at rest in the shearing box. Following JT66 and B20, we decompose the density of the cloud into Fourier modes and compute the response of the disc from each mode using the formalism developed in the previous section. We assume that the perturbing mass is a Gaussian in \(\mathbf{x}\), \[\rho_{\rm e}(\mathbf{x})=\frac{M}{\left(2\pi\right)^{3/2}\Delta^{3}}e^{-| \mathbf{x}|^{2}/2\Delta^{2}}\, \tag{62}\] where \(\Delta\) and \(M\) are the size and mass of the cloud, respectively. The \(\mathbf{x}_{p}\) Fourier transform is then \[\tilde{\rho}_{\rm e}(\mathbf{k}_{p},\ z)=\frac{M}{\sqrt{2\pi}\Delta}e^{- \Delta^{2}|\mathbf{k}_{p}|^{2}/2}e^{-z^{2}/2\Delta^{2}}. \tag{63}\] As discussed in JT66 and B20, each Fourier mode evolves according to the equation developed in the previous section. We can therefore replace \(\tilde{\rho}_{1}(\xi,\ t^{\prime})\) in equations 33 and 36 with \(\tilde{\rho}_{\rm e}+\tilde{\mu}_{s}\left(\mathbf{k}_{p};\zeta,t^{\prime}\right)\). Note that while \(\tilde{\mu}_{s}\) is treated like \(\tilde{\rho}_{s}\) in our plane-wave calculations, here it has dimensions of mass per unit length. The volume density is calculated via an inverse Fourier transform, that is, an integral over \(k_{x}\) and \(k_{y}\). Likewise, an inverse Fourier transform is required to get \(f_{1z}\) as a function of \(x\) and \(y\). In Fig.10 we show the surface density perturbation generated by the cloud for the case when \(\Delta=0.05\lambda_{\rm crit}\) and \(Q=1.2\). The features of the response have been discussed at length in JT66, Fuchs (2001), and B20. The ridge that runs from the lower left to the upper right arises from swing-amplified trailing waves that originated as leading waves and roughly follows the line \(x=y/2At_{sw}\) where \(t_{sw}\simeq\pi/\kappa\) is the time of the primary swing-amplification peak. The structure is a stationary disturbance in the disc though individual stars are continuously passing through it. In Fig.11 we show the vertical DF at the 15 positions across the disc indicated by blue stars in Fig.10. These Figure 8: Vertical phase space DF when self-gravity is ignored for nine different epochs. The figure is similar to Fig.3 except that here, the disc is excited by an anti-symmetric density given by equation 61. The color scale is centered on zero and its stretch is the same in each panel. Integrating the absolute value of the DF in each map over \(z\) and \(w\) yields \(0.15\Sigma_{\rm b}\) where \(\Sigma_{\rm b}\) is the amplitude of the external excitation in equation 61. Figure 10: Surface density from a massive cloud that is on a circular orbit at \(\mathbf{x}=0\). The contours, in units \(M/\)kpc\({}^{2}\), are as follows: solid black – \(0.2,0.4,0.6\); dashed black – \(0.001,0.01\); solid red – \(-0.2\); dashed red – \(-0.01,0.001\). Figure 9: Vertical DF for a bending perturbation when self-gravity is included. 
In this case, the number in the upper left corner of each panel gives the amplification factor relative to the case where self-gravity is ignored (Fig. 8). DFs are calculated on a \(200\times 200\) grid in the \(z-w\) plane and then smoothed using the Scipy routine ndImage.gaussian_filter with sigma= 8. We then divide the DFs by the equilibrium DF to obtain a fractional residual map. The numbers in the upper right corners of each panel indicate the residual as a percentage assuming \(M=10^{9}\,M_{\odot}\). Evidently, the pattern of perturbations in the \(z-w\) plane is strongly dependent on the position within the mid plane. The phase spirals are most prominent between one and two times \(\lambda_{\rm crit}\) from the perturbing mass and close to the wake produced by swing amplification. We might have anticipated this from Fig. 6 where we found that the spirals arose soon after peaks formed in surface density. Elsewhere in the \(x-y\) the perturbation takes the form of a breathing wave. ## 5 Discussion Toomre (1981) described swing amplification as a conspiracy between shear, epicyclic motion or shaking, and self-gravity. The formalism presented in Section 2 allows us to study the actions of an additional conspirator: phase mixing in the dimension perpendicular to the mid plane. Phase mixing regulates swing amplification since the effects of self-gravity are diminished once the system has undergone one or two phase wrappings in the \(z-w\) plane. Our formalism also allows us to explicitly account for the reduction in self-gravity due to the finite thickness of the disc. We are therefore able to demonstrate the validity of the phenomenological formula from Toomre (1964) and JT66. The key takeaways from Section 3 and 4 are that swing amplification can enhance phase spirals by amplifying a disturbance in the \(z-w\) plane before phase mixing takes hold. Moreover, stationary phase spirals can form in the wake of a co-rotating mass. In the usual picture of phase mixing, individual stars follow the ridges and troughs of the vertical DF as it winds up. Here, individual stars pass through the spiral in the same way that stars pass in and out of the spiral arms of a disc galaxy. Taken together, these results call into question the simple picture of the Gaia phase spirals where their shapes are determined by kinematic phase mixing. There are obvious improvements and extensions that we can make to the calculations presented in this paper. First, external components to the background potential can be included to account for a thin gas disc and extended dark halo. Doing so will change the vertical potential and hence rate at which disturbances in the \(z-w\) plane wind up while reducing the effect of self-gravity. (See, figure 2 of Tremaine et al. (2022)). Second, we might consider more realistic scenarios for exciting the disc such as a passing satellite galaxy or dark matter sub halo, as in Section 6 of B20. One expects that the response of the disc to a passing satellite will be intermediate to the response from an impulsive excitation and a stationary cloud. In addition, one might add intermediate-scale masses to stir up the system as a means of testing the diffusion hypothesis in Tremaine et al. (2022). It is worth commenting on the computational complexity of our calculations. 
In the shearing sheet, the computational complexity for the response to a single wave of definite \(k_{y}\) is \(\mathcal{O}(N_{t}^{2})\) where \(N_{t}\), the number of time steps in the Volterra integral, is \(\mathcal{O}(10^{2})\) for the calculations presented in this paper. The computational complexity for the response to a general, time-dependent excitation will then be \(N_{k}^{2}N_{t}^{2}\) where \(N_{k}\) is the number of points for each dimension of the \(\mathbf{x}_{p}-\mathbf{k}_{p}\) Fourier transform. However, for a stationary perturbation, such as the co-moving cloud, one can use the time-invariance on the solution to replace the \(k_{x}\) integral with an integral over \(t\). The computational complexity is therefore reduced to \(N_{k}N_{t}^{2}\) (B20). In our calculations, the complexity is increased by a factor of \(N_{t}^{2}N_{w}\) where \(N_{z}\) and \(N_{w}\) and the number of grid points in the \(z\) and \(w\) directions, respectively. Calculations for the co-rotating cloud are therefore \(N_{z}^{2}N_{w}N_{k}N_{t}^{2}\) and can be done in one to several hours on an 8 processor 3.2 GHz machine. The computation time for a general time-dependent perturbation will be on the order of one to a few days, long but not prohibitive. The computation might be improved by using angle action variables in place of \(z\) and \(w\) as in Banik et al. (2022). Alternatively, one might resort to N-body simulations in the shearing box (Fuchs et al., 2005). Of course, in the end, the shearing box is not a perfect substitute for a rotating disc. The critical wavelength is \(2-3\) times the exponential scale length of the disc. This means that the surface density of a realistic disc varies considerably on scales considered in this paper and the separability assumption for the potential is surely suspect. ## 6 Conclusions In this work, we presented a formalism to study the response of a small local patch of a stellar disc to an external perturbation within the framework of the shearing box approximation. It extended the shearing sheet formalism of JT66 in the dimension perpendicular to the disc and allowed us to examine what happens within the disc as it responds to an external perturbation. In general any disturbance in the vertical structure of a disc leads to \(z-w\) spirals as the disturbance undergoes phase mixing. The main result of this paper is that self-gravity can amplify disturbances in the disc before phase mixing takes hold. This amplification is strongest for leading waves as they swing into trailing ones. Perhaps unexpectedly, the process also works with bending waves. Finally, it is possible to set up stationary phase spirals in the wake of a co-rotating mass. A complete understanding of the Gaia phase spirals is still lacking. Investigations that range from test particles in one dimension to fully self-consistent N-body simulations have failed to reproduce the structures found in the data in all their complexity. The shearing box calculations presented in this work provide an intermediate approach since they include self-gravity and an approximate form of epicyclic motion and differential rotation or shear. Thus, despite their obvious limitations, they allow one to explore the effects of self-gravity and the interplay between in-plane and vertical dynamics. ## Acknowledgements We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada. 
## Appendix A Kernel Definitions With the change of variables described in Section 2.6 we have \(H_{x}=U^{2}/2\). Furthermore, we can use standard trigonometric identities to write \[\psi(t^{\prime})-\psi(t)=2\mathbf{b}\cdot\mathbf{U} \tag{A1}\] where \[b_{x}(t,t^{\prime})\equiv\frac{k_{y}}{\kappa}\left(A(t^{\prime}S^{\prime}-tS)+ \frac{\alpha}{2}\left(S^{\prime}-S\right)+(\Omega/\kappa)(C^{\prime}-C)\right) \tag{A2}\] and \[b_{y}(t,t^{\prime})\equiv\frac{k_{y}}{\kappa}\left(A(t^{\prime}C^{\prime}-tC)+ \frac{\alpha}{2}\left(C^{\prime}-C\right)-(\Omega/\kappa)(S^{\prime}-S)\right). \tag{A3}\] Likewise, we have \[\left(\alpha+2At^{\prime}\right)\kappa X\sin\theta_{r}+2\Omega X\cos\theta_{r }=2\mathbf{c}\cdot\mathbf{U} \tag{A4}\] where \[c^{\prime}_{x}\equiv-\left(\frac{\alpha}{2}+At^{\prime}\right)C^{\prime}+ \frac{\Omega}{\kappa}S^{\prime} \tag{A5}\] and \[c^{\prime}_{y}\equiv\left(\frac{\alpha}{2}+At^{\prime}\right)S^{\prime}+ \frac{\Omega}{\kappa}C^{\prime}. \tag{A6}\] ## Data Availability The data underlying this article were generated by numerical calculations using original Python code written by the author. The code incorporated routines from NumPy (Harris et al., 2020) and SciPy (Virtanen et al., 2020). The data for the figures and the code will be shared on reasonable request to the author.
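The calculations described above were performed with original Python code (see Data Availability). As a purely schematic illustration of their time-marching structure, and of the \(\mathcal{O}(N_{t}^{2})\) cost per wavenumber noted in Section 5, a generic Volterra solver looks like the following sketch. The kernel and source here are placeholders for illustration only; they are not the \(K_{p}\), \(K_{z}\), or Green's-function integrals of Section 2.

```python
import numpy as np

# Schematic solver for a Volterra equation of the second kind,
#   S(t) = S_ext(t) + int_{t_i}^{t} K(t, t') S(t') dt',
# discretized with a simple left-endpoint rule and solved by marching forward
# in time.  A production code would use a higher-order quadrature, but the
# O(N_t^2) cost per wavenumber is the same.
def solve_volterra(kernel, s_ext, t):
    dt = t[1] - t[0]
    s = np.zeros_like(s_ext)
    for i in range(len(t)):
        past = sum(kernel(t[i], t[j]) * s[j] for j in range(i)) * dt
        s[i] = s_ext[i] + past
    return s

# toy example: an impulsive excitation and a placeholder kernel peaked at a
# fixed time lag (NOT the swing-amplification kernel of Section 2.6)
t = np.linspace(0.0, 10.0, 400)
s_ext = np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2)
kernel = lambda ti, tj: 0.6 * np.exp(-0.5 * (ti - tj - 1.0) ** 2)
response = solve_volterra(kernel, s_ext, t)
print(f"peak response / peak excitation = {response.max() / s_ext.max():.1f}")
```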
2309.16041
Automorphisms of the fine 1-curve graph
The fine 1-curve graph of a surface is a graph whose vertices are simple closed curves on the surface and whose edges connect vertices that intersect in at most one point. We show that the automorphism group of the fine 1-curve graph is naturally isomorphic to the homeomorphism group of a closed, orientable surface with genus at least one.
Katherine Williams Booth, Daniel Minahan, Roberta Shapiro
2023-09-27T21:45:11Z
http://arxiv.org/abs/2309.16041v1
# Automorphisms of the fine 1-curve graph ###### Abstract The fine 1-curve graph of a surface is a graph whose vertices are simple closed curves on the surface and whose edges connect vertices that intersect in at most one point. We show that the automorphism group of the fine 1-curve graph is naturally isomorphic to the homeomorphism group of a closed, orientable surface with genus at least one. ## 1 Introduction Let \(S=S_{g}\) be an oriented, connected, closed surface with genus \(g\). The _fine 1-curve graph of \(S\)_, denoted \(\mathcal{C}_{1}^{\dagger}(S)\), is a graph whose vertices are simple, closed, essential curves in \(S\). There is an edge between two vertices \(u\) and \(v\) if \(|u\cap v|\leq 1.\) Since homeomorphisms preserve intersections, there is a natural homomorphism \(\operatorname{Homeo}(S)\to\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S)\) induced by the standard action of \(\operatorname{Homeo}(S)\) on \(S\). Our main result is the following. **Theorem 1.1**.: _Let \(S_{g}\) be a closed, orientable, connected surface with \(g\geq 1\). The map_ \[\Phi:\operatorname{Homeo}(S_{g})\to\operatorname{Aut}\mathcal{C}_{1}^{ \dagger}(S_{g})\] _is an isomorphism._ A version of the fine 1-curve graph, denoted \(\mathcal{C}_{\pitchfork}^{\dagger}(S)\), was introduced by Le Roux-Wolff [10]. In their paper, they work with connected, nonspherical and possibly nonorientable or noncompact surfaces. The vertices of their graph correspond to nonseparating curves while edges connect pairs of curves that are either disjoint or intersect once essentially (termed torus pairs below). Le Roux-Wolff show that \(\operatorname{Aut}\mathcal{C}_{\pitchfork}^{\dagger}(S)\) is isomorphic to \(\operatorname{Homeo}(S)\). **The fine curve graph.** The _fine curve graph_ of \(S\), denoted \(\mathcal{C}^{\dagger}(S)\), was introduced by Bowden-Hensel-Webb to study \(\operatorname{Diff}_{0}(S)\)[1]. The vertices of \(\mathcal{C}^{\dagger}(S)\) are essential, non-peripheral, simple, closed curves in \(S\). Two vertices are connected by an edge if their corresponding curves are disjoint. In analogy with our theorem, Long-Margalit-Pham-Verberne-Yao prove that \(\operatorname{Aut}\mathcal{C}^{\dagger}(S)\) is isomorphic to \(\operatorname{Homeo}(S)\)[12]. **Graphs of curves and Ivanov's metaconjecture.** A classically studied object is the _curve graph_ of \(S\), denoted \(\mathcal{C}(S)\). The vertices of \(\mathcal{C}(S)\) are isotopy classes of essential, non-peripheral (if \(S\) has boundary), simple closed curves in \(S\). Two vertices are connected by an edge if they admit disjoint representatives. The _extended mapping class group_ of \(S\), denoted \(\operatorname{MCG}^{\pm}(S)\), is the group of connected components of \(\pi_{0}(\operatorname{Homeo}(S))\). Ivanov showed that, for surfaces of genus at least three, \(\operatorname{Aut}\mathcal{C}(S)\) is isomorphic to \(\operatorname{MCG}^{\pm}(S)\)[13]. Following this, Ivanov made the following metaconjecture [11, pg 84]. _Ivanov's metaconjecture_. Every object naturally associated to a surface S and having a sufficiently rich structure has \(\operatorname{MCG}^{\pm}(S)\) as its groups of automorphisms. Moreover, this can be proved by a reduction to the theorem about the automorphisms of \(\mathcal{C}(S)\). Brendle and Margalit showed that Ivanov's metaconjecture holds for a large number of graphs where edges correspond to disjointness [1, Theorem 1.7]. 
**The \(k\)-curve graph.** Ivanov's metaconjecture may also hold for graphs of curves where edges do not correspond to disjointness. For example, consider the _\(k\)-curve graph_, \(\mathcal{C}_{k}(S_{g})\), which has the same vertices as the curve graph. Edges connect vertices whose isotopy classes admit representatives that intersect at most \(k\) times. Agrawal-Aougab-Chandran-Loving-Oakley-Shapiro-Xiao [1] showed that when \(g\) is sufficiently large with respect to \(k\), \(\operatorname{Aut}\mathcal{C}_{k}(S_{g})\) is isomorphic to \(\operatorname{MCG}^{\pm}(S_{g})\) for any \(k\geq 1\). Similarly to how the fine curve graph is an analogue of the curve graph, the fine \(1\)-curve graph is an analogue of the \(k\)-curve graph for \(k=1\). **Sketch of the proof of Theorem 1.1.** When \(g\geq 2\), we prove Theorem 1.1 by reducing to the theorem of Long-Margalit-Pham-Verberne-Yao [1]. In particular, we show that every automorphism of \(\mathcal{C}_{1}^{\dagger}(S)\) preserves the set of edges connecting disjoint curves. For \(g=1\), we reduce to the theorem of Le Roux-Wolff [11] and show that the set of edges that are in \(\mathcal{C}_{1}^{\dagger}(S)\) but not in \(\mathcal{C}_{\pitchfork}^{\dagger}(S)\) is preserved by automorphisms. Edges in this set correspond to pairs of curves in a specific configuration; such a pair of curves will be called a pants pair and is defined in the following paragraph. _Torus pairs versus pants pairs_. There are two types of configurations of pairs of curves that intersect once. If a pair of curves crosses at their point of intersection, we call it a _torus pair_, as on the left side of Figure 1. We note that both curves that comprise a torus pair must be nonseparating. Otherwise, if neither curve crosses the other at their point of intersection, we call it a _pants pair_, as on the right hand side of Figure 1. Figure 1: Examples of torus pairs (left) and pants pairs (right) These definitions are reminiscent of those in Long-Margalit-Pham-Verberne-Yao [1], with two key differences: we require all intersections to be single points (called _degenerate_ in [1]) and the curves in a pants pair are allowed to be homotopic. **Paper outline.** In Section 2, we prove several preliminary results about separating curves. In Section 3, we show that torus pairs are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S)\). In Section 4, we show that pants pairs are preserved by automorphisms when \(g\geq 2\). In Section 5, we prove Theorem 1.1 in the case that \(g=1\). We conclude by proving Theorem 1.1 in Section 6. **Acknowledgments.** The authors would like to thank their advisor Dan Margalit for many helpful conversations. The authors would also like to thank Jaden Ai, Ryan Dickmann, Jacob Guynee, Sierra Knavel, and Abdoul Karim Sane for useful discussions. The authors thank Frédéric Le Roux and Maxime Wolff for sharing their manuscript and further correspondences. The authors further thank Nick Salter for comments on a draft of the manuscript. The first author was supported by the National Science Foundation under Grant No. DMS-1745583. The third author was partially supported by the National Science Foundation under Grant No. DMS-2203431. ## 2 Separating curves and their homotopy classes In this section, we give several preliminary results about separating curves. We prove in Lemma 2.1 that separating curves are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\).
In Lemma 2.2, we prove that pairs of homotopic separating curves that are adjacent in \(\mathcal{C}_{1}^{\dagger}(S_{g})\) are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). We also define a new quotient graph that will be used in future sections. The relation used to define the quotient graph is proven to be equivalent to homotopy of curves in Lemma 2.3. Moreover, we show that the structure of this quotient graph is preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) in Lemma 2.5. **Preliminary graph theoretic definitions.** Two vertices connected by an edge in a graph are called _adjacent_. The _link_ of a vertex \(v\) in a graph \(G\), denoted \(\operatorname{link}(v)\), is the subgraph induced by the vertices adjacent to \(v\) in \(G\). A graph \(G\) is a _join_ if there is a partition of the vertices of \(G\) into at least two nonempty subsets, called _parts_, such that every vertex in one part is adjacent to all vertices in all of the other parts. A graph \(G\) is an _\(n\)-join_ if it is a join that can be partitioned into \(n\) sets but cannot be partitioned into \(n+1\) sets. A graph \(G\) is a _cone_ if there is a vertex \(v\), called a _cone point_, that is adjacent to all other vertices in \(G\). We say that a separating curve \(u\)_separates_ the curves \(a\) and \(b\) if \(a\) and \(b\) lie in the closures of distinct connected components of \(S\setminus u\). If \(a\) and \(b\) are contained in the closure of the same connected component of \(S\setminus u\), then they are on the same side of \(u\); otherwise, \(a\) and \(b\) are on different sides of \(u\). We begin by showing that the sets of separating and nonseparating curves are each preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). **Lemma 2.1**.: _Let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \(u\) is a separating curve if and only if \(\varphi(u)\) is a separating curve. Moreover, \(\varphi\) preserves the sides of \(u\)._ Proof.: We will show that a curve \(u\) is separating if and only if \(\operatorname{link}(u)\) is a join. In fact, we will show that it is a \(2\)-join. Suppose \(u\) is separating. Then no curve in \(\operatorname{link}(u)\) can form a torus pair with \(u\); every curve in \(\operatorname{link}(u)\) must be either disjoint from \(u\) or form a pants pair with \(u\). Therefore, every curve in \(\operatorname{link}(u)\) will lie in the closure of exactly one of the components of \(S_{g}\setminus u\). Let \(A\) and \(B\) be the closures of the two components of \(S_{g}\setminus u\). We claim that \(\operatorname{link}(u)\) is a join whose two parts are given by the curves contained in \(A\) and the curves contained in \(B\). Suppose \(a\) and \(b\) are curves in \(\operatorname{link}(u)\) such that \(a\subset A\) and \(b\subset B\). Since \(|a\cap u|\leq 1\) and \(|b\cap u|\leq 1\), and \((a\cap b)\subset u\), we have that \(|a\cap b|\leq 1\). We conclude that \(a\) and \(b\) are adjacent in \(\mathcal{C}_{1}^{\dagger}(S_{g})\), thus concluding the proof of the claim. Suppose \(u\) is not separating. Then, \(S_{g}\setminus u\) is a single connected component. Let \(a,b\in\operatorname{link}(u)\). Then, we can move \(a\) off of itself and isotope it to intersect \(a\) and \(b\) at least twice each and \(u\) at most once; call this new curve \(a^{\prime}\). It follows that \(a^{\prime}\) is adjacent to neither \(a\) nor \(b\), so \(a\) and \(b\) cannot be in different parts of a join. We conclude that \(\operatorname{link}(u)\) cannot be a join.
If two curves \(a,b\in\operatorname{link}(u)\) lie in the closure of the same component of \(S_{g}\setminus u\), they must lie in the same part of the join \(\operatorname{link}(u)\). It follows that \(\operatorname{link}(u)\) is a 2-join. Thus, \(\varphi(a)\) and \(\varphi(b)\) are in the same part of the join \(\operatorname{link}(\varphi(u))\) and therefore lie in the closure of the same component of \(S\setminus\varphi(u)\). **Link of an edge and the separating link.** Let \(u\) and \(v\) be adjacent vertices in \(\mathcal{C}_{1}^{\dagger}(S)\). The _link of an edge_ spanned by \(u,v\) is \(\operatorname{link}(u,v)=\operatorname{link}(u)\cap\operatorname{link}(v).\) The _separating link of \((u,v)\)_, denoted \(\operatorname{link}^{\operatorname{sep}}(u,v)\), is the subgraph of \(\operatorname{link}(u,v)\) induced by separating curves. We use these definitions to show that homotopic pairs of separating curves adjacent in \(\mathcal{C}_{1}^{\dagger}(S_{g})\) are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). **Lemma 2.2**.: _Let \(u\) and \(v\) be adjacent separating curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \(u\) and \(v\) are homotopic if and only if \(\operatorname{link}(u,v)\) is a 3-join such that one of the parts contains only separating curves. Hence, for any \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S)\), a separating curve \(u\in\mathcal{C}_{1}^{\dagger}(S)\) is homotopic to an adjacent curve \(v\in\mathcal{C}_{1}^{\dagger}(S)\) if and only if \(\varphi(u)\) is homotopic to \(\varphi(v)\)._ Proof.: Because \(u\) and \(v\) are both separating and are either disjoint or form a pants pair, \(S\setminus(u\cup v)\) must have three connected components. By a similar argument to Lemma 2.1, \(\operatorname{link}(u,v)\) is a 3-join induced by said components. Moreover, the link of a pair of adjacent nonseparating curves or a nonseparating and separating adjacent pair of curves is not a 3-join. This follows directly using the same argument that the link of a nonseparating curve is not a join and from the fact that such pairs will not split a surface into three connected components. If \(u\) and \(v\) are homotopic as in Figure 2, then one of the components of \(S\setminus(u\cup v)\) is a (possibly pinched) annulus. All essential curves that lie in the (possibly pinched) annulus are separating, so it follows that one of the parts is comprised of separating curves. If \(u\) and \(v\) are not homotopic as in Figure 3, then \(S\setminus(u\cup v)\) consists of three components, each of which has genus. Therefore, each component supports a nonseparating curve. It follows that all parts of the join contain a nonseparating curve. With this in mind, we prove the following result about homotopic curves in the link of an edge. The original idea behind the proof is in Bowden-Hensel-Webb [1] and is expanded on in Long-Margalit-Pham-Verberne-Yao [13]. **Lemma 2.3**.: _Let \(u,v\) be two nonseparating adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S)\). Let \(a\) and \(b\) be two separating curves in \(\operatorname{link}(u,v)\). Then \(a\) and \(b\) are homotopic if and only if there is a path from \(a\) to \(b\) in \(\operatorname{link}(u,v)\) consisting of curves homotopic to \(a\) and \(b\)._ Figure 2: Adjacent, homotopic separating curves We need one auxiliary result before proving Lemma 2.3. **Lemma 2.4**.: _Let \(u,v\) be two nonseparating adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S)\). 
Let \(a\) and \(b\) be two separating, homotopic curves in \(\operatorname{link}(u,v)\). Then there is a homotopy \(\psi:S^{1}\times I\to S_{g}\) from \(a\) to \(b\) such that, for every \(t\in I\), there is a closed neighborhood \(T\subseteq I\) with \(t\in T\) where \(\psi(S^{1}\times T)\) is contained in a possibly pinched annulus \(A_{T}\) such that \(u\) intersects \(A_{T}\) at most once, and \(v\) intersects \(A_{T}\) at most once._ Proof.: There are three cases to consider. _Case 1: \(u\) and \(v\) are on the same side of \(a\)._ Since \(a\) and \(b\) are isotopic, \(u\) and \(v\) must be on the same side of \(b\) as well. Choose an annulus \(A\) with \(a\subseteq\partial A\) such that \(A\) is supported on the side of \(a\) that does not contain \(u\) and \(v\). Then \(a\) is homotopic to the other boundary component \(a^{\prime}\) of \(A\), and each curve in this homotopy is disjoint from \(u\) and \(v\) (besides \(a\) itself). Similarly, construct \(b^{\prime}\) adjacent to \(b\). Then \(a^{\prime}\) and \(b^{\prime}\) are disjoint from \(u\) and \(v\), so there is a homotopy between \(a^{\prime}\) and \(b^{\prime}\) that consists only of vertices of \(\operatorname{link}(u,v)\) disjoint from \(u\) and \(v\). Let \(\psi:S^{1}\times I\) be the resulting homotopy from \(a\) to \(b\). Then for any \(t\in I\), there is an annulus \(A_{t}\) containing \(\psi(S^{1}\times\{t\})\) that contains \(\psi(S^{1}\times T)\) for some closed neighborhood of \(t\), and that only intersects \(u\) and \(v\) each in at most one point, namely either the points of intersection of \(a\) with \(u\) and \(v\), or the points of intersection of \(b\) with \(u\) and \(v\), so the lemma holds. _Case 2: \(u\) and \(v\) are on opposite sides of \(a\), and \(u\) and \(v\) are disjoint._ As in Case 1, it suffices to show that \(a\) is homotopic to \(a^{\prime}\in\operatorname{link}(u,v)\) disjoint from \(u\) and \(v\) via vertices of \(\operatorname{link}(u,v)\). Since \(u\) and \(v\) are disjoint, we can choose closed annuli \(A_{u}\) and \(A_{v}\) such that \(u\subseteq A_{u}\), \(v\subseteq A_{v}\), and \(A_{u}\cap a\), \(A_{v}\cap a\) are each either connected or empty. If \(A_{u}\cap a\) is nonempty, let \(d_{u}\) be the unique subinterval of \(\partial A_{u}\) that connects the two points of intersection \(a\cap A_{u}\) to each other, such that the path \(A_{u}\cap a\) is homotopic rel \(\partial A_{u}\cap a\) to \(d_{u}\). Let \(a^{\prime\prime}\) be the curve given by removing \(a\cap A_{u}\) from \(a\) and replacing it with \(d_{u}\). Construct \(d_{v}\) similarly, and let \(a^{\prime}\) be the resulting curve given by removing \(a^{\prime\prime}\cap A=a\cap A\) and replacing it with \(d_{v}\). Then \(a\) is homotopic to \(a^{\prime}\) and \(a^{\prime}\) is disjoint from \(u\) and \(v\). Repeat this same process for \(b\) to get \(b^{\prime}\), and then \(a^{\prime}\) and \(b^{\prime}\) are homotopic through vertices disjoint from \(u\) and \(v\). Similarly to Case I, this completes the proof of the Lemma in this case. _Case 3: \(u\) and \(v\) are on opposite sides of \(a\), and \(u\) and \(v\) intersect._ Let \(\delta\) be the unique point of intersection in \(u\cap v\). Since \(u\) and \(v\) are on opposite sides of \(a\), we see that \(\delta\in a\). Now, since \(b\) is homotopic to \(a\), it must be the case that \(u\) and \(v\) are on opposite sides of \(b\) as well. Therefore, \(\delta\in b\) as well. 
Now, we can think of \(\delta\) as a marked point and identify \(a\), \(b\), \(u\), and \(v\) with arcs \(\overline{a},\ \overline{b},\ \overline{u}\), and \(\overline{v}\), respectively, based at \(\delta\). We see that \(\overline{a}\) and \(\overline{b}\) are disjoint from \(\overline{u}\) and \(\overline{v}\), since as curves \(a\) and \(b\) only intersect \(u\) and \(v\) each at most once. Hence the arcs \(\overline{a}\) and \(\overline{b}\) are homotopic to each other in the marked surface \((S_{g},\delta)\) via a path of arcs disjoint from \(\overline{u}\) and \(\overline{v}\). But then this homotopy of arcs is canonically identified with a homotopy of curves in \(\operatorname{link}(u,v)\) that all intersect \(u\) and \(v\) only at \(\delta\). Let \(\psi\) be the resulting homotopy. For any \(t\in I\), there is a pinched annulus \(A\) containing \(\psi(S^{1}\times T)\) for some closed neighborhood \(T\) of \(t\), such that \(A\) intersects \(u\) and \(v\) only at \(\delta\), so the lemma holds. Figure 3: The link of adjacent, non-homotopic separating curves is a 3-join where each part contains a nonseparating curve. Proof of Lemma 2.3.: The backwards direction follows by definition, so we only need to prove the forwards direction. Let \(\psi:S^{1}\times I\to S_{g}\) be a homotopy as in Lemma 2.4. For each point \(t_{i}\in I\), choose an interval \(T_{i}=[s_{i},s^{\prime}_{i}]\subseteq I\) with \(s_{i}\neq s^{\prime}_{i}\) that contains \(t_{i}\) such that the set \(\psi(S^{1}\times T_{i})\) is contained in a (possibly pinched) annulus \(A_{i}\), such that \(A_{i}\) intersects \(u\) and \(v\) in at most one point each. Since \(I\) is compact, there is a finite collection of such intervals \(T_{1},\dots,T_{n}\) whose interiors cover \(I\). By restricting each \(T_{i}\), we may assume that \(s^{\prime}_{i}=s_{i+1}\) for all \(1\leq i<n\). Observe that if two separating curves \(x,y\subseteq S_{g}\) are contained in the interior of a (possibly pinched) annulus \(A\), then a curve \(z\subset\partial A\) is both adjacent and homotopic to \(x\) and \(y\), forming a path of length two between \(x\) and \(y\). For each \(T_{i}\), there are two curves \(x_{i}=\psi(S^{1}\times\{s_{i}\})\) and \(y_{i}=\psi(S^{1}\times\{s^{\prime}_{i}\})\) given by the restriction of \(\psi\) to each endpoint of \(T_{i}\) such that \(x_{i}\) and \(y_{i}\) each intersect \(u\) and \(v\) in at most one point, since we have assumed the same for \(A_{i}\). Hence each \(x_{i}\) and \(y_{i}\) are vertices in \(\operatorname{link}(u,v)\). Then each \(x_{i}\) is connected by a path of length \(2\) in \(\operatorname{link}(u,v)\) to \(y_{i}\). We have chosen the \(T_{i}\) in such a way that \(y_{i}=x_{i+1}\) for \(1\leq i<n\). Then \(x_{1}=a\) and \(y_{n}=b\) by construction, so there is a path from \(x_{1}\) to \(y_{n}\) of length \(2n\) in \(\operatorname{link}(u,v)\), such that each vertex of this path is homotopic to \(a\), so the lemma holds. **Separating link quotient.** Let \(u\) and \(v\) be adjacent nonseparating curves in \(\mathcal{C}^{\dagger}_{1}(S_{g}).\) Define the _separating link quotient of \((u,v)\)_, denoted \(\mathcal{Q}^{\operatorname{sep}}(u,v)\), to be the separating link of \(u\) and \(v\) quotiented out by homotopy on the vertices. In other words, the vertices of \(\mathcal{Q}^{\operatorname{sep}}(u,v)\) are homotopy classes of separating curves in \(\operatorname{link}(u,v)\) and an edge connects two vertices if they admit disjoint representatives. 
Hereafter, if \(a\) is a separating curve in \(\mathcal{C}^{\dagger}_{1}(S_{g})\), we will denote two things by \([a]\): 1) the set of curves in \(\mathcal{C}^{\dagger}_{1}(S_{g})\) homotopic to \(a\) and 2) the vertex that \(a\) represents in \(\mathcal{Q}^{\operatorname{sep}}(u,v)\). In the following lemma, we prove that the structure of the separating link quotient is preserved under automorphisms of \(\mathcal{C}^{\dagger}_{1}(S_{g})\). **Lemma 2.5**.: _Let \(u\) and \(v\) be nonseparating curves adjacent in \(\mathcal{C}^{\dagger}_{1}(S_{g})\) and let \(\varphi\in\operatorname{Aut}\mathcal{C}^{\dagger}_{1}(S_{g})\). Then \(\mathcal{Q}^{\operatorname{sep}}(u,v)\cong\mathcal{Q}^{\operatorname{sep}}( \varphi(u),\varphi(v))\)._ Proof.: By Lemma 2.1, \(\varphi\) induces an isomorphism between \(\operatorname{link}^{\operatorname{sep}}(u,v)\) and \(\operatorname{link}^{\operatorname{sep}}(\varphi(u),\varphi(v))\). Then by Lemma 2.2, \(\varphi\) preserves disjoint homotopic separating curves. Hence if \(d,d^{\prime}\in\operatorname{link}^{\operatorname{sep}}(u,v)\) map to the same point in \(\mathcal{Q}^{\operatorname{sep}}(u,v)\), then \(\varphi(d)\) and \(\varphi(d^{\prime})\) must map to the same point in \(\mathcal{Q}^{\operatorname{sep}}(\varphi(u),\varphi(v))\). Therefore the isomorphism \(\operatorname{link}^{\operatorname{sep}}(u,v)\cong\operatorname{link}^{ \operatorname{sep}}(\varphi(u),\varphi(v))\) descends to an isomorphism \(\mathcal{Q}^{\operatorname{sep}}(u,v)\cong\mathcal{Q}^{\operatorname{sep}}( \varphi(u),\varphi(v))\). With these preliminary results in mind, we are ready to proceed with the main body of the proof of Theorem 1.1. ## 3 Torus pairs In this section, we prove that automorphisms of our graph preserve torus pairs. **Proposition 3.1**.: _Let \(S_{g}\) be a surface of genus \(g\geq 2\). Let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) and \(u,v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \((\varphi(u),\varphi(v))\) is a torus pair if and only if \((u,v)\) is a torus pair._ We will now characterize torus pairs using the graph \(Q^{\operatorname{sep}}(u,v)\) in Lemma 3.2. **Lemma 3.2**.: _Let \(g\geq 2\). Let \(u\) and \(v\) be nonseparating, adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \((u,v)\) is a torus pair if and only if the separating link quotient \(\mathcal{Q}^{\operatorname{sep}}(u,v)\) is a cone._ Proof.: We first assume that \((u,v)\) is a torus pair and show that \(Q^{\operatorname{sep}}(u,v)\) is a cone. We then assume that \((u,v)\) is not a torus pair, and show that \(Q^{\operatorname{sep}}(u,v)\) is not a cone. _Suppose \((u,v)\) is a torus pair_. Consider the homotopy class \(\mathcal{H}\) of curves in \(\operatorname{link}(u,v)\) homotopic to the boundary of the torus that \(u\) and \(v\) fill. By Lemma 2.3, \(\mathcal{H}\) descends to a single point in \(Q^{\operatorname{sep}}(u,v)\). Let \(a\in\operatorname{link}(u,v)\setminus\mathcal{H}\) be a separating curve. Then, \(a\) is not contained in the torus filled by \(u\cup v\), and therefore is isotopic to a curve \(a^{\prime}\) disjoint from \(u\cup v.\) Let \(b\) be the boundary of a regular neighborhood of \(u\cup v\) disjoint from \(a^{\prime}\). Thus \(b\in\mathcal{H}\) and \(b\) is disjoint from--and therefore adjacent to--\(a^{\prime}.\) We therefore have that \(\mathcal{H}\) is a cone point of \(Q^{\operatorname{sep}}(u,v)\). _Suppose \((u,v)\) is not a torus pair_. 
We must show that there is no cone point of \(\mathcal{Q}^{\operatorname{sep}}(u,v).\) We will do this by ascertaining that for any homotopy class of separating curves in \(\operatorname{link}(u,v)\), there is another homotopy class of curves that minimally intersects the first at least twice. Let \(a\in\operatorname{link}(u,v)\) be a separating curve. Because \(a\) is separating, each connected component of \(S_{g}\setminus a\) must have positive genus. Therefore, there is a nonseparating curve \(c\) that: 1) intersects \(a\) and cannot be homotoped to be disjoint from \(a\), and 2) is disjoint from \(u\cup v.\) Consider a representative \(b\) of \(T_{[c]}[a]\), the Dehn twist of \([a]\) about \([c].\) The intersection number of \([a]\) and \([b]\) must be at least \(2\), so \(|a\cap b|\geq 2\). Furthermore, since \(c\) and \(a\) are disjoint from \(u\) and \(v\), we can choose \(b\) to be disjoint from \(u\) and \(v\) as well. Hence, the vertices in \(\mathcal{Q}^{\operatorname{sep}}(u,v)\) corresponding to \(a\) and \(b\) are not adjacent, and thus the vertex corresponding to \(a\) is not a cone point. We conclude that \(\mathcal{Q}^{\operatorname{sep}}(u,v)\) has no cone points. Proof of Proposition 3.1.: Let \(u,v\) be as in the statement of the proposition and let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\). Since \(\varphi\) is invertible, it suffices to show that \((u,v)\) a torus pair implies \((\varphi(u),\varphi(v))\) a torus pair. By Lemma 3.2, \(Q^{\operatorname{sep}}(u,v)\) is a cone. Then by Lemma 2.5, \(Q^{\operatorname{sep}}(\varphi(u),\varphi(v))\) is a cone. Therefore \((\varphi(u),\varphi(v))\) is a torus pair by Lemma 3.2. Figure 4: A torus pair with the separating curve \(\delta\) which represents the cone point in \(Q^{\operatorname{sep}}(u,v)\) ## 4 Pants pairs The goal of this section is to prove the following proposition: **Proposition 4.1**.: _Let \(S_{g}\) be a surface of genus \(g\geq 2\). Let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) and \(u,v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \((\varphi(u),\varphi(v))\) is a pants pair if and only if \((u,v)\) is a pants pair._ The main observation behind the proof of Proposition 4.1 is that two curves \(u\) and \(v\) are disjoint if and only if \(u\) has a neighborhood disjoint from \(v\). To use this observation, we will break down the proof of Proposition 4.1 into several steps, as follows. Step 0. Reduce to the case that \(u\) and \(v\) are both nonseparating (Lemma 4.2). Step 1. Distinguish the boundary curves of a neighborhood of a nonseparating curve by showing that automorphisms preserve: 1.1. pairs of adjacent homotopic nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\) (Lemma 4.3), 1.2. whether a nonseparating curve is contained (in some sense) in the annulus bounded by adjacent homotopic curves \(a\) and \(b\) (Lemma 4.6), and 1.3. whether two homotopic nonseparating curves adjacent in \(\mathcal{C}_{1}^{\dagger}(S_{g})\) form a pants pair or are disjoint (Lemma 4.7). Step 2. Show that automorphisms preserve whether non-homotopic nonseparating curves are disjoint or a pants pair (Lemma 4.9). Combining Steps 0, 1.3, and 2 proves Proposition 4.1 for all possible arrangements of curves. We begin by proving Step 0 in the following lemma. We use the notation \(A_{1}*A_{2}*\cdots*A_{k}\) to denote the decomposition of a \(k\)-join into its parts. **Lemma 4.2**.: _Let \(u\) and \(v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S)\). 
Suppose that \(u\) is separating._ 1. _If_ \(v\) _is nonseparating:_ _Let_ \(A*B\) _be the decomposition of_ \(\operatorname{link}(u)\) _into a join and let_ \(v\in A.\) _Then_ \((u,v)\) _is a pants pair if and only if there exists a nonseparating curve_ \(w\in B\) _such that_ \((v,w)\) _is a pants pair._ 2. _If_ \(v\) _is separating and homotopic to_ \(u\)_:_ _Let_ \(A*B*C\) _be a decomposition of_ \(\operatorname{link}(u,v)\) _into a 3-join such that_ \(B\) _contains only separating curves. Then,_ \((u,v)\) _is a pants pair if and only if there exist nonseparating curves_ \(a\in A\) _and_ \(c\in C\) _such that_ \((a,c)\) _is a pants pair._ 3. _If_ \(v\) _is separating and not homotopic to_ \(u\)_:_ _Let_ \(A*B*C\) _be a decomposition of_ \(\operatorname{link}(u,v)\) _into a 3-join. Then,_ \((u,v)\) _is a pants pair if and only if there exist nonseparating curves_ \(a\in A,\ b\in B,\) _and_ \(c\in C\) _such that_ \((a,b),\)__\((b,c),\) _and_ \((c,a)\) _are all pants pairs._ _In particular, if \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S)\) and \(\varphi\) sends pants pairs consisting of nonseparating curves to pants pairs consisting of nonseparating curves, then \(\varphi\) sends any pants pair to a pants pair._ Proof.: _The case that \(v\) is nonseparating._ Suppose \((u,v)\) is a pants pair. Let \(w\in B\) be a nonseparating curve that intersects \(u\) at the point \(u\cap v\). On the other hand, suppose \(u\) and \(v\) are disjoint. Then any curve \(w\in B\) must be disjoint from \(v\). _The case that \(v\) is separating and homotopic to \(u\)._ Suppose \((u,v)\) is a pants pair. Then, we can choose nonseparating curves \(a\in A\) and \(c\in C\) that intersect \(u\) and \(v\) at \(u\cap v\), as in Figure 5. Suppose \(u\) and \(v\) are disjoint. Then any curve in \(A\) is disjoint from any curve in \(C\). _The case that \(v\) is separating and not homotopic to \(u\)._ Suppose \((u,v)\) is a pants pair. Then we can choose nonseparating curves \(a\in A\), \(b\in B\), and \(c\in C\) such that they pairwise form pants pairs. An example of such a selection of curves is in Figure 6. Suppose \(u\) and \(v\) are disjoint. Take \(A\) to correspond to all curves supported in the subsurface bounded by only \(u\) and \(C\) to correspond to all curves supported in the subsurface bounded by only \(v.\) It follows that any \(a\in A\) and \(c\in C\) are disjoint. Now that we have reduced Proposition 4.1 to the case of nonseparating pairs of curves, we are ready to prove Step 1. ### Step 1: Neighborhoods of nonseparating curves In this section, we prove that neighborhoods of nonseparating curves are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g}).\) This is broken down into three main steps. In Step 1.1, we show that automorphisms preserve adjacent homotopic nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). In Step 1.2, we show that automorphisms preserve whether a curve is contained (in some sense) in the annulus bounded by two homotopic curves. In Step 1.3, we show that automorphisms preserve whether homotopic nonseparating curves form a pants pair. Figure 5: If \(u\) and \(v\) form a pants pair, we can find nonseparating curves \(a\in A\) and \(c\in C\) that form a pants pair. Figure 6: If \(u\) and \(v\) form a pants pair, we can find nonseparating curves \(a\in A\), \(b\in B\), and \(c\in C\) such that any two form a pants pair. **Step 1.1: Homotopic nonseparating curves** The main result of this step is the following lemma. 
**Lemma 4.3**.: _Let \(u\) and \(v\) be two adjacent nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \(\varphi(u)\) is homotopic to \(\varphi(v)\) if and only if \(u\) is homotopic to \(v\)._ Since homotopic curves are jointly separating, the first step in the proof of Lemma 4.3 is to show that the set of jointly separating pairs of curves is preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). **Lemma 4.4**.: _Let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) for \(g\geq 2.\) Let \(u\) and \(v\) be adjacent nonseparating curves in \(S_{g}\) that do not form a torus pair. Then \(u\) and \(v\) are not jointly separating if and only if \(\varphi(u)\) and \(\varphi(v)\) are not jointly separating._ _It follows that \(u\) and \(v\) are jointly separating if and only if \(\varphi(u)\) and \(\varphi(v)\) are jointly separating._ Proof.: We prove the lemma by showing the following: \(u\) and \(v\) are not jointly separating if and only if there exists a separating curve \(c\) such that \(c\) separates \(u\) and \(v\). _Suppose \(u\) and \(v\) are jointly separating._ In this case, any separating curve \(c\) in \(\operatorname{link}(u,v)\) would have to lie in a single component of \(S\setminus(u\cup v)\); otherwise, \(c\) would intersect \(u\) or \(v\) at least twice. Therefore, any such \(c\) does not separate \(u\) and \(v\). _Suppose \(u\) and \(v\) are neither jointly separating nor a torus pair._ In this case, there is a separating curve that separates \(u\) from \(v\) as in Figure 7. To find such a curve, cut \(S\) along \(u\) and \(v\) (retaining the boundaries), and take \(c\) to be the separating curve that forms a (potentially pinched) pair of pants with the boundaries arising from \(u\). Figure 7: Curves separating \(u\) from \(v\) A pair of adjacent homotopic curves is jointly separating, so we now further distinguish between homotopic jointly separating curves and non-homotopic jointly separating curves. **Lemma 4.5**.: _Let \(u\) and \(v\) be two adjacent jointly separating nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S)\) and let \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S)\). Then, \(u\) and \(v\) are homotopic if and only if \(\varphi(u)\) and \(\varphi(v)\) are homotopic._ Proof.: By Lemma 2.5, it is enough to show that \(u\) and \(v\) are homotopic if and only if \(\mathcal{Q}^{\mathrm{sep}}(u,v)\) is not a join. _Suppose \(u\) and \(v\) are homotopic._ Let \([a]\) and \([b]\) be distinct vertices of \(\mathcal{Q}^{\mathrm{sep}}(u,v).\) Then, \(a\) and \(b\) are necessarily in the same connected component of \(S\setminus(u\cup v).\) Let \(c\) be a curve in that same component of \(S\setminus(u\cup v)\) such that no curve isotopic to \(c\) is disjoint from either \(a\) or \(b.\) Then, every curve in the isotopy class \([d]=T_{[c]}[a]\), the Dehn twist of \([a]\) about \([c]\), intersects every curve in the isotopy class of \(a\) and every curve in the isotopy class of \(b\) at least twice. Choose a representative \(d\) of \(T_{[c]}[a]\) disjoint from \(u\cup v\). Then we have that \([d]\), a vertex of \(\mathcal{Q}^{\mathrm{sep}}(u,v)\), is neighbors with neither \([a]\) nor \([b].\) Since this is true for any \([a]\) and \([b],\) we conclude that \(\mathcal{Q}^{\mathrm{sep}}(u,v)\) is not a join. _Suppose \(u\) and \(v\) are not homotopic._ Let \([a]\) be a vertex of \(Q^{\mathrm{sep}}(u,v)\) and \(a\in\mathrm{link}(u,v)\) be an arbitrary representative. 
Then, \(a\) is contained in one connected component of \(S\setminus(u,v)\) except possibly for points of intersection with \(u\) and \(v\). By pushing \(a\) off of itself in the direction away from \(u\) and \(v\), we obtain another curve \(a^{\prime}\) with \([a^{\prime}]=[a].\) We notice that no curve \(a^{\prime}\) homotopic to \(a\) is contained in a different connected component of \(S\setminus(u\cup v)\) from \(a,\) since then \(a\) and \(a^{\prime}\) would bound a (potentially pinched) annulus that must contain \(u\) or \(v,\) a contradiction. We now conclude that \(\mathcal{Q}^{\mathrm{sep}}(u,v)\) is indeed a join, where the parts correspond to equivalence classes of curves contained in each of the components of \(S\setminus(u\cup v)\) (potentially except for intersections with \(u\) and \(v\)). We now show that homotopic nonseparating pairs of curves are preserved by automorphisms of \(\mathcal{C}^{\dagger}_{1}(S_{g}).\) Proof of Lemma 4.3.: By Proposition 3.1, torus pairs are preserved by automorphisms, and thus the set of disjoint and pants pairs are preserved by automorphisms. Disjoint and pants pairs may be jointly separating or not jointly separating; by Lemma 4.4, being (not) jointly separating is preserved by automorphisms. Finally, jointly separating pairs may be either homotopic or not. Lemma 4.5 ascertains that, within the set of jointly separating pairs, homotopic pairs are preserved by automorphisms of \(\mathcal{C}^{\dagger}_{1}(S_{g}).\) By combining these results, we conclude that nonseparating homotopic pairs are preserved by automorphisms. #### Step 1.2: Containment in an annulus In this section, we prove that automorphisms preserve whether a curve lies in the annulus bounded by two homotopic curves. Such a curve must be homotopic to the two boundary curves, and therefore all three curves must be adjacent in \(\mathcal{C}^{\dagger}_{1}(S_{g}).\) First, we define what it means for a curve \(w\) to lie in the annulus bounded by adjacent homotopic curves \(u\) and \(v\) in \(\mathcal{C}^{\dagger}_{1}(S_{g}).\) Let \(\mathrm{Ann}_{\leq 1}(u,v)\subseteq\mathrm{link}(u,v)\) denote the subgraph generated by curves \(w\) supported on the (possibly pinched) annulus bounded by \(u\) and \(v\) such that \(|w\cap(u\cup v)|\leq 1\) as in Figure 8. Our main goal in this section is to prove the following result. **Lemma 4.6**.: _Let \(u,\)\(v,\) and \(w\) be pairwise adjacent homotopic nonseparating curves in \(\mathcal{C}^{\dagger}_{1}(S_{g})\) and \(\varphi\in\mathrm{Aut}\,\mathcal{C}^{\dagger}_{1}(S_{g})\). Then \(w\in\mathrm{Ann}_{\leq 1}(u,v)\) if and only if \(\varphi(w)\in\mathrm{Ann}_{\leq 1}(\varphi(u),\varphi(v)).\)_ Figure 8: Annulus formed by a pair of homotopic, nonseparating curves \(u\) and \(v\) Proof.: Since elements of \(\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) preserve separating links by Lemma 2.2, it is enough to show that \(w\in\operatorname{Ann}_{\leq 1}(u,v)\) if and only if \(\operatorname{link}^{\operatorname{sep}}(u,v)\subset\operatorname{link}^{ \operatorname{sep}}(w)\). _Suppose \(w\in\operatorname{Ann}_{\leq 1}(u,v)\). Let \(a\in\operatorname{link}^{\operatorname{sep}}(u,v)\). Since \(|w\cap(u,v)|\leq 1\) and \(w\) is in the annulus bounded by \(u\) and \(v\) while \(a\) is not, \(|a\cap w|\leq 1\). We conclude that \(a\in\operatorname{link}^{\operatorname{sep}}(w)\)._ _Suppose \(w\not\in\operatorname{Ann}_{\leq 1}(u,v)\). 
Then, \(w\) is either (1) in the annulus bounded by \(u\) and \(v\) and intersects both of them or (2) not in the annulus bounded by \(u\) and \(v\)._ _Suppose \(w\) is in case (1), and take any separating curve \(a\in\operatorname{link}(u,v)\) disjoint from \(u\cup v\). We can then isotope \(a\) to touch \(u\) and \(v\) at \(u\cap w\) and \(v\cap w\), respectively. Thus \(a\not\in\operatorname{link}(w)\)._ _Suppose \(w\) is in case (2), and take any separating curve \(a\in\operatorname{link}(u,v).\) We can then isotope \(a\) to intersect \(w\) at least twice, so \(a\not\in\operatorname{link}(w)\)._ _We conclude that \(\operatorname{link}^{\operatorname{sep}}(u,v)\not\subset\operatorname{link}^ {\operatorname{sep}}(w)\). _ #### Step 1.3: Homotopic nonseparating pants pairs We now state and prove the result which allows us to distinguish pants pairs from disjoint pairs in the case that \(u\) and \(v\) are homotopic nonseparating curves. **Lemma 4.7**.: _Let \(u\) and \(v\) be adjacent homotopic nonseparating curves in \(S_{g}\) and \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g}).\) Then, \((u,v)\) is a pants pair if and only if \((\varphi(u),\varphi(v))\) is a pants pair. It follows that \((u,v)\) is a disjoint pair if and only if \((\varphi(u),\varphi(v))\) is a disjoint pair._ _The following lemma provides the combinatorial conditions necessary to prove Lemma 4.7._ **Lemma 4.8**.: _Let \(u\) and \(v\) be adjacent homotopic nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Then \((u,v)\) is a pants pair if and only if there exists a curve \(\gamma\in\operatorname{link}(u,v)\) such that 1) \(\gamma\) forms torus pairs with both \(u\) and \(v\) and 2) for any \(\delta\in\operatorname{Ann}_{\leq 1}(u,v)\), \(\gamma\) and \(\delta\) are adjacent in \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Otherwise, \(u\) and \(v\) are disjoint._ Proof.: Suppose \((u,v)\) is a pants pair. Choose a curve \(\gamma\in\operatorname{link}(u,v)\) that forms a torus pair with \(u\) and \(v\) and passes through their point of intersection. Then, \(\gamma\) intersects every essential curve contained in \(\operatorname{Ann}_{\leq 1}(u,v)\) exactly once, and is therefore adjacent to them. A schematic of this situation is pictured on the left in Figure 9. Suppose \(u\) and \(v\) are disjoint. Choose any curve \(\gamma\in\operatorname{link}(u,v)\) that forms a torus pair with both \(u\) and \(v\). Since an interval of \(\gamma\) lies in the annulus bounded by \(u\) and \(v\), we can choose a curve in \(\operatorname{Ann}_{\leq 1}(u,v)\) that intersects \(\gamma\) at least twice. An example of such a curve is shown on the right in Figure 9. Figure 9: We can distinguish homotopic curves that are disjoint from those that form a pants pair. Proof of Lemma 4.7.: The statement follows directly from Lemma 4.8, as torus pairs (Proposition 3.1) and being an element of \(\operatorname{Ann}_{\leq 1}(u,v)\) (Lemma 4.6) are both preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). Lemma 4.7 will allow us to conclude Proposition 4.1 in the case that \(u\) and \(v\) are homotopic. ### Step 2: Non-homotopic nonseparating curves: pants pairs vs. disjoint pairs The main result from this section is Lemma 4.9. It proves Proposition 4.1 in the case that the edge \((u,v)\) consists of nonseparating, non-homotopic curves. 
**Lemma 4.9**.: _Let \(u\) and \(v\) be adjacent non-homotopic nonseparating curves in \(S_{g}.\) Then, for any \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g}),\) we have that \((u,v)\) is a pants pair if and only if \((\varphi(u),\varphi(v))\) is a pants pair. We therefore have that automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) also preserve disjoint pairs of curves._ The main tool used to prove the above lemma is Lemma 4.10, which provides a necessary and sufficient combinatorial condition for non-homotopic nonseparating curves to form a pants pair. **Lemma 4.10**.: _Let \(u\) and \(v\) be adjacent non-homotopic nonseparating curves in \(\mathcal{C}_{1}^{\dagger}(S)\). Suppose that \((u,v)\) is not a torus pair. Then \(u\) and \(v\) are disjoint if and only if there exist adjacent curves \(\alpha,\beta\in\operatorname{link}(u,v)\), such that_ 1. \(\alpha\)_,_ \(u\)_, and_ \(\beta\) _are homotopic,_ 2. \(\alpha\)_,_ \(u\)_, and_ \(\beta\) _are disjoint, and_ 3. \(u\in\operatorname{Ann}_{\leq 1}(\alpha,\beta)\)_._ _Otherwise, \((u,v)\) is a pants pair._ The main idea behind Lemma 4.10 is that \(u\) and \(v\) are disjoint if and only if there is a regular neighborhood of \(u\) that is disjoint from \(v.\) Figure 10 gives a schematic of the proof. Figure 10: We can distinguish non-homotopic curves that are disjoint from those that form a pants pair. Proof of Lemma 4.10.: Suppose first that \(u\) and \(v\) are disjoint. Then, there is an annular neighborhood of \(u\) disjoint from \(v.\) The boundary components of such a neighborhood are the desired curves \(\alpha\) and \(\beta.\) Suppose now that such \(\alpha\) and \(\beta\) exist. Let \(N\) be the annulus with boundary components \(\alpha\) and \(\beta\). By hypothesis, \(u\subseteq\operatorname{Int}(N)\). If \(u\cap v\neq\emptyset\), then \(v\) must intersect either \(\alpha\) or \(\beta\) in two places, which contradicts the assumption that \(\alpha,\beta\in\operatorname{link}(u,v).\) We are now ready to complete the main result of Section 4.2. Proof of Lemma 4.9.: Suppose that \(u\) and \(v\) are disjoint. Let \(\alpha\) and \(\beta\) be as in Lemma 4.10. Then \(\varphi\) preserves property (1) of \(\alpha\) and \(\beta\) by Lemma 4.3, property (2) by Lemma 4.7, and property (3) by Lemma 4.6. Hence \(\varphi(\alpha)\) and \(\varphi(\beta)\) realize \(\varphi(u)\) and \(\varphi(v)\) as being disjoint by Lemma 4.10, so the result follows. We are now ready to complete the main result of Section 4. Proof of Proposition 4.1.: Let \(u\) and \(v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(S_{g}).\) We must show that if \((u,v)\) is a pants pair, then for any \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\), \((\varphi(u),\varphi(v))\) is also a pants pair. By Lemma 4.2, we may assume that \(u\) and \(v\) are nonseparating. We prove the proposition with casework. _Case 1._ If \(u\) and \(v\) are homotopic, nonseparating, and form a pants pair, Lemma 4.7 asserts that \(\varphi(u)\) and \(\varphi(v)\) form a pants pair. _Case 2._ If \(u\) and \(v\) are non-homotopic, nonseparating, and form a pants pair, Lemma 4.9 asserts that \(\varphi(u)\) and \(\varphi(v)\) form a pants pair. ## 5 Torus Case In this section, we approach the case where the surface \(S_{g}\) has genus \(1\), and is therefore a torus. We will denote our surface by \(T\) to avoid ambiguity. Our previous tools do not apply in the torus case because there are no essential separating curves on a torus. 
Proposition 5.1 is the main result of this section. **Proposition 5.1**.: _Let \(T\) be a torus. Then the natural map_ \[\Phi:\operatorname{Homeo}(T)\to\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T)\] _is an isomorphism._ We prove Proposition 5.1 in two steps. First, in Section 5.1, we use the proof method of Le Roux-Wolff [10] to show that torus pairs are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(T)\). Then, in Section 5.2, we show that pants pairs (and therefore disjoint pairs) are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(T)\). ### Torus pairs vs. non-torus pairs The goal of this section is to prove Proposition 5.2. **Proposition 5.2**.: _Let \(u\) and \(v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(T)\) and \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T).\) Then \((u,v)\) is a torus pair if and only if \((\varphi(u),\varphi(v))\) is a torus pair._ We prove this proposition by building on the work of Le Roux-Wolff. We begin by introducing the relevant results and explaining how we adapt their statements and proofs to apply in the case of the fine \(1\)-curve graph on the torus. For any nonspherical, possibly non-orientable and possibly noncompact surface \(S\), Le Roux-Wolff study the graph \(\mathcal{C}_{\pitchfork}^{\dagger}(S)\), which has a vertex for every essential, simple, closed curve and two vertices are connected by an edge if they are either disjoint or they have one topologically transverse intersection point. (In the language of our paper, the edges correspond to disjoint pairs and torus pairs.) Le Roux-Wolff show that \(\operatorname{Homeo}(S)\cong\operatorname{Aut}\mathcal{C}_{\pitchfork}^{\dagger} (S)\) via the natural homomorphism [10]. To do this, they categorize all the possible curve configurations. A _clique_ is a collection of pairwise adjacent vertices in a graph. In particular, a clique \((a,b,c)\) is of type _necklace_ if \((a,b),\ (b,c)\), and \((c,a)\) are all torus pairs and do not have a common intersection point. Then they make the assertion that in \(\mathcal{C}_{\pitchfork}^{\dagger}(S)\), 1. a clique \((a,b,c)\) is of type necklace if and only if there exists a finite set of (at most 8) vertices such that every \(d\in\operatorname{link}(a,b,c)\) is adjacent to at least one vertex in the finite set, 2. adjacent vertices \(a,b\) are disjoint if and only if there is no \(c\in\operatorname{link}(a,b)\) such that \((a,b,c)\) is of type necklace, and 3. \(a,b\) form a torus pair if and only if \(a\) and \(b\) are not disjoint. In the case of the fine \(1\)-curve graph of the torus, these properties still hold with an adjustment to the last two: 1. adjacent vertices \(a,b\) are disjoint or a pants pair if and only if there is no \(c\in\operatorname{link}(a,b)\) such that \((a,b,c)\) is of type necklace and 2. adjacent vertices \(a,b\) are a torus pair if and only if \(a\) and \(b\) are neither disjoint nor a pants pair. In fact, (1) is the main property we must verify en route to Proposition 5.2. It is stated in the following lemma. **Lemma 5.3**.: _A clique \((a,b,c)\) in \(\mathcal{C}_{1}^{\dagger}(T)\) is of type necklace if and only if there exists a finite set of (at most 8) vertices such that every \(d\in\operatorname{link}(a,b,c)\) is adjacent to at least one vertex in the finite set._ The forward direction of Lemma 5.3 is proven by Le Roux-Wolff and applies without adaptations. The key to the backward direction is an adaptation to Lemma 2.5 of Le Roux-Wolff, which we give as follows. 
**Lemma 5.4** (Adaptation of Lemma 2.5 of Le Roux-Wolff).: _Let \((a,b,c)\) be a clique not of type necklace. Then, there exists \(d\in\operatorname{link}(a,b,c)\) such that \(d\) intersects every component of \(T\setminus\{a,b,c\}\)._ This proof is done via casework on the possible arrangements of cliques. To aid in categorizing arrangements of cliques, we use an updated version of the notation of Le Roux-Wolff: for a clique \((a,b,c)\), we record pairwise intersection types by a triple \((\cdot,\cdot,\cdot)\), up to permutation. If two curves are disjoint, we signify this by a 0; if they form a torus pair, by a 1; and if they form a pants pair, by a P. Proof of Lemma 5.4.: The cases (1,1,1), (1,1,0), and (0,0,0) are all accounted for in Le Roux-Wolff; (1,0,0) does not apply in the torus case. The remainder of the cases arise from replacing some 0's with P's. _Case 1: (1,1,P)._ In this case, we have two homotopic, touching curves and a third curve that forms a torus pair with them. There are two subcases: whether the three curves intersect at one point or at 3 distinct points. _Case 1a: one point of intersection._ To create a fourth curve adjacent to all three that intersects all components of \(T\setminus\{a,b,c\}\), we push the third curve off of itself. A schematic of this configuration is shown in Figure 11. Figure 11: A schematic of Case 1a of Lemma 5.4 _Case 1b: three points of intersection._ The difficulty here is that \(T\setminus\{a,b,c\}\) has 3 connected components, so pushing the third curve will no longer work. In particular, a curve that satisfies the conditions of the lemma must form a torus pair with each of \(a\), \(b\), and \(c\). A schematic of this configuration is shown in Figure 12, along with a curve \(d\) that satisfies the condition of the lemma. Figure 12: A schematic of Case 1b of Lemma 5.4 _Case 2: variants of (0,0,0): (P,0,0), (P,P,0), and (P,P,P)._ In all of these configurations, all of \(a,b,c\) are homotopic, and thus form (possibly pinched) annuli. A curve \(d\) transverse to \(a\), \(b\), and \(c\) that does not cross any of the touching points satisfies the lemma. A schematic of these configurations, along with a curve \(d\) that satisfies the hypotheses of the lemma, is shown in Figure 13. Then, Lemma 2.6 of Le Roux-Wolff holds with minor adaptations and an identical proof; here, it appears as Lemma 5.5. It provides the converse to Lemma 5.3. **Lemma 5.5**.: _Let \((a,b,c)\) be a clique in \(\mathcal{C}_{1}^{\dagger}(T)\) not of type necklace and \(\{\alpha_{1},\dots,\alpha_{j}\}\) be a finite collection of curves distinct from \(a,\ b,\) and \(c.\) Then, there is a vertex \(d\in\operatorname{link}(a,b,c)\) that is not adjacent to any \(\alpha_{i}.\)_ The main idea of the proof is to start with a curve \(d\) given by Lemma 5.4 and isotope it in \(T\setminus\{a,b,c\}\) to intersect each \(\alpha_{i}\) arbitrarily many times. We now have all the tools we need to prove Lemma 5.3. Proof of Lemma 5.3.: The forward direction is given by Lemma 2.8 of Le Roux-Wolff. The proof applies because all edge relations in \(\mathcal{C}_{\pitchfork}^{\dagger}(T)\) still exist in \(\mathcal{C}_{1}^{\dagger}(T)\). The reverse direction follows from Lemma 5.5. It remains to show properties \((2^{\prime})\) and \((3^{\prime})\) above. The following lemma is similar to Corollary 2.4 of Le Roux-Wolff. **Lemma 5.6**.: _In \(\mathcal{C}_{1}^{\dagger}(T),\) the following properties hold._ 1. 
_Adjacent vertices_ \(a\) _and_ \(b\) _are disjoint or a pants pair if and only if there is no_ \(c\in\operatorname{link}(a,b)\) _such that_ \((a,b,c)\) _is of type necklace and_ 2. _adjacent vertices_ \(a\) _and_ \(b\) _are a torus pair if and only if_ \(a\) _and_ \(b\) _are neither disjoint nor a pants pair._ Proof.: If adjacent curves \(a\) and \(b\) are disjoint or form a pants pair, then by definition, there is no third curve \(c\) such that \((a,b,c)\) is a necklace clique. Alternately, if \(a\) and \(b\) are a torus pair, then up to homeomorphism, they are the (1,0) and (0,1) curves on the torus. These curves, along with some (1,1) curve, form a necklace clique. A schematic of this configuration appears in Figure 14. Property \((3^{\prime})\) follows from the definitions of torus pairs. Figure 13: A schematic of Case 2 of Lemma 5.4 Proof of Proposition 5.2.: By Lemma 5.6, \((u,v)\) is a torus pair if and only if there is a curve \(w\) such that \((u,v,w)\) are of type necklace. Lemma 5.3 asserts that the property of being of necklace type is indeed combinatorial, and therefore is preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(T)\). We conclude that torus pairs are preserved by automorphisms of \(\mathcal{C}_{1}^{\dagger}(T)\). ### Pants pairs vs. disjoint pairs It remains to show that we can distinguish pants pairs from disjoint pairs in \(\mathcal{C}_{1}^{\dagger}(T).\) We will do this in the following proposition. **Proposition 5.7**.: _Let \(u\) and \(v\) be adjacent curves in \(\mathcal{C}_{1}^{\dagger}(T)\) and \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T)\). Then \((u,v)\) is a pants pair if and only if \((\varphi(u),\varphi(v))\) is a pants pair._ Proof.: Proposition 5.1 ascertains that automorphisms preserve the set of pants and disjoint pairs. Without loss of generality, we may assume that \((u,v)\) is a pants or disjoint pair. By Proposition 5.2, we can distinguish whether adjacent curves on a torus are homotopic, which is precisely when they are not a torus pair. Define the _homotopic link_ of \((u,v)\), denoted by \(\operatorname{link}^{\hom}(u,v)\), to be the subgraph of \(\operatorname{link}(u,v)\) induced by curves homotopic to \(u\) and \(v\). We will prove the theorem by showing that \((u,v)\) is a pants pair if and only if \(\operatorname{link}^{\hom}(u,v)\) is a join. _Suppose \((u,v)\) is a pants pair_. Then, \(u\) and \(v\) bound a pinched annulus. Any curve in \(\operatorname{link}^{\hom}(u,v)\) is contained in this pinched annulus or in its complement (possibly intersecting \(u\) and \(v\)). Let \(w\in\operatorname{link}^{\hom}(u,v)\) be contained in the pinched annulus. Then, \(w\) must intersect both \(u\) and \(v\) at \(u\cap v\), and thus does not intersect them elsewhere. Let \(x\in\operatorname{link}^{\hom}(u,v)\) be contained in the complement of the pinched annulus. If \(x\cap(u\cup v)\) is nonempty, then \(|w\cap x|=1\) if \(x\cap u\cap v\neq\emptyset\) and \(|w\cap x|=0\) otherwise. We conclude that \(x\) and \(w\) are adjacent in \(\operatorname{link}^{\hom}(u,v)\), which is therefore a join with parts defined by the subsurfaces of \(T\) bounded by \(u\) and \(v\). _Suppose \((u,v)\) is a disjoint pair_. We will show that \(\operatorname{link}^{\hom}(u,v)\) is not a join by showing that no possible partition exists. We have that \(u\) and \(v\) bound two annuli with boundary, \(A\) and \(B\), and every curve in \(\operatorname{link}^{\hom}(u,v)\) is contained in precisely one of the two annuli. 
First, we claim that curves contained in the same annulus cannot be in different parts of the partition. Let \(a_{1},a_{2}\in\operatorname{link}^{\hom}(u,v)\) be contained in \(A.\) Then, there exists a curve \(a_{3}\in\operatorname{link}^{\hom}(u,v)\) contained in \(A\) that intersects \(a_{1}\) and \(a_{2}\) at least two times each, and is therefore adjacent to neither. Figure 14: A clique of type necklace Thus, the only possible partition for a join is into two parts, each corresponding to curves contained in one of \(A\) or \(B.\) However, there is a curve \(a\) contained in \(A\) that intersects each of \(u\) and \(v\) once, and a curve \(b\) contained in \(B\) that intersects \(u\) and \(v\) at \(a\cap u\) and \(a\cap v,\) respectively. Therefore, \(a\) and \(b\) are not adjacent. We conclude that there is no possible partition of \(\operatorname{link}^{\hom}(u,v)\) into a join. Proposition 5.7 allows us to combinatorially distinguish between pants pairs and disjoint pairs. With that in hand, we are ready to prove Proposition 5.1. ### \(\operatorname{Homeo}(T)\cong\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T)\) Proof of Proposition 5.1.: We first note that any homeomorphism of \(T\) induces an element of \(\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T).\) It remains to show that an element of \(\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T)\) induces a homeomorphism of \(S.\) We will reduce this claim to the theorem of Le Roux-Wolff [10] by showing that any automorphism of \(\mathcal{C}_{1}^{\dagger}(T)\) induces an automorphism of \(\mathcal{C}_{\pitchfork}^{\dagger}(T)\). It suffices to show that for any \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(T)\) and any adjacent curves \(u\) and \(v\) in \(\mathcal{C}_{1}^{\dagger}(T),\) we have \[(u,v)\text{ disjoint}\ \ \Longleftrightarrow\ (\varphi(u),\varphi(v))\text{ disjoint}.\] In particular, we must show that \((u,v)\) is a pants pair if and only if \((\varphi(u),\varphi(v))\) is a pants pair. By Proposition 5.2, torus pairs are preserved by automorphisms, and by Proposition 5.7, pants pairs are distinguishable from disjoint pairs. We therefore have that \(\varphi\) induces an element of \(\operatorname{Aut}\mathcal{C}_{\pitchfork}^{\dagger}(T).\) We now invoke Le Roux-Wolff [10, Theorem 1.1] that \(\operatorname{Homeo}(S_{g})\cong\operatorname{Aut}\mathcal{C}_{\pitchfork}^{ \dagger}(S_{g})\) to complete the proof. ## 6 Proof of the main theorem Proof of Theorem 1.1.: There are two cases, depending on the genus \(g\). We begin by showing that automorphisms of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) induce automorphisms of \(\mathcal{C}_{\pitchfork}^{\dagger}(T)\) (if \(g=1\)) or \(\mathcal{C}^{\dagger}(S_{g})\) (if \(g\geq 2\)). _Case 1: \(g=1\)._ This is Proposition 5.1. _Case 2: \(g\geq 2\)._ We observe that homeomorphisms preserve intersection number, so any homeomorphism of \(S_{g}\) induces an automorphism of \(\mathcal{C}_{1}^{\dagger}(S_{g})\). It remains to show that an automorphism of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) induces a homeomorphism of \(S\). We reduce this claim to the result of Long-Margalit-Pham-Verberne-Yao that \(\operatorname{Aut}\mathcal{C}^{\dagger}(S_{g})\cong\operatorname{Homeo}(S_{g})\) via the natural isomorphism. To do this, we show that any automorphism of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) sends all pairs of vertices corresponding to once-intersecting curves to pairs of curves corresponding to once-intersecting curves. 
By Proposition 3.1, automorphisms preserve torus pairs, and by Proposition 4.1, automorphisms preserve pants pairs. Since an edge can only correspond to a torus pair, a pants pair, or a pair of disjoint curves, we conclude that any \(\varphi\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) induces an automorphism of the image of the natural inclusion \(\mathcal{C}^{\dagger}(S_{g})\hookrightarrow\mathcal{C}_{1}^{\dagger}(S_{g})\). We apply the theorem of Long-Margalit-Pham-Verberne-Yao to prove that an automorphism of \(\mathcal{C}_{1}^{\dagger}(S_{g})\) naturally induces a homeomorphism of \(S_{g}.\) It remains to show that the maps we construct are indeed the inverses of \(\Phi.\) For the sake of clarity, we name the maps we use. Let \(G\) be \(\mathcal{C}^{\dagger}(S_{g})\) if \(g\geq 2\) and \(\mathcal{C}_{\pitchfork}^{\dagger}(T)\) if \(g=1.\) Let \(\Psi_{1}:\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\to\operatorname{Aut}G\) be the map such that \(\Psi_{1}(f)=\overline{f}\) is the automorphism induced by \(f.\) We note that \(f\) and \(\overline{f}\) act the same way on the vertex sets of their corresponding graphs. Let \(\Psi_{2}:\operatorname{Aut}G\to\operatorname{Homeo}(S_{g})\) be the map constructed by Le Roux-Wolff [10] (if \(g=1\)) or Long-Margalit-Pham-Verberne-Yao [13] (if \(g\geq 2\)). Let \(\varphi\in\operatorname{Homeo}(S_{g}).\) Then, \[\Psi_{2}\circ\Psi_{1}\circ\Phi(\varphi) =\Psi_{2}\circ\Psi_{1}(f_{\varphi}),\text{ where }f_{\varphi}\text{ permutes vertices as prescribed by }\varphi\] \[=\Psi_{2}(\overline{f_{\varphi}})\] \[=\Psi_{2}(\Psi_{2}^{-1}(\varphi))\] \[=\varphi.\] Conversely, let \(f\in\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g}).\) Then, \[\Phi\circ\Psi_{2}\circ\Psi_{1}(f) =\Phi\circ\Psi_{2}(\overline{f})\] \[=\Phi(\varphi_{\overline{f}}),\text{ where }\varphi_{\overline{f}}\text{ permutes curves as prescribed by }\overline{f}\] \[=\Phi(\varphi_{f}),\text{ since }f\text{ and }\overline{f}\text{ permute vertices in the same way}\] \[=f.\] We conclude that \(\Psi_{2}\circ\Psi_{1}\) is a two-sided inverse of \(\Phi\), so the natural map \(\Phi:\operatorname{Homeo}(S_{g})\to\operatorname{Aut}\mathcal{C}_{1}^{\dagger}(S_{g})\) is an isomorphism.
2309.11048
Containing Analog Data Deluge at Edge through Frequency-Domain Compression in Collaborative Compute-in-Memory Networks
Edge computing is a promising solution for handling high-dimensional, multispectral analog data from sensors and IoT devices for applications such as autonomous drones. However, edge devices' limited storage and computing resources make it challenging to perform complex predictive modeling at the edge. Compute-in-memory (CiM) has emerged as a principal paradigm to minimize energy for deep learning-based inference at the edge. Nevertheless, integrating storage and processing complicates memory cells and/or memory peripherals, essentially trading off area efficiency for energy efficiency. This paper proposes a novel solution to improve area efficiency in deep learning inference tasks. The proposed method employs two key strategies. Firstly, a Frequency domain learning approach uses binarized Walsh-Hadamard Transforms, reducing the necessary parameters for DNN (by 87% in MobileNetV2) and enabling compute-in-SRAM, which better utilizes parallelism during inference. Secondly, a memory-immersed collaborative digitization method is described among CiM arrays to reduce the area overheads of conventional ADCs. This facilitates more CiM arrays in limited footprint designs, leading to better parallelism and reduced external memory accesses. Different networking configurations are explored, where Flash, SA, and their hybrid digitization steps can be implemented using the memory-immersed scheme. The results are demonstrated using a 65 nm CMOS test chip, exhibiting significant area and energy savings compared to a 40 nm-node 5-bit SAR ADC and 5-bit Flash ADC. By processing analog data more efficiently, it is possible to selectively retain valuable data from sensors and alleviate the challenges posed by the analog data deluge.
Nastaran Darabi, Amit R. Trivedi
2023-09-20T03:52:04Z
http://arxiv.org/abs/2309.11048v1
Containing Analog Data Deluge at Edge through Frequency-Domain Compression in Collaborative Compute-in-Memory Networks ###### Abstract Edge computing is a promising solution for handling high-dimensional, multispectral analog data from sensors and IoT devices for applications such as autonomous drones. However, edge devices' limited storage and computing resources make it challenging to perform complex predictive modeling at the edge. Compute-in-memory (CiM) has emerged as a principal paradigm to minimize energy for deep learning-based inference at the edge. Nevertheless, integrating storage and processing complicates memory cells and/or memory peripherals, essentially trading off area efficiency for energy efficiency. This paper proposes a novel solution to improve area efficiency in deep learning inference tasks. The proposed method employs two key strategies. Firstly, a Frequency domain learning approach uses binarized Walsh-Hadamard Transforms, reducing the necessary parameters for DNN (by 87% in MobileNetV2) and enabling compute-in-SRAM, which better utilizes parallelism during inference. Secondly, a memory-immersed collaborative digitization method is described among CiM arrays to reduce the area overheads of conventional ADCs. This facilitates more CiM arrays in limited footprint designs, leading to better parallelism and reduced external memory accesses. Different networking configurations are explored, where Flash, SA, and their hybrid digitization steps can be implemented using the memory-immersed scheme. The results are demonstrated using a 65 nm CMOS test chip, exhibiting significant area and energy savings compared to a 40 nm-node 5-bit SAR ADC and 5-bit Flash ADC. By processing analog data more efficiently, it is possible to selectively retain valuable data from sensors and alleviate the challenges posed by the analog data deluge. Analog Data Deluge; Compute-in-SRAM; deep neural network; frequency transforms; low power computing ## I Introduction The advent of deep learning and its application in critical domains such as healthcare, finance, security, and autonomous vehicles has led to a data deluge, necessitating efficient computational strategies [1, 2, 3]. Deep neural networks (DNNs), which are increasingly deployed at the network's edge, are particularly challenging due to their complexity and the limited computing and storage resources at the edge [4]. This work presents novel techniques to address these challenges, focusing on the compute-in-memory (CiM) processing of DNNs and frequency-domain model compression. CiM integrates model storage and computations, reducing significant data movements between intermediate memory hierarchy and processing modules that hinder the performance of conventional digital architectures for DNNs. Traditional memory structures such as SRAM [5], RRAM [6], and embedded-DRAM [7, 8] can be adapted for CiM, making it an attractive scheme for cost-effective adoption in various systems-on-chip (SOC). CiM schemes leverage analog representations of operands to simplify their summation over a wire by exploiting Kirchhoff's law, thereby minimizing the necessary workload and processing elements [9, 10, 11, 12, 13]. Figure 1: **Addressing the Data Deluge with Frequency-Domain Processing of Neural Networks and Memory-immersed Collaborative Digitization:** The figure illustrates two key strategies for managing the deluge of data in neural networks and analog systems. 
**(a)** Frequency-Domain Processing of Neural Networks: Frequency transformations of neural tensors are used to manage the data deluge. **(b)** Memory-Inmersed Collaborative Digitization for Analog Data Deluge: The figure also illustrates how Compute-in-Memory (CiM) arrays are coupled for sequential (left) and parallel (right) reference generation for memory-immersed Analog-to-Digital Converters (ADC). These approaches are designed to manage the deluge of analog data by converting it into a digital format, making it more manageable and suitable for further processing. **(c)** The impact of processing increasing layers of ResNet20 with Walsh-Hadamard transforms (WHT) on prediction accuracy and model compression is shown. **(d)** The increase in multiply-accumulate (MAC) operations under frequency domain processing compared to conventional processing for MobileNetV2 and ResNet20 is also demonstrated. However, the analog computations in CiM present significant implementation challenges, particularly the need for digital-to-analog converters (DAC) and analog-to-digital converters (ADC) to operate on digital inputs and digitize the analog output for routing and storage. This work proposes a novel memory-immersed digitization that can preclude a dedicated ADC and its associated area overhead, Fig. 1(b) The proposed scheme uses parasitic bit-lines of memory arrays to form within-memory capacitive DAC, and neighboring memory arrays collaborate for resource-efficient digitization [14]. Although [13] similarly explored memory-immersed digitization, the presented results were based on simulations only, and only SA functionality was shown. Meanwhile, in this work, the proposed techniques are characterized on a test chip designed in 65 nm CMOS technology. We demonstrate 5-bit memory-immersed ADC operation using a network of 16\(\times\)32 compute-in-SRAM arrays. Compared to a 40 nm-node 5-bit SAR ADC, our 65 nm design requires \(\sim\)25\(\times\) less area and \(\sim\)1.4\(\times\) less energy by leveraging in-memory computing structures. Compared to a 40 nm-node 5-bit Flash ADC, our design requires \(\sim\)51\(\times\) less area and \(\sim\)13\(\times\) less energy. On the other hand, frequency-domain model compression, an efficient alternative to traditional model pruning techniques, leverages fast algorithms for transforms such as the discrete cosine transform (DCT) or discrete Fourier transform (DFT) to identify and remove redundant or uncorrelated information in the frequency domain [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. This work focuses on Walsh-Hadamard transform (WHT)-based model compression, which can reduce model size with limited accuracy loss, shown in Fig. 1(c). However, as demonstrated in Fig. 1(d) it also introduces a notable increase in the required multiply-accumulate (MAC) operations, offsetting the benefits of model size reduction. To address this challenge, this work presents a novel analog acceleration approach [27]. These techniques and architectures present a promising solution for sustainable edge computing, effectively addressing the challenges posed by the analog data deluge [5, 6, 7, 8, 9, 10, 13, 28, 29]. By improving area efficiency in deep learning inference tasks, reducing energy consumption, and leveraging output sparsity for efficient computation, these techniques enable better handling of high-dimensional, multispectral analog data [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. 
This makes them a promising solution for sustainable data processing at the edge, paving the way for the next generation of deep learning applications in scenarios where area and power resources are limited [1, 3, 4]. The key contributions of this work are as follows: * Firstly, we introduce a Compute-In-Memory (CIM) architecture that capitalizes on Blockwise Walsh-Hadamard Transform (BWHT) and soft-thresholding techniques to compress deep neural networks effectively. This innovative CIM design enables computation in just two clock cycles at a speed of 4 GHz, a feat made possible by implementing full parallelism. Notably, our design eliminates the need for ADCs or DACs. Furthermore, we propose an early termination strategy that exploits output sparsity, thereby reducing both computation time and energy consumption. * Secondly, in scenarios where the inclusion of ADCs in our design becomes necessary, we present a memory-immersed collaborative digitization technique. This method significantly reduces the area overheads typically associated with conventional ADCs, thus facilitating the efficient digitization of large data volumes. This technique not only optimizes the use of space but also enhances the system's overall efficiency by streamlining the conversion process. It ensures that even with the inclusion of ADCs, the system maintains high performance while handling substantial data volumes. Figure 2: **Architecture and operation flow for an analog acceleration of frequency-domain neural processing:** The operation consists of four steps: (1) precharging bit lines (BL/BLB) and applying input, (2) enabling parallel local computations in O and OB, (3) activating row-merge to connect all cells row-wise and summing O/OB in sum lines (SL/SLB), and (4) comparing SL/SLB values and applying soft thresholding for accurate output generation. Section II provides the essential background information required to understand the techniques proposed in this paper. Section III delves into the proposed design for a CMOS-based in-memory frequency-domain transform. In Section IV, we elaborate on the proposed memory-immersed collaborative digitization technique. Section V is dedicated to discussing the sustainability aspects of our design. Finally, Section VI concludes the paper, summarizing the key points and findings. ## II Background and Related Works ### _Walsh-Hadamard Transform (WHT)_ The Walsh-Hadamard Transform (WHT) is akin to the Fast Fourier Transform (FFT) as both can transmute convolutions in the time or spatial domain into multiplications in the frequency domain. A distinguishing feature of WHT is that its transform matrix consists solely of binary values (-1 and 1), eliminating the need for multiplications, thereby enhancing efficiency. Given \(X,Y\in\mathbf{R}^{m}\) as the vectors in the time domain and WHT domain, respectively, where \(m\) is an integer power of 2 (\(2^{k},k\in\mathbf{N}\)), the WHT can be expressed as: \[Y=W_{k}X \tag{1}\] Here, \(W_{k}\) is a \(2^{k}\times 2^{k}\) Walsh matrix. The Hadamard matrix \(H_{k}\) for WHT of \(X\) is constructed as follows: \[H_{k}=\left\{\begin{array}{ll}1,&k=0\\ \begin{bmatrix}H_{k-1}&H_{k-1}\\ H_{k-1}&-H_{k-1}\end{bmatrix},&k>0\end{array}\right. \tag{2}\] The Hadamard matrix is then rearranged so that the number of sign changes per row increases, resulting in the Walsh matrix. This matrix exhibits the unique property of orthogonality: every row is orthogonal to every other row, with the dot product of any two distinct rows being zero. 
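As a concrete illustration of Eqs. (1) and (2), the short NumPy sketch below builds \(H_{k}\) with the recursive construction, reorders its rows by the number of sign changes to obtain a Walsh matrix \(W_{k}\), applies the transform to a length-\(2^{k}\) vector, and checks the row-orthogonality property numerically. It is only a reference-level sketch of the textbook transform; the function names are illustrative and do not correspond to any released implementation of this work.

```python
import numpy as np

def hadamard(k: int) -> np.ndarray:
    """Recursive construction of the 2^k x 2^k Hadamard matrix H_k, as in Eq. (2)."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def walsh(k: int) -> np.ndarray:
    """Reorder the rows of H_k by increasing number of sign changes to obtain W_k."""
    H = hadamard(k)
    sign_changes = np.count_nonzero(np.diff(H, axis=1), axis=1)
    return H[np.argsort(sign_changes)]

def wht(x: np.ndarray) -> np.ndarray:
    """Walsh-Hadamard transform Y = W_k X of Eq. (1); W_k has only +/-1 entries,
    so the product needs additions and subtractions but no multiplications."""
    k = int(np.log2(len(x)))
    assert 2 ** k == len(x), "input length must be a power of two"
    return walsh(k) @ x

W = walsh(3)
# Rows are mutually orthogonal: W @ W.T equals 2^k times the identity matrix.
assert np.array_equal(W @ W.T, 8 * np.eye(8))
print(wht(np.arange(8, dtype=float)))
```

The dense matrix product above is written for clarity and costs \(O(m^{2})\) operations; a butterfly-style fast WHT reduces this to \(O(m\log m)\).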
This property makes the Walsh matrix particularly advantageous in a wide range of signal and image processing applications [30]. However, WHT poses a computational challenge when the dimension of the input vector is not a power of two. To address this, a technique called blockwise Walsh-Hadamard transforms (BWHT) was introduced [31]. BWHT divides the transform matrix into multiple blocks, each sized to an integer power of two, significantly reducing the worst-case size of operating tensors and mitigating excessive zero-padding.

### _Frequency-Domain Compression of Deep Neural Networks_

Frequency-domain transformations, such as BWHT, can be incorporated into deep learning architectures for model compression. For instance, in MobileNetV2, which uses \(1\times 1\) convolutions in its bottleneck layers to reduce computational complexity, BWHT can replace these convolution layers, achieving similar accuracy with fewer parameters. Unlike \(1\times 1\) convolution layers, BWHT-based binary layers use fixed Walsh-Hadamard matrices, eliminating trainable parameters. Instead, a soft-thresholding activation function with a trainable parameter \(T\) can be used to selectively attend to important frequency bands. The activation function \(S_{T}\) for this frequency-domain compression is given as \[y=S_{T}(x)=sign(x)\max(|x|-T,0)=\left\{\begin{array}{ll}x+T,&x<-T\\ 0,&|x|\leq T\\ x-T,&x>T\end{array}\right. \tag{3}\] Similarly, in ResNet20, \(1\times 1\) convolutions can be replaced with BWHT layers. These transformations maintain matching accuracy while achieving significant compression on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet [31]. However, BWHT transforms also increase the necessary computations for deep networks. The subsequent section discusses how micro-architectures and circuits can enhance the computational efficiency of BWHT-based tensor transformations.

## III Analog-Domain Frequency Transforms

The escalating data deluge in the digital world has brought analog domain processing to the forefront as a potential solution for accelerating vector and matrix-level parallel instructions, particularly for low-precision computations such as matrix-vector products in DNNs. Analog computations, by their nature, simplify processing cells and allow for the integration of storage and computation within a single cell. This feature is prevalent in many compute-in-memory designs. This integration significantly mitigates the data deluge by reducing data movement during deep learning computations, a critical bottleneck for energy and performance in traditional processors [32].

Figure 3: **Timing diagram of signal flows:** Waveforms of key signals in the CIM operation, including clock signal (CLK), precharge signal (PCH), bit lines (BL/BLB), column lines (CL/CLB), column-merge signal (CM), row lines (RL), row-merge signal (RM), sum lines (SL/SLB), and Operating points (O/OB). The four-step CIM operation is completed in just two clock cycles (**4 GHz**), accelerating processing times and enhancing performance. The compact cell design, comprised solely of NMOS transistors, ensures efficient use of space. Boosting techniques are applied to CM and RM signals to eliminate the impact of threshold voltage, further optimizing the operation of our CIM design.

However, analog domain processing does not resolve the data deluge challenge entirely due to its reliance on ADCs and DACs for domain conversions of operands.
The ADC and DAC operations introduce design complexities, significant area/power overheads, and limitations in technology scalability, all contributing to the data deluge. Furthermore, the performance of ADCs and DACs is constrained by speed, power consumption, and cost, further limiting the overall capabilities of analog domain computations. In the subsequent discussion, we present our proposed techniques for analog domain processing of frequency operations. These aim to alleviate the data deluge by eliminating the need for ADC/DAC conversions, even when operating on digital input vectors and computing output vectors in the digital domain. Our approach incorporates bitplane-wise input vector processing and co-designs learning methods that can operate accurately under extreme quantization, enabling ADC/DAC-free operations. ### _Crossbar Microarchitecture Design and Operation_ In Fig. 2, we propose a design that leverages analog computations for frequency domain processing of neural networks to address the data deluge. The design's crossbar combines six transistors (6T) NMOS-based cells for analog-domain frequency transformation. The corresponding cells for '-1' and '1' entries in the Walsh-Hadamard transform matrix are shown to the figure's right. The crossbar combines these cells according to the elements in the transform matrix. Since the transform matrix is parameter-free, computing cells in the proposed design are simpler by being based only on NMOS for a lower area than conventional 6T or 8T SRAM-based compute-in-memory designs. Additionally, processing cells in the proposed crossbar are _stitchable_ along rows and columns to enable perfect parallelism and extreme throughput, mitigating the data deluge further. The operation of the crossbar comprises four steps, which are marked in Fig. 2. These steps are designed to minimize data movement and thus address the data deluge. The steps include precharging, independent local computations, row-wise summing, and single-bit output generation. Fig. 3 shows the signal flow diagram for the above four steps. The 16 nm predictive technology models (PTM) simulation results are shown using the low standby power (LSTP) library in [33] and operating the system at 4 GHz clock frequency and VDD = 0.85V. Row-merge and column-merge signals in the design are boosted at 1.25V to avoid source degeneration. Unlike comparable compute-in-memory designs such as [12], which place product computations on bit lines, in our design, these computations are placed on local nodes in parallel at all array cells. This improves parallelism, energy efficiency, and performance by computing on significantly less capacitive local nodes than bit lines in traditional designs, thereby further addressing the data deluge. ### _ADC-Free by Training against 1-bit Quantization_ Fig. 4 presents the high-level operation flow of our scheme for processing multi-bit digital input vectors and generating the corresponding multi-bit digital output vector using the analog crossbar illustrated in Fig. 2. This scheme is designed to address the data deluge by eliminating the need for ADC and DAC operations. The scheme utilizes bitplane-wise processing of the multi-bit digital input vector and is trained to operate effectively with extreme quantization. In the figure, the input vector's elements with the same significance bits are grouped and processed in a single step using the scheme described in Fig. 2, which spans two clock cycles. 
The analog charge-represented output is computed along the row-wise charge sum lines and thresholded to generate the corresponding digital bits. This extreme quantization approach is applied to the computed MAC output, eliminating the need for ADCs. With multiple input bitplanes processed in this manner, the entire computation proceeds without needing ADCs or DACs, thereby further addressing the data deluge. The following training methodology achieves this. Consider the frequency-domain processing of an input vector \(\mathbf{x_{i}}\). In Fig. 4, we process \(\mathbf{x_{i}}\) by transforming it to the frequency domain, followed by parameterized thresholding, and then reverting the output to the spatial domain. Consider a DNN with \(n\) layers that chain the above sequence of operations as \(\mathbf{x_{i+1}}=F_{0}(S_{T,i}(F_{0}(\mathbf{x_{i}})))\). Here, \(F_{0}()\) is a parameter-free _approximate_ frequency transformation as followed in our scheme in Fig. 4. \(S_{T,i}()\) is a parameterized thresholding function at the i\({}^{th}\) layer whose parameters \(T_{i}\) are learned from the training data.

### _Early Termination for Energy Efficiency_

To further mitigate the data deluge, we introduce an early termination mechanism in our design. This mechanism is based on the observation that the contribution of higher bitplanes in the input vector to the final output is significantly less than the lower bitplanes due to the binary representation of numbers. Therefore, we can terminate the computation early if the higher bitplanes do not significantly affect the final output, thus saving energy and reducing data movement. We illustrate the early termination mechanism in Fig. 6. The mechanism involves a thresholding operation that compares the partial sum of the output bitplanes with a predefined threshold. If the partial sum is less than the threshold, the computation is terminated early, and the remaining higher bitplanes are ignored. This mechanism reduces the number of computations and data movement, thereby mitigating the data deluge. The threshold for early termination is a design parameter that can be tuned based on the accuracy-energy trade-off. A lower threshold leads to more frequent early terminations, saving more energy but potentially reducing the accuracy, and vice versa. Therefore, the threshold should be carefully chosen to balance the trade-off between energy efficiency and accuracy. Our proposed techniques for analog domain processing of frequency operations, including bitplane-wise input vector processing, ADC/DAC-free operations, and early termination mechanism, effectively address the data deluge by reducing data movement and computational complexity. These techniques also maintain high accuracy and energy efficiency, making them promising for future low-precision computations in deep neural networks. Fig. 7 shows our simulation results supporting this claim.

Figure 6: **Early Termination Technique:** This figure illustrates the impact of the early termination technique on the distribution of the soft-thresholding parameter (T). It shows how applying a unique loss function drives the T parameter towards (-1) and (1), aiding in workload reduction. The figure also presents a scenario where the value, after processing through three bit-planes, falls within the range of (-T) and (T), indicating a zero output and eliminating the need for further processing of the remaining input bits. The effectiveness of early termination techniques in reducing workload, maintaining accuracy, and their independence from the level of weight quantization are also demonstrated.
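A minimal software sketch of the bitplane-wise, thresholded flow with early termination is given below. It is a behavioral illustration under simplifying assumptions (the array size, bit width, threshold value, and the random stand-in \(\pm 1\) matrix are made up for the example), not a model of the analog circuit.

```python
import numpy as np

def soft_threshold(v, T):
    # Eq. (3): zero inside [-T, T], values shrunk toward zero outside it
    return np.sign(v) * np.maximum(np.abs(v) - T, 0.0)

def bitplane_layer_with_early_termination(x, W, T, n_bits=4):
    """Behavioral sketch, not a circuit model: x holds unsigned n_bits integers,
    W is a +/-1 transform matrix. One one-bit input plane is processed per step;
    following the rule of Fig. 6, an output whose accumulated value stays inside
    (-T, T) is treated as zero and its remaining bitplanes are skipped."""
    acc = np.zeros(W.shape[0])
    active = np.ones(W.shape[0], dtype=bool)           # outputs still being accumulated
    for b in reversed(range(n_bits)):
        plane = (x >> b) & 1                           # one-bit input plane (no DAC needed)
        acc[active] += (2 ** b) * (W[active] @ plane)  # computed as a charge-domain MAV in hardware
        active &= np.abs(acc) > T                      # early termination for near-zero outputs
    return soft_threshold(acc, T)

x = np.random.randint(0, 16, size=32)                  # hypothetical 4-bit input vector
W = np.where(np.random.rand(32, 32) < 0.5, -1.0, 1.0)  # stand-in +/-1 matrix (not a true Walsh matrix)
print(bitplane_layer_with_early_termination(x, W, T=8.0))
```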
Figure 7: **Performance analysis of proposed CIM architecture:** **(a)** Examination of the supply voltage (VDD) impact on power consumption and accuracy, emphasizing the marked increase in power consumption at 1.3 volts. In this case, the clock frequency is 1 (GHz), and the memory size is \(32\times 32\) (bits). **(b)** Evaluation of the CIM architecture’s accuracy and power consumption across different memory array sizes (\(16\times 16,32\times 32,64\times 64\), and \(128\times 128\) (bits)), demonstrating the persistently high accuracy attributable to the highly parallel design. The supply voltage and clock frequency are set at 1 (V) and 1 (GHz). **(c)** Investigation of power consumption and accuracy trends concerning clock frequency, revealing that beyond 2.5 (GHz), the average power consumption escalates significantly, thus restricting the overall performance of the circuit. The supply voltage and memory size are 1 (V) and \(32\times 32\) (bits).

## IV Memory-Immersed Collaborative Digitization

### _Integrating CiM Arrays for Collaborative Digitization_

Fig. 1(b) illustrates the implementation of memory-integrated ADC. We specifically focus on our methods and results for eight-transistor (8T) compute-in-SRAM arrays, widely utilized in numerous platforms. Unlike 6T compute-in-SRAM, 8T cells are less prone to bit disturbances due to separate write and inference ports, making them more suitable for technology scaling. However, our suggestions for memory-integrated ADC apply to other memory types, including 10-T compute-in-SRAM/eDRAM and non-volatile memory crossbars. In the proposed model, two adjacent CiM arrays work together for in-memory digitization, as depicted in Figure 8(a). When the left array calculates the input-weight scalar product, the right array performs SRAM-integrated digitization on the produced analog-mode multiply-average (MAV) outputs. Both arrays then switch their operating modes. Each array consists of 8-T cells, combining standard 6-T SRAM cells with a two-transistor weight-input product port shown in the figure. Memory cells for input-weight products are accessed using three control signals. For in-memory digitization of charge-domain product-sum computed in the left array, column lines (CLs) in the right array realize the unit capacitors of a capacitive DAC formed within the memory array. A precharge transistor array is integrated with the column lines to generate the reference voltages. The first reference voltage is generated by summing the charges of all column lines. The developed MAV voltage in the left CiM array is compared to the first reference voltage to determine the most significant bit of the digitized output. The next precharge state of memory-immersed capacitive DAC is determined, and the precharge and comparison cycles continue until the MAV voltage has been digitized to the necessary precision. Using neighboring CiM arrays for the first reference voltage generation for in-memory digitization offers several key advantages. First, various non-idealities in analog-mode MAV computation become common-mode due to using an identical array for the first reference voltage generation. Thus, the non-idealities only minimally impact the accuracy of digitization. Second, collaborative digitization minimizes peripheral overheads.
Only an analog comparator and simple modification in the precharge array are sufficient to realize a successive approximation search. Compared to traditional CiM approaches with a dedicated ADC at each array, our scheme's interleaving of scalar product computation and digitization cycles affects the achievable throughput. However, with simplified low-area peripherals, more CiM arrays can be accommodated than prior works employing dedicated ADCs. Therefore, our scheme compensates for the overall throughput at the system level by operating many parallel CiM arrays. This improved area efficiency of CiM arrays in our scheme minimizes the necessary exchanges from off-chip DRAMs to on-chip structures in mapping large DNN layers, a significant energy overhead in conventional techniques.

Figure 8: **Architecture and waveforms of SRAM-immersed ADC:** **(a)** Coupling of left-right memory arrays for memory-immersed digitization. When the left array computes within-memory scalar product, the right array digitizes analog-domain computed output. Both arrays switch their operation subsequently for collaborative digitization. **(b)** Clocked comparator design combining n-type and p-type counterparts for rail-to-rail voltage comparison. **(c)** Transient waveforms.

Figure 9: **Hybrid mode of SRAM-immersed ADC: A dot product-configured CiM array is coupled to many ADC-configured arrays to the right for flash mode digitization of the initial most significant bits. After this, each left-array couples to the nearest right array to determine the remaining bits in SAR mode. Operational cycles are shown at the bottom.**

### _Hybrid SRAM-Integrated Flash and SAR ADC Operation_

In addition to the nearest neighbor networking in Figure 8, more complex CiM networks can also be orchestrated for more time-efficient collaborative digitization in Flash and/or hybrid SAR + Flash mode. Figure 9 shows an example networking scheme where Array-1 couples with three memory arrays to the right for collaborative digitization in Flash mode. Here, the three right arrays simultaneously generate the respective reference voltages for the Flash mode of digitization, determining the first two most significant bits in one comparison cycle.

### _Leveraging MAV Statistics for ADC's Time-Efficiency_

The hybrid scheme for data conversion further benefits from exploiting the statistics of the multiply average (MAV) computed by the CiM. In many CiM schemes, the computed MAV is not necessarily uniformly distributed. For example, in bit-plane-wise CiM processing, DACs are avoided by processing a one-bit input plane in one time step. This results in a skewed distribution of MAV, Fig. 10(a). The skewed distribution of MAV can be leveraged by implementing an asymmetric binary search for digitization, Fig. 10(b). Under the asymmetric search, the average number of comparisons reduces, thus proportionally reducing the energy and latency for the operation, Fig. 10(c). The proposed hybrid digitization scheme further exploits the asymmetric search, which can be accelerated by Flash digitization mode.

### _Test-Chip Design, Measurement Results, and Comparison to Traditional ADC_

The proposed SRAM-integrated ADC was characterized on a 65 nm CMOS test chip. The fabricated chip's micrograph and measurement setup are shown in Figures 11(a, b). Four compute-in-SRAM arrays of size 16\(\times\)32 were implemented.
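Before turning to the measured hybrid operation, a small algorithm-level sketch of how the Flash and SAR steps combine into a 5-bit code is given below. It models only the search logic on an idealized, normalized value; the bit widths and the split between Flash and SAR bits are example assumptions, not a description of the circuit.

```python
def hybrid_flash_sar_adc(v, n_bits=5, flash_bits=2):
    """Digitize v in [0, 1) to n_bits: flash_bits resolved in one parallel
    comparison step, the rest by successive approximation (one comparison per bit)."""
    assert 0.0 <= v < 1.0
    # Flash step: compare against 2^flash_bits - 1 evenly spaced references at once
    levels = 2 ** flash_bits
    code = sum(v >= (i + 1) / levels for i in range(levels - 1))   # thermometer -> binary
    lo, hi = code / levels, (code + 1) / levels
    # SAR steps: binary search inside the segment selected by the Flash step
    for _ in range(n_bits - flash_bits):
        mid = (lo + hi) / 2
        code = 2 * code + (1 if v >= mid else 0)
        if v >= mid:
            lo = mid
        else:
            hi = mid
    return code                                                     # integer in [0, 2^n_bits)

print(hybrid_flash_sar_adc(0.63))   # 0.63 * 32 = 20.16 -> code 20
```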
The coupling of CiM arrays can also be programmed to realize hybrid Flash-SAR ADC operations, such as obtaining the two most significant bits in Flash mode and the remaining in SAR. Figure 11(c) shows the transient waveforms of different control signals and comparator outputs, showing a hybrid Flash + SAR ADC operation. Flash mode is activated in the first comparison cycle where CiM arrays generate the corresponding reference voltages, and the first two bits of MAV digitization are extracted. Subsequently, the operation switches to SAR mode, where the remaining digitization bits are obtained by engaging one array alone with another. In the last four cycles, other arrays become free to similarly operate on a proximal CiM array to digitize MAV in SAR mode. Figure 12(a) shows the measured staircase plot of input voltage to output codes and the comparison to an ideal staircase, demonstrating near-ideal performance. This suggests that the proposed SRAM-integrated ADC operates effectively, with the hybrid Flash + SAR ADC operation providing a flexible and efficient approach to digitization within the memory array. Fig. 13 and Table I show the design space exploration of the proposed memory-immersed ADC compared to other ADC styles. In Fig. 13(a), leveraging in-memory structures for capacitive DAC formation, the proposed in-memory ADC is more area efficient than Flash and SAR styles. Significantly, Flash ADC's size increases exponentially with increasing bit precision. In Fig. 13(b), SAR ADC's latency increases with bit precision while Flash ADC can maintain a consistent latency but at the cost of increasing area as shown in Fig. 13(a). A hybrid data conversion in the proposed in-memory ADC provides a middle ground, i.e., lower latency than in SAR ADC. Figs. 13(c-d) show the impact of supply voltage and frequency scaling on in-memory ADC's power and accuracy for MNIST character recognition. In conclusion, the proposed method of integrating compute-in-memory (CiM) arrays for collaborative digitization offers a promising approach to handling the data deluge in modern computing systems. By leveraging the unique properties of 8T compute-in-SRAM arrays, the method provides a scalable and efficient solution for in-memory digitization. The hybrid SRAM-integrated Flash and SAR ADC operation further enhances time efficiency, while the exploitation of MAV statistics contributes to the ADC's time efficiency. The successful implementation and characterization of the proposed SRAM-integrated ADC on a 65 nm CMOS test chip further validate the effectiveness of this approach.

## V Sustainability

The proposed CIM architecture and the associated techniques significantly contribute to sustainability in deep learning applications. The key to this sustainability lies in the efficient use of resources and the reduction of energy consumption.

\begin{table} \begin{tabular}{l l l l} \hline **Architecture** & Tech. & Area & Energy \\ & & (\(\mu\)m\({}^{2}\)) & (pJ) \\ \hline \hline **SAR**[34] & 40 nm & 5235.20 & 105 \\ \hline **Flash**[34] & 40 nm & 10703.36 & 952 \\ \hline **In-Memory** (ours) & 65 nm & 207.8 & 74.23 \\ \hline \end{tabular} \end{table} Table I: Comparison of 5-bit in-memory ADC with 10 MHz clock against SAR and Flash architectures

Figure 10: **Exploiting MAV statistics for ADC’s time-efficiency: (a) Distribution of MAV under the uniform distribution of input and weight bits for CiM scheme in Figure 2. (b) Asymmetric binary search for skewed MAV statistics. (c) For 5-bit data conversion, asymmetric search requires on average \(\sim\)3.7 comparisons, unlike symmetric binary search that requires \(\sim\)5 comparisons.**
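To illustrate how a skewed MAV distribution can reduce the average number of comparisons, the sketch below builds one simple asymmetric search (thresholds chosen to balance the remaining probability mass) and compares its expected comparison count with the fixed \(\log_2 N\) of a symmetric binary search. The skewed code distribution used here is a made-up example, not measured data, and the construction is illustrative rather than the exact search of Fig. 10.

```python
import math

def expected_comparisons(probs):
    """Expected comparisons of a binary search whose thresholds are chosen to
    split the remaining probability mass as evenly as possible."""
    n, total = len(probs), sum(probs)
    if n <= 1 or total == 0.0:
        return 0.0
    k = min(range(1, n), key=lambda i: abs(sum(probs[:i]) - total / 2))
    left, right = probs[:k], probs[k:]
    return 1.0 + (sum(left) * expected_comparisons(left) +
                  sum(right) * expected_comparisons(right)) / total

N = 32                                    # 5-bit output codes
skewed = [0.7 ** i for i in range(N)]     # hypothetical skewed MAV statistics
skewed = [p / sum(skewed) for p in skewed]
uniform = [1.0 / N] * N

print("symmetric search:", math.log2(N))                             # always 5 comparisons
print("asymmetric search, uniform MAV:", expected_comparisons(uniform))
print("asymmetric search, skewed MAV :", expected_comparisons(skewed))
```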
Firstly, the architecture leverages BWHT and soft-thresholding techniques to compress deep neural networks. This compression reduces the computational resources required, leading to more efficient use of hardware. By reducing the number of parameters in the BWHT layer, the architecture minimizes the memory footprint of deep learning models, reducing the energy required for data storage and retrieval. The early termination strategy enhances energy efficiency by leveraging output sparsity to reduce computation time. Secondly, the memory-immersed collaborative digitization among CiM arrays minimizes the area overheads of ADCs for deep learning inference. This allows significantly more CiM arrays to be accommodated within limited footprint designs, improving parallelism and minimizing external memory accesses. The results demonstrate the potential of the proposed techniques for area-efficient and energy-efficient deep learning applications. Lastly, the proposed techniques and architectures are designed to be robust and resilient, reducing the need for frequent hardware replacements or upgrades. This longevity contributes to sustainability by reducing electronic waste and the environmental impact associated with the production and disposal of hardware.

## VI Conclusions

The proposed frequency-domain CIM architecture with early termination technique and memory-immersed collaborative digitization present a comprehensive solution for sustainable and efficient deep learning applications. This leads to more efficient use of hardware and reduces the energy required for data storage and retrieval. The memory-immersed collaborative digitization among CiM arrays minimizes the area overheads of a conventional ADC for deep learning inference. This allows significantly more CiM arrays to be accommodated within limited footprint designs, improving parallelism and minimizing external memory accesses. The results demonstrate the potential of the proposed techniques for area-efficient and energy-efficient deep learning applications. These techniques contribute significantly to sustainability in deep learning applications. By improving area efficiency in deep learning inference tasks and reducing energy consumption, these techniques contribute to the sustainability of data processing at the edge. This approach enables better handling of high-dimensional, multispectral analog data. It helps alleviate the challenges the analog data deluge poses, making it a promising solution for sustainable data processing at the edge. In conclusion, the proposed techniques and architectures pave the way for the next generation of deep learning applications, particularly in scenarios where area and power resources are limited.

## VII Acknowledgment

This work was supported by COGNISENSE, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA.

Figure 11: **Test-chip and measurements:** **(a)** Micrograph of fabricated design in 65 nm CMOS. Four compute-in-SRAM arrays, A1\(-\)A4 were fabricated. A1 interfaces with A2 to realize SRAM-immersed SAR ADC. A1 interfaces with A1\(-\)A4 to realize SRAM-immersed Flash ADC. **(b)** Measurement setup. **(c)** Measurement transient waveforms for hybrid SAR + Flash ADC operation.

Figure 12: **Measured non-idealities of SRAM-immersed ADC:** (a) Output code vs. applied input voltage. (b) Differential and (c) integrated non-linearities.
2301.00767
A Survey on Federated Recommendation Systems
Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models only collecting the intermediate parameters instead of the real user data, which greatly enhances the user privacy. Beside, federated recommendation systems enable to collaborate with other data platforms to improve recommended model performance while meeting the regulation and privacy constraints. However, federated recommendation systems faces many new challenges such as privacy, security, heterogeneity and communication costs. While significant research has been conducted in these areas, gaps in the surveying literature still exist. In this survey, we-(1) summarize some common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review some robust aggregation strategies and several novel attacks against security; (3) summarize some approaches to address heterogeneity and communication costs problems; (4)introduce some open source platforms that can be used to build federated recommendation systems; (5) present some prospective research directions in the future. This survey can guide researchers and practitioners understand the research progress in these areas.
Zehua Sun, Yonghui Xu, Yong Liu, Wei He, Lanju Kong, Fangzhao Wu, Yali Jiang, Lizhen Cui
2022-12-27T08:09:45Z
http://arxiv.org/abs/2301.00767v2
# A Survey on Federated Recommendation Systems

###### Abstract

Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models by collecting the intermediate parameters instead of the real user data, which greatly enhances user privacy. Besides, federated recommendation systems can cooperate with other data platforms to improve recommendation performance while meeting the regulation and privacy constraints. However, federated recommendation systems face many new challenges such as privacy, security, heterogeneity and communication costs. While significant research has been conducted in these areas, gaps in the surveying literature still exist. In this survey, we--(1) summarize some common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review several novel attacks and defenses against security; (3) summarize some approaches to address heterogeneity and communication costs problems; (4) introduce some realistic applications and public benchmark datasets for federated recommendation systems; (5) present some prospective research directions in the future. This survey can help researchers and practitioners understand the research progress in these areas.

Recommendation Systems, Federated Learning, Privacy, Security, Heterogeneity, Communication Costs.

## I Introduction

In recent years, recommendation systems have been widely used to model user interests so as to solve information overload problems in many real-world fields, e.g., e-commerce [1][2], news [3][4] and healthcare [5][6]. To further improve the recommendation performance, such systems usually collect as much data as possible, including a lot of private information about users, such as user attributes, user behaviors, social relations, and context information. Although these recommendation systems have achieved remarkable results in accuracy, most of them require a central server to store collected user data, which poses potential privacy leakage risks because user data could be sold to a third party without user consent, or stolen by motivated attackers. In addition, due to privacy concerns and regulatory restrictions, it becomes more difficult to integrate data from other platforms to improve recommendation performance. For example, regulations such as the General Data Protection Regulation (GDPR) [7] set strict rules on collecting user data and sharing data between different platforms, which may lead to insufficient data for recommendation systems and further affect recommendation performance. Federated learning is a privacy-preserving distributed learning scheme proposed by Google [8], which enables participants to collaboratively train a machine learning model by sharing intermediate parameters (e.g., model parameters, gradients) instead of their real data. Therefore, combining federated learning with recommendation systems becomes a promising solution for privacy-preserving recommendation systems. In this paper, we term it federated recommendation system (FedRS).

### _Challenges_

While FedRS avoids direct exposure of real user data and provides a privacy-aware paradigm for model training, there are still some core challenges that need to be addressed.

**Challenge 1: Privacy concerns for users.** Privacy protection is often the major goal of FedRS.
In FedRS, each participant jointly trains a global recommendation model by sharing intermediate parameters instead of their real user-item interaction data, which makes an important step towards privacy-preserving recommendation systems. However, a curious server can still infer user ratings and user interaction behaviors from the uploaded intermediate parameters [9][10]. Besides, FedRS also faces the risk of privacy leakage when integrating auxiliary information (e.g., social features) to improve recommendation performance.

**Challenge 2: Security attacks on FedRS.** In federated recommendation scenarios, participants may be malicious, and they can poison their local training samples or uploaded intermediate parameters to attack the security of FedRS. They can increase the exposure of specific products for profit [11], or destroy the overall recommendation performance of competing companies [12]. To ensure the fairness and performance of recommendations, FedRS must have the ability to detect and defend against poisoning attacks from participants.

**Challenge 3: Heterogeneity in FedRS.** FedRS also faces the problem of system heterogeneity, statistical heterogeneity and privacy heterogeneity during the collaborative training by multiple clients. When training recommendation models locally, due to the difference in storage, computing and communication capabilities, the clients with limited capabilities may become stragglers and further affect the training efficiency. Besides, data (e.g., user attributes, ratings and interaction behavior) in different clients is usually not independent and identically distributed (Non-IID), and training a consistent global recommendation model for all users cannot achieve personalized recommendation results. Moreover, in realistic applications, users often have different privacy needs and adopt different privacy settings. So simply using the same privacy budget for all users will bring unnecessary loss of recommendation accuracy and efficiency.

**Challenge 4: Communication costs during FedRS model training and inference.** To achieve satisfactory recommendation performance, clients need to communicate with the central server for multiple rounds. However, real-world recommendation systems are usually built on complex deep learning models and millions of intermediate parameters need to be communicated [13]. In addition, clients must receive a large amount of item data from the server to generate recommendation results locally. Therefore, clients may be unable to afford the severe communication costs, which greatly limits the application of FedRS in large-scale recommendation scenarios.

### _Related Surveys_

There are many surveys that have focused on recommendation systems or federated learning. For example, Adomavicius \(et\ al.\)[14] provide a detailed categorization of recommendation methods and introduce various limitations of each method. Yang \(et\ al.\)[15] give the definition of federated learning and discuss its architectures and applications. Li \(et\ al.\)[16] summarize the unique characteristics and challenges of federated learning. Besides, there are also some surveys on the privacy and security of federated learning. For example, Viraaji \(et\ al.\)[17] identify and evaluate the privacy threats and security vulnerabilities in federated learning. Lyu \(et\ al.\)[18] comprehensively explore the assumptions, reasons, principles and differences of the current attacks and defenses in the privacy and robustness fields of federated learning.
However, the existing surveys usually treat recommendation systems and federated learning separately, and few works have surveyed specific problems in FedRS [19]. Yang \(et\ al.\)[19] categorize FedRS from the aspect of federated learning and discuss the algorithm-level and system-level challenges for FedRS. However, they do not provide comprehensive methods to address privacy, security, heterogeneity, and communication costs challenges.

### _Our Contribution_

Compared with the previous surveys, this paper makes the following contributions: Firstly, we provide a comprehensive overview of FedRS from the perspectives of definition, communication architectures and categorization. Secondly, we summarize the state-of-the-art studies of FedRS in terms of privacy, security, heterogeneity and communication costs. Thirdly, we introduce some applications and public benchmark datasets for FedRS. Fourthly, we discuss the promising future directions for FedRS. The rest of the paper is organized as follows: Section II discusses the overview of FedRS. Section III-Section VI summarize the state-of-the-art studies of FedRS from the aspects of privacy, security, heterogeneity and communication costs. Section VII introduces the applications and public benchmark datasets for FedRS. Section VIII presents some prospective research directions. Finally, Section IX concludes this survey.

## II Overview of Federated Recommendation Systems

### _Definition_

FedRS is a technology that provides recommendation services in a privacy-preserving way. To protect user privacy, the participants in FedRS collaboratively train the recommendation model by exchanging intermediate parameters instead of sharing their own real data. In the ideal case, the performance of the recommendation model trained in FedRS should be close to the performance of the recommendation model trained in the data-centralized setting, which can be formalized as: \[|V_{FED}-V_{SUM}|<\delta. \tag{1}\] where \(V_{FED}\) is the recommendation model performance in FedRS, \(V_{SUM}\) is the recommendation model performance in traditional recommendation systems for centralized data storage, and \(\delta\) is a small positive number.

Fig. 1: Communication architecture of FedRS.

### _Communication Architecture_

In FedRS, the data of participants is stored locally, and the intermediate parameters are communicated between the server and participants. There are two major communication architectures used in the study of FedRS, including client-server architecture and peer-peer architecture.

**Client-Server Architecture**. Client-server architecture is the most common communication architecture used in FedRS, as shown in Fig. 1(a), which relies on a trusted central server to perform initialization and model aggregation tasks. In each round, the server distributes the current global recommendation model to some selected clients. Then the selected clients use the received model and their own data for local training, and send the updated intermediate parameters (e.g., model parameters, gradients) to the server for global aggregation. The client-server architecture requires a central server to aggregate the intermediate parameters uploaded by the clients. Thus, once the server suffers a single point of failure, the entire training process will be seriously affected [20]. In addition, the curious server may infer the clients' privacy information through the intermediate parameters, leaving potential privacy concerns [9].
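A minimal sketch of one client-server training round is given below. It is an illustration under simplifying assumptions (a generic parameter vector, a toy local step, random client sampling, and plain unweighted averaging), not any specific FedRS system; in practice the uploaded parameters would additionally pass through one of the privacy mechanisms discussed in Section III.

```python
import random
import numpy as np

def server_round(global_params, clients, local_training, sample_size=10):
    """One client-server round: distribute the current global model, let each
    selected client train locally on its own data, and average the returned
    intermediate parameters into the new global model."""
    selected = random.sample(clients, min(sample_size, len(clients)))
    updates = [local_training(np.copy(global_params), c) for c in selected]
    return np.mean(updates, axis=0)            # simple unweighted aggregation

def toy_local_training(params, client_data):   # stand-in for real local training
    grad = params - client_data                # pretend gradient from the client's private data
    return params - 0.1 * grad                 # in practice, several local SGD steps

clients = [np.random.randn(8) for _ in range(100)]   # each entry: one client's private data
model = np.zeros(8)
for _ in range(20):                                   # communication rounds
    model = server_round(model, clients, toy_local_training)
```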
**Peer-Peer Architecture**. Considering the single point of failure problem for client-server architecture in FedRS, Hegedűs \(et\)\(al\). [21] design a peer-peer communication architecture with no central server involved in the communication process, which is shown in Fig. 1(b). During each communication round, each participant broadcasts the updated intermediate parameters to some random online neighbors in the peer to peer network, and aggregates received parameters into its own global model. In this architecture, the single point of failure and privacy issues associated with a central server can be avoided. However, the aggregation process occurs on each client, which greatly increases the communication and computation overhead for clients [22].

### _Categorization_

In FedRS, the participants are responsible for the local training process as the data owners. They can be different mobile devices or data platforms. Considering the unique properties of different participant types, FedRS usually have different application scenarios and designs. Besides, there are also some differences between different recommendation models in the federation process. Thus, we summarize the current FedRS and categorize them from the perspectives of participant type and recommendation model. Fig. 2 shows the summary of the categorization of FedRS.

#### Ii-C1 Participant Type

Based on the type of participants, FedRS can be categorized into cross-device FedRS and cross-platform FedRS.

**Cross-device FedRS**. In cross-device FedRS, different mobile devices are usually treated as participants [23][10]. The typical application of cross-device FedRS is to build a personal recommendation model for users without collecting their local data. In this way, users can enjoy recommendation services while protecting their private information. The number of participants in cross-device FedRS is relatively large and each participant keeps a small amount of data. Considering the limited computation and communication abilities of mobile devices, cross-device FedRS cannot handle very complex training tasks. Besides, due to power and network status, mobile devices may drop out of the training process. Thus, the major challenges for cross-device FedRS are how to improve the efficiency and deal with the straggler problem of devices during the training process.

**Cross-platform FedRS**. In cross-platform FedRS, different data platforms are usually treated as participants who want to collaborate to improve recommendation performance while meeting regulation and privacy constraints [24][25][26]. For example, to improve the recommendation performance, recommendation systems often integrate data from multiple platforms (e.g., e-commerce platforms, social platforms) [27]. However, due to privacy and regulation concerns, the different data platforms are often unable to directly share their data with each other. In this scenario, cross-platform FedRS can be used to collaboratively train recommendation models between different data platforms without directly exchanging their users' data. Compared to cross-device FedRS, the number of participants in cross-platform FedRS is relatively small, and each participant owns a relatively large amount of data. An important challenge for cross-platform FedRS is how to design a fair incentive mechanism to measure contributions and benefits of different data platforms.
Besides, it is hard to find a trusted server to manage the training process in cross-platform FedRS, so a peer to peer communication architecture can be a good choice in this case.

#### Ii-C2 Recommendation Model

According to the different recommendation models used in FedRS, FedRS can be categorized into matrix factorization based FedRS, deep learning based FedRS and meta learning based FedRS.

**Matrix factorization based FedRS**. Matrix factorization [28] is the most common model used in FedRS, which formulates the user-item interaction or rating matrix \(R\in\mathbb{R}^{N\times M}\) as the product of a user profile matrix \(U\in\mathbb{R}^{N\times K}\) and an item profile matrix \(V\in\mathbb{R}^{M\times K}\): \[R=UV^{T}. \tag{2}\] It then uses the learned model to recommend new items to users according to the predicted values. In matrix factorization model based FedRS, the user factor vectors are stored and updated locally on the clients, and only the item factor vectors [29] or the gradients of item factor vectors [23][10][9][30] are uploaded to the server for aggregation.

Fig. 2: Categorization of federated recommendation systems.

Matrix factorization model based FedRS can simply and effectively capture user tastes with the interaction and rating information between users and items. However, it still has many limitations such as sparsity (the number of known ratings is much smaller than the number of ratings to be predicted) and cold-start (new users and new items lack ratings) problems [14].

**Deep learning based FedRS**. To learn more complex representations of users and items and improve recommendation performance, deep learning technology has been widely used in recommendation systems. However, as privacy regulations get stricter, it becomes more difficult for recommendation systems to collect enough user data to build a high performance deep learning model. To make full use of user data while meeting privacy regulations, many effective deep learning model based FedRS have been proposed [31][32][33]. Considering different model structures, deep learning model based FedRS usually adopt different model update and intermediate parameter transmission processes. For example, Perifanis \(et\)\(al\). [31] propose a federated neural collaborative filtering (FedNCF) framework based on NCF [34]. In FedNCF, the clients locally update the network weights as well as the user and item profiles, then upload the item profile and network weights after masking to the server for aggregation. Wu \(et\)\(al\). [32] propose a federated graph neural network (FedGNN) framework based on GNN. In FedGNN, the clients locally train GNN models and update the user/item embeddings from their local sub-graph, then send the perturbed gradients of the GNN model and item embeddings to the central server for aggregation. Besides, Huang \(et\)\(al\). [35] propose a federated multi-view recommendation framework (FL-MV-DSSM) based on the Deep Structured Semantic Model (DSSM [36]). In FL-MV-DSSM, each view \(i\) locally trains the user and item sub-models based on their own user data and local shared item data, then sends the perturbed gradients of both user and item sub-models to the server for aggregation. Although deep learning model based FedRS achieve outstanding performance in terms of accuracy, the massive model parameters of deep learning models bring huge computation and communication overhead to the clients, which presents a serious challenge for real industrial recommendation scenarios.
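Referring back to the matrix factorization family above, the following sketch illustrates the division of labor in one client step: the user factor vector is updated and kept locally, and only the item factor gradients are returned as intermediate parameters. The squared-error objective, learning rate and sizes are assumptions for illustration, not the exact procedure of any cited work.

```python
import numpy as np

def local_mf_step(user_vec, item_mat, interactions, lr=0.01):
    """interactions: list of (item_index, rating) pairs held only on this client."""
    item_grads = np.zeros_like(item_mat)
    for i, r in interactions:
        err = r - user_vec @ item_mat[i]              # prediction error for r_ui
        item_grads[i] += -2 * err * user_vec          # gradient w.r.t. item factors (uploaded)
        user_vec += lr * 2 * err * item_mat[i]        # user factors never leave the device
    return user_vec, item_grads                        # only item_grads is sent to the server

K, M = 8, 50                                           # latent dimension, number of items
user_vec = np.random.randn(K) * 0.1
item_mat = np.random.randn(M, K) * 0.1
ratings = [(3, 4.0), (17, 2.0)]                        # this client's private ratings
user_vec, grads = local_mf_step(user_vec, item_mat, ratings)
```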
**Meta learning based FedRS**. Most of the existing federated recommendation studies are built on the assumption that the data distributed on each client is independent and identically distributed (IID). However, learning a unified federated recommendation model often performs poorly when handling Non-IID and highly personalized data on clients. Meta learning models can quickly adapt to new tasks while maintaining good generalization ability [37], which makes them particularly suitable for FedRS. In meta learning model based FedRS, the server aggregates the intermediate parameters uploaded by clients to learn a model parameter initialization, and the clients fine-tune the initialized model parameters in the local training phase to fit their local data [38][39]. In this way, meta learning model based FedRS can adapt to the clients' local data to provide more personalized recommendations. Although the performance of meta learning model based FedRS is generally better than that of learning a unified global model, private information leakage can still occur during the learning process of model parameter initialization [38].

## III Privacy of Federated Recommendation Systems

In the model training process of FedRS, the user data is stored locally and only the intermediate parameters are uploaded to a server, which can further protect user privacy while keeping recommendation performance. Nevertheless, several research works show that the central server can still infer some sensitive information based on intermediate parameters. For example, a curious server can identify items the user has interacted with according to the non-zero gradients sent by the client [32]. Besides, the server can also infer the user ratings as long as it obtains the gradients uploaded by a user in two consecutive rounds [9]. To further protect the privacy of FedRS, many studies have incorporated other privacy protection mechanisms into FedRS, including pseudo items, homomorphic encryption, secret sharing and differential privacy. This section introduces the application of each privacy mechanism used in FedRS, and compares their advantages and limitations.

### _Pseudo Items_

To prevent the server from inferring the set of items that users have interacted with based on non-zero gradients, some studies utilize pseudo items to protect user interaction behaviors in FedRS. The key idea of pseudo items is that the clients not only upload gradients of items that have been interacted with but also upload gradients of some sampled items that have not been interacted with. For example, Lin \(et\)\(al\). [10] propose a federated recommendation framework for the explicit feedback scenario named FedRec, in which they design an effective hybrid filling strategy to generate virtual ratings of unrated items by the following equation: \[r^{\prime}_{ui}=\left\{\begin{aligned} \frac{\sum_{k=1}^{m}y_{uk}r_{uk}}{ \sum_{k=1}^{m}y_{uk}},& t<T_{predict}\\ \hat{r}_{ui},& t\geq T_{predict}\end{aligned}\right. \tag{3}\] where \(t\) denotes the current training iteration number, and \(T_{predict}\) denotes the iteration number at which the virtual rating for a sampled item \(i\) switches from the average value to the predicted value. However, the hybrid filling strategy in FedRec introduces extra noise to the recommendation model, which inevitably affects the model performance. To tackle this problem, Feng \(et\)\(al\). [40] design a lossless version of FedRec named FedRec++.
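Before turning to FedRec++, a minimal sketch of the hybrid filling rule in Eq. (3) is given below: sampled unrated items receive the user's average rating in early iterations and the local model's predicted rating afterwards, so that gradients are also uploaded for items the user never interacted with. The sampling size, switch point and the stand-in prediction function are made-up values for illustration.

```python
import random

def hybrid_fill(rated, predict, t, T_predict, n_samples, all_items):
    """rated: {item: rating} on this client; predict(i): local model prediction.
    Returns virtual ratings for sampled unrated items following Eq. (3)."""
    unrated = [i for i in all_items if i not in rated]
    sampled = random.sample(unrated, min(n_samples, len(unrated)))
    avg = sum(rated.values()) / len(rated)            # average of the user's real ratings
    return {i: (avg if t < T_predict else predict(i)) for i in sampled}

rated = {3: 4.0, 17: 2.0}
virtual = hybrid_fill(rated, predict=lambda i: 3.0, t=5, T_predict=20,
                      n_samples=4, all_items=range(50))
# Gradients are then computed for rated and sampled items alike, hiding from the
# server which items were actually interacted with.
```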
FedRec++ divides clients into ordinary clients and denoising clients. The denoising clients collect noisy gradients from ordinary clients and send the summation of the noisy gradients to the server to eliminate the gradient noise. Although the pseudo items mechanism can effectively protect user interaction behaviors in FedRS, it does not modify the gradients of rated items. The curious server can still infer user ratings from the gradients uploaded by users [9].

### _Homomorphic Encryption_

To further protect the user ratings in FedRS, many studies attempt to encrypt intermediate parameters before uploading them to the server. The homomorphic encryption mechanism allows mathematical operations on encrypted data [41], so it is well suited for the intermediate parameters upload and aggregation processes in FedRS. For example, Chai \(et\)\(al\). [9] propose a secure federated matrix factorization framework named FedMF, in which clients use the Paillier homomorphic encryption mechanism [42] to encrypt the gradients of the item embedding matrix before uploading them to the server, and the server aggregates gradients on the ciphertext. Due to the characteristics of homomorphic encryption, FedMF can achieve the same recommendation accuracy as traditional matrix factorization. However, FedMF causes serious computation overheads since all computation operations are performed on the ciphertext and most of the system's time is spent on server updates. Besides, FedMF assumes that all participants are honest and will not leak the secret key to the server, which is hard to guarantee in reality. Moreover, Zhang \(et\)\(al\). [43] propose a federated recommendation method (CLFM-VFL) for vertical federated learning scenarios where participants have more overlapping users but fewer overlapping features of users. CLFM-VFL uses homomorphic encryption to protect the gradients of user hidden vectors for each participant and clusters the users to improve recommendation accuracy and reduce the matrix dimension. Besides, many studies also utilize the homomorphic encryption mechanism to integrate private information from other participants to improve recommendation accuracy [32][44]. For example, Wu \(et\)\(al\). [32] use the homomorphic encryption mechanism to find the anonymous neighbors of users to expand the local user-item graph. Perifanis \(et\)\(al\). [44] use the Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption mechanism [45] to incorporate learned parameters between users' friends after the global model is generated. Homomorphic encryption mechanism based FedRS can effectively protect user ratings while maintaining recommendation accuracy. Besides, it can prevent privacy leaks when integrating information from other participants. However, homomorphic encryption brings huge computation costs during the operation process. It is also a serious challenge to keep the secret key from being obtained by the server or other malicious participants.

### _Secret Sharing_

As another encryption mechanism used in FedRS, the secret sharing mechanism breaks intermediate parameters up into multiple pieces, and distributes the pieces among participants, so that the intermediate parameters can only be reconstructed when all pieces are collected. For example, Ying [30] proposes a secret sharing based federated matrix factorization framework named ShareMF. The participants divide the item matrix gradients \(g^{plain}\) into several random numbers that meet: \[g^{plain}=g^{sub1}+g^{sub2}+...+g^{subt}. \tag{4}\]
Each participant keeps one of the random numbers and sends the rest to \(t-1\) sampled participants, then uploads the sum of received and kept numbers as hybrid gradients to the server for aggregation. ShareMF protects the user ratings and interaction behaviors from being inferred by the server, but the rated items can still be leaked to other participants who received the split numbers. To tackle this problem, Lin \(et\)\(al\). [47] combine secret sharing and pseudo items mechanisms to provide a stronger privacy guarantee. Secret sharing mechanism based FedRS can protect user ratings while maintaining recommendation accuracy, and have lower computation costs compared to homomorphic encryption based FedRS. But the exchange process of pieces between participants greatly increases the communication costs.

### _Local Differential Privacy_

Considering the huge computation or communication costs caused by encryption based mechanisms, many studies try to use perturbation based mechanisms to adapt to large-scale FedRS for industrial scenarios. The local differential privacy (LDP) mechanism allows statistical computations while guaranteeing each individual participant's privacy [48], which can be used to perturb the intermediate parameters in FedRS. For example, Dolui \(et\)\(al\). [29] propose a federated matrix factorization framework, which applies differential privacy on the item embedding matrix before sending it to the server for weighted averaging. However, the server can still infer which items the user has rated just by comparing the changes in the item embedding matrix.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Privacy Mechanisms & Ref & Main Protect Object & Accuracy Loss & Communication/Computation Costs \\ \hline Pseudo Items & [10][40][32][46][47] & Interaction Behaviors & \(\checkmark\) & Low Costs \\ \hline \multirow{4}{*}{Homomorphic Encryption} & [9] & Ratings & \multirow{4}{*}{High Computation Costs} \\ & [32] & High-order Graph & & \\ & [44] & Social Features & & \\ \hline Secret Sharing & [30][47] & Ratings & \(\checkmark\) & High Communication Costs \\ \hline Local Differential Privacy & [29][32][46] & Ratings & \(\checkmark\) & Low Costs \\ \hline \end{tabular} \end{table} TABLE I: Comparison between different privacy mechanisms.

In order to achieve more comprehensive privacy protection during the model training process, Wu \(et\)\(al\). [32] combine pseudo items and LDP mechanisms to protect both user interaction behaviors and ratings in FedGNN. Firstly, to protect user interaction behaviors in FedGNN, the clients randomly sample \(N\) items that they have not interacted with, then generate the virtual gradients of item embeddings by using the same Gaussian distribution as the real embedding gradients. Secondly, to protect user ratings in FedGNN, the clients apply an LDP module to clip the gradients according to their L2-norm with a threshold \(\delta\) and perturb the gradients by adding zero-mean Laplacian noise. The LDP module of FedGNN can be formulated as follows: \[g_{i}=clip(g_{i},\delta)+Laplace(0,\lambda). \tag{5}\] where \(\lambda\) is the Laplacian noise strength. However, the gradient magnitude of different parameters varies during the training process, thus it is usually not appropriate to perturb gradients at different magnitudes with a constant noise strength.
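A minimal sketch of the LDP module in Eq. (5) is shown below. The clipping threshold and noise strength are illustrative values, and the gradient-dependent scale shown under `dynamic=True` follows the spirit of Eq. (6) that follows (using the mean absolute gradient so that the scale stays non-negative), rather than any exact published implementation.

```python
import numpy as np

def ldp_perturb(grad, delta, lam, dynamic=False):
    """Clip grad to L2-norm delta, then add zero-mean Laplacian noise (Eq. (5)).
    With dynamic=True the noise scale follows the mean gradient magnitude,
    in the spirit of Eq. (6)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, delta / (norm + 1e-12))
    scale = lam * np.mean(np.abs(clipped)) if dynamic else lam
    return clipped + np.random.laplace(loc=0.0, scale=scale, size=grad.shape)

g = np.random.randn(64)                 # a client's raw gradient
g_private = ldp_perturb(g, delta=1.0, lam=0.1)
```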
So Liu \(et\)\(al\). [46] propose to add dynamic noise according to the gradients, which can be formulated as follows: \[g_{i}=clip(g_{i},\delta)+Laplace(0,\lambda\cdot mean(g_{i})). \tag{6}\] The local differential privacy mechanism doesn't bring heavy computation and communication overhead to FedRS, but the additional noise inevitably affects the performance of the recommendation model. Thus, in the actual application scenario, we must consider the trade-off between privacy and recommendation accuracy.

### _Comparison_

To provide a stronger privacy guarantee, many privacy mechanisms (i.e., pseudo items, homomorphic encryption, differential privacy and secret sharing) have been widely used in FedRS, and the comparison between these mechanisms is shown in Table I. Firstly, the main protection objects of these mechanisms are different: the pseudo items mechanism protects user interaction behaviors, and the rest protect user ratings. Besides, homomorphic encryption can also integrate data from other participants in a privacy-preserving way. Secondly, homomorphic encryption and secret sharing are both encryption-based mechanisms, and they can protect privacy while keeping accuracy. However, the high computation cost of homomorphic encryption limits its application in large-scale industrial scenarios. Although the secret sharing mechanism reduces the computation costs, the communication costs increase greatly. Pseudo items and differential privacy mechanisms protect privacy by adding random noise, which have low computation costs and do not bring additional communication costs. But the addition of random noise will inevitably affect model performance to a certain extent.

## IV Security of Federated Recommendation Systems

Apart from privacy leakage problems, traditional recommendation systems for centralized data storage are also vulnerable to poisoning attacks (shilling attacks) [49][50][51]. Attackers can poison recommendation systems and make recommendations as they desire by injecting well-crafted data into the training dataset. But most of these poisoning attacks assume that the attackers have full prior knowledge of entire training datasets. Such an assumption may not be valid for FedRS since the data in FedRS is distributed and stored locally for each participant. Thus, FedRS provides a stronger security guarantee than traditional recommendation systems. However, the latest studies indicate that attackers can still conduct poisoning attacks on FedRS with limited prior knowledge [11][12][52][53]. In this section, we summarize some novel poisoning attacks against FedRS and provide some defense methods.

### _Poisoning Attacks_

According to the goal of attacks, the poisoning attacks against FedRS can be categorized into targeted attacks and untargeted attacks as shown in Table II.

#### Iv-A1 Target Poisoning Attacks

The goal of targeted attacks on FedRS is to increase or decrease the exposure chance of specific items, which are usually driven by financial profit. For example, Zhang \(et\)\(al\). [11] propose a poisoning attack for item promotion (PipAttack) against FedRS by utilizing popularity bias. To boost the rank score of target items, PipAttack uses popularity bias to align target items with popular items in the embedding space. Besides, to avoid damaging recommendation accuracy and being detected, PipAttack designs a distance constraint to keep modified gradients uploaded by malicious clients close to normal ones.
In order to further reduce the degradation of recommendation accuracy caused by targeted poisoning attacks, and the proportion of malicious clients needed to ensure the attack effectiveness, Rong [52] propose a model poisoning attack against FedRS (FedRecAttack), which makes use of a small proportion of public interactions to approximate the user feature matrix, then uses it to generate poisoned gradients. Both PipAttack and FedRecAttack rely on some prior knowledge. For example, PipAttack assumes the attack can access popularity information, and FedRecAttack assumes the attacker can get public interactions. So the attack effectiveness is greatly reduced in the absence of prior knowledge, which makes both attacks not generic in all FedRS. To make attackers conduct effective poisoning attacks to FedRS without prior knowledge, Rong \(et\)\(al\). [53] design two methods (i.e., random approximation and hard user mining) for malicious clients to generate poisoned gradients. In particular, random approximation (A-ra) uses Gaussian distribution to approximate normal users' embedding vectors, and hard user mining (A-hum) uses gradient descent to optimize users' embedding vectors obtained by A-ra to mine hard users. In this way, A-hum can still effectively attack FedRS with extremely small proportion of malicious users. #### Iv-A2 Untarget Poisoning Attacks The goal of untarget attacks on FedRS is to degrade the overall performance of the recommendation model, which are usually conducted by competing companies. For example, Wu \(et\)\(al\). [12] propose an untargeted poisoning attack to FedRS named FedAttack, which uses a globally hard sampling technique [60] to subvert model training process. More specifically, after inferring the user's interest from local user profiles, the malicious clients select candidate items that best match the user's interest as negative samples, and select candidate items that least match the user's interest as positive samples. FedAttack only modifies training samples, and the malicious clients are also similar to normal clients with different interests, thus FedAttack can effectively damage the performance of FedRS even under defense. ### _Defense Methods_ To reduce the influence of poisoning attacks on FedRS, many defense methods have been proposed in the literature, which can be classified into robust aggregation and anomaly detection. #### Iv-B1 Robust Aggregation The goal of robust aggregation is to guarantee global model convergence when up to 50% of participants are malicious [18], which selects statistically more robust values rather than the mean values of uploaded intermediate parameters for aggregation. **Median [54].** Median selects the median value of each updated model parameter independently as aggregated global model parameter, which can represent the center of the distribution better. Specifically, the server ranks each \(i-th\) parameter of \(n\) local model update, and uses the median value as \(i-th\) parameter of the global model. **Trimmed-Mean [54].** Trimmed-Mean removes the maximum and minimum values of each updated model parameter independently, and then takes the mean value as aggregated global model parameter. Specifically, the server ranks each \(i-th\) parameter of \(n\) local model update, removes \(\beta\) smallest and \(\beta\) largest values, and uses the mean value of remaining \(n-2\beta\) as \(i-th\) parameter of global model. In this way, Trimmed-Mean can effectively reduce the impact of outliers. 
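As a concrete illustration of the coordinate-wise Median and Trimmed-Mean rules just described, the sketch below aggregates a stack of client updates; it is a minimal reference implementation under our own conventions, not the code of [54].

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of n client updates, array of shape (n, d)."""
    return np.median(updates, axis=0)

def trimmed_mean_aggregate(updates, beta):
    """Drop the beta smallest and beta largest values of each coordinate,
    then average the remaining n - 2*beta values."""
    n = updates.shape[0]
    assert 2 * beta < n, "beta must satisfy 2*beta < number of clients"
    sorted_updates = np.sort(updates, axis=0)    # sort each coordinate independently
    kept = sorted_updates[beta:n - beta]         # trim the extremes
    return kept.mean(axis=0)

# Example: 10 clients, 5-dimensional updates, one crudely poisoned client.
updates = np.random.randn(10, 5)
updates[0] += 100.0
global_update = trimmed_mean_aggregate(updates, beta=1)
```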
**Krum and Multi-Krum [55].** Krum selects a local model that is the closest to the others as the global model. Multi-Krum selects multiple local models by using Krum, then aggregates them into a global model. In this way, even if the selected parameter vectors are uploaded by malicious clients, their impact is still limited because they are similar to other local parameters uploaded by normal clients. **Bulyan [56].** Bulyan is a combination of Krum and Trimmed-Mean, which iteratively selects \(m\) local model parameter vectors through Krum, and then performs Trimmed-Mean on these \(m\) parameter vectors for aggregation. With high dimensional and highly non-convex loss functions, Bulyan can still converge to effectual models. **Norm-Bounding [57].** Norm-Bounding clips the received local parameters to a fixed threshold, then aggregates them to update the global model. Norm-Bounding can limit the contribution of each local model updates so as to mitigate the affect of poisoned parameters on the aggregated model. **A-FRS [58].** A-FRS utilizes gradient-based Krum instead of model parameter-based Krum to filter malicious clients in momentum-based FedRS. A-FRS theoretically guarantees that if the selected gradient is close to the normal gradient, the momentum and model parameters will also be close to the normal momentum and model parameters. Although these robust aggregation strategies provide convergence guarantees to some extent, most of them (i.e., Bulyan, Krum, Median and Trimmed-mean) greatly degrade the performance of FedRS. Besides, some novel attacks(i.e., PipAttack, FedAttack) [11][12] utilize well-designed constraints to approximate the patterns of normal users and circumvent defenses, which further increases the difficulty of defense. \begin{table} \begin{tabular}{|p{11.4pt} p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline \multirow{2}{*}{Works} & \multirow{2}{*}{Ref} & \multicolumn{2}{c|}{Attack Type} & \multicolumn{2}{c|}{Poison Object} & \multicolumn{2}{c|}{Defense Type} & \multirow{2}{*}{Goal} \\ \cline{3-3} \cline{5-8} & & Target & & & \multicolumn{1}{p{28.5pt}|}{Untarget} & \multicolumn{1}{p{28.5pt}|}{Model} & \multicolumn{1}{p{28.5pt}|}{Data} & \multicolumn{1}{p{28.5pt}|}{RA} & \multicolumn{1}{p{28.5pt}|}{AD} \\ \hline PipAttack & [11] & ✓ & & & ✓ & & & Increase/decrease popularity of target items. \\ FedRecAttack & [52] & ✓ & & ✓ & & & Increase/decrease popularity of target items. \\ A-ra/A-hum & [53] & ✓ & & ✓ & & & Increase/decrease popularity of target items. \\ FedAttack & [12] & & ✓ & & ✓ & & & Degrade the overall performance of FedRS. \\ \hline Median & [54] & & & & & ✓ & Guarantee global model convergence. \\ Trimmed-Mean & [54] & & & & ✓ & Guarantee global model convergence. \\ (Multi-)Krum & [55] & & & & ✓ & Guarantee global model convergence. \\ Bulyan & [56] & & & & ✓ & Guarantee global model convergence. \\ Norm-Bounding & [57] & & & & ✓ & Guarantee global model convergence. \\ A-FRS & [58] & & & & ✓ & Guarantee global model convergence. \\ FSAD & [59] & & & & ✓ & & Guarantee global model convergence. \\ \hline \end{tabular} \end{table} TABLE II: Representative works on the security of FedRS. RA refers to robust aggregation and AD refers to anomaly detection. #### Iv-B2 Anomaly Detection The purpose of anomaly detection strategy is to identify the poisoned model parameters uploaded by malicious clients and filter them during the global model aggregation process. 
For example, Jiang \(et\)\(al.\)[59] propose an anomaly detection strategy named federated shilling attack detector (FSAD) to detect poisoned gradients in federated collaborative filtering scenarios. FSAD extracts four novel features from the gradients uploaded by clients, then uses these gradient-based features to train a semi-supervised Bayes classifier so as to identify and filter the poisoned gradients. However, in FedRS, the interests of different users vary widely, thus the parameters they upload are usually quite different, which increases the difficulty of anomaly detection [52].

## V Heterogeneity of Federated Recommendation Systems

Compared with traditional recommendation systems, FedRS face more severe challenges in terms of heterogeneity, which are mainly reflected in system heterogeneity, statistical heterogeneity and privacy heterogeneity, as shown in Fig. 3. System heterogeneity refers to the fact that client devices have significantly different storage, computation, and communication capabilities. Devices with limited capabilities greatly affect training efficiency, and further reduce the accuracy of the global recommendation model [61]. Statistical heterogeneity refers to the fact that the data collected by different clients is usually not independent and identically distributed (non-IID). As a result, a single global model trained naively is difficult to generalize to all clients, which affects the personalization of recommendations [62]. Privacy heterogeneity means that the privacy constraints of different users and information vary greatly, so simply treating them with the same privacy budgets will carry unnecessary costs [63]. This section introduces some effective approaches to address the heterogeneity of FedRS.

### _System Heterogeneity_

In FedRS, the hardware configuration, network bandwidth and battery capacity of participating clients vary greatly, which results in diverse computing capability, communication speed, and storage capability [16]. During the training process, the clients with limited capacity could become stragglers, and even drop out of the current training due to network failure, low battery and other problems [20]. The system heterogeneity significantly delays the training process of FedRS, further reducing the recommendation accuracy of the global model. To make the training process compatible with different hardware structures and tolerate the straggling and exit issues of clients, the most common methods are asynchronous communication [64][20] and client selection [65]. **Asynchronous communication.** Considering that synchronous-communication-based federated learning must wait for straggler devices during the aggregation process, many asynchronous communication strategies have been presented to improve training efficiency. For example, FedSA [64] proposes a semi-asynchronous communication method, where the server aggregates the local models based on their arrival order in each round. FedAsync [20] uses a weighted average strategy to aggregate the local models based on staleness, which assigns less weight to delayed feedback in the update process. **Client selection.** The client selection approach selects clients for updates based on resource constraints so that the server can aggregate as many local updates as possible at the same time. For example, in FedCS [65], the server sends a resource request to each client so as to get their resource information, then estimates the required time of the model distribution, updating and uploading processes based on the resource information.
According to the estimated time, the server determines which clients can participant in the training process. Fig. 3: Heterogeneity of federated recommendation systems. ### _Statistical Heterogeneity_ Most of the existing federated recommendation studies are built on the assumption that data in each participant is independent and identically distributed (IID). However, the data distribution of each client usually varies greatly, hence training a consistent global model is difficult to generalize to all clients under non-IID data and inevitably neglects the personalization of clients [63]. To address the statistical heterogeneity problem of FedRS, many effective strategies have been proposed, which are mainly based on meta learning [39][66] and clustering [67][68]. **Meta learning.** As known as "learning to learn", meta learning technology aims to quickly adapt the global model learned by other tasks to a new task by using only a few samples [37]. The rapid adaptation and good generalization abilities make it particularly well-suited for building personalized federated recommendation models. For examples, FedMeta [39] uses Model-Agnostic Meta-Learning (MAML) [69] algorithm to learn a well-initialized model that can be quickly adapted to clients, and effectively improve the personalization and convergence of FedRS. However, FedMeta needs to compute the second-order gradients, which greatly increases computation costs. Besides, the data split process also brings a huger challenge for clients with limited samples. Based on FedMeta, Wang \(et\ al.\)[66] propose a new meta learning algorithm called Reptile which applies the approximate first-order derivatives for the meta-learning updates, which greatly reduces the computation overloads of clients. Moreover, Reptile doesn't need a data split process, which makes it also suitable for clients with limited samples. **Clustering.** The core idea of clustering is training personalized models jointly with the same group of homogeneous clients. For examples, Jie \(et\ al.\)[67] use historical parameter clustering technology to realize personalized federated recommendation, in which the server aggregates local parameters to generate global model parameters and clusters the local parameters to generate clustering parameters for different client groups. Then the clients combine the clustering parameters with the global parameters to learn personalized models. Luo \(et\ al.\)[68] propose a personalized federated recommendation framework named PerFedRec, which constructs a collaborative graph and integrates attribute information so as to jointly learn the user representations by federated GNN. Based on the learned user representations, clients are clustered into different groups. And each cluster learns a cluster-level recommendation model. At last, each client can obtain a personalized model by merging the global recommendation model, the cluster-level recommendation model, and the fine-tuned local recommendation model. Although clustering based approaches can alleviate statistical heterogeneity, the clustering and combination process greatly increase the computation costs. ### _Privacy Heterogeneity_ In reality, the privacy restrictions of different participants and information vary greatly, thereby using the same high level of privacy budget for all participants and information is unnecessary, which even increases the computation/communication costs and degrades the model performance. **Heterogeneous user privacy**. 
In order to adapt to the privacy needs of different users, Anelli \(et\ al.\)[70] present a user controlled federated recommendation framework named FedeRank. FedeRank introduces a probability factor \(\pi\in[0,1]\) to control the proportion of interacted item updates and masks the remain interacted item update by setting them to zero. In this way, FedeRank allows users to decide the proportion of data they want to share by themselves, which addresses the heterogeneity of user privacy. **Heterogeneous information privacy**. In order to adapt to the privacy needs of different information components, HPFL [63] designs a differentiated component aggregation strategy. To obtain the global public information components, the server directly weighted aggregates the local public components with the same properties. And to obtain the global privacy information components, the user and item representations are kept locally, and the server only aggregates the local drafts without the need to align the presentations. With the differentiated component aggregation strategy, HPFL can safely aggregate components with heterogeneous privacy constraints in user modeling scenarios. ## VI Communication Costs of Federated recommendation Systems To achieve satisfactory recommendation performance, FedRS requires multiple communications between the server and clients. However, the real-world recommendation systems are usually conducted by complexity deep learning models with large model size [71], and millions of parameters need to be updated and communicated [13], which brings severe communication overload to resource limited clients and further affects the application of FedRS in large-scale industrial scenarios. This section summarizes some optimization methods to reduce communication costs of FedRS, which can be classified into importance-based updating [72][22][73][74], model compression [75][76], active sampling [77] and one shot learning [78]. ### _Importance-based Model Updating_ Importance-based model updating selects important parts of the global model instead of the whole model to update and communicate, which can effectively reduce the communicated parameter size in each round. For examples, Qin \(et\ al.\)[72] propose a federated framework named PPRSF, which uses 4-layers hierarchical structure for reducing communication costs, including the recall layer, ranking layer, re-ranking layer and service layer. In the recall layer, the server roughly sorts the large inventory by using public user data, and recalls a relatively small number of items for each client. In this way, the clients only need to update and communicate the candidate item embeddings, which greatly reduces the communication costs between the server and clients, and the computation costs in the local model training and inference phases. However, the recall layer of PPRSF needs to get some public information about users, which raises certain difficulty and privacy concerns. Yi \(et\)\(al.\)[22] propose an efficient federated news recommendation framework called Efficient-FedRec, which breaks the news recommendation model into a small user model and a big news model. Each client only requests the user model and a few news representations involved in their local click history for local training, which greatly reduces the communication and computation overhead. 
To further protect specific user click history against the server, they transmit the union news representations set involved in a group of user click history by using a secure aggregation protocol [79]. Besides, Khan \(et\)\(al.\)[73] propose a multi-arm bandit method (FCF-BTS) to select part of the global model that contains a smaller payload for all clients. The rewards of the selection process are guided by Bayesian Thompson Sampling (BTS) [80] approach with Gaussian priors. Experiments show that FCF-BTS can reduce 90% model payload for highly sparse datasets. Besides, the selection process occurs on the server side, thus avoiding additional computation costs for the clients. But FCF-BTS causes 4% - 8% loss in recommendation accuracy. To achieve a better balance between recommendation accuracy and efficiency, Ai \(et\)\(al.\)[74] propose an all-MLP network that uses a Fourier sub-layer to replace the self-attention sub-layer in a Transformer encoder so as to filter noise data components unrelated to the user's real interests, and adapts an adaptive model pruning technique to discard the noise model components that doesn't contribute to model performance. Experiments show that all-MLP network can significantly reduce communication and computation costs, and accelerates the model convergence. Importance-based model updating strategies can greatly reduce communication and computation costs at the same time, but only selecting the important parts for updating inevitably reduces the recommendation performance. ### _Model Compression_ Model Compression is a well-known technology in distributed learning [81], which compresses the communicated parameters per round to be more compact. For examples, Konen \(et\)\(al.\)[75] propose two methods (i.e., structured updates and sketched updates) to decrease the uplink communication costs under federated learning settings. Structured updates method directly learns updates from a pre-specified structure parameterized using fewer variables. Sketched updates method compresses the full local update using a lossy compression way before sending it to the server. These two strategies can reduce communication costs by 2 orders of magnitude. To reduce the uplink communication costs in deep learning based FedRS, JointRec [76] combines low-rank matrix factorization [82] and 8-bit probabilistic quantization [83] methods to compress weight update. Supposing the weight update matrix of client n is \(H_{n}^{\alpha\times b}\), \(a\leq b\), low-rank matrix factorization decomposes \(H_{n}^{\alpha\times b}\) into two matrices: \(H_{n}^{\alpha\times b}=U_{n}^{a\times k}V_{n}^{k\times b}\), where \(k=b/N\) and N is a positive number that influences the compression performance. And 8-bit probabilistic quantization method transforms the position of matrix value into 8-bit value before sending it to the server. Experiments demonstrate that JointRec can realize \(12.83\times\) larger compression ratio while maintaining recommendation performance. Model compression methods achieve significant results in reducing uplink communication costs. However, the reduction of communication costs sacrifices the computation resources of the clients, so it's necessary to consider the trade-off between computation and communication costs when using model compression. 
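To make this kind of compression pipeline more concrete, the sketch below combines a truncated-SVD low-rank factorization with a simple (deterministic, non-probabilistic) 8-bit quantization of a weight-update matrix. The rank choice \(k=b/N\) follows the description above, but all other details are illustrative assumptions rather than the exact JointRec scheme [76].

```python
import numpy as np

def low_rank_factorize(H, k):
    """Approximate the a x b update matrix H as U (a x k) @ V (k x b) via truncated SVD."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]

def quantize_uint8(M):
    """Map matrix entries linearly to 8-bit integers; return the integers plus the
    (min, max) range needed to dequantize on the server."""
    lo, hi = M.min(), M.max()
    q = np.round((M - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
    return q, (lo, hi)

def dequantize_uint8(q, lo, hi):
    return q.astype(np.float32) / 255 * (hi - lo) + lo

# Example: compress a 256 x 1024 weight update with k = b / N and N = 16.
H = np.random.randn(256, 1024).astype(np.float32)
k = 1024 // 16
U, V = low_rank_factorize(H, k)
(qU, rU), (qV, rV) = quantize_uint8(U), quantize_uint8(V)
# Server-side reconstruction of the (lossy) update:
H_rec = dequantize_uint8(qU, *rU) @ dequantize_uint8(qV, *rV)
```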
### _Client Sampling_ In traditional federated learning frameworks [8], the server randomly selects clients to participate in the training process and simply aggregates the local models by average, which requires a large number of communications to realize satisfactory accuracy. Client sampling utilizes efficient sampling strategies so as to improve training efficiency and reduce the communication rounds. For example, Muhammad \(et\)\(al.\)[77] propose an effective sampling strategy named FedFast to speed up the training efficiency of federated recommendation models while keeping more accuracy. FadFast consists of two efficient components: ActvSAMP and ActvAGG. ActvSAMP uses K-means algorithm to cluster users based on their profiles, and samples clients in equal proportions from each cluster. And ActvAGG propagates local updates to the other clients in the same cluster. In this way, the learning process for these similar users is greatly accelerated and the overall efficiency of the FedRS is consequently improved. Experiments show that FedFast reduces communication rounds by 94% compared to FedAvg [8]. However, FedFast is faced with the cold start problem because it requires a number of users and items for training. Besides, FedFast needs to retrain the model to support new users and items. ### _One Shot Federated Learning_ The goal of one shot federated learning mechanism is to reduce communication rounds of FedRS [84][85], which limits communication to a single round to aggregate knowledge of local models. For example, Eren \(et\)\(al.\)[78] implement an one-shot federated learning framework for cross-platform FedRS named FedSPLIT. FedSPLIT aggregates model through knowledge distillation [86], which can generate client specific recommendation results with just a single pair of communication rounds between the server and clients after a small initial communication. Experiments show that FedSPLIT realizes similar root-mean-square error (RMSE) compared with multi-round communication scenarios, but it is not applicable to the scenario where the participants are individual users. ## VII Applications And Public Benchmark Datasets This section introduces the typical applications and public benchmark datasets for FedRS. ### _Applications_ **Online services.** Currently, online services have been involved in various fields of our life such as news, movie and music. A large amount of private information of users is collected and stored centrally by service providers, which faces a serious risk of privacy leakage. User data may be sold to third parties by service providers or stolen by external hackers. FedRS can help users enjoy personalized recommendation services while keeping personal privacy, make the service providers more trusted by users, and ensure the recommendation service complies with the regulations. For example, Tan \(et\)\(al.\)[87] design a federated recommendation system that implements various popular recommendation algorithms to support lots of online recommendation services, and deploys it on a real-world content recommendation application. **Healthcare.** Healthcare recommendation enables patients to enjoy medical service from mobile applications instead of going to the hospital in person when obtaining satisfactory recommendations. Medical data is quite private and sensitive, which means it is hard to fuse user information from different hospitals or other organizations to improve recommendation quality. 
In this scenario, FedRS can break down the data silos and utilize these data without compromising patients' privacy. For example, Song \(et\)\(al.\)[88] develop a telecommunication-joint federated healthcare recommendation platform based on Federated AI Technology Enabler (FATE), which helps healthcare providers improve recommendation performance by complementing common user data (e.g., demographic information, user behaviors and geographic information) from mobile network operators. Besides, this platform designs a federated gradient boosting decision tree (FGDBT) model, improving 9.71% of precision and 4% of F1 score for healthcare recommendation. The platform has been deployed on both organizations and applied to online operation. **Advertisement.** Advertisement is another significant application of FedRS. Platforms that display advertisements often face the problem of insufficient user data and low click-through rates (CTR) for advertisements. FedRS is able to exploit user data across different platforms in a privacy-preserving way, which can better infer user interest and push advertisements more accurately. For example, Wu \(et\)\(al.\)[25] propose a native advertisement CTR prediction method named FedCTR, which can integrate multi-platform user behaviors (e.g., advertisements click behavior, search behavior and browsing behavior) for user interest modeling with no need for centralized storage. **E-commerce.** Currently, recommendation system plays a significant role in e-commerce platforms (e.g., Alibaba, Amazon). To provide users with more precise recommendation services, such systems try to integrate more auxiliary information (e.g., user purchasing power, social information). However, these data are usually distributed on different platforms and difficult to access directly due to regulations and privacy concerns. FedRS can address this problem effectively while meeting regulations and privacy. **Point-of-Interest.** Point-of-Interest (POI) recommendation exploits the user's historical check-in data and other modal information (e.g., POI attributes and social information) to recommend suitable POI sets for the user. However, the user's check-in data is very sensitive and sparse, and users are often reluctant to share their context information due to privacy concerns. FedRS can effectively address the data sparsity problem in a privacy-preserving way, which is quite beneficial for POI recommendation [89]. ### _Public Benchmark Datasets_ **MovieLens [90].** MovieLens rating datasets were published by GroupLens, which consist of user, movie, rating and timestamp information. MovieLens-100K contains 100,000 ratings from 943 users for 1682 movies, and MovieLens-1M contains 1,000,209 ratings from 6,040 users for 3,952 movies. **FilmTrust [91].** FilmTrust is a movie rating dataset crawled from the FilmTrust website. The dataset contains 35,497 ratings from 1,508 users for 2,071 films. **Foursquare [92].** Foursquare dataset is a famous benchmark dataset to evaluate POI recommendation models collected from Foursquare. The dataset contains 22,809,624 global-scale check-ins by 114,324 users on 3,820,891 POIs with 363,704 social relationships. **Epinions [93].** Epinions dataset is an online social network built from a consumer review site Epinions.com, which consists of user ratings and trust social network information. The dataset contains 188,478 ratings from 116,260 users for 41,269 items. 
**Mind [94].** Mind is a large-scale dataset for news recommendation collected from anonymous behavior logs of Microsoft News website, which contains about 160,000 English news articles and more than 15 million impression logs generated by 1 million users. **LastFM [95].** LastFM dataset was collected from Last.fm online music system, which consists of tagging, music artist listening, and social relationship information. The dataset contains 92,834 listening counts of 17,632 music artists by 1,892 users. **Book-Crossing [96].** Book-Crossing dataset is a 4-week crawl dataset from the Book-Crossing community. It contains 1,149,780 ratings (explicit / implicit) for 271,379 books by 278,858 anonymous users with demographic information. ## VIII Future Directions This section presents and discusses many prospective research directions in the future. Although some directions have been covered in the above sections, we believe they are necessary for FedRS, and need to be further researched. **Decentralized FedRS.** Most current FedRS are based on client-server communication architecture, which faces single-point-of-failure and privacy issues caused by the central server [97]. While much work has been devoted to decentralized federated learning [98][99], few decentralized FedRS have been studied. A feasible solution is to replace client-server communication architecture with peer-peer communication architecture to achieve fully decentralized federated recommendation. For example, Hegeds \(et\)\(al.\)[21] propose a fully decentralized matrix factorization framework based on gossip learning [100], where each participant sends their copy of the global recommendation model to random online neighbors in the peer to peer network. In addition, swarm learning [101], a decentralized machine learning framework that combines edge computing, blockchain based peer-peer networks and coordination, can keep confidentiality without the need for a central server. Therefore, it is also a promising way to implement decentralized recommendation systems. **Incentive mechanisms in FedRS.** FedRS collaborate with multiple participants to train a global recommendation model, and the recommendation performance of the global model is highly dependent on the quantity and quality of data provided by the participants. Therefore, it is significant to design an appropriate incentive mechanism to inspire participants to contribute their own data and participate in collaborative training, especially in the cross-organization federated recommendation scenarios. The incentive mechanisms must be able to measure the clients' contribution to the global model fairly and efficiently. **Architecture design for FedRS.** The recommendation systems in industrial scenarios usually consist of the recall layer and ranking layer, which generate recommendation results on the server side. Considering the privacy of users, FedRS must adopt different designs. A feasible solution is local recalling and ranking, where the server sends the entire set of candidate items to clients, and clients generate recommendation results locally. However, such design brings enormous communication, computation and memory costs for clients since there are usually millions of items in real-world recommendation systems. 
Another effective approach is to place the recall layer on the server side and the ranking layer on the client side, where clients send encrypted or noised user embedding to the server to recall top-N candidate items, then clients generate personalized recommendation results based one these candidate items via ranking layer [102]. Nevertheless, there is a risk of privacy leakage associated with this approach, because recalled items are known to the server. **Cold start problem in FedRS.** The cold start problem means that recommendation systems cannot generate satisfactory recommendation results for new users with little history interactions [103]. In federated settings, the user data is stored locally, so it is more difficult to integrate other auxiliary information (e.g., social relationships) to alleviate the cold start problem. Therefore, it is a challenging and prospective research direction to address the cold start problem while ensuring user privacy. **Secure FedRS.** In the real world, the participants in the FedRS are likely to be untrustworthy. Therefore, participants may upload poisoned intermediate parameters to affect recommendation results or destroy recommendation performance. Although some robust aggregation strategies [55] and detection methods [59] have been proposed to defend against poisoning attacks in federated learning settings, most of them don't work well in FedRS. On one hand, some strategies such as Krum, Median and Trimmed-mean degrade the recommendation performance to a certain extent. On the other hand, some novel attacks [12] use well-designed constraints to mimic the patterns of normal users, extremely increasing the difficulty of detection and defense. Currently, there are still no effective defense methods against these poisoning attacks while maintaining recommendation accuracy. ## IX Conclusion A lot of effort has been devoted to federated recommendation systems. A comprehensive survey is significant and meaningful. This survey summarizes the latest studies on aspects of privacy, security, heterogeneity and communication costs. Based on these aspects, we also make a detailed comparison among the existing designs and solutions. Moreover, we present many prospective research directions to promote development in this field. FedRS will be a promising field with huge potential opportunities, which requires more effort to develop. ## Acknowledgments This research is partially supported by the National Key R&D Program of China 2021YFF0900800, NSFC No.62202279, the Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) (No.2021CXGC010108), the Shandong Provincial Natural Science Foundation (No.ZR2022QF018), Shandong Province Outstanding Youth Science Foundation, the Fundamental Research Funds of Shandong University, CCF-Huawei Populus Grove Fund, and the Special Fund for Science and Technology of Guangdong Province under Grant (2021S0053). Sino-Singapore International Joint Research Project (No. 206-A021002).
2309.04323
Glassy phases of the Gaussian Core Model
We present results from molecular dynamics simulations exploring the supercooled dynamics of the Gaussian Core Model in the low- and intermediate-density regimes. In particular, we discuss the transition from the low-density hard-sphere-like glassy dynamics to the high-density one. The dynamics at low densities is well described by the caging mechanism, giving rise to intermittent dynamics. At high densities, the particles undergo a more continuous motion in which the concept of cage loses its meaning. We elaborate on the idea that these different supercooled dynamics are in fact the precursors of two different glass states.
Vittoria Sposini, Christos N. Likos, Manuel Camargo
2023-09-08T13:40:11Z
http://arxiv.org/abs/2309.04323v1
# Glassy phases of the Gaussian Core Model ###### Abstract We present results from molecular dynamics simulations exploring the supercooled dynamics of the Gaussian Core Model in the low- and intermediate-density regimes. In particular, we discuss the transition from the low-density hard-sphere-like glassy dynamics to the high-density one. The dynamics at low densities is well described by the _caging_ mechanism, giving rise to intermittent dynamics. At high densities, the particles undergo a more continuous motion in which the concept of cage loses its meaning. We elaborate on the idea that these different supercooled dynamics are in fact the precursors of two different glass states. Soft colloids encompass a large variety of nano- or mesoscopic aggregates undergoing thermal motion in a solvent. In most cases, these are polymer-based assemblies of various architectures and connectivities, such as linear chains, stars and rings, dendrimers, block-copolymer micelles, but also cross-linked nano- or microgels, which feature a high degree of flexibility and deformability [1]. The degree of softness is conveniently expressed by the ratio \(E_{\rm el}/k_{\rm B}T\) of their elastic deformation energy upon small overlaps or indentations, \(E_{\rm el}\), over the thermal energy \(k_{\rm B}T\), which can span several orders of magnitude [1; 2]. Their suspensions exhibit structural and dynamical anomalies [3; 4; 5], which accompany rich types of thermodynamic behavior, such as reentrant melting [7; 8; 9; 10; 11], and clustering [12; 13]. The deformability of these soft colloids manifests itself under flow as particle elongation, tumbling and shear-thinning [14; 15; 16; 17; 18]. Akin to hard colloids in the supercooled regime, soft colloids exhibit a sharp increase of the equilibrium relaxation time and heterogeneous dynamics [1; 19; 20; 21]. However, at very high concentrations, soft colloids can reach a particular aging regime characterized by an intermittent release of internal stresses which coincides with the onset of an anomalous decrease in local order [22]. Colloidal softness has been also related to the microscopic origin leading to the validity of the Stokes-Einstein relation for degrees of metastability for which it normally breaks down in the case of hard colloidal and molecular systems [23]. From a theoretical point of view, the effective interaction \(v(r)\) between a pair of such colloids does not grow fast as \(r\to 0\), so that the integral \(\int_{0}^{\infty}v(r){\rm d}^{3}r\) is finite. In some cases, these are effective interactions between the centers of mass of open, fractal, fully penetrable objects, for which \(v(r)\) remains finite (free of divergence) even if the separation \(r\) vanishes. Denoting with \(\hat{v}(k)\) the Fourier transform of \(v(r)\), such effective interactions are referred to as \(Q^{+}\) potentials if \(\hat{v}(k)\) is positive definite and as \(Q^{\pm}\) potential if \(\hat{v}(k)\) attains both positive and negative values, typically in an oscillatory fashion [24]. 
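For instance, for the Gaussian-shaped repulsion that defines the model considered below (Eq. (1)), the Fourier transform is itself a Gaussian and therefore positive definite, \[\hat{v}(k)=\int v(r)\,e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}^{3}r=\epsilon\,\pi^{3/2}\sigma^{3}\exp\left(-\frac{k^{2}\sigma^{2}}{4}\right)>0\quad\text{for all }k,\] so that \(\hat{v}(0)=\epsilon\,\pi^{3/2}\sigma^{3}\) coincides with the finite value of the integral \(\int v(r)\,\mathrm{d}^{3}r\), making such a potential a prototypical member of the \(Q^{+}\) class.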
Systems belonging to the \(Q^{+}\) class undergo a reentrant fluid-crystal-fluid transition at low temperature and high density and they possess a maximum freezing temperature, beyond which no crystallization is possible whereas systems of the \(Q^{\pm}\) class bond together forming cluster crystals, where each lattice site is occupied by several overlapping particles [13; 24] Within the realm of \(Q^{+}\) potentials a prominent role is played by the Gaussian Core Model (GCM) introduced by Stillinger in the 70's [7]. The GCM is one of the simplest models for the description of systems such as polymer or dendrimer solutions [25; 26] and it entails an inter-particle Gaussian-shaped potential \[v(r)=\epsilon\exp[-(r/\sigma)^{2}], \tag{1}\] where \(\epsilon\) and \(\sigma\) are the energy and length scales. We define a reduced density \(\rho\to\rho\sigma^{3}\) and a reduced temperature \(T\to k_{B}T/\epsilon\), where \(\rho=N/V\) is the number density, \(k_{B}\) is Boltzmann's constant, and \(T\) the absolute temperature. We measure lengths in units of \(\sigma\) and times in units of \(\sigma\sqrt{m/\epsilon}\), where \(m\) is the mass of each particle. Whereas at low temperatures and low densities, the equilibrium properties of the GCM can be described by an effective hard-sphere mapping [7], at high densities a mean-field description sets in, giving rise to re-entrant melting below a threshold upper freezing temperature \(T_{u}=8.74\times 10^{-3}\), above which it remains fluid at all densities [7; 27; 28]. The equilibrium phase diagram of the GCM, along with two predictions for its vitrification line, arising from two different approximations, is shown in Fig. 1. Another prominent member of the \(Q^{+}\)-class is the Hertzian potential, \(v(r)=\epsilon(1-r/a)^{5/2}\Theta(1-r/a)\), which models the effective interaction between elastic spheres of diameter \(a\)[8]. The general features of the phase diagram of the Hertzian spheres are similar to those of the GCM, however, it must be emphasized that the former has finite support, as expressed by the Heaviside function \(\Theta(1-r/a)\), whereas the GCM is nonvanishing for arbitrarily large values of \(r\). At low densities, the GCM can be mapped onto an effective hard sphere potential [7], and its crystallization properties can be understood in such a context. The high-density part of the phase behavior of the GCM is more challenging. Previous studies [29; 30] have shown that at high densities nucleati and that Mode-Coupling Theory (MCT) provides an accurate description of the structural arrest of the GCM into an amorphous, glassy state. Moreover, it was found that the glass state at high densities displays strong dynamic fluctuations and a nearly Gaussian distribution of single particle displacements, features compatible with a geometric transition [31]. Such transition refers to a change in the topology of the rugged free energy landscape; in particular, below the transition temperature, the potential energy barriers become much larger than the available thermal energy, and the system is trapped close to local minima of the energy landscape [32]. The low- and intermediate-density glassy regimes of the GCM remain largely unexplored. Assuming that the low-density vitrification scenario follows the hard-sphere paradigm, it is then particularly interesting to ask the question as to how the vitrification scenario evolves towards the high-density regime and whether dynamically distinct glassy states exist in different density regimes of the GCM. 
A recent theoretical study [33] showed an unexpected density dependence of the glassy behavior of GCM particles, see Fig. 1). Similarly to the equilibrium crystallization behavior, the glass line shows a re-entrance upon increasing the density. However, at moderate densities, the characteristic order parameter at constant density displays sudden jumps when increasing the temperature. This trend suggests a transition between two different glasses, a continuous and a discretized one. In particular, the emergence of a discretized glass has been associated with the formation of _out-of-equilibrium_ local aggregates. Indeed, as mentioned above, the GCM is a \(Q^{+}\) potential for which no clusters form at equilibrium. In contrast, for ultrasoft particles belonging to the \(Q^{\pm}\) class for which cluster formation is an equilibrium phenomenon, the emergence of cluster-glasses has been recently reported [34; 35; 36; 37]. We note that in Fig. 1, the MCT-vitirification line stops at \(\rho\gtrsim 0.40\) and at about the same density the RT-vitirification line shows non-monotonic behavior with density. The reason for the former is a loss of the HNC solution for the one-component CGM, which is however recovered at much higher densities, \(\rho\gtrsim 1.00\). The RT, being a two-component approach, does converge in this region and results in the aforementioned non-monotonic behavior. Whereas it is an open question whether this behavior is connected with the convergence problems of the HNC in that region, the discretized glass predicted by the RT occurs already at lower densities, and thus the question of whether a distinct arrested state exists there is independent of the HNC-convergence issues. The goal of this work is to characterize the transition from low- to high-density glass from a dynamic point of view, focusing on the study of the supercooled regime. Indeed, when approaching the glass transition the system enters into a supercooled regime which represents in all respects a precursor of the glass [38]. Thus, we expect to observe differences between the two states already at the level of supercooled (glassy) dynamics. The supercooled regime of canonical glass-formers is usually described in terms of the _caging_ mechanism: each particle experiences trapping due to the neighboring particles that effectively create a cage around it; eventually, the fluctuations allow the particle to escape this local cage and move to the next one. The lower the temperature the more difficult it will be for the particle to escape from the cage. In these terms, the glass transition can be thought of as a localization transition. The cage size is related to the average inter-particle distance, which in turn depends on the density of the system. Such a mechanism is accurate for systems characterized by a harshly repulsive inter-particle potential, such as hard-sphere or Lennard-Jones systems [39; 40]. However, when dealing with bonded potentials, that is with potentials that do not diverge when two particles are at full overlap, and in the presence of long tails, this mechanism can break down. In particular, we show that for the GCM at intermediate densities, the idea of slow dynamics based on the concept of _caging_ must be revised. ## II Results Following the phase diagram in Fig. 1, we simulate the glassy dynamics of the GCM at different densities. 
As mentioned above, at high densities the one-component GCM vitrifies and thus, it is possible to approach the supercooled regime directly, without running into crystallization issues. This is not the case for low and intermediate densities, for which the one-component system would crystallize upon cooling. Therefore, in our simulations, we follow a random pinning procedure and freeze a fraction \(f=10\%\) of the particles in order to avoid crystallization and be able to approach the deep supercooled regime. All results reported below are calculated taking into account the mobile particles only and averaged over at least three different realizations of the pinning disorder (see Materials and Methods for more details on the simulation protocol).

Figure 1: Phase diagram of the GCM. The black solidification line and the red glass line from the Replica Theory (RT) are taken from the literature, in particular from Refs. [28] and [33], respectively. The blue glass line is calculated from Mode Coupling Theory (MCT) as described in Materials and Methods. The MCT line is recovered at densities \(\rho\gtrsim 1.00\), where it follows a monotonically decreasing trend with density [29; 30]; see main text.

We focus our analysis on four different densities, _i.e._, \(\rho=0.10,0.15,0.40,1.00\), in order to characterize the transition between low- and high-density glassy dynamics. For each density, we spanned over different temperatures and selected those at which all four systems display the same diffusivity at long times. The choice of isodiffusivity points has several motivations: on the one hand, the isodiffusivity is accompanied by close similarities of the _static_ correlations: not only do the iso-peak-height lines of the radial distribution functions follow similar trends as the isodiffusivity ones [41; 42], but, upon proper length rescaling, the static structure factors along such lines can be mapped onto a quasi-universal curve as well, as we will shortly demonstrate. On the other hand, the isodiffusivity lines are precursors of the vitrification line [19]. Accordingly, by choosing to simulate along such a locus we are lying on an equidistant line from the neighboring dynamically arrested states of the system. To this end, we calculated the mean-square displacements (MSD) as \[\langle\Delta\mathbf{r}^{2}(t)\rangle=\frac{1}{N}\left\langle\sum_{i=1}^{N}(\mathbf{r}_{i}(t+t_{0})-\mathbf{r}_{i}(t_{0}))^{2}\right\rangle, \tag{2}\] where \(N\) is the particle number and the brackets \(\langle\cdots\rangle\) denote an average over all particles, with initial positions \(\mathbf{r}_{i}(t_{0})\) at time \(t_{0}\) and \(\mathbf{r}_{i}(t+t_{0})\) at a time interval \(t\) later. We also average over different initial times \(t_{0}\). The selected temperature values are reported in Fig. 2(a), in which the mean-squared displacement (MSD) is plotted, clearly showing the same long-time diffusivity for the four systems. We have also calculated the equal-time static structure factor \[S(\mathbf{q})=\frac{1}{N}\left\langle\sum_{i=1}^{N}\sum_{j=1}^{N}\exp\left[-\mathrm{i}\mathbf{q}\cdot(\mathbf{r}_{i}(t)-\mathbf{r}_{j}(t))\right]\right\rangle, \tag{3}\] which is shown in Fig. 2(b). After suitable rescaling due to the different densities, the overall structural properties of the four systems are comparable.
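As an illustration of how the time-origin-averaged MSD of Eq. (2) can be obtained from stored configurations, a minimal sketch is given below; the array layout and the assumption of unwrapped coordinates are ours and do not reflect the actual analysis scripts used in this work.

```python
import numpy as np

def mean_squared_displacement(traj, max_lag=None):
    """Time-origin-averaged MSD, Eq. (2).

    traj: array of shape (n_frames, N, 3) with unwrapped particle positions.
    Returns msd[lag] averaged over all particles and all available origins t0.
    """
    n_frames = traj.shape[0]
    max_lag = n_frames - 1 if max_lag is None else max_lag
    msd = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]          # r_i(t0 + lag) - r_i(t0), all origins
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# Example usage with a dummy trajectory of 500 frames and 100 particles.
traj = np.cumsum(np.random.randn(500, 100, 3) * 0.01, axis=0)
msd = mean_squared_displacement(traj, max_lag=200)
```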
Thermodynamic states along the isodiffusivity lines can be mapped on each other as far as the static correlations are concerned; the corresponding low- and high-density states lying on an isotherm can be termed _conjugate pairs_, in analogy to the \(T=0\) pairs of states that are coupled by exact duality relations [43]. This property, together with the same long-time diffusivity, would suggest that a re-mapping of all systems is possible and that also the particle dynamics of the associated systems can be collapsed onto a single curve upon suitable rescalings. This, however, is not the case. We consider next the intermediate scattering function (ISF), defined as \[F_{s}(q,t)=\frac{1}{N}\left\langle\sum_{j=1}^{N}\exp\left[-\mathrm{i}\mathbf{q}\cdot(\mathbf{r}_{j}(t+t_{0})-\mathbf{r}_{j}(t_{0}))\right]\right\rangle, \tag{4}\] which is reported in Fig. 3, revealing important differences in the relaxation dynamics. In particular, at the two lower densities, a clear two-step relaxation can be discerned, which becomes smoothed out at the intermediate one and practically disappears at the highest of the four, indicating the lack of a caging mechanism for the latter. Therefore, the ISF displays a faster relaxation for high density, while for low and intermediate densities the two-step behavior emerges, leading to an effectively slower relaxation. The coherent part of the ISF displays a similar trend, with no indication of decoupling between self and collective behaviour, at least in the regime investigated within this work. There is, therefore, one single relaxation time associated with both the incoherent and the coherent intermediate scattering functions and not two separate ones, as is the case with other single-component ultrasoft systems, such as semiflexible minirings [44].

Figure 2: (a) main plot: the mean squared displacement \(\langle\Delta\mathbf{r}^{2}(t)\rangle\) of the GCM as a function of time, calculated at four different isodiffusive points, as indicated in the legend. Inset: corresponding local exponent \(\gamma(t)\) of the MSD as defined in (5). (b) structure factors \(S(q)\) calculated at the same isodiffusive points as panel (a); in the horizontal axis, the wavenumbers \(q\) of each structure factor are rescaled over the value related to the corresponding first peak (\(q_{\text{peak}}\)).

The aforementioned differences between low- and intermediate densities can already be noticed in Fig. 2(a) for the behavior of the MSD at times between the ballistic and diffusive regimes. For the lower densities, a much stronger caging effect, indicated by the development of a plateau in the MSD, emerges than for the higher ones. It is worth mentioning that the lack of a plateau at the higher density precisely compensates the fact that the ballistic motion is faster for the lower densities, so that, while the former enters a plateau, the motion at the higher density catches up. Consequently, all MSD's enter their diffusive regime at the same distance squared and at the same time, following thereafter the same diffusive pattern. These differences become quantitative by considering the local exponent \(\gamma(t)\) of the MSD, defined as \[\gamma(t)=\frac{\mathrm{d}\ln\langle\Delta\mathbf{r}^{2}(t)\rangle}{\mathrm{d}\ln t}, \tag{5}\] and shown in the inset of Fig. 2(a).
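The self-part of the ISF in Eq. (4) and the local exponent in Eq. (5) can be estimated along similar lines; the sketch below (isotropic average over three Cartesian wavevectors of modulus \(q\), finite-difference logarithmic derivative) is one possible implementation, with all numerical choices being illustrative rather than those actually used here.

```python
import numpy as np

def self_isf(traj, q, max_lag=200):
    """Self intermediate scattering function, Eq. (4), averaged over the
    wavevectors (q,0,0), (0,q,0), (0,0,q).
    q should be commensurate with the box, e.g. an integer multiple of 2*pi/L."""
    qvecs = q * np.eye(3)
    fs = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]          # shape (n_origins, N, 3)
        phase = disp @ qvecs.T                   # q . dr for each wavevector
        fs[lag - 1] = np.cos(phase).mean()       # imaginary part averages out by isotropy
    return fs

def local_exponent(times, msd):
    """Local exponent gamma(t) = d ln MSD / d ln t, Eq. (5), by finite differences."""
    return np.gradient(np.log(msd), np.log(times))
```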
Whereas the local exponents for the two lowest densities show a clear transition from ballistic (\(\gamma(t)=2\)) to a diffusive (\(\gamma(t)=1\)) regimes through an intermediate plateau (\(\gamma(t)\cong 0\)), both the ballistic regime and the clear intermediate plateau disappear at the high-density part of the isodiffusivity line. This is a clear indication that at short-to-intermediate scales, the two motions differ, a prediction to be quantified and analyzed below through a statistical analysis of individual particle trajectories. In Fig. 4 we report typical single-particle displacements, \(\Delta r_{i}(t)=\sqrt{|\mathbf{r}_{i}(t)-\mathbf{r}_{i}(t_{0})|^{2}}\). By looking at the left panels of Fig. 4 it is clear that the dynamics shifts from an intermittent-like behavior at low densities to a more continuous one at high densities. In order to classify this transition we make use of an analysis recently developed to unravel intermittent features in single-particle trajectories and based on a Local Convex Hull (LCH) method [45]. The main idea behind this analysis is to use geometric properties of the smallest convex shape (precisely the LCH) enclosing a small set of trajectory points to estimate the space explored by each particle in a specific time window (see Fig. 5). More specifically, our analysis focuses on the study of the LCH volume \(S_{V}(t)\) which, as mentioned in [45], with respect to other geometric quantities such as the diameter, is more sensitive to changes in the dimensionality and anisotropy of the particle motion. In Fig. 4, together with the single-particle displacements, we also report the corresponding time series \(S_{V}(t)\) calculated from the LCH method as described in Materials and Methods. We can observe that, if the particle motion has an intermittent-like behaviour, \(S_{V}(t)\) will display a few high peaks, while, if the particle motion follows a more continuous trend, \(S_{V}(t)\) will mostly oscillate around its single-particle average value \(\overline{S_{V}}\) with multiple lower peaks. Such a trend suggests that performing a statistical analysis of the \(S_{V}(t)\) peaks can help in classifying different dynamical behaviors. Then, for each time series we find the peak locations and corresponding peak values \(S_{V}^{*}\). Moreover, we identify with \(\Delta t_{\text{SP}}\) the duration of the so-called _slow phases_, that is the time \(S_{V}(t)\) stays below the threshold \(\overline{S_{V}}\) before crossing it. In Fig. 6 we report the probability distribution of \(\Delta t_{\text{SP}}\), of the peak height evaluated from the threshold value (_e.g._, \(S_{V}^{*}-\overline{S_{V}}\)) and of the number of peaks. We can see immediately that \(p(\Delta t_{\rm SP})\) classifies the systems into two different dynamics: the distribution presents a fatter tail for the two lower density systems with respect to the higher-density systems.

Figure 3: The self-intermediate scattering function \(F_{s}(q_{\text{peak}},t)\) for the four different iso-diffusive points, evaluated at the corresponding wavenumber \(q_{\text{peak}}\) where the structure factor of each system has its main peak.

Figure 4: Typical single-particle displacements (left column) and corresponding \(S_{V}(t)\) time series from the LCH analysis (right column) calculated as described in Materials and Methods. The black solid line indicates the threshold dividing slow and fast phases and is calculated as the average over the whole time series \(S_{V}(t)\), that is \(\overline{S_{V}}\).
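A minimal sketch of the LCH construction used above is the following, based on `scipy.spatial.ConvexHull` applied to a sliding window of trajectory points; the window length and the way degenerate point sets and slow phases are handled are our own illustrative choices and not necessarily those detailed in Materials and Methods.

```python
import numpy as np
from scipy.spatial import ConvexHull

def lch_volume_series(traj_i, window=21):
    """Local convex hull volume S_V(t) for a single-particle trajectory.

    traj_i: array of shape (n_frames, 3); at each time t the hull is built
    from the `window` points centred on t (truncated at the trajectory edges).
    """
    n = traj_i.shape[0]
    half = window // 2
    s_v = np.zeros(n)
    for t in range(n):
        pts = traj_i[max(0, t - half): min(n, t + half + 1)]
        try:
            s_v[t] = ConvexHull(pts).volume
        except Exception:                      # degenerate (e.g. coplanar) point sets
            s_v[t] = 0.0
    return s_v

def slow_phase_durations(s_v):
    """Durations of the slow phases where S_V(t) stays below its time average."""
    below = s_v < s_v.mean()
    durations, run = [], 0
    for b in below:
        if b:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return np.array(durations)
```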
In addition, by looking at \(p(S_{V}^{*}-\overline{S_{V}})\) we observe that the peak height is more likely to assume larger values for the lower density systems than for the higher density ones. However, in this case, there is not a full rescale of the high-density systems, suggesting that \(\rho=0.40\) still belongs to a transition phase between the two dynamical regimes. We emphasize that the quantity \(p(S_{V}^{*}-\overline{S_{V}})\) has been rescaled by the average value \(\langle\overline{S_{V}}\rangle\) in order to eliminate trivial contributions stemming from the density-dependence of the volume explored by each particle (see also Fig. 7). Finally, the distribution of the number of peaks \(p(\#S_{V}^{*})\) complements the information provided by \(p(\Delta t_{\rm SP})\) suggesting an intermittent-like motion when fewer peaks (and longer slow phases) are detected and a more continuous one when a larger number of peaks (and shorter slow phases) is observed. Indeed, \(p(\#S_{V}^{*})\) shows that the system with the highest density is shifted towards larger values with respect to the two systems with lower values, which follow a similar distribution centered around smaller values; once again, the system with \(\rho=0.40\) displays an intermediate behavior confirming that at this density the system is in a transition phase between the two dynamical regimes. We further investigate the particle-to-particle variability of the threshold value \(\overline{S_{V}}\), which indicates the average volume explored by each particle within the simulation time window. To do so, we extract the value \(\overline{S_{V}}\) for each (mobile) particle and then build the histogram, as reported in Fig. 7. It can be seen that the distribution \(p(\overline{S_{V}})\) looks quite narrow for the highest density, suggesting a more homogeneous dynamics. Conversely, \(p(\overline{S_{V}})\) for the lower density is much broader, implying the presence of slow and fast particles in agreement with the concept of dynamic heterogeneity typical of canonical supercooled liquids. Of particular importance is the fact that in this case, the distribution has contributions even at \(\overline{S_{V}}=0\), suggesting that, within our time window, there are particles that do not move at all (or only very little) from their initial cage, for the lower densities, whereas this is not the case for the higher densities, for which the distribution vanishes for small values of \(\overline{S_{V}}\). Additional corroboration for the gradual disappearing of the standard _cage-escape_ mechanism of relaxation for the supercooled GCM-liquid at intermediate densities is offered by considering the single-particle displacement distribution \(P(\Delta r,t)\) and the corresponding non-Gaussian parameter \(\alpha_{2}(t)=3\langle\Delta r^{4}(t)\rangle/5[\langle\Delta r^{2}(t)\rangle] ^{2}-1\), which is used to quantify the deviation of particles displacements from a Gaussian distribution. Deviations of this quantity from zero are usually attributed to the presence of dynamic heterogeneity in the system but it has been shown, precisely in the context of the high-density GCM, that small values of \(\alpha_{2}(t)\) are compatible with strong dynamical heterogeneities in the case of a mean-field, geometric glass transition [31]. In Fig. 9 we show the calculated non-Gaussian parameter for the four isodiffusivity state points, finding that those of the low-density points differ drastically from those of the high-density points. 
In particular, there is a non-monotonic behavior of the curves and of their maximum values, which occurs roughly at the end of the caging time for \(\rho=0.10\) and \(\rho=0.15\) and somewhat earlier for \(\rho=0.40\) and \(\rho=1.00\). Thus, \(\alpha_{2}(t)\) follows the same trend in density as the iso-diffusivity line and other quantities characterizing the system, see Fig. 1 and Refs. [41; 46; 4]. Single-particle motions thus tend to become more and more Gaussian as the density grows, in agreement with the absence of a bimodal (cage/cage-escape jump) distribution of the self-van Hove function [31] and thus with the gradual disappearance of the cage-hopping dynamics characteristic of the low-density supercooled GCM fluid. In fact, we explicitly confirm the suppression of hopping dynamics at intermediate and higher densities along our isodiffusivity line in Fig. 8, demonstrating indeed that the standard caging mechanism of dynamic slowing down in the supercooled liquid is valid only on the low-density side of the vitrification line.

Figure 5: Single-particle trajectory and corresponding LCHs calculated for two sets of points centered in different time instants (red points). The volume of the LCH in (a) is clearly smaller than the one in (b), suggesting that a jump between two local cages can be identified as a peak in the time series of \(S_{V}(t)\), as is visible in Fig. 4.

Figure 6: Probability distribution of (a) slow phase duration \(\Delta t_{\text{SP}}\), (b) peak height \(S_{V}^{*}\) with respect to the threshold value \(\overline{S_{V}}\) and (c) number of peaks. The distribution in (b) is rescaled by the average value \(\langle\overline{S_{V}}\rangle\), which clearly depends on the density (see Fig. 7).

Figure 7: Probability distribution of \(\overline{S_{V}}\) obtained by considering all the values extracted from each single-particle trajectory as indicated in Materials and Methods.

Figure 8: Evolution in time of the single-particle displacement distributions for the four isodiffusivity state points: (a) \(t=10^{2}\), (b) \(t=10^{3}\), (c) \(t=10^{4}\), (d) \(t=10^{5}\). In particular, we calculated the probability distributions of the logarithm of single-particle displacements, which allow us to highlight the hopping motion, when present. Indeed, in panel (c) we can clearly see the emergence of a bimodal distribution for all but the highest-density system.

Figure 9: The non-Gaussian parameter \(\alpha_{2}(t)\) of the GCM supercooled fluids at the four isodiffusivity state points considered.

Finally, in Fig. 10 we report the non-ergodicity factor calculated within the MCT framework. As \(T\) decreases, the low-density curves tend to behave similarly; the same happens for the high-density graphs.
However, comparing at the same temperature, the non-ergodicity factors show noticeable differences, particularly for \(q\to 0\), suggesting that as \(\rho\) grows and the system becomes more and more incompressible, density modulations of very long wavelengths (\(q\to 0\)) become more and more difficult to arrest, leading eventually to a different scenario of dynamic arrest, which is much closer to the ideal MCT picture and features long-wavelength _mobility_ modulations instead [29; 30; 31].

## Discussion and conclusions

The intermittent dynamics identified for the low-density GCM in the supercooled regime confirms the picture provided by the hard-sphere re-mapping, which appears to remain valid also for the glassy regime. Indeed, in this regime the GCM displays a glassy behavior that is in agreement with that of canonical HS-like glass formers. In particular, the dynamics is fully described by the hopping between local cages, leading to standard features such as the emergence of the plateau in the MSD, dynamical heterogeneity, and non-Gaussianity. All these features have been re-classified in this work by making use of the LCH analysis. Upon increasing the density, the GCM undergoes a transition towards a different glassy state. The intermittent-like behavior is taken over by a more continuous, more Gaussian dynamics. Our analysis shows how the slowing down of the GCM at high densities cannot be explained using the standard caging mechanism. As a matter of fact, the higher the density, the less confining the potential becomes due to the increasing contribution of neighbouring particles to the energy landscape. The dynamics in this regime approaches more and more a mean-field-like picture, as already anticipated in [31]. With our single-particle analysis, we were able to fully capture the transition between these two glassy states (HS-like at low densities and mean-field-like at high densities) and to show how the intermediate density regime is characterized by a smooth transition between the two dynamics. Whether this transition in the supercooled fluid regime is the echo of an underlying glass-glass transition between two arrested states, akin to the distinct glassy states encountered, e.g., in colloid-polymer mixtures [47], is an open problem and it will be the subject of further investigations.

## Materials and methods

### Molecular Dynamics simulations

Molecular dynamics (MD) simulations were performed using the open-source package LAMMPS [48] for a cubic box with periodic boundary conditions and several combinations of density \(\rho=N/V\) and temperature \(T\), where \(N\) and \(V\) are the total number of Gaussian particles and the volume of the box, respectively. The equations of motion were integrated using the velocity Verlet scheme [49], with a time step \(\Delta t/\tau=0.01\), where \(\tau=\sqrt{m\sigma^{2}/\epsilon}\) and \(m\) is the mass unit for the particles. Due to the propensity of the system to crystallize at low temperatures, it is necessary to add some degree of frustration to the system, which helps particles remain in a disordered phase as the temperature decreases. In this work, frustration is introduced through random pinning, which avoids introducing randomness in the interaction potential or compositional disorder [50; 51].
The initial configuration of the system was generated in a multi-step process: at first, \(N_{p}=\lfloor fN\rfloor\) particles were randomly placed in the simulation box and an equilibration process took place at a relatively high temperature (\(T=0.01\)). Then they were permanently pinned and the remaining \(N_{m}=N-N_{p}\) particles (the mobile ones) were randomly inserted in the box, preventing excessive overlap with the previously inserted particles. By setting the target temperature \(T\), an equilibration run was performed for the mobile particles in which the system was thermalized by means of the Nosé-Hoover thermostat. To assess the effect of the pinning protocol, a second pinning method was also employed. In that case, the whole system is equilibrated at a high temperature, then the fraction \(f\) of particles is permanently frozen and the system is finally quenched to the target temperature for the equilibration of the mobile particles to occur. We worked with a total number of particles \(N=3456\) or \(5000\), and \(f=0.05\) or \(0.10\). The typical equilibration time for the mobile particles ranged from \(2\times 10^{6}\) to \(10^{7}\) time steps. After reaching a steady state, indicated by the absence of any drift in internal energy and pressure, a production run was performed, which typically ranged from \(10^{7}\) to \(10^{8}\) time steps.

Figure 10: Non-ergodicity factor \(\phi(q)\) obtained from MCT (see Materials and Methods). We compare the same temperatures at low density (orange curves) and intermediate density (red curves).

Provided that all average measurements for a given density and temperature are performed over both thermal fluctuations and different realizations of the pinning disorder, it is expected that the thermodynamics of the mobile particles remains unperturbed [50].

### Local convex hull analysis

To differentiate between different diffusive phases we analyse the single-particle trajectories making use of the Local Convex Hull (LCH) method developed in [45]. The convex hull of a finite set of points \(\{{\bf x}_{1},\ldots,{\bf x}_{n}\}\subset\mathbb{R}^{d}\) is the minimal convex shape that encloses all the points \({\bf x}_{1},\ldots,{\bf x}_{n}\). We consider the volume \(Q_{V}(i)\) of the LCH computed over \(2\tau_{0}+1\) trajectory points centered around \({\bf x}_{i}\). Then, we classify the point \({\bf x}_{i}\) by using all the estimators to which it contributes (in total \(2\tau_{0}+1\)), that is by introducing the discriminator: \[S_{V}(i)=\frac{1}{2\tau_{0}+1}\sum_{k=i-\tau_{0}}^{i+\tau_{0}}Q_{V}(k). \tag{6}\] The LCH method consists of: 1) mapping each trajectory into a one-dimensional time series by computing the discriminator \(S_{V}(n)\) for all points \({\bf x}_{n}\); and 2) selecting a threshold such that the points \({\bf x}_{i}\) with \(S_{V}(i)\) above it are classified into the _fast_ phase, whereas the points \({\bf x}_{i}\) with \(S_{V}(i)\) at or below it fall into the _slow_ phase. Following Ref. [45], we set the threshold to be the average value of \(S_{V}(n)\) over the single trajectory, namely \[\overline{S_{V}}=\frac{1}{N-4\tau_{0}}\sum_{n=2\tau_{0}+1}^{N-2\tau_{0}}S_{V}(n). \tag{7}\] The parameter \(\tau_{0}\) was chosen empirically by looking at the MSD curves and selecting the time at which the system escapes from the plateau, generally used as an estimate of the caging time. For our system in particular we have \(\tau_{0}=10^{3}\) (in simulation time units).
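As an illustration, a minimal sketch of the discriminator in Eqs. (6) and (7) for a single trajectory (schematic and unoptimized; variable names are ours, and `scipy.spatial.ConvexHull`, which wraps Qhull, is our assumed convex-hull routine):

```python
import numpy as np
from scipy.spatial import ConvexHull  # ConvexHull is built on the Qhull library

def lch_discriminator(traj, tau0):
    """S_V(i) of Eq. (6) and the threshold of Eq. (7) for a trajectory traj[n, 3]."""
    n = len(traj)
    # Q_V(k): volume of the convex hull of the 2*tau0+1 points centred on x_k
    # (degenerate windows, e.g. a completely immobile particle, may need special handling)
    qv = np.full(n, np.nan)
    for k in range(tau0, n - tau0):
        qv[k] = ConvexHull(traj[k - tau0:k + tau0 + 1]).volume
    # S_V(i): average of the 2*tau0+1 estimators Q_V(k) to which x_i contributes
    sv = np.full(n, np.nan)
    for i in range(2 * tau0, n - 2 * tau0):
        sv[i] = np.nanmean(qv[i - tau0:i + tau0 + 1])
    threshold = np.nanmean(sv[2 * tau0:n - 2 * tau0])   # single-trajectory average, Eq. (7)
    return sv, threshold
```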
To calculate the LCH, we used the available algorithm in Python based on the Qhull library. ### Mode Coupling Theory To quantify the liquid-to-glass transition of the GCM system, the non-ergodicity factor \(\phi(q)\) was evaluated, which is defined as the long-time limit of the density autocorrelation function, i.e., \[\phi(q)=\lim_{t\to\infty}F(q,t)=\lim_{t\to\infty}\frac{\langle\rho({\bf q},t) \rho(-{\bf q},0)\rangle}{\langle\rho({\bf q},0)\rho(-{\bf q},0)\rangle}, \tag{8}\] where \(\rho({\bf q},t)=\sum_{j=1}^{N}\exp[{\rm i}{\bf q}\cdot{\bf r}_{j}(t)]\) and the sum is performed over the coordinates \({\bf r}_{j}(t)\) of all particles in the system. In the case \(\phi(q)\neq 0\), the system is considered non-ergodic and its state is identified as glassy, whereas \(\phi(q)=0\) corresponds to an ergodic fluid. Given the structural data obtained by solving the Ornstein-Zernike equation through the hypernetted-chain closure (HNC), the calculation of the non-ergodicity factor is readily achieved within the framework of the Mode Coupling Theory (MCT). According to it, \(\phi(q)\) fulfills the self-consistent equation [52]: \[\frac{\phi(q)}{1-\phi(q)}=\frac{1}{(2\pi)^{3}}\int{\rm d}^{3}k\,{\cal V}({\bf q },{\bf k})\phi(k)\phi(k^{\prime}), \tag{9}\] where \({\bf k}^{\prime}={\bf q}-{\bf k}\) and the kernel \({\cal V}({\bf q},{\bf k})\) can be expressed entirely in terms of the Fourier transform of direct correlation function \(\hat{c}(k)\), i.e., \[{\cal V}({\bf q},{\bf k})=\frac{\rho S(q)}{2q^{4}}\left[({\bf q}\cdot{\bf k}^{ \prime})\,\hat{c}(k^{\prime})+({\bf q}\cdot{\bf k})\,\hat{c}(k)\right]^{2}S(k )S(k^{\prime}), \tag{10}\] and \(S(k)=[1-\rho\,\hat{c}(k)]^{-1}\). ###### Acknowledgements. We thank S. Prestipino for providing the data from Ref. [28] shown in Fig. 1 and E. Zaccarelli for helpful discussions. V.S. acknowledges support of the European Commission through the Marie Sklodowska-Curie COFUND project REWIRE, Grant Agreement No. 847693. M.C. thanks Universidad Antonio Narino for financial support through Project No 2022202.
2309.07602
Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec?
Recently sequential recommendations and next-item prediction task has become increasingly popular in the field of recommender systems. Currently, two state-of-the-art baselines are Transformer-based models SASRec and BERT4Rec. Over the past few years, there have been quite a few publications comparing these two algorithms and proposing new state-of-the-art models. In most of the publications, BERT4Rec achieves better performance than SASRec. But BERT4Rec uses cross-entropy over softmax for all items, while SASRec uses negative sampling and calculates binary cross-entropy loss for one positive and one negative item. In our work, we show that if both models are trained with the same loss, which is used by BERT4Rec, then SASRec will significantly outperform BERT4Rec both in terms of quality and training speed. In addition, we show that SASRec could be effectively trained with negative sampling and still outperform BERT4Rec, but the number of negative examples should be much larger than one.
Anton Klenitskiy, Alexey Vasilev
2023-09-14T11:07:10Z
http://arxiv.org/abs/2309.07602v1
# Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec?

###### Abstract.

Recently sequential recommendations and next-item prediction task has become increasingly popular in the field of recommender systems. Currently, two state-of-the-art baselines are Transformer-based models SASRec and BERT4Rec. Over the past few years, there have been quite a few publications comparing these two algorithms and proposing new state-of-the-art models. In most of the publications, BERT4Rec achieves better performance than SASRec. But BERT4Rec uses cross-entropy over softmax for all items, while SASRec uses negative sampling and calculates binary cross-entropy loss for one positive and one negative item. In our work, we show that if both models are trained with the same loss, which is used by BERT4Rec, then SASRec will significantly outperform BERT4Rec both in terms of quality and training speed. In addition, we show that SASRec could be effectively trained with negative sampling and still outperform BERT4Rec, but the number of negative examples should be much larger than one.

recommender systems, sequential recsys, BERT4Rec, SASRec
## 2. Related Work

Early approaches to sequential recommendation used Markov chains to model user behavior. Later various deep learning models have been introduced, including recurrent (for example, GRU4Rec (He et al., 2017; He et al., 2018)) and convolutional (for instance, Caser (Caser, 2017)) neural networks. After the arrival of the Transformer architecture (Rosenberg et al., 2017), models based on the self-attention mechanism were shown to achieve state-of-the-art performance and became prevalent. Since the original SASRec (He et al., 2017) and BERT4Rec (He et al., 2018) papers, a lot of research has been done to further investigate the possibilities of Transformer-based models. Some works focused on improving the self-attention mechanism (LSAN (Krizhevsky et al., 2014), LightSANs (He et al., 2017), Rec-denoiser (Cheng et al., 2017)), while others leverage additional side information (TiSASRec (He et al., 2018), NOVA-BERT (Zhou et al., 2018)). Many publications introduced contrastive learning for sequential recommendations (CL4SRec (Rosenberg et al., 2017), CoSeRec (Zhou et al., 2018), DuoRec (Zhou et al., 2018)). In order to accurately evaluate new state-of-the-art models, it is important to have good baselines. According to a recent study on BERT4Rec replicability (He et al., 2018), in most of the publications BERT4Rec outperforms the SASRec model. Also, it was shown that some papers used under-fitted versions of BERT4Rec, but with proper training it can achieve performance comparable with newer algorithms.
In our work, we address similar questions about whether it is possible to achieve good results with the original SASRec architecture.

## 3. Loss Functions

Let's suppose that we have a set of users \(U\) and a set of items \(I\) with size \(|I|\). Each user \(u\in U\) is represented by the corresponding sequence of interactions with items \(s_{u}=\{i_{1}^{(u)},i_{2}^{(u)},\ldots,i_{n_{u}}^{(u)}\}\). Each sequential deep learning model (GRU4Rec, SASRec, BERT4Rec) acts as an encoder of the input sequence \(s_{u}\). The output of the last hidden layer is some representation of the input sequence \(H_{u}=SequenceEncoder(s_{u}),H_{u}\in\mathbb{R}^{n_{u}\times d}\), where \(d\) is the hidden dimensionality of the model. It is used to calculate predicted relevances for items \(R_{u}=H_{u}E^{T}\), where \(E\in\mathbb{R}^{|I|\times d}\) is the item embedding matrix. Element \(r_{ti}^{(u)}\) of matrix \(R_{u}\) corresponds to the predicted relevance of item \(i\) at time step \(t\). The original SASRec implementation doesn't make calculations with a full embedding matrix during training. Instead, it takes a true positive item, samples one negative item, and computes their relevances \(r_{t,i_{t}}^{(u)}\) and \(r_{t,-}^{(u)}\). Then for these two items, the classic binary cross-entropy loss is used: \[\mathcal{L}_{BCE}=-\sum_{u\in U}\sum_{t=1}^{n_{u}}\left[\log(\sigma(r_{t,i_{t}}^{(u)}))+\log(1-\sigma(r_{t,-}^{(u)}))\right], \tag{1}\] where \(\sigma()\) is the sigmoid function. BERT4Rec implementations apply softmax over predicted relevances to get an output probability distribution for all items and compute cross-entropy loss: \[\mathcal{L}_{CE}=-\sum_{u\in U}\sum_{t\in T_{u}}\log\frac{\exp(r_{t,i_{t}}^{(u)})}{\sum_{i\in I}\exp(r_{t,i}^{(u)})} \tag{2}\] where \(r_{t,i_{t}}^{(u)}\) is the predicted relevance for the ground truth item, and the second summation is done over a set of steps with masked items \(T_{u}\). If we use this loss for unidirectional models (SASRec and GRU4Rec), summation will be done over all steps in a sequence: \(T_{u}=\{1,2,\ldots,n_{u}\}\). The choice of the loss function is independent of the choice of the model architecture (GRU4Rec/SASRec/BERT4Rec) and training objective (item masking or next item prediction). Therefore, we propose to compare different models with the same loss. In section 5 we show that if we train the SASRec model with the same cross-entropy loss as BERT4Rec, it achieves better performance and trains much faster. Hence, following the title of the paper, we propose to add more negative ("dross") items to the loss function to improve the quality of the models. While training with cross-entropy loss over all items in the catalog leads to good performance, it can be computationally expensive or even unfeasible when the number of items becomes very large. To avoid this problem it is possible to sample negative items for loss calculation. For each user sequence in a batch, we sample \(N\) items a user hasn't interacted with and use the same set of negatives for each time step of a given sequence. As a result, we use the following sampled cross-entropy loss: \[\mathcal{L}_{CE-sampled_{N}}=-\sum_{u\in U}\sum_{t=1}^{n_{u}}\log\frac{\exp(r_{t,i_{t}}^{(u)})}{\exp(r_{t,i_{t}}^{(u)})+\sum_{i\in I_{N}^{-(u)}}\exp(r_{t,i}^{(u)})}, \tag{3}\] where \(I_{N}^{-(u)}\) is a set of \(N\) negative examples sampled for a given user. This approach is computationally more efficient than sampling a separate set of negatives for each time step and leads to good performance when \(N\) is large enough.
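A minimal PyTorch-style sketch of the sampled cross-entropy loss in Eq. (3) (an illustration, not the exact training code; tensor names and shapes are our assumptions):

```python
import torch
import torch.nn.functional as F

def sampled_ce_loss(hidden, item_emb, pos_items, neg_items, pad_mask):
    """Sampled cross-entropy loss of Eq. (3).

    hidden:    [B, T, d]  sequence-encoder outputs H_u
    item_emb:  [|I|, d]   item embedding matrix E
    pos_items: [B, T]     ground-truth next items i_t
    neg_items: [B, N]     N sampled negatives per sequence (shared across time steps)
    pad_mask:  [B, T]     1.0 for real positions, 0.0 for padding
    """
    pos_e = item_emb[pos_items]                               # [B, T, d]
    neg_e = item_emb[neg_items]                               # [B, N, d]
    pos_logit = (hidden * pos_e).sum(-1, keepdim=True)        # [B, T, 1]
    neg_logit = torch.einsum("btd,bnd->btn", hidden, neg_e)   # [B, T, N]
    logits = torch.cat([pos_logit, neg_logit], dim=-1)        # [B, T, 1+N]
    # the positive item always sits at index 0 of the restricted softmax
    log_prob = F.log_softmax(logits, dim=-1)[..., 0]          # [B, T]
    return -(log_prob * pad_mask).sum() / pad_mask.sum()
```

Taking `neg_items` to cover (nearly) the whole catalogue essentially recovers the full cross-entropy loss of Eq. (2).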
A similar strategy for negative sampling was used in (He et al., 2017) to train the GRU4Rec model.

## 4. Experimental Settings

### Datasets

We conduct experiments on five popular datasets, which are often used as sequential recommendation benchmarks. Amazon **Beauty** is a product review dataset crawled from Amazon.com (Yang et al., 2019). **Steam** is a dataset collected from Steam, a large online video game distribution platform (Zhou et al., 2018). **MovieLens-1m** and **MovieLens-20m** are two versions of a widely used movie recommendation dataset (He et al., 2018). **Yelp** is a business reviews dataset (He et al., 2018). Unlike many previous publications, for example (Krizhevsky et al., 2014; Zhou et al., 2018; Rosenberg et al., 2017), we don't filter it by date and use the whole dataset to have more data and obtain more reliable evaluation results. MovieLens-1m, MovieLens-20m, Amazon Beauty, and Steam have been used in the original BERT4Rec publication (He et al., 2018) and a recent study on BERT4Rec replicability (He et al., 2018). For better reproducibility and fair comparison, we use exactly the same preprocessed versions of datasets from the BERT4Rec repository (He et al., 2018). For all datasets, the presence of a review or rating was converted to implicit feedback, and users with fewer than 5 interactions were discarded. The final statistics of the datasets are shown in Table 1.

### Evaluation

To compare our results with previous works, we follow common practice (He et al., 2018; He et al., 2018) and split each dataset into train, validation, and test partitions using the leave-one-out approach. For each user, the last item of the interaction sequence is used as the test data, the item before the last one is used as the validation data, and the remaining data is used for training. In some previous publications, including the original SASRec and BERT4Rec papers, sampled metrics were used for evaluation. For each positive item in the test set, 100 negative items are sampled, and only these items are used for metrics calculation. However, it was shown that sampled metrics can lead to inconsistent performance measures because they are not always consistent with unsampled metrics and depend on the sampling scheme and the number of negative examples (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018). So we use full unsampled metrics for our experiments. Performance is evaluated on two top-k ranking metrics, which are most widely used in other publications: Normalized Discounted Cumulative Gain (NDCG@k) and Hit Rate (HR@k) with k=10, 100. Note that for the leave-one-out strategy, HitRate is equivalent to another popular ranking metric - Recall. We take k=10 because it is the most popular value and is present in almost all publications. In previous works with sampled metrics, other popular values were k=5 and k=20. It was a reasonable choice because the ranking was made for 101 sampled items. But for full unsampled metrics and datasets with a large number of items, small values of k may not be very informative, so we chose k=100 as the second value.

### Models

For a fair comparison, we train and evaluate all sequential models with the same code, which is present in our GitHub repository (Beng et al., 2017). We implement models with PyTorch and train them with the popular PyTorch Lightning framework (Krizhevsky et al., 2015). We compare the following models in our experiments: **BPR-MF** - a classic matrix factorization-based approach with a pairwise BPR loss.
We use the fast GPU implementation of this model from the Implicit library (Krizhevsky et al., 2015). **SASRec** - the original version of SASRec, which uses binary cross-entropy loss (1). Code for the model architecture was taken from the GitHub repository with the SASRec PyTorch implementation (Krizhevsky et al., 2015) with slight adaptation to our training code. **BERT4Rec** - the BERT4Rec model. For the BERT backbone, we use the popular and efficient implementation from the HuggingFace Transformers library (Krizhevsky et al., 2015; Wang et al., 2017). **GRU4Rec** - our implementation of the GRU4Rec model with cross-entropy loss (2). We simply change the backbone from the Transformer model to a standard GRU layer, keeping all other code the same. **SASRec+** - our version of SASRec, referred to as SASRec+ for short. It is exactly the same model as the original SASRec but trained with cross-entropy loss (2). **SASRec+ <N>** - exactly the same model as the original SASRec, but trained with the sampled cross-entropy loss with \(N\) negative items (3).

### Implementation Details

For BPR-MF, we selected the best parameters (the number of latent components, regularization, and the number of iterations) with Optuna (Beng et al., 2017). We trained the models with the learning rate 1e-3. The calculation was run 5 times for different seeds, and the metric values were averaged. For all sequential models, we have tuned the hidden size, the number of self-attention blocks, and the number of attention heads. For all models and all datasets except MovieLens-20M, we used a hidden size of 64. For the MovieLens-20M dataset, which is much bigger than the others, a small hidden size leads to serious underfitting, so 256 was the best latent size. For SASRec, we used 2 self-attention blocks and 1 attention head. For BERT4Rec we used 2 self-attention blocks and 2 attention heads. The masking probability for BERT4Rec was set to 0.2. For the MovieLens datasets, which have a lot of long sequences, we set a maximum sequence length of 200. For all other datasets, we set a maximum sequence length of 50. All models were trained with a batch size of 128 and the Adam optimizer with the learning rate 1e-3. These settings are consistent with parameters used in previous papers (Krizhevsky et al., 2015; Krizhevsky et al., 2015; Wang et al., 2017). To determine the number of training epochs, we use the early stopping criterion. We measure the NDCG@10 metric on the validation set and stop training if the validation metric does not improve for a given number of epochs (patience parameter). For SASRec and GRU4Rec models, we set patience to 10 epochs and a maximum number of epochs to 100. For BERT4Rec, we set a maximum number of training epochs to 200 and patience to 20 to be sure that the model is not underfitted, because BERT4Rec needs more time to converge, as shown in section 5.2. After early stopping, we restore model weights from the best epoch on the validation set; this step can be important in some circumstances (see section 5.2). For datasets other than MovieLens-1m, we calculate validation metrics on a sample from the full validation set (we take 10000 random users) to speed up training.

## 5. Results

### Overall Performance Comparison

Table 2 summarizes the results of experiments on all five datasets. For the sampled cross-entropy loss (3) we show metrics for \(N=3000\). Performance for other values of \(N\) is analyzed in section 5.3.
Our experiments confirm previous results (Krizhevsky et al., 2015) that BERT4Rec is persistently better than vanilla SASRec with binary cross-entropy loss (1). However, when we train SASRec with cross-entropy loss (2) or (3), the situation is reversed. On all datasets except Steam, SASRec+ and SASRec+ 3000 significantly outperform BERT4Rec. Moreover, BERT4Rec needs much more training time to achieve moderate performance. Remarkably, for the MovieLens datasets even the good old GRU4Rec baseline can be competitive with BERT4Rec. This observation supports our opinion that unidirectional causal modeling is more appropriate for the next item prediction task than the bidirectional masking approach from BERT. In Table 3 we compare our results with previous works which used the same unsampled metrics on the MovieLens-1m dataset. The performance of our BERT4Rec implementation is comparable with other papers, so our version is not underfitted. As for the vanilla SASRec implementation, our numbers are better because we used longer maximum sequence lengths. If we train our vanilla SASRec model with a maximum sequence length of 50, as in other works, NDCG@10 will be equal to 0.1135. This value is pretty close to those reported in some other publications.

\begin{table} \begin{tabular}{l c c c c c} \hline **Dataset** & **Users** & **Items** & **Interactions** & **Avg. len.** & **Density** \\ \hline ML-1M & 6,040 & 3,416 & 999,611 & 165.49 & 4.85\% \\ \hline ML-20M & 138,493 & 26,744 & 20,000,263 & 144.41 & 0.54\% \\ \hline Steam & 281,428 & 13,044 & 3,488,885 & 12.40 & 0.10\% \\ \hline Beauty & 40,226 & 54,542 & 353,962 & 8.79 & 0.10\% \\ \hline Yelp & 279,106 & 148,415 & 4,350,510 & 15.59 & 0.11\% \\ \hline \end{tabular} \end{table} Table 1. Experimental datasets

### Convergence speed

To better analyze the convergence speed of different models, we plot the NDCG@10 metric on the validation set against the epoch number. Figure 1 demonstrates such curves for GRU4Rec, BERT4Rec, and our versions of SASRec on all datasets. It is clear that BERT4Rec needs much more training time and epochs to achieve satisfactory performance. This observation is consistent with recent works (Zhou et al., 2017; Zhang et al., 2018). It is worth noting that on the Beauty and Yelp datasets, SASRec learns very quickly but then starts to overfit and validation performance degrades. BERT4Rec, on the other hand, doesn't overfit and oscillates near the final performance level. So in some circumstances, it is important to restore the best model weights after early stopping to achieve the best possible performance.

### Training with negative sampling

Figure 2 demonstrates the performance for different numbers of negatives in the sampled cross-entropy loss (3). It starts from modest values for small \(N\) and achieves or almost achieves the performance of SASRec+ with full cross-entropy loss (2) for a large number of negative items. We conclude that training with \(N\approx 1000\) is a good option, though the appropriate value of \(N\) could depend on the dataset at hand.
\begin{table} \begin{tabular}{|l|l|r r|r r|r|} \hline **Dataset** & **Model** & **HR@10** & **HR@100** & **NDCG@10** & **NDCG@100** & **Training time** & **Best epoch** \\ \hline ML-1M & BPR-MF & 0.0762 & 0.3566 & 0.0383 & 0.0936 & 1 & 60 \\ GRU4Rec (our) & 0.2811 & 0.6359 & 0.1648 & 0.2367 & 641 & 90 \\ BERT4Rec & 0.2843 & 0.6680 & 0.1537 & 0.2322 & 1409 & 197 \\ SASRec & 0.2500 & 0.6492 & 0.1341 & 0.2153 & 486 & 53 \\ SASRec+ (our) & 0.3152 & 0.6743 & 0.1821 & 0.2555 & 540 & 63 \\ SASRec+ 3000 (our) & **0.3159** & **0.6808** & **0.1857** & **0.2603** & 769 & 85 \\ \hline ML-20M & BPR-MF & 0.0806 & 0.3373 & 0.0394 & 0.0892 & 176 & 350 \\ GRU4Rec (our) & 0.2813 & 0.6153 & 0.1730 & 0.2401 & 6319 & 30 \\ BERT4Rec & 0.2816 & 0.6311 & 0.1703 & 0.2408 & 14758 & 68 \\ SASRec & 0.2001 & 0.5932 & 0.1067 & 0.1851 & 2495 & 30 \\ SASRec+ (our) & 0.2983 & 0.6397 & 0.1833 & 0.2521 & 9959 & 46 \\ SASRec+ 3000 (our) & **0.3900** & **0.6592** & **0.1872** & **0.2581** & 4125 & 39 \\ \hline Steam & BPR-MF & 0.0431 & 0.1767 & 0.0223 & 0.0480 & 10 & 370 \\ GRU4Rec (our) & 0.1138 & 0.3842 & 0.0610 & 0.1138 & 869 & 16 \\ BERT4Rec & **0.1242** & **0.4132** & **0.0662** & **0.1228** & 4893 & 74 \\ SASRec & 0.0981 & 0.3608 & 0.0506 & 0.1016 & 2140 & 39 \\ SASRec+ (our) & 0.1191 & 0.3947 & 0.0641 & 0.1179 & 1262 & 16 \\ SASRec+ 3000 (our) & 0.1206 & 0.3974 & 0.0652 & 0.1192 & 1226 & 14 \\ \hline Beauty & BPR-MF & 0.0271 & 0.0970 & 0.0144 & 0.0280 & 1 & 400 \\ GRU4Rec (our) & 0.0291 & 0.0933 & 0.0163 & 0.0288 & 644 & 25 \\ BERT4Rec & 0.0338 & 0.1051 & 0.0187 & 0.0325 & 2325 & 87 \\ SASRec & 0.0246 & 0.0939 & 0.0126 & 0.0262 & 521 & 26 \\ SASRec+ (our) & **0.0533** & **0.1325** & **0.0327** & **0.0482** & 332 & 6 \\ SASRec+ 3000 (our) & 0.0490 & 0.1197 & 0.0295 & 0.0434 & 296 & 8 \\ \hline Yelp & BPR-MF & 0.0176 & 0.0967 & 0.0087 & 0.0236 & 1 & 100 \\ GRU4Rec (our) & 0.0425 & 0.1822 & 0.0216 & 0.0483 & 2677 & 5 \\ BERT4Rec & 0.0442 & 0.1912 & 0.0223 & 0.0505 & 10166 & 21 \\ SASRec & 0.0228 & 0.1067 & 0.0115 & 0.0274 & 655 & 3 \\ SASRec+ (our) & **0.0482** & **0.2005** & **0.0246** & **0.0539** & 2505 & 3 \\ SASRec+ 3000 (our) & 0.0462 & 0.1929 & 0.0237 & 0.0519 & 1965 & 5 \\ \hline \end{tabular} \end{table} Table 2. Overall Performance Comparison. Training time is in seconds. \begin{table} \begin{tabular}{|l|r|r|r|} \hline **Publication** & **SAResec NDCG@10** & **BERT4Rec NDCG@10** & **Best model** & **Best model NDCG@10** \\ \hline This paper & 0.1341 & 0.1537 & SASRec+ 3000 & 0.1857 \\ \hline Petrov et al. (Zhou et al., 2017) & 0.1078 & 0.1516 & ALBERT4Rec & 0.165 \\ \hline Du et al. (Du et al., 2017) & 0.0918 & 0.1097 & CBIT & 0.1694 \\ \hline Fan et al. (Fan et al., 2017) & 0.1121 & 0.1099 & LightSANs & 0.1151 \\ \hline Qiu et al. (Qiu et al., 2017) & 0.0910 & 0.0619 & DuoRec & 0.168 \\ \hline Liu et al. (Liu et al., 2017) & - & 0.1398 & NOVA-BERT & 0.168 \\ \hline \end{tabular} \end{table} Table 3. Results reported in the literature for the MovieLens-1m dataset. Metrics are copied from the respective publications. ## 6. Conclusion In this work, we show that with proper training unidirectional SASRec model is still a strong baseline for sequential recommendations. Previous works used binary cross-entropy loss with one negative example. If trained with the cross-entropy loss on a full item set or sampled cross-entropy loss with a large number of negative examples, it can outperform the bidirectional BERT4Rec model and achieve performance comparable with current state-of-the-art approaches. 
We encourage the use of SASRec with the cross-entropy loss as a baseline in future research, for a more rigorous evaluation of new state-of-the-art algorithms.
2309.11060
The Self-energy of Nucleon for the Pole term of the Axial-vector Current and the Neutron $β$-decay
The effect of the pion propagator on the $\beta$-decay of neutron is investigated to account for the ratio of the axial-vector to the vector part $g_A/g_V$ by including the pole term. A suggestion is made that the sign and the size of the vertex correction is appropriate to improve the discrepancy between the result of the model hamiltonian and the experiment.
Susumu Kinpara
2023-09-20T04:40:39Z
http://arxiv.org/abs/2309.11060v1
# The Self-energy of Nucleon for the Pole term of the Axial-vector Current and the Neutron \(\beta\)-decay

Susumu Kinpara

_Institute for Quantum Medical Science (QST)_ _Chiba 263-8555, Japan_

###### Abstract

The effect of the pion propagator on the \(\beta\)-decay of the neutron is investigated to account for the ratio of the axial-vector to the vector part \(g_{A}/g_{V}\) by including the pole term. A suggestion is made that the sign and the size of the vertex correction are appropriate to improve the discrepancy between the result of the model hamiltonian and the experiment.

## 1 Introduction

For the investigation of the mechanism of reaction processes, the nucleon is the elementary object in the intermediate energy region. The idea of a point-like structure is inevitably modified by the interaction with another particle used as a probe. In particular the lightest meson, the pion, plays a decisive role in modifying the state of the nucleon. The effect is expressed by the propagator, and the pion degrees of freedom are essential to evaluate the constants related to each process. The self-energy of the nucleon is the starting point of the pion-nucleon system and also changes the form of the vertex part of the interaction. The relation between these functions is obtainable by using the equation of motion, independently of the calculation of the perturbative expansion. Because of the derivative coupling, it is difficult to control the divergences of the pseudovector \(\pi\)-N interaction within the framework of renormalization by the counter terms of the mass and the fields. In turn, by virtue of the non-perturbative term stemming from the non-perturbative relation, the divergences are removed together with the counter terms mentioned above. Adding the non-perturbative term, the pseudovector coupling is connected to the pseudoscalar coupling. The additional interaction contains the self-energy, and the vertex part is modified by this term. When calculations of the pseudoscalar \(\pi\)-N system in free space do not give perfect results, the self-energy term is expected to account for the remaining gap. The method to construct the self-energy is based on the field-theoretical treatment giving the form of the function as a series of powers in the Dirac operator. Applying it to scattering processes, the structure constants such as the magnetic moment and the polarizability of the nucleon and the \(\pi\)-N scattering parameters of the \(S\) and \(P\) waves are described well by each model chosen separately for the respective phenomenon. For the low-energy region below the \(\Delta(1232)\) resonance, the lowest-order approximation of the perturbative calculation with the self-energy does not suffice to describe the \(\pi\)-N elastic scattering. Higher-order calculations could change the form of the self-energy substantially so as to reproduce the low-energy parameters well. On the other hand, the method of matrix inversion enables us to reach the set of scattering parameters which construct the self-energy as the output. Its application to the photoproduction of the pion is interesting because the pseudoscalar model needs additional effects from the threshold to the resonance energy. In the present study the \(\beta\)-decay of the neutron is chosen to examine the form of the self-energy within the meson-exchange model of field theory.
## 2 The vertex function with the axial-vector current

The hadronic part of the interaction lagrangian \(L_{int}\) is assumed to consist of the vector \(L_{v}\) and the axial-vector \(L_{a}\) parts \[L_{int}=L_{v}+L_{a}=-g\,\bar{\psi}(x)\,\gamma_{\mu}\,(1-\gamma_{5})\,\vec{\tau}\,\psi(x)\,\vec{V}^{\mu} \tag{1}\] in which \(\psi\) and \(\vec{V}_{\mu}\) are the fields of the nucleon and the isovector vector boson, respectively. The strength of the coupling constant \(g\) is assumed to be comparable with the electric charge, which allows us to expand in powers of \(g\) and neglect the higher-order terms in practice. \(\vec{\tau}\) is the 2\(\times\)2 isospin matrix. In order to derive the non-perturbative relation and examine the structure of the vertex part which unites the external lines of some bosons and fermions, the set of two isovector currents, that is, the vector current \(\vec{B}_{\mu}\) and the axial-vector current \(\vec{A}_{\mu}\) \[(\vec{B}_{\mu},\vec{A}_{\mu})\equiv\bar{\psi}\,(\gamma_{\mu},\gamma_{5}\gamma_{\mu})\,\vec{\tau}\,\psi \tag{2}\] is indispensable. The sum \(\vec{J}\equiv\vec{B}+\vec{A}\) satisfies the relation \[\partial\cdot\vec{J}=-2iM\vec{\rho_{5}}+4\,g\,\vec{V}\times\vec{J} \tag{3}\] \[\vec{\rho_{5}}\equiv\bar{\psi}\,\gamma_{5}\,\vec{\tau}\,\psi \tag{4}\] where the nucleon mass \(M\) is the average value of the proton and neutron masses. The equation of motion for \(\vec{V}\) is given as \[(\partial^{2}+m^{2})\vec{V}=g\,(1+m^{-2}\,\partial\partial\cdot)\vec{J} \tag{5}\] in which the mass of the boson \(m\) is not definite and may simulate the \(\rho\)-meson or a much heavier boson related to the weak interaction. In practice the \(O(g^{2})\) term is dropped approximately to derive the relations below. The generalized relation is derived for the \(T\)-product which consists of a set of the field operators \[T[\,V^{i}_{\mu}(x)\cdots\,\varphi_{a}(x_{a})\cdots] \tag{6}\] in which \(\varphi_{a}(x_{a})\) (\(\,\equiv\,\psi(x_{a})\) or \(\bar{\psi}^{T}(x_{a})\,\)) represents the Dirac fields. Operating with \(\partial^{2}+m^{2}\) on the quantity (6) and using Eq. (5) without the \(O(g^{2})\) term and the following relation \[\delta(x_{a0}-x_{0})\,[\varphi_{a}(x_{a}),\,\rho^{i}_{5}(x)]=e_{ai}\varphi_{a}(x_{a})\,\delta^{4}(x_{a}-x) \tag{7}\] with \(e_{ai}=\gamma_{0}\gamma_{5}\tau_{i}\) for \(\varphi_{a}(x_{a})=\psi(x_{a})\) or \((\gamma_{0}\gamma_{5}\tau_{i})^{T}\) for \(\varphi_{a}(x_{a})=\bar{\psi}^{T}(x_{a})\), the generalized form of the equation of motion is obtained \[(\partial^{2}+m^{2})T[\,V^{i}_{\mu}(x)\cdots]=g\,T[\,J^{i}_{\mu}(x)\cdots]+i\frac{\delta}{\delta V^{\mu}_{i}(x)}T[\,\cdots\,]\] \[-2igm^{-2}M(\,\partial_{\mu}T[\rho^{i}_{5}(x)\cdots]+g^{0}_{\mu}\sum_{a,b,\cdots}e_{ai}\delta(x-x_{a})T[\cdots\,]\,) \tag{8}\] The third term on the right-hand side comes from the breaking of the conservation of the axial-vector current, in contrast with the case of quantum electrodynamics (QED). Another main difference from QED is the functional-derivative part in Eq. (8), defined as \[\frac{\delta V^{j}_{\nu}(x^{\prime})}{\delta V^{\mu}_{i}(x)}=\delta_{ij}\,\rho_{\mu\nu}(x-x^{\prime}) \tag{9}\] \[\rho_{\mu\nu}(x-x^{\prime})=\int\frac{d^{4}k}{(2\pi)^{4}}\,\rho_{\mu\nu}(k)\,e^{-ik\cdot(x-x^{\prime})} \tag{10}\] \[\rho_{\mu\nu}(k)=g_{\mu\nu}-\frac{k_{\mu}k_{\nu}}{m^{2}}-(1-\frac{k^{2}}{m^{2}})\eta_{\mu}\eta_{\nu} \tag{11}\] where \(\eta_{\mu}=(1,0,0,0)\).
The additional terms in Eq. (11) are inherent in the massive vector boson. The delta part (\(\sim(m^{2}-k^{2})\eta_{\mu}\eta_{\nu}\)) is dropped due to the cancellation with the normal-dependent term of the interacting hamiltonian. In spite of the breaking of the current conservation, the differentiation of Eq. (8) and the use of the commutator relation \[\delta(x_{a0}-x_{0})[\varphi_{a}(x_{a}),J_{0i}(x)]=P_{ai}\,\varphi_{a}(x_{a})\,\delta^{4}(x_{a}-x) \tag{12}\] with \(P_{ai}=(1-\gamma_{5})\tau_{i}\) for \(\varphi_{a}(x_{a})=\psi(x_{a})\) or \(-(1+\gamma_{5})\tau_{i}^{T}\) for \(\varphi_{a}(x_{a})=\bar{\psi}^{T}(x_{a})\) give the useful form \[(\partial^{2}+m^{2})\partial^{\mu}\,T[\,V^{i}_{\mu}(x)\cdots]=-2igm^{-2}M(\partial^{2}+m^{2})\,T[\,\rho_{5i}(x)\cdots]\] \[+[\,i\partial^{\mu}\frac{\delta}{\delta V^{\mu}_{i}(x)}-g\sum_{a,b,\cdots}(P_{ai}+2im^{-2}Me_{ai}\,g^{0}_{\mu}\,\partial^{\mu})\,\delta(x-x_{a})\,]\,T[\cdots] \tag{13}\] in which the approximate form of the current \(\partial\cdot J_{i}(x)\approx-2iM\rho_{5}^{i}(x)\) is used. When the \(T\)-product consists of two boson fields, the general relation in Eq. (13) gives the equation for the boson propagator \[iD^{ij}_{\mu\nu}(x-y)=\langle\,T[V^{i}_{\mu}(x)\,V^{j}_{\nu}(y)]\,\rangle \tag{14}\] \[(\partial^{2}+m^{2})\partial^{\mu}D^{ij}_{\mu\nu}(x-y)=\partial^{\mu}\rho^{ij}_{\mu\nu}(x-y)\] \[-2gm^{-2}M(\partial^{2}+m^{2})\langle\,T[\rho^{i}_{5}(x)\,V^{j}_{\nu}(y)]\,\rangle \tag{15}\] Using the \(T\)-product of three fields, the vertex function \(\Gamma^{\mu}_{i}(x\,y\,;z)\) is defined so as to give the following relation \[\langle\,T[\psi(x)\,\bar{\psi}(y)\,V^{i}_{\mu}(z)]\,\rangle \tag{16}\] \[\equiv-g\int dx^{\prime}dy^{\prime}dz^{\prime}\,G(x-x^{\prime})\,\Gamma^{\nu}_{j}(x^{\prime}\,y^{\prime}\,;z^{\prime})\,G(y^{\prime}-y)\,D^{ji}_{\nu\mu}(z^{\prime}-z)\] with the nucleon propagator \[iG(x-y)=\langle\,T[\psi(x)\,\bar{\psi}(y)]\,\rangle \tag{17}\] Operating with \((\partial_{z}^{2}+m^{2})\partial_{z}^{\mu}\) on Eq. (16) with respect to \(z\) and applying Eq. (13) for three fields and Eq. (15), the relation between these functions is given as follows \[\int dx^{\prime}dy^{\prime}dz^{\prime}\,\rho_{\mu\nu}^{ij}(z-z^{\prime})\,G(x-x^{\prime})\,\partial_{z^{\prime}}^{\mu}\Gamma_{j}^{\nu}(x^{\prime}\,y^{\prime}\,;z^{\prime})\,G(y^{\prime}-y)\] \[=i\delta(z-x)\,(1-\gamma_{5})\,\tau_{i}\,G(x-y)-i\delta(z-y)\,G(x-y)\,(1+\gamma_{5})\,\tau_{i}\] \[+2im^{-2}M(\partial_{z}^{2}+m^{2})[\,\langle\,T[\,\rho_{5i}(z)\psi(x)\,\bar{\psi}(y)\,]\,\rangle\] \[-ig\int dx^{\prime}dy^{\prime}dz^{\prime}\,\langle\,T[\,\rho_{5i}(z)V_{\nu}^{j}(z^{\prime})\,]\,\rangle\,G(x-x^{\prime})\,\Gamma_{j}^{\nu}(x^{\prime}\,y^{\prime}\,;z^{\prime})\,G(y^{\prime}-y)\,]\] \[-2m^{-2}M\,\partial_{z}^{0}\,[\,\delta(z-x)\,\gamma_{0}\,\gamma_{5}\,\tau_{i}\,G(x-y)+\delta(z-y)\,G(x-y)\,\gamma_{0}\gamma_{5}\,\tau_{i}\,] \tag{18}\] By the Fourier transform of Eq. (18), the current relation for the vertex function is obtained \[(p-q)\cdot\Gamma_{i}(p,q)\] \[=-A^{-1}[\,(1+\gamma_{5})\tau_{i}G(q)^{-1}-G(p)^{-1}(1-\gamma_{5})\tau_{i}\,]-2M\rho_{5}\tau_{i}\] \[-2m^{-2}MA^{-1}g_{\mu 0}(p-q)^{\mu}\,[\,G(p)^{-1}\,\gamma_{0}\gamma_{5}\tau_{i}+\gamma_{0}\gamma_{5}\tau_{i}\,G(q)^{-1}\,] \tag{19}\] \[A\equiv 1-(p-q)^{2}/m^{2} \tag{20}\] in the momentum space. Under the on-shell condition for the incoming (\(q\)) and the outgoing (\(p\)) momenta of the nucleon, such as \(\gamma\cdot p\to M\) and \(\gamma\cdot q\to M\),
Eq. (19) becomes \((p-q)\cdot\Gamma_{i}(p,q)\rightarrow-2M\rho_{5}\tau_{i}\) and the term breaking the conservation of the current remains. This term acts as the pole term of the vertex \(\Gamma_{i}(p,q)\) and contributes to the interaction.

## 3 The effect of the pole term on the \(\beta\)-decay of neutron

The higher-order correction of the pole term in Eq. (19) by the pion propagator is essential because the lowest-order term cancels with the other term, as seen later. In the present study the vertex correction is calculated by the perturbative expansion with the pseudovector coupling interaction lagrangian density \[L_{pv}=-\frac{f}{m_{\pi}}\bar{\psi}(x)\,\gamma_{5}\gamma_{\mu}\vec{\tau}\,\psi(x)\,\partial^{\mu}\vec{\varphi}(x) \tag{21}\] where \(f\) and \(m_{\pi}\) are the coupling constant and the mass of the pion, respectively. Recently we have suggested that the pion-nucleon-nucleon three-point vertex has the non-perturbative term \[\Gamma(p,q)=\gamma_{5}\gamma\cdot(p-q)+G(p)^{-1}\,\gamma_{5}+\gamma_{5}\,G(q)^{-1} \tag{22}\] in which the perturbative term is represented by the lowest-order approximation herein. Leaving the self-energy out, the vertex is in agreement with the pseudoscalar coupling irrespective of the on-shell condition. The exact nucleon propagator \(G(p)\) is expressed as \[G(p)=(\gamma\cdot p-M-\Sigma(p))^{-1} \tag{23}\] along with the self-energy \(\Sigma(p)\). Because of the non-perturbative term in Eq. (22), the quadratically divergent term in the numerator of the fraction cancels out the denominator. The convergent result for \(\Sigma(p)\) is obtained \[\Sigma(p)=Mc_{1}(p^{2})-\gamma\cdot p\,c_{2}(p^{2}) \tag{24}\] in terms of the coefficients \(c_{i}(p^{2})\) (\(i\)=1,2) as functions of \(p^{2}\). These are expanded around the on-shell point \(p^{2}=M^{2}\). In particular the value of the zeroth order \(c\equiv c_{1}^{(0)}(M^{2})=c_{2}^{(0)}(M^{2})\) is necessary to examine the process of the \(\beta\)-decay. The higher-order coefficients are dropped by the on-shell condition for the momenta of the initial neutron and the final proton. Using two quantities to characterize the correction, that is, the self-energy \(c\) and the coefficient of the pole term \(a_{n}\), the three-point vertex part \(\Gamma_{i}(p,q)\) is expressed as follows \[\Gamma_{i}(p,q)\to A^{-1}(1+c)\,[\,\gamma(1-\gamma_{5})+2M(1+c)^{-1}\hat{c}\,(p-q)^{-2}(p-q)\gamma_{5}\,]\,\tau_{i}\] \[+\,\Gamma^{\prime}_{i}(p,q) \tag{25}\] \[\hat{c}\equiv c+(A^{-1}-\sum_{n=0}^{\infty}a_{n})A \tag{26}\] under the assumption that \(\Gamma_{i}(p,q)\) is sandwiched between the Dirac spinors as \(\bar{u}^{(s^{\prime})}(p)\Gamma_{i}(p,q)u^{(s)}(q)\). The \(\Gamma_{i}^{\prime}(p,q)\) represents the term for which the relation \((p-q)\Gamma_{i}^{\prime}(p,q)=0\) holds irrespective of the on-shell condition. At the moment it is not included, since the term is related to the magnetism of the vector current and the part of the axial-vector current is independent of the strong interaction by the invariance under charge conjugation [1]. When the mass of the vector boson is much larger than the momentum transfer (\(m^{2}\gg(p-q)^{2}\)), then \(A\approx 1\) and the relation in Eq. (26) reduces to \(\hat{c}=c-\sum_{n=1}a_{n}\) to a good approximation. Here the subscript of the coefficient \(a_{n}\) in the pole term specifies the number of pion propagators. The lowest order \(a_{0}\,(=1)\) does not contribute to the pole term. The process mediated by the pion propagator is a part of \(a_{1}\) and it plays a prominent role.
Since the diagram is related to the decay of the \(\pi^{-}\), it is free from the result of the perturbative calculation, and the strength of the process depends on the decay constant in the model hamiltonian. To calculate the correction to the \(\beta\)-decay we treat the main part of \(a_{1}\) in a particular way, different from the other part of the pole term. The momentum transfer is \((p-q)^{2}\approx 0\) and the divergence of the current is undetermined as \(\sim(p-q)^{2}/((p-q)^{2}-m_{\pi}^{2})\to 0\) unless the mass of the pion is neglected, because of the factor arising from the loop integral. The effect of the pion propagator is important because the zeroth order of \(\rho_{5}\), given as \(\rho_{5}^{(0)}=a_{0}\gamma_{5}\), is eliminated from the pole term. Then our interest is to calculate the coefficient of the second order, \(a_{1}\propto(\frac{f}{m_{\pi}})^{2}\), in the limit \((p-q)^{2}\to 0\) for on-shell nucleons. The coefficient \(a_{1}\) of the vertex correction \(\rho_{5}^{(2)}=a_{1}\gamma_{5}\) is given in the form of the integral \[\rho_{5}^{(2)}=-(\frac{f}{m_{\pi}})^{2}\int\frac{d^{4}k}{i(4\pi)^{4}}\Delta(k)\Gamma(p,p-k)G(p-k)\gamma_{5}G(q-k)\Gamma(q-k,q) \tag{27}\] with respect to the momentum \(k\) of the virtual pion. To make the evaluation of the integral in Eq. (27) tractable, the pion propagator is approximated by the free one, \(\Delta(k)\approx\Delta_{0}(k)=(k^{2}-m_{\pi}^{2})^{-1}\), and the self-energy of the nucleon propagator is set equal to zero (\(G\approx G_{0}\)). The vertex part is simplified by neglecting the non-perturbative term (\(\Gamma(p,q)\approx\gamma_{5}\gamma\cdot(p-q)\)). The integral is performed by using the dimensional regularization method and it yields \[a_{1}=\frac{G_{\pi}^{2}}{(4\pi)^{2}}\,\left(2D-\frac{7}{6}-\log\frac{M}{m_{\pi}}\right)+O(m_{\pi}^{2}/M^{2}) \tag{28}\] \[D\equiv\frac{2}{\epsilon}+1-\gamma-\log\frac{m_{\pi}^{2}}{4\pi\mu^{2}} \tag{29}\] where \(G_{\pi}\equiv 2Mf/m_{\pi}\) and \(\gamma=0.577\cdots\) is Euler's constant. The parameters \(\epsilon\) and \(\mu\) are ascribed to the shift of the dimension as \(4\to 4-\epsilon\). The calculation of the form factor is useful to remove the divergence in \(a_{1}\) without preparing the renormalized functions [2]. The relation \(D=0\) has been used to obtain the finite value of the \(\gamma\)-N-N vertex as a function of the momentum transfer \(Q^{2}\). Subtracting the divergent part by the above relation, it yields \(a_{1}=-3.56085\), not including the term of order \(O(m_{\pi}^{2}/M^{2})\). Our interest is the pole term in Eq. (25), which possibly has an effect on the ratio between the vector and the axial-vector coupling constants \(\lambda\equiv g_{A}/g_{V}\). Since there is not a mass term in the denominator, the matrix element gives a non-zero value in the limit \((p-q)^{2}\to 0\). In the case of the \(\beta\)-decay, the initial state of the neutron has its spin directed along the \(z\)-axis and the final state of the baryon is specified by the proton momentum \(\vec{p}\) and the spin. The direction of the momentum enters the calculation through the parameter \(\theta\), which represents the average value of the polar angle as \(\cos\theta=C/2\) using the asymmetry parameter of the proton \(C=-0.2377\) [3].
In the limit \((p-q)^{2}\to 0\) there exists a simple relation between the matrix elements of the axial-vector part \(\Gamma_{A}\equiv\bar{u}^{(s^{\prime})}(p)\gamma\gamma_{5}u^{(s)}(q)\) and the pseudoscalar part \(\Gamma_{P}\equiv(p-q)^{-2}(p-q)\bar{u}^{(s^{\prime})}(p)\gamma_{5}u^{(s)}(q)\), namely \[\Gamma_{P}\approx(2M)^{-1}\cos^{2}\theta\,\Gamma_{A} \tag{30}\] under the assumption that the nucleon spin is unchanged by the decay (\(s^{\prime}=s\)). Then \(\Gamma(p,q)\) is modified as \[\Gamma(p,q)\rightarrow(1+c)(\gamma-\gamma\gamma_{5}+\alpha\,\gamma\gamma_{5}) \tag{31}\] \[\alpha\equiv(1+c)^{-1}\,\hat{c}\,\,\cos^{2}\theta \tag{32}\] The size of \(\alpha\) is connected with the asymmetry of the proton momentum. The pion effect in the pole term is interesting for studying the \(\beta\)-decay in detail and for searching for the value of \(c\) that constructs the self-energy. It has been known that the ratio of the currents \(\lambda\equiv g_{A}/g_{V}\) is related to the decay constant of the charged pion as \[-\lambda=\frac{G_{\pi}F_{\pi}}{\sqrt{2}M} \tag{33}\] where \(G_{\pi}\) is the same as the \(\pi\)-N coupling constant of the pseudoscalar pion [1]. Making use of the value \(f=1\), it yields \(G_{\pi}=13.4544\), appropriate for the calculations of the \(\pi\)-N elastic scattering. The decay constant \(F_{\pi}\,(>0)\) is determined from the relation for the decay rate \(\Gamma_{\pi}\) of the process \(\pi^{-}\to\mu^{-}+\bar{\nu}_{\mu}\), in which the value of the lifetime of the \(\pi^{-}\) is given experimentally as \(\Gamma_{\pi}^{-1}=2.6033\times 10^{-8}\,\mbox{s}\). Using the Fermi coupling constant \(G_{F}=1.16637\times 10^{-5}\,\mbox{GeV}^{-2}\), it results in \(F_{\pi}=0.12825\,\mbox{GeV}\). These numerical values of the constants are applied to the relation in Eq. (33). The resulting value \(-\lambda=1.300\) differs from the current experimental value \(-\lambda_{exp}=1.2695\pm 0.0029\) [4] by roughly a few percent. We suggest modifying the \(g_{A}/g_{V}\) ratio as \(\lambda\to\lambda+\alpha\) to account for the \(\beta\)-decay correctly. The correction \(\alpha\) depends on the self-energy and it is meaningful to examine the relation with the other phenomena. Using the value \(c=2\) of the \(n=2\) model, the corrected value results in \(-(\lambda+\alpha)=1.2734\). In the region over which \(c\) varies, \(\alpha\) is positive, and the improvement of the numerical value indicates a similarity to the electromagnetic property of the magnetic moment, in favor of the universality of the electromagnetic and the weak interactions rather than the \(\pi\)-N elastic scattering under the strong interaction in the low-energy region below the resonance.

## 4 Summary and remarks

Concerning the pole term of the vertex in the \(\beta\)-decay, the lowest order is suppressed by the cancellation with the other term, and it is required to take into account the higher-order corrections by the pion propagator. This is consistent with the hypothesis of the conserved vector current, which is not renormalized by the pion propagators. Although the second-order process mediated by the pion is part of the proper vertex, the process is well represented by the decay constant of the charged pion, and we have used the standard relation of the partially conserved axial-vector current to obtain most of the ratio. The second-order result succeeds in correcting the excess of the main term by means of the asymmetry parameter of the proton and the self-energy used to determine the electromagnetic interaction.
The numerical result changes when the inversion of the proton spin is included, and this change is compensated by shifting the self-energy parameter to lower values.
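For reference, the quoted value \(-\lambda=1.300\) can be reproduced directly from Eq. (33) with the constants given above. The short check below is only an illustration; the nucleon mass \(M\) is not quoted in this passage, so \(M\approx 0.9389\,\mbox{GeV}\) is an assumption.

```python
# Numerical cross-check of Eq. (33): -lambda = G_pi * F_pi / (sqrt(2) * M).
# G_pi and F_pi are the values quoted in the text; M ~ 0.9389 GeV is an assumption.
import math

G_pi = 13.4544   # pi-N coupling constant (f = 1)
F_pi = 0.12825   # GeV, charged-pion decay constant
M = 0.9389       # GeV, assumed nucleon mass

print(f"-lambda = {G_pi * F_pi / (math.sqrt(2.0) * M):.3f}")
# -> -lambda = 1.300, to be compared with -lambda_exp = 1.2695 +/- 0.0029
```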
2309.14118
MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks
Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space. Multimodal (MM) models aim to extract the synergistic predictive potential of multiple data types to create a shared feature space with aligned semantic meaning across inputs of drastically varying sizes (i.e. images, text, sound). Most current MM architectures fuse these representations in parallel, which not only limits their interpretability but also creates a dependency on modality availability. We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN's composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion. By simulating the challenging bias of missing not-at-random (MNAR), this work shows that, contrary to MultiModN, parallel fusion baselines erroneously learn MNAR and suffer catastrophic failure when faced with different patterns of MNAR at inference. To the best of our knowledge, this is the first inherently MNAR-resistant approach to MM modeling. In conclusion, MultiModN provides granular insights, robustness, and flexibility without compromising performance.
Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley
2023-09-25T13:16:57Z
http://arxiv.org/abs/2309.14118v2
# MultiModN--Multimodal, Multi-Task, Interpretable Modular Networks ###### Abstract Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space. Multimodal (MM) models aim to extract the synergistic predictive potential of multiple data types to create a shared feature space with aligned semantic meaning across inputs of drastically varying sizes (i.e. images, text, sound). Most current MM architectures fuse these representations in parallel, which not only limits their interpretability but also creates a dependency on modality availability. We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN's composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion. By simulating the challenging bias of missing not-at-random (MNAR), this work shows that, contrary to MultiModN, parallel fusion baselines erroneously learn MNAR and suffer catastrophic failure when faced with different patterns of MNAR at inference. To the best of our knowledge, this is the first inherently MNAR-resistant approach to MM modeling. In conclusion, MultiModN provides granular insights, robustness, and flexibility without compromising performance. ## 1 Introduction The world is richly multimodal and intelligent decision-making requires an integrated understanding of diverse environmental signals, known as embodied intelligence [1]. Until recently, advances in deep learning have been mostly compartmentalized by data modality, creating disembodied domains such as computer vision for images, natural language processing for text, and so on. Multimodal (MM) learning has emerged from the need to address real-world tasks that cannot be robustly represented by a single signal type as well as the growing availability and diversity of digitized signals [2; 3; 4]. Some examples are diagnosis from a combination of medical tests and imagery [5; 6; 7], estimating sentiment from facial expression, text, and sound [8; 9; 10; 11], and identifying human activities from a combination of sensors [12]. The richer representations from synergistic data types also have the potential to increase the task space, where a single set of representations can generalize to several tasks. Multi-task (MT) learning has not only been shown to benefit the performance of individual tasks but also has the potential to greatly reduce computational cost through shared feature extraction [13]. In short, multimodal and multi-task learning hold significant potential for human-centric machine learning and can be summarized respectively as creating a shared feature space from various data types and deriving their relative semantic meaning across several tasks. **Limitations of current multimodal models.** Current MM models propose a parallel integration of modalities, where representations are fused and processed simultaneously [2; 3; 4]. Parallel fusion (hereafter P-Fusion) creates several fundamental limitations that we address in this work. 
The most important issue we seek to resolve in current MM architectures is their _dependence on modality availability_ where all modalities for all data points are required inputs during both training and inference. Modality-specific missingness is a common real-world problem and can fundamentally bias the model when the missingness of a modality is predictive of the label (known as missing not-at-random, MNAR). The common solution of restricting learning to data points with a complete set of modalities creates models that perform inequitably in populations with fewer available resources (i.e. when the pattern of MNAR is different between train and test sets). In complex real-world datasets, there is often no intersection of complete availability, thus necessitating the exclusion of modalities or significantly limiting the train set. On the other hand, imputation explicitly featurizes missingness, thus risking the creation of a trivial model that uses the _presence_ of features rather than their value for the prediction [14; 15]. The MNAR issue is particularly common in medicine, where modality acquisition is dependent on the decision of the healthcare worker (i.e. the decision that the model is usually attempting to emulate). For example, a patient with a less severe form of a disease may have less intensive monitoring and advanced imagery unavailable. If the goal is to predict prognosis, the model could use the missingness of a test rather than its value. This is a fundamental flaw and can lead to catastrophic failure in situations where the modality is not available for independent reasons (for instance resource limitations). Here, the featurized missingness would inappropriately categorize the patient in a lower severity class. For equitable real-world predictions, it is critical to adapt predictions to available resources, and thus allow composability of inputs at inference. Another key issue of current techniques that this work addresses is _model complexity_. Parallel fusion of various input types into a single vector makes many post-hoc interpretability techniques difficult or impossible [16]. Depending on where the fusion occurs, it may be impossible to decompose modality-specific predictive importance. In this work, we leverage network modularization, compartmentalizing each modality and task into independent encoder and decoder modules that are inherently robust to the bias of MNAR and can be assembled in any combination or number at inference while providing continuous modality-specific predictive feedback. Figure 1: Comparison of modular MultiModN **(a)** vs. monolithic P-Fusion **(b)**. MultiModN inputs any number/combination of modalities (mod) into a sequence of \(mod\)-specific encoders (e). It can skip over missing modalities. A state (s) is passed to the subsequent encoder and updated. Each state can be fed into any number/combination of decoders (d) to predict multiple tasks. _Modules_ are identified as grey blocks comprising an encoder, a state, and a set of decoders. P-Fusion is a monolithic model. It inputs a _fixed_ number/combination of modalities (mod) into \(mod\)-specific encoders (e). Missing modalities are padded and encoded. Embeddings (emb) are concatenated and provided to a _single_ decoder in parallel (d) to predict a _single_ task. **Contributions.** We propose MultiModN, a multimodal extension of the work of Trottet et al. 
[17], which uses a flexible sequence of model and task-agnostic encoders to produce an evolving latent representation that can be queried by any number or combination of multi-task, model-agnostic decoder modules after each input (showcased in Figure 1). Specifically, we demonstrate that our modular approach of sequential MM fusion: **[1] matches parallel MM fusion** (P-Fusion) for a range of real-world tasks across several benchmark datasets, while contributing distinct advantages, such as being: **[2] composable at inference**, allowing selection of any number or combination of available inputs, **[3] is robust to the bias of missing not-at-random (MNAR) modalities**, **[4] is inherently interpretable**, providing granular modality-specific predictive feedback, and **[5] is easily extended to any number or combination of tasks**. We provide an **application-agnostic open-source framework** for the implementation of MultiModN: [https://github.com/epfl-iglobalhealth/MultiModN](https://github.com/epfl-iglobalhealth/MultiModN). Our experimental setup purposely limits our model performance to fairly compare the multimodal fusion step. At equivalent performance, our model architecture is by far superior to the baseline by virtue of being inherently modular, interpretable, composable, robust to systematic missingness, and multi-task. ## 2 Background Approaches to MM learning can be categorized by the depth of the model at which the shared feature space is created [2]. Late fusion (decision fusion) processes inputs in separate modality-specific sub-networks, only combining the outputs at the decision-level, using a separate model or aggregation technique to make a final prediction. While simple, late fusion fails to capture relationships between modalities and is thus not _truly_ multimodal. Early fusion (feature fusion), combines modalities at the input level, allowing the model to learn a joint representation. Concatenating feature vectors is a popular and simple approach [18; 19], but the scale of deployment is particularly limited by the curse of dimensionality. Finally, intermediate fusion (model fusion) seeks to fine-tune several feature extraction networks from the parameters of a downstream classifier. **Parallel Multimodal Fusion (P-Fusion).** Recently, Soenksen et al. [20] proposed a fusion architecture which demonstrated the utility of multiple modalities in the popular MM medical benchmark dataset, MIMIC [21; 22]. Their framework (HAIM or Holistic Artificial Intelligence in Medicine) generates single-modality embeddings, which are concatenated into a single one-dimensional multimodal fusion embedding. The fused embedding is then fed to a single-task classifier. This work robustly demonstrated the value of multimodality across several tasks and a rich combination of heterogeneous sources. HAIM consistently achieved an average improvement of 6-33% AUROC (area under the receiver operating characteristic curve) across all tasks in comparison to single-modality models. We use this approach as a P-Fusion baseline against our sequential fusion approach of MultiModN and extend it to several new benchmark datasets and multiple tasks. Soenksen et al. [20] perform over 14,324 experiments on 12 binary classification tasks using every number and combination of modalities. This extreme number of experiments, was necessary because the model is not composable nor capable of multi-task (MT) predictions. Rather, a different model is needed for each task and every combination of inputs for each task. 
In contrast, MultiModN is an extendable network, to which any number of encoders and decoders can be added. Thus, most of the 14,324 experiments could technically be achieved within one MultiModN model. Several other recent architectures utilize parallel fusion with transformers. UNiT (Unified Transformer) [23] is a promising multimodal, multi-task transformer architecture; however, it remains monolithic, trained on the union of all inputs (padded when missing) fed in parallel. This not only exposes the model to patterns of systematic missingness during training but also reduces model interpretability and portability\({}^{1}\). Recent work [24] has found similar results on the erratic behavior of transformers with missing modalities, although it is only tested on visual/text inputs. LLMs have also recently been used to encode visual and text modalities [25], but it is not clear how tabular and time-series would be handled or how this would affect the context window at inference. Combining predictive tasks with LLMs will also greatly impact interpretability, introducing hallucinations and complex predictive contamination where learned textual bias can influence outcomes. Footnote 1: The equivalent transformer architecture has 427,421 trainable parameters for the EDU dataset (Sec. 5) while MultiModN achieves better performance with 12,159 parameters. **Modular Sequential Multimodal Fusion.** A _module_ of a modular model is defined as a self-contained computational unit that can be isolated, removed, added, substituted, or ported. It is also desirable for modules to be order invariant and idempotent, where multiple additions of the same module have no additive effect. We design MultiModN to encode individual inputs, whereby module-exclusion can function as input _skippality_, allowing missing inputs to be skipped without influencing predictions. Thus, modular models can have various input granularities, training strategies, and aggregation functions. Some popular configurations range from hierarchies with shared layers to ensemble predictions and teacher-trainer transfer learning approaches [26; 27]. We expand on the sequential modular network architecture proposed by Trottet et al. [17], called MoDN (Modular Decision Networks), as a means of sequential MM fusion. MoDN trains a series of feature-specific encoder modules that produce a latent representation of a certain size (the _state_). Modules can be strung together in a mix-and-match sequence by feeding the state of one encoder as an input into the next. Therefore, the state has an additive evolution with each selected encoder. A series of decoders can query the state at any point for multiple tasks from various combinations of inputs, giving MoDN the property of combinatorial generalization. Thus, we extend MoDN to learn multiple tasks from multimodal inputs. By aligning feature extraction pipelines between MultiModN and the P-Fusion baseline (inspired by HAIM) we can achieve a better understanding of the impact of monolithic-parallel fusion vs. sequential-modular MM fusion. Figure 1 provides a comparison between P-Fusion and MultiModN, also formalized below. ## 3 Problem formulation Context. We propose a multi-task supervised learning framework able to handle any number or combination of inputs of varying dimension, irrespective of underlying bias in the availability of these inputs during training. We modularize the framework such that each input and task is handled by distinct encoder and decoder _modules_. 
The inputs represent various data modalities (i.e. image, sound, text, time-series, tabular, etc.). We assume that these inputs have synergistic predictive potential for a given target and that creating a multimodal shared feature space will thus improve model performance. The tasks represent semantically related observations. We hypothesize that jointly training on semantically related tasks will inform the predictions of each individual task. Notation.Formally, given a set of modalities (features) \(\mathcal{M}=\{mod_{1},\dots,mod_{|\mathcal{M}|}\}\) and a set of tasks (targets) \(\mathcal{T}=\{task_{1},\dots,task_{|\mathcal{T}|}\}\), let \(\mathcal{X}=\{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{N},y_{N})\}\) represent a multimodal, multi-task dataset with \(N\) data points (\(x_{1},\dots,x_{N}\)). Each point \(x\) has \(|\mathcal{M}|\) modalities (inputs): \(x=\big{\{}x_{mod_{1}},\dots,x_{mod_{|\mathcal{M}|}}\big{\}}\) and is associated with a set of \(|\mathcal{T}|\) targets (tasks): \(y=\big{\{}y_{task_{1}},\dots,y_{task_{|\mathcal{T}|}}\big{\}}\). Modalities comprise various sources (e.g. images from x-rays, CT), for simplicity, we consider sources and modalities as equal \(mod\) elements in \(\mathcal{M}\). Multimodal, multi-task, modular formulation.We decompose each data point \(x\) into \(|\mathcal{M}|\) sequential encoder _modules_ specific to its constituent modalities and each target \(y\) into \(|\mathcal{T}|\) decoder _modules_ specific to its constituent tasks such that any combination or number of modalities can be used to predict any combination or number of tasks. Our objective is to learn a set of function _modules_, \(\mathcal{F}\). Each function _module_ within this set, represented as \(\hat{f}^{i}_{j}\in\mathcal{F}\) maps combinations of modalities \(\mathcal{M}_{j}\) to combinations of tasks \(\mathcal{T}_{i}\), i.e. \(f^{i}_{j}:\mathcal{M}_{j}\to\mathcal{T}_{i}\). It is important to note that \(\mathcal{M}_{j}\) is an element of the powerset of all modalities and \(\mathcal{T}_{i}\) is an element of the powerset of all tasks. Extension to time-series.In the above formulation, the \(\mathcal{M}\) encoder _modules_ are handled in sequence, thus naturally aligning inputs with time-series. While the formulation does not change for time-series data, it may be optimized such that \(f^{i}_{j}\) represents a single time step. This is relevant in the real-world setting of a data stream, where inference takes place at the same time as data is being received (i.e. predicting student performance at each week of a course as the course is being conducted). The continuous prediction tasks (shown for EDU and Weather in Sec. 6) demonstrate how MultiModN can be used for incremental time-series prediction. ## 4 MultiModN: Multimodal, Multi-task, Modular Networks (Our model) Building on [17] (summarized and color-coded in Figure 0(a)), the MultiModN architecture consists of three modular elements: a set of State vectors \(\mathcal{S}=\left\{s_{0},\ldots,s_{\left|\mathcal{M}\right|}\right\}\), a set of modality-specific Encoders \(\mathcal{E}=\left\{e_{1},\ldots,e_{\left|\mathcal{M}\right|}\right\}\), and a set of task-specific Decoders \(\mathcal{D}=\left\{d_{1},\ldots,d_{\left|\mathcal{T}\right|}\right\}\). State \(s_{0}\) is randomly initialized and then updated sequentially by \(e_{i}\) to \(s_{i}\). Each \(s_{i}\) can be decoded by one, any combination, or all elements of \(\mathcal{D}\) to make a set of predictions. All encoder and decoder parameters are subject to training. 
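To make the sequential state-update loop concrete, the sketch below gives a minimal PyTorch-style illustration of modality-specific encoders, a shared state, and task-specific decoders with skippable inputs. It is an illustration only, not the authors' released implementation; the dense-layer sizes, the batch handling, and all names are assumptions.

```python
# Minimal sketch of MultiModN-style sequential fusion (illustration only; layer sizes,
# variable names, and the use of simple dense encoders/decoders are assumptions).
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """e_i: maps (previous state, one modality embedding) -> updated state of the same size."""
    def __init__(self, state_dim, mod_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + mod_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, x_mod):
        return self.net(torch.cat([state, x_mod], dim=-1))

class TaskDecoder(nn.Module):
    """d_j: maps any state -> logits for one task."""
    def __init__(self, state_dim, n_out, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_out))

    def forward(self, state):
        return self.net(state)

class MultiModNSketch(nn.Module):
    def __init__(self, mod_dims, task_outs, state_dim=20):
        super().__init__()
        self.s0 = nn.Parameter(torch.randn(state_dim))  # initial state s_0
        self.encoders = nn.ModuleList([ModalityEncoder(state_dim, d) for d in mod_dims])
        self.decoders = nn.ModuleList([TaskDecoder(state_dim, o) for o in task_outs])

    def forward(self, modalities):
        # `modalities` is a list aligned with the encoders; a missing modality is None,
        # and its encoder is simply skipped (no padding, no featurized missingness).
        batch = next(m.shape[0] for m in modalities if m is not None)
        state = self.s0.unsqueeze(0).expand(batch, -1)
        per_step_preds = []
        for encoder, x_mod in zip(self.encoders, modalities):
            if x_mod is None:
                continue
            state = encoder(state, x_mod)
            per_step_preds.append([decoder(state) for decoder in self.decoders])
        return per_step_preds  # granular per-step, per-task predictive feedback

# Example: four modality embeddings, two binary tasks, a batch of 8 with the second modality missing.
model = MultiModNSketch(mod_dims=[512, 1024, 6, 768], task_outs=[2, 2])
preds = model([torch.randn(8, 512), None, torch.randn(8, 6), torch.randn(8, 768)])
```

In the full model the decoders can also query the initial state \(s_{0}\) directly, and the same pattern extends to time-series by re-applying the encoders at each time step.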
**States (\(\mathcal{S}\)).** Akin to hidden state representations in Recurrent Neural Networks (RNNs), the state of MultiModN is a vector that encodes information about the previous inputs. As opposed to RNNs, state updates are made by any number or combination of modular, modality-specific encoders and each state can be decoded by modular, task-specific decoders. Thus the maximum number of states by any permutation of \(n\) encoders is \(n!\). For simplicity, we limit the combinatorial number of states to a single order (whereby \(e_{i}\) should be deployed before \(e_{i+1}\)) in which any number or combination of encoders may be deployed (i.e. one or several encoders can be skipped at any point) as long as the order is respected. Thus, the number of possible states for a given sample is equal to \(2^{\left|\mathcal{M}\right|}\). Order invariance could be achieved by training every permutation of encoders \(\left|\mathcal{M}\right|!\), i.e. allowing encoders to be used in any order at inference, as opposed to this simplified implementation of MultiModN in which order is fixed. At each step \(i\), the encoder \(e_{i}\) processes the previous state, \(s_{i-1}\), as an input and outputs an updated state \(s_{i}\) of the same size. When dealing with time-series, we denote \(s_{t(0)}\) as the state representing time \(t\) before any modalities have been encoded, and as \(s_{t(0,1,4,5)}\) as the state at time \(t\) after being updated by encoders \(e_{1}\), \(e_{4}\) and \(e_{5}\), in that order. Encoders (\(\mathcal{E}\)).** Encoders are modularized to represent a single modality, i.e. \(\left|\mathcal{E}\right|=\left|\mathcal{M}\right|\). An encoder \(e_{i}\) takes as input the combination of a single modality (of any dimension) and the previous state \(s_{i-1}\). Encoder \(e_{i}\) then outputs a \(s_{i}\), updated with the new modality. For simplicity, we fix the state size between encoders. Due to modularization, MultiModN is model-agnostic, whereby encoders can be of any type of architecture (i.e. Dense layers, LSTM, CNN). For experimental simplicity, we use a single encoder design with a simple dense layer architecture. The input vectors in our experiments are 1D. When a modality is missing, the encoder is skipped and not trained (depicted in Figure 1). **Decoders (\(\mathcal{D}\)).** Decoders take any state \(s_{i}\) as input and output a prediction. Each decoder is assigned to a single task, that is \(\left|\mathcal{D}\right|=\left|\mathcal{T}\right|\), i.e. MultiModN is not multiclass, but multi-task (although a single task may be multiclass). Decoders are also model-agnostic. Our implementation has regression, binary, and multiclass decoders across static targets or changing time-series targets. Decoder parameters are shared across the different modalities. The decoder predictions are combined across modalities/modules by averaging the loss. Interestingly, a weighted loss scheme could force the model to emphasize certain tasks over others. As shown in [17], MultiModN can be completely order-invariant and idempotent if randomized during training. For interpretability, sequential inference (in any order) is superior to parallel input due to its decomposability, allowing the user to visualize the effect of each input and aligning with Bayesian reasoning. **Quantification of Modularity.** The modularity of a network can be quantified, whereby neurons are represented by nodes (vertices) and connections between neurons as edges. 
There are thus comparatively dense connections (edges) within a _module_ and sparse connections between them. Partitioning modules is an NP-complete problem [28]. We present modules that are defined _a priori_, whereby a module comprises one encoder \(e_{i}\) connected to one state \(s_{i}\), which is in turn connected to a set of \(\left|\mathcal{T}\right|\) tasks (a _module_ is depicted as a grey box in Figure 1(a)). Following a formalization of modularity quantitation proposed by Newman et al. [29], we compute the modularity score for MultiModN and show that it tends to a perfect modularity score of 1 with each added modality and each added task. When viewed at the network granularity of these core elements, P-Fusion is seen as a monolithic model with a score of 0. The formula is elaborated in Appendix Sec. B. ### P-Fusion: Parallel Multimodal Fusion (Baseline) We compare our results to a recent multimodal architecture inspired by HAIM (Holistic AI in Medicine) [20]. As depicted in Figure 1(b), HAIM also comprises three main elements, namely, a fixed set of modality-specific encoders \(\mathcal{E}=\left\{e_{1},\dots,e_{|\mathcal{M}|}\right\}\) which create a fixed set of embeddings \(\mathcal{B}=\left\{emb_{1},\dots,emb_{|\mathcal{M}|}\right\}\) that is concatenated and fed into a single-task decoder (\(d_{1}\)). HAIM achieved state-of-the-art results on the popular and challenging benchmark MIMIC dataset, showing consistently that multimodal predictions were between 6% and 33% better than single modalities. **Encoders (\(\mathcal{E}\)).** Contrary to the flexible and composable nature of MultiModN, the sequence of encoders in P-Fusion is fixed and represents a unique combination of modalities. It is thus unable to skip over modalities that are missing, instead padding with neutral values and explicitly embedding the non-missing modalities. The encoders are modality-specific pre-trained neural networks. **Embeddings (\(\mathcal{B}\)).** Multimodal embeddings are fused in parallel by concatenation. **Decoders (\(\mathcal{D}\)).** Concatenated embeddings are passed to a single-task decoder. **Architecture alignment.** We align feature extraction between MultiModN and P-Fusion to best isolate the effect of sequential (MultiModN) vs. parallel (P-Fusion) fusion. As depicted in Appendix Figure 8, we let MultiModN take as input the embeddings created by the P-Fusion pre-trained encoders. Thus both models have identical feature extraction pipelines. No element of the MultiModN pipeline proposed in Figure 1(a) is changed. The remaining encoders and decoders in both models are simple dense layer networks (two fully connected ReLU layers and one layer for prediction). Importantly, MultiModN encoders and decoders are model-agnostic and can be of any architecture. ## 5 Datasets We compare MultiModN and P-Fusion on three popular multimodal benchmark datasets across 10 real-world tasks spanning three distinct domains (healthcare, education, meteorology). The healthcare dataset (MIMIC) is particularly challenging in terms of multimodal complexity, incorporating inputs of vastly varying dimensionality. Education (EDU) and Weather2k have a particular focus on time-series across modalities. Appendix Sec. C details features, preprocessing, and tasks (\(task_{1-10}\)). **MIMIC.** MIMIC [30] is a set of deidentified electronic medical records comprising over \(40,000\) critical care patients at a large tertiary care hospital in Boston. 
The feature extraction pipeline is replicated according to our baseline of P-Fusion [20], making use of patient-level feature embeddings extracted from pre-trained models as depicted in Appendix Figure 8. We select the subcohort of \(921\) patients who have valid labels for both diagnoses and all four modalities present. We use all four modalities as inputs: chest x-rays (image), chart events (time-series), demographic information (tabular), and echocardiogram notes (text). For simplicity, we focus on two diagnostic binary classification tasks: cardiomegaly (\(task_{1}\)) and enlarged cardiomediastinum (\(task_{2}\)). These tasks were selected for their semantic relationship and also because they were reported to benefit from multimodality [20]. Thus, we have four modality-specific encoders and two binary classification diagnostic decoders. **Education (EDU).** This educational time-series dataset comprises \(5,611\) students with over 1 million interactions in a 10-week Massively Open Online Course (MOOC), provided to a globally diverse population. It is benchmarked in several recent works [31; 32; 33]. Our modeling setting is replicated from related literature, with \(45\) handcrafted time-series features regarding problem and video modalities extracted for all students at each weekly time-step [34]. We use two modality-specific encoders (problem and video) and three popular decoder targets: binary classifiers (\(task_{3-4}\)) of pass/fail and drop-out, and a continuous target of next week's performance (\(task_{5}\)) [35]. **Weather2k.** _Weather2k_ is a 2023 benchmark dataset that combines tabular and time-series modalities for weather forecasting [36]. The data is extracted from \(1,866\) ground weather stations covering \(6\) million km\({}^{2}\), with \(20\) features representing hourly interactions with meteorological measurements and three static features representing the geographical location of the station. We create five encoders from different source modalities: geographic (static), air, wind, land, and rain, and align with the benchmark prediction targets [36] on five continuous regression targets: short- (24 hr), medium- (72 hr), and long-term (720 hr) temperature forecasting, relative humidity, and visibility prediction (\(tasks_{6-10}\)). ## 6 Experiments **Overview.** We align feature extraction pipelines between MultiModN and the P-Fusion baseline in order to isolate the impact of parallel-monolithic vs. sequential-modular fusion (described in 4.1 and depicted in Appendix Sec. B). We thus do not expect a significant difference in performance, but rather aim to showcase the distinct benefits that can be achieved with modular sequential multimodal fusion _without compromising baseline performance_. In the following subsections, we perform four experiments to show these advantages. **[1]** MultiModN performance is not compromised compared to P-Fusion in single-task predictions. **[2]** MultiModN is able to extend to multiple tasks, also without compromising performance. **[3]** MultiModN is inherently composable and interpretable, providing modality-specific predictive feedback. **[4]** MultiModN is resistant to MNAR bias and avoids catastrophic failure when missingness patterns are different between train and test settings. **Model evaluation and metrics.** All results represent a distribution of performance estimates on a model trained 5 times with different random weight initializations for the state vector and weights. 
Each estimate uses a completely independent test set from an 80-10-10 K-Fold train-test-validation split, stratified on one or more of the prediction targets. We report metrics (macro AUC, BAC, MSE) with 95% confidence intervals, as aligned with domain-specific literature of each dataset [34; 20; 36]. **Hyperparameter selection.** Model architectures were selected among the following hyperparameters: state representation sizes [1, 5, 10, 20, 50, 100], batch sizes [8, 16, 32, 64, 128], hidden features [16, 32, 64, 128], dropout [0, 0.1, 0.2, 0.3], and attention [0, 1]. These values were grouped into 3 categories (small, medium, large). We vary one while keeping the others fixed (within groups). Appendix Figure 9 shows that MultiModN is robust to changing batch size, while dropout rate and hidden layers negatively impact larger models (possibly overfitting). The most specific parameter to MultiModN is state size. As expected, we see negative impacts at size extremes, where small states likely struggle to transfer features between steps, while larger ones would be prone to overfitting. ### Exp. 1: Sequential modularization in MultiModN does not compromise performance **Setup.** A single-task model was created for each \(task_{1-10}\) across all three datasets. Each model takes all modalities as input. We compare MultiModN and P-Fusion in terms of performance. AUROCs can be visualized in Figure 2 while BAC and MSE are detailed in Table 1. As feature extraction pipelines between MultiModN and P-Fusion are aligned, this experiment seeks to investigate if sequential modular fusion compromises model performance. To compress the multiple predictions of time-series into a single binary class, we select a representative time step (EDU \(tasks_{3-4}\) at 60% course completion) or average over all time steps (Weather \(tasks_{9-10}\) evaluated on a 24h window). **Results.** Both MultiModN and P-Fusion achieve state-of-the-art results on single tasks using multimodal inputs across all 10 targets. In Figure 2(c), we binarize the continuous weather task (humidity prediction) as an average across all time steps. The task is particularly challenging for the P-Fusion baseline, which has random performance (AUROC: 0.5). Compared with P-Fusion, MultiModN shows a 20% improvement, which is significant at the \(p<0.05\) level. As the temporality of this task is particularly important, it could be hypothesized that the sequential nature of MultiModN better represents time-series inputs. Nevertheless, all weather targets are designed as regression tasks and show state-of-the-art MSE scores in Table 1 where MultiModN achieves baseline performance. We provide an additional parallel fusion transformer baseline with experimental results showcased in Appendix Sec. E.4. The results indicate that MultiModN matches or outperforms the multimodal transformer in the vast majority of single- and multi-task settings, and comes with several interpretability, missingness, and modularity advantages. Specifically, using the primary metric for each task (BAC for classification and MSE for regression tasks), MultiModN beats the transformer baseline significantly in 7 tasks, overlaps 95% CIs in 11 tasks, and loses slightly (0.01) in 2 regression tasks. Figure 2: **MultiModN does not compromise performance in single-tasks.** AUROC for six binary prediction tasks in (a) MIMIC, (b) EDU, and (c) Weather2k. Tasks predicted by P-Fusion are compared with MultiModN. 95% CIs are shaded. 
MultiModN matches P-Fusion performance across all 10 tasks in all metrics reported across all three multimodal datasets. Thus, modularity does not compromise predictive performance. ### Exp. 2: Multi-task MultiModN maintains baseline performance in individual tasks Setup.The modular design of MultiModN allows it to train multiple task-specific decoders and deploy them in any order or combination. While multi-task models have the potential to enrich feature extraction (and improve the model), it is critical to note that all feature extraction from the raw input is performed before MultiModN is trained. MultiModN is trained on embeddings extracted from pre-trained models (independently of its own encoders). This is done purposely to best isolate the effect of parallel-monolithic vs. sequential-modular fusion. We train three multi-task MultiModN models (one for each dataset, predicting the set of tasks in that dataset, i.e. \(tasks_{1-2}\) in MIMIC, \(tasks_{3-5}\) in EDU, and \(tasks_{6-10}\) in Weather) and compare this to 10 single-task MultiModN models (one for each \(tasks_{1-10}\)). Monolithic models, like P-Fusion are not naturally extensible to multi-task predictions. Thus P-Fusion (grey bars in Figure 3) can only be displayed for single-task models. This experiment aims to compare MultiModN performance between single- and multi-task architectures to ensure that this implementation does not come at a cost to the predictive performance of individual tasks. Results.In Figure 3 we compare the single-task P-Fusion (grey bars), to single- and multi-task implementations of MultiModN (in color). The results demonstrate that MultiModN is able to maintain its performance across all single-prediction tasks even when trained on multiple tasks. We additionally include the results of our model on various numbers and combinations of inputs, described further in Appendix Sec. E.5. The baseline would have to impute missing features in these combinations, exposing it to catastrophic failure in the event of systematic missingness (Sec. 6.4). \begin{table} \begin{tabular}{c|c c c c|c c c c c c} \hline & \multicolumn{2}{c|}{**MIMIC**} & \multicolumn{2}{c|}{**Education (EDU)**} & \multicolumn{3}{c}{**Weather**} \\ \hline \hline & Cardiomegaly & ECM & Success & Dropout & Next Week & Temp. (24h) & Temp. (72h) & Temp. (720h) & Humidity & Visibility \\ \hline _Metric_ & BAC & BAC & BAC & MSE & MSE & MSE & MSE & MSE & MSE \\ **MultiModN** & 0.75 \(\pm\)0.04 & 0.71 \(\pm\)0.03 & 0.93 \(\pm\)0.04 & 0.83 \(\pm\)0.02 & 0.01 \(\pm\)0.01 & 0.03 \(\pm\)0.01 & 0.03 \(\pm\)0.01 & 0.03 \(\pm\)0.01 & 0.02 \(\pm\)0.01 & 0.10 \(\pm\)0.01 \\ **P-Fusion** & 0.75 \(\pm\)0.02 & 0.69 \(\pm\)0.03 & 0.92 \(\pm\)0.03 & 0.87 \(\pm\)0.05 & 0.01 \(\pm\)0.01 & 0.02 \(\pm\)0.01 & 0.03 \(\pm\)0.01 & 0.02 \(\pm\)0.01 & 0.03 \(\pm\)0.01 & 0.08 \(\pm\)0.02 \\ \hline \hline \end{tabular} \end{table} Table 1: MultiModN **does not compromise performance in single-tasks.** Performance for binary and continuous prediction tasks in MIMIC, EDU, and Weather, comparing P-Fusion and MultiModN. \(95\%\) CIs are shown. _ECM: Enlarged Cardiomediastinum, Temp: Temperature_. Figure 3: **Multi-task MultiModN maintains baseline performance in individual tasks.** Single- and multi-task MultiModN on the prediction of individual tasks, compared with the monolithic P-Fusion (can only be single-task). AUC for binary (**left**) and MSE for continuous (**right**). Error bars: 95% CIs. 
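As noted in Section 4, the per-task losses are combined by simple averaging; continuing the illustrative sketch given earlier, one multi-task training step could look as follows (an illustration only; equal task weights and classification decoders are assumptions):

```python
# Sketch of one multi-task training step for the MultiModNSketch defined earlier
# (illustration only; equal weighting of tasks/steps and classification decoders are assumed).
import torch
import torch.nn.functional as F

def training_step(model, modalities, targets, optimizer):
    """`targets` is a list with one integer label tensor per task."""
    per_step_preds = model(modalities)              # per-step, per-task logits
    losses = [
        F.cross_entropy(logits, targets[task_idx])
        for step_preds in per_step_preds
        for task_idx, logits in enumerate(step_preds)
    ]
    loss = torch.stack(losses).mean()               # average across tasks and encoding steps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```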
MultiModN has the significant advantage of being naturally extensible to the prediction of multiple tasks without negatively impacting the performance of individual tasks. ### Exp. 3: MultiModN has inherent modality-specific local and global model explainability **Setup.** Parallel MM fusion obfuscates the contribution of individual inputs and requires add-on or post hoc methods to reveal unimodal contributions and cross-modal interactions [37; 38; 39]. Soenksen et al. [20] used Shapley values [40] to derive marginal modality contributions. While these post hoc methods provide valuable insight, they are computationally expensive and challenging or impossible to deploy at inference. In contrast, MultiModN confers inherent modality-specific interpretability, where the contribution of each input can be decomposed by module. We use \(task_{1-2}\) in MIMIC to compute two measures: **[1] Importance score**, where each encoder is deployed alone, providing the predictive importance of a single modality by subtracting predictions made from the prior state. This can be computed across all data points or individual data points. **[2] Cumulative probability**, where the prediction from each multi-task decoder is reported in sequence (i.e. given the previously encoded modalities). We demonstrate this on a random patient from the test set, who has a true label of 0 for both tasks. Further plots are in Appendix Sec. E.2. **Results.** Monolithic P-Fusion models cannot be decomposed into any modality-specific predictions, and their (single-task) prediction is only made after inputting all modalities. In contrast, Figure 4 shows MultiModN provides granular insights for both importance score and cumulative prediction. We observe that the Text modality is the most important. The cumulative prediction shows the prior strongly predicts positivity in both classes and thus that \(S_{0}\) has learned the label prevalence. The predictions naturally produced by MultiModN provide diverse and granular interpretations. ### Exp. 4: MultiModN is robust to catastrophic failure from biased missingness **Setup.** MultiModN is designed to predict any number or combination of tasks from any number or combination of modalities. A missing modality is skipped (encoder \(e_{i}\) is not used) and not padded/encoded. Thus, MultiModN avoids featurizing missingness, which is particularly advantageous when missingness is MNAR. Featurizing MNAR can result in catastrophic failure when MNAR patterns differ between train and test settings. We demonstrate MultiModN's inherent robustness to catastrophic MNAR failure by training MultiModN and P-Fusion on four versions of MIMIC with various amounts (0, 10, 50, or 80%) of MNAR, introduced by artificially removing one modality in one class only. Figure 5 compares MultiModN and P-Fusion on \(task_{1}\) when tested in a setting that has either no missingness or where the MNAR pattern is different (i.e. label-flipped). **Results.** Figure 5 shows a dramatic catastrophic failure of P-Fusion in a label-flipped MNAR test set (**black solid line**) compared with MultiModN. P-Fusion is worse than random at 80% MNAR (AUROC: 0.385). In contrast, MultiModN loses only 10% under the MNAR flip, remarkably matching its performance in a test with no missingness. Further plots are in Appendix E.3. 
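The MNAR injection used in the setup above can be reproduced with a few lines of code; the sketch below is an illustration of that procedure (the function name, array layout, and defaults are placeholders, not the authors' code):

```python
# Sketch of the MNAR injection described in Exp. 4: a chosen modality is removed only
# for data points of one class, at a given rate, so that missingness itself becomes
# predictive of the label in the training set (illustration only).
import numpy as np

def inject_mnar(modalities, labels, mod_index, target_class=1, rate=0.8, seed=0):
    """`modalities` is a per-sample list of modality entries; entries set to None are
    treated as missing and simply skipped by the sequential model."""
    rng = np.random.default_rng(seed)
    out = [list(mods) for mods in modalities]
    idx = np.where(np.asarray(labels) == target_class)[0]
    drop = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    for i in drop:
        out[i][mod_index] = None
    return out

# Flipping `target_class` between train and test reproduces the label-flipped MNAR stress test.
```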
Figure 4: **Inherent modality-specific model explainability in MultiModN.** Heatmaps show individual modality contributions (IMC) **(top)** and cumulative contributions (CP) **(bottom)**: respectively the importance score (global explainability) and the **cumulative probability** (local explainability). The multi-task MultiModN for \(task_{1-2}\) in MIMIC is compared to two single-task P-Fusion models. IMC are only possible for MultiModN (only 1 modality encoded, rest are skipped). CP are made sequentially from states encoding all previous modalities. P-Fusion is unable to naturally decompose modality-specific contributions (it can only make predictions once all modalities are encoded). IMC is computed across all patients in the test set. CP is computed for a single patient (true label = 0 for both \(task_{1-2}\)). The CP heatmap shows probability ranging from **confident negative diagnosis (0)** to perfect uncertainty and confident positive diagnosis (1). MultiModN is robust to catastrophic missingness (MNAR failure) where P-Fusion is not. ## 7 Conclusion We present MultiModN, a novel sequential modular multimodal (MM) architecture, and demonstrate its distinct advantages over traditional monolithic MM models which process inputs in parallel. By aligning the feature extraction pipelines between MultiModN and its baseline P-Fusion, we better isolate the comparison between modular-sequential MM fusion vs. monolithic-parallel MM fusion. We perform four experiments across 10 complex real-world MM tasks in three distinct domains. We show that neither the sequential modularization of MultiModN nor its extension to multi-task predictions compromises the predictive performance on individual tasks compared with the monolithic baseline implementation. Training a multi-task model can be challenging to parameterize across inter- and cross-task performance [13, 41]. We perform no specific calibration and show that MultiModN is robust to cross-task bias. Thus, at no performance cost, modularization allows the inherent benefits of multi-task modeling, as well as providing interpretable insights into the predictive potential of each modality. The most significant benefit of MultiModN is its natural robustness to catastrophic failure due to differences in missingness between train and test settings. This is a frequent and fundamental flaw in many domains and especially impacts low-resource settings where modalities may be missing for reasons independent of the missingness in the train set. More generally, modularization creates a set of self-contained modules, composable in any number or combination according to available inputs and desired outputs. This composability not only provides enormous flexibility at inference but also reduces the computational cost of deployment. Taken together, these features allow MultiModN to make resource-adapted predictions, which have a particular advantage for real-world problems in resource-limited settings. **Limitations and future work.** The main limitation for studying MM modeling is the scarcity of large-scale, open-source, MM datasets that cover multiple real-world tasks, especially for time-series. Additionally, while MultiModN is theoretically able to handle any number or combination of modalities and tasks, this has not been empirically tested. Having a high combinatorial generalization comes at a computational and performance cost, where the 'memory' of a fixed-size state representation will likely saturate at scale. 
The performance of MultiModN is purposely limited in this work by fixing the feature extraction pipeline, to best isolate the effect of sequential fusion. Future work leveraging MultiModN's model-agnostic properties would be able to explore the potential performance benefit. This is particularly interesting for time-series, for which the state 'memory' may need to be parameterized to capture predictive trends of varying shapes and lengths. ## 8 Acknowledgements This project was substantially co-financed by the Swiss State Secretariat for Education, Research and Innovation (SERI).
2309.13111
Back To The Roots: Tree-Based Algorithms for Weakly Supervised Anomaly Detection
Weakly supervised methods have emerged as a powerful tool for model-agnostic anomaly detection at the Large Hadron Collider (LHC). While these methods have shown remarkable performance on specific signatures such as di-jet resonances, their application in a more model-agnostic manner requires dealing with a larger number of potentially noisy input features. In this paper, we show that using boosted decision trees as classifiers in weakly supervised anomaly detection gives superior performance compared to deep neural networks. Boosted decision trees are well known for their effectiveness in tabular data analysis. Our results show that they not only offer significantly faster training and evaluation times, but they are also robust to a large number of noisy input features. By using advanced gradient boosted decision trees in combination with ensembling techniques and an extended set of features, we significantly improve the performance of weakly supervised methods for anomaly detection at the LHC. This advance is a crucial step towards a more model-agnostic search for new physics.
Thorben Finke, Marie Hein, Gregor Kasieczka, Michael Krämer, Alexander Mück, Parada Prangchaikul, Tobias Quadfasel, David Shih, Manuel Sommerhalder
2023-09-22T18:00:03Z
http://arxiv.org/abs/2309.13111v1
# Back to the Roots: ###### Abstract Weakly supervised methods have emerged as a powerful tool for model-agnostic anomaly detection at the Large Hadron Collider (LHC). While these methods have shown remarkable performance on specific signatures such as di-jet resonances, their application in a more model-agnostic manner requires dealing with a larger number of potentially noisy input features. In this paper, we show that using boosted decision trees as classifiers in weakly supervised anomaly detection gives superior performance compared to deep neural networks. Boosted decision trees are well known for their effectiveness in tabular data analysis. Our results show that they not only offer significantly faster training and evaluation times, but they are also robust to a large number of noisy input features. By using advanced gradient boosted decision trees in combination with ensembling techniques and an extended set of features, we significantly improve the performance of weakly supervised methods for anomaly detection at the LHC. This advance is a crucial step towards a more model-agnostic search for new physics. ## I Introduction The search for new physics at the Large Hadron Collider (LHC) requires powerful and model-agnostic anomaly detection methods [1; 2]. Weakly supervised approaches [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] have proven effective in identifying specific signatures, such as di-jet resonances, where signal and background can be distinguished using a limited set of hand-crafted features. In these approaches, a classifier is generally trained to distinguish a signal region from a background-dominated control region or background template. In interesting applications, the signal fraction in the signal region is small, and the classifier needs to identify a small number of signal events in an overwhelming background using only the labels for the signal region and background template. Thus, even a relatively simple supervised classification problem becomes increasingly challenging in the weakly supervised setting as the signal fraction decreases. Extending these methods to a wider range of new physics models without prior knowledge of the essential discriminative features requires handling a larger number of potentially noisy input features. This is a significant challenge as the performance of traditional deep neural networks can degrade under such conditions. In this paper we explore the use of boosted decision trees (BDTs) as classifiers in weakly supervised anomaly detection at the LHC. Boosted decision trees are known for their strength in analyzing tabular data [14]. In particular, on small and medium training sets, BDTs outperform deep learning methods, see for example the reviews [14; 15] and references therein. In the case of weakly supervised anomaly detection, only the signal events separate the two classes. Therefore, while the total training set may be large, the small number of signal data may favor BDTs. In addition, BDTs are generally less affected by uninformative features [14] (see also Appendix B). In a high-dimensional input feature space for truly model-agnostic anomaly detection, many of the input features will inevitably be uninformative for a given signal model. As such, robustness to uninformative features is an important property of a model-agnostic method. This robustness and the efficient training and evaluation times make BDTs an interesting alternative to deep neural networks. 
An important aspect of improving performance in weakly supervised learning is ensembling, which harnesses the collective power of multiple classifiers to achieve improved classification accuracy. Although ensembling can be applied to various machine learning methods, the fast training and evaluation time of BDTs allows ensembling to be used most efficiently. We study the potential superiority of boosted decision trees over deep neural networks for weakly supervised anomaly detection using modern gradient boosting techniques and ensembling methods. Having obtained classifiers that scale well to a larger number of input features, we investigate whether weak supervision techniques can take advantage of additional quantities describing the substructure of jets. The remainder of this paper is structured as follows: Section II describes weak anomaly detection and modern boosted decision trees; Section III introduces the LHC Olympics (LHCO) benchmark dataset used; Section IV discusses the empirical results of our studies; and Section V presents a summary and outlook for future work. In Appendix A we compare the performance of different BDT architectures, in Appendix B we analyze the robustness of BDTs with respect to noisy features, and in Appendix C we discuss the impact of ensembling in more detail. ## II Methods ### Optimal anomaly score and its approximations According to the Neyman-Pearson lemma [16], the most powerful model-agnostic anomaly score is the likelihood ratio between data and background events \[R_{\text{optimal}}(\mathbf{x})=\frac{p_{\text{data}}(\mathbf{x})}{p_{\text{bg}}(\mathbf{ x})}\,, \tag{1}\] where \(\mathbf{x}\) is a set of features that describe the events (e.g. kinematics, high-level variables etc.). This is monotonic with the signal-to-background likelihood ratio for any, not necessarily specified, signal model. We will refer to (1) as the _optimal anomaly score_. In practice, in any realistic high energy physics context, it is not possible to achieve the optimal anomaly score, for several reasons. First, the true background density is generally unknown--our simulations of the Standard Model and the detector response are imperfect, so at best we have an approximation to \(p_{\text{bg}}(\mathbf{x})\). Furthermore, the densities themselves are intractable and at best we have samples from them. Finally, the data density is also unknown and must also be approximated from samples. Existing methods [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] attempt to approximate the optimal anomaly score through binary classifiers that are trained to distinguish between samples drawn from the events within a signal region (SR) and samples drawn from a data-driven background template. To date, this classification task has been exclusively performed using deep neural networks. This work will instead focus on the benefits of using boosted decision trees. So far, methods have primarily focused on the case of _resonant anomaly detection_, where the SR is a window in a resonant variable such as dijet invariant mass, and the background template is derived somehow from neighboring control regions. However, the idea of the optimal anomaly score and approximating it with a classifier is more general and not necessarily limited to resonant anomaly detection, see e.g. [17]. 
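The connection between such a classifier and the likelihood ratio in Eq. (1) is the standard likelihood-ratio trick: assuming balanced (equally sized) training samples from the two classes, the Bayes-optimal classifier output \(c(\mathbf{x})\) for the "data" class satisfies \[R_{\text{optimal}}(\mathbf{x})=\frac{p_{\text{data}}(\mathbf{x})}{p_{\text{bg}}(\mathbf{x})}=\frac{c(\mathbf{x})}{1-c(\mathbf{x})}\,,\] so any output that is monotonically related to \(c(\mathbf{x})\) can be used directly as an anomaly score.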
In order to separate out the different approaches to deriving background templates from the performance of the binary classifier itself--the issue of interest in this work--we will focus on a version of the optimal anomaly score, previously termed the _idealized anomaly detector_ (IAD) in [9], which is a binary classifier trained on SR data and a _perfectly modeled_ background template. In other words, we presume that the background events in the SR and the background template events are drawn from the exact same distribution. Focusing on the IAD in our work ensures that any relative improvement in significance between approaches is solely due to an improvement in the classification task itself. ### Boosted decision trees Gradient Boosted Decision Trees (GBDTs) are established as the best performing methods on tabular data. In recent years, LightGBM [18] has become the state of the art, mainly due to its fast training and evaluation times achieved by histogramming the inputs [19]. We use the HistGradientBoostingClassifier implementation in scikit-learn[20], which is based on LightGBM, because it is easy to use and performs well. We use default hyperparameters unless stated otherwise, i.e. a learning rate of 0.1, a maximum number of leaf nodes per tree of 31, a maximum of 255 bins per feature, and early stopping with a patience of 10 iterations. The maximum number of iterations is increased with respect to the default to 200 to ensure that all training is stopped at the minimum validation loss and not at the maximum number of trees. The GBDT implementation uses sub-sampling for individual trees such that the predictions can vary for difficult classification tasks like weakly supervised anomaly detection with small signal fractions. To further stabilize and improve performance, our classifier uses an ensemble of \(N\) independent runs of the BDT by averaging the \(N\) predictions. For this ensemble, the split between training and validation is randomized. This results in a further increase in performance compared to a fixed split. Unless otherwise specified, our classifier uses an ensemble of \(N=50\) independent runs. We compare the performance of our default GBDT implementation with other BDT architectures in Appendix A. ### Neural Network We use the neural network architecture of Ref. [9], however we have implemented it in Tensorflow [21] using Keras [22]. The network is a fully connected neural network with a binary cross-entropy loss. It consists of 3 hidden layers, each containing 64 nodes and using ReLU activation. The output layer employs the softmax activation. Training is conducted over 100 epochs using the Adam [23] optimizer, a learning rate of \(10^{-3}\), and a batch size of 128. Early stopping with a patience of 10 epochs is used in analogy to the BDT. The NN classifier also applies an ensemble of \(N=50\) independent runs with a randomized validation split. ## III Data ### The dataset Our studies are performed using the R&D dataset [24] from the LHC Olympics 2020 (LHCO) [1]. For the most part, we focus on the original R&D signal model, \(Z^{\prime}\to X(\to qq)Y(\to qq)\), with masses \(m_{Z^{\prime}}=3.5\,\mathrm{TeV}\), \(m_{X}=500\,\mathrm{GeV}\) and \(m_{Y}=100\,\mathrm{GeV}\). In this signal model, the boosted dijets have 2-prong substructure. In Section IV.4, we further investigate an alternative signal model also included with the LHCO R&D dataset, where the \(X\) and \(Y\) particles decay to 3-prong substructure, i.e. \(Z^{\prime}\to X(\to qqq)Y(\to qqq)\)[24]. 
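As a concrete illustration of the ensembling procedure described above, the following scikit-learn sketch trains \(N\) independent BDTs with randomized validation splits and averages their predictions. The arrays X and y and all names are placeholders; this is one way to realize the described setup, not necessarily the authors' exact code.

```python
# Sketch of the ensembled BDT classifier of Sec. II.B (illustrative; X holds the input
# features and y the data-vs-background-template labels, both placeholders).
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

def train_bdt_ensemble(X, y, n_members=50):
    """Train N independent BDTs, each with a freshly randomized 50% validation split."""
    members = []
    for i in range(n_members):
        clf = HistGradientBoostingClassifier(
            learning_rate=0.1,        # default hyperparameters quoted in the text
            max_leaf_nodes=31,
            max_bins=255,
            max_iter=200,             # raised so early stopping decides the number of trees
            early_stopping=True,
            validation_fraction=0.5,  # randomized 50% validation split
            n_iter_no_change=10,      # patience of 10 iterations
            random_state=i,
        )
        members.append(clf.fit(X, y))
    return members

def anomaly_score(members, X):
    # Average the predicted probability of the "data" class over the ensemble.
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
```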
The dataset contains \(1\,000\,000\) QCD dijet events for the background and \(100\,000\) signal events, both simulated with Pythia 8[25; 26] and Delphes 3.4.1[27]. Reconstructed particles are clustered into jets with FastJet[28] using the anti-\(k_{T}\) algorithm [29] with a distance parameter of \(R=1\). Additionally, a trigger of \(p_{\mathrm{T}}>1.2\,\mathrm{TeV}\) must be passed for all events. Of the background events, approximately \(120\,000\) fall in the SR between \(3.3\,\mathrm{TeV}\) and \(3.7\,\mathrm{TeV}\). We also use \(612\,858\) additional QCD background events in the SR, which can be found here [30], and which have been employed previously in Ref. [9]. These events are used for the background template for the IAD, for the supervised classifier, and for the construction of the test dataset. Unless otherwise specified, \(1000\) signal events are injected into the \(1\,000\,000\) background events of the R&D dataset. As in Ref. [9], the same signal events are used in different classifiers unless \(S/B\) is varied. This allows for better comparability between BDT and NN results, as the high computational cost of the NN makes it prohibitive to train multiple classifiers for each setup (see also Section III.3). This results in \(772\) signal events in the SR and corresponds to \(S/B=6\times 10^{-3}\) and \(S/\sqrt{B}=2.2\). For training and validation, a total of approximately \(272\)k SR background events in the background template are trained against \(120\)k SR data events for the IAD and \(54\)k SR signal events for the supervised classifier. In both cases a \(50\,\%\) validation split is used, which is randomized between different runs as mentioned above. Testing is performed on \(340\)k SR background and \(20\)k SR signal events. ### Features The features used for training are similar to those used in Ref. [9]. We select the two jets with the highest \(p_{T}\). As features we use the invariant mass of the lighter of the two jets, \(m_{J_{1}}\), the difference in jet mass between the two jets, \(\Delta m_{J}=m_{J_{2}}-m_{J_{1}}\), and several features based on their n-subjettiness [31; 32]. The original R&D dataset consisted of \(\tau_{1}\), \(\tau_{2}\) and \(\tau_{3}\) subjettiness features computed with angular weighting parameter \(\beta=1\). In this work, we also computed additional subjettiness features (\(\tau_{n}\) up to \(n=9\)) for both jets, and using three different values for \(\beta\) (\(\beta=0.5\), \(\beta=1\) and \(\beta=2\)). Thus, a total of \(54\) subjettiness features are available considering both jets. To investigate the impact of the subjettiness features on the classification performance, we consider various feature sets as listed in Tab. 1. The subjettiness ratios are defined as \(\tau_{ij}\equiv\tau_{i}/\tau_{j}\). ### Metric Throughout this work we use the significance improvement characteristic (SIC) as our main metric. We define \[\mathrm{SIC}=\frac{\epsilon_{S}}{\sqrt{\epsilon_{B}}} \tag{2}\] as the ratio of the fraction of correctly identified signal events \(\epsilon_{S}\) to the square root of the fraction of background events misidentified as signal \(\sqrt{\epsilon_{B}}\). This is an approximation of the expected sensitivity gain over an inclusive search, assuming a dominant Poisson uncertainty in the background event count. For very low background and signal efficiencies, the statistical error of the SIC value increases, causing large fluctuations. 
As the uncertainty of the background efficiency dominates, a cut-off on the relative statistical error of the background efficiency at \(20\%\) is introduced. For the BDT, all results show the median and \(68\,\%\) confidence intervals of \(10\) independent classifiers each employing our ensembling procedure. As training the NNs is computationally intensive, the results of only one NN classifier are shown. However, for the large ensemble of \(N=50\) independent trainings per classifier, the variance is expected to be small; this has been verified by examining \(10\) different NN classifiers with \(N=10\) for all results presented here. \begin{table} \begin{tabular}{c|c|c} Name & \# features & Features \\ \hline Baseline & 4 & \(\{m_{J_{1}},\,\Delta m_{J},\,\tau_{21}^{\beta=1,J_{1}},\,\tau_{21}^{\beta=1, J_{2}}\}\) \\ \hline Extended 1 & 10 & \(\{\begin{matrix}\{m_{J_{1}},\,\Delta m_{J},\,\tau_{N,N-1}^{\beta=1,J_{1}},\, \tau_{N,N-1}^{\beta=1,J_{2}}\}\\ \text{for}\,\,2\leq N\leq 5\end{matrix}\) \\ \hline Extended 2 & 12 & \(\{\begin{matrix}\{m_{J_{1}},\,\Delta m_{J},\,\tau_{N}^{\beta=1,J_{1}},\,\tau_{N }^{\beta=1,J_{2}}\}\\ \text{for}\,\,N\leq 5\end{matrix}\) \\ \hline Extended 3 & 56 & \(\{\begin{matrix}\{m_{J_{1}},\,\Delta m_{J},\,\tau_{N}^{\beta,J_{1}},\,\tau_{N }^{\beta,J_{2}}\}\\ \text{for}\,\,N\leq 9\text{ and}\,\,\beta\in\{0.5,1,2\}\end{matrix}\) \\ \hline \end{tabular} \end{table} Table 1: Subjettiness feature sets considered for training. Full training feature sets always include \(m_{J_{1}}\) and \(\Delta m_{J}\) as well. Details of the observables are given in the text. Results ### Stability under noisy features We first investigate our so-called baseline setup which considers the four features \(m_{J_{1}}\), \(\Delta m_{J}\), \(\tau_{21}^{1}\) and \(\tau_{21}^{2}\) and is close to the one originally considered in [9]. We confirm that BDTs perform at least on par with NNs in this setup as shown in Fig. 1. The IAD BDT outperforms the IAD NN for all values of signal efficiency and achieves a maximum SIC value of 15. As expected, supervised training using signal and background labels outperforms the weakly supervised IAD, with a difference in maximum achievable SIC of about 2. For the relatively simple supervised classification task, BDT and NN achieve the same performance. Next, we test the stability of NN- and BDT-based idealized anomaly detection in the presence of noisy features. Since in anomaly detection the potential signal is not known, it is expected that some input features do not carry relevant information for its discrimination. We mimic this by adding additional input features that consist of Gaussian noise with mean zero and a standard deviation of one. The rest of the architecture is unchanged and the noise features are drawn from the _same_ normal distribution for signal and background. In Fig. 2 (left panel) we observe a drastic drop in SIC to a maximum value below 4 when only five Gaussian noise (5G) features are included in the idealized neural network classifier. The performance of the BDT IAD, on the other hand, is much more stable against uninformative features. This is illustrated in Fig. 2 (right panel), where we show that the BDT IAD still performs significantly better than random for up to 50 Gaussian noise features. The robustness of the BDT with respect to uninformative features is analyzed in more detail in Appendix B. 
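Schematically, this noise-robustness test can be set up as follows (a sketch only, reusing `build_bdt` from the snippet above; the remaining helper and array names are ours, and `X_sr_data`, `X_template`, `X_test`, `y_test` are assumed to hold the SR data, the background template, and the test set):

```python
import numpy as np
from sklearn.metrics import roc_curve

def add_gaussian_noise_features(X, n_noise, rng):
    # append n_noise uninformative features drawn from a unit Gaussian
    return np.hstack([X, rng.normal(size=(len(X), n_noise))])

def ensemble_scores(X_train, y_train, X_test, n_ensemble=50):
    # average the predictions of independently trained BDTs; the internal
    # 50% train/validation split is randomized through random_state
    scores = np.zeros(len(X_test))
    for i in range(n_ensemble):
        clf = build_bdt(random_state=i)
        clf.fit(X_train, y_train)
        scores += clf.predict_proba(X_test)[:, 1] / n_ensemble
    return scores

def sic_curve(y_true, scores):
    # SIC = eps_S / sqrt(eps_B) as a function of the signal efficiency eps_S
    fpr, tpr, _ = roc_curve(y_true, scores)
    keep = fpr > 0
    return tpr[keep], tpr[keep] / np.sqrt(fpr[keep])

rng = np.random.default_rng(0)
X = np.vstack([X_sr_data, X_template])                      # SR data vs. background template
y = np.concatenate([np.ones(len(X_sr_data)), np.zeros(len(X_template))])
X = add_gaussian_noise_features(X, 5, rng)                  # e.g. the "5G" setup
X_test_noisy = add_gaussian_noise_features(X_test, 5, rng)
eps_s, sic = sic_curve(y_test, ensemble_scores(X, y, X_test_noisy))
```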
Ensembling, as introduced in Section II.2, is important for the performance and stability of the BDT classifier in the presence of noisy features. For the NN, on the other hand, the improvement due to ensembling is much less significant (and much more expensive to obtain). As the size of the error band suggests, the BDT classifier would even benefit from a larger ensemble with \(N>100\) when many noisy features are added. The effect of ensembling is discussed in more detail in Appendix C. ### Expanding the pool of features Having identified the BDTs as a method that is stable under the inclusion of potentially noisy features, we can explore how additional physics features can improve anomaly detection performance. In Fig. 3, we compare the SIC curves for the IAD in the baseline setup as in [9] with the IAD employing various extended feature sets using additional jet substructure information. The extended sets are given in Table 1, and include additional subjettiness ratios (extended set 1), individual subjettiness features up to \(\tau_{5}\) (extended set 2), and a set of 54 subjettiness features computed with different angular weighting parameters \(\beta\) (extended set 3). We show results for the NN IAD classifier (Fig. 3, left panel) and the BDT IAD classifier (Fig. 3, right panel). As expected from the study of Gaussian noise, Section IV.1, the NN IAD classifier is very sensitive to the selection of features. The inclusion of additional subjettiness ratios leads to a dramatic reduction in performance, while the inclusion of individual subjettiness features improves the classification properties. The BDT IAD classifier, on the other hand, is characterized not only by its higher performance, but also by its stability under the inclusion of distinct features. Additional subjettiness ratios and, in particular, the inclusion of individual subjettiness features lead to a dramatic improvement in performance. Thus, unlike for the NN, adding more and more features does not degrade the performance of the BDT. From our studies in Section IV.1, it is clear that adding more and more noisy features will eventually degrade performance. We observed a slight drop in performance for the BDT when we added even more features (beyond extended set 3) based on the subjettinesses. However, again the NN classifier suffers much more in this case. Figure 1: Significance improvement characteristic (SIC) curve, eq. (2), for the baseline scenario with four features introduced in [9]. We compare the idealized anomaly detector (IAD) and a fully supervised setting for the BDT and the NN classifier. ### Dependence on amounts of signal and background So far we have only considered a fixed number of signal and background events in the SR. To detect a potential signal, it is of course not essential whether our classifier reaches \(\text{SIC}=20\) or \(\text{SIC}=40\). Much more important is the minimum signal fraction that is still detected by the classifier for a given signal region size. Therefore, it is important to investigate the potential sensitivity of the classifiers for a range of signal fractions and signal region sizes. Figure 4 shows the maximum SIC value as a function of the number of signal events in the SR for the default fixed number of 120k background events. Achieving significance improvement for rarer signals will translate into analysis sensitivity for lower cross-sections. 
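The scan underlying Fig. 4 can be written schematically as follows (again just a sketch, reusing the helpers introduced above; the names are ours):

```python
import numpy as np

def max_sic_vs_nsig(X_bkg_sr, X_sig_sr, X_template, X_test, y_test, n_sig_values):
    # inject a varying number of signal events into the fixed SR background
    # and record the maximum significance improvement for each injection
    results = {}
    for n_sig in n_sig_values:
        X_data = np.vstack([X_bkg_sr, X_sig_sr[:n_sig]])
        X = np.vstack([X_data, X_template])
        y = np.concatenate([np.ones(len(X_data)), np.zeros(len(X_template))])
        _, sic = sic_curve(y_test, ensemble_scores(X, y, X_test))
        results[n_sig] = sic.max()
    return results
```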
For the baseline set of features, the BDT plateaus at a SIC of 15, which is reached with a minimum of approximately 500 signal events, corresponding to \(S/B=4\times 10^{-3}\). The same SIC value can be reached with the extended feature set 3 with less than half the number of events, reaching a plateau SIC value of about 40. Figure 5 shows the maximum improvement in significance for the IAD BDT as a function of the number of signal and background events in the signal region. In contrast to the previous results, for which the background template is larger than the signal region (see Section III.1) in accordance with the studies in Ref. [9], in Fig. 5 the size of the background template is equal to the number of background events \(N_{bkg}\) in the SR such that a larger range of \(N_{bkg}\) may be scanned with the existing data. The top panel shows the results for the baseline setup, and the bottom panel shows the results for the inclusive set of all subjettiness features (extended set 3). In both panels it can be seen that the maximum significance improvement for a given signal significance \(S/\sqrt{B}\) is largely independent of the absolute number of background events. Hence, for larger datasets, the same significance improvement is achieved with a smaller absolute signal fraction. Comparing the top and bottom panels, the dramatic increase in performance of the BDT classifier is evident, with SIC values increasing from 15 for the small feature set to about 40 for the extended feature set. Even more importantly, the minimal signal significance in the original dataset that is required for a high-significance discovery is reduced considerably. Figure 3: The impact of physics features on the NN and BDT classifiers. We show the SIC curves of the IAD NN with the four baseline features and the extended feature sets based on subjettiness variables as introduced in Tab. 1 (left panel), and the corresponding results for the IAD BDT (right panel). Figure 2: The impact of uninformative features on the NN and BDT classifiers. We show the SIC curves of the IAD NN/BDT classifiers employing the four baseline features and additional 1, 2, 5, 10, 30 and 50 input features drawn from Gaussian noise (left/right panel). For 30 and 50 Gaussian noise features, BDT classifiers with \(N=100\) are used instead of the usual \(N=50\) ensembling. ### Three-prong signal So far, we have focused on the LHCO R&D signal consisting of 2-pronged jets. Here we will compare the performance of NN vs. BDT based classifiers across our different feature sets for a different signal (also from the LHCO R&D dataset) consisting of 3-pronged jets. Clearly, an anomaly detector that claims to be both high performing and model agnostic should be able to find both signals with high sensitivity. Figure 6 summarizes our results on the 3-prong signal. We see that with the baseline feature set, the NN is essentially no better than random on the 3-prong signal. The \(\tau_{21}\) features of the baseline set offer very little discrimination power for the 3-prong signal (i.e. are essentially uninformative noise features), so they are evidently degrading the performance of the NN. Meanwhile, the BDT is able to achieve a modest significance improvement, presumably by leveraging the mass features and not being confused by the uninformative \(\tau_{21}\) features. The comparison between NN and BDT is even more dramatic with extended set 1. 
The NN classifier again shows its noise sensitivity: while extended set 1 includes the separating \(\tau_{32}\) subjettiness ratios for each jet, it also includes higher subjettiness ratios. These are essentially uninformative noise features for the 3-prong signal, and thereby prevent better classification with the NN. However, once again, the BDT does not suffer from this problem, and achieves a significant performance gain relative to the baseline features. Using the subjettinesses in extended sets 2 and 3 instead of the ratios yields significantly better NN and BDT classifiers, with the BDT again achieving the best overall performance. Comparing with Fig. 3, we see that NN and BDT classifiers trained on extended sets 2 and 3 perform well as anomaly detectors on both 2-prong and 3-prong signals and can reasonably be called "model agnostic". Figure 4: The impact of physics features on the sensitivity improvement. We show the maximum SIC values as a function of the number of signal events in the SR, with the number of background events held fixed. The BDT IAD performance is shown for the baseline feature set and the extended feature set 3. The NN performance is shown for the baseline set for comparison. Figure 5: The maximum SIC values as a function of the number of signal events \(N_{sig}\) in the SR and the number of background events \(N_{bkg}\) in the SR and the background template. The BDT IAD performance is shown for the baseline feature set (upper panel) and the more inclusive extended set 3 (lower panel). ## V Conclusion In this paper we have investigated the role of different machine learning methods (deep learning vs. boosted decision trees) on classification performance in weakly supervised anomaly detection. Normally, with the large amount of data and high-dimensional, structured feature sets we have in high energy physics, deep learning wins over BDTs hands down, and this has been widely presumed in the literature since the deep learning revolution came to our field. However, weakly supervised anomaly detection presents special challenges, for which, surprisingly, deep learning may not be ideally suited. Although the amount of data is nominally large, in this anomaly detection context one is trying to detect minute differences between data and background--the amount of signal is small, and so in a sense the effective dataset size is small. Furthermore, in the anomaly detection problem, one wants to be model agnostic, so one wants a broad selection of features, and for a given signal model maybe only a small subset of them are actually informative (able to discriminate between signal and background). The rest of the features are uninformative noise. It is well known and well studied in the ML literature that BDTs can outperform deep learning in these settings--tabular data, small to medium dataset sizes, and in the presence of noisy, uninformative features, see e.g. [14]. In this work, we have confirmed this in the context of weakly supervised anomaly detection, applied to the setting of the LHC. We have seen that adding by hand a number of uninformative Gaussian noise features to both signal and background completely destroys the performance of an NN classifier, but the anomaly detection ability of a BDT classifier remains robust for at least up to \(\sim 50\) Gaussian noise features. 
We then considered the performance of the idealized anomaly detector with the addition of more physics-motivated features (additional n-subjettinesses \(\tau_{n=3,\ldots,9}\), although other families of features such as e.g. energy flow polynomials [33] would be possible as well). The NN is unstable with respect to additional features and its performance can deteriorate or improve depending on which features are added. Meanwhile the BDT's performance is more stable and consistently improves as more features are added (up to a certain point--see Fig. 2). Finally, we have shown that with more n-subjettiness features, both NNs and BDTs can in principle detect both 2-prong and 3-prong signals, better realizing the original promise of model-agnostic idealized anomaly detection. One issue we encountered is the lack of reliable, model-agnostic schemes for choosing between anomaly detection methods. We are interested in such model-agnostic metrics for model selection; this is work in progress. Also, the issue of \(m_{JJ}\) sculpting was left for future work--this important issue should be studied in more detail, and something like LaCathode [11] may be needed to mitigate the sculpting problems. Furthermore, this work was focused on the idealized situation where the background templates are assumed to be perfectly modeled. This allowed us to fairly and evenly compare the performance of BDTs and NNs across different feature sets. However, in any realistic scenario, the background template must be derived from data (e.g. fully data-driven side-band interpolation as in [6; 9; 10; 11] or simulation-assisted interpolation as in [7; 8; 12]). An essential next step in this study of weakly supervised, model-agnostic anomaly detection is to consider the performance of BDT and NN classifiers using various feature sets that are realistically derived from data. The prospects for this study are very bright. These techniques (BDT classifiers) can be easily applied to real LHC data; searches based on CWoLa hunting by ATLAS already exist [34], and their physics performance should be significantly improved using BDTs and larger feature sets. Although it was not the primary objective of this work, we also observe that BDTs are much more efficient in their use of computational resources than NNs--they can be trained and evaluated in a fraction of the time. This could be enormously beneficial to real searches (such as [34]) that use weak supervision, where a very large number of classifiers typically need to be trained and evaluated, especially for the estimation of systematic uncertainties. Figure 6: Same as Fig. 3 but for a signal model with three-prong jets. **Note added:** While this work was being completed, we learned of the work of [35] who also studied the applications of BDTs to weakly supervised anomaly detection at the LHC. ## Acknowledgements TF, MH, MK, and AM would like to thank Thea Klaeboe Aarrestad for sparking our interest in boosted decision trees. TF is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 400140256 - GRK 2497: The physics of the heaviest particles at the Large Hadron Collider. The research of MH, MK and AM is supported by the DFG under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery. GK, MS, and TQ acknowledge support by the DFG under Germany's Excellence Strategy 390833306 - EXC 2121: Quantum Universe. DS is supported by DOE grant DOE-SC0010008. 
Computations were performed with computing resources granted by RWTH Aachen University under project rwth0934. ## Code The code for this paper can be found at [https://github.com/uhh-pd-ml/treebased_anomaly_detection](https://github.com/uhh-pd-ml/treebased_anomaly_detection). ## Appendix A Comparison of different BDT architectures Here, we investigate the performance of other tree-based algorithms in addition to the HGBDT used in the main text. The considered models are: * HGBDT (Histogrammed Gradient BDT): A histogrammed version of gradient boosted decision trees. Gradient boosting trains subsequent models on the residuals of the previous ensemble state. The HistGradientBoostingClassifier implementation in scikit-learn[20] is used. * Random forest: Multiple independent decision trees are built based on subsampling and use a majority vote for classification. The RandomForestClassifier implementation in scikit-learn[20] is used. * Adaboost: A boosting algorithm where decision trees are iteratively added to improve classification results. Weights are introduced for the trees as well as for the samples to focus on misclassified samples. The AdaBoostClassifier implementation in scikit-learn[20] is used. * ROOT TMVA BDT: A BDT implementation that is frequently used in Particle Physics applications. Here, we use version 6.28.4 [36]. Using the default settings, this implementation is also based on Adaboost. For the weakly supervised classification both with and without additional noise features, the results are summarized in Fig. 7. As in the main text, ten classifiers are used for creating the SIC curve and the corresponding error bands. However, instead of using ensembling with \(N=50\), we here use \(N=10\) because not all models are fast to train. For the ROOT TMVA model, we used the default settings. The optimal parameters of the other models were tuned based on the validation loss of the weak classification task without noise using the optuna software package [37]. For the GBDT, we find that the default parameters described in Section II.2 are close to optimal such that we do not use any optimization in the main text. Without noise, all tree-based algorithms show similar performance, in particular for low signal efficiencies. When adding 10 Gaussian noise features, however, a clear performance gap can be seen: While the Adaboost, random forest and ROOT TMVA models drop significantly, the histogrammed GBDT retains much of its original performance. Due to the superiority of the histogrammed GBDT in these noise studies, we use it throughout this work. These results are compatible with what is generally considered the state of the art in DT based models [19]. ## Appendix B Uninformative features, rotational invariance, and tabular data In [14], a key argument for decision tree (DT) based models outperforming deep learning on tabular data is their robustness to uninformative features. This robustness stems from the fact that tabular data is not invariant under rotation, and therefore an algorithm with the same property should be used to learn from it. Since NNs are rotationally invariant [38], they must learn the ideal feature orientation in an increasingly high-dimensional input space and then identify the most informative features. The information in the original orientation of the data, which in our case is physics based, is lost. On the other hand, DT-based models are not rotationally invariant and therefore only operate on the correct orientation with respect to the physics of the event. 
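One way to realize such a random rotation of the inputs is sketched below (our illustration; the only assumption is that one common random rotation is applied to all events):

```python
import numpy as np
from scipy.stats import special_ortho_group

def rotate_features(X, seed=0):
    # draw one random rotation matrix and apply it to every event (row of X),
    # for signal and background alike
    R = special_ortho_group.rvs(dim=X.shape[1], random_state=seed)
    return X @ R.T
```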
Since the data used in this work is tabular, we expect the same behavior in the weakly supervised setting: While the performance of BDT-based models should break down significantly under a random rotation of the data, NN-based models should still show a similar performance. However, if no rotation is applied, BDT-based models should outperform NNs, and the performance gap is expected to increase significantly as more uninformative features are used. Figure 8 shows the performance of both BDT and NN with and without a random rotation applied to the input features. As in Appendix A, the medians and error bands are based on ten classifiers using ensembling with \(N=10\). We observe the expected drop in performance for a BDT trained on rotated features, where the maximum SIC value on the baseline features is halved. When three noise features are added, the BDT is unable to maintain better than random performance. A slight drop in SIC of about 2 is also observed for the NN on the rotated features. With the addition of noise features, the drop is slightly larger, but the NN still remains better than random. Figure 8: The impact of a random rotation on the BDT IAD and NN IAD using the baseline feature set and the baseline features with additional 3 Gaussian noise features. SIC curves are compared with and without a random rotation applied for NN models (left panel) and BDT models (right panel). Figure 7: Comparison of different tree-based algorithms for the baseline setup (left panel) as well as for the baseline setup with additional 10 Gaussian noise features (right panel). We show results for the HGBDT used in the main text, a random forest (RF), an Adaboost classifier as well as the BDT classifier implementation in ROOT TMVA [36]. ## Appendix C Ensemble In general, the use of ensembles of different classifiers is a well-established technique in many machine learning applications. A single BDT is already a form of ensemble learning, as boosting combines different decision trees. Often, different ML models are combined in an ensemble to exploit their different strengths. Typically, the ensemble outperforms each of the individual models. This can be seen for example in the comparison of the top taggers in [39]. In weakly supervised anomaly detection, the goal is to lower the threshold for detecting a small fraction of signal in an overwhelming background. Around this threshold, the performance of classifiers becomes unstable. The statistical fluctuation in the assignment of signal events to the training or validation set can be large enough to have a significant impact on the quality of the classification. In addition, subsampling is used in our BDT implementation, resulting in variability in performance. For NNs, the random initialization of the network has a similar effect. The large variance in individual BDT or NN predictions close to the signal fraction threshold requires the use of an ensemble classifier as described in Sections II.2 and II.3. For the BDT classifier with an ensembling of \(N=50\), the performance is significantly increased even with respect to the best individual training, as shown in Fig. 9 for the default baseline setup with 10 additional Gaussian noise features. The variance of the ensemble classifier is quite small as shown in Fig. 2. For the NN, an increase in the performance of the ensemble classifier is also observed. However, since no single run achieves a max SIC value significantly above 2, the NN ensemble cannot match the performance of the BDT ensemble. 
In addition, considering the very different computational costs of training BDTs and NNs, the BDT ensemble not only performs better, but is also much more cost effective.
2309.09149
Generalized Frobenius Number of Three Variables
For $ k \geq 2 $, we let $ A = (a_{1}, a_{2}, \ldots, a_{k}) $ be a $k$-tuple of positive integers with $\gcd(a_{1}, a_2, \ldots, a_k) =1$ and, for a non-negative integer $s$, the generalized Frobenius number of $A$, $g(A;s) = g(a_1, a_2, \ldots, a_k;s)$, the largest integer that has at most $s$ representations in terms of $a_1, a_2, \ldots, a_k$ with non-negative integer coefficients. In this article, we give a formula for the generalized Frobenius number of three positive integers $(a_1,a_2,a_3)$ with certain conditions.
Kittipong Subwattanachai
2023-09-17T04:14:41Z
http://arxiv.org/abs/2309.09149v1
# Generalized Frobenius number of three variables ###### Abstract. For \(k\geq 2\), we let \(A=(a_{1},a_{2},\ldots,a_{k})\) be a \(k\)-tuple of positive integers with \(\gcd(a_{1},a_{2},\ldots,a_{k})=1\) and, for a non-negative integer \(s\), the generalized Frobenius number of \(A\), \(g(A;s)=g(a_{1},a_{2},\ldots,a_{k};s)\), the largest integer that has at most \(s\) representations in terms of \(a_{1},a_{2},\ldots,a_{k}\) with non-negative integer coefficients. In this article, we give a formula for the generalized Frobenius number of three positive integers \((a_{1},a_{2},a_{3})\) with certain conditions. Key words and phrases:Frobenius problem, Generalized Frobenius numbers, linear Diophantine problem 2020 Mathematics Subject Classification: Primary 11D07; Secondary 11B34; ## 1. Introduction For a positive integer \(k\geq 2\), \(A=(a_{1},a_{2},\ldots,a_{k})\) is a \(k\)-tuple of positive integers with \(\gcd(A)=\gcd(a_{1},a_{2},\ldots,a_{k})=1\). If we let \(\mathrm{R}(A)=\mathrm{R}(a_{1},\ldots,a_{k})=\{x_{1}a_{1}+\ldots+x_{k}a_{k}\mid x _{i}\in\mathbb{Z}_{\geq 0},a_{i}\in A,i=1,2,\ldots,k\}\) be the set of integers representable as non-negative linear combinations of \(a_{1},\ldots,a_{k}\) and let \(\mathrm{NR}(A)=\mathrm{NR}(a_{1},a_{2},\ldots,a_{k})=\mathbb{Z}_{\geq-1} \setminus\mathrm{R}(A)\) be the set of integers not representable as non-negative integer combinations of \(a_{1},a_{2},\ldots,a_{k}\). It is known that \(\mathrm{NR}(A)\) is finite if and only if \(\gcd(a_{1},a_{2},\ldots,a_{k})=1\), see for example [13]. There is the well-known linear Diophantine problem, posed by Sylvester [19], known as the _Frobenius problem1_: Given positive integers \(a_{1},a_{2},\ldots,a_{k}\) such that \(\gcd(a_{1},a_{2},\ldots,a_{k})=1\), find the largest integer that _cannot_ be expressed as a non-negative integer linear combination of these numbers. _The largest integer_ is called the _Frobenius number_ of the tuple \(\mathrm{A}=(a_{1},a_{2},\ldots,a_{k})\), and is denoted by \(g(A)=g(a_{1},a_{2},\ldots,a_{k})\). With the above notation, the Frobenius number is given by Footnote 1: It is also known as the coin problem, postage stamp problem, or Chicken McNugget problem, involves determining the largest value that cannot be formed using only coins of specified denominations. \[g(A)=\max\mathrm{NR}(A).\] Note that if all non-negative integers can be expressed as a non-negative integer linear combination of \(A\), then \(g(A)=-1\). For example, \(g(1,2)=-1\). For two variables \(A=\{a,b\}\subset\mathbb{Z}_{>0}\), it is shown by Sylvester [15] that \[g(a,b)=ab-a-b. \tag{1}\] For example, consider \(A=(a,b)=(3,5)\). Then the Frobenius number of \(A\) is given by \(g(3,5)=15-3-5=7\), which means that all integers \(n>7\) can be expressed as a non-negative integer linear combination of \(3\) and \(5\). Tripathi [17] has provided explicit but complicate formulas for calculating the Frobenius number in three variables. However, it is important to note that closed-form solutions for the general case become increasingly challenging when the number of variables exceeds three (\(k>3\)). Nevertheless, various formulas have been proposed for Frobenius numbers in specific scenarios or special cases. For example, explicit formulas in some particular cases of sequences, including arithmetic, geometric-like, Fibonacci, Mersenne, and triangular (see [12] and references therein) are known. 
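As a quick numerical sanity check of (1) and of the example above (a minimal brute-force sketch; the helper names are ours and \(\gcd(a,b)=1\) is assumed):

```python
def representable(n, a, b):
    # can n be written as x*a + y*b with non-negative integers x, y?
    return any((n - x * a) % b == 0 for x in range(n // a + 1))

def frobenius(a, b):
    # largest non-representable integer; a*b is a safe search bound
    return max((n for n in range(1, a * b) if not representable(n, a, b)), default=-1)

assert frobenius(3, 5) == 7 == 3 * 5 - 3 - 5
```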
For a given positive integer \(n\), we let \(d(n;A)=d(n;a_{1},a_{2},\ldots,a_{k})\) be the number of representations to \(a_{1}x_{1}+a_{2}x_{2}+\cdots+a_{k}x_{k}=n\). Its generating series is given by \[\sum_{n\geq 0}d(n;a_{1},\ldots,a_{k})x^{n}=\frac{1}{(1-x^{a_{1}})(1-x^{a_{2}} )\cdots(1-x^{a_{k}})}.\] Sylvester [14] and Cayley [6] show that \(d\left(n;a_{1},a_{2},\ldots,a_{k}\right)\) can be expressed as the sum of a polynomial in \(n\) of degree \(k-1\) and a periodic function of period \(a_{1}a_{2}\cdots a_{k}\). Using Bernoulli numbers, Beck, Gessel, and Komatsu [1] derives the explicit formula for the polynomial section. Tripathi [16] provides a formula for \(d(n;a_{1},a_{2})\). Komatsu [8] shows that the periodic function part is defined in terms of trigonometric functions for three variables in the pairwise coprime case. Binner [4] provides a formula for the number of non-negative integer solutions to the equation \(ax+by+cz=n\) and finds a relationship between the number of solutions and quadratic residues. In this work, we will focus on a generalization of the Frobenius number. For a given non-negative integer \(s\), let \[g(A;s)=g(a_{1},a_{2},\ldots,a_{k};s)=\max\{n\in\mathbb{Z}\mid d(n;A)\leq s\}\] be the largest integer such that the number of expressions that can be represented by \(a_{1},a_{2},\ldots,a_{k}\) is at most \(s\). Notice that \(g(a_{1},a_{2},\ldots,a_{k})=g(a_{1},a_{2},\ldots,a_{k};0)\). That means all integers bigger than \(g(A;s)\) have at least \(s+1\) representations. The \(g(A;s)\) is called _the generalized Frobenius number_. Furthermore, \(g(A;s)\) is well-defined (i.e. bounded above) (see [7]) As a generalization of (1), for \(A=(a,b)\) and \(s\in\mathbb{Z}_{\geq 0}\), (see [3]), an exact formula for \(g(A,s)=g(a,b;s)\) is given by \[g(a,b;s)=(s+1)ab-a-b. \tag{2}\] In general, we have \(d\big{(}g(A;s);A\big{)}\leq s\), but in the case \(|A|=2\) one can show that actually \(d\big{(}g(A;s);A\big{)}=s\). Similar to the \(s=0\) case, exact formulas for the generalized Frobenius number in the cases \(k\geq 3\) are still unknown. For \(k=3\) exact formulas are just known for special cases. For example, there are explicit results in the case of triangular numbers [10], repunits [9] and Fibonacci numbers [11]. Recently, Binner [5] provide bounds for the number of solutions \(a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}=n\) and use these bounds to solve \(g(a_{1},a_{2},a_{3};s)\) when \(s\) is large. In 2022, Woods [18] provide formulas and asymptotics for the generalized Frobenius problem using the restricted partition function. Our main result is the following explicit formula for a special case of the generalized Frobenius number in three variables. **Theorem 1**.: _Let \(a_{1},a_{2}\) and \(a_{3}\) be positive integers with \(\gcd(a_{1},a_{2},a_{3})=1\) and let \(s\in\mathbb{Z}_{\geq 0}\). If \(d_{1}=gcd(a_{2},a_{3})\) and suppose that \(a_{1}\equiv 0\pmod{\frac{a_{2}}{d_{1}}}\) or \(a_{1}\equiv 0\pmod{\frac{a_{3}}{d_{1}}}\), then_ \[g\bigg{(}a_{1},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_ {1}^{2}}\right\rceil\bigg{)}=(s+1)\frac{a_{2}a_{3}}{d_{1}}+a_{1}d_{1}-a_{1}-a_ {2}-a_{3}.\] **Remark 2**.: In Theorem 1, the order of integers in a tuple \((a_{1},a_{2},a_{3})\) is not necessary due to symmetry of \(g\). 
So, if \(d_{2}=\gcd(a_{1},a_{3})\) and \(a_{2}\equiv 0\pmod{\frac{a_{1}}{d_{2}}}\) or \(a_{2}\equiv 0\pmod{\frac{a_{3}}{d_{2}}}\), then \[g\bigg{(}a_{1},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{1}a_{3}}{a_{2}d_ {2}^{2}}\right\rceil\bigg{)}=(s+1)\frac{a_{1}a_{3}}{d_{2}}+a_{2}d_{2}-a_{1}-a_ {2}-a_{3}.\] Similarly, if \(d_{3}=\gcd(a_{1},a_{2})\) and \(a_{3}\equiv 0\pmod{\frac{a_{1}}{d_{3}}}\) or \(a_{3}\equiv 0\pmod{\frac{a_{2}}{d_{3}}}\), then \[g\bigg{(}a_{1},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{1}a_{2}}{a_{3}d_ {1}^{2}}\right\rceil\bigg{)}=(s+1)\frac{a_{1}a_{2}}{d_{3}}+a_{3}d_{3}-a_{1}-a_ {2}-a_{3}.\] **Remark 3**.: Notice that \[\mathbf{U}_{(a_{1},a_{2},a_{3})}:=\bigcup_{i=1}^{3}\left\{\sum_{j=0}^{s}\left \lceil\frac{j\prod_{1\leq\ell\leq 3}a_{\ell}}{a_{i}d_{i}^{2}}\right\rceil \mid s\geq 0\right\}\subseteq\{d(n;a_{1},a_{2},a_{3})\mid n\in\mathbb{Z}_{>0}\}.\] In Example 9, we consider \((a_{1},a_{2},a_{3})=(10,15,21)\). Since \(d(120;10,15,21)=6\) but \[6\not\in\mathbf{U}_{(10,15,21)}=\{0,1,2,3,4,5,7,9,11,14,17,20,22,24,\ldots\} \subsetneq\{d(n;10,15,21)\mid n\in\mathbb{Z}_{>0}\}.\] However, they are equal in some cases. For example, if \(a_{1},a_{2}\) and \(a_{3}\) are of the form in Corollary 10, then we obtain that \[\mathbf{U}_{(a_{1},a_{2},a_{3})}=\{t_{k}\mid k\in\mathbb{Z}_{\geq 0}\}=\{d(n;a_{ 1},a_{2},a_{3})\mid n\in\mathbb{Z}_{>0}\},\] where \(t_{k}\) is the \(k\)th triangular number which is given by \(t_{k}=\binom{k+1}{2}\). We will give the proof in Section 3. Then, in Section 4, we discuss some special cases of Theorem 1 and relate them to previous study. **Acknowledgement:** This project was supported by the Development and Promotion for Science and Technology Talents Project (DPST), Thailand. I extend my sincere appreciation to Prof. Henrik Bachmann and Prof. Kohji Matsumoto for their invaluable guidance and support as my supervisors. ## 2. Preliminary Lemmas Before proving Theorem 1, we introduce some Lemmas. Beck and Kifer [2] show the following result on \(g(a_{1},a_{2},\ldots,a_{k};s)\) in terms of \(\ell=\gcd(a_{2},a_{3},\ldots,a_{k})\). **Lemma 4**.: _[_2_, Lemma 4]_ _For \(k\geq 2\), let \(A=(a_{1},\ldots,a_{k})\) be a \(k\)-tuple of positive integers with \(\gcd(A)=1\). If \(\ell=\gcd(a_{2},a_{3},\ldots,a_{k})\), let \(a_{j}=\ell a_{j}^{\prime}\) for \(2\leq j\leq k\). Then for \(s\geq 0\)_ \[g(a_{1},a_{2},\ldots,a_{k};s)=\ell\,g\big{(}a_{1},a_{2}^{\prime},a_{3}^{\prime },\ldots,a_{k}^{\prime};s\big{)}+a_{1}(\ell-1).\] The next lemma give an upper bound for the number of representations to \(a_{1}x_{1}+\cdots+a_{k}x_{k}=g(a_{1},\ldots,a_{k};s)-jc\), for all integers \(j\) such that \(0\leq jc\leq g(a_{1},\ldots,a_{k};s)\) when \(c\equiv 0\pmod{a_{r}}\) for some \(r\in\{1,\ldots,k\}\). **Lemma 5**.: _For \(k\geq 2\), let \(A=(a_{1},\ldots,a_{k})\) be a \(k\)-tuple of positive integers with \(\gcd(A)=1\) and let \(s\in\mathbb{Z}_{\geq 0}\). If \(c\) is a positive integer such that \(c\equiv 0\pmod{a_{r}}\) for some \(r=1,\ldots,k\), then, for all integers \(0\leq jc\leq g(A;s)\),_ \[d\big{(}g(A;s)-jc;A\big{)}\leq s.\] Proof.: Suppose that \(c\in\mathbb{Z}_{>0}\) such that \(c\equiv 0\pmod{a_{r}}\) for some \(r=1,\ldots,k\). 
Assume that there exists \(0\leq j\leq\frac{g(A;s)}{c}\) such that \[d\big{(}g(A;s)-jc;A\big{)}\geq s+1.\] So, there are _at least_\(s+1\) non-negative integer solutions \((x_{1},\ldots,x_{k})\) such that \[g(A;s)-jc=\sum_{\ell=1}^{k}x_{\ell}a_{\ell}.\] Since \(c\equiv 0\pmod{a_{r}}\), then \(c=a_{r}q\) for some \(q\in\mathbb{Z}_{0}\). So, we obtain that \[g(A;s)=x_{1}a_{1}+\ldots+x_{r-1}a_{r-1}+(x_{r}+jq)a_{r}+x_{r+1}a_{r+1}+\ldots+ x_{k}a_{k},\] this means that \(g(A;s)\) has _at least_\(s+1\) non-negative representations in terms of \(a_{1},\ldots,a_{k}\). We get a contradiction since \(g(A;s)\) must have at most \(s\) representations. To accomplish the proof of Theorem 1, we need the following lemma. If \(k=2\), says \(A=\{a,b\}\), then, for a non-negative number \(j\leq g(a,b;s)/c\), \(d\big{(}g(a,b;s)-jc;a,b\big{)}=i\) is equivalent to \(g(a,b;i-1)<g(a,b;s)-jc\leq g(a,b;i)\). **Lemma 6**.: _Let \(a,b\in\mathbb{Z}_{>0}\) with \(\gcd(a,b)=1\), \(s\in\mathbb{Z}_{\geq 0}\), \(i\in\{0,1,\ldots,s\}\). Suppose that \(c\) is a positive integer such that \(c\equiv 0\pmod{a}\) or \(c\equiv 0\pmod{b}\) and \(j\) is a non-negative integer such that \(j\leq\frac{g(a,b;s)}{c}\). Then_ \[d\big{(}g(a,b;s)-jc;a,b\big{)}=i,\] _if and only if,_ \[g(a,b;i-1)<g(a,b;s)-jc\leq g(a,b;i).\] _Here we set \(g(a,b;-1)\) to be \(-2\)._ Proof.: Without loss of generality, we only prove the case \(c\equiv 0\pmod{a}\). For convenient, throughout the proof, for \(s\geq 0\), we let \(g_{s}:=g(a,b;s)\), which by (1) is \(g_{s}=(s+1)ab-a-b\). (\(\Rightarrow\)) Suppose that \(d(g_{s}-jc;a,b)=i\) for some \(i=0,1,\ldots,s\). By the definition of \(g_{i}\), it follows immediately that \(g_{s}-jc\leq g(a,b;i)\). Clearly, if \(i=0\), then \(-2<g_{s}-jc\leq g_{i}\), we are done. So, assume that \(i\geq 1\). If \(g_{s}-jc=g_{i-1}\), then \(d(g_{s}-jc;a,b)=i-1\), a contradiction. It remains to show that \[g_{i-1}<g_{s}-jc.\] We will prove this statement by assuming that \(g_{s}-jc<g_{i-1}\). Since \(d(g_{s}-jc;a,b)=i\), by the definition of \(d\), there are \(i\) non-negative integer solutions \((x,y)\) such that \[g_{s}-jc=xa+yb\] If we let \(\Delta=g_{i-1}-g_{s}+jc\), then \(\Delta>0\) and \[\Delta=jc-(s+1-i)(ab).\] Since \(c=ak\) for some \(k\in\mathbb{Z}_{>0}\), it follows that \[\Delta=\big{(}jk-(s+1-i)b\big{)}a.\] So, \(\Delta\in\mathbb{Z}_{>0}\). Therefore, we obtain \[g_{i-1}=g_{s}-jc+\Delta=\big{(}x+jk-(s+1-i)b\big{)}a+yb,\] this implies that \(g_{i-1}=g(a,b;i-1)\) has at least \(i\) representations in terms of \(a\) and \(b\), a contradiction. Therefore, \(g_{i-1}<g_{s}-jc\leq g_{i}\). \((\Leftarrow)\) Suppose that \(g_{i-1}<g_{s}-jc\leq g_{i}\) for some \(i=0,1,2,\ldots,s\). Clearly, since \(g_{i-1}<g_{s}-jc\), we have \[d(g_{s}-jc;a,b)\geq i.\] Our goal is to show that \(d(g_{s}-jc;a,b)=i\). If \(g_{s}-jc=g_{i}\), then we are done. So we assume that \(g_{s}-jc<g_{i}\) and also assume that \(d(g_{s}-jc;a,b)>i\). Then, there are _at least_\(i+1\) non-negative integer solutions \((x,y)\) such that \[g_{s}-jc=xa+yb.\] Let \(\Delta=g_{i}-g_{s}+jc\). Then \(\Delta>0\) and \[\Delta=jc-(s-i)ab.\] We have \(c=ka\) for some \(k\in\mathbb{Z}_{>0}\). Thus, \[\Delta=\big{(}jk-(s-i)b\big{)}a.\] One can see that \(\Delta\in\mathbb{Z}_{>0}\). Therefore, we obtain that \[g_{i}=g_{s}-jc+\Delta=\big{(}x+jk-(s-i)b\big{)}a+yb,\] i.e., \(g_{i}\) has _at least_\(i+1\) representations in terms of \(a\) and \(b\), a contradiction. Therefore, \[d(g_{s}-jc;a,b)=i.\] Similarly to that, we can prove the case where \(c\equiv 0\pmod{b}\) ## 3. 
Proof of Theorem 1 By applying Lemma 4, Lemma 5 and Lemma 6, we can prove Theorem 1 as follows. Proof of Theorem 1.: Suppose that \(d_{1}=\gcd(a_{2},a_{3})\) and \(a_{1}\equiv 0\pmod{\frac{a_{2}}{d_{1}}}\). By applying Lemma 4, we obtain that \[g\bigg{(}a_{1},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_{ 1}^{2}}\right\rceil\bigg{)}=d_{1}g\bigg{(}a_{1},\frac{a_{2}}{d_{1}},\frac{a_{ 3}}{d_{1}};\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}}\right \rceil\bigg{)}+a_{1}(d_{1}-1). \tag{3}\] We will show that \[g\bigg{(}a_{1},\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};\sum_{j=0}^{s}\left\lceil \frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil\bigg{)}=g\bigg{(}\frac{a_{2}}{d _{1}},\frac{a_{3}}{d_{1}};s\bigg{)}. \tag{4}\] Then one can see that, for \(m\in\mathbb{Z}_{\geq 0}\), \[d\bigg{(}m;a_{1},\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\bigg{)}=\sum_{j=0}^{ \lfloor\frac{m}{a_{1}}\rfloor}d\bigg{(}m-ja_{1};\frac{a_{2}}{d_{1}},\frac{a_{3 }}{d_{1}}\bigg{)}. \tag{5}\] Put \(m=g\bigg{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\bigg{)}\), then we obtain that \[d\Bigg{(}g\bigg{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\bigg{)};a_{1}, \frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\bigg{)}=\sum_{j=0}^{\lfloor\frac{g \left(\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\right)}{a_{1}}\rfloor}d\Bigg{(} g\bigg{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\bigg{)}-ja_{1};\frac{a_{2}}{d _{1}},\frac{a_{3}}{d_{1}}\Bigg{)}. \tag{6}\] By Lemma 5, we have that each value of \(d\bigg{(}g\big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\big{)}-ja_{1};\frac {a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\bigg{)}\) have to be equal to any of \(0,1,\ldots,s\). To calculate the right-hand side of (6), we count the number of \(0\leq j\leq g\big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\big{)}/a_{1}\) such that \[d\Bigg{(}g\bigg{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\bigg{)}-ja_{1}; \frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\bigg{)}=i, \tag{7}\] for all \(i=1,2,\ldots,s\). For convenient, let \(g_{s}:=g\big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\big{)}\). For given \(i\) the \(j\) such that (7) holds are, by Lemma 6, those with \(g_{i-1}<g_{s}-ja_{1}\leq g_{i}\). By (2), this is equivalent to \[\frac{ia_{2}a_{3}}{d_{1}^{2}}-\frac{a_{2}}{d_{1}}-\frac{a_{3}}{d_{1}}\,<\,(s+ 1)\frac{a_{2}a_{3}}{d_{1}^{2}}-\frac{a_{2}}{d_{1}}-\frac{a_{3}}{d_{1}}-ja_{1}\, \leq\,(i+1)\frac{a_{2}a_{3}}{d_{1}^{2}}-\frac{a_{2}}{d_{1}}-\frac{a_{3}}{d_{1}}.\] So, \[(s+1-i)\frac{a_{2}a_{3}}{a_{1}d_{1}^{2}}>j\geq(s-i)\frac{a_{2}a_{3}}{a_{1}d_{ 1}^{2}}.\] Thus, by Lemma 6, there are \[\left\lceil(s+1-i)\frac{a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil-\left\lceil(s -i)\frac{a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil\] of \(j\) in \([0,g_{s}/a_{1})\) such that \(d\big{(}g_{s}-ja_{1};\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\big{)}=i\) for \(i=1,2,\ldots,s\). 
So, by (6), we have \[d\Bigg{(}g\Big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\Big{)} ;a_{1},\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}}\Bigg{)}\] \[=\sum_{j=0}^{\left\lfloor\frac{a_{2}}{a_{1}}\right\rfloor}d\Bigg{(} g\bigg{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\bigg{)}-ja_{1};\frac{a_{2}}{d_{1}}, \frac{a_{3}}{d_{1}}\Bigg{)}\] \[=s\left\lceil\frac{a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil+(s-1) \bigg{(}\left\lceil\frac{2a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil-\left\lceil \frac{a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil\bigg{)}+(s-2)\bigg{(}\left\lceil \frac{3a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil-\left\lceil\frac{2a_{2}a_{3}}{a _{1}d_{1}^{2}}\right\rceil\bigg{)}+\] \[\qquad\cdots+\Bigg{(}\left\lceil\frac{sa_{2}a_{3}}{a_{1}d_{1}^{2} }\right\rceil-\left\lceil\frac{(s-1)a_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil \bigg{)}\] \[=\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}} \right\rceil.\] Therefore, by the choice of \(m=g\Big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\Big{)}\) and (5), this value \(m\) is the largest that the right-hand side of (5) is (less than or) equal to \(\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil\). Then \[g\Big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\Big{)}=g\Bigg{(}a_{1},\frac{ a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};\sum_{j=0}^{s}\left\lceil\frac{ja_{2}a_{3}}{a _{1}d_{1}^{2}}\right\rceil\Bigg{)}.\] Hence, by (3) and (2), \[g\Bigg{(}a_{1},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{2} a_{3}}{a_{1}d_{1}^{2}}\right\rceil\Bigg{)} =d_{1}g\Big{(}\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};s\Big{)}+a_{ 1}(d_{1}-1)\] \[=d_{1}\Big{(}(s+1)\frac{a_{2}a_{3}}{d_{1}^{2}}-\frac{a_{2}}{d_{1} }-\frac{a_{3}}{d_{1}}\Big{)}+a_{1}d_{1}-a_{1}\] \[=(s+1)\frac{a_{2}a_{3}}{d_{1}}+a_{1}d_{1}-a_{1}-a_{2}-a_{3}.\] Compared to the results in [9, 11] our main theorem seems more useful when \(s\) is large, since their results have an upper bound on \(s\). The result in [4] holds for \(s\) is extremely large. For example, by [4, Section 3.2]), \(g(16,23,37;s)\) can be found for \(s\geq 157291918\). Therefore, our result behaves nicely for \(s\) not too large. In [18] the value for \(s\) is not explicitly given. ## 4. Further Properties and Special Cases The next proposition shows that, for given \(a,b\in\mathbb{Z}_{>0}\) with \(\gcd(a,b)=1\), if \(c\in\mathbb{Z}_{>0}\) such that \(c\equiv 0\pmod{a}\) or \(c\equiv 0\pmod{b}\), then the sequence \(\big{(}d\big{(}g(a,b;s)-jc;a,b\big{)}\big{)}_{j\geq 0}\) is decreasing. **Proposition 7**.: _Let \(a,b\in\mathbb{Z}_{>0}\) with \(\gcd(a,b)=1\) and let \(s\in\mathbb{Z}_{\geq 0}\). Suppose that \(c\in\mathbb{Z}_{>0}\) such that \(c\equiv 0\pmod{a}\) or \(c\equiv 0\pmod{b}\). If \(j_{1},j_{2}\in\mathbb{Z}_{\geq 0}\) such that \(0\leq j_{1}<j_{2}\leq\frac{g(a,b;s)}{c}\), then_ \[d\big{(}g(a,b;s)-j_{2}c;a,b\big{)}\leq d\big{(}g(a,b;s)-j_{1}c;a,b\big{)}.\] Proof.: For convenient, we let \(g_{s}:=g\big{(}a,b;s\big{)}\). According to Lemma 5, we can assume that \[d(g_{s}-j_{1}c;a,b)=i,\] for some \(i=0,1,\ldots,s\). If \(i=0\), then, by Lemma 6, \[0\leq g_{s}-j_{1}c\leq g_{0}.\] Since \(j_{1}<j_{2}\leq\frac{g_{s}}{c}\), we have \[0\leq g_{s}-j_{2}c<g_{s}-j_{1}c\leq g_{0}.\] Then, by Lemma 6, \[d\big{(}g_{s}-j_{2}c;a,b\big{)}=0=d\big{(}g_{s}-j_{1}c;a,b\big{)}.\] Assume that \(i\geq 1\). 
Again by Lemma 6, we have \[g_{i-1}<g_{s}-j_{1}c\leq g_{i}.\] If \(g_{i-1}<g_{s}-j_{2}c<g_{s}-j_{1}c\leq g_{i}\), It follows immediately that \[d\big{(}g_{s}-j_{2}c;a,b\big{)}=i=d\big{(}g_{s}-j_{1}c;a,b\big{)}.\] If \(g_{s}-j_{2}c\leq g_{i-1}\), then, without loss of generality, assume that there exists \(k\in\{1,2,\ldots,i-1\}\) such that \[g_{k-1}<g_{s}-j_{2}c\leq g_{k}.\] Therefore, by Lemma 6, \[d\big{(}g_{s}-j_{2}c;a,b\big{)}=k<i=d\big{(}g_{s}-j_{1}c;a,b\big{)}.\] For example, if we let \(a=3,b=7,c=6\), and \(s=2\). Then \(g(a,b;s)=g(3,7;2)=63-3-7=53\). We have \(6\equiv 0\pmod{3}\). Then, \[\left(d\big{(}g(3,7;2)-6j;3,7\big{)}\right)_{j\geq 0} =\big{(}d(63-6j;4,5)\big{)}_{j\geq 0}\] \[=(4,3,3,3,2,2,2,2,1,1,1,0,0,\ldots),\] which is decreasing. However, if \(a=4,b=5,c=3\), and \(s=1\). Then \(g(a,b;s)=g(4,5;1)=40-4-5=31\). We have \(c\not\equiv 0\pmod{a}\) and \(c\not\equiv 0\pmod{b}\) and the sequence \[\left(d\big{(}g(4,5;1)-3j;4,5\big{)}\right)_{j\geq 0} =\big{(}d(31-3j;4,5)\big{)}_{j\geq 0}\] \[=(1,2,2,1,1,1,1,1,0,1,0,0,\ldots),\] which is not decreasing. The first application of Theorem 1 is to calculate the Frobenius number for three consecutive triangular integers, which are the numbers of dots in an equilateral triangle. The explicit formula for the \(n\)th triangular number is given by \(t_{n}=\binom{n+1}{2}=\frac{n(n+1)}{2}\). Robles-Perez and Rosales [12] show that, for \(n\in\mathbb{Z}_{>0}\), \(\gcd(t_{n},t_{n+1},t_{n+2})=1\) and \[\gcd(t_{n+1},t_{n+2})=\begin{cases}\frac{n+2}{2}&\text{ if $n$ is even;}\\ n+2&\text{ if $n$ is odd.}\end{cases}\] **Corollary 8**.: _[_10_, Theorem 1]_ _Let \(n\in\mathbb{Z}_{>0}\) and \(s\in\mathbb{Z}_{\geq 0}\). If \(d_{1}=\gcd(t_{n+1},t_{n+2})\) and \(d_{3}=\gcd(t_{n},t_{n+1})\), then, we have_ \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n+1}t_{n+2}}{ t_{n}d_{1}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n+1}t_{n+2}}{d_{1}}+t_{n}d_{1}-t_ {n}-t_{n+1}-t_{n+2}, \tag{8}\] _and_ \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n}t_{n+1}}{ t_{n+2}d_{3}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n}t_{n+1}}{d_{3}}+t_{n+2}d_{3}-t_ {n}-t_{n+1}-t_{n+2}. \tag{9}\] Proof.: If \(n\) is even, then \(d_{1}=\gcd(t_{n+1},t_{n+2})=\frac{(n+2)}{2}\) and \(\frac{t_{n+1}}{d_{1}}=n+1\). Since \(t_{n}=\frac{n(n+1)}{2}\equiv 0\pmod{n+1}\), by applying Theorem 1, we have \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n+1}t_{n+2}}{ t_{n}d_{1}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n+1}t_{n+2}}{d_{1}}+t_{n}d_{1}-t_ {n}-t_{n+1}-t_{n+2}.\] Since \(n\) is even, it follows that \(d_{3}=\gcd(t_{n},t_{n+1})=n+1\) and \(\frac{t_{n+1}}{d_{3}}=\frac{n+2}{2}\). Since \(t_{n+2}=\frac{(n+2)(n+3)}{2}\equiv 0\pmod{\frac{n+2}{2}}\), by Remark 2, we obtain \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n+1}t_{n+2}}{ t_{n}d_{1}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n}t_{n+1}}{d_{3}}+t_{n+2}d_{3}-t_ {n}-t_{n+1}-t_{n+2}.\] If \(n\) is odd, then \(d_{1}=\gcd(t_{n+1},t_{n+2})=n+2\) and \(\frac{t_{n+1}}{d_{1}}=\frac{n+1}{2}\). By applying Theorem 1 since \(t_{n}=\frac{n(n+1)}{2}\equiv 0\pmod{\frac{n+1}{2}}\), we have \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n+1}t_{n+2}}{ t_{n}d_{1}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n+1}t_{n+2}}{d_{1}}+t_{n}d_{1}-t_ {n}-t_{n+1}-t_{n+2}.\] Since \(n\) is odd, it follows that \(d_{3}=\gcd(t_{n},t_{n+1})=\frac{n+1}{2}\), \(\frac{t_{n+1}}{d_{3}}=n+2\), and \(t_{n+2}=\frac{(n+2)(n+3)}{2}\equiv 0\pmod{n+2}\). 
Then, by Remark 2, we have \[g\Big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=0}^{s}\left\lceil\frac{jt_{n}t_{n+1}}{ t_{n+2}d_{3}^{2}}\right\rceil\Big{)}=(s+1)\frac{t_{n}t_{n+1}}{d_{3}}+t_{n+2}d_{3}-t_ {n}-t_{n+1}-t_{n+2}.\] Note that we can write (8) in Corollary 8 as the same result in [10]: For even \(n\), we have \[g\big{(}t_{n},t_{n+1},t_{n+2};s(s+1)+\sum_{j=1}^{s}\left\lceil\frac{6j}{n} \right\rceil\big{)}=\frac{(n+1)(n+2)(2s(n+3)+3n)}{4}-1,\] and, for odd \(n\geq 3\), \[g\big{(}t_{n},t_{n+1},t_{n+2};\sum_{j=1}^{s}\left\lceil\frac{j}{2}\big{(}1+\frac{3 }{n}\big{)}\right\rceil\big{)}=\frac{(n+1)(n+2)((n+3)s+3(n-1))}{4}-1.\] **Example 9**.: Let \(n=4\). Then \((t_{4},t_{5},t_{6})=(10,15,21)\). We have \(d_{1}=\gcd(15,21)=3\). For convenient, we let \(\Sigma_{1}:=\sum_{j=0}^{s}\left\lceil\frac{jt_{5}t_{6}}{t_{4}d_{1}^{2}}\right\rceil\). Then, by (8), for \(s\geq 0\), \(g\big{(}t_{4},t_{5},t_{6};\Sigma_{1}\big{)}\) are shown as follow: \begin{tabular}{|c|c c c c c c c c c c|} \hline s & 0 & 1 & 2 & 3 & 4 & 5 & \(\ldots\) & 100 & \(\ldots\) & \(10^{4}\) & \(\ldots\) \\ \hline \(\Sigma_{1}\) & 0 & 4 & 11 & 22 & 36 & 54 & \(\ldots\) & 17700 & \(\ldots\) & 175020000 & \(\ldots\) \\ \hline \(g(10,15,21;\Sigma_{1})\) & 89 & 194 & 299 & 404 & 509 & 614 & \(\ldots\) & 10589 & \(\ldots\) & 1050089 & \(\ldots\) \\ \hline \end{tabular} In the same way, \(d_{3}=\gcd(10,15)=5\). If \(\Sigma_{2}:=\sum_{j=0}^{s}\left\lceil\frac{jt_{4}t_{5}}{t_{6}d_{3}^{2}}\right\rceil\). Then, by (9), for \(s\geq 0\), \(g\big{(}t_{4},t_{5},t_{6};\Sigma_{2}\big{)}\) are shown as follow: \begin{tabular}{|c|c c c c c c c c c c|} \hline s & 0 & 1 & 2 & 3 & 4 & 5 & \(\ldots\) & 100 & \(\ldots\) & \(10^{4}\) & \(\ldots\) \\ \hline \(\Sigma_{2}\) & 0 & 1 & 2 & 3 & 5 & 7 & \(\ldots\) & 1486 & \(\ldots\) & 14291429 & \(\ldots\) \\ \hline \(g(10,15,21;\Sigma_{2})\) & 89 & 119 & 149 & 179 & 209 & 239 & \(\ldots\) & 3089 & \(\ldots\) & 300089 & \(\ldots\) \\ \hline \end{tabular} Beck and Kifer [2] presented a formula for calculating the generalized Frobenius number for special cases. We give an alternative proof of their result using Theorem 1 as follows. **Corollary 10**.: _[_2_, Theorem 1 (\(k=3\) and \(t=2\))] Let \(m_{1},m_{2},m_{3}\) be pairwise coprime numbers and let \(a_{1}=m_{2}m_{3},a_{2}=m_{1}m_{3},a_{3}=m_{1}m_{2}\). Then, for \(n\in\mathbb{Z}_{\geq 0}\),_ \[g(a,b,c;t_{n}) =\operatorname{lcm}(a,b,c)(n+2)-a-b-c\] \[=m_{1}m_{2}m_{3}(n+2)-m_{1}m_{2}-m_{1}m_{3}-m_{2}m_{3}\,.\] _Moreover, we have \(\{d(n;a,b,c)\mid n\geq 1\}=\{t_{k}\mid k\in\mathbb{Z}_{\geq 0}\}\)._ Proof.: It is clearly that \(d_{1}=\gcd(a_{2},a_{3})=m_{1},d_{2}=\gcd(a_{1},a_{3})=m_{2}\), and \(d_{3}=\gcd(a_{1},a_{2})=m_{3}\). Furthermore, \(a_{1}\equiv 0\pmod{\frac{a_{2}}{m_{1}}}\), \(a_{2}\equiv 0\pmod{\frac{a_{3}}{m_{2}}}\), and \(a_{3}\equiv 0\pmod{\frac{a_{1}}{m_{3}}}\). 
Moreover, for \(n\in\mathbb{Z}_{\geq 0}\) we have \[\sum_{j=0}^{n}\left\lceil\frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil=\sum_{j =0}^{n}\left\lceil\frac{ja_{1}a_{3}}{a_{2}d_{2}^{2}}\right\rceil=\sum_{j=0}^{n} \left\lceil\frac{ja_{1}a_{2}}{a_{3}d_{3}^{2}}\right\rceil=\sum_{j=0}^{n}j=t_{n}.\] So, by Theorem 1, we obtain that for \(n\in\mathbb{Z}_{\geq 0}\), \[g\big{(}a_{1},a_{2},a_{3};t_{n}\big{)} =(n+1)m_{1}m_{2}m_{3}+m_{1}m_{2}m_{3}-m_{2}m_{3}-m_{1}m_{3}-m_{1}m _{2}\] \[=(n+2)m_{1}m_{2}m_{3}-m_{2}m_{3}-m_{1}m_{3}-m_{1}m_{2}\] \[=\operatorname{lcm}(a,b,c)(n+2)-a-b-c.\] Moreover, by Theorem 1 of [2], we obtain that \[\{d(n;a,b,c)\mid n\geq 1\}=\{t_{k}\mid k\in\mathbb{Z}_{\geq 0}\}.\] For example, let \((m_{1},m_{2},m_{3})=(2,5,11)\) in Corollary 10. Then \(a_{1}=55,a_{2}=22\) and \(a_{3}=10\). Then, for \(n\in\mathbb{Z}_{>0}\), we compute \(g(55,22,10:t_{n})\) by using Corollary 10. Hence, we have \[g(55,22,10;t_{n})=110(n+2)-10-22-55=110n+133.\] \begin{tabular}{|c|c c c c c c c c c|} \hline s & 1 & 2 & 3 & 4 & 5 & 6 & \(\dots\) & 100 & \(\dots\) \\ \hline \(t_{n}\) & 1 & 3 & 6 & 10 & 15 & 21 & \(\dots\) & 5050 & \(\dots\) \\ \hline \(g(10,22,55;t_{n})\) & 243 & 353 & 463 & 573 & 683 & 793 & \(\dots\) & 11133 & \(\dots\) \\ \hline \end{tabular} Furthermore, we obtain that \(\{d(n;55,22,10)\mid n\geq 1\}=\{t_{k}\mid k\geq 0\}\). The next special case is when one of the components is 1 and the others are arbitrary. **Corollary 11**.: _Let \(a,b\in\mathbb{Z}_{>0}\) and \(s\in\mathbb{Z}_{\geq 0}\). Then_ \[g\Big{(}1,a,b;\sum_{j=0}^{s}\left\lceil\frac{jb}{a}\right\rceil\Big{)}=sb-1\] Proof.: Recall that, in Section 3, we show that the equation (4) holds, that is \[g\Big{(}a_{1},\frac{a_{2}}{d_{1}},\frac{a_{3}}{d_{1}};\sum_{j=0}^{s}\left\lceil \frac{ja_{2}a_{3}}{a_{1}d_{1}^{2}}\right\rceil\Big{)}=g\Big{(}\frac{a_{2}}{d_ {1}},\frac{a_{3}}{d_{1}};s\Big{)}. \tag{10}\] At the beginning of the proof of Theorem 1, we assumed that \(a_{1}\equiv 0\pmod{\frac{a_{2}}{d_{1}}}\). If \(d_{1}=1\), then \(a_{1}=ka_{2}\) for some \(k\in\mathbb{Z}_{>0}\). Then, by applying Lemma 4 on the left-hand side of (10), we obtain that \[g\Big{(}ka_{2},a_{2},a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{3} }{k}\right\rceil\Big{)} =a_{2}g\Big{(}k,1,a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{3}}{k} \right\rceil\Big{)}+a_{3}(a_{2}-1)\] \[=a_{2}g\Big{(}1,k,a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{3}}{k} \right\rceil\Big{)}+a_{2}a_{3}-a_{3}.\] Applying (2) to the right-hand side of (10), we have \[g(a_{2},a_{3};s)=(s+1)a_{2}a_{3}-a_{2}-a_{3}.\] Therefore, \[a_{2}g\Big{(}1,k,a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{3}}{k}\right\rceil \Big{)}+a_{2}a_{3}-a_{3}=(s+1)a_{2}a_{3}-a_{2}-a_{3}.\] Hence \[g\Big{(}1,k,a_{3};\sum_{j=0}^{s}\left\lceil\frac{ja_{3}}{k}\right\rceil\Big{)} =sa_{3}-1.\] Since \(k\) and \(a_{3}\) are arbitrary, thus for \(a,b\in\mathbb{Z}_{>0}\) \[g\Big{(}1,a,b;\sum_{j=0}^{s}\left\lceil\frac{jb}{a}\right\rceil\Big{)}=sb-1,\] as desired. **Example 12**.: Let \((a,b)=(4,9)\). Compute \(g\big{(}1,4,9;\sum_{j=0}^{s}\left\lceil\frac{9j}{4}\right\rceil\big{)}\) by using Corollary 11 for \(0\leq s\leq 13\). 
The results are shown as follows: \begin{tabular}{|c|c c c c c c c c c c c|} \hline s & 0 & 1 & 2 & 3 & 4 & 5 & \(\dots\) & 100 & \(\dots\) & \(10^{4}\) & \(\dots\) \\ \hline \(\Sigma:=\sum_{j=0}^{s}\left\lceil\frac{9j}{4}\right\rceil\) & 0 & 3 & 8 & 15 & 24 & 36 & \(\dots\) & 11400 & \(\dots\) & 112515000 & \(\dots\) \\ \hline \(g(1,4,9;\Sigma)\) & -1 & 8 & 17 & 26 & 35 & 44 & \(\dots\) & 899 & \(\dots\) & 89999 & \(\dots\) \\ \hline \end{tabular} Similarly, if \((a,b)=(9,4)\), by using Corollary 11, the values of \(g\big{(}1,9,4;\sum_{j=0}^{s}\left\lceil\frac{4j}{9}\right\rceil\big{)}\) are as follows: \begin{tabular}{|c|c c c c c c c c c c c c|} \hline s & 0 & 1 & 2 & 3 & 4 & 5 & \(\dots\) & 100 & \(\dots\) & \(10^{4}\) & \(\dots\) \\ \hline \(\Sigma:=\sum_{j=0}^{s}\left\lceil\frac{4j}{9}\right\rceil\) & 0 & 1 & 2 & 4 & 6 & 9 & \(\dots\) & 2289 & \(\dots\) & 22228889 & \(\dots\) \\ \hline \(g(1,9,4;\Sigma)\) & -1 & 3 & 7 & 11 & 15 & 19 & \(\dots\) & 399 & \(\dots\) & 39999 & \(\dots\) \\ \hline \end{tabular}
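Values such as those in the tables above can be verified with a short brute-force computation (a sketch; the helper names are ours, and the search bound `n_max` only has to be taken safely above the expected answer):

```python
def rep_counts(A, n_max):
    # d[n] = number of representations of n as a non-negative integer
    # combination of the elements of A
    d = [0] * (n_max + 1)
    d[0] = 1
    for a in A:
        for n in range(a, n_max + 1):
            d[n] += d[n - a]
    return d

def gen_frobenius(A, s, n_max=3000):
    # largest n <= n_max with at most s representations (-1 if there is none)
    d = rep_counts(A, n_max)
    return max((n for n in range(n_max + 1) if d[n] <= s), default=-1)

assert gen_frobenius((1, 4, 9), 0) == -1
assert gen_frobenius((1, 4, 9), 3) == 8 and gen_frobenius((1, 4, 9), 8) == 17
print(gen_frobenius((10, 15, 21), 4))   # reproduces 194 from Example 9 (s = 1)
```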
2309.10418
Graph Neural Networks for Dynamic Modeling of Roller Bearing
In the presented work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability, having the potential for scalable use in real-time operational digital twin systems for monitoring the health state of rotating machines. By representing the bearing's components as nodes in a graph, the GNN can effectively model the complex relationships and interactions among them. We utilize a dynamic spring-mass-damper model of a bearing to generate the training data for the GNN. In this model, discrete masses represent bearing components such as rolling elements, inner raceways, and outer raceways, while a Hertzian contact model is employed to calculate the forces between these components. We evaluate the learning and generalization capabilities of the proposed GNN framework by testing different bearing configurations that deviate from the training configurations. Through this approach, we demonstrate the effectiveness of the GNN-based method in accurately predicting the dynamics of rolling element bearings, highlighting its potential for real-time health monitoring of rotating machinery.
Vinay Sharma, Jens Ravesloot, Cees Taal, Olga Fink
2023-09-19T08:30:10Z
http://arxiv.org/abs/2309.10418v1
# Graph Neural Networks for Dynamic Modeling of Roller Bearing ###### Abstract In the presented work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability, having the potential for scalable use in real-time operational digital twin systems for monitoring the health state of rotating machines. By representing the bearing's components as nodes in a graph, the GNN can effectively model the complex relationships and interactions among them. We utilize a dynamic spring-mass-damper model of a bearing to generate the training data for the GNN. In this model, discrete masses represent bearing components such as rolling elements, inner raceways, and outer raceways, while a Hertzian contact model is employed to calculate the forces between these components. We evaluate the learning and generalization capabilities of the proposed GNN framework by testing different bearing configurations that deviate from the training configurations. Through this approach, we demonstrate the effectiveness of the GNN-based method in accurately predicting the dynamics of rolling element bearings, highlighting its potential for real-time health monitoring of rotating machinery. GNN Bearings Dynamic Model ## 1 Introduction Real-time condition monitoring is essential for realizing real-time operational digital twins of complex systems such as rotating equipment. Digital twins enable real-time fault diagnosis and prognosis, mitigating the risk of catastrophic system failures and reducing maintenance costs through early intervention in case of faults. However, purely data-driven methods often struggle to capture the underlying dynamics and generalize to operating conditions not included in the training datasets. Consequently, they fall short of accurately predicting the long-term evolution of physical system states. To address these challenges, physics-informed neural networks (PINNs) have emerged as potential solution. PINNs integrate the partial differential equation (PDE) of the underlying system into the loss function, thereby regularizing the solution learned by the neural network. These methods have demonstrated significant success in various mechanics problems, including stress prediction in homogeneous elastic plates [1], composites [2], and heterogeneous materials [3]. In the context of multibody dynamical systems (MBD), the PINN loss can be formulated based on the Lagrangian of the system [4], Hamiltonian [5], or conservation of energy [6]. However, applying PINNs to systems with a large number of components presents challenges. It requires explicit derivation of either the PDE or analytical expressions for the conserved quantities, which can be cumbersome for complex multi-component systems. Additionally, enforcing boundary conditions becomes challenging, particularly in multi-component systems where boundaries dynamically form due to contact between different components [7]. Therefore, to handle a large number of interacting components, a network with an encoded inductive bias in its architecture is necessary. Graph neural networks (GNNs) [8; 9; 10] provide a promising solution to these challenges by representing input components as nodes in a graph and modeling interactions between them as messages passed over the edges of the graph. Due to their encoded inductive bias, they generalize well to systems with varying configurations and boundary conditions. 
Many physical systems consist of components that interact with one another, which makes the graph structure of GNNs a natural fit. In message-passing GNNs (MP-GNNs), the topological structure of a multi-component system can be represented as a graph where nodes represent the state of different components and edges between the nodes represent the interactions between those components. The pairwise interactions are then modeled as messages passed over the edges. MP-GNNs comprise two networks: (i) an edge network that takes edge features between two nodes (e.g., the distance vector) and generates a message, and (ii) a node network that takes the aggregated messages from all the neighboring nodes and produces a new node state. This process is repeated several times until the final node state is decoded as a target output. Depending on the task, the target of the graph neural network can be, for instance, the predicted acceleration of a node. These models have successfully been applied to various dynamics prediction tasks. They have shown success in simple systems such as particle and spring-mass systems [11], as well as in more complex scenarios like three-dimensional skeleton-based human motion prediction [12]. Expanding upon previous research on simple mass-spring systems, our study delves into the specificities of modelling bearing dynamics. Accurate and fast modeling of bearing dynamics is vital for timely fault detection and failure prediction in rotating equipment. Building upon the efficacy of GNNs in capturing complex relationships and dynamics, our aim is to develop a graph-neural-network-based simulator that can accurately capture the complex interactions between different components in a bearing. Compared with Finite Element Analysis (FEA) or parameter-calibrated lumped parameter models, a GNN-based bearing model offers specific advantages: a significant reduction in computational complexity [10], with the dynamics learned solely from measurements without requiring knowledge of stiffness, mass, and damping factors. Moreover, in contrast to pure data-driven methods, this method is interpretable (allowing for the inference of physical quantities not explicitly trained for), generalizable (enabling extrapolation to unseen conditions such as new shaft loads), and flexible (allowing the construction of new graphs, such as by changing the number of rolling elements in the bearing). In this work, we present the first proof of concept for the application of GNNs in modeling bearing dynamics. To achieve this, we train a GNN on a simple 2D dynamic bearing model, demonstrating its interpretability, generalizability, and flexibility. In future work, we anticipate that this concept can be extended to real sensor data and advanced FEA simulations, such as those involving complex elasto-hydrodynamic forces. The proposed model incorporates specific node features, edge features, and graph connections that are essential for modelling a bearing as a graph. We further introduce the use of Message-Passing Graph Neural Networks (MP-GNNs) as proposed in [9] to predict the evolution of these dynamics. However, in contrast to [9], we propose modifications to the GNN architecture to decode roller loads from the edges and capture the dynamics of the raceways and rolling elements from the nodes. This paper is organized as follows: In Section 2, we first introduce a 2D dynamic bearing model. 
This analytical model serves as the generator of simulation trajectories on which we train the graph-based bearing model described in Section 3. In Section 4, we describe the bearing configurations and operating conditions used to generate the simulation data for training and testing the model. In Section 5, we present the results of our experiments and evaluate the performance of the graph-based bearing model. Finally, in Section 6, we summarize the findings of the study, discuss their implications, and suggest future extensions of the proposed research. ## 2 Dynamic bearing model In this work, a 2D dynamic bearing model is used to simulate the behavior of a cylindrical roller bearing (CRB) for training and validation of the GNN. The chosen physics-based bearing model captures the essential components, including the inner and outer rings as well as multiple rolling elements [13], as depicted in Figure 1. Nonlinear contact models, based on the work of [14] and [15], are utilized to model the contacts between the rings and rolling elements. To establish the mechanical connections, the inner and outer rings are connected to the ground through springs and dampers. It is important to note that the bearing in this work is considered stationary and all bodies are restricted to horizontal and vertical movements only. The forces acting on each body are calculated as a function of their velocities and positions in space. Using the masses of the inner and outer ring, their respective accelerations can be computed, and the Runge-Kutta method (RK4) is used to numerically integrate these to their updated positions and velocities. The rolling elements are assumed to have negligible mass and their positions are determined as a function of the inner and outer ring positions. To introduce external stimuli to the system, a time-varying vertical force is applied to the outer ring as an input. The internal loads, positions, and velocities of all components serve as outputs and are used to train the GNN. Figure 1 shows a schematic of the dynamic bearing model. ## 3 A graph-based model of bearings The bearing model can be effectively represented as a graph. Figure 3 depicts a graph representation \(\mathcal{G}=(\nu,\varepsilon)\) of the 2D dynamical model discussed in Section 2. The graph representation captures the essential components of the model, including the inner ring, outer ring, and rolling elements, as nodes \(\nu\) in the graph. The interactions between the rolling elements and the rings, characterized by non-linear contacts, are represented by edges \(\varepsilon\) in the graph. Figure 1: Schematic overview of the 2D bearing model used to generate signals for the GNN. Figure 2: The inputs and outputs of the bearing model. Figure 3: Graph representation of the 2D dynamic model: The node features include the position \(\vec{x}_{i}\) and velocity \(\vec{v}_{i}\) of the centers of components (set to zero for rollers), external force \(\vec{F}_{ext}\), and node type. The edge features include the relative distance \(\vec{dx_{ij}}\) and its magnitude between the rollers and the circumferences of the inner and outer rings. In the following sections, we will elaborate on the GNN model, providing more details about the node and edge features, as well as the learning process. ### Node and edge features **Node Features**: The nodes in the graph represent the inner ring, outer ring, and rollers of the bearing system. 
Each node is characterized by a set of features denoted as \(\nu_{i}=\vec{x_{i}},\vec{v_{i}},\vec{F}_{ext},\text{type}\). For the nodes representing the inner and outer rings, the feature \(\vec{x_{i}}\) corresponds to the position of their respective centers and their velocity is captured by the feature \(\vec{v_{i}}\), measured in millimeters per second. For the nodes representing the rollers, the features \(\vec{x_{i}}\) and \(\vec{v_{i}}\) are set to zero. This choice reflects the assumption that the dynamics of rollers is purely governed by pair-wise interactions with the rings through relative positional features, which are encoded in the edges connecting the nodes. In terms of external forces, the node representing the outer ring has a non-zero value for the feature \(\vec{F}_{ext}\), which accounts for the externally applied vertical radial force on the outer ring. Whereas, the inner ring and rollers have a value of zero for the external force feature \(\vec{F}_{ext}\), indicating that no external forces are applied to them. To distinguish between the different components, the node type is encoded as a categorical variable. This allows for differentiation between the three types of components: the inner ring, outer ring, and rollers within the graph representation of the bearing system. **Edge Features**: The rollers are connected to the inner ring and outer ring nodes through bidirectional edges \(\varepsilon_{ij}=d\vec{x_{ij}},||d\vec{x_{ij}}||\). These edges capture the 2D distance vector \(\vec{dx_{ij}}\) between the roller center and the circumference of the inner or outer ring, along with its scalar magnitude \(||d\vec{x_{ij}}||\). The choice of using the distance vector and its magnitude as edge features is motivated by the assumption that the non-linear contact between the rollers and the rings can be modeled by non-linear springs. In this model, the forces depend solely on the elongation or compression of the springs, which is captured by the relative distance vector between the roller center and the ring circumference. To calculate the 2D distance vector, the points on the circumferences of the inner and outer rings are determined, taking into account that the center of the roller is positioned midway between them. This approach ensures an accurate representation of the spatial relationship between the rollers and the rings, enabling the modeling of the contact forces and interactions within the bearing system. It is worth highlighting that our approach distinguishes itself from the previous applications of GNNs in predicting dynamics of spring-mass or particle systems [9; 10] by incorporating absolute position and velocity features on the nodes representing the inner and outer rings. While earlier applications primarily focused on capturing pair-wise interactions that are independent of position, our objective in modeling bearings expands to encompass the dynamics of both the inner and outer rings. This consideration takes into account their interactions with the ground through springs and dampers as shown in Figure 1. ### Model To predict the dynamics of the bearing, we utilize an encode-process-decode architecture, employing a message-passing graph neural network framework. The schematic of the model described in this section is illustrated in Figure 4. 
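As a rough illustration of the node and edge features described above, the sketch below assembles one snapshot of the bearing into arrays (plain NumPy; the array layout, the helper name `build_bearing_graph`, and the toy geometry are our own assumptions rather than the authors' implementation):

```python
import numpy as np

def build_bearing_graph(x_ir, v_ir, x_or, v_or, roller_pos, F_ext, r_ir, r_or):
    """Assemble node and edge features for one snapshot of the bearing.
    Nodes: 0 = inner ring, 1 = outer ring, 2.. = rollers.
    Node features: [x, y, vx, vy, Fx_ext, Fy_ext, one-hot type (3)].
    Edge features: 2D distance vector from a roller centre to the ring circumference,
    plus its magnitude, with bidirectional roller-ring edges (Section 3.1)."""
    n_rollers = len(roller_pos)
    nodes = np.zeros((2 + n_rollers, 9))
    nodes[0, :4] = np.concatenate([x_ir, v_ir]); nodes[0, 6] = 1.0       # inner ring
    nodes[1, :4] = np.concatenate([x_or, v_or]); nodes[1, 4:6] = F_ext   # outer ring carries F_ext
    nodes[1, 7] = 1.0
    nodes[2:, 8] = 1.0                                                   # rollers: pos/vel left at zero

    senders, receivers, edge_feats = [], [], []
    for k, p in enumerate(roller_pos):
        for ring_idx, (centre, radius) in enumerate([(x_ir, r_ir), (x_or, r_or)]):
            u = (p - centre) / np.linalg.norm(p - centre)
            d = (centre + radius * u) - p                                # roller centre -> ring circumference
            for s, r in [(2 + k, ring_idx), (ring_idx, 2 + k)]:          # bidirectional edge
                senders.append(s); receivers.append(r)
                edge_feats.append(np.concatenate([d, [np.linalg.norm(d)]]))
    return nodes, np.array(senders), np.array(receivers), np.array(edge_feats)

# Toy usage: raceway radii roughly consistent with a 65.5 mm pitch diameter and
# 11 mm rollers; only two rollers, purely for illustration.
nodes, snd, rcv, edges = build_bearing_graph(
    x_ir=np.zeros(2), v_ir=np.zeros(2), x_or=np.zeros(2), v_or=np.zeros(2),
    roller_pos=[np.array([32.75, 0.0]), np.array([0.0, 32.75])],
    F_ext=np.array([0.0, -9000.0]), r_ir=27.25, r_or=38.25)
```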
**Encode**: The encoder takes a graph \(\mathcal{G}=(\nu,\varepsilon)\) and uses separate Multi-Layer Perceptrons (MLPs) \(f^{\varepsilon}_{enc}\) and \(f^{\nu}_{enc}\) to encode the edge and node features into latent vectors \(E_{ij}\) and \(V_{i}\), respectively, each of size 64. The encoded graph is represented as \(\mathcal{G}_{0}=(V,E)\). **Process**: The processor consists of multiple blocks with unshared weights, where each block performs sequential message passing over the input graph and produces transformed node and edge latent vectors. Residual connections are employed between the input and output edge/node latent vectors to facilitate information flow. The initial block takes the encoded graph as input, and subsequent blocks take the output of the previous block. Within each block, MLPs \(f^{E}\) and \(f^{V}\) are used to transform the latent edge vectors \(E_{ij}\) and the latent node vectors \(V_{i}\), respectively. The edge transformation is described as follows: \(E^{\prime}_{ij}\leftarrow f^{E}(E_{ij},V_{i},V_{j})\). Here, \(V_{i}\) and \(V_{j}\) denote the latent vectors of the sender and receiver nodes, respectively, whereas \(E_{ij}\) represents the latent vector of the connecting edge. The node transformation is described as follows: \(V^{\prime}_{i}\leftarrow f^{V}(V_{i},\sum_{j}E^{\prime}_{ij})\). At each node, the transformed latent vectors of incoming edges are aggregated using a permutation-invariant summation function. The resulting sum, along with the node latent vector, is concatenated and fed into the MLP \(f^{V}\). This MLP processes the input and generates the transformed node latent vector, incorporating the information from the aggregated edge vectors. In Figure 4, the transformed graph after the first message passing step is denoted as \(\mathcal{G}_{1}=(V^{\prime},E^{\prime})\). The next processor block takes \(\mathcal{G}_{1}\) as input, performs similar transformations with separate MLPs \(g^{E}\) for edges and \(g^{V}\) for nodes, resulting in the transformed graph \(\mathcal{G}_{2}\). This process continues for \(M\) message passing blocks, and the output after \(M\) blocks, denoted as \(\mathcal{G}_{M}\), serves as input to the decoder. **Decoder**: The decoder comprises an edge decoder MLP \(f^{E}_{dec}\) and a node decoder MLP \(f^{N}_{dec}\). This is different from previous applications of GNNs in dynamics prediction tasks [9; 10], where only the node dynamics are decoded from the node latent vectors using a decoder MLP. **Edge Decoder**: The edge decoder MLP takes the latent vector at each edge as input and predicts a 2D contact force for each edge: \(F_{edge}\leftarrow f^{E}_{dec}(E^{\prime}_{M})\). **Node Decoder**: The node decoder MLP takes the latent vector at each node as input and predicts the net 2D force on each node: \(F_{node}\leftarrow f^{N}_{dec}(V^{\prime}_{M})\). ## 4 Case study In this study, we utilized the dynamic bearing model described in Section 2 to simulate trajectories of four bearings with different numbers of rolling elements (13, 14, 15, and 16). These bearings were modeled after the SKF N209 ECP cylindrical roller bearing, which has a pitch diameter of 65.5 mm and a roller diameter of 11 mm. The length of the rollers is 12 mm. Additionally, a horizontal and a vertical spring with a stiffness of 5e6 N/m connect the inner ring to the ground. Dampers with damping coefficients of 5e4 Ns/m and 1e4 Ns/m are used to dampen the inner and outer rings, respectively. 
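As an aside on the architecture just described, here is a compact sketch of the encode-process-decode pass (a minimal NumPy stand-in with untrained random weights and our own helper names; it only illustrates the data flow — encoding, \(M\) message-passing blocks with residual connections and summed messages, and separate edge and node decoders — not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Return a forward function for a small ReLU MLP with random, untrained weights."""
    Ws = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.maximum(x @ W, 0.0)
        return x @ Ws[-1]
    return forward

def gnn_forward(nodes, senders, receivers, edges, latent=64, M=5):
    """One encode-process-decode pass in the spirit of Section 3.2 (data flow only)."""
    enc_v = mlp([nodes.shape[1], latent, latent])
    enc_e = mlp([edges.shape[1], latent, latent])
    V, E = enc_v(nodes), enc_e(edges)                      # encode node and edge features
    for _ in range(M):                                     # M message-passing blocks, unshared weights
        f_E = mlp([3 * latent, latent, latent])            # edge update f^E(E_ij, V_i, V_j)
        f_V = mlp([2 * latent, latent, latent])            # node update f^V(V_i, sum_j E'_ij)
        E_new = f_E(np.concatenate([E, V[senders], V[receivers]], axis=1))
        agg = np.zeros_like(V)
        np.add.at(agg, receivers, E_new)                   # permutation-invariant sum of incoming messages
        V_new = f_V(np.concatenate([V, agg], axis=1))
        V, E = V + V_new, E + E_new                        # residual connections
    dec_e = mlp([latent, latent, 2])                       # edge decoder -> 2D contact force per edge
    dec_v = mlp([latent, latent, 2])                       # node decoder -> net 2D force per node
    return dec_e(E), dec_v(V)
```

With arrays produced by a graph-construction step like the one sketched earlier, `gnn_forward(nodes, snd, rcv, edges)` would return one force prediction per edge and per node.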
The simulations were conducted under zero rpm conditions, with an initial external load applied to the outer ring. The initial external loads ranged from 5000 N to 23000 N, in increments of 2000 N. During each trajectory, the external load was instantaneously applied at the 0th time step. All initial conditions of the bearing are set to zero, so this results in a step response of the system. The external load was doubled at the 2500th time step and subsequently reduced back to the initial load at the 5000th time step. An example of the variation in external load over time is depicted in Figure 5. **Training Data**: The GNN was trained using the trajectories of bearings equipped with 13, 14, and 16 rolling elements. At each time step, the positions and velocities of each component (inner ring, outer ring, and rolling elements) were used to construct the graph representation \(\mathcal{G}_{t}\) of the bearing system. The roller loads were used as the ground truth for the decoded edges, while the total forces acting on each component were used as the ground truth for the decoded nodes. The objective of the GNN was to predict the roller loads and net forces on each component at each time step. **Testing Data**: The model was evaluated on the bearing with 15 rollers and an initial load of 13000 N. The applied load during this validation case is shown in Figure 5. We tested its ability to predict the roller loads and net forces on the components given the state vector of each component at a specific time \(t\), under different external loads applied to the bearing. Figure 4: Encode-Process-Decode architecture with message passing GNN: The Encoder transforms the graph \(\mathcal{G}\) into \(\mathcal{G}_{0}\). Processor 1 takes \(\mathcal{G}_{0}\) as input and transforms it into \(\mathcal{G}_{1}\) after a single message passing step. Subsequent Processors sequentially transform \(\mathcal{G}_{1}\). Finally, the Decoder decodes both the latent nodes and edges of the graph \(\mathcal{G}_{M}\). ## 5 Results In this section, we evaluate the performance of the trained GNN on the test data. We compare the GNN's predictions with those generated by the 2D dynamic model, focusing on a single time step without performing roll-outs. The evaluation covers both the loaded roller (top of the bearing) and the non-loaded roller (bottom of the bearing), as illustrated in Figure 6. Figure 5: Applied external load on the outer ring. Figure 6: Position of top and bottom rolling elements. Figure 7: Prediction of loading and unloading of the bottom roller with inner-ring displacement. Figure 8 illustrates the predicted loads for the loaded roller, number 8, and compares them to the simulated data. It is worth noting the presence of oscillatory dynamics resulting from the sudden application of an external load at time steps 0 and 2500, as depicted in Figure 5. These dynamics arise due to the connection of the inner and outer rings to the ground through dampers. The proposed GNN demonstrates its capability to accurately predict loads even in dynamic regimes of the bearing for the loaded rollers. Moreover, the GNN's performance shows a significant improvement once the bearing reaches a steady state. This improvement is further supported by the percentage error (\(\frac{\text{prediction}-\text{ground truth}}{\text{ground truth}}\times 100\%\)) for the loaded rollers, depicted in Figure 9 for the first 50 time steps. Figure 10 illustrates the predicted load for the unloaded roller. 
It can be observed that the GNN predicts still small loads for the unloaded rollers even though the ground truth value is zero. While in the first plot in the figure, higher errors in predictions are observed until 50th-time steps, the performance improves once the initial oscillatory dynamics subside. In the second plot, the same observations can be made, however, the magnitude of errors in the initial dynamics phase is lower. Figures 11 and 12 present a comparison between the predicted forces on the inner ring and outer ring, respectively, and the ground truth at different time-step ranges. It is evident that when a sudden load is applied at the 0th and 2500th-time steps, both the inner and outer rings experience high dynamical forces. The figures indicate that the GNN predicts a small constant force during these instances, which suggests a limitation in accurately capturing short-term dynamical forces. However, as the rings return to a stable dynamics regime, the GNN demonstrates accurate force predictions **Verification of the learned underlying physics**: To verify whether the GNN has learned the correct underlying physics, an artificial trajectory of a bearing with 15 rollers was generated. The experimental setup involved fixing the center of the outer ring at the origin and providing displacement to the inner ring along the y-direction. Initially, the inner ring was centered at the origin and then displaced vertically within the range of -0.05 mm to +0.05 mm. This is equal to a compression of the roller which is located at the bottom-dead-center in the range of -0.05 mm to +0.05 mm. Figure 8: Comparison of predictions of roller loads by the GNN for the loaded roller \(\#8\) (shown in Figure 6) with the results obtained from the dynamic bearing simulator (ground truth). Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750 Figure 9: Percentage Error in roller load predictions for loaded roller \(\#8\). Error at time step 1 is around 70%. Figure 11: Comparison of predicted force on the inner-ring by GNN with the dynamic bearing simulator (ground truth) for single time-step predictions. Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750 Figure 12: Comparison of predicted force on the outer-ring by GNN with the dynamic bearing simulator (ground truth) for single time-step predictions. Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750 Figure 10: Comparison of predictions of roller loads by the GNN for the non-loaded roller \(\#\)0 (Figure 6) with the results obtained from the dynamic bearing simulator (ground truth). Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750 The generation of the artificial trajectory involved computing the initial positions of each rolling element based on the known radius of the inner and outer rings. We made the assumption that the rolling elements were positioned midway between the circumferences of the inner and outer rings and uniformly distributed along the 360-degree rotation of the bearing. Figure 7 depicts the predicted and true loads as a function of roller deformation for the bottom dead center roller in the bearing (see Figure 6). When the inner ring is displaced in the negative y-direction, the bottom-dead-center roller experiences compression, resulting in positive loads. Conversely, displacement of the inner ring in the positive y-direction leads to the unloading of the roller, causing it to experience zero loads. 
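The displacement sweep described above can be reproduced in a few lines; the load-deflection law below is a generic line-contact relation with an arbitrary stiffness constant, i.e., our own illustrative stand-in rather than the paper's calibrated Hertzian contact model:

```python
import numpy as np

def bottom_roller_compression(dy_inner, clearance=0.0):
    """Compression (mm) of the bottom-dead-centre roller when the inner ring is shifted
    by dy_inner (mm) along y while the outer ring is held fixed at the origin.
    A negative dy_inner pushes the inner ring towards the bottom roller."""
    delta = -dy_inner - clearance
    return np.maximum(delta, 0.0)            # an unloaded roller has zero compression

def illustrative_line_contact_load(delta_mm, k=1.0e5, exponent=10.0 / 9.0):
    """Generic load-deflection relation F = k * delta^(10/9) for a roller (line) contact;
    k and the exponent are textbook-style placeholder values, not the paper's model."""
    return k * delta_mm ** exponent

dy = np.linspace(-0.05, 0.05, 11)            # inner-ring displacement sweep used for the check
delta = bottom_roller_compression(dy)
print(np.round(illustrative_line_contact_load(delta), 1))
```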
The GNN successfully predicts the increase in load for the roller up to a displacement of 0.02 mm of the inner ring in the positive and negative y-direction respectively. Moreover, it is particularly noteworthy that the GNN accurately captures the unloading phenomenon, faithfully reproducing the non-linear loading graph. It can be also noted that the largest deviation between the GNN's prediction and the ground truth is only 15 percent. These findings demonstrate the GNN's capability to understand and reproduce the expected load changes in response to inner-ring displacements for different rollers within the bearing system. This indicates that the GNN has indeed learned the correct underlying physics of the bearing system, as it accurately predicts the expected behavior of the rollers under varying inner-ring displacements. ## 6 Conclusions This study demonstrates the successful application of a graph neural network framework for predicting the dynamics of bearings. By representing the bearing as a graph and utilizing a message-passing graph neural network, we accurately predict loads at individual time steps based on external load and ring positions/velocities. Our study demonstrates the ability to infer dynamics from trajectory measurements without explicit stiffness, mass, and damping information. In contrast to pure data-driven methods, our approach offers interpretability, generalizability to new conditions such as external load, and flexibility to adapt to varying bearing configurations. This proof-of-concept study paves the way for future research, wherein roll-out trajectories can be generated from initial conditions. To enhance accuracy, our future research aims to include dampers and springs that connect the rings to the ground. This extension will help address the significant errors in force predictions on inner and outer rings during the oscillatory dynamics regime that occurs during sudden loading. Additionally, future work will consider including bearing rotation as an important parameter. While our GNN is currently trained on Hertzian contact, it has the potential to capture intricate Elasto-hydrodynamic forces with measured data or FEA simulations, supported by the universal approximation theorem. This study highlights the potential of graph neural networks in modeling bearing dynamics and opens up new possibilities for advancing bearing diagnostics, prognostics, and the development of real-time operational digital twins for monitoring the health of rotating machinery.
2302.14518
Generalization Error Bounds for Noisy, Iterative Algorithms via Maximal Leakage
We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze, and it implies both bounds on the probability of having a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in $L_2$-norm and the additive noise is isotropic Gaussian noise, then one can obtain an upper-bound on maximal leakage in semi-closed form. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for other scenarios of interest.
Ibrahim Issa, Amedeo Roberto Esposito, Michael Gastpar
2023-02-28T12:13:57Z
http://arxiv.org/abs/2302.14518v2
# Asymptotically Optimal Generalization Error Bounds for Noisy, Iterative Algorithms ###### Abstract We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze, and it implies both bounds on the probability of having a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in \(L_{2}\)-norm, then adding isotropic Gaussian noise leads to optimal generalization bounds: indeed, the input and output of the learning algorithm in this case are asymptotically statistically independent. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for several scenarios of interest. N osiye iterative algorithms, generalization error, maximal leakage, Gaussian noise ## 1 Introduction One of the key challenges in machine learning research concerns the "generalization" behavior of learning algorithms. That is: if a learning algorithm performs well on the training set, what guarantees can one provide on its performance on new samples? While the question of generalization is understood in many settings ((Bousquet et al., 2003; Shalev-Shwartz and Ben-David., 2014)), existing bounds and techniques provide vacuous expressions when employed to show the generalization capabilities of deep neural networks (DNNs) ((Bartlett et al., 2017, 2019; Jiang et al., 2020; Zhang et al., 2021)). In general, classical measures of model expressivity (such as Vapnik-Chervonenkis (VC) dimension ((Vapnik and Chervonenkis, 1991)), Rademacher complexity ((Bartlett and Mendelson, 2003)), etc.), fail to explain the generalization abilities of DNNs due to the fact that they are typically over-parameterized models with less training data than model parameters. A novel approach was introduced by (Russo and Zou, 2016), and (Xu and Raginsky, 2017) (further developed by (Steinke and Zakynthinou, 2020; Bu et al., 2020; Esposito et al., 2021; Esposito and Gastpar, 2022) and many others), where information-theoretic techniques are used to link the generalization capabilities of a learning algorithm to information measures. These quantities are algorithm-dependent and can be used to analyze the generalization capabilities of general classes of updates and models, _e.g._, noisy iterative algorithms like Stochastic Gradient Langevin Dynamics (SGLD) ((Pensia et al., 2018; Wang et al., 2021)), which can thus be applied to deep learning. Moreover, it has been shown that information-theoretic bounds can be non-vacuous and reflect the real generalization behavior even in deep learning settings ((Dziugaite and Roy, 2017; Zhou et al., 2018; Negrea et al., 2019; Haghifam et al., 2020)). In this work we adopt and expand the framework introduced by (Pensia et al., 2018), but instead of focusing on the mutual information between the input and output of an iterative algorithm, we compute the maximal leakage ((Issa et al., 2020)). 
Maximal leakage, together with other information measures of the Sibson/Renyi family (maximal leakage can be shown to be Sibson Mutual information of order infinity ((Issa et al., 2020))), have been linked to high-probability bounds on the generalization error ((Esposito et al., 2021)). In particular, given a learning algorithm \(\mathcal{A}\) trained on data-set \(S\) (made of \(n\) samples) one can provide the following guarantee in case of the \(0-1\) loss: \[\mathbb{P}(|\text{gen-err}(\mathcal{A},S)|\geq\eta)\leq 2\exp(-2n\eta^{2}+ \mathcal{L}\left(S{\rightarrow}\mathcal{A}(S)\right)). \tag{1}\] This deviates from much of the literature in which the focus is on bounding the **expected** generalization error instead ((Xu and Raginsky, 2017; Steinke and Zakynthinou, 2020)). Consequently, if one can guarantee that for a class of algorithms, the maximal leakage between the input and the output is bounded, then one can provide an **exponentially decaying** (in the number of samples \(n\)) bound on the probability of having a large generalization error. This is in general not true for mutual information, which can typically only guarantee a linearly decaying bound on the probability of the same event ((Bassily et al., 2018)). Moreover, a bound on maximal leakage implies a bound on mutual information (cf. Equation (6)) and, consequently, a bound on the expected generalization error of \(\mathcal{A}\). The main advantage of maximal leakage lies in the fact that it depends on the distribution of the samples only through its support. It is thus naturally independent from the distribution over the samples and particularly amenable to analysis, especially in additive noise settings. The contributions of this work can be summarized as follows: * we derive novel bounds on \(\mathcal{L}\left(S{\rightarrow}\mathcal{A}(S)\right)\) whenever \(\mathcal{A}\) is a noisy, iterative algorithm (SGLD-like), which then implies generalization with high-probability; * we show that the bounds provided on maximal leakage strictly improve the bounds provided by (Pensia et al., 2018), and we thus provide a tighter bound on the expected generalization error of said algorithms as well; * larger neural networks generalize better; * we leverage the analysis to extrapolate the optimal type of noise to be added (in the sense that it minimizes the induced maximal leakage), based on the assumptions imposed on the algorithm. In particular, * if one assumes the \(L_{p}\) norm of the gradient to be bounded, with \(p\leq 2\), our analysis shows that adding Gaussian noise is asymptotically optimal; * if one assumes the \(L_{\infty}\) norm of the gradient to be bounded, then adding uniform noise is optimal; Hence, the analysis and computation of maximal leakage can be used to inform the design of novel noisy, iterative algorithms. ### Related Work The line of work exploiting information measures to bound the expected generalization started in (Russo and Zou, 2016; Xu and Raginsky, 2017) and was then refined with a variety of approaches considering Conditional Mutual Information (Steinke and Zakynthinou, 2020; Haghifam et al., 2020), the Mutual Information between individual samples and the hypothesis (Bu et al., 2019) or improved versions of the original bounds (Issa et al., 2019; Hafez-Kolahi et al., 2020). Other approaches employed the Kullback-Leibler Divergence with a PAC-Bayesian approach (McAllester, 2013; Zhou et al., 2018). 
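As a quick numerical illustration of the guarantee in equation (1) above (with toy numbers of our own choosing), the bound only becomes meaningful once the leakage is small compared with \(2n\eta^{2}\):

```python
import numpy as np

def gen_error_tail_bound(n, eta, leakage_nats):
    """Right-hand side of equation (1): 2 * exp(-2*n*eta^2 + L(S -> A(S))), capped at 1."""
    return min(1.0, 2.0 * np.exp(-2.0 * n * eta**2 + leakage_nats))

# e.g. n = 10,000 samples and eta = 0.05: the bound is non-trivial only when the
# leakage stays well below 2*n*eta^2 = 50 nats.
for leakage in [0.0, 10.0, 40.0, 60.0]:
    print(leakage, gen_error_tail_bound(10_000, 0.05, leakage))
```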
Moreover, said bounds were then characterized for specific SGLD-like algorithms, denoted as "noisy, iterative algorithms" and used to provide novel, non-vacuous bounds for Neural Networks (Pensia et al., 2018; Negrea et al., 2019; Haghifam et al., 2020; Wang et al., 2023) as well as for SGD algorithms (Neu et al., 2021). Recent efforts tried to provide the optimal type of noise to add in said algorithms and reduce the (empirical) gap in performance between SGLD and SGD (Wang et al., 2021). All of these approaches considered the KL-Divergence or (variants of) Shannon's Mutual Information. General bounds on the expected generalization error leveraging arbitrary divergences were given in (Esposito and Gastpar, 2022; Lugosi and Neu, 2022). Another line of work considered instead bounds on the probability of having a large generalization error (Bassily et al., 2018; Esposito et al., 2021; Hellstrom and Durisi, 2020) and focused on large families of divergences and generalizations of the Mutual Information (in particular of the Sibson/Renyi-family, including conditional versions). ## 2 Preliminaries, Setup, and a General Bound ### Preliminaries #### 2.1.1 Information Measures The main building block of the information measures considered in this work is the Renyi's \(\alpha\)-divergence between two measures \(P\) and \(Q\), \(D_{\alpha}(P\|Q)\) (which can be seen as a parametrized generalization of the Kullback Leibler-divergence) (van Erven and Harremoes, 2014, Definition 2). Starting from Renyi's Divergence and the geometric averaging that it involves, Sibson built the notion of Information Radius (Sibson, 1969) which can be seen as a special case of the following quantity (Verdu, 2015): \(I_{\alpha}(X,Y)=\min_{Q_{Y}}D_{\alpha}(P_{XY}\|P_{X}Q_{Y})\). Sibson's \(I_{\alpha}(X,Y)\) represents a generalization of Shannon's mutual information, indeed one has that: \(\lim_{\alpha\to 1}I_{\alpha}(X,Y)=I(X;Y)=\mathbb{E}_{P_{XY}}\left[\log \left(\frac{dP_{XY}}{dP_{X}P_{Y}}\right)\right].\) Differently, when \(\alpha\rightarrow\infty\), one gets: \[I_{\infty}(X,Y)=\log\mathbb{E}_{P_{Y}}\left[\operatorname*{ess-sup}_{P_{X}} \frac{dP_{XY}}{dP_{X}P_{Y}}\right]=\mathcal{L}\left(X\!\!\rightarrow\!\!Y \right), \tag{2}\] where \(\mathcal{L}\left(X\!\!\rightarrow\!\!Y\right)\) denotes the maximal leakage from \(X\) to \(Y\), a recently defined information measure with an operational meaning in the context of privacy and security (Issa et al., 2020). Maximal leakage represents the main quantity of interest for the scope of this paper, as it is amenable to analysis and has been used to bound the generalization error (Esposito et al., 2021). As such, we will bound the maximal leakage between the input and output of generic noisy iterative algorithms. To that end, we mention a few useful properties of \(\mathcal{L}\left(X{\rightarrow}Y\right)\). If \(X\) and \(Y\) are jointly continuous random variables, then (Issa et al., 2020, Corollary 4) \[\mathcal{L}\left(X{\rightarrow}Y\right)=\log\int\underset{P_{X}}{\text{ess-sup}} \,f_{Y|X}(y|x)dy, \tag{3}\] where \(f_{Y|X}\) is the conditional pdf of \(Y\) given \(X\). 
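For intuition about formula (3), the following sketch (our own toy example, not from the paper) numerically evaluates the maximal leakage of a one-dimensional additive Gaussian channel whose input is supported on \([-a,a]\); the resulting closed form, \(\log(1+2a/(\sigma\sqrt{2\pi}))\), already has the "central volume plus tail" structure that reappears in Proposition 3 below:

```python
import numpy as np

def leakage_additive_gaussian_1d(a, sigma, half_width=50.0, n=400_001):
    """Numerically evaluate equation (3) for Y = X + N, X supported on [-a, a], N ~ N(0, sigma^2):
    L(X -> Y) = log of the integral over y of sup_{|x| <= a} f_N(y - x)."""
    y = np.linspace(-half_width, half_width, n)
    dist = np.maximum(np.abs(y) - a, 0.0)          # distance from y to the input support
    sup_density = np.exp(-dist**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.log(np.sum(sup_density) * (y[1] - y[0]))

a, sigma = 1.0, 0.5
print(leakage_additive_gaussian_1d(a, sigma))              # numerical integration of (3)
print(np.log(1.0 + 2 * a / (sigma * np.sqrt(2 * np.pi))))  # closed form for this toy channel
```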
Moreover, maximal leakage satisfies the following chain rule (the proof of which is given in Appendix A): **Lemma 1**: _Given a triple of random variables \(\left(X,Y_{1},Y_{2}\right)\), then_ \[\mathcal{L}\left(X{\rightarrow}Y_{1},Y_{2}\right)\leq\mathcal{L}\left(X{ \rightarrow}Y_{1}\right)+\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}\right), \tag{4}\] _where the conditional maximal leakage \(\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}\right)=\text{ess-sup}_{P_{Y_{1}}} \,\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}=y_{1}\right)\), where the latter term is interpreted as the maximal leakage from \(X\) to \(Y_{2}\) with respect to the distribution \(P_{XY_{2}|Y_{1}=y_{1}}\). Consequently, for random variables \(\left(X,\left(Y_{i}\right)_{i=1}^{n}\right)\),_ \[\mathcal{L}\left(X{\rightarrow}Y^{n}\right)\leq\sum_{i=1}^{n}\mathcal{L}\left( X{\rightarrow}Y_{i}|Y^{i-1}\right). \tag{5}\] Moreover, one can relate \(\mathcal{L}\left(X{\rightarrow}Y\right)\) to \(I(X;Y)\) through \(I_{\alpha}\). Indeed, an important property of \(I_{\alpha}\) is that it is non-decreasing in \(\alpha\), hence for every \(\infty>\alpha>1\): \[I(X;Y)=I_{1}(X,Y)\leq I_{\alpha}(X,Y)\leq I_{\infty}(X,Y)=\mathcal{L}\left(X{ \rightarrow}Y\right). \tag{6}\] For more details on Sibson's \(\alpha\)-MI we refer the reader to (Verdu, 2015), as for maximal leakage the reader is referred to (Issa et al., 2020). #### 2.1.2 Learning Setting Let \(\mathcal{Z}\) be the sample space, \(\mathcal{W}\) be the hypothesis space, and \(\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}_{+}\) be a loss function. Say \(\mathcal{W}\subseteq\mathbb{R}^{d}\). Let \(S=\left(Z_{1},Z_{2},\ldots,Z_{n}\right)\) consist of \(n\) i.i.d samples, where \(Z_{i}\sim P\), with \(P\) unknown. A learning algorithm \(\mathcal{A}\) is a mapping \(\mathcal{A}:\mathcal{Z}^{n}\rightarrow\mathcal{W}\) that given a sample \(S\) provides a hypothesis \(W=\mathcal{A}(S)\). \(\mathcal{A}\) can be either a deterministic or a randomized mapping and undertaking a probabilistic (and information-theoretic) approach one can then equivalently consider \(\mathcal{A}\) as a family of conditional probability distributions \(P_{W|S=s}\) for \(s\in\mathcal{Z}^{n}\)_i.e._, an information channel. Given a hypothesis \(w\in\mathcal{W}\) the true risk of \(w\) is denoted as follows: \[L_{P_{Z}}(w)=\mathbb{E}_{P}[\ell(w,Z)] \tag{7}\] while the empirical risk of \(w\) on \(S\) is denoted as follows: \[L_{S}(w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w,Z_{i}). \tag{8}\] Given a learning algorithm \(\mathcal{A}\), one can then define its generalization error as follows: \[\text{gen-err}_{\mathcal{P}}(\mathcal{A},S)=L_{\mathcal{P}}(\mathcal{A}(S))-L _{S}(\mathcal{A}(S)). \tag{9}\] Since both \(S\) and \(\mathcal{A}\) can be random, \(\text{gen-err}_{\mathcal{P}}(\mathcal{A},S)\) is a random variable and one can then study its expected value or its behavior in probability. Bounds on the expected value of the generalization error in terms of information measures are given in (Xu and Raginsky, 2017; Issa et al., 2019; Bu et al., 2019; Steinke and Zakynthinou, 2020) stating different variants of the following bound ((Xu and Raginsky, 2017, Theorem 1)): if \(\ell(w,Z)\) is \(\sigma^{2}\)-sub-Gaussian1 then Footnote 1: A \(0\)-mean random variable \(X\) is said to be \(\sigma^{2}\)-sub-Gaussian if \(\log\mathbb{E}[\exp(\lambda X)]\leq\sigma^{2}\lambda^{2}/2\) for every \(\lambda\in\mathbb{R}\). \[|\mathbb{E}[\text{gen-err}_{\mathcal{P}}(\mathcal{A},S)]|\leq\sqrt{\frac{2 \sigma^{2}I(S;\mathcal{A}(S))}{n}}. 
\tag{10}\] Thus, if one can prove that the mutual information between the input and output of a learning algorithm \(\mathcal{A}\) trained on \(S\) is bounded (ideally, growing less than linearly in \(n\)) then the expected generalization error of \(\mathcal{A}\) will vanish with the number of samples. Alternatively, Esposito et al. (2021) demonstrate high-probability bounds, involving different families of information measures. One such bound, which is relevant to the scope of this paper is the following (Esposito et al., 2021, Corollary 2): assume \(\ell(w,Z)\) is \(\sigma^{2}\)-sub-Gaussian and let \(\alpha>1\) \[\mathbb{P}(|\text{gen-err}_{P}(\mathcal{A},S)|\geq t)\leq 2\exp\left(-\frac{ \alpha-1}{\alpha}\left(\frac{nt^{2}}{2\sigma^{2}}-I_{\alpha}(S,\mathcal{A}(S) )\right)\right), \tag{11}\] taking the limit of \(\alpha\rightarrow\infty\) in (11) leads to the following (Esposito et al., 2021, Corollary 4): \[\mathbb{P}(|\text{gen-err}_{P}(\mathcal{A},S)|\geq t)\leq 2\exp\left(-\left( \frac{nt^{2}}{2\sigma^{2}}-\mathcal{L}\left(S{\rightarrow}\mathcal{A}(S) \right)\right)\right). \tag{12}\] Thus, in this case, if one can prove that the maximal leakage between the input and output of a learning algorithm \(\mathcal{A}\) trained on \(S\) is bounded, then the **probability** of the generalization error of \(\mathcal{A}\) being larger than any constant \(t\) will decay **exponentially fast** in the number of samples \(n\). ### Problem Setup We consider iterative algorithms, where each update is of the following form: \[W_{t}=g(W_{t-1})-\eta_{t}F(W_{t-1},Z_{t})+\xi_{t},\ \forall\ t\geq 1, \tag{13}\] where \(Z_{t}\subseteq S\) (sampled according to some distribution), \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a deterministic function, \(F(W_{t-1},Z_{t})\) computes a direction (e.g., gradient), \(\eta_{t}\) is a constant step-size, and \(\xi_{t}=(\xi_{t1},\ldots,\xi_{td})\) is noise. We will assume for the remainder of this paper that \(\eta_{t}\) has an absolutely continuous distribution. Let \(T\) denote the total number of iterations, \(W^{t}=(W_{1},W_{2},\ldots W_{t})\), and \(Z^{t}=(Z_{1},Z_{2},\ldots,Z_{t})\). The algorithms under consideration further satisfy the following two assumptions * **Assumption 1 (Sampling):** The sampling strategy is agnostic to parameter vectors: \[P(Z_{t+1}|Z^{t},W^{t},S)=P(Z_{t+1}|Z^{t},S).\] (14) * **Assumption 2 (\(\mathbf{L_{p}}\)-Boundedness):** For some \(p>0\) and \(L>0\), \(\sup_{w,z}\|F(w,z)\|_{p}\leq L\). As a consequence of the first assumption and the structure of the iterates, we get: \[P(W_{t+1}|W^{t},Z^{T},S)=P(W_{t+1}|W_{t},Z_{t+1}). \tag{15}\] The above setup was proposed by Pensia _et al._Pensia et al. (2018), who specifically studied the case \(p=2\). Denoting by \(W\) the final output of the algorithm (some function of \(W^{T}\)), they show that **Theorem 2** ((Pensia et al., 2018, Theorem 1)): _If the boundedness assumption holds for \(p=2\) and \(\xi_{t}\sim\mathcal{N}(0,\sigma_{t}^{2}I_{d})\), then_ \[I(S;W)\leq\frac{d}{2}\sum_{t=1}^{T}\log\left(1+\frac{\eta_{t}^{2}L^{2}}{d \sigma_{t}^{2}}\right). \tag{16}\] By virtue of inequality (10), this yields a bound on the expected generalization error. In this work, we derive bounds on the maximal leakage between \(\mathcal{L}\left(S{\rightarrow}W\right)\) for iterative noisy algorithms, which leads to high-probability bounds on the generalization error (cf. equation (12)). 
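A minimal sketch of one realization of the update rule (13) under the two assumptions above (with \(g\) taken to be the identity, an \(L_{2}\)-clipped gradient so that Assumption 2 holds with the chosen \(L\), and isotropic Gaussian noise; the toy least-squares objective and all constants are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def clip_l2(g, L):
    """Project the update direction onto the L2 ball of radius L (enforces Assumption 2 with p = 2)."""
    norm = np.linalg.norm(g)
    return g if norm <= L else g * (L / norm)

def noisy_iterative_run(S, T=100, eta=0.1, sigma=0.05, L=1.0, d=10):
    """W_t = W_{t-1} - eta * clip(grad) + xi_t, with xi_t ~ N(0, sigma^2 I_d).
    Toy loss: squared error of a linear predictor on mini-batches of S = (X, y)."""
    X, y = S
    w = np.zeros(d)
    for t in range(T):
        idx = rng.choice(len(y), size=32)               # sampling is agnostic to w (Assumption 1)
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - eta * clip_l2(grad, L) + sigma * rng.normal(size=d)
    return w

X = rng.normal(size=(1000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=1000)
print(np.round(noisy_iterative_run((X, y)), 2))
```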
We consider different scenarios in which \(F\) is bounded in \(L_{1}\), \(L_{2}\), or \(L_{\infty}\) norm, and the added noise is Laplace, Gaussian, or Uniform. ### Notation Given \(d\in\mathbb{N}\), \(w\in\mathbb{R}^{d}\), and \(r>0\), let \(\mathcal{B}_{p}^{d}(w,r)=\{x\in\mathbb{R}^{d}:\|x-w\|_{p}\leq r\}\) denote the \(L_{p}\)-ball of radius \(r\) and center \(w\), and let \(V_{p}(d,r)\) denote its corresponding volume. When the dimension \(d\) is clear from the context, we may drop the superscript and write \(\mathcal{B}_{p}(w,r)\). Given a set \(S\), we denote its complement by \(\overline{S}\). The \(i\)-th component of \(w_{t}\) will be denoted by \(w_{ti}\). We denote the pdf of the noise \(\xi_{t}\) by \(f_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}\). The following functional will be useful for our study: given \(d\in\mathbb{N}\), \(p>0\), a pdf \(f:\mathbb{R}^{d}\to R\), and an \(r\geq 0\), define \[h(d,p,f,r):=\int\limits_{\overline{\mathcal{B}_{p}^{d}}(0,r)}\sup_{x\in \mathcal{B}_{p}^{d}(0,r)}f(w-x)\mathrm{d}w. \tag{17}\] We denote the "positive octant" by \(A_{d}\), i.e., \[A_{d}:=\{w\in\mathbb{R}^{d}:w_{i}\geq 0,\text{ for all }i\in\{1,2,\ldots,d\}\}. \tag{18}\] Since we will mainly consider pdfs that are symmetric (Gaussian, Laplace, uniform), the \(h\) functional "restricted" to \(A_{d}\) will be useful: \[h_{+}(d,p,f,r):=\int\limits_{\overline{\mathcal{B}_{p}^{d}}(0,r)\cap A_{d}} \sup_{x\in\mathcal{B}_{p}^{d}(0,r)}f(w-x)\mathrm{d}w. \tag{19}\] ### General Bound **Proposition 3**: _Suppose \(f_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is maximized for \(x=0\). If Assumptions 1 and 2 hold for some \(p>0\), then_ \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\log\left(f_{t}(0)V_{ p}(d,\eta_{t}L)+h(d,p,f_{t},\eta_{t}L)\right), \tag{20}\] _where \(h\) is defined in equation (17)._ The above bound is appealing as it implicitly poses an optimization problem: given a constraint on the noise pdf \(f_{t}\) (say, a bounded variance), one may choose \(f_{t}\) as to minimize the upper bound in equation (20). Moreover, despite its generality, we show that it is tight in several interesting cases, including when \(p=2\) and \(f_{t}\) is the Gaussian pdf. The series of steps leading to the upper bound include only one inequality (the source of the "looseness" of the bound), which could be viewed as due to replacing Assumption 2 by the following statement: for all \(w\), \(\cup_{z}F(w,z)=\mathcal{B}_{p}^{d}(0,L)\), i.e., in addition to assuming \(F(w,z)\in\mathcal{B}_{p}^{d}(0,L)\), we assume that, for every \(w\), every point in the ball \(\mathcal{B}_{p}^{d}(0,L)\) is attained for some \(z\). In the next section, we consider several scenarios for different values of \(p\) and different noise distributions. As a testament to the tractability of maximal leakage, we derive exact semi-closed form expressions for the bound of Proposition 3. Finally, it is worth noting that the form of the bound allows to choose different noise distributions at different time steps, but these examples are outside the scope of this paper. 
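To see the structure of the bound in the simplest setting, note that for \(d=1\) (where every \(L_{p}\) ball is the interval \([-r,r]\)) and any symmetric noise density that is non-increasing on \([0,\infty)\), the tail term satisfies \(h(1,p,f,r)=1\), so each summand in (20) reduces to \(\log(1+2rf(0))\). The following sketch (our own toy check, not from the paper) confirms this numerically for Gaussian noise:

```python
import numpy as np

def prop3_summand_1d(pdf, r, half_width=50.0, n=500_001):
    """Evaluate log( f(0) * V_p(1, r) + h(1, p, f, r) ) from equation (20) on a grid (d = 1)."""
    w = np.linspace(-half_width, half_width, n)
    sup_density = pdf(w - np.clip(w, -r, r))     # sup_{|x| <= r} f(w - x), attained at the clipped point
    h = np.sum(sup_density[np.abs(w) > r]) * (w[1] - w[0])
    return np.log(pdf(0.0) * 2 * r + h), h

sigma, r = 1.0, 0.3
gauss = lambda v: np.exp(-np.asarray(v)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
bound, h = prop3_summand_1d(gauss, r)
print(h)                                          # approximately 1.0, independent of r and sigma
print(bound, np.log(1 + 2 * r * gauss(0.0)))      # matches log(1 + 2 r f(0))
```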
* We proceed as in the the work of (Pensia et al., 2018): \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\mathcal{L}\left(Z^{T}{ \rightarrow}W^{T}\right)\leq\sum_{t=1}^{T}\mathcal{L}\left(Z^{T}{ \rightarrow}W_{t}|W^{t-1}\right)=\sum_{t=1}^{T}\mathcal{L}\left(Z_{t}{ \rightarrow}W_{t}|W_{t-1}\right),\] (21) where the first inequality follows from Lemma 2 of (Pensia et al., 2018) and the data processing inequality for maximal leakage (Issa et al., 2020, Lemma 1), the second inequality follows Lemma 1, and the equality follows from (15). Now, \[\exp\left\{\mathcal{L}\left(Z_{t}{\rightarrow}W_{t}|W_{t-1}=w_{t -1}\right)\right\} =\int_{\mathbb{R}^{d}}\underset{P_{z_{t}}}{\text{ess-sup}}\,p(w_{ t}|Z_{t})\mathrm{d}w_{t}\] (22) \[=\int_{\mathbb{R}^{d}}\underset{P_{z_{t}}}{\text{ess-sup}}\,f_{t }\left(w_{t}-g(w_{t-1})+\eta_{t}F(w_{t-1},Z_{t})\right)\mathrm{d}w_{t},\] (23) \[=\int_{\mathbb{R}^{d}}\underset{P_{z_{t}}}{\text{ess-sup}}\,f_{t }\left(w_{t}+\eta_{t}F(w_{t-1},Z_{t})\right)\mathrm{d}w_{t},\] (24) where the last equality follows from a change of a variable \[\tilde{w}_{t}=w_{t}-g(w_{t-1})\]. Finally, since \[\eta_{t}F(w_{t-1},z_{t})\in\mathcal{B}_{p}(0,\eta_{t}L)\] by assumption, we can further upper-bound the above by: \[\exp\left\{\mathcal{L}\left(Z_{t}{\rightarrow}W_{t}|W_{t-1}=w_{ t-1}\right)\right\}\] (25) \[\leq\int_{\mathbb{R}^{d}}\underset{x_{t}\in\mathcal{B}_{p}(0,\eta _{t}L)}{\sup}f_{t}\left(w_{t}+x_{t}\right)\mathrm{d}w_{t}\] (26) \[=\int\limits_{\mathcal{B}_{p}(0,\eta_{t}L)}\,\underset{x_{t}\in \mathcal{B}_{p}(0,\eta_{t}L)}{\sup}f_{t}\left(w_{ti}+x_{ti}\right)\mathrm{d}w_ {t}+\int\limits_{\mathcal{B}_{p}^{c}(0,\eta_{t}L)}\,\underset{x_{t}\in \mathcal{B}_{p}(0,\eta_{t}L)}{\sup}f_{t}\left(w_{t}+x_{t}\right)\mathrm{d}w_{t}\] (27) \[=f_{t}(0)V_{p}(d,\eta_{t}L)+\int\limits_{\overline{\mathcal{B}_{ p}}(0,\eta_{t}L)}\,\underset{x_{t}\in\mathcal{B}_{p}(0,\eta_{t}L)}{\sup}f_{t} \left(w_{t}-x_{t}\right)\mathrm{d}w_{t},\] (28) where the last equality follows from the assumptions on \[f_{t}\] ## 3 Boundedness in \(L_{2}\)-Norm Considering the case where \(F\) computes a gradient, then boundedness in \(L_{2}\)-norm is a common assumption. It is commonly enforced, for instance, using gradient clipping (Abadi et al., 2016, 2020). The case in which \(p=2\) and the noise is Gaussian leads to the strongest result in this paper: **Theorem 4**: _If the boundedness assumption holds for \(p\leq 2\) and \(\xi_{t}\sim\mathcal{N}(0,\sigma_{t}^{2}I_{d})\), then_ \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\log\left(\frac{V_{2 }(d,\eta_{t}L)}{(2\pi\sigma_{t}^{2})^{d/2}}+\frac{1}{\Gamma\left(\frac{d}{2} \right)}\sum_{i=0}^{d-1}\Gamma\left(\frac{i+1}{2}\right)\left(\frac{\eta_{t}L} {\sigma_{t}\sqrt{2}}\right)^{d-1-i}\right), \tag{29}\] _where \(V_{2}(d,r)=\frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}r^{d}\). Consequently, for fixed \(T\),_ \[\lim_{d\rightarrow\infty}\mathcal{L}\left(S{\rightarrow}W\right)=0. \tag{30}\] Remarkably, equation (30) states that, as \(d\) grows, \(S\) and \(W\) are asymptotically independent. The bound is asymptotically optimal for \(\mathcal{L}\left(S{\rightarrow}W\right)\) (indeed it yields an equality). More importantly, the induced high probability bound by equation (12) is also optimal. Indeed, at the limit when \(S\) and \(W\) are independent, the bound (12) recovers (the order optimal) McDiarmid's inequality _i.e._, under the assumptions of Theorem 4 and considering the \(0-1\) loss: \[\mathbb{P}(|\text{gen-err}(\mathcal{A},S)|\geq t)\leq 2\exp(-2nt^{2}). 
\tag{31}\] This can be seen as an explanation of the (arguably un-intuitive) phenomenon that deeper networks often generalize better (also analyzed by (Wang et al., 2023)). By contrast, this is not captured in the bound by (Pensia et al., 2018) given in equation (16). Indeed, it is growing as a function of \(d\), and tends to a non-zero value: \[\lim_{d\rightarrow\infty}\frac{d}{2}\sum_{t=1}^{T}\log\left(1+\frac{\eta_{t}^{2}L^{2}}{d\sigma_{t}^{2}}\right)=\sum_{t=1}^{T}\frac{\eta_{t}^{2}L^{2}}{2\sigma_{t}^{2}}, \tag{32}\] where the equality follows from the fact that \(\lim_{n\rightarrow\infty}(1+c/n)^{n}=e^{c}\) for all \(c\in\mathbb{R}\). Notably, however, \(I(S;W)\leq\mathcal{L}\left(S{\rightarrow}W\right)\) (cf. equation (6)), so that for large \(d\), we have \[I(S;W)\leq\mathcal{L}\left(S{\rightarrow}W\right)\leq\text{right-hand side of equation (29)}, \tag{33}\] which, by Theorem 4, vanishes as \(d\) grows; hence the maximal-leakage analysis also yields a stronger control of the mutual information, and thus of the expected generalization error, than equation (16) in the high-dimensional regime. Moreover, since \(\mathcal{B}_{q}^{d}(0,r)\subseteq\mathcal{B}_{p}^{d}(0,r)\) whenever \(q\leq p\), the bound of Proposition 3 is non-decreasing in \(p\): any bound derived under Assumption 2 for a given \(p\) also applies for all \(q\leq p\). In particular, adding Gaussian noise is asymptotically optimal (in the sense discussed above) when Assumption 2 holds for any \(p\leq 2\). To show that the right-hand side of Equation (29) goes to zero as \(d\rightarrow\infty\), we use Stirling's approximation of the Gamma function: for all \(x>0\), \[\sqrt{2\pi}x^{x-\frac{1}{2}}e^{-x}\leq\Gamma(x)\leq\sqrt{2\pi}x^{x-\frac{1}{2}}e^{-x}e^{\frac{1}{12x}}. \tag{34}\] The details of the computation can be found in Appendix B. We now turn to the proof of inequality (29). The conditions of Proposition 3 are satisfied, thus it is sufficient to prove the bound for \(p=2\): \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\log\left(f_{t}(0)V_{2}(d,\eta_{t}L)+\int\limits_{\overline{\mathcal{B}}_{2}(0,\eta_{t}L)}\sup_{x_{t}\in\mathcal{B}_{2}(0,\eta_{t}L)}f_{t}(w_{t}-x_{t})\mathrm{d}w_{t}\right) \tag{35}\] \[=\sum_{t=1}^{T}\log\left(\frac{V_{2}(d,\eta_{t}L)}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}+\int\limits_{\overline{\mathcal{B}}_{2}(0,\eta_{t}L)}\frac{1}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\exp\left(-\frac{\left(\|w_{t}\|_{2}-\eta_{t}L\right)^{2}}{2\sigma_{t}^{2}}\right)\mathrm{d}w_{t}\right), \tag{36}\] where the equality uses that, for \(\|w_{t}\|_{2}>\eta_{t}L\), the supremum over the ball is attained at \(x_{t}=\eta_{t}L\,w_{t}/\|w_{t}\|_{2}\). Passing to spherical coordinates and expanding the resulting radial integral term by term then yields the right-hand side of equation (29). ## 4 Boundedness in \(L_{\infty}\)-Norm The bound in Proposition 3 makes minimal assumptions about the pdf \(f_{t}\). In many practical scenarios we have more structure we could leverage. In particular, we make the following standard assumptions in this section: * \(\xi_{t}\) is composed of i.i.d. components. Letting \(f_{t0}\) be the pdf of a component, \(f_{t}(x_{t}){=}\prod\limits_{i=1}^{d}f_{t0}(x_{ti})\). * \(f_{t0}\) is symmetric around 0 and non-increasing over \([0,\infty)\). In this setting, Proposition 3 reduces to a very simple form for \(p=\infty\): **Theorem 6**: _Suppose \(f_{t}\) satisfies the above assumptions. If Assumptions 1 and 2 hold for \(p=\infty\), then_ \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum\limits_{t=1}^{T}d\log\left(1+2\eta_{t}Lf_{t0}(0)\right). \tag{41}\] Unlike the bound of Theorem 4, the limit as \(d\rightarrow\infty\) here is infinite. 
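Plugging concrete noise densities into the right-hand side of (41) makes the comparison tangible; the sketch below (toy values of our own choosing) evaluates the per-iteration bound \(d\log(1+2\eta Lf_{0}(0))\) for uniform, Gaussian, and Laplace component noise of equal variance:

```python
import numpy as np

def linf_bound_per_step(d, eta, L, f0_at_zero):
    """Right-hand side of equation (41) for a single iteration: d * log(1 + 2*eta*L*f0(0))."""
    return d * np.log(1.0 + 2.0 * eta * L * f0_at_zero)

sigma, eta, L, d = 1.0, 0.1, 1.0, 100
f0 = {
    "uniform":  1.0 / (2.0 * np.sqrt(3.0) * sigma),   # U(-sigma*sqrt(3), sigma*sqrt(3))
    "gaussian": 1.0 / (np.sqrt(2.0 * np.pi) * sigma),
    "laplace":  1.0 / (np.sqrt(2.0) * sigma),         # Lap(0, sigma/sqrt(2))
}
for name, value in f0.items():
    print(name, linf_bound_per_step(d, eta, L, value))
```

Among the three, the uniform density yields the smallest bound at equal variance, which anticipates the optimality result of Theorem 7 below.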
However, the bounded-\(L_{\infty}\) assumption is _weaker_ than assuming a bounded \(L_{2}\)-norm. Moreover, the assumption of having a bounded \(L_{\infty}\)-norm is satisfied in (Pichapati et al., 2019) where the authors clipped the gradient in terms of the \(L_{\infty}\)-norm, thus "enforcing" the assumption. On the other hand, the theorem has an intriguing form as, under standard assumptions, the bound depends on \(f_{t0}\) only through \(f_{t0}(0)\). This naturally leads to an optimization problem: given a certain constraint on the noise, which distribution \(f^{\star}\) minimizes \(f(0)\)? The following theorem shows that, if the noise is required to have a bounded variance, then \(f^{\star}\) corresponds to the uniform distribution: **Theorem 7**: _Let \(\mathcal{F}\) be the family of probability densities (over \(\mathbb{R}\)) satisfying for each \(f\in\mathcal{F}\):_ 1. \(f\) _is symmetric around 0._ 2. \(f\) _is non-increasing over_ \([0,\infty)\)_._ 3. \(\mathbf{E}_{f}[X^{2}]\leq\sigma^{2}\)_._ _Then, the distribution minimizing \(f(0)\) over \(\mathcal{F}\) is the uniform distribution \(\mathcal{U}(-\sigma\sqrt{3},\sigma\sqrt{3})\)._ That is, uniform noise is optimal in the sense that it minimizes the upper bound in Theorem 6 under bounded variance constraints. The proof of Theorem 7 is deferred to Appendix E. ### Proof of Theorem 6 Since the assumptions of Proposition 3 hold, then \[\mathcal{L}\left(S{\rightarrow}W\right) \leq\sum\limits_{t=1}^{T}\log\left(f_{t}(0)V_{\infty}(d,\eta_{t} L)+\int\limits_{\mathcal{B}_{\infty}(0,\eta_{t}L)}\sup\limits_{x_{t}\in \mathcal{B}_{\infty}(0,\eta_{t}L)}f_{t}(w_{t}-x_{t})\mathrm{d}w_{t}\right) \tag{42}\] \[=\sum\limits_{t=1}^{T}\log\left((2\eta_{t}Lf_{t0}(0))^{d}+\int \limits_{\mathcal{B}_{\infty}(0,\eta_{t}L)}\prod\limits_{i=1}^{d}\sup\limits_ {x_{ti}:|x_{ti}|\leq\eta_{t}L}f_{t0}(w_{ti}-x_{ti})\mathrm{d}w_{t}\right). \tag{43}\] It remains to show that \(h(d,\infty,f_{t},\eta_{t}L)\) (i.e., the second term inside the \(\log\) in Equation (17)) is equal to \((1+2\eta_{t}Lf_{t0}(0))^{d}-(2\eta_{t}Lf_{t0}(0))^{d}\). We will derive a recurrence relation for \(h\) in terms of \(d\). To simplify the notation, we drop the subscript \(t\) and ignore the dependence of \(h\) on \(p=\infty\), \(f_{t}\), and \(\eta_{t}L\), so that we simply write \(h(d)\) (and correspondingly, \(h_{+}(d)\), cf. Equation (19)). By symmetry, \(h(d)=2^{d}h_{+}(d)\). Letting \(w^{d-1}:=(w_{1},\ldots,w_{d-1})\), we will decompose the integral over \(\overline{\mathcal{B}_{\infty}^{d}}(0,\eta_{t}L)\) into two disjoint subsets: 1) \(w^{d-1}\notin\mathcal{B}_{\infty}^{d-1}(0,\eta_{t}L)\), in which case \(w_{d}\) can take any value in \(\mathbb{R}\), and 2) \(w^{d-1}\in\mathcal{B}_{\infty}^{d-1}(0,\eta_{t}L)\), in which case \(w_{d}\) must satisfy \(|w_{d}|>\eta_{t}L\). \[h_{+}(d) =\int\limits_{\overline{\mathcal{B}_{\infty}^{d-1}}(0,\eta_{t}L )\cap A_{d-1}}\prod_{i=1}^{d-1}\sup_{x_{i}:|x_{i}|\leq\eta_{t}L}f(w_{i}-x_{i}) \int_{0}^{\infty}\sup_{x_{d}:|x_{d}|\leq\eta_{t}L}f(w_{d}-x_{d})\mathrm{d}w_{d }\mathrm{d}w^{d-1} \tag{44}\] \[+\int\limits_{\mathcal{B}_{\infty}^{d-1}}(0,\eta_{t}L)\cap A_{d-1 }}\prod_{i=1}^{d-1}\sup_{x_{i}:|x_{i}|\leq\eta_{t}L}f(w_{i}-x_{i})\int_{\eta_{ t}L}^{\infty}\sup_{x_{d}:|x_{d}|\leq\eta_{t}L}f(w_{d}-x_{d})\mathrm{d}w_{d }\mathrm{d}w^{d-1} \tag{45}\] The innermost integral of line (45) is independent of \(w^{d-1}\) so that the outer integral is equal to \(h_{+}(d-1)\). 
Similarly, the innermost integral of line (45) is independent of \(w^{d-1}\), and the supremum in the outer integral yields \(f(0)\) for every \(i\). Hence, we get \[h(d)=(1+2\eta_{t}Lf(0))\,h(d-1)+(2\eta_{t}Lf(0))^{d-1}, \tag{46}\] the detailed proof of which is deferred to Appendix D. Finally, it is straightforward to check that \(h(1)=1\), hence \(h(d)=(1+2\eta_{t}Lf(0))^{d}-(2\eta_{t}Lf(0))^{d}\).

## 5 Boundedness in \(L_{1}\)-Norm

In this section, we consider the setting where Assumption 2 holds for \(p=1\). By Proposition 3, any bound derived for \(p=2\) holds for \(p=1\) as well. In particular, Theorem 4 applies, so that \(\mathcal{L}\left(S\!\!\to\!\!W\right)\) goes to zero when the noise is Gaussian. Nevertheless, it is possible to compute a semi-closed form directly for \(p=1\) (cf. Theorem 9 below), which is inherently tighter. Considering the optimality of Gaussian noise for the \(p=2\) case, and the optimality of uniform noise (in the sense discussed above) for the \(p=\infty\) case, one might wonder whether Laplace noise is optimal for the \(p=1\) case. We answer this question in the negative, as the limit of the leakage bound in this case is a non-zero constant (cf. Theorem 8), as opposed to the zero limit when the noise is Gaussian.

### Bound for Laplace noise

We say \(X\) has a Laplace distribution, denoted by \(X\sim\mathrm{Lap}(\mu,1/\lambda)\), if its pdf is given by \(f(x)=\frac{\lambda}{2}e^{-\lambda|x-\mu|}\) for \(x\in\mathbb{R}\), for some \(\mu\in\mathbb{R}\) and \(\lambda>0\). The corresponding variance is given by \(2/\lambda^{2}\).

**Theorem 8**: _If the boundedness assumption holds for \(p=1\) and \(\xi_{t}\) is composed of i.i.d components, each of which is \(\sim\mathrm{Lap}(0,\frac{\sigma_{t}}{\sqrt{2}})\), then_ \[\mathcal{L}\left(S\!\!\to\!\!W\right)\leq\sum_{t=1}^{T}\log\left(\frac{V_{1}(d,\eta_{t}L)}{(\sigma_{t}\sqrt{2})^{d}}+\sum_{i=0}^{d-1}\frac{(\sqrt{2}\eta_{t}L/\sigma_{t})^{i}}{i!}\right), \tag{47}\] _where \(V_{1}(d,r)=\frac{(2r)^{d}}{d!}\). Consequently, for fixed \(T\),_ \[\lim_{d\to\infty}\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\frac{\sqrt{2}\eta_{t}L}{\sigma_{t}}. \tag{48}\]

**Proof** We give a high-level description of the proof (as similar techniques have been used in proofs of earlier theorems) and defer the details to Appendix F. Since the multivariate Laplace distribution (for i.i.d variables) depends on the \(L_{1}\)-norm of the corresponding vector of variables, we need to solve the following problem: given \(R>0\) and \(w\notin\mathcal{B}_{1}(0,R)\), compute \[\inf_{x\in\mathcal{B}_{1}(0,R)}\|w-x\|_{1}. \tag{49}\] The closest element in \(\mathcal{B}_{1}(0,R)\) will lie on the hyperplane defining \(\mathcal{B}_{1}\) that is in the same octant as \(w\), so the problem reduces to projecting a point on a hyperplane in \(L_{1}\)-distance (the proof in the appendix does not follow this argument but arrives at the same conclusion). Then, we need to compute \(h(d,1,f_{t},\eta_{t}L)\). We use a similar approach as in the proof of Theorem 6, that is, we split the integral and derive a recurrence relation. 
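As a quick numerical illustration (with arbitrary example values, not taken from the analysis above), the following sketch evaluates the right-hand side of (47) for increasing \(d\) and compares it with the limit in (48), using the Laplace rate \(\lambda_{t}=\sqrt{2}/\sigma_{t}\) made explicit in Appendix F.

```python
import math

# Numerical sanity check of the Laplace bound (Theorem 8) for a single step (T = 1).
# Example values (not from the paper): sigma = 1.0, eta*L = 0.5.
sigma, etaL = 1.0, 0.5
rate = math.sqrt(2) / sigma          # Laplace rate lambda_t, so that the variance is sigma^2

def bound_term(d):
    # First term: V_1(d, etaL) / (sigma*sqrt(2))^d, computed in log-space to avoid overflow.
    log_v1 = d * math.log(2 * etaL) - math.lgamma(d + 1)
    term1 = math.exp(log_v1 - d * math.log(sigma * math.sqrt(2)))
    # Second term: truncated exponential series sum_{i<d} (rate*etaL)^i / i!.
    term2 = sum((rate * etaL) ** i / math.factorial(i) for i in range(d))
    return math.log(term1 + term2)

for d in (1, 5, 20, 100):
    print(d, round(bound_term(d), 6))
print("limit  ", round(rate * etaL, 6))   # sqrt(2)*etaL/sigma
```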
### Bound for Gaussian noise Finally, we derive a bound on the induced leakage when the added noise is Gaussian: **Theorem 9**: _If the boundedness assumptions holds for \(p=1\) and \(\xi_{t}\sim\mathcal{N}(0,\sigma_{t}^{2}I_{d})\), then_ \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\log\left(\frac{V_{1 }(d,R_{t})}{(2\pi\sigma^{2})^{\frac{d}{2}}}+\frac{(2\eta_{t}L)^{d-1}(\sigma_{t }\sqrt{2d})}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}((d-1)!)}\sum_{i=0}^{d-1}\left( \frac{\sigma_{t}\sqrt{2d}}{\eta_{t}L}\right)^{i}\Gamma\left(\frac{i+1}{2} \right)\right). \tag{50}\] Theorem 9 is tighter than Theorem 4 for any given \(d\) moreover one has, again, that: \[\lim_{d\to\infty}\mathcal{L}\left(S{\rightarrow}W\right)=0. \tag{51}\] Equation (51) follows from Theorem 4 and the fact that the bound in Proposition 3 is increasing in \(p\) (cf. discussion below Theorem 4). In order to prove Theorem 9 one has to solve a problem similar to the one introduced in Theorem 8 (cf. equation (49)). However, in this case a different norm is involved: i.e., given \(R>0\) and \(w\notin\mathcal{B}_{1}(0,R)\), one has to compute \[\inf_{x\in\mathcal{B}_{1}(0,R)}\|w-x\|_{2}. \tag{52}\] Again, one can argue that the point achieving the infimum lies on the hyperplane defining \(\mathcal{B}_{1}\) that is in the same octant as \(w\). In other words, the minimizer \(x^{\star}\) is such that the sign of each component is the same sign as the corresponding component of \(w\) (and lies on the boundary of \(\mathcal{B}_{1}\)). Thus, we are simply projecting a point on a hyperplane. The induced integral is solved by an opportune choice of change of variables. The details of the proof are given in Appendix G. ## Appendix A Proof of Lemma 1 Recall the definition of maximal leakage and conditional maximal leakage: **Definition 10** (Maximal Leakage (Issa et al., 2020, Definition 1)): _Given two random variables \((X,Y)\) with joint distribution \(P_{XY}\),_ \[\mathcal{L}\left(X{\rightarrow}Y\right)=\log\sup_{U:U-X-Y}\frac{ \mathbf{Pr}(\hat{U}(Y)=U)}{\max_{u}P_{U}(u)}, \tag{53}\] _where \(U\) takes values in a finite, but arbitrary, alphabet, and \(\hat{U}(Y)\) is the optimal estimator (i.e., MAP) of \(U\) given \(Y\)._ Similarly, **Definition 11** (Conditional Maximal Leakage (Issa et al., 2020, Definition 6)): _Given three random variables \((X,Y,Z)\) with joint distribution \(P_{XYZ}\),_ \[\mathcal{L}\left(X{\rightarrow}Y|Z\right)=\log\sup_{U:U-X-Y|Z} \frac{\mathbf{Pr}(\hat{U}(Y,Z)=U)}{\mathbf{Pr}(\hat{U}(Z)=U)}, \tag{54}\] _where \(U\) takes values in a finite, but arbitrary, alphabet, and \(\hat{U}(Y,Z)\) and \(\hat{U}(Z)\) are the optimal estimators (i.e., MAP) of \(U\) given \((Y,Z)\) and \(U\) given \(Z\), respectively._ It then follows that \[\mathcal{L}\left(X{\rightarrow}Y_{1},Y_{2}\right) =\log\sup_{U:U-X-(Y_{1},Y_{2})}\frac{\mathbf{Pr}(\hat{U}(Y_{1},Y_ {2})=U)}{\max_{u}P_{U}(u)} \tag{55}\] \[=\log\sup_{U:U-X-(Y_{1},Y_{2})}\frac{\mathbf{Pr}(\hat{U}(Y_{1},Y_ {2})=U)}{\mathbf{Pr}(\hat{U}(Y_{1})=U)}\frac{\mathbf{Pr}(\hat{U}(Y_{1})=U)}{ \max_{u}P_{U}(u)}\] (56) \[\leq\log\sup_{U:U-X-(Y_{1},Y_{2})}\frac{\mathbf{Pr}(\hat{U}(Y_{1},Y_{2})=U)}{\mathbf{Pr}(\hat{U}(Y_{1})=U)}\cdot\sup_{U:U-X-(Y_{1},Y_{2})}\frac {\mathbf{Pr}(\hat{U}(Y_{1})=U)}{\max_{u}P_{U}(u)}\] (57) \[\leq\log\sup_{U:U-X-Y_{2}|Y_{1}}\frac{\mathbf{Pr}(\hat{U}(Y_{1},Y _{2})=U)}{\mathbf{Pr}(\hat{U}(Y_{1})=U)}\cdot\sup_{U:U-X-Y_{1}}\frac{\mathbf{ Pr}(\hat{U}(Y_{1})=U)}{\max_{u}P_{U}(u)}\] (58) \[=\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}\right)+\mathcal{L} 
\left(X{\rightarrow}Y_{1}\right), \tag{59}\] where the last inequality follows from the fact that \(U-X-(Y_{1},Y_{2})\) implies \(U-X-Y_{2}|Y_{1}\). The fact that \[\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}\right)=\operatorname*{ ess\text{-}sup}_{P_{Y_{1}}}\mathcal{L}\left(X{\rightarrow}Y_{2}|Y_{1}=y_{1} \right), \tag{60}\] has been shown for discrete alphabets in Theorem 6 of (Issa et al., 2020). The extension to continuous alphabets is similar (with integrals replacing sums, and pdfs replacing pmfs, where appropriate). Finally, it remains to show equation (5). We proceed by induction. The case \(n=2\) has already been shown above. Assume the inequality is true up to \(n-1\) variables, then \[\mathcal{L}\left(X{\rightarrow}Y^{n}\right) \leq\mathcal{L}\left(X{\rightarrow}Y_{1}\right)+\underset{P_{Y_{ 1}}}{\text{ess-sup}}\,\mathcal{L}\left(X{\rightarrow}Y_{2}^{n}|Y_{1}=y_{1}\right) \tag{61}\] \[\leq\mathcal{L}\left(X{\rightarrow}Y_{1}\right)+\underset{P_{Y_ {1}}}{\text{ess-sup}}\sum_{i=2}^{n}\mathcal{L}\left(X{\rightarrow}Y_{i}|Y^{i-1 },Y_{1}=y_{1}\right)\] (62) \[=\sum_{i=1}^{n}\mathcal{L}\left(X{\rightarrow}Y_{i}|Y^{i-1} \right), \tag{63}\] where the second inequality follows from the induction hypothesis. ## Appendix B Proof of equation (30) For notational convenience, let \(c_{1}=\frac{\sigma_{t}\sqrt{2}}{\eta_{t}L}\) and \(c_{2}=\frac{2e}{c_{1}^{2}}\). Then, \[\sum_{i=0}^{d-1}\frac{\Gamma\left(\frac{i+1}{2}\right)}{\Gamma \left(\frac{d}{2}\right)}c_{1}^{i-(d-1)} =1+\sum_{i=0}^{d-2}\frac{\Gamma\left(\frac{i+1}{2}\right)}{\Gamma \left(\frac{d}{2}\right)}\leq 1+\sum_{i=0}^{d-2}c_{1}^{(i-(d-1))}\frac{ \left(\frac{i+1}{2}\right)^{\frac{i}{2}}e^{-\frac{i+1}{2}}e^{\frac{1}{12e}}}{ \left(\frac{d}{2}\right)^{\frac{d-1}{2}}e^{-\frac{d}{2}}} \tag{64}\] \[=1+e^{\frac{1}{12}}\left(\frac{2e}{c_{1}^{2}d}\right)^{\frac{d-1} {2}}\sum_{i=0}^{d-2}\left(\frac{(i+1)c_{1}^{2}}{2e}\right)^{\frac{i}{2}}\] (65) \[\leq 1+e^{\frac{1}{12}}\left(\frac{c_{2}}{d}\right)^{\frac{d-1} {2}}\sum_{i=0}^{d-2}\left(\frac{d}{c_{2}}\right)^{\frac{i}{2}}\] (66) \[=1+e^{\frac{1}{12}}\left(\frac{c_{2}}{d}\right)^{\frac{d-1}{2}} \frac{\left(\frac{d}{c_{2}}\right)^{\frac{d-1}{2}}-1}{\sqrt{\frac{d}{c_{2}}}-1}\] (67) \[=1+e^{\frac{1}{12}}\frac{1-\left(\frac{c_{2}}{d}\right)^{\frac{d- 1}{2}}}{\sqrt{\frac{d}{c_{2}}}-1}\xrightarrow{d\rightarrow\infty}1. \tag{68}\] Moreover, \[\frac{V_{2}(d,\eta_{t}L)}{(2\pi\sigma_{t}^{2})^{d/2}}=\frac{\pi^{d/2}}{\Gamma \left(\frac{d}{2}+1\right)}\left(\frac{\eta_{t}L}{\sqrt{2\pi\sigma_{t}^{2}}} \right)^{d}=V_{2}\left(d,\frac{\eta_{t}L}{\sqrt{2\pi\sigma_{t}^{2}}}\right) \xrightarrow{d\rightarrow\infty}0. \tag{69}\] Combining equations (68) and (69) yields the desired limit. 
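For readers who want to sanity-check these limits numerically, the following short script (with arbitrary example values for \(\sigma_{t}\) and \(\eta_{t}L\), not values from the paper) evaluates the Gamma-ratio sum and the volume ratio above in log-space; the former approaches 1 and the latter vanishes as \(d\) grows.

```python
import math

# Quick numerical check of the two limits in equations (68) and (69).
# Example values: sigma_t = 1.0 and eta_t*L = 1.0, so c1 = sigma_t*sqrt(2)/(eta_t*L).
sigma, etaL = 1.0, 1.0
c1 = sigma * math.sqrt(2) / etaL

def gamma_sum(d):
    # sum_{i=0}^{d-1} Gamma((i+1)/2)/Gamma(d/2) * c1^(i-(d-1)), computed with log-Gamma.
    return sum(math.exp(math.lgamma((i + 1) / 2) - math.lgamma(d / 2)
                        + (i - (d - 1)) * math.log(c1)) for i in range(d))

def v2_ratio(d):
    # V_2(d, etaL) / (2*pi*sigma^2)^(d/2) = pi^(d/2) * (etaL/sqrt(2*pi*sigma^2))^d / Gamma(d/2 + 1).
    return math.exp((d / 2) * math.log(math.pi)
                    + d * math.log(etaL / math.sqrt(2 * math.pi * sigma ** 2))
                    - math.lgamma(d / 2 + 1))

for d in (2, 10, 50, 200):
    print(d, round(gamma_sum(d), 4), v2_ratio(d))
```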
## Appendix C Proof of equation (40) To evaluate the integral in line (39), we write it in spherical coordinates: \[h(d,2,f_{t},\eta_{t}L)\] \[=\int\limits_{\overline{\mathcal{B}_{2}}(0,\eta_{t}L)}\frac{1}{(2 \pi\sigma_{t}^{2})^{\frac{d}{2}}}\exp\left\{-\frac{(\|w_{t}\|_{2}-\eta_{t}L)^{ 2}}{2\sigma_{t}^{2}}\right\}\mathrm{d}w_{t}.\] \[=\frac{1}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\int_{0}^{2\pi}\int _{0}^{\pi}\dots\int_{0}^{\pi}\int_{\eta_{t}L}^{\infty}e^{\frac{-(\rho-\eta_{t} L)^{2}}{2\sigma_{t}^{2}}}\rho^{d-1}\sin^{d-2}(\phi_{1})\sin^{d-3}(\phi_{2})\dots \sin(\phi_{d-2})\mathrm{d}\rho\mathrm{d}\phi_{1}^{d-1}\] \[=\frac{2\pi}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\left(\int_{0}^{ \pi}\sin^{d-2}(\phi_{1})\mathrm{d}\phi_{1}\right)\dots\left(\int_{0}^{\pi} \sin(\phi_{d-2})\mathrm{d}\phi_{d-2}\right)\!\!\left(\int_{\eta_{t}L}^{\infty }e^{\frac{-(\rho-\eta_{t}L)^{2}}{2\sigma_{t}^{2}}}\rho^{d-1}\mathrm{d}\rho \right). \tag{70}\] Now, note that for any \(n\in\mathbb{N}\), \(\int_{0}^{\pi}\sin^{n}(x)\mathrm{d}x=2\int_{0}^{\pi/2}\sin^{n}(x)\mathrm{d}x\), and \[\int_{0}^{\pi/2}\sin^{n}(x)\mathrm{d}x \stackrel{{\text{(a)}}}{{=}}\int_{0}^{1}\frac{u^{n}} {\sqrt{1-u^{2}}}\mathrm{d}u\stackrel{{\text{(b)}}}{{=}}\frac{1}{ 2}\int_{0}^{1}t^{\frac{n-1}{2}}(1-t)^{-\frac{1}{2}}\mathrm{d}y\stackrel{{ \text{(c)}}}{{=}}\frac{1}{2}\mathrm{Beta}\left(\frac{n+1}{2},\frac{1}{2}\right)\] \[=\frac{\sqrt{\pi}\Gamma\left(\frac{n+1}{2}\right)}{2\Gamma\left( \frac{n}{2}+1\right)}, \tag{71}\] where (a) follows from the change of variable \(u=\sin x\), (b) follows from the change of variable \(t=u^{2}\), (c) follows from the definition of the Beta function: \(\mathrm{Beta}(s_{1},s_{2})=\int_{0}^{1}t^{s_{1}-1}(1-t)^{s_{2}-1}\), and the last equality is a known property of the Beta function (\(\Gamma(1/2)=\sqrt{\pi}\)). Consequently, \[2\pi\left(\int_{0}^{\pi}\sin^{d-2}(\phi_{1})\mathrm{d}\phi_{1} \right)\dots\left(\int_{0}^{\pi}\sin(\phi_{d-2})\mathrm{d}\phi_{d-2}\right)\] \[=(2\pi)\prod_{i=1}^{d-2}\frac{\sqrt{\pi}\Gamma\left(\frac{i+1}{2 }\right)}{\Gamma\left(\frac{i}{2}+1\right)}=(2\pi)\pi^{\frac{d-2}{2}}\frac{ \Gamma(1)}{\Gamma(d/2)}=2\pi^{d/2}\frac{1}{\Gamma(d/2)}. \tag{72}\] To evaluate the innermost integral, the following identity will be useful: \[\int_{0}^{\infty}x^{n}e^{-x^{2}}dx=\frac{1}{2}\int_{0}^{\infty}t^{\frac{n+1}{ 2}}e^{-t}dt=\frac{\Gamma\left(\frac{n+1}{2}\right)}{2}, \tag{73}\] where the first equality follows from the change of variable \(t=x^{2}\). Then, \[\int_{\eta_{t}L}^{\infty}e^{\frac{-(\rho-\eta_{t}L)^{2}}{2\sigma_{t}^ {2}}}\rho^{d-1}d\rho =\int_{0}^{\infty}e^{\frac{-\rho^{2}}{2\sigma_{t}^{2}}}(\rho+\eta_ {t}L)^{d-1}d\rho \tag{74}\] \[=\int_{0}^{\infty}\sum_{i=0}^{d-1}{d-1\choose i}(\eta_{t}L)^{d-1-i }\rho^{i}e^{\frac{-\rho^{2}}{2\sigma_{t}^{2}}}d\rho\] (75) \[\stackrel{{\text{(a)}}}{{=}}\sum_{i=0}^{d-1}{d-1 \choose i}(\eta_{t}L)^{d-1-i}\int_{0}^{\infty}\left(\sigma_{t}\sqrt{2}\right)^ {i+1}t^{i}e^{-t^{2}}d\rho\] (76) \[\stackrel{{\text{(b)}}}{{=}}(\eta_{t}L)^{d-1}( \sigma_{t}\sqrt{2})\sum_{i=0}^{d-1}\left(\frac{\sigma_{t}\sqrt{2}}{\eta_{t}L} \right)^{i}\frac{\Gamma((i+1)/2)}{2}. \tag{77}\] where (a) follows from the change of variable \(t=\rho/(\sigma\sqrt{2})\), and (b) follows from (73). 
Finally, combining equations (70), (72), and (77), we get \[g(d,\sigma_{t},\eta_{t}L) =\frac{2\pi^{d/2}}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}\Gamma(d/2)} (\eta_{t}L)^{d-1}(\sigma_{t}\sqrt{2})\sum_{i=0}^{d-1}\left(\frac{\sigma_{t} \sqrt{2}}{\eta_{t}L}\right)^{i}\frac{\Gamma((i+1)/2)}{2} \tag{78}\] \[=\left(\frac{\eta_{t}L}{\sigma_{t}\sqrt{2}}\right)^{d-1}\frac{1}{ \Gamma(d/2)}\sum_{i=0}^{d-1}\left(\frac{\sigma_{t}\sqrt{2}}{\eta_{t}L}\right) ^{i}\Gamma((i+1)/2). \tag{79}\] ## Appendix D Proof of equation (46) The innermost integral of line (45) evaluates to \[\int_{\eta_{t}L}^{\infty}\sup_{x_{d}:|x_{d}|\leq\eta_{t}L}f(w_{d}-x_{d}) \mathrm{d}w_{d}=\int_{\eta_{t}L}^{\infty}f(w_{d}-\eta_{t}L)\mathrm{d}w_{d}= \int_{0}^{\infty}f(w_{d})\mathrm{d}w_{d}=\frac{1}{2}, \tag{80}\] where the first equality follows from the monotonicity assumptions, the second from a change of variable, and the third from the symmetry assumption. Similarly, the innermost integral of line (44) evaluates to \[\int_{0}^{\infty}\sup_{x_{d}:|x_{d}|\leq\eta_{t}L}f(w_{d}-x_{d}) \mathrm{d}w_{d} \tag{81}\] \[=\int_{0}^{\eta_{t}L}\sup_{x_{d}:|x_{d}|\leq\eta_{t}L}f(w_{d}-x_{d })\mathrm{d}w_{d}\mathrm{d}w^{d-1}+\int_{\eta_{t}L}^{\infty}\sup_{x_{d}:|x_{d} |\leq\eta_{t}L}f(w_{d}-x_{d})\mathrm{d}w_{d}\] (82) \[=\eta_{t}Lf(0)+\frac{1}{2}. \tag{83}\] Combining equations (45), (80), and (83), we get \[h_{+}(d) =\left(\eta_{t}Lf(0)+\frac{1}{2}\right)\int\limits_{\mathcal{B}_{ \infty}^{d-1}(0,\eta_{t}L)\cap A_{d-1}}\prod\limits_{i=1}^{d-1}\sup\limits_{x_ {i}:|x_{i}|\leq\eta_{t}L}f(w_{i}-x_{i})\mathrm{d}w^{d-1} \tag{84}\] \[\quad+\frac{1}{2}\int\limits_{\mathcal{B}_{\infty}^{d-1}(0,\eta_ {t}L)\cap A_{d-1}}\prod\limits_{i=1}^{d-1}\sup\limits_{x_{i}:|x_{i}|\leq\eta_{t }L}f(w_{i}-x_{i})\mathrm{d}w^{d-1}\] (85) \[=\left(\eta_{t}Lf(0)+\frac{1}{2}\right)h_{+}(d-1)+\frac{1}{2}( \eta_{t}Lf(0))^{d-1}, \tag{86}\] where the second equality follows from the fact that \(f\) is maximized at 0, and \(\mathcal{B}_{\infty}^{d-1}(0,\eta_{t}L)\cap A_{d-1}\) is a \((d-1)\)-dimensional hypercube of side \(\eta_{t}L\) (with volume \((\eta_{t}L)^{d-1}\)). Now, \[h(d)=2^{d}h_{+}(d)=\left(1+2\eta_{t}Lf(0)\right)h(d-1)+(2\eta_{t}Lf(0))^{d-1}. \tag{87}\] ## Appendix E Proof of Theorem 7 Consider any \(f\in\mathcal{F}\), and let \[f_{+}(x)=\begin{cases}f(x),&x\geq 0,\\ 0,&x<0,\end{cases}\qquad\text{and}\qquad f_{-}(x)=\begin{cases}0,&x\geq 0,\\ f(x),&x<0.\end{cases} \tag{88}\] Then \[\mathbf{var}_{f}(X^{2})=\int_{-\infty}^{+\infty}(f_{-}(x)+f_{+}(x))x^{2}dx= \int_{0}^{\infty}2f_{+}(x)x^{2}dx, \tag{89}\] where the second equality follows from the symmetry assumption. Note that \(2f_{+}\) is a valid probability density over \([0,\infty)\), and let \(X_{+}\sim f_{+}\). Then, by previous equation, \[\mathbf{var}_{f}(X^{2}) =\mathbf{E}_{(2f_{+})}\left[X_{+}^{2}\right]=\int_{0}^{\infty}2x \left(1-\mathbf{Pr}(X_{+}\leq x)\right)dx \tag{90}\] \[\geq\int_{0}^{1/(2f(0))}2x\left(1-2xf(0)\right)dx=\frac{1}{12f^{2 }(0)}. \tag{91}\] Hence, \[f(0)\geq\frac{1}{2\sqrt{3}\sqrt{\mathbf{var}_{f}(X^{2})}}\geq\frac{1}{2\sqrt {3}\sigma}, \tag{92}\] which is achieved by the uniform distribution \(\mathcal{U}(-\sigma\sqrt{3},\sigma\sqrt{3})\). ## Appendix F Proof of Theorem 8 First, we show that the limit of the right-hand side of equation (47) is given by the right-hand side of equation (48). Note that \[\frac{V_{1}(d,\eta_{t}L)}{(\sigma_{t}\sqrt{2})^{d}}=V_{1}\left(d,\frac{\eta_{ t}L}{\sigma_{t}\sqrt{2}}\right)\xrightarrow{d\to\infty}0. 
\tag{93}\] On the other hand, \[\lim_{d\to\infty}\sum_{i=0}^{d-1}\frac{(\sqrt{2}\eta_{t}L/\sigma_{t})^{i}}{i!}=\sum_{i=0}^{\infty}\frac{(\sqrt{2}\eta_{t}L/\sigma_{t})^{i}}{i!}=e^{\sqrt{2}\eta_{t}L/\sigma_{t}}. \tag{94}\] Since \(T\) is finite, the limit and the sum are interchangeable, so that the above two equations yield the desired limit. We now turn to the proof of inequality (47). For notational convenience, set \(\lambda_{t}=\frac{\sqrt{2}}{\sigma_{t}}\) (so that \(f_{t0}(x)=\frac{\lambda_{t}}{2}e^{-\lambda_{t}|x|}\) for all \(x\in\mathbb{R}\)) and \(R_{t}=\eta_{t}L\). Since the noise satisfies the assumptions of Proposition 3, we get \[\mathcal{L}\left(S{\rightarrow}W\right)\leq\sum_{t=1}^{T}\log\left(f_{t}(0)V_{1}(d,R_{t})+\int\limits_{\overline{\mathcal{B}}_{1}(0,R_{t})}\sup_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}f_{t}(w_{t}-x_{t})\mathrm{d}w_{t}\right) \tag{95}\] \[=\sum_{t=1}^{T}\log\left(\left(\frac{\lambda_{t}}{2}\right)^{d}V_{1}(d,R_{t})+\int\limits_{\overline{\mathcal{B}}_{1}(0,R_{t})}\sup_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\left(\frac{\lambda_{t}}{2}\right)^{d}\exp\left\{-\lambda_{t}\|w_{t}-x_{t}\|_{1}\right\}\mathrm{d}w_{t}\right). \tag{96}\] Recall that \(h(d,p,f_{t},R_{t})\) (cf. equation (17)) is defined to be the second term inside the \(\log\). Similarly to the strategy adopted in the proof of Theorem 6, we will derive a recurrence relation for \(h\) in terms of \(d\); as such, we will again suppress the dependence on \(p\), \(f_{t}\), and \(R_{t}\) in the notation, and write \(h(d)\) only (and correspondingly \(h_{+}(d)\)). **Lemma 12**: _Given \(w\in\overline{\mathcal{B}_{1}^{d}}(0,R)\cap A_{d}\) (\(A_{d}\) defined in equation (18)),_ \[\inf_{x\in\mathcal{B}_{1}^{d}(0,R)}\|w-x\|_{1}=\sum_{i=1}^{d}w_{i}-R. \tag{97}\] **Proof** Since we are minimizing a continuous function over a compact set, the infimum can be replaced with a minimum. _Claim:_ There exists a minimizer \(x^{\star}\) such that for all \(i\), \(x_{i}^{\star}\leq w_{i}\). _Proof of Claim:_ Consider any \(x\in\mathcal{B}_{1}(0,R)\) such that there exists \(j\) satisfying \(x_{j}>w_{j}\). Note that \(w_{j}\geq 0\) by assumption. Now define \(x^{\prime}=(x_{1},\ldots,x_{j-1},w_{j},x_{j+1},\ldots,x_{d})\). Then \(\|x^{\prime}\|_{1}<\|x\|_{1}\), so that \(x^{\prime}\in\mathcal{B}_{1}(0,R)\). Moreover, \(\|w-x^{\prime}\|_{1}\leq\|w-x\|_{1}\), as desired. \(\blacksquare\) Now, \[\inf_{x\in\mathcal{B}_{1}^{d}(0,R)}\|w-x\|_{1}=\inf_{\begin{subarray}{c}x\in\mathcal{B}_{1}^{d}(0,R):\\ x_{i}\leq w_{i},\;\forall\;i\end{subarray}}\|w-x\|_{1}=\inf_{\begin{subarray}{c}x\in\mathcal{B}_{1}^{d}(0,R):\\ x_{i}\leq w_{i},\;\forall i\end{subarray}}\sum_{i=1}^{d}(w_{i}-x_{i})=\sum_{i=1}^{d}w_{i}-R. \tag{98}\] Given the above lemma, we will derive the recurrence relation by decomposing the integral over \(\overline{\mathcal{B}_{1}^{d}}(0,R_{t})\) into two disjoint subsets: 1) \(w^{d-1}\notin\mathcal{B}_{1}^{d-1}(0,R_{t})\), in which case \(w_{d}\) can take any value in \(\mathbb{R}\), and 2) \(w^{d-1}\in\mathcal{B}_{1}^{d-1}(0,R_{t})\), in which case \(w_{d}\) must satisfy \(|w_{d}|>R_{t}-\|w^{d-1}\|_{1}\). 
\[h_{+}(d)=\int\limits_{\overline{\mathcal{B}_{1}^{d}}(0,R_{t})\cap A_{d}}\sup_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\left(\frac{\lambda_{t}}{2}\right)^{d}e^{-\lambda_{t}\left(\sum_{i=1}^{d}w_{ti}-R_{t}\right)}\mathrm{d}w_{t} \tag{99}\] \[=\int\limits_{\overline{\mathcal{B}_{1}^{d-1}}(0,R_{t})\cap A_{d-1}}\left(\frac{\lambda_{t}}{2}\right)^{d-1}e^{-\lambda_{t}\left(\sum_{i=1}^{d-1}w_{ti}-R_{t}\right)}\left(\int_{0}^{\infty}\frac{\lambda_{t}}{2}e^{-\lambda_{t}w_{d}}\mathrm{d}w_{d}\right)\mathrm{d}w^{d-1}\] (100) \[\quad+\int\limits_{\mathcal{B}_{1}^{d-1}(0,R_{t})\cap A_{d-1}}\left(\frac{\lambda_{t}}{2}\right)^{d-1}e^{-\lambda_{t}\left(\sum_{i=1}^{d-1}w_{ti}-R_{t}\right)}\left(\int_{R_{t}-\sum_{i=1}^{d-1}w_{i}}^{\infty}\frac{\lambda_{t}}{2}e^{-\lambda_{t}w_{d}}\mathrm{d}w_{d}\right)\mathrm{d}w^{d-1}\] (101) \[=\frac{1}{2}h_{+}(d-1)+\int\limits_{\mathcal{B}_{1}^{d-1}(0,R_{t})\cap A_{d-1}}\left(\frac{\lambda_{t}}{2}\right)^{d-1}e^{-\lambda_{t}\left(\sum_{i=1}^{d-1}w_{ti}-R_{t}\right)}\left(\frac{1}{2}e^{-\lambda_{t}\left(R_{t}-\sum_{i=1}^{d-1}w_{i}\right)}\right)\mathrm{d}w^{d-1}\] (102) \[=\frac{1}{2}h_{+}(d-1)+\frac{1}{2}\left(\frac{\lambda_{t}}{2}\right)^{d-1}\frac{V_{1}(d-1,R_{t})}{2^{d-1}}\] (103) \[=\frac{1}{2}h_{+}(d-1)+\frac{1}{2}\left(\frac{\lambda_{t}R_{t}}{2}\right)^{d-1}\frac{1}{(d-1)!}. \tag{104}\] Hence, \[h(d)=2^{d}h_{+}(d)=h(d-1)+\frac{(\lambda_{t}R_{t})^{d-1}}{(d-1)!}. \tag{105}\] It is easy to check that \(h(1)=1\), and hence \[h(d)=\sum_{i=0}^{d-1}\frac{(\lambda_{t}R_{t})^{i}}{i!} \tag{106}\] satisfies the base case and the recurrence relation. Re-substituting \(\eta_{t}L\) and \(\sqrt{2}/\sigma_{t}\) for \(R_{t}\) and \(\lambda_{t}\), respectively, yields the desired result in equation (47).

## Appendix G Proof of Theorem 9

Let \(R_{t}=\eta_{t}L\). Since the noise satisfies the assumptions of Proposition 3, we get \[\mathcal{L}\left(S\!\!\rightarrow\!\!W\right)\leq\sum_{t=1}^{T}\log\left(f_{t}(0)V_{1}(d,R_{t})+\int\limits_{\overline{\mathcal{B}_{1}}(0,R_{t})}\sup_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}f_{t}(w_{t}-x_{t})\mathrm{d}w_{t}\right) \tag{107}\] \[=\sum_{t=1}^{T}\log\left(\frac{V_{1}(d,R_{t})}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}+\int\limits_{\overline{\mathcal{B}_{1}}(0,R_{t})}\sup_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\frac{1}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\exp\left\{-\frac{\|w_{t}-x_{t}\|_{2}^{2}}{2\sigma_{t}^{2}}\right\}\mathrm{d}w_{t}\right). \tag{108}\] Consider \[h_{+}(d)=\int\limits_{\overline{\mathcal{B}_{1}}(0,R_{t})\cap A_{d}}\sup\limits_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\frac{1}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\exp\left\{-\frac{\|w_{t}-x_{t}\|_{2}^{2}}{2\sigma_{t}^{2}}\right\}\mathrm{d}w_{t}. \tag{109}\] First we solve \(\inf\limits_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\|w_{t}-x_{t}\|_{2}\). If \(w_{t}\in A_{d}\), then the infimum is achieved for \(x_{t}^{\star}\in A_{d}\) as well (one can simply flip the sign of any negative component, which cannot increase the distance). In the subspace \(A_{d}\), the boundary of the \(L_{1}\) ball is defined by the hyperplane \(\sum_{i=1}^{d}x_{ti}=R_{t}\). As such, finding the minimum distance corresponds to projecting the point \(w\) to the given hyperplane: \[\inf\limits_{x_{t}\in\mathcal{B}_{1}(0,R_{t})}\|w_{t}-x_{t}\|_{2}=\min\limits_{\begin{subarray}{c}x_{t}\in\mathcal{B}_{1}(0,R_{t})\cap A_{d}:\\ \sum_{i=1}^{d}x_{i}=R_{t}\end{subarray}}\|w_{t}-x_{t}\|_{2}=\frac{\sum_{i=1}^{d}w_{ti}-R_{t}}{\sqrt{d}}. 
\tag{110}\] Now, \[h_{+}(d)=\int\limits_{\overline{\mathcal{B}_{1}}(0,R_{t})\cap A_{d}}\frac{1}{(2\pi\sigma_{t}^{2})^{\frac{d}{2}}}\exp\left\{-\frac{(\sum_{i=1}^{d}w_{ti}-R_{t})^{2}}{2d\sigma_{t}^{2}}\right\}\mathrm{d}w_{t}. \tag{111}\] For notational convenience, we drop the \(t\) subscript in the following. We perform a change of variable as follows: \(\tilde{w}_{d}=\sum_{i=1}^{d}w_{i}\). Hence, for \(w\notin\mathcal{B}_{1}(0,R)\), \(\tilde{w}_{d}\geq R\). Since \(w_{d}\geq 0\), then \(\sum_{i=1}^{d-1}w_{i}\leq\tilde{w}_{d}\). For \(x\in\mathbb{R}\), define \(S(x):=\{w^{d-1}\in\mathbb{R}^{d-1}:\sum_{i=1}^{d-1}w_{i}\leq x\}\). Then, \[h_{+}(d)=\int_{R}^{\infty}\int\limits_{S(\tilde{w}_{d})}\frac{1}{(2\pi\sigma^{2})^{\frac{d}{2}}}e^{-\frac{(\tilde{w}_{d}-R)^{2}}{2d\sigma^{2}}}\mathrm{d}w^{d-1}\mathrm{d}w_{d} \tag{112}\] \[=\frac{1}{(2\pi\sigma^{2})^{\frac{d}{2}}}\int_{R}^{\infty}e^{-\frac{(\tilde{w}_{d}-R)^{2}}{2d\sigma^{2}}}\left(\int\limits_{S(\tilde{w}_{d})}\mathrm{d}w^{d-1}\right)\mathrm{d}w_{d}\] (113) \[\overset{\text{(a)}}{=}\frac{1}{(2\pi\sigma^{2})^{\frac{d}{2}}((d-1)!)}\int_{R}^{\infty}\tilde{w}_{d}^{d-1}e^{-\frac{(\tilde{w}_{d}-R)^{2}}{2d\sigma^{2}}}\mathrm{d}w_{d}\] (114) \[\overset{\text{(b)}}{=}\frac{1}{(2\pi\sigma^{2})^{\frac{d}{2}}((d-1)!)}R^{d-1}(\sigma\sqrt{2d})\sum_{i=0}^{d-1}\left(\frac{\sigma\sqrt{2d}}{R}\right)^{i}\frac{\Gamma((i+1)/2)}{2}, \tag{115}\] where (a) follows from the fact that the innermost integral corresponds to the volume of a scaled probability simplex (scaled by \(\tilde{w}_{d}\)), and (b) follows from the same computations as in Equations (74) to (77) (with \(\tilde{\sigma}=\sigma\sqrt{d}\)). Noting that \(h(d)=2^{d}h_{+}(d)\) yields the desired term in equation (50).
2309.17363
Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with task performance enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Such studies conclude that this behavior implies the existence of discrete symbolic structure in human mental representations, and that replicating such behavior in neural network architectures will require mechanisms for symbolic processing. In this study, we argue that human biases towards geometric regularity can be reproduced in neural networks, without explicitly providing them with symbolic machinery, by augmenting them with an architectural constraint that enables the system to discover and manipulate relational structure. When trained with the appropriate curriculum, this model exhibits human-like biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can exhibit human-like regularity biases and generalization. This approach provides insights into the neural mechanisms underlying geometric reasoning and offers an alternative to prevailing symbolic "Language of Thought" models in this domain.
Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths
2023-09-29T16:12:51Z
http://arxiv.org/abs/2309.17363v1
Relational Constraints on Neural Networks Reproduce Human Biases Towards Abstract Geometric Regularity ###### Abstract Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with task performance enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Such studies conclude that this behavior implies the existence of discrete symbolic structure in human mental representations, and that replicating such behavior in neural network architectures will require mechanisms for symbolic processing. In this study, we argue that human biases towards geometric regularity can be reproduced in neural networks, without explicitly providing them with symbolic machinery, by augmenting them with an architectural constraint that enables the system to discover and manipulate relational structure. When trained with the appropriate curriculum, this model exhibits human-like biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can exhibit human-like regularity biases and generalization. This approach provides insights into the neural mechanisms underlying geometric reasoning and offers an alternative to prevailing symbolic "Language of Thought" models in this domain. ## 1 Introduction Humans have the amazing capability of building useful abstractions that can capture regularities in the external world. Understanding what is responsible for this special feature of human intelligence relative to other animals is a longstanding goal in cognitive science (Penn et al., 2008; Berwick and Chomsky, 2016). One domain in which cognitive scientists have observed this "human singularity" (Dehaene et al., 2022) is in geometric reasoning: early _Homo sapiens_ 100,000 years ago were able to produce structured abstract geometric shapes and drawings on caves (Henshilwood et al., 2011), whereas similar behaviors have not been observed for non-human primates despite years of human contact (Saito et al., 2014). Such observations, as well as rigorous empirical work (e.g., Sable-Meyer et al. 2021; Sable-Meyer et al. 2022) have led some cognitive scientists to conclude that human mental representations uniquely contain discrete domain-specific symbols that are recursively and compositionally combined to produce abstractions that support the capacity for generalization that is characteristic of human behavior (Dehaene et al., 2022). A corollary of this hypothesis is that artificial neural networks cannot, in principle, produce human-like intelligence without the exogenous addition of explicit symbolic machinery and/or representations (Dehaene, 2021; Marcus, 2020). Indeed, empirical work in this domain has shown that explicitly symbolic models fit human behavior better than standard neural networks (Sable-Meyer et al., 2021). This has led to the view, by some, that symbolic "Language of Thought" models are the best models of humans' mental representations (Quilty-Dunn et al., 2022). 
However, the fact that human behavior, or their _inductive biases_, may be described effectively with abstract symbolic processing does not necessarily imply that their internal representations are based on discrete symbols (Griffiths et al., 2023). Consequently, there may be other forms of representations, such as the continuous vector spaces of neural networks, that could, under the right conditions, produce this behavior without explicit symbolic machinery (McCoy et al., 2018). In the present work, we provide an existence proof of this point by revisiting recent empirical cognitive science work showing humans' regularity biases towards abstract geometric concepts (Sable-Meyer et al., 2021; 2022). We show that standard neural networks augmented with a simple constraint that favors relational information processing can replicate human generalization and regularity biases without needing to build in explicit symbolic machinery. Specifically, we implement an architectural motif, known as the _relational bottleneck_ (Webb et al., 2023a), that allows networks to exploit relations between objects rather than the attributes of individual objects. We focus on the results of two studies. The first is the work of Sable-Meyer et al. (2022), in which humans were tested on a standard working memory task, Delayed-Match to Sample (DITS), using image stimuli sampled from a generative Language of Thought model of geometric concepts. The second is a study by Sable-Meyer et al. (2021), in which humans and non-human primates were tested on a version of the Oddball Detection task, a simple categorization paradigm in which participants identify a deviant stimulus in a group of quadrilateral stimuli. We show that a standard neural network, augmented with a relational bottleneck and trained with an appropriately designed curriculum using the same data as the studies by Sable-Meyer et al. (2021) and Sable-Meyer et al. (2022), exhibited human-like biases for abstract geometric regularity. These results offer an alternative interpretation of such biases, suggesting that with the appropriate inductive biases and curriculum neural networks can exhibit features associated with the capacity for symbolic processing without the need to hardcode the network with symbolic representations and/or mechanisms. ## 2 Historical Background and Related Work For decades, cognitive scientists and AI researchers have embraced two main approaches to building intelligent systems: symbolic models (Fodor, 1975) and neural networks (Rumelhart and McClelland, 1986). Fodor (1975) proposed the "Language of Thought" (LoT) hypothesis: that higher-order cognition in humans is the product of recursive combinations of pre-existing, conceptual primitives, analogous to the way in which sentences in a language are constructed from simpler elements. Symbolic models are well-suited to naturally embed the abstract, structured knowledge humans possess, such as causal theories (Goodman et al., 2011) or hierarchical motor programs that draw handwritten characters (Lake et al., 2015). Neural networks, on the other hand, emphasize _emergence_ of these abstract concepts purely from data within completely unstructured, distributed representations (McClelland et al., 2010). 
Despite the incredible recent success of neural networks in machine learning, cognitive scientists have hypothesized that their systematic failure at generalizing out of their training distribution comes from a failure to embed the kinds of abstract structural knowledge that can exist in symbolic models (Lake et al., 2017; Marcus, 2003). Recent work has suggested that these capacities may emerge through learning in neural networks that implement _relational reasoning_. Relational reasoning involves abstracting over the details of particular stimuli or domains and extracting more general forms of structure that are broadly useful for capturing regularities in the external world (Gentner, 1983; Holyoak, 2012). This can be accomplished in neural networks by introducing an architectural inductive bias: the relational bottleneck (Webb et al., 2023a). The general principle of the relational bottleneck is that some components of the network are restricted to operating on relations over representations rather than the representations themselves (Webb et al., 2020; 2023b; Mondal et al., 2023). For example, the network might be constrained to use the similarity or distance between two embeddings rather than the embeddings themselves. Critically, unlike many hybrid neuro-symbolic models (Plate, 1995; Touretzky, 1990; Mao et al., 2019), the relational bottleneck does not introduce pre-specified symbolic primitives or any explicit mechanisms for symbolic processing, relying instead on the emergence of abstract concepts within unstructured, distributed representations. The motivation of the relational bottleneck is similar to that of other works that have built neural network architectures more sensitive to relational reasoning (Barrett et al., 2018; Santoro et al., 2017; Shanahan et al., 2020). The Language of Thought (LoT) approach has been applied to a variety of domains in cognitive science, including learning causal theories (Goodman et al., 2011), representations of numbers (Piantadosi et al., 2012), and logical concepts (Piantadosi et al., 2016). However, geometry has recently emerged as one of the domains in which the strongest arguments in favor of this kind of representation have been made (Sable-Meyer et al., 2021, 2022; Dehaene et al., 2022). This setting is also a natural one in which to explore the predictions of neural network models, as geometric stimuli can be presented directly to models in the form of images. In the remainder of the paper, we present a detailed analysis of two of the studies that have been held up as providing support for the LoT approach, demonstrating how neural networks that are constrained to focus on relations are capable of reproducing the key patterns in human behavior.

## 3 Training Neural Networks on a Language of Thought for Geometry

### Background

Sable-Meyer et al. (2022) presented a study designed to test the Language of Thought hypothesis in the setting of geometry. The study was based on a model of geometric concept learning also developed by Sable-Meyer et al. (2022). This model framed concept learning as program induction within the DreamCoder framework (Ellis et al., 2021). A base programming language was defined such that programs can be written to generate shapes, where motor programs that draw geometric shapes are generated through recursive combination of symbolic primitives within a Domain Specific Language (DSL, Fig. 1A). The DSL contains motor primitives, such as tracing a particular curve and changing direction, as well as primitives to recursively combine subprograms such as \(Concat\) (concatenate two subprograms together) and \(Repeat\) (repeat a subprogram \(n\) times). These symbolic programs can then be rendered into images such as the ones seen in Fig. 1. Since each image has an underlying program, the minimum description length (MDL; Ellis et al. 2021) of the program was used to model the psychological complexity of the corresponding geometric pattern. Abstract geometric patterns were generated by this symbolic LoT model (Fig. 1A) and used as stimuli in a standard working memory task, based on a Delayed-Match to Sample (DMTS, Fig. 1B) paradigm. In this task, human participants were instructed to memorize a geometric stimulus. 
The DSL contains motor primitives, such as tracing a particular curve and changing direction, as well as primitives to recursively combine subprograms such as \(Concat\) (concatenate two subprograms together) and \(Repeat\) (repeat a subprogram \(n\) times). These symbolic programs can then be rendered into images such as the ones seen in Fig. 1. Since each image has an underlying program, the minimum description length (MDL; Ellis et al. 2021) of the program was used to model the psychological complexity of the corresponding geometric pattern. Abstract geometric patterns were generated by this symbolic LoT model (Fig. 1A) and used as stimuli in a standard working memory task, based on a Delayed-Match to Sample (DMTS, Fig. 1B) paradigm. In this task, human participants were instructed to memorize a geometric stimulus. Fol Figure 1: **Geometric Language of Thought and Delayed Match to Sample Task** (A) Primitives of the generative Language of Thought (LoT) model implemented in Sable-Meyer et al. (2022). Primitives are recursively composed to produce symbolic programs that can be rendered into abstract geometric pattern stimuli. (B) Schematic of the working memory Delayed-Match to Sample (DMTS) task. A target stimulus is shown at the beginning, followed by a delay period, and the the target image must be selected out of a group of choice images containing distractors. lowing the memorization phase, participants were presented with a blank screen for two seconds. Subsequently, they were shown six option stimuli, among which one matched the original stimulus they had memorized (the target image), while the remaining five were distractors. The objective for participants was to accurately select the image they had seen during the encoding phase and avoid choosing any of the distractor images. In preceding work (Sable-Meyer et al. 2021, discussed further in the next section), the authors suggested that perception of abstract geometric stimuli can be based on two systems: a high-level, general-purpose symbolic system, supposedly only available to humans; and a lower-level, domain-specific shape invariant object recognition system, available to both humans and non-human primates, that can be modeled by a standard Convolutional Neural Network (CNN) model of object recognition in the brain (specifically, the Ventral Visual Stream; Kubilius et al. 2019). To study the first system, Sable-Meyer et al. (2022) chose distractor stimuli that were maximally similar to the target image based on hidden representations of a pre-trained CNN model of the Ventral Visual system (CorNet; Kubilius et al. 2019) and the average grey-level of the image. Even with difficult distractors, humans excelled at the task, with error rates as low as \(1.82\%\). ### Neural Network Modeling We trained two Recurrent Neural Networks (RNNs; one baseline and one implementing a relational bottleneck) on this task, using the LoT model of (Sable-Meyer et al., 2022) to generate a large training corpus of geometric stimuli and holding out the specific stimuli used in the human experiments for the test set. Stimuli were encoded by a CNN encoder, which was comprised of a pre-trained CNN model (CorNet; Kubilius et al. 2019). On each trial, an encoded representation of the stimulus was used as the input to an LSTM (Fig. 2A), followed by encoded representations of three additional timesteps-worth of blank input images4 (Fig. 2A). 
The resulting output embedding of the LSTM corresponds to the working memory content of the human participants during choice time ("Memory Embedding", see Fig. 2A). The model is subsequently presented with the choice images (Fig. 2). We implemented two types of decision processes to classify the target image out of the six choice images (one target, five distractors). One of these was a standard baseline model, and the other was augmented with a relational bottleneck (Webb et al. 2023a; Fig. 2B). Footnote 4: The delay period for the human experiments was 2 seconds, while the average stimulus presentation time was around 1.2s. Given this, we believe three timesteps makes the task for the networks at least as hard if not harder than the human task. Figure 2: **DITS Task Architecture Implementation** (A) Target and delay images are passed through a pretrained CNN encoder (Kubilius et al., 2019). The outputs of the encoder are passed to an LSTM, producing memory embeddings that correspond to participants’ working memory representation of the initial target stimulus when performing the DMTS task. Each of the choice images are encoded using the same CNN encoder. (B) In the baseline model (left), the memory embeddings are simply concatenated to the choice embeddings and passed to a fully connected layer that produces the logits classifying the target image. In the relational bottleneck model (right), the embeddings are used to compute the similarity between each choice embedding and the memory embedding, and these similarities are used to produce the logits. For the baseline model, the embeddings of the six choice stimuli, along with the memory embedding, were concatenated and simultaneously fed into a standard feedforward layer that was used to classify the target image. For the Relational Bottleneck model, the cosine similarity between the memory embedding and each choice embedding was computed; those similarities were then used to produce the prediction of the target image. This restricted the model to processing the _relations_ between its memory of the target image and the choice stimulus, without "intrusion" from any stimulus-specific attributes of the choice stimuli. During training, distractors were chosen randomly, but during testing, we used the exact same trials that were presented to human participants in the empirical study Sable-Meyer et al. (2022), in which difficult distractors were chosen based on similarity to pretrained CorNet representations (Kubilius et al., 2019) and average grey-levels. ### Results We tested both implementations of the model on the exact same trials given to human participants in Sable-Meyer et al. (2022). Performance of the baseline model was well below human performance (Fig. 3B). However, the relational bottleneck model generalized extremely well to the test set, performing significantly better than the baseline model (\(p<0.001\)) and approximating the performance of human participants. In addition, it handled longer delay periods substantially better than the baseline model (Fig. 3C), demonstrating its ability to maintain abstract representations of these geometric stimuli more robustly through the delay period. The results suggest that it is possible to achieve human-like performance on this task with a neural network model augmented by a simple constraint that favors learning relations, without imbuing the model with any explicit symbolic representations. The training corpus we used had stimuli containing very rich geometric abstractions (see Fig. 
1A and Fig. 7). While our results suggest that inclusion of a relational bottleneck may be _necessary_ to produce representations that support out-of-distribution generalization, it is not clear whether it is _sufficient_ even in cases of a more impoverished training corpus. Previous work has shown that a rich training data distribution can also contribute to such generalization (Chan et al., 2022). To address this, we tested whether the relational bottleneck would produce similar human-like performance when training on a relatively more restricted training corpus. Figure 3: **DITS Results** (A) Training accuracy across epochs of baseline and relational bottleneck models. Both models eventually reach near-perfect accuracy. (B) Results on tasks held out from model training that were taken directly from the human trials in Sable-Meyer et al. (2022). The black bar denotes chance performance, while the green bar denotes mean human performance. Error bars are 95% confidence intervals over model training seeds. The Relational Bottleneck model performs much better out of distribution. (C) We increased the delay period from 3 timesteps to 20. Though both models suffer in performance, the Relational Bottleneck model still performs much better. ## 4 Human-like vs Monkey-like Processing of Quadralateral Stimuli ### Background Inspired by early anthropological work investigating abstract geometric concepts in cave drawings and behavioral research comparing geometric reasoning in humans and non-human primates, Sable-Meyer et al. (2021) compared diverse human groups (varying in education, cultural background, and age) to non-human primates on a simple oddball discrimination task. Participants were shown a set of five reference shapes and one "oddball" shape and prompted to identify the oddball (Fig. 4). The reference shapes were generated based on basic geometric regularities: parallel lines, equal sides, equal angles, and right angles. Reference shapes consisted of 11 types of quadrilaterals varying in their geometric regularity, from squares (most regular) to random quadrilaterals containing no parallel lines, right angles, or equal angles/sides (least regular) (Fig. 4B). In each trial, five different versions of the same reference shape (e.g, a square) were shown in different sizes and orientations. The oddball shape was a modified version of the reference shape, in which the lower right vertex was moved such that it violated the regularity of the original reference shape (e.g, moving the lower right vertex of a trapezoid such that it no longer has parallel sides). Fig. 4A shows an example trial. Sable-Meyer et al. (2021) found that humans, across many different ages, cultures, and education levels, are naturally sensitive to these geometric regularities (right angles, parallelism, symmetry, etc) whereas non-human primates are not. Specifically, they found that human performance is best on the Oddball task for the most regular shapes, and systematically decreases as shapes become more irregular. Conversely, non-human primates perform well above chance, but they perform worse than humans overall and, critically, show no influence of geometric regularity (Fig. 4B). To address this pattern of findings, Sable-Meyer et al. (2021) implemented two computational models: a symbolic model and a neural network model. The symbolic model implemented oddball identification using an explicitly symbolic feature space constructed from the shapes' discrete geometric properties. 
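For concreteness, the following sketch illustrates the kind of discrete geometric properties such a symbolic feature space can encode for a quadrilateral (equal sides, equal angles, parallel sides, right angles). The specific feature definitions and tolerances here are assumptions for illustration, not the implementation used by Sable-Meyer et al. (2021).

```python
import numpy as np

def quad_features(vertices, tol=1e-2):
    """Binary geometric-regularity features for a quadrilateral given as four (x, y)
    vertices in order. A rough sketch: exact definitions and tolerances may differ."""
    v = np.asarray(vertices, dtype=float)
    sides = [v[(i + 1) % 4] - v[i] for i in range(4)]
    lengths = [np.linalg.norm(s) for s in sides]

    def angle(a, b):
        # Interior angle between an incoming side `a` and the outgoing side `b`.
        cos = np.dot(-a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    angles = [angle(sides[i - 1], sides[i]) for i in range(4)]

    def parallel(a, b):
        cross = a[0] * b[1] - a[1] * b[0]          # 2D cross product magnitude
        return abs(cross) / (np.linalg.norm(a) * np.linalg.norm(b)) < tol

    return {
        "equal_sides": int(max(lengths) - min(lengths) < tol * max(lengths)),
        "equal_angles": int(max(angles) - min(angles) < 1.0),
        "parallel_sides": int(parallel(sides[0], sides[2]) or parallel(sides[1], sides[3])),
        "right_angles": int(any(abs(a - 90.0) < 1.0 for a in angles)),
    }

print(quad_features([(0, 0), (1, 0), (1, 1), (0, 1)]))  # square: all features present
```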
The neural network model was a pretrained CNN model of the Ventral Visual stream (CORNet; Kubilius et al. 2019).3 Sable-Meyer et al. (2021) found that the symbolic model Figure 4: **Quadrilateral Oddball Task** (A) The Oddball task of Sable-Meyer et al. (2021) used six quadrilateral stimulus images, in which five images were of the same reference shape (differing in scale and rotation) and one was an oddball (highlighted in red) that diverged from the reference shape’s geometric properties. In this example, the reference shape is a rectangle; note that the Oddball does not have four right angles like the rectangles. (B) Sable-Meyer et al. (2021) examined error rates for humans, monkeys, and pre-trained CNNs (Kubilius et al., 2019) across quadrilaterals of decreasing geometric regularity (from squares, which have the highest regularity, to random quadrilaterals that have little regularity). Humans performed significantly better on more regular images, with error rates trending significantly upwards with decreasing regularity, whereas monkey and CNN error rates did not exhibit a significant error rate trend as a function of regularity. fit the human performance of their Oddball task significantly better than the neural network model, and in particular it captured the effect of increasing error with increasing geometric irregularity. Conversely, the neural network model fit the monkey behavior better, exhibiting no systematic relationship with the level of geometric regularity (Fig. 4B). They interpreted this as evidence that the human sensitivity to geometric regularity requires the presence of unique symbolic representations that are absent in both neural networks and non-human primates. ### Neural Network Modeling Here, we show that a neural network trained on the same stimuli used by Sable-Meyer et al. (2021), and provided with a relational bottleneck, exhibits the sensitivity of geometric regularity observed in humans, without the explicit specification of discrete symbolic representations. We started with the ResNet CNN architecture 3, but we modified this architecture to directly compute the Oddball judgements end-to-end using the relational bottleneck, using the method described in Kerg et al. (2022) (Fig. 5A). Specifically, a \(6\times 6\) cosine similarity matrix is computed across each of the six stimuli, and the similarity matrix is fed into a feedforward layer that produces an Oddball decision. This structure forces the model to make decisions based on the relations between choice stimuli rather than the attributes of an individual choice stimulus. Figure 5: **Oddball Task Architecture Implementation** (A) To make an oddball decision using the Relational Bottleneck, we compute an oddball judgement directly from the \(6\times 6\) similarity matrix of the encoder’s choice embeddings. (B) We implemented two types of contrastive pretraining on a ResNet CNN architecture: (top) a standard contrastive objective based on SimCLR (Chen et al., 2020) and (bottom) a novel contrastive objective using distances in a geometric feature space. We pretrained the CNN using one of two contrastive objectives (Fig. 5B): **Standard** and **Geometric**. The **Standard** objective was based on SimCLR (Chen et al., 2020). 
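The similarity-matrix readout described above can be summarized in a short sketch; the encoder and layer sizes below are placeholders rather than the exact configuration used in the study, and both pretraining objectives discussed in this section act on the encoder inside this readout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OddballHead(nn.Module):
    """Sketch of the relational readout: the six choice embeddings are reduced to their
    6x6 cosine-similarity matrix, which alone drives the oddball decision."""
    def __init__(self, encoder: nn.Module, n_choices: int = 6):
        super().__init__()
        self.encoder = encoder                       # e.g., a ResNet-style CNN (placeholder)
        self.readout = nn.Linear(n_choices * n_choices, n_choices)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 6, C, H, W)
        b, n = images.shape[:2]
        z = self.encoder(images.flatten(0, 1)).reshape(b, n, -1)   # (batch, 6, feat)
        z = F.normalize(z, dim=-1)
        sim = z @ z.transpose(1, 2)                                # (batch, 6, 6) cosine similarities
        return self.readout(sim.flatten(1))                        # logits over the 6 positions
```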
Specifically, simple random rotations and scaling were applied to individual quadrilateral images, and then the CNN was trained to push its representations of those images together, to be more similar (i.e., less distant) to their augmented counterparts, and pull its representations of different quadrilateral images apart, to be more dissimilar (i.e., more distant) from each other. The **Geometric** objective used the geometric features utilized in Sable-Meyer et al. (2021) as the feature space over which to define distances. Those geometric features were binary vectors corresponding to the presence or absence of equal angles, equal sides, parallel lines, and right angles of the quadrilateral. During training, this effectively pushed quadrilaterals with similar geometric features together and pulled quadrilaterals with different geometric features apart. This allowed us to train the network to exhibit the same abstractions defined by the geometric features _without building in the geometric features themselves_. During testing and inference, the geometric features were completely discarded. This is similar to previous work instilling human biases into neural network agents (Kumar et al., 2022), in which the _tabula rasa_ neural networks that were co-trained with symbolic information exhibited human biases without explicitly implementing any symbolic representations. ### Results Similar to the effect observed in the study by Sable-Meyer et al. (2022) discussed in the previous section, the geometric regularity effect observed for humans in Sable-Meyer et al. (2021) was an inverse relationship between geometric regularity and error rate (see green plot in Fig. 4B). For example, humans performed best on the most regular shapes, such as squares and rectangles. This regularity effect was again absent in the monkey error rates (Fig. 4B). Following Sable-Meyer et al. (2021), we show, for each of our networks, the error rates for quadrilaterals sorted by geometric regularity and how well they match human and monkey error rates (Fig. 6). The Geometric pre-trained model showed a strong fit to human behavior (\(r=0.72\)) and a significant effect of geometric regularity (\(p<0.001\); Fig. 6). The Standard (SimCLR) pre-trained model, however, showed a strong fit to _monkey_ behavior (\(r=0.70\)), but not to _human_ behavior (\(r=0.005\)), nor did they show the geometric regularity effect (\(p=0.99\); Fig. 6). This indicates that, although the relational bottleneck was necessary, it was not sufficient on its own to reproduce human behavior on this task. However, coupled with the appropriate training, it was able to reproduce the pattern of results observed for human behavior in Sable-Meyer et al. (2021). These results suggest that, with the appropriate structural biases and training experience, it is possible for neural network to Figure 6: **Oddball Task Results** (A) Mean error rates over the 11 types of quadrilaterals for each type of network. The Geometric pre-trained network showed a significant trend between error rate and geometric regularity (\(p<.001\)), while the Standard (SimCLR) pre-trained network did not (\(p=0.99\)). (B). We correlated error rates across quadrilaterals for each model with the corresponding error rates of humans and monkeys. Geometric pre-training of quadrilaterals led to human-like error patterns, whereas SimCLR pre-training led to more monkey-like error patterns. Error bars are 95% confidence intervals across different model training runs. 
learn representations that exhibit human-like biases in the geometric oddball task without explicitly imposing symbolic representations on the network. ## 5 Discussion A prevailing theory in cognitive science is that abstractions that support strong generalization reflect the presence of symbolic systems innate in humans that may be absent in animals (Fodor, 1975; Quilty-Dunn et al., 2022; Dehaene et al., 2022). Along similar lines, it has been argued that, without explicitly imbuing neural networks with such capabilities, they will not be able to exhibit the same cognitive flexibility as humans (Marcus, 2020; Dehaene, 2021). Empirical findings in the studies by (Sable-Meyer et al., 2021) and Sable-Meyer et al. (2022) have been offered in support of these conjectures. Here, we provide evidence to the contrary, showing how the introduction of a simple, neurally plausible relational inductive bias, coupled with the appropriate training experiences, is sufficient to reproduce behavior consistent with the formation of abstract representations in neural networks. The domain of the empirical work we re-examine involves the visual perception of geometric patterns (Sable-Meyer et al., 2021, 2022). Sable-Meyer et al. (2022) show that humans are adept at processing geometric patterns, using a delayed-match-to-sample working memory task with stimuli sampled from a generative probabilistic program induction model (Ellis et al., 2021). We trained two types of RNN models on this task: a baseline model and a model with a relational bottleneck that is biased to focus on _relations between stimuli_ to classify the target image. Consistent with the claims of Sable-Meyer et al. (2022), a baseline model does not reach human-level performance out of its training distribution. However, a model with the relational bottleneck does indeed reach human performance on the test set, showing that a simple constraint that favors learning relations can allow neural networks to achieve human-level performance on this task. Sable-Meyer et al. (2021) further show that humans are sensitive to geometric regularity when performing a visual perception task, the Oddball task, using quadrilateral stimuli, whereas non-human primates and standard CNNs (Kubilius et al., 2019) are not. Here, we found that even with a relational bottleneck, a network trained with a standard contrastive learning objective produced the same monkey-like behavior observed from the CNN trained by Sable-Meyer et al. (2021). However, when trained constrastively on distances produced by geometric features, the model did reproduce the human geometric regularity effect. One important difference between the two tasks is that, the Delayed-Match-to-Sample task (Sable-Meyer et al., 2022) used reaction times (RTs) to show the geometric regularity effect in humans, whereas the Oddball task (Sable-Meyer et al., 2021) used error rates. This is because error rates in the former were near zero, and therefore RTs were required to observe significant effects. One limitation of our study is that we did not construct an analogue to human RTs for our RNN models. Instead, we used out-of-training-distribution accuracy as the main performance metric. In the oddball task (Sable-Meyer et al., 2021), where human error rates were higher, we were able to conduct a more direct comparison, where we observed a clear correspondence between human (or monkey) behavior and our models. 
A further difference between the two experiments is that the model of the Oddball task required geometric contrastive pre-training to match human performance (producing monkey-like behavior without this objective). We believe this is because the dataset used in the Delayed-Match-to-Sample task features a richer distribution of stimuli (Fig. 7) sampled from a Bayesian program induction model (DreamCoder; Ellis et al. 2021). Building a training distribution of samples from such a Bayesian model has an interpretation of effectively distilling the Bayesian model's rich prior into a neural network (McCoy and Griffiths, 2023). In contrast, the Oddball dataset consisted of a relatively simple set of 11 quadrilaterals, which may not be sufficiently diverse to allow the network to extract more abstract representations (see Chan et al. 2022 for a similar argument about how the richness of training data affects the post-training capabilities of Large Language Models). Our work provides evidence that simple modifications to standard neural networks are sufficient to reproduce human behavior on tasks used in cognitive science to showcase allegedly unique human capabilities. It may be possible that such geometric regularity biases can be instilled in neural networks by other methods. For example, previous work has shown that Vision Transformer architectures, like humans, are biased more towards shapes than textures (Tuli et al., 2021). In general, we suggest that human-like behavior and abstractions can be instilled in neural networks using a variety of strategies, including through specialized architectures (Webb et al., 2023, 2020), specialized loss functions/training curricula (Kumar et al., 2022; Kepple et al., 2022), and/or highly rich data distributions (McCoy and Griffiths, 2023; Chan et al., 2022). A hallmark of human intelligence is the ability to develop highly general abstractions that capture the essential structure of the environment in a strikingly sample-efficient manner (Gershman, 2017; Lake et al., 2017). Our work highlights the possibility of neural network-based architectures achieving the same level of intelligence without built-in, explicitly symbolic machinery, recapitulating a classic debate in cognitive science (Rumelhart and McClelland, 1986). Given the success of this approach in the geometric setting, we anticipate that similar models may be able to capture behavior that has previously been explained in terms of symbolic representations in learning causal relationships, numerical representations, and logical concepts.

## 6 Acknowledgements

S.K. is supported by a Google PhD Fellowship. We thank Mathias Sable-Meyer for assisting us with accessing the data in his work and for general advice.
2309.08243
Reply to "Comment on `Extending the laws of thermodynamics for arbitrary autonomous quantum systems'"
In his Comment [1], Philip Strasberg (PS) argues from the analysis of different examples that the framework we have presented in [2] does not recover known results of macroscopic textbook thermodynamics. Here, we show that such apparent contradictions disappear when the necessary assumptions the aforementioned known results pre-suppose are applied. Those assumptions concern the control ability of the observer, the nature of the described degree of freedom, or the scale of the systems. The ability to relax those assumptions is precisely a motivation of our framework, which can explore the capacity of quantum systems to exchange work and heat even at scales not captured by textbook thermodynamics. We take the opportunity of this reply to further expand on the use of our framework and its connections with traditional thermodynamics.
Cyril Elouard, Camille Lombard Latune
2023-09-15T08:36:42Z
http://arxiv.org/abs/2309.08243v1
Reply to "Comment on 'Extending the laws of thermodynamics for arbitrary autonomous quantum systems" ###### Abstract In his Comment [1], Philip Strasberg (PS) argues from the analysis of different examples that the framework we have presented in [2] does not recover known results of macroscopic textbook thermodynamics. Here, we show that such apparent contradictions disappear when the necessary assumptions the aforementioned known results pre-suppose are applied. Those assumptions concern the control ability of the observer, the nature of the described degree of freedom, or the scale of the systems. The ability to relax those assumptions is precisely a motivation of our framework, which can explore the capacity of quantum systems to exchange work and heat even at scales not captured by textbook thermodynamics. We take the opportunity of this reply to further expand on the use of our framework and its connections with traditional thermodynamics. In his Comment [1], Philip Strasberg (abbreviated PS in the following) argues that the framework we have presented in [2] is "in conflict with textbook thermodynamics". He provides several examples for which he claims that our framework does not yield the physically expected behavior, or provides a wrong estimate of entropy production. We show below that his conclusions are obtained by incorrectly comparing our findings with intuitions from classical macroscopic thermodynamics without applying the necessary assumptions they pre-suppose. More precisely, the framework we introduced [2] provides the flexibility to describe all the resources that can be stored in a quantum system and whose consumption is equivalent to work. One of the novel possibilities opened by our framework is the analysis of completely autonomous quantum machines, composed of several systems which can be efficiently manipulated locally. This is therefore a natural emphasis of our article [2]. Nevertheless, as we mention in [2], our results hold in principle for larger scale systems, up to the scales at which thermodynamics has initially been developed. However, when increasing the system size, it becomes natural to expect that only partial control is practically possible (meaning that only certain types of nonequilibrium resources can be manipulated in practice). In addition, still with increasing system size, new phenomena such as equilibration of an isolated system become possible, when the relevant degrees of freedom are described. Finally, again in a perspective of very large systems, new scaling properties emerge, e.g. coupling energies become typically negligible in front of bulk. All these important assumptions connect microscopic quantum mechanical description to phenomena of the macroscopic world. As our approach is, by design, formulated at the quantum-mechanical level, it is natural that these assumptions must be added on top of our framework to address these macro-scale phenomena. This task was beyond the scope of our first article [2], except for the notion of partial control for which we provided a methodology in Appendix D. In the remainder of this reply, we analyze the examples mentioned by PS to show that such assumptions can be added to our formalism to describe those macroscopic situation if needed. Conversely, our framework allows to selectively relax those assumptions of textbook thermodynamics, allowing to analyze new behavior relevant at the quantum scale. 
For pedagogical purposes, we start with the example of two systems initially in a pure state, which was mentioned by PS as a criticism of the notion of effective temperature we use.

## Ex1: Two systems initially in their ground state

As PS points out in [1], traditional thermodynamics predicts that two identical systems initially in their ground state (or equivalently, in thermal equilibrium states at equal vanishing temperatures) should exchange no heat when they are put in contact. We emphasize that this statement is derived for two macroscopic systems (in the thermodynamic limit). In contrast, our framework allows us to analyze the case where we couple two quantum systems, whatever their size. Starting from the two systems each in the ground state of its Hamiltonian, we consider that a coupling Hamiltonian is switched on at time \(t=0\). If the coupling term does not commute with the two local Hamiltonians (the only non-trivial case), the two systems are at \(t=0^{+}\) in an out-of-equilibrium state, which starts evolving for \(t>0\): in general, the systems' energies and entropies will vary, and, as pointed out by PS, there will be an increase in the systems' effective temperatures between time \(t=0\) and \(t>0\), which we interpret as heat flowing into those two systems.
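As a small numerical illustration of this point (our own sketch, not taken from [1] or [2]), consider two qubits, each starting in the ground state of its local Hamiltonian, coupled at \(t=0\) by a term that does not commute with the local Hamiltonians; the parameter values and variable names below are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

omega, g = 1.0, 0.2
H1 = 0.5 * omega * np.kron(sz, I2)   # local Hamiltonian of qubit 1
H2 = 0.5 * omega * np.kron(I2, sz)   # local Hamiltonian of qubit 2
V = g * np.kron(sx, sx)              # coupling switched on at t = 0
H = H1 + H2 + V

psi0 = np.zeros(4); psi0[3] = 1.0    # |gg>: both qubits in their local ground state

for t in [0.0, 0.5, 1.0, 2.0]:
    psi_t = expm(-1j * H * t) @ psi0
    E1 = np.real(np.vdot(psi_t, H1 @ psi_t))
    print(f"t = {t:.1f}   <H1> = {E1:+.4f}")   # rises above the ground-state value -0.5
```

The increase of \(\langle H_{1}\rangle\) above its ground-state value for \(t>0\) is the local energy change that, in our framework, is interpreted as heat absorbed by the subsystems.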
2309.05767
Natural Language Supervision for General-Purpose Audio Representations
Audio-Language models jointly learn multimodal text and audio representations that enable Zero-Shot inference. Models rely on the encoders to create powerful representations of the input and generalize to multiple tasks ranging from sounds, music, and speech. Although models have achieved remarkable performance, there is still a performance gap with task-specific models. In this paper, we propose a Contrastive Language-Audio Pretraining model that is pretrained with a diverse collection of 4.6M audio-text pairs employing two innovative encoders for Zero-Shot inference. To learn audio representations, we trained an audio encoder on 22 audio tasks, instead of the standard training of sound event classification. To learn language representations, we trained an autoregressive decoder-only model instead of the standard encoder-only models. Then, the audio and language representations are brought into a joint multimodal space using Contrastive Learning. We used our encoders to improve the downstream performance by a margin. We extensively evaluated the generalization of our representations on 26 downstream tasks, the largest in the literature. Our model achieves state of the art results in several tasks leading the way towards general-purpose audio representations.
Benjamin Elizalde, Soham Deshmukh, Huaming Wang
2023-09-11T18:50:21Z
http://arxiv.org/abs/2309.05767v2
# Natural Language Supervision for General-Purpose Audio Representations ###### Abstract Audio-Language models jointly learn multimodal text and audio representations that enable Zero-Shot inference. Models rely on the encoders to create powerful representations of the input and generalize to multiple tasks ranging from sounds, music, and speech. Although models have achieved remarkable performance, there is still a performance gap with task-specific models. In this paper, we propose a Contrastive Language-Audio Pretraining model that is pretrained with a diverse collection of 4.6M audio-text pairs employing two innovative encoders for Zero-Shot inference. To learn audio representations, we trained an audio encoder on 22 audio tasks, instead of the standard training of sound event classification. To learn language representations, we trained an autoregressive decoder-only model instead of the standard encoder-only models. Then, the audio and language representations are brought into a joint multimodal space using Contrastive Learning. We used our encoders to improve the downstream performance by a margin. We extensively evaluated the generalization of our representations on 26 downstream tasks, the largest in the literature. Our model achieves state of the art results in several tasks leading the way towards general-purpose audio representations. Code will be on GitHub1. Footnote 1: [https://github.com/microsoft/CLAP](https://github.com/microsoft/CLAP)

Benjamin Elizalde\({}^{*}\), Soham Deshmukh\({}^{*}\), Huaming Wang, Microsoft {benjaminm, sdeshmukh, huawang}@microsoft.com

Keywords: contrastive learning, general purpose audio representation, zero-shot, language, sounds

## 1 Introduction

Recent research in the audio domain focuses on learning representations that generalize to a wide range of downstream tasks across different domains. The 2021 Holistic Evaluation of Audio Representations (HEAR) [1] took a major step in this direction by providing a comprehensive setup to benchmark audio representations. The models were pretrained on a large dataset, AudioSet [2] (1.7M files), using Supervised, Self-Supervised, or Unsupervised Learning. All the methods have to undergo additional fine-tuning to use their representations on a given downstream task. Zero-Shot models can be applied to any task directly, achieving flexibility and generalization. Among the most successful are Contrastive Language-Audio Pretraining (CLAP) models, which jointly learn multimodal text and audio representations. Authors in [3] introduced a CLAP model that achieved state of the art (SoTA) in 16 downstream tasks. Subsequent literature showed that the choice of audio and text encoders is critical to generate powerful representations and increase performance across tasks [4, 5, 6]. For example, upgrading from a CNN to an audio transformer (HTSAT) to encode audio, and from BERT to RoBERTa to encode text. Another conclusion is that scaling up the number of training pairs improves overall performance. However, simply adding pairs may result in a drop in performance in certain domains and tasks [4, 5, 3, 6]. CLAP's performance is dependent on the diversity of the text and audio training pairs and how noisy they are. Wav2CLIP [7] and AudioCLIP [8] used 200k and 1.7M audio-text pairs respectively from AudioSet, a dataset annotated for sound events.
Authors paired audio with class labels rather than with sentence-level descriptions, potentially missing the context and language semantics of descriptions, but with good Zero-Shot performance in 3 and 9 tasks respectively. CLAP [3] used 128k pairs, but the texts were descriptions coming from audio captioning and a web-sourced dataset. It was evaluated on 16 tasks and significantly improved over its predecessors. LAION CLAP [4] used a collection of 2.5M pairs, further improving performance in 8 tasks. Authors later added music and speech-related training pairs, but performance in sound event classification (ESC50) degraded by an absolute 1%. WavCaps [6] used 500k pairs, but cleaned up the noisy web-sourced descriptions with a ChatGPT language model. Results outperformed the literature in 8 tasks. Therefore, when scaling up pairs it is essential to verify performance trade-offs by evaluating generalization across different domains and tasks. In this paper we make the following contributions. To learn audio representations, we trained an audio encoder on 22 audio tasks. To learn language representations, we trained an autoregressive decoder-only model. We pretrained our CLAP model with an unprecedented 4.6 million audio-text pairs and extensively evaluated the generalization of our representations on 26 downstream tasks, the largest in the literature, achieving SoTA results in several.

## 2 Method

Contrastive Language-Audio Pretraining (Fig 1) jointly trains an audio and a text encoder to learn multimodal representations which can be used for different types of inference.

### Contrastive Language-Audio Pretraining

Let the processed audio be \(X_{a}\) s.t. \(X_{a}\in\mathbb{R}^{F\times T}\), where \(F\) is the number of spectral components (e.g., Mel bins) and \(T\) is the number of time bins. Let the text be represented by \(X_{t}\). Each audio-text pair in a batch of \(N\) is represented as \(\{X_{a},X_{t}\}_{i}\) where \(i\in[0,N]\). For convenience, we drop the \(i\) notation, and henceforth \(\{X_{a},X_{t}\}\) will denote a batch of \(N\). From the pairs, the audio and text are passed to an audio encoder \(f_{a}(.)\) and a text encoder \(f_{t}(.)\) respectively. For a batch of \(N\): \[\hat{X}_{a}=f_{a}(X_{a});\hat{X}_{t}=f_{t}(X_{t}) \tag{1}\] where \(\hat{X}_{a}\in\mathbb{R}^{N\times V}\) are the audio representations of dimensionality \(V\), and \(\hat{X}_{t}\in\mathbb{R}^{N\times U}\) are the text representations of dimensionality \(U\). We brought audio and text representations, \(\hat{X}_{a}\) and \(\hat{X}_{t}\), into a joint multimodal space of dimension \(d\) by using a learnable projection layer: \[E_{a}=L_{a}(\hat{X}_{a});E_{t}=L_{t}(\hat{X}_{t}) \tag{2}\] where \(E_{a}\in\mathbb{R}^{N\times d}\), \(E_{t}\in\mathbb{R}^{N\times d}\), and \(L_{a}\) and \(L_{t}\) are the projections for audio and text respectively. Now that the audio and text embeddings (\(E_{a}\), \(E_{t}\)) are comparable, we can measure similarity: \[C=\tau(E_{t}\cdot E_{a}^{\top}) \tag{3}\] where \(\tau\) is a temperature parameter to scale the range of logits. The similarity matrix \(C\in\mathbb{R}^{N\times N}\) has \(N\) matching pairs in the diagonal and \(N^{2}-N\) non-matching pairs in the off-diagonal. \[\mathcal{L}=0.5(\ell_{text}(C)+\ell_{audio}(C)) \tag{4}\] where \(\ell_{k}=-\frac{1}{N}\sum_{i=1}^{N}\log\mathrm{diag}(\mathrm{softmax}(C))_{i}\) along the text and audio axes, respectively. We used this symmetric cross-entropy loss (\(\mathcal{L}\)) over the similarity matrix to jointly train the audio and text encoders along with their projection layers.
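For reference, Eqs. (1)-(4) can be written compactly as below. This is a schematic PyTorch sketch of the symmetric loss, our own illustration rather than the released implementation; it assumes the projected embeddings \(E_a\) and \(E_t\) from Eq. (2) have already been computed.

```python
import torch
import torch.nn.functional as F

def clap_loss(E_a, E_t, tau):
    """Sketch of Eqs. (1)-(4): E_a, E_t are the projected audio and text
    embeddings of shape (N, d); tau is the learnable temperature."""
    C = tau * (E_t @ E_a.t())                            # Eq. (3): (N, N) similarity matrix
    targets = torch.arange(C.size(0), device=C.device)   # matching pairs lie on the diagonal
    loss_text = F.cross_entropy(C, targets)              # ell_text: softmax along the audio axis
    loss_audio = F.cross_entropy(C.t(), targets)         # ell_audio: softmax along the text axis
    return 0.5 * (loss_text + loss_audio)                # Eq. (4)
```

Minimizing this loss pulls the \(N\) matching pairs on the diagonal of \(C\) together and pushes the \(N^{2}-N\) off-diagonal pairs apart.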
### Audio and Text Encoders **Audio Encoder:** To process audio, we trained a transformer-based audio encoder (HTSAT [9]) on 22 audio tasks using a similar method to this paper [10]. We called it HTSAT-22. We hypothesized that an encoder trained on multiple audio tasks would improve generalization and thus performance across tasks. The method learns an audio encoder and a mapper network to prompt a large language model to perform multiple audio tasks, such as classification, captioning, retrieval and audio Q&A. The architecture is trained essentially as a captioning system, where it learns to generate a free-form text output \(c^{i}\) in an autoregressive fashion conditioned on the audio prompt \(p^{i}\). Note that \(\gamma\) denotes the model's trainable parameters. The loss function is Cross-Entropy: \[\mathcal{L}=-\sum_{i=1}^{N}\sum_{j=1}^{l}\log p_{\gamma}(c_{j}^{i}|p_{1}^{i},...,p_{2k}^{i},c_{1}^{i},...,c_{j-1}^{i}) \tag{5}\] **Text Encoder:** To process text, we adapted GPT2 (base 124M), which is an autoregressive model that has exhibited impressive abilities for text tasks. We addressed the challenge - _How to make an autoregressive model produce a sentence-level representation?_ Autoregressive models built with transformer-decoder blocks, take an input text and output the most likely sequence of words (tokens), one after the other. In contrast, models built with transformer-encoder blocks (BERT or RoBERTA) output a sentence-level representation in a continuous space. To make GPT2 output a sentence-level representation, we appended the special token \(<|endoftext|>\) at the end of each input text. During contrastive pretraining, we use the representations from this token as sentence-level representations. This forces the token to contain the aggregate information from the text input. ### Evaluation **Zero-Shot Inference:** We used CLAP's ability to determine the similarity between audio and text. Let's consider a target dataset with \(C\) class labels and \(N\) test audios. First, we compute CLAP's audio and text embeddings for \(N\) audios and \(C\) classes using the pretrained encoders. Second, we compute the cosine similarity between each testing audio and all the class labels. In the case of retrieval, we treat text queries as classes. Each test audio will have as many logits as classes. Third, logits are turned into a probability distribution by applying softmax for binary or multiclass classification; sigmoid for multilabel classification; and left unaltered for retrieval. **Audio Captioning:** In the architecture of Fig 1, a test audio is passed to the pretrained audio encoder, then to a mapper network, and then to GPT2 to generate a description. At training time, only the weights of the mapper network are learned with a captioning loss (Eq.5) and the training split. Figure 1: CLAP learns audio and a text embeddings that can be compared in a multimodal space. The pretrained encoders can be used for Zero-Shot Classification, Text to Audio and Audio to Text Retrieval, and Audio Captioning. ## 3 Experiments **Training Datasets.** Collecting pairs is perhaps the main bottleneck of scaling up CLAP models. We gathered the largest collection with 4.6 million audio and text pairs from different datasets and web archives. The audios describe human sounds and activities, environmental sounds, acoustic scenes, music, sound effects, and speech emotion. To study the effect of encoders in Table 1, we used the same training sets as CLAP [3]. 
Unlike the authors, we did not include the test set of AudioCaps and Clotho, so the number of pairs was 119k instead of 128k. The training datasets for the 4.6M collection are: WavCaps [6], AudioSet [2], FSD50K [11], Clotho [12], AudioCaps [13], MACS [14], WavText5k [5], SoundDesc [15], Nysmth [16], FMA [17], Mosi [18], Meld [19], Iemocap [20], Mosei [21], MSP-Podcast [22], CochlScene [23], LJspeech [24], EpicKitchen [25], Kinectics700 [26], findsounds.com. Details will be on GitHub. **Downstream Tasks.** We used 26 downstream tasks from different domains: sound events, vocal sounds, surveillance sounds, and acoustic scenes classification; audio captioning; retrieval; music, instruments, and note attributes classification; speech emotions and language classification; keyword spotting; and speaker counting. To study the effect of encoders in Table 1, we used a subset of 16 tasks. Details will be on GitHub. **Pre-processing.** We used log Mel spectrogram representations of audio with a sampling rate of 44.1 KHz, hop size of 320 frames, window size 1024 frames, and 64 Mel bins in the range of 50-8000 Hz. During training, each audio clip is randomly truncated to a continuous segment of 7 secs, or padded if shorter. The batches with pairs are randomly sampled. **Encoders.** For our proposed CLAP model, we used the audio and text encoders HTSAT-22+GPT2 described in Sec.2.2. For comparison, in Table 1 we used the two best combinations of encoders in the literature CNN14+BERT and HTSAT+RoBERTa [3, 4, 6]. We also included the text encoder from CLIP because it was used by different authors [8, 7, 4]. Both, the audio and text embeddings are projected into a multimodal space with independent learnable projection layers with an output dimension of 1024. **Training.** We trained by unfreezing both encoders for 40 epochs, although the overall performance peaked in the first 10 epochs. We report the performance of the downstream tasks corresponding to the epoch that yielded the best Zero-Shot score (average of all tasks). We hypothesize that the model corresponding to such epoch will generalize better to unseen datasets and serve the community better. It is possible that the performance of each task was higher or lower in a different epoch. Batch size was 1,536. We used Adam Optimiser with an initial learning rate \(10^{-3}\) and reduce the learning rate on plateau by \(10^{-1}\) with a patience of 15. The temperature parameter \(\tau\) is learnable and initialised to 0.007. ## 4 Results and Discussion The results comparing different audio and text encoders are in Table 1 and the results of our proposed CLAP are in Table 2. ### Proposed audio and text encoder Our proposed encoders HTSAT-22+GPT2 outperformed two of the best combination of encoders in the literature, as shown in Table 1. To compare overall performance, we used Zero-Shot score, which is the average of the metrics from all 16 tasks. HTSAT-22+GPT2 achieved 0.480, an absolute 9% higher than the most common combinations HTSAT+RoBERTa and CNN14+BERT with 0.431 and 0.428 respectively. All encoder combinations performed better than random. Although different combinations did better at different tasks, none of them excelled at a specific domain. Our HTSAT-22 audio encoder is the major contributor to performance improvement. 
HTSAT-22 is pretrained on 22 \begin{table} \begin{tabular}{c|c|c c c|c c|c|c|c} \hline & \begin{tabular}{c} Zero-Shot \\ Score \(\uparrow\) \\ \end{tabular} & \multicolumn{2}{c|}{\begin{tabular}{c} Sound Event Classification \(\uparrow\) \\ \end{tabular} } & \begin{tabular}{c} Vocal Sound \\ Classification \(\uparrow\) \\ \end{tabular} & \begin{tabular}{c} Surveillance \\ Sound \\ \end{tabular} & \begin{tabular}{c} Action \\ Classification\(\uparrow\) \\ \end{tabular} & \begin{tabular}{c} Acoustic Scene \\ Classification\(\uparrow\) \\ \end{tabular} \\ \hline Model & Average & ESC50 & FSD50K & USSR & \begin{tabular}{c} DCASE17 \\ Task 4 \\ \end{tabular} & \begin{tabular}{c} Vocal \\ SSSA \\ \end{tabular} & \begin{tabular}{c} SESA \\ \end{tabular} & \begin{tabular}{c} ESC50 \\ Actions \\ \end{tabular} & \begin{tabular}{c} TUT 2017 \\ \end{tabular} \\ \hline CNN14+BERT & 0.428 & 0.826 & 0.302 & 0.732 & 0.300 & 0.495 & 0.749 & 0.495 & 0.296 \\ HTSAT+CLIP & 0.430 & 0.813 & 0.289 & 0.748 & 0.277 & 0.645 & 0.761 & 0.442 & 0.219 \\ HTSAT+RoBERTa & 0.431 & 0.811 & 0.322 & 0.757 & 0.226 & 0.610 & 0.745 & 0.475 & 0.285 \\ HTSAT+GPT2 & 0.435 & 0.819 & 0.336 & 0.767 & 0.242 & 0.646 & 0.644 & **0.503** & 0.286 \\ \hline HTSAT-22+RoBERTa & 0.454 & 0.879 & 0.388 & 0.767 & 0.209 & 0.682 & 0.656 & 0.481 & **0.369** \\ HTSAT-22+CLIP & 0.469 & 0.830 & **0.411** & **0.791** & 0.229 & 0.692 & 0.723 & 0.488 & 0.292 \\ HTSAT-22+GPT2 & **0.480** & **0.882** & 0.403 & 0.750 & **0.337** & **0.692** & **0.762** & 0.475 & 0.317 \\ \hline \end{tabular} \begin{tabular}{c|c|c c|c c|c|c|c} \hline & \multicolumn{3}{c|}{Music Classification \(\uparrow\)} & \multicolumn{3}{c|}{Instrument Classification \(\uparrow\)} & \multicolumn{3}{c|}{Speech Emotion} & \multicolumn{3}{c|}{KWS\(\uparrow\)} & \multicolumn{3}{c}{Speaker} \\ & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ Model & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ \hline CNN14+BERT & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ HTSAT+CLIP & 0.992 & 0.156 & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ HTSAT+RoBERTa & 0.992 & 0.178 & 0.436 & 0.352 & 0.263 & 0.2 & 0.098 & 0.149 & 0.149 \\ HTSAT+GPT2 & 1 & 0.150 & 0.539 & 0.322 & 0.234 & 0.171 & **0.139** & 0.155 & 0.155 \\ \hline HTSAT-22+RoBERTa & 1 & 0.209 & 0.309 & 0.402 & 0.301 & **0.278** & 0.129 & 0.207 \\ HTSAT-22+CLIP & 1 & 0.280 & 0.517 & **0.462** & 0.275 & 0.233 & 0.116 & 0.094 \\ HTSAT-22+GPT2 & **1** & **0.289** & 0.487 & 0.425 & **0.297** & 0.217 & 0.089 & **0.254** \\ \hline \end{tabular} \end{table} Table 1: Zero-Shot performance on 16 downstream tasks and 119k training pairs. Our proposed encoders (HTSAT-22+GPT2) outperformed the best combinations in the literature. Higher is better for all numbers. 
The metrics are mAP for FSD50k and ESC50-actions; F1-score for DCASE17; all others use Accuracy. Zero-Shot score is the average of the metrics. audio tasks in contrast to HTSAT which is pretrained only on sound event classification. Hence, suggesting that generating pretraining on multiple audio tasks can improve the representations from the audio encoder. Comparing HTSAT-22+GPT2 to HTSAT+GPT2 evidenced major improvements such as LibriCount10 (absolute 10%), NS Instrument (absolute 7%) and ESC50 (absolute 6%). The proposed GPT2 autoregressive model improves upon the popular RoBERTa. Using GPT2 with either HTSAT or HTSAT-22 yielded the best performance over the other text encoders. We hypothesize that the improvement comes from two reasons. First, GPT2 has a larger vocabulary of 50k tokens compared to BERT and RoBERTa with 30k. Second, our modified GPT2 autoregressive predicts tokens till \(<|endoftext|>\) used for sentence-level representation. This acts as self-supervision and forces the model to learn and put emphasis on the ordering of words. ### Scaling proposed CLAP architecture Our CLAP model established new Zero-Shot SoTA on most of the 26 downstream tasks as shown in Table 2. To benchmark our model, we used the best numbers in the literature coming from different models. When no number was available, we used random performance. In some cases, performance improvement is more than double the benchmark literature. Some highlights are Music Genres with 58.4% acc. vs 25%, Vocal Sounds with 80% acc. vs 49.5%, Acoustic Scenes with 53.8% acc. vs 29.6%. Some downstream tasks do not constitute a true Zero-Shot setup as the audio files in the training set were part of the 4.6M pairs (see Sec.3). For instance, FSD50k audio and web descriptions were used in training but not the class labels. We did not fine-tune CLAP encoders for any task. ### Generalization and individual domain performance Adding diversity and scaling the audio-text pairs in training presents a trade-off that increases performance in some tasks but decreases it in others. As expected, adding training pairs that resemble the domain from a given task helps, hence diversity is essential for generalization. For example, CLAP [3] did not include emotion recognition training pairs and achieved 17.1% acc. in RAVDESS and 23.4% in CREMAD. We added emotion-related pairs and improved accuracy to 31.5% and 30% respectively. Nonetheless, more pairs can cause a distribution shift, creating a mismatch between training and some testing data. For example, our model achieved a slightly lower score than a model [6] trained with 500k pairs on ESC50 (94.8% vs 93.9% acc.). Another example is with GTZAN Music vs Speech, where a model [3] with 128k pairs achieved 100% acc. over ours with 99.2%. Even our model in Table 1 achieved 100% acc with 119k pairs. We should expect that as we add training pairs, performance across tasks will vary. Hence, zero-shot models should be evaluated across different domains and tasks with focus on generalization rather than on overfitting to specific tasks. Audio-Text (A-T) and Text-Audio (T-A) Retrieval performance fell short of the benchmark. We measured the tasks with mAP@10, which is the ranking metric of IEEE DCASE, and R@1. Our model outperformed the literature in terms of mAP@10 for Clotho (A-T: 0.155 vs 0.138 and T-A: 0.257 vs 0.204), and struggled only with A-T AudioCaps (A-T: 0.319 vs 0.457 and T-A: 0.51 vs 0.51). 
Both datasets are sensitive to out-of-domain training data and adding training pairs did not translate into an improvement. This was demonstrated by authors in [5] who unsuccessfully tried to add 39k files from SounDesc or authors in [4] with 500k from Wavcaps or authors in [6] with 1.7M from AudioSet. ## 5 Conclusion We introduced a CLAP model using our proposed HTSAT22 and GPT2 encoders along with a collection of 4.6M training pairs. Zero-shot models should be evaluated across different tasks with a focus on generalization rather than on overfitting to specific tasks. We evaluated CLAP in 26 downstream tasks and established new SoTA in most of them, hence leading the way with general-purpose audio representations. \begin{table} \begin{tabular}{c|c c c c c|c c c|c c|c} \hline & \multicolumn{4}{c|}{Sound Event Classification \(\uparrow\)} & \multicolumn{2}{c|}{Vocal Sound} & \multicolumn{2}{c|}{Surveillance} & Action & Acoustic Scene \\ & \multicolumn{4}{c|}{Sound Event Classification \(\uparrow\)} & \multicolumn{2}{c|}{Classification \(\uparrow\)} & Sound Classif.\(\uparrow\) & \multicolumn{2}{c|}{Classification\(\uparrow\)} & \multicolumn{2}{c|}{Classification\(\uparrow\)} & Classification\(\uparrow\) \\ \hline Model & ESC50 & FSD50K & USSK & \begin{tabular}{c} DCASE17 \\ Task 4 \\ \end{tabular} & AudioSet & \begin{tabular}{c} Vocal \\ Sound \\ \end{tabular} & SESA & \begin{tabular}{c} ESC50 \\ Actions \\ \end{tabular} & \begin{tabular}{c} TUT 2017 \\ \end{tabular} \\ \hline Benchmark & **0.948**[6] & 0.302 [3] & 0.806 [6] & 0.3 [3] & 0.058 [3] & 0.495 [3] & 0.25 & 0.045 [3] & 0.296 [3] \\ **HISTAT-22+GPT2** & 0.939 & **0.485** & **0.823** & **0.466** & **0.102** & **0.8** & **0.65** & **0.509** & **0.538** \\ \hline \hline \multirow{3}{*}{Model} & \multicolumn{4}{c|}{Music Classification \(\uparrow\)} & \multicolumn{4}{c|}{Instrument Classification \(\uparrow\)} & \multicolumn{4}{c|}{Speech Emotion} & \multirow{3}{*}{KWS\(\uparrow\)} & Speaker \\ & \multicolumn{4}{c|}{GTAEN} & \multicolumn{4}{c|}{Nis} & \multicolumn{4}{c|}{NS} & \multicolumn{4}{c|}{NS} & \multicolumn{4}{c|}{Beijing} & \multicolumn{4}{c|}{NS Inst.} & CRE & \multicolumn{2}{c|}{RAV} & Speech & Libri \\ & Music Speech & Genres & Pitch & Velocity & Qualities & Opera & family & \multicolumn{2}{c|}{MN-D} & \multicolumn{2}{c|}{DESS} & \multicolumn{2}{c|}{Command} & \multicolumn{2}{c|}{Count10} \\ \hline Benchmark & **1**[3] & 0.25 [3] & 0.015 & 0.2 & 0.1 & **0.539**[3] & 0.09 & 0.234 [3] & 0.171 [3] & 0.139 [3] & 0.155 [3] \\ **HISTAT-22+GPT2** & 0.992 & **0.584** & **0.444** & **0.222** & **0.489** & 0.466 & **0.479** & **0.3** & **0.315** & **0.164** & **0.246** \\ \hline \hline \end{tabular} \begin{tabular}{c|c c c|c c c|c c c|c} \hline & \multicolumn{4}{c|}{Audio Captioning \(\uparrow\)} & \multicolumn{4}{c|}{Audio-Text Retrieval \(\uparrow\)} & \multicolumn{4}{c}{Text-Audio Retrieval \(\uparrow\)} \\ \hline Model & AudioCaps & Clotho & AudioCaps & Clotho & Clotho & AudioCaps & AudioCaps & Clotho & Clotho \\ & \multicolumn{4}{c|}{R@1} & mAP@10 & R@1 & mAP@10 & R@1 & mAP@10 & R@1 & mAP@10 & R@1 & mAP@10 \\ \hline Benchmark & 0.438[27] & 0.215[27] & **0.517**[6] & **0.457**[4] & **0.234**[6] & 0.138[4] & **0.397**[6] & 0.51[4] & **0.195**[6] & 0.204[4] \\ **HISTAT-22+GPT2** & **0.455** & **0.271** & 0.425 & 0.319 & 0.229 & **0.155** & 0.356 & **0.51** & 0.157 & **0.257** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on 26 downstream tasks using our proposed encoders and 4.6M training pairs. 
As the benchmark, we used the best numbers in the literature, when no number was available we used random performance. Higher is better for all tasks. The evaluation metrics are mAP for FSD50k, ESC50-Actions, AudioSet, and NS Qualities; F1-score for DCASE17; and SPIDEr for Captioning; all others use Accuracy.
2309.13663
A Probabilistic Approach to the Existence of Solutions to Semilinear Elliptic Equations
We study a semilinear elliptic equation with a pure power nonlinearity with exponent $p>1$, and provide sufficient conditions for the existence of positive solutions. These conditions involve expected exit times from the domain, $D$, where a solution is defined, and expected occupation times in suitable subdomains of $D$. They provide an alternative new approach to the geometric or topological sufficient conditions given in the literature for exponents close to the critical Sobolev exponent. Moreover, unlike standard results, in our probabilistic approach no \emph{a priori} upper bound restriction is imposed on $p$, which might be supercritical. The proof is based on a fixed point argument using a probabilistic representation formula. We also prove a multiplicity result and discuss possible extensions to the existence of sign changing solutions. Finally, we conjecture that necessary conditions for the existence of solutions might be obtained using a similar probabilistic approach. This motivates a series of natural questions related to the characterisation of topological and geometrical properties of a domain in probabilistic terms.
Ma Elena Hernandez-Hernandez, Pablo Padilla-Longoria
2023-09-24T15:14:09Z
http://arxiv.org/abs/2309.13663v1
# _A probabilistic approach to the existence of solutions to semilinear elliptic equations._ ###### Abstract. We study a semilinear elliptic equation with a pure power nonlinearity with exponent \(p>1\), and provide sufficient conditions for the existence of positive solutions. These conditions involve expected exit times from the domain, \(D\), where a solution is defined, and expected occupation times in suitable subdomains of \(D\). They provide an alternative new approach to the geometric or topological sufficient conditions given in the literature for exponents close to the critical Sobolev exponent. Moreover, unlike standard results, in our probabilistic approach no _a priori_ upper bound restriction is imposed on \(p\), which might be supercritical. The proof is based on a fixed point argument using a probabilistic representation formula. We also prove a multiplicity result and discuss possible extensions to the existence of sign changing solutions. Finally, we conjecture that necessary conditions for the existence of solutions might be obtained using a similar probabilistic approach. This motivates a series of natural questions related to the characterisation of topological and geometrical properties of a domain in probabilistic terms. Key words and phrases:Semilinear elliptic equations; critical Sobolev exponent; Brownian Motion; Expected Occupation Time; Expected Exit Time ## 1. Introduction This paper provides a probabilistic characterisation of sufficient conditions for the existence of solutions to \[\left.\begin{array}{rl}\Delta u+\lambda u^{p}=0,&\text{ in }D,\\ u>0,&\text{ in }D,\\ u=0,&\text{ on }\partial D,\end{array}\right\} \tag{1.1}\] for a given open bounded domain \(D\) in \(\mathrm{I\!R}^{d}\), \(d>2\), with a \(C^{2}\) boundary \(\partial D\), \(\lambda>0\) and \(p>1\). Our method relies on rewriting (1.1) as an integral equation in terms of the mathematical expectation of a Brownian motion and, thus, translating the existence of a solution to (1.1) into a corresponding fixed point problem. Sufficient conditions for the existence of a fixed point are then given in terms of expected exit times and expected occupation times of a Brownian motion on \(D\), the region of interest. An advantage of this approach is that it does not require any _a priori_ upper-bound restriction on the values of \(p\), a strong limitation common to all the analytic approaches or the classical topological methods of the calculus of variations. Of particular interest is _the critical Sobolev exponent equation_ \[\left.\begin{array}{rl}\Delta u+u^{p^{*}-1}=0,&\text{ in }D,\\ u=0,&\text{ on }\partial D,\end{array}\right\} \tag{1.2}\] where \(p^{*}=\frac{2d}{d-2}\) is the critical Sobolev exponent for the Sobolev embedding \(H_{0}^{1,2}(D)\subseteq L^{p}(D)\). This equation has been extensively studied and several conditions are known for the existence and multiplicity of solutions (see the discussion below). In fact, it is related to the Yamabe problem, which was solved in the 1980's (see e.g. [27], or [32] for a recent survey on the subject). Providing necessary conditions for semilinear elliptic equations with nonlinearities involving powers \(p\approx p^{*}\) (or bigger) remains an open question. Establishing probabilistic conditions for the existence of solutions gives the possibility of exploring necessary conditions for existence from this new perspective. 
In fact, this approach suggests a series of natural questions related to the probabilistic characterisation of geometric and topological properties of domains (see [30]). The main contribution of this paper is twofold: we provide sufficient conditions, stated in probabilistic terms and with no _a priori_ upper bound on the exponent \(p\), for the existence of positive solutions to (1.1), and we derive a multiplicity result (see Section 4). Concerning the critical problem (1.2), it is classical that there is no positive solution if \(D\) is a ball in \(\mathrm{I\!R}^{d}\), for \(d\geq 3\), whereas if \(D\) is an annulus there is a positive solution for all \(d\). More generally, Coron (1984) [13] showed that positive solutions to (1.2) exist for more general domains with a sufficiently small hole.

### Case: \(p>p^{*}\)

In this case, it is known that one can find examples of nontrivial domains \(D\) in the sense of Bahri and Coron (1988) for which (1.1) has no solution, see, e.g., [36, 37]. Additionally, [10] shows that for all \(p\geq p^{*}\) one can find contractible domains where the number of positive solutions to (1.1) is arbitrarily large. On the other hand, the relationship between probability theory and differential equations goes back to the pioneering works of Bachelier (1900) [2] and Kolmogorov. Later on, Kakutani (1944, 1945) [23, 24] showed the connection between Brownian motion and harmonic functions, and Kac (1949, 1951) [21], [22] established the probabilistic representation of solutions to some partial differential equations. It is well known, for instance, that the transition probability function for Brownian motion (or Gauss kernel) is the fundamental solution of the heat equation. As for the Dirichlet boundary problem with the Laplacian operator, its solution, given as a Poisson integral formula for harmonic functions, admits a probabilistic representation in terms of a Brownian expectation [12], [4]. In this case, the _Poisson kernel_ (or _harmonic measure_) corresponds to the distribution of the _exit position_ \(W_{\tau}\) of the Brownian motion, where \(\tau\) is the first exit time from the given domain. A sufficient (analytic) condition for the existence (and uniqueness) of a solution to the Dirichlet boundary problem is the _Poincaré cone condition_. Probabilistic conditions are also known in terms of the probabilistic concept of _regularity_ of the boundary of \(D\) (see Definition 2.1 below). More recently, E.B. Dynkin (1991) [18] established connections between the theory of superprocesses and positive solutions of non-linear partial differential equations involving the operator \(Lu-f(u)\), where \(L\) is a strongly elliptic differential operator in \(\mathbb{R}^{d}\). An explicit formula for the solution for certain types of functions \(f\) was given in terms of hitting probabilities and additive functionals of the corresponding \((L,f)\)-superdiffusion (see S. Watanabe (1968) [43] and Dawson (1975) [14]).
An \((L,f)\)-superdiffusion is obtained as a _high-density, short-life, small-mass limit_ particle systems evolving according to a Markov process with generator \(L\) and where the nonlinear function \(f\) describes its branching mechanism. The admissible functions \(f\) belong to a wide class \(\Psi\) of monotone increasing convex functions, which includes the family \(f(u)=u^{p}\), and \(f(x,u)=k(x)u^{p}\), \(k>0\), with the power \(p\) restricted to \(1<p\leq 2.\) The solution of the particular case \(\Delta u-f(u)\), where \(f(u)=u^{p}\), \(1<p\leq 2\) is given in terms of the so called _super-Brownian motion_ (see also Dawson (1993) [15]). In this context, another outstanding result due to Le-Gall (1999) [26] relies on a Brownian snake approach to construct the exit measure of the super-Brownian motion to get a probabilistic solution to the Dirichlet problem associated with the equation \(\Delta u=u^{2}\) in a regular domain. Some related ideas using the Feynman-Kac formula to study the Dirichlet problem for Schrodinger operators \(H=\frac{1}{2}\Delta+V\) can be found in Aizenman and Simon (1982) [1] and references cited therein. The probabilistic approach considered here to solve the Dirichlet problem for the Poisson equation combines the Schauder fixed point theorem with the probabilistic representation of the solution to the Dirichlet equation for the Laplacian operator (see Proposition 2.2). This approach has the advantage that there is not an _a priori_ upper-bound restriction on the values of \(p\) as in the analytical setting for which the use of a variational approach for \(d\geq 3\) imposes the restriction \(p\leq p^{*}\), or as in the superdiffusion approach mentioned above wherein the relation of \(p\) to the branching mechanism of the underlying particle systems is meaningful only when \(1<p\leq 2\). The structure of the paper is as follows: Section 2 introduces some notation and recalls the probabilistic representation of the solution to the Poisson equation. Our main result, Theorem 3.1, is proved in Section 3 using Schauder's fixed point theorem. In Section 4 we discuss some applications and give a multiplicity result. Finally, Section 5 is devoted to our conclusions and open problems. ## 2. Preliminaries Let \((\Omega,\mathcal{F},(\mathcal{F}_{t}),\mathbb{P})\) be a filtered probability space satisfying the usual conditions. Let \(W^{x}=(W^{x}_{t})_{t\geq 0}\) be a \(d-\)dimensional \((\mathcal{F}_{t})\)-Brownian motion started at \(x\in\mathrm{I\!R}^{d}\). For any open \(D\subset\mathrm{I\!R}^{d}\) and \(x\in D\), define the stopping time \(\tau_{D}(W^{x})\) as the first exit time of \(W^{x}\) from \(D\), that is \[\tau_{D}^{x}\equiv\tau_{D}(W^{x}):=\inf\{t\geq 0\,:\,W^{x}_{t}\notin D\}, \tag{2.1}\] with the standard convention \(\inf\varnothing=\infty\). We say that \(D\) is a _transient domain_ whenever \(\mathbb{P}\left[\tau_{D}^{x}<\infty\right]=1\) for all \(x\in D\). In particular, any bounded domain is transient. By continuity of the paths of \(W^{x}\), \(W^{x}(\tau_{D}^{x})\in\partial D\), i.e. \(\tau_{D}^{x}\) is also the first time the process \(W^{x}\) hits the boundary \(\partial D\) of \(D\). As usual, \(\mathbb{E}_{x}\) and \(\mathbb{P}_{x}\) denote, respectively, the expectation and probability with respect to the Brownian motion started at \(x\in\mathrm{I\!R}^{d}\); and \(||f||_{\infty}\) denotes the sup-norm of a function \(f\) on \(\mathrm{I\!R}^{d}\). 
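The probabilistic quantities appearing below (expected exit times and expected occupation times) are straightforward to approximate by simulation. The following rough Monte Carlo sketch, our own illustration and not part of the paper, estimates \(\mathbb{E}_{x}[\tau_{D}]\) for \(D=B_{R}(0)\) and a standard Brownian motion, compared against the classical closed form \((R^{2}-|x|^{2})/d\).

```python
import numpy as np

def expected_exit_time_ball(x0, R, dt=1e-3, n_paths=2000, seed=0):
    """Crude Euler/Monte Carlo estimate of E_x[tau_D] for standard Brownian
    motion leaving the ball D = B_R(0) in R^d (illustration only)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    times = np.empty(n_paths)
    for k in range(n_paths):
        x = np.array(x0, dtype=float)
        t = 0.0
        while x @ x < R * R:                      # still inside the ball
            x += np.sqrt(dt) * rng.standard_normal(d)
            t += dt
        times[k] = t
    return times.mean()

d, R = 3, 1.0
x0 = np.zeros(d)
print("Monte Carlo:", round(expected_exit_time_ball(x0, R), 3),
      "  closed form (R^2 - |x|^2)/d:", round((R**2 - x0 @ x0) / d, 3))
```

Estimating the expected occupation time of a subdomain before leaving \(D\) only requires accumulating, along the same paths, the time steps spent in that subdomain.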
_Remark 1_.: For any bounded domain \(D\subset\mathrm{I\!R}^{d}\), \(d\geq 3\), the Brownian motion is transient [33, Theorem 3.26] and thus the overall expected occupation time spent in \(D\), defined by \(\mathbb{E}_{x}\int_{0}^{\infty}\mathbf{1}_{D}\left(W_{t}\right)\mathrm{d}t\), \(x\in D\), is finite, which in turn implies that \(\sup_{x\in D}\mathbb{E}_{x}\left[\tau_{D}\right]<\infty\). We now recall the probabilistic concept of _regular boundary_ as given in [25, Chapter 4.2, p.245], which will play an important role for the continuity of solutions at the boundary. Intuitively, a point \(x\in\partial D\) is regular if the Brownian path started at \(x\) exits \(\bar{D}\) immediately. **Definition 2.1**.: _Let \(D\) be an open set. A point \(x\in\partial D\) is said to be regular for \(D\) if the first hitting time \(\sigma_{D}^{x}:=\inf\{t>0\,:\,W_{t}^{x}\in D^{c}\}\) satisfies \(\mathbb{P}[\sigma_{D}^{x}=0]=1\)._ It follows [25, Theorem 4.2.12, p. 245] that for any \(d\geq 2\), any point \(x\in\partial D\) is regular for \(D\) if, and only if, for every bounded, measurable function \(f:\partial D\mapsto\mathbb{R}\) which is continuous at \(x\), one has \[\lim_{y\to x,y\in D}\mathbb{E}_{x}f\left(W_{\tau_{D}}\right)=f(x). \tag{2.2}\] Let us now recall the following well-known result about the existence of solutions to the Poisson equation and its probabilistic representation (see, e.g., [33, Chapter 8, Remark 8.7]): **Proposition 2.2**.: _Let \(D\subset\mathrm{I\!R}^{d}\) be a bounded open domain. If \(f\in C_{b}(D)\) and \(u\in C(\bar{D})\cap C^{2}(D)\) is a solution of the Poisson problem: \(\frac{1}{2}\Delta u=f\) in \(D\), \(u=0\) on \(\partial D\), then \(u\) admits the probabilistic representation_ \[u(x)=\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}f(W_{t})\,\mathrm{d}t\right],\quad x \in D. \tag{2.3}\] _Conversely, if \(f\) is Holder continuous and every \(x\in\partial D\) is regular for \(D\), then (2.3) solves the Poisson problem for \(f\)._ _Remark 2_.: The regularity assumption in Proposition 2.2 for each \(x\in\partial D\) guarantees the boundary condition \(u(x)=0\) on \(\partial D\). Some classical criteria for regularity are, for instance, that either \(D\) satisfies the Poincare cone condition2, or that \(\liminf_{r\downarrow 0}\frac{\mathcal{L}(B_{r}(x)\cap D^{c})}{r^{d}}>0\) for \(x\in D\). Here \(\mathcal{L}\) denotes the Lebesgue measure on \(\mathrm{I\!R}^{d}\). See also [25, Chapter 4.2, Section C] and [33, Chapter 8.4, Theorem 8.37]. Footnote 2: A domain \(D\subset\mathrm{I\!R}^{d}\) satisfies the Poincaré cone condition at \(x\in\partial D\) if there exists an \(h>0\) and a cone \(V\) based at \(x\) with opening angle \(\alpha>0\) such that \(B_{h}(x)\cap V\subset D^{c}\). We can now state the equivalence between the solution to (1.1) and the solution to its corresponding nonlinear integral equation. **Lemma 2.3**.: _Let \(D\subset\mathrm{I\!R}^{d}\) be a bounded open domain and \(u\in C^{2}_{b}(D)\cap C(\bar{D})\). Assume \(h:\mathbb{R}\mapsto\mathbb{R}\) is a continuous function. If \(u\) is a solution to (1.1), then \(u\) solves the (non-linear) integral equation_ \[u(x)=\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}h(u(W_{t}))\,\mathrm{d}t\right]. \tag{2.4}\] _Conversely, if \(h\) is \(C^{1}(\mathbb{R})\) and each \(x\in\partial D\) is regular for \(D\), then (2.4) is a solution to (1.1)._ Proof.: We only need to verify that \(h\circ u:D\mapsto\mathbb{R}\) satisfies the conditions of Proposition 2.2. 
Indeed, for the first statement, since \(u\in C(\bar{D})\) and \(D\) is bounded, the image of \(u\) is a bounded interval. Hence, the continuity of \(h\) implies that the composition \(h\circ u\) is a bounded function. For the second statement, since \(h\in C^{1}(\mathbb{R})\), \(||u||_{\infty}<\infty\) and \(u\in C^{2}_{b}(D)\), then \(\nabla(h\circ u)=h^{\prime}(u)\Delta u\) and, further, \(|h^{\prime}(u)\Delta u|\leq\sup_{|z|\leq||u||_{\infty}|}h^{\prime}(z)||u||_{C^ {2}(D)}\,<\,\infty\). Thus, \(h\circ u\,\in\,C^{1}_{b}(D)\), which implies then the Lipschitz (and so the Holder) continuity of \(h\circ u\), as required. ### Newtonian potentials and Green functions As mentioned before, the connections between probability theory and the theory of PDEs is understood via the analytic concepts of Newtonian potentials and Green functions [12]. In particular, the probabilistic conditions provided in this paper to study the existence of solutions to (1.1) are related to the analytic concept of _Green potentials3_. For completeness, we recall this concept in what follows. Footnote 3: In fact, the proof of the result given in [3] is based on a careful study of the properties of the Green function. See also [40]. For any suitable function \(f\), its _Newtonian potential_, denoted by \(Uf\), is the operator defined for each \(x\) by \[Uf(x)\stackrel{{ def}}{{=}}c_{d}\,\int_{\mathbb{R}^{d}}\frac{f(y )}{|x-y|^{d-2}}\,\mathrm{d}y,\qquad\text{where}\qquad c_{d}\stackrel{{ def}}{{=}}\frac{\Gamma(d/2-1)}{(2\pi)^{d/2}}. \tag{2.5}\] For each \(x\), the mapping \(u_{x}(y)\equiv u(x,y)=\frac{1}{|x-y|^{d-2}}\) is known as the Newton gravitational potential (or also the Coulomb electrostatic potential). Apart from a numerical constant, it represents the potential induced by a mass or charge placed at the point \(x\) and evaluated at the point \(y\). It is known [4, Proposition 3.1, p. 104] that if \(f\geq 0\) and \(d\geq 3\), then the following equality holds \[Uf(x)=\int_{0}^{\infty}T_{t}f(x)\,\mathrm{d}t,\] where \(T_{t}\) is the transition operator for the Brownian motion, given by \[T_{t}f(x)\stackrel{{ def}}{{=}}\mathbf{E}_{x}\left[f(X_{t}) \right]=\int_{\mathbb{R}^{d}}p(t;x,y)f(y)\,\mathrm{d}y,\] with \(p(t;x,y)\) being the transition function of the Brownian motion \[p(t;x,y)=\frac{1}{(2\pi t)^{d/2}}e^{-\frac{|x-y|^{2}}{2t}}.\] The connection between the Newtonian potential \(Uf\) and the transition operator of the Brownian motion can be seen from the following equality \[\int_{0}^{\infty}p(t;x,y)\,\mathrm{d}t=\frac{\Gamma(d/2-1)}{2\pi^{d/2}}\frac{ 1}{|x-y|^{d-2}}\ \equiv\ c_{d}\,u(x,y).\] Moreover, given any domain \(D\), the Green potential operator for \(D\) is defined, for any suitable \(f\geq 0\) by \[G_{D}f(x)\stackrel{{ def}}{{=}}\mathbf{E}_{x}\left[\int_{0}^{ \tau_{D}}f(W_{t})\,\mathrm{d}t\right].\] Note that * If \(D=\mathbb{R}^{d}\), then \(G_{\mathbb{R}^{d}}f=Uf\) * If \(f\equiv\mathbb{I}\) (\(\mathbb{I}\) denotes the identity function), then \[G_{D}\mathbb{I}(x)=\mathbf{E}_{x}[\tau_{D}],\] * If \(f(x)=\mathbf{1}_{A}(x)\) for a Borel set \(A\), then \[G_{D}\mathbf{1}_{A}(x)=\mathbf{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{B}(W_ {s})\,\mathrm{d}s\right],\] that is, \(G_{D}\mathbf{1}_{A}(x)\) is the expected occupation time in \(A\) before leaving \(D\). In fact, one has the following expression \[G_{D}\mathbf{1}_{A}(x)=G_{D}(x,A)=\int_{A}g_{D}(x,y)\,\mathrm{d}y,\] where \(g_{D}(x,y)\) is called the Green function for \(D\). 
* If \(f=\mathbf{1}_{B}\) for a Borel set \(B\), the integral \[\int_{0}^{\infty}\mathbf{1}_{B}(X_{t})\,\mathrm{d}t,\] represents the total "occupation time" the Brownian motion spends on \(B\). Thus, \(U\mathbf{1}_{B}(x)\) denotes the corresponding expected occupation time (cf. Remark 1). ## 3. Main result We can now state sufficient conditions to guarantee the existence of a positive solution to the integral equation (2.4) for the function \(h:\mathbb{R}_{+}\to\mathbb{R}\), \(h:y\mapsto\lambda y^{p}\), \(p>1\), \(\lambda>0\). Let us first introduce some additional notation. Let \(V\) be an open subset of \(D\). For each \(x\in\bar{D}\), set \(L_{D}[V]\equiv G_{D}\mathbf{1}_{V}\). That is, \(L_{D}[V]\,:\,\bar{D}\to\mathbb{R}_{+}\) by \[L_{D}[V]\,:\,x\;\mapsto\;\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{ \{W_{s}\in V\}}\,\mathrm{d}s\right]. \tag{3.1}\] Recall that, for each \(x\in\bar{D}\), the value \(L_{D}[V](x)\) gives the expected occupation time spent by the Brownian motion \(W^{x}\) (started at \(x\)) on the subset \(V\) before leaving \(D\). _Remark 3_.: Note that, for each \(x\in D\), \(L_{D}[V]\) is finite thanks to the transience of the Brownian motion for \(d\geq 3\), and further \(L_{D}[V](x)=0\) for all \(x\in\partial D\). _Remark 4_.: Observe also that (by monotonicity) for any \(U\subset V\subset D\) it follows that \[L_{D}[U](x)\quad\leq\quad L_{D}[V](x),\qquad x\in D.\] In particular, for each \(x\in D\), \[L_{D}[V](x)\quad\leq\quad L_{D}[D](x)\quad=\quad\mathbb{E}_{x}\left[\int_{0}^{ \tau_{D}}\mathbf{1}_{\{W_{s}\in D\}}\,\mathrm{d}s\right]\quad=\quad\mathbb{E} _{x}[\tau_{D}], \tag{3.2}\] which in turn implies that \[\inf_{y\in V}L_{D}[V](y)\quad\leq\quad\sup_{x\in\bar{D}}\mathbb{E}_{x}[\tau_{ D}]. \tag{3.3}\] We can now state our main result. **Theorem 3.1**.: _Let \(d\geq 3\), \(p>1\) and \(\lambda>0\). Let \(D\subset\mathrm{I\!R}^{d}\) be a bounded open domain such that each boundary point \(x\in\partial D\) is regular for \(D\). Suppose that there exists a partition \(D_{1},D_{2}\subset D\), with \(D_{1}\subset\subset D\) such that there exist positive constants \(m\) and \(M\) such that \(m\leq M\) satisfying the following conditions_ \[\sup_{x\in\bar{D}}\mathbb{E}_{x}[\tau_{D}]\quad\leq\quad\frac{M^{1-p}}{ \lambda}, \tag{3.4}\] \[\inf_{x\in D_{1}}\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{\{W_{s} \in D_{1}\}}\,\mathrm{d}s\right]\quad\geq\quad\frac{m^{1-p}}{\lambda}, \tag{3.5}\] \[M\,\sup_{x\in D_{2}}\mathbb{E}_{x}\left[\tau_{D}\right]\,\left(\sup_{x\in D_{ 2}}\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{\{W_{s}\in D_{2}\}}\, \mathrm{d}s\right]\right)^{p}\quad\leq\quad\left(\frac{m}{\lambda}\right)^{p}, \tag{3.6}\] _then (1.1) has a positive solution \(u\in C_{b}^{2}(D)\cap C(\overline{D})\) such that_ \[u\quad\geq\quad m\quad>\quad 0\quad\text{ in}\quad D_{1},\quad\text{ and }\quad \quad||u||_{\infty}\quad\leq\quad M. \tag{3.7}\] _Remark 5_.: Before presenting the proof let us comment on the assumptions of Theorem 3.1. * The inequality in (3.5) implies the existence of a subset \(D_{1}\subset D\) for which the expected occupation time of a Brownian motion in \(D_{1}\) before leaving \(D\) is bounded below by a positive constant. This guarantees that the solution is nontrivial. 2. The inequality in (3.6) imposes a condition between the exit time from \(D\) starting on \(D_{2}\) (the complement of \(D_{1}\)) and the expected occupation time on \(D_{2}\) before leaving \(D\). 
As a matter of fact, this condition ensures that the fixed point will be small in the complement of \(D_{1}\), i.e. it will be concentrated in \(D_{1}\). In other words, that the iteration procedure can be localised (cf. the multiplicity result Theorem 4.1). 3. The above conditions represent, in some sense, a probabilistic way of saying that the domain \(D\) is close to having a hole, i.e. that the domain is close to a topologically nontrivial one and they should be compared to previous sufficient conditions for existence of solutions to equation (1.1), e.g. [13, 16] or the general result [3]. Notice that Ding's result makes clear that a purely topological condition cannot be necessary for the existence of solutions, but rather a combination of topological and geometric features of the domain. In this respect, our probabilistic formulation captures these ideas. More generally, the question of how topological or geometric features can be characterised in probabilistic terms, for example, using occupation and exit times of stochastic processes, is a natural and interesting one. In this respect, we refer to [39] and [31] where strong convexity and simple connectedness are studied from a probabilistic point of view. Proof.: (of Theorem 3.1) By Lemma 2.3, proving the existence of a positive solution for the boundary problem (1.1) is equivalent to solving the nonlinear integral equation (2.4). Let us thus rewrite (1.1) as a fixed point problem \(u(x)=(Tu)(x)\) for a well-defined operator \(T\) as described below. Let \(C(\overline{D})\) denote the space of real-valued continuous functions on \(\overline{D}\), endowed with the sup norm \(||\cdot||_{\infty}\). Take a partition \(D_{1},D_{2}\subset D\), and positive constants \(M,m>0\) as in the statement. For any real-valued function \(g\) on \(\bar{D}\), and any open set \(V\subset D\), define \(L^{g}_{D}[V]\,:\,\bar{D}\to\mathbb{R}_{+}\) by \[L^{g}_{D}[V]\;:\;x\;\mapsto\;\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}g(W_{s}) \mathbf{1}_{\{W_{s}\in V\}}\,\mathrm{d}s\right]. \tag{3.8}\] Define \(\mathbf{B}\) as the set \[\mathbf{B}:=\left\{u\in C(\overline{D});\;u(x)=0,\;x\in\partial D:\;i)\,\inf _{y\in D_{1}}L^{u}_{D}[D_{1}](y)\geq m,\quad ii)\,\sup_{y\in D_{2}}L^{u}_{D}[ D_{2}](y)\leq m,\quad\text{and}\quad iii)\,||u||_{\infty}\,\leq\,M\right\}. \tag{3.9}\] Notice that this set is nonempty. Indeed, by Urysohn's separation theorem, there exists a continuous function, \(u\), vanishing at the boundary of \(D\), with \(u\equiv m\) in \(D_{1}\) and \(0\leq u\leq m\) in \(D\). It is further necessary to mollify \(u\) to guarantee its smoothness. Observe that \(\mathbf{B}\) is convex and a closed subset of the space \(C(\overline{D})\). Therefore, \((\mathbf{B},||\cdot||_{\infty})\) is a complete metric space. Let \(h(y)=\lambda\,y^{p}\). Define the operator \(T\) on \(\mathbf{B}\), for each \(u\in\mathbf{B}\), as the mapping \(Tu:\bar{D}\to\mathbb{R}\), whose value at \(x\in D\) is given by \[(Tu)(x):=\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}h(u(W_{t}))\,\mathrm{d}t\right], \tag{3.10}\] and \((Tu)(x)=0\) for \(x\in\partial D\). To prove the statement we will rely on the Schauder fixed point theorem. We will proceed then in two steps: First we show that the operator \(T\) maps \(\mathbf{B}\) into \(\mathbf{B}\), and that \(T\) is a continuous operator. For this, note that if \(u\in\mathbf{B}\) then \(Tu\in C(\overline{D})\). 
Moreover, since \(||u||_{\infty}\leq M\) and \(p>1\), it follows that for each \(x\in D\), \[|Tu(x)|\leq\lambda\,\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}||u||_{\infty}^{p} \,\mathrm{d}t\right]\;\leq\;\lambda\,M^{p}\sup_{x\in D}\mathbb{E}_{x}[\tau_{D}] \;\leq M,\] where the last inequality follows from the equality in (3.4). Hence, \(||Tu||_{\infty}\leq M\). On the other hand, to prove that \[\inf_{y\in D_{1}}L_{D}^{T}[D_{1}](y)\geq m,\] observe that \[\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}Tu(W_{s})\,\mathbf{1}_{\{W_ {s}\in D_{1}\}}\,\mathrm{d}s\right] =\lambda\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\mathbb{E}_{W_{s }}\left(\int_{0}^{\infty}\mathbf{1}_{\{\tau_{D}>r\}}u^{p}(W_{r})\,\mathrm{d}r \right)\,\mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}s\right]\] \[\geq\lambda\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\inf_{x\in D _{1}}\left\{\mathbb{E}_{x}\left(\int_{0}^{\tau_{D}}u(W_{r})\,\mathrm{d}r \right)\right\}^{p}\,\mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}s\right]\] \[\geq\lambda\,\inf_{x\in D_{1}}\left\{\mathbb{E}_{x}\left(\int_{0 }^{\tau_{D}}u(W_{r})\,\mathrm{d}r\right)\right\}^{p}\,\mathbb{E}_{y}\left[\int _{0}^{\tau_{D}}\mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}s\right]\] \[\geq\lambda\,\inf_{x\in D_{1}}\left\{\mathbb{E}_{x}\left(\int_{0 }^{\tau_{D}}u(W_{r})\mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}r\right)\right\} ^{p}\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{\{W_{s}\in D_{1}\}} \,\mathrm{d}s\right]\] \[\geq\lambda\,m^{p}\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}} \mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}s\right]\] \[\geq\lambda\,m^{p}\inf_{y\in D_{1}}\mathbb{E}_{y}\left[\int_{0}^{ \tau_{D}}\mathbf{1}_{\{W_{s}\in D_{1}\}}\,\mathrm{d}s\right]\geq m\] where we have used Jensen's inequality, inequality \(i)\) in the definition of \(\mathbf{B}\), and condition (3.5). Taking the infimum over \(y\in D_{1}\) in the previous inequality we obtain the desired result. It remains to be proved that \[\sup_{y\in D_{2}}L_{D}^{T}[D_{2}](y)\leq m.\] \[\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}Tu(W_{s})\,\mathbf{1}_{\{ W_{s}\in D_{2}\}}\,\mathrm{d}s\right] =\lambda\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\mathbb{E}_{W_{s }}\left(\int_{0}^{\infty}\mathbf{1}_{\{\tau_{D}>r\}}u^{p}(W_{r})\,\mathrm{d}r \right)\,\mathbf{1}_{\{W_{s}\in D_{2}\}}\,\mathrm{d}s\right]\] \[\leq\lambda\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\left\{ \mathbb{E}_{W_{s}}\left(\int_{0}^{\infty}\mathbf{1}_{\{\tau_{D}>r\}}\,u(W_{r} )\,\mathrm{d}r\right)\right\}^{1/p}\mathbf{1}_{\{W_{s}\in D_{2}\}}\,\mathrm{d}s\right]\] \[\leq\lambda\,\mathbb{E}_{y}\left[\int_{0}^{\tau_{D}}\left\{\sup_ {x\in D_{2}}\mathbb{E}_{x}\left(\int_{0}^{\tau_{D}}u(W_{r})\,\mathrm{d}r \right)\right\}^{1/p}\mathbf{1}_{\{W_{s}\in D_{2}\}}\,\mathrm{d}s\right]\] \[\leq\lambda\,\left\{\sup_{x\in D_{2}}\mathbb{E}_{x}\left(\int_{0 }^{\tau_{D}}u(W_{r})\,\mathrm{d}r\right)\right\}^{1/p}\,\mathbb{E}_{y}\left[ \int_{0}^{\tau_{D}}\mathbf{1}_{\{W_{s}\in D_{2}\}}\,\mathrm{d}s\right]\] \[\leq m,\] where we have used Jensen's inequality, and condition (3.6). Taking the supremum over \(y\in D_{2}\) in the previous inequality we obtain the desired result. We can thus conclude that \(T:\mathbf{B}\to\mathbf{B}\), as required. We now prove that \(T\) is continuous. 
Let \(u,v\in\mathbf{B}\) and \(x\in D\), then the mean-value theorem yields \[|(Tu)(x)-(Tv)(x)| \leq \lambda\,\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\left|u^{p}(W_{t} )-v^{p}(W_{t})\right|\mathrm{d}t\right]\] \[\leq K\,\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\left|u(W_{t})-v(W_{t} )\right|\mathrm{d}t\right],\] where \(K:=\lambda\,pM^{p-1}\), since \(u^{p}\) is smooth and we can take the Lipschitz constant as the maximum of its derivative. Hence, \[||T(u)-T(v)||_{\infty} \leq \lambda\,pM^{p-1}||u-v||_{\infty}\sup_{x\in D}\mathbb{E}_{x}\left[ \tau_{D}\right].\] By Remark 1, we obtain that \(||T(u)-T(v)||_{\infty}\leq C\,||u-v||_{\infty}\), for all \(u,v\in\mathbf{B}\), where \(C=\lambda\,pM^{p-1}\sup_{x\in D}\mathbb{E}_{x}\left[\tau_{D}\right]\) is a finite constant independent of \(u\) and \(v\). Hence, \(T\) is continuous. In fact, by the Arzela-Ascoli theorem, \(T(\mathbf{B})\) is precompact. Therefore, by Schauder's fixed point theorem4, \(T\) has a fixed point \(u_{*}\in\mathbf{B}\), as required. Footnote 4: Schauder’s Fixed point theorem: _Let \(X\) be a Banach space, \(M\subset X\) be non-empty, convex, and closed, and \(T:M\subset X\mapsto M\) be a continuous operator such that \(T(M)\) is precompact. Then \(T\) has a fixed point._ \(\diamondsuit\) ## 4. Applications In what follow we present two examples. In the first, we consider the case of the ball and \(p>2\), which includes most values of the critical Sobolev exponent (\(d>6\)). We show that, at least for a sufficiently large radius, the hypotheses of Theorem 3.1 cannot hold, since by Pohozaev's identity the only solution is trivially identically equal to zero. In the second example for the case of an annulus it is shown that the conditions of the main theorem hold, provided the inner and outer radii are suitably chosen. This recovers Coron's result (see [13]) and shows that our result is nonempty. **Example 1.** Consider the case where the domain is \(D=B_{T}(0)\subset\mathrm{I\!R}^{d}\), \(p>2\), and \(d\geq 3\). It is known that in this case, the only solution to (1.1) is the trivial one. Hence, we expect that, for consistency, at least one of the conditions in Theorem 3.1 would not be satisfied. Indeed, at least for \(T\) sufficiently large we can prove that (3.4) and (3.5) hold, but (3.6) does not. Let \(0<r<R\). Take \(D_{1}\equiv B_{R}\setminus B_{r}\) and \(D_{2}\equiv D_{1}^{c}\). It is not difficult to see that (3.4) holds with \(M:=\left(\lambda\,T^{2}/d\right)^{\frac{1}{1-p}}\); and by the Krylov-Safanov inequality, there exists some \(K(T,R,r,d)>0\) such that (3.5) holds with \(m=\left(\lambda\,K\right)^{\frac{1}{1-p}}\). 
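For the reader's convenience, the first of these claims can be verified directly: since \(|W_{t}|^{2}-d\,t\) is a martingale, optional stopping at \(\tau_{B_{T}(0)}\) yields
\[\mathbb{E}_{x}[\tau_{B_{T}(0)}]\;=\;\frac{T^{2}-|x|^{2}}{d}\;\leq\;\frac{T^{2}}{d}\;=\;\frac{M^{1-p}}{\lambda},\qquad x\in B_{T}(0),\]
which is precisely condition (3.4).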
Now suppose that (3.6) holds, then note that \[\left(\frac{m}{\lambda}\right)^{p} \geq M\,\sup_{x\in D_{2}}\mathbb{E}_{x}\left[\tau_{D}\right]\, \left(\sup_{x\in D_{2}}\mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{ \left\{W_{\epsilon}\in D_{2}\right\}}\,\mathrm{d}s\right]\right)^{p}\] \[=\frac{M^{2-p}}{\lambda}\,\left(\sup_{x\in D_{2}}\mathbb{E}_{x} \left[\int_{0}^{\tau_{D}}\mathbf{1}_{\left\{W_{\epsilon}\in D_{2}\right\}}\, \mathrm{d}s\right]\right)^{p}\] \[\geq\frac{M^{2-p}}{\lambda}\left(\inf_{x\in D_{1}}\mathbb{E}_{x} \left[\int_{0}^{\tau_{D}}\mathbf{1}_{\left\{W_{\epsilon}\in D_{1}\right\}}\, \mathrm{d}s\right]\right)^{p}\] \[\geq\frac{M^{2-p}}{\lambda}\left(\frac{m^{1-p}}{\lambda}\right)^{ p}.\] Hence, \[m^{p}\geq\frac{M^{2-p}}{\lambda}m^{p-p^{2}} \tag{4.1}\] and thus \[m^{p^{2}}\geq\left(\frac{\lambda\,T^{2}}{d}\right)^{\frac{2-p}{1-p}}\frac{1}{ \lambda}=\left(\frac{T^{2}}{d}\right)^{\frac{2-p}{1-p}}\lambda^{\frac{1}{1-p}}. \tag{4.2}\] Therefore, since \(p>2\) and \(d\geq 3\), the inequality above does not hold if one takes \(T\) sufficiently large. Notice that the estimates we are providing are certainly not optimal. A careful study with \(p\) equal to the critical Sobolev exponent should provide a proof for any ball, independently of its size. Moreover, in principle, by Pohozhaev's identity, these consideration should extend to any starshaped domain. **Example 2.** Take positive constants \(\delta,r,R\) and \(T\) satisfying \(0<\delta<r<R<T\). Also, let us fix \(1<p<\sqrt{2}\) and \(d=3\). Consider now the domain given by the annulus \(D\equiv A(\delta,T)\stackrel{{ def}}{{=}}B_{T}\setminus B_{\delta}\), and take the partition \(D_{1}\) and \(D_{2}\) as in Example 1. Besides, we will require that \(D_{2}\) contains all the \(x\) such that \[|x|=(\delta T(\delta+T)/2)^{1/3}.\] Observe that \(D_{1}=A(r,R)\) and \(D_{2}=A(\delta,r)\cup A(R,T)\). We need to show that the conditions (3.4)-(3.6) in Theorem 3.1 hold for this case. Indeed, to obtain \(M\) in condition (3.4), one can prove that5 Footnote 5: In particular, note that when \(\delta\to 0\), then we recover the result for the ball \(B_{T}(0)\). \[\mathbb{E}_{x}(\tau_{A(\delta,T)})=\frac{1}{3}\left(T^{2}+T\delta+\delta^{2}- \frac{\delta T(\delta+T)}{|x|}-|x|^{2}\right),\qquad x\in A(\delta,T). \tag{4.3}\] Define, for \(z\in[r,R]\), the mapping \(f(z)=\frac{1}{3}\left(T^{2}+T\delta+\delta^{2}-\frac{\delta T(\delta+T)}{z}-z ^{2}\right)\). Then \[f^{\prime}(z)=\frac{1}{3}\left(\frac{\delta T(T+\delta)}{z^{2}}-2z\right)=0 \quad\text{ if and only if }\quad z^{3}=\frac{\delta T(T+\delta)}{2}. \tag{4.4}\] A direct calculation shows that this condition determines the maximum of \(f\) and so we obtain \[\sup_{x\in A(\delta,T)}\mathbf{E}[\tau_{A(\delta,T)}]=\frac{T^{2}+T\delta+ \delta^{2}}{3}-\frac{1}{3}\left(\frac{\delta T(T+\delta)}{2}\right)^{2/3}. \tag{4.5}\] Hence, condition (3.4) holds with \[M\equiv M(\delta,T,p)=\left(\frac{T^{2}+T\delta+\delta^{2}}{3}-\frac{1}{3} \left(\frac{\delta T(T+\delta)}{2}\right)^{2/3}\right)^{\frac{1}{1-p}}.\] As before, the second condition in (3.5) holds with \(m=K(\delta,r,R,T)^{\frac{1}{1-p}}\) for some positive constant \(K\) thanks to the Krylov-Safanov inequality. It remains to prove that (3.6) holds. 
Since \(D_{2}\) contains the points \(x\in A(\delta,T)\) for which the supremum in (4.3) is achieved, it follows that \[\sup_{x\in D_{2}}\mathbf{E}[\tau_{D}]=M^{1-p}.\] Therefore, using that \[M\ \sup_{x\in D_{2}}\mathbb{E}_{x}\left[\tau_{D}\right]\ \left(\sup_{x\in D_{2}} \mathbb{E}_{x}\left[\int_{0}^{\tau_{D}}\mathbf{1}_{\{W_{x}\in D_{2}\}}\, \mathrm{d}s\right]\right)^{p}\ \leq M\ \left(\sup_{x\in D_{2}}\mathbb{E}_{x}\left[\tau_{D} \right]\right)^{p+1},\] it follows that \[M^{(1-p)(p+1)+1}=M^{2-p^{2}}=\left(\frac{T^{2}+T\delta+\delta^{2}}{3}-\frac{1 }{3}\left(\frac{\delta T(T+\delta)}{2}\right)^{2/3}\right)^{\frac{2-p^{2}}{1- p}}<(T^{2}+T\delta+\delta^{2})^{\frac{2-p^{2}}{1-p}}\leq m^{p},\] which holds for \(T\) sufficiently large and \(p\in(1,\sqrt{2})\). As an immediate extension we obtain the following multiplicity result: **Theorem 4.1**.: _Let \(d\geq 3\) and \(p>1\). Let \(D\subset\mathds{R}^{d}\) be a bounded open domain such that each \(x\in\partial D\) is regular for \(D\). Assume that there is a subset \(V\) with \(s\) connected components \(V_{1},\ldots,V_{s}\) in \(D\). If for each partition \(D^{i}_{1}\equiv V_{i}\) and \(D^{i}_{2}\equiv V^{c}_{i}\), \(i=1,\ldots s\), we can find corresponding \(m_{k}\), \(M_{k}\) satisfying the inequalities (3.4)-(3.6), then there are at least \(2^{s}-1\) distinct solutions to problem (1.2)._ Proof.: The proof of this theorem follows directly by applying a similar argument to the one used in the proof of Theorem 3.1. Note that, excluding the empty set (which should not be taken into account, since it corresponds to the zero solution), there are N:=\(2^{s}-1\) possible open subsets \(\hat{V}_{k}\subset D\), \(k=1,\ldots,N\), corresponding to a fixed choice of connected components. Define \[I_{k}:=\text{set of indices }i\in\{1,\ldots,s\}\text{ such that }V_{i}\in\hat{V}_{k}.\] For each subset \(\hat{V}_{k}\) we can thus define the partition \(\hat{D}^{k}_{1}\equiv\hat{V}_{k}\) and \(\hat{D}^{k}_{2}=D\setminus\hat{V}_{k}\). Define the constants \[\hat{m}_{k}:=\max\{m_{i}:i\in I_{k}\}\qquad and\qquad\hat{M}_{k}:=\min\{M_{i}: i\in I_{k}\}.\] A fixed point argument can now be used: for each \(k=1,\ldots,N\), define the operator \(T_{k}\) as in (3.10) but now defined on the test functions \[\mathbf{B}_{k}:=\Big{\{}u\in C(\overline{D}),\ u(x)=0,\,x\in \partial D:\ i)\inf_{y\in\hat{D}_{1}^{k}}L_{D}^{u}[\hat{D}_{1}^{k}](y)\geq\hat{ m}_{k},\\ \sup_{y\in\hat{D}_{2}^{k}}L_{D}^{u}[\hat{D}_{2}^{k}](y)\leq\hat{ m}_{k},\quad\text{and}\quad iii)\,||u||_{\infty}\leq\,\hat{M}_{k}\Big{\}}.\] The proof follows from a similar argument as before implying the existence of a nontrivial fixed point in \(\hat{V}_{k}\), for each \(k\), and therefore, excluding the trivial case, there are, at least, \(2^{s}-1\) solutions, as required. \(\diamondsuit\) ## 5. Conclusions In this work we present a probabilistic approach to study the existence of solutions to a type of elliptic equations. We highlight the following relevant observations: In probabilistic terms, we provided sufficient conditions to guarantee the existence of positive solutions to (1.1). Indeed, the hypotheses on the domain \(D\) in Theorem 3.1 can be viewed as a probabilistic characterization of the existence of a "hole" in \(D\). 
Intuitively, this should be contrasted with the sufficient conditions on the topology of \(D\) given by Bahri and Coron in [13, 3], where it is shown that if the domain has a hole then a nontrivial solution exists, and with the result of Ding [16], in which a "quasihole" is defined and shown to be a sufficient condition for existence. In the same spirit, the multiplicity results in [35] and [34] can be phrased as guaranteeing as many solutions as the number of holes (or quasiholes) of the domain. Theorem 3.1 shows that there exists a nontrivial connection between the topology/geometry of the underlying domain \(D\) and the expected exit and occupation times of Brownian motion. This approach should be contrasted with the use of other analytical tools such as potential theory and Green's functions. It is natural to conjecture that conditions similar to the ones given in the main theorem might also be necessary for the existence of a nontrivial solution. Our approach also suggests that there should be a natural characterisation of topological and geometrical properties of a domain in probabilistic terms, as is known from [31, 39], where connectedness and simple connectedness are studied. This is reminiscent of the conditions on the capacity of the boundary of the domain for boundary values of elliptic equations to be attained, as well as of the probabilistic characterisation of regularity of a boundary point (analytically verified through Poincaré's cone condition) in terms of expected exit times. There are several natural extensions of the ideas presented here. In the first place, the study of sign-changing solutions and multiplicity results along the same lines as Theorem 4.1 can be dealt with using these probabilistic methods. Additionally, it is possible to introduce a potential in equation (1.1), as in a nonlinear Schrödinger equation, for which probabilistic representation formulas similar to the ones we consider exist. Finally, it would be interesting to consider differential operators other than the Laplacian, for instance more general elliptic operators involving different stochastic processes, e.g. the fractional Laplacian and operators associated to Lévy processes. It is natural to conjecture that the corresponding conditions will be given in terms of the exit and occupation times of those processes.
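The expected exit and occupation times on which this characterisation rests are also straightforward to approximate numerically, which may be useful when exploring concrete domains. The following minimal Monte Carlo sketch is an illustration only: the domain, time step, starting point and all identifiers are our own choices, not quantities fixed by the results above. It estimates \(\mathbb{E}_{x}[\tau_{D}]\) and the expected occupation time of \(D_{1}\) before leaving \(D\) for the annulus of Example 2, and compares the former with the exact formula (4.3).

```python
import numpy as np

# Illustrative Monte Carlo estimation of the expected exit time E_x[tau_D] and
# of the expected occupation time of D_1 before leaving D, for a Brownian
# motion in R^3.  Here D = A(delta, T) is the annulus of Example 2 and
# D_1 = A(r, R); all parameter values below are illustrative.

rng = np.random.default_rng(0)

delta, r, R, T = 0.5, 1.0, 2.0, 3.0   # inner/outer radii, delta < r < R < T
h = 1e-3                              # time step of the Euler scheme
n_paths = 500                         # number of simulated Brownian paths


def in_annulus(x, a, b):
    """True if the point x lies in the open annulus a < |x| < b."""
    s = np.linalg.norm(x)
    return a < s < b


def simulate(x0):
    """Estimate E_x0[tau_D] and E_x0[ int_0^tau_D 1_{W_s in D_1} ds ]."""
    exit_times = np.empty(n_paths)
    occupation = np.empty(n_paths)
    for i in range(n_paths):
        x = np.array(x0, dtype=float)
        t, occ = 0.0, 0.0
        while in_annulus(x, delta, T):
            if in_annulus(x, r, R):
                occ += h
            x += np.sqrt(h) * rng.standard_normal(3)   # Brownian increment
            t += h
        exit_times[i] = t
        occupation[i] = occ
    return exit_times.mean(), occupation.mean()


if __name__ == "__main__":
    x0 = [1.5, 0.0, 0.0]              # a starting point lying inside D_1
    mean_exit, mean_occ = simulate(x0)
    print(f"estimated E_x[tau_D]        ~ {mean_exit:.3f}")
    print(f"estimated occupation of D_1 ~ {mean_occ:.3f}")
    # exact value of E_x[tau_D] from formula (4.3) with d = 3
    s = np.linalg.norm(x0)
    exact = (T**2 + T*delta + delta**2 - delta*T*(delta + T)/s - s**2) / 3.0
    print(f"exact E_x[tau_D] from (4.3) ~ {exact:.3f}")
```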
2309.11074
Power boundedness in Fourier and Fourier-Stieltjes algebra of an ultraspherical hypergroup
Let $H$ be an ultraspherical hypergroup associated with a locally compact group $G$ and a spherical projector $\pi$ and let $A(H)$ and $B(H)$ denote the Fourier and Fourier-Stieltjes algebras, respectively, associated with $H.$ In this note, we study power boundedness and Ces\`aro boundedness in $B(H)$. We also characterize the power bounded property for both $A(H)$ and $B(H).$
Reza Esmailvandi, Mehdi Nemati, N. Shravan kumar
2023-09-20T05:40:09Z
http://arxiv.org/abs/2309.11074v1
# Power boundedness in Fourier and Fourier-Stieltjes algebra of an ultraspherical hypergroup

###### Abstract.

Let \(H\) be an ultraspherical hypergroup associated with a locally compact group \(G\) and a spherical projector \(\pi\) and let \(A(H)\) and \(B(H)\) denote the Fourier and Fourier-Stieltjes algebras, respectively, associated with \(H.\) In this note, we study power boundedness and Cesaro boundedness in \(B(H)\). We also characterize the power bounded property for both \(A(H)\) and \(B(H).\)

Key words and phrases: Fourier algebras, ultraspherical hypergroups, locally compact groups, power boundedness, Cesaro boundedness

2010 Mathematics Subject Classification: 43A62, 43A30; Secondary: 46J10

## 1. Introduction

Let \(G\) be a locally compact abelian group. A classical result of Schreiber [10] states that the algebras \(L^{1}(G)\) and \(M(G)\) are power bounded if and only if \(G\) is compact and finite, respectively. Natural generalizations of the group algebra and the measure algebra are the Fourier and Fourier-Stieltjes algebras associated to general locally compact groups. These algebras were introduced by Eymard [1] and, since then, they have been a major object of study in abstract harmonic analysis. In 2011, Lau and Kaniuth showed that \(A(G)\) and \(B(G)\) are power bounded if and only if \(G\) is discrete and finite, respectively.

Let \(G\) be a locally compact group and let \(\pi\) be a spherical projector. Let \(H\) be an ultraspherical hypergroup associated to \(G\) and \(\pi.\) Let \(A(H)\) and \(B(H)\) denote the Fourier and Fourier-Stieltjes algebras on \(H\), introduced by Muruganandam in 2008 [11]. The purpose of this paper is to extend the result of Lau and Kaniuth on groups to the context of ultraspherical hypergroups. More precisely, we show that \(A(H)\) and \(B(H)\) possess the power bounded property if and only if \(H\) is discrete and finite, respectively. In Section 4, we characterize the power boundedness property of \(A(H)\); this is the content of Theorem 4.3. Initially, in Section 3, results from [10, 10] are used to deduce relations among power bounded elements of \(B(H)\) and certain sets from the coset ring. We also study the sequence \(\{u^{n}\}_{n\in\mathbb{N}}\) in relation to the same sets considered, where \(u\) is a power bounded element of \(B(H).\) Finally, we also study Cesaro bounded elements of \(B(H)\) and relate them to the same sets.

## 2. Preliminaries

Let \(G\) be a locally compact group and let \(\mathcal{C}(G)\) denote the set of all nonempty compact subsets of \(G\). Let \(\pi\) be a spherical projector on \(G\). A function \(f\) is called \(\pi\)-radial if \(\pi(f)=f\) and, similarly, a measure \(\mu\) is called \(\pi\)-radial if \(\pi^{*}(\mu)=\mu\). Let \(H=\{\dot{x}=\mathcal{O}x:x\in G\}\), equipped with the natural quotient topology under the quotient map \(p:G\to H\). We identify \(M(H)\) with the space of all \(\pi^{*}\)-radial measures in \(M(G).\) Define the product on \(M(H)\) as \(\delta_{\dot{x}}*\delta_{\dot{y}}=\pi^{*}(\pi^{*}(\delta_{x})*\pi^{*}(\delta_{y}))\) for all \(x,y\in G\). With this structure \(H\) becomes a locally compact hypergroup, called a spherical hypergroup [13, Theorem 2.12]. A spherical hypergroup \(H\) is further called an ultraspherical hypergroup if the modular function on \(G\) is \(\pi\)-radial.
One of the most common and interesting examples of an ultraspherical hypergroup is the double coset hypergroup. Let \(G\) be a locally compact group containing a compact subgroup \(K.\) Define \(\pi:C_{c}(G)\to C_{c}(G)\) as \[\pi(f)(x)=\int_{K}\int_{K}f(k^{\prime}xk)\ dk\ dk^{\prime}.\] Then \(\pi\) defines a spherical projector and the resulting hypergroup is an ultraspherical hypergroup. Specific examples of double coset hypergroups are the orbit hypergroups, which are constructed as follows. Let \(G\) be a locally compact group and let \(K\) be a compact subgroup of \(Aut(G),\) the group of all topological automorphisms of \(G.\) For each \(x\in G\) denote by \(K(x)=\{\alpha(x):\alpha\in K\}\) the orbit of \(x\) under \(K.\) The orbit hypergroup \(G_{K}\) is the space of all orbits \(K(x),x\in G,\) equipped with the quotient topology and the obvious hypergroup structure. Now, consider the semidirect product \(G\rtimes K\), where the group product of \((x,\alpha)\) with \((y,\beta)\) in \(G\rtimes K\) is given by \[(x,\alpha)(y,\beta)=(x\alpha(y),\alpha\beta).\] Then the map \(K(x)\to K(x,\beta)K\) is a topological hypergroup isomorphism between \(G_{K}\) and the double coset hypergroup \((G\rtimes K)//K\) (see [2, Section 8]). Another interesting class of examples of ultraspherical hypergroups, in the context of Lie groups, is due to Damek and Ricci [1]; they referred to spherical projectors as average projectors. Unlike the theory of locally compact groups, the existence of a Haar measure on hypergroups remains an open problem. However, for specific classes of hypergroups, including commutative hypergroups, discrete hypergroups, and compact hypergroups, a unique (up to a scalar multiple) Haar measure always exists [1].
It is shown in [10, Proposition 2.14] that every ultraspherical hypergroup admits a Haar measure. More precisely, if \(H\) is an ultraspherical hypergroup associated to a locally compact group \(G,\) then a left Haar measure on \(H\) is given by \[\int_{H}f(\dot{x})d\dot{x}=\int_{G}f(p(x))dx,\qquad(f\in C_{c}(H)).\] The following lemma is well known in the case of locally compact groups. **Lemma 2.2**.: _Let \(H\) be a non-discrete ultraspherical hypergroup with Haar measure \(\lambda.\) Then \(H\) possesses a compact nowhere dense subset of positive Haar measure._ Proof.: Suppose that \(H\) is second countable. In particular, \(H\) is separable. Let \(\{t_{n}\}\) be a countable dense subset of \(H.\) Now, for each \(n\in\mathbb{N},\) choose an open set \(V_{n}\) such that \(\dot{e}\in V_{n}\) and \(\lambda(t_{n}*V_{n})\leq\frac{1}{2^{n+1}}.\) Let \(E=\left(\underset{n\in\mathbb{N}}{\cup}t_{n}*V_{n}\right)^{c}.\) Then \(E\) satisfies the requirements of the lemma. Now, suppose that \(H\) is not second countable. Using [13, Theorem 1.1], there exists a compact subhypergroup \(K\) of \(H\) such that \(H//K\) is second countable. Now, by the preceding paragraph, there exists a subset \(\widetilde{E}\) of \(H//K\) such that \(\widetilde{E}\) is nowhere dense and has positive Haar measure. Let \(E=\mathfrak{q}^{-1}(\widetilde{E}),\) where \(\mathfrak{q}\) denotes the canonical quotient map from \(H\) onto \(H//K.\) This set \(E\) satisfies the requirements of the lemma. Let \(G\) be a locally compact group. The coset ring, denoted \(\mathscr{R}(G),\) is defined as the Boolean ring generated by all cosets of subgroups of \(G.\) The closed coset ring, denoted \(\mathscr{R}_{c}(G),\) is defined to be \[\mathscr{R}_{c}(G)=\{E\in\mathscr{R}(G):\text{$E$ is closed in $G$}\}.\] Let \(H\) be an ultraspherical hypergroup associated with \(G\). We will write \[\mathscr{R}(H)=\{\dot{E}\subseteq H:p^{-1}(\dot{E})\in\mathscr{R}(G)\},\] and \[\mathscr{R}_{c}(H)=\{\dot{E}\subseteq H:p^{-1}(\dot{E})\in\mathscr{R}_{c}(G)\}.\] ### Fourier Algebra Let \(B(H)\) denote the Fourier-steiltjes algebra on \(H\). Then \[B(H)=B_{\pi}(G)=\{u\in B(G):u\;is\;\pi\text{-}radial\}.\] Let \(C^{*}(H)\) denote the hypergroup C*-algebra associated with \(H.\) Then \(B(H)=C^{*}(H)^{*}.\) For each \(u\in B(H),\) we denote by \(\tilde{u}\) the element in \(B_{\pi}(G)\) associated with \(u;\) that is, \(u(\dot{x})=\tilde{u}(x)\) for all \(x\in G\) and \(\|u\|_{B(H)}=\|\tilde{u}\|_{B(G)}.\) Let \(A(H)\) denote the Fourier algebra corresponding to the ultraspherical hypergroup \(H\). Recall that the Fourier algebra \(A(H)\) is semisimple, regular and Tauberian [14, Theorem 3.13]. As in the group case, let \(\lambda\) denote the left regular representation of \(H\) on \(L^{2}(H)\) given by \[\lambda(\dot{x})(f)(\dot{y})=f(\ddot{x}*\dot{y})\quad(\dot{x},\dot{y}\in H,f \in L^{2}(H)).\] This can be extended to \(L^{1}(H)\) by \(\lambda(f)(g)=f*g\) for all \(f\in L^{1}(H)\) and \(g\in L^{2}(H)\). Let \(C^{*}_{\lambda}(H)\) denote the completion of \(\lambda(L^{1}(H))\) in \(B(L^{2}(H))\) which is called the reduced \(C^{*}\)-algebra of \(H\). The von Neumann algebra generated by \(\{\lambda(\dot{x}):\dot{x}\in H\}\) is called the von Neumann algebra of \(H\), and is denoted by \(VN(H)\). Note that \(VN(H)\) is isometrically isomorphic to the dual of \(A(H)\) Moreover, \(A(H)\) can be considered as an ideal of \(B_{\lambda}(H)\), where \(B_{\lambda}(H)\) is the dual of \(C^{*}_{\lambda}(H)\). 
As an immediate consequence of [16, Theorem 4.2] and [15, Lemma 3.7] we obtain the following lemma. **Lemma 2.3**.: _Suppose that \(H\) is an ultraspherical hypergroup associated to an amenable locally compact group \(G\). Then_ 1. \(C^{*}_{\lambda}(H)=C^{*}(H)\) _and_ \(B_{\lambda}(H)=B(H)\)_._ 2. _The Fourier algebra_ \(A(H)\) _admits a bounded approximate identity. Let_ \((e_{\alpha})\) _be a bounded approximate identity of_ \(A(H)\)_. Then, after passing to a subnet if necessary,_ \(e_{\alpha}\to 1\) _in the weak_\({}^{*}\)_-topology_ \(\sigma(B_{\lambda}(H),C^{*}_{\lambda}(H))\)_._ **Lemma 2.4**.: _Let \(H\) be an ultraspherical hypergroup. Then \(C^{*}(H)\) is a Banach \(B(H)\)-bimodule with the module action given by_ \[\langle\varphi\cdot\lambda(f),\psi\rangle=\langle\lambda(f),\varphi\psi \rangle,\qquad(\varphi,\psi\in B(H),f\in L^{1}(G)).\] _In particular, pointwise multiplication in \(B(H)\) is weak\({}^{*}\)-separately continuous._ Proof.: For each \(\varphi\in B(H)\) and \(f\in L^{1}(H)\), we have \[\langle\varphi\cdot\lambda(f),\psi\rangle=\langle\lambda(f),\varphi\psi \rangle=\int_{H}f(\dot{x})\varphi(\dot{x})\psi(\dot{x})d\dot{x}=\langle\lambda (\varphi f),\psi\rangle.\] Thus \(\varphi\cdot\lambda(f)\in C^{*}(H).\) Now, the conclusion follows from the density of \(L^{1}(H)\) inside \(C^{*}(H)\) ### Commutative Banach algebras Let \(\mathcal{A}\) be a regular, semisimple, commutative Banach algebra with the Gelfand structure space \(\Delta(\mathcal{A})\). For a closed ideal \(I\) of \(\mathcal{A}\), the zero set of \(I\), denoted \(Z(I)\), is a closed subset of \(\Delta(\mathcal{A})\) given by \[Z(I)=\{x\in\Delta(\mathcal{A}):\widehat{a}(x)=0\text{ for all }a\in I\}.\] Associated to each closed subset \(E\) of \(\Delta(\mathcal{A})\), we define the following distinguished ideals with zero set equal to \(E\): \[k_{\mathcal{A}}(E) =\{a\in\mathcal{A}:\widehat{a}(x)=0\text{ for all }x\in E\},\] \[j_{\mathcal{A}}(E) =\{a\in\mathcal{A}:\widehat{a}\text{ has compact support disjoint from }E\},\] \[J_{\mathcal{A}}(E) =\overline{j_{\mathcal{A}}(E)}.\] Note that, for each ideal \(I\) of \(\mathcal{A}\) with zero set \(E\), we have \(j_{\mathcal{A}}(E)\subseteq I\subseteq k_{\mathcal{A}}(E)\)(cf. Kaniuth [14, Theorem 5.1.6]). Recall that a closed subset \(E\) of \(\Delta(\mathcal{A})\) is a _set of synthesis_ or _spectral set_ if \(k_{\mathcal{A}}(E)=J_{\mathcal{A}}(E)\); that is, \(k_{\mathcal{A}}(E)\) is the only closed ideal with zero set equal to \(E\). **Definition 2.5**.: An element \(a\) of a Banach algebra \(\mathcal{A}\) is said to be _power bounded_ if \(\sup_{n\in\mathbb{N}}\|a^{n}\|<\infty\). The Banach algebra \(\mathcal{A}\) is said to have the _power boundedness property_ if any \(a\in A\) with \(r(a)=\inf_{n\in\mathbb{N}}\|a^{n}\|^{\frac{1}{n}}\leq 1\), is power bounded. It is clear that if \(a\in\mathcal{A}\) is power bounded, then \(r(a)\leq 1\). ## 3. Power bounded elements and Coset ring In this section, we study two sets (defined below) associated to an element of \(B(H).\) This section is motivated by some results in [10, 10, 11]. ### Coset ring and power bounded elements We shall begin by defining those two sets. **Definition 3.1**.: Let \(u\in B(H)\). 
Let \(\dot{E}_{u}\) and \(\dot{F}_{u}\) denote the sets in \(H\) associated to \(u\) given by \[\dot{E}_{u}=\{\dot{x}\in H:|u(\dot{x})|=1\}\qquad\text{and}\qquad\dot{F}_{u}= \{\dot{x}\in H:u(\dot{x})=1\}.\] Also, associated to \(u\) are the sets \(E_{u}\) and \(F_{u}\) in \(G\) defined by \[E_{u}=p^{-1}(\dot{E}_{u})\qquad\text{and}\qquad F_{u}=p^{-1}(\dot{F}_{u}).\] **Remark 3.2**.: Let \(u\in B(H)\) and \(\tilde{u}\) be the element in \(B(G)\) associated with \(u\). Then \[\{x\in G:\tilde{u}(x)=1\}=p^{-1}(\dot{F}_{u})=F_{u},\] and \[\{x\in G:|\tilde{u}(x)|=1\}=p^{-1}(\dot{E}_{u})=E_{u}.\] The following theorem generalizes [10, Theorem 4.1] to the context of ultraspherical hypergroups. Also, the proof of this is a direct consequence of Remark 3.2 and [10, Theorem 4.1]. **Lemma 3.3**.: _Let \(H\) be any locally compact ultraspherical hypergroup and let \(u\in B(H)\) be power bounded. Then the sets \(\dot{E}_{u}\) and \(\dot{F}_{u}\) are in \(\mathscr{R}_{c}(H)\)._ The proof of our next result follows the same lines as in [11, Proposition 3.6] and hence we omit it. **Lemma 3.4**.: _Let \(K\) be a closed subhypergroup of an ultraspherical hypergroup \(H\). Then for any \(u\in B(K)\), there exists \(v\in B(H)\) such that \(v|_{K}=u\) and \(\|v\|_{B(H)}=\|u\|_{B(K)}\)._ Our next lemma says that we cannot replace in Lemma 3.3, the weaker condition that \(\|u\|=1.\) As the proof of this follows similar to [10, Lemma 4.3], we omit it. **Lemma 3.5**.: _Let \(\dot{E}\) be a closed subset of \(H\) such that \(H\setminus\dot{E}\) is \(\sigma\)-compact. Then there exists \(u\in B(H)\) with \(\|u\|_{\infty}=1\) and \(\dot{F}_{u}=\dot{E}\)._ Here is an analogue of the Host's idempotent theorem. **Lemma 3.6**.: _Let \(u\) be a complex-valued functions on \(H\). Then \(u\) is an idempotent of \(B(H)\) if and only if \(u=\mathbf{1}_{\dot{E}}\) for some \(\dot{E}\in\mathscr{R}_{c}(H)\)._ Proof.: Let \(u\) be an idempotent element of \(B(H).\) Then \(\tilde{u}\) is an idempotent element of \(B(G)\) and hence by Host's idempotent theorem [10], there exists \(E\in\mathscr{R}_{c}(G)\) such that \(\tilde{u}=\mathbf{1}_{E}.\) Now, it is plain to show that \(E=p^{-1}(\dot{E}_{u})\). Hence \(\dot{E}_{u}\in\mathscr{R}_{c}(H)\) and \(u=\mathbf{1}_{\dot{E}_{u}}\). For the converse, let \(u=\mathbf{1}_{\dot{E}}\) for some \(\dot{E}\in\mathscr{R}_{c}(H)\). Then, by Host's idempotent theorem [10], \(v=\mathbf{1}_{p^{-1}(\dot{E})}\) is an idempotent in \(B(G)\). It is clear that \(v\) is constant on each orbit \(\mathcal{O}_{x}\) and hence \(v\in B_{\pi}(G)\). Now, since for each \(x\in G\) we have \(v(x)=u(\dot{x})\), thus \(u\) is an idempotent in \(B(H)\). **Remark 3.7**.: Since elements of \(B(H)\) are continuous functions, it is clear that an idempotent of \(B(H)\) has to be of the form \(\mathbf{1}_{\dot{F}}\) where \(\dot{F}\) is a clopen subset of \(H.\) As an immediate consequence of Lemma 3.6 and Lemma 3.3, we obtain the following corollary. **Corollary 3.8**.: _Let \(H\) be an ultraspherical hypergroup. For any subset \(\dot{E}\) of \(H\), the following are equivalent._ 1. \(\dot{E}\in\mathscr{R}_{c}(H)\)_;_ 2. \(\dot{E}=\dot{F}_{u}\) _for some idempotent_ \(u\in B(H)\)_;_ 3. \(\dot{E}=\dot{F}_{u}\) _for some power bounded_ \(u\in B(H)\)_._ In the next corollary, we show that the interior and boundary of a set from the coset ring also belongs to the coset ring. 
**Corollary 3.9**.: _Let \(H\) be an ultraspherical hypergroup associated to an amenable locally compact group \(G\) and let \(\dot{E}\in\mathscr{R}_{c}(H)\)._ 1. _The interior_ \(\dot{E}^{\circ}\) _and the boundary_ \(\partial\dot{E}\) _of_ \(\dot{E}\) _both belong to_ \(\mathscr{R}_{c}(H).\)__ 2. \(\dot{E}^{\circ}=\dot{F}_{u}\) _for some power bounded_ \(u\in B(H)\)_._ 3. \(\partial\dot{E}=\ddot{F}_{u}\) _for some power bounded_ \(u\in B(H)\)_._ Proof.: By Corollary 3.8, we only have to show (i). Since the quotient map \(p:G\to H\) is open, we have \(p^{-1}(\dot{E}^{\circ})=p^{-1}(\dot{E})^{\circ}\) and \(p^{-1}(\partial\dot{E})=\partial p^{-1}(\dot{E})\). Now since \(p^{-1}(\dot{E})\in\mathscr{R}_{c}(G)\), by [11, Lemma 1.1], \(p^{-1}(\dot{E})^{\circ}\) and \(\partial p^{-1}(\dot{E})\) both belong to \(\mathscr{R}_{c}(G).\) In other words, the sets \(\dot{E}^{\circ}\) and \(\partial\dot{E}\) are in \(\mathscr{R}_{c}(H)\) ### On the sequence \(\{u^{n}\}_{n\in\mathbb{N}}\) We now study the sequence \(\{u^{n}\}_{n\in\mathbb{N}}\), where \(u\) is a power bounded element of \(B(H)\). Our first lemma gives the properties of the weak*-limit of the sequence \(\{u^{n}\}_{n\in\mathbb{N}}\), whenever it exists. **Lemma 3.10**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). Let \(u\in B(H)\) and suppose that \(\theta=\text{weak}^{*}\text{-}\lim_{n}\,u^{n}\) exists. Then_ 1. \(\theta\) _is an idempotent._ 2. \(\theta\) _satisfies_ \(\theta u=\theta\) _and_ \(\theta=\mathbf{1}_{\dot{F}_{u}^{\circ}}\)_._ 3. _the set_ \(\dot{F}_{u}^{\circ}\) _is closed in_ \(G\) _and_ \(\lambda_{H}(\dot{F}_{u}\setminus\dot{F}_{u}^{\circ})=0\)_._ Proof.: By Lemma 2.4, multiplication in \(B(H)\) is separately weak\({}^{*}\) continuous and hence for each \(\varrho\in C^{*}(H)\), we have \[\langle u\theta,\varrho\rangle=\lim_{n}\langle u^{n+1},\varrho\rangle= \langle\theta,\varrho\rangle,\] i.e., \(u\theta=\theta.\) Further, \[\langle\theta^{2},\varrho\rangle=\lim_{n}\lim_{m}\langle u^{n}u^{m},\varrho \rangle=\lim_{n}\lim_{m}\langle u^{n+m},\varrho\rangle=\langle\theta,\varrho\rangle,\] showing that \(\theta\) is an idempotent. By Lemma 3.6 and Remark 3.7, there exists a clopen set \(\dot{E}\in\mathscr{R}(H)\) such that \(\theta=\mathbf{1}_{\dot{E}}.\) The remaining assertions follows as in [13, Lemma 3.1] Our next result gives conditions under which the weak* limit used in Lemma 3.10 holds. As the proof of this follows exactly as in the [13, Lemma 3.2], we omit the proof. **Lemma 3.11**.: _Let \(u\) be a power bounded element of \(B(H)\) and suppose that \(\lim_{n\to\infty}\ \|u^{n+1}-u^{n}\|=0\). Then \(\theta=\text{weak}^{*}\text{-}\lim_{n}\,u^{n}\) exists._ Here is a characterization of the power boundedness of an element \(u\in B(H)\) in terms of the set \(\dot{F}_{u}.\) For the corresponding result on locally compact groups, see [13, Theorem 3.3]. **Theorem 3.12**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\) and let \(u\in B(H)\) such that \(\lim_{n\to\infty}\ \|u^{n+1}-u^{n}\|=0\). Then the following are equivalent._ 1. \(u\) _is power bounded._ 2. _The set_ \(\dot{F}_{u}^{\circ}\) _is closed in_ \(H\)_, belongs to the coset ring_ \(\mathscr{R}(H)\) _and satisfies weak_\({}^{*}\text{-}\lim_{n}\,\mathbf{1}_{H\setminus\dot{F}_{u}^{\circ}}u^{n}=0\)_._ Proof.: In view of Lemma 2.4, Lemma 3.10 and Lemma 3.11, the proof of this follows exactly as in [13, Theorem 3.3]. 
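As a simple illustration of Theorem 3.12, note that if \(u=\mathbf{1}_{\dot{E}}\) is an idempotent of \(B(H)\), then \(\dot{E}\) is clopen and belongs to \(\mathscr{R}_{c}(H)\) by Lemma 3.6, \(u^{n+1}-u^{n}=0\) for all \(n\), and \(\dot{F}_{u}=\dot{E}=\dot{F}_{u}^{\circ}\). Hence \(\dot{F}_{u}^{\circ}\) is closed, lies in \(\mathscr{R}(H)\), and \(\mathbf{1}_{H\setminus\dot{F}_{u}^{\circ}}u^{n}=\mathbf{1}_{H\setminus\dot{E}}\mathbf{1}_{\dot{E}}=0\), so both conditions of the theorem hold trivially in this case.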
The next lemma is a special case of [10, Corollary 1.3] and hence we omit the proof. **Lemma 3.13**.: _Let \(H\) be an ultraspherical hypergroup. Then for any power bounded element \(u\in B(H)\), the element \(\frac{1+u}{2}\) is power bounded and_ \[\lim_{n\to\infty}\left\|\left(\frac{1+u}{2}\right)^{n+1}-\left(\frac{1+u}{2} \right)^{n}\right\|=0.\] Combining Lemmas 3.11 and 3.13 and the fact that \(F_{u}=F_{\frac{1+u}{2}}\) one obtains the following description of the power bounded elements of \(B(H)\). **Corollary 3.14**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). Then, for any power bounded \(u\in B(H)\), \(\theta=\text{weak}^{*}\text{-}\lim_{n}(\frac{1+u}{2})^{n}\) exists and \(\theta=\mathbf{1}_{\dot{F}_{u}^{\circ}}\)._ The following is a simple observation. **Lemma 3.15**.: _Let \(H\) be an ultraspherical hypergroup associated a locally compact group \(G\). If \(u\in B(H)\) is power bounded, then so is \(|u|\)._ Proof.: Let us first note that \(\|u\|=\|\bar{u}\|\) for all \(u\in B(H)\), by [11, Remark 2.9]. Let \(u\in B(H)\) be power bounded with \(\gamma=\sup_{n}\|u^{n}\|.\) Then for each \(n\in\mathbb{N}\), we have \[\||u|^{2n}\|=\|u^{n}\bar{u}^{n}\|\leq\|u^{n}\|\cdot\|\bar{u}^{n}\|=\|u^{n}\|^{ 2}\leq\gamma^{2}\] and \[\||u|^{2n+1}\|\leq\||u|\|\cdot\||u|^{2n}\|\leq\gamma^{2}\||u|\|.\] Thus, \(|u|\) is a power bounded element of \(B(H)\). The next result is an analogue of [10, Proposition 3.5]. **Corollary 3.16**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). Then, the set \(\dot{E}_{u}^{\circ}\) is closed in \(H\) and \(\lambda(\dot{E}_{u}\setminus\dot{E}_{u}^{\circ})=0\)._ Proof.: This follows from Lemma 3.10, Lemma 3.15 and the fact that \(\dot{E}_{u}=\dot{F}_{|u|}\). As an immediate consequence of Corollary 3.14 we obtain the following lemma. **Lemma 3.17**.: _Let \(u\in B(H)\) be power bounded and let \(\theta=\text{weak}^{*}\text{-}\lim_{n\to\infty}(\frac{1+u}{2})^{n}\). Then we have_ 1. \(\langle\theta,f\rangle=\int_{\dot{F}_{u}}f(\dot{x})d\dot{x}\) _for all_ \(f\in L^{1}(H)\)_;_ 2. \(\theta=0\) _if and only if_ \(\lambda_{H}(\dot{F}_{u})=0\) **Lemma 3.18**.: _Let \(u\in B(H)\) be power bounded. Then the following assertions hold:_ 1. _If_ \(\dot{E}_{u}=\dot{F}_{u}\)_, then_ \(\theta=\text{weak}^{*}\text{-}\text{lim}_{n\to\infty}u^{n}\) _exists and_ \(\theta=\mathbf{1}_{\dot{F}_{u}^{\circ}}\)_._ 2. _If_ \(H\) _is discrete, then_ \(\theta=\text{weak}^{*}\text{-}\text{lim}_{n\to\infty}u^{n}\) _exists if and only if_ \(\dot{E}_{u}=\dot{F}_{u}\)_._ 3. _weak_\({}^{*}\text{-}\text{lim}_{n\to\infty}|u|^{n}=0\) _if and only if_ \(\dot{E}_{u}^{\circ}=\emptyset\)_._ 4. _If weak_\({}^{*}\text{-}\text{lim}_{n\to\infty}|u|^{n}=0\)_, then weak_\({}^{*}\text{-}\text{lim}_{n\to\infty}u^{n}=0\)_._ Proof.: (i). Assume that \(\dot{E}_{u}=\dot{F}_{u}\). Then, we can show that the sequence \(\{u^{n}\}_{n\in\mathbb{N}}\) converges pointwise to \(\mathbf{1}_{\dot{F}_{u}}\) and also by Corollary 3.16, \(\lambda(\dot{F}_{u}\setminus\dot{F}_{u}^{\circ})=0\). Using these and the fact that \(\|u\|_{\infty}\leq 1\), we have \[\lim_{n}u^{n}(\dot{x})f(\dot{x})=\mathbf{1}_{\dot{F}_{u}^{\circ}}(\dot{x})f( \dot{x})\qquad\text{and}\qquad|u^{n}f|\leq|f|,\] almost everywhere on \(H\) and for all \(f\in L^{1}(H)\). 
Now, the Lebesgue's dominated convergence theorem yields that \[\langle u^{n},\lambda(f)\rangle=\int_{H}u^{n}fd\lambda\to\int_{H}\mathbf{1}_{ \dot{F}_{u}^{\circ}}fd\lambda=\langle\mathbf{1}_{\dot{F}_{u}^{\circ}},\lambda (f)\rangle\] as \(n\to\infty\). Here, the last equality is a consequence of the fact that if \(u\in B(H)\) is power bounded, then \(\mathbf{1}_{\dot{F}_{u}^{\circ}}\in B(H)\). Now, the equality \(\theta=\mathbf{1}_{\dot{F}_{u}^{\circ}}\) follows from a simple approximation argument. (ii). If \(\theta=\text{weak}^{*}\text{-}\text{lim}_{n\to\infty}u^{n}\) exists, then by Lemma 3.10, \(\theta=\mathbf{1}_{\dot{F}_{u}}\). Since \(H\) is discrete, weak\({}^{*}\)-topology on \(B(H)\) is stronger than topology of pointwise convergence and thus we have \[\lim_{n\to\infty}u^{n}(\dot{x})=\mathbf{1}_{\dot{F}_{u}}(\dot{x}),\ \dot{x}\in H.\] Since \(\dot{F}_{u}\subseteq\dot{E}_{u}\), we only need to show the reversed inclusion. Towards a contradiction, suppose that \(\dot{x}\in\dot{E}_{u}\setminus\dot{F}_{u}\). Then \(|u^{n}(\dot{x})|=1\) for all \(n\in\mathbb{N}\) and hence \[|u^{n}(\dot{x})-\mathbf{1}_{\dot{F}_{u}}(\dot{x})|=|u^{n}(\dot{x})|\to 1\neq 0,\] as \(n\to\infty\), a contradiction. The converse follows from (i). (iii). If \(\dot{E}_{u}^{\circ}=\emptyset\), then Corollary 3.16 implies that \(\lambda_{H}(\dot{E})=0\). Since \(u\in B(H)\) is power bounded, we have \(|u(\dot{x})|_{\infty}<1\) for all \(\dot{x}\in H\setminus\dot{E}\). Thus \[\lim_{n\to\infty}|u^{n}(\dot{x})|=0,\qquad\text{(a.e. on $H$)}.\] The rest of the proof runs as in (i). Assume that weak\({}^{*}\text{-}\text{lim}_{n\to\infty}|u|^{n}=0\). Then we have \[\lambda_{H}(\dot{E}_{u}^{\circ})\leq\lambda_{H}(\dot{E}_{u})=\lambda_{H}(\dot{ F}_{|u|})=0,\] the last equality being a consequence of Lemma 3.17. Hence (iii) holds. (iv). The same reasoning as in the first part of the proof of (iii) applies to here. The assertion of the following corollary is an immediate consequence of Lemma 3.18 and the fact \(\dot{E}_{uv}=\dot{E}_{u}\cap\dot{E}_{v}\) for all power bounded elements \(u,v\in B(H).\) **Corollary 3.19**.: _Let \(u,v\) be elements of \(B(H)\) such that weak\({}^{*}\)-\(\lim_{n\to\infty}\lvert u\rvert^{n}=0\) and \(v\) is power bounded. Then weak\({}^{*}\)-\(\lim_{n\to\infty}(uv)^{n}=0.\)_ Here is the final result of this section. This is about powers of power bounded elements and an element from the ideal \(J_{A(H)}(C),\) where \(C\) is a closed subset of \(H.\) As the proof of this follows exactly as in [11, Proposition 4.1], we shall omit the proof. **Theorem 3.20**.: _Let \(u\) be a power bounded element of \(B(H)\) and let \(C\) be a closed subset of \(H\). Then the following are equivalent:_ 1. \(\lim_{n\to\infty}\left\|\frac{1}{n}\sum_{k=0}^{n-1}\lvert u\rvert^{2k}v \right\|_{A(H)}=0\) _, for all_ \(v\in J_{A(H)}(C)\)_._ 2. \(\lim_{n\to\infty}\|u^{n}v\|_{A(H)}=0\)_, for all_ \(v\in J_{A(H)}(C)\)_._ ### Cesaro boundedness Here we deal with Cesaro boundedness. This concept was introduced by Mustafayev [11] in order to study certain non-abelian analogues of the ergodic theorem. We shall derive relations to Cesaro bounded elements with the sets \(E_{u}\) and \(F_{u}\) as done in the previous sections. We now begin with the definition of Cesaro boundedness. **Definition 3.21**.: Let \(\mathcal{A}\) be a complex Banach algebra with the unit element \(e\). 
An element \(a\in\mathcal{A}\) is said to be Cesaro bounded if \[\sup_{n\in\mathbb{N}}\left\|\frac{1}{n}\sum_{k=0}^{n-1}a^{k}\right\|<\infty.\] The following example shows that Cesaro boundedness does not imply power boundedness. **Remark 3.22**.: The Assani matrix \(T=\begin{pmatrix}-1&2\\ 0&-1\end{pmatrix}\) is Cesaro bounded, but not power bounded. The following is an analogue of Lemma 3.3 **Proposition 3.23**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). If \(u\in B(H)\) is Cesaro bounded, then \(\dot{F}_{u}\in\mathscr{R}_{c}(H)\)._ Proof.: Since \(u\in B(H)\) is Cesaro bounded, the corresponding \(\tilde{u}\in B(G)\) is Cesaro bounded. Thus, by [11, Proposition 2.4], \(p^{-1}(\dot{F}_{u})=F_{\tilde{u}}\in\mathscr{R}_{c}(G)\), which is equivalent to saying that \(\dot{F}_{u}\in\mathscr{R}_{c}(H)\). The following is the analogue of Lemma 3.10. **Lemma 3.24**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). Let \(u\in B(H)\) and suppose that_ \[\theta=\text{weak}^{*}\text{-}\underset{n}{\lim}\frac{1}{n}\sum_{k=0}^{n-1}u^{k}\] _exists. Then_ 1. \(\theta\) _is an idempotent and satisfies_ \(\theta u=\theta\)_. More precisely,_ \(\theta=\mathbf{1}_{\dot{F}_{u}^{\circ}}\)_._ 2. _The set_ \(F_{u}^{\circ}\) _is closed in_ \(H\) _and_ \(\lambda(F_{u}\setminus F_{u}^{\circ})=0\)_._ Proof.: First note that \[\theta u =\text{weak}^{*}\text{-}\underset{n}{\lim}\frac{1}{n}\sum_{k=1}^ {n}u^{k}\] \[=\text{weak}^{*}\text{-}\underset{n}{\lim}\left[\frac{1}{n}\sum_ {k=0}^{n}u^{k}-\frac{1}{n}\cdot 1\right]\] \[=\text{weak}^{*}\text{-}\underset{n}{\lim}\left[\frac{n+1}{n}( \frac{1}{n+1}\sum_{k=0}^{n}u^{k})-\frac{1}{n}\cdot 1\right]\] \[=\text{weak}^{*}\text{-}\underset{n}{\lim}\frac{1}{n+1}\sum_{k =0}^{n}u^{k}=\theta.\] By Lemma 2.4 multiplication in \(B(H)\) is separately weak* continuous and therefore \(\theta=\theta^{2}\), i.e., \(\theta\) is an idempotent in \(B(H).\) So, by Lemma 3.6 and Remark 3.7 there is clopen subset \(\dot{E}\) of \(H\) in \(\mathscr{R}_{c}(H)\) such that \(\theta=\mathbf{1}_{\dot{E}}.\) Now, the remaining can be proved by using the same arguments as those employed in the proof of Lemma 3.10. Our next result is an analogue of Corollary 3.14. **Proposition 3.25**.: _Let \(H\) be an ultraspherical hypergroup associated to a locally compact group \(G\). Let \(u\) be a Cesaro bounded element of \(B(H)\). Then_ \[\mathbf{1}_{\dot{F}_{u}^{\circ}}=\text{weak}^{*}\text{-}\underset{n}{\lim} \frac{1}{n}\sum_{k=0}^{n-1}u^{k}.\] Proof.: Let \(u\) be a Cesaro bounded element of \(B(H)\). Then for each \(f\in L^{1}(H)\), the sequence \((\frac{1}{n}\sum_{k=0}^{n-1}u^{k})_{n}\) converges pointwise to \(\mathbf{1}_{\dot{F}_{u}}f\) as \(n\to\infty\) and is also dominated \(|f|.\) Hence by the dominated convergence theorem \[\langle\psi_{n},\lambda(f)\rangle =\int_{H}\psi_{n}(\dot{x})f(\dot{x})d\dot{x}\to\int_{H}\mathbf{1}_{ \mathring{F}_{u}}(\dot{x})f(\dot{x})d\dot{x}\] \[=\int_{H}\mathbf{1}_{\mathring{F}_{u}^{\circ}}(\dot{x})f(\dot{x}) d\dot{x}+\int_{H}\mathbf{1}_{\partial\mathring{F}_{u}}(\dot{x})f(\dot{x})d\dot{x}\] \[=\int_{H}\mathbf{1}_{\mathring{F}_{u}^{\circ}}(\dot{x})f(\dot{x}) d\dot{x}\qquad(\text{since }\lambda(F_{u}\setminus F_{u}^{\circ})=0)\] \[=\langle\mathbf{1}_{\mathring{F}_{u}^{\circ}},\lambda(f)\rangle.\] Now, using the density of \(\lambda(L^{1}(H))\) in \(C^{*}(H)\) and the boundedness of the sequence \((\frac{1}{n}\sum_{k=0}^{n-1}u^{k})_{n},\) the conclusion follows. 
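Returning briefly to Remark 3.22, the claim made there can be checked by a direct computation: one has
\[T^{n}\;=\;(-1)^{n}\begin{pmatrix}1&-2n\\ 0&1\end{pmatrix},\qquad n\in\mathbb{N},\]
so \(\|T^{n}\|\to\infty\) and \(T\) is not power bounded, whereas the Cesaro means \(\frac{1}{n}\sum_{k=0}^{n-1}T^{k}\) have diagonal entries in \(\{0,\frac{1}{n}\}\) and an off-diagonal entry of modulus at most \(1\), so they remain bounded and \(T\) is Cesaro bounded.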
**Proposition 3.26**.: _Let \(H\) be an ultraspherical hypergroup and let \(u\) be a Cesaro bounded element of \(B(H)\) such that \(\lim_{n}\frac{1}{n}\|u^{n}v\|_{A(H)}=0\) for all \(v\in A(H)\)._ _Consider the following statements:_ 1. \(\theta_{v}=\lim_{n}\frac{1}{n}\sum_{k=0}^{n-1}u^{k}v\) _exists in_ \(A(H)\)_-norm topology for all_ \(v\in A(H)\)_._ 2. \(\mathring{F}_{u}\) _is an open subset of_ \(H.\)__ 3. \(\theta_{v}=\mathbf{1}_{\mathring{F}_{u}}v\) _and_ \(\|\mathbf{1}_{\mathring{F}_{u}}v\|_{A(H)}=dist(\overline{(1-u)A(H)},v)\) _for all_ \(v\in A(H)\)_._ _Then the following assertions hold: \((i)\Longleftrightarrow(ii)\Longrightarrow(iii).\) Furthermore, if u is Cesaro contractive, then (iii) implies the following:_ 1. \(\|\mathbf{1}_{\mathring{F}_{u}}v\|_{A(H)}=dist(\overline{(1-u)A(H)},v)\) _for all_ \(v\in A(H)\)_._ Proof.: The proof of this is exactly the same as in [10, Theorem 2.8] **Proposition 3.27**.: _Let \(H\) be an ultraspherical hypergroup such that the underlying locally compact group \(G\) is amenable. Then for any Cesaro bounded element \(u\in B(H)\), we have the following:_ 1. _The sets_ \(\overline{(1-u)A(H)}\) _and_ \(I(\mathring{F}_{u})\) _are equal._ 2. _The ideal_ \(\overline{(1-u)A(H)}\) _has a bounded approximate identity._ 3. _Let_ \((e_{\alpha})\) _be a bounded approximate identity of_ \(\overline{(1-u)A(H)}\)_. Then, after passing to a subnet if necessary,_ \(e_{\alpha}\to\mathbf{1}_{H\setminus\mathring{F}_{u}^{\circ}}\) _in the weak_\({}^{*}\)_-topology_ \(\sigma(B_{\lambda}(H),C_{\lambda}^{*}(H))\)_._ 4. _There exists an idempotent_ \(\theta\in B(H)\) _such that_ \(\overline{(1-u)A(H)}^{weak^{*}}=\theta B(H)\)_. More precisely,_ \(\theta=\mathbf{1}_{H\setminus\mathring{F}_{u}^{\circ}}\)_._ _Moreover, If \(\mathring{F}_{u}\) is an open subset of \(H\), then_ 1. \(\overline{(1-u)A(H)}=I(\mathring{F}_{u})=\mathbf{1}_{H\setminus\mathring{F}_{u }}A(H)\)_._ Proof.: (i). Suppose that \(u\in B(H)\) is Cesaro bounded. Then, by Proposition 3.23 and Lemma 4.11, \(\mathring{F}_{u}\) is a set of synthesis for \(A(H)\). Now, since \(Z(\overline{(1-u)A(H)})=\mathring{F}_{u}\), the equality of \(\overline{(1-u)A(H)}\) and \(I(\mathring{F}_{u})\) holds. (ii). This follows from (i) and Lemma 4.11. (iii). Let \((e_{\alpha})\) be a bounded approximate identity of \(\overline{(1-u)A(H)}\). Passing to a subnet if necessary, we can assume that \(e_{\alpha}\to\theta\) in the weak\({}^{*}\)-topology \(\sigma(B_{\lambda}(H),C_{\lambda}^{*}(H))\) for some \(\theta\in B(H)\). Let \(f\in C_{c}(H)\). 
Then \[\langle e_{\alpha},\lambda(f)\rangle =\int_{H}e_{\alpha}(\dot{x})f(\dot{x})d\dot{x}\] \[=\int_{H\setminus\dot{F}_{u}}e_{\alpha}(\dot{x})f(\dot{x})d\dot{x }+\int_{\dot{F}_{u}}e_{\alpha}(\dot{x})f(\dot{x})d\dot{x}\] \[=\int_{\dot{F}\setminus\dot{F}_{u}^{\circ}}e_{\alpha}(\dot{x})f( \dot{x})d\dot{x}+\int_{H\setminus\dot{F}_{u}^{\circ}}e_{\alpha}(\dot{x})f( \dot{x})d\dot{x}\qquad(\text{since }e_{\alpha}|_{\dot{F}_{u}}=0)\] \[=\int_{H\setminus\dot{F}_{u}^{\circ}}e_{\alpha}(\dot{x})f(\dot{x })d\dot{x}\qquad(\text{since by Lemma \ref{lem:2.2}, }\lambda(\dot{F}_{u}\setminus\dot{F}_{u}^{\circ})=0)\] \[=\int_{H\setminus\dot{F}_{u}^{\circ}}e_{\alpha}(\dot{x})v(\dot{x })f(\dot{x})d\dot{x}\qquad(v\in I(\dot{F}_{u})\text{ with }v\mid_{\text{supp}(f)\cap(H \setminus\dot{F}_{u})}=1)\] \[=\int_{H}e_{\alpha}(\dot{x})v(\dot{x})\mathbf{1}_{H\setminus\dot {F}_{u}^{\circ}}(\dot{x})f(\dot{x})d\dot{x}\] \[=\langle e_{\alpha}v,\lambda(\mathbf{1}_{H\setminus\dot{F}_{u}^ {\circ}}f)\rangle\to\langle v,\lambda(\mathbf{1}_{H\setminus\dot{F}_{u}^{ \circ}}f)\rangle\] \[=\langle\mathbf{1}_{H\setminus\dot{F}_{u}^{\circ}},\lambda(f)\rangle,\] where the last equality holds because \[H\setminus\dot{F}_{u}^{\circ}=(H\setminus\dot{F}_{u})\cup(\dot{F}_{u} \setminus\dot{F}_{u}^{\circ})\quad\text{and}\quad\ \lambda(\dot{F}_{u}\setminus\dot{F}_{u}^{\circ})=0.\] Now, using the density of \(\lambda(C_{c}(H))\) in \(C^{*}(H)\) and boundedness of the net \((e_{\alpha})\), we have \[\text{weak}^{*}\text{-}\underset{\alpha}{\lim}e_{\alpha}=\mathbf{1}_{H \setminus\dot{F}_{u}^{\circ}}.\] (iv). The statement follows from (iii) and the fact that \(\mathbf{1}_{H\setminus\dot{F}_{u}^{\circ}}\) is an identity for \(\overline{(1-u)A(H)}^{weak^{*}}\) in \(B(H)\). (v). If \(\dot{F}_{u}\) is open, then, by (iv), \(\mathbf{1}_{H\setminus\dot{F}_{u}}\) is an idempotent in \(B(H)\). Thus \(\mathbf{1}_{H\setminus\dot{F}_{u}}A(H)\) is closed ideal of \(A(H)\). Let \((e_{\alpha})\) be a bounded approximate identity of \(A(H)\). Then \((\mathbf{1}_{H\setminus\dot{F}_{u}}e_{\alpha})\) is a bounded approximate identity for \(I(\dot{F}_{u})\). By (i), \(\overline{(1-u)A(H)}=I(\dot{F}_{u})\), thus for each \(v\in\overline{(1-u)A(H)}\), we have \[v=\lim_{\alpha}\mathbf{1}_{H\setminus\dot{F}_{u}}e_{\alpha}v.\] Hence, \(v\in\overline{\mathbf{1}_{H\setminus\dot{F}_{u}}A(H)}=\mathbf{1}_{H \setminus\dot{F}_{u}}A(H)\). The reverse inclusion is clear. The following proposition gives a description the set \(F_{u}\), where \(u\) is a Cesaro bounded. **Proposition 3.28**.: _Let \(H\) be an ultraspherical hypergroup. Let \(u\) be a Cesaro bounded element of \(B(H)\) satisfying \(\lim_{n}\frac{1}{n}\|u^{n}v\|_{A(H)}=0\) for all \(v\in A(H)\) and \(\dot{F}_{u}\) is an open subset of \(H\), then_ \[\overline{(1-u)A(H)}=\mathbf{1}_{H\setminus\dot{F}_{u}}A(H).\] Proof.: The proof of this is exactly same as [14, Corollary 2.10]. **Corollary 3.29**.: _Let \(H\) be an ultraspherical hypergroup associated to an amenable locally compact group \(G\). Then for any power bounded element \(u\in B(H)\), we have the following:_ 1. _The sets_ \(\overline{(1-u)A(H)}\) _and_ \(I(\dot{F}_{u})\) _are equal._ 2. _Let_ \(A_{0}(u)=\{v\in A(H):\lim_{n}\|u^{n}v\|=0\}\)_. Then_ \(A_{0}(u)\) _and_ \(I(\dot{E}_{u})\) _are equal._ 3. _The ideal_ \(\overline{(1-u)B(H)}\) _has a bounded approximate identity._ 4. 
_There exists an idempotent_ \(\theta\in B(H)\) _such that_ \(\overline{(1-u)B(H)}^{weak^{*}}=\theta B(H)\)_._ Proof.: As powerboundedness implies Cesaro boundedness, this corollary is an immediate consequence of Proposition 3.27. **Definition 3.30**.: Let \(H\) be a discrete ultraspherical hypergroup. For any subset \(\dot{E}\) of \(C_{\lambda}^{*}(H)\), we write \[C_{\lambda}^{*}(E)=\overline{\operatorname{span}\{\lambda(\dot{x}):\dot{x}\in \dot{E}\}}^{\|\cdot\|\dot{C}_{\lambda}^{*}(H)}\] The following lemma generalises Lemma 4.3 of [13] to the context of ultraspherical hypergroups. As the proof of this is same as the proof of [13, Lemma 4.3] we omit the proof. **Lemma 3.31**.: _Let \(H\) be a discrete ultraspherical hypergroup. If \(E\in\mathscr{R}(H)\), then \(\mathbf{1}_{\dot{E}}\cdot C_{\lambda}^{*}(H)=C_{\lambda}^{*}(E)\)._ **Corollary 3.32**.: _Let \(H\) be a discrete ultraspherical hypergroup, \(\dot{E}\in\mathscr{R}_{c}(H)\) and \(u\in B_{\lambda}(H)\). Then the following are equivalent._ 1. \(u^{n}\mathbf{1}_{\dot{E}}\to 0\) _in weak_\({}^{*}\) _topology of_ \(B_{\lambda}(H)\)_._ 2. _For each_ \(T\in C_{\lambda}^{*}(\dot{E}),\lim_{n\to\infty}\langle u^{n},T\rangle=0\)_._ ## 4. Power boundedness of \(A(h)\) and \(B(h)\) In this section, we study the power boundedness of the Fourier algebra \(A(H)\) and Fourier-Stieltjes algebra \(B(H)\). ### Power bounded property of \(A(h)\) We shall begin with a simple lemma. **Lemma 4.1**.: _Let \(U\) be an open subset of \(H.\) Then there exists \(u\in A(H)\) and an open set \(V\) of \(H\) such that the following holds:_ 1. \(0\leq u\leq 1\)_._ 2. \(\|u\|_{A(H)}=1=u(\dot{e})\)_._ 3. \(\text{supp}(u)\subset U\)_._ 4. \(u(\dot{x})>0\) _for all_ \(\dot{x}\in V\)_._ Proof.: There exists a symmetric, relatively compact neighbourhood \(V_{1}\) of \(\dot{e}\) in \(H\) such that \(V_{1}^{2}\subset U.\) Let \(u=\frac{1}{\lambda(V_{1})}\mathbf{1}_{V_{1}}*\mathbf{1}_{V_{1}}\) and \(V=\{\dot{x}\in H:\ u(\dot{x})>0\}.\) Then \(u\) and \(V\) satisfy the requirements of the lemma. **Lemma 4.2**.: _Let \(H\) be a second countable ultraspherical hypergroup and let \(K\) a closed subset of \(H.\) Then there exists \(u\in A(H)\) such that_ 1. \(0\leq u\leq 1\) _and_ 2. \(K=u^{-1}(0)\)_._ Proof.: Let \(\mathcal{V}\) be a neighbourhood basis of \(\dot{e}\) in \(H\) such that for each \(V\in\mathcal{V},\) there exists \(u_{V}\in A(H)\) with the property that \(0\leq u_{V}\leq 1,\)\(\|u_{V}\|_{A(H)}=1=u(\dot{e}),\)\(\text{supp}(u_{V})\subset V\) and \(u_{V}(\dot{x})>0\) for all \(\dot{x}\in V\) (such neighbourhoods exists by the previous lemma). Let \(K\) be any closed set in \(H\) and \(W=H\setminus K.\) Since \(H\) is second countable, there are sequences \(\{\dot{a}_{n}\}_{n\in\mathbb{N}}\subset W\) and \(\{V_{n}\}_{n\in\mathbb{N}}\subset\mathcal{V}\) such that \(W=\bigcup_{n\in\dot{n}}\dot{a}_{n}*V_{n}.\) For each \(n\in\mathbb{N},\) let \(u_{n}=u_{V_{n}}\) and let \(u=\sum_{1}^{\infty}2^{-n}L_{\dot{a}_{n}}u_{n}\) where \(L_{\dot{a}}f(\dot{x}):=f(\dot{\bar{a}}*\dot{x})\). Then \(u\in A(H),\) since \(\|L_{\dot{a}_{n}}u_{n}\|\leq\|u_{n}\|=1.\) By [10, Lemma 4.1B] it is obvious that \(u(\dot{x})>0\) for all \(\dot{x}\in W\) and \(u(\dot{x})=0\) for all \(\dot{x}\in K.\) **Theorem 4.3**.: _Let \(H\) be an ultraspherical hypergroup. The Banach algebra \(A(H)\) has the power boundedness property if and only if \(H\) is discrete._ Proof.: If \(H\) is discrete, then the power boundedness of \(A(H)\) is a consequence of [11, Corollary 2.3]. 
Thus, we are left with the proof of the forward part. Towards a contradiction, assume that \(H\) is nondiscrete. Suppose first that \(H\) is second countable. In view of Lemma 2.2 and Lemma 4.2, the proof now follows exactly as in [11, Proposition 1]. Now let \(H\) be an ultraspherical hypergroup which is not second countable. Let \(U\) be a symmetric relatively compact neighbourhood of \(\dot{e}\) and let \(H^{\prime}=\bigcup\limits_{n\in\mathbb{N}}U^{n},\) where \(U^{n}=U*U*\ldots*U\) (\(n\)-times). It is clear that \(H^{\prime}\) is an open subhypergroup of \(H\) and hence closed. Since \(A(H)\) has the power bounded property, it follows from [11, Lemma 5.1] that \(A(H^{\prime})\) also has the power bounded property. Since \(H\) is non-discrete, so is \(H^{\prime}.\) Choose a sequence \(\{U_{n}\}_{n\in\mathbb{N}}\) of relatively compact neighbourhoods of \(\dot{e}\) such that \(\lambda(U_{n})\to 0.\) By [11, Theorem 1.1], there exists a compact subhypergroup \(H^{\prime\prime}\subseteq\underset{n\in\mathbb{N}}{\cap}U_{n}\) such that \(H^{\prime}//H^{\prime\prime}\) is a second countable ultraspherical hypergroup. Since \(H^{\prime}//H^{\prime\prime}\) is also an ultraspherical hypergroup based on the subgroup \(p^{-1}(H^{\prime})\) and the spherical projector \(\pi|_{C_{c}(p^{-1}(H^{\prime}))}\), it follows that \(A(H^{\prime}//H^{\prime\prime})\) can be identified isometrically inside \(A(H^{\prime}).\) As \(A(H^{\prime})\) has the power bounded property, it follows that \(A(H^{\prime}//H^{\prime\prime})\) also has the power bounded property. Therefore, by the preceding paragraph, it follows that \(H^{\prime}//H^{\prime\prime}\) is discrete, which forces us to conclude that \(H^{\prime\prime}\) is open and hence \(\lambda(H^{\prime\prime})>0\), a contradiction.
### Power bounded property for \(B(H)\)
Our main aim in this part is to show that \(B(H)\) is power bounded if and only if \(H\) is finite. Our idea is to make use of weakly almost periodic functions on \(H\) and weakly almost periodic functionals on \(L^{1}(H).\) We begin by defining these notions.
**Definition 4.4**.:
1. Let \(H\) be an arbitrary locally compact hypergroup. A function \(f\in C_{b}(H)\) is called weakly almost periodic if the left orbit \(O_{L}(f)=\{\ell_{x}f:x\in H\}\), where \(\ell_{x}f(t)=f(x*t)\) (\(t\in H\)), is relatively weakly compact in \(C_{b}(H)\). We denote the set of all weakly almost periodic functions on \(H\) by \(wap(H)\).
2. Let \(H\) be a locally compact hypergroup with a Haar measure \(\lambda\). Let \(f\in L^{\infty}(H)\) and \(\mathcal{O}_{L}(f)=\{a\cdot f:a\in L^{1}(H),\|a\|\leq 1\}\), where \(a\cdot f=\frac{1}{\Delta}\tilde{a}*f\) (\(a\in L^{1}(H)\)). The functional \(f\) is called weakly almost periodic on \(L^{1}(H)\) if \(\mathcal{O}_{L}(f)\) is \(\sigma(L^{\infty}(H),L^{\infty}(H)^{*})\)-relatively compact. We denote the set of all weakly almost periodic functionals on \(L^{1}(H)\) by \(WAP(L^{1}(H))\).
**Lemma 4.5**.: _Let \(H\) be a locally compact hypergroup with a Haar measure \(\lambda\). Let \(f\in L^{\infty}(H)\) which is in \(WAP(L^{1}(H))\). Then \(f\) is \(\lambda\)-a.e continuous on \(H\)._ Proof.: Let \(f\in WAP(L^{1}(H))\) and \((e_{\alpha})_{\alpha}\) be an approximate identity for \(L^{1}(H)\) with \(\|e_{\alpha}\|\leq 1\) for all \(\alpha\). As \(f\in WAP(L^{1}(H))\) and also as the net \((e_{\alpha})\) converges to a right identity of \(L^{\infty}(H)^{*}\), after passing to a subnet if necessary, we can assume that weak-\(\lim_{\alpha}e_{\alpha}\cdot f=f.\) Now, since \(e_{\alpha}\cdot f\in C_{b}(H)\) for all \(\alpha\), we conclude that \(f\in\overline{C(H)}^{weak}=C(H).\) **Lemma 4.6**.: _Let \(H\) be a locally compact hypergroup with a Haar measure \(\lambda\) and let \(f\in C_{b}(H)\). Then \(f\in wap(H)\) if and only if the set \(\mathbb{O}_{L}(f)=\{f\cdot\mu:\mu\in M(H),\|\mu\|\leq 1\}\) is relatively weakly compact in \(C_{b}(H)\)._ Proof.: Assume that \(\mathbb{O}_{L}(f)\) is relatively weakly compact. Since \(\mathcal{O}_{L}(f)\subset\mathbb{O}_{L}(f)\), we have \(f\in wap(H)\). For the converse, let \(\mathcal{O}_{L}(f)\) be relatively weakly compact in \(C_{b}(H)\). Then, by Krein-Smulian theorem [12, Theorem 2.8.14], the closure of \(co(O_{L}(f))\) in \(C_{b}(H)\) is weakly compact. Let \(\mu\in M(H)\) with \(\|\mu\|\leq 1\) such that \(C=supp(\mu)\) is compact. By [1, p.71, Corollary 2], there exists a net \((\mu_{\alpha})_{\alpha}\) in \(co\{\delta_{x}:x\in C\}\) such that \[\mu=\text{weak}^{*}\text{-}\underset{\alpha}{\lim}\ \mu_{\alpha}.\] Now, since the net \((f\cdot\mu_{\alpha})_{\alpha}\) is contained in \(\overline{co}(O_{L}(f))\) and as \(\overline{co}(O_{L}(f))\) is weakly compact, by passing to a subnet if necessary, we can assume that \(f\cdot\mu_{\alpha}\to h\) for some \(h\in C_{B}(H)\) in the weak topology. Thus for each \(t\in H\), we have \[h(t)=\langle\delta_{t},h\rangle =\underset{\alpha}{\lim}\langle\delta_{t},f\cdot\mu_{\alpha}\rangle\] \[=\underset{\alpha}{\lim}\langle\mu_{\alpha}*\delta_{t},f\rangle\] \[=\langle\mu*\delta_{t},f\rangle\] \[=\langle\delta_{t},f\cdot\mu\rangle.\] Hence, \(f\cdot\mu=h\in\overline{co}(O_{L}(f))\). Now, let \(\mu\) be an arbitrary measure in the unit ball of \(M(H)\). Since the set of compactly supported measures is norm-dense in \(M(H)\), there is a sequence \(\{\mu_{n}\}_{n\in\mathbb{N}}\) of compactly supported measures in \(M(H)\) such that \(\lim_{n}\|\mu_{n}-\mu\|=0\). Thus \[\|f\cdot\mu_{n}-f\cdot\mu\|_{\infty}\leq\|f\|_{\infty}\|\mu_{n}-\mu\|\to 0,\] Hence, \(f\cdot\mu\in\overline{co}(O_{L}(f))\) and therefore, \(\mathbb{O}_{L}(f)\subseteq\overline{co}(O_{L}(f))\), which implies that it is relatively weakly compact. **Proposition 4.7**.: _Let \(H\) be a locally compact hypergroup with a Haar measure \(\lambda\). Then \(WAP(L^{1}(H))=wap(H)\)._ Proof.: Assume that \(f\in WAP(L^{1}(H))\) and \((e_{\alpha})_{\alpha}\) be an approximate identity of \(L^{1}(H)\) of bound 1. Let \(\mu\) be an arbitrary measure in the unit ball of \(M(H)\). Then, since \(f\in WAP(L^{1}(H))\), After passing to a subnet if necessary, we can assume that \(f\cdot\mu*e_{\alpha}\to g\) in the \(\sigma(L^{\infty}(H),L^{\infty}(H)^{*})\)-topology for some \(g\in C_{b}(H)\). 
Now, since \(\sigma(L^{\infty}(H),L^{\infty}(H)^{*})\)-topology on \(C_{b}(H)\) is stronger than \(\sigma(C_{b}(H),M(H))\)-topology, we have \[\underset{\alpha}{\lim}\langle f\cdot\mu*e_{\alpha},\nu\rangle=\underset{ \alpha}{\lim}\langle f,\mu*e_{\alpha}*\nu\rangle=\langle f\cdot\mu,\nu\rangle\] for all \(\nu\in M(H)\), so \(g=f\cdot\mu\). Thus \(f\cdot\mu*e_{\alpha}\to f\cdot\mu\) in the \(\sigma(L^{\infty}(H),L^{\infty}(H)^{*})\)-topology. Hence, \(f\cdot\mu\in\mathcal{O}_{L}(f)\) and consequently \(\mathbb{O}_{L}(f)\subseteq\mathcal{O}_{L}(f)\) which implies that \(\mathbb{O}_{L}(f)\) is relatively weakly compact in \(C_{b}(H)\). Therefore, by Lemma 4.6, \(f\in wap(H).\) Thus, \(WAP(L^{1}(H))\subseteq wap(H).\) Now the other way inclusion is a consequence of Lemma 4.6. As an immediate consequence of [11, Proposition 4.2.8] we obtain the following lemma. **Lemma 4.8**.: _Let \(H\) be a locally compact hypergroup with a Haar measure. Then \(B(H)\subset wap(H)\)._ Here is the main result of this section. It has been known since the work of Arens [1] that there are two Banach algebra products on the bidual \(L^{1}(H)^{**},\) each extending the convolution on \(L^{1}(H),\) and \(L^{1}(H)\) is called _Arens regular_ if those products coincide. According to [12, Theorem 3.14], the Banach algebra \(L^{1}(H)\) is Arens regular if and only if \(WAP(L^{1}(H))=L^{\infty}(H).\) **Theorem 4.9**.: _Let \(H\) be an ultraspherical hypergroup. Then the following statements are equivalent:_ 1. _Every subset of_ \(H\) _belongs to_ \(\mathscr{R}(H)\)_._ 2. \(L^{1}(H)\) _is Arens regular._ 3. \(H\) _is finite._ 4. _The Fourier-Stieltjes algebra_ \(B(H)\) _has power boundedness property._ Proof.: (i) \(\Longrightarrow\) (ii). First note that, by hypothesis and Lemma 3.6, \(\mathbf{1}_{\{\dot{e}\}}\in B(H).\) This implies that \(H\) is discrete and hence \(\mathscr{R}(H)=\mathscr{R}_{c}(H)\). Now, let \(\dot{E}\) be any subset of \(H\). Applying hypothesis and Lemma 3.6 again, we get \(\mathbf{1}_{\dot{E}}\in B(H)\subseteq WAP(L^{1}(H))\). Now density of simple functions in \(L^{\infty}(H)\) implies that \(WAP(L^{1}(H))=L^{\infty}(H),\) which in turn implies that \(L^{1}(H)\) is Arens regular. (ii) \(\Longrightarrow\) (iii). This follows from [10, Theorem 5.2.3]. (iii) \(\Longrightarrow\) (i) is obvious. (iii) \(\Longrightarrow\) (iv). If \(H\) is finite, then clearly \(B(H)=A(H)\) and hence, by Theorem 4.3, \(B(H)\) has the power boundedness property. (iv) \(\Longrightarrow\) (iii). Suppose that \(B(H)\) is power bounded. Then \(A(H)\) is also power bounded. Thus, by Theorem 4.3, \(H\) is discrete. Towards a contradiction, assume that \(H\) has an infinite countable subhypergroup \(K.\) As \(B(H)\) is power bounded, by Lemma 3.4, \(B(K)\) also possesses the power boundedness property. Note that, by Lemma 3.5, for any subset \(\dot{E}\) of \(K\) there exists \(u\in B(K)\) with \(\|u\|_{\infty}=1\) such that \(F_{u}=\dot{E}\). Now since \(u\) is power bounded, by Corollary 3.8, \(\dot{E}\in\mathscr{R}_{c}(K)\). Thus \(K\) is finite, a contradiction. Hence, \(H\) is finite. ### Power boundedness of \(I(\dot{E})\) Our next result deals with the power boundedness of the ideal \(k(\dot{B}).\) This is an analogue of [11, Proposition 2.5]. **Proposition 4.10**.: _Let \(H\) be an ultraspherical hypergroup and \(\dot{E}\) a closed subset of \(H\). 
If the ideal \(k(\dot{E})\) has the power boundedness property, then every compact subset of \(H\setminus\dot{E}\) belongs to \(\mathscr{R}_{c}(H)\)._
Proof.: Let \(\dot{C}\) be a compact subset of \(H\setminus\dot{E}\). Then, by [12, Lemma 10.1 C], there exists an open \(\sigma\)-compact subhypergroup \(K\) of \(H\) which contains \(\dot{C}\). Since \(K\setminus\dot{C}\) is \(\sigma\)-compact, by Lemma 3.5, there exists \(u\in B(K)\) with \(\|u\|_{\infty}=1\) such that \(\dot{F}_{u}=\dot{C}\). Now, use Lemma 3.4 to get \(u_{0}\in B(H)\) that is \(u\) on \(K\) and vanishes outside \(K\). Also, using the regularity of \(A(H)\), one can find \(v\in A(H)\) such that \(0\leq v\leq 1\), \(v\equiv 1\) on \(\dot{C}\) and \(\operatorname{supp}(v)\cap\dot{E}=\emptyset.\) Let \(u_{1}=vu_{0}\). Then \(u_{1}\in k(\dot{E}),\|u_{1}\|_{\infty}=1\) and \(\dot{F}_{u_{1}}=\dot{C}\). By hypothesis, \(u_{1}\) is power bounded and hence, by Lemma 3.3, \(\dot{C}\in\mathscr{R}_{c}(H)\).
We shall end this section with a result which is of independent interest. This result is about the existence of a bounded approximate identity in the ideals \(I(\dot{E})\).
**Lemma 4.11**.: _Let \(H\) be an ultraspherical hypergroup associated to an amenable locally compact group \(G\). If \(\dot{E}\in\mathscr{R}_{c}(H)\), then_
1. \(\dot{E}\) _is a set of synthesis for_ \(A(H).\)_
2. \(I(\dot{E})\) _has a bounded approximate identity._
Proof.: (i). This follows from [1, Theorem 3.1].
(ii). Since \(p^{-1}(\dot{E})\in\mathscr{R}_{c}(G)\), by [1, Lemma 2.2] the ideal \(I(p^{-1}(\dot{E}))\) has a bounded approximate identity. Let \(v\in I(p^{-1}(\dot{E}))\) and let \(v_{0}\) be the element of \(A(H)\) associated with \(\pi(v)\). Then for each \(\dot{x}\in\dot{E}\), we have
\[v_{0}(\dot{x})=\pi(v)(x)=\int_{\mathcal{O}_{x}}v(z)d\pi^{*}(\delta_{x})(z)=0,\]
since \(\mathcal{O}_{x}\subseteq p^{-1}(\dot{E})\) and \(v\) vanishes on \(p^{-1}(\dot{E})\). Thus, we can identify \(\pi(I(p^{-1}(\dot{E})))\) with \(I(\dot{E})\). Suppose that \((e_{\alpha})_{\alpha}\) is a bounded approximate identity of \(I(p^{-1}(\dot{E}))\) and let \((\tilde{e}_{\alpha})_{\alpha}\) be its associated net in \(A(H)\), i.e., \(\tilde{e}_{\alpha}=\pi(e_{\alpha}).\) Then, for each \(w\in I(\dot{E})\), we have
\[\|w\tilde{e}_{\alpha}-w\|_{A(H)}=\|\tilde{w}\tilde{e}_{\alpha}-\tilde{w}\|_{A(G)}=\|\pi(\tilde{w}e_{\alpha}-\tilde{w})\|_{A(G)}\leq\|\tilde{w}e_{\alpha}-\tilde{w}\|_{A(G)}\to 0.\]
This completes the proof.
## Acknowledgement
The second author would like to thank the Science and Engineering Research Board (SERB), India, for the MATRICS project grant, Project No. MTR/2018/000849.
2310.13700
Augmenting Heritage: An Open-Source Multiplatform AR Application
AI NeRF algorithms, capable of cloud processing, have significantly reduced hardware requirements and improved processing efficiency in photogrammetry pipelines. This accessibility has unlocked the potential for museums, charities, and cultural heritage sites worldwide to leverage mobile devices for artifact scanning and processing. However, the adoption of augmented reality platforms often necessitates the installation of proprietary applications on users' mobile devices, which adds complexity to development and limits global availability. This paper presents a case study that demonstrates a cost-effective pipeline for visualizing scanned museum artifacts using mobile augmented reality, leveraging an open-source embedded solution on a website.
Corrie Green
2023-09-28T16:36:25Z
http://arxiv.org/abs/2310.13700v1
# Augmenting Heritage: An Open-Source Multiplatform AR Application
###### Abstract
AI NeRF algorithms, capable of cloud processing, have significantly reduced hardware requirements and improved processing efficiency in photogrammetry pipelines. This accessibility has unlocked the potential for museums, charities, and cultural heritage sites worldwide to leverage mobile devices for artifact scanning and processing. However, the adoption of augmented reality platforms often necessitates the installation of proprietary applications on users' mobile devices, which adds complexity to development and limits global availability. This paper presents a case study that demonstrates a cost-effective pipeline for visualizing scanned museum artifacts using mobile augmented reality, leveraging an open-source embedded solution on a website.
Augmented Reality - Visualization - PWA
This preprint has not undergone peer review or any post-submission corrections and is being reviewed at an appropriate journal.
\(\copyright\) Corrie Green1 Robert Gordon University [email protected] Footnote 1: [https://orcid.org/0000-0003-0404-3668](https://orcid.org/0000-0003-0404-3668).
## 1 Introduction
The loss of the additional dimension when visualizing 3D scanned objects on a 2D screen can lead to a reduction of user insight and engagement compared to physically interacting with or viewing museum artifacts. Many museums hold collections that are not available for interaction because they sit behind a display case, or are not exhibited at all due to a lack of exhibition space. This paper presents a cost-effective pipeline for adopting 3D photogrammetry scans into a browser-based solution, allowing display on both mobile and desktop devices using Google's 3D model-viewer framework, which supports Augmented Reality (AR) [1]. Due to the low-cost, high-fidelity scanning technology available today, there has been mass adoption of digital twinning of cultural heritage sites and museum artifacts for preservation [2], allowing future generations to experience historically rich sites that may have since been damaged or lost. By 3D scanning artifacts using photogrammetry or lidar-based approaches, we can accurately reproduce a digital twin of an artifact or historical site for use in visualization and interactive applications. Although scanning technologies have improved, providing an engaging and accessible presentation environment for scanned models has been lagging. Mobile augmented reality allows users with smartphone devices to augment information in their real-world environment. In addition to overlaying information such as text, we can provide access to the third dimension for artifacts that are traditionally behind glass protection in museum exhibits, allowing visitors to explore them from all perspectives. Without an accessible and always-available database of models, scanned objects may not be fully exploited by members of the public and the research community. In addition, due to the closure of museums around the world during the recent global pandemic, artifacts have been preserved without physical access. A generous interface would support representing the richness of cultural heritage collections, allowing visitors around the world to explore them at any time and enriching interpretation across associated collections [3].
Figure 1: Photo of the scanning environment made available at the Museum of Childhood, Dingwall.
Figure 2: Screenshot of the developed website being rendered on a desktop browser.
Figure 3: Screenshot demonstrating the unprocessed 3D scan, without data cleanup, of a Scottish dress, with a comparison of the cleaned scan on the right.
Link to the project repository and website: [https://github.com/corriedotto/3D-Museum-Library](https://github.com/corriedotto/3D-Museum-Library)
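For readers who want to reproduce the embedding step, the following is a minimal illustrative sketch, not the authors' code: it writes a static page that displays a scanned artifact with Google's open-source model-viewer web component, which provides the AR hand-off on supported mobile browsers. The artifact file name, page title, and CDN path are assumptions and may differ from what the project repository uses.

```python
# Minimal sketch: write a static page that embeds a scanned artifact with
# Google's <model-viewer> web component (AR-capable on supported devices).
# The GLB file name and CDN path below are illustrative assumptions.
from pathlib import Path

ARTIFACT = "scottish_dress.glb"  # hypothetical export from the scanning pipeline

PAGE = f"""<!DOCTYPE html>
<html>
  <head>
    <title>Museum Artifact Viewer</title>
    <script type="module"
            src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
  </head>
  <body>
    <model-viewer src="{ARTIFACT}"
                  alt="3D scan of a museum artifact"
                  ar ar-modes="webxr scene-viewer quick-look"
                  camera-controls auto-rotate>
    </model-viewer>
  </body>
</html>
"""

Path("index.html").write_text(PAGE, encoding="utf-8")
print("Wrote index.html; serve it over HTTPS so mobile browsers can offer AR.")
```

Because the viewer is a standard web component, the same page degrades gracefully to an interactive 3D view on desktop browsers where AR is unavailable, which matches the multiplatform goal described above.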
Figure 1: Photo of the Scanning environment made available at the Museum of Childhood Dingwall Figure 3: Screenshot demonstrated the unprocessed 3D scan without data cleanup of a Scottish dress with a comparison of the cleaned scan on the right Figure 2: Screenshot of the developed website being rendered on a desktop browser Link to the project repository and website: [https://github.com/corriedotto/3D-Museum-Library](https://github.com/corriedotto/3D-Museum-Library)
2309.06508
Exceptional point induced quantum phase synchronization and entanglement dynamics in mechanically coupled gain-loss oscillators
The optomechanical cavity (OMC) system has been a paradigm in the manifestation of continuous variable quantum information over the past decade. This paper investigates how quantum phase synchronization relates to bipartite Gaussian entanglement in coupled gain-loss mechanical oscillators, where the gain and loss rates are engineered by driving the cavity with blue and red detuned lasers, respectively. We examine the role of exceptional point in a deterministic way of producing self-sustained oscillations that induce robust quantum correlations among quadrature fluctuations of the oscillators. Particularly, steady phase synchronization dynamics along with the entanglement phenomena are observed in the effective weak coupling regime above a critical driving power. These phenomena are further verified by observing the mechanical squeezing and phase space rotations of the Wigner distributions. Additionally, we discuss how the oscillators frequency mismatches and decoherence due to thermal phonons impact the system dynamics. These findings hold promise for applications in phonon-based quantum communication and information processing.
Joy Ghosh, Souvik Mondal, Shailendra K. Varshney, kapil Debnath
2023-09-12T18:30:51Z
http://arxiv.org/abs/2309.06508v1
Exceptional point induced quantum phase synchronization and entanglement dynamics in mechanically coupled gain-loss oscillators ###### Abstract The optomechanical cavity (OMC) system has been a paradigm in the manifestation of continuous variable quantum information over the past decade. This paper investigates how quantum phase synchronization relates to bipartite Gaussian entanglement in coupled gain-loss mechanical oscillators, where the gain and loss rates are engineered by driving the cavity with blue and red detuned lasers, respectively. We examine the role of exceptional point in a deterministic way of producing self-sustained oscillations that induce robust quantum correlations among quadrature fluctuations of the oscillators. Particularly, steady phase synchronization dynamics along with the entanglement phenomena are observed in the effective weak coupling regime above a critical driving power. These phenomena are further verified by observing the mechanical squeezing and phase space rotations of the Wigner distributions. Additionally, we discuss how the oscillators' frequency mismatches and decoherence due to thermal phonons impact the system dynamics. These findings hold promise for applications in phonon-based quantum communication and information processing. ## I Introduction Synchronization is a natural phenomenon widely observed around us, where two or more systems tend to act similarly at the same time. Huygens initially proposed the notion of a synchronized oscillation in the early 17th century during experiments involving mechanical clocks [1]. Since then, it has been found in various processes such as the flashing of fireflies, chemical reactions, neuron networks, heart cells, etc [2]. Synchronization in different classical setups was extensively studied in the past, but in the quantum domain, it gained popularity after Mari et al. proposed a measure to compute complete synchronization and phase synchronization for continuous variable systems [3], which has been applied in different fields like cavity QED [4], atomic ensembles [5], VdP oscillators [6], spin chains [7], etc. The principle of quantum synchronization differs fundamentally from its classical counterpart due to Heisenberg's uncertainty relation [3]. Earlier studies have demonstrated that synchronization is closely linked to other quantum correlations such as entanglement [7], mutual information [6], and discord [8]. The coexistence of quantum synchronization and entanglement is a fascinating phenomenon. Previous research has shown that superconducting qubits emitting entangled photons can synchronize [7]. Additionally, clock synchronization has been achieved using entangled photons generated through SPDC [9]. In quantum many-body systems, it has been observed that entanglement and synchronization are closely linked and can lead to collective cooperative behavior [10]. Moreover, it has been confirmed that spin-1 systems can also be synchronized through entanglement [7]. In this context, optomechanical architectures [11] appeared to be a promising platform to test the spontaneous synchronization among micro or nanomechanical oscillators, where two mechanical or cavity modes can be directly coupled through phonon or photon tunneling. Multiple synchronization schemes have been developed in optomechanics [12; 13; 14; 15], among which enhancing the nonlinearity is considered a primary feature. Periodic modulation [16; 17; 18] and quadratic coupling [19] are frequently utilized for this purpose. 
Recently, a counter-intuitive phenomenon of noise-induced synchronization has been observed [20]. Experimental realizations such as self-organized phonon lasers [21] and synchronization blockade [22] are also reported. The curiosity about the interplay between quantum synchronization and entanglement in optomechanical setup is emerging in recent times [23; 24; 25; 19]. Most of the works earlier demonstrated this idea by optical coupling only, mechanical interaction-based design to test quantum synchronization is not well-realized in the current literature. The idea of exceptional points (EPs) in mechanically coupled gain-loss structures is a novel tool that evolved rapidly in the past decade [26]. EPs refer to fundamental degeneracies in gain-loss cavities or waveguides [27; 28; 29], where the system's eigenvalues coalesce and become degenerate. EP-based optomechanical structures have been studied for mass sensing [30], optomechanically induced transparency [31], sideband generation [32] offering better controllability and low power threshold requirement. It has also been applied to achieve synchronization and frequency-locking effect in the classical domain [33; 34]. Operation near exceptional point is utilized to delay sudden death of entanglement [35] and it is also reported that entanglement is greatly enhanced in gain-loss OMC systems [36]. Based on the previous literature, it appears that EP has the potential to establish entanglement, so the question we propose is, can it also develop robust quantum synchronization? If non-identical oscillators can be entangled and synchronized simultaneously through EP, this can be applied in the field of quantum communication and information processing. In this paper, we present a configuration consisting of two mechanically coupled OMCs with symmetrical properties. By employing blue and red detuned lasers to drive the cavities, the gain-loss characteristics of the mechanical oscillators can be manipulated [32]. This configuration can lead to self-sustained oscillations in both oscillators, which is considered to investigate quantum phase synchronization by employing Mari's criterion [3]. The entanglement between the two oscillators is also established simultaneously and estimated by logarithmic negativity [37]. Based on the numerical calculations, a rich connection between phase synchronization and entanglement is further clarified. The Wigner distributions are plotted to demonstrate the squeezed and synchronized Gaussian states. Our findings reveal that by taking advantage of the exceptional point, we can switch quickly into limit cycle oscillation that generates the sustainable quantum correlation among quadrature fluctuations of the oscillators. Moreover, the system's parameters such as the mechanical coupling rate or driving field strength can be suitably modified to control quantum phase synchronization and entanglement dynamics in a flexible manner. This approach holds the potential to advance our understanding of the relationship between quantum phase synchronization and entanglement phenomena. The work is organized as follows. In Sec. II, the quantum Langevin equations are described along with the theoretical model. The numerical simulation of quantum correlation properties is defined in Sec. III by taking the covariance matrix approach. Section IV holds the results and discussion about the possible relationship between these two phenomena, and Sec. V concludes the work. 
## II Model and Classical Dynamics In this study, we consider a system comprising two optomechanical cavities that are identical but oppositely detuned, both coupled mechanically, as depicted in Fig.1. The coupling between the two oscillators is facilitated through phonon tunneling. The Hamiltonian of the complete system can be expressed as follows (taking \(\hbar=1\)) \[\hat{\mathcal{H}} = \sum_{j=1,2}[-\Delta_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}+\frac{ \omega_{mj}}{2}(\hat{p}_{j}^{2}+\hat{q}_{j}^{2})-g_{0j}\hat{a}_{j}^{\dagger} \hat{a}_{j}\hat{q}_{j} \tag{1}\] \[+iE_{j}(\hat{a}_{j}^{\dagger}-\hat{a}_{j})]-J\hat{q}_{1}\hat{q}_{2}\] The Hamiltonian is written here in the rotating frame of the driving frequency (\(\omega_{L}\)) with cavity detuning from optical resonance is \(\Delta_{j}=\omega_{oj}-\omega_{L}\). Here \(\hat{a}_{j}^{\dagger}(\hat{a}_{j})\) are the creation(annihilation) operators associated with the optical field with frequency (\(\omega_{oj}\)) and \(\hat{q}_{j}\) and \(\hat{p}_{j}\) are the dimensionless position and momentum operators of the \(j^{th}\) mechanical oscillators with frequencies \(\omega_{mj}\). The optomechanical coupling of each cavity is taken as \(g_{0j}\) and the laser driving field strength of the two single-mode cavities is \(E_{j}\). The mechanical coupling strength \(J\) acts as a bosonic Gaussian channel between the oscillators, assumed to be much smaller than mechanical frequency (\(J\ll\omega_{j}\)). The dissipative dynamics of the system are described by the following set of nonlinear quantum Langevin equations, \[\partial_{t}\hat{a}_{j} = -(\kappa-i\Delta_{j})\hat{a}_{j}+ig_{0}\hat{a}_{j}\hat{q}_{j}+E_ {j}+\sqrt{2\kappa}\hat{a}_{j}^{in}\] \[\partial_{t}\hat{q}_{j} = \omega_{mj}\hat{p}_{j}\] \[\partial_{t}\hat{p}_{j} = -\omega_{mj}\hat{q}_{j}-\gamma_{mj}\hat{p}_{j}+J\hat{q}_{3-j}+g_{ 0}\hat{a}_{j}^{\dagger}\hat{a}_{j}+\hat{\eta}_{j} \tag{2}\] Here, \(\gamma_{mj}\) and \(\kappa\) represent the intrinsic dissipation of mechanical oscillators and optical cavities. We have taken the cavity decay rate (\(\kappa\)) and optomechanical constant (\(g_{0j}\)) identical for both cavities for simplicity. The laser driving amplitude provided for both cavities is also the same (\(E_{1}=E_{2}=E\)). The stochastic noise operators for optical and mechanical systems are given as \(\hat{a}_{j}^{in}\) and \(\hat{\eta}_{j}\), satisfying the standard correlation \(\langle\hat{a}_{i}^{in\dagger}(t)\hat{a}_{j}^{in}(t^{\prime})+\hat{a}_{j}^{in \dagger}(t^{\prime})\hat{a}_{i}^{in}(t)\rangle=\delta_{ij}\delta(t-t^{\prime})\) and \(\frac{1}{2}\langle\hat{\eta}_{i}(t)\hat{\eta}_{j}(t^{\prime})+\hat{\eta}_{j}(t^ {\prime})\hat{\eta}_{i}(t)\rangle=\gamma_{mj}(2\bar{n}_{m}+1)\delta_{ij}\delta (t-t^{\prime})\) under Markovian approximation [38; 39]. The mean thermal phonon occupancy of the mechanical systems at temperature \(T\) is taken as same as \(\bar{n}_{m}=[\exp(\frac{\hbar\omega_{mj}}{k_{B}T})-1]^{-1}\) (where \(k_{B}\) is the Boltzmann constant). The quantum Langevin equations are usually solved using the standard linearization technique, where the classical mean dynamics and quadrature fluctuations are separated. The classical dynamical equations, by decomposing cavity and me Figure 1: (Color online) Schematic diagram of two Optomechanical cavities coupled mechanically, driven by red (\(\Delta_{1}<0\)) and blue (\(\Delta_{2}>0\)) detuned laser fields, respectively. 
The opposite detunings characterize the gain-loss effect, whereas \(E_{j}\) (\(j=1,2\)) represents the optical driving power incident on one end of the Fabry–Pérot cavity. chanical operators into two parts, \(\hat{\mathcal{O}}(t)=\langle\hat{\mathcal{O}}(t)\rangle+\delta\mathcal{O}(t)\), where \(\mathcal{O}=a_{j},q_{j},p_{j}\), given as \[\partial_{t}\langle\hat{a}_{j}\rangle = -(\kappa-i\Delta_{j})\langle\hat{a}_{j}\rangle+ig_{0}\langle\hat{ q}_{j}\rangle\langle\hat{a}_{j}\rangle+E\] \[\partial_{t}\langle\hat{q}_{j}\rangle = \omega_{mj}\langle\hat{p}_{j}\rangle\] \[\partial_{t}\langle\hat{p}_{j}\rangle = -\omega_{mj}\langle\hat{q}_{j}\rangle-\gamma_{mj}\langle\hat{p}_{ j}\rangle+J\langle\hat{q}_{3-j}\rangle \tag{3}\] \[+g_{0}|\langle\hat{a}_{j}\rangle|^{2}\] The linearized equations describing quadrature fluctuations are \[\partial_{t}\delta a_{j} = -(\kappa-i\Delta_{j})\delta a_{j}+ig_{0}(\langle\hat{a}_{j} \rangle\delta q_{j}+\langle\hat{a}_{j}\rangle\delta a_{j})\] \[+\sqrt{2\kappa}\hat{a}_{j}^{in}\] \[\partial_{t}\delta q_{j} = \omega_{mj}\delta p_{j}\] \[\partial_{t}\delta p_{j} = -\omega_{mj}\delta q_{j}-\gamma_{mj}\delta p_{j}+J\delta q_{3-j} +g_{0}(\langle\hat{a}_{j}\rangle\delta a_{j}^{\dagger} \tag{4}\] \[+\langle\hat{a}_{j}^{*}\rangle\delta a_{j})+\hat{\eta}_{j}\] Assuming \(|\langle a_{j}\rangle|^{2}\gg 1\), we have ignored all higher-order terms in the above equations. Under this condition, when the cavity decay rate is much larger than the effective optomechanical coupling strength, i.e. \(\kappa\gg G_{j}\) (where \(G_{j}=g_{0}\langle a_{j}\rangle\)), the cavity fields can be safely eliminated from the governing equations [40; 41]. In that case, the effective Hamiltonian can be written as [32] \[\mathcal{H}_{eff} = (\Omega_{m1}-i\Gamma_{m1})b_{1}^{\dagger}b_{1}+(\Omega_{m2}+i \Gamma_{m2})b_{2}^{\dagger}b_{2} \tag{5}\] \[-J(b_{1}^{\dagger}b_{2}+H.c)\] Here the effective mechanical frequency and effective decay (gain) rates are modified, given by \(\Omega_{mj}=(\omega_{mj}\pm\Delta\omega_{mj})\) and \(\Gamma_{mj}=\gamma_{mj}\mp\gamma_{oj}\) respectively, where \(\Delta\omega_{mj}\) and \(\gamma_{oj}\) are the optomechanically induced modifications [32; 40]. In the resolved sideband regime (\(\omega_{mj}\gg\kappa\)), \(\Delta\omega_{mj}\) can be safely neglected and \(\gamma_{oj}=4G_{j}^{2}/\kappa\). \(b_{j}\) and \(b_{j}^{\dagger}\) are the creation and annihilation operators related to phonons of the mechanical oscillators, given as \(q_{j}=\frac{(b_{j}^{\dagger}+b_{j})}{\sqrt{2}}\) and \(p_{j}=\frac{i(b_{j}^{\dagger}-b_{j})}{\sqrt{2}}\). The corresponding eigen frequencies of the coupled mechanical modes with \(|\Omega_{m1}-\Omega_{m2}|\ll\omega_{mj}\) are given as \[\omega_{\pm}\approx\frac{\Omega_{m1}+\Omega_{m2}}{2}-i\frac{\Gamma_{m1}-\Gamma_ {m2}}{4}\pm\sqrt{J^{2}-(\frac{\Gamma_{m1}+\Gamma_{m2}}{4})^{2}} \tag{6}\] From Eq.(6) it can be seen that the phase transition between strongly coupled and weakly coupled regions occurs at \(J=(\Gamma_{m1}+\Gamma_{m2})/2\), which is also known as the exceptional point (EP). We numerically investigate the mechanical dynamics by solving Eq.(3) with the following set of parameters, \(\omega_{m1}=\omega_{m}\), \(\omega_{m2}=1.008\omega_{m}\), \(-\Delta_{1}=\Delta_{2}=\omega_{m}\), \(\kappa=0.1\omega_{m}\), \(\gamma_{m1}=10^{-2}\omega_{m},\gamma_{m2}=10^{-4}\omega_{m}\), \(g_{0j}=10^{-4}\omega_{m}\) that can be experimentally achievable in the resolved sideband regime setup. 
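A minimal Python sketch (an assumed implementation, not the authors' script) of how Eq. (3) can be integrated with this parameter set, working in units of \(\omega_{m}\) and splitting the cavity amplitudes into real and imaginary parts:

```python
# Minimal sketch: integrate the classical mean-field equations, Eq. (3), in units
# of omega_m for the parameter set quoted in the text.
# State vector: [Re a1, Im a1, q1, p1, Re a2, Im a2, q2, p2].
import numpy as np
from scipy.integrate import solve_ivp

wm = np.array([1.0, 1.008])        # omega_m1, omega_m2
Delta = np.array([-1.0, 1.0])      # oppositely detuned cavities
kappa, g0, J = 0.1, 1e-4, 0.03
gamma = np.array([1e-2, 1e-4])     # gamma_m1, gamma_m2
E = 500.0                          # drive amplitude in the limit-cycle regime

def rhs(t, y):
    a = y[0::4] + 1j * y[1::4]     # complex cavity amplitudes <a1>, <a2>
    q, p = y[2::4], y[3::4]
    da = -(kappa - 1j * Delta) * a + 1j * g0 * q * a + E
    dq = wm * p
    dp = -wm * q - gamma * p + J * q[::-1] + g0 * np.abs(a) ** 2
    out = np.empty_like(y)
    out[0::4], out[1::4] = da.real, da.imag
    out[2::4], out[3::4] = dq, dp
    return out

sol = solve_ivp(rhs, (0.0, 3000.0), np.zeros(8), max_step=0.1, rtol=1e-8)
q1, q2 = sol.y[2], sol.y[6]
print("late-time amplitudes |q1|, |q2|:",
      np.max(np.abs(q1[-2000:])), np.max(np.abs(q2[-2000:])))
```

Sweeping the drive amplitude `E` in this sketch reproduces the qualitative behaviour described next: decaying beats below the EP threshold and saturated, phase-locked limit cycles above it.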
The reason for choosing one mechanical oscillator more dissipative than the other (i.e. \(\gamma_{m1}>\gamma_{m2}\)) is given in the next section. The ode45 method in Matlab is used to solve the dynamical equations with the initial conditions assumed to be zero for all the variables. By fixing the mechanical coupling rate at \(J=0.03\omega_{m}(\ll\omega_{m})\) and varying the driving amplitude \(E\), the classical mechanical dynamics in various regimes can be observed. Fig. 2(a) depicts the average dynamics of the mechanical positions at driving amplitude \(E=100\omega_{m}\), which corresponds to the effective coupling condition \(J>\frac{\Gamma_{m1}+\Gamma_{m2}}{4}\). The mechanical oscillations show amplitude-modulated dynamics, conveying strong coupling between the resonators, but eventually the dynamics decay with the effective rate \((\Gamma_{m1}-\Gamma_{m2})/4\). Increasing the driving power to \(E=500\omega_{m}\) brings the system into the weakly coupled zone that corresponds to the effective condition \(J<\frac{\Gamma_{m1}+\Gamma_{m2}}{4}\). In this regime, the mechanical energies are localized, and the oscillations in both resonators amplify. The optomechanical nonlinearity saturates the growth, as shown in Fig. 2(b), with the corresponding phase portrait shown in Fig. 2(c). Though we have assumed a small resonance frequency difference, the dynamics in both mechanical oscillators evolve with a locked phase, consistent with the results shown in [30; 32]. The variation of the mechanical oscillation amplitude \(A_{1,2}\) with the driving strength is shown in Fig. 2(d). From this, it can be found that the transition point occurs at \(E=E_{p}=390\omega_{m}\). After the EP, a sudden amplification of oscillation (denoted by the shaded region) occurred, and the limit cycles were reached when \(E=490\omega_{m}\).
Figure 2: (Color online) Classical dynamics of (a) decaying oscillation of the mechanical gain (blue) and loss (red) oscillators before reaching the EP at driving power level \(E=100\omega_{m}\). (b) Self-sustained oscillation after crossing the EP at power level \(E=500\omega_{m}\). (c) Phase portrait at the same driving strength exhibiting a limit cycle. (d) Amplitudes of the position variables (\(q_{1,2}\)) of the mechanical oscillators against driving power \(E\), which started developing after the EP. The shaded region specifies the sudden amplification of mechanical amplitudes after the EP, which stabilized at \(E=490\omega_{m}\).
The difference of amplitudes increases with increasing \(E\) as the influence of the mechanical coupling \(J\) becomes weaker, i.e. \(J\ll(\Gamma_{m1}+\Gamma_{m2})/4\), which results in an effectively smaller flow of mechanical energy from cavity 2 to cavity 1. The transition point is consistent with the analytical solution of the effective mechanical mode picture given by Eq.(6).
## III Quadrature fluctuations and correlations
In this section, we discuss the quantum correlation properties of the quadrature fluctuations and further define the phase synchronization and entanglement generation schemes of the coupled mechanical oscillators.
Introducing the quadrature operators for optical fields \(\hat{x}_{j}=\frac{1}{\sqrt{2}}(\delta a_{j}^{\dagger}+\delta a_{j})\) and \(\hat{y}_{j}=\frac{i}{\sqrt{2}}(\delta a_{j}^{\dagger}-\delta a_{j})\) and for the noise operators \(\hat{x}_{j}^{in}=\frac{1}{\sqrt{2}}(\delta a_{j}^{in\dagger}+\delta a_{j}^{in})\), \(\hat{y}_{j}^{in}=\frac{i}{\sqrt{2}}(\delta a_{j}^{in\dagger}-\delta a_{j}^{in})\), the set of Eq.(4) describing fluctuations can be rewritten in compact matrix form as \[\partial_{t}u=\mathcal{A}(t)u+(t) \tag{7}\] Here \(u^{T}=(\delta q_{1},\delta p_{1},\delta x_{1},\delta y_{1},\delta q_{2}, \delta p_{2},\delta x_{2},\delta y_{2})\) is the quadrature fluctuation vector and \(n^{T}=(0,\eta_{1},\sqrt{2\kappa}\delta x_{1}^{in},\sqrt{2\kappa}\delta y_{1}^ {in},0,\eta_{2},\sqrt{2\kappa}\delta x_{2}^{in},\sqrt{2\kappa}\delta y_{2}^ {in})\) is the input noise vector with the drift matrix \(\mathcal{A}\) is given in Appendix A. The statistical correlations of quadrature fluctuations can be found by studying the evolution of the drift matrix \(\mathcal{A}\). The formal solution of Eq.(7) can be written as \(u(t)=M(t)u(0)+\int_{0}^{t}M(\tau)\mathcal{N}(t-\tau)d\tau\), where \(M(t)=e^{\mathcal{A}t}\). The stability conditions of the system are obtained by numerically solving the eigenvalues of the drift matrix \(\mathcal{A}\), in which the system becomes unstable when any one of the real part of the eigenvalues becomes positive. Fig.3 shows the stable and unstable regions for varying driving strength, where we notice that for higher \(\gamma_{m1}/\gamma_{m2}\) ratio, the system becomes unstable in the effective weak coupling regime (\(E\gtrsim E_{p}\)). On the other hand, fixing \(\gamma_{m1}\sim\gamma_{m2}\) makes the system unstable throughout the whole driving power range. This would hamper the effect of EP on the emergence of entanglement and synchronization dynamics, as finite oscillations may get induced below the critical driving power \(E<E_{p}\). Since the fluctuation dynamics in Langevin equations are linearized and noises are also taken as zero-mean Gaussian distribution, the evolved states are also time-dependent Gaussian states with zero means irrespective of initial conditions [42]. Therefore, Gaussian dynamics can be fully characterized by the covariance matrix (CM) formalism [43]. Let, \(\mathcal{V}\) be the covariance matrix whose elements are defined as \[\mathcal{V}=\frac{1}{2}\langle u_{i}(t)u_{j}(t)+u_{j}(t)u_{i}(t)\rangle \tag{8}\] Here \(u_{j}\) is the \(j^{th}\) entry of the quadrature vector \(u\) defined and the evolution of the covariance matrix and its elements are governed by the following differential equation \[\partial_{t}\mathcal{V}=\mathcal{AV}+\mathcal{V}\mathcal{A}^{T}+\mathcal{N} \tag{9}\] \(\mathcal{N}\) is the diffusion matrix for noise, which satisfies the correlation formula \(\frac{1}{2}\langle n_{i}(t)n_{j}(t^{\prime})+n_{j}(t^{\prime})n_{i}(t)\rangle= N_{ij}\delta(t-t^{\prime})\). This is used to deduce noise correlation vector as \(\mathcal{N}=\)Diag\([0,\gamma(2\bar{n}_{m}+1),\kappa,\kappa,0,\gamma(2\bar{n}_{m}+1),\kappa,\kappa]\). 
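A compact way to integrate Eq. (9) numerically is to flatten \(\mathcal{V}\) into a vector; the sketch below (an assumed implementation, not the authors' code) takes a user-supplied function returning the drift matrix of Appendix A evaluated along the classical trajectory, and uses the two mechanical damping rates in the diffusion matrix quoted above.

```python
# Minimal sketch: propagate the covariance matrix via Eq. (9),
# dV/dt = A(t) V + V A(t)^T + N, by flattening V into a 64-component vector.
# A_of_t is assumed to return the 8x8 drift matrix of Appendix A at time t.
import numpy as np
from scipy.integrate import solve_ivp

def lyapunov_rhs(t, v_flat, A_of_t, N):
    V = v_flat.reshape(8, 8)
    A = A_of_t(t)
    dV = A @ V + V @ A.T + N
    return dV.ravel()

def evolve_covariance(A_of_t, nbar=0.0, gamma=(1e-2, 1e-4), kappa=0.1,
                      t_final=3000.0):
    # diffusion matrix Diag[0, gamma1(2nbar+1), kappa, kappa, 0, gamma2(2nbar+1), kappa, kappa]
    N = np.diag([0.0, gamma[0] * (2 * nbar + 1), kappa, kappa,
                 0.0, gamma[1] * (2 * nbar + 1), kappa, kappa])
    V0 = 0.5 * np.eye(8)          # vacuum / ground-state initial condition
    sol = solve_ivp(lyapunov_rhs, (0.0, t_final), V0.ravel(),
                    args=(A_of_t, N), max_step=0.1)
    return sol.t, sol.y.T.reshape(-1, 8, 8)
```

Each returned \(8\times 8\) snapshot can then be fed into the synchronization and entanglement measures defined below.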
The CM for the whole system has the following form
\[\mathcal{V}_{8\times 8}=\begin{pmatrix}\mathcal{V}_{m_{1}}&\mathcal{V}_{m_{1},a_{1}}&\mathcal{V}_{m_{1},m_{2}}&\mathcal{V}_{m_{1},a_{2}}\\ \mathcal{V}_{a_{1},m_{1}}&\mathcal{V}_{a_{1}}&\mathcal{V}_{a_{1},m_{2}}&\mathcal{V}_{a_{1},a_{2}}\\ \mathcal{V}_{m_{2},m_{1}}&\mathcal{V}_{m_{2},a_{1}}&\mathcal{V}_{m_{2}}&\mathcal{V}_{m_{2},a_{2}}\\ \mathcal{V}_{a_{2},m_{1}}&\mathcal{V}_{a_{2},a_{1}}&\mathcal{V}_{a_{2},m_{2}}&\mathcal{V}_{a_{2}}\end{pmatrix} \tag{10}\]
Here \(m_{1}\) and \(m_{2}\) denote the mechanical modes of the vibrating oscillators and \(a_{1}\) and \(a_{2}\) are the modes corresponding to the optical fields. Each block of \(\mathcal{V}\) is a \(2\times 2\) square matrix. The off-diagonal blocks represent the covariances between different subsystems, while the diagonal blocks contain the variances of each subsystem. From this matrix, we can easily calculate the correlation properties between two different subsystems. The coupled mechanical system can be described by extracting the submatrix \(\mathcal{V}^{\prime}\) from Eq.(10), which has the following form
\[\mathcal{V^{\prime}}_{4\times 4}=\begin{pmatrix}\mathcal{V}_{m_{1}}&\mathcal{V}_{m_{1},m_{2}}\\ \mathcal{V}_{m_{1},m_{2}}^{T}&\mathcal{V}_{m_{2}}\end{pmatrix} \tag{11}\]
By singular value decomposition, it can be shown that the \(2\times 2\) symplectic blocks of Eq.(11) can be written as \(\mathcal{V}_{mj}=(2\bar{n}_{m}+1)R(\phi)S(2r)R^{T}(\phi)\), where \(S(r)=\exp[r(b_{j}^{2}-b_{j}^{\dagger 2})]\) is the squeezing operator for the \(j^{th}\) mechanical mode with squeezing parameter \(r\), and \(R(\phi_{j})\) is a phase rotation.
Figure 3: (Color online) The maximum eigenvalues of \(\mathcal{A}\) against the driving power strength for different damping ratios of the oscillators; other parameters remain the same. The critical driving power \(E_{p}\) is depicted by the red star on the horizontal axis.
### Bipartite Gaussian entanglement The bipartite entanglement between the Gaussian states can be estimated by the following expression of logarithmic negativity [44; 45] \[E_{n}=\max[0,-\log(2\nu^{-})] \tag{14}\] where \[\nu^{-}=\sqrt{\frac{\Sigma-\sqrt{\Sigma^{2}-4\det(\mathcal{V}^{\prime})}}{2}} \tag{15}\] is the smallest symplectic eigenvalue of the partial transpose of the submatrix \(\mathcal{V}^{\prime}\) in with \(\Sigma=\det(\mathcal{V}_{m_{1}})+\det(\mathcal{V}_{m_{2}})-2\det(\mathcal{V}_ {m_{1},m_{2}})\). According to Simon's criterion of positive partial transpose (PPT) [37], the necessary and sufficient condition for bipartite Gaussian states to be entangled is \(\nu^{-}<0.5\). ## IV Results and discussion In this section, we establish the relationship between entanglement and phase synchronization between the coupled mechanical oscillators and discuss the significance of the EP in the proposed system for developing stable quantum correlation dynamics. ### Phase synchronization and entanglement dynamics We begin the analysis by numerically solving Eq.(9), which describes the behavior of the CM elements associated with the optical and mechanical modes. For the numerical simulation, the initial condition of \(\mathcal{V}(0)=\frac{1}{2}\mathrm{Diag}[1,1,1,1,1,1,1]\) is used. This corresponds to the vacuum state for both the cavities and thermal state for the mechanical oscillators with mean thermal phonon number \(\bar{n}_{m}=0\), which can be achieved by pre-cooling them to their ground state [46]. The CM specifies how quadrature fluctuations are correlated across different bipartite subsystems, and its elements are specified in Eq.(10). From the matrix \(\mathcal{V}\), we have extracted \(\mathcal{V}^{\prime}\) as given in Eq.(11), which only contains information about the vibrating oscillators. \(\mathcal{V}^{\prime}\) is associated with the characterization of phase synchronization (\(S_{p}\)) and entanglement (\(E_{n}\)) parameters, as defined in Eq.(12) and (14). While tuning the driving power \(E\) and surpassing the exceptional point threshold (\(E>E_{p}\)), the coupled oscillators enter into the limit cycle regime, where the consistent correlation of the quadrature fluctuations becomes noticeable. Beyond the EP zone, indicated by the shaded area in Fig.2(d), there is a sudden surge in mechanical Figure 4: (Color online) (a) Quantum phase synchronization and (b) Entanglement dynamics of the two mechanical oscillators in the self-sustained regime with driving power strength \(E=500\omega_{m}\) (dotted line) and \(E=600\omega_{m}\) (solid line), other parameters remain same as used before. vibrations, triggering instability within the system. In this case, the EP corresponds to critical driving power \(E_{p}=390\omega_{m}\), however, quantum correlation dynamics did not prevail until \(E\) reaches \(490\omega_{m}\), which is in congruence with the earlier finding [36]. The steady dynamics of the phase synchronization parameter \(S_{p}\) can be observed from Fig.4(a) for two different power levels, \(E=500\omega_{m}\) and \(E=600\omega_{m}\) in the limit cycle regime. \(S_{p}\) increases with high driving powers, as the optomechanical nonlinearity becomes strong in the cavities. A similar behavior is also observed for entanglement dynamics \(E_{n}\), shown in Fig.4(b). \(E_{n}\) depicts the dynamics of logarithmic negativity (\(E_{n}>0\)) for the same driving powers as considered for the synchronization parameter \(S_{p}\). 
The enhancement in \(S_{p}\) and \(E_{n}\) occurs as long as the driving power compensates for the weak effective coupling condition. Both dynamics can be further enhanced by increasing the driving power, which effectively increases the nonlinearity of the system. We also notice the death and rebirth of entanglement [47] happens at the lower driving strength (\(E=500\omega_{m}\)), which is near the EP and vanishes quickly as the power increases. This type of dynamics changes differently with higher power levels beyond \(E\gg E_{p}\) and different frequency mismatches between the oscillators, which we will discuss later in this section. Over time, entanglement and synchronization both exhibit periodic variations. This is due to the fact that quantum fluctuations follow classical orbits as long as all Lyapunov exponents of the classical equations are negative [48]. It is essential to understand that steady quantum correlation dynamics arise with weak coupling conditions only. When the driving amplitude is not strong enough (\(E<E_{p}\)), classical dynamics cannot be sustained, and the oscillators do not entangle or synchronize. ### Wigner distribution and fidelity In order to further confirm the influence of exceptional points on synchronization and entanglement, we plot the two-mode Wigner distribution function \(W(q,p)\) of the coupled oscillators for various driving powers. The Gaussian Wigner distributions of the two mechanical modes \(m_{1}\) and \(m_{2}\) are defined as [49; 50] \[W(q,p)=\frac{1}{2\pi\sqrt{\det(\mathcal{V}_{m_{j}})}}\exp[-\frac{u_{j}\mathcal{ V}_{m_{j}}^{-1}u_{j}^{T}}{2}] \tag{16}\] where \(j=1,2\) represents the two coupled oscillators and \(u_{j}\) and \(\mathcal{V}_{mj}\) are first and second-order moment vectors of the \(j^{th}\) mechanical mode. The first-order moment vector, \(u_{j}\), indicates the position of the origin and the second-moment vector \(\mathcal{V}_{mj}\) can be found from diagonal block matrices of Eq.(11). However, \(u_{j}\) does not provide any relevant information and can be conveniently set to zero. The phase space dynamics of the gain (loss) oscillators are represented with blue (red) colors in Fig.5. The driving power \(E=100\omega_{m}\) corresponds to both oscillators being in their ground state with nearly equal Figure 5: (Color online) The Wigner distribution function \(W(q,p)\) of gain (blue) and loss (red) oscillators at different driving powers. The upper panel (a)-(d) represent \(W(q,p)\) upto exceptional point with \(E=100\omega_{m}\), \(E=200\omega_{m}\), \(E=300\omega_{m}\), and \(E=400\omega_{m}\), whereas in the lower panel (e)-(h) represent \(W(q,p)\) after the exceptional point with \(E=500\omega_{m}\), \(E=600\omega_{m}\), \(E=700\omega_{m}\), and \(E=800\omega_{m}\), respectively. Other parameters remain the same, and time is fixed at \(t/\tau=5000\) (where \(\tau=1/\omega_{m}\)) for both oscillators. net gain or loss, as shown in Fig.5(a). This occurs because of the strong coupling, which causes a coherent exchange of energy between them. As a result, there is no indication of squeezing or rotation in the dynamics of the phase space. With the rise in driving power, (b) \(E=200\omega_{m}\) and (c) \(E=300\omega_{m}\), the dispersion in phase space causes a decrease in the Wigner density function. This fact can also be verified by the decaying dynamics of classical oscillation in Fig.2(a). 
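For completeness, here is a small sketch (assumed helper functions, not the authors' code) of how \(S_{p}\) from Eq. (12) and \(E_{n}\) from Eqs. (14)-(15) can be evaluated at each time step from the \(4\times 4\) mechanical block of the covariance matrix together with the classical mean quadratures:

```python
# Minimal sketch of the two figures of merit, computed from the 4x4 mechanical
# covariance matrix Vm (ordering: dq1, dp1, dq2, dp2) and the classical means
# <q_j>, <p_j> that fix the rotation angles phi_j.
import numpy as np

def phase_sync(Vm, mean_q, mean_p):
    """S_p of Eq. (12): inverse variance of the phase-locked momentum error."""
    phi = np.arctan2(mean_p, mean_q)   # phi_j = arctan(<p_j>/<q_j>), robust branch
    # rotated momentum fluctuation dp'_j = cos(phi_j) dp_j - sin(phi_j) dq_j
    c = np.array([-np.sin(phi[0]), np.cos(phi[0]),     # coefficients of mode 1
                  +np.sin(phi[1]), -np.cos(phi[1])])   # minus sign: dp'_1 - dp'_2
    c /= np.sqrt(2.0)
    var_p_minus = c @ Vm @ c
    return 0.5 / var_p_minus

def log_negativity(Vm):
    """E_n of Eqs. (14)-(15) for the two mechanical modes."""
    A, B, C = Vm[:2, :2], Vm[2:, 2:], Vm[:2, 2:]
    sigma = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    nu_minus = np.sqrt((sigma - np.sqrt(sigma**2 - 4.0 * np.linalg.det(Vm))) / 2.0)
    return max(0.0, -np.log(2.0 * nu_minus))
```

With the vacuum-variance convention \(\mathcal{V}(0)=\frac{1}{2}\mathbb{1}\) used here, the separability threshold is \(\nu^{-}=0.5\), so a positive return value of `log_negativity` signals entanglement.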
Near the exceptional point at \(E=400\omega_{m}\) in Fig.5(d), we notice an overlapping in the Wigner distribution functions and the two mechanical modes are closest. Due to mechanical amplification at EP, the Wigner functions become delocalized, and abrupt stretching occurs in phase space, which is a sign of dynamical instability [46]. But this delocalization vanishes quickly as the system moves away from the EP zone. From Fig.5(e), the squeezing effect is evident in both oscillators, indicating entanglement. Additionally, Wigner's distributions start to rotate in phase space because the weak coupling cannot support energy exchange anymore and mechanical energy is localized in each oscillator. As we move further from EP, the degree of phase space rotation changes, as well as the squeezing effect. The origin of phase synchronization dynamics and entanglement of the limit cycle power levels in Fig.4(a),(b) can be traced back to this point. Fig.5(f)-(h), represents several other Wigner distributions at power levels \(E=600\omega_{m}\), \(E=700\omega_{m}\) and \(E=800\omega_{m}\). Also, the shape and phase space rotation angle of Wigner functions remain constant over time, at a fixed driving power. Fig.6 represents \(W(q,p)\) at three subsequent times with power level remaining \(E=600\omega_{m}\). It can be noted that when two oscillators are phase synchronized, their angle of rotation remains constant and the squeezing parameter does not vary with time. Therefore, the proposed system exhibits both phase synchronization and entanglement simultaneously. Another important aspect is to verify the behavior of Wigner functions by calculating fidelity for the Gaussian states. In this system, fidelity is determined by comparing the overlap of two Gaussian states. Essentially, it measures the level of similarity between these states, defined as [49; 50] \[f=\frac{\exp[-\frac{1}{2}(u_{1}-u_{2})(\mathcal{V}_{m_{1}}+\mathcal{V}_{m_{1}} )^{-1}(u_{1}-u_{2})^{T}]}{\sqrt{\Delta+\delta}-\sqrt{\delta}} \tag{17}\] With \(\Delta=\det(\mathcal{V}_{m_{1}}+\mathcal{V}_{m_{1}})\) and \(\delta=4(\det[\mathcal{V}_{m_{1}}]-0.25)(\det[\mathcal{V}_{m_{2}}]-0.25)\). The dynamics of fidelity \(f\) is shown in Fig.7. As expected from the Wigner distribution functions of Fig.5(d), fidelity near the exceptional point i.e. \(E=400\omega_{m}\) is almost unity. When we move further from EP, fidelity decreases, which is also evident from the shape of Wigner distributions. Fig.7 represents fidelity dynamics at two limit cycle power levels (b) Figure 6: (Color online) Time evolution of the Wigner distributions of the gain (loss) oscillator with driving power \(E=600\omega_{m}\) at different times (a) \(t/\tau=3000\), (b) \(t/\tau=4000\) and (c) \(t/\tau=5000\), (\(\tau=1/\omega_{m}\)) other parameters remain the same. Figure 7: Fidelity, \(f\), of the coupled mechanical systems near the exceptional point (a) \(E=400\omega_{m}\) and after the EP junction at (b) \(E=500\omega_{m}\) and (c) \(E=600\omega_{m}\), other parameters remain the same. \(E=500\omega_{m}\) and (c) \(E=600\omega_{m}\), same as used before. As fidelity fluctuates at high power, there is a greater likelihood of phase synchronization and entanglement. The fidelity dynamics indicate two initially uncorrelated Gaussian states were squeezed and phase space rotated by surpassing the EP, if they were unchanged, we would not observe these phenomena. 
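The Wigner distributions and fidelities discussed above follow directly from Eq. (16) and Eq. (17); a compact sketch (illustrative only) for zero-mean Gaussian states specified by their \(2\times 2\) covariance blocks is:

```python
# Minimal sketch of Eq. (16) and Eq. (17): the single-mode Gaussian Wigner
# function on a (q, p) grid and the fidelity between two zero-mean Gaussian
# states, each described by a 2x2 covariance block Vj.
import numpy as np

def wigner_gaussian(Vj, q, p):
    """W(q, p) of Eq. (16) for one mechanical mode with zero first moments."""
    Q, P = np.meshgrid(q, p)
    u = np.stack([Q, P], axis=-1)                      # grid of phase-space points
    Vinv = np.linalg.inv(Vj)
    quad = np.einsum('...i,ij,...j->...', u, Vinv, u)  # u V^{-1} u^T at each point
    return np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(Vj)))

def gaussian_fidelity(V1, V2):
    """f of Eq. (17) for two zero-mean single-mode Gaussian states."""
    Delta = np.linalg.det(V1 + V2)
    delta = 4.0 * (np.linalg.det(V1) - 0.25) * (np.linalg.det(V2) - 0.25)
    return 1.0 / (np.sqrt(Delta + delta) - np.sqrt(delta))
```

Evaluating `wigner_gaussian` for the two mechanical blocks at a fixed time reproduces the squeezed, rotated contours described above, and `gaussian_fidelity` quantifies their overlap.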
### Effect of frequency mismatch and finite thermal phonons It is important to consider the frequency difference between the coupled oscillators when examining phase synchronization and entanglement characteristics. To explore the deviations, the frequency mismatch is set with four distinct values in the range \(0-1\%\) of \(\omega_{m}\). Note that \(\delta\omega_{m}(=\omega_{m2}-\omega_{m1})\) should be maintained very small due to the assumption considered in the eigenvalue Eq.(6) in the classical analysis. Fig.8 shows the driving strengths \(E\) in the limit cycle regime after the EP-induced amplification, where the time average of the quantum phase synchronization parameter (a) \(S_{p}\) and the entanglement parameter logarithmic negativity (b) \(E_{n}\) is plotted while varying the driving powers. The time averages of the parameters are calculated by using the formula \(\langle h(t)\rangle=\lim_{t\rightarrow\infty}\frac{1}{l^{2}}\int_{0}^{T}h(t)dt\), where \(h(t)=S_{p},E_{n}\). It is clear from Fig.8 that the maximum of the averages of \(S_{p}\) and \(E_{n}\) occurs when the frequency mismatch is smallest. i.e. \(\delta\omega_{m}=0.2\%\). Interestingly, we observe a greater tendency towards synchronization and entanglement with an increase in \(\delta\omega_{m}\), contradictory to the classical case, resulting in similarity to the blockade phenomenon [51, 24]. When the deviation in frequency, \(\delta\omega_{m}\), is \(0.8\%\), the maximum range of driving power in the limit cycle region is required to synchronize and entangle the oscillators. The quantum correlation dynamics in both cases show a decline after reaching a maximum value as higher driving strength \(E\gg E_{p}\) creates a significant difference in oscillation amplitudes shown in Fig.2(d). Although phase synchronization lasts longer at higher power than entanglement, it shows that entanglement is more sensitive to frequency differences and stronger driving forces. The impact of thermal phonons on quantum dynamics is another essential parameter to analyze the deviations. The calculations mentioned so far do not account for thermal noise. However, as the system temperature rises, there is a corresponding increase in mean thermal phonon numbers. Fig.9 displayed dynamics of phase synchronization and entanglement for various thermal phonon numbers when increased from idealistic condition of \(\bar{n}_{m}=0\) to \(\bar{n}_{m}=10\) and to \(\bar{n}_{m}=20\). As the temperature increases, the amplitudes of entanglement and phase synchronization dynamics diminish, due to the decoherence. However, phase synchronization is more resilient than entanglement, even at high temperatures. ## V Conclusion In this paper, we explored entanglement and quantum phase synchronization dynamics in a gain-loss optomechanical system with mechanically coupled oscillators. By applying opposite detunings, we induced gain or loss effects in the mechanical oscillators. Using experimentally feasible parameters, we observed various oscillation dynamics classically, including damping and self-sustained vibrations. These oscillators showed phase-locked behavior in the weak coupling limit cycle regimes, which could be accessed by adjusting laser power and tuning the exceptional point. Quantum correlations of quadrature fluctuation operators emerged during limit cycle oscillations, revealing entanglement and synchronized quantum phases between the coupled mechanical modes. 
As the driving power increased, the effective coupling became weaker, and the entanglement and synchronization dynamics were enhanced. These phenomena initially grew but later decreased due to factors like higher driving strength and various frequency mismatches. The corresponding Wigner function distributions help to visualize the time-evolved Gaussian states that are squeezed and phase-space rotated.
Figure 8: (Color online) Time average of (a) phase synchronization (\(S_{p}\)) and (b) entanglement (\(E_{n}\)) parameters for different frequency mismatches \(\delta\omega_{m}\) as a function of driving power after the EP region; the other parameters are the same as used before.
Figure 9: (Color online) Time evolution of (a) phase synchronization and (b) entanglement at different mean thermal phonon numbers such as \(\bar{n}_{m}=0\), \(\bar{n}_{m}=10\) and \(\bar{n}_{m}=20\) at driving power \(E=600\omega_{m}\); other parameters same as above.
Remarkably, phase synchronization remained robust against thermal noise, while entanglement was more sensitive to temperature changes. Our numerical calculations have shown that the coupled mechanical oscillators can manipulate Gaussian quantum information by adjusting the EP, with the phonon-transfer mechanism acting as a Gaussian channel.
## Appendix A Drift Matrix
The drift matrix \(\mathcal{A}\) is given by
\[\mathcal{A}(t)=\begin{pmatrix}0&\omega_{m1}&0&0&0&0&0&0\\ -\omega_{m1}&-\gamma_{1}&\sqrt{2}g_{0}Re(\langle a_{1}\rangle)&\sqrt{2}g_{0}Im(\langle a_{1}\rangle)&J&0&0&0\\ \sqrt{2}g_{0}Im(\langle a_{1}\rangle)&0&-\kappa&-(\Delta_{1}+g_{0}\langle q_{1}\rangle)&0&0&0&0\\ \sqrt{2}g_{0}Re(\langle a_{1}\rangle)&0&(\Delta_{1}+g_{0}\langle q_{1}\rangle)&-\kappa&0&0&0&0\\ 0&0&0&0&0&\omega_{m2}&0&0\\ J&0&0&0&-\omega_{m2}&-\gamma_{2}&\sqrt{2}g_{0}Re(\langle a_{2}\rangle)&\sqrt{2}g_{0}Im(\langle a_{2}\rangle)\\ 0&0&0&0&\sqrt{2}g_{0}Im(\langle a_{2}\rangle)&0&-\kappa&-(\Delta_{2}+g_{0}\langle q_{2}\rangle)\\ 0&0&0&0&\sqrt{2}g_{0}Re(\langle a_{2}\rangle)&0&(\Delta_{2}+g_{0}\langle q_{2}\rangle)&-\kappa\end{pmatrix}\]
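As a practical complement to Appendix A, here is a small sketch (an assumed helper, not the authors' code) that assembles this drift matrix from the instantaneous classical means, with the quadrature ordering \((\delta q_{1},\delta p_{1},\delta x_{1},\delta y_{1},\delta q_{2},\delta p_{2},\delta x_{2},\delta y_{2})\) and default parameter values taken from Sec. II in units of \(\omega_{m}\):

```python
# Minimal sketch: assemble the 8x8 drift matrix of Appendix A from the
# instantaneous classical means <a_j> (complex) and <q_j> (real).
import numpy as np

def drift_matrix(a_mean, q_mean, wm=(1.0, 1.008), Delta=(-1.0, 1.0),
                 gamma=(1e-2, 1e-4), kappa=0.1, g0=1e-4, J=0.03):
    A = np.zeros((8, 8))
    for j in range(2):
        o = 4 * j                            # block offset for subsystem j
        Re, Im = a_mean[j].real, a_mean[j].imag
        det_eff = Delta[j] + g0 * q_mean[j]  # effective detuning Delta_j + g0<q_j>
        A[o + 0, o + 1] = wm[j]                                  # dq row
        A[o + 1, o + 0], A[o + 1, o + 1] = -wm[j], -gamma[j]     # dp row
        A[o + 1, o + 2] = np.sqrt(2) * g0 * Re
        A[o + 1, o + 3] = np.sqrt(2) * g0 * Im
        A[o + 1, 4 * (1 - j)] = J                                # mechanical coupling
        A[o + 2, o + 0] = np.sqrt(2) * g0 * Im                   # dx row
        A[o + 2, o + 2], A[o + 2, o + 3] = -kappa, -det_eff
        A[o + 3, o + 0] = np.sqrt(2) * g0 * Re                   # dy row
        A[o + 3, o + 2], A[o + 3, o + 3] = det_eff, -kappa
    return A
```

Passing this function, evaluated along the classical trajectory of Eq. (3), into the covariance-matrix integrator sketched earlier closes the loop between Eq. (3), Eq. (9), and the synchronization and entanglement measures.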
2309.13688
Path integral formalism for the free Dirac propagator in spherical coordinates
The relativistic Green's function of a free spin-1/2 fermion is derived using the Feynman path integral formalism in spherical coordinates. The Green's function is reduced to an exactly solvable path integral by an appropriate coordinate transformation. The result is given in terms of spherical Bessel functions and spherical spinors, and agrees with previous solutions of the problem.
Sreya Banerjee, Zoltán Harman
2023-09-24T16:31:57Z
http://arxiv.org/abs/2309.13688v1
# Path integral formalism for the free Dirac propagator in spherical coordinates ###### Abstract The relativistic Green's function of a free spin-\(\frac{1}{2}\) fermion is derived using the Feynman path integral formalism in spherical coordinates. The Green's function is reduced to an exactly solvable path integral by an appropriate coordinate transformation. The result is given in terms of spherical Bessel functions and spherical spinors, and agrees with previous solutions of the problem. ## 1 Introduction Green's functions of the Dirac equation have been used in different areas of atomic physics: in the calculation of radiative corrections in atoms and highly charged ions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10], tested by various state-of-the-art experimental methods (see e.g. [11; 12; 13; 14; 15; 16]); in describing relativistic atomic processes [17; 18], multiphoton interactions [19], x-ray scattering [20; 21], atoms in external fields [22], and the weak decay in muonic atoms [23], to name a few examples. In the current article, we derive the free Dirac Green's function using the path integral formalism of Feynman. The Green's function is reduced in Biedenharn's basis [24] into a radial path integral, the effective action of which is similar to that of a non-relativistic particle. In order to express the energy-dependent Green's function in a closed form, we convert the radial path integral to the path integral of an isotropic harmonic oscillator through coordinate transformation along with local time rescaling, in analogy with earlier works of Inomata and collaborators on the hydrogen atom [25; 26]. The final results agree with the solution of the inhomogeneous Dirac equation in previous works, e.g. in Refs. [1; 27]. ## 2 First- and second-order Dirac equation The time-independent Dirac equation is customarily expressed, in natural units (\(c=1\), \(\hbar=1\)), as \[(E-\mathbf{\alpha}\cdot\hat{\mathbf{p}}-\beta m)\Psi=0\,. \tag{1}\] Here, \(E\) is the energy, \(m\) is the mass of a particle, and \(\hat{\mathbf{p}}\) is the operator of 3-momentum. The \(\mathbf{\alpha}=(\alpha_{1},\alpha_{2},\alpha_{3})\) and \(\beta\) are the usual \(4\times 4\) Dirac matrices, and \(\Psi\) is the bispinor wave function. The Green's function \(G(\mathbf{r}_{2},\mathbf{r}_{1};E)\) depends on the two positions \(\mathbf{r}_{1}\), \(\mathbf{r}_{2}\) and the energy, and it satisfies the inhomogeneous Dirac equation \[(m-\hat{M})G(\mathbf{r}_{2},\mathbf{r}_{1};E)=\delta(\mathbf{r}_{2}-\mathbf{r}_{1})\,, \tag{2}\] where \(\hat{M}=-\beta\mathbf{\alpha}\cdot\hat{\mathbf{p}}+\beta E\). This Green's function can be expressed as [28; 26] \[G(\mathbf{r}_{2},\mathbf{r}_{1};E)=(m+\hat{M})g(\mathbf{r}_{2},\mathbf{r}_{1};E)\,, \tag{3}\] where \(g(\mathbf{r}_{2},\mathbf{r}_{1};E)\) is the solution of the second-order - or iterated - inhomogeneous Dirac equation \[(m^{2}-\hat{M}^{2})g(\mathbf{r}_{2},\mathbf{r}_{1};E)=\delta(\mathbf{r}_{2}-\mathbf{r}_{1})\,. \tag{4}\] The second-order Dirac equation resembles the Schrödinger equation, and its solution has a simpler form than the usual first-order equation [24]. We use spherical coordinates to find explicit expressions for these Green's functions.
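For completeness, the consistency of Eqs. (2)-(4) can be checked in one line: inserting the ansatz (3) into (2) and using that the scalar mass \(m\) commutes with \(\hat{M}\) gives \[(m-\hat{M})G=(m-\hat{M})(m+\hat{M})\,g=(m^{2}-\hat{M}^{2})\,g=\delta(\mathbf{r}_{2}-\mathbf{r}_{1})\,,\] so that \(G\) defined by Eq. (3) indeed solves Eq. (2) whenever \(g\) solves Eq. (4).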
Generally, operators of interest can be rewritten using the Dirac operator \(\hat{K}=\beta(\hat{\mathbf{\Sigma}}\cdot\hat{\mathbf{L}}+1)\), the radial momentum operator \(\hat{p}_{r}=\frac{1}{r}(\mathbf{r}\cdot\hat{\mathbf{p}}-i)\), and \(\alpha_{r}=\frac{\mathbf{\alpha}\cdot\mathbf{r}}{r}\), which is the component of the Dirac matrix \(\mathbf{\alpha}\) in the direction \(\mathbf{r}\). Here, the operator \(\hat{\mathbf{\Sigma}}=1\otimes\mathbf{\sigma}\), with \(\mathbf{1}\) being the \(2\times 2\) unit matrix and \(\mathbf{\sigma}\) the vector of the Pauli matrices, defines the Dirac spin operator \(\hat{\mathbf{S}}_{\rm D}=\frac{1}{2}\hat{\mathbf{\Sigma}}\). As usual, \(\hat{\mathbf{L}}\) is the operator of orbital angular momentum, and the eigenvalue of \(\hat{\mathbf{L}}^{2}\) is \(l(l+1)\), with \(l\) being an integer. The operator \(\hat{K}\) commutes with the first-order Dirac Hamiltonian and with \(\alpha_{r}\). The Dirac operator is related to the total angular momentum operator [29]\(\hat{\mathbf{J}}\) via \(\hat{K}^{2}=\hat{\mathbf{J}}^{2}+\frac{1}{4}\). The operator \(\hat{\mathbf{J}}^{2}\) has the eigenvalues \(j(j+1)\), with \(j\) being half-integer, and thus the eigenvalues of \(\hat{K}^{2}\) are \((j+\frac{1}{2})^{2}\). The eigenvalues of \(\hat{K}\) are \[\kappa=\mp\left(j+\frac{1}{2}\right)\,, \tag{5}\] for \(j=l\pm\frac{1}{2}\). ## 3 Path integral form of the Green's function We seek the solution to Eq. (4) using the integral representation \[g(\mathbf{r}_{2},\mathbf{r}_{1};E)\equiv\left\langle\mathbf{r}_{2}\right|\hat{g}\left|\mathbf{r}_{1}\right\rangle=\frac{i}{2m}\int_{0}^{\infty}\left\langle\mathbf{r}_{2}\right|e^{-i\hat{H}u}\left|\mathbf{r}_{1}\right\rangle\,du\,. \tag{6}\] In the above equation, the integration is done with respect to the time-like parameter \(u\) and the integrand \(\left\langle\mathbf{r}_{2}\right|e^{-i\hat{H}u}\left|\mathbf{r}_{1}\right\rangle\) can be interpreted as a propagator that describes a system evolving with the parameter \(u\) from \(\mathbf{r}_{1}\) to \(\mathbf{r}_{2}\), and has an effective Hamiltonian, \(\hat{H}\). As stated by Feynman, propagators can be written in the form of path integrals [30]. Here, we express the integral in Eq. (6) in terms of a path integral. Finally, we evaluate the Green's function for the Dirac equation by using Eq. (3). The effective Hamiltonian can be written as \[\hat{H}=\frac{1}{2m}(m^{2}-\hat{M}^{2})\,. \tag{7}\] With the operators introduced above, following Biedenharn [31], \(\hat{M}\) can be written as \[\hat{M}=-\beta(\alpha_{r}\hat{p}_{r})+i\frac{\alpha_{r}\hat{K}}{r}+\beta E\,, \tag{8}\] and the effective Hamiltonian (7) can be cast in the form \[\hat{H}=\frac{1}{2m}\left(\hat{p}_{r}^{2}+\frac{\hat{K}(\hat{K}-\beta)}{r^{2}}-E^{2}+m^{2}\right)\,. \tag{9}\] In Eq. (9), the coefficient of the \(1/r^{2}\) term in the Hamiltonian is analogous to the orbital angular momentum term in the Hamiltonian of the Schrödinger equation. To establish this correspondence, we use a specific case of the Martin-Glauber operator [32] \[\hat{\mathscr{L}}=-\beta\hat{K}, \tag{10}\] for which the following holds: \(\hat{\mathscr{L}}^{2}=\hat{K}^{2}\). The eigenvalue of \(\hat{\mathscr{L}}\), \(\gamma\), is also given as \[\gamma=\mp\left(j+\frac{1}{2}\right)\,.
\tag{11}\] We define \(\lambda\) as a function of \(\gamma\), \(\lambda\equiv\left|\gamma\right|+\frac{1}{2}(\text{sign}\gamma-1)\), such that the eigenvalue equation [32] \[\hat{\mathscr{L}}\left(\hat{\mathscr{L}}+1\right)\left|\lambda\right\rangle=\lambda(\lambda+1)\left|\lambda\right\rangle \tag{12}\] is fulfilled. The angular wave functions \(\left\langle\theta\phi\middle|\lambda\right\rangle=\left\langle\theta\phi\middle|j,\mu,\kappa,\tilde{\beta}\right\rangle\) are the simultaneous eigenstates of the operators \(\hat{\mathscr{L}}(\hat{\mathscr{L}}+1)\), \(\hat{\mathbf{J}}^{2}\), \(\hat{J}_{z}\), \(\hat{K}\) and the \(\beta\) matrix, with the eigenvalues \(\lambda(\lambda+1)\), \(j(j+1)\), \(\mu\), \(\kappa\) and \(\tilde{\beta}=\pm 1\), respectively. They can be written in the explicit bispinor form as [24; 26] \[\left\langle\theta\phi\middle|j,\mu,\kappa,\tilde{\beta}=-1\right\rangle=\begin{pmatrix}0\\ \chi_{\kappa}^{\mu}\end{pmatrix},\] \[\left\langle\theta\phi\middle|j,\mu,\kappa,\tilde{\beta}=1\right\rangle=\begin{pmatrix}\chi_{-\kappa}^{\mu}\\ 0\end{pmatrix}\,, \tag{13}\] where the \(\chi_{\kappa}^{\mu}\) are the well-known spherical spinors [29] \[\chi_{\kappa}^{\mu}=\sum_{\mu^{\prime}=\pm\frac{1}{2}}C\left(l,\frac{1}{2}\,,j;\mu-\mu^{\prime}\,,\mu^{\prime}\,,\mu\right)Y_{l}^{\mu-\mu^{\prime}}(\theta,\phi)\chi_{\frac{1}{2}}^{\mu^{\prime}}\,, \tag{14}\] expressed in terms of the Clebsch-Gordan coefficients \(C(\dots)\), the spherical harmonics \(Y_{l}^{\mu-\mu^{\prime}}\), and the unit spinors \[\chi_{\frac{1}{2}}^{\frac{1}{2}}=\begin{pmatrix}1\\ 0\end{pmatrix}\,,\quad\chi_{\frac{1}{2}}^{-\frac{1}{2}}=\begin{pmatrix}0\\ 1\end{pmatrix}\,. \tag{15}\] The Hamiltonian of Eq. (9) is now represented in this basis as \[\hat{H}_{\lambda}=\frac{\hat{p}_{r}^{2}}{2m}+\frac{\lambda(\lambda+1)}{2mr^{2}}-\frac{k^{2}}{2m}\,, \tag{16}\] where \(k^{2}=E^{2}-m^{2}\). Having defined the radial Hamiltonian, the Green's function for the second-order Dirac equation can be expressed as a partial wave expansion \[\left\langle\mathbf{r}_{2}\right|\hat{g}\left|\mathbf{r}_{1}\right\rangle=\sum_{\lambda}\left\langle\theta_{2}\phi_{2}\middle|\lambda\right\rangle\left\langle r_{2}\right|\hat{g}_{\lambda}\left|r_{1}\right\rangle\left\langle\lambda|\theta_{1}\phi_{1}\right\rangle\,. \tag{17}\] Here, the radial Green's function is \[\left\langle r_{2}\right|\hat{g}_{\lambda}\left|r_{1}\right\rangle=\frac{i}{2m}\int\,\left\langle r_{2}\right|e^{-i\hat{H}_{\lambda}u}\left|r_{1}\right\rangle du\,, \tag{18}\] where the operator \(\hat{H}_{\lambda}\) is defined in Eq. (16). The corresponding propagator describes an evolution in \(u\) generated by this radial Hamiltonian, and is very similar in form to the radial propagator of the non-relativistic hydrogen atom, as introduced by Inomata [25]. The Green's function given in Eq. (17) can now be reduced using Eqs. (16)-(18), yielding \[\left\langle\mathbf{r}_{2}\right|\hat{g}\left|\mathbf{r}_{1}\right\rangle=\sum_{j,\kappa}\left\langle r_{2}\right|\hat{g}_{\lambda}\left|r_{1}\right\rangle\Omega_{\kappa,\kappa^{\prime}}^{j}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\,, \tag{19}\] where \[\Omega_{\kappa,\kappa^{\prime}}^{j}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})=\sum_{\mu}\chi_{\kappa}^{\mu}(\theta_{2},\phi_{2})\chi_{\kappa}^{\mu\dagger}(\theta_{1},\phi_{1})\,. \tag{20}\] In order to proceed with the construction of the path integral in spherical coordinates, we establish the partial action term for a very short time interval \(t_{j}=u_{j}-u_{j-1}\).
This partial action can be approximated as \(S(\mathbf{r}_{j},\mathbf{r}_{j-1})\approx t_{j}L(\Delta\mathbf{r}_{j}/t_{j},\mathbf{r}_{j})\), where \(\Delta\mathbf{r}_{j}=\mathbf{r}_{j}-\mathbf{r}_{j-1}\). The total path is divided into \(N\) intervals by local time rescaling such that \(r_{0}=r_{1}\), \(r_{N}=r_{2}\), and \(u=\sum t_{j}\). The radial action term for the Hamiltonian in Eq. (16) is given as \[S(t_{j})=\frac{m(\Delta r_{j})^{2}}{2t_{j}}-\frac{\lambda(\lambda+1)t_{j}}{2mr_{j}r_{j-1}}+\frac{(E^{2}-m^{2})t_{j}}{2m}\,. \tag{21}\] Since angular motion has rotational symmetry, we concern ourselves with only the radial motion associated with this action. As such, the corresponding radial function is \[R_{\lambda}(r_{j},r_{j-1})=\frac{it_{j}}{2mr_{j}r_{j-1}}\] \[\times\exp\left\{\frac{im(\Delta r_{j})^{2}}{2t_{j}}-\frac{it_{j}\lambda(\lambda+1)}{2mr_{j}r_{j-1}}+\frac{ik^{2}t_{j}}{2m}\right\}\,. \tag{22}\] Summing over the Feynman histories, the radial propagator for the radial function in Eq. (22) is obtained as \[K_{\lambda}(r_{2},r_{1};u)= \tag{23}\] \[\lim_{N\to\infty}\int\prod_{j=1}^{N}\{R_{\lambda}(r_{j},r_{j-1})\}\prod_{j=1}^{N}\left[\frac{m}{2\pi it_{j}}\right]^{\frac{3}{2}}\prod_{j=1}^{N-1}(r_{j}^{2}\,dr_{j})\,.\] This expression for the radial propagator can be represented as [25; 26] \[K_{\lambda}(r_{2},r_{1};u)=\langle r_{2}|\,e^{-i\hat{H}_{\lambda}u}\,|r_{1}\rangle=(r_{1}r_{2})^{-1} \tag{24}\] \[\times\lim_{N\to\infty}\int\,\exp\left[i\sum_{j=1}^{N}S(t_{j})\right]\prod_{j=1}^{N}\left[\frac{m}{2\pi it_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}dr_{j}\,.\] In order to solve the path integral in Eq. (23), we simplify the radial function in Eq. (22). Taking into consideration all contributions up to first order in \(t_{j}\), for small \(t_{j}\) we can apply the approximation formula \[I_{\nu}\left(\frac{n}{t_{j}}\right)= \tag{25}\] \[\left(\frac{2\pi n}{t_{j}}\right)^{-\frac{1}{2}}\exp\left\{\frac{n}{t_{j}}-\frac{1}{2}\left[\left(\nu^{2}-\frac{1}{4}\right)\frac{t_{j}}{n}+\mathscr{O}(t_{j}^{2})\right]\right\}\,,\] which reduces the radial function to the form \[R_{\lambda}(r_{j},r_{j-1})=\left(\frac{i\pi t_{j}}{2mr_{j}r_{j-1}}\right)^{\frac{1}{2}} \tag{26}\] \[\times\exp\left[\frac{im(r_{j}^{2}+r_{j-1}^{2})}{2t_{j}}+\frac{ik^{2}t_{j}}{2m}\right]I_{\lambda+\frac{1}{2}}\left(\frac{mr_{j}r_{j-1}}{it_{j}}\right)\,.\] Substituting this expression in Eq. (23), we obtain \[K_{\lambda}(r_{2},r_{1};u)=(r_{1}r_{2})^{-\frac{1}{2}}\left(\frac{-im}{u}\right) \tag{27}\] \[\times\exp\left\{\frac{ik^{2}u}{2m}\right\}\exp\left\{\frac{1}{2}\frac{im(r_{1}^{2}+r_{2}^{2})}{u}\right\}I_{\lambda+\frac{1}{2}}\left(\frac{mr_{1}r_{2}}{iu}\right)\,.\] Eq. (27) gives the radial propagator for the radial Hamiltonian and, when substituted in Eq. (18), yields the radial Green's function in the form \[\langle r_{2}|\,g_{\lambda}\,|r_{1}\rangle=(r_{1}r_{2})^{-\frac{1}{2}}\int\left(\frac{-im}{u}\right) \tag{28}\] \[\times\exp\left\{\frac{ik^{2}u}{2m}\right\}\exp\left\{\frac{1}{2}\frac{im(r_{1}^{2}+r_{2}^{2})}{u}\right\}I_{\lambda+\frac{1}{2}}\left(\frac{mr_{1}r_{2}}{iu}\right)\,du\,.\] However, the integration on the right hand side of Eq. (28) does not have a known closed-form solution. Thus, to enable the calculation process, and to bring the integral in Eq. (28) to a reducible form, following [25], we modify the radial action in Eq.
(21), such that it represents the action of a three-dimensional isotropic harmonic oscillator, by replacing the radial variable \(r_{j}\) by \(\rho_{j}=\sqrt{r_{j}}\) and the local time-slicing parameter \(t_{j}\) by \(\sigma_{j}=t_{j}/4\bar{r}_{j}\), where the geometric mean is \(\bar{r}_{j}=\sqrt{r_{j}r_{j-1}}=\rho_{j}\rho_{j-1}=\bar{\rho}_{j}^{2}\). For small values of \(t_{j}\), the geometric mean, \(\bar{r}_{j}\), gives the mid-point value which is well-defined for a classical path. The action term can now be represented as \[S(\sigma_{j})=\frac{m(\Delta\rho_{j})^{2}}{2\sigma_{j}}+\frac{m(\Delta\rho_{j})^{4}}{8\sigma_{j}\bar{\rho}_{j}^{2}}-\frac{2\lambda(\lambda+1)\sigma_{j}}{m\bar{\rho}_{j}^{2}} \tag{29}\] \[-\frac{1}{2}m\omega^{2}\bar{\rho}_{j}^{2}\sigma_{j}\,,\] with \(\omega=\frac{2ik}{m}\). The measure of the integrand in Eq. (24) also changes according to the transformed variables; it is expressed as \[\prod_{j=1}^{N}\left[\frac{m}{2\pi it_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}dr_{j}=\frac{1}{\sqrt{4\rho_{1}\rho_{2}}}\prod_{j=1}^{N}\left[\frac{m}{2\pi i\sigma_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}d\rho_{j}\,.\] Despite these modifications, we encounter a different problem that makes the integration of Eq. (24) difficult; the second term in the modified radial action expression contains \(\rho_{j}\) raised to its fourth power, which prevents a direct evaluation of the integral. Rewriting Eq. (23) after making the necessary substitutions, we obtain \[\langle r_{2}|\,e^{-i\hat{H}_{\lambda}u}\,|r_{1}\rangle= \tag{30}\] \[(\rho_{1}\rho_{2})^{-2}\lim_{N\to\infty}\int\exp\biggl{[}i\sum_{j=1}^{N}\frac{m(\Delta\rho_{j})^{2}}{2\sigma_{j}}+\frac{m(\Delta\rho_{j})^{4}}{8\sigma_{j}(\bar{\rho}_{j})^{2}}\] \[-\frac{2\lambda(\lambda+1)\sigma_{j}}{m(\bar{\rho}_{j})^{2}}-\frac{1}{2}m(\omega)^{2}(\bar{\rho}_{j})^{2}\sigma_{j}\biggr{]}\] \[\times(4\rho_{1}\rho_{2})^{-\frac{1}{2}}\prod_{j=1}^{N}\left[\frac{m}{2\pi i\sigma_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}d\rho_{j}\,.\] In order to overcome the problem posed by \(\rho_{j}^{4}\), we use the integral formula [25] that is valid for large \(A\) and integer \(n\): \[\int x^{2n}\exp\bigl{[}-Ax^{2}+Bx^{4}+\mathscr{O}(x^{6})\bigr{]}dx= \tag{31}\] \[\int x^{2n}\exp\biggl{[}-Ax^{2}+\frac{3}{4}BA^{-2}+\mathscr{O}(A^{-3})\biggr{]}dx\,.\] This allows the fourth-order term to be represented by a replacement term given by \(-3\sigma_{j}/(8m\bar{\rho}_{j}^{2})\), yielding \[\langle r_{2}|\,e^{-i\hat{H}_{\lambda}u}\,|r_{1}\rangle= \tag{32}\] \[(\rho_{1}\rho_{2})^{-2}\lim_{N\to\infty}\int\exp\biggl{[}i\sum_{j=1}^{N}\frac{m(\Delta\rho_{j})^{2}}{2\sigma_{j}}+\frac{3\sigma_{j}}{8m(\bar{\rho}_{j})^{2}}\] \[-\frac{2\lambda(\lambda+1)\sigma_{j}}{m(\bar{\rho}_{j})^{2}}-\frac{1}{2}m(\omega)^{2}(\bar{\rho}_{j})^{2}\sigma_{j}\biggr{]}\] \[\times(4\rho_{1}\rho_{2})^{-\frac{1}{2}}\prod_{j=1}^{N}\left[\frac{m}{2\pi i\sigma_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}d\rho_{j}\,.\] We express this equation in a compact form as \[\langle r_{2}|\,e^{-i\hat{H}_{\lambda}u}\,|r_{1}\rangle=\frac{1}{2}(\rho_{1}\rho_{2})^{-\frac{3}{2}}\tilde{K}_{\lambda}(\rho_{2},\rho_{1};\sigma)\,, \tag{33}\] where the propagator \(\tilde{K}\) of the \(\sigma\) evolution is defined as \[\tilde{K}_{\lambda}(\rho_{2},\rho_{1};\sigma)=(\rho_{1}\rho_{2})^{-1} \tag{34}\] \[\times\lim_{N\rightarrow\infty}\int\exp\left[i\sum_{j=1}^{N}\tilde{S}(\sigma_{j})\right]\prod_{j=1}^{N}\left[\frac{m}{2\pi i\sigma_{j}}\right]^{\frac{1}{2}}\prod_{j=1}^{N-1}d\rho_{j}\,,\] and the modified action term is now given as
\[\tilde{S}(\sigma_{j})=\frac{m(\Delta\rho_{j})^{2}}{2\sigma_{j}}-\frac{\lambda^{{}^{\prime}}(\lambda^{{}^{\prime}}+1)\sigma_{j}}{2m(\bar{\rho}_{j})^{2}}-\frac{1}{2}m\omega^{2}(\bar{\rho}_{j})^{2}\sigma_{j}\,, \tag{35}\] where \(\lambda^{{}^{\prime}}=2\lambda+\frac{1}{2}\). This effective action term is analogous to the radial action term of a three-dimensional harmonic oscillator. Thus the propagator in Eq. (18), evolving with \(u\), has also been reduced to the propagator of a harmonic oscillator, and can be evaluated by following the procedure introduced by Inomata and Peak [33]. In this way we obtain the radial propagator, in terms of the modified Bessel function \(I_{\nu}(x)\) with \(\nu=\lambda^{{}^{\prime}}+\frac{1}{2}\), as \[\tilde{K}_{\lambda}(\rho_{2},\rho_{1};\sigma)=-i(\rho_{1}\rho_{2})^{-\frac{1}{2}}(m\omega)\csc(\omega\sigma) \tag{36}\] \[\exp\left[\frac{1}{2}im\omega\left(\rho_{1}^{2}+\rho_{2}^{2}\right)\cot(\omega\sigma)\right]I_{\lambda^{{}^{\prime}}+\frac{1}{2}}\left(\frac{m}{i}\omega\rho_{1}\rho_{2}\csc(\omega\sigma)\right)\,.\] It is to be noted that in the limit \(\omega\to 0\), the propagator in Eq. (36) reduces to that defined in Eq. (27), which is the propagator for a free particle in three dimensions. However, since we are concerned with determining the energy-dependent Green's function in a closed form, we proceed with a finite-valued \(\omega\). Substituting the radial propagator from the above equation into Eq. (33) yields \[\left\langle r_{2}\right|e^{-i\hat{H}_{\lambda}u}\left|r_{1}\right\rangle=\frac{1}{2}(r_{1}r_{2})^{-1}(2k) \tag{37}\] \[\times\csc\left(\frac{ikt}{m(r_{1}r_{2})^{\frac{1}{2}}}\right)\exp\left[-k(r_{1}+r_{2})\cot\left(\frac{ikt}{m(r_{1}r_{2})^{\frac{1}{2}}}\right)\right]\] \[\times I_{2\lambda+1}\left(2k(r_{1}r_{2})^{\frac{1}{2}}\csc\left(\frac{ikt}{m(r_{1}r_{2})^{\frac{1}{2}}}\right)\right)\,.\] Using this result for the integrand in Eq. (18) we obtain \[\left\langle r_{2}\right|\hat{g}_{\lambda}\left|r_{1}\right\rangle=(r_{1}r_{2})^{-\frac{1}{2}}\int\exp[ik(r_{1}+r_{2})\coth q] \tag{38}\] \[\times I_{2\lambda+1}\left(-2ik(r_{1}r_{2})^{\frac{1}{2}}\,\mathrm{csch}\,q\right)\mathrm{csch}\,q\,dq\,,\] where \[q=\frac{kt}{(4r_{1}r_{2}m^{2})^{\frac{1}{2}}}\,.\] The type of integral in Eq. (38) has a closed-form solution [34]: \[\int\exp[ik(r_{1}+r_{2})\coth q] \tag{39}\] \[\times I_{2\lambda+1}\left(-2ik(r_{1}r_{2})^{\frac{1}{2}}\,\mathrm{csch}\,q\right)\mathrm{csch}\,q\,dq=\] \[\frac{\Gamma(\lambda+1)}{2ik(r_{1}r_{2})^{\frac{1}{2}}\Gamma(2\lambda+2)}\] \[\times M_{0,\lambda+\frac{1}{2}}(-2ikr_{2})W_{0,\lambda+\frac{1}{2}}(-2ikr_{1})\,,\] where \(\Gamma\) is the gamma function, and \(M\) and \(W\) are the Whittaker functions. They can be expressed in terms of the modified Bessel functions [35]: \[M_{0,\nu}(2z) =2^{2\nu+\frac{1}{2}}\Gamma(1+\nu)\sqrt{z}I_{\nu}(z)\,,\] \[W_{0,\nu}(2z) =\sqrt{\frac{2z}{\pi}}K_{\nu}(z)\,.\]
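These two relations can be checked numerically, e.g. with the arbitrary-precision library mpmath; the following sketch (not part of the paper, with arbitrary test values for the order and the argument) confirms them to working precision.

```python
from mpmath import mp, whitm, whitw, besseli, besselk, gamma, sqrt, pi

mp.dps = 30  # working precision (decimal digits)

nu, z = mp.mpf("0.75"), mp.mpf("1.3")  # arbitrary test values

# M_{0,nu}(2z) = 2^(2 nu + 1/2) Gamma(1 + nu) sqrt(z) I_nu(z)
lhs_M = whitm(0, nu, 2 * z)
rhs_M = mp.mpf(2) ** (2 * nu + mp.mpf("0.5")) * gamma(1 + nu) * sqrt(z) * besseli(nu, z)

# W_{0,nu}(2z) = sqrt(2 z / pi) K_nu(z)
lhs_W = whitw(0, nu, 2 * z)
rhs_W = sqrt(2 * z / pi) * besselk(nu, z)

print(lhs_M - rhs_M)  # ~ 0 up to working precision
print(lhs_W - rhs_W)  # ~ 0 up to working precision
```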
Substituting these into Eq. (39), we obtain \[\left\langle r_{2}\right|\hat{g}_{\lambda}\left|r_{1}\right\rangle=\frac{\Gamma(\lambda+1)}{2ikr_{1}r_{2}\Gamma(2\lambda+2)}2^{2\lambda+\frac{3}{2}} \tag{40}\] \[\times\Gamma\left(\lambda+\frac{3}{2}\right)\sqrt{-ikr_{2}}I_{\lambda+\frac{1}{2}}(-ikr_{2})\] \[\times\sqrt{\frac{-2ikr_{1}}{\pi}}K_{\lambda+\frac{1}{2}}(-ikr_{1})\,.\] Using this expression for the radial part in the total Green's function of the iterated Dirac equation yields \[\left\langle\mathbf{r}_{2}\right|\hat{g}\left|\mathbf{r}_{1}\right\rangle=\sum_{j,\kappa}\frac{\Gamma(\lambda+1)}{2ikr_{1}r_{2}\Gamma(2\lambda+2)}2^{2\lambda+\frac{3}{2}} \tag{41}\] \[\times\Gamma\left(\lambda+\frac{3}{2}\right)\sqrt{-ikr_{2}}I_{\lambda+\frac{1}{2}}(-ikr_{2})\] \[\times\sqrt{\frac{-2ikr_{1}}{\pi}}K_{\lambda+\frac{1}{2}}(-ikr_{1})\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\,.\] The modified Bessel functions of the first and second kind can be expressed in terms of spherical functions [35], and so we obtain \[\left\langle\mathbf{r}_{2}\right|\hat{g}\left|\mathbf{r}_{1}\right\rangle=\sum_{j,\kappa}\frac{\Gamma(\lambda+1)}{2ikr_{1}r_{2}\Gamma(2\lambda+2)}2^{2\lambda+\frac{3}{2}} \tag{42}\] \[\times\Gamma\left(\lambda+\frac{3}{2}\right)\sqrt{-ikr_{2}}(i)^{-\left(\lambda+\frac{1}{2}\right)}\sqrt{\frac{2ikr_{2}}{\pi}}j_{\lambda}(ikr_{2})\] \[\times\sqrt{\frac{-2ikr_{1}}{\pi}}\frac{\pi}{2}(i)^{\lambda+\frac{3}{2}}\sqrt{\frac{2ikr_{1}}{\pi}}h_{\lambda}(ikr_{1})\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\,,\] where \(j_{\lambda}\) and \(h_{\lambda}\) are the spherical Bessel and Hankel functions, respectively. The operator \(\hat{M}\), when acting on a state with a given \(\kappa\), takes the form \[i\beta\alpha_{r}\left[\frac{\partial}{\partial r}+\frac{1-\gamma\beta}{r}\right]+\frac{\kappa E}{\gamma}\beta\,. \tag{43}\] We can turn to calculating the free Dirac Green's function from Eq. (3): \[\left\langle\mathbf{r}_{2}\right|\hat{G}\left|\mathbf{r}_{1}\right\rangle=\sum_{j,\kappa}\frac{\Gamma(\lambda+1)}{2ikr_{1}r_{2}\Gamma(2\lambda+2)}2^{2\lambda+\frac{3}{2}}\] \[\times\Gamma\left(\lambda+\frac{3}{2}\right)\sqrt{-ikr_{2}}(i)^{-\left(\lambda+\frac{1}{2}\right)}\sqrt{\frac{2ikr_{2}}{\pi}}\] \[\times\sqrt{\frac{-2ikr_{1}}{\pi}}\frac{\pi}{2}(i)^{\lambda+\frac{3}{2}}\sqrt{\frac{2ikr_{1}}{\pi}}h_{\lambda}(ikr_{1})\] \[\times\left(m+i\beta\alpha_{r}\left[\frac{\partial}{\partial r}+\frac{1-\gamma\beta}{r}\right]+\frac{\kappa E}{\gamma}\beta\right)j_{\lambda}(ikr_{2})\] \[\times\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\,.
\tag{44}\] Using the relation for the derivative of spherical Bessel functions [35], one obtains \[\left\langle\mathbf{r}_{2}\right|\hat{G}\left|\mathbf{r}_{1}\right\rangle=\sum_{j,\kappa}k(2\lambda+2)ih_{\lambda}(ikr_{1}) \tag{45}\] \[\times\biggl{\{}\left(m+\frac{\kappa E}{\gamma}\beta\right)j_{\lambda}(ikr_{2})\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\] \[+i\left[k\left\{\frac{\lambda}{ikr_{2}}j_{\lambda}(ikr_{2})-j_{\lambda+1}(ikr_{2})\right\}+\frac{(1\pm\gamma)}{r_{2}}j_{\lambda}(ikr_{2})\right]\] \[\times\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\alpha_{r}\biggr{\}}\,.\] On account of the relations \[\alpha_{r} =i\sigma_{r}\beta\gamma^{1}\gamma^{2}\gamma^{3}\,,\] \[\gamma^{i} =\beta\alpha_{i}\,,\quad\text{with }i\in\{1,2,3\}\,,\] \[\sigma_{r}\Omega^{j}_{\kappa,\kappa^{\prime}} =-\Omega^{j}_{\kappa,-\kappa^{\prime}}\,,\] the Green's function of the first-order free Dirac equation can be finally written as \[\left\langle\mathbf{r}_{2}\right|\hat{G}\left|\mathbf{r}_{1}\right\rangle =\sum_{j,\kappa}ik(2\lambda+2)h_{\lambda}(ikr_{1})\] \[\times\biggl{\{}\left(m+\frac{\kappa E}{\gamma}\beta\right)j_{\lambda}(ikr_{2})\Omega^{j}_{\kappa,\kappa^{\prime}}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2}\] \[-\tilde{\beta}\left[k\left(\frac{\lambda}{ikr_{2}}j_{\lambda}(ikr_{2})-j_{\lambda+1}(ikr_{2})\right)+\frac{(1\pm\gamma)}{r_{2}}j_{\lambda}(ikr_{2})\right]\] \[\times\Omega^{j}_{\kappa,-\kappa}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\alpha_{1}\alpha_{2}\alpha_{3}\biggr{\}}\,. \tag{46}\] This formula is equivalent to the free Green's function derived by different methods [1; 7; 27]. ## 4 Summary The free Dirac Green's function has been derived from first principles within the path integral formalism. Spherical coordinates were used in order to arrive at a form applicable in atomic physics calculations. The Green's function has been transformed in Biedenharn's basis [24] into a radial path integral, with an effective action resembling the action of the Schrödinger equation. The radial path integral has been converted through coordinate transformation and with a time rescaling to that of a classical isotropic harmonic oscillator, which reduces the problem to an exactly solvable form. The final result is expressed with spherical Bessel functions and spherical spinors in Eq. (46). ## Acknowledgements This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 273811115 - SFB 1225 (ISOQUANT). ## Author contributions S. B. and Z. H. conceived the model, interpreted the results and wrote the paper. S. B. performed most of the calculations, in consultation with Z. H. All authors gave final approval for publication. ## Data Availability Statement This manuscript has no associated data or the data will not be deposited.
2310.20321
Numerical realization of the Mortensen observer via a Hessian-augmented polynomial approximation of the value function
Two related numerical schemes for the realization of the Mortensen observer or minimum energy estimator for the state reconstruction of non-linear dynamical systems subject to deterministic disturbances are proposed and compared. Both approaches rely on a polynomial approximation of the value function associated with the energy of the disturbances of the system. Such an approximation is obtained via interpolation considering not only the values but also first and second order derivatives of the value function in a set of sampling points. The scheme is applied to four examples and the results are compared with the well known extended Kalman filter.
Tobias Breiten, Karl Kunisch, Jesper Schröder
2023-10-31T09:56:08Z
http://arxiv.org/abs/2310.20321v2
Numerical realization of the Mortensen observer via a Hessian-augmented polynomial approximation of the value function ###### Abstract. Two related numerical schemes for the realization of the _Mortensen observer_ or _minimum energy estimator_ for the state reconstruction of non-linear dynamical systems subject to deterministic disturbances are proposed and compared. Both approaches rely on a polynomial approximation of the value function associated with the energy of the disturbances of the system. Such an approximation is obtained via interpolation considering not only the values but also first and second order derivatives of the value function in a set of sampling points. The scheme is applied to four examples and the results are compared with the well known _extended Kalman filter_. Keywords: non-linear observer design, minimum energy estimation, Hamilton-Jacobi-Bellman equation, value function approximation AMS subject classification: 93B53, 49M05, 49L12 ## 1. Introduction We consider a non-linear dynamical system subject to linear disturbances of the form \[\dot{x}(t) =f(x(t))+Fv(t),\ t\in(0,T],\] \[x(0) =x_{0}+\eta,\] where \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\), \(F\in\mathbb{R}^{n,m}\), \(x_{0},\eta\in\mathbb{R}^{n}\). Here, it is assumed that \(v\) and \(\eta\) are unknown perturbations resulting from modeling and measurement errors or incomplete system knowledge. Our interest is to use a priori information about the dynamics \(f,F,x_{0}\) together with (perturbed) output data \(y\) on \([0,T]\) to construct a state estimate \(\widehat{x}\) such that \(\widehat{x}(t)\approx x(t)\) for all \(t\in[0,T]\). For this purpose, we assume linear measurements subject to linear disturbances \[y(t)=Cx(t)+\mu(t),\] where \(C\in\mathbb{R}^{r,n}\) is known and \(\mu\in L^{2}(0,T;\mathbb{R}^{r})\) is unknown. The problem of reconstructing and predicting signals from partially known data has a far-reaching history dating back at least to the seminal works by Wiener [36], Kalman [19, 20] and Stratonovich [35]. While the viewpoint in these articles relies on stochastic disturbances and therefore focuses on filtering theory, in this manuscript, we consider a deterministic, control theoretic perspective and aim at approximations \(\widehat{x}\) characterized by an observer of the form \[\dot{\widehat{x}}(t) =f(\widehat{x}(t))+L(t,\widehat{x}(t))(y(t)-C\widehat{x}(t)),\ t\in(0,T],\] \[\widehat{x}(0) =x_{0},\] where \(L\) denotes the observer gain.
The main contributions of this work can be summarized as follows:
* By combining quasi-random Halton sampling around the trajectory of an extended Kalman filter, we propose a numerically efficient data generation procedure. This strategy is theoretically motivated by the fact that the extended Kalman filter can be understood as a first-order approximation to the Mortensen observer.
* Resorting to its original formulation in [30], in addition to the Luenberger-type formulation of the Mortensen observer, we also address an alternative formulation which is based on the pointwise (in time) minimization of the value function over the state space and numerically investigate its performance when compared to the first formulation. In a similar spirit, we also compare the extended Kalman filter to our approximations of the Mortensen observer with particular emphasis on systems whose value functions exhibit a spatially strongly non-quadratic behavior.
* With regard to its applicability in the context of larger state space dimensions, we utilize a hyperbolic cross approximation which allows us to compute HJB-based observers up to dimensions \(n=40\).

**Notation.** If not mentioned otherwise, \(\|\cdot\|\) will denote the Euclidean norm on \(\mathbb{R}^{d}\), where the dimension \(d\) varies. The associated scalar product is denoted by \(\langle\cdot,\cdot\rangle\). Further we denote by \(I_{d}\) the identity matrix of dimension \(d\). The space of matrices with \(d_{1}\) rows and \(d_{2}\) columns and real entries will be denoted by \(\mathbb{R}^{d_{1},d_{2}}\). If not mentioned otherwise, it is equipped with the Frobenius norm. For \(1\leq p\leq\infty\) we denote by \(L^{p}(0,T;\mathbb{R}^{d})\) the Lebesgue spaces, while \(H^{1}(0,T;\mathbb{R}^{d})\) denotes the Sobolev space of functions with a first weak derivative in \(L^{2}(0,T;\mathbb{R}^{d})\). All mentioned function spaces are equipped with their respective standard norms. For two Banach spaces \(X\) and \(Y\) we denote by \(L(X,Y)\) the space of linear and bounded mappings from \(X\) to \(Y\). For a function \(f\colon X\to Y\) its Fréchet derivative is denoted by \(Df\). In case \(X\) and \(Y\) are finite dimensional, we identify \(Df\) with the Jacobian matrix. ## 2. The Mortensen observer In this section we briefly recall the motivation and definition of the well-known _Mortensen observer_ and the essential concepts needed for its construction. A more thorough discussion can be found, for example, in [6]; see also the original work [30]. The strategy of the approach at hand is to minimize the energy of the disturbances in the system. The mathematical derivation is based on a specific optimal control problem. For fixed \(t\in(0,T]\), \(\xi\in\mathbb{R}^{n}\), and \(y\in L^{2}(0,T;\mathbb{R}^{r})\) it reads \[\begin{split}\inf_{\begin{subarray}{c}x\in H^{1}(0,t;\mathbb{R}^{n})\\ v\in L^{2}(0,t;\mathbb{R}^{m})\end{subarray}}J(x,v;t)&\coloneqq\frac{1}{2}\|x(0)-x_{0}\|^{2}+\frac{1}{2}\int_{0}^{t}\|v(s)\|^{2}+\alpha\|y(s)-Cx(s)\|^{2}\,\mathrm{d}s,\\ \text{s.t.
}e(x,v;t,\xi)&\coloneqq(\dot{x}-f(x)-Fv,x(t)- \xi)=0.\end{split} \tag{2.1}\] For \(\xi\in\mathbb{R}^{n}\) the corresponding minimal value function is defined by \[\mathcal{V}(t,\xi)\coloneqq\inf_{\begin{subarray}{c}(x,v)\text{ s.t.}\\ e(x,v;t,\xi)=0\end{subarray}}J(x,v;t),\ t\in(0,T],\qquad\quad\mathcal{V}(0, \xi)\coloneqq\frac{1}{2}\|\xi-x_{0}\|^{2}. \tag{2.2}\] From the point of view of optimal control theory problem (2.1) differs from the classic literature with respect to the boundary condition which is given at the right boundary of the time interval instead of the left one. Indeed the system evolves backwards in time, which has a crucial effect on the numerical treatment. Systems that are stable when considered forward in time might turn unstable when considered as evolving backwards in time. This issue will be discussed in more detail in our examples. We recall that \(\mathcal{V}\) can be characterized as the solution of the _Hamilton-Jacobi-Bellman equation_ associated with (2.1). Since the state equation evolves backwards in time, in our case the HJB equation is given as a forward equation and reads \[\begin{split}\partial_{t}\mathcal{V}(t,\xi)&=- \nabla_{\xi}\mathcal{V}(t,\xi)^{\top}f(\xi)-\frac{1}{2}\|F^{\top}\nabla_{\xi} \mathcal{V}(t,\xi)\|^{2}+\frac{\alpha}{2}\|y(t)-C\xi\|^{2},\\ \mathcal{V}(0,\xi)&=\frac{1}{2}\|\xi-x_{0}\|^{2}. \end{split} \tag{2.3}\] Depending on the regularity of the underlying control problem one might have to consider _viscosity solutions_ instead of classical ones, c.f. [14]. In [8] conditions were given which ensure that \(\mathcal{V}\) is in fact space-time \(C^{1}\)-regular in a neighborhood of the model. The Mortensen observer is defined via a pointwise minimization of the value function, i.e., \[\widehat{x}_{\mathrm{M}}(t)\coloneqq\operatorname*{arg\,min}_{\xi\in\mathbb{R }^{n}}\mathcal{V}(t,\xi). \tag{2.4}\] Under appropriate assumptions it can be characterized as a solution of the observer equation \[\begin{split}\dot{\widehat{x}}_{\mathrm{M}}(t)&=f( \widehat{x}_{\mathrm{M}}(t))+\alpha\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x }_{\mathrm{M}}(t))^{-1}C^{\top}(y(t)-C\widehat{x}_{\mathrm{M}}(t)),\ t\in(0,T],\\ \widehat{x}_{\mathrm{M}}(0)&=x_{0},\end{split} \tag{2.5}\] c.f. [8]. In the derivation and motivation of our numerical strategies we assume that the underlying system and functions fulfill all necessary assumptions needed to ensure well-definedness of (2.4) and well-posedness of (2.5). This specifically includes sufficient smoothness of the value function. A theoretical result on these issues for systems with a quadratic right-hand side can be found in [8], where existence of a solution \(\widehat{x}_{\mathrm{M}}\in H^{1}(0,T;\mathbb{R}^{n})\) was established. **Relation to the Kalman filter.** In the case of linear dynamics, i.e., \(f(\xi)=A\xi\) for some \(A\in\mathbb{R}^{n,n}\), it can be shown that the value function is quadratic in \(\xi\) and explicitly given by \[\mathcal{V}(t,\xi)=\frac{1}{2}(\xi-\widehat{x}(t))^{\top}\Sigma(t)^{-1}(\xi- \widehat{x}(t))+\frac{\alpha}{2}\int_{0}^{t}\|y(s)-C\widehat{x}(s)\|^{2}\, \mathrm{d}s, \tag{2.6}\] where \[\begin{split}\dot{\Sigma}(t)&=A\,\Sigma(t)+\Sigma (t)\,A^{\top}-\alpha\Sigma(t)\,C^{\top}C\,\Sigma(t)+FF^{\top},\ \Sigma(0)=I_{n},\\ \dot{\widehat{x}}(t)&=A\widehat{x}(t)+\alpha\Sigma(t )\,C^{\top}(y(t)-C\widehat{x}(t)),\ \widehat{x}(0)=x_{0},\end{split} \tag{2.7}\] c.f. [6]. 
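As an illustration of how (2.7) can be realized numerically, the following minimal sketch (not from the paper; the system matrices, the measurement signal and all numerical values are placeholder assumptions) integrates the coupled Riccati and observer equations with `scipy.integrate.solve_ivp` for a small linear example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder linear model dx/dt = A x + F v, y = C x + mu (n = 2, r = m = 1)
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
F = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
alpha, x0, T = 1.0, np.array([0.5, 0.0]), 10.0

def y_meas(t):
    # Placeholder measurement signal; in practice this is the recorded output y(t).
    return np.array([np.sin(t)])

def rhs(t, z, n=2):
    # z stacks the flattened Riccati variable Sigma (n*n entries) and the observer state.
    Sigma = z[:n * n].reshape(n, n)
    xhat = z[n * n:]
    dSigma = A @ Sigma + Sigma @ A.T - alpha * Sigma @ C.T @ C @ Sigma + F @ F.T
    dxhat = A @ xhat + alpha * Sigma @ C.T @ (y_meas(t) - C @ xhat)
    return np.concatenate([dSigma.ravel(), dxhat])

z0 = np.concatenate([np.eye(2).ravel(), x0])   # Sigma(0) = I_n, xhat(0) = x0
sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-8, atol=1e-10)
print(sol.y[4:, -1])                           # observer state at the final time
```

For the non-linear case of (2.8) below, the same structure applies with \(A\) replaced by \(\mathrm{D}f(\widehat{x}_{\mathrm{K}}(t))\) evaluated along the observer trajectory.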
Note that \(\Sigma(t)\) coincides with the inverted Hessian \(\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}(t))^{-1}\) and is given as the solution of a differential Riccati equation, while \(\widehat{x}\) is given as the solution of the observer equation which agrees with the formulation in (2.5). System (2.7) is the deterministic version of the widely used _Kalman-Bucy filter_ [19, 20]. **The extended Kalman filter.** In engineering practice, observers for non-linear problems are frequently based on the use of the Kalman filter on a linearization of the system at the observer trajectory. This leads to the so-called extended Kalman filter which for our setting is formulated as \[\dot{\Sigma}(t) =\mathrm{D}f(\widehat{x}_{\mathrm{K}}(t))\,\Sigma(t)+\Sigma(t)\,\mathrm{D}f(\widehat{x}_{\mathrm{K}}(t))^{\top}-\alpha\Sigma(t)\,C^{\top}C\,\Sigma(t)+FF^{\top},\ \Sigma(0)=I_{n}, \tag{2.8}\] \[\dot{\widehat{x}}_{\mathrm{K}}(t) =f(\widehat{x}_{\mathrm{K}}(t))+\alpha\Sigma(t)\,C^{\top}(y(t)-C\widehat{x}_{\mathrm{K}}(t)),\ \widehat{x}_{\mathrm{K}}(0)=x_{0}.\] In contrast to (2.7), here the differential Riccati equation is coupled with the observer equation. For our non-linear examples we will employ an implementation of this extended Kalman filter as a first approximation of the Mortensen observer. While the extended Kalman filter is a powerful technique, we shall also present an example demonstrating that the state reconstruction \(\widehat{x}_{\mathrm{M}}\) based on the Mortensen observer can significantly differ from the state reconstruction \(\widehat{x}_{\mathrm{K}}\) obtained by means of the extended Kalman filter. We further point out that the extended Kalman filter and the Mortensen observer are closely related. Both are based on feeding the observation defect back into the model using gains that are characterized via respective differential Riccati(-type) equations. To see that the latter holds for the Mortensen observer, consider the second order spatial derivative of the HJB (2.3) \[\partial_{t}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\xi) =-\nabla^{2}_{\xi\xi}\mathcal{V}(t,\xi)\mathrm{D}f(\xi)-\mathrm{D}f(\xi)^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\xi) \tag{2.9}\] \[-\nabla^{2}_{\xi\xi}\mathcal{V}(t,\xi)FF^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\xi)-\mathrm{D}\left(\mathrm{D}f(\xi)^{\top}\right)\nabla_{\xi}\mathcal{V}(t,\xi)+\alpha C^{\top}C\] \[-\nabla^{3}_{\xi^{3}}\mathcal{V}(t,\xi)\left(f(\xi)+FF^{\top}\nabla_{\xi}\mathcal{V}(t,\xi)\right).\] Due to (2.4) it holds that \(\nabla_{\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))=0\) for all \(t\in[0,T]\). With (2.9) it follows that \[\partial_{t}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t)) =-\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))-\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t)) \tag{2.10}\] \[-\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))FF^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))+\alpha C^{\top}C\] \[-\nabla^{3}_{\xi^{3}}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))f(\widehat{x}_{\mathrm{M}}(t)).\] We define \(\Pi(t)=\nabla^{2}_{\xi\xi}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))^{-1}\) and multiply (2.10) with \(\Pi(t)\) from the left and the right.
Subsequently, the Mortensen observer can be characterized via \[\dot{\Pi}(t) =\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))\Pi(t)+\Pi(t)\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))^{\top}-\alpha\Pi(t)C^{\top}C\Pi(t)+FF^{\top} \tag{2.11}\] \[+\Pi(t)\nabla^{3}_{\xi^{3}}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))f(\widehat{x}_{\mathrm{M}}(t))\Pi(t),\ \Pi(0)=I_{n},\] \[\dot{\widehat{x}}_{\mathrm{M}}(t) =f(\widehat{x}_{\mathrm{M}}(t))+\alpha\Pi(t)\,C^{\top}(y(t)-C\widehat{x}_{\mathrm{M}}(t)),\ \widehat{x}_{\mathrm{M}}(0)=x_{0},\] and thus a comparison with (2.8) shows that the gains for the extended Kalman filter and the Mortensen observer are characterized via Riccati(-type) equations that differ only by the summand involving the third derivative of the value function. However, since usually the third derivative \(\nabla^{3}_{\xi^{3}}\mathcal{V}\) is not available in practical computations, this characterization does not offer any advantages when approximating the Mortensen observer. ## 3. Polynomial approximation of the value function In this work we propose a scheme for the numerical approximation of the Mortensen observer for non-linear systems. We will consider two different strategies, namely construction by pointwise minimization of the value function according to (2.4) and by solving the observer equation (2.5). For both approaches an appropriate approximation of the value function \(\mathcal{V}\) is essential. Due to the _curse of dimensionality_, solving the HJB equation (2.3) for \(\mathcal{V}\) is extremely challenging for systems of medium and high state dimension. Instead, we sample the value function and use linear regression to obtain a polynomial approximation of \(\mathcal{V}\). This approach is inspired by [2], where the samples of the value function are augmented by samples of its gradient. We extend this idea to also sampling the Hessian matrices. Indeed, the structure of the observer equation (2.5) calls for an accurate approximation of the Hessian of \(\mathcal{V}\). Here the Hessian samples turn out to be very helpful. In the following we describe the steps necessary for the approximation of the value function. First we discuss the generation of a data set for training by solving open-loop optimal control problems and differential Riccati equations. The former yields the values and first order derivatives of the value function, while the latter provides the Hessian matrices. Subsequently we shall present the construction of an appropriate polynomial basis using Chebyshev polynomials and the hyperbolic cross index set. Approximation properties of hyperbolic cross based polynomials are well analyzed in the literature. We refer to, e.g., [12, Section 4.2] where convergence rates in terms of Sobolev- and Besov space norms are provided. Finally we are prepared to construct a polynomial approximation of the value function \(\mathcal{V}\) using linear regression. ### Generating a dataset First we generate a data set of the form \[\left\{(t^{i},\xi^{i}),\mathcal{V}(t^{i},\xi^{i}),\nabla_{\xi}\mathcal{V}(t^{i},\xi^{i}),\nabla_{\xi\xi}^{2}\mathcal{V}(t^{i},\xi^{i})\right\}_{i=1}^{N}. \tag{3.1}\] Our strategy of choosing the sample points \((t^{i},\xi^{i})\) is presented below. First we discuss how the value function and its first and second order derivatives are evaluated for such a given point. For this purpose let \((t^{*},\xi^{*})\in(0,T]\times\mathbb{R}^{n}\) denote a generic sampling point.
We numerically solve the associated open-loop optimal control problem (2.1) using a gradient descent approach based on _Pontryagin's maximum principle_. According to this principle, an optimal triple \((\bar{x},\bar{v},p)\in H^{1}(0,t^{*};\mathbb{R}^{n})\times L^{2}(0,t^{*};\mathbb{R}^{m})\times H^{1}(0,t^{*};\mathbb{R}^{n})\) consisting of a state trajectory, a control and an adjoint state satisfies the first order necessary optimality condition: \[\dot{\bar{x}} =f(\bar{x})+F\bar{v},\ \bar{x}(t^{*})=\xi^{*}, \tag{3.2}\] \[-\dot{p} =\mathrm{D}f(\bar{x})^{\top}p-\alpha C^{\top}(y-C\bar{x}),\ p(0)=x_{0}-\bar{x}(0),\] (3.3) \[\bar{v}+F^{\top}p =0. \tag{3.4}\] Note that \(\bar{v}+F^{\top}p\) coincides with the gradient of the reduced cost functional \(\tilde{J}\), which is given by \[\tilde{J}(v;t)=J(x(v),v;t),\] where \(x(v)\) denotes the solution of the state equation with control \(v\), i.e., \(e(x(v),v;t,\xi)=0\) holds. With a representation of the gradient at hand we can set up the gradient descent scheme. We denote by \(v_{j}\) the control chosen in the \(j\)-th iteration; \(x_{j}\) and \(p_{j}\) are the corresponding solutions of the state and adjoint equations, respectively. We express the corresponding gradient as \(\mathcal{G}_{j}=v_{j}+F^{\top}p_{j}\). For choosing a stepsize in each iteration of the gradient descent we opt for a combination of the _Barzilai-Borwein_ step-size control [3, 4] and a specific line search scheme. In the \(k\)-th iteration, the initial stepsize is set to \({\sigma_{k}}^{-1}\), where \[\sigma_{k}\coloneqq\frac{\langle\mathcal{S}_{k-1},\mathcal{Y}_{k-1}\rangle}{\langle\mathcal{S}_{k-1},\mathcal{S}_{k-1}\rangle},\ \ \text{for even}\ k,\ \text{and}\qquad\qquad\sigma_{k}\coloneqq\frac{\langle\mathcal{Y}_{k-1},\mathcal{Y}_{k-1}\rangle}{\langle\mathcal{S}_{k-1},\mathcal{Y}_{k-1}\rangle},\ \ \text{for odd}\ k, \tag{3.5}\] and \(\mathcal{S}_{k-1}\coloneqq v_{k}-v_{k-1}\) and \(\mathcal{Y}_{k-1}\coloneqq\mathcal{G}_{k}-\mathcal{G}_{k-1}\). The stepsize \(h_{k}\) used in the \(k\)-th iteration is then determined by an application of the non-monotone line search introduced in [10].
```
Require: Initial controls \(v_{-1}\) and \(v_{0}\), tolerances \(\varepsilon_{\text{rel}},\varepsilon_{\text{abs}}>0\).
Ensure: Optimal triple \((\bar{x},\bar{v},p)\).
  Set \(k=0\).
  Compute \(x_{-1}\), \(x_{0}\) via (3.2).
  Compute \(p_{-1}\), \(p_{0}\) via (3.3).
  Set \(\mathcal{G}_{-1}=v_{-1}+F^{\top}p_{-1}\), \(\mathcal{G}_{0}=v_{0}+F^{\top}p_{0}\).
  while \(\|\mathcal{G}_{k}\|/\|\mathcal{G}_{-1}\|>\varepsilon_{\text{rel}}\) and \(\|\mathcal{G}_{k}\|>\varepsilon_{\text{abs}}\) do
    Compute \(\sigma_{k}\) according to (3.5).
    Obtain stepsize \(h_{k}\) by the non-monotone line search of [10], starting with \(\sigma_{k}^{-1}\).
    Set \(v_{k+1}=v_{k}-h_{k}\mathcal{G}_{k}\).
    Compute \(x_{k+1}\) via (3.2).
    Compute \(p_{k+1}\) via (3.3).
    Set \(\mathcal{G}_{k+1}=v_{k+1}+F^{\top}p_{k+1}\).
    Set \(k=k+1\).
  end while
```
**Algorithm 1** Gradient descent for the open-loop optimal control problem
Algorithm 1 allows the computation of approximations of the optimal state, control, and adjoint state \((\bar{x},\bar{v},p)\) for a given \((t^{*},\xi^{*})\). The value of the cost functional can be computed as \(\mathcal{V}(t^{*},\xi^{*})=J(\bar{x},\bar{v};t^{*})\). Via a feedback rule one obtains the gradient as \(\nabla_{\xi}\mathcal{V}(t^{*},\xi^{*})=-p(t^{*})\) without any additional computational cost. The data point is completed by a computation of the Hessian which is achieved in the following manner.
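Before turning to the Hessian computation, the following minimal sketch (not from the paper) illustrates the alternating Barzilai-Borwein stepsize rule (3.5) on a generic smooth function. In Algorithm 1 the same update is applied to the reduced cost functional, with the gradient \(\mathcal{G}_{k}\) obtained from the state and adjoint solves (3.2)-(3.3) and the stepsize safeguarded by the non-monotone line search of [10], which is omitted here for brevity.

```python
import numpy as np

def bb_gradient_descent(grad, v0, v_minus1, max_iter=200, eps_rel=1e-6, eps_abs=1e-10):
    """Gradient descent with alternating Barzilai-Borwein stepsizes, cf. (3.5).

    grad: callable returning the gradient G(v); here a plain function of v,
    standing in for the state/adjoint-based gradient of the reduced cost.
    """
    v_prev, v = v_minus1, v0
    g_prev, g = grad(v_prev), grad(v)
    g_ref = np.linalg.norm(g_prev)
    for k in range(max_iter):
        if np.linalg.norm(g) / g_ref <= eps_rel or np.linalg.norm(g) <= eps_abs:
            break
        s, y = v - v_prev, g - g_prev
        if k % 2 == 0:                      # sigma_k = <S,Y>/<S,S>  (even k)
            sigma = np.dot(s, y) / np.dot(s, s)
        else:                               # sigma_k = <Y,Y>/<S,Y>  (odd k)
            sigma = np.dot(y, y) / np.dot(s, y)
        h = 1.0 / sigma                     # initial stepsize sigma_k^{-1}
        v_prev, g_prev = v, g
        v = v - h * g
        g = grad(v)
    return v

# Toy usage on a quadratic test function (placeholder for the reduced cost functional)
Q = np.diag([1.0, 10.0])
grad = lambda v: Q @ v
print(bb_gradient_descent(grad, v0=np.array([1.0, 1.0]), v_minus1=np.array([1.2, 0.8])))
```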
**Evaluation of the Hessian in sample points.** First we show that \(\Xi(s)=\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))\) satisfies a specific differential Riccati equation. By the verification theorem, the optimal control can be characterized via the feedback rule [9] \[\bar{v}(s)=-F^{\top}p(s)=F^{\top}\nabla_{\xi}\mathcal{V}(s,\bar{x}(s)),\quad s\in[0,t^{*}]. \tag{3.6}\] With the chain rule it follows that \[\tfrac{\mathrm{d}}{\mathrm{d}s}\Xi(s)=\partial_{t}\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))+\nabla^{3}_{\xi^{3}}\mathcal{V}(s,\bar{x}(s))\dot{\bar{x}}(s).\] Inserting (2.9) with \(\xi=\bar{x}(s)\) yields \[\dot{\Xi}(s) =-\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))\mathrm{D}f(\bar{x}(s))-\mathrm{D}f(\bar{x}(s))^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))\] \[-\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))FF^{\top}\nabla^{2}_{\xi\xi}\mathcal{V}(s,\bar{x}(s))-\mathrm{D}\left(\mathrm{D}f(\bar{x}(s))^{\top}\right)\nabla_{\xi}\mathcal{V}(s,\bar{x}(s))+\alpha C^{\top}C\] \[-\nabla^{3}_{\xi^{3}}\mathcal{V}(s,\bar{x}(s))\left(f(\bar{x}(s))+FF^{\top}\nabla_{\xi}\mathcal{V}(s,\bar{x}(s))-\dot{\bar{x}}(s)\right).\] With (3.6) it follows that the last summand on the right hand side vanishes and thus \[\dot{\Xi}(s) =-\Xi(s)\mathrm{D}f(\bar{x}(s))-\mathrm{D}f(\bar{x}(s))^{\top}\Xi(s)-\Xi(s)FF^{\top}\Xi(s)\] \[+\mathrm{D}\left(\mathrm{D}f(\bar{x}(s))^{\top}\right)p(s)+\alpha C^{\top}C, \tag{3.7}\] \[\Xi(0) =I_{n}.\] The initial condition follows directly from the definition in (2.2). After solving this DRE we obtain \(\nabla^{2}_{\xi\xi}\mathcal{V}(t^{*},\xi^{*})=\nabla^{2}_{\xi\xi}\mathcal{V}(t^{*},\bar{x}(t^{*}))=\Xi(t^{*})\) and the data point is complete. **Remark 3.1**.: _It is possible to augment the sampled data by the time derivative. Since we already have access to the gradient, the time derivative \(\partial_{t}\mathcal{V}\) can be computed via the HJB equation (2.3) without any noteworthy additional cost. However, this did not yield any convincing advantages in our experiments and it is therefore omitted in this work._ **Choosing sampling points.** For the generation of the data set (3.1), an appropriate choice of sampling points \((t_{i},\xi_{i})_{i=1}^{N}\) is essential. Note that we will only require evaluations of the value function and its derivatives close to the observer trajectory \(\widehat{x}_{\mathrm{M}}\). For the approach based on solving the observer equation this can immediately be seen in (2.5). The same holds when minimizing the value function over the space variable if one assumes access to a sufficiently good initial candidate for the minimization, c.f. Section 4. The extended Kalman filter offers an approximation \(\widehat{x}_{\mathrm{K}}\) of the Mortensen observer at comparatively small computational cost. Therefore we shall sample the value function locally around the observer trajectory \(\widehat{x}_{\mathrm{K}}\). This is done in the following manner: First \(N_{\mathrm{Time}}\) time sample points are determined as the Chebyshev nodes in \([0,T]\), i.e., \[t_{k}=\tfrac{1}{2}\,T+\tfrac{1}{2}\,T\cos\left(\frac{\left(2\left(N_{\mathrm{Time}}-k+1\right)-1\right)\pi}{2\,N_{\mathrm{Time}}}\right),\qquad k=1,...,N_{\mathrm{Time}}.\] These are precisely the roots of the Chebyshev polynomial of degree \(N_{\mathrm{Time}}\) rescaled to the domain \([0,T]\). In the following every time sampling point \(t_{k}\) will be individually completed by appropriate spatial sampling points; a compact sketch of the resulting sampling step is given below.
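The following minimal sketch (not from the paper; the reference trajectory, box sizes and sample counts are placeholder assumptions) combines the Chebyshev time nodes above with quasi-random Halton points in a hyperrectangle around a given reference trajectory, as described next, using `scipy.stats.qmc`.

```python
import numpy as np
from scipy.stats import qmc

def chebyshev_nodes(T, n_time):
    # Roots of the degree-n_time Chebyshev polynomial, rescaled from (-1, 1) to [0, T]
    k = np.arange(1, n_time + 1)
    return 0.5 * T + 0.5 * T * np.cos((2 * (n_time - k + 1) - 1) * np.pi / (2 * n_time))

def sample_points(xhat_ref, T, n_time, n_space, radii, seed=0):
    """Return a list of (t_k, xi) sampling points around a reference trajectory.

    xhat_ref: callable t -> R^n (e.g. the extended Kalman filter trajectory)
    radii:    callable t -> vector of box half-widths r_{k,i} (placeholder choice)
    """
    n = xhat_ref(0.0).size
    halton = qmc.Halton(d=n, scramble=False, seed=seed)
    points = []
    for t_k in chebyshev_nodes(T, n_time):
        center, r = xhat_ref(t_k), radii(t_k)
        unit = halton.random(n_space)                    # quasi-random points in [0, 1)^n
        xi = qmc.scale(unit, center - r, center + r)     # rescale to the box Q^k
        points.extend((t_k, x) for x in xi)
    return points

# Toy usage with a placeholder reference trajectory in R^2
pts = sample_points(lambda t: np.array([np.cos(t), np.sin(t)]), T=1.0,
                    n_time=5, n_space=4, radii=lambda t: np.array([0.1, 0.1]))
print(len(pts))  # N = n_time * n_space sample points
```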
Turning to the spatial sampling, for each fixed \(k\) the Kalman observer trajectory \(\widehat{x}_{\mathrm{K}}\) is evaluated in \(t_{k}\). The spatial variable \(\xi\in\mathbb{R}^{n}\) is sampled quasi-randomly from a hyperrectangle \(Q^{k}\) around \(\widehat{x}_{\mathrm{K}}(t_{k})\) of the form \[Q^{k}=\bigtimes_{i=1}^{n}\left[\widehat{x}_{\mathrm{K}}(t_{k})_{i}-r_{k,i}, \widehat{x}_{\mathrm{K}}(t_{k})_{i}+r_{k,i}\right], \tag{3.8}\] where the specific side lengths \(r_{k,i}>0\) are chosen individually for every example and depend on \(\widehat{x}(t_{k})\), c.f. Section 5. We take \(N_{\mathrm{Space}}\) spatial samples \((\xi_{h}^{t_{k}})_{h=1}^{N_{\mathrm{Space}}}\) from \(Q^{k}\) using Halton quasi-random sequences1. Finally the set of sampling points is given as Footnote 1: [https://de.mathworks.com/help/stats/generating-quasi-random-numbers.html](https://de.mathworks.com/help/stats/generating-quasi-random-numbers.html) \[(t_{i},\xi_{i})_{i=1}^{N}=\bigcup_{k=1,h=1}^{N_{\mathrm{Time}},N_{\mathrm{ Space}}}\left\{(t_{k},\xi_{h}^{t_{k}})\right\}\] consisting of \(N=N_{\mathrm{Time}}\,N_{\mathrm{Space}}\) sample points. Let us note that initially we used tensorized Chebyshev nodes as spatial samples which worked out for examples of lower dimension. In higher dimensions, however, this is not feasible because the tensorization quickly results in a very large number of samples. An attempt of using random spatial samples uniformly distributed in \(Q^{k}\) did not yield the desired results as samples would tend to cluster instead of filling out the sampling domain evenly. Eventually we decided to construct the samples in a quasi-random fashion as it is done in [2]. ### Constructing a polynomial basis In this section we present the construction of the polynomial basis which we use for the approximation of the value function. The \((n+1)\)-dimensional basis polynomials will be given as products of one-dimensional Chebyshev polynomials representing the time variable and each spatial variable separately. First an appropriate domain \[\mathcal{D}=\mathcal{D}_{\mathrm{Time}}\times\mathcal{D}_{\mathrm{Space}} \subset\mathbb{R}\times\mathbb{R}^{n}\] needs to be determined. Here \[\mathcal{D}_{\mathrm{Space}}=\bigtimes_{i=1}^{n}\mathcal{D}_{i}\subset\mathbb{ R}^{n},\] and \(\mathcal{D}_{\mathrm{time}}\) and \(\mathcal{D}_{i}\) are the domains corresponding to the Chebyshev polynomials in the time variable and in the \(i\)-th spatial variable respectively. While the choice \(\mathcal{D}_{\mathrm{time}}=[0,T]\) is clear, the choice for the spatial domains is less obvious. We aim at choosing domains as small as possible while still ensuring that for any \(t\in[0,T]\) the set \(\mathcal{D}_{\mathrm{Space}}\) contains all spatial samples \((\xi_{i})_{i=1}^{N}\) and the evaluations of the (unknown) observer trajectory \(\widehat{x}_{\mathrm{M}}(t)\). Using the notation from (3.8) we define for all \(i\in\{1,...,n\}\) \[\mathcal{D}_{i}:=\left[\min_{k=1,...,M_{\mathrm{Time}}}\widehat{x}_{\mathrm{ K}}(t_{k})_{i}-r_{k,i},\max_{k=1,...,M_{\mathrm{Time}}}\widehat{x}_{\mathrm{ K}}(t_{k})_{i}+r_{k,i}\right].\] By rescaling the Chebyshev polynomials (originally defined on \((-1,1)\)) to the respective domains before normalizing, for the \(i\)-th spatial variable we obtain a one-dimensional orthonormal basis of \(L^{2}(\mathcal{D}_{i})\) and denote the basis functions by \((\phi_{k}^{i})_{k=0}^{\infty}\). Analogously we construct a basis of \(L^{2}([0,T])\) denoted by \((\phi_{k}^{\mathrm{Time}})_{k=0}^{\infty}\). 
These functions are used to define the elements of the tensor-product basis of \(L^{2}(\mathcal{D})\) via \[\Phi_{i}(t,\xi)\coloneqq\phi_{i_{1}}^{\mathrm{Time}}(t)\,\prod_{j=1}^{n}\phi_ {i_{j+1}}^{j}(\xi_{j}),\qquad\text{with }i=(i_{1},...,i_{n+1})\in\mathbb{N}_{0}^{n+1},\ \xi=(\xi_{1},...,\xi_{n}).\] Now assuming \(\mathcal{V}\in L^{2}\left([0,T]\times\mathcal{D}_{\mathrm{space}}\right) \cap L^{\infty}\left([0,T]\times\mathcal{D}_{\mathrm{space}}\right)\) the value function can be expanded into a series of the form \[\mathcal{V}(t,\xi)=\sum_{i\in\mathbb{N}_{0}}\theta_{i}\Phi_{i}(t,\xi),\] with \(\theta_{i}=\langle\mathcal{V},\Phi_{i}\rangle_{L^{2}(\mathcal{D})}\). This motivates the approximation of the value function in a polynomial basis of the form \(\{\Phi_{i}\}_{i\in\mathcal{J}}\), where \(\mathcal{J}\subset\mathbb{N}_{0}^{n+1}\) is an index set of finite cardinality. A proper choice of \(\mathcal{J}\) is crucial to the accuracy of the approximation and to the numerical feasability of its computation. Especially for large spatial dimensions \(n\) one needs to pay attention to the choice of indices to make sure that the set of basis polynomials does not grow too large. Since the dependence of the value function on the time variable and on the spatial variables are fundamentally different, we choose to treat them separately. Our treatment of the spatial polynomials is heavily inspired by what was done in [2], see also [1], and the monograph [12], therefore we will only briefly summarize the strategy here. Namely we make use of the hyperbolic cross index set to treat the spatial polynomials and obtain a tensor-product basis of \(\mathcal{D}_{\text{Space}}\). For a fixed number \(s\in\mathbb{N}\) the index set in question is defined as \[\mathcal{J}_{\text{Space}}(s)\coloneqq\left\{i=(i_{1},...,i_{n})\in\mathbb{N}_ {0}^{n}\,:\,\,\,\prod_{j=1}^{n}(i_{j}+1)\leq s+1\right\}.\] The elements of this basis will be fully tensorized with the polynomials \((\phi_{k}^{\text{Time}})_{k=0}^{d_{\text{Time}}}\) up to some prescribed maximal degree \(d_{\text{Time}}\in\mathbb{N}_{0}\). We obtain the polynomial basis represented by the index set \[\mathcal{J}=\mathcal{J}(d_{\text{Time}},s)\coloneqq\left\{i=(i_{\text{Time}}, i_{\text{Space}})\,:\,\,i_{\text{Time}}\in\{0,...,d_{\text{Time}}\},\,i_{ \text{Space}}\in\mathcal{J}_{\text{Space}}(s)\right\}.\] Let us emphasize the independence of the polynomial degree in time from the degrees of the spatial polynomials. It is motivated by the fact that the complexity of the value function with respect to time might heavily differ from its complexity in space. In the linear quadratic case for example the value function is known to be quadratic in space while the behaviour with respect to time can be much more complex and is unknown, c.f. (2.6). In such a setting it is reasonable to allow for a higher degree in the time polynomials while choosing a small hyperbolic cross index for the spatial polynomials. Note that this approach is in line with choosing the number of spatial and time samples individually, c.f. Subsection 3.1. ### Obtaining polynomial approximation by linear regression We finally recover the polynomial approximation of the value function by fitting its truncated polynomial representation to the generated data of the form (3.1). 
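As a small illustration of the index sets introduced above (a sketch, not from the paper, and a brute-force enumeration only meant for moderate \(n\) and \(s\)), the hyperbolic cross \(\mathcal{J}_{\text{Space}}(s)\) and the tensorized set \(\mathcal{J}(d_{\text{Time}},s)\), which fix the columns of the regression matrices set up next, can be enumerated as follows.

```python
from itertools import product
from math import prod

def hyperbolic_cross(n, s):
    """Indices i in N_0^n with prod_j (i_j + 1) <= s + 1."""
    return [i for i in product(range(s + 1), repeat=n)
            if prod(ij + 1 for ij in i) <= s + 1]

def tensor_index_set(d_time, n, s):
    """Full index set J(d_time, s): time degrees tensorized with the spatial hyperbolic cross."""
    return [(i_time,) + i_space
            for i_time in range(d_time + 1)
            for i_space in hyperbolic_cross(n, s)]

print(len(hyperbolic_cross(3, 4)))       # spatial basis size for n = 3, s = 4
print(len(tensor_index_set(6, 3, 4)))    # total number of basis polynomials
```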
In order to allow a comparison of the performance improvements achieved by including gradients and Hessians in the fitting process, we introduce three weights \(\beta_{0}\), \(\beta_{1}\), \(\beta_{2}\geq 0\) corresponding to the values of \(\mathcal{V}\), its gradient, and its Hessian, respectively. For the construction of the vector of sampled data we define \[V_{\mathcal{V}}=\frac{1}{\sqrt{N}}\left(\mathcal{V}(t_{i},\xi_{i})\right)_{i=1}^{N}\in\mathbb{R}^{N},\] further for \(j=1,...,n\) we set \[V_{\nabla,j}=\frac{1}{\sqrt{N}}\left(\frac{\partial}{\partial\xi_{j}}\mathcal{V}(t_{i},\xi_{i})\right)_{i=1}^{N}\in\mathbb{R}^{N},\] and finally for \(k,h=1,...,n\) satisfying \(k\leq h\) we define \[V_{\nabla^{2},k,h}=\frac{1}{\sqrt{N}}\left(\frac{\partial^{2}}{\partial\xi_{k}\partial\xi_{h}}\mathcal{V}(t_{i},\xi_{i})\right)_{i=1}^{N}\in\mathbb{R}^{N}.\] The corresponding matrices are constructed as follows. For the function values we set \[A_{\mathcal{V}}=\frac{1}{\sqrt{N}}\left(\Phi_{j}(t_{i},\xi_{i})\right)_{i=1,j\in\mathcal{J}}^{i=N}\in\mathbb{R}^{N\times|\mathcal{J}|},\] further for \(k=1,...,n\) we set \[A_{\nabla,k}=\frac{1}{\sqrt{N}}\left(\frac{\partial}{\partial\xi_{k}}\Phi_{j}(t_{i},\xi_{i})\right)_{i=1,j\in\mathcal{J}}^{i=N}\in\mathbb{R}^{N\times|\mathcal{J}|},\] and finally for \(k,h=1,...,n\) with \(k\leq h\) we set \[A_{\nabla^{2},k,h}=\frac{1}{\sqrt{N}}\left(\frac{\partial^{2}}{\partial\xi_{k}\partial\xi_{h}}\Phi_{j}(t_{i},\xi_{i})\right)_{i=1,j\in\mathcal{J}}^{i=N}\in\mathbb{R}^{N\times|\mathcal{J}|}.\] Note that for the Hessian samples and polynomial evaluations we enforce \(k\leq h\) to exploit the symmetry of the Hessian matrix. This reduces the size of the least squares problem by \(N\frac{n(n-1)}{2}\) rows. We finally set \[\mathbf{V}=\begin{pmatrix}\beta_{0}\,V_{\mathcal{V}}\\ \beta_{1}\,V_{\nabla,1}\\ \vdots\\ \beta_{1}\,V_{\nabla,n}\\ \beta_{2}\,V_{\nabla^{2},1,1}\\ \vdots\\ \beta_{2}\,V_{\nabla^{2},n,n}\end{pmatrix}\qquad\text{and}\qquad\mathbf{A}=\begin{pmatrix}\beta_{0}\,A_{\mathcal{V}}\\ \beta_{1}\,A_{\nabla,1}\\ \vdots\\ \beta_{1}\,A_{\nabla,n}\\ \beta_{2}\,A_{\nabla^{2},1,1}\\ \vdots\\ \beta_{2}\,A_{\nabla^{2},n,n}\end{pmatrix}\] and the coordinates \((\theta_{j})_{j\in\mathcal{J}}\) of the polynomial approximation of \(\mathcal{V}\) are given as the solution of the linear least squares problem \[\min_{\theta\in\mathbb{R}^{|\mathcal{J}|}}\|\mathbf{A}\theta-\mathbf{V}\|_{2}^{2}. \tag{3.9}\] In our implementation we solve the least squares problem using the MATLAB® backslash routine, i.e., \(\theta=\mathbf{A}\backslash\mathbf{V}\). **Remark 3.2**.: _The results in our numerical experiments exhibit some minor oscillations. In particular the Hessians of the polynomial approximations are prone to this issue, which might lead to noticeable deviations in the inverse. Such issues can be tackled by introducing a regularizing term in (3.9). Note, however, that even the implementation of a simple \(L^{2}\)-penalty term is non-trivial in our setup. This is due to the augmentation of the sampling by including derivatives. Furthermore, this adjustment comes with a substantial increase in computational cost. Due to the smoothing effect of integrating the observer equation (2.5), the moderate oscillations in the inverse Hessian did not pose an immediate problem to our approach. We therefore decided against the application of a regularizer in the least squares problem. Including an appropriate regularizer can furthermore ensure sparsity of the solution. We refer to [2] where a weighted LASSO was deployed._
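For illustration, the assembly of \(\mathbf{A}\) and \(\mathbf{V}\) and the solution of (3.9) can be sketched in a few lines of Python. The fragment below is our own sketch with an assumed array layout stated in the docstring; the reference implementation [34] is in MATLAB and uses the backslash routine instead of `lstsq`.

```python
import numpy as np

def fit_value_function(values, grads, hessians, Phi, dPhi, d2Phi, betas):
    """Solve the weighted least squares problem (3.9).
    values: (N,), grads: (N, n), hessians: (N, n, n) -- sampled V, grad V, Hess V
    Phi: (N, J), dPhi: (n, N, J), d2Phi: (n, n, N, J) -- basis evaluations
    betas = (beta0, beta1, beta2)."""
    b0, b1, b2 = betas
    N, n = grads.shape
    rhs_blocks, mat_blocks = [b0 * values], [b0 * Phi]
    for j in range(n):
        rhs_blocks.append(b1 * grads[:, j])
        mat_blocks.append(b1 * dPhi[j])
    for k in range(n):
        for h in range(k, n):                  # k <= h: exploit Hessian symmetry
            rhs_blocks.append(b2 * hessians[:, k, h])
            mat_blocks.append(b2 * d2Phi[k, h])
    V = np.concatenate(rhs_blocks) / np.sqrt(N)
    A = np.vstack(mat_blocks) / np.sqrt(N)
    theta, *_ = np.linalg.lstsq(A, V, rcond=None)
    return theta
```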
## 4. Realization of the observer trajectory After obtaining a polynomial approximation of the value function \(\mathcal{V}_{\mathrm{p}}\) we are in a position to realize the Mortensen observer trajectory \(\widehat{x}_{\mathrm{M}}\) numerically. As announced, this is done using two different approaches, namely by solving the observer equation (2.5) and by minimizing the value function according to (2.4). **Minimizing the value function.** We construct a discrete approximation of the Mortensen observer trajectory on a time grid of \([0,T]\) using \(N_{\min}+1\) equidistant grid points, where \(t_{0}=0\) and \(t_{N_{\min}}=T\). For any \(k\in\{0,...,N_{\min}\}\) we construct the observer trajectory by setting \[\widehat{x}_{\min,k}=\widehat{x}_{\min}(t_{k})=\operatorname*{arg\,min}_{\xi\in\mathbb{R}^{n}}\mathcal{V}_{\mathrm{p}}(t_{k},\xi).\] The minimization is realized using a gradient scheme with an _Armijo_ stepsize control. For the construction of \(\widehat{x}_{\min,0}\) we initialize the minimization with \(x_{0}\), which is the exact minimizer of \(\mathcal{V}(0,\cdot)\). For the construction of \(\widehat{x}_{\min,k}\) with \(k>0\) the initialization of the minimization is set to \(\widehat{x}_{\min,k-1}\). Assuming that \(\widehat{x}_{\mathrm{M}}\) is continuous and that \(N_{\min}\) was chosen large enough, this yields a sufficiently good initial estimate, justifying the choice made for the sampling points in Subsection 3.1. **Solving the observer equation.** Another approximation of the observer trajectory \(\widehat{x}_{\mathrm{eq}}\) is computed by solving the observer equation, where the inverse Hessian of the value function is approximated by \(\nabla^{2}_{\xi\xi}\mathcal{V}_{\mathrm{p}}(\cdot,\cdot)^{-1}\). Here a grid of \(N_{\mathrm{eq}}+1\) points with \(t_{0}=0\) and \(t_{N_{\mathrm{eq}}}=T\) is used. The equation is solved using a BDF4 scheme, where non-linearities are handled by a Newton scheme. ## 5. Numerical Tests In this section we apply the proposed methodology to four different models. First we consider a linear model allowing a comparison with the Kalman filter. We further illustrate the benefits of including the sampled Hessian in the regression problem and compare the realizations via a minimization of the value function according to (2.4) and via solving the observer equation (2.5). In the second and third examples we consider two non-linear low-dimensional oscillators and set the focus on comparing the Mortensen observer with the extended Kalman filter. These examples further illustrate the challenges stemming from the fact that the systems need to be considered backwards in time. As a final example we present an agent-based model of higher state-space dimension. In the linear case the Kalman filter and the Mortensen observer are equivalent on a theoretical level. Therefore we have access to an accurate approximation of the Mortensen observer that our numerical results can be compared to. Since this is not the case for the non-linear problems, we instead solve the observer equation (2.5) using a BDF4 scheme where we compute the inverse of the Hessian \(\nabla^{2}_{\xi\xi}\mathcal{V}(t,x)^{-1}\) by solving the corresponding open loop problem before solving the DRE as described in Subsection 3.1. The resulting trajectory will be denoted by \(\widehat{x}_{\mathrm{M}}\) and we will consider it to be the true Mortensen observer trajectory.
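The minimization over \(\xi\) described in Section 4 can be sketched as follows. This is an illustrative Python fragment with our own function names (`V` and `grad_V` stand for the approximated value function and its gradient) and generic tolerances; it is not the MATLAB implementation of [34].

```python
import numpy as np

def argmin_V(V, grad_V, t_k, x_init, c=1e-4, rho=0.5, rel_tol=1e-6, max_iter=500):
    """Gradient descent with Armijo backtracking for xi -> V(t_k, xi)."""
    x = np.asarray(x_init, dtype=float)
    for _ in range(max_iter):
        g = grad_V(t_k, x)
        if np.linalg.norm(g) <= rel_tol * max(1.0, np.linalg.norm(x)):
            break
        step, fx = 1.0, V(t_k, x)
        while V(t_k, x - step * g) > fx - c * step * (g @ g):   # Armijo condition
            step *= rho
            if step < 1e-16:        # line search failed, stop here
                return x
        x = x - step * g
    return x

# warm start: the minimizer found at t_{k-1} initializes the search at t_k
```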
### Practical aspects We first turn our attention to the evaluation of the value function and its derivatives in the sampling points. Since these calculations are entirely independent of each other they can easily be parallelized. In our implementation this is done using parfor from the MATLAB® Parallel Computing Toolbox. The gradient descent scheme is implemented with a relative tolerance of \(10^{-6}\). For the non-linear examples we further implement an absolute tolerance of \(10^{-3}\) and terminate the scheme once one of the tolerances is reached. The state and adjoint equations are solved by an application of a BDF4 scheme using 1001 time discretization points. The non-linearities are treated by a Newton scheme with absolute tolerance \(10^{-12}\). The Riccati equations are solved using the same BDF4 scheme where the implicit time steps are realized by the MATLAB® routine icare. Only for the four initial time steps is a Newton scheme employed, where the absolute tolerance is set to \(10^{-10}\) for the first three examples and to \(10^{-8}\) for the fourth example. For the integration of the observer equation by means of the approximated value function we set \(N_{\rm eq}=10^{3}\) and deploy a BDF4 scheme in which non-linearities are treated using a Newton scheme with absolute tolerance \(10^{-8}\). The minimization of the approximated value function is performed via a gradient descent scheme with relative tolerance \(10^{-6}\) and absolute tolerance \(10^{-3}\) and with \(N_{\rm min}=10^{3}\). The trajectory \(\widehat{x}_{\rm K}\) resulting from the (extended) Kalman filter and the solution \(\Sigma\) of the corresponding Riccati equation are determined using the MATLAB® routine ode15s with 1001 equidistant discretization points and a relative tolerance of \(10^{-8}\). When solving for the true Mortensen trajectory via BDF4, non-linearities are treated by a Newton scheme with absolute tolerance \(10^{-6}\). The schemes were implemented in MATLAB® R2020b and the computations were run on a Lenovo ThinkPad T14s AMD Ryzen 7 PRO 4750U(16)@1.700GHz with 32GB DDR4 3200 MHz memory. The MATLAB® code used to obtain the numerical results is available in [34]. ### Test 1: Harmonic oscillator For a linear example we consider an undamped harmonic oscillator. The first-order form of the model reads \[\dot{x}(t)=Ax(t)+Fv(t),\ x(0)=x_{0},\] \[y(t)=Cx(t)+\mu(t),\] where \[A=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},F=\begin{pmatrix}0\\ 1\end{pmatrix},C=\begin{pmatrix}1&0\end{pmatrix},x_{0}=\begin{pmatrix}1\\ 1\end{pmatrix}.\] In this formulation the first component of the state represents the position of the oscillator while the second component corresponds to its velocity. In our computations the disturbance of the dynamics is restricted to the velocity while the observation measures only the position. For the artificial construction of the measured data \(y\) we set the error in the dynamics to \(v(t)=\frac{1}{2}\cos(\frac{6}{5}t)\) and the observation error to \(\mu(t)=\frac{1}{2}\sin(\frac{t}{2})\). In order to reconstruct the state we apply our previously described methodology to approximate the Mortensen observer. In these calculations we consider the time horizon \([0,20]\). For the polynomial approximation of the value function we set the hyperbolic cross index to \(s=5\). Note that for this linear example we know a priori that the value function is quadratic in the spatial variable.
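Before turning to the choice of the polynomial degrees, the construction of the synthetic measurement for this example can be sketched as follows. This is an illustration under the disturbances stated above; the integrator settings are ours and not those of the reference implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
x0 = np.array([1.0, 1.0])

v = lambda t: 0.5 * np.cos(1.2 * t)       # disturbance in the dynamics, v(t) = 1/2 cos(6t/5)
mu = lambda t: 0.5 * np.sin(0.5 * t)      # observation error, mu(t) = 1/2 sin(t/2)

sol = solve_ivp(lambda t, x: A @ x + F * v(t), (0.0, 20.0), x0,
                t_eval=np.linspace(0.0, 20.0, 1001), rtol=1e-10, atol=1e-12)
y = C @ sol.y + mu(sol.t)                 # measured output y(t_i)
```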
For an accurate and cost effective approximation of \(\mathcal{V}\) one would restrict the spatial polynomials to a maximum degree of two. For general non-linear examples, however, such a priori knowledge is not available. We therefore decided to include spatial polynomials of higher degrees in the basis. For each computation the maximum degree for the time polynomials is set to be equal to the number of time samples used. The domains \(Q^{k}\) introduced in (3.8) for the spatial sampling are defined via the side lengths \(r_{k,1}=r_{k,2}=\max\left\{0.1,0.1\left\|\widehat{x}_{\rm K}(t_{k})\right\|\right\}\). In Table 1 we report results for different choices of sample numbers and sampling strategies (represented by the weights \((\beta_{0},\beta_{1},\beta_{2})\)): (i) The weights \((1,0,0)\) corresponds to the classical scheme of only considering the value function values. (ii) With the weights \((10^{-3},1,0)\) we pay attention mostly to the gradients. (iii) The combination \((10^{-3},0,1)\) implies a focus on the Hessian. (iv) Finally the weights \((1,1,\frac{1}{2})\) are derived from the coefficients of the general Taylor polynomial and imply a consideration of all available information. Note that the different weights result in different sizes of the least squares problem. Neglecting the computational effort of evaluating the samples the burden of approximating the value function lies in the least squares problem. We therefore decided to compare the estimated computational cost resulting from the different strategies by comparing the number of rows of the least squares matrix and display it in the fourth column of Table 1. In the last three columns we present the relative errors associated with the approximations of the Mortensen observer. The sixth column displays the relative error of the trajectory \(\widehat{x}_{\min}\) obtained via minimization of the approximated value function \(\mathcal{V}_{\mathrm{p}}\), i.e., \[e_{\min}=\frac{\|\widehat{x}_{\min}-\widehat{x}_{\mathrm{K}}\|_{L^{2}(0,T; \mathbb{R}^{n})}}{\|\widehat{x}_{\mathrm{K}}\|_{L^{2}(0,T;\mathbb{R}^{n})}}.\] The seventh column shows the relative error of the observer trajectory \(\widehat{x}_{\mathrm{eq}}\) obtained by solving the observer equation using the Hessian of the approximation of the value function \(\nabla^{2}_{\xi\xi}\mathcal{V}_{\mathrm{p}}\), i.e., \[e_{\mathrm{eq}}=\frac{\|\widehat{x}_{\mathrm{eq}}-\widehat{x}_{\mathrm{K}}\| _{L^{2}(0,T;\mathbb{R}^{n})}}{\|\widehat{x}_{\mathrm{K}}\|_{L^{2}(0,T;\mathbb{ R}^{n})}}.\] We further present the relative error of the inverted Hessian \[e_{\mathrm{gain}}=\frac{\|\nabla^{2}_{\xi\xi}\mathcal{V}_{\mathrm{p}}(\cdot, \widehat{x}_{\mathrm{M}}(\cdot))^{-1}-\Sigma(\cdot)\|_{L^{2}(0,T;\mathbb{R}^{n,n})}}{\|\Sigma\|_{L^{2}(0,T;\mathbb{R}^{n,n})}}.\] All entries of the table where \(e_{\min}\) or \(e_{\mathrm{eq}}\) are marked as failed represent parameter combinations that resulted in a polynomial approximation \(\mathcal{V}_{\mathrm{p}}\) for which the minimization or respectively the integration of the observer equation did not converge. Our experiments suggest that including the Hessian samples in the least squares problems leads to higher accuracy both in the approximated inverted Hessian and in the resulting observer trajectories. 
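The relative errors \(e_{\min}\) and \(e_{\mathrm{eq}}\) defined above are plain relative \(L^{2}(0,T;\mathbb{R}^{n})\) norms; as a small illustration (our own sketch, not the reference implementation), they can be evaluated on a time grid as follows.

```python
import numpy as np
from scipy.integrate import trapezoid

def relative_L2_error(t, x_approx, x_ref):
    """Relative L^2(0,T;R^n) error between two trajectories sampled on the grid t.
    x_approx, x_ref: arrays of shape (len(t), n)."""
    num = trapezoid(np.sum((x_approx - x_ref) ** 2, axis=1), t)
    den = trapezoid(np.sum(x_ref ** 2, axis=1), t)
    return np.sqrt(num / den)
```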
In row 11 the least squares matrix considering the Hessian information has 600 rows and yields a polynomial approximation of the value function based on which we approximate the Mortensen observer via integration of (2.5) with a relative error of order \(10^{-6}\). In our experiments we did not reach this level of accuracy without including the sampled Hessians in the least squares problem. Row 2 of Table 1 shows that even an increase of the number of rows in the LS-problem to 1800 results in an approximation of the observer equation with a relative error of order \(10^{-3}\). Comparing, e.g., row 11 and row 12 we further learn that sampling only the values and the Hessians seems to be more effective than additionally including the gradients. Not only do the gradients increase the size of the LS-problem, in most cases the resulting approximation of the value function also yields less accurate approximations of the Mortensen observer. Only in row 19 and 20 the two combinations of weights lead to observer trajectories with the same level of accuracy when integrating the observer equation. We further observe that in all parameter constellations the integration of the observer equation yields results preferable to the ones obtained by minimization of the value function. In row 6 we even find an example for which the observer equation leads to an approximation with a relative error of order \(10^{-2}\) while the minimization of the value function fails. To conclude the discussion of the linear example we present Figure 1 illustrating the development of the relative error of the inverted Hessian along time corresponding to the parameters set in row 11 of Table 1. \begin{table} \begin{tabular}{c||c c c|c c c} \hline & & & & & relative errors & \\ \hline & \(N_{\rm Time}\) & \(N_{\rm Space}\) & \((\beta_{0},\beta_{1},\beta_{2})\) & rows & \(e_{\rm gain}\) & \(e_{\rm min}\) & \(e_{\rm eq}\) \\ \hline \hline 1 & 30 & 20 & \((1,0,0)\) & 600 & \(1.7\times 10^{-4}\) & \(2.5\times 10^{-4}\) & \(\mathbf{1.0\times 10^{-5}}\) \\ 2 & 30 & 20 & \((10^{-3},1,0)\) & 1800 & \(4.7\times 10^{-2}\) & \(2.7\times 10^{-2}\) & \(\mathbf{2.3\times 10^{-3}}\) \\ 3 & 30 & 20 & \((\mathbf{10^{-3},0,1})\) & \(\mathbf{2400}\) & \(1.5\times 10^{-4}\) & \(2.4\times 10^{-4}\) & \(\mathbf{7.0\times 10^{-6}}\) \\ 4 & 30 & 20 & \((1,1,0.5)\) & 3600 & \(7.4\times 10^{-4}\) & \(5.2\times 10^{-4}\) & \(\mathbf{5.3\times 10^{-5}}\) \\ \hline 5 & 30 & 10 & \((1,0,0)\) & 300 & 18.2 & failed & failed \\ 6 & 30 & 10 & \((10^{-3},1,0)\) & 900 & \(1.5\times 10^{-1}\) & failed & \(\mathbf{1.4\times 10^{-2}}\) \\ 7 & 30 & 10 & \((\mathbf{10^{-3},0,1})\) & \(\mathbf{1200}\) & \(1.5\times 10^{-4}\) & \(2.4\times 10^{-4}\) & \(\mathbf{7.0\times 10^{-6}}\) \\ 8 & 30 & 10 & \((1,1,0.5)\) & 1800 & \(8.7\times 10^{-4}\) & \(7.1\times 10^{-4}\) & \(\mathbf{6.6\times 10^{-5}}\) \\ \hline 9 & 30 & 5 & \((1,0,0)\) & 150 & 86.4 & failed & failed \\ 10 & 30 & 5 & \((10^{-3},1,0)\) & 450 & 20.4 & failed & failed \\ 11 & 30 & 5 & \((\mathbf{10^{-3},0,1})\) & **600** & \(1.5\times 10^{-4}\) & \(2.4\times 10^{-4}\) & \(\mathbf{7.0\times 10^{-6}}\) \\ 12 & 30 & 5 & \((1,1,0.5)\) & 900 & \(3.5\times 10^{-3}\) & \(2.2\times 10^{-3}\) & \(\mathbf{9.7\times 10^{-5}}\) \\ \hline 13 & 20 & 5 & \((1,0,0)\) & 100 & 8.6 & failed & failed \\ 14 & 20 & 5 & \((10^{-3},1,0)\) & 300 & 24.0 & failed & failed \\ 15 & 20 & 5 & \((\mathbf{10^{-3},0,1})\) & **400** & \(2.4\times 10^{-3}\) & \(2.4\times 10^{-3}\) & \(\mathbf{2.3\times 10^{-4}}\) \\ 16 & 20 & 5 & \((1,1,0.5)\) & 
600 & \(7.9\times 10^{-3}\) & \(5.6\times 10^{-3}\) & \(\mathbf{3.2\times 10^{-4}}\) \\ \hline 17 & 10 & 5 & \((1,0,0)\) & 50 & 30.3 & failed & failed \\ 18 & 10 & 5 & \((10^{-3},1,0)\) & 150 & 6.6 & failed & failed \\ 19 & 10 & 5 & \((\mathbf{10^{-3},0,1})\) & **200** & \(3.8\times 10^{-2}\) & \(6.4\times 10^{-1}\) & \(\mathbf{3.5\times 10^{-3}}\) \\ 20 & 10 & 5 & \((1,1,0.5)\) & 300 & \(3.8\times 10^{-2}\) & \(6.4\times 10^{-1}\) & \(\mathbf{3.5\times 10^{-3}}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of the relative errors of the inverted Hessian and the two observer trajectories obtained by minimization of \(\mathcal{V}\) and by solving the observer equation depending on the number of samples and weights used in the least squares problem. Figure 1: Relative error of the inverted Hessian measured in the Frobenius norm plotted along time. The computation is based on a polynomial approximation of \(\mathcal{V}\) obtained with the parameters \(N_{\rm Time}=30\), \(N_{\rm Space}=5\), \((\beta_{0},\beta_{1},\beta_{2})=(10^{-3},0,1)\), \(d_{\rm Time}=30\), and \(s=5\). ### Test 2: Van der Pol oscillator We now turn our attention to the first non-linear example. Again we approximate the Mortensen observer both via the minimization of \(\mathcal{V}_{\mathrm{p}}\) and via integration of the observer equation. Both approaches yield satisfying results, but again the latter requires less data to do so. We further compare our findings to the results obtained using the extended Kalman filter. Specifically we consider the van der Pol oscillator. It is modelled by \[\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix} =A\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}-\begin{pmatrix}0\\ x_{1}(t)^{2}\ x_{2}(t)\end{pmatrix}+Fv(t),\ \begin{pmatrix}x_{1}(0)\\ x_{2}(0)\end{pmatrix}=x_{0},\] \[y(t) =C\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}+\mu(t),\] where \[A=\begin{pmatrix}0&1\\ -1&1\end{pmatrix},\ F=\begin{pmatrix}0\\ 1\end{pmatrix},\ C=\begin{pmatrix}1&0\end{pmatrix},\ x_{0}=\begin{pmatrix}0.1\\ 0.1\end{pmatrix}.\] The two state variables \(x_{1}\) and \(x_{2}\) correspond to the position and velocity of the system, respectively. Note that by introducing a third variable \(x_{3}=x_{1}^{2}\) this system can be transformed into one of state dimension \(n=3\) with a quadratic right-hand side. Therefore theoretical results presented in [8] apply. The behaviour of this system is best characterized in terms of the phase space. All initial values are attracted to a limit cycle. Once the limit cycle is reached the state of the system will stay near that cycle indefinitely. Hence this system exhibits stable behaviour. However, when considered backwards in time the system turns unstable. Especially when starting in a point outside the limit cycle and observing the evolution of the system backwards in time, the norm of the state will increase rapidly. These dynamics pose a crucial issue in our approach of realizing the Mortensen observer, specifically with regard to solving the open loop control problem for a given sample point \((t^{*},\xi^{*})\). In particular for large \(t^{*}\) and \(\xi^{*}\) close to the limit cycle the gradient descent scheme requires an accurate initial guess for the optimal control. In our experiments we tackled this issue in an iterative manner. For a fixed integer \(N_{\mathrm{it}}\) we first solve the optimal control problem for the tuple \((\frac{t^{*}}{N_{\mathrm{it}}},\xi^{*})\) using the zero control for initialization.
The resulting optimal control is used as an initialization when solving the open loop problem for \((\frac{2\,t^{*}}{N_{\mathrm{it}}},\xi^{*})\). Therefore we have to solve \(N_{\mathrm{it}}\) control problems in order to obtain the desired evaluation of the sampling point. For the construction of the measured data \(y\) we set the disturbance in the dynamics to \(v(t)=\frac{1}{2}\cos(\frac{6}{5}\,t)\) and the disturbance in the observation is chosen as \(\mu(t)=\frac{3}{10}\sin(2\pi\,t)\). This example is considered over the time horizon \([0,7]\) and the domains for spatial sampling are set via the side lengths \(r_{k,1}=r_{k,2}=\max\,\{0.1,0.1\,\|\widehat{x}_{\mathrm{K}}(t_{k})\|\}\). The results are illustrated in Figure 2. In Figure 2a we show a plot of the value function \(\mathcal{V}\) evaluated for fixed times \(t\) for \(\xi\) inside the limit cycle. In order to integrate the observer equation the value function was approximated using \(30\) and \(25\) sampling points in time and space, respectively. In the LS-problem only the values and the Hessians were considered, with respective weights \(10^{-3}\) and \(1\), resulting in an LS-matrix with \(3000\) rows. The polynomial basis is set via \(d_{\text{Time}}=9\) and \(s=9\). The computation took roughly \(20\) minutes and the results are presented in Figure 2b. The relative error of the obtained trajectory is given by \[\frac{\|\widehat{x}_{\text{eq}}-\widehat{x}_{\text{M}}\|_{L^{2}(0,7;\mathbb{R}^{2})}}{\|\widehat{x}_{\text{M}}\|_{L^{2}(0,7;\mathbb{R}^{2})}}=1.8\times 10^{-3}.\] For the realization via the minimization of \(\mathcal{V}_{\text{p}}\) we used \(60\) and \(50\) time and space samples, respectively. In the LS-problem only values and gradients were included using weights \(10^{-3}\) and \(1\), respectively, hence the LS-matrix has \(9000\) rows. The polynomial basis is given by \(d_{\text{Time}}=17\) and \(s=10\). Here the computation took about \(90\) minutes and the results are presented in Figure 2c. They exhibit a relative error of \[\frac{\|\widehat{x}_{\text{min}}-\widehat{x}_{\text{M}}\|_{L^{2}(0,7;\mathbb{R}^{2})}}{\|\widehat{x}_{\text{M}}\|_{L^{2}(0,7;\mathbb{R}^{2})}}=3.3\times 10^{-3}.\] From Figure 2 we observe that for this particular system the Mortensen observer and the extended Kalman filter lead to very similar trajectories for the reconstruction of the state. The following example provides a situation where such similarities do not occur. ### Test 3: Duffing oscillator The purpose of the following example is to show that for more complex systems there is a substantial difference between the extended Kalman filter and the Mortensen observer. To this end we consider the Duffing equation, which in its state-space form is given by \[\begin{split}\frac{\text{d}}{\text{d}t}\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}&=A\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}+\begin{pmatrix}0\\ -\beta\,x_{1}(t)^{3}\end{pmatrix}+F\,v(t),\quad\begin{pmatrix}x_{1}(0)\\ x_{2}(0)\end{pmatrix}=x_{0}+\eta,\\ y(t)&=C\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}+\mu(t),\end{split} \tag{5.1}\] where \[A=\begin{pmatrix}0&1\\ -\lambda&-\delta\end{pmatrix},\ F=\begin{pmatrix}0\\ 1\end{pmatrix},\ C=\begin{pmatrix}1&0\end{pmatrix}.\] A thorough discussion of equations of this type can be found in [17]. Again we point out that by introducing a third variable \(x_{3}=x_{1}^{2}\) this system is equivalent to one of dimension \(n=3\) with a quadratic right-hand side, and results from [6] can be applied.
However, the results presented there require the assumption that the difference of modeled and measured output \(\|y-C\tilde{x}\|_{L^{2}(0,T;\mathbb{R}^{r})}\) is sufficiently small. Here \(\tilde{x}\) is the model trajectory, i.e., the solution of the undisturbed model equation. For systems as sensitive as the Duffing oscillator this is a rather strong assumption, because even small disturbances in the dynamics and in the initial value may cause major differences in the resulting trajectory. Just like the previous example this system is affected by the issue of backwards instability. In order to compute the evaluations in the sampling points we apply the iteration described in Subsection 5.3. We construct the measurement \(y\) as follows: In order to prompt chaotic behaviour of the system from \(t=0\) onwards we follow [6, 17] and set \[\lambda=-1,\ \beta=1,\ \delta=0.3,\ v(t)=\gamma\cos(\omega t),\ \gamma=0.5,\ \omega=1.2.\] As the disturbance in the observation we consider \(\mu(t)=0.05\sin(2\pi t)\) and for the initial value and disturbance we set \(x_{0}=\begin{pmatrix}0&0\end{pmatrix}^{\top}\) and \(\eta=\begin{pmatrix}-1.216&0.493\end{pmatrix}^{\top}\), respectively. The results are shown in Figure 3. Figure 3a shows plots of the value function for fixed times \(t\) evaluated around the minimizer. The frame for time \(t=5\) displays some numerical inaccuracies, underlining the fact that the value function evaluation is not a trivial task. We suspect that for the final value \(\xi=\begin{pmatrix}-0.58&0.35\end{pmatrix}^{\top}\) the gradient descent solving the open loop problem converged to a local instead of the global minimizer. Since this point lies outside the sampling rectangle used in our approximation scheme, this did not pose any issues while approximating the Mortensen observer. For this example we omit the minimization of \(\mathcal{V}_{\mathrm{p}}\) and focus on the integration of the observer equation (2.5). We consider the time horizon \([0,5]\). Here the domains for spatial sampling are defined via the side lengths \(r_{k,1}=r_{k,2}=\max\left\{0.1,0.4\left\|\widehat{x}_{\mathrm{K}}(t_{k})\right\|\right\}\). We decided to take \(35\) time and \(30\) space samples. The gradients \(\nabla_{\xi}\mathcal{V}\) are omitted in the LS-problem, while the values and Hessians are considered with the weights \(10^{-2}\) and \(1\), respectively. The polynomial basis is constructed using \(d_{\mathrm{Time}}=9\) and \(s=17\). We compare our obtained state reconstruction \(\widehat{x}_{\mathrm{eq}}\) to the trajectory \(\widehat{x}_{\mathrm{K}}\) obtained by means of the extended Kalman filter and with \(\widehat{x}_{\mathrm{M}}\) constructed as described above.
Figure 2. Van der Pol oscillator
The results are presented in Figure 3b. The reader can observe a substantial difference between the reconstructions based on the Mortensen observer and the extended Kalman filter in the case of the Duffing equation. We also report that our approximation of the Mortensen observer has a relative error of \[\frac{\|\widehat{x}_{\mathrm{eq}}-\widehat{x}_{\mathrm{M}}\|_{L^{2}(0,5;\mathbb{R}^{2})}}{\|\widehat{x}_{\mathrm{M}}\|_{L^{2}(0,5;\mathbb{R}^{2})}}=9.4\times 10^{-3}.\] A first attempt at investigating the cause of this behaviour is presented in Figure 4. There we compare the terms in which the extended Kalman filter and the Mortensen observer differ, cf. Section 2.
In Figure 4a the ratio of the additional term in the Mortensen DRE and the right-hand side of the extended Kalman filter DRE \[\frac{\Pi(t)\nabla_{\xi^{3}}^{3}\mathcal{V}(t,\widehat{x}_{\mathrm{M}}(t))f(\widehat{x}_{\mathrm{M}}(t))\Pi(t)}{\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))\Pi(t)+\Pi(t)\mathrm{D}f(\widehat{x}_{\mathrm{M}}(t))^{\top}-\alpha\Pi(t)C^{\top}C\Pi(t)+FF^{\top}}\] is plotted over time. In Figure 4b we present the relative difference of the observer gains \[\frac{\|\Pi(t)C^{\top}-\Sigma(t)C^{\top}\|}{\|\Pi(t)C^{\top}\|}.\] Clearly the quantifiers for the difference between the extended Kalman filter and the Mortensen observer are significant for the Duffing oscillator. They give a first explanation for the noticeable difference in the state reconstruction based on these two methods. In the same figure we also present these quantifiers for the Van der Pol oscillator. It turns out that they are considerably smaller. These observations certainly deserve further research.
Figure 3. Duffing equation
Figure 4. Comparing Van der Pol and Duffing
### Test 4: Cucker-Smale model Finally we consider the Cucker-Smale model for agent-based optimal consensus control. It should be noted that the original intent of modeling consensus behaviour is not considered in our discussion. We are interested in this model merely for its non-linear dynamics and the fact that the state space dimension can be easily adjusted by varying the number of agents. In order to apply our strategy to a system of medium-sized state space dimension we consider the uncontrolled system with a disturbance. We discuss a system with \(N_{\mathrm{a}}\) agents with states \((z_{i},q_{i})\in\mathbb{R}^{2}\times\mathbb{R}^{2}\), for \(i=1,...,N_{\mathrm{a}}\). Here \(z_{i}\) and \(q_{i}\) correspond to the position and the velocity of the \(i\)-th agent moving in the plane. The dynamics are characterized by the equations \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}z_{i}&=q_{i}\\ \frac{\mathrm{d}}{\mathrm{d}t}q_{i}&=\frac{1}{N_{\mathrm{a}}}\sum_{j=1}^{N_{\mathrm{a}}}\frac{q_{j}-q_{i}}{1+\|z_{j}-z_{i}\|^{2}}+v_{i}\\ y_{i}&=z_{i}+\mu_{i},\end{split}\] where \(v_{i}\) is the disturbance in the dynamics and \(\mu_{i}\) represents the disturbance in the measurement. Analogous to the examples discussed above, only the velocities are affected by system disturbances and the measurement consists of only the positions. The initial position and velocity of the \(i\)-th agent are set to \[z_{i}(0)=q_{i}(0)=\frac{1}{2}\begin{bmatrix}\cos(\frac{2i\pi}{N_{\mathrm{a}}})\\ \sin(\frac{2i\pi}{N_{\mathrm{a}}})\end{bmatrix}.\] This choice places the agents on a circle around the origin and equips them with an initial velocity pointing straight away from the origin. For the construction of the measurement \(y\) we set the disturbance in the velocity of the \(i\)-th agent to \[v_{i}(t)=\frac{0.3}{\sqrt{N_{\mathrm{a}}}}\begin{bmatrix}0&1\\ -1&0\end{bmatrix}q_{i}(0).\] For the error in the measurement we set \[\mu_{i}(t)=\sin(2\pi\,t)\frac{0.8}{\sqrt{N_{\mathrm{a}}}}\begin{bmatrix}0&1\\ -1&0\end{bmatrix}q_{i}(0).\] We note that the direction of the disturbances is chosen to be orthogonal to the respective initial velocities. The specific choices for the initial condition and the disturbances are not required to ensure satisfying results for our numerical scheme. Choosing random initial conditions from an appropriate domain and combining them with more general disturbances leads to approximations with the same order of accuracy. This particular setting was chosen because it leads to a visual representation with comparatively little overlap in the resulting trajectories.
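For illustration, the disturbed dynamics and the particular initial configuration just described can be written down compactly as follows. This is a Python sketch of the model equations only; the state ordering (all positions first, then all velocities) is our own convention for this illustration.

```python
import numpy as np

N_a = 10
angles = 2 * np.pi * np.arange(1, N_a + 1) / N_a
q0 = 0.5 * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # initial velocities
z0 = q0.copy()                                                   # initial positions
R = np.array([[0.0, 1.0], [-1.0, 0.0]])
v = lambda t, i: (0.3 / np.sqrt(N_a)) * R @ q0[i]                # disturbance, orthogonal to q_i(0)

def cucker_smale_rhs(t, state):
    """Disturbed, uncontrolled Cucker-Smale dynamics; state = [z_1..z_Na, q_1..q_Na]."""
    z = state[:2 * N_a].reshape(N_a, 2)
    q = state[2 * N_a:].reshape(N_a, 2)
    dq = np.zeros_like(q)
    for i in range(N_a):
        w = 1.0 / (1.0 + np.sum((z - z[i]) ** 2, axis=1))        # 1 / (1 + ||z_j - z_i||^2)
        dq[i] = (w[:, None] * (q - q[i])).sum(axis=0) / N_a + v(t, i)
    return np.concatenate([q.ravel(), dq.ravel()])

state0 = np.concatenate([z0.ravel(), q0.ravel()])                # n = 4 * N_a = 40
```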
Again we focus only on the state reconstruction via the integration of the observer equation. For our computation we set the number of agents to \(N_{\mathrm{a}}=10\), resulting in a state space of dimension \(n=40\), and the time horizon is set to \([0,5]\). The domains for the spatial sampling are characterized by the side lengths \(r_{k,i}\), where for \(k=1,...,N_{\mathrm{Time}}\) and \(i=1,...,N_{\mathrm{a}}\) we set \[r_{k,2i}=r_{k,2i-1}=r_{k,2i+\frac{n}{2}}=r_{k,2i-1+\frac{n}{2}}=\max\left\{0.1,\left\|\left[\widehat{x}_{\mathrm{K}}(t_{k})_{2i-1}\right]\right\|\right\}.\] We use \(20\) time and \(10\) space samples, respectively, and the polynomial basis is constructed by setting \(d_{\mathrm{Time}}=10\) and \(s=4\). The LS-problem considers the sampled values of \(\mathcal{V}\) with a weight of \(10^{-3}\), while the Hessians enter with the weight \(1\). The computations for this example took roughly \(160\) minutes. The resulting approximation of the Mortensen observer is compared with the extended Kalman filter in the corresponding figure, panels (b) and (d), and we observe that they agree. ## 6. Conclusion Two schemes for the numerical realization of the Mortensen observer were proposed. The examples under consideration show that both are viable options. In particular the integration of the observer equation based on an approximation of the value function yields satisfying results for systems of small and moderate state space dimension. The experiments further suggest that it is beneficial to not only consider samples of the values of the value function but also to take its derivatives into account. Additionally, one of the examples shows that the Mortensen observer can substantially differ from the extended Kalman filter. ## Acknowledgement We thank B. Hoveler (TU Berlin) for many helpful comments and suggestions on the practical aspects of solving ordinary differential equations. T. Breiten and J. Schroder gratefully acknowledge funding and support from the Deutsche Forschungsgemeinschaft via the project 504768428.
2309.11921
Two-chamber gas target for laser-plasma accelerator electron source
Exploring new target schemes for laser wakefield accelerators is essential to meet the challenge of increasing repetition rates while ensuring stability and quality of the produced electron beams. The prototyping of a two-chamber gas cell integrated into the beam line and operating in continuous gas flow is introduced and discussed in the frame of ionisation injection. We report the numerical fluid modeling used to assist the density profile shaping. We describe the test bench used for cell prototype assessment, in particular the plasma electron density and longitudinal distribution of species relevant for ionisation injection. The lifetime of the target key part is measured for different materials. Perspectives to high power operation are outlined.
P. Drobniak, E. Baynard, K. Cassou, D. Douillet, J. Demailly, A. Gonnin, G. Iaquaniello, G. Kane, S. Kazamias, N. Lericheux, B. Lucas, B. Mercier, Y. Peinaud, M. Pittman
2023-09-21T09:33:08Z
http://arxiv.org/abs/2309.11921v1
# Two-chamber gas target for laser-plasma accelerator electron source ###### Abstract Exploring new target schemes for laser wakefield accelerators is essential to meet the challenge of increasing repetition rates while ensuring stability and quality of the produced electron beams. The prototyping of a two-chamber gas cell integrated into the beam line and operating in continuous gas flow is introduced and discussed in the frame of ionisation injection. We report the numerical fluid modeling used to assist the density profile shaping. We describe the test bench used for cell prototype assessment, in particular the plasma electron density and longitudinal distribution of species relevant for ionisation injection. The lifetime of the target key part is measured for different materials. Perspectives to high power operation are outlined. ## 1 Introduction Laser wakefield acceleration (LWFA) is a promising high-gradient accelerator technology, and the interest of the accelerator community is growing due to its compactness [1, 2, 3]. Significant progress has been made in the optimisation of laser-plasma electron sources (so-called 'targets'), achieving GeV-level beams [4], but also controlled high-charge beams and optimised spectral brightness [5, 6]. Long operation runs at various repetition rates are also a key issue [7, 8]. All these improvements are possible only with advanced control of both the laser and the plasma target. In the under-dense plasmas used in plasma wakefield accelerators, the gas typically takes the form of supersonic jets, gas cells, capillary discharge waveguides [9] or plasma ovens [10]. Depending on repetition rate and integration constraints, targets are operated in pulsed or continuous gas flow mode. A deep understanding and control of the target density profile, species distribution and gas flow is essential to ensure high-quality and reproducible electron beam production. A highly compact approach using a two-chamber target directly integrated into the beamline is developed here. Section 2 presents a review of existing laser-driven accelerator targets. Section 3 introduces the prototype mechanical design with fluid simulations, together with predicted density profiles. Section 4 describes the test bench used for target prototype experimental characterisation. Finally, section 5 concludes with the qualification of the fluid simulation model and considerations on prototype lifetime. ## 2 Targets for laser-driven plasma accelerator As reviewed by I. Prencipe _et al._[11] and J. Garland _et al._[12], several plasma target designs have been investigated in the last two decades: mainly gas jets, gas cells and capillary discharges. In all designs tried, the challenge is to tune the plasma composition and longitudinal density profile. For the particular case of laser-driven electron injectors, the target is composed of a first stage where injection occurs, a second stage for acceleration and a third one with a controlled density ramp to limit emittance growth [13]. The various approaches are summarised in Tab. 1. Gas jets are the most commonly used and are often based on a single-jet technique using either the principle of self-injection [14], optical injection (with colliding pulses [15]), ionisation injection [16, 17], or down-ramp injection [18, 19, 20, 21] triggered by a shock using a blade [20, 18] or a wire [21], or by shaping the plasma with a transverse beam [19].
Other schemes have been proposed using two jets, the first jet being the injector, the second one the accelerating stage. For the injection, the techniques tried were down-ramp [22] or ionisation [23] injection. The main advantage of gas jets is the easy alignment with the laser and the wide solid angle for diagnostics. Pulsed operation is advised to avoid excessive gas leaks, which would lead to pumping system overload and pollution. At high operation rate (typically kHz), the high density of gas jets tends to induce high thermal and mechanical loads, resulting in wear of mechanical parts, vibrations and shot-to-shot instability [12; 24; 25]. Typical electron densities offered by gas jets lie in the range of \(10^{18}-10^{20}\) cm\({}^{-3}\). Gas cells are divided into two categories. The first one is a tank [26; 27] or several tanks [28; 29], filled with gas in steady-state flow or pulsed mode. Apertures allow the laser to pass through, and keeping them as small as possible is critical to prevent leaks. The second category is gas channels [5; 30; 31], where gas is injected by various transverse inlets into a main longitudinal channel with reduced cross section. In most cases, the first transverse inlet is for electron injection, the other ones for electron acceleration. Gas exhaust occurs at the main channel entrance/exit and may additionally go through a specific transverse aspiration outlet. Whether using a tank or channel geometry, gas cells are particularly interesting for an ionisation injection regime, where a fraction of high-Z gas (called dopant) is added into a background gas. Various techniques have been developed to avoid continuous ionisation injection, using a downward focusing in gas jet or gas cell or a sharp confinement of the dopant [28], which allows reducing the accelerated beam energy spread and controlling beam-loading [6]. Whereas gas jets require quite high backing pressures (in the range of several bars), gas cells are less demanding in terms of gas consumption and pressure gradient in the gas circuit. Depending on the vacuum integration (differential pumping), they can be operated in pulsed or continuous gas-injection mode, which yields a better shot-to-shot stability [12]. On balance, the main drawbacks of gas cells are: (1) lifetime, since the laser may enlarge the cell apertures, (2) reduced solid angle for diagnostics, since their material may diffuse light, or potentially be coated by plasma pollution. The design investigated here is a gas cell divided into two separate chambers, delimited by transparent optical-quality plane surfaces, specifically suited for transverse optical diagnostics. It is inspired by the pioneering work of Kononenko [29], in a more compact approach and focusing on the dopant mitigation in the first zone, with pure background gas in the second zone. ## 3 Target multi-cell design The motivations for this work are: (1) a compact integration directly into the accelerator beam line, (2) a wide-range online tunability of dopant concentration and gas density profiles, (3) together with their online transverse optical diagnostics, (4) an easy replacement of critical elements which are strongly irradiated by the laser, especially at high repetition rate. ### Design and features The prototype design is presented in Fig. 1 and Fig. 2.
It consists of a main body and two nozzles defining two separate chambers (called chamber 1 and 2 along the laser propagation direction), each supplied with gas through an injection hose: helium doped with nitrogen (\(He/N_{2}\)) for chamber 1 and pure helium (\(He\)) for chamber 2. They are separated by a wall with a small central aperture ranging from \(0.25\) to \(1\,\mathrm{mm}\) in diameter. The laser enters chamber 1 through the inlet nozzle, passes through the central aperture and exits chamber 2 by the outlet nozzle. The central separation serves as the frontier between the target chambers. Some gas flow between the chambers may appear and is governed by the pressure difference and the conductance of the central aperture. Dimensions of the cell close to the axis are described in Fig. 3 and given in Tab. 2. Many combinations have been considered, and a typical configuration is given in Tab. 2. Varying the nozzle total length allows adjusting the chamber 1 and 2 longitudinal dimensions, called \(L_{2}\) and \(L_{4}\) (see Fig. 3). The tank volume of each chamber is \(\sim 5\,\)cm\({}^{3}\). Primary vacuum (sub-mbar) is ensured close to the nozzle exit by a pumping system connected with T-pipes (Fig. 2). Secondary vacuum is obtained further from the cell after a differential hole, both downstream and upstream, which produces a two-decade pressure drop. The main body has been manufactured using wire electro-discharge machining in an aluminium block, while the nozzles are made either of aluminium or of MACOR ceramics. The centering mechanical tolerance (\(\pm 50\,\mu\)m center to center) was achieved for the nozzles using numerical milling machining. In addition to its mechanical features, the design allows transverse optical diagnostics to be performed across chambers 1 and 2. The diagnostics can be placed in air thanks to optical windows, which form the direct frontier between the chambers and the experimental room. Such a feature is particularly interesting for convenient experimental measurements of gas and plasma characteristics. Compared to channel-type gas cells, transverse optical diagnostics are easier, even if the central separation wall introduces shadowing in the imaging of the two chambers for 2D spectroscopic light collection. The transverse distance between the center (interaction region) and the optical windows is \(\approx 3\,\)cm, avoiding rapid darkening due to pollution by the laser. ### Fluid simulation set-up The gas density distribution is modelled using the open-source fluid simulation code OpenFOAM[32]. Typical simulation cases from this article are online and open to the scientific community[33]. Depending on the physics of interest, the solver used is either: * _rhoPimpleFoam[34]_: for transient compressible single-species simulations, * _interMixingFoam[35]_: for transient incompressible miscible fluids. No solver modeling miscibility for compressible flows was found in the OpenFOAM library, therefore two simulation steps were necessary. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \(L_{1}\) & \(L_{2}\) & \(L_{3}\) & \(L_{4}\) & \(L_{5}\) & \(D_{1}\) & \(D_{3}\) & \(D_{5}\) \\ \hline \(1\) & \(0.6\) & \(0.25\) & \(1.2\) & \(3\) & \(0.6\) & \(0.25\) & \(1\) \\ \end{tabular} \end{table} Table 2: Typical cell dimensions close to the axis, with the nomenclature defined in Fig. 3. All values in mm. Figure 3: Cell dimensions nomenclature with associated variables. \(D_{1}\), \(D_{3}\) and \(D_{5}\) respectively are the diameters of: inlet nozzle, central aperture and outlet nozzle.
\(L_{1}\), \(L_{2}\), \(L_{3}\), \(L_{4}\) and \(L_{5}\) respectively correspond to the lengths of: inlet nozzle, chamber 1, central aperture, chamber 2 and outlet nozzle. Gas paths are indicated for \(He/N_{2}\) (blue filled region) and \(He\) (orange filled region). The laser path is schemed in red and propagates from left to right. Figure 2: CAD section view of the target connected in the beamline with a laser propagating from left to right. Gases are injected with two connections on the top (only one visible in this 'section' view), flow towards the chambers (blue and orange areas), and exit through the nozzles. They expand in the pumping tees and most of the flow is sucked out by efficient primary pumping (upwards in this schematic view). Two \(8\,\)mm apertures (differential holes) provide differential pumping at the entrance and exit of the T-pipes. The geometry is designed with CAD software and automatic meshing is performed using the routine _snappyHexMesh[36]_. A 3D geometry applied to a reduced volume is used for the simulations, as presented in Fig. 4, in order to limit the total number of cells to an average of \(10^{5}\) and thus limit the computation time. Boundary conditions are: fixed pressure at the inlets, constant volumetric flow at the outlets (estimated from the pump characteristics). Simulations are run on a single CPU and the average simulation time is roughly \(1\,\mathrm{h}\) for the reduced-volume case. ### Simulation of the density profiles for single species First simulations are run for pure \(He\) with _rhoPimpleFoam_. The resulting steady state is obtained from an initially empty cell a few hundred \(\mu\)s after valve opening. The longitudinal density profiles obtained for cell dimensions \((L_{1},L_{2},L_{3},L_{4},L_{5},D_{1},D_{3},D_{5})=(1,0.6,0.25,1.2,3,0.5,0.95,0.6)\) at several operating pressures are presented in Fig. 5. Injection pressures for chamber 1 (\(p_{Left}\)) and chamber 2 (\(p_{Right}\)) satisfy \(p_{Left}=p_{Right}\), in order to have a flat plateau between both chambers. This feature prevents convection and is further discussed below in the gas mixture simulation. Note that \(D_{3}\) is voluntarily taken as \(0.95\,\mathrm{mm}\) to model a realistically damaged \(0.25\,\mathrm{mm}\) central aperture, but the simulation results are similar. Within the \([10;120]\) mbar pressure range, the longitudinal profile shape is conserved, with slight compression effects, mostly at the outlet nozzle entrance. With cylindrical nozzles, the up- and down-ramp shapes are fairly linear. The overall pressure profile can thus easily be approximated with scalable linear functions that can be tabulated over a wide range of pressures to serve as input for optimisation with laser-plasma PIC simulations (see the polygonal fit in Fig. 5). Also note that the pressure upstream of the inlet nozzle quickly decreases below the mbar range for all configurations, limiting undesired laser-plasma interaction before the cell. The transverse density profile is depicted in Fig. 6, where three planes along the laser propagation axis are selected: inlet nozzle entrance (\(x_{1}\)), chamber 2 center (\(x_{2}\)), outlet nozzle center (\(x_{3}\)). Both chamber pressures are set to \(30\) mbar and the geometry is the same as for Fig. 5. Fig. 6 shows a constant transverse density, whether for a \(19\)\(\mu\)m (PALLAS project[37]) or a \(55\)\(\mu\)m (test bench) laser waist at the interface between the two chambers. The influence of the inlet nozzle geometry is presented in Fig. 7, where the reference profile \((1,0.6,0.25,1.2,3,0.5,0.95,0.6)\) is kept and compared with other diameters or lengths.
Figure 4: Longitudinal clip of the reduced mesh used in OpenFOAM simulations for a target configuration with geometry \((L_{1},L_{2},L_{3},L_{4},L_{5},D_{1},D_{3},D_{5})=(1,0.6,0.25,1.2,3,0.5,0.95,0.6)\). The _snappyHexMesh_-generated mesh is presented with the original CAD design .stl file (top image, the .stl is in grey) and zoomed-in with visible mesh refinement areas (bottom image). \(p\) is the pressure in Pa. Laser travels from left to right.
Figure 5: Evolution of longitudinal on-axis density with pressure according to _rhoPimpleFoam_ simulations with pure \(He\). Simulations are run with \(p_{Left}=p_{Right}\) on geometry \((1,0.6,0.25,1.2,3,0.5,0.95,0.6)\), with names referring to the maximum pressure in the plateau. The associated electron density is in parentheses, assuming full \(He\) ionisation. A polygonal fit is added in red. Inlet and outlet nozzle extensions are respectively depicted in red and grey areas. The cell center (central aperture) is depicted with a vertical black dashed line. Laser goes from left to right.
The longitudinal extent of the up-ramp scales linearly with \(L_{1}\) but does not depend on \(D_{1}\). Increasing the diameter leads to a higher upstream tee pressure that reaches the mbar range for \(D_{1}>0.5\) mm, but also degrades the flatness of the plateau in chamber 1. Ideally, the nozzle should be as short and thin as possible. We choose \(L_{1}=1\) mm and \(D_{1}=0.5\) mm for machining and robustness reasons. \(D_{1}\) must obviously also be larger than a few laser waists. The influence of outlet nozzle dimensions at \(30\) mbar is presented in Fig. 8, where the length \(L_{5}\) is varied between \([0;5]\) mm at fixed \(D_{5}=0.6\) mm and the diameter \(D_{5}\) between \([0.40;1.00]\) mm at fixed length \(L_{5}=3\) mm. The same reference case as for the inlet nozzle study is included for comparison. Similarly to the inlet nozzle, the ramp length linearly scales with nozzle length \(L_{5}\), with a preserved shape. Indeed, for each length tried (top graph in Fig. 8), the down-ramp follows the same pattern: a linear decrease along roughly \(L_{5}\) followed by an exponential decrease of \(\approx 1\) mm (gas expansion). The outlet diameter \(D_{5}\) has the same influence as previously observed with \(D_{1}\). Contrary to the inlet, the outlet profile has to be as smooth and long as possible for emittance preservation[38], which corresponds to large \(L_{5}\). Together with a small \(D_{5}\), this might be a problem due to laser divergence and possibly ablation. A compromise is made with \(L_{5}=3\) mm and \(D_{5}=0.60\) mm.
Figure 6: Evolution of transverse density with propagation according to _rhoPimpleFoam_ simulations with pure \(He\) at injection pressures \(p_{Left}=p_{Right}=30\) mbar, with geometry \((1,0.6,0.25,1.2,3,0.5,0.95,0.6)\). Transverse plots (orange) are extracted at \(x_{1}\), \(x_{2}\) and \(x_{3}\) respectively corresponding to inlet nozzle entrance, chamber 2 center and outlet nozzle center. A longitudinal plot on axis is added (blue). Two typical laser envelopes are added: \(w_{0}=55\)\(\mu\)m in red and \(w_{0}=19\)\(\mu\)m in pink.
Figure 7: Calculated normalised density for different inlet nozzle geometries from _rhoPimpleFoam_ simulations for pure \(He\) at \(p_{Left}=p_{Right}=30\) mbar.
The reference geometry (magenta) is \((L_{1},L_{2},L_{3},L_{4},L_{5},D_{1},D_{3},D_{5})=(1,0.6,0.25,1.2,3,0.5,0.95,0.6)\) mm. The top graph presents results for the inlet nozzle length variation \(L_{1}\): \(0.25\), \(0.50\), \(0.75\) and \(1.00\) mm (reference geometry) with constant \(D_{1}=0.50\) mm. The bottom graph shows the influence of the inlet nozzle diameter \(D_{1}\): \(0.30\), \(0.40\), \(0.50\) (reference geometry), \(0.70\) and \(1.00\) mm, with constant \(L_{1}=1.00\) mm. Laser travels from left to right. ### Simulation of dopant confinement As introduced earlier, dopant confinement is a key process to ensure high-quality beams with a small energy spread. Specific incompressible two-gas simulations are run with _interMixingFoam_ to account for diffusion issues. They are performed in a reduced geometry with boundaries up to each nozzle center, where the flow can still be approximated as incompressible (\(Ma<0.3\)). The new outlet boundary conditions are simply the pressure values extracted from the previous compressible simulations at the new physical borders. Such an approximation is verified by simulating comparable cases, both in compressible (_rhoPimpleFoam_) and incompressible (_interMixingFoam_) mode with \(He\) only 1. Results for cell pressures within \([10-120]\) mbar are presented in Fig. 9, where the relative pressure difference between the compressible and incompressible models, \(\epsilon_{p}=(p_{comp}-p_{incomp})/p_{comp}\) [%], is added. Footnote 1: Results are still valid when adding a few % \(N_{2}\) in chamber 1, since the gas characteristics remain comparable, especially in low-velocity areas such as the chamber interface. Figure 8: Calculated normalised density on axis for different outlet nozzle geometries from _rhoPimpleFoam_ simulations for pure \(He\) at \(p_{Left}=p_{Right}=30\) mbar. The reference geometry (magenta) is \((L_{1},L_{2},L_{3},L_{4},L_{5},D_{1},D_{3},D_{5})=(1,0.6,0.25,1.2,3,0.5,0.95,0.6)\) mm. The top graph presents results for the outlet nozzle length variation \(L_{5}\): \(1.00\), \(2.00\), \(3.00\) (reference geometry), \(4.00\) and \(5.00\) mm with constant \(D_{5}=0.60\) mm. The bottom graph shows the influence of outlet nozzle diameter \(D_{5}\): \(0.40\), \(0.50\), \(0.60\) (reference geometry), \(0.80\) and \(1.00\) mm, with constant \(L_{5}=3.00\) mm. Laser travels from left to right. Figure 9: Pressure distribution comparison on axis between compressible (_rhoPimpleFoam_) and incompressible (_interMixingFoam_) simulations, with pure \(He\) at plateau pressures: \(10\), \(30\), \(50\), \(80\) and \(120\) mbar. Upper graph displays compressible plots (dashed line) and incompressible plots (solid line). Lower graph presents the difference between compressible ('comp') and incompressible ('incomp') as: \(\epsilon_{p}=(p_{comp}-p_{incomp})/p_{comp}\) [%]. Target geometry is \((1,0.6,0.25,1.2,3,0.5,0.25,0.6)\) and a reduced mesh is used, limited to the presented x-axis extent. Positions of the inlet and outlet nozzles respectively are indicated by light red and grey areas. Laser goes from left to right. From Fig. 9, a good agreement appears between compressible and incompressible simulations, with a maximum \(8\) % deviation close to the nozzles, and almost no difference in the chambers, which is the zone where dopant mitigation should occur. The same kind of study was done for temperature and velocity profiles, with the same conclusion at the diffusion interface and a significant divergence close to the nozzles.
Incompressible simulations thus correctly match compressible ones 2. Footnote 2: The reader might note that all plateaus actually have gradients. This particular shape is explained by the difficult and time-consuming search for a perfect match on axis (\(p_{Left}=p_{Right}\)) using approximate boundary conditions. The dopant confinement study is then performed with two gases in _interMixingFoam_ to evaluate the effect of pressure difference between chamber 1 and 2 \(\Delta p=p_{Right}-p_{Left}\) (convection) or statistical mixing of the two gases at equal pressure (diffusion). The result is shown in Fig. 10 for a test at \(30\) mbar, with \(He/N_{2}\) injected in chamber 1 at dopant concentration \(c_{N_{2}}=10\%\) and pure \(He\) in chamber 2. For a negative/positive gradient \(\Delta p=-1/+1\) mbar, the dopant is pushed to the right/left through convection (visible on the flow velocity \(U_{x}\) in Fig. 10). \(\Delta p=-1\) mbar causes \(N_{2}\) leaks towards chamber 2 and \(c_{N_{2}}\) never reaches \(0\) in chamber 2 (no dopant confinement). \(\Delta p=+1\) mbar triggers the opposite effect, with \(He\) leaking into chamber 1. This case however offers \(c_{N_{2}}=0\) in chamber 2 (dopant confinement). In both cases, the transition from \(c_{N_{2}}=10\) % to roughly \(0\) occurs on \(\approx 1.0\) mm. For equal pressures, the interface is centered on the central aperture (\(x=0\)) and the \(c_{N_{2}}\) transition is due to pure diffusion (no longitudinal flow velocity \(U_{x}\)). It takes \(\approx 0.5\) mm for \(N_{2}\) to decrease from \(10\) % (chamber 1) to strictly \(0\) (dopant confinement). Dopant confinement is thus ensured for equal pressures or a slight positive gradient \(\Delta p\). Setting \(\Delta p=0\) mbar provides a clear separation of gases in both chambers, with original mix and pure \(He\) remaining respectively in chamber 1 and 2, while positive gradients induce \(He\) leaks into chamber 1. The shortest \(c_{N_{2}}\) transition from \(10\) % to \(0\) occurs for equal pressures. Working with \(10\) % dopant is a dimensioning case and results are valid for lower concentrations. We observe in simulations that increasing the working pressure from \(10\) mbar to \(120\) mbar makes the tuning of the transition position more sensitive to the pressure difference as the central aperture conductance depends on the sum of the pressure in the two chambers. The transition length remains stable. Simulations have also been performed for a larger aperture up to \(0.95\) mm. As conductance is higher when increasing the central aperture diameter, it shows a higher sensitivity with \(\Delta p\). This sensitivity is confirmed by the experimental results in Section 5. Figure 10: Influence of a pressure gradient between chambers \(\Delta p=p_{Right}-p_{Left}\) (upper graph, solid lines) on dopant concentration \(c_{N_{2}}\) (upper graph, dashed lines) and longitudinal flow velocity \(U_{x}\) (bottom graph, solid lines). Results are obtained with incompressible miscible simulations (_interMixingFoam_) using \(He/N_{2}\) (at \(c_{N_{2}}=10\) %) and pure \(He\) respectively in chamber 1 and 2. Cell geometry used is \((1,0.6,0.25,1.2,3,0.5,0.25,0.6)\) with the same reduced mesh as for Fig. 9. The cell center is indicated with a vertical dash-dotted line (central aperture) and positions of the inlet/outlet nozzles are added. ## 4 Target test bench ### Experimental setup The vacuum and mechanical setup used at IJCLab target test facility is presented in Fig. 
11 with its characteristics summed-up in Tab. 3. Gases are injected using a specific gas injection system (Fig. 12), with one injection line for each chamber. For chamber 1, the user specifies the \(He/N_{2}\) mixture injection mass flow and dopant concentration in \(\%\) (partial pressure ratio) within \([0;100]\pm 0.2\). For chamber 2, a pure \(He\) injection mass flow is set. Several gauges monitor the pressure. Their names follow the laser propagation direction: G1, G2, G3, G4 respectively measuring \(p_{1}\) (secondary vacuum), \(p_{2}\) (primary vacuum), \(p_{3}\) (primary vacuum) and \(p_{4}\) (secondary vacuum). Pressure at the injection is monitored with capacitance gauges GL and GR, whose measurements are independent of gas type. Monitoring the pressure is of particular importance to constantly check the state of the cell and alert on associated pollution propagating upstream the laser line. Additionally, gauges can serve as verification tool for fluid simulations. They allow to cross-check aperture-induced pressure drop Figure 11: Schematic diagram of the vacuum setup used for cell characterisation. Pressures are monitored using gauges G1, G2, G3 and G4 respectively measuring pressures \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\). Vacuum is ensured by secondary turbomolecular pumps (TMP1,TMP2) with forevacuum primary pump (PP). The target gas flow is directly pumped upward and downward the target with a roots pump (PRP). Gas injection pressures are measured with ceramic piezo type gauges GL and GR respectively measuring \(p_{Left}\) and \(p_{Right}\). Figure 12: Schematic diagram of the gas injection system. The mass flow controllers MFC1 and MFC2 set the concentration of \(N_{2}\) in the gas mixture. The MFC3 and MFC4 set the mass flow injected on the chamber 1 and chamber 2 respectively. The GAUGE-\(A-D\) are the gauges for injection automation. SV0X are solenoid valves and MVOX manual valves. \begin{table} \begin{tabular}{|l|c|c|c|} \hline element & parameter & value & unit \\ \hline TMP1,2 & pumping speed & \(300\) & l/s \\ TMP1,2 & gas through output & \(160\) & mbar.l/s \\ PP & pumping speed & \(40\) & l/s \\ PRP & pumping speed & \(>400\) & l/s \\ G1,G4 & gauge type & cold cathode & - \\ G2,G3 & gauge type & pirani & - \\ GL,GR & gauge type & capacitance & - \\ \hline \end{tabular} \end{table} Table 3: Characteristics of vacuum system elements given for helium at the working pressure far from axis (static pressure difference between chamber 2 and downstream pumping tee for instance). This serves as flow modeling validation regarding gas thermophysical properties and flow regime (laminar VS turbulent). ### Laser line The pump/probe optical setup is described in Fig. 13, where the laser beam comes from LaseriX platform[39]. The pump beam (characteristics in Tab. 4) is focused in the target and serves for ionisation. \(10\,\)% of the total energy is dedicated to the probe beam. The optical diagnostics used on the test bench for the target characterisation are: a wavefront sensor[40] for plasma channel density measurement[41, 42] and a visible spectrometer[43] (or a camera[44]) for ion species measurement. ## 5 Experimental qualification ### Neutral gas pressure measurement Simulation are cross-calibrated with experimental results: experimental pressure measurements validate the flow hypothesis (laminar versus turbulent) for the solver and its ability to reproduce pressure drops induced by the apertures. Simulations were performed with _rhoPimpleFoam_ in laminar mode. 
Whether for \(He\) or \(N_{2}\), the pressure in chamber 1 is predicted with an error below \(10\,\)%, which diminishes for higher injection pressures. The primary vacuum pressure \(p_{3}\) prediction slightly deviates from the experiment, with a maximum error corresponding to \(0.1\) mbar. Sources of deviations are: a central aperture shape not perfectly modeled, a pumping system overestimated in simulation at low pressures and the inability of the solver to model quasi-discontinuous flows for very low pressures. Simulations manage to correctly reproduce the flow down to \(0.1\) mbar. The relevance of gas property choice, such as viscosity is confirmed, together with the use of laminar mode: turbulence does not have to be activated, which greatly reduces the simulation time. ### Electron plasma density profiles Additionally to gauge pressure measurements far from axis, the density profile on axis is assessed using a wavefront sensor. The latter is used to record the phase difference introduced by the plasma channel in chamber 1, close to the axis. A typical phase map is presented in Fig. 14. The plasma channel has a constant diameter of \(100\)\(\mu\)m compatible with the laser width \(2\times w_{0}=110\)\(\mu\)m. 1D plots envelopes are standard deviations computed from 20 shot series taken at \(1\,\)Hz 3. The important noise level can \begin{table} \begin{tabular}{|l|c|c|c|} \hline Parameters & value & typical errors & unit \\ \hline central wavelength; \(\lambda_{0}\) & \(810\) & \(\pm 1\) & nm \\ minimum pulse duration (FWHM); \(\tau\) & 50 & \(\pm 5\) & fs \\ repetition rate & \(10\) & - & Hz \\ Flattened Gaussian beam order; \(N\) & \(5\) & - & - \\ energy on target; \(E_{0}\) & \(1\to 60\) & \(\pm 5\) & mJ \\ focal length; \(f\) & \(1100\) & - & mm \\ waist in the focal plane; \(w_{0}\) & \(55\) & \(\pm 5\) & \(\mu\)m \\ Strehl ratio & \(0.55\) & \(\pm 0.05\) & - \\ focal spot longitudinal position range; \(\Delta x_{foc}\) & \(30\) & - & mm \\ probe delay; \(\Delta t\) & \(180\) & \(\pm 0.03\) & ps \\ \hline \end{tabular} \end{table} Table 4: Parameters of the laser pulse from LaseriX platform[39] used for target gas ionisation (pump beam) and transverse optical diagnostics (probe beam) on IJCLab target characterisation test bench. Figure 13: Optical scheme used for target characterisation. The optical paths for the pump and probe beams respectively are in red and orange. The pump beam is focused into the target by MS (spherical mirror), the probe beam is extracted by BS1 (5/95 beamsplitter). Other optical components are: P (pinhole), DL (motorised delay line), DBS (dichroic beamsplitter), LEN1-4 (lenses), D (adjustable diaphragm), OBJ (microscope objective). Optical diagnostics are: WFS (wavefront sensor), VIS SPEC (visible spectrometer), TRANS (camera). be explained by: ambient air density variations integrated over the whole probe beam path (a few meters), test bench vibrations or laser ablated particles projecting impurities in the chambers. Since the phase remains quite stable above \(0.2\) mm from axis (no plasma), cropping is performed on each phase map, to increase the signal-to-noise ratio. In the worst case (low pressure), the signal-to-noise ratio was always \(>4\). Typical phase maps acquired for \(He\) at \(10\), \(30\), \(50\) and \(80\) mbar in chamber 1 are presented in Fig. 15. They display similar features than for Fig. 14. However, for a high pressure (\(80\) mbar) a longitudinal gradient appears, probably due to ionisation defocusing of the laser. 
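For reference, the theoretical plateau densities quoted in the next paragraph follow directly from the ideal-gas law. A minimal sketch, assuming full ionisation of \(He\) (two electrons per atom) and a room temperature of 298 K (the temperature is an assumed value):

```python
K_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 298.0            # assumed room temperature [K]

def full_ionisation_density_cm3(p_mbar, electrons_per_atom=2):
    """Electron density expected for a fully ionised ideal gas at pressure p_mbar."""
    n_gas = (p_mbar * 100.0) / (K_B * T)          # neutral density [m^-3], 1 mbar = 100 Pa
    return electrons_per_atom * n_gas * 1e-6      # electron density [cm^-3]

for p in (10, 30, 50, 80):
    print(f"{p:3d} mbar -> n_e = {full_ionisation_density_cm3(p):.2e} cm^-3")
# ~4.9e17, 1.5e18, 2.4e18 and 3.9e18 cm^-3, consistent with the values quoted below
```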
Abel inversion is used on phase maps to retrieve the corresponding electron plasma density distribution. The resulting density maps are presented in Fig. 16, where inversion has been performed on phase maps from Fig. 15. Theoretical maximum densities expected for fully ionised \(He\) at \(10\), \(30\), \(50\) and \(80\) mbar are respectively \(4.86\times 10^{17}\), \(1.46\times 10^{18}\), \(2.43\times 10^{18}\) and \(3.89\times 10^{18}\) cm\({}^{-3}\). They are indicated with horizontal lines on Fig. 16. For low pressure, the electron density on axis is quite constant, slightly below the theoretical value with some peaks along propagation. For \(50\) and \(80\) mbar, ionisation likely remains at the \(He^{+}\) level, with a progressive drop for \(80\) mbar above \(0.55\) mm. Figure 14: Averaged phase difference measured with the wavefront sensor (on \(20\) consecutive shots) in chamber 1 for \(He\) at \(30\) mbar (pressure gauge measurement) with two additional slices: longitudinal plot extracted at \(y=0\) (bottom graph), transverse plot at \(x=0.64\) mm (right graph). The ’std’ for each 1D plot is added in the cloud around the mean curve (computed using the shot-to-shot variation). Laser goes from left to right. Figure 15: Averaged phases (on \(20\) consecutive shots) acquired with the wavefront sensor for \(He\) in chamber 1 at \(10\), \(30\), \(50\) and \(80\) mbar. Longitudinal plots (blue line) extracted at \(y=0\) are added on each image, with corresponding ’std’ (blue filled) based on their shot-to-shot variation. Laser goes from left to right. This confirms that at high pressure, the pump beam is not intense enough to ionise the two levels of \(He\), due to stronger ionisation defocusing. We conclude that the measured densities are consistent with the gauge pressures and with simulations, with a longitudinal density having a constant plateau-like shape in the first chamber. A similar behaviour is expected for chamber 2, since its geometry is quite similar to chamber 1.
The straight horizontal line (dark blue) represents the density corresponding to fully ionised \(He\). Laser goes from left to right. Figure 17: Dopant localisation using spectrometer measurements for different gradient values \(\Delta p=p_{Right}-p_{Left}\) between chamber 1 and chamber 2, with an average plateau-pressure on axis of \(30\) mbar. Pure \(N_{2}\) and pure \(He\) respectively are injected in chamber 1 and 2. Geometry is \((1.5,1.15,0.25,0.95,1.5,0.5,0.95,0.5)\). The central aperture (central wall) position is added (black dashed line). Laser goes from left to right. ### Target lifetime Previous characterisation and simulations of course remain valid as long as the target retains its geometry under high intensity laser irradiation. The target main body is composed of aluminium, while nozzles are either in aluminium or in ceramics (MACOR). The most critical part of the design is the inlet nozzle. As shown in numerical simulations and experimental measurements the gas mixture confinement can be obtained with a central wall aperture diameter up to \(\approx 1\,\)mm. Typical aperture dimensions and shape variations before and after \(300\,000\) shots at \(60\) mJ for aluminium nozzles are presented in Fig. 18. Our experimental observation is that even at \(60\) mJ (well below the \(1\) J required for laser-plasma acceleration), the aluminium nozzles are strongly damaged. A solution is to use MACOR nozzles as shown in Fig. 19. Qualitatively, higher MACOR resistance is visible on the post-mortem pictures. An online estimation of the nozzle state can be done through pressure control. We experimentally observed an inlet pumping tee pressure rise up to the mbar range for aluminium nozzles after \(\sim 30\) min of operation, while remaining in a \(10^{-1}\,\)mbar range for MACOR nozzles. Ceramics greatly improve the cell lifetime. This conclusion from the characterisation test bench has to be confirmed on real scale laser plasma experiments. In the case of acceptable nozzle deterioration, we are able to take into account the evolution of nozzle apertures with time, with continuous adjustment of the gas injection flows to maintain a constant pressure in the chambers. In the optimisation of electron beam parameters, the laser focusing position may also be tuned during operation to counterbalance the elongation of the in-ramp length. After a few thousand shots, a saturation of the ablation is observed, leading to more stability. Regarding optical diagnostic, the design is quite robust and for more than \(10^{6}\) shots at \(60\) mJ, no optical window had to be replaced or even cleaned, allowing continuous cell characterisation and monitoring through transverse diagnostics. This is favoured by the distance between plasma and window of roughly \(30\) mm, preventing direct deposition of ablated material. ## 6 Discussion and conclusion The density profile of a two-chamber gas cell prototype for ionisation injection has been assessed using the open-source fluid simulation library OpenFOAM. It has been cross-checked with experimental results comming from the diagnostics installed on the LaseriX test bench. Simulation results are open to the scientific community. Our multi-cell target design offers density distribution control and precise dopant confinement, which have been experimentally demonstrated with online diagnostics that also allow to monitor the target state evolution during the experiment. 
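The flow readjustment mentioned above, keeping the chamber pressure constant while the nozzle apertures slowly grow, can be as simple as a proportional correction on the mass-flow-controller setpoints. The gains, units and limits in this sketch are illustrative choices, not values used on the test bench.

```python
def next_flow_setpoint(flow_now, p_meas_mbar, p_target_mbar,
                       gain=0.05, flow_min=0.0, flow_max=100.0):
    """One step of a proportional controller on an injection mass flow.

    flow_now      : current MFC setpoint (e.g. in sccm; units are illustrative)
    p_meas_mbar   : plateau pressure read from the GL/GR capacitance gauges
    p_target_mbar : desired working pressure
    gain          : fractional flow correction per mbar of error (tuning value)
    """
    error = p_target_mbar - p_meas_mbar
    flow_new = flow_now * (1.0 + gain * error)
    return min(max(flow_new, flow_min), flow_max)

# Example: ablation enlarged the aperture and the plateau dropped from 30 to 28.5 mbar
print(next_flow_setpoint(flow_now=20.0, p_meas_mbar=28.5, p_target_mbar=30.0))  # 21.5
```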
For emittance conservation issues, the output nozzle can be shaped to optimise passive plasma lensing with an Figure 19: Comparison between aluminium (top row) and MACOR nozzles (bottom row) after approximately \(300\,000\) shots at \(60\,\)mJ. Images are: inlet nozzle concave face (A,E), inlet nozzle convex face (B,F), outlet nozzle convex face (C,G), outlet nozzle concave face (D,H). Initial diameters were \(D_{1}=D_{5}=500\pm 10\)\(\mu\)m for all four nozzles. Damaged apertures are approximated as circles. Damaged aluminium nozzles dimensions are \(D_{1}=830\pm 10\)\(\mu\)m (inlet nozzle, average of ’A’ and ’B’) and \(D_{5}=740\pm 10\)\(\mu\)m (outlet nozzle, average of ’C’ and ’D’). Damaged MACOR nozzles dimensions are \(D_{1}=710\pm 10\)\(\mu\)m (inlet nozzle, average of ’E’ and ’F’) and \(D_{5}=740\pm 10\)\(\mu\)m (outlet nozzle, average of ’G’ and ’G’). Nozzle aperture lengths are \(L_{1}=1.5\) mm (A,B,E,F) and \(L_{5}=1.5\) mm (C,D,G,H). Figure 18: Aluminium nozzle evolution before (top row) and after (bottom row) \(300\,000\) shots at \(60\,\)mJ. Images are: inlet nozzle concave face (A,E), inlet nozzle convex face (B,F), outlet nozzle convex face (C,G), outlet nozzle concave face (D,H). Initial nozzles dimensions are \(D_{1}=520\pm 10\)\(\mu\)m (inlet nozzle, average of ’A’ and ’B’) and \(D_{5}=600\pm 10\)\(\mu\)m (outlet nozzle, average of ’C’ and ’D’). Damaged nozzles dimensions are \(D_{1}=910\pm 10\)\(\mu\)m (inlet nozzle, average of ’E’ and ’F’) and \(D_{5}=990\pm 10\)\(\mu\)m (outlet nozzle, average of ’G’ and ’H’), with damaged apertures approximated as circles. Nozzle aperture lengths are \(L_{1}=1\) mm (A,B,E,F) and \(L_{5}=3\) mm (C,D,G,H). adapted density out-ramp. The target integration in the beamline also offers a compact laser-plasma injector design. This allows for a compact beam transport line for further injection into a second accelerating stage. For example, the first magnet for PALLAS project can theoretically be put as close as \(\approx 15\) cm from the source. The results of this target design characterisation have been used as input for Particle-in-Cell simulations, with the aim to find optimal working points for electron injection. Four parameters have been varied: chamber pressure \(p_{Left}\) (with \(p_{Left}=p_{Right}\)), dopant concentration \(c_{N_{2}}\), laser energy \(E_{0}\) and laser focal position \(x_{foc}\)[45]. The numerical results show that electron beams with a charge over \(30\) pC, energy ranging between \(150-250\) MeV, energy spread below \(5\,\%\) and transverse normalised emittance below \(2\,\mu\)m can be obtained. ## Appendix A Appendixes
2309.06285
Jersey Number Recognition using Keyframe Identification from Low-Resolution Broadcast Videos
Player identification is a crucial component in vision-driven soccer analytics, enabling various downstream tasks such as player assessment, in-game analysis, and broadcast production. However, automatically detecting jersey numbers from player tracklets in videos presents challenges due to motion blur, low resolution, distortions, and occlusions. Existing methods, utilizing Spatial Transformer Networks, CNNs, and Vision Transformers, have shown success in image data but struggle with real-world video data, where jersey numbers are not visible in most of the frames. Hence, identifying frames that contain the jersey number is a key sub-problem to tackle. To address these issues, we propose a robust keyframe identification module that extracts frames containing essential high-level information about the jersey number. A spatio-temporal network is then employed to model spatial and temporal context and predict the probabilities of jersey numbers in the video. Additionally, we adopt a multi-task loss function to predict the probability distribution of each digit separately. Extensive evaluations on the SoccerNet dataset demonstrate that incorporating our proposed keyframe identification module results in a significant 37.81% and 37.70% increase in the accuracies of 2 different test sets with domain gaps. These results highlight the effectiveness and importance of our approach in tackling the challenges of automatic jersey number detection in sports videos.
Bavesh Balaji, Jerrin Bright, Harish Prakash, Yuhao Chen, David A Clausi, John Zelek
2023-09-12T14:43:50Z
http://arxiv.org/abs/2309.06285v1
# Jersey Number Recognition using Keyframe Identification ###### Abstract Player identification is a crucial component in vision-driven soccer analytics, enabling various downstream tasks such as player assessment, in-game analysis, and broadcast production. However, automatically detecting jersey numbers from player tracklets in videos presents challenges due to motion blur, low resolution, distortions, and occlusions. Existing methods, utilizing Spatial Transformer Networks, CNNs, and Vision Transformers, have shown success in image data but struggle with real-world video data, where jersey numbers are not visible in most of the frames. Hence, identifying frames that contain the jersey number is a key sub-problem to tackle. To address these issues, we propose a robust keyframe identification module that extracts frames containing essential high-level information about the jersey number. A spatio-temporal network is then employed to model spatial and temporal context and predict the probabilities of jersey numbers in the video. Additionally, we adopt a multi-task loss function to predict the probability distribution of each digit separately. Extensive evaluations on the SoccerNet dataset demonstrate that incorporating our proposed keyframe identification module results in a significant **37.81%** and **37.70%** increase in the accuracies of 2 different test sets with domain gaps. These results highlight the effectiveness and importance of our approach in tackling the challenges of automatic jersey number detection in sports videos. ## 1 Introduction In recent years, the advent of deep learning has revolutionized various fields, enabling remarkable performance improvements. This phenomenon has now extended its influence into the realm of sports analytics, particularly in major team sports such as soccer, which enjoy extensive global viewership and participation. Teams across these sports are increasingly turning to vision-driven analytics to gain a competitive edge by evaluating player performance and making informed assessments. At the core of player evaluation lies the crucial component of unique player identification, representing one of the most coveted research challenges in this domain. Traditionally, jersey numbers have been relied upon to establish player identification on the field. However, the fast-paced nature of the game introduces inherent challenges such as motion blur and occlusion, which hinder accurate identification. Moreover, the limited visibility of jersey numbers, typically located on the back of the player's jersey, further complicates the identification process. Existing methods [1, 2, 3] mainly focus on capturing spatial context and work on static images. These approaches tend to perform poorly on videos since the visibility of jersey numbers is minimal across frames. Recent works such as [4, 5] try to overcome this issue by capturing temporal features using Vision Transformers and LSTMs. However, these methods still tend to give sub-optimal results on real-world tracklet data. Further investigation reveals that even with the inclusion of a temporal module, the absence of jersey numbers in the majority of frames leads to the extraction of spurious features. Consequently, the identification of keyframes, those instances capturing critical moments in the game, emerges as a vital sub-problem that demands ef fective solutions to ensure reliable player identification. 
To address and solve the above issues, we introduce a robust Keyframe Identification (KfId) module to extract frames containing high-level features for effective jersey number recognition. The extracted frames are fed to a spatio-temporal network to model the structural and temporal context of the frames of a tracklet. We also enhance the training strategy of our model by adopting a multi-task loss function to classify each digit separately. Incorporating the KfId module in our spatio-temporal network results in a 38% increase in test accuracy. The following items summarize the contributions of this paper: 1. We propose a _keyframe identification module_ that is robust to blur and occlusions using RoI and Spatial Context Aware filtering to facilitate effective jersey number recognition. 2. We conduct an extensive study to determine the _best training strategy for our model by experimenting with different heads for the loss function_. We further show that digit-wise classification is the best training strategy for our model. 3. We show that _our method outperforms previous jersey number recognition methods_ on Soccernet, the largest open-source dataset collected from soccer broadcast videos for unique player identification. The remainder of the work is structured as follows: Sections 2 discusses the related works on sports analytics for vision and jersey number identification approaches. Section 3 discusses the proposed framework extensively followed by experimentation in Section 4. Finally, the paper is concluded in Section 5 with directions for future work. ## 2 Related Work _Player identification from facial features_ Prior to the advancements of deep learning, many works on player identification were focused on using hand-crafted facial features to recognize players. [6] use face detectors to detect faces and then perform face recognition using a database of players' faces. [7] use SIFT to extract facial features and localize keypoints. The extracted keypoints are then matched using a similarity measure to label players. Other works such as [8, 9, 10] recognize the features of the player as a whole instead of focusing on specific parts of a player. The caveat with these approaches is that we cannot deterministically predict the player's faces throughout a Tracklet with high confidence, since in football, broadcast cameras are usually panned at different scales at different times, making facial features negligibly available. This is especially difficult when the data parsed from such videos contains high noise due to motion blurs and occlusions. To tackle this problem several works use the jersey number to uniquely identify players. _Jersey Number recognition from static images_. Gerke _et al._[11] recognize jersey numbers from soccer images using Convolutional Neural Networks (CNN). Li _et al._[1] use a CNN to classify the digits on a player's jersey, and mitigates the use of an extra object detection module and localizes the digits of all the players in a particular frame by using Spatial Transformer Networks (STN). Liu _et al._[3] propose a pose-guided multitask Recurrent CNN (RCNN) to jointly detect humans, human pose keypoints, and jersey numbers. Vats _et al._[2] use a multi-task learning to recognize the digits separately and the jersey number as a whole. Bhargavi _et al._[12] present a multi-stage network that takes advantage of pose to localize jersey numbers before detecting them using a secondary classifier. 
_Jersey Number recognition from player tracklets_ Vats _et al._[4] developed a transformer-based architecture to recognize jersey numbers from ice-hockey player tracklets. Liu _et al._[13] propose an end-to-end framework that detects players and performs unique identification through jersey number recognition from American football videos using a multi-stage approach. Chan _et al._[5] utilize LSTMs to extract temporal characteristics from player tracklets and Figure 1: Overview of the proposed framework for effective jersey number detection. **(a)** Keyframe identification module localizes the jersey number and eliminates outlier jersey detections. **(b)** Spatio-temporal neural network extracts the spatial and temporal context of the tracklet to identify the jersey number. (o) represents the output of a tracklet when passed to the keyframe identification module. recognize the jersey numbers of players in addition to the features extracted from a ResNet [14] model. Furthermore, they also employ 1D CNNs as a late score-level fusion method for classification. All of the above methods formulate this problem as a classification problem without taking note of the inherent bias (absence of jersey numbers in many frames) in real-world data. Our method tackles the above issue by using the KfId module to identify the frames that contain useful features of the jersey number. The proposed system can help tolerate different camera Field-of-Views (FoV) during broadcast, due to varied panning and angles, and help determine player identities in a more reliable way. ## 3 Methodology ### Keyframe Identification The KfID module is the key novelty of our work, which helps detect and aggregate jersey number features based on their visibility, providing a _spatial-context_ to the classification task. For a given player tracklet \(\mathcal{T}=\{F_{i}:F_{i}\in\mathbb{R}^{H\times W\times 3}\}_{i=1}^{t}\) consisting of \(t\) frames, \(KfID(\mathcal{T})=\mathcal{T}\setminus\{F_{n_{1}},F_{n_{2}},...,F_{n_{k}}\}\), where \(F_{n_{1}},F_{n_{2}},...,F_{n_{k}}\) are noisy frames with diminutive digit features. In a way, our KfID module works as a selective filter, eliminating those frames that biases our Spatio-Temporal Network towards inaccurate predictions due to inconsequential features. The pipeline for our KfID module is as follows: For each frame \(F_{i}\) of a given tracklet \(\mathcal{T}\), the JNL module localizes all the digits in \(F_{i}\), which are denoted as \(det_{i}\), as shown in Equation (1). The detections within \(F_{i}\) are then locally filtered (\(f_{local}^{(i)}\)) using RoI and Local Histogram Correlation (LHC) modules as shown in Equation (2). The Global Histogram Correlation (GHC) module captures the spatial similarity of the detections across frames of \(\mathcal{T}\) and further filters them to get the jersey numbers of our player of interest (\(f_{global}\)), as shown in Equation (3). \[det(\mathcal{T})=\{det_{i}\}_{i=1}^{t}=\{JNL(F_{i})\}_{i=1}^{t}. \tag{1}\] \[f_{local}(\mathcal{T})=\{f_{local}^{(i)}\}_{i=1}^{t}=\{LHC(RoI(det_{i}))\}_{i=1} ^{t} \tag{2}\] \[f_{global}(\mathcal{T})=GHC(f_{local}(\mathcal{T})) \tag{3}\] All the modules mentioned above will be discussed in detail Figure 2: Detailed illustration of the keyframe identification module which encompasses the following components- Jersey Number Localization (JNL), Region of Interest (RoI), Local Histogram Correlation (LHC), and Global Histogram Correlation (GHC). in the following subsections. 
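A skeletal view of how Eqs. (1)-(3) compose is sketched below; the detector and the two filters are stand-ins for the JNL, RoI/LHC and GHC modules detailed in the following subsections, and the exact interfaces are assumptions.

```python
from typing import Callable, List, Sequence, Tuple

Box = Tuple[int, int, int, int]   # (x1, y1, x2, y2) digit detection

def keyframe_identification(
    frames: Sequence,                                        # tracklet T = (F_1, ..., F_t)
    jnl: Callable[[object], List[Box]],                      # Eq. (1): per-frame digit localization
    local_filter: Callable[[object, List[Box]], List[Box]],  # Eq. (2): RoI + LHC filtering
    global_filter: Callable[[List[Tuple[int, List[Box]]]], List[int]],  # Eq. (3): GHC
) -> List[int]:
    """Return the indices of the keyframes kept after local and global filtering."""
    per_frame = []
    for idx, frame in enumerate(frames):
        detections = jnl(frame)                  # det_i
        kept = local_filter(frame, detections)   # f_local^(i)
        if kept:                                 # frames with no surviving detection are dropped
            per_frame.append((idx, kept))
    return global_filter(per_frame)              # keyframe indices across the tracklet
```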
A detailed illustration of our KfId module is shown in Figure 2. #### 3.1.1 Jersey Number Localization (JNL) The accuracy of jersey number classification is highly dependent on the quality of the input frames. When the input frames contain noise or other distortions, extracting useful information becomes challenging, making it difficult to accurately identify the jersey number of the target player. Thus, the presence of digits in each frame of the tracklet was first localized using an off-the-shelf detector [15], fine-tuned on our dataset [16], to reduce the search space for the identification network. This enables narrowing down the focus to the specific regions of the frames containing the numbers. #### 3.1.2 RoI-based Filtering The JNL module is susceptible to multiple spurious detections, as shown in Figure 3, where a player's leg is incorrectly detected as containing a jersey number. To mitigate the impact of these outliers, a Region of Interest (RoI) module is incorporated, since for a given tracklet \(T_{i}\) the jersey numbers are always localized in a specific region within an RoI. The RoI module is designed to filter out the JNL module's outlier detections and refine the accuracy of the predictions. Analogous to the IoU, for our specific case a custom intersection metric (\(I^{*}\)) is employed. The \(I^{*}\) metric, defined by the following equation (4), ensures the elimination of outlier digit detections originating from the JNL module: \[I^{*}=\frac{A(R_{1}\cap R_{2})}{\min(A(R_{1}),A(R_{2}))+eps} \tag{4}\] where the custom intersection metric (\(I^{*}\)) is computed as the area of the intersection between the RoI (\(R_{1}\)) and the detection (\(R_{2}\)), normalized by the smaller of the two areas. Here, \(A(.)\) denotes the area of a region, and \(eps\) is a small constant (\(10^{-7}\)) added to make sure that the denominator is not \(0\). The \(I^{*}\) value is then compared to a threshold to determine the validity of the detection. #### 3.1.3 Spatial Context-Aware Processing Though the RoI module limits the search space for the detection of numbers in the frame, the visibility of the opposition team player's jersey number affects the performance of the prediction network. This problem can be predominantly seen in the second sequence of Figure 5, where the jersey number of the target player (in red jersey) is present along with the jersey number of the opposition player (in white jersey). Also, since the JNL module localizes each digit separately, a holistic representation of the jersey number is lacking. This can be seen in Figure 2, where the JNL module captures each digit in the frame separately. Thus, in order to address both these challenges associated with the visibility and holistic representation of jersey numbers, we propose a two-stage spatial processing module comprising the LHC and GHC. To begin with, each frame containing the detected digits undergoes conversion to the Hue, Saturation, and Value (HSV) color space. Unlike the RGB representation, this transformation allows us to separate the color information of each component from the intensity of light, thus enabling us to better distinguish colors under various disruptive conditions. Specifically, we isolate the hue component and construct histograms to analyze the spatial correlation between different hue distributions. Figure 4 depicts the contrasting color layouts for different jersey numbers.
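The RoI test of Eq. (4) above is straightforward to write out. The example below assumes axis-aligned boxes in pixel coordinates and uses the \(120\times 150\) (width \(\times\) height) crop and RoI corners given in Section 4.2, with an arbitrary example detection.

```python
def custom_intersection(roi, det, eps=1e-7):
    """I* of Eq. (4): intersection area normalised by the smaller of the two box areas."""
    ix1, iy1 = max(roi[0], det[0]), max(roi[1], det[1])
    ix2, iy2 = min(roi[2], det[2]), min(roi[3], det[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: max(0, b[2] - b[0]) * max(0, b[3] - b[1])
    return inter / (min(area(roi), area(det)) + eps)

# RoI preset from Section 4.2 for a 120 x 150 (w x h) player crop
w, h = 120, 150
roi = (w // 4, h // 5, 3 * w // 4, h // 2)         # (30, 30, 90, 75)
print(custom_intersection(roi, (35, 40, 70, 70)))  # detection fully inside the RoI -> ~1.0
```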
Hence, by examining the dominant color space in each frame, we gain valuable insights into the characteristics of the jersey numbers. Next, the **LHC module** is employed to obtain a holistic Figure 4: Histogram representation of different jersey’s spatial color layout. Figure 3: Outlier detections from JNL module representation of the jersey number by comparing correlation scores within each frame of the tracklet. If two detections in the frame are in close proximity and demonstrate similar spatial layouts (as indicated by correlation scores), they are likely to correspond to two digits forming part of the same jersey number. Consequently, these detections are merged to create a holistic representation. Following the LHC module, we introduce the **GHC module** to specifically address the issue of jersey numbers present on opposition player jerseys. We cluster the constructed histograms of all the filtered detections of a tracklet to find out spatially similar detections. The cluster with the most number of detections is chosen and the detections in that cluster are passed on to the spatiotemporal network. By clustering the histograms based on distribution frequency within detections across all frames in a tracklet, we effectively model the spatial layout of player tracklets and distinguish the jersey number of the target player while filtering out unwanted detections within each tracklet. ### SpatioTemporal Jersey Number Recognition Extracting important spatial and temporal features from the pre-processed tracklets is essential for jersey number recognition. Since we have a large number of keyframes in certain tracklets, using all of the keyframes in a tracklet might lead to memory constraints and extraction of redundant features. We mitigate the above challenges by using a fixed sequence length for every tracklet and randomly sample frames while ensuring that any 2 frames sampled are at least \(d\) frames apart from each other. Following the frame sampling process, the randomly chosen frames are passed through a pretrained ResNet-18 network pretrained on the ImageNet dataset. This step allows us to extract 512-dimensional spatial features denoted as \(\mathcal{F}\). By leveraging the power of ResNet-18, we can effectively capture and encode spatial information relevant to jersey number recognition. To capture the temporal cues within the tracklet, we further process the extracted spatial features. The spatial features \(\mathcal{F}_{s}\in\mathbb{R}^{512}\) are fed into a bidirectional Long Short-Term Memory (bi-LSTM) network. The bi-LSTM network enables us to model both forward and backward temporal dependencies, effectively capturing the temporal dynamics present in the tracklet. As a result, we obtain \(\mathcal{F}_{t}\in\mathbb{R}^{256}\) temporal features that encompass the temporal characteristics necessary for accurate jersey number classification. The final step involves utilizing the obtained 256-dimensional temporal features to identify players effectively. In order to enhance our training strategy, we leverage a multi-task loss function to effectively label jersey numbers, as shown in Equation (5), where \(d_{1}\in\mathbb{R}^{11}\) and \(d_{2}\in\mathbb{R}^{11}\) are ground-truth digits of the jersey number, and \(p_{1}\in\mathbb{R}^{11}\) and \(p_{2}\in\mathbb{R}^{11}\) are the predictions made by the spatiotemporal network. 
\[L_{tot}=0.5\,L_{d_{1}}+0.5\,L_{d_{2}} \tag{5}\] where \[L_{d_{1}}=-\sum_{i=0}^{10}d_{1}^{i}\log p_{1}^{i} \tag{6}\] and \[L_{d_{2}}=-\sum_{j=0}^{10}d_{2}^{j}\log p_{2}^{j} \tag{7}\] We conducted extensive studies to ascertain the best training scheme, as demonstrated in Table 5. By leveraging the spatial features extracted from the pretrained ResNet-18 and the temporal cues captured by the bidirectional LSTM network, we enable robust and accurate jersey number recognition. The overall architecture of the spatiotemporal network is illustrated in Figure 1. ## 4 Experiments ### Datasets The dataset utilized in this research, referred to as "Soccernet" [16], comprises a total of 4,064 player tracklets. Each tracklet is associated with a single jersey number label. To facilitate model evaluation and training, the dataset has been partitioned into four distinct subsets: training, validation, testing, and challenge sets, as outlined in Table 1. Here, the test and challenge sets are two different sets with certain domain gaps provided for evaluating the generalizability of the model. The table also shows the total number of frames extracted after the KfId module. A significant drop of 87.65% in the number of frames after the KfId module shows that a major chunk of the images in each tracklet does not have a visible jersey number. Some example frames of a few tracklets can be observed in Figure 5. The existing datasets [11, 1, 2, 3] used for jersey number detection predominantly use static images, which limits the modeling of temporal context that is essential to address visibility problems. The different datasets available in the literature for jersey number identification are compared in Table 2. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Tracklets & Number of Images & Keyframes \\ \hline Train & 1,141 & 587,543 & 68,881 \\ Validation & 286 & 146,886 & 17,220 \\ Test & 1,211 & 565,758 & 68,745 \\ Challenge & 1,426 & 750,092 & 98,504 \\ \hline Total & 4,064 & 2,052,306 & 253,350 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset split for training, validation, and testing ### Implementation Details We adopt a ResNet18 [14] backbone for the spatial network with an input size of 150 \(\times\) 120. Applying standard data augmentations such as ColorJitter, HorizontalFlip and normalization decreases the performance of our model. This is mainly because these augmentations decrease the quality of the data in hand, while some of them change the jersey number itself. Hence, we convert all the images to grayscale to enhance the contrast between the jersey number and the background without reducing the quality of the image. For the RoI module, we manually preset the RoI for every image based on its dimensions. More specifically, the top left and bottom right coordinates of the RoI are (\(\frac{w}{4}\), \(\frac{h}{5}\)) and (\(\frac{3w}{4}\), \(\frac{h}{2}\)) respectively, where \(w\) and \(h\) are the width and height of the image. For the temporal model, we evaluate and compare 3 different networks: ViT [17], TCN [18] and LSTM [19]. For ViT, we used 8 heads and 2 transformer encoder layers following [4] for ideal performance. For TCNs, we used 2 1D CNN blocks, one for each digit, where each block consisted of 3 1D CNN layers followed by BatchNorm [20] and ReLU. For LSTMs, we used 2 LSTM blocks for classifying each digit separately. All the models were trained for 20,000 iterations using a batch size of 32.
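Putting the spatial and temporal pieces together, a minimal sketch of the ResNet-18 + bi-LSTM variant is given below. The 512-d spatial and 256-d temporal feature sizes follow the text, while the LSTM hidden size, the mean pooling over time and the single shared trunk for both digit heads are assumptions rather than the exact released architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SpatioTemporalJerseyNet(nn.Module):
    """Sketch: ResNet-18 spatial features -> bi-LSTM temporal features -> two 11-way digit heads."""
    def __init__(self, num_classes=11, hidden=128):
        super().__init__()
        backbone = resnet18(weights=None)
        self.spatial = nn.Sequential(*list(backbone.children())[:-1])   # (B*T, 512, 1, 1)
        self.temporal = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.head_d1 = nn.Linear(2 * hidden, num_classes)
        self.head_d2 = nn.Linear(2 * hidden, num_classes)

    def forward(self, clips):                                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.spatial(clips.flatten(0, 1)).flatten(1)   # (B*T, 512) spatial features
        temporal, _ = self.temporal(feats.view(b, t, -1))      # (B, T, 256) temporal features
        pooled = temporal.mean(dim=1)                          # tracklet-level feature (assumed pooling)
        return self.head_d1(pooled), self.head_d2(pooled)

model = SpatioTemporalJerseyNet()
d1_logits, d2_logits = model(torch.randn(2, 8, 3, 150, 120))
print(d1_logits.shape, d2_logits.shape)   # torch.Size([2, 11]) torch.Size([2, 11])
```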
We used the AdamW [21] optimizer for the ViT model with a learning rate of 1e-4, and the Adam optimizer for the other models with a learning rate of 3e-3. Furthermore, a learning rate scheduler was employed to reduce the learning rate after every 2000 iterations for the first 6000 iterations. All the experiments were conducted on 4 NVIDIA 2080Ti GPUs with 12GB RAM each. ### Results #### 4.3.1 Keyframe Identification Module The performance enhancement of different temporal networks has been experimented with and without incorporating the KfId module as shown in Table 3. The aim of this experimentation is to showcase the ability of our module to improve the overall identification performance of different temporal networks. \begin{table} \begin{tabular}{c c} \hline \hline Dataset & Number of Images \\ \hline Gerke et al [11] & 8,281 \\ Liu et al [3] & 3,567 \\ Kanav et al [2] & 54,251 \\ Li et al [1] & 215, 036 \\ Kanav et al [4] (\(\dagger\)) & 670,410 \\ **SoccerNet** (\(\dagger\)) & 2,052,306 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of datasets in literature. (\(\dagger\)) - Uses temporal data Figure 5: Example frames of two different tracklets. The frames highlighted in red denote the keyframes extracted from our KfId module. Though a digit is visible in the frame highlighted in blue, our identification module ignored it, since it doesn’t spatially correspond to the player of interest. The findings presented in Table 3 reveal that the incorporation of the KfId module leads to a significant enhancement in test accuracy, as evidenced by a substantial 38% increase compared to the performance of the best-performing model without the module. This result highlights the pivotal role played by the KfId module in augmenting the overall performance of the spatiotemporal networks. By leveraging this module, the model gains the capability to identify crucial keyframes within video sequences, capturing salient information and notable temporal changes. The discerned keyframes provide valuable contextual cues, facilitating more accurate predictions and classifications. Consequently, the observed boost in test accuracy underscores the efficacy of integrating the KfId module and suggests its potential applicability for similar models. #### 4.3.2 Comparison to Existing Jersey Number Identification Works In order to assess the efficacy of our proposed architecture, we conducted a series of experiments comparing its performance against state-of-the-art networks on the SoccerNet dataset [16] as outlined in Table 4. We compared the performance with two different splits (Test, Challenge) from the dataset to evaluate the generalizability of the networks. Our model consistently outperforms its deterministic counterparts on both the test and challenge sets. These results underscore the remarkable generalizability of our model across datasets with inherent domain gaps. ### Ablation Study #### 4.4.1 Studying Different Training Strategies Experimentation with different heads for the loss function is done in Table 5 to determine the optimal objective and its influence on the training process. We experimented with the following heads to guide the model's prediction: 1.) Holistic (\(HO\)) jersey number identification aiming to predict the entire number in a single shot 2.) Digit-wise (\(DW\)) jersey number identification aiming to predict each digit seperately 3.) Length Control (\(LC\)) by aiming to control the length of the prediction by utilizing the number of digits in the number. 
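The digit-wise (\(DW\)) head, which turns out to be the best strategy in Table 5, corresponds to the loss of Eqs. (5)-(7). A minimal PyTorch sketch follows, where treating the 11-th class as a "no digit" label is an assumption consistent with the 11-way outputs:

```python
import torch
import torch.nn.functional as F

def digit_wise_loss(p1_logits, p2_logits, d1, d2):
    """Eq. (5): equally weighted cross-entropy over the two digit predictions.

    p1_logits, p2_logits : (B, 11) unnormalised scores for the first and second digit
    d1, d2               : (B,) ground-truth digit indices in [0, 10]
    """
    return 0.5 * F.cross_entropy(p1_logits, d1) + 0.5 * F.cross_entropy(p2_logits, d2)

# Example batch of 4 tracklets (class 10 used here as the assumed 'absent digit' label)
p1, p2 = torch.randn(4, 11), torch.randn(4, 11)
d1, d2 = torch.tensor([1, 9, 2, 7]), torch.tensor([0, 10, 3, 10])
print(digit_wise_loss(p1, p2, d1, d2))
```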
From the above table 5, we can see that performing digit-wise classification leads to better results in comparison to using holistic labels. This is mainly because of the fact that there a lot of jersey numbers in the test and challenge set that are absent in the training set. Hence, using holistic jersey number labels will lead to unseen classes in the test set which results in erroneous predictions. Additionally, the inclusion of \(LC\) in the loss function poses challenges to the network's performance, particularly due to the prevalence of jersey numbers with lengths of either 1 or 2. Relying on the jersey number length for determining the final predictions yields sub-optimal performance. These findings underscore the significance of digit-wise classification and emphasize the limitations associated with holistic labels and the incorporation of length control in the context of jersey number recognition tasks. #### 4.4.2 Sequence Length To determine the optimal sequence length for training the temporal network, we conducted various experiments with different lengths as shown in Table 6. By varying the sequence length, we aimed to evaluate the model's ability to capture temporal dependencies across frames during training. \begin{table} \begin{tabular}{c c c} \hline \hline Method & Test Acc & Challenge Acc \\ \hline Gerke et al [11] & 32.57 & 35.79 \\ Kanav et al [2] & 46.73 & 49.88 \\ Li et al [1] & 47.85 & 50.60 \\ Kanav et al [4] & 52.91 & 58.45 \\ **Ours** & **68.53** & **73.77** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative comparison with the state-of-the-art methods \begin{table} \begin{tabular}{c c c} \hline \hline Method & Test Acc & Challenge Acc \\ \hline TCN & 27.08 & 30.17 \\ ViT & 19.90 & 23.78 \\ **LSTM** & **30.89** & **36.07** \\ \hline TCN (\(\dagger\)) & 67.54 (+40.46) & 63.81 (+33.64) \\ ViT (\(\dagger\)) & 58.62 (+38.72) & 65.37 (+41.59) \\ **LSTM** (\(\dagger\)) & **68.53 (+37.81)** & **73.77 (+37.70)** \\ \hline \hline \end{tabular} \end{table} Table 3: Results with and without KfId Module. (\(\dagger\)) - with the KfId module \begin{table} \begin{tabular}{c c c} \hline \hline HO & DW & LC & Test Acc \\ \hline ✓ & & 55.71 \\ ✓ & ✓ & 62.39 \\ ✓ & ✓ & 65.14 \\ & ✓ & ✓ & 63.77 \\ & ✓ & & **68.53** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study on different heads for the loss function The impact of sequence length on the performance of our model is presented in Table 6. It is evident from the results that increasing the sequence length leads to improved model performance. However, when the sequence length surpasses 40, the accuracy of the model begins to decline. This decline suggests that using a sequence length greater than 40 causes the model to extract redundant and unnecessary features, ultimately leading to overfitting to the training set. On the other hand, employing a sequence length below 30 results in underfitting as there is insufficient data available per tracklet to capture the relevant temporal features accurately. These findings emphasize the critical role of choosing an appropriate sequence length to strike a balance between capturing meaningful temporal dynamics and preventing overfitting. ## 5 Conclusion and Future Works In this work, we introduce and implement a robust keyframe identification module to enhance jersey number recognition from player tracklets with high-motion blur and occlusion. We further adopt spatiotemporal networks along with a multi-task loss function to perform digit-wise classification. 
We demonstrate the efficacy of our KfId module by conducting extensive experiments and comparing our network with state-of-the-art methods. One possible future improvement to our proposal is to improve our spatial model to capture more significant spatial cues even in the presence of noise. ## 6 Acknowledgment This work was supported by Stathletes through the Mitacs Accelerate Program and the Natural Sciences and Engineering Research Council of Canada (NSERC).
2305.00521
StyleLipSync: Style-based Personalized Lip-sync Video Generation
In this paper, we present StyleLipSync, a style-based personalized lip-sync video generative model that can generate identity-agnostic lip-synchronizing video from arbitrary audio. To generate a video of arbitrary identities, we leverage expressive lip prior from the semantically rich latent space of a pre-trained StyleGAN, where we can also design a video consistency with a linear transformation. In contrast to the previous lip-sync methods, we introduce pose-aware masking that dynamically locates the mask to improve the naturalness over frames by utilizing a 3D parametric mesh predictor frame by frame. Moreover, we propose a few-shot lip-sync adaptation method for an arbitrary person by introducing a sync regularizer that preserves lip-sync generalization while enhancing the person-specific visual information. Extensive experiments demonstrate that our model can generate accurate lip-sync videos even with the zero-shot setting and enhance characteristics of an unseen face using a few seconds of target video through the proposed adaptation method.
Taekyung Ki, Dongchan Min
2023-04-30T16:38:42Z
http://arxiv.org/abs/2305.00521v2
# StyleLipSync: Style-based Personalized Lip-sync Video Generation ###### Abstract In this paper, we present StyleLipSync, a style-based personalized lip-sync video generative model that can generate identity-agnostic lip-synchronizing video from arbitrary audio. To generate a video of arbitrary identities, we leverage expressive lip prior from the semantically rich latent space of a pre-trained StyleGAN, where we can also design a video consistency with a linear transformation. In contrast to the previous lip-sync methods, we introduce pose-aware masking that dynamically locates the mask to improve the naturalness over frames by utilizing a 3D parametric mesh predictor frame by frame. Moreover, we propose a few-shot lip-sync adaptation method for an arbitrary person by introducing a sync regularizer that preserves lips-sync generalization while enhancing the person-specific visual information. Extensive experiments demonstrate that our model can generate accurate lip-sync videos even with the zero-shot setting and enhance characteristics of an unseen face using a few seconds of target video through the proposed adaptation method. Please refer to our project page. ## 1 Introduction In the past few years, advances in deep learning have altered the dynamics of video creation. Now, users can easily make and edit videos with the help of deep learning. In particular, the task of generating a talking head video has received great interest due to its various practical uses. It can be applied in many applications such as film dubbing into a different language, face-to-face live chats, and virtual avatars in games and videos. Thus, a lot of prior works [19, 32, 26, 44, 43, 25] have been studied to generate a talking head video that has accurate lip shapes according to arbitrary audio inputs. Most of the prior works mainly focus on enhancing synchronization between lip shapes and audio input. Some of the previous methods [44, 8, 32] use intermediate structural representations such as landmarks and 3D models. They predicted the representations from the audio input and synthesized a talking head video of a target person. However, they suffered from inaccurate lip-sync results since such representations are too sparse to produce fine-grained details in lip-syncing. Recently, another line of methods [26, 25] mapped input audio to latent space and leveraged it to construct the mouth region of the target identity. While it achieves satisfactory results in lip-syncing, it generated blurry lower faces which are visually implausible. Furthermore, most methods only consider synthesizing frame-by-frame, lacking temporal consistency at the video level. In this paper, we propose StyleLipSync, a style-based lip-sync video generative model that can generate identity-agnostic lip-synchronizing video from the arbitrary audio input. Our model consists of the following components. First, different from a previous masking method [19, 26, 25, 5] which masks the entire lower half face, we propose Pose-aware Masking. We analyze that the previous masking method cause unpleasant artifacts and unnatural jaw moving in the generated videos. To circumvent this, we utilize a 3D face mesh predictor [10, 21] and generate lip masks with consideration of pose information and facial semantics such as jaw shape. Second, our image decoder is based on a style-based generator, namely StyleGAN [16, 17, 15]. 
StyleGANs have demonstrated their effectiveness in various facial generative tasks, including face editing[1, 2], face enhancement [40], and video generation [33, 22]. As a pre-trained StyleGAN already contains expressive and diverse face priors in style latent space [1], we leverage it to synthesize the high-fidelity lip region of the target person. Furthermore, thanks to the continuous and linear nature of the latent space [16, 11, 30], we linearly manipulate the style codes using the audio input to generate lip-synced video frames. Additionally, we propose Style-aware Masked Fusion to effectively adopt a skip-connection to our decoder, which helps to preserve the 2D structure of the image and improves lip fidelity. Finally, we propose a Moving-average based Latent Smoothing module that makes the latent trajectory smoother for enhancing the temporal consistency in the synthesized talking head video. While our model can synthesize a talking head video of the target person, there is a slight identity gap between a generated video and the target person. The gap can be noticeable, for example, in racial faces, which are relatively scarce in the training data. One approach to addressing this issue is to fine-tune the generator on a few seconds of video of the target person to create a personalized model. Several fine-tuning methods [28, 23, 35] have already been demonstrated and widely adopted by the industry to achieve product-level quality. However, we analyze that simply fine-tuning the generator loses its ability to generalize to arbitrary audio inputs, which is critical for generating talking head videos. Therefore, to minimize the side effect, we propose a sync regularizer enforcing the audio generalization performance. The key idea is to leverage the audio from the training data, not from the target video. Specifically, we not only optimize the generator to reconstruct the target video but also synthesize the video corresponding to the randomly sampled audio from the training data and maintain a sync correlation between the synthesized video and the audio. As a result, we obtain a personalized lip-sync generative model that can synthesize a video of the target person for arbitrary audio. Our contributions are summarized as follows: * We present StyleLipSync, a lip-sync video generative model which generates lip-synchronizing video in-the-wild of \(256\times 256\) resolution with accurate and natural lip movement from a given masked video frames, audio segment, and single reference image. * We additionally propose a few-shot adaptation method for totally unseen faces, which uses only a few seconds of video by introducing a sync regularizer to maintain audio generalization. * Experimental results show that StyleLipSync achieves state-of-the-art performance in terms of lip-sync and visual quality, even with the zero-shot setting. ## 2 Related Works ### Lip-sync Video Generation Lip-sync video generation aims to generate a talking face video with lip motion synchronized with the given input audio. Early works [19, 26] generate lip from the lower half masked face image corresponding to the input audio. Specifically, Wav2Lip [26] uses a pre-trained SyncNet [7] as a lip-sync expert which maximizes the correlation between the generated lip and the input audio. Similarly, we use lip-sync expert for audio-visual alignment, which is trained with a contrastive manner proposed in [22]. 
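The few-shot adaptation with the sync regularizer outlined in the introduction can be summarised as a single fine-tuning step. In the sketch below, the loss weights, the L1 reconstruction term and the interfaces of the generator and of the frozen sync expert are placeholders rather than the exact training recipe.

```python
import torch
import torch.nn.functional as F

def adaptation_step(generator, sync_expert, target_batch, train_audio, optimizer,
                    lambda_sync=1.0):
    """One personalization step: reconstruct the target clip + sync regularizer.

    target_batch : (masked_frames, audio, reference, ground_truth) from the target person
    train_audio  : audio segments sampled from the original training corpus, used only
                   to preserve lip-sync generalization to arbitrary audio
    sync_expert  : frozen audio-visual expert returning a loss (lower = better sync)
    """
    masked, audio_tgt, ref, gt = target_batch
    optimizer.zero_grad()

    pred = generator(masked, audio_tgt, ref)           # personalization (reconstruction) term
    loss_rec = F.l1_loss(pred, gt)

    pred_rand = generator(masked, train_audio, ref)     # generate with *random* training audio
    loss_sync = sync_expert(pred_rand, train_audio)     # keep the result in sync with that audio

    loss = loss_rec + lambda_sync * loss_sync
    loss.backward()
    optimizer.step()
    return loss_rec.item(), loss_sync.item()
```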
VideoRetalking [5] improves Wav2Lip [26] in a two-stage manner, where it first generates low-resolution (\(96\times 96\)) video and then increases the resolution by using a single identity-specific super-resolution network. In contrast, we propose a zero-shot model that directly generates lip-sync video of \(256\times 256\) resolution and also propose an unseen face adaptation method that enhances the personal characteristics from a few-shot target video. SyncTalkFace [25] introduces a lip memory network, which encodes lip motion features into a discrete space in the training phase and retrieves a lip feature from query audio at the inference phase. In contrast to [25], we utilize the continuous and linear latent space of a pre-trained style-based generator [17] to generate lip images with high fidelity and take video consistency into account in the latent space. ### GAN Prior Style-based generators [16, 17, 15] demonstrate the power of their semantic latent spaces, namely \(\mathcal{W}\), in image generation, image editing [11, 30], and video generation [33]. GAN-inversion [1, 2, 27, 34] utilizes pre-trained GANs to invert an image into a corresponding latent code so one can manipulate the attributes of the image only within the latent space. The extended \(\mathcal{W}+\) space has been shown to have much greater expressive power. For instance, pSp [27] adopts feature pyramid networks (FPNs) [20] to use \(\mathcal{W}+\), which follows the nature of the progressive generation [13] of StyleGAN [16], and achieves state-of-the-art performance in image-to-image translation (e.g., face in-painting). Similarly, we utilize \(\mathcal{W}+\) for a diverse and strong lip prior since we aim to build a lip-sync video generative model of arbitrary identity. Recent works [40, 36, 39] not only adopt a pre-trained GAN prior as their decoder but also introduce a skip-connection that concatenates the encoded and generated features, which helps the model preserve 2D spatial information. Specifically, GPEN [40] uses such a concatenation-based skip-connection and achieves state-of-the-art performance in blind face restoration. StyleSwap [39] adopts it for face swapping and introduces the ToMask branch that predicts the target facial attribute regions for swapping in a supervised manner. In contrast to those methods, we use an additive skip-connection, which is more efficient than concatenation, along with an unsupervised predicted masked sum, which helps the decoder distinguish the target lip region from the whole face and therefore increases the lip image fidelity. ### Personalization Although GAN priors have been successful in various tasks [27, 34], it is still challenging to faithfully recover person-specific information that lies out of distribution [1, 2, 27, 34, 40]. Recently, few-shot personalization has become an alternative to solving this problem [23]. Pivotal-tuning-inversion (PTI) [28] fine-tunes the image generator while freezing a single latent code, namely the _pivot_, to compensate for the person-specific information in the generative process, not in the encoding process. MyStyle [23] adopts PTI [28] for image inpainting and semantic editing by restricting the latent space to a subspace spanned by the multiple pivots from a few photos (roughly 100) of the target person. Stitch-it-in-time [35] adopts pivotal-tuning for video editing in a multi-stage manner, which leverages an off-the-shelf latent manipulation [29] to manipulate the video in the latent space and then stitch it to the source video.
Inspired by them, we propose a few-shot unseen face adaptation method that slightly fine-tunes the image decoder for a given latent code trajectory of a target identity and maintains audio generalization by introducing a sync regularizer. ## 3 Method Given lip-masked video frames \(X_{1:T}=(X_{t})_{t=1}^{T}\) and audio segments \(A_{1:T}=(A_{t})_{t=1}^{T}\), a lip-sync method generates video frames \[\hat{X}_{1:T}=(\hat{X}_{t})_{t=1}^{T}, \tag{1}\] where \(\hat{X}_{1:T}\) has lip movement synchronized with the audio segments \(A_{1:T}\). In contrast to previous lip-sync methods [19, 26, 25, 5], we leverage a 3D parametric facial mesh predictor [10, 21] to compute the lip mask so that the generator can be aware of semantically meaningful facial pose information (section 3.1). We utilize a pre-trained StyleGAN [17] as our decoder \(\mathbf{G}\). When the audio encoder \(\mathbf{E}_{aud}\) and reference encoder \(\mathbf{E}_{ref}\) map their inputs into the latent space \(\mathcal{W}+\) (section 3.3), the decoder \(\mathbf{G}\) generates lip-synced video frames \(\hat{X}_{1:T}\) from these latent codes (section 3.2), guided by the proposed _style-aware masked fusion_. To enhance temporal consistency, we propose a _Moving-average based Latent Smoothing_ module, which learns local motion between the latent codes and makes the video latent trajectory smoother. Finally, a sync loss [7, 22] is used for audio-lip synchronization. The overall framework of our model is described in Figure 2. ### Pose-aware Masking Dynamic head motion is an important factor in a natural talking style. However, existing methods [19, 26, 25, 5] employ a rectangular lower-half mouth mask without considering pose information. This often fails to produce appropriate masking regions when the head moves dynamically, which leads to unpleasant artifacts and unnatural jaw movement in the generated videos (see the first row in Figure 5 for examples). To address this limitation, we use face meshes obtained from a 3D face mesh predictor [10], which captures 3D parameters and predicts dense face geometry. We predict the 3D parameters and the mesh from the given video frames. Then, the predicted expression parameter \(\delta\in\mathbb{R}^{64}\) is used to adjust the mesh to obtain naturally opened and closed mouth meshes. We normalize the mesh vertices using the predicted pose parameters \(\tau\in\mathbb{R}^{3}\) (translation) and \(\gamma\in SO(3)\) (rotation) and keep only the lower frontal vertices. These meshes are combined and projected onto the original 2D plane to finally obtain our pose-aware lip masks. Figure 1 illustrates the framework of the pose-aware masking. This masking not only captures the pose information but also inherits facial semantics such as jaw shape. Ablation studies in section 5.4 show that the pose-aware masking helps the model improve visual quality under dynamic poses. ### Decoder **Lip Prior from Style-based Generator.** Generating lip-synced videos from scratch for an arbitrary person is difficult since the mapping from audio to lips is inherently one-to-many. In this paper, we leverage a style-based generator as our image decoder \(\mathbf{G}\) for the following two reasons. First, a pre-trained StyleGAN already contains expressive and diverse face priors [16, 17, 15] in the form of latent codes, namely _style codes_, in the latent space \(\mathcal{W}+\) [2]. This latent space enables us to better synthesize the lip region of the target person with the diverse lip prior. 
Second, the style codes form a continuous and linear [16, 11, 30] latent space, which enables us to realize high-level visual transformations, such as natural motion, with only a linear transformation of the latent code [37]. Hence, we can generate a talking head video with smooth lip motion by simply manipulating the style codes using audio, which previous lip-sync methods do not take into account. **Style-aware Masked Fusion (SaMF).** Recently, it has been shown that adding concatenation-based skip-connections to GAN-inversion models helps preserve the 2D spatial information of the input [39, 12, 40]. Similarly, we adopt an additive skip-connection in our model to effectively preserve the non-masked region of \(X_{1:T}\) while faithfully utilizing the latent space. Specifically, we propose _style-aware masked fusion_ (SaMF) for efficiently preserving the 2D spatial feature and relieving the information gap between masked and non-masked regions. SaMFs are introduced at the beginning of the decoder blocks. The decoder \(\mathbf{G}\) consists of \(L\) decoder blocks, each of which takes 2 style codes to modulate 3 convolution weights, as illustrated in Figure 3. The first style code in each decoder block modulates 2 different convolution weights, one for the convolution in the original block and the other for the SaMF. Figure 1: Illustration of pose-aware masking. SaMF learns to predict a 1-channel mask \(S_{t}^{l}\) of the current resolution from the encoded feature through the newly modulated convolution followed by the sigmoid, which is used for spatially weighted fusion of the encoded feature and the generated feature. Formally, given an encoded feature \(\mathbf{E}_{face}^{l}(X_{t})\) and a generated feature \(\mathbf{G}^{l-1}(X_{t})\) of the same dimension \(\mathbb{R}^{h\times w\times c}\), SaMF first predicts a spatial mask \(S_{t}^{l}\in\mathbb{R}^{h\times w\times 1}\) from \(\mathbf{E}_{face}^{l}(X_{t})\) and then outputs the fused feature as follows: \[S_{t}^{l}\otimes\mathbf{E}_{face}^{l}(X_{t})+(1-S_{t}^{l})\otimes\mathbf{G}^{l-1}(X_{t}). \tag{2}\] Ablation studies (section 5.4) show that SaMFs improve the fidelity of the mouth since they separate masked and non-masked regions. ### Encoders Our model has three different encoders: a face encoder \(\mathbf{E}_{face}\), a reference encoder \(\mathbf{E}_{ref}\), and an audio encoder \(\mathbf{E}_{aud}\). The face encoder \(\mathbf{E}_{face}\) takes the masked video frames \(X_{1:T}\) as input and then outputs \(L\) 2D spatial features \(\mathbf{E}_{face}(X_{t})=\{\mathbf{E}_{face}^{l}(X_{t})\mid l\in[1,2,\cdots,L]\}\) for each \(t\). These features are injected into the decoder \(\mathbf{G}\) through the style-aware masked fusion to efficiently preserve 2D spatial structure, as described in section 3.2. The reference encoder \(\mathbf{E}_{ref}\) maps a reference \(X_{ref}\) into \(2L\) reference style codes, each of which has 512 dimensions. We simply denote the reference style code as \(w_{ref}=[w_{ref}^{1}|w_{ref}^{2}|\cdots|w_{ref}^{2L}]\in\mathbb{R}^{512\times 2L}\). Similar to \(\mathbf{E}_{ref}\), the audio encoder \(\mathbf{E}_{aud}\) maps a single audio segment into \(2L\) audio style codes, each of which has 512 dimensions. As we use \(T\) consecutive audio segments \(A_{1:T}\), \(\mathbf{E}_{aud}\) independently maps \(A_{t}\) into \(a_{t}=[a_{t}^{1}|a_{t}^{2}|\cdots|a_{t}^{2L}]\in\mathbb{R}^{512\times 2L}\). We simply write \(a_{1:T}=(a_{t})_{t=1}^{T}\) for the \(T\) audio style codes. 
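To make the fusion rule in (2) concrete with the encoder and decoder features just described, below is a minimal PyTorch-style sketch of one SaMF step; the plain \(1\times 1\) convolution used to predict the mask (in place of a style-modulated convolution), the channel count, and the toy shapes are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class SaMFSketch(nn.Module):
    """Minimal sketch of style-aware masked fusion, Eq. (2).

    Assumption: the 1-channel mask S_t^l is predicted by a plain 1x1 convolution
    here, whereas the paper uses a style-modulated convolution.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.to_mask = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, enc_feat: torch.Tensor, gen_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat ~ E_face^l(X_t), gen_feat ~ G^{l-1}, both of shape (B, c, h, w).
        mask = torch.sigmoid(self.to_mask(enc_feat))       # S_t^l, shape (B, 1, h, w)
        return mask * enc_feat + (1.0 - mask) * gen_feat   # Eq. (2)

# Toy usage with arbitrary shapes.
fusion = SaMFSketch(channels=64)
enc = torch.randn(2, 64, 32, 32)
gen = torch.randn(2, 64, 32, 32)
print(fusion(enc, gen).shape)  # torch.Size([2, 64, 32, 32])
```

The spatially varying gate lets the decoder copy non-masked content from the encoder branch while synthesizing the masked lip region from the latent code, which is the separation of masked and non-masked regions referred to above.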
Please refer to the supplementary materials for the detailed encoder architectures. From these style codes \(w_{ref}\) and \(a_{1:T}\), we compute the target video's style codes over frames \(w_{1:T}\) by \[w_{1:T}=(w_{1},w_{2},\cdots,w_{T}), \tag{3}\] where \(w_{t}=w_{ref}+a_{t}\in\mathcal{W}+\). We compute the style codes by simply adding these two different codes based on the linearity [11, 30] of the latent space \(\mathcal{W}+\). The style codes \(w_{1:T}\) are then fed into the decoder \(\mathbf{G}\) through affine transformations to generate synced lip motion. Figure 3: Illustration of a decoder block. The encoded feature \(\mathbf{E}_{face}^{l}(X_{t})\) is injected into the \(l\)-th decoder block through Style-aware Masked Fusion (SaMF). Only the convolution in SaMF is trainable, while the others are frozen during the training phase. Figure 2: A framework of StyleLipSync. We leverage a 3D parametric mesh predictor [10, 21] to obtain pose-aware masked frames \(X_{1:T}\), which inherit the facial pose of the input frames. The face encoder \(\mathbf{E}_{face}\) maps \(X_{1:T}\) into 2D spatial features, which are then fed into the decoder \(\mathbf{G}\) through _style-aware masked fusion_ (SaMF). A single reference image \(X_{ref}\) and the audio segments \(A_{1:T}\) are mapped into the latent space, followed by _Moving-average based Latent Smoothing_ (MaLS). This module outputs smooth video latent codes \(\tilde{w}_{1:T}\subseteq\mathcal{W}+\) that represent temporally consistent lip movement. With the guidance of SaMFs and the smooth video latent codes \(\tilde{w}_{1:T}\), StyleLipSync can generate temporally consistent lip-synced videos. **Temporal Consistency.** Thanks to the semantically rich latent space \(\mathcal{W}+\), our model can generate an accurate lip-sync video frame by frame. However, this frame-wise independent encoding of style codes leads to inconsistent mouth movements in the final results. To remedy this, we assume that the generated style codes \(w_{1:T}\) form a trajectory of the target video [38] in \(\mathcal{W}+\) and enforce a smooth local transition of the motion [31, 37] along the trajectory. Toward this, we introduce _Moving-average based Latent Smoothing_ (MaLS) modules, each of which consists of a stack of weighted moving-average [3] and 1D convolution operations acting on the style codes along the time axis. More precisely, we employ \(2L\) MaLS modules for the \(L\) resolutions, each of which takes the \(l\)-th component of \(w_{t-1},w_{t}\), and \(w_{t+1}\) as its inputs to learn the local difference between them, and then we inject the local motions into \(w_{ref}\) to compute the smooth style code \(\tilde{w}_{t}\): \[\tilde{w}_{t}^{l}=w_{ref}^{l}+\text{MaLS}^{l}(w_{t-1}^{l},w_{t}^{l},w_{t+1}^{l}), \tag{4}\] where MaLS\({}^{l}\) denotes the \(l\)-th MaLS module. Please refer to the supplementary materials for the detailed MaLS architecture. With this smooth latent code \(\tilde{w}_{t}\), we compute the video frames: \[\hat{X}_{1:T}=\left(\mathbf{G}(\tilde{w}_{t},\mathbf{E}_{face}(X_{t}))\right)_{t=1}^{T}. \tag{5}\] For better initialization [27], we add the average code \(w_{avg}\) of the pre-trained generator to each \(\tilde{w}_{t}\) in (5). ### Training Objective We train StyleLipSync to reconstruct target video frames from the corresponding audio. We randomly choose \(T=5\) consecutive frames with corresponding audio segments and a single reference frame. 
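Before turning to the losses, here is a minimal sketch of the style-code composition in (3) and the smoothing in (4); the 3-frame window, the single 1D convolution, and all dimensions are assumptions for illustration rather than the exact MaLS architecture, which is detailed in the supplementary materials.

```python
import torch
import torch.nn as nn

class MaLSSketch(nn.Module):
    """Minimal sketch of Moving-average based Latent Smoothing, Eq. (4).

    A learnable weighted moving average over a 3-frame window plus a 1D
    convolution along time; depth and kernel sizes are illustrative assumptions.
    """
    def __init__(self, dim: int = 512):
        super().__init__()
        self.avg_weights = nn.Parameter(torch.ones(3) / 3.0)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, w_window: torch.Tensor) -> torch.Tensor:
        # w_window: (B, 3, dim) holding the l-th codes of w_{t-1}, w_t, w_{t+1}.
        weights = torch.softmax(self.avg_weights, dim=0).view(1, 3, 1)
        smoothed = (weights * w_window).sum(dim=1, keepdim=True)                 # (B, 1, dim)
        local = self.conv(w_window.transpose(1, 2)).mean(dim=2, keepdim=True)    # (B, dim, 1)
        return (smoothed.transpose(1, 2) + local).squeeze(-1)                    # local motion term, (B, dim)

# Style-code composition (Eq. 3) and smoothing (Eq. 4) for a single layer l.
B, dim = 2, 512
w_ref_l = torch.randn(B, dim)                  # reference style code, layer l
a_l = torch.randn(B, 3, dim)                   # audio style codes a_{t-1}, a_t, a_{t+1}
w_l = w_ref_l.unsqueeze(1) + a_l               # w_t = w_ref + a_t        (Eq. 3)
w_tilde_l = w_ref_l + MaLSSketch(dim)(w_l)     # w~_t = w_ref + MaLS(...)  (Eq. 4)
print(w_tilde_l.shape)  # torch.Size([2, 512])
```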
Image perceptual loss [41] is used to minimize the perceptual image distance between generated frames \(\hat{X}\) and ground-truth frames \(Y\): \[\mathcal{L}_{lpips}=\sum_{i=1}^{N}\left\|\phi^{i}(\hat{X})-\phi^{i}(Y)\right\|_{1}, \tag{6}\] where \(N\) is the number of feature extractors and \(\phi^{i}(\cdot)\) is the \(i\)-th feature extractor. Similar to [31], we use the multi-scale perceptual loss with 3 levels. For audio-visual alignment, we utilize a SyncNet trained in a contrastive manner [22] that minimizes the cosine distance between generated frames \(\hat{X}\) and the corresponding audio segment \(A\): \[\mathcal{L}_{sync}=1-\cos(f_{v}(\hat{X}),f_{a}(A)), \tag{7}\] where \(\cos(\cdot,\cdot)\) denotes the cosine similarity. \(f_{v}(\cdot)\) and \(f_{a}(\cdot)\) are the frame and audio feature extractors, respectively. The final objective \(\mathcal{L}_{train}\) is computed as: \[\mathcal{L}_{train}=\lambda_{1}\mathcal{L}_{lpips}+\lambda_{2}\mathcal{L}_{sync}, \tag{8}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are the balancing coefficients. ## 4 Unseen Face Adaptation Although StyleLipSync successfully generates accurately lip-synced videos with high fidelity, the model may fail to exactly synthesize unseen faces lying out of distribution. We refer to this problem as the _failure of identity preservation_. Therefore, to handle this, we fine-tune our decoder \(\mathbf{G}\) on a video of the target person to obtain a personalized model that is better able to synthesize the target person. Let \(X_{1:T}\) be masked video frames of an _unseen face_, which correspond to the audio segments \(A_{1:T}\), and let \(X_{ref}\) be a reference frame. The frozen encoders convert each input into the intermediate representations \(\tilde{w}_{1:T}\) and \((\mathbf{E}_{face}(X_{t}))_{t=1}^{T}\). Then we fine-tune the decoder \(\mathbf{G}_{\theta}\), now parameterized by \(\theta\), by minimizing the distance between \(\left(\mathbf{G}_{\theta}(\tilde{w}_{t},\mathbf{E}_{face}(X_{t}))\right)_{t=1}^{T}\) and the target frames, as in (8). However, fine-tuning the decoder on a short video of a single identity leads to over-fitting and a loss of lip-sync generality, as the generator can memorize the target video [24]. To prevent this, we introduce a sync regularizer \(\mathcal{R}_{sync}\) that enforces audio generality on the decoder \(\mathbf{G}_{\theta}\) by leveraging audio from the training dataset, not from the target video. Formally, given audio segments \(A^{\prime}_{1:T}\) randomly chosen from the training data (Voxceleb2 [6]) and \(w_{ref}\), we compute smooth style codes \(\tilde{w}_{1:T}^{\prime}\), and then decode them into a synced video \(\hat{X}^{\prime}_{1:T}\). The sync regularizer \(\mathcal{R}_{sync}\) is defined as \[\mathcal{R}_{sync}=1-\cos(f_{v}(\hat{X}^{\prime}),f_{a}(A^{\prime})), \tag{9}\] which enforces \(\mathbf{G}_{\theta}\) to generate \(\hat{X}^{\prime}_{1:T}\) aligned with \(A^{\prime}_{1:T}\). The final objective for single-person adaptation is given as follows: \[\theta^{*}=\underset{\theta}{\text{argmin}}\ \mathcal{L}_{train}+\lambda_{R}\mathcal{R}_{sync}, \tag{10}\] where \(\lambda_{R}\) is the regularizer coefficient. Figure 4: Adaptation for Unseen Face. We slightly tune the decoder \(\mathbf{G}_{\theta}\) with the proposed sync regularizer \(\mathcal{R}_{sync}\), while freezing all encoders' weights. The face encoder \(\mathbf{E}_{face}\) and SaMFs are omitted here for simplicity. 
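To make the composition of the objectives in (6)-(10) concrete, the following sketch assembles them from toy stand-in tensors; in the real system the features come from the generator, a multi-scale LPIPS network, and SyncNet, and only a single perceptual scale is shown here, so everything below is illustrative rather than the actual training code.

```python
import torch
import torch.nn.functional as F

def sync_distance(video_feat: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
    """1 - cos(f_v, f_a): the form shared by L_sync (7) and R_sync (9)."""
    return 1.0 - F.cosine_similarity(video_feat, audio_feat, dim=-1).mean()

# Toy stand-ins for the quantities entering Eqs. (6)-(10).
B, D = 4, 256
phi_gen, phi_gt = torch.randn(B, D), torch.randn(B, D)    # perceptual features of generated / GT frames
fv_tgt, fa_tgt = torch.randn(B, D), torch.randn(B, D)     # SyncNet features, target-audio branch
fv_reg, fa_reg = torch.randn(B, D), torch.randn(B, D)     # SyncNet features, training-audio branch

l_lpips = (phi_gen - phi_gt).abs().sum(dim=-1).mean()     # Eq. (6), single scale only
l_sync = sync_distance(fv_tgt, fa_tgt)                    # Eq. (7)
lam1, lam2, lam_R = 10.0, 0.1, 0.1                        # coefficients reported in Sec. 5.2
l_train = lam1 * l_lpips + lam2 * l_sync                  # Eq. (8)
r_sync = sync_distance(fv_reg, fa_reg)                    # Eq. (9), audio sampled from the training set
adaptation_objective = l_train + lam_R * r_sync           # Eq. (10)
print(float(adaptation_objective))
```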
Ablation studies in section 5.4 show that \(\mathcal{R}_{sync}\) regularizes the audio generality even though we use audio from the training set. ## 5 Experiments ### Dataset We train our model on Voxceleb2 [6], which consists of in-the-wild talking face videos collected from YouTube. It contains more than 145,000 videos of about 6,100 identities in the train set and more than 4,900 videos of more than 110 identities in the test set. We convert all videos into frames at 25 fps and then crop and resize those frames to \(256\times 256\) resolution, following the method of [31]. For more semantically rich face priors, we only use videos where the detected face bounding boxes' height (and width) is larger than 256. After pre-processing, the remaining 11051 videos and 340 videos are used as the training set and the test set, respectively. All audio is re-sampled to 16kHz and converted into a mel-spectrogram to be used as our audio representation, similar to the method in [43, 22]. We also use HDTF [42] to further test our model in cross-id experiments. It is widely used to evaluate high-resolution talking face generative models, and its head pose dynamics are not as significant as those of Voxceleb2. ### Implementation Details We pre-train StyleGAN2 [17] on Voxceleb2 [6] following the implementation of [14]. For training StyleLipSync and SyncNet, we set the frame length \(T=5\) and employ the Adam [18] optimizer with learning rate \(10^{-4}\) throughout the training phase in both cases. All experiments are performed on \(2\) TITAN RTX GPUs. We set \(\lambda_{1}=10,\lambda_{2}=0.1\) for training and \(\lambda_{1}=10,\lambda_{2}=0.1,\lambda_{R}=0.1\) for adaptation. For all inference, we use the first frame as the reference image. For image quality metrics, we use PSNR and SSIM. We also calculate CSIM [9] in the cross-id experiment, which measures face similarity between images in a pre-trained face embedding space. For lip-sync quality metrics, we use LMD, LSE-D, and LSE-C. LMD is the absolute distance of facial landmarks between the target and generated frames. LSE-D and LSE-C are proposed in [26], where LSE-D measures the distance between the lip and audio representations, and LSE-C measures the lip-sync confidence. ### Evaluation **Reconstruction Results.** We compare with other state-of-the-art methods for lip-sync (Wav2Lip [26]) and talking face generation (ATVG [4], MakeItTalk [44], and PC-AVS [43]) on reconstruction results for the Voxceleb2 test dataset. Table 1 shows that StyleLipSync outperforms the baselines on all metrics except the LSE-C score. Wav2Lip [26] achieves the highest LSE-C score; however, it achieves low image quality metrics since it generates \(96\times 96\) low-resolution videos. PC-AVS [43] achieves lip-sync scores comparable to StyleLipSync; however, it achieves lower image quality metrics than our model since it relies heavily on its specific pre-processing and fails to generate in many cases. We also illustrate the qualitative results in Figure 5(a). Wav2Lip [26] produces a lip-synchronized video with considerable visual artifacts since it cannot adapt to videos with dynamic head pose. MakeItTalk [44] generates videos with poor lip-sync since it uses sparse facial landmarks. PC-AVS [43] generates an accurate lip-synchronized video following the input head pose. However, it struggles to preserve the unseen identity and introduces visual artifacts. 
StyleLipSync generates natural lip motion with high fidelity and preserves the input identity, producing results comparable to the ground-truth video. **Cross-id Results.** To evaluate lip-sync generalization, we conduct cross-identity experiments using unseen videos and audio. We randomly sample 10 videos of different identities and 150 audio clips without duplication from HDTF [42]. We use the first 10 seconds of the videos and audio. For each video, we generate 15 lip-synced videos from 15 different audio clips, where those face-audio pairs are not from the same source. In Table 2, we report the LSE-D and LSE-C for lip-sync quality and the CSIM for face similarity. Wav2Lip [26] achieves a higher LSE-C score than StyleLipSync; however, it achieves a low CSIM score since it produces a low-resolution video with visual artifacts. MakeItTalk [44] achieves the best CSIM score, while its lip-sync quality is the worst since it uses sparse facial landmarks. PC-AVS [43] achieves the highest LSE-C while achieving the lowest CSIM since it cannot preserve an unseen face's identity. StyleLipSync achieves the best LSE-D score and a CSIM score comparable to MakeItTalk [44]. We show qualitative results of cross-id experiments with target lip references in Figure 5(b). PC-AVS [43] can generate an accurately lip-synchronized video compared to the target lip, while it fails to preserve facial details of the unseen face. MakeItTalk [44] produces a high-resolution and identity-preserving video; however, it is out-of-sync compared to the target. StyleLipSync generates a high-resolution lip-synchronized video compared to the target lip, without any visual artifacts. \begin{table} \begin{tabular}{l||c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{Voxceleb2 (Reconstruction)} \\ & \multicolumn{2}{c}{Image} & \multicolumn{3}{c}{Lip-Sync} \\ & SSIM \(\uparrow\) & PSNR \(\uparrow\) & LMD \(\downarrow\) & LSE-D \(\downarrow\) & LSE-C \(\uparrow\) \\ \hline Wav2Lip\({}_{96\times 96}\) [26] & 0.448 & 13.534 & 6.422 & 6.999 & **8.329** \\ ATVG\({}_{128\times 128}\) [4] & 0.461 & 13.349 & 7.165 & 8.821 & 5.421 \\ MakeItTalk\({}_{256\times 256}\) [44] & 0.419 & 12.686 & 3.649 & 10.895 & 3.624 \\ PC-AVS\({}_{224\times 224}\) [43] & 0.369 & 13.210 & 2.812 & 7.278 & 7.699 \\ **Ours\({}_{256\times 256}\)** & **0.631** & **19.607** & **2.696** & **6.628** & 8.056 \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison of reconstruction on Voxceleb2 test data. The best score for each metric is in **bold**. ### Ablation Studies **Ablation Studies on Zero-shot Model.** Figure 6 and Table 3 summarize the ablation studies on our zero-shot method for reconstruction on Voxceleb2 test data. If we replace the pose-aware masking with the standard rectangular masking (w/o Pose mask) used in [26, 5, 25], considerable visual artifacts occur around the masked region since it is insufficient to capture the pose difference between the reference and the target. To validate SaMF, we replace the modulated convolutions in SaMFs with standard convolutions. Figure 6(c) shows that SaMFs significantly improve the lip region's fidelity since the modulated convolution helps the block be aware of the lip style. As shown in Table 3, MaLS significantly improves lip-sync quality, which cannot be reflected in a single image. Please refer to the supplementary videos for the ablation studies on MaLS. **Ablation Studies on Unseen Face Adaptation.** We conduct ablation studies on the proposed unseen face adaptation following the same setting as in Table 2. 
Additionally, we use 60 seconds of video for each of the 10 personalized models and 15 audio clips of 10 seconds from different identities for inference. \begin{table} \begin{tabular}{l||c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{HDTF (Cross-Id)} \\ & Image & \multicolumn{2}{c}{Lip-Sync} \\ & CSIM \(\uparrow\) & LSE-D \(\downarrow\) & LSE-C \(\uparrow\) \\ \hline Wav2Lip\({}_{96\times 96}\) [26] & 0.656 & 7.047 & 8.576 \\ ATVG\({}_{128\times 128}\) [4] & 0.287 & 8.668 & 6.040 \\ MakeItTalk\({}_{256\times 256}\) [44] & **0.770** & 10.641 & 4.725 \\ PC-AVS\({}_{224\times 224}\) [43] & 0.238 & 6.921 & **8.858** \\ \hline **Ours\({}_{256\times 256}\)** & 0.737 & **6.825** & 8.209 \\ \hline \end{tabular} \end{table} Table 2: Quantitative comparison of cross-identity results for unseen faces. We report CSIM [9] as the image quality metric since there are no ground-truth frames for the cross-id experiments. The best score for each metric is in **bold**. Figure 5: Comparison with state-of-the-art. Figure 6: Qualitative comparison of zero-shot model. Figure 7(b) shows the lip-sync metrics as a function of the adaptation step. In the cases without the sync regularizer, the models lose lip-sync generality; in other words, they memorize the short target video as the adaptation phase proceeds. Introducing the sync regularizer together with the sync loss stabilizes the lip-sync metrics and even improves them compared to the zero-shot results. If we use the sync regularizer without the sync loss, the lip-sync quality is stabilized but slightly lower than the zero-shot results due to the lack of ground-truth audio-visual correlation. Audio generalization for unseen face data is thus maintained even though we use audio from the training data. Figure 7(a) supports the validity of the adaptation method. It shows the visual difference between the zero-shot results and the adaptation results. The zero-shot model can generate accurate lip motion for the target audio, while it shows a slight difference in person-specific details compared to the reference images. Through the proposed adaptation method, person-specific lip shape, teeth, and wrinkles are faithfully recovered while maintaining the lip motion of the zero-shot results. ### User Study We further conduct a user study based on MOS (mean opinion score) to compare the perceptual quality of each model, including our zero-shot and personalized models. 5 videos generated by each model in the cross-id setting are used for this study. 20 participants scored the lip-sync accuracy, face similarity, and visual quality of each video in the range of 1 to 5. As shown in Table 4, our models outperform the baselines on all metrics. Specifically, our zero-shot model achieves the highest lip-sync accuracy, and our adaptation model achieves the highest scores in face similarity and visual quality with competitive lip-sync accuracy. Please refer to the supplementary materials for the demo videos. ## 6 Conclusion We proposed StyleLipSync, a lip-sync video generative model for arbitrary identities, which leverages expressive lip priors from a pre-trained style-based generator. In contrast to existing lip-sync generative models, we introduce pose-aware masking for the lip region by utilizing a 3D parametric mesh predictor, which embeds the pose information in the mask itself. By designing smooth lip motion with the moving-average based latent smoothing in the continuous and linear latent space, StyleLipSync can generate temporally consistent lip motion. 
Furthermore, we propose a few-shot lip-sync adaptation method for a single person lying out of distribution, which uses only a few seconds of the target person's video. Experimental results show that StyleLipSync can generate realistic lip-synced video from arbitrary audio even in the zero-shot setting, and that the proposed adaptation method enhances person-specific information without losing lip-sync generality. \begin{table} \begin{tabular}{l||c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{User Study (MOS)} \\ & Lip-sync & Face & Visual \\ & Accuracy & Similarity & Quality \\ \hline Wav2Lip [26] & \(3.76\pm 0.20\) & \(2.98\pm 0.25\) & \(2.03\pm 0.21\) \\ ATVG [4] & \(2.19\pm 0.21\) & \(2.45\pm 0.25\) & \(1.54\pm 0.17\) \\ MakeItTalk [44] & \(2.32\pm 0.24\) & \(3.47\pm 0.23\) & \(2.95\pm 0.24\) \\ PC-AVS [43] & \(3.28\pm 0.21\) & \(2.55\pm 0.22\) & \(2.51\pm 0.22\) \\ **Ours** (zero-shot) & **\(4.01\pm 0.16\)** & \(3.42\pm 0.22\) & \(3.55\pm 0.20\) \\ **Ours** (adaptation) & \(3.52\pm 0.21\) & **\(4.03\pm 0.17\)** & **\(3.64\pm 0.19\)** \\ \hline \hline \end{tabular} \end{table} Table 4: Mean Opinion Score (MOS) user study results with \(95\%\) confidence intervals in the cross-id setting. Scores range from 1 to 5. The best score for each metric is in **bold**. \begin{table} \begin{tabular}{l||c|c|c|c|c} \hline \hline \multirow{2}{*}{Method (ours)} & \multicolumn{5}{c}{Voxceleb2 (Reconstruction)} \\ & \multicolumn{2}{c}{Image} & \multicolumn{3}{c}{Lip-Sync} \\ & SSIM \(\uparrow\) & PSNR \(\uparrow\) & LMD \(\downarrow\) & LSE-D \(\downarrow\) & LSE-C \(\uparrow\) \\ \hline w/o Pose mask & 0.602 & 18.867 & 3.057 & 6.771 & 7.748 \\ w/o MaLS & 0.593 & 18.186 & 2.740 & 6.994 & 7.577 \\ w/o SaMF & 0.591 & 18.181 & 2.764 & 6.838 & 7.780 \\ **Full** & **0.631** & **19.607** & **2.696** & **6.628** & **8.056** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the zero-shot model. The best score for each metric is in **bold**. Figure 7: Experiments on the face adaptation with cross-id setting. 
2309.16096
Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness
The susceptibility of modern machine learning classifiers to adversarial examples has motivated theoretical results suggesting that these might be unavoidable. However, these results can be too general to be applicable to natural data distributions. Indeed, humans are quite robust for tasks involving vision. This apparent conflict motivates a deeper dive into the question: Are adversarial examples truly unavoidable? In this work, we theoretically demonstrate that a key property of the data distribution -- concentration on small-volume subsets of the input space -- determines whether a robust classifier exists. We further demonstrate that, for a data distribution concentrated on a union of low-dimensional linear subspaces, utilizing structure in data naturally leads to classifiers that enjoy data-dependent polyhedral robustness guarantees, improving upon methods for provable certification in certain regimes.
Ambar Pal, Jeremias Sulam, René Vidal
2023-09-28T01:39:47Z
http://arxiv.org/abs/2309.16096v2
# Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness ###### Abstract The susceptibility of modern machine learning classifiers to adversarial examples has motivated theoretical results suggesting that these might be unavoidable. However, these results can be too general to be applicable to natural data distributions. Indeed, humans are quite robust for tasks involving vision. This apparent conflict motivates a deeper dive into the question: Are adversarial examples truly unavoidable? In this work, we theoretically demonstrate that a key property of the data distribution - concentration on small-volume subsets of the input space - determines whether a robust classifier exists. We further demonstrate that, for a data distribution concentrated on a union of low-dimensional linear subspaces, exploiting data structure naturally leads to classifiers that enjoy good robustness guarantees, improving upon methods for provable certification in certain regimes. ## 1 Introduction, Motivation and Contributions Research in adversarial learning has shown that traditional neural network based classification models are prone to anomalous behaviour when their inputs are modified by tiny, human-imperceptible perturbations. Such perturbations, called adversarial examples, lead to a large degradation in the accuracy of classifiers [50]. This behavior is problematic when such classification models are deployed in security sensitive applications. Accordingly, researchers have and continue to come up with _defenses_ against such adversarial attacks for neural networks. Such defenses [45; 54; 39; 21] modify the training algorithm, alter the network weights, or employ preprocessing to obtain classifiers that have improved empirical performance against adversarially corrupted inputs. However, many of these defenses have been later broken by new adaptive attacks [1; 7]. This motivated recent impossibility results for adversarial defenses, which aim to show that all defenses admit adversarial examples. While initially such results were shown for specially parameterized data distributions [17], they were subsequently expanded to cover general data distributions on the unit sphere and the unit cube [44], as well as for distributions over more general manifolds [11]. On the other hand, we humans are an example of a classifier capable of very good (albeit imperfect [16]) robust accuracy against \(\ell_{2}\)-bounded attacks for natural image classification. Even more, a large body of recent work has constructed _certified_ defenses [10; 57; 9; 28; 19; 49] which obtain non-trivial performance guarantees under adversarially perturbed inputs for common datasets like MNIST, CIFAR-10 and ImageNet. This apparent contention between impossibility results and the existence of robust classifiers for natural datasets indicates that the bigger picture is more nuanced, and motivates a closer look at the impossibility results for adversarial examples. Our first contribution is to show that these results can be circumvented by data distributions whose mass concentrates on small regions of the input space. This naturally leads to the question of whether such a construction is necessary for adversarial robustness. We answer this question in the affirmative, formally proving that a successful defense exists only when the data distribution concentrates on an exponentially small volume of the input space. 
At the same time, this suggests that exploiting the inherent structure in the data is critical for obtaining classifiers with broader robustness guarantees. Surprisingly, almost1 all _certified_ defenses do not exploit any structural aspects of the data distribution like concentration or low-dimensionality. Motivated by our theoretical findings, we study the special case of data distributions concentrated near a union of low-dimensional linear subspaces, to create a certified defense for perturbations that go beyond traditional \(\ell_{p}\)-norm bounds. We find that simply exploiting the low-dimensional data structure leads to a natural classification algorithm for which we can derive norm-independent polyhedral certificates. We show that our method can certify accurate predictions under adversarial examples with an \(\ell_{p}\) norm larger than what can be certified by applying existing, off-the-shelf methods like randomized smoothing [10]. Thus, we demonstrate the importance of data structure in both the theory and practice of certified adversarial robustness. Footnote 1: See Section 6 for more details. More precisely, we make the following main contributions in this work: 1. We formalize a notion of \((\epsilon,\delta)\)-concentration of a probability distribution \(q\) in Section 2, which states that \(q\) assigns at least \(1-\delta\) mass to a subset of the ambient space having volume \(O(\exp{(-\epsilon)})\). We show that \((\epsilon,\delta)\)-concentration of \(q\) is a necessary condition for the existence of any classifier obtaining at most \(\delta\) error over \(q\), under perturbations of size \(\epsilon\). 2. We find that \((\epsilon,\delta)\)-concentration is too general to be a sufficient condition for the existence of a robust classifier, and we follow up with a stronger notion of concentration in Section 3 which is sufficient for the existence of robust classifiers. Following this stronger notion, we construct an example of a strongly-concentrated distribution, which circumvents existing impossibility results on the existence of robust classifiers. 3. We then consider a data distribution \(q\) concentrated on a union of low-dimensional linear subspaces in Section 4. We construct a classifier for \(q\) that is robust to perturbations following threat models more general than \(\ell_{p}\). Our analysis results in polyhedral certified regions whose faces and extreme rays are described by selected points in the training data. 4. We perform empirical evaluations on MNIST in Section 5, demonstrating that our certificates are complementary to existing off-the-shelf approaches like Randomized Smoothing (RS), in the sense that both methods have different strengths. In particular, we demonstrate a region of adversarial perturbations where our method is certifiably robust, but RS is not. We then combine our method with RS to obtain certificates that enjoy the best of both worlds. ## 2 Existence of Robust Classifier Implies Concentration We will consider a classification problem over \(\mathcal{X}\times\mathcal{Y}\) defined by the data distribution \(p\) such that \(\mathcal{X}\) is bounded and \(\mathcal{Y}=\{1,2,\ldots,K\}\). We let \(q_{k}\) denote the conditional distribution \(p_{X|Y=k}\) for class \(k\in\mathcal{Y}\). 
We will assume that the data is normalized, i.e., \(\mathcal{X}=B_{\ell_{2}}(0,1)\), and the boundary of the domain is far from the data, i.e., for any \(x\sim q_{k}\), an adversarial perturbation of \(\ell_{2}\) norm at most \(\epsilon\) does not take \(x\) outside the domain \(\mathcal{X}\).2 Footnote 2: More details in Appendix A. We define the robust risk of a classifier \(f\colon\mathcal{X}\to\mathcal{Y}\) against an adversary making perturbations whose \(\ell_{2}\) norm is bounded by \(\epsilon\) as 3 Footnote 3: Note that for \(f(\bar{x})\) to be defined, it is implicit that \(\bar{x}\in\mathcal{X}\) in (1). \[R(f,\epsilon)=\Pr_{(x,y)\sim p}\left(\exists\bar{x}\in B_{\ell_{2}}(x,\epsilon) \text{ such that }f(\bar{x})\neq y\right). \tag{1}\] We can now define a robust classifier in our setting. **Definition 2.1** (Robust Classifier).: _A classifier \(g\) is defined to be \((\epsilon,\delta)\)-robust if the robust risk against perturbations with \(\ell_{2}\) norm bounded by \(\epsilon\) is at most \(\delta\), i.e., if \(R(g,\epsilon)\leq\delta\)._ The goal of this section is to show that if our data distribution \(p\) admits an \((\epsilon,\delta)\)-robust classifier, then \(p\) has to be _concentrated_. Intuitively, this means that \(p\) assigns a "large" measure to sets of "small" volume. We define this formally now. **Definition 2.2** (Concentrated Distribution).: _A probability distribution \(q\) over a domain \(\mathcal{X}\subseteq\mathbb{R}^{n}\) is said to be \((\epsilon,\delta)\)-concentrated, if there exists a subset \(S\subseteq\mathcal{X}\) such that \(q(S)\geq 1-\delta\) but \(\operatorname{Vol}(S)\leq c_{1}\exp(-c_{2}\epsilon)\) for some constants \(c_{1},c_{2}>0\). Here, \(\operatorname{Vol}\) denotes the standard Lebesgue measure on \(\mathbb{R}^{n}\), and \(q(S)\) denotes the probability of sampling a point in \(S\) under \(q\)._ With the above definitions in place, we are ready to state our first main result. **Theorem 2.1**.: _If there exists an \((\epsilon,\delta)\)-robust classifier \(f\) for a data distribution \(p\), then at least one of the class conditionals \(q_{1},q_{2},\ldots,q_{K}\) must be \((\epsilon,\delta)\)-concentrated. Further, if the classes are balanced, then all the class conditionals are \((\epsilon,K\delta)\)-concentrated._ The proof utilizes the Brunn-Minkowski theorem from high-dimensional geometry, and we provide a brief sketch here, deferring the full proof to Appendix A. Proof Sketch.: Due to the existence of a robust classifier \(f\), _i.e._, \(R(f,\epsilon)\leq\delta\), the first observation is that there must be at least one class which is classified with robust accuracy at least \(1-\delta\). Say this class is \(k\), and the set of all points which do not admit an \(\epsilon\)-adversarial example for class \(k\) is \(S\). Now, the second step is to show that \(S\) has the same measure (under \(q_{k}\)) as the \(\epsilon\)-shrinkage (in the \(\ell_{2}\) norm) of the set of all points classified as class \(k\). Finally, the third step involves using the Brunn-Minkowski theorem, to show that this \(\epsilon\)-shrinkage has a volume \(O(\exp(-n\epsilon))\), thus completing the argument. **Discussion on Theorem 2.1**.: We pause here to understand some implications of this result. * Firstly, recall the apparently conflicting conclusions from Section 1 between impossibility results (suggesting that robust classifiers do not exist) and the existence of robust classifiers in practice (such as that of human vision for natural data distributions). 
Theorem 2.1 shows that whenever a robust classifier exists, the underlying data distribution has to be concentrated. In particular, natural distributions corresponding to MNIST, CIFAR and ImageNet must therefore be concentrated. This suggests a resolution to the conflict: concentrated distributions must somehow circumvent existing impossibility results. Indeed, this is precisely what we will show in Section 3. * Secondly, while our results are derived for the \(\ell_{2}\) norm, it is not very hard to extend this reasoning to general \(\ell_{p}\) norms. In other words, whenever a classifier robust to \(\ell_{p}\)-norm perturbations exists, the underlying data distribution must be concentrated. * Thirdly, Theorem 2.1 has a direct implication towards classifier design. Since we now know that natural image distributions are concentrated, one should design classifiers that are tuned for small-volume regions in the input space. This might be the deeper principle behind the recent success [61] of robust classifiers adapted to \(\ell_{p}\)-ball like regions in the input space. We have thus seen that data concentration is a necessary condition for the existence of a robust classifier. A natural question is whether it is also sufficient. We address this question now. ## 3 Strong Concentration Implies Existence of Robust Classifier Say our distribution \(p\) is such that all the class conditionals \(q_{1},q_{2},\ldots,q_{k}\) are \((\epsilon,\delta)\)-concentrated. Is this sufficient for the existence of a robust classifier? The answer is negative, as we have not precluded the case where all of the \(q_{k}\) are concentrated over the same subset \(S\) of the ambient space. In other words, it might be possible that there exists a small-volume set \(S\subseteq\mathcal{X}\) such that \(q_{k}(S)\) is high for all \(k\). This means that whenever a data point lies in \(S\), it would be hard to distinguish which class it came from. In this case, even an accurate classifier might not exist, let alone a robust classifier4. To get around such issues, we define a stronger notion of concentration, as follows. Footnote 4: Recall that classifier not accurate at a point \((x,y)\), i.e., \(f(x)\neq y\), is by definition not robust at \(x\), as a \(v=0\) perturbation is already sufficient to ensure \(f(x+v)\neq y\). **Definition 3.1** (Strongly Concentrated Distributions).: _A distribution \(p\) is said to be \((\epsilon,\delta,\gamma)\)-strongly-concentrated if each class conditional distribution \(q_{k}\) is \((\epsilon,\delta)\)-concentrated (say over the set \(S_{k}\subseteq\mathcal{X}\)), and \(q_{k}\left(\bigcup_{k^{\prime}\neq k}S_{k^{\prime}}^{+2\epsilon}\right)\leq\gamma\), where \(S^{+\epsilon}\) denotes the \(\epsilon\)-expansion of the set \(S\) in the \(\ell_{2}\) norm, i.e., \(S^{+\epsilon}=\{x\colon\exists\bar{x}\in S\text{ such that }\|x-\bar{x}\|_{2}\leq\epsilon\}\)._ In essence, Definition 3.1 states that each of the class conditionals are concentrated on subsets of the ambient space, which do not intersect too much with one another. Hence, it is natural to expect that we would be able to construct a robust classifier by exploiting these subsets. 
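As a toy numerical illustration of this idea (not a construction from the paper), the sketch below places two classes on small, well-separated supports and checks that a classifier which predicts class \(k\) whenever a point falls near \(S_{k}\) is unaffected by \(\epsilon\)-bounded perturbations; the dimension, radii, and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, r = 20, 0.1, 0.05

# Two concentrated supports: point clouds of radius ~r around centers that are
# far apart, so that even their 2*eps-expansions do not intersect.
c1, c2 = np.zeros(n), np.zeros(n)
c1[0], c2[0] = -0.5, 0.5
S1 = c1 + r * rng.normal(size=(300, n)) / np.sqrt(n)
S2 = c2 + r * rng.normal(size=(300, n)) / np.sqrt(n)

def classify(x):
    """Predict the class whose concentrated support is closer; when the supports'
    expansions are disjoint this matches 'predict k on the expansion of S_k'."""
    d1 = np.linalg.norm(S1 - x, axis=1).min()
    d2 = np.linalg.norm(S2 - x, axis=1).min()
    return 1 if d1 <= d2 else 2

# Heuristic strong attack: push every sample straight toward the other support.
flips = 0
for x, y, other in [(s, 1, S2) for s in S1[:100]] + [(s, 2, S1) for s in S2[:100]]:
    target = other[np.linalg.norm(other - x, axis=1).argmin()]
    x_adv = x + eps * (target - x) / np.linalg.norm(target - x)
    flips += classify(x_adv) != y
print(f"label flips under eps={eps} perturbations: {flips} / 200")   # expected: 0
```

The disjointness of the expansions used by this toy classifier is exactly what the strong-concentration condition guarantees in general.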
Building upon this idea, we are able to show Theorem 3.1: **Theorem 3.1**.: _If the data distribution \(p\) is \((\epsilon,\delta,\gamma)\)-strongly-concentrated, then there exists an \((\epsilon,\delta+\gamma)\)-robust classifier for \(p\)._ The basic observation behind this result is that if the conditional distributions \(q_{k}\) had disjoint supports which were well-separated from each other, then one could obtain a robust classifier by predicting the class \(k\) on the entire \(\epsilon\)-expansion of the set \(S_{k}\) where the conditional \(q_{k}\) concentrates, for all \(k\). To go beyond this idealized case, we can exploit the strong concentration condition to carefully remove the intersections at the cost of at most \(\gamma\) in robust accuracy. We make these arguments more precise in the full proof, deferred to Appendix B, and we pause here to note some implications for existing results. **Implications for Existing Impossibility Results.** To understand how Theorem 3.1 circumvents the previous impossibility results, consider the setting from [44] where the data domain is the sphere \(\mathbb{S}^{n-1}=\{x\in\mathbb{R}^{n}\colon\|x\|_{2}=1\}\), and we have a binary classification setting with class conditionals \(q_{1}\) and \(q_{2}\). The adversary is allowed to make bounded perturbations w.r.t. the geodesic distance. In this setting, it can be shown (see [44, Theorem 1]) that any classifier admits \(\epsilon\)-adversarial examples for the minority class (say class \(1\)), with probability at least \[1-\alpha_{q_{1}}\beta\exp\left(-\frac{n-1}{2}\epsilon^{2}\right), \tag{2}\] where \(\alpha_{q_{1}}=\sup_{x\in\mathbb{S}^{n-1}}q_{1}(x)\) depends on the conditional distribution \(q_{1}\), and \(\beta\) is a normalizing constant that depends on the dimension \(n\). Note that this result assumes little about the conditional \(q_{1}\). Now, by constructing a strongly-concentrated data distribution over the domain, we will show that the lower bound in (2) becomes vacuous. **Example 3.1**.: _The data domain is the unit sphere \(\mathbb{S}^{n-1}\) equipped with the geodesic distance \(d\). The label domain is \(\{1,2\}\). \(P\) is an arbitrary, but fixed, point lying on \(\mathbb{S}^{n-1}\). The conditional density of class \(1\), i.e., \(q_{1}\) is now defined as_ \[q_{1}(x)=\begin{cases}\frac{1}{C}\frac{1}{\sin^{n-2}d(x,P)},&\text{ if }d(x,P)\leq 0.1\\ 0,&\text{ otherwise}\end{cases},\] _where \(C=0.1\) is a normalizing constant. The conditional density of class 2 is defined to be uniform over the complement of the support of \(q_{1}\), i.e. \(q_{2}=\operatorname{Unif}(\{x\in\mathbb{S}^{n-1}\colon d(x,P)>0.1\})\). Finally, the classes are balanced, i.e., \(p_{Y}(1)=p_{Y}(2)=1/2\)._ The data distribution constructed in Example 3.1 makes Eq. (2) vacuous, as the supremum over the density \(q_{1}\) is unbounded. Additionally, the linear classifier defined by the half-space \(\{x\colon\langle x,P\rangle\leq\cos(0.1)\}\) is robust (Appendix C provides a derivation of the robust risk, and further comments on generalizing this example). Example 3.1 is plotted for \(n=3\) dimensions in Fig. 1. Compatibility of Theorem 3.1 with existing Negative ResultsThus, we see that strongly concentrated distributions are able to circumvent existing impossibility results on the existence of robust classifiers. However, this does _not_ invalidate any existing results. 
Firstly, measure-concentration-based results [44, 17, 11] provide non-vacuous guarantees given a _sufficiently flat_ (not concentrated) data distribution, and hence do not contradict our results. Secondly, our results are existential and do not provide, in general, an algorithm to _construct_ a robust classifier given a strongly-concentrated distribution. Hence, we also do not contradict the existing stream of results on the computational hardness of finding robust classifiers [5, 51, 43]. Our positive results are complementary to all such negative results, demonstrating a general class of data distributions where robust classifiers do exist. For the remainder of this paper, we will look at a specific member of the above class of strongly concentrated data distributions and show how we can practically construct robust classifiers. Figure 1: A plot of \(q_{1}\). Redder colors denote a larger density, and the gray plane denotes the robust classifier. ## 4 Adversarially Robust Classification on Union of Linear Subspaces The union of subspaces model has been shown to be very useful in classical computer vision for a wide variety of tasks, which include clustering faces under varying illumination, image segmentation, and video segmentation [52]. Its concise mathematical description often enables the construction and theoretical analysis of algorithms that also perform well in practice. In this section, we will study robust classification on data distributions concentrated near a union of low-dimensional linear subspaces. This data structure will allow us to obtain a non-trivial, practically relevant case where we can show a provable improvement over existing methods for constructing robust classifiers in certain settings. Before delving further, we now provide a simple example (which is illustrated in Fig. 2) demonstrating how distributions concentrated about linear subspaces are concentrated precisely in the sense of Definition 3.1, and therefore allow for the existence of adversarially robust classifiers. **Example 4.1**.: _The data domain is the ball \(B_{\ell_{\infty}}(0,1)\) equipped with the \(\ell_{2}\) distance. The label domain is \(\{1,2\}\). Subspace \(S_{1}\) is given by \(S_{1}=\{x\colon x^{\top}e_{1}=0\}\), and \(S_{2}\) is given by \(S_{2}=\{x\colon x^{\top}e_{2}=0\}\), where \(e_{1},e_{2}\) are the standard unit vectors. The conditional densities are defined as_ \[q_{1} =\mathrm{Unif}(\{x\colon\|x\|_{\infty}\leq 1,|x^{\top}e_{1}|\leq e^{-\alpha}/2\}),\text{ and,}\] \[q_{2} =\mathrm{Unif}(\{x\colon\|x\|_{\infty}\leq 1,|x^{\top}e_{2}|\leq e^{-\alpha}/2\}),\] _where \(\alpha>0\) is a large constant. Finally, the classes are balanced, i.e., \(p_{Y}(1)=p_{Y}(2)=1/2\). With these parameters, \(q_{1},q_{2}\) are both \((\alpha,0)\)-concentrated over their respective supports. Additionally, \(p\) is \((\alpha,0,e^{-\alpha}/2+2\epsilon)\)-strongly-concentrated. A robust classifier \(f\) can be constructed following the proof of Theorem 3.1, and it achieves robust risk \(R(f,\epsilon)\leq e^{-\alpha}/2+2\epsilon\). See Appendix D for more details._ We will now study a specific choice of \(p\) that generalizes Example 4.1 and will let us move beyond the above simple binary setting. Recall that we have a classification problem specified by a data distribution \(p\) over the data domain \(\mathcal{X}\times\mathcal{Y}=B(0,1)\times\{1,2,\ldots,K\}\). Firstly, the classes are balanced, _i.e._, \(p_{Y}(k)=1/K\) for all \(k\in\mathcal{Y}\). 
Secondly, the conditional density, i.e., \(q_{k}=p_{X|Y=k}\), is concentrated on the set \(S_{k}^{+\gamma}\cap\mathcal{X}\), where \(S_{k}\) is a low-dimensional linear subspace, and the superscript denotes an \(\gamma\)-expansion, for a small \(\gamma>0\). For the purpose of building our robust classifier, we will assume access to a training dataset of \(M\)_clean_ data points \((s_{1},y_{1}),(s_{2},y_{2}),\ldots,(s_{M},y_{M})\), such that, for all \(i\), the point \(s_{i}\) lies exactly on one of the \(K\) low-dimensional linear subspaces. We will use the notation \(\mathbf{S}=[s_{1},s_{2},\ldots,s_{M}]\) for the training data matrix and \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{M})\) for the training labels. We will assume that \(M\) is large enough that every \(x\in\cup_{k}S_{k}\) can be represented as a linear combination of the columns of \(\mathbf{S}\). Now, the robust classification problem we aim to tackle is to obtain a predictor \(g\colon\mathcal{X}\to\mathcal{Y}\) which obtains a low robust risk, with respect to an additive adversary \(\mathcal{A}\) that we now define. For any data-point \(x\sim p\), \(\mathcal{A}\) will be constrained to make an additive perturbation \(v\) such that \(\mathrm{dist}_{\ell_{2}}(x+v,\cup_{i}S_{i})\leq\epsilon\). In other words, the attacked point can have \(\ell_{2}\) distance at most \(\epsilon\) from any of the linear subspaces \(S_{1},\ldots,S_{k}\). Note that \(\mathcal{A}\) is more powerful than an \(\ell_{2}\)-bounded adversary as the norm of the perturbation \(\|v\|_{2}\) might be large, as \(v\) might be parallel to a subspace. Under such an adversary \(\mathcal{A}\), given a (possibly adversarially perturbed) input \(x\), it makes sense to try to recover the corresponding point \(s\) lying on the union of subspaces, such that \(x=s+n\), such that \(\|n\|_{2}\leq\epsilon\). One way to do this is to represent \(s\) as a linear combination of a small number of columns of \(\mathbf{S}\), _i.e._, \(x=\mathbf{S}c+n\). This can be formulated as an optimization problem that minimizes the cardinality of \(c\), given by \(\|c\|_{0}\), subject to an approximation error constraint. Since such a problem is hard because of the \(\ell_{0}\) pseudo-norm, we relax this to the problem \[\min_{c}\|c\|_{1}\ \text{ s.t. }\|x-\mathbf{S}c\|_{2}\leq\epsilon. \tag{3}\] Under a suitable choice of \(\lambda\), this problem can be equivalently written as \[\min_{c,e}\|c\|_{1}+\frac{\lambda}{2}\|e\|_{2}^{2}\ \text{ s.t. }\ x=\mathbf{S}c+e, \tag{4}\] for which we can obtain the dual problem given by \[\max_{d}\langle x,d\rangle-\frac{1}{2\lambda}\|d\|_{2}^{2}\ \text{ s.t. }\|\mathbf{S}^{\top}d\|_{\infty}\leq 1. \tag{5}\] Figure 2: A plot of \(q_{1}\) (orange), \(q_{2}\) (violet) and the decision boundaries of \(f\) (dashed). Our main observation is to leverage the stability of the set of active constraints of this dual to obtain a robust classifier. One can note that each constraint of Eq. (5) corresponds to one training data point \(s_{i}\) - when the \(i^{\rm th}\) constraint is active at optimality, \(s_{i}\) is being used to reconstruct \(x\). Intuitively, one should then somehow use the label \(y_{i}\) while predicting the label for \(x\). Indeed, we will show that predicting the majority label among the active \(y_{i}\) leads to a robust classifier. We will firstly obtain a geometric characterization of the problem in Eq. (5) by viewing it as the evaluation of a projection operator onto a certain convex set, illustrated in Fig. 3. 
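Before the geometric characterization, here is a minimal numerical sketch of how the primal problem (4) could be solved with an off-the-shelf solver on synthetic union-of-subspaces data; the data generation, the value of \(\lambda\), and the support threshold are illustrative assumptions (cvxpy is assumed available), and this is not the paper's implementation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, d, per_class = 30, 3, 40

# Toy training matrix S: columns lie on a union of two d-dimensional subspaces.
def subspace_points(m):
    basis, _ = np.linalg.qr(rng.normal(size=(n, d)))
    pts = basis @ rng.normal(size=(d, m))
    return pts / np.linalg.norm(pts, axis=0)

S = np.hstack([subspace_points(per_class), subspace_points(per_class)])
y = np.array([1] * per_class + [2] * per_class)

# A test point from class 1 plus a small off-subspace perturbation.
x = S[:, 0] * 0.7 + 0.01 * rng.normal(size=n)

# Primal problem (4): min ||c||_1 + (lam/2) ||e||_2^2   s.t.   x = S c + e.
lam = 100.0
c, e = cp.Variable(S.shape[1]), cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(c) + lam / 2 * cp.sum_squares(e)),
           [x == S @ c + e]).solve()

support = np.where(np.abs(c.value) > 1e-6)[0]
print("labels of the support:", y[support])   # ideally all from class 1
```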
Observe that for \(\lambda>0\), the objective (5) is strongly convex in \(d\) and the problem has a unique solution, denoted by \(d_{\lambda}^{*}(x)\). It is not hard to show that this solution can be obtained by the projection operator \[d_{\lambda}^{*}(x)=\left(\operatorname*{arg\,min}_{d}\|\lambda x-d\|_{2}\ \ \text{s.t.}\ \ \|\mathbf{S}^{\top}d\|_{\infty}\leq 1\right)= \operatorname{Proj}_{K^{\circ}}(\lambda x), \tag{6}\] where \(K^{\circ}\) is the polar of the convex hull of \(\pm\mathbf{S}\). Denoting \(\mathbf{T}=[\mathbf{S},-\mathbf{S}]\), we can rewrite Problem (6) as \(d_{\lambda}^{*}(x)=\left(\operatorname*{arg\,min}_{d}\|\lambda x-d\|_{2}\) sub. to \(\mathbf{T}^{\top}d\leq\mathbf{1}\right)\). We now define the set of active constraints as \[A_{\lambda}(x)=\{t_{i}\colon\langle t_{i},d_{\lambda}^{*}(x)\rangle=1\}. \tag{7}\] **Geometry of the Dual (5).** It is illustrated in Fig. 3, where \(s_{1},s_{2}\) are two chosen data-points. The blue shaded polytope is \(K^{\circ}\). At \(\lambda=\lambda_{1}\), the point \(\lambda_{1}x\) lies in the interior of \(K^{\circ}\). Hence, \(A_{\lambda}(x)\) is empty and \(\operatorname{supp}(c^{*}(x))\) is also empty. As \(\lambda\) increases, a non-empty support is obtained for the first time at \(\lambda=1/\gamma_{K^{\circ}}(x)\). For all \(\lambda_{2}x\) in the red shaded polyhedron, the projection \(d_{\lambda_{2}}^{*}(x)=\operatorname{Proj}_{K^{\circ}}(\lambda_{2}x)\) lies on the face \(F\). As \(\lambda\) increases further we reach the green polyhedron. Further increases in \(\lambda\) do not change the dual solution, which will always remain at the vertex \(d_{\lambda_{3}}^{*}(x)\). Geometrically, \(A_{\lambda}(x)\) identifies the face of \(K^{\circ}\) which contains the projection of \(\lambda x\), if \(A_{\lambda}(x)\) is non-empty (otherwise, \(\lambda x\) lies inside the polyhedron \(K^{\circ}\)). The support of the primal solution, \(c^{*}(x)\), is a subset of \(A_{\lambda}\), _i.e._\(\operatorname{supp}(c^{*}(x))\subseteq A_{\lambda}(x)\). Note that whenever two points, say \(x,x^{\prime}\), both lie in the same shaded polyhedron (red or green), their projections would lie on the same face of \(K^{\circ}\). We now show this formally, in the main theorem of this section. **Theorem 4.1**.: _The set of active constraints \(A_{\lambda}\) defined in (7) is robust, i.e., \(A_{\lambda}(x^{\prime})=A_{\lambda}(x)\) for all \(\lambda x^{\prime}\in C(x)\), where \(C(x)\) is the polyhedron defined as_ \[C(x)=F(x)+V(x), \tag{8}\] _with \(F\subseteq K^{\circ}\) being a facet of the polyhedron \(K^{\circ}\) that \(x\) projects to, defined as_ \[F(x)=\left\{d\ \left|\begin{array}{l}t_{i}^{\top}d=1,\forall t_{i}\in A_{ \lambda}(x)\\ t_{i}^{\top}d<1,\operatorname{otherwise}\end{array}\right.\right\}, \tag{9}\] _and \(V\) being the cone generated by the constraints active at (i.e., normal to) \(F\), defined as_ \[V(x)=\left\{\sum_{t_{i}\in A_{\lambda}(x)}\alpha_{i}t_{i}\colon\alpha_{i}\geq 0,\forall t_{i}\in A_{\lambda}(x)\right\}. \tag{10}\] The proof of Theorem 4.1 utilizes the geometry of the problem and properties of the projection operator, and is presented in Appendix E. We can now use this result to construct a robust classifier: **Lemma 4.2**.: _Define the dual classifier as_ \[g_{\lambda}(x)=\textsc{Aggregate}(\{y_{i}\colon t_{i}\in A_{\lambda}(x)\}), \tag{11}\] _where Aggregate is any deterministic mapping from a set of labels to \(\mathcal{Y}\), e.g., Majority. 
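A minimal sketch of the resulting pipeline follows: solve the projection (6) with an off-the-shelf solver, read off the active set (7), and aggregate the corresponding labels as in (11). It reuses the matrix `S`, labels `y`, and test point `x` from the previous sketch; the activity tolerance and the choice of Majority as the aggregation map are implementation choices rather than the paper's exact code.

```python
import numpy as np
import cvxpy as cp
from collections import Counter

def dual_solution(x, S, lam):
    """Projection form (6): d*_lam(x) = argmin ||lam*x - d||_2  s.t.  ||S^T d||_inf <= 1."""
    d = cp.Variable(S.shape[0])
    cp.Problem(cp.Minimize(cp.sum_squares(lam * x - d)),
               [cp.norm_inf(S.T @ d) <= 1]).solve()
    return d.value

def dual_classifier(x, S, y, lam=100.0, tol=1e-4):
    """g_lam(x) from (11) with Aggregate = Majority over labels of active columns (7)."""
    d = dual_solution(x, S, lam)
    # t_i in A_lam(x) iff |<s_i, d*>| = 1 (both s_i and -s_i are columns of T);
    # a loose numerical tolerance accounts for solver accuracy.
    active = np.where(np.abs(S.T @ d) >= 1 - tol)[0]
    if len(active) == 0:           # lam*x lies inside K°, so no constraint is active
        return None
    return Counter(y[active]).most_common(1)[0][0]

print(dual_classifier(x, S, y))    # ideally 1: active columns come from the first subspace
```

By Theorem 4.1, the active set, and hence this prediction, is unchanged for every \(x^{\prime}\) with \(\lambda x^{\prime}\) in the polyhedron \(C(x)\), which is what Lemma 4.2 turns into a certificate.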
Then, for all \(x^{\prime}\in C(x)\) as defined in Theorem 4.1, \(g_{\lambda}\) is certified to be robust, i.e., \(g_{\lambda}(x^{\prime})=g_{\lambda}(x)\)._ Figure 3: Geometry of the dual problem (5). See description on the left. **Implications.** Having obtained a certifiably robust classifier \(g\), we pause to understand some implications of the theory developed so far. We observe that the certified regions in Theorem 4.1 are not spherical, _i.e._, the attacker can make additive perturbations having large \(\ell_{2}\) norm but still be unable to change the label predicted by \(g\) (see Fig. 4). This is in contrast to the \(\ell_{2}\) bounded certified regions that can be obtained by most existing work on certification schemes, and is a result of modelling data structure while constructing robust classifiers. Importantly, however, note that we do not assume that the attack is restricted to the subspace. **Connections to Classical Results.** For \(\epsilon=0\), Eq. (3) is known as the primal form of the Basis Pursuit problem, and has been studied under a variety of conditions on \(\mathbf{S}\) in the sparse representation and subspace clustering literature [20; 12; 46; 59; 29; 24]. Given an optimal solution \(c^{*}(x)\) of this basis pursuit problem, how can we accurately predict the label \(y\)? One ideal situation could be that all columns in the support predict the same label, _i.e._, \(y_{i}\) is identical for all \(i\in\text{supp}(c^{*}(x))\). Indeed, this ideal case is well studied, and is ensured by necessary [24] and sufficient [46; 59; 29] conditions on the geometry of the subspaces \(S_{1},\ldots,S_{K}\). Another situation could be that the _majority_ of the columns in the support predict the correct label. In this case, we could predict \(\texttt{Majority}(\{y_{i}\colon i\in\text{supp}(c^{*}(x))\})\) to ensure accurate prediction. Theorem 4.1 allows us to obtain robustness guarantees which work for _any_ such aggregation function which can determine a single label from the support. Hence, our results can guarantee robust prediction even when classical conditions are not satisfied. Lastly, note that our Theorem 4.1 shows that the entire active set remains unperturbed. In light of the recent results in [49], we conjecture that it might be possible to relax this for specific choices of maps acting on the estimated support. ## 5 Experiments In this section, we will compare our certified defense derived in Section 4 to a popular defense technique called Randomized Smoothing (RS) [10], which can be used to obtain state-of-the-art certified robustness against \(\ell_{2}\) perturbations. RS transforms any given classifier \(f\colon\mathcal{X}\to\mathcal{Y}\) to a certifiably robust classifier \(g^{\mathrm{RS}}_{\sigma}\colon\mathcal{X}\to\mathcal{Y}\) by taking a majority vote over inputs perturbed by Gaussian5 noise \(\mathcal{N}(0,\sigma^{2}I)\), _i.e._, Footnote 5: In reality, the choice of the noise distribution is central to determining the type of certificate one can obtain [38; 57], but Gaussian suffices for our purposes here. \[g^{\mathrm{RS}}_{\sigma}(x)=\mathrm{Smooth}_{\sigma}(f)=\operatorname*{arg\, max}_{k\in\mathcal{Y}}\Pr_{v\sim\mathcal{N}(0,\sigma^{2}I)}(f(x+v)=k). 
\tag{12}\] Then, at any point \(x\), \(g^{\mathrm{RS}}_{\sigma}\) can be shown to be certifiably robust to \(\ell_{2}\) perturbations of size at least \(r^{\mathrm{RS}}(x)=\sigma\Phi^{-1}(p)\) where \(p=\max_{k\in\mathcal{Y}}\Pr_{v\sim\mathcal{N}(0,\sigma^{2}I)}(f(x+v)=k)\) denotes the maximum probability of any class under Gaussian noise. It is not immediately obvious how to compare the certificates provided by our method described above and that of RS, since the sets of the space they certify are different. The certified region obtained by RS, \(C^{\mathrm{RS}}(x)=\{\bar{x}\colon\|x-\bar{x}\|_{2}\leq r^{\mathrm{RS}}(x)\}\), is a sphere (orange ball in Fig. 4). In contrast, our certificate \(C_{\lambda}(x)\) from Theorem 4.1 is a polyhedron (resp., blue trapezoid), which, in general, is neither contained in \(C^{\mathrm{RS}}(x)\), nor a superset of \(C^{\mathrm{RS}}(x)\). Additionally, our certificate has no standard notion of _size_, unlike other work on elliptical certificates [13], making a size comparison non-trivial. To overcome these difficulties, we will evaluate two notions of _attack size_: in the first, we will compare the \(\ell_{2}\) norms of successful attacks projected onto our polyhedron, and in the second, we will compare the minimum \(\ell_{2}\) norm required for a successful attack. We will then combine our method with RS to get the best of both worlds, _i.e._, the green shape in Fig. 4. In the following, we detail and present both these evaluations on the MNIST [27] dataset, with each image normalized to unit \(\ell_{2}\) norm.

Figure 4: Comparing polyhedral and spherical certificates. Details on left.

**Comparison along Projection on \(C_{\lambda}(x)\).** For the first case, we investigate the question: _Are there perturbations for which our method is certifiably correct, but Randomized Smoothing fails?_ For an input point \(x\), we can answer this question in the affirmative by obtaining an adversarial example \(\bar{x}\) for \(g^{\mathrm{RS}}\) such that \(\bar{x}\) lies inside our certified set \(C_{\lambda}(x)\). Then, this perturbation \(v=x-\bar{x}\) is certified by our method, but has \(\ell_{2}\) norm larger than \(r^{\mathrm{RS}}(x)\) (by definition of the RS certificate). To obtain such adversarial examples, we first train a standard CNN classifier \(f\) for MNIST, and then use RS6 to obtain the classifier \(g^{\mathrm{RS}}_{\sigma}\). Then, for any \((x,y)\), we perform projected gradient descent to obtain \(\bar{x}=x^{T}\) by performing the following steps \(T\) times, starting with \(x^{0}\gets x\): Footnote 6: Further details (e.g., smoothing parameters, certified accuracy computation) are provided in Appendix F. \[\text{I. }x^{t}\leftarrow\mathrm{Proj}_{B_{\ell_{2}}(x,\epsilon)}\Big{(}x^{t-1}+\eta\nabla_{x}\mathrm{Loss}(g^{\mathrm{RS}}_{\sigma}(x^{t-1}),y)\Big{)}\qquad\text{II. }x^{t}\leftarrow\mathrm{Proj}_{C_{\lambda}(x)}(x^{t}) \tag{13}\] Unlike the standard PGD attack (step I), the additional projection (step II) is not straightforward, and requires us to solve a quadratic optimization problem, which can be found in Appendix F. We can now evaluate \(g^{\mathrm{RS}}_{\sigma}\) on these perturbations to empirically estimate the robust accuracy over \(C_{\lambda}\), _i.e._, \[\mathrm{ProjectionRobustAcc}(\epsilon)=\underset{x,y\sim\mathrm{P_{MNIST}}}{\Pr}\Big{(}\exists\bar{x}\in B_{\ell_{2}}(x,\epsilon)\cap C_{\lambda}(x)\text{ such that }g^{\mathrm{RS}}(\bar{x})\neq y\Big{)}.\] The results are plotted in Fig. 5, as the dotted curves.
We also plot the certified accuracies7 for comparison, as the solid curves. We see that the accuracy certified by RS drops below random chance (0.1) around \(\epsilon=0.06\) (solid red curve). Similar to other certified defenses, RS certifies only a subset of the true robust accuracy of a classifier in general. This true robust accuracy curve is pointwise upper-bounded by the empirical robust accuracy curve corresponding to any attack, obtained via the steps I, II described earlier (dotted red curve). We then see that even the upper-bound drops below random chance around \(\epsilon=0.4\), suggesting that this might be a large enough attack strength so that an adversary only constrained in \(\ell_{2}\) norm is able to fool a general classifier. However, we are evaluating attacks lying on our certified set and it is still possible to recover the true class (blue solid curve), albeit by our specialized classifier \(g_{\lambda}\) suited to the data structure. Additionally, this suggests that our certified set contains useful class-specific information - this is indeed true, and we present some qualitative examples of images in our certified set in Appendix F. To summarize, we have numerically demonstrated that _exploiting data structure in classifier design leads to certified regions capturing class-relevant regions beyond \(\ell_{p}\)-balls_. Footnote 7: Further details (e.g., smoothing parameters, certified accuracy computation) are provided in Appendix F.

Figure 5: Comparing RS with Our method for adversarial perturbations computed by repeating Steps I, II (13).

**Comparison along \(\ell_{2}\) balls.** For the second case, we ask the question: _Are there perturbations for which RS is certifiably correct, but our method is not?_ When an input point \(x\) has a large enough RS certificate \(r^{\mathrm{RS}}(x)\geq r_{0}\), some part of the sphere \(B_{\ell_{2}}(x,r^{\mathrm{RS}}(x))\) might lie outside our polyhedral certificate \(C_{\lambda}(x)\) (blue region in Fig. 4). In theory, the minimum \(r_{0}\) required can be computed via an expensive optimization program that we specify in Appendix F. In practice, however, we use a black-box attack [8] to find such perturbations. We provide qualitative examples in Appendix F.

**Combining Our Method with RS.** We now improve our certified regions using randomized smoothing. For this purpose, we treat our classifier \(g_{\lambda}\) as the base classifier \(f\) in (12), to obtain \(\mathrm{Smooth}_{\sigma}(g_{\lambda})\) (abbreviated as \(\mathrm{Ours}^{\mathrm{RS}}_{\sigma}\)). We then plot the \(\ell_{2}\) certified accuracy8[10, Sec 3.2.2] in Fig. 6, where note that, as opposed to Fig. 5, the attacks are _not_ constrained to lie on our certificate anymore. We observe that not only does randomized smoothing enable us to obtain an \(\ell_{2}\) robustness certificate, we can in fact slightly improve (note the \(10\times\) zoom on the \(x\)-axis compared to Fig. 5) over the certificates obtained by RS on its own (blue curve vs. red curve in Fig. 6).

Figure 6: Comparing Certified Accuracy after combining our method with RS.

As a final remark, we note that our objective in Fig. 6 was simply to explore RS as a method for obtaining an \(\ell_{2}\) certificate for our method, and we did not tune our method or RS for performance. In particular, we believe that a wide array of tricks developed in the literature for improving RS performance [38, 41, 57] could be employed to improve the curves in Fig. 6.
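For concreteness, the following is a minimal sketch of how the dual classifier \(g_{\lambda}\) of (11) and the smoothing wrapper of (12) might be implemented. It is only illustrative: it uses a generic QP solver, a plug-in probability estimate instead of the Binomial confidence bound of [10], and function names of our own choosing, and it omits the engineering details deferred to Appendix F (such as the projection of step II).

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm


def dual_classify(x, S, labels, lam, tol=1e-6):
    """Sketch of the dual classifier g_lambda in Eq. (11).

    x      : (d,) test point
    S      : (d, N) dictionary whose columns are the training points
    labels : (N,) class label of each column of S
    lam    : the parameter lambda > 0 in Eq. (5)
    """
    d_dim, N = S.shape
    T = np.hstack([S, -S])  # T = [S, -S]

    # Projection of lam*x onto the polar K° = {d : ||S^T d||_inf <= 1}, Eq. (6)
    d = cp.Variable(d_dim)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(lam * x - d)),
                      [T.T @ d <= 1])
    prob.solve()
    d_star = d.value

    # Active constraints A_lambda(x) of Eq. (7), up to a numerical tolerance
    active = np.flatnonzero(T.T @ d_star >= 1 - tol)
    if active.size == 0:
        return None  # lam*x lies in the interior of K°: empty support, abstain

    # Aggregate (here: majority vote) over labels of the active columns;
    # columns i and i+N of T both carry the label of column i of S.
    votes = np.asarray([labels[i % N] for i in active])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]


def smoothed_predict_and_radius(base_classify, x, sigma, n_samples=1000):
    """Plain Monte Carlo version of Smooth_sigma(f) in Eq. (12).

    Returns the majority class under Gaussian noise and the plug-in radius
    sigma * Phi^{-1}(p_hat); the certification procedure of [10] replaces
    p_hat by a Binomial lower confidence bound, and a non-positive value
    means no certificate is issued.
    """
    preds = [base_classify(x + sigma * np.random.randn(*x.shape))
             for _ in range(n_samples)]
    classes, counts = np.unique([p for p in preds if p is not None],
                                return_counts=True)
    top = np.argmax(counts)
    p_hat = counts[top] / n_samples
    radius = sigma * norm.ppf(min(p_hat, 1 - 1e-9))  # guard against p_hat = 1
    return classes[top], radius
```

Passing `lambda z: dual_classify(z, S, labels, lam)` as `base_classify` corresponds to the \(\mathrm{Ours}^{\mathrm{RS}}_{\sigma}\) construction evaluated above.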
## 6 Conclusion and Discussion This paper studies conditions under which a robust classifier exists for a given classification task. We showed that concentration of the data distribution on small subsets of the ambient space is necessary for any classifier to be robust to small adversarial perturbations, and that a stronger notion of concentration is sufficient. We then studied a special concentration data distribution, that of data distributed near low-dimensional linear subspaces. For this special case of our results, we constructed a provably robust classifier, and then experimentally evaluated its benefits w.r.t. known techniques. For our concentration results, our proof techniques utilize tools from high-dimensional probability, and have the same flavor as recent impossibility results for adversarial robustness [11, 44, 43]. Our geometric treatment of the dual optimization problem is similar to the literature on sparse-representation [12, 20] and subspace clustering [46, 47, 59, 24], which is concerned with the question of representing a point \(x\) by the linear combination of columns of a dictionary \(\mathbf{S}\) using sparse coefficients \(c\). As mentioned in Section 4, there exist geometric conditions on \(\mathbf{S}\) such that all such candidate vectors \(c\) are _subspace-preserving_, i.e., for all the indices \(i\) in the support of \(c\), it can be guaranteed that \(s_{i}\) belongs to the correct subspace. On the other hand, the question of classification of a point \(x\) in a union of subspaces given by the columns of \(\mathbf{S}\), or subspace classification, has also been studied extensively in classical sparse-representation literature [56, 55, 6, 26, 58, 60, 18, 36, 22, 31]. The predominant approach is to solve an \(\ell_{1}\) minimization problem to obtain coefficients \(c\) so that \(x=\mathbf{S}c+e\), and then predict the subspace that minimizes the representation error. Various _global_ conditions can be imposed on \(\mathbf{S}\) to guarantee the success of such an approach [56], and its generalizations [14, 15]. Our work differs from these approaches in that we aim to obtain conditions on perturbations to \(x\) that ensure accurate classification, and as such we base our robust classification decision upon properties of solutions of the dual of the \(\ell_{1}\) problem. Our work is also related to recent empirical explorations on obtaining robust classifiers by _denoising_ a given input \(x\) of any adversarial corruptions, before passing it to a classifier [42, 35]. However, such approaches lack theoretical guarantees, and might be broken using specialized attacks [37]. Similarly, work on improving the robustness of deep network-based classifiers by adversarial training off the data-manifold can be seen as an empirical generalization of our attack model [23, 34, 62, 32]. More generally, it has been studied how adversarial examples relate to the underlying data-manifold [48, 25, 33]. Recent work also studies the robustness of classification using projections onto a single subspace [3, 2, 40]. Such works are close to us in spirit, as they use projections onto a single low-dimensional linear subspace to provide robustness certificates. [3] study an the attack model of bounded \(\ell_{2},\ell_{\infty}\) attacks, and they provide robustness certificates by obtaining guarantees on the distortion of a data-point \(x\) as it is projected onto a single linear subspace using a projection matrix \(\Pi\). 
In contrast, our work can be seen as projecting a perturbed point onto a union of multiple low-dimensional subspaces. The resultant richer geometry allows us to obtain more general certificates. One of the limitations of our work is that we always assume access to a _clean_ dataset \(\mathbf{S}\) lying perfectly on the union of low-dimensional linear subspaces. In reality, one might only have access to noisy samples. In light of existing results on noisy subspace clustering [53], an immediate future direction is to adapt our guarantees to support noise in the training data. While the assumption of low-dimensional subspace structure in \(\mathbf{S}\) enables us to obtain novel unbounded robustness certificates, real world datasets might not satisfy this structure. We hope to mitigate this limitation by extending our formulation to handle data lying on a general image manifold in the future. ## Acknowledgments and Disclosure of Funding We would like to thank Amitabh Basu for helpful insights into the optimization formulation for the largest \(\ell_{2}\) ball contained in a polyhedron. This work was supported by DARPA (HR00112020010) and NSF (1934979 and 2212457).
2310.02194
Unconditional convergence of general Fourier series
S. Banach, in particular, proved that for any function, even $f(x) = 1,$ where $x\in[0,1],$ the convergence of its Fourier series with respect to the general orthonormal systems (ONS) is not guaranteed. In this paper, we find conditions for the functions $\varphi_{n}$ of an ONS $(\varphi_n),$ under which the Fourier series of functions $f\in Lip_1$ are unconditionally convergent almost everywhere. During our research, we mainly used the properties of the sequences of linear functionals on the Banach spaces to prove the main theorems we presented in this article. Our research has concluded that the aforementioned conditions do exist and are the best possible in a certain sense. We have also found that any ONS contains a subsystem such that the Fourier series of any function $f\in Lip_1$ is unconditionally convergent. Further, the precondition presented in Theorem \ref{theorem1.1}, which demands that the Fourier series of the function $f=1$ be convergent, is adequate, meaning that it does not make the actual conditions of Theorem \ref{theorem1.1} redundant. We have shown that the solution for these types of problems for the general ONS is trivial for the classical ONS (trigonometric, Haar, and Walsh systems).
Vakhtang Tsagareishvili, Giorgi Tutberidze, Giorgi Cagareishvili
2023-09-07T06:55:42Z
http://arxiv.org/abs/2310.02194v1
# Unconditional convergence of general Fourier series ###### Abstract. S. Banach, in particular, proved that for any function, even \(f(x)=1\), where \(x\in[0,1]\), the convergence of its Fourier series with respect to the general orthonormal systems (ONS) is not guaranteed. In this paper, we find conditions for the functions \(\varphi_{n}\) of an ONS (\(\varphi_{n}\)), under which the Fourier series of functions \(f\in Lip_{1}\) are unconditionally convergent almost everywhere. During our research, we mainly used the properties of the sequences of linear functionals on the Banach spaces to prove the main theorems we presented in this article. Our research has concluded that the aforementioned conditions do exist and are the best possible in a certain sense. We have also found that any ONS contains a subsystem such that the Fourier series of any function \(f\in Lip_{1}\) is unconditionally convergent. Further, the precondition presented in Theorem 2, which demands that the Fourier series of the function \(f=1\) be convergent, is adequate, meaning that it does not make the actual conditions of Theorem 2 redundant. We have shown that the solution for these types of problems for the general ONS is trivial for the classical ONS (trigonometric, Haar, and Walsh systems). **2020 Mathematics Subject Classification.** 42C10, 46B07 **Key words and phrases:** Unconditional convergence, Orthonormal systems, Fourier coefficients, Sequence of linear functionals, Banach space. ## 1. Introduction In order not to disturb the discussion in this introduction and the proofs of our main result we have collected all notations, definitions and other preliminaries in Section 2. In this paper we find conditions for the functions \(\varphi_{n}\) of an ONS (\(\varphi_{n}\)), under which the Fourier series of functions \(f\in Lip_{1}\) are unconditionally convergent almost everywhere (See Theorem 1). We also prove that this result is, in a sense, sharp (See Theorem 2). Furthermore, we discover that any ONS contains a subsystem with respect to which the general Fourier series of any function is unconditionally convergent. Lastly, we clarify that the precondition imposed upon the functions of the ONS is accurate and valid, meaning that it does not make the actual condition presented in Theorem 1 redundant. In the monographs [1, 7, 8] and papers [9, 17, 18, 19] problems of the theory of convergence of orthonormal series are studied. It is interesting that according to Menchov's and Banach's results the convergence of general Fourier series is not guaranteed even for very simple functions. In particular, S. Banach proved the following theorem. **Theorem A**.: _Let \(f\in L_{2}\) be an arbitrary, non-zero function. Then there exists an ONS \((\varphi_{n})\) such that_ \[\limsup_{n\to+\infty}\left|S_{n}(x,f)\right|=+\infty\] _a.e. on \([0,1],\) where \(S_{n}(x,f)=\sum_{k=1}^{n}C_{k}(f)\varphi_{k}(x)\) and \(C_{k}(f)\) is the Fourier coefficients of the function \(f\in L_{2}\) with respect to the system \(\{\varphi_{k}\}.\)_ Moreover, we recall the following well-known result of D. E. Menshov (see e.g. \(\left[1\right]\) ch.2, \(\#5\) p.111). **Theorem B**.: _Let \(\left(\varphi_{n}\right)\) be an ONS on \(\left[0,1\right].\) If the series_ \[\sum_{n=1}^{\infty}\left|a_{n}\right|^{2-\varepsilon}\] _is convergent for some \(\varepsilon\in\left(0,1\right),\) then the series_ \[\sum_{n=1}^{\infty}a_{n}\varphi_{n}\left(x\right)\] _is unconditionally convergent a.e.
on \(\left[0,1\right].\)_ We also need the following theorem (see [6],Ch.II.p.40) **Theorem C**.: _If the series_ \[\sum_{n=1}^{\infty}a_{k}b_{k}\] _is converges for any \(\left(b_{n}\right)\in l_{q},\) then \(\left(a_{n}\right)\in l_{p},\) where \(\frac{1}{p}+\frac{1}{q}=1.\)_ For this investigation of the functionals \(\left(B_{n}(f)\right)\) (see (7)) we need the following important Lemma (see [4]). **Lemma 1**.: _Suppose that \(f,\)\(F\in L_{2}\) and \(f\) has only finite value in any point on \(\left[0,1\right].\) Then_ \[\int_{0}^{1}f\left(x\right)F\left(x\right)dx = \sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left(\frac{i+1} {n}\right)\right)\int_{0}^{i/n}F\left(x\right)dx\] \[+ \sum_{i=1}^{n}\int_{\left(i-1\right)/n}^{i/n}\left(f\left(x \right)-f\left(\frac{i}{n}\right)\right)F\left(x\right)dx\] \[+ f\left(1\right)\int_{0}^{1}F\left(x\right)dx.\] ## 3. The main results **Theorem 1**.: _Let \(\left(\varphi_{n}\right)\) be an ONS and suppose that \(\varepsilon\in\left(0,1\right)\) is an arbitrary number. If \(\left(C_{n}(l)\right)\in l_{p\left(\varepsilon\right)},\) where \(l(x)=1,\)\(x\in\left[0,1\right]\) and for arbitrary \(\left(a_{n}\right)\in l_{q\left(\varepsilon\right)}\)_ \[M_{n}(a,\varepsilon)=O_{\varepsilon}(1), \tag{5}\] _then for any \(f\in Lip_{1}\)_ \[\sum_{n=1}^{\infty}\left|C_{n}(f)\right|^{2-\varepsilon}<+\infty.\] Proof.: Firstly, in (4) we should assume that \(f\in Lip_{1}\) and \(F(x)=P_{n}(a,\varepsilon,x)\). Thus \[\int_{0}^{1}f(x)P_{n}(a,\varepsilon,x)dx = \sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left(\frac{i+1}{n }\right)\right)\int_{0}^{i/n}P_{n}(a,\varepsilon,x)dx \tag{6}\] \[+ \sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f\left( \frac{i}{n}\right)\right)P_{n}(a,\varepsilon,x)dx\] \[+ f\left(1\right)\int_{0}^{1}P_{n}(a,\varepsilon,x)dx=I_{1}+I_{2} +I_{3}.\] Next, by using (3) we receive (see (2)) \[B_{n}(f) := \sum_{k=1}^{n}C_{k}(f)a_{k}=\int_{0}^{1}f(x)\sum_{k=1}^{n}a_{k} \varphi_{k}(x)dx\] \[= \int_{0}^{1}f(x)P_{n}(a,\varepsilon,x)dx.\] If \(f\in Lip_{1}\) we get (see (2) and (5)) \[|I_{1}| \leq \sum_{i=1}^{n-1}\left|f\left(\frac{i}{n}\right)-f\left(\frac{i+1} {n}\right)\right|\left|\int_{0}^{i/n}P_{n}(a,\varepsilon,x)dx\right|\] \[= O(1)\frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(a, \varepsilon,x)dx\right|\] \[= O(1)M_{n}(a,\varepsilon)=O_{\varepsilon}(1).\] As \((a_{k})\in l_{q(\varepsilon)}\) and \(\frac{1}{2p(\varepsilon)}<\frac{1}{2}\), by using Holder inequality we have \[|I_{2}| \leq \sum_{i=1}^{n}\sup_{x\in\left[\frac{i-1}{n},\frac{i}{n}\right]} \left|f\left(x\right)-f\left(\frac{i}{n}\right)\right|\left|\int_{(i-1)/n}^{i /n}P_{n}(a,\varepsilon,x)dx\right|\] \[= O(1)\frac{1}{n}\left(\int_{0}^{1}\left(\sum_{k=1}^{n}a_{k} \varphi_{k}(x)\right)^{2}\right)^{1/2}\] \[= O(1)\frac{1}{n}\left(\sum_{k=1}^{n}a_{k}^{2}\right)^{1/2}=O \left(\frac{1}{n}\right)\sup_{k}|a_{k}|\,n^{1/2}=O_{\varepsilon}(1).\] By conditions of Theorem 1, when \(l(x)=1,\)\(x\in[0,1],\) using the Holder inequality we get \[|I_{3}| = |f\left(1\right)|\left|\int_{0}^{1}P_{n}(a,\varepsilon,x)dx\right| =O(1)\left|\sum_{k=1}^{n}a_{k}\int_{0}^{1}\varphi_{k}(x)dx\right|\] \[= O(1)\left|\sum_{k=1}^{n}a_{k}C_{k}(l)\right|=O(1)\left(\sum_{k=1 }^{n}|a_{k}|^{q(\varepsilon)}\right)^{1/q(\varepsilon)}=O_{\varepsilon}(1).\] Finally, taking into account (6), (7), (8), (9) and (10) we conclude, that for some \(\varepsilon\in(0,1)\) and for any \(f\in Lip_{1}\) the series \[\sum_{n=1}^{\infty}C_{n}(f)a_{n}\] is convergent for arbitrary \((a_{n})\in 
l_{q(\varepsilon)}.\) Then according to Theorem B we have that \((C_{n}(f))\in l_{p(\varepsilon)},\) consequently \[\sum_{n=1}^{\infty}|C_{n}(f)|^{2-\varepsilon}<+\infty\] for some \(\varepsilon\in(0,1)\) and for any \(f\in Lip_{1}.\) **Theorem 2**.: _Let \((\varphi_{n})\) be an ONS on \([0,1]\) and suppose that \(\varepsilon\in(0,1)\) is an arbitrary number. Now let us assume that for some \((b_{n})\in l_{q(\varepsilon)}\)_ \[\limsup_{n\to+\infty}M_{n}(b,\varepsilon)=+\infty. \tag{11}\] _Then there exists a function \(g\in Lip_{1},\) such that_ \[\sum_{k=1}^{\infty}|C_{n}(g)|^{2-\varepsilon}=+\infty.\] Proof.: Firstly, we suppose that \[\limsup_{n\to+\infty}\left|\int_{0}^{1}P_{n}(b,\varepsilon,x)dx\right|=+\infty.\] Then if \(l(x)=1,\) when \(x\in[0,1]\) and \[C_{n}(l)=\int_{0}^{1}l(x)\varphi_{k}(x)dx\ \ \ \ \ (n=1,2,\dots),\] we will have \[\limsup_{n\rightarrow+\infty}\left|\sum_{k=1}^{n}b_{k}C_{k}(l) \right|=\limsup_{n\rightarrow+\infty}\left|\sum_{k=1}^{n}b_{k}\int_{0}^{1}l(x) \varphi_{k}(x)dx\right|\] \[=\limsup_{n\rightarrow+\infty}\left|\sum_{k=1}^{n}b_{k}\int_{0}^ {1}\varphi_{k}(x)dx\right|=\limsup_{n\rightarrow+\infty}\left|\int_{0}^{1}P_{ n}(b,\varepsilon,x)dx\right|=+\infty.\] Thus, for some \((b_{n})\in l_{q(\varepsilon)}\) the series \[\sum_{k=1}^{\infty}b_{k}C_{k}(l)\] is divergent. Consequently \((C_{k}(l))\notin l_{p(\varepsilon)}\) or \[\sum_{n=1}^{\infty}\left|C_{n}(l)\right|^{2-\varepsilon}=+\infty.\] As \(l\in Lip_{1}\), then in such case Theorem 2 is valid. Now we suppose that \[\left|\int_{0}^{1}P_{n}(b,\varepsilon,x)dx\right|=O(1). \tag{12}\] In (6) we substitute \(P_{n}(a,\varepsilon,x)=P_{n}(b,\varepsilon,x)\) and \(f(x)=g_{n}(x)\), where \[g_{n}(x)=\int_{0}^{x}sign\int_{0}^{t}P_{n}(b,u,\varepsilon)dudt. \tag{13}\] Now we have \[\int_{0}^{1}g_{n}(x)P_{n}(b,\varepsilon,x)dx = \sum_{i=1}^{n-1}\left(g_{n}\left(\frac{i}{n}\right)-g_{n}\left( \frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx \tag{14}\] \[+ \sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(g_{n}\left(x\right)-g_{n }\left(\frac{i}{n}\right)\right)P_{n}(b,\varepsilon,x)dx\] \[+ g_{n}\left(1\right)\int_{0}^{1}P_{n}(b,\varepsilon,x)dx=h_{1}+h _{2}+h_{3}.\] According by (13) and the Cauchy inequality we get \[\left|h_{2}\right| = \left|\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(g_{n}\left(x\right) -g_{n}\left(\frac{i}{n}\right)\right)P_{n}(b,\varepsilon,x)dx\right|\] \[\leq\frac{1}{n}\int_{0}^{1}\left|P_{n}(b,\varepsilon,x)\right|\leq \frac{1}{n}\left(\int_{0}^{1}P_{n}^{2}(b,\varepsilon,x)dx\right)^{1/2}\] \[= \frac{1}{n}\left(\sum_{k=1}^{n}b_{k}^{2}\right)^{1/2}\leq\frac{1 }{n}\sqrt{n}\sup_{k}\left|b_{k}\right|=O(1).\] Next, by using (12) and (13) we receive \[|h_{3}|=O(1). 
\tag{16}\] Now, by \(E_{n}\) we denote the set of any numbers \(i\in{1,2,\ldots,n-1},\) for all of which, there exists a point \(y\in\left[\frac{i}{n},\frac{i+1}{n}\right),\) such that \[sign\int_{0}^{y}P_{n}(b,\varepsilon,x)dx\neq sign\int_{0}^{(i+1)/n}P_{n}(b, \varepsilon,x)dx.\] Then as the function \(\int_{0}^{x}P_{n}(b,\varepsilon,t)dt\) is a continuous on \(\left[\frac{i}{n},\frac{i+1}{n}\right),\) there exists a point \(x_{in}\in\left[\frac{i}{n},\frac{i+1}{n}\right),\) such that \[\int_{0}^{x_{in}}P_{n}(b,\varepsilon,x)dx=0.\] From here when \(i\in E_{n}\) and as \[\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx = \int_{0}^{x_{in}}P_{n}(b,\varepsilon,x)dx-\int_{i/n}^{x_{in}}P_{ n}(b,\varepsilon,x)dx\] \[= -\int_{i/n}^{x_{in}}P_{n}(b,\varepsilon,x)dx,\] we conclude \[\sum_{i\in E_{n}}\left|\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right| = \sum_{i\in E_{n}}\left|\int_{i/n}^{x_{in}}P_{n}(b,\varepsilon,x) dx\right|\] \[\leq \int_{0}^{1}|P_{n}(b,\varepsilon,x)dx|\leq\left(\int_{0}^{1}P_{ n}^{2}(b,\varepsilon,x)dx\right)^{1/2}\] \[= \left(\sum_{k=1}^{n}b_{k}^{2}\right)^{1/2}=\sqrt{n}\sup_{k}|b_{k} |=O(1)\sqrt{n}.\] Now, suppose that \(F_{n}=[0,1]\setminus E_{n},\) then (see (13)) \[g_{n}\left(\frac{i}{n}\right)-g_{n}\left(\frac{i+1}{n}\right) =-\int_{i/n}^{(i+1)/n}sign\int_{0}^{x}P_{n}(b,\varepsilon,x)dt\int _{0}^{i/n}P_{n}(b,\varepsilon,x)dx\] \[=-\frac{1}{n}\left|\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right|\] and \[\left|\sum_{i\in F_{n}}\left(g_{n}\left(\frac{i}{n}\right)-g_{n} \left(\frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right|\] \[=\frac{1}{n}\sum_{i\in F_{n}}\left|\int_{0}^{i/n}P_{n}(b, \varepsilon,x)dx\right|. \tag{18}\] By using (17) and (18) we have that \[|h_{1}| = \left|\sum_{i=1}^{n-1}\left(g_{n}\left(\frac{i}{n}\right)-g_{n} \left(\frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right| \tag{19}\] \[= \left|\sum_{i\in F_{n}}\left(g_{n}\left(\frac{i}{n}\right)-g_{n} \left(\frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right.\] \[+ \left.\sum_{i\in E_{n}}\left(g_{n}\left(\frac{i}{n}\right)-g_{n} \left(\frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(b,\varepsilon,x)dx\right|\] \[\geq \left|\frac{1}{n}\sum_{i\in F_{n}}\left|\int_{0}^{i/n}P_{n}(b, \varepsilon,x)dx\right|-\frac{1}{n}\sum_{i\in E_{n}}\left|\int_{0}^{i/n}P_{n} (b,\varepsilon,x)dx\right|\right|\] \[= \left|\frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(b, \varepsilon,x)dx\right|-\frac{2}{n}\sum_{i\in E_{n}}\left|\int_{0}^{i/n}P_{n} (b,\varepsilon,x)dx\right|\right|\] \[\geq M_{n}(b,\varepsilon)-O\left(\frac{1}{\sqrt{n}}\right).\] Finally, from equality (14) and by account of (15), (16) and (19) we obtain \[\left|\int_{0}^{1}g_{n}(x)P_{n}(b,\varepsilon,x)dx\right|\geq M_{n}(b, \varepsilon)-O\left(\frac{1}{\sqrt{n}}\right)-O(1).\] Thus, by the condition of the Theorem 2 we get \[\limsup_{n\rightarrow+\infty}\left|\int_{0}^{1}g_{n}(x)P_{n}(b,\varepsilon,x) dx\right|=\limsup_{n\rightarrow+\infty}M_{n}(b,\varepsilon)=+\infty. 
\tag{20}\] Now we must consider the sequence of bounded and continuous liner functionals on \(Lip_{1}\) (see (7)) \[B_{n}(f):=\int_{0}^{1}f(x)P_{n}(b,\varepsilon,x)dx.\] So, from (20) we obtain \[\limsup_{n\rightarrow+\infty}\left|B_{n}\left(g_{n}\right)\right|=+\infty.\] Since \[\left|\left|g_{n}\right|\right|_{Lip_{1}}=\left|\left|g_{n}\right|\right|_{C}+ \sup_{x,y\in[0,1]}\frac{\left|g_{n}(x)-g_{n}(y)\right|}{\left|x-y\right|}\leq 2,\] by the Banach-Steinhaus theorem there exists a function \(g\in Lip_{1}\) such that \[\limsup_{n\rightarrow+\infty}\left|B_{n}\left(g\right)\right|=+\infty.\] Consequently \[\limsup_{n\rightarrow+\infty}\left|\int_{0}^{1}g(x)P_{n}(b, \varepsilon,x)dx\right| = \limsup_{n\rightarrow+\infty}\left|\sum_{k=1}^{n}b_{k}\int_{0}^{1 }g(x)\varphi_{k}(x)dx\right|\] \[= \limsup_{n\rightarrow+\infty}\left|\sum_{k=1}^{n}b_{k}C_{k}(g) \right|=+\infty.\] Hence the series \[\sum_{k=1}^{\infty}b_{k}C_{k}(g)\] is divergent for \((b_{k})\in l_{q(\varepsilon)}.\) So, it is evident that \((C_{k}(g))\notin l_{p(\varepsilon)}\) or \[\sum_{k=1}^{\infty}\left|C_{n}(g)\right|^{2-\varepsilon}=+\infty.\] **Corollary 1**.: _Let \((\varphi_{n})\) be an ONS. For some \(\varepsilon\in(0,1)\), \((C_{n}(l))\in l_{p(\varepsilon)},\) where \(l(x)=1,\)\(x\in[0,1]\) and for arbitrary \((a_{n})\in l_{q(\varepsilon)}\)_ \[M_{n}(a,\varepsilon)=O_{\varepsilon}(1),\] _then for any \(f\in Lip_{1}\)_ \[\sum_{n=1}^{\infty}C_{n}(f)\varphi_{n}(x)\] _is unconditional convergent a. e. on \([0,1].\)_ The validity of Corollary 1 derives from Theorems 1 and Theorem B. **Theorem 3**.: _Let \((d_{n})\) be a nondecreasing sequence of positive numbers. Any ONS \((\varphi_{n})\) contains a subsystem \((\varphi_{n_{k}})\) such that_ \[M_{n}(da,\varepsilon)=O_{\varepsilon}(1) \tag{21}\] _for any \(\varepsilon\in(0,1)\) and \((a_{n})\in l_{q(\varepsilon)}.\) Where \(da=(d_{n}a_{n}).\)_ Proof.: Let ONS \((\varphi_{n})\) be a complete system on \([0,1].\) Then, by the Bessel equality for any \(x\in[0,1]\) \[\sum_{n=1}^{\infty}\left(\int_{0}^{x}\varphi_{n}(u)du\right)^{2}=x.\] According to the Dini Theorem, there exists a sequence of natural numbers \((n_{k})\) such that, uniformly with respect to \(x\in[0,1]\) \[\sum_{s=n_{k}}^{\infty}\left(\int_{0}^{x}\varphi_{s}(u)du\right)^{2}\leq\frac{1} {k^{4}}.\] From here \[\left|\int_{0}^{x}\varphi_{n_{k}}(u)du\right|\leq\frac{1}{d_{k}k^{2}},\quad k= 1,2,\ldots \tag{22}\] uniformly with respect to \(x\in[0,1].\) Next, in our case \[P_{n}(da,\varepsilon,x):=\sum_{k=1}^{n}d_{k}a_{k}\varphi_{n_{k}}(x),\] then, by using (22) \[|M_{n}(da,\varepsilon)| = \frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(da, \varepsilon,x)dx\right|\] \[= \frac{1}{n}\sum_{i=1}^{n-1}\left|\sum_{k=1}^{n}d_{k}a_{k}\int_{0 }^{i/n}\varphi_{n_{k}}(x)dx\right|\leq\frac{1}{n}\sum_{i=1}^{n-1}\sum_{k=1}^{ n}d_{k}\left|a_{k}\right|\frac{1}{d_{k}k^{2}}\] \[\leq \left(\sum_{k=1}^{n}\left|a_{k}\right|^{p(\varepsilon)}\right)^ {1/p(\varepsilon)}\left(\sum_{k=1}^{n}k^{-2q(\varepsilon)}\right)^{1/q( \varepsilon)}=O_{\varepsilon}(1).\] **Theorem 4**.: _Let \((d_{n})\) be a nondeacrising sequence of positive numbers and \(d_{n}=O(\sqrt{n}).\) Any ONS \((\varphi_{n})\) contains a subsystem \((\varphi_{n_{k}})\) such that the series_ \[\sum_{k=1}^{\infty}d_{k}C_{n_{k}}(f)\varphi_{n_{k}}(x)\] _is unconditionally convergent a.e. on \([0,1]\) for an arbitrary \(f\in Lip1.\)_ Proof.: Here we also consider the very same subsystem, which was investigated in Theorem 3. 
That is why for any \(\varepsilon\in(0,1)\) and \((a_{n})\in l_{q(\varepsilon)}\) we get \((da=(d_{n}a_{n}))\) condition (21). Also as \(p(\varepsilon)=2-\varepsilon,\) if \(l(x)=1,\)\(x\in[0,1]\) (see (22)) \[\sum_{k=1}^{\infty}\left|d_{k}C_{n_{k}}(l)\right|^{p(\varepsilon)} = \sum_{k=1}^{\infty}\left|d_{k}\int_{0}^{1}\varphi_{n_{k}}(x)dx \right|^{p(\varepsilon)}\] \[< \sum_{k=1}^{\infty}\left|d_{k}\frac{1}{d_{k}k^{2}}\right|^{p( \varepsilon)}<+\infty.\] Now we rewrite equality (6) for \(P_{n}(da,\varepsilon,x)\) and we obtain \[\int_{0}^{1}f(x)P_{n}(da,\varepsilon,x)dx =\sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left(\frac{i+1}{n }\right)\right)\int_{0}^{i/n}P_{n}(da,\varepsilon,x)dx \tag{24}\] \[+\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f\left( \frac{i}{n}\right)\right)P_{n}(da,\varepsilon,x)dx\] \[+f\left(1\right)\int_{0}^{1}P_{n}(da,\varepsilon,x)dx=S_{1}+S_{2 }+S_{3}.\] According to (21), as \(f\in Lip1\) \[|S_{1}| = \left|\sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left( \frac{i+1}{n}\right)\right)\int_{0}^{i/n}P_{n}(da,\varepsilon,x)dx\right|\] \[= O(1)\frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(da, \varepsilon,x)dx\right|\] \[= O(1)M_{n}(da,\varepsilon)=O_{\varepsilon}(1).\] By (23), as \((a_{k})\in l_{q(\varepsilon)}\) we have \[|S_{3}| = \left|f\left(1\right)\int_{0}^{1}P_{n}(da,\varepsilon,x)dx\right| =O(1)\left|\int_{0}^{1}\sum_{i=1}^{n}d_{k}a_{k}\varphi_{n_{k}}(x)dx\right|\] \[= O(1)\left|\sum_{i=1}^{n}d_{k}a_{k}\int_{0}^{1}\varphi_{n_{k}}(x )dx\right|=O(1)\sum_{i=1}^{n}d_{k}\left|a_{k}\right|\frac{1}{d_{k}k^{2}}=O_{ \varepsilon}(1).\] And lastly, as \(f\in Lip_{1}\), \(d_{n}=O(\sqrt{n})\) and \((a_{n})\in l_{q(\varepsilon)}\) we obtain \[|S_{2}| = \left|\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f \left(\frac{i}{n}\right)\right)P_{n}(da,\varepsilon,x)dx\right| \tag{27}\] \[= O(1)\frac{1}{n}\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left|P_{n}(da, \varepsilon,x)\right|dx\] \[= O(1)\frac{1}{n}\left(\int_{0}^{1}P_{n}^{2}(da,\varepsilon,x)dx \right)^{1/2}=O\left(\frac{1}{n}\right)\left(\sum_{k=1}^{n}d_{k}^{2}a_{k}^{2} \right)^{1/2}\] \[= O\left(\frac{1}{n}\right)d_{n}\sqrt{n}\max_{k}|a_{k}|=O_{ \varepsilon}(1).\] To reach the final conclusion we combine the findings in (25), (26) and (27) in equality (24) which yields the result \[\left|\int_{0}^{1}f(x)P_{n}(da,\varepsilon,x)dx\right|=O_{\varepsilon}(1)\] and consequently \[\left|\sum_{k=1}^{n}d_{k}a_{k}C_{k}(f)\right| = \left|\sum_{k=1}^{n}d_{k}a_{k}\int_{0}^{1}f(x)\varphi_{n_{k}}(x)dx\right|\] \[= \left|\int_{0}^{1}f(x)P_{n}(da,\varepsilon,x)dx\right|=O_{ \varepsilon}(1).\] From here we conclude that the series \[\sum_{k=1}^{\infty}d_{k}a_{k}C_{k}(f)\] is convergent for any \((a_{n})\in l_{q(\varepsilon)}.\) Thus \((d_{k}C_{k}(f))\in l_{p(\varepsilon)}\) or \[\sum_{k=1}^{\infty}|d_{k}C_{n_{k}}(f)|^{2-\varepsilon}<+\infty.\] As we know, by the Theorem B the series \[\sum_{k=1}^{\infty}d_{k}C_{n_{k}}(f)\varphi_{n_{k}}(x)\] is unconditionally convergent a. e. for any \(f\in Lip_{1}.\) Here a question might arise: is the condition (see Theorem 1) \[C_{n}(l)\in l_{p(\varepsilon)},\text{ where }l(x)=1,x\in[0,1]\] sufficient for convergence of series \[\sum_{k=1}^{\infty}|C_{k}(f)|^{2-\varepsilon}\] for any \(f\in Lip_{1}\)? 
**Theorem 5**.: _There exists a function \(g\in Lip_{1}\) and ONS \((\Phi_{n})\) such that_ \[1)\int_{0}^{1}\Phi_{n}(x)dx=0,\quad n=1,2,\ldots.\] 2) _For any \(\varepsilon>0,\)_ \[\sum_{n=1}^{\infty}|C_{n}(g,\Phi)|^{p(\varepsilon)}=+\infty,\] _where_ \[C_{n}(g,\Phi)=\int_{0}^{1}g(x)\Phi_{n}(x)dx=0,\quad(n=1,2,\ldots).\] Proof.: Let us assume that \(f(x)=1-\cos 4(x-1/2)\pi.\) According to the Banach Theorem there exist an ONS \((\varphi_{n}),\) such that \[\limsup_{n\rightarrow+\infty}|S_{n}(x,f)|=+\infty. \tag{29}\] From here for any \(\varepsilon>0\) \[\sum_{n=1}^{\infty}|C_{n}(f)|^{p(\varepsilon)}=+\infty, \tag{30}\] where \[C_{n}(f)=\int_{0}^{1}f(x)\varphi_{n}(x)dx=0,\quad(n=1,2,\dots).\] Now we investigate the function \[g\left(x\right)=\left\{\begin{array}{cc}f(2x),&\mbox{when}\quad x\in\left[0, \frac{1}{2}\right],\\ &\\ 0,&\mbox{when}\quad x\in\left[\frac{1}{2},1\right].\end{array}\right. \tag{31}\] It is evident that \(g\in Lip_{1}.\) Suppose \[\Phi_{n}\left(x\right)=\left\{\begin{array}{cc}\varphi_{n}(2x),&\mbox{when} \quad x\in\left[0,\frac{1}{2}\right],\\ &\\ -\varphi_{n}\left(2\left(x-\frac{1}{2}\right)\right),&\mbox{when}\quad x\in \left[\frac{1}{2},1\right].\end{array}\right. \tag{32}\] Then \[\int_{0}^{1}\Phi_{n}(x)dx = \int_{0}^{1/2}\varphi_{n}(2x)dx-\int_{1/2}^{1}\varphi_{n}\left(2 \left(x-\frac{1}{2}\right)\right)dx\] \[=\frac{1}{2}\int_{0}^{1}\varphi_{n}(x)dx-\frac{1}{2}\int_{0}^{1} \varphi_{n}(x)dx=0.\] Next (see (32)) \[\int_{0}^{1}\Phi_{n}(x)\Phi_{m}(x)dx = \int_{0}^{1/2}\varphi_{n}(2x)\varphi_{m}(2x)dx\] \[+ \int_{1/2}^{1}\varphi_{n}\left(2\left(x-\frac{1}{2}\right)\right) \varphi_{m}\left(2\left(x-\frac{1}{2}\right)\right)dx\] \[= \frac{1}{2}\int_{0}^{1}\varphi_{n}(x)\varphi_{m}(x)dx+\frac{1}{2 }\int_{0}^{1}\varphi_{n}(x)\varphi_{m}(x)dx=0.\] Also (see (32)) \[\int_{0}^{1}\Phi_{n}^{2}(x)dx = \int_{0}^{1/2}\varphi_{n}^{2}(2x)dx+\int_{1/2}^{1}\varphi_{n}^{2 }\left(2\left(x-\frac{1}{2}\right)\right)dx\] \[= \frac{1}{2}\int_{0}^{1}\varphi_{n}^{2}(x)dx+\frac{1}{2}\int_{0}^{ 1}\varphi_{n}^{2}(x)dx=1.\] Thus, \(\left(\Phi_{n}\right)\) is an ONS. Finally (see (31), (32)) \[C_{n}(g,\Phi) = \int_{0}^{1}g(x)\Phi_{n}(x)dx=\int_{0}^{1/2}f(2x)\varphi_{n}(2x)dx\] \[= \frac{1}{2}\int_{0}^{1}f(x)\varphi_{n}(x)dx=\frac{1}{2}C_{n}(f).\] From where, according to (30) we conclude that \[\sum_{n=1}^{\infty}\left|C_{n}(g,\Phi)\right|^{p(\varepsilon)}=2^{-p( \varepsilon)}\sum_{n=1}^{\infty}\left|C_{n}(f)\right|^{p(\varepsilon)}=+\infty.\] Now we can prove that a.e. on \([0,1]\) \[\limsup_{n\rightarrow+\infty}\left|Q_{n}(x,g)\right|=\limsup_{n\rightarrow+ \infty}\left|\sum_{k=1}^{n}C_{k}(g,\Phi)\Phi_{k}(x)\right|=+\infty.\] Indeed, if \[S_{n}(x,f):=\sum_{k=1}^{n}C_{k}(f)\varphi_{k}(x)\] then \[Q_{n}\left(x,g\right)=\left\{\begin{array}{cc}\frac{1}{2}S_{n}(2x,f),& \mbox{when}&x\in\left[0,\frac{1}{2}\right],\\ \frac{1}{2}S_{n}(2(x-\frac{1}{2}),g),&\mbox{when}&x\in\left[\frac{1}{2},1 \right].\end{array}\right.\] Let us assume that \(x\in\left[0,\frac{1}{2}\right],\) then \[Q_{n}\left(x,g\right)=\frac{1}{2}S_{n}(2x,f)=\frac{1}{2}\sum_{k=1}^{n}C_{k}(f) \varphi_{k}(2x)\] or if \(2x=t,\)\(t\in[0,1]\) \[Q_{n}\left(\frac{t}{2},g\right)=\frac{1}{2}S_{n}(t,f)=\frac{1}{2}\sum_{k=1}^{ n}C_{k}(f)\varphi_{k}(t).\] By (29) we get \[\limsup_{n\rightarrow+\infty}\left|Q_{n}\left(\frac{t}{2},g\right)\right|= \frac{1}{2}\limsup_{n\rightarrow+\infty}\left|S_{n}(t,f)\right|=+\infty,\] a.e. \(t\in[0,1]\). 
Analogously is possible to show that, if \(x\in\left[\frac{1}{2},1\right],\) then \[Q_{n}\left(x,g\right) = \frac{1}{2}S_{n}\left(2\left(x-\frac{1}{2}\right),f\right)\] \[= \frac{1}{2}\sum_{k=1}^{n}C_{k}(f)\varphi_{k}\left(2\left(x-\frac{ 1}{2}\right)\right)\] or if \(2\left(x-\frac{1}{2}\right)=t,\)\(t\in[0,1]\) \[Q_{n}\left(\frac{t}{2}+\frac{1}{2},g\right)=\frac{1}{2}S_{n}(t,f)=\frac{1}{2} \sum_{k=1}^{n}C_{k}(f)\varphi_{k}(t).\] By (29) we get \[\limsup_{n\rightarrow+\infty}\left|Q_{n}\left(\frac{t}{2}+\frac{1}{2},g \right)\right|=\frac{1}{2}\limsup_{n\rightarrow+\infty}\left|S_{n}(t,f) \right|=+\infty,\] a.e. \(t\in[0,1]\). Consequently \[\limsup_{n\rightarrow+\infty}\left|Q_{n}(t,g)\right|=+\infty,\] a.e. on \([0,1]\), for \(g\in Lip_{1}\). ## 4. Problems of efficiency We call the condition \(M_{n}(a,\varepsilon)=O(1)\) efficient if it is easily verified for classical ONS. **Theorem 6**.: _Let \((\varphi_{n})\) be an ONS and uniformly with respect to \(x\in[0,1]\)_ \[\int_{0}^{x}\varphi_{n}(u)du=O(1)\frac{1}{n}.\] _Then for arbitrary \((a_{n})\in l_{2}\)_ \[M_{n}(a,\varepsilon)=O(1). \tag{33}\] Proof.: \[M_{n}(a,\varepsilon) = \frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(a,x)dx\right| =\frac{1}{n}\sum_{i=1}^{n-1}\left|\sum_{k=1}^{n}a_{k}\int_{0}^{i/n}\varphi_{k }(x)dx\right|\] \[= O(1)\sum_{k=1}^{n}\frac{1}{k}\left|a_{k}\right|=O(1)\left(\sum_ {k=1}^{n}a_{k}^{2}\right)^{1/2}\left(\sum_{k=1}^{n}k^{-2}\right)^{1/2}=O(1).\] So the trigonometric \((\sqrt{2}\cos 2\pi nx,\sqrt{2}\sin 2\pi nx)\) and the Walsh systems satisfy the condition (33). **Theorem 7**.: _If \((X_{n})\) is the Haar system, then for any arbitrary \((a_{n})\in l_{2},\) the condition (33) holds._ Proof.: According to the definition of the Haar system we have (see [1] ch.I, #6. p.54) \[\left|\int_{0}^{x}\sum_{k=2^{m}}^{2^{m+1}}a_{k}X_{k}(u)du\right|\leq 2^{-m/2} \left|a_{k(m)}\right|,\] where \(2^{m}\leq k(m)<2^{m+1}\). Using the inequality (34), when \(n=2^{p}\) for an arbitrary \(f\in Lip_{1}\) we get \[V_{n}(a) = \frac{1}{n}\sum_{i=1}^{n-1}\left|\int_{0}^{i/n}P_{n}(a,x)dx\right|\] \[= \frac{1}{2^{p}}\sum_{i=1}^{2^{p}-1}\left|\sum_{m=0}^{p-1}\int_{0} ^{i/2^{p}}\sum_{k=2^{m}}^{2^{m+1}-1}X_{k}(x)a_{k}dx\right|\] \[= O(1)\sum_{m=0}^{p-1}2^{-m/2}\left|a_{k(m)}\right|\] \[= O(1)\sum_{m=0}^{p-1}2^{-m/2}\left(\sum_{k=2^{m}}^{2^{m+1}-1}a_{k }^{2}\right)^{1/2}\] \[= O(1)\left(\sum_{m=0}^{p-1}\sum_{k=2^{m}}^{2^{m+1}-1}a_{k}^{2} \right)^{1/2}\left(\sum_{m=0}^{p}2^{-m}\right)^{1/2}=O(1).\] Finally, we conclude that when \(n=2^{p}+l\), \(1\leq l<2^{p}\) the condition (33) is valid. ## 5. Conclusion From everything we have discussed in this article, it is clear to see that, despite the fact that the general Fourier series of \(Lip_{1}\) class functions are not convergent in a general sense, we can still separate a whole class of orthonormal systems, whose functions satisfy certain conditions and with respect to which Fourier series of \(Lip_{1}\) class functions are unconditionally convergent (see Theorem 2). Furthermore, we have proven that the conditions imposed upon the functions of ONS are accurate and valid. It is also worth mentioning the fact that every ONS contains a subsystem with respect to which general Fourier series of any \(f\in Lip_{1}\) function is unconditionally convergent a. e. on \([0,1]\).s.
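As a brief closing illustration of the efficiency condition from Section 4, the hypothesis of Theorem 6 can be verified for the trigonometric system by a direct computation: uniformly with respect to \(x\in[0,1]\),
\[\left|\int_{0}^{x}\sqrt{2}\cos 2\pi nu\,du\right|=\frac{\sqrt{2}\left|\sin 2\pi nx\right|}{2\pi n}\leq\frac{\sqrt{2}}{2\pi n}\quad\text{and}\quad\left|\int_{0}^{x}\sqrt{2}\sin 2\pi nu\,du\right|=\frac{\sqrt{2}\left|1-\cos 2\pi nx\right|}{2\pi n}\leq\frac{\sqrt{2}}{\pi n},\]
that is, \(\int_{0}^{x}\varphi_{n}(u)du=O(1)\frac{1}{n}\) uniformly in \(x\), so condition (33) holds for arbitrary \((a_{n})\in l_{2}\) by Theorem 6.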
2309.11935
RTS-GT: Robotic Total Stations Ground Truthing dataset
Numerous datasets and benchmarks exist to assess and compare Simultaneous Localization and Mapping (SLAM) algorithms. Nevertheless, their precision must follow the rate at which SLAM algorithms improved in recent years. Moreover, current datasets fall short of comprehensive data-collection protocol for reproducibility and the evaluation of the precision or accuracy of the recorded trajectories. With this objective in mind, we proposed the Robotic Total Stations Ground Truthing dataset (RTS-GT) dataset to support localization research with the generation of six-Degrees Of Freedom (DOF) ground truth trajectories. This novel dataset includes six-DOF ground truth trajectories generated using a system of three Robotic Total Stations (RTSs) tracking moving robotic platforms. Furthermore, we compare the performance of the RTS-based system to a Global Navigation Satellite System (GNSS)-based setup. The dataset comprises around sixty experiments conducted in various conditions over a period of 17 months, and encompasses over 49 kilometers of trajectories, making it the most extensive dataset of RTS-based measurements to date. Additionally, we provide the precision of all poses for each experiment, a feature not found in the current state-of-the-art datasets. Our results demonstrate that RTSs provide measurements that are 22 times more stable than GNSS in various environmental settings, making them a valuable resource for SLAM benchmark development.
Maxime Vaidis, Mohsen Hassanzadeh Shahraji, Effie Daum, William Dubois, Philippe Giguère, François Pomerleau
2023-09-21T09:47:55Z
http://arxiv.org/abs/2309.11935v2
# RTS-GT: Robotic Total Stations Ground Truthing dataset ###### Abstract Numerous datasets and benchmarks exist to assess and compare Simultaneous Localization and Mapping (SLAM) algorithms. Nevertheless, their precision must follow the rate at which SLAM algorithms improved in recent years. Moreover, current datasets fall short of comprehensive data-collection protocol for reproducibility and the evaluation of the precision or accuracy of the recorded trajectories. With this objective in mind, we proposed the Robotic Total Stations Ground Truthing dataset (RTS-GT) dataset to support localization research with the generation of six-Degrees Of Freedom (DOF) ground truth trajectories. This novel dataset includes six-DOF ground truth trajectories generated using a system of three Robotic Total Stations (RTSs) tracking moving robotic platforms. Furthermore, we compare the performance of the RTS-based system to a Global Navigation Satellite System (GNSS)-based setup. The dataset comprises around sixty experiments conducted in various conditions over a period of 17 months, and encompasses over 49 kilometers of trajectories, making it the most extensive dataset of RTS-based measurements to date. Additionally, we provide the precision of all poses for each experiment, a feature not found in the current state-of-the-art datasets. Our results demonstrate that RTSs provide measurements that are 22 times more stable than GNSS in various environmental settings, making them a valuable resource for SLAM benchmark development. ## I Introduction Accurate and precise ground truth trajectories in open-source datasets are essential to evaluate Simultaneous Localization and Mapping (SLAM) algorithms [1]. Motion capture such as Vicon or OptiTrack systems have become the _de facto_ standard to generate such ground truth in indoor environments [2]. Nonetheless, they are not suitable to be deployed outside due to direct sunlight corrupting the readings and the need to cover significantly larger areas. For outdoor deployments, the majority of datasets use Global Navigation Satellite System (GNSS) in Real Time Kinematics (RTK) mode integrated with Inertial Navigation System (INS) to compute the reference trajectories within several centimeters accuracy [3, 4, 5]. Meanwhile, such localization systems are vulnerable to GNSS outages, as seen on the KITTI dataset [6]. A few datasets rely on High-Definition (HD) maps from a terrestrial laser scanner to obtain the reference within centimeter accuracy [7, 8]. However, surveying remains an open challenge, especially in off-road terrains where reference planes are scarce, making it challenging to align scans. In recent years, a limited number of datasets used a single Robotic Total Station (RTS) to generate such references indoors or outdoors with various platforms [9, 10]. RTS is a surveying instrument capable of measuring the position of a reflective target (hereafter _prism_) with millimeter precision in two modes: 1) static when the prism is fixed, and 2) dynamic when the prism is in motion. Moreover, two types of prism exist, active and passive. An active prism is equipped with LEDs that emit a unique infrared signature. This active signature enables multiple RTSs to automatically track different prisms within their field of view. Regardless of their accuracy and robustness, a single RTS can only provide the reference 3D position of the tracked platform. This limitation arises since only one prism can be tracked while the robot is in motion. 
By using three passive prisms, it is possible to obtain the six-Degrees Of Freedom (DOF) reference trajectory by collecting manually the data when the robot is static [11], which makes the data collection procedure cumbersome. To the best of our knowledge, active prisms were never used in any released datasets providing six-DOF reference trajectories generated by RTSs in dynamic mode. Moreover, new research done recently enables estimation of the uncertainty for a multi-RTS setup [12]. This uncertainty on ground truth is not available in state-of-the-art SLAM datasets, which can lead to unbiased comparisons as many SLAM algorithms are approaching centimeter-level accuracy [13]. The key motivation for this work is to provide a high-quality dataset that covers a variety of challenging environments and further motivates ground truth generation for SLAM algorithms. As shown in our previous research [12, 14], generating a reliable six-DOF reference trajectories of a moving robot with three RTSs is now feasible. As a follow-up, we present Robotic Total Stations Ground Truthing dataset (RTS-GT), a dataset providing six-DOF reference trajectories originating from two different types of setup, three RTSs and three GNSS receivers. The RTS-GT dataset was collected during 17 months in diverse weather conditions, totaling over 49 kilometers of trajectories. With this dataset, we provide and compare the precision and the reproducibility of the setups during multiple weather conditions and in different environments as shown in Figure 1, demonstrating that RTSs are more reliable than GNSS to generate ground truth trajectories. ## II Related work Among the majority of datasets used for benchmarking SLAM algorithms, the prominent reference trajectory generation method is GNSS-Aided INS. This approach relies on integrating data from one or more GNSS receivers with high-frequency measurements obtained from a Inertial Measurement Unit (IMU) within an INS [18]. This fusion provides the six-DOF pose estimation for the platform. The KITTI dataset [6] was the first dataset to introduce a SLAM algorithm benchmark using over 39 km of reference trajectory data, generated by an GNSS-Aided INS. This system was mounted on a car equipped with lidar and camera sensors for continuous data collection. Given this success within the scientific community, numerous other benchmarks emerged, employing sensor-equipped cars and similarly utilizing GNSS-Aided INS for production of reference trajectories datasets [3, 4, 5]. Nonetheless, GNSS-Aided INS systems encounter challenges in urban environments, such as the GNSS canyon effect caused by buildings, issues related to signal multi-path, and limited satellite visibility. As a result, the system's accuracy can fluctuate, typically ranging around 10 cm which may be too significant for specific benchmarking scenarios such as the TUM-VI dataset [2]. This dataset focused on high-speed drone localization using camera images and IMU data. Given the drone's aggressive dynamics, a one-millimeter-accurate Optitrack system was used to generate the six-DOF reference trajectory. Other datasets have employed high-definition 3D maps provided by terrestrial scanners to generate reference trajectories [16]. The obtained accuracy is under ten centimeters, and the dataset supports object detection and tracking to train autonomous vehicle algorithms under clear or rainy conditions. 
More recently, [8] also utilizes high-resolution mapping of urban locations to reconstruct a camera, lidar, and IMU setup's trajectory in six-DOF. Lidar scans are directly aligned with the high-definition 3D map, resulting in millimeter-level accuracy. For natural environments, [15] used an SLAM algorithm to reconstruct their setup's trajectory in six-DOF and used it as a ground truth with an 30 cm accuracy. All of these methods are limited in accuracy, as seen with GNSS-Aided INS systems, or are difficult to apply in outdoor environments, such as high-resolution terrestrial laser scanning or Optitrack measurement methods. Our RTS-GT dataset overcomes these limitations, offering a dataset in various indoor and outdoor environments with centimeter-level accuracy through the use of multiple RTSs. As mentioned earlier, previous datasets have already employed RTSs to generate reference trajectories for their robotic platforms. The EuRoc [17] and UZH-FPV [9] datasets have employed a Leica RTS-based system to generate reference trajectories for drones navigating in indoor and outdoor environments. Similarly, the Hilti SLAM dataset [10] uses a Leica RTS-based system for dynamic tracking of a setup equipped with lidar, cameras, and an IMU. To generate reference trajectories using RTSs, two strategies exist in the literature. The first strategy, employed by Pomerleau et al. [11], involves placing three passive prisms on a robotic platform and manually capturing static position measurements using an RTS. This method provides the platform's six-DOF with millimeter-level accuracy. However, the number of obtained reference poses is limited due to the time-consuming nature of manual RTS measurements. The second strategy involves dynamically tracking an active prism with an RTS. The positional measurement accuracy is in the order of millimeters, and the entire prism trajectory can be reconstructed. However, the generated reference trajectory has only three-DOFs as a single prism is tracked in real-time. The Table I provides an overview of the mentioned datasets along with their respective characteristics. To the best of our knowledge, there is not any reference dataset that provides a six-DOF reference trajectory of a dynamic platform using a multi-RTS-based system. Moreover, there is a lack of precision information in available datasets regarding the precision of the dynamic RTS or GNSS measurements. Hence, we present the RTS-GT dataset, the first dataset focusing on creating six-DOF reference trajectories using three RTSs tracking three distinct active prisms. Additionally, we provide the precision of measurements and the final poses, a unique feature not previously provided in other datasets. ## III Hardware Two mobile robotic platforms were deployed for the dataset: a Clearpath Warthog Uncerewed Ground Vehicle (UGV), and a SuperDroid HD2 UGV. The Warthog robot is equipped with a RoboSense RS-32 lidar and an XSens MT-10 IMU, whereas the HD2 robot's sensor payload consists of a Velodyne-VLP 16 lidar and an XSens MTi-30 IMU. We generalized our approach to generate reference trajectories by using and comparing two different types of setup for the two robots, as shown in Figure 2 and Figure 3 for deployments of the HD2 and Warthog platforms, respectively. The first ground truth setup is composed of three Trimble S7 RTSs. Each RTS tracks a single Trimble MultiTrack Active Target MT1000 prism at a maximum achievable measurement rate of 2.5 Hz. 
For this active prism, the nominal range at the tracking mode is 800 m and the nominal position measurement accuracy is 4 mm. To collect the data from each RTS, a Raspberry Pi 4 is used as a client through a USB connection. The data are then sent to a master Raspberry Pi 4 through LoRa Shield radio modules from Dragino operating with a radio frequency of 905 MHz. The chosen modulation allows the Raspberry Pi to reliably send data over a distance of up to 800 m at 366 Bps in open space. The second ground truth setup is composed of a set of four GNSS receivers used for outdoor experiences. In this setup, three GNSS receivers are mounted on the Warthog UGV and the fourth GNSS receiver serves as a static base station. For the sake of comparison, two different models of GNSS receivers are used in this dataset, _Reach RS+_ and _Trimble R10-2_. When operating in RTK rover/base mode, the _Trimble R10-2_ receiver has a vertical accuracy of 21.2 mm and horizontal accuracy of 11.3 mm, while the _Reach RS+_ has a vertical accuracy of 19.8 mm and horizontal accuracy of 9.9 mm. All three prisms and GNSS receivers were mounted on the Warthog robot for outdoor experiments. The HD2 UGV was used to collect data in the university tunnels. In such environments, GNSS receivers are not functional, thus, only the prisms can be used. ## IV Data collection Our main contribution revolves around delivering a dataset featuring openly available ground truth trajectories obtained using diverse setups. The RTS-GT dataset was gathered across three distinct environmental settings, encompassing various times of day and spanning around diverse weather conditions, summarized in the Table II. \begin{table} \begin{tabular}{l l l l l l l l l l l l l l} \hline \hline _Dataset_ & _Area_ & _Platform_ & \multicolumn{6}{c}{_Sensors_} & \multicolumn{1}{c}{_Ground truth_} & _DOF_ & _Accuracy_ & _Distance_ & _Wather_ & _Site_ \\ & & & Lidar & Cam. & IMU & GNSS Radar & & & & & & & \\ \hline KAIST [3] & Urban & Car & & ✓ & ✓ & ✓ & - & INS & 3 & \(<10\) cm* & 191 km & - & O \\ Wild-Places [15] & Forest & Handheld & ✓ & ✓ & ✓ & - & - & HD map & **6** & \(<30\) cm & 33 km & - & O \\ NCLT [5] & Campus & Segway & ✓ & ✓ & ✓ & ✓ & - & INS & **6** & \(\approx 10\) cm* & 147 km & - & I/O \\ KITTI [6] & Urban & Car & ✓ & ✓ & ✓ & ✓ & - & INS & **6** & \(<10\) cm* & 39 km & - & O \\ nuScenes [16] & Urban & Car & ✓ & ✓ & ✓ & ✓ & ✓ & HD map & **6** & \(<10\) cm & 242 km & **Rain** & O \\ Oxford [4] & Urban & Car & ✓ & ✓ & ✓ & ✓ & ✓ & INS & **6** & \(\approx 1\) cm & **280 km** & **Fog, rain, snow** & O \\ Hitli-Oxford [8] & Urban & Handheld & ✓ & ✓ & ✓ & - & - & HD map & **6** & \(<1\) cm & \(<10\) km & - & I/O \\ \hline Hitli SLAM [10] & Urban & Handheld & ✓ & ✓ & ✓ & - & - & - & Hitli PLT 300 & 3 & \(\approx 3\) mm & \(<10\) km & - & I/O \\ Euroe [17] & Indoor & UAV & - & ✓ & ✓ & - & - & Leica MS50 & 3 & **3** & **1 mm** & 0.9 km & - & I \\ UZH-F [9] & Campus & UAV & - & ✓ & ✓ & - & - & Leica MS60 & 3 & \(\approx 1\) mm & 10 km & - & I \\ TUM [2] & Campus & Handheld & - & ✓ & ✓ & - & - & OptiTrack & **6** & \(\approx 1\) mm & \(<1\) km* & - & I/O \\ Challenging dataset [11] & Campus, mountain & Rig & ✓ & - & ✓ & - & - & Leica TS15 & **6** & \(\approx 1\) **mm** & \(<1\) km & - & I/O \\ _RTS-GT dataset (Ours)_ & Campus, forest & UGV & ✓ & - & ✓ & ✓ & - & Trimble S7 & **6** & \(\approx 4\) mm & **49 km** & **Fog, rain, snow** & I/O \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of public datasets containing ground truth trajectories. 
The top half of the table shows the most popular datasets used for SLAM evaluation. The bottom half shows public datasets that contain ground truth generated by the most accurate setups. The RTS-GT dataset is the only that allows dynamic six-DOF ground truth generation with RTSs on a large scale. The symbol * indicates the value is estimated by us. The letter \(O\) means _Outside_ and the letter \(I\) means _Inside_. Fig. 3: Setup used on the campus with the Warthog UGV. A GNSS fixed base station sends corrections to the three GNSS rovers on the robot. Three prisms are tracked by three RTS. Data are collected by three Raspberry Pi clients connected by USB to the RTS. A LoRa communication protocol is used to send data to a Raspberry Pi master located on the UGV. The lidar and IMU are on the front part of the Warthog. Fig. 2: Setup used in the tunnel sites. _Left_: RTS setup with the HD2 robot. _Right_: one deployment done in a 120 m tunnel. Because the floor was slippery, heavy weights were added to stabilize the RTSs tripods. ### _Environments_ The first deployment site is the campus of Universite Laval in Quebec City. This campus comprises buildings, open spaces, and wooded areas. Multiple views of the campus are depicted in Figure 1-_right_ and Figure 3. Overall, 45 different experiments through 20 distinct deployments were conducted with the Warthog UGV, totaling 37.6 km of trajectories on the campus. During these data collection campaigns, various weather conditions were encountered, including clear weather, rain, and even a snowstorm. Data from both RTS and GNSS setups are available for each of the conducted experiments, enabling a comparison of their respective accuracies in generating reference trajectories. For the second deployment site, underground tunnels located beneath the university campus were selected. Four deployments were conducted in these tunnels to gather data from RTS setup during 20 different experiments, covering a total of 4 km of trajectories with the HD2 platform. These tunnels have lengths of several hundred meters, which might cause particular challenges for SLAM algorithms. Unlike other setups based on motion capture or ultra-wideband, our RTS setup can generate reference trajectories at long ranges without the need to alter the environment. The Figure 2 shows an example of a tunnel as a long-range environment. Finally, the last site is located in the Montmorey Forest, which belongs to Universite Laval. This site contains numerous paths for snowmobiles and cross-country ski trails in a dense forest. Two deployments were carried out with the Warthog robot, along with RTS and GNSS systems. In total, 3.2 km of trajectories were acquired during four different experiments. An example of the RTS system in the forest is depicted in Figure 1-_left_. The lidar was available during the majority of deployments to collect data for SLAM algorithms. ### _Ground truth protocol_ In this section, we introduce the standardized protocols that we employed for each of the setups during all conducted deployments, split between RTS and GNSS. Starting with **RTS protocol**, the field deployment of the three RTSs is carried out in several steps: 1. (30 min) RTS units are acclimated to the ambient temperature before data collection can begin. This step is necessary to prevent any condensation effects that could bias the measurements, especially in winter; 2. (5-60 min) Tripods for the RTS are set up, while the RTS units adjust to the ambient temperature; 3. 
(10-15 min) RTS units are mounted on tripods, and we roughly level the RTS units visually. To achieve a finer leveling of the RTS units, calibrated electronic sensors are utilized; 4. (3-5 min) Raspberry Pi devices are powered on and connected via USB to the RTS units to retrieve their data and send it to the master unit for recording. These Raspberry Pi devices also assign a unique prism number to each RTS unit to be tracked; 5. (1-2 min) Active prisms are mounted on the UGV and powered on with their unique ID. To facilitate data processing of multiple experiments conducted by the robots, the same prism IDs and positions were adopted during all data acquisition operations; 6. (1-10 min) Verification of proper prism tracking is performed, followed by measurements of static prism positions to enable post-processing extrinsic calibration of the RTS units. The complete setup process takes between 50 min and 120 min, depending on the number of available operators and weather conditions. At the end of the data collection procedure, prisms have to remain on the UGV to immediately perform an extrinsic sensor calibration, discussed later. As for the **GNSS protocol**, we need a minimum of three GNSS receivers as rovers plus one GNSS receiver as a base station to generate a six-DOF ground truth with a GNSS-only setup. This rover/base configuration, known as RTK, can achieve centimeter-level accuracy by correcting errors of the GNSS receivers used as rovers. The base station, consisting of a GNSS receiver, provides corrections for the rover GNSS observations by simultaneously monitoring the same satellites as the rover receivers. The base station can be fixed at a predetermined, known, and stationary location (e.g., a geodesic pillar stock or a geodetic survey marker) or at an unknown position. If the position of the base station is known in advance, the GNSS receivers just have to be powered on, and once the radio connection between the base and rovers is established, the system is operational and the collection of data can start. On the other hand, if the position of the base station is unknown, a waiting time of at least 15 min is necessary before data collection to let the rover and base receivers have enough time to boot, average their positions, and achieve optimal reading of the satellite constellations in the sky. The positional corrections are then transmitted through messages via a radio link from the base station to the rover receivers, where they are employed to correct the real-time positions of the rovers. Moreover, the internal radio of the GNSS receiver has a maximum range of 2 km. To achieve a greater range, we have to use the external radio, which can theoretically transmit up to 10 km under optimal conditions. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & _Month_ & _Exp._ & _Length_ & _Weather_ & _Robot_ & _Setup_ \\ \hline \multirow{8}{*}{Campus} & Feb. 22 & 1 & 1.58 km & C & W & RTS/GNSS \\ & Mar. 22 & 6 & 7.45 km & FR, S, C & W & RTS/GNSS \\ & May 22 & 8 & 14.43 km & C & W & RTS/GNSS \\ & Jun. 22 & 4 & 3.42 km & C & W & RTS/GNSS \\ & Jul. 22 & 6 & 2.67 km & C & W & RTS/GNSS \\ & Nov. 22 & 14 & 3.91 km & C & W & RTS/GNSS \\ & Dec. 22 & 4 & 2.1 km & C, R & W & RTS/GNSS \\ & Jul. 23 & 2 & 2.82 km & C & W & RTS/GNSS \\ \hline \multirow{3}{*}{Tunnel} & May 22 & 6 & 1.55 km & C & H & RTS \\ & Jul. 22 & 5 & 1.58 km & C & H & RTS \\ & Sep. 22 & 9 & 0.85 km & C & H & RTS \\ \hline Forest & Nov.
22 & 4 & 3.16 km & C & W & RTS/GNSS \\ \hline \hline \end{tabular} \end{table} TABLE II: Table of deployments in the RTS-GT dataset. The weather legend is as follows: C=Clear, FR=Freezing rain, S=Snow, R=Rain. The robot legend means W=Warthog and H=HD2. ### _Calibration and time synchronization_ This section specifies how the systems were calibrated and synchronized, along with the data format given by the dataset. Attention is given to 1) extrinsic calibration and 2) time synchronization. First, the extrinsic calibration process involves obtaining the precise pose of all the sensors on the robots. Calibration is performed using one RTS at the end of each deployment to closely match the conditions of the experiments as temperature, pressure, and humidity can affect measurements, especially in winter. Retro-reflective targets, as shown in Figure 3, are stuck atop the lidar and GNSS receivers. The prisms are left in active mode for this calibration. Ten repetitions are performed for each millimeter-precise position to increase precision and to determine their uncertainties. This method provides the relative positions of the prisms, GNSS receivers, and lidar to each other. Since the IMU is too small, its extrinsic calibration with the lidar is based on the work of Kubelka et al. [19], who uses lidar and IMU data to perform calibration within a tenth of a degree using a modified four-DOF SLAM algorithm. Translation between the lidar and IMU is determined using the Computer-Aided Design (CAD) model of their support. These calibrations were performed after each deployment to ensure precise referenced measurements. As for time synchronization, a Network Time Protocol (NTP) daemon was used to synchronize the clocks between the two on-board computers on the robot (i.e., the main computer of the UGV and the Raspberry Pi master connected to the robot network). Ten minutes was allowed after booting up both of the computers so that the client clock could adjust. Measurements from the wheel encoders, lidar, and IMU were timestamped using the data logging computer's clock. Synchronization between the master and client Raspberry Pi is achieved using a modified NTP protocol designed for LoRa communication, as described in [20]. Initial synchronization is performed at the start of data collection, followed by repetitions every five minutes for each client. This method ensures clock precision at the level of one to two milliseconds. As the GNSS devices are not connected to the setup, temporal synchronization to the RTS data is established using a classical maximum likelihood state estimator, similar to the approach taken by Burri et al. [17]. ### _Data format_ For each deployment and experiment, ROS 2 rosbag files are provided for data gathered by the Warthog and HD2 UGV. These files include lidar scans, IMU measurements, motors and encoders data, as well as rigid transformation between all sensor frames. Unified Robot Description Format (URDF) files are provided along with these rosbag files for each UGV. Note that no INS were used to process the IMU measurements. All IMU raw data is available in the different rosbag files. With the RTS setup, prism positions, and time synchronizations are provided for rosbag files in ROS 1 and ROS 2. The static prism positions computed for the RTS extrinsic calibration are given in a text file GCP.txt, and the sensor extrinsic calibration results are given in another text file calibration_raw.txt. 
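To make the use of these calibration files concrete, the following minimal sketch shows one way to recover the six-DOF pose of the robot from a single synchronized triplet of prism measurements, given the calibrated body-frame prism positions (in the spirit of GCP.txt). It is an illustration with placeholder numbers, not the dataset's official toolbox.

```python
import numpy as np

def rigid_transform(p_body, p_world):
    """Kabsch/Horn alignment: find R, t such that R @ p_body[i] + t matches p_world[i].

    p_body:  (3, 3) calibrated prism positions in the robot body frame (one row per prism).
    p_world: (3, 3) synchronized prism positions measured by the RTSs in a common world frame.
    """
    mu_b, mu_w = p_body.mean(axis=0), p_world.mean(axis=0)
    H = (p_body - mu_b).T @ (p_world - mu_w)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_w - R @ mu_b
    return R, t

# Placeholder values standing in for the calibrated prism layout and one synchronized RTS triplet.
prisms_body = np.array([[0.50, 0.30, 1.20], [0.50, -0.30, 1.20], [-0.40, 0.00, 1.35]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg yaw
prisms_rts = (true_R @ prisms_body.T).T + np.array([12.0, -3.5, 0.2])

R, t = rigid_transform(prisms_body, prisms_rts)
print(np.round(R, 3), np.round(t, 3))  # six-DOF pose of the robot in the RTS world frame
```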
Finally, NMEA and UBX files are provided for all GNSS receivers used during the experiments. The RTS-GT dataset is available at [https://github.com/norlab-ulaval/RTS_project](https://github.com/norlab-ulaval/RTS_project), along with toolbox code to process the data. ## V Discussion and challenges To assess the disparities among the setups, we conducted an analysis of 27 outdoor experiments, covering a total distance of 20.6 km, during which we tracked the trajectories of the robot using the GNSS and RTS-based setups when available. We evaluated the precision of each system by employing _inter-distances_ between prisms and GNSS receivers. These distances are calculated between every synchronized triplet of prism positions or GNSS receiver positions recorded during each experiment, respectively referred to as inter-prism distances and inter-GNSS distances. Subsequently, each of these distance triplets is compared to its corresponding calibrated distance (evaluated at step six of our protocol), which is taken as a reference to obtain the precision errors. Furthermore, we employed an inter-precision distance to quantify the variation in precision between different experiments conducted on the same site at different times. The purpose of this distance is to highlight the reproducibility of the different setups in generating ground truth trajectories. Sets of closest positions in the ground truth trajectories recorded during separate experiments are computed by a nearest neighbors algorithm. Subsequently, the inter-precision distance is computed for each set by subtracting the average inter-prism or inter-GNSS distances of the positions in the set. ### _Precision and reproducibility_ Results of inter-prism distances, inter-GNSS distances, and precision on the final translation and rotation of the robot are shown in Table III for 15 deployments. The precision on the final pose is estimated by the same method used in our previous work [12]. It can be seen that the RTS precision is stable in multiple environments, while the GNSS can have a variation of more than 300 mm in the forest environment, as well as in open space. Moreover, larger distances between a prism and its RTS affect the precision, leading to higher uncertainties. The results depicted in Figure 4_-Left_ offer general insights into the errors associated with the inter-prism and inter-GNSS distances. These findings reveal that the RTS acquisition system consistently achieves a median precision of approximately 4.5 mm, whereas the median precision of the GNSS system is around 22 times worse, at approximately 118.1 mm. It is important to emphasize that the inter-distance errors highlight the higher precision of the RTS acquisition system compared to the GNSS system. This discrepancy can be attributed to the relatively low error inherent to the RTS acquisition system, in contrast to the absolute error associated with GNSS. To evaluate the reproducibility between experiments, we employ a nearest neighbor distance calculation within three meters, with data expressed in the GNSS frame. As illustrated in Figure 4_-Right_, the RTS setup consistently exhibits high reproducibility, with a median error of 4.2 mm. The GNSS setup shows a similar level of reproducibility, with a median of 6.2 mm. However, the GNSS IQR, with a value of 133 mm, is 14 times larger than that of the RTS. This observation underscores the stability of precision across all experiments for the RTS compared to the GNSS. ### _Challenges encountered_ The first challenge encountered was the leveling of the RTSs in winter.
It is sometimes necessary to remove snow from the ground for step two of our protocol to prevent tripods from gradually sinking into the snow, which would impact the ground truth. In forested areas where the snow depth can reach several meters, as shown in Figure 5-_Left_, surface snow is compacted to provide stability. Secondly, as depicted in Figure 1, obstacles between prisms and RTSs can disrupt measurements. To address this issue, RTS placement locations are pre-selected based on the planned trajectory of the UGV, and the heights of both RTSs and prisms are varied to minimize occlusion risks. Another challenge is the presence of dust or dirt on RTS lenses, as shown in Figure 5-_Middle_, which can degrade RTS performance. Therefore, lenses should be wiped regularly. Furthermore, since prisms and GNSS units are elevated on the UGV, they are susceptible to vibrations during motions. These vibrations can lead to positioning errors of up to 1 cm. To deal with this problem, metal supports have been added to dampen vibrations, as depicted in Figure 5-_Right_. Finally, unlike GNSS data, RTS measurement timestamps are not globally valid. To address this issue and obtain the six-DOF pose, data interpolation is performed. This interpolation can reduce the accuracy of the final estimated pose, especially for high UGV dynamic and low-rate measurement setups. ## VI Conclusion In this paper, we have introduced a novel dataset, the RTS-GT dataset, designed to compare various ground truthing setups. The RTS-GT dataset stands out as a unique dataset for providing six-DOF reference trajectories of a moving robotic platform generated by three RTSs. The dataset encompasses GNSS and RTS data as well, along with information from lidar, IMU, and encoder sensors. It facilitates the application of SLAM algorithms and the assessment of results to the ground truth data. Furthermore, the RTS-GT dataset covers diverse environments and weather conditions to assess the quality of the different ground truth setups. Additionally, tools for quantifying their precision are provided, a feature not present in previous datasets. An extensive analysis of the precision and reproducibility of different trajectory generation setups was conducted. The results indicate that RTS systems deliver more precise and reproducible data compared to GNSS solutions, even when used in RTK mode. These results demonstrate the challenges posed by generating six-DOF ground truth trajectories in outdoor and indoor environments. Moreover, it shows that multiple RTSs can be used to benchmark six-DOF SLAM algorithm comparisons. Fig. 4: Distribution of errors arising for the two setups. _Left_: inter-prism and inter-GNSS distances. _Right_: inter-precision distances. Results obtained from the RTSs are denoted in blue, whereas those from the GNSS are shown in orange. The median error values are in the center of each box, and the Interquartile Range (IQR) is indicated alongside for reference. Fig. 5: Issues encountered while collecting the dataset. _Left_: difficulties of leveling on deep snow. _Middle_: dust on a lens which was interfering with the tracking mode of the RTS. _Right_: supports added on the Warthog to reduce vibrations on the top prism. 
\begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c c} \hline \hline \multicolumn{14}{c}{_Environment_} \\ \cline{3-14} \multicolumn{14}{c}{_Open space_} & \multicolumn{3}{c|}{_Building_} & \multicolumn{3}{c}{_Forest_} \\ \cline{2-14} \multicolumn{14}{c}{} & 24/02 & 07/03 & 14/03 & 16/03 & 22/06 & 30/06 & 11/07 & 29/11 & 05/12 & 12/03 & 16/11 & 31/03 & 09/11 & 10/11 & 24/11 \\ \hline Inter-prism distance [mm] & 6.5 & 2.4 & 4.1 & 4.1 & 9.7 & 5.7 & 22.0 & 4.6 & 3.7 & 4.3 & 4.5 & 6.6 & 13.0 & 5.0 & 3.2 \\ Inter-GNSS distance [mm] & 5.5 & 5.2 & 4.6 & 6.4 & 37.2 & 2.83 & 111.0 & 6.9 & 20.5 & 23.0 & 17.7 & 23.0 & 149.0 & 532.0 & 529.0 \\ Translation error [mm] & 15.2 & 23.7 & 23.5 & 10.6 & 12.3 & 12.8 & 5.5 & 22.2 & 12.7 & 23.7 & 12.0 & 19.0 & 12.8 & 6.7 & 21.0 \\ Rotation error [deg] & 1.51 & 1.6 & 2.05 & 1.4 & 0.9 & 0.94 & 4.0 & 1.94 & 1.33 & 1.93 & 0.85 & 1.06 & 0.97 & 0.53 & 1.59 \\ RTS Range [m] & 39.0 & 20.0 & 18.0 & 19.0 & 20.0 & 32.0 & 37.0 & 39.0 & 32.0 & 71.0 & 42.0 & 38.0 & 42.0 & 53.0 & 37.4 \\ \hline \hline \end{tabular} \end{table} TABLE III: Median values for different metrics obtained over 15 RTS and GNSS setups. All deployments happened during 2022 and the date format is day/month. For each row, light colors indicate low values, whereas dark means high values.
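For readers who want to reproduce the inter-prism precision metric reported in Table III, the sketch below computes inter-prism distance errors against calibrated reference distances, following the definition given in Section V. The trajectories below are synthetic placeholders, not data from the RTS-GT dataset.

```python
import numpy as np

def inter_prism_errors(p1, p2, p3, ref_d12, ref_d13, ref_d23):
    """Absolute deviation of each measured pairwise prism distance from its calibrated reference.

    p1, p2, p3: (N, 3) synchronized prism trajectories for one experiment.
    ref_*:      calibrated prism-to-prism distances (step six of the RTS protocol).
    """
    d12 = np.linalg.norm(p1 - p2, axis=1)
    d13 = np.linalg.norm(p1 - p3, axis=1)
    d23 = np.linalg.norm(p2 - p3, axis=1)
    return np.abs(np.stack([d12 - ref_d12, d13 - ref_d13, d23 - ref_d23], axis=1))

# Synthetic stand-in for one experiment: a rigid three-prism layout plus millimeter-level noise.
rng = np.random.default_rng(0)
offsets = np.array([[0.50, 0.30, 1.20], [0.50, -0.30, 1.20], [-0.40, 0.00, 1.35]])
base = rng.uniform(-20.0, 20.0, size=(500, 3))
p1, p2, p3 = (base + offsets[i] + rng.normal(0.0, 0.002, (500, 3)) for i in range(3))

ref = [np.linalg.norm(offsets[i] - offsets[j]) for i, j in ((0, 1), (0, 2), (1, 2))]
errors = inter_prism_errors(p1, p2, p3, *ref)
print("median inter-prism error [mm]:", round(1000.0 * float(np.median(errors)), 2))
```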
2306.17519
GPT-FinRE: In-context Learning for Financial Relation Extraction using Large Language Models
Relation extraction (RE) is a crucial task in natural language processing (NLP) that aims to identify and classify relationships between entities mentioned in text. In the financial domain, relation extraction plays a vital role in extracting valuable information from financial documents, such as news articles, earnings reports, and company filings. This paper describes our solution to relation extraction on one such dataset REFinD. The dataset was released along with shared task as a part of the Fourth Workshop on Knowledge Discovery from Unstructured Data in Financial Services, co-located with SIGIR 2023. In this paper, we employed OpenAI models under the framework of in-context learning (ICL). We utilized two retrieval strategies to find top K relevant in-context learning demonstrations / examples from training data for a given test example. The first retrieval mechanism, we employed, is a learning-free dense retriever and the other system is a learning-based retriever. We were able to achieve 3rd rank overall. Our best F1-score is 0.718.
Pawan Kumar Rajpoot, Ankur Parikh
2023-06-30T10:12:30Z
http://arxiv.org/abs/2306.17519v2
# GPT-FinRE: In-context Learning for Financial Relation Extraction using Large Language Models ###### Abstract. Relation extraction (RE) is a crucial task in natural language processing (NLP) that aims to identify and classify relationships between entities mentioned in text. In the financial domain, relation extraction plays a vital role in extracting valuable information from financial documents, such as news articles, earnings reports, and company filings. This paper describes our solution to relation extraction on one such dataset REFinD. The dataset was released along with shared task as a part of the Fourth Workshop on Knowledge Discovery from Unstructured Data in Financial Services, co-located with SIGIR 2023. In this paper, we employed OpenAI models under the framework of in-context learning (ICL). We utilized two retrieval strategies to find top K relevant in-context learning demonstrations / examples from training data for a given test example. The first retrieval mechanism, we employed, is a learning-free dense retriever and the other system is a learning-based retriever. We were able to achieve 3rd rank overall (model performance and report). Our best F1-score is 0.718. + Footnote †: Both authors contributed equally to this research.
## 1. Introduction In-context learning can also help to reduce hallucination, as examples can demonstrate to the model not to hallucinate in some cases. In this paper, we employed GPT-3.5 Turbo and GPT-4 under the framework of ICL for the relation extraction task on the REFinD dataset. We utilized two retrieval strategies to find the top K relevant in-context learning demonstrations / examples from the training data for a given test example. The first mechanism we employed is a learning-free dense retriever.
The other system we have utilized is a learning-based retriever (Kumar Rajpoot and Ankur Parikh, 2017). ## 2. Preliminary Background ### Task Definition As per the challenge description, "Relation Extraction is the task of automatically identifying and classifying the semantic relationships that exist between different entities in a given text." This shared task is part of the "Knowledge Discovery from Unstructured Data in Financial Services" (KDF) workshop, which is co-located with SIGIR 2023. Let C denote the input context and e1 in C, e2 in C denote the pair of entities. Given a set of predefined relation classes R, relation extraction aims to predict the relation y in R between the pair of entities (e1, e2) within the context C, or, if there is no predefined relation between them, predict y="no relation". ### Data The dataset (Kumar Rajpoot and Ankur Parikh, 2017) released with this task is the largest relation extraction dataset for financial documents to date. Overall, REFinD contains around 29K instances and 22 relations among 8 types of entity pairs. REFinD is created using raw text from various 10-X reports (including 10-K, 10-Q, etc., broadly known as 10-X) of publicly traded companies obtained from the US Securities and Exchange Commission. Figure 2 shows the different entity types and the relations that exist between them. ### In Context Learning In-context learning (ICL) refers to one of the core emergent abilities (Kumar Rajpoot and Ankur Parikh, 2017) that infers new tasks from context. We use the terms 'in-weights learning' and 'in-context learning' from prior work on sequence models (Kumar Rajpoot and Ankur Parikh, 2017) to distinguish between gradient-based learning with parameter updates and gradient-free learning from context, respectively. Formally, each training instance is first linearized into an input text x = (x1...xn) and an output text y = (y1...yn), where all tokens x1...xn, y1...yn belong to V, the vocabulary set of the LM. Given a new test input text x-test, in-context learning defines the generation of the output as y-test \(\sim\) PLM(y-test \(|\) x1, y1, ..., xk, yk, x-test), where \(\sim\) refers to decoding strategies (e.g., greedy decoding and nucleus sampling (Kumar Rajpoot and Ankur Parikh, 2017)), and each in-context example ei = (xi, yi) is sampled from a training set D. The generation procedure is especially attractive as it eliminates the need for updating the parameters of the language model when encountering a new task, which is often expensive and impractical. Notably, the performance of ICL on downstream tasks can vary from almost random to comparable with state-of-the-art systems, depending on the quality of the retrieved in-context examples (Kumar Rajpoot and Ankur Parikh, 2017) (Kumar Rajpoot and Ankur Parikh, 2017). ## 3. GPT-FinRE GPT-FinRE is formalized under the ICL framework, using GPT models, as shown in Figure 3. ### Prompt Construction We construct a prompt for each given test example, which is fed to the GPT models. Each prompt consists of the following components. Task Description and Predefined Classes: We provide a succinct overview of the RE task description and the subset of predefined classes R, denoted by O. This subset contains all possible relations that exist between the entity types of e1 and e2. The model is explicitly asked to output a relation that belongs to O; otherwise, the model will output "no relation". K-shot Demonstrations: In the demonstration part, we reformulate each example by first showing the input prompt x-demo = Prompt(C, e1, e2) and then the relation label y-demo. Test Input: Similar to the demonstrations, we offer the test input prompt x-test, and GPT models are expected to generate the corresponding relation y-test. Figure 2. REFinD dataset relation and entity types.
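To make the prompt layout above concrete, the following minimal sketch assembles a prompt from the three components (task description with predefined classes, K-shot demonstrations, and the test input). The exact wording and relation labels used in the actual system are not reproduced here, so the strings below are illustrative placeholders.

```python
def build_prompt(task_description, candidate_relations, demonstrations, test_example):
    """Assemble an ICL prompt in the spirit of Section 3.1 (layout is illustrative)."""
    lines = [task_description,
             "Possible relations: " + ", ".join(candidate_relations) + ", no relation", ""]
    for context, e1, e2, relation in demonstrations:        # retrieved training examples
        lines += [f"Context: {context}", f"Entity 1: {e1}", f"Entity 2: {e2}",
                  f"Relation: {relation}", ""]
    context, e1, e2 = test_example                           # the test instance to classify
    lines += [f"Context: {context}", f"Entity 1: {e1}", f"Entity 2: {e2}", "Relation:"]
    return "\n".join(lines)

# Hypothetical PERSON-TITLE example; entity strings and relation names are placeholders.
prompt = build_prompt(
    "Classify the relation between the two marked entities in the context.",
    ["person:title", "person:employee_of"],
    [("John Smith serves as Chief Financial Officer of the company.",
      "John Smith", "Chief Financial Officer", "person:title")],
    ("Jane Doe was appointed Chief Executive Officer in 2020.",
     "Jane Doe", "Chief Executive Officer"),
)
print(prompt)
```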
K-shot Demonstrations : In the demonstration part, we reformulate each example by first showing the input prompt x-demo = Prompt(C, e1, e2) and the relation label y-demo. Test Input : Similar to the demonstrations, we offer the test input prompt x-test, and GPT models are expected to generate the corresponding relation y-test. Figure 2. REFinD dataset relation and entity types. ### Retrieval Systems We have employed two retrieval strategies to find top K relevant in-context learning demonstrations / examples from training data for a given test example. #### 3.2.1. KNN with OpenAI Embeddings Since ICL demonstrations closer to the test sample in the embedding space result in more consistent and robust performance (Kenn et al., 2020). We utilized the KNN to retrieve the most similar examples in the training set as the few-shot demonstrations for each test example. As this learning-free dense retriever relies on the choice of the embedding space, we used OpenAI embeddings (text-embedding-ada-002) to obtain example representations. For similarity search, we used FAISS tool (Beng et al., 2019). #### 3.2.2. **EPR (Efficient Prompt Retrieval)** This learning-based dense retriever is trained to retrieve a better singleton in-context example (Kenn et al., 2020), and Top-K most similar examples are selected in the inference stage. This method for retrieving prompts for in-context learning uses annotated data and a LM. Given an input-output pair, it estimates the probability of the output given the input and a candidate training example as the prompt, and labels training examples as positive or negative based on this probability. It then trains an efficient dense retriever from this data, which is used to retrieve training examples as prompts at test time. Due to limited access to OpenAI, we have used the gpt-neo-2.7B model (Dosovitskiy et al., 2017) as our choice of LM. #### 3.2.3. Random Class Examples Along with KNN / EPR based examples, we also added K examples randomly for each possible class between two entity types to add more variety in the final prompt. ## 4. Experiments Due to limited access and cost associated with OpenAI, We performed 4 primary experiments on the test dataset. We tried various rule based heuristics to improve the F1-score, but it didn't work as expected. We used retriever implementations from 1. Footnote 1: [https://github.com/HKUNLP/icl-ceil](https://github.com/HKUNLP/icl-ceil) ## 5. Results The results are shown in Table-1. Our best F1-Score is 0.718. We got 4th position in the shared-task. We find that GPT 4 performs better than GPT 3.5 Turbo. We also find that learning based retriever (EPR) outperforms learning-free retriever (KNN with OpenAI embeddings). ## 6. Future Work In future we want to utilize GPT 4 for EPR. We also want to use different retrieval approaches such as Compositional Exemplars for In-context Learning (CEIL)(Kenn et al., 2020). ## 7. Conclusion This work explores the potential of GPT + ICL on Financial Relation Extraction (REFinD dataset). We used two retrieval mechanisms to find similar examples: (1) KNN with OpenAI Embeddings (2) EPR. We tried two different GPT models: (1) GPT 3.5 Turbo and GPT 4. The experimental results show that GPT 4 with learning based retriever EPR is giving the best F1-Score of 0.718.
2309.06291
Existence and uniqueness of periodic pseudospherical surfaces emanating from Cauchy problems
We study implications and consequences of well-posed solutions of Cauchy problems of a Novikov equation describing pseudospherical surfaces. We show that if the co-frame of dual one-forms satisfies certain conditions for a given periodic initial datum, then there exists exactly two families of periodic one-forms satisfying the structural equations for a surface. Each pair then defines a metric of constant Gaussian curvature and a corresponding Levi-Civita connection form. We prove the existence of universal connection forms giving rise to second fundamental forms compatible with the metric. The main tool to prove our geometrical results is the Kato's semi-group approach, which is used to establish well-posedness of solutions of the Cauchy problem involved and ensure $C^1$ regularity for the first fundamental form and the Levi-Civita connection form.
Nilay Duruk Mutlubas, Igor Leite Freire
2023-09-12T14:55:59Z
http://arxiv.org/abs/2309.06291v1
# Existence and uniqueness of periodic pseudospherical surfaces emanating from Cauchy problems ###### Abstract **Abstract:** We study implications and consequences of well-posed solutions of Cauchy problems of a Novikov equation describing pseudospherical surfaces. We show that if the co-frame of dual one-forms satisfies certain conditions for a given periodic initial datum, then there exists exactly two families of periodic one-forms satisfying the structural equations for a surface. Each pair then defines a metric of constant Gaussian curvature and a corresponding Levi-Civita connection form. We prove the existence of universal connection forms giving rise to second fundamental forms compatible with the metric. The main tool to prove our geometrical results is the Kato's semi-group approach, which is used to establish well-posedness of solutions of the Cauchy problem involved and ensure \(C^{1}\) regularity for the first fundamental form and the Levi-Civita connection form. **2020 AMS Mathematics Classification numbers**: 35B10, 53A05, 58J60, 35A30. **Keywords:** Equations describing pseudospherical surfaces; First fundamental form; Second fundamental form; Cauchy problems; Kato's approach ###### Contents * 1 Introduction * 1.1 Novelty and challenges of the manuscript * 1.2 Outline of the manuscript * 2 Notation, notions and preliminaries * 2.1 Notation * 2.2 Structure equations and pseudospherical surfaces * 2.3 Equations describing pseudospherical surfaces * 2.4 Sobolev spaces and a few of functional analysis * 2.5 Semigroup Approach * 3 Proof of theorem 1.2 * 4 Proof of theorem 1.3 * 5 Proof of theorem 1.1 * 6 Concluding remarks Introduction In [34] Sasaki made a remarkable observation, showing that solutions of integrable equations solved by the AKNS \(2\times 2\) method [1] give rise to metrics of pseudospherical surfaces with Gaussian curvature \(\mathcal{K}=-1\), see [34, section 2] and [6, section 1] for further details. Later on, Chern and Tenenblat, in their seminal paper [6], introduced the notion of equations describing pseudospherical surfaces (PSS equation) and gave a systematic way for finding them. The works by Sasaki [34] and Chern and Tenenblat [6] showed a deep connection between integrability and differential geometry of pseudospherical surfaces, unsurprisingly, leading to a new notion of integrability, see [27, Definition 2] and [29, page 245]. Notwithstanding its relevance in terms of integrability, the work by Chern and Tenenblat made an in-depth investigation on certain very peculiar equations having the following property: with some exceptions (that we will discuss later), their solutions give rise to metrics with constant Gaussian curvature. This fact _per se_ has been known for a long time for the Sine-Gordon equation, see [32, Section 1], but its systematic study, implications and applications to other equations, potential links with integrable systems and construction of conserved quantities made [6] a paramount work. The metric and the Gaussian curvature are intrinsic properties of a surface, but alas insufficient to completely describe it. To this end, we need further information provided by its second fundamental form. While the first fundamental form (metric) can be though as the _way a two-dimensional being walks_ on the surface (an intrinsic aspect), the second fundamental form tells us _how the surface behaves from, or looks like to, an observer located outside it_. 
Given the importance of the second fundamental form and the relevance of the observations and ideas introduced in [34] and [6], respectively, it is somewhat surprising that it took nearly three decades from [6] until the first works [16, 17, 18] concerning second fundamental forms of the surfaces defined by the solutions of PSS equations. Although the work by Chern and Tenenblat was born in the context of integrability of differential equations, and most of the follow-up works, not to say all, were concerned with these connections, see [27, 28, 29, 30, 31, 5], over time the integrability aspects were put aside and the research carried out has been more focused on geometric aspects and classification of equations describing PSS, see [3, 4, 9, 16, 17, 18, 38] and references therein. A considerable number of relevant PSS equations can be seen as dynamical systems in certain Banach spaces, and from the point of view of analysis of PDEs, qualitative aspects of their solutions are obtained from Cauchy problems, meaning that not only the equation is relevant, but also a condition satisfied by a given solution of the equation at a given time, very often \(t=0\) (initial condition or datum). Usually, the regularity of the initial datum determines that of the corresponding solution of the equation. From a geometric perspective, solutions emanating from Cauchy problems involving an equation for an unknown \(u=u(x,t)\) can be seen as follows: given a certain curve \(x\mapsto(x,0,u_{0}(x))\), can we find a solution \(u\) for the equation such that the given curve belongs to the graph of \(u\)? Moreover, what does the regularity of the curve say about the graph of \(u\)? Is \(u\) the only solution of the equation whose graph contains the given curve? Despite being a topic mostly concerned with analysis of PDEs, the paragraph above shows that the problem of existence and uniqueness of solutions (well-posedness) of PDEs makes sense in the context of PSS equations. Surprisingly, this topic seems to have been off the agenda of the literature on PSS equations. The purpose of our paper is to shed light on it. The main motivation for us to undertake the research reported in the present work comes from recent results reported in [33] concerned with the equation \[u_{t}-u_{txx}=\partial_{x}(2-\partial_{x})(1+\partial_{x})u^{2}, \tag{1.1}\] which was discovered in [24] and has recently been proved to be geometrically integrable [10], meaning that its solutions describe a non-trivial family of pseudospherical surfaces. Equation (1.1) was studied in [21, 22, 23, 12] from the point of view of qualitative analysis, such as existence and uniqueness of solutions. More recently, in [33, Theorem 5.1] results from [21, 23] were combined with [10] to prove the existence of \(C^{\omega}\) (metrics for) pseudospherical surfaces arising from the solutions of (1.1). The aforementioned result proved in [33], despite being established for \(C^{\omega}\) solutions, strongly indicates the possibility of relating Cauchy problems and pseudospherical surfaces. Actually, it made such a connection, but by considering solutions emanating from an initial datum with \(C^{\omega}\) regularity. The question is: Could we consider the same problem replacing a \(C^{\omega}\) initial datum by one with lower regularity? Can we consider a periodic initial datum?
In line with the comments above, the vast majority of works in the field of PSS equations considers explicit or implicitly \(C^{\infty}\) solutions of the PSS equations, which technically avoid problems regarding regularity (that is, how much smooth the object is) and lead to \(C^{\infty}\) metrics. A simple question then arises: What may happen if we consider solutions with regularity other than \(C^{\infty}\)? The answer to the questions above is given in our first result. **Theorem 1.1**.: _Let \(u_{0}\in H^{4}(\mathbb{S})\) be a non-trivial and non-constant initial datum, with \(u-u_{0}^{\prime\prime}>0\), and consider the Cauchy problem_ \[\left\{\begin{array}{l}u_{t}-u_{txx}=\partial_{x}(2-\partial_{x})(1+ \partial_{x})u^{2},\qquad x\in\mathbb{R},\qquad t>0,\\ \\ u(x,0)=u_{0}(x),\qquad x\in\mathbb{R},\\ \\ u(x,t)=u(x+1,t),\qquad x\in\mathbb{R},\qquad t>0.\end{array}\right. \tag{1.2}\] _Then there exists triads of \(C^{1}\) one-forms \(\omega_{1},\,\omega_{2},\,\omega_{3}\), with_ \[\omega_{i}=f_{i1}dx+f_{i2}dt,\quad 1\leq i\leq 3, \tag{1.3}\] \[f_{p1}=\mu_{p}f_{11}+\eta_{p},\quad 1\leq p\leq 2, \tag{1.4}\] _where \(\mu_{p},\,\eta_{p}\in\mathbb{R}\), such that the forms (1.3) are defined on \(U=\mathbb{R}\times(0,\infty)\), periodic with respect to \(x\), and define a PSS whenever \(\nabla u\neq(0,0)\), with \(\omega_{3}\) being the Levi-Civita connection of the metric determined by \(\omega_{1}\) and \(\omega_{2}\)._ _Moreover, fixed a pair \(\{\omega_{1},\,\omega_{2}\}\) and \(p\in U\), there exists connection forms \(\omega_{13}=a\omega_{1}+b\omega_{2}\), \(\omega_{23}=b\omega_{1}+c\omega_{2}\), where \(a,\,b,\,c\) are \(C^{\infty}\) functions defined on an open neighborhood \(V\subseteq U\) of \(p\), such that \(\{\omega_{1},\omega_{2},\omega_{13},\,\omega_{23}\}\) defines a PSS of Gaussian curvature \(\mathcal{K}=-1\)._ In section 2 we shall present all pertinent definitions and notions, but for now it suffices to say that a function belonging to \(H^{4}(\mathbb{S})\) is a real valued periodic function, with period \(1\), of class \(C^{3}\). Our theorem 1.1 can be seen as an existence and uniqueness result for PSS surfaces. In fact, it says that from solutions of equation (1.1) whose graphs contain the regular curve \(x\mapsto(x,0,u_{0}(x))\), with \(u_{0}\in H^{4}(\mathbb{S})\), we can obtain an open set \(V\subseteq\mathbb{R}^{2}\) in which we have two possible choices to define a first fundamental form for a PSS surface with Gaussian curvature \(\mathcal{K}=-1\). Moreover, we can locally define connection forms on each point of \(V\). This fact, jointly with Bonnet theorem, tells us that we can locally define a PSS surface embedded in \(\mathbb{R}^{3}\). A key point to understand and prove theorem 1.1 is determining whether the problem (1.2) is well-posed. To this end, recognising the presence of the Helmholtz operator \(\Lambda^{2}=1-\partial_{x}^{2}\) in (1.1), we can rewrite the problem (1.2) in an alternative form, given by \[\left\{\begin{array}{l}u_{t}-2uu_{x}=\partial_{x}\Lambda^{-2}(u^{2}+(u^{2}) _{x}),\qquad x\in\mathbb{R},\qquad t>0,\\ \\ u(x,0)=u_{0}(x),\qquad x\in\mathbb{R},\\ \\ u(x,t)=u(x+1,t),\qquad x\in\mathbb{R},\qquad t>0.\end{array}\right. \tag{1.5}\] **Theorem 1.2**.: _Let \(u_{0}\in H^{s}(\mathbb{S})\), \(s>3/2\) be a given initial datum. Then there exists a maximal time of existence \(T>0\), depending on \(u_{0}\), such that there is a unique solution \(u\) to (1.5) satisfying \(u\in C^{0}(H^{s}(\mathbb{S}),[0,T))\cap C^{1}(H^{s-1}(\mathbb{S}),[0,T))\). 
Moreover, the map \(u_{0}\in H^{s}(\mathbb{S})\to u\), is continuous from \(H^{s}(\mathbb{S})\) to \(C^{0}(H^{s}(\mathbb{S}),[0,T))\cap C^{1}(H^{s-1}(\mathbb{S}),[0,T))\) and \(T\) is independent of \(s\)._ The inverse of the Helmholtz operator, denoted by \(\Lambda^{-2}\) and acting on a function \(f\), is defined by the convolution \(g*f\), where \[g(x)=\frac{\cosh{(x-\lfloor x\rfloor-1/2)}}{2\sinh{(1/2)}} \tag{1.6}\] and \(\lfloor\cdot\rfloor\) denotes the greatest integer function. Some previous results in the literature had already proved well-posedness results concerning periodic solutions of the problem (1.5), see [23], but it was either shown that \(u\in C^{0}([0,T),H^{s}(\mathbb{S}))\) or \(u\) is \(C^{\omega}\) in both variables on a certain domain, see [23, Theorem 1.1] and [23, Theorem 1.4], respectively. Although these results show the existence and uniqueness of solutions for a very large class of functions, they are unsuitable for our purposes because we need solutions with \(C^{1}\) regularity with respect to \(t\). The importance of our theorem 1.2 comes from just the fact that it ensures we have \(C^{1}\) solutions in both variables. Actually, this is a consequence of the Sobolev Lemma. In particular, it tells us that the solutions granted by theorem 1.2 are strong solutions for the (non-local) first order PDE in (1.5). A natural question then arises: is a strong solution of the equation in (1.5) also a strong solution of (1.1)? In general the answer is no! However, requiring enough regularity of the initial datum we can find solutions for the Cauchy problem (1.5) that are also strong solutions for (1.1), and therefore, simultaneously strong solutions for both formulations of the equation. In fact, whenever we consider an initial datum in \(H^{4}(\mathbb{S})\), the corresponding solution provided by theorem 1.2 not only is a \(C^{1}\) solution (in both variables), but the Sobolev Lemma also implies that \(x\mapsto u(x,t)\), \(t\) fixed, is \(C^{3}\), meaning that the solution emanating from (1.5) is a strong, or classical, solution for (1.1), which makes sense to be considered in the study of PSS and differential equations. Although the regularity of the initial datum is enough to make the corresponding solution of (1.5) a strong solution of (1.1), it is insufficient to guarantee that the solution is global, in the sense that it is defined for every \(t>0\). We can have global solutions requiring little more from the initial datum. **Theorem 1.3**.: _If \(u_{0}\in H^{4}(\mathbb{S})\) is a non-trivial initial datum, and \(u_{0}(x)-u_{0}^{\prime\prime}(x)>0\), \(x\in\mathbb{R}\), then the solution of the problem (1.5) exists for any \(t>0\). Moreover, \(u\in C^{1}(\mathbb{R}\times(0,\infty))\) and \(x\mapsto u(x,t)\) is a \(C^{3}\) periodic function, for each fixed \(t>0\)._ It is important to note that in view of the Sobolev Lemma, \(H^{4}(\mathbb{S})\) is continuously and densely embedded in \(H^{s}(\mathbb{S})\), for \(s\in(3/2,4)\). Therefore, both theorems 1.2 and 1.3 tell us that the problem (1.2) has only one solution \(u\). ### Novelty and challenges of the manuscript We study the problem of PSS determined by the solutions of a given equation from the perspective of well-posedness of Cauchy problems, which as far as we know, has not been considered yet. Let us highlight the relevance of our results by discussing the following problem: suppose we know a curve from the graph of a (unknown) solution of a PSS equation. Can we precisely describe the corresponding PSS? 
Let us exemplify by considering the function \(u_{c}(x,t):=e^{x-ct}\). Any member of the family \(\mathcal{U}=\{u_{c},\,c\in\mathbb{R}\}\) is a solution of (1.1) defined for \((x,t)\in\mathbb{R}\times(0,\infty)\), see [33, page 5]. For any \(u\in\mathcal{U}\), let \(\mathrm{Gr}(u)=\{(x,t,u(x,t)),\ x\in\mathbb{R},\ t>0\}\). Consider the curve \(\Gamma\), given by \(x\mapsto(x,0,e^{x})\) and let \(\partial\mathrm{Gr}(u)\) denote the boundary of a set \(\mathrm{Gr}(u)\). Then it is easy to see that \(\Gamma\subseteq\partial\mathrm{Gr}(u)\), for any \(u\in\mathcal{U}\). In particular, we have \[\Gamma\subseteq\bigcap_{u\in\mathcal{U}}\partial\mathrm{Gr}(u).\] If we allow \(t=0\) in the domain of \(u_{c}\), then the curve \(\Gamma\) belongs to the corresponding graph, but for our purposes it is enough to consider it lying in the boundary, the latter being disjoint from the graph. On the other hand, (1.1) is a PSS equation (see [10, Theorem 1]), and for any member \(u_{c}\) of \(\mathcal{U}\) we can construct a PSS \(U_{c}\) in an intrinsic way (note that these solutions satisfy all required conditions for the existence of a PSS, see the comments after Theorem 1 in [10]). Due to the fact that the curve \(\Gamma\) belongs to the boundary of any graph of the solutions \(u_{c}\), we cannot determine any specific PSS surface only knowing \(\Gamma\). Our theorem 1.1 gives a rather different answer to the same question. In geometric terms, it says that given a curve \(\Gamma\) of the form \(x\mapsto(x,0,u_{0}(x))\), as long as \(u_{0}\in H^{4}(\mathbb{S})\), we can precisely and intrinsically describe a PSS among all infinite surfaces emanating from all solutions of the PDE (1.1). To address this problem we make use of techniques of existence and uniqueness of solutions for PDEs that can be seen as dynamical systems in certain Banach spaces. In view of this approach, we deal with solutions of the equation that are less regular than those usually considered in the literature of the PSS equations. One of the difficulties to be overcome is concerned with the regularity of the one-forms \(\omega_{1}\) and \(\omega_{2}\) defining the metric of the corresponding PSS. Most of the books in differential geometry require \(C^{\infty}\) forms, although some of them require at least \(C^{2}\) regularity. As we will better discuss in the next section, for the classical theory of curves and surfaces, we can have PSS surfaces from \(C^{1}\) forms satisfying the structure equations for a surface. Last but not least, one of the challenges of this paper is that its main result is geometric, but the tools for proving it comes from modern approaches to prove qualitative aspects of solutions of Cauchy problems. For this reason, we tried our best to make clear and explain the technical aspects of each area, so that the readers can have a better reading and appreciation of our work. ### Outline of the manuscript Since this is a paper focusing on Analysis and Geometry, in the next section we provide an overview about PSS and functional analysis. We also fix the notation, present essential concepts and revisit Kato's semi-group approach, which is the main tool for proving theorems 1.2 and 1.3, whose demonstrations are given in sections 3 and 4, respectively. Theorem 1.1 is proved in section 5, whereas our conclusions are given in section 6. ## 2 Notation, notions and preliminaries In this section we introduce and fix the notation used throughout the manuscript. 
Given its plural and diverse aspects, we also present basic facts and concepts from differential geometry of surfaces and functional analysis, which are the main pillars of the work. Most of the geometric content can be better explored in [2, Chapter 5], [7, Chapter 4] and [37, Chapter 2], whereas our main references for functional analysis are [13, Chapter 3] and [36, Chapter 4]. ### Notation Given a function \(u=u(x,t)\), by \(u(x,\cdot)\) we mean the function \(t\mapsto u(x,t)\), for fixed \(x\), whereas \(u(\cdot,t)\) denotes the function \(x\mapsto u(x,t)\), for fixed \(t\). Let \(I,J\) two open, non-empty subsets of \(\mathbb{R}\). We say that \(u\in C^{0}(I\times J)\) if \(u=u(x,t)\) is continuous with respect to both variables \((x,t)\in I\times J\). Partial derivative of \(u\) with respect to its first argument will be denoted by \(u_{x}\) or \(\partial_{x}u\), whereas \(u_{t}\) or \(\partial_{t}u\) will denote partial derivative with respect to the second argument. Higher order derivatives can be considered using the standard conventions. For a positive integer \(k\), we say that \(u\in C^{k}(I\times J)\) if all partial derivatives of \(u\) up to order \(k\) (including the mixed ones) are continuous. Given a positive integer \(n\), we denote by \(u_{(n)}\) the set of ordered \(n-th\) derivatives of \(u\). Also, we say that \(u\) is \(C^{k}\) whenever all of its partial derivatives up to order \(k\) are continuous on the domain of \(u\). By \(C^{3,1}(I\times J)\) we mean the set of function \(u:I\times J\to\mathbb{R}\) such that \(u\), \(u_{x}\), \(u_{t}\), \(u_{xx}\), \(u_{xt}\), \(u_{xxx}\) and \(u_{xxt}\) belong to \(C^{0}(I\times J)\). Let \(X\) be a Banach space of real valued functions and \(I\subseteq\mathbb{R}\). The set \(C^{0}(I,X)\) denotes collection of continuous functions such that \(u(t,\cdot)\in X\). More generally, given a positive integer \(n\), we say that \(u\in C^{n}(I,X)\) if \(\partial_{t}^{k}u(t,\cdot)\in C^{0}(I,X)\), \(0\leq k\leq n\). ### Structure equations and pseudospherical surfaces Let \(\langle\cdot,\cdot\rangle\) be the usual inner product in \(\mathbb{R}^{3}\) and denote the pair \((\mathbb{R}^{3},\langle\cdot,\cdot\rangle)\) by \(\mathbb{E}^{3}\), that is, the Euclidean space. We recall that a surface is a two dimensional manifold in \(\mathbb{E}^{3}\), which we generally denote by \(\mathcal{M}\). Given a point \(p\in\mathcal{M}\), the tangent and the co-tangent spaces to \(\mathcal{M}\) at \(p\) are denoted by \(T_{p}\mathcal{M}\) and \(T_{p}^{*}\mathcal{M}\), respectively. Let \(\{e_{1},e_{2}\}\) be vector (sufficiently differentiable) valued functions on \(\mathcal{M}\), such that at each point \(p\in\mathcal{M}\), we have: \(\{e_{1},e_{2}\}\) is orthonormal with respect to inner product \(\langle\cdot,\cdot\rangle\); \(\mathrm{Span}\{e_{1},e_{2}\}=T_{p}\mathcal{M}\); \(\{\omega_{1},\omega_{2}\}\) is the dual bases of \(\{e_{1},e_{2}\}\). In particular, \(\mathrm{Span}\{\omega_{1},\omega_{2}\}=T_{p}^{*}\mathcal{M}\). Let \(\{\omega_{1},\omega_{2}\}\) be the corresponding dual of the basis \(\{e_{1},e_{2}\}\). Since \(\langle e_{i},e_{j}\rangle\) is either \(0\) or \(1\), depending on whether \(i=j\) or not, we have \[\langle de_{i},e_{j}\rangle+\langle e_{i},de_{j}\rangle=0, \tag{2.1}\] where \(d(\cdot)\) denotes the usual differential, and we can then define one-forms \[\omega_{ij}=\langle de_{i},e_{j}\rangle, \tag{2.2}\] called _connection forms_, and from (2.1) we see that \(\omega_{ij}=-\omega_{ji}\). 
A one-form \(\omega\) can be written as \(\omega=f(x,t)dx+g(x,t)dt\), where \(f\) and \(g\) are certain functions, called coefficients of the form \(\omega\). We say that \(\omega\) is of class \(C^{k}\) if and only if both \(f\) and \(g\) are \(C^{k}\) functions. Let \(\otimes\) and \(\wedge\) be the tensor and wedge products (for further details, see [7, page 39], respectively. The dual forms \(\omega_{1}\) and \(\omega_{2}\), jointly with the connection forms, satisfy the following relations: \[d\omega_{1}=\omega_{2}\wedge\omega_{21},\quad d\omega_{2}=\omega_{1}\wedge \omega_{12}, \tag{2.3}\] \[\omega_{1}\wedge\omega_{13}+\omega_{2}\wedge\omega_{23}=0, \tag{2.4}\] and \[d\omega_{12}=\omega_{13}\wedge\omega_{32},\quad d\omega_{13}=\omega_{12} \wedge\omega_{23},\quad d\omega_{23}=\omega_{21}\wedge\omega_{13}. \tag{2.5}\] It is important to note that the connection form \(\omega_{12}\) is completely determined by the forms \(\omega_{1}\) and \(\omega_{2}\), and is known as the Levi-Civita (connection form). For this reason, it is common write \(\omega_{3}:=\omega_{12}\). Moreover, we can define the Gaussian curvature as being the function \(\mathcal{K}\) satisfying the relation \[d\omega_{3}=-\mathcal{K}\,\omega_{1}\wedge\omega_{2}. \tag{2.6}\] Equation (2.6) is called the _Gauss equation_, and it reflects the fact that the Gaussian curvature is intrinsically determined by the surface, whereas we can rewrite equations (2.3) in terms of form \(\omega_{3}\), which reads \[d\omega_{1}=\omega_{3}\wedge\omega_{2},\quad d\omega_{2}=\omega_{1}\wedge \omega_{3}. \tag{2.7}\] Equations (2.6)-(2.7) are called _structure equations_ of the surface \(\mathcal{M}\). **Definition 2.1**.: _Let \(\omega_{1}\), \(\omega_{2}\), \(\omega_{13}\), and \(\omega_{23}\) be given one-forms on a surface \(\mathcal{M}\) in \(\mathbb{E}^{3}\), such that \(\{\omega_{1},\omega_{2}\}\) is LI, and \(p\in\mathcal{M}\). The first and second fundamental forms of \(\mathcal{M}\) are defined, on each \(T_{p}\mathcal{M}\), by \(I(v)=\omega_{1}(v)^{2}+\omega_{2}(v)^{2}\) and \(II(v)=\omega_{13}(v)\omega_{1}(v)+\omega_{23}(v)\omega_{2}(v)\), for each \(v\in T_{p}\mathcal{M}\)._ Commonly one writes the first and the second fundamental forms as \(I=\omega_{1}^{2}+\omega_{2}^{2}\) and \(II=\omega_{13}\omega_{1}+\omega_{23}\omega_{2}\), with the convection \(\alpha\beta=\alpha\otimes\beta\) and \(\alpha^{2}=\alpha\alpha\), for any (one-)forms \(\alpha\) and \(\beta\). We observe that everything done so far refers to a given surface \(\mathcal{M}\) in the euclidean space \(\mathbb{E}^{3}\). A quite useful result for our purposes is **Lemma 2.1**.: _Let \(\omega_{1}\), \(\omega_{2}\), \(\omega_{12}\), \(\omega_{13}\), and \(\omega_{23}\) be \(C^{1}\) one-forms. Then they determine a local surface up to a euclidean motion if and only if \(\omega_{1}\wedge\omega_{2}\neq 0\) and equations (2.3)-(2.5) are satisfied._ Lemma 2.1 (see [11, Theorem 10-19, page 232] and also [11, Theorem 10-18, page 232] for its proof) is a _sine qua non_ result from classical differential geometry of surfaces that enabled us to consider solutions \(u\in C^{3,1}(\mathbb{R}\times(0,\infty))\) and use them to prove our theorem 1.1. **Remark 2.1**.: _The relevance of lemma 2.1 for us is the following: it states that if a set of given one-forms in \(\mathbb{R}^{3}\) satisfies its conditions, then they define, at least locally, a surface \(\mathcal{M}\) in the euclidean space. 
Such a result is sometimes fundamental theorem of surface theory, see [2, theorem 11, page 143], or also called Bonnet theorem, see [7, theorem 4.39, page 127] or [19, theorem 4.24, page 153]._ Surfaces for which their Gaussian curvatures are constant and negative are called _pseudospherical_ surfaces [7, page 9]. ### Equations describing pseudospherical surfaces If we take \(\mathcal{K}=-1\) in the structure equations (2.6)-(2.7), we then have \[d\omega_{1}=\omega_{3}\wedge\omega_{2},\quad d\omega_{2}=\omega_{1}\wedge \omega_{3},\quad d\omega_{3}=\omega_{1}\wedge\omega_{2}. \tag{2.8}\] Sasaki's observation [34] can be summed up as follows: if we denote \[\omega_{i}=f_{i1}dx+f_{i2}dt,\quad 1\leq i\leq 3, \tag{2.9}\] from the AKNS method [1] we can determine functions \(f_{ij}\) for which the corresponding triad of one-forms satisfies (2.7) and, as a consequence, they determine a PSS in an intrinsic way. Let \((x,t)\) be independent variables. A differential equation for a real valued function \(u=u(x,t)\) of order \(n\) is generically denoted by \[\mathcal{E}(x,t,u,u_{(1)},\cdots,u_{(n)})=0. \tag{2.10}\] **Definition 2.2**.: _A differential equation (2.10) is said to describe a pseudospherical surface, or it is said to be of pseudospherical type, if it is a necessary and sufficient condition for the existence of differentiable functions \(f_{ij}\), \(1\leq i,j\leq 3\), such that the forms (2.9) satisfy the structure equations of a pseudospherical surface (2.8)._ In practical terms, given a triad of one-forms \(\omega_{1}\), \(\omega_{2}\) and \(\omega_{3}\), and an equation (2.10), we can check if they describe a PSS surface in the following way: let us define a matrix of one-forms \(\Omega\) by \[\Omega=\frac{1}{2}\begin{pmatrix}\omega_{2}&\omega_{1}-\omega_{3}\\ \omega_{1}-\omega_{3}&-\omega_{2}\end{pmatrix}=:(\Omega_{ij}),\quad(\Omega \wedge\Omega)_{ij}:=(\sum_{k=1}^{2}\Omega_{ik}\wedge\Omega_{kj}), \tag{2.11}\] \[\Sigma:=d\Omega-\Omega\wedge\Omega,\quad d\Omega:=(d\Omega_{ij}). \tag{2.12}\] If, when restricted to the manifold determined by the solutions of (2.10), the matrix \(\Sigma\) vanishes, we say that (2.10) is a PSS equation, and the triad \(\{\omega_{1},\,\omega_{2},\,\omega_{3}\}\) satisfies the structure equations of a PSS equation with Gaussian curvature \(\mathcal{K}=-1\). **Example 2.1**.: _Let \(m_{1}\in\{-2,1\}\), \(\mu\in\mathbb{R}\) and consider the triad of one-forms_ \[\begin{array}{rcl}\omega_{1}&=&\Big{(}u-u_{xx}\Big{)}dx+\Big{(}2u(u-u_{xx})+ \psi\Big{)}dt,\\ \omega_{2}&=&\Big{(}\mu(u-u_{xx})\pm m_{1}\sqrt{1+\mu^{2}}\Big{)}dx+\mu\big{(}2 u(u-u_{xx})+\psi\big{)}dt,\\ \omega_{3}&=&\Big{(}\pm\sqrt{1+\mu^{2}}(u-u_{xx})+m_{1}\mu\Big{)}dx\\ &&\pm\Big{(}\sqrt{1+\mu^{2}}\big{(}2u(u-u_{xx})+\psi\big{)}\Big{)}dt,\end{array} \tag{2.13}\] _where_ \[\psi:=\frac{4}{m_{1}}uu_{x}-2u_{x}^{2}-2u^{2}, \tag{2.14}\] _and_ \[\mathcal{E}=u_{t}-u_{txx}-4uu_{x}-2u_{x}^{2}-2uu_{xx}+6u_{x}u_{xx}+2uu_{xxx}. \tag{2.15}\] _Note that \(\mathcal{E}=0\) is nothing but (1.1). 
It is straightforward, but lengthy, to confirm that (see [38, Theorem 4.5])_ \[\begin{array}{rcl}d\omega_{1}-\omega_{3}\wedge\omega_{2}&=&\mathcal{E}dx \wedge dt,\quad d\omega_{2}-\omega_{1}\wedge\omega_{3}=\mathcal{E}dx\wedge dt,\\ d\omega_{3}-\omega_{1}\wedge\omega_{2}&=&\pm\sqrt{1+\mu^{2}}\mathcal{E}dx \wedge dt.\end{array} \tag{2.16}\] _Substituting (2.13)-(2.14) into (2.11), after reckoning we conclude that (2.12) is given by_ \[\Sigma=\frac{\mathcal{E}}{2}\begin{pmatrix}1&1\pm\sqrt{1+\mu^{2}}&1\pm\sqrt{1 +\mu^{2}}\\ 1\pm\sqrt{1+\mu^{2}}&-1\end{pmatrix}. \tag{2.17}\] _Therefore, \(\Sigma=0\) if and only if \(\mathcal{E}=0\), but \(\mathcal{E}=0\) if and only if the forms (2.13) satisfies the structure equations (2.8) for a PSS with \(\mathcal{K}=-1\) in view of (2.16). In particular, \(u\) must be a solution of (1.1)._ It is important to highlight that a _sine qua non_ condition for the existence of a PSS defined on an open set \(\Omega\) contained in the domain of a solution \(u\) is that \(\omega_{1}\wedge\omega_{2}\neq 0\) whenever \((x,t)\in\Omega\), otherwise the Gaussian curvature cannot be inferred from the Gauss equation (2.6). **Definition 2.3**.: _Suppose that (2.10) is a PSS equation with corresponding one forms satisfying (2.8). A solution \(u\) of (2.10) for which \(\omega_{1}\wedge\omega_{2}\neq 0\) is called generic, whereas those satisfying \(\omega_{1}\wedge\omega_{2}=0\) are said to be non-generic._ **Example 2.2**.: _From the one-forms \(\omega_{1}\) and \(\omega_{2}\) in (2.13), we obtain_ \[\omega_{1}\wedge\omega_{2}=\pm\sqrt{1+\mu^{2}}\Big{(}2m_{1}uu_{xx}-4uu_{x}+2m_ {1}u_{x}^{2}\Big{)}dx\wedge dt. \tag{2.18}\] _The condition \(\omega_{1}\wedge\omega_{2}=0\) on an open set \(\Omega\) contained in the domain of \(u\) is satisfied in the following circumstances:_ * _For_ \(m_{1}=-2\)_, then_ \(\phi(x,t)=\pm\sqrt{ae^{-x}+b}\)_;_ * _For_ \(m_{1}=1\)_, then_ \(\phi(x,t)=\pm\sqrt{ae^{2x}+b}\) _or_ \(\phi(x,t)=f(t)e^{x}\)_._ _Above \(a\) and \(b\) are real constants, whereas \(f\in C^{1}(\mathbb{R})\)._ _Let \(u\) be a solution of (1.1), satisfying the condition \(u(x+1,t)=u(x,t)\). Then it is non-generic on an open set \(\Omega\) if and only if it is constant._ ### Sobolev spaces and a few of functional analysis Let \(\mathcal{P}[0,1]\) be the collection of all periodic functions \(f:\mathbb{R}\to\mathbb{C}\) with period \(1\). Given a positive integer \(k\), we denote by \(f^{(n)}\) its \(n-\)th order derivative, while the set of functions \(f\) for which \(f^{(n)}\in C^{0}(\mathbb{R})\), \(0\leq n\leq k\), is denoted by \(C^{k}(\mathbb{R})\). If \(k\) is a non-negative integer, we define \(C^{k}_{\mathrm{per}}[0,1]=C^{k}(\mathbb{R})\cap\mathcal{P}[0,1]\). For the very particular case \(k=\infty\), we write \(\mathcal{P}\) instead of \(C^{\infty}_{\mathrm{per}}[0,1]\), with topological dual denoted by \(\mathcal{P}^{\prime}\). Recall that a member of \(\mathcal{P}^{\prime}\) is a continuous linear functional \(f:\mathcal{P}\to\mathbb{C}\). 
The Fourier transform of \(f\in\mathcal{P}^{\prime}\) is defined by \[\hat{f}(k)=\frac{1}{2\pi}\int_{0}^{1}f(x)e^{-ikx}dx.\] Let \(\ell^{2}(\mathbb{Z})\) be the collection of sequences \(\alpha=(\alpha_{n})_{n\in\mathbb{Z}}\) such that \[\sum_{n\in\mathbb{Z}}|\alpha_{n}|^{2}<\infty.\] Given \(s\in\mathbb{R}\), we denote by \(\ell^{2}_{s}(\mathbb{Z})\) as the collection of \(\alpha\in\ell^{2}(\mathbb{Z})\) such that \[\sum_{k=-\infty}^{\infty}(1+|k|^{2})^{s}|\alpha_{k}|^{2}<\infty,\] which has a structure of a Banach space when endowed with norm \[\|\alpha\|_{\ell^{2}_{s}}=\sqrt{\sum_{k=-\infty}^{\infty}(1+|k|^{2})|\alpha_{ k}|^{2}}.\] The periodic Sobolev space of order \(s\) is \(H^{s}_{\mathrm{per}}=\{f\in\mathcal{P}^{\prime};\ (\hat{f}(k))_{k\in\mathbb{Z}}\in \ell^{2}_{s}(\mathbb{Z})\}\), and the sesquilinear form \[\big{(}f|g\big{)}_{s}=\sum_{k=-\infty}^{\infty}(1+|k|^{2})^{2}\hat{f}(k) \overline{\hat{g}(k)}\] turns it into a Hilbert space. In particular, note that \(H^{0}_{\mathrm{per}}=L^{2}[0,1]\). Let us now define the following equivalence relation: given \(a,b\in\mathbb{R}\), we say that \(a\sim b\) if \(b=a+k\), for some integer \(k\). The quotient space \(\mathbb{R}/\sim\) can be identified with the set \([0,1)\), which we shall denote by \(\mathbb{S}\). For this reason, henceforth we define \(H^{s}(\mathbb{S}):=H_{\mathrm{per}}[0,1]\). The norm of \(H^{s}(\mathbb{S})\) will be denoted by \(\|\cdot\|_{s}\), whereas \(\|\cdot\|_{\infty}\) is reserved for the norm in \(L^{\infty}\). Given two Banach spaces \(X\) and \(Y\), we write \(X\hookrightarrow Y\) to mean that \(X\) is continuously and densely embedded in \(Y\). **Lemma 2.2**.: **([13, Theorem 3.193, page 201])** _Let \(s,r\in\mathbb{R}\), with \(s\geq r\). Then \(H^{s}(\mathbb{S})\hookrightarrow H^{r}(\mathbb{S})\) and \(\|f\|_{r}\leq\|f\|_{s},\) for any \(f\in H^{s}(\mathbb{S})\). In particular, \(H^{s}(\mathbb{S})\hookrightarrow L^{2}[0,1]\) for any \(s\geq 0\)._ The next result is known as Sobolev Lemma, or also as Sobolev Embedding Theorem. **Lemma 2.3**.: **([13, Theorem 3.195, page 204], [36, Proposition 3.3, page 329])** _If \(s>1/2\), then \(H^{s}(\mathbb{S})\hookrightarrow C^{0}_{per}[0,1]\) and \(\|f\|_{\infty}\leq c\|f\|_{s}\), for some constant \(c\) depending only on \(s\), where \(f\in H^{s}(\mathbb{S})\). More generally, if \(s>1/2+m\), where \(m\) is a positive integer, then \(H^{s}(\mathbb{S})\hookrightarrow C^{m}_{per}[0,1]\)._ We conclude our revision on Sobolev spaces by recalling the algebra property. **Lemma 2.4**.: **([13, Theorem 3.200, page 207])** _If \(s>1/2\), for any \(f,g\in H^{s}(\mathbb{S})\), we have \(fg\in H^{s}(\mathbb{S})\) and their norm satisfies the estimate \(\|fg\|_{s}\leq c\|f\|_{s}\|g\|_{s},\) for some constant \(c>0\) depending only on \(s\)._ ### Semigroup Approach Let us revisit basic aspects of Kato's theory [14, 15, 25], also known as semigroup approach, which is our main tool for proving well-posedness of solutions with the regularity we need to make them consistent with the geometric nature of our problem. Let \(X\) be a Hilbert space, and let \(u(\cdot,t)\in X\) such that \[u_{t}+A(u)u=f(u),\hskip 14.226378ptt\geq 0,\hskip 14.226378ptu(0)=u_{0}. \tag{2.19}\] Let \(Y\hookrightarrow X\) and \(S:Y\to X\) be a topological isomorphism. 
Assume that * For any given \(r>0\) it holds that for all \(u\in\mathrm{B}_{r}(0)\subseteq Y\) (the ball around the origin in \(Y\) with radius \(r\)), the linear operator \(A(u)\colon X\to X\) generates a strongly continuous semigroup \(T_{u}(t)\) in \(X\) which satisfies \(\|T_{u}(t)\|_{\mathcal{L}(X)}\leq\mathrm{e}^{\omega_{r}t}\), for all \(t\in[0,\infty)\), for a uniform constant \(\omega_{r}>0\); * \(A\) maps \(Y\) into \(\mathcal{L}(Y,X)\), more precisely the domain \(D(A(u))\) contains \(Y\) and the restriction \(A(u)|_{Y}\) belongs to \(\mathcal{L}(Y,X)\) for any \(u\in Y\). Furthermore \(A\) is Lipschitz continuous in the sense that for all \(r>0\) there exists a constant \(C_{1}\) which only depends on \(r\) such that \(\|A(u)-A(v)\|_{\mathcal{L}(Y,X)}\leq C_{1}\,\|u-v\|_{X}\) for all \(u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\). * For any \(u\in Y\) there exists a bounded linear operator \(B(u)\in\mathcal{L}(X)\) satisfying \(B(u)=SA(u)S^{-1}-A(u)\) and \(B\colon Y\to\mathcal{L}(X)\) is uniformly bounded on bounded sets in \(Y\). Furthermore for all \(r>0\) there exists a constant \(C_{2}\) which depends only on \(r\) such that \(\|B(u)-B(v)\|_{\mathcal{L}(X)}\leq C_{2}\,\|u-v\|_{Y}\), for all \(u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\); * For all \(t\in[0,\infty)\), \(f\) is uniformly bounded on bounded sets in \(Y\). Moreover, the map \(f\colon Y\to Y\) is locally \(X\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{3}>0\), depending only on \(r\), such that \(\|f(u)-f(v)\|_{X}\leq C_{3}\,\|u-v\|_{X}\), for all \(u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\), and locally \(Y\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{4}>0\), depending only on \(r\), such that \(\|f(u)-f(v)\|_{Y}\leq C_{4}\)\(\|u-v\|_{Y}\), for all \(u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\). **Lemma 2.5**.: _[_14_]_ _Assume that (A1)-(A4) hold. Then for given \(u_{0}\in Y\), there is a maximal time of existence \(T>0\), depending on \(u_{0}\), and a unique solution \(u\) to (2.19) in \(X\) such that \(u=u(u_{0},.)\in C^{0}(Y,[0,T))\cap C^{1}(X,[0,T)).\) Moreover, the solution depends continuously on the initial data, i.e. the map \(u_{0}\to u(u_{0},.)\) is continuous from \(Y\) to \(C^{0}(Y,[0,T))\cap C^{1}(X,[0,T))\)._ ## 3 Proof of theorem 1.2 We begin by noticing that the equation in (1.5) is in the quasi-linear equation form (2.19), where \[A(u)=-2u\partial_{x} \tag{3.1}\] and \[f(u)=\Lambda^{-2}\partial_{x}\big{(}u^{2}+(u^{2})_{x}\big{)}. \tag{3.2}\] Let \[(\Lambda^{s}f)(k):=\sum_{k\in\mathbb{Z}}(1+n^{2})^{s/2}\hat{f}(n)e^{ink}.\] For any \(s,s^{\prime}\in\mathbb{R}\), \(\Lambda^{s}:H^{s^{\prime}}(\mathbb{S})\to H^{s^{\prime}-s}(\mathbb{S})\) is an isomorphism [36, page 330]. Then, it is natural to choose as Hilbert spaces \(X\coloneqq(H^{s-1}(\mathbb{S}),\|\cdot\|_{s-1})\) and \(Y\coloneqq(H^{s}(\mathbb{S}),\|\cdot\|_{s})\) with \(s>\frac{3}{2}\), and \(S=\Lambda\) as well, and work with them using Kato's approach. We aim at proving lemmas ensuring the validity of the assumptions (A1)-(A4). For convenience, in the remaining part of this section we simply write \(H^{s}\) in place of \(H^{s}(\mathbb{S})\). Moreover, for a given function \(g\in H^{r}\) with \(r>1/2\) let us denote by \(M_{g}\) the corresponding multiplication operator on \(H^{r}\), i.e. \(M_{g}\colon H^{r}\to H^{r},w\mapsto gw\). Since \(H^{r}\), \(r>1/2\), is closed under multiplication, \(M_{g}\) is continuous. Now, we verify the assumptions needed for Theorem 1.2. 
We start with assumption (A1): **Lemma 3.1**.: _Let \(s>3/2\). For any given \(r>0\), it holds that for all \(u\in\mathrm{B}_{r}(0)\subseteq H^{s}\), the linear operator \(A(u)\colon H^{s-1}\to H^{s-1}\), with domain \(D(A(u)):=\{w\in H^{s-1}\colon A(u)w\in H^{s-1}\}\), generates a strongly continuous semigroup \(T_{u}(t)\) in \(X\) which satisfies \(\|T_{u}(t)\|_{\mathcal{L}(X)}\leq e^{\omega_{r}t}\) for all \(t\in[0,\infty)\), for a uniform constant \(\omega_{r}>0\). In particular, the operator \(A(u)\) given in (3.1), with domain \(\mathcal{D}(A)=\{\omega\in H^{s-1}:A(u)\omega\in H^{s-1}\}\subset H^{s-1}\) is quasi-m-accreative in \(H^{s-1}\)if \(u\in H^{s}\), \(s>\frac{3}{2}\)._ For convenience, it would be good to mention that the coefficient in (3.1) does not affect the analysis. It just plays a role in constant estimation which is not of our interest. Therefore, we will neglect it and keep the operator form as \(u\partial_{x}\). We prove this lemma in two steps. First step is given as follows: **Lemma 3.2**.: _The operator \(A(u)=u\partial_{x}\) in \(L^{2}\), with \(u\in H^{s}\), \(s>\frac{3}{2}\), is quasi-m-accreative._ Proof.: A linear operator \(A=A(u)\) in \(X\) is quasi-m-accretive if and only if [15]: 1. There is a real number \(\beta\) such that \((A\omega,\omega)_{X}\geq-\beta\|\omega\|_{X}^{2}\) for all \(\omega\in D(A)\); 2. The range of \(A(u)+\lambda I\) is all of \(X\) for some (or equivalently, all) \(\lambda>\beta\). Note that if the above property (a) holds, then \(A+\lambda I\) is dissipative for all \(\lambda>\beta\). Moreover, if \(A\) is a closed operator, then \(A+\lambda I\) has closed range in \(X\) for all \(\lambda>\beta\). Hence, in order to prove (b) in such a case, it is enough to show that \(A+\lambda I\) has dense range in \(X\) for all \(\lambda>\beta\). First we show that \(A\) is a closed operator in \(L^{2}\). Let \((v_{n})_{n\in\mathbb{N}}\) be a sequence in \(D(A)\) with \(v_{n}\to v\) in \(L^{2}\) and \(Av_{n}\to w\) in \(L^{2}\). Then \(uv_{n}\in H^{1}\) for all \(n\in\mathbb{N}\) by definition of \(D(A)\) since an alternative way of writing the domain is \(D(A)=\{\omega\in L^{2}:u\omega\in H^{1}\}\) and \(v_{n}\in D(A)\). Moreover, both \(uv_{n}\to uv\) and \(u_{x}v_{n}\to u_{x}v\) in \(L^{2}\) by the continuity of the multiplication \(H^{r}\times L^{2}\to L^{2}\) for \(r>1/2\). Therefore,\((uv_{n})_{x}\to w+u_{x}v\) in \(L^{2}\). Having sequences \((uv_{n})_{n\in\mathbb{N}}\) and \(((uv_{n})_{x})_{n\in\mathbb{N}}\) convergent in \(L^{2}\) implies that \((uv_{n})_{n\in\mathbb{N}}\) converges in \(H^{1}\) with the limit \(uv\), thus \(v\in D(A)\). Moreover the continuity of \(\partial_{x}\colon H^{1}\to L^{2}\) implies that \(\lim_{n\to\infty}(uv_{n})_{x}=(uv)_{x}\), therefore \(w=(uv)_{x}-u_{x}v=Av\). Now, we take the following \(L^{2}\) inner product \[(A(u)\omega,\omega)_{0}=(u\partial_{x}\omega,\omega)_{0}\] We refer to Lemma 3.3 to be stated below and use integration by parts to get: \[|(u\partial_{x}\omega,\omega)_{0}|=|-\frac{1}{2}(u_{x},\omega^{2})_{0}|\leq C \|u_{x}\|_{L^{\infty}}\|w\|_{0}^{2}\leq\tilde{C}\|\omega\|_{0}^{2}.\] Having \(\|u\|_{s}\) bounded allows us to choose \(\beta=\tilde{C}(\|u\|_{H^{s}})\) and to show that the operator satisfies the inequality in (a). Thus, \(A(u)+\lambda I\) is dissipative for all \(\lambda>\beta\). Moreover, recall that \(A(u)\) is a closed operator. Therefore, we now show that \(A(u)+\lambda I\) has dense range in \(L^{2}\) for all \(\lambda>\beta\). 
It is known that if the adjoint of an operator has trivial kernel, then the operator has dense range [26]. For \(A(u)=u\partial_{x}\), the adjoint operator can be expressed \(A^{*}(u)=-u_{x}-u\partial_{x}\). Observe that \[A^{*}(u)\omega=-u_{x}\omega-u\omega_{x}=-(u\omega)_{x}.\] Since \(u_{x}\in L^{\infty}\) and \(\omega\in L^{2}\), we have \(u_{x}\omega\in L^{2}\). Having also \(A(u)\omega=u\omega_{x}\in L^{2}\) for \(\omega\in D(A)\) reveals that \(\mathcal{D}(A^{*})=\{\omega\in L^{2}:A^{*}(u)\omega\in L^{2}\}\). Assume that \(A(u)+\lambda I\) does not have a dense range in \(L^{2}\). Then, there exists \(0\neq z\in L^{2}\) such that \(((A(u)+\lambda I)\omega,z)_{0}=0\) for all \(\omega\in\mathcal{D}(A)\). Since \(H^{1}\subset\mathcal{D}(A)\) \(\mathcal{D}(A)=\mathcal{D}(A^{*})\) is dense in \(L^{2}\). It means that there exists a sequence \(z_{k}\in\mathcal{D}(A^{*})\) such that it converges to an element \(z\in L^{2}\). Recall that \(D(A^{*})\) is closed. So, \(z\in\mathcal{D}(A^{*})\). Moreover, \[((A(u)+\lambda I)\omega,z)_{0}=(\omega,(A(u)+\lambda I)^{*}z)_{0}=0\] reveals that \((A^{*}(u)+\lambda I)z=0\) in \(L^{2}\). Multiplying by \(z\) and integrating by parts, we get \[0=((A^{*}(u)+\lambda I)z,z)_{0}=(\lambda z,z)_{0}+(z,A(u)z)_{0}\geq(\lambda- \beta)\|z\|_{0}^{2}\quad\forall\lambda>\beta\] and thus, \(z=0\), which contradicts our assumption. It completes the proof of (b). Therefore, the operator \(A(u)\) is quasi-m-accreative. In the proof of Lemma 3.1 we use the fact that \(C^{\infty}(\mathbb{S})\) is a _core_ for \(A\) in \(H^{s-1}\), i.e. \(A(u)v\) can be approximated by smooth functions in \(H^{s-1}\) ([8]): **Lemma 3.3**.: _Given \(v\in D(A)\) there exists a sequence \((v_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}^{\infty}\) such that both \(v_{n}\to v\) and \(Av_{n}\to Av\) in \(H^{s-1}\)._ Proof.: Let \(v\in D(A)\) and fix \(\rho\in C_{c}^{\infty}\), where \(C_{c}^{\infty}\) denotes the set of \(C^{\infty}\) functions with compact support, with \(\rho\geq 0\) and \(\int_{\mathbb{R}}\rho=1\). Given \(n\geq 1\), let \(\rho_{n}=n\rho(nx)\). If we set \(v_{n}\coloneqq\rho_{n}*v\), then \(v_{n}\in C_{c}^{\infty}\) for \(n\geq 1\) and \(v_{n}\to v\) in \(H_{p}^{s-1}\). We have to prove that \((uv_{n})_{x}\to(uv)_{x}\) in \(H^{s-1}\). Since \(v\in D(A)\), we have that \(uv_{x}\in H^{s-1}\) and hence \(\rho_{n}*(uv_{x})\to uv_{x}\). Moreover, since \(uv_{n}\in H^{s}\) it follows that \((uv_{n})_{x}=u_{x}v_{n}+u(v_{n})_{x}\in H^{s-1}\), hence we have that \(u_{x}v_{n}\to u_{x}v\) in \(H^{s-1}\). Therefore \[(uv_{n})_{x}-(uv)_{x}=u_{x}v_{n}-u_{x}v+\rho_{n}*(uv_{x})-uv_{x}+u(v_{n})_{x}- \rho_{n}*(uv_{x})\] holds true and it suffices to show that \(u(v_{n})_{x}-\rho_{n}*(uv_{x})\to 0\) in \(H^{s-1}\). To this end, denote \[P_{n}v\coloneqq u(v_{n})_{x}-\rho_{n}*(uv_{x}),\quad n\geq 1.\] We will show that there exists \(K>0\) independent of \(v\) such that \[\|P_{n}v\|_{s-1}\leq K\|v\|_{s-1},\quad n\geq 1. \tag{3.3}\] That will enable us to conclude that \(P_{n}\) is uniformly bounded in \(H^{s-1}\) by the uniform boundedness principle. When we approximate \(v\) in \(H^{s-1}\) by smooth functions, and use this conclusion, we will be able to prove the assertion \(P_{n}\to 0\) for \(v\in C_{c}^{\infty}\). Since the set of smooth functions is dense in \(H^{s-1}\) and \(P_{n}\) are uniformly bounded, the proof will be completed. 
We first notice that \[P_{n}v(x) =\int_{\mathbb{R}}(\rho_{n})_{y}(y)(u(x)-u(x-y))v(x-y)dy+(\rho_{n }*(u_{x}v))(x)\] \[=n^{2}\int_{\mathbb{R}}\rho_{y}(ny)(u(x)-u(x-y))v(x-y)dy+(\rho_{n }*(u_{x}v))(x)\] \[=n\int_{-1}^{1}\rho_{y}(y)(u(x)-u(x-\frac{y}{n}))v(x-\frac{y}{n}) dy+(\rho_{n}*(u_{x}v))(x),\] where \(\operatorname{supp}(\rho)\subset[-1,1]\). Moreover, using the mean value theorem, we obtain the estimate \[\Big{|}n^{2}\int_{\mathbb{R}}\rho_{y}(ny)(u(x)-u(x-y))v(x-y)dy\Big{|} =\Big{|}n^{2}\int_{\mathbb{R}}\rho_{y}(ny)u_{x}(x_{0})yv(x-y)dy\Big{|}\] \[=\Big{|}\int_{-1}^{1}\rho_{y}(y)u_{x}(x_{0})yv(x-y)dy\Big{|}\leq \|u_{x}\|_{L^{\infty}}\int_{-1}^{1}|\rho_{y}(y)|\,|y|\,|v(x-\frac{y}{n})|dy,\] for some \(x_{0}\in(x,x-y)\). Let now \(C\coloneqq\sup_{x\in\mathbb{R}}\|u_{x}\|_{L^{\infty}}^{2}\int_{-1}^{1}|\rho_{y }(y)y|^{2}dy\). Then the Cauchy-Schwarz inequality, Fubini's theorem and the fact that the operator \(\Lambda^{s-1}\) commutes with integration yield that \[\|n^{2}\int_{\mathbb{R}}\rho_{y}(ny)(u(x)-u(x-y))v(x-y)dy\|_{s-1}^ {2}\] \[=\|\Lambda^{s-1}\int_{-1}^{1}\rho_{y}(y)u_{x}(x_{0})y(v(x-\frac{ y}{n}))dy\|_{2}^{2}\] \[=\int_{\mathbb{R}}\left|\int_{-1}^{1}\rho_{y}(y)u_{x}(x_{0})y \Lambda^{s-1}v(x-\frac{y}{n})dy\right|^{2}dx\] \[\leq C\int_{-1}^{1}\int|\Lambda^{s-1}v(x-\frac{y}{n})|^{2}dxdy \leq 2C\|v\|_{s-1}.\] Moreover, we obtain by Plancherel's theorem that \[\|\rho_{n}*(u_{x}v)\|_{s-1} =\|\Lambda^{s-1}(\rho_{n}*(u_{x}v))\|_{2}=\|\rho_{n}*\Lambda^{s-1 }(u_{x}v))\|_{2}\ \ \leq\|\Lambda^{s-1}(u_{x}v)\|_{2}\] \[\leq\|u_{x}\|_{L^{\infty}}\|v\|_{s-1}.\] Therefore we conclude that \[\|P_{n}v\|_{s-1}\leq(\sqrt{2C}+\|u_{x}\|_{L^{\infty}})\,\|v\|_{s-1},\ \ n\geq 1. \tag{3.4}\] For \(K=\sqrt{2C}+\|u_{x}\|_{L^{\infty}}\) in (3.3), proof is completed by the estimate (3.4). Second step to prove Lemma 3.1 is making use of the following lemma proved in [25]: **Lemma 3.4**.: _Let \(X\) and \(Y\) be two Banach spaces such that \(Y\) is continuously and densely embedded in \(X\). Let \(-A\) be the infinitesimal generator of the \(C_{0}\)-semigroup \(T(t)\) on \(X\) and let \(Q\) be an isomorphism from \(Y\) onto \(X\). Then \(Y\) is \(-A\)-admissible (i.e. \(T(t)Y\subset Y\) for all \(t\geq 0\), and the restriction of \(T(t)\) to \(Y\) is a \(C_{0}\)-semigroup on \(Y\)) if and only if \(-A_{1}=-QAQ^{-1}\) is the infinitesimal generator of the \(C_{0}\)-semigroup \(T_{1}(t)=QT(t)Q^{-1}\) on \(X\). Moreover, if \(Y\) is \(-A\)-admissible, then the part of \(-A\) in \(Y\) is the infinitesimal generator of the restriction \(T(t)\) to \(Y\)._ Before we proceed with the proof of Lemma 3.1, we give the commutator estimate which will be used: **Lemma 3.5** ([35]).: _Let \(m>0\), \(s\geq 0\) and \(3/2<s+m\leq\sigma\). Then for all \(f\in H^{\sigma}\) and \(g\in H^{s+m-1}\) one has \(\|[\Lambda^{m},f]g\|_{s}\leq C\|f\|_{\sigma}\,\|g\|_{s+m-1},\) where \(C\) is a constant which is independent of \(f\) and \(g\)._ **Proof of Lemma 3.1**: Following the arguments in the proof of Lemma 3.2, we first take the following \(H^{s-1}\) inner product \[(A(u)\omega,\omega)_{s-1} =(u\partial_{x}\omega,\omega)_{s-1}=(\Lambda^{s-1}u\partial_{x} \omega,\Lambda^{s-1}\omega)_{0}\] \[=([\Lambda^{s-1},u]\partial_{x}\omega,\Lambda^{s-1}\omega)_{0}+( u\partial_{x}\Lambda^{s-1}\omega,\Lambda^{s-1}\omega)_{0}. 
\tag{3.5}\] Using Cauchy-Schwartz's inequality and Lemma 3.5 with \(m=s-1\), \(\sigma=s\), we get the following estimate for the first term of (3.5): \[|([\Lambda^{s-1},u]\partial_{x}\omega,\Lambda^{s-1}\omega)_{0}|\leq C\|u\|_{s }\|\partial_{x}\omega\|_{s-2}\|\omega\|_{s-1}\leq\tilde{C}\|\omega\|_{s-1}^{2},\] for some constant \(\tilde{C}\) depending on \(\|u\|_{s}\). For the second term of (3.5), we again refer to Lemma 3.3 and use integration by parts to get: \[|(u\partial_{x}\Lambda^{s-1}\omega,\Lambda^{s-1}\omega)_{0}|=|-\frac{1}{2}(u_{ x},(\Lambda^{s-1}\omega)^{2})_{0}|\leq C\|u_{x}\|_{L^{\infty}}\|w\|_{s-1}^{2}\leq \tilde{C}\|\omega\|_{s-1}^{2}.\] Choosing \(\beta=\tilde{C}(\|u\|_{H^{s}})\), the operator satisfies the required inequality. Moreover, let \(Q:=\Lambda^{s-1}\) and notice that \(Q\) is an isomorphism of \(H^{s-1}\) to \(L^{2}\) and \(H^{s-1}\) is continuously and densely imbedded into \(L^{2}\) as \(s>\frac{3}{2}\). Define \[A_{1}(u)=QA(u)Q^{-1}=\Lambda^{s-1}A(u)\Lambda^{1-s}=\Lambda^{s-1}u\partial_{x }\Lambda^{1-s}=\Lambda^{s-1}u\Lambda^{1-s}\partial_{x},\] and let \(\omega\in L^{2}\) and \(u\in H^{s}\), \(s>\frac{5}{2}\). Then write \(B_{1}(u)=A_{1}(u)-A(u)\) and consider the following estimate: \[\|B_{1}(u)\omega\|_{0} =\|[\Lambda^{s-1},A(u)]\Lambda^{1-s}\omega\|_{0}=\|[\Lambda^{s-1},u]\Lambda^{1-s}\partial_{x}\omega\|_{0}\] \[\leq C\|u\|_{s}\|\Lambda^{1-s}\partial_{x}\omega\|_{s-2}\leq C\|u \|_{s}\|\omega\|_{0},\] where we applied Lemma 3.5 with \(m=s-1\) and \(\sigma=s\). Hence, we obtain \(B_{1}(u)\in\mathcal{L}(L^{2})\). Recall from Lemma 3.2 that \(A(u)\) is quasi-m-accretive in \(L^{2}\), i.e. \(-A(u)\) is the infinitesimal generator of a \(C_{0}\)-semigroup on \(L^{2}\). Thus, \(A_{1}(u)=A(u)+B_{1}(u)\) is also the infinitesimal generator of a \(C_{0}\)-semigroup in \(L^{2}\) by means of a perturbation theorem for semigroups (see [25]). Lemma 3.4 reveals that for \(Y=H^{s-1}\), \(X=L^{2}\) and \(Q=\Lambda^{s-1}\), \(H^{s-1}\) is \(A\)-admissible. Hence, \(-A(u)\) is the infinitesimal generator of a \(C_{0}\)-semigroup on \(H^{s-1}\). We continue with the proof of assumption (A2): **Lemma 3.6**.: _A maps \(H^{s}\) into \(\mathcal{L}(H^{s},H^{s-1})\), more precisely the domain \(D(A(u))\) contains \(H^{s}\) and the restriction \(A(u)|_{H^{s}}\) belongs to \(\mathcal{L}(H^{s},H^{s-1})\) for any \(u\in H^{s}\)._ _Furthermore \(A\) is Lipschitz continuous in the sense that for all \(r>0\) there exists a constant \(C_{1}\) which only depends on \(r\) such that_ \[\|A(u)-A(v)\|_{\mathcal{L}(H^{s},H^{s-1})}\leq C_{1}\,\|u-v\|_{s-1} \tag{3.6}\] _for all \(u,\ v\) inside \(\mathrm{B}_{r}(0)\subseteq H^{s}\)._ Proof.: The operator \(A(u)|_{H^{s}}\) belongs to \(\mathcal{L}(H^{s},H^{s-1})\) for any \(u\in H^{s}\), since \(\partial_{x}\in\mathcal{L}(H^{s},H^{s-1})\) and \(M_{u}\in\mathcal{L}(H^{s-1})\). To see that the required estimate is satisfied, let \(u,v,w\in H^{s}\) be arbitrary. Then, \[\|(A(u)-A(v))w\|_{s-1} =\|(u-v)\partial_{x}w\|_{s-1}\leq C\|u-v\|_{s-1}\,\|\partial_{x}w \|_{s-1}\] \[\leq C\|u-v\|_{s-1}\,\|w\|_{s},\] where \(C\) denotes a generic constant. This shows that for arbitrary \(r>0\) one can always find a constant \(C_{1}\) such that (3.6) holds uniformly for all \(u,v\in\mathrm{B}_{r}(0)\subseteq H^{s}\). 
The last two assumptions (A3)-(A4) are proved by the help of the following commutator and product estimates stated in [20, Proposition B.10.(2)] and [14, Lemma A1], respectively: **Lemma 3.7**.: _Let \(r>1/2\)._ * _If_ \(-1/2<t\leq r+1\)_, there exists a constant_ \(C_{r,t}>0\) _such that_ \[\left\|[\Lambda^{t},M_{g}]h\right\|_{0}\leq C_{r,t}\,\|g\|_{r+1}\,\|h\|_{t-1}\] _for all_ \(g\in H^{r+1}\) _and_ \(h\in H^{t-1}\)_._ * _If_ \(-r<t\leq r\)_, there exists a constant_ \(C_{r,t}>0\) _such that_ \(\|fg\|_{t}\leq C_{r,t}\,\|f\|_{r}\,\|g\|_{t}\)_, for all_ \(f\in H^{r}\) _and_ \(g\in H^{t}\)_._ Even though Lemma 3.7 is stated on the real line (in general \(\mathbb{R}^{m}\)), it holds on periodic domain as well. Now, we define a bounded linear operator and prove assumption (A3): **Lemma 3.8**.: _For any \(u\in H^{s}\) there exists a bounded linear operator \(B(u)\in\mathcal{L}(H^{s-1})\) satisfying \(B(u)=\Lambda A(u)\Lambda^{-1}-A(u)\) and \(B\colon H^{s}\to\mathcal{L}(H^{s-1})\) is uniformly bounded on bounded sets in \(H^{s}\). Furthermore for all \(r>0\) there exists a constant \(C_{2}\) which depends only on \(r\) such that_ \[\|B(u)-B(v)\|_{\mathcal{L}(H^{s-1})}\leq C_{2}\,\|u-v\|_{s} \tag{3.7}\] _for all \(u,v\in\mathrm{B}_{r}(0)\subseteq H^{s-1}\). Here, \(A(u)\) is the operator given by (3.1)._ Proof.: Let \(u\in H^{s}\). Since \(\partial_{x}\) commutes with \(\Lambda\) and \(\Lambda^{-1}\) we obtain that \[B(u)=\Lambda u\partial_{x}\Lambda^{-1}-u\partial_{x}=[\Lambda,u\partial_{x}] \Lambda^{-1}=[\Lambda,u]\Lambda^{-1}\partial_{x}.\] Hence we can write \(\Lambda^{s-1}B(u)\) as \[\Lambda^{s-1}[\Lambda,u]\Lambda^{-1}\partial_{x} =\Lambda^{s}u\Lambda^{-1}\partial_{x}-\Lambda^{s-1}u\partial_{x}= [\Lambda^{s},u]\Lambda^{-1}\partial_{x}+u\Lambda^{s-1}\partial_{x}-\Lambda^{s- 1}u\partial_{x}\] \[=[\Lambda^{s},u]\Lambda^{-1}\partial_{x}+[u,\Lambda^{s-1}] \partial_{x}.\] Let now \(\omega\in H^{s-1}\) and \(u,v\in H^{s}\) be arbitrary. In view of the above identity and Lemma 3.7, we obtain the following estimate \[\|(B(u)-B(v))\omega\|_{s-1}=\|\Lambda^{s-1}(B(u)-B(v))\omega\|_{0}\] \[\leq\|[\Lambda^{s},u-v]\Lambda^{-1}\partial_{x}\omega\|_{0}+\|[u -v,\Lambda^{s-1}]\partial_{x}\omega\|_{0}\] \[\leq C\|u-v\|_{s}(\|\Lambda^{-1}\partial_{x}\omega\|_{s-1}+\| \partial_{x}\omega\|_{s-2})\leq C\|u-v\|_{s}\|\omega\|_{s-1},\] where \(C\) is a generic constant independent of \(u,w\) and \(w\). In particular, this shows that \(B(u)\) extends to a bounded linear operator on \(H^{s-1}\) for every \(u\in H^{s}\) such that \(B\colon H^{s}\to\mathcal{L}(H^{s-1})\) is uniformly bounded on bounded sets in \(H^{s}\). Furthermore, this estimation proves that there exists a constant \(C_{2}\) depending only on the radius of the ball \(\mathrm{B}_{r}(0)\subseteq H^{s}\) such that (3.7) is satisfied for all \(u,v\in B_{r}(0)\). The last assumption (A4) is proved in Lemma 3.9: **Lemma 3.9**.: _For all \(t\in[0,\infty)\), \(f\) is uniformly bounded on bounded sets in \(H^{s}\). 
Moreover, the map \(f\colon H^{s}\to H^{s}\) is locally \(H^{s-1}\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{3}>0\), depending only on \(r\), such that \(\|f(u)-f(v)\|_{s-1}\leq C_{3}\,\|u-v\|_{s-1}\) for all \(u,v\in\mathrm{B}_{r}(0)\subseteq H^{s}\) and locally \(H^{s}\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{4}>0\), depending only on \(r\), such that \(\|f(u)-f(v)\|_{s}\leq C_{4}\,\|u-v\|_{s}\) for all \(u,v\in\mathrm{B}_{r}(0)\subseteq H^{s}\)._ Proof.: Recall that \(f(u)=\Lambda^{-2}\partial_{x}\big{(}u^{2}+(u^{2})_{x}\big{)}.\) Therefore, with the help of Lemma 3.7 \[\|f(u)-f(v)\|_{s-1} \leq C\|(u^{2}-v^{2})+(u^{2}-v^{2})_{x}\|_{s-2}\] \[\leq C\|(u+v)(u-v)+((u+v)(u-v))_{x}\|_{s-2}\] \[\leq C(\|(u+v)\|_{s-2}\|(u-v)\|_{s-1}+\|u+v\|_{s-1}\|u-v\|_{s-1})\] \[\leq C_{3}\|u-v\|_{s-1}\] where \(C_{3}\) is a constant depending on \(\|u\|_{H^{s}}\) and \(\|v\|_{H^{s}}\). This proves \(H^{s-1}\)-Lipschitz continuity. Similar arguments will show that we have the following estimates: \[\|f(u)-f(v)\|_{s}\leq C_{4}\|u-v\|_{s} \tag{3.8}\] where \(C_{4}\) is also a constant depending on \(\|u\|_{H^{s}}\) and \(\|v\|_{H^{s}}\). Since we choose \(u_{0}\in H^{s}\), this estimate actually corresponds to the proof of continuous dependence on the initial data. Note that boundedness of \(f(u)\) on bounded subsets \(\{u\in H^{s}:\|u\|_{s}\leq M\}\) of \(H^{s}\) (for all \(M\)) can be obtained from (3.8) by choosing \(v=0\). Hence, we get the estimates for (A4). ## 4 Proof of theorem 1.3 In order to prove theorem 1.3 we need some estimates for \(u,\,u_{x}\) and \(u_{xx}\). To this end, we need the next result. **Theorem 4.1**.: _Assume that \(u_{0}\in H^{3}(\mathbb{S})\), \(m_{0}(x):=u_{0}(x)-u_{0}^{\prime\prime}(x)\), and let \(u\) be the corresponding solution of (1.5). If \(m_{0}(x)\geq 0\), \(x\in\mathbb{S}\), then \(m(x,t):=u(x,t)-u_{xx}(x,t)\) is non-negative for any \(t\) as long as the solution exists, and any \(x\in\mathbb{S}\). Moreover, \(u\) is also non-negative. In particular, if \(m_{0}>0\), then \(u>0\)._ An analogous result for non-periodic problems was proved in [21, Lemma 5.5], and following the same steps we get the demonstration for Theorem 4.1. For this reason it is omitted. There is one more fact regarding the \(L^{\infty}\) norm of \(u_{x}\). We begin by observing that \[\int_{\mathbb{S}}u_{xx}dx=0.\] This fact is enough to guarantee the existence of a point \(\xi_{t}-1\in(0,1)\) such that \(u_{x}(t,\xi_{t}-1)=0\), for each \(t\in(0,T)\),. **Lemma 4.1**.: _If \(u_{0}\in H^{3}(\mathbb{S})\cap L^{1}(\mathbb{S})\), is such that \(m_{0}\geq 0\), then there exists a constant \(K>0\) such that the solution of (1.5) satisfies \(\|u_{x}\|_{L^{\infty}(\mathbb{S})}\leq K\)._ Proof.: Let us first assume that \(\|m(\cdot,t)\|_{L^{1}}(\mathbb{S})\) is constant for any \(t\) as long as the solution exists. Assume \(m_{0}\) does not change sign and \(m_{0}\geq 0\). Then, \[K_{1} = \|m_{0}\|_{L^{1}(\mathbb{S})}=\int_{\mathbb{S}}m_{0}(r)dr=\int_{ \mathbb{S}}m(r,t)dr=\int_{\xi_{t}-1}^{\xi_{t}}m(r,t)dr\] \[\geq \int_{\xi_{t}-1}^{x}(u-u_{xx})(r,t)dr=\int_{\xi_{t}-1}^{x}u(r,t) dr-u_{x}(x,t)\geq-u_{x}(x,t)\] holds for every \(x\in[\xi_{t}-1,\xi_{t}]\). Here, we use Theorem 4.1, which guarantee that \(u\) does not change sign, under the assumption that \(m_{0}\) does not change sign. Taking into account the final result, we observe that \(u_{x}\) is bounded from below. 
Moreover, \[K_{1}=\int_{\xi_{t}-1}^{\xi_{t}}m(r,t)dr\geq\int_{x}^{\xi_{t}}m(r,t)dr=\int_{ x}^{\xi_{t}}u(r,t)dr+u_{x}(x,t)\geq u_{x}(x,t).\] Hence, \(u_{x}\) is bounded also from above. Therefore, we can conclude that \(\|u_{x}\|_{\infty}\) norm is bounded provided that \(m\) does not change sign, i.e. \(\|u_{x}\|_{\infty}\leq K\). The case \(m_{0}\leq 0\) is proved in a similar way and, therefore, is omitted. We now complete the demonstration proving that \(\|m(\cdot,t)\|_{L^{1}(\mathbb{S})}\) is constant. We begin by noticing that (1.1) is itself a conservation law, in the sense that \[\partial_{t}(u-u_{xx})=\partial_{x}\Big{(}(2-\partial_{x})(1+\partial_{x})u^ {2}\Big{)}=\partial_{x}\Big{(}(1-\partial_{x}^{2})u^{2}+u^{2})\Big{)}.\] Integrating the relation above with respect to \(x\) on \(\mathbb{S}\), we obtain \[\frac{d}{dt}\int_{\mathbb{S}}(u-u_{xx})dx=\Big{(}(1-\partial_{x}^{2})u^{2}+u^ {2})\Big{)}\big{|}_{\mathbb{S}}=0,\] meaning that the \(\|m(\cdot,t)\|_{L^{1}(\mathbb{S})}=const.\). Since \(m_{0}\in L^{1}(\mathbb{S})\), we conclude that \(\|m(\cdot,t)\|_{L^{1}(\mathbb{S})}=\|m_{0}\|_{L^{1}(\mathbb{S})}\). Now we start proving Theorem 1.3: First, we rewrite the equation (1.5) in the following form \[u_{t}-2uu_{x}+u^{2}=\Lambda^{-2}(u^{2}+(u^{2})_{x})\] by using \(\Lambda^{-2}(f(u))_{xx}=\Lambda^{-2}f(u)-f(u)\). Calling \(f(u)=u^{2}+(u^{2})_{x}\) and observing that \(2uu_{x}=(u^{2})_{x}\), we get \[u_{t}+(1-\partial_{x})u^{2}=\Lambda^{-2}f(u). \tag{4.1}\] Now, we will differentiate (4.1) with respect to \(x\), simplify and write \[u_{tx}+(\partial_{x}-\partial_{x}^{2})u^{2}=\Lambda^{-2}f(u)-u^{2}. \tag{4.2}\] We continue this process and get the following equations: \[u_{txx}+(\partial_{x}^{2}-\partial_{x}^{3})u^{2}=\Lambda^{-2}f(u)-f(u), \tag{4.3}\] \[u_{txxx}+(\partial_{x}^{3}-\partial_{x}^{4})u^{2}=\partial_{x}\Lambda^{-2}f(u )-\partial_{x}f(u), \tag{4.4}\] \[u_{txxxx}+(\partial_{x}^{4}-\partial_{x}^{5})u^{2}=\Lambda^{-2}f(u)-f(u)- \partial_{x}^{2}f(u). \tag{4.5}\] Moreover, we will multiply (4.1) by \(u\), (4.2) by \(u_{x}\), (4.3) by \(u_{xx}\), (4.4) by \(u_{xxx}\), (4.5) by \(u_{xxxx}\) and integrate all over \(\mathbb{S}\). Let \(I(u)=\int_{\mathbb{S}}(u^{2}+u_{x}^{2}+u_{xx}^{2}+u_{xxx}^{2}+u_{xxxx}^{2})dx\). Therefore, summing up equations (4.1)-(4.5) we obtain \[\frac{1}{2}\frac{d}{dt}I(u)+\int_{\mathbb{S}}(u(1-\partial_{x})u ^{2}+u_{x}(\partial_{x}-\partial_{x}^{2})u^{2}+u_{xx}(\partial_{x}^{2}- \partial_{x}^{3})u^{2}\] \[+u_{xxx}(\partial_{x}^{3}-\partial_{x}^{4})u^{2}+u_{xxxx}(\partial _{x}^{4}-\partial_{x}^{5})u^{2})dx\] \[=\int_{\mathbb{S}}(u\Lambda^{-2}f(u)+u_{x}\Lambda^{-2}f(u)-u^{2} u_{x}+u_{xx}(\Lambda^{-2}f(u)-f(u))\] \[+u_{xxx}(\partial_{x}\Lambda^{-2}f(u)-\partial_{x}f(u))+u_{xxxx}( \Lambda^{-2}f(u)-f(u)-\partial_{x}^{2}f(u))dx.\] Our main aim is to obtain \(I(u)\), which is equivalent to \(H^{4}\) norm of \(u\), within the equation so that Gronwall's inequality is applicable and we get an upper bound valid for all time. That bound will imply the global existence of solution. 
After integration by parts, we can rewrite the equality in the following form: \[\frac{1}{2}\frac{d}{dt}I(u)+\int_{\mathbb{S}}(u^{3}+2(u^{2})_{x}u_{xx}+(u^{2})_ {xx}u_{xxx}+2(u^{2})_{xxx}u_{xxxx}\] \[+(u^{2})_{xxxx}u_{xxxx}+(u^{2})_{xxxx}u_{xxxxx}+u^{2}u_{xxxx})dx=\int_{ \mathbb{S}}u(\Lambda^{-2}f(u))dx.\] Since \[(u^{2})_{x}=2uu_{x},\quad(u^{2})_{xx}=2u_{x}^{2}+2uu_{xx},\quad(u^ {2})_{xxx}=6u_{x}u_{xx}+2uu_{xxx},\] \[(u^{2})_{xxxx}=6u_{xx}^{2}+8u_{x}u_{xxx}+2uu_{xxxx},\] \[(u^{2})_{xxxxxx}=20u_{xx}u_{xxx}+10u_{x}u_{xxxx}+2uu_{xxxx},\] and integrating by parts once more, the integral becomes \[\frac{1}{2}\frac{d}{dt}I(u)+\int_{\mathbb{S}}(u^{3}-2(u^{2})_{xx }u_{x}-(u^{2})_{xxx}u_{xx}-2(u^{2})_{xxxx}u_{xxx} \tag{4.6}\] \[-(u^{2})_{xxxxx}u_{xxx}-(u^{2})_{xxxxx}u_{xxxx}+(u^{2})_{xx}u_{ xx})dx\] \[= \frac{1}{2}\frac{d}{dt}I(u)+\int_{\mathbb{S}}(u^{3}-4u_{x}^{3}- 4uu_{x}u_{xx}-4uu_{xx}^{2}-2uu_{xx}u_{xxx}\] \[-16u_{x}u_{xxx}^{2}-4uu_{xxx}u_{xxxx}-20u_{xx}u_{xxx}^{2}\] \[-10u_{x}u_{xxx}u_{xxxx}-2uu_{xxx}u_{xxxxx}-20u_{xx}u_{xxx}u_{xxxx }-10u_{x}u_{xxxx}^{2}\] \[-2uu_{xxxx}u_{xxxxx})dx=\int_{\mathbb{S}}u(\Lambda^{-2}f(u))dx.\] By Lemma 4.1, we have that \(\|u_{x}\|_{\infty}\) is bounded. Moreover, we can rewrite (1.1) as \[u_{t}-u_{txx}=4uu_{x}+2u_{x}^{2}+2uu_{xx}-6u_{x}u_{xx}-2uu_{xxx}.\] Multiplying equation above by \(u\), noting that \[2uu_{x}u_{xx} = \partial_{x}(uu_{x}^{2})-u_{x}^{3},\quad u^{2}u_{xxx}=\partial_{x }(u^{2}u_{xx}-uu_{x}^{2})+u_{x}^{3},\] \[u^{2}u_{xx} = \partial_{x}(u^{2}u_{x})-2uu_{x}^{2},\] integrating over \(\mathbb{S}\) and using the identities above, we obtain \[\frac{1}{2}\frac{d}{dt}\int_{\mathbb{S}}(u^{2}+u_{x}^{2})dx=\int_{\mathbb{S}}( 4u_{x}^{3}-2uu_{x}^{2})dx\leq 5\|u_{x}\|_{\infty}\int_{\mathbb{S}}(u^{2}+u_{x}^{2})dx,\] which implies \[\|u\|_{1}^{2}\leq\|u_{0}\|_{1}^{2}e^{10\int_{0}^{t}\|u_{x}\|_{\infty}d\tau} \leq\|u_{0}\|_{1}^{2}e^{At}=C_{0}^{2},\] for some optimal constant \(A>0\) since \(\|u_{x}\|_{\infty}\) norm is bounded. This estimate is valid at any finite time, therefore is a global bound for \(H^{1}\) norm of \(u\). The reason we provide this inequality is to verify that \(\|u\|_{\infty}\) is also bounded, since \(\|u\|_{\infty}\leq\|u\|_{1}\leq C_{0}\) by Sobolev embedding theorem. Moreover, we can show that \(H^{2}\) norm of \(u\) will be bounded in finite time: Let \(J(u)=\int_{\mathbb{S}}(u^{2}+u_{x}^{2}+u_{xx}^{2})dx\). Following the arguments done above for (4.1)-(4.3) and using integration by parts, \[\frac{1}{2}\frac{d}{dt}J(u) + \int_{\mathbb{S}}(2u^{3}+(u^{2})_{x}u_{x}+2(u^{2})_{x}u_{xx}+(u^{ 2})_{xx}u_{xx}+(u^{2})_{xx}u_{xxx})dx\] \[\leq \int_{\mathbb{S}}2u(\Lambda^{-2}f(u))dx.\] Since \(\|u\|_{\infty}\) and \(\|u_{x}\|_{\infty}\) are bounded, \[\frac{1}{2}\frac{d}{dt}J(u) \leq 2\max{(\|u\|_{\infty},\|u_{x}\|_{\infty})}J(u)+2\max{(\|u\|_{ \infty})}\int_{\mathbb{S}}(\Lambda^{-2}f(u))dx.\] As it was given in (1.6), \(\Lambda^{-2}f=g*f\). Now, we need to estimate the \(L^{\infty}-\)norm of the integrand of the second term in order to obtain a differential inequality and apply Gronwall's inequality. 
For this purpose, we first provide the following two estimates: \[\|g\|_{2}\leq\frac{1}{2}(\frac{e^{2}+2e-1}{e^{2}-2e+1})^{1/2}:=n_{2},\quad\|g \|_{\infty}\leq\frac{1}{2}(\frac{e+1}{e-1}):=n_{\infty}.\] Hence, \[\|g*u^{2}\|_{\infty}\leq\|g\|_{\infty}\|u^{2}\|_{1}\leq\|g\|_{ \infty}\|u\|_{2}^{2}\leq n_{\infty}C_{0},\] \[\|g*(u^{2})_{x}\|_{\infty}\leq\|g\|_{2}\|(u^{2})_{x}\|_{2}\leq\|g \|_{2}\|u^{2}\|_{1}\leq n_{2}C_{0}^{2}.\] These estimates, together with Gronwall's inequality, provide the boundedness of \(J(u)\) which is equivalent to \(H^{2}\) norm. Therefore, we will be able to give an upper bound for \(\|u_{x}\|_{\infty}\) norm as well since \(\|u_{x}\|_{\infty}\leq\|u_{x}\|_{1}<\infty\). Proving that \(H^{3}\) norm is bounded in finite time will be the last issue to conclude the proof of Theorem 1.3: Let \(K(u)=\int_{\mathbb{S}}(u^{2}+u_{x}^{2}+u_{xx}^{2}++u_{xxx}^{2})dx\). Like we did for \(J(u)\), we can evaluate the following inequality: \[\frac{1}{2}\frac{d}{dt}K(u) + \int_{\mathbb{S}}(2u^{3}+(u^{2})_{x}u_{x}+2(u^{2})_{x}u_{xx}+(u^{ 2})_{xx}u_{xx}\] \[+ (u^{2})_{xx}u_{xxx}+(u^{2})_{xxx}u_{xxx}+(u^{2})_{xxx}u_{xxxx})dx\] \[\leq \int_{\mathbb{S}}(2u(\Lambda^{-2}f(u))+u_{x}(\Lambda^{-2}f(u))+u_ {xxx}\partial_{x}(\Lambda^{-2}f(u)))dx,\] and \[\frac{1}{2}\frac{d}{dt}K(u) \leq 2\max{(\|u\|_{\infty},\|u_{x}\|_{\infty})}K(u)+\int_{\mathbb{S} }(2u(\Lambda^{-2}(u^{2}))+2u_{x}(\Lambda^{-2}(u^{2})_{x}))dx\] \[\leq 2\max{(\|u\|_{\infty},\|u_{x}\|_{\infty})}K(u)+2\max{(\|u\|_{ \infty},\|u_{x}\|_{\infty})}\int_{\mathbb{S}}(\Lambda^{-2}f(u))dx.\] Similar arguments reveal that \(K(u)\) is bounded in finite time and hence, \(\|u_{xx}\|_{\infty}\leq\|u_{xx}\|_{1}<\infty\). Recalling the equality (4.6), and Theorem 4.1 which guarantees that \(u>0\), we evaluate \[\frac{1}{2}\frac{d}{dt}I(u) = -(\int_{\mathbb{S}}(u^{3}-4u_{x}^{3}-4uu_{x}u_{xx}-4uu_{xx}^{2}-2uu _{xx}u_{xxx}\] \[-16u_{x}u_{xxx}^{2}-4uu_{xxx}u_{xxxx}-20u_{xx}u_{xxx}^{2}\] \[-10u_{x}u_{xxx}u_{xxxx}-2uu_{xxx}u_{xxxx}-20u_{xx}u_{xxx}u_{xxxx}- 10u_{x}u_{xxxx}^{2}\] \[-2uu_{xxxx}u_{xxxxx})dx)+\int_{\mathbb{S}}u(\Lambda^{-2}f(u))dx\] \[\leq -(\int_{\mathbb{S}}(-u^{3}-4u_{x}^{3}-4uu_{x}u_{xx}-4uu_{xx}^{2}- 2uu_{xx}u_{xxx}\] \[-16u_{x}u_{xxx}^{2}-4uu_{xxx}u_{xxxx}-20u_{xx}u_{xxx}^{2}\] \[-10u_{x}u_{xxx}u_{xxxx}-2uu_{xxx}u_{xxxxx}-20u_{xx}u_{xxx}u_{xxxx }-10u_{x}u_{xxxx}^{2}\] \[-2uu_{xxxx}u_{xxxxx})dx)+\int_{\mathbb{S}}u(\Lambda^{-2}f(u))dx\] \[\leq \max\left(\|u\|_{\infty},\|u_{x}\|_{\infty},\|u_{xx}\|_{\infty} )I(u)+\int_{\mathbb{S}}u(\Lambda^{-2}f(u))dx\right.\] \[\leq \max\left(\|u\|_{\infty},\|u_{x}\|_{\infty},\|u_{xx}\|_{\infty} )I(u)+\max\left(\|u\|_{\infty}\right)\int_{\mathbb{S}}(\Lambda^{-2}f(u))dx.\] Therefore, \[\frac{1}{2}\frac{d}{dt}I(u) \leq K_{2}I(u)+K_{3}\] for some optimal constants \(K_{2}\), \(K_{3}\). Gronwall's inequality implies \(I(u)\leq[I(0)+K_{3}t]e^{K_{2}t}\), which is valid for any finite time \(0<t\leq T\). Since we find an upper bound for \(\|u\|_{4}\), this completes the proof of Theorem 1.3. ## 5 Proof of theorem 1.1 The proof of theorem 1.1 is divided in three major parts, namely, * Existence of \(C^{1}\) periodic one-forms \(\omega_{1},\omega_{2}\) and \(\omega_{3}\) satisfying (1.4); * Existence of a domain \(V\), depending on the initial datum, containing open sets endowed with a PSS structure; * Existence of local connection forms \(\omega_{13},\omega_{23}\). 
* **Existence of \(C^{1}\) periodic one-forms \(\omega_{1}\), \(\omega_{2}\) and \(\omega_{3}\).** Example 2.1 exhibits two triads of one forms (2.13) satisfying the condition (1.4). For solutions \(u\) emanating from an initial datum \(u_{0}\in H^{4}(\mathbb{S})\), with \(u_{0}-u_{0}^{\prime\prime}>0\), theorem 1.2 implies that \(u\in C(H^{4}(\mathbb{S}),[0,T))\cap C^{1}(H^{3}(\mathbb{S}),[0,T))\), whereas Theorem 1.3 informs us that \(u\) is defined on \(U=\mathbb{R}\times(0,\infty)\). Moreover, \(u_{t}(\cdot,t)\in H^{3}(\mathbb{S})\subseteq C_{\mathrm{per}}^{2}(\mathbb{R})\) and \(u(\cdot,t)\in H^{4}(\mathbb{S})\subseteq C_{\mathrm{per}}^{3}(\mathbb{R})\) in view of the Sobolev Lemma (see lemma 2.3). Therefore, \(u\in C^{3,1}(\mathbb{R})\) and then \(f_{ij}\in C^{1}(\mathbb{R})\) and is periodic in the variable \(x\), where \(f_{ij}\) are the coefficients of the forms given in (2.13). =\(\bullet\)**Existence of a domain \(V\), depending on the initial datum, containing open sets endowed with a PSS structure**; From example 2.2, a non-generic solution \(u\) of (1.1) can only be periodic if it is constant. Since the initial datum satisfies the condition \(u_{0}-u_{0}^{\prime\prime}>0\), by theorem 4.1 we know that \(u>0\) and it cannot be constant on \(U\). Let us then suppose the existence of an open set \(\Omega\subseteq U\) for which \(u\big{|}_{\Omega}=k\), where \(k>0\) is a real number. Without loss of generality, we may assume that \(\Omega\subseteq(0,1)\times(0,\infty)=:U_{1}\) in view of the periodicity of \(u\) with respect to \(x\). For some \(p\in U_{1}\setminus\Omega\) and \(\epsilon>0\), we have \(\nabla u(p)=(u_{x}(p),u_{t}(p))\neq(0,0)\) and \(u\big{|}_{B_{\epsilon}(p)}\) is non-constant, where \(B_{\epsilon}(p)\) denotes the disc of centre \(p\) and radius \(\epsilon\). As a result, \(U_{1}\) has at least one connected component \(V\) (and \(U\) as well), with \(B_{\epsilon}(p)\subseteq V\), endowed with a PSS structure determined by the forms \(\omega_{1}\) and \(\omega_{2}\). \(\bullet\)**Existence of connection forms defined everywhere \(\omega_{1}\wedge\omega_{2}\neq 0\).** Henceforth we assume that the open sets under consideration are those that \(\nabla u\neq(0,0)\) everywhere. Let us denote these sets generically by \(V\). Our proof is based on, and follows, that made by Castro Silva and Kamran [4, Proposition 3.7]. We have two possible choices for the form \(\omega_{2}\). For this reason, fix one of them and consider the frame \(\{\omega_{1},\omega_{2}\}\). Let \(a\), \(b\) and \(c\) functions such that \[\omega_{13}=a\omega_{1}+b\omega_{2},\quad\omega_{23}=b\omega_{1}+c\omega_{2}. \tag{5.1}\] Our task is to find functions locally defined on any open set of \(U\) for which (5.1) and the Levi-Civita connection form \(\omega_{3}\) given in (2.13) satisfy (2.8). In [4, Proposition 3.7] it was shown that the connection forms (5.1) for an equation of the type \[u_{t}-u_{txx}=\lambda uu_{xxx}+G(u,u_{x},u_{xx}), \tag{5.2}\] satisfy a certain set of differential equations, see [4, Theorem 2.4] and also [3, Theorem 3.4]. The function \(G\) has somewhat a specific dependence on its arguments, and also some parameters \(\mu\), \(m_{1}\) and \(m_{2}\). In [10, Theorem 1] it was shown that (1.1) is a PSS equation, and one of the steps for that demonstration was just to show that it falls in the class considered in [3, Theorem 3.4]. In particular, the mentioned parameters are \(\mu\in\mathbb{R}\), \(m_{1}\in\{-2,1\}\) and \(m_{2}=0\). 
Therefore, in view of [10, Equation (7)] and [3, Theorem 3.4], we fall either into [4, Proposition 3.7, case ii.)] or [4, Proposition 3.7, case iii.)], depending on whether \(\mu=0\) or \(\mu\neq 0\). According to [4, Equation (181)], the functions \(a\), \(b\) and \(c\) in (5.1) take the form (recall that \(m_{2}=0\)) \(a=\phi_{1}(z)\), \(b=\phi_{2}(z)\), and \(c=\phi_{3}(z)\), \(z=m_{1}x\) for some real valued and smooth functions \(\phi_{1}\), \(\phi_{2}\) and \(\phi_{3}\) to be determined, satisfying the condition \[\phi_{1}\phi_{3}\neq 0. \tag{5.3}\] The Codazzi-Mainardi equations give (see [4, Equations (182)-(183)]) \[\phi_{1}^{\prime}+\mu\phi_{2}^{\prime}-\phi_{1}-2\mu\phi_{2}+\phi_{3}=0 \tag{5.4}\] and \[\phi_{2}^{\prime}+\mu\phi_{3}^{\prime}+\mu\phi_{1}-2\phi_{2}-\mu\phi_{3}=0, \tag{5.5}\] whereas the Gauss equation reads \[\phi_{1}\phi_{3}-\phi_{2}^{2}=-1. \tag{5.6}\] Equation (5.6), jointly with condition (5.3), imply that \(b\neq 0\) everywhere. From (5.4) we obtain \(\phi_{3}\) in terms of \(\phi_{1}\), \(\phi_{2}\) and their first derivatives. Substituting the result into (5.5) and integrating once, we obtain (see [4, Equation (184)]) \[\mu\phi_{1}^{\prime}=(1+\mu^{2})\phi_{2}-\mu^{2}\phi_{2}^{\prime}-\beta e^{2z}, \tag{5.7}\] where \(\beta\in\mathbb{R}\) is a constant. We now divide our proof in two different cases. **Case \(\mu=0\).** From (5.7) we conclude that \(\phi_{2}=\beta e^{2z}\), whereas (5.4) gives \[\phi_{3}=\phi_{1}-\phi_{1}^{\prime}. \tag{5.8}\] Substituting \(\phi_{2}\) and \(\phi_{3}\) into the Gauss equation (5.6) we obtain the following Abel differential equation of the second kind \[\phi_{1}\phi_{1}^{\prime}=\phi_{1}^{2}-\beta^{2}e^{4z}+1.\] Under the change \(\phi_{1}=e^{z}w\), where \(w\) is another function of \(z\), we obtain the following simpler ODE \[ww^{\prime}=e^{-2z}-\beta^{2}e^{2z},\] that, after solving, substituting back the result for \(\phi_{1}\), and proceeding some manipulation, gives \[\phi_{1}(z)=\pm e^{z}\sqrt{\gamma-\beta^{2}e^{2z}-e^{-2z}}=\pm\sqrt{\gamma e^ {2z}-1-\beta^{2}e^{4z}}. \tag{5.9}\] Substituting (5.9) into (5.8) and going back to the original functions \(a\), \(b\) and \(c\), we obtain \[\begin{array}{rcl}a(x,t)&=&\pm\sqrt{\gamma e^{2m_{1}x}-\beta^{2}e^{4m_{1}x} -1},\quad b(x,t)=\beta e^{2m_{1}x},\\ c(x,t)&=&\pm\frac{\beta^{2}e^{2m_{1}x}-1}{\sqrt{\gamma e^{2m_{1}x}-\beta^{2}e^{4 m_{1}x}-1}}.\end{array} \tag{5.10}\] **Case \(\mu\neq 0\).** From (5.7) we can write \(\phi_{1}^{\prime}\) in terms of \(\phi_{2}\) and \(\phi_{2}^{\prime}\). Substituting it into (5.6) we obtain \[\phi_{3}=\phi_{1}+\phi,\quad\phi=\frac{\mu^{2}-1}{\mu}\phi_{2}-\frac{\beta}{\mu} e^{2z}. \tag{5.11}\] Substituting (5.11) into the Gauss equation (5.6) we obtain \[\phi_{1}^{2}+\phi\phi_{1}-\phi_{2}^{2}=-1,\] which, after solved for \(\phi_{1}\), yields \[\phi_{1}=\frac{-\phi\pm\sqrt{\Delta}}{2},\quad\Delta=\phi^{2}-4(1-\phi_{2}^{2}), \tag{5.12}\] which are well defined as long as \(\Delta\geq 0\). Substituting (5.12) into (5.11) and the result into (5.7) we obtain the following ODE for \(\phi_{2}\) \[[(1+\mu^{2})\sqrt{\Delta}\pm(\mu^{2}-1)\phi\pm 4\mu\phi_{2}]\phi_{2}^{\prime}-2 (1+\mu^{2})\sqrt{\Delta}\phi_{2}\mp 2\beta e^{2z}\phi=0. \tag{5.13}\] It was shown in [4, page 36] that the coefficient of \(\phi_{2}^{\prime}\) cannot vanish, otherwise we would conclude that \(\phi_{1}=\phi_{3}\) and \(\phi_{2}=0\), which contradicts (5.6). By continuity, such a coefficient does not change its sign. 
Without loss of generality, we may assume it to be positive and the ODE above takes the form \(b^{\prime}=g(z,b)\). Given a point \(p=(x_{0},t_{0})\in V\), arguing exactly as [4, page 37] we conclude that the ODE to \(b\) subject to \(b(x_{0}/m_{1})=t_{0}\) has a unique (local) solution, that guarantee the local existence of \(b\) in a neighborhood of each point of \(V\). **Remark 5.1**.: _The uniqueness of the forms (2.13) follows from [3, Theorem 3.4], see also [10, page 760], and the fact that the solution \(u\) of (1.1) is unique in view of theorem 1.2._ ## 6 Concluding remarks In this paper we studied an equation whose solutions define metrics for a PSS from the point of view of geometric analysis. More precisely, we used tools of semi-group theory to establish well-posedness of solutions and then study the corresponding surface qualitatively. From the point of view of analysis, our Theorem 1.2 ensures well-posedness of solutions, whereas our Theorem 1.3 ensures enough regularity of the solution in order to ensure geometric relevance in the one-forms given in Theorem 1.1. In particular, this last theorem can be seen as a sort of existence and uniqueness theorem for periodic surfaces emanating from a given initial datum, which can be associated to a certain curve in the three-dimensional space. ## Acknowledgements N. D. Mutlubas is supported by the Turkish Academy of Sciences within the framework of the Outstanding Young Scientists Awards Program (TUBA-GEBIP-2022). I. L. Freire is thankful to Enrique Reyes, Marcio Fabiano da Silva and Stefano Nardulli for stimulating discussions and support. He is also grateful to CNPq (grant n\({}^{\text{o}}\) 310074/2021-5) and FAPESP (grants n\({}^{\text{o}}\) 2020/02055-0 and 2022/00163-6) for financial support. He is also grateful to the Department of Mathematical Sciences for warm hospitality and support during the development of this work.
2305.00535
Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search
Graph neural networks are useful for learning problems, as well as for combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search. We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output. This neural network is then used in a Monte Carlo search to compute a Steiner tree. The proposed method consistently outperforms the standard 2-approximation algorithm on many different types of graphs and often finds the optimal solution.
Reyan Ahmed, Mithun Ghosh, Kwang-Sung Jun, Stephen Kobourov
2023-04-30T17:15:38Z
http://arxiv.org/abs/2305.00535v1
# Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search ###### Abstract Graph neural networks are useful for learning problems, as well as for combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search. We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output. This neural network is then used in a Monte Carlo search to compute a Steiner tree. The proposed method consistently outperforms the standard 2-approximation algorithm on many different types of graphs and often finds the optimal solution. ## 1 Introduction Graphs arise in many real-world applications that deal with relational information. Classical machine learning models, such as neural networks and recurrent neural networks, do not naturally handle graphs. Graph neural networks (GNN) were introduced by Gori et al. [24] in order to better capture graph structures. A GNN is a recursive neural network where nodes are treated as state vectors and the relationships between the nodes are quantified by the edges. Scarselli et al. [42] extended the notion of unfolding equivalence that leads to the transformation of the approximation property of feed-forward networks (Scarselli and Tsoi [43]) to GNNs. Many real-world problems are modeled by combinatorial and graph problems that are known to be NP-complete. GNNs offer an alternative to traditional heuristics and approximation algorithms; indeed the initial GNN model [42] was used to approximate solutions to two classical graph problems: subgraph isomorphism and clique detection. Recent GNN work [37, 48] suggests that combining neural networks and tree search leads to better results than just the neural network alone. Li et al. [37] combine a convolutional neural network with tree search to compute independent sets and other NP-hard problems that are efficiently reducible to the independent set problem. AlphaGo, by Silver et al. [44] combines deep convolutional neural networks and Monte Carlo Tree Search (MCTS) [12, 34] to assess Go board positions and reduce the search space. Xing et al. [48] build on this combination to tackle the traveling salesman problem (TSP). Since Xing et al. [48] showed that the AlphaGo framework is effective for TSP, a natural question is whether this framework can be applied to other combinatorial problems such as the Steiner tree problem. Although both TSP and the Steiner tree problem are NP-complete, they are different. First, in the Steiner tree problem we are given a subset of the nodes called _terminals_ that must be spanned, whereas in TSP all nodes are equivalent. Second, the output of the Steiner tree problem is a tree, whereas the output of TSP is a path (or a cycle). When iteratively computing a TSP solution, the next node to be added can only be connected to the previous one, rather than having to choose from a set of nodes when growing a Steiner tree. Third, TSP and Go are similar in terms of the length of the instance: both the length of the game and the number of nodes in the TSP solution are fixed and taking an action in Go is equivalent to adding a node to the tour, while the number of nodes in the Steiner tree problem varies depending on the graph instance. Finally, Xing et al. [48] only considered geometric graphs, which is a restricted class of graphs. 
### Background: The Steiner tree problem is one of Karp's 21 NP-complete problems [29]: given an edge-weighted graph \(G=(V,E)\), a set of terminals \(T\subseteq V\) and cost \(k\), determine whether there exists a tree of cost at most \(k\) that spans all terminals. For \(|T|=2\) this is equivalent to the shortest path problem, for \(|T|=|V|\) this is equivalent to the minimum spanning tree problem, while for \(2<|T|<|V|\) the problem is NP-complete [11]. Due to applications in many domains, there is a long history of heuristics, approximation algorithms and exact algorithms for the problem. The classical 2-approximation algorithm for the Steiner tree problem [22] uses the _metric closure_ of \(G\), i.e., the complete edge-weighted graph \(G^{*}\) with terminal node set \(T\) in which, for every edge \(uv\), the cost of \(uv\) equals the length of a shortest \(u\)-\(v\) path in \(G\). A minimum spanning tree of \(G^{*}\) corresponds to a 2-approximation Steiner tree in \(G\). This algorithm is easy to implement and performs well in practice [2]. The last in a long list of improvements is the LP-based algorithm of Byrka et al. [9], with approximation ratio of \(\ln(4)+\varepsilon<1.39\). The Steiner tree problem is APX-hard [7] and NP-hard to approximate within a factor of 96/95 [10]. Geometric variants of the problem, where terminals correspond to points in the Euclidean or rectilinear plane, admit polynomial-time approximation schemes [4, 38]. ### Related Work: Despite its practical and theoretical importance, the Steiner tree problem is not as well explored with machine learning approaches as other combinatorial and graph problems. In 1985, Hopfield et al. [27] proposed a neural network to compute feasible solutions for different combinatorial problems such as TSP. Bout et al. [14] developed a TSP objective function that works well in practice and Brandt et al. [8] provided different networks for solving TSP. Kohonen's 1982 self-organizing maps [35], an architecture for artificial neural networks, can also be used for such problems as shown by Fort [17, 3] and Favata et al. [16]. Recently, graph neural networks have been an active area of research. Lei et al. [36] introduced recurrent neural operations for graphs with associated kernel spaces. Gilmer et al. [23] study graph neural models as Message Passing Neural Networks. Garg et al. [21] generalized message-passing GNNs that rely on the local graph structure, proposing GNN frameworks that rely on graph-theoretic formalisms. GNNs have been widely used in many areas including physical systems [6, 41], protein-protein interaction networks [18], social science [26, 32], and knowledge graphs [25]; The survey of Zhou et al. [50] covers GNN methods and applications in general, and the survey of Vesselinova et al. [45] provides more details on attempts to solve combinatorial and graph problems with neural networks. ### Problem Statement: In the standard optimization version of the Steiner Tree Problem we are given a weighted graph \(G=(V,E)\) and a set of terminals \(T\subseteq V\), and the objective is to compute a minimum cost tree that spans \(T\). A Steiner tree \(H\) must contain all the terminals and non-terminal nodes in \(H\) are the Steiner nodes. Several approximation algorithms have been proposed for this problem including a classical 2-approximation algorithm that first computes the metric closure of \(G\) on \(T\) and then returns the minimum spanning tree [1]. 
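To make this baseline concrete, the metric-closure construction described above can be sketched in a few lines of Python with networkx. This is an illustrative implementation of the classical 2-approximation, not the code used in the paper, and the function and variable names are our own.

```python
# Illustrative sketch of the classical metric-closure 2-approximation:
# build a complete graph on the terminals with shortest-path costs, take
# its MST, and expand MST edges back into shortest paths in G.
import itertools
import networkx as nx


def two_approx_steiner_tree(G, terminals, weight="weight"):
    # 1. Metric closure restricted to the terminal set T.
    closure = nx.Graph()
    paths = {}
    for u, v in itertools.combinations(terminals, 2):
        dist, path = nx.single_source_dijkstra(G, u, v, weight=weight)
        closure.add_edge(u, v, weight=dist)
        paths[(u, v)] = path

    # 2. Minimum spanning tree of the metric closure.
    mst = nx.minimum_spanning_tree(closure, weight=weight)

    # 3. Expand each MST edge into its shortest path in G and return a
    #    spanning tree of the union of those paths.
    nodes = set()
    for u, v in mst.edges():
        path = paths.get((u, v)) or paths[(v, u)]
        nodes.update(path)
    return nx.minimum_spanning_tree(G.subgraph(nodes), weight=weight)
```

The tree returned this way costs at most twice the optimum, and it is the baseline against which the learned approach is compared in the experiments.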
In this paper we consider whether graph neural networks can be used to compute spanning trees with close-to-optimal costs using a variety of different graph classes. ### Summary of Contributions: We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search (MCTS). We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output. This neural network is then used in a MCTS to compute a Steiner tree. The proposed method consistently outperforms the standard 2-approximation algorithm on many different types of graphs and often finds the optimal solution. We illustrate our approach in Figure 1. Our approach builds on the work of Xing et al. [48] for TSP. Since TSP is non-trivially different from the Steiner tree problem, we needed to address challenges in both training the graph neural network and testing the MCTS. We summarize our contribution below: * To train the neural network we generate exact solutions of Steiner tree instances. From each instance, we generate several data points. The purpose of the neural network is to predict the next Steiner node, given a partial solution. Any permutation of the set of Steiner nodes can lead to a valid sequence of predictions. Hence, we use random permutations to generate data points for the network. * After we determine the Steiner nodes for a given instance, it is not straightforward to compute the Steiner tree. For TSP, any permutation of all nodes is a feasible tour. For the Steiner tree problem, an arbitrary permutation can have many unnecessary nodes and thus a larger weight compared to the optimal solution. Selecting a subset of nodes is not enough either, since the output needs to be connected and span the terminals. We propose heuristics to compute the tree from the nodes that provide valid result with good quality. * We evaluate our results on many different classes of graphs, including geometric graphs, Erdos-Renyi graphs, Barabasi-Albert graphs, Watts-Strogatz graphs, and known hard instances from the Stein-Lib database [33]. Our method is fully functional and available on Github. ## 2 Our approach Let \(G(V,E)\) be a graph, where \(V\) is the set of nodes and \(E\) is the set of edges. Let \(w(u,v)\) be the weight of edge \((u,v)\in E\) and for unweighted graphs \(w(u,v)=1\) for any edge \((u,v)\in E\). Let \(T\subseteq V\) be the set of terminals. We use \(S=\{v_{1},v_{2},\cdots,v_{i}\}\) to represent the set of nodes that are already added in a partially computed Steiner tree. Then, \(\overline{S}=V-S\) is the set of candidate nodes to be added to \(S\). Given the graph \(G\) our goal is to derive a Steiner tree by adding node \(v\in\overline{S}\) to \(S\) in turn. A natural approach is to train a neural network to predict which node to add to the partial Steiner tree at a particular step. That is, neural network \(f(G|S)\) takes graph \(G\) and partial solution \(S\) as input, and return probabilities for the remaining nodes, indicating the likelihood they belong to the Steiner tree. We use the GNN of [26] to represent \(f(G|S)\). Intuitively, we can directly use the probability values, selecting all nodes with probability higher than a given threshold. We can then construct a tree from the selected nodes in different ways. For example, we can compute the induced graph of the selected nodes (add an edge if it connects to selected nodes) and extract a minimum spanning tree [11]. 
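As a concrete illustration of this direct strategy, the following sketch thresholds the GNN scores, adds the mandatory terminals, and extracts a minimum spanning tree of the induced subgraph. Here `gnn_scores` is a placeholder for the learned model \(f(G|S)\), not an actual library call, and the helper name is our own.

```python
# Sketch of the direct thresholding idea: keep nodes whose predicted
# probability exceeds a threshold, force the terminals in, and take a
# minimum spanning tree of the induced subgraph.
import networkx as nx


def steiner_by_threshold(G, terminals, gnn_scores, threshold=0.5):
    probs = gnn_scores(G, set(terminals))      # node -> probability
    keep = {v for v, p in probs.items() if p >= threshold}
    keep.update(terminals)                     # terminals are mandatory
    induced = G.subgraph(keep)
    return nx.minimum_spanning_tree(induced, weight="weight")
```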
Note that the induced graph may be disconnected and therefore the spanning tree will be also disconnected. Even if the spanning tree is connected, it may not span all the terminals, hence it may not provide a valid solution. These issues can be addressed by reducing the given threshold until we obtain a valid solution. However, deriving trees in this fashion might not be reliable, as a learning-based algorithm has only one chance to compute the optimal solution, and it never goes back to reverse the decision. To overcome this drawback, we leverage the MCTS. We use a variant of PUCT [40] to balance exploration (i.e., visiting a state as suggested by the prior policy) and exploitation (i.e., visiting a state that has the best value). Using the concept of prior probability, the search space of the tree could be reduced substantially, enabling the search to allocate more computing resources to the states having higher values. We could get a more reliable policy after a large number of simulations as the output of the MCTS acts as the feedback information by fusing the prior probability with the scouting exploration. The overall approach is illustrated in Figure 1. ### Graph neural network architecture: To get a useful neural network, information about the structures of the concerned graph, terminal nodes, and contextual information, i.e., the set of added nodes \(S=\{v_{1},\ldots,v_{i}\}\) in the partial solution, is required. We tag node \(u\) with \(x_{u}^{t}=1\) if it is a terminal, otherwise \(x_{u}^{t}=0\). We also tag node \(v\) with \(x_{v}^{a}=1\) if it is already added, otherwise \(x_{v}^{a}=0\). Intuitively, \(f(G|S)\) should summarize the state of such a "tagged" graph and generate the prior probability for each node to get included in \(S\). Some combinatorial problems like the independent set problem and minimum vertex cover problem do not consider edge weights. However, edge weight is an important feature of the Steiner tree problem as the objective is computed based on the weights. Hence, we use the static edge graph neural network (SE-GNN) [48], to efficiently extract node and edge features of the Figure 1: GNN assisted MCTS: first, train a GNN to evaluate non-terminal nodes, then use the network and heuristics to compute a Steiner tree with MCTS. Steiner tree problem. A GNN model consists of a stack of \(L\) neural network layers, where each layer aggregates local neighborhood information, i.e., features of neighbors of each node, and then passes this aggregated information to the next layer. We use \(H_{u}^{l}\in\mathbb{R}^{d}\) to denote the real-valued feature vector associated with node \(u\) at layer \(l\). Specifically, the basic GNN model [26] can be implemented as follows. In layer \(l=1,2,\cdots,L\), a new feature is computed as given by 2.1. \[H_{u}^{l+1}=\sigma\Big{(}\theta_{1}^{l}H_{u}^{l}+\sum_{v\in N(u)}\theta_{2}^{l} H_{v}^{l}\Big{)} \tag{1}\] In 2.1, \(N(u)\) is the set of neighbors of node \(u\), \(\theta_{1}^{l}\) and \(\theta_{2}^{l}\) are the parameter matrices for the layer \(l\), and \(\sigma(\cdot)\) denotes a component-wise non-linear function such as a sigmoid or a ReLU function. For \(l=0\), \(H_{u}^{0}\) denotes the feature initialization at the input layer. The edge information is not taken into account in 2.1. To incorporate edge features, we adapt the approach in [30, 47] to the Steiner tree problem. We integrate the edge features with node features using 2.2. 
\[\mu_{u}^{l+1}=\sigma\Big{(}\theta_{1}x_{u}+\theta_{2}\sum_{v\in N(u)}\mu_{v}^{ l}+\theta_{3}\sum_{v\in N(u)}\sigma(\theta_{4}w(u,v))\Big{)} \tag{2}\] In 2.2, \(\theta_{1}\in\mathbb{R}^{l}\), \(\theta_{2},\theta_{3}\in\mathbb{R}^{l\times l}\) and \(\theta_{4}\in\mathbb{R}^{l}\) are all model parameters. We can see in 2.1 and 2.2 that the nonlinear mapping of the aggregated information is a single-layer perceptron, which is not enough to map distinct multisets into unique embeddings. Hence, as suggested in [48, 49], we replace the single perceptron with a multi-layer perceptron. Finally, we compute a new node feature \(H\) using 2.3. \[H_{u}^{l+1}=\text{MLP}^{l}\Big{(}\theta_{1}^{l}H_{u}^{l}+\sum_{v\in N(u)}\theta _{2}^{l}H_{v}^{l}+\sum_{v\in N(u)}\theta_{3}^{l}e_{u,v}\Big{)} \tag{3}\] In 2.3, \(e_{u,v}\) is the edge feature, \(\theta_{1}^{l}\), \(\theta_{2}^{l}\), and \(\theta_{3}^{l}\) are parameter matrices, and \(\text{MLP}^{l}\) is the multi-layer perceptron for layer \(l\). Note that SE-GNN differs from GEN [13] in the following aspects: (1) SE-GNN replaces \(x_{u}\) in 2.2 with \(H_{u}\) so that the SE-GNN can integrate the latest feature of the node itself directly. (2) Each update process in the GEN can be treated as one update layer of the SE-GNN, i.e., each calculation is equivalent to going one layer forward, thus calculating \(L\) times for \(L\) layers. Parameters of each layer of SE-GNN are independent, while parameters in GEN are shared between different update processes which limits the neural network. (3) We replace \(\sigma\) in 2.2 with MLP as suggested by [48, 49] to map distinct multisets to unique embeddings. We initialize the node feature \(H^{0}\) as follows. Each node has a feature tag which is a 4-dimensional vector. The first element of the vector is binary and it is equal to 1 if the partial solution \(S\) contains the node. The second element of the vector is also binary and it is equal to 1 if the node is a terminal. The third and fourth elements of the feature tag are the \(x\) and \(y\) coordinates of the node. The last two are used only for geometric graphs. ### Parameterizing \(f(G|S;\theta)\): Once the feature for every node is computed after updating \(L\) layers, we use the new feature for the nodes to define the \(f(G|S;\theta)\) function, which returns the prior probability for each node indicating how likely the node will belong to partial solution \(S\). Specifically, we fuse all node feature \(H_{u}^{L}\) as the current state representation of the graph and parameterize \(f(G|S;\theta)\) as expressed by 2.4. \[f(G|S;\theta)=\text{softmax}(sum(H_{1}^{L}),\cdots,sum(H_{n}^{L})) \tag{4}\] During training, we minimize the cross-entropy loss for each training sample \((G_{i},S_{i})\) in a supervised manner as given by 2.5. \[\ell(S_{i},f(G_{i}|S_{i};\theta))=-\sum_{j=1}^{N}y_{j}\log f(G_{i}|S_{i}(1:j-1 );\theta) \tag{5}\] In 2.5, \(S_{i}\) is an ordered set of nodes of a partial solution which is a permutation of the nodes of graph \(G_{i}\), with \(S_{i}(1:j-1)\) the ordered subset containing the first \(j-1\) elements of \(S_{k}\), and \(y_{j}\) a vector of length \(N\) with 1 in the \(S_{i}(j)\)-th position. We provide more details in Section 3. ### GNN assisted MCTS: Similar to the implementation in [48], the GNN-MCTS uses graph neural networks as a guide of MCTS. We denote the child edges of \(u\) in MCTS by \(A(u)\). Each node \(u\) in the search tree contains edges \((u,a)\) for all legal actions \(a\in A(u)\). 
Each edge of MCTS stores a set of statistics: \[\{N(u,a),Q(u,a),P(u,a)\},\] where node \(u\) denotes the current state of the graph including the set of nodes \(S\) and other graph information, action \(a\) denotes the selection of node \(v\) from \(\overline{S}\) to add in \(S\), \(N(u,a)\) is the visit count, \(Q(u,a)\) is the action value and \(P(u,a)\) is the prior probability of selecting edge \((u,a)\). In the Steiner tree problem, we are interested in finding a tree with minimum cost. Hence, we track the best action value found under the subtree of each node to determine the "exploitation value" of the tree node, as suggested in [19] in the context of the stock trading problem. The standard MCTS takes solution values in the range \([0,1]\)[34]. However, the Steiner tree can have an arbitrary solution value that does not fall in a predefined interval. This issue could be addressed by adjusting the parameters of the tree search algorithm in such a way that it is feasible for a specified interval. Adjusting parameters requires substantial trial and error due to the change in the number of nodes. Instead, we address this issue by normalizing the action value of node \(n\), whose parent is node \(p\), in the range of \([0,1]\) using 2.6. \[Q_{n}=\frac{\tilde{Q}_{n}-w_{p}}{b_{p}-w_{p}} \tag{2.6}\] In 2.6, \(b_{p}\) and \(w_{p}\) are the minimum and maximum action values among the children of \(p\), and \(Q_{n}\) is the action value of \(n\). The actions under \(p\) are normalized in the range of \([0,1]\) so that the best action is 0 and the worst action is 1. The GNN-MCTS proceeds by iterating over the four phases below and then selects a move to play. 1. **Selection Strategy.** The first in-tree phase of each simulation starts at the root node \(v_{0}\) of the search tree and finishes when the simulation reaches a leaf node \(v_{l}\) at time step \(l\). At time step \(t<l\), we use a variant of PUCT [40] to balance exploration (i.e., visiting the states suggested by the prior policy) and exploitation (i.e., visiting states which have best values) according to the statistics in the search tree as given by 2.7 and 2.8 respectively. (2.7) \[a_{t}=\text{argmax}_{a}(Q(v_{t},a)+U(v_{t},a))\] (2.8) \[U(v,a)=c_{puct}P(v,a)\frac{\sqrt{\sum_{b}N(v,b)}}{1+N(v,a)}\] where \(c_{puct}\) is a constant for trading off between exploration and exploitation. We set \(c_{puct}=1.3\) according to previous experimental results [48]. 2. **Expansion Strategy.** When a leaf node \(v\) is reached, the corresponding state \(s_{v}\) is evaluated by the GNN to obtain the prior probability \(p\) of its children nodes. The leaf node is expanded and the statistic of each edge \((s_{v},a)\) is initialized to \(\{N(s_{v},a)=0,Q(s_{v},a)=-\infty,P(s_{v},a)=p_{a}\}\). 3. **Back-Propagation Strategy.** For each step \(t<l\), the edge statistics are updated in a backward process. The visit counts are increased as \(N(v_{t},a_{t})=N(v_{t},a_{t})+1\), and the action value is updated to the best value. 4. **Play.** After repeating steps 1-3 several times (800 times for smaller datasets and 1200 times for larger datasets according to the previous experimental results [48]), we select the node with the biggest \(\hat{P}(a|u_{0})=\frac{Q(u_{0},a)}{\sum_{b}Q(u_{0},b)}\) as the next move \(a\) in the root position \(u_{0}\). The selected child becomes the new root node and the statistics stored in the subtree are preserved. 
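For clarity, the selection rule of Eqs. 2.6-2.8 can be sketched as follows. The per-child statistics are stored in illustrative attributes (`N`, `Q_raw`, `P`) rather than the authors' data structures, and the min-max normalisation follows Eq. 2.6 literally.

```python
# Sketch of the in-tree selection step (Eqs. 2.6-2.8).  Each child keeps a
# visit count N, a raw action value Q_raw tracking the best solution found
# in its subtree, and a prior P from the GNN.
import math

C_PUCT = 1.3  # exploration/exploitation trade-off constant used in the paper


def select_action(node):
    children = node.children               # dict: action -> child statistics
    values = [c.Q_raw for c in children.values()]
    b, w = min(values), max(values)        # b_p and w_p of Eq. 2.6
    total_visits = sum(c.N for c in children.values())

    def score(child):
        # Exploitation term: raw values normalised into [0, 1] (Eq. 2.6).
        q = 0.0 if b == w else (child.Q_raw - w) / (b - w)
        # Exploration term: prior-weighted visit bonus (Eq. 2.8).
        u = C_PUCT * child.P * math.sqrt(total_visits) / (1 + child.N)
        return q + u                       # Eq. 2.7 selects the argmax of q + u

    return max(children, key=lambda a: score(children[a]))
```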
### Computing Steiner tree from \(S\): There are several ways to compute a Steiner tree from the set of nodes \(S\). We provide two effective heuristics that we use in our experiments. 1. **MST-based heuristic.** In this heuristic, we first add the terminal nodes to the solution if they are not already present, and then compute the induced graph. We iteratively add nodes from \(\overline{S}\) in order computed by the MCTS until the induced graph is connected. In the last step, we compute a minimum spanning tree (MST) of the induced graph and prune degree-1 non-terminal nodes. This heuristic is effective for geometric graphs and unweighted graphs. 2. **Metric closure-based heuristic.** In this heuristic, given an input graph \(G=(V,E)\) and a set of terminals \(T\), we first compute a metric closure graph \(G^{\prime}=(T,E^{\prime})\). Every pair of nodes in \(G^{\prime}\) is connected by an edge with weight equal to the shortest path distance between them. The minimum spanning tree of the metric closure provides a 2-approximation to the optimal Steiner tree. For Figure 2: Example graph for the Steiner tree heuristic. Considering \(D\) as a terminal node and computing the MST on the metric closure provides a better solution than the 2-approximation. example, in Figure 2, \(A\), \(B\) and \(C\) are terminal nodes and \(D\) is not. Note that \(D\) does not appear in any shortest path as every shortest path between pairs of terminals is 5 and none of them goes through \(D\). Without loss of generality, the 2-approximation algorithm chooses the \(A-C-B\) path with total cost of 10, while the optimal solution that uses \(D\) has cost 9. While the 2-approximation algorithm does not consider any node that does not belong to a shortest path between two terminal nodes, here we consider such nodes. Specifically, we iteratively add nodes from \(\overline{S}\) in order computed by the MCTS, even if they don't belong to any shortest path. Note that, unlike the MST-based heuristic, the metric closure-based heuristic computes the MST on the metric closure (not on the input graph). Both of the heuristics start by selecting all the terminals as the partial solution. In the MCTS, we gradually add nodes that are not in the set of already selected nodes. For the MST-based heuristic, we stop selecting nodes when the induced graph becomes connected. For the metric closure-based heuristic we stop selecting nodes when 10% non-terminal nodes have been selected. ## 3 Model setup and training In order to train the models, one has to provide training data consisting of input graphs \(G=(V,E)\), edge weights \(W:E\rightarrow\mathbb{R}^{+}\), and terminals \(T\subseteq V\). Given \(G,W,T\), and partial solution \(S\), our goal is to give label 1 to the next node to be added and 0 to all others. Initially, we set \(S=T\) as all terminals must be in the Steiner tree. Consider a graph with 6 nodes \(u_{1},u_{2},\cdots,u_{6}\), \(T=\{u_{1},u_{2},u_{3}\}\), and an optimal Steiner tree contains the first five nodes \(u_{1},u_{2},\cdots,u_{5}\). For this example, initially we set \(S=T=\{u_{1},u_{2},u_{3}\}\). Since we have two Steiner nodes \(u_{4}\) and \(u_{5}\), both permutations \(u_{4},u_{5}\) and \(u_{5},u_{4}\) are valid. For the first permutation, after setting \(S=\{u_{1},u_{2},u_{3}\}\), the next node to be added in the solution is \(u_{4}\). Hence for this data point, only the label for \(u_{4}\) is 1. 
This permutation provides another data point where \(S=\{u_{1},u_{2},u_{3},u_{4}\}\) and only the label for \(u_{5}\) is equal to 1. Similarly, we can generate two more data points from the other permutation. This exhaustive consideration of all possible permutations does not scale to larger graphs, so we randomly select 100 permutations from each optimal solution. The model is trained with Stochastic Gradient Descent, using the ADAM optimizer [31] to minimize the cross-entropy loss between the models' prediction and the ground-truth (a vector in \(\{0,1\}^{|V|}\) indicating whether a node is the next solution node or not) for each training sample. ### Data generation: We produce training instances using several different random graph generation models: Erdos-Renyi [15], Watts-Strogatz [46], Barabasi-Albert [5], and random geometric [39] graphs. Each of these generators needs some parameters; below we describe the values we used, aiming to have graphs of comparable density across the different generators. For Erdos-Renyi model, there is an edge selection probability \(p\), which we set to \(\frac{2\ln n}{n}\) to ensure that the generated graphs are connected with high probability. In the Watts-Strogatz model, we initially create a ring lattice of constant degree \(K\) and rewire each edge with probability \(0\leq p\leq 1\), while avoiding self-loops and duplicate edges. For our experiments we use \(K=6\) and \(p=0.2\). In the Barabasi-Albert model, the graph begins with an initially connected graph of \(m_{0}\) nodes. New nodes are added to the network one at a time. Each new node is connected to \(m\leq m_{0}\) existing nodes with a probability that is proportional to the number of edges that the existing nodes already have. We set \(m_{0}=5\). In the random geometric graph model, we uniformly select \(n\) points from the Euclidean cube, and connect nodes whose Euclidean distance is not larger than a threshold \(r_{c}\), which we choose to be \(\sqrt{\frac{2\ln n}{\pi n}}\) to ensure the graph is connected with high probability. The Steiner tree problem is NP-complete even if the input graph is unweighted [20]. We generate both unweighted and weighted Steiner tree instances using the random generators described above. The number of nodes in these instances is equal to 20 and the number of terminals is equal to 10. For each type of instance we generate 200 instances. For weighted graphs, we assign random integer weights in the range \(\{1,2,\cdots,10\}\) to each edge. Since the weighted version of the Steiner tree problem is the more general version, and the number of terminals is an important parameter, we create a second dataset of graphs with 50 nodes. For the number of terminals, we use two distributions. In the first distribution, the percentage of the number of terminals with respect to the total number of nodes is in \(\{20\%,40\%,60\%,80\%\}\). In the second distribution the percentage is in \(\{3\%,6\%,\cdots,18\%\}\). These two cases are considered to determine the behavior of the learning models on large and small terminal sets (compared with the overall graph size). As random graphs instances can be "easy" to solve, we also evaluate our approach on graphs from the SteinLib library [33], which provides hard graph instances. Specifically, we perform experiments on two SteinLib datasets: I080 and I160. Each instance of the I080 and I160 datasets contains 80 nodes and 160 nodes respectively. Both datasets have 100 in stances. 
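A sketch of how such instances can be generated with networkx, following the parameters listed above, is given below. The Barabasi-Albert attachment parameter \(m=5\) and the uniform random choice of terminals are assumptions for illustration, since the paper fixes \(m_{0}=5\) and the terminal counts but does not spell out these details.

```python
# Sketch of random Steiner tree instance generation with networkx.
import math
import random
import networkx as nx


def random_instance(model, n=20, n_terminals=10, weighted=True, seed=None):
    if model == "erdos_renyi":
        G = nx.erdos_renyi_graph(n, 2 * math.log(n) / n, seed=seed)
    elif model == "watts_strogatz":
        G = nx.watts_strogatz_graph(n, k=6, p=0.2, seed=seed)
    elif model == "barabasi_albert":
        G = nx.barabasi_albert_graph(n, m=5, seed=seed)  # m assumed; paper fixes m0=5
    elif model == "geometric":
        radius = math.sqrt(2 * math.log(n) / (math.pi * n))
        G = nx.random_geometric_graph(n, radius, seed=seed)
    else:
        raise ValueError(f"unknown model: {model}")

    rng = random.Random(seed)
    for u, v in G.edges():
        # Random integer weights in {1, ..., 10} for weighted instances.
        G[u][v]["weight"] = rng.randint(1, 10) if weighted else 1

    terminals = rng.sample(sorted(G.nodes()), n_terminals)
    return G, terminals
```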
### Computing optimal solutions: In order to evaluate the performance of our approach (and that of the 2-approximation) we need to compute the optimal solutions. There are different integer linear programs (ILP) for the exact Steiner tree problem. The cut-based approach considers all possible combinations of partitions of terminals and ensures that there is an edge between that partition. This ILP is simple but introduces an exponential number of constraints. A better ILP approach in practice considers an arbitrary terminal as a root and sends flow to the rest of the terminals; see [2, 28] for details about these and other ILP methods for the exact Steiner tree problem. We generate 2,000 Steiner tree instances and compute the exact solution with the flow-based ILP. We use CPLEX 12.6.2 as the ILP solver on a high-performance computer (Lenovo NeXtScale nx360 M5 system with 400 nodes with 192 GB of memory each). We use Python to implementing the algorithms described above. ### Model architectures: For the MLP in our GNN model, we have used two hidden layers. The first hidden layer has an embedding dimension equal to 128. The second hidden layer has a convolution dimension equal to 128. We use the ReLU activation function for both layers. We also use batch normalization in both layers to normalize the contributions to a layer for every batch of the datasets. The value of early stopping is equal to 15; hence the model will automatically stop training when the chosen metric does not improve for 15 epochs. We trained the network and evaluated our Figure 3: Performance on simple graphs. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation. algorithm separately for each combination of generator and node size. Recall that the neural network predicts the next Steiner node from a partial solution. Hence, for each Steiner tree instance, we generate a set of data points. Since neural network architecture can not handle different node sizes, we have trained four independent neural networks for node sizes 20, 50, 80, and 160. The same neural network can predict solution nodes for different graph generation models if the node size is the same. In total, we have trained the networks on around 200,000 data points. ### Heuristic setup: We used the two heuristics described in Section 2.4. Recall that the MST-based heuristic just computes the minimum spanning tree on the induced graph of the partial solution. It works well for geometric graphs, unweighted Erdos-Renyi, unweighted Watts-Strogatz, and unweighted Barabasi-Albert graphs. We use the metric closure-based heuristic for all the other experiments. ## 4 Experimental results We evaluate the performance of the proposed approach by comparing the computed trees to those computed by the classical 2-approximation algorithm and the optimal solutions. The proposed approach never performs worse than the 2-approximation algorithm. We also report running times. The results for geometric graphs and other unweighted graphs are shown in Figure 3. The X-axis represents the graph or instance number that does not have any significance. Traditionally bar plot is used in such a scenario. However, for each instance, we show three costs for three different algorithms. Hence scatter Figure 4: Performance on weighted graphs. Each data point represents one graph. The lower the cost the better the algorithm is. 
Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation. plot provided a better visualization by saving space horizontally. One can show two costs instead of three costs by showing the difference w.r.t. the optimal algorithm. However, this approach does not provide a better visualization since many differences get closer to zero. We illustrate the performance of different algorithms on the Geometric graphs in Figure 3(a). We represent the optimal solution with green triangles, our algorithm with yellow squares, and the 2-approximation with blue circles. For the geometric graph, we have 40 instances, each of which has 20 nodes and 10 terminals. A majority of the time the 2-approximation has a larger solution value and our algorithm has a solution very close to the optimal value. The 2-approximation performs worse than our algorithm in 36 instances out of 40 instances. Our algorithm also performs well for unweighted graphs. We illustrate the performance for random graphs generated by the Erdos-Renyi, Barabasi-Albert, and Watts-Strogatz models in Figure 3(b), Figure 3(c), and Figure 3(d) respectively. We have 40 instances for each type of generator. Again, each instance has 20 nodes and 10 terminals. In all of these instances, our algorithm achieves the optimal solution. For Erdos-Renyi graphs, our algorithm performs better than the 2-approximation in two instances. For Barabasi-Albert graphs, our algorithm performs better in six instances. For Watts-Strogatz graphs, our algorithm performs better in four instances. Results for the weighted graphs are shown in Figure 4. The weighted version of the Steiner tree problem is harder than the unweighted version. Hence, we consider a larger set of instances. For each random graph generation model, we consider one dataset that has 20 nodes for each instance and another dataset that has 50 nodes. We illustrate the performance on 20 nodes Erdos-Renyi graphs in Figure 4(a). For this dataset, both of the algorithms provide solution values similar to the optimal value. We illustrate the performance on 20 nodes Barabasi-Albert and Watts-Strogatz graphs in Figure 4(b) and Figure 4(c) respectively. For Barabasi-Albert graphs, the 2-approximation performs worse than our algorithm in 24 instances out of 40 instances. Our algorithm provides an optimal solution in 39 instances. For Watts-Strogatz graphs, the 2-approximation again performs worse than our algorithm in the majority of instances, and our algorithm provides an optimal solution in 39 instances. We illustrate the performance of the algorithms on 50 nodes Watts-Strogatz graphs in Figure 5(a). Again, our algorithm provides nearly optimal solutions and the 2-approximation has a noticeable difference. The 2-approximation performs worse than our algorithm in 38 instances out of 40 instances. Our algorithm provides an optimal solution in 34 instances. We illustrate the performance of the algorithms on 50 nodes Barabasi-Albert graphs in Figure 5(b). The 2-approximation Figure 5: Performance on more weighted graphs. Each data point represents one graph. The lower the cost the better the algorithm is. 
Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation. performs worse than our algorithm in 34 instances out of 40 instances. Our algorithm provides an optimal solution in 31 instances. Our algorithm provides nearly optimal solutions for the remaining instances. The SteinLib library [33] provides hard graph instances for solving the Steiner tree problem. Results for a SteinLib dataset is shown in Figure 6. We can see that there is a relatively large difference between the optimal solution value and the 2-approximation solution value. Despite this larger difference of 2-approximation solution values, our algorithm finds nearly optimal solutions. ### Running time: The training time of the GNN depends on the dataset. The maximum training time is around 20 hours for the I160 SteinLib dataset. The average running times of the optimal algorithm, 2-approximation, and our algorithm for different test datasets are shown in Figure 1. We denote the geometric, unweighted Erdos-Renyi, unweighted Watts-Strogatz, and unweighted Barabasi-Albert graphs by GE, ER, WS, and BA respectively. We denote the weighted 20 nodes Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert graphs by ER20, WS20, and BA20 respectively. We denote the weighted 50 nodes Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert graphs by ER50, WS50, and BA50 respectively. We denote the 80 nodes and 160 nodes SteinLib datasets by I080 and I160 respectively. We can see the 2-approximation algorithm is the fastest. Our algorithm is a little slower, however, the solution values are closer to the optimal values. ## 5 Conclusion We described an approach for the Steiner tree problem based on GNNs and MCTS. An experimental evaluation shows that the proposed method computes nearly optimal solutions on a wide variety of datasets in a reasonable time. The proposed method never performs worse than the standard 2-approximation algorithm. The source code and experimental data can be found on github [https://github.com/abureyanahmed/GNN-MCTS-Steiner](https://github.com/abureyanahmed/GNN-MCTS-Steiner). One limitation of our work is we need to retrain for different node sizes. Also, the Steiner tree problem can be seen as a network sparsification technique. In fact, it is one of the simplest sparsification methods since it only considers trees. It would be interesting to see whether our proposed approach can be adapted to graph spanner problems. Our model is unable to fit different node sizes. Hence in our experiments, we try a \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline Graphs/ & GE & ER & WS & BA & ER20 & WS20 & BA20 & ER50 & WS50 & BA50 & I080 & I060 \\ Algorithms & & & & & & & & & & & \\ \hline 2-apprx & 0.16 & 0.09 & 0.10 & 0.11 & 0.16 & 0.40 & 0.14 & 1.14 & 0.79 & 0.47 & 1.29 & 7.88 \\ \hline MCTS & 0.64 & 0.40 & 0.49 & 0.49 & 0.75 & 1.73 & 0.66 & 5.06 & 3.20 & 2.17 & 5.77 & 34.52 \\ \hline OPT & 5.92 & 6.33 & 5.00 & 4.68 & 22.99 & 28.61 & 29.90 & 153.71 & 125.41 & 134.46 & 1051.51 & 6188.18 \\ \hline \end{tabular} \end{table} Table 1: Average running time of different algorithms in seconds. Figure 6: Performance on SteinLib datasets. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation. small set of node sizes. It is an interesting future work to generalize the model to handle different node sizes. 
A general model will provide an opportunity to explore the effectiveness of different parameters of the model.
2302.00592
Comparative Study of Parameter Selection for Enhanced Edge Inference for a Multi-Output Regression model for Head Pose Estimation
Magnitude-based pruning is a technique used to optimise deep learning models for edge inference. We have achieved over 75% model size reduction with a higher accuracy than the original multi-output regression model for head-pose estimation.
Asiri Lindamulage, Nuwan Kodagoda, Shyam Reyal, Pradeepa Samarasinghe, Pratheepan Yogarajah
2022-12-28T09:58:04Z
http://arxiv.org/abs/2302.00592v1
Comparative Study of Parameter Selection for Enhanced Edge Inference for a Multi-Output Regression model for Head Pose Estimation ###### Abstract Magnitude-based pruning is a technique used to optimise deep learning models for edge inference. We have achieved over 75% model size reduction with a higher accuracy than the original multi-output regression model for head-pose estimation. Edge Inference, Optimisation, Network Pruning, Quantisation, TensorFlow,Head Pose estimation ## I Introduction With the development of deep learning models and the enhancement of technology used in edge devices such as mobile phones and tablets, the deployment of deep learning models in edge devices has become popular [1]. Being able to run a deep learning inference without an internet connection or a connection to a server is a huge advantage to save network bandwidth [2], energy and to reduce network latency related delays considerably [3]. Furthermore, with edge inference, user data and the data used in the inference cycle do not leave the device, providing users with a secure and enhanced user experience [4]. In order to deploy a deep learning model, the model should be able to work with the limited amount of resources available on the edge device [5]. A mobile device cannot handle a heavy model that is intended to be deployed on a server-level computer. Hence, those should be optimised. Network pruning, quantisation, parameter factorisation, tensor decomposition, and teacher-student training [6] are some of the Deep Neural Network optimisation (DNN) techniques. Furthermore, there are mobile-friendly DNN architectures such as MobileNet [7], EfficientNet [8], SqueezeNet [9] etc., which are optimised to be deployed on edge devices due to their lower number of parameters and lower model sizes but get the same level or even better accuracy than non-optimised models. Taylor approximation and Hessian approximation, inverse Hessian approximation, and magnitude-based pruning have been used for model pruning since 1990s [10]. Magnitude-based pruning has become popular since it can optimise large networks regardless of the network architecture [10, 11]. Michael H. Zhu and Suyog Gupta have presented magnitude-based model pruning as a module in the TensorFlow Model Optimisation library [10]. Although the impact of magnitude-based model pruning on the model size and model accuracy on classification models has been discussed before [10], the impact on a regression model has not been analysed. Our work focuses on an in-depth analysis of the impact of magnitude-based pruning on the regression models and how the pruning parameters have to be tuned to obtain a model with a smaller model size while preserving model accuracy. Therefore, this study seeks to find out how a regression-based model can be compressed by tuning the pruning parameters without changing any hyper-parameters or architecture of the model. Our key contributions of this study are: * Conduct in-depth analysis on parameter selection for the model pruning to get the optimal solution for the best model size while preserving the model accuracy. * Compare the impact of Constant and Dynamic pruning. ## II Methodology This section describes the types of magnitude-based pruning, the construction of the model, and the testing criteria for measuring performance and the model size of the resulting model when magnitude-based pruning is applied to the multi-output regression model to optimise the model for edge inference. 
Magnitude-based pruning makes the weights zero if they are less than a given threshold. By making weights zero, the computational power and propagation delays required when making a prediction can be reduced since the weights that are zero can be eliminated in the computations. Based on how this threshold is chosen and how the sparsity of the model is increased in the training process, there are two methods of model pruning: dynamic pruning and constant pruning. ### _Dynamic pruning and Constant pruning_ Dynamic pruning allows one to gradually increase the sparsity percentage. The equation of dynamic pruning is as follows: [10] \[s_{t}=s_{f}+(s_{i}-s_{f})\left(1-\frac{t-t_{0}}{(t_{f}-t_{0})\Delta t}\right)^{3},\] where \(s_{t}\) = Current sparsity value \(s_{i}\) = Initial sparsity \(s_{f}\) = final sparsity \(t_{0}\) = Starting training step \(t_{f}\) = Ending training step \(\Delta t\) = Pruning frequency The layers chosen to be pruned are assigned a binary mask variable of the shape and size of the weights of the layer, which is used to determine which weights participate in the forward propagation. In each epoch, the weights are sorted by the absolute value and the smallest weights' masks are changed to zero. In the back propagation process, the weights masked with zeros are not updated. Once the desired sparsity is achieved, the mask updating process will be stopped. This process is continued until the desired sparsity value \(s_{f}\) is reached. By controlling the above parameters, we can fine-tune the accuracy and size of the generated pruned model. But in constant pruning, a constant sparsity level \(s_{c}\) is maintained throughout the training process. When the pruning process starts, it immediately tries to reach the desired sparsity level. This sparsity value doesn't depend on the starting step, end step, and frequency of constant pruning, but they have to be tuned to get the best outcome of the constant pruning method. ### _Model_ The model chosen for the experiments of this study is a multi-output regression model by Dhanawnsa.V et al. [12] which was implemented for head-pose estimation. It is a custom model based on the Efficient B0 model [8] combined with additional dense layers at the top of the architecture. The hyperparameters used in the training process are as follows: * Learning rate = 0.001 * Optimiser = Adam * Batch size = 128 * No of total training cycles(epochs) = 80 * Loss Function = Mean Absolute Error (MAE) * patience = 3 - factor = 0.5 - min learning rate = 0.00001 * Data-set = BIWI Kinect Head Pose [13], 300W-LP [14] and AFLW 2000 [14] The input to the model is a RGB image of 64\(\times\)64 and the output provides yaw, pitch and roll angles for head-pose estimation [12] (see Fig. 1). All parameters mentioned above were kept constant through all experiments conducted in this study. ### _Model Pruning implementation_ Implementation was done to test the performance of constant pruning and dynamic pruning, varying one parameter at a time. When model pruning is applied, some of the layers have to be neglected since they are not supported by the model pruning library. Those layers are _re-scaling layers, normalisation layers, and multiplication layers._ They were annotated not to be included in the pruning process. The pruning parameters which will be analyzed through this study are as follows, 1. Pruning schedule * Constant pruning - \(s_{c}\) * Dynamic pruning * Initial sparsity - \(s_{i}\) * Final sparsity - \(s_{f}\) 2. 
Starting training step (epoch) - \(t_{0}\) 3. Ending training step (epoch) - \(t_{f}\) We experimented with 50%, 75%, and 87.5% final sparsity (\(s_{f}\)) values for both dynamic and constant pruning because values less than 50% result in weights less than 50%, which does not result in any significant improvement over the model size, and values greater than 87.5% completely destroy the model accuracy because all significant weights that contain important features are also eliminated. Starting step values (\(t_{0}\)) were 0, 20, 40, 60, 80, and ending step values (\(t_{f}\)) were combinations of 20, 40, 60, 80. ### _Testing criteria_ To maximise the impact of model pruning, Gzip compression was used [10]. Although model pruning makes weights zero based on the desired sparsity value, those zero weights are still represented as a 32-bit number, which takes up the same amount of space as none-zero weights. Therefore, to get the maximum benefit of model pruning, those 0 values have to be removed by compressing the file [15]. The model accuracy was measured based on the Mean Absolute Error (MAE) as per the original model. **The accuracy of the base model used in this study is 8.63 with a file size of 110MB** and all tests are compared with the base model accuracy. For each test, 3 models were implemented. The models discussed through this study are TensorFlow Lite models which are ready to be deployed on edge devices. 1. Pruned model - Without any post optimiser on model size and accuracy. 2. Optimised model - Used "EXPERIMTAL_SPARSITY" post-optimiser on model size [16]. Fig. 1: Model Architecture (Source:- [12]) 3. Quantized model - Used "DEFAULT" post-optimiser to quantise(32 bit to 8 bit) the pruned model [16] ## III Results and discussion ### _Dynamic Sparsity_ Fig. 2 shows how the model size (in MB) and the Mean Absolute Error (MAE) vary when the final sparsity (\(s_{f}\)) is increased from 50% up to 87.5%. The MAE value has increased, which means the accuracy of the model has decreased when the final sparsity (\(s_{f}\)) is increased. But the file size has decreased considerably. The file size has been reduced since the number of zero weights increases with sparsity. With the model becoming sparse, it might lose weights that contain important features that contribute to the accuracy of the model. Therefore, there's a trade-off between model size and model accuracy when pruning is applied to a model. Since the MAE curves of the pruned model and post-pruning optimised model overlaps with each other, we can observe that by applying the post-pruning optimiser, the model size could be slightly reduced further without making any difference in the model accuracy. Furthermore, when post quantisation is applied to the model, the model size has been reduced significantly more than the other two models, with a very small reduction in model accuracy. By applying only pruning, the size of the model could be reduced by up to 5.9% (8.78MB) - 19.7% (21.97MB) of the original model size (110MB). By applying the post-pruning optimiser, the model size could be further reduced by up to 2.2MB of the pruned model. Quantifying the pruned model could further reduce the model size up to 15.82MB of the pruned model, which is 5.59% (6.15MB) - 2.43% (2.68MB) of the original model size. According to Fig. 3 the best model accuracy was obtained when pruning was started at the 40\({}^{\text{th}}\) epoch. 
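The pruning schedules and post-optimisation paths described in Section II can be configured with the TensorFlow Model Optimization toolkit roughly as sketched below. The step counts (epochs converted to optimizer steps), the layer-filtering rule, and the helper names are illustrative assumptions rather than the exact training setup.

```python
# Sketch of the pruning and post-optimisation pipeline from Section II.
import gzip
import tensorflow as tf
import tensorflow_model_optimization as tfmot

STEPS_PER_EPOCH = 1000  # assumed; depends on dataset size and batch size

# Dynamic pruning: sparsity ramps from s_i to s_f between t_0 and t_f.
dynamic_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5,
    begin_step=0, end_step=60 * STEPS_PER_EPOCH, frequency=100)

# Constant pruning: a fixed target sparsity s_c over the same window.
constant_schedule = tfmot.sparsity.keras.ConstantSparsity(
    target_sparsity=0.5,
    begin_step=0, end_step=60 * STEPS_PER_EPOCH, frequency=100)


def make_prunable(base_model, schedule):
    """Wrap prunable layers; skip re-scaling, normalisation and multiply
    layers, which the pruning library does not support (Section II-C)."""
    skip = ("rescaling", "normalization", "multiply")

    def clone_fn(layer):
        if any(s in layer.__class__.__name__.lower() for s in skip):
            return layer
        return tfmot.sparsity.keras.prune_low_magnitude(
            layer, pruning_schedule=schedule)

    return tf.keras.models.clone_model(base_model, clone_function=clone_fn)


def export_tflite(pruned_model, sparsity_aware=False, quantize=False):
    """Convert to TFLite: plain pruned model, "EXPERIMENTAL_SPARSITY"
    post-optimised model, or post-training quantised (8-bit) model."""
    stripped = tfmot.sparsity.keras.strip_pruning(pruned_model)
    converter = tf.lite.TFLiteConverter.from_keras_model(stripped)
    if quantize:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
    elif sparsity_aware:
        converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]
    return converter.convert()


def gzipped_size_mb(tflite_bytes):
    # Sizes are compared after Gzip compression, as in the testing criteria.
    return len(gzip.compress(tflite_bytes)) / (1024 * 1024)
```

The wrapped model is compiled and trained as usual with the `tfmot.sparsity.keras.UpdatePruningStep()` callback attached so the masks are updated during training; the three model variants discussed in this section then correspond to calling `export_tflite` with the default flags, with `sparsity_aware=True`, and with `quantize=True`.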
The model sizes are almost constant throughout the starting steps (\(t_{0}\)) on all sparsity levels, but the accuracy keeps going up and down in a small range. Similarly, the Fig. 4 shows the model size and MAE when the ending step (\(t_{f}\)) is increased from 0 to 80, with \(t_{0}=0\) and \(s_{f}=50\%\). Observing the graph, the best accuracy level is reached when the ending step (\(t_{0}\)) is 60. This indicates that the model can recover the loss that is caused by the pruning process while maintaining the same level of model size if it has enough training epochs after the desired sparsity level is reached. But the number of pruning epochs cannot be too low either. The desired sparsity level has to be reached gradually without harming model accuracy. It can be observed that when stopped at the 80\({}^{\text{th}}\) epoch, the model has lost accuracy since it did not have enough time to recover the lost accuracy. Observing the experimental results, the starting step and the ending step cannot be considered separately. The combination of both is what matters to the accuracy of the pruned model. Overall, the model with the best accuracy was obtained at \(t_{0}=0\) and \(t_{f}=60\). Furthermore, tests were run to see how initial sparsity (\(s_{i}\)) affected model size and accuracy, but there was no significant difference. MAE varied in the range between 8.9 and 9.96 throughout the tests. Overall, the best accuracy in dynamic pruning was achieved at 50% final sparsity (\(s_{f}\)) when \(t_{0}\) = 0 and \(t_{f}\) = 60, with a total number of 80 training steps and applying the post-pruning optimiser. The resulting model file size was 21.82MB (19.83% of the original model size) with an MAE value of 8.54 (0.09 lower than the original model), which is slightly better than the original model. The model with the best file size was obtained at \(s_{f}\) = 87.5%, \(t_{0}\) = 40, \(t_{f}\) = 80 and the post-pruning optimiser was applied. The model accuracy (MAE) is 11.99 (-3.36 worse than the original model) with a file size of 8.56 (7.78% of the original model size). By applying post quantisation, the best accuracy model size could be further reduced to 6.24MB (5.67% of the original model size) with an 8.58 MAE value (0.05 lower than the original model), which is still a bit better than the original model. The model that had the best file size could be further reduced to 2.54MB (2.3% of the original model size) with a further Fig. 3: Starting step \((t_{0})\) vs model size and MAE (0-50% Dynamic Sparsity, 80 - end step \((t_{f})\)) Fig. 2: Final Sparsity percentage \((s_{f})\) vs model size and MAE reduced accuracy of 12.17 (MAE). ### _Constant Sparsity_ Inspecting the Fig. 5, the accuracy and model size behaviour with sparsity level is the same as dynamic sparsity, but the range of accuracy differs from the original model (0.02 - 1.68) and the file size range of the resulting models (8.25% - 20.26% of the original model) is also higher than dynamic pruning. The behaviour of the models generated by applying the quantisation and post-pruning flags is also similar to that of dynamic pruning. The effect of the generated model's starting step (\(t_{0}\)) and ending step (\(t_{f}\)) values, as well as the variation in model size and accuracy, was the same as that of dynamic pruning. 
Overall, the best model accuracy of 8.57 (MAE) is obtained with 50% sparsity (\(s_{c}\)) and the pruning process starting (\(t_{0}\)) from the 60th to 80th epoch(\(t_{f}\)), which is 0.06 better than the original model with a relatively larger file size of 35.54MB (32.3% of original model). The model with the best file size (8.41) was obtained at 0.88 sparsity with the pruning process starting from the 60th to 80th epoch with a reduced accuracy of 12.39 (3.76 lower than the original model). The best-accurate model size could be reduced to 8.59MB (7.8% of the original model size) with an accuracy of 8.59 (0.02 lower than the best model) by applying post quantisation. ### _Constant pruning Vs Dynamic Pruning_ According to the Table I, we can observe that the model accuracy is better in all three types of models in the dynamic pruned models. The dynamic pruned model is better in 0.03 than the constant pruned model. It is not a significant improvement, but the file size is 13.7MB lower than the constant pruned model. Therefore, with a considerably lower file size, a model with better or at least almost equal accuracy can be obtained by dynamically pruning the model. In the models with the smallest file sizes, the dynamically pruned models still have better accuracy and smaller file sizes than the constant-pruned models. Therefore, we can decide that with dynamic pruning, models can recover the accuracy loss that happens due to pruning better than with constant pruning, and dynamic pruning has pruned more weights than constant pruning, resulting in a model with a smaller file size than constant pruning. Fig. 6 shows the variation of validation accuracy over the training steps of the models with the best file size and accuracy. The validation accuracy has suddenly increased at once and rapidly decreased but has stopped at a much higher value than dynamic pruning when constant pruning is applied. But with dynamic pruning, the validation accuracy has increased a bit, has recovered after a few training steps, and has converged to a much lower value than with constant pruning. Also, we can observe that in constant pruning, the ability to recover the damage due to model pruning is lower than in dynamic pruning. This behaviour proves that dynamic pruning can continue Fig. 4: Ending step (\(t_{f}\)) vs model size and MAE (0-50% Dynamic Sparsity, 0 - starting step(\(t_{0}\))) Fig. 5: Static sparsity (\(s_{f}\)) vs model size and MAE Fig. 6: Validation error (MAE) the training process without losing the model accuracy and can generate a more accurate model than constant pruning. ### _Post Optimisation Flags: Post-pruning and Quantisation_ Post-quantisation can reduce the file size dramatically with a very small loss in accuracy. For example, the model size with the best accuracy can be reduced by 15.84MB with a loss of accuracy of 0.04 by applying post quantisation. Similarly, by observing the graphs described in the previous sections, we can see that the quantisation is very effective in further reducing the model size significantly without losing a considerable amount of accuracy. But Fig. 7 shows how much deviation exists between predictions made by the best accuracy model without post-optimisation and the two post-optimised models. Since the post-pruning optimised model has the same accuracy as the pruned model, there's no difference between the pruned model and the optimised model. 
Therefore, they overlap on each other while the quantized model has a considerable deviation from the pruned model prediction. Although the MAE difference between the pruned model and the post-quantized model is very small (0.04), there's a noticeable deviation between the predictions made by the two models. Therefore, we can conclude that by post-quantisation, although we reduce the file size considerably with a very small accuracy loss, it still impacts the accuracy of the predictions. ### _Non-compressed model sizes_ Without GZip compression, the model sizes remained constant for each type of the model regardless of the pruning parameters in both dynamic and constant pruning, as shown in Table II. Therefore, the best use of pruning is when the model is being transmitted over a network or stored as a compressed file. Only by applying quantisation or post-pruning optimiser, the model size can be reduced further if the model is stored without compression. ## IV Recommendations The model accuracy trades off with the model size. At more of the desired sparsity, the model size reduces, but the accuracy of the model also keeps decreasing. Based on the capabilities of the edge device on which the model is going to be deployed, the choice has to be made if the priority has to be given to the model accuracy or to reduce the model size to the lowest possible size with a lower accuracy or reduce the model size up to a moderate level (size less than 50% of the original model) without reducing the model accuracy or even with better accuracy than the original model. Throughout the experiments, the range of model MAE was 3.77. In order to understand the impact of this variation on the predictions, the pitch angle of 20 consecutive pieces of test data was tested against the pruned models which had the best and worst accuracy. According to Fig. 8 it can be observed that the model with the best accuracy has a very low distance between the base model and the least accurate model. Since the most accurate model is more accurate than the base model, it has a lower distance to the true value at some points. But at all points, the lowest value model prediction has a huge variation from the true value compared to the other models. The difference between the base model and the best accurate model is (0.09) but still, it can Fig. 7: Pitch angle predictions of best accuracy model on sample video frames be observed that there's a distance between the predictions of those two models. Therefore, it can be concluded that even a small degradation in model accuracy matters to the accuracy of the model prediction. Based on these observations, the model accuracy vs file size trade-off has to be carefully decided. The primary concern has to be the accuracy, unless the model size is very critical. We can observe that with post-quantisation we can further reduce the model size with a very small loss in accuracy. With the experimental results of this study for the optimal model pruning output, a range of 0 to 50 dynamic sparsity can be recommended to get a model with good accuracy and over 50% reduced file size. If the resources in the edge device where the model was to be deployed were more constrained, the final sparsity (\(s_{f}\)) could be increased to 87.5%. Based on the findings of this study, the initial sparsity (\(s_{i}\)) can be recommended to be kept constant at 0. 
From constant sparsity and dynamic sparsity, dynamic pruning can be recommended since it can generate more accurate models with a lower file size than Constant pruning. The authors of the pruning module [10] have stated that this technique does not consider the architecture of the model. Since magnitude-based model pruning only looks at the magnitude of the weights, Therefore, regardless of the network architecture, the model pruning technique, along with the observations and conclusions obtained through this study, can be applied to any type of regression model. ## V Conclusion and Future work This work presents an in-depth analysis of model pruning parameter selection in order to obtain a model with a smaller file size without reducing model accuracy. The impact of constant and dynamic pruning and how parameters of them have to be chosen to get an optimal result is discussed in depth throughout this study. The next step of this study is to combine model pruning with other techniques such as weight pruning, weight clustering, and quantisation and test the performance in different configurations and analyse what the best configuration is to compress models without losing accuracy. ## VI Experimental results and codes Experimental results and the model pruning code can be found at: [https://github.com/asirigawesha/PruningTests.git](https://github.com/asirigawesha/PruningTests.git) ## VII Acknowledgement This research was supported by the Accelerating Higher Education Expansion and Development (AHEAD) Operation of the Ministry of Higher Education of Sri Lanka funded by the World Bank ([https://ahead.lk/result-area-3/](https://ahead.lk/result-area-3/)).
2309.05798
Enhancing Hyperedge Prediction with Context-Aware Self-Supervised Learning
Hypergraphs can naturally model group-wise relations (e.g., a group of users who co-purchase an item) as hyperedges. Hyperedge prediction is to predict future or unobserved hyperedges, which is a fundamental task in many real-world applications (e.g., group recommendation). Despite the recent breakthrough of hyperedge prediction methods, the following challenges have been rarely studied: (C1) How to aggregate the nodes in each hyperedge candidate for accurate hyperedge prediction? and (C2) How to mitigate the inherent data sparsity problem in hyperedge prediction? To tackle both challenges together, in this paper, we propose a novel hyperedge prediction framework (CASH) that employs (1) context-aware node aggregation to precisely capture complex relations among nodes in each hyperedge for (C1) and (2) self-supervised contrastive learning in the context of hyperedge prediction to enhance hypergraph representations for (C2). Furthermore, as for (C2), we propose a hyperedge-aware augmentation method to fully exploit the latent semantics behind the original hypergraph and consider both node-level and group-level contrasts (i.e., dual contrasts) for better node and hyperedge representations. Extensive experiments on six real-world hypergraphs reveal that CASH consistently outperforms all competing methods in terms of the accuracy in hyperedge prediction and each of the proposed strategies is effective in improving the model accuracy of CASH. For the detailed information of CASH, we provide the code and datasets at: https://github.com/yy-ko/cash.
Yunyong Ko, Hanghang Tong, Sang-Wook Kim
2023-09-11T20:06:00Z
http://arxiv.org/abs/2309.05798v1
# Enhancing Hyperedge Prediction with Context-Aware Self-Supervised Learning ###### Abstract Hypergraphs can naturally model _group-wise relations_ (e.g., a group of who users co-purchase an item) as _hyperedges_. _Hyperedge prediction_ is to predict future or unobserved hyperedges, which is a fundamental task in many real-world applications (e.g., group recommendation). Despite the recent breakthrough of hyperedge prediction methods, the following challenges have been rarely studied: (C1) _How to aggregate the nodes in each hyperedge candidate for accurate hyperedge prediction?_ and (C2) _How to mitigate the inherent data sparsity problem in hyperedge prediction?_ To tackle both challenges together, in this paper, we propose a novel hyperedge prediction framework (CASH) that employs (1) _context-aware node aggregation_ to precisely capture complex relations among nodes in each hyperedge for (C1) and (2) _self-supervised contrastive learning_ in the context of hyperedge prediction to enhance hypergraph representations for (C2). Furthermore, as for (C2), we propose a _hyperedge-aware augmentation_ method to fully exploit the latent semantics behind the original hypergraph and consider both node-level and group-level contrasts (i.e., _dual contrasts_) for better node and hyperedge representations. Extensive experiments on six real-world hypergraphs reveal that CASH consistently outperforms all competing methods in terms of the accuracy in hyperedge prediction and each of the proposed strategies is effective in improving the model accuracy of CASH. For the detailed information of CASH, we provide the code and datasets at: [https://github.com/yy-ko/cash](https://github.com/yy-ko/cash). Hypergraph, hyperedge prediction, self-supervised learning, hypergraph augmentation ## I Introduction Graphs are widely used to model real-world networks, where a node represents an object and an edge does a pairwise relation between two objects. In real-world networks, however, high-order relations (i.e., _group-wise relations_) are prevalent [1, 2, 3, 4, 5, 6, 7, 8, 9], such as (1) an item co-purchased by a group of users in e-commerce networks, (2) a paper co-authored by a group of researchers in collaboration networks, and (3) a chemical reaction co-induced by a group of proteins in protein-protein interaction networks. Modeling such group-wise relations by an ordinary graph could lead to unexpected information loss. As shown in Figure 1(a), for example, the group-wise relation among users \(u_{1}\), \(u_{2}\), and \(u_{3}\) for item 1 (i.e., the three users co-purchase the same item 1) is missing in a graph, instead, there exist three separate pair-wise relations (i.e., clique). A _hypergraph_, a generalized graph structure, can naturally model such high-order relations without any information loss, where a group-wise relation among an arbitrary number of objects is modeled as a _hyperedge_, e.g., the group-wise relation among users \(u_{1}\), \(u_{2}\), and \(u_{3}\) for item 1 is modeled as a single hyperedge \(e_{1}=\{u_{1},u_{2},u_{3}\}\) (blue ellipse) as shown in Figure 1(b). As a special case, if the size of all hyperedges is restricted to 2, a hypergraph is degenerated to a graph. 
_Hyperedge prediction_ (i.e., link prediction on hypergraphs) is a fundamental task in many real-world applications in the fields of recommender systems [10, 11, 12, 13, 14, 15, 16], social network analysis [17, 18, 19, 20], bioinformatics [21, 22], and so on, which predicts future or unobserved hyperedges based on an observed hypergraph structure. For example, it predicts (a) an item that a group of users are likely to co-purchase in recommender systems and (b) a set of proteins that could potentially co-induce a chemical reaction in bioinformatics. The process of hyperedge prediction is two-fold [23]: given a hypergraph, (1) (**hypergraph encoding**) the embeddings of nodes are produced by a hypergraph encoder (e.g., hypergraph neural networks (HGNNs) [24, 25, 26, 27, 28, 29, 30, 31]) and (2) (**hyperedge candidate scoring**) the embeddings of nodes in each hyperedge candidate are _aggregated_ and fed into a prediction model (e.g., MLP) to decide whether the candidate is real or not. **Challenges.** Although many existing methods have been proposed to improve hyperedge prediction [23, 32, 33, 34, 35, 36], the following challenges remain largely under-explored: **(C1) Node aggregation**. _"How to aggregate the nodes in each hyperedge candidate for accurate hyperedge prediction?"_ Fig. 1: Group-wise relations in e-commerce networks modeled as (a) a graph and (b) a hypergraph, where each hyperedge represents an item co-purchased by a group of users. Intuitively, the formation of group-wise relations (i.e., hyperedges) is more complex than that of pair-wise relations (i.e., edges). For example, the number of nodes engaged and their influences could be different depending on hyperedges. On the other hand, for edges to represent pair-wise relations, the number of nodes engaged is always 2. Such complex and subtle properties of hyperedge formation, however, have rarely been considered in existing methods. Instead, they simply aggregate the nodes in each hyperedge candidate by using heuristic rules [23, 36] (e.g., average pooling). Thus, they fail to precisely capture the complex relations among nodes, which eventually results in accuracy degradation. **(C2) Data sparsity**. "_How to mitigate the data sparsity problem in hyperedge prediction?_" In real-world hypergraphs, there exist only a few group-wise relations, which tend to be even fewer than pair-wise relations [14, 15]. This inherent data sparsity problem makes it more challenging to learn group-wise relations among nodes, thereby degrading the accuracy of hyperedge prediction. Although existing works have studied (a) HGNNs to effectively learn the hypergraph structure based on the limited number of observed hyperedges [32, 33, 34, 35] and (b) negative samplers to select negative examples (non-existing hyperedges) useful in model training [23, 36], they still suffer from the data sparsity problem. **Our work.** To tackle the aforementioned challenges together, we propose a novel hyperedge prediction framework, named \(\underline{\mathsf{C}}\)ontext-\(\underline{\mathsf{A}}\)ware \(\underline{\mathsf{S}}\)elf-supervised learning for \(\underline{\mathsf{H}}\)hyperedge prediction (\(\mathsf{CASH}\)). \(\mathsf{CASH}\) employs the two key strategies: **(1) Context-aware node aggregation**. 
To aggregate the nodes in each hyperedge candidate while considering their complex and subtle relations among them precisely, we propose a method of _context-aware node aggregation_ that calculates different degrees of influences of the nodes in a hyperedge candidate to its formation and integrates the contextual information into the node aggregation process. **(2) Self-supervised contrative learning**. To alleviate the inherent data sparsity problem, we incorporate _self-supervised contrastive learning_[37, 38, 39, 40] into the training process of \(\mathsf{CASH}\), providing complementary information to improve the accuracy of hyperedge prediction. Specifically, we propose a method of _hyperedge-aware augmentation_ to generate two augmented hypergraphs that preserve the structural properties of the original hypergraph, which enables \(\mathsf{CASH}\) to fully exploit the latent semantics behind the original hypergraph. We also consider not only _node-level_ but also _group-level_ contrasts in contrastive learning to better learn node and hyperedge embeddings (i.e., _dual contrastive loss_). Lastly, we conduct extensive experiments on real-world hypergraphs to evaluate \(\mathsf{CASH}\), which reveal that (1) (_Accuracy_) \(\mathsf{CASH}\) consistently outperforms _all_ competing methods in terms of the accuracy in hyperedge prediction (up to \(4.78\%\) higher than the best state-of-the-art method [23]), (2) (_Effectiveness_) the proposed strategies of \(\mathsf{CASH}\) are _all_ effective in improving the accuracy of \(\mathsf{CASH}\), (3) (_Insensitivity_) \(\mathsf{CASH}\) could achieve high accuracy across a wide range of values of hyperparameters (i.e., low hyperparameter sensitivity), and (4) (_Scalability_) \(\mathsf{CASH}\) provides (almost) linear scalability in training with the increasing size of hypergraphs. **Contributions.** The main contributions of our work are summarized as follows. * **Challenges**: We point out two important but under-explored challenges of hyperedge prediction: (**C1**) the node aggregation of a hyperedge candidate and (**C2**) the data sparsity. * **Framework**: We propose a novel hyperedge prediction framework, \(\mathsf{CASH}\) that employs (1) a _context-aware node aggregation_ for (C1) and (2) _self-supervised learning_ equipped with _hyperedge-aware augmentation_ and _dual contrastive loss_ for (C2). * **Evaluation**: Through extensive evaluation using six real-world hypergraphs, we demonstrate the superiority of \(\mathsf{CASH}\) in terms of (1) accuracy, (2) effectiveness, (3) insensitivity, and (4) scalability. For reproducibility, we provide the code and datasets used in this paper at [https://github.com/yy-ko/cash](https://github.com/yy-ko/cash) ## II Related Works In this section, we introduce existing hyperedge prediction methods and self-supervised hypergraph learning methods and explain their relation to our work. **Hyperedge prediction methods.** There have been many works to study hyperedge prediction; they mostly formulate the hyperedge prediction task as a classification problem [23, 24, 32, 33, 24]. Expansion [34] represents a hypergraph into multiple _n_-projected graphs and applies a logistic regression model to the projected graphs to predict unobserved hyperedges. HyperSAGNN [33] employs self-attention-based graph neural networks for hypergraphs to learn hyperedges with variable sizes and estimates the probability of each hyperedge candidate being formed. 
NHP [32] adopts hyperedge-aware graph neural networks [41] to learn the node embeddings in a hypergraph and aggregates the learned embeddings of nodes in each hyperedge candidate via max-min pooling for hyperedge prediction. AHP [23], a state-of-the-art hyperedge prediction method, employs an adversarial training-based model to generate negative examples (i.e., non-existing hyperedges) for use in the model training for hyperedge prediction and adopts max-min pooling as a node aggregation method. These methods, however, suffer from the data sparsity problem since they rely only on a small number of existing group-wise relations (i.e., observed hyperedges). On the other hand, our \(\mathsf{CASH}\) incorporates self-supervised contrastive learning into the context of hyperedge prediction, which provides complementary information for obtaining better node and hyperedge representations, thereby alleviating the data sparsity problem eventually. **Self-supervised hypergraph learning.** Recently, there have been a handful of works to study self-supervised learning on hypergraphs [14, 15, 16, 40, 42]. HyperGene [42] adopts bi-level (node- and hyperedge-level) self-supervised tasks to effectively learn group-wise relations. However, it adopts a clique expansion to transform a hypergraph into a simple graph, which incurs a significant loss of high-order information and does not employ contrastive learning. TriCL [40] employs tri-level (node-, group-, and membership-level) contrasts in contrastive hypergraph learning. This method, however, has been studied only in the _node-level_ task (e.g., node classification) but not in the hyperedge-level task (i.e., hyperedge prediction) that we focus on. Thus, TriCL does not tackle the node aggregation challenge (C1) that we point out. Also, TriCL adopts simple random hypergraph augmentation methods [37] that do not consider the structural properties of the original hypergraph. HyperGCL [43] employs two hyperedge augmentation strategies to build contrastive views of a hypergraph. HyperGCL (i) directly drops random hyperedges and (ii) masks nodes in each hyperedge randomly (i.e., hyperedge membership masking). This method, however, does not take into account the structural properties of the original hypergraph. On the other hand, our proposed hyperedge-aware augmentation method builds two contrastive views that preserve the structural properties of the original hypergraph. In the context of recommendations, DHCN [16], a session-based recommendation method, models items in a session as a hyperedge and captures the group-wise relation of each session by employing a group-level contrast. However, DHCN adopts a clique expansion, incurring information loss, and does not consider a node-level contrast in the model training. S2-HHGR [15] is a self-supervised learning method for group recommendation, which employs a hierarchical hypergraph learning method to capture the group interactions among users. However, they do not consider a group-level contrast. MHCN [14], a social recommendation method, models three types of social triangle motifs as hypergraphs. However, the motifs used in MHCN are only applicable to a recommendation task, but not to a general hypergraph learning task such as the hyperedge prediction that we focus on in this paper. ## III The Proposed Method: Cash In this section, we present a novel hyperedge prediction framework, **C**ontext-**A**ware **S**elf-supervised learning for **H**yperedge prediction (**CASH**). 
First, we introduce the notations and define the problem that we aim to solve (Section III-A). Then, we describe two key strategies of **C**ASH: context-aware node aggregation (Section III-B) and self-supervised learning (Section III-C). Finally, we analyze the space and time complexity of **C**ASH (Section III-D). ### _Problem Definition_ #### Iii-A1 **Notations** The notations used in this paper are described in Table I. Formally, a _hypergraph_ is defined as \(H=(V,E)\), where \(V=\{v_{1},v_{2},...,v_{|V|}\}\) is the set of nodes and \(E=\{e_{1},e_{2},...,e_{|E|}\}\) is the set of hyperedges. The node features are represented by the matrix \(\mathbf{X}\in\mathbb{R}^{|V|\times F}\), where each row \(x_{i}\) represents the \(F\)-dimensional feature of node \(v_{i}\). Each hyperedge \(e_{j}\in E\) contains an arbitrary number of nodes and has a positive weight \(w_{jj}\) in a diagonal matrix \(W\in\mathbb{R}^{|E|\times|E|}\). A hypergraph can generally be represented by an _incidence_ matrix \(\mathbf{H}\in\{0,1\}^{|V|\times|E|}\), where each element \(h_{ij}=1\) if \(v_{i}\in e_{j}\), and \(h_{ij}=0\) otherwise. To denote the degrees of nodes and hyperedges, we use _diagonal_ matrices \(\mathbf{D}^{V}\) and \(\mathbf{D}^{E}\), respectively. In \(\mathbf{D}^{V}\), each element \(d_{ii}^{v}=\sum_{j=1}^{|E|}w_{jj}h_{ij}\) represents the sum of the weights of node \(v_{i}\)'s incident hyperedges, and in \(\mathbf{D}^{E}\), each element \(d_{jj}^{e}=\sum_{i=1}^{|V|}h_{ij}\) represents the number of nodes in hyperedge \(e_{j}\). We represent the node and hyperedge representations as \(\mathbf{P}\in\mathbb{R}^{|V|\times d}\) and \(\mathbf{Q}\in\mathbb{R}^{|E|\times d}\), respectively, where each row \(p_{v}\) (\(q_{e}\)) represents the \(d\)-dimensional embedding vector of node \(v\) (hyperedge \(e\)). #### Iii-A2 **Problem definition** This work aims to solve the _hyperedge prediction_ problem, which is formally defined as follows. **Problem 1** (**Hyperedge prediction**).: Given a hypergraph \(\mathbf{H}\in\{0,1\}^{|V|\times|E|}\), node feature \(\mathbf{X}\in\mathbb{R}^{|V|\times F}\), and a hyperedge candidate \(e^{\prime}\notin E\), the goal of hyperedge prediction is to predict whether \(e^{\prime}\) is real or not. The process of hyperedge prediction is two-fold [23]: (1) **Hypergraph encoding**: a hypergraph encoder, \(f:(\mathbb{R}^{|V|\times F},\mathbb{R}^{|V|\times|E|})\rightarrow(\mathbb{R}^{| V|\times d},\mathbb{R}^{|E|\times d})\), produces the node and hyperedge embeddings based on the observed hypergraph structure, i.e., \(f(\mathbf{X},\mathbf{H})=(\mathbf{P},\mathbf{Q})\). (2) **Hyperedge candidate scoring**: a node aggregator, \(agg:\mathbb{R}^{|e^{\prime}|\times d}\rightarrow\mathbb{R}^{d}\), produces the single embedding of a given hyperedge candidate \(e^{\prime}\) by aggregating the embeddings of nodes in \(e^{\prime}\); finally, the aggregated embedding is fed into a predictor, \(pred:\mathbb{R}^{d}\rightarrow\mathbb{R}^{1}\), to compute the probability of the hyperedge candidate \(e^{\prime}\) being formed. Based on this process, we aim to train the hypergraph encoder \(f(\cdot)\), the node aggregator \(agg(\cdot)\), and the hyperedge predictor \(pred(\cdot)\) that minimize the loss \(\mathcal{L}\) in an _end-to-end_ way. Note that we define the loss function \(\mathcal{L}(\cdot)\) based on two tasks, i.e., _hyperedge prediction_ as a primary task and _self-supervised contrastive learning_ as an auxiliary task as illustrated in Figure 2. 
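As a small illustration of the notation above, the following sketch builds the incidence matrix \(\mathbf{H}\) and the diagonal degree matrices \(\mathbf{D}^{V}\) and \(\mathbf{D}^{E}\) for a hypothetical toy hypergraph with unit hyperedge weights (the node and hyperedge sets are invented purely for illustration):
```
# Toy hypergraph: V = {v0, v1, v2, v3}, E = {e0 = {v0, v1, v2}, e1 = {v2, v3}},
# with unit hyperedge weights (W = I); values are illustrative only.
import torch

edges = [[0, 1, 2], [2, 3]]
num_nodes, num_edges = 4, len(edges)

H = torch.zeros(num_nodes, num_edges)   # incidence matrix, |V| x |E|
for j, members in enumerate(edges):
    H[members, j] = 1.0

w = torch.ones(num_edges)               # hyperedge weights w_jj
D_V = torch.diag(H @ w)                 # d_ii^v: sum of weights of incident hyperedges
D_E = torch.diag(H.sum(dim=0))          # d_jj^e: number of nodes in each hyperedge
```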
We will describe the details of \(f(\cdot)\), \(agg(\cdot)\), \(pred(\cdot)\), and \(\mathcal{L}(\cdot)\) in Sections III-B and III-C. ### _Context-Aware Hyperedge Prediction_ As illustrated in Figure 2, **C**ASH** jointly tackles two tasks: _hyperedge prediction_ as a primary task (upper) and _self-supervised contrastive learning_ as an auxiliary task (lower). In this section, we explain how **C**ASH** addresses the hyperedge prediction, which consists of (1) hypergraph encoding and (2) hyperedge candidate scoring. For (1) hypergraph encoding, as CASH is _agnostic_ to hypergraph encoders, any hypergraph neural networks (HGNNs) [25, 26, 27, 28, 30, 31], producing the node representations (\(\mathbf{P}\)) to be used for (2) hyperedge candidate scoring, could be applied to CASH. As we explained before, however, clique-expansion-based HGNNs [25, 27] are unable to fully capture group-wise relations. Thus, to better learn group-wise relations, we carefully design a hypergraph encoder of CASH based on a _2-stage aggregation strategy_ (i.e., node-to-hyperedge and hyperedge-to-node aggregation) by following [27, 30, 31] (**Section III-B1**). For (2) hyperedge candidate scoring (our main focus), we point out a critical challenge that has been largely under-explored yet: **(C1) Node aggregation**. "_How to aggregate the nodes in each hyperedge candidate for accurate hyperedge prediction?_" Naturally, in real-world scenarios, group-wise relations among objects are formed in a very complex manner. In protein-protein interaction (PPI) networks, for example, a group-wise relation among an arbitrary number of proteins could be formed only when the proteins co-induce a single chemical reaction together, where each protein may have a different degree of influence to its group-wise relation (i.e., the chemical reaction). Thus, for predicting unobserved hyperedges (e.g., new chemical reaction) accurately, _it is crucial to precisely capture the degrees of influences in the complex and subtle relation among the nodes_ that would form a hyperedge. Existing hyperedge prediction methods [23, 32], however, simply aggregate a group of nodes without considering the complex relations (e.g, average pooling), which degrades the accuracy of hyperedge prediction eventually. From this motivation, we propose a method of _context-aware node aggregation_ that computes different degrees of influences of nodes in a hyperedge candidate to its formation and produces "the _context-aware_ embedding" of the hyperedge candidate, by aggregating the node embeddings based on their influences (**Section III-B2**). #### Iii-B1 **Hypergraph encoding** In this step, a hypergraph encoder \(f(\cdot)\) produces the node and hyperedge embeddings via a 2-stage aggregation strategy [30, 31, 27]. Specifically, CASH updates each _hyperedge embedding_ by aggregating the embeddings of its incident nodes, \(f_{V\to E}:\mathbb{R}^{|V|\times d}\rightarrow\mathbb{R}^{|E|\times d}\) (i.e., node-to-hyperedge aggregation), and then updates each _node embedding_ by aggregating the embeddings of the hyperedges that it belongs to, \(f_{E\to V}:\mathbb{R}^{|E|\times d}\rightarrow\mathbb{R}^{|V|\times d}\) (i.e., hyperedge-to-node aggregation). This 2-stage process is repeated by the number of layers \(k\) of the hypergraph encoder model. 
Formally, given a hypergraph incidence matrix \(\mathbf{H}\) and an input node feature matrix \(\mathbf{X}\), the node and hyperedge embeddings at the \(k\)-th layer, \(\mathbf{P}^{(k)}\) and \(\mathbf{Q}^{(k)}\), are defined as: \[\mathbf{Q}^{(k)} =\sigma(\mathbf{D}_{E}^{-1}\mathbf{H}^{T}\mathbf{P}^{(k-1)} \mathbf{W}_{E}^{(k)}+b_{E}^{(k)}), \tag{1}\] \[\mathbf{P}^{(k)} =\sigma(\mathbf{D}_{V}^{-1}\mathbf{H}\mathbf{Q}^{(k)}\mathbf{W}_ {V}^{(k)}+b_{V}^{(k)}), \tag{2}\] where \(\mathbf{P}^{(0)}=\mathbf{X}\), \(\mathbf{W}_{*}^{(k)}\) and \(b_{*}^{(k)}\) are trainable weight and bias matrices, respectively, \(\mathbf{D}_{*}^{-1}\) is the normalization term, and \(\sigma\) is a non-linear activation function (PReLU [44]). As illustrated in Figure 2, the weights and biases of the hypergraph encoder (\(\mathbf{W}_{*}\) and \(b_{*}\)) are shared in the self-supervised learning part. It is worth noting that more-complicated neural network models [28, 29, 30, 31] could be adopted as the hypergraph encoder of our CASH since our method is _agnostic_ to the hypergraph encoder architecture. #### Iii-B2 **Hyperedge candidate scoring** In this step, given the learned node embeddings \(\mathbf{P}\) and a hyperedge candidate \(e^{\prime}\), (1) a node aggregator, \(agg:\mathbb{R}^{|e^{\prime}|\times d}\rightarrow\mathbb{R}^{d}\), produces \(q_{e^{\prime}}^{*}\), the embedding of the hyperedge candidate \(e^{\prime}\), and (2) a predictor \(pred:\mathbb{R}^{d}\rightarrow\mathbb{R}^{1}\) computes the probability of the hyperedge candidate \(e^{\prime}\) being formed based on \(q_{e^{\prime}}^{*}\). **Context-aware hyperedge prediction.** To reflect the different degrees of nodes' influences on a hyperedge candidate in its node aggregation, we devise a simple but effective node aggregation method, i.e., _context-aware_ node aggregator, \(agg(\cdot)\). We first calculate the relative degrees of influences of the nodes in Fig. 2: The overview of CASH: (1) Context-aware hyperedge prediction (upper) and (2) Self-supervised contrative hypergraph learning (lower). a hyperedge candidate to its formation by using the attention mechanism [45], and update each node embedding based on the relative degrees of influences. Formally, given a hyperedge candidate \(e^{\prime}=\{v^{\prime}_{1},v^{\prime}_{2},...,v^{\prime}_{|e^{\prime}|}\}\) and the learned embeddings of the nodes in hyperedge candidate \(e^{\prime}\), \(\mathbf{P}[e^{\prime},:]\in\mathbb{R}^{|e^{\prime}|\times d}\), the _influence-reflected_ embedding of node \(v^{\prime}_{j}\), \(p^{*}_{v^{\prime}_{j}}\), and the relative influence of \(v^{\prime}_{i}\) to \(v^{\prime}_{j}\), \(\alpha_{i,j}\), are defined as: \[p^{*}_{v^{\prime}_{j}} =\sum_{v^{\prime}_{i}\in e^{\prime}}\alpha_{i,j}\cdot p_{v^{\prime }_{i}}\mathbf{W}^{{}^{\prime}}_{agg}, \tag{3}\] \[\alpha_{i,j} =\frac{exp(p_{v^{\prime}_{i}}\mathbf{W}^{{}^{\prime\prime}}_{agg} \cdot x^{\top})}{\sum_{v^{\prime}_{j}\in e^{\prime}}exp(p_{v^{\prime}_{j}} \mathbf{W}^{{}^{\prime\prime}}_{agg}\cdot x^{\top})}, \tag{4}\] where \(\mathbf{W}^{{}^{\prime}}_{agg},\mathbf{W}^{{}^{\prime\prime}}_{agg}\in \mathbb{R}^{d\times d}\) and \(x\in\mathbb{R}^{d}\) are trainable parameters. 
Then, we aggregate the influence-reflected embeddings of the nodes in hyperedge candidate \(e^{\prime}\), \(\mathbf{P}^{*}[e^{\prime},:]\), via _element-wise max pooling_ to filter the important contextual information of each node, finally computing the probability of \(e^{\prime}\) being formed, \(\hat{y}_{e^{\prime}}\), as: \[\hat{y}_{e^{\prime}}=pred(q^{*}_{e^{\prime}}),\ \ q^{*}_{e^{\prime}}=MaxPool( \mathbf{P}^{*}[e^{\prime},:]), \tag{5}\] where \(q^{*}_{e^{\prime}}\in\mathbb{R}^{d}\) is the final embedding of hyperedge candidate \(e^{\prime}\), which can reflect the complex and subtle relation among the nodes of the hyperedge candidate, and \(pred(\cdot)\) is a hyperedge predictor (a fully-connected layer (\(d\times 1\)), followed by a sigmoid function). To the best of our knowledge, this is the first work to adopt the attention-based method to aggregate the nodes in a hyperedge candidate for accurate hyperedge prediction. We will empirically show the effectiveness of our context-aware node aggregation method in Section IV-B2. **Model training.** For the model training and validation of CASH, we consider both positive and negative examples (i.e., existing and non-existing hyperedges). Specifically, to sample negative examples, we use the following heuristic negative sampling (NS) methods [36], each of which has the different degrees of difficulty: * **Sized NS (SNS)**: sampling \(k\) random nodes (easy). * **Motif NS (MNS)**: sampling a \(k\)-connected component in a clique-expanded hypergraph (difficult). * **Clique NS (CNS)**: selecting a hyperedge \(e\) and replacing one of its incident nodes \(u\in e\) with a node \(v\notin e\), which is linked to all the other incident nodes, i.e., (\(e\setminus\{u\})\cup\{v\}\) (most difficult). Thus, we aim to train the model parameters of CASH so that positive examples obtain higher scores while negative examples obtain lower scores. Formally, given a set \(E^{\prime}\) of hyperedge candidates, the prediction loss is defined as: \[\mathcal{L}_{pred}=-\frac{1}{|E^{\prime}|}\sum_{e^{\prime}\in E^{ \prime}}\underbrace{y_{e^{\prime}}\cdot\log\frac{\hat{y}_{e^{\prime}}}{\text{ positives}}}_{\text{ negatives}}+\underbrace{(1-y_{e^{\prime}})\cdot\log\left(1-\hat{y}_{e^{\prime}} \right)}_{\text{negatives}}, \tag{6}\] where \(y_{e^{\prime}}\) is the label of the hyperedge candidate \(e^{\prime}\) (1 or 0). ### _Self-Supervised Hypergraph Learning_ In real-world hypergraphs, there exist only a small number of group-wise relations [14, 15]. This inherent data sparsity problem makes it very challenging to precisely capture group-wise relations among nodes, which often results in the accuracy degradation in hyperedge prediction. To alleviate **(C2) the data sparsity** problem in hyperedge prediction, we incorporate the _self-supervised contrastive learning_[37, 38, 40] in the training process of CASH (See Figure 2), which provides complementary information to better learn group-wise relations among nodes in a hypergraph. A general process of contrastive learning is as follows: (1) generating two augmented views of a given hypergraph and (2) training the model parameters to minimize the contrast between the two views. 
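Putting Sections III-B1 and III-B2 together, a minimal PyTorch-style sketch of one 2-stage encoder layer (Eqs. (1)-(2)) and of the candidate scoring step (Eqs. (3)-(6)) might look as follows. Dense matrices are assumed for readability, the names are illustrative rather than the authors' released code, and the attention follows one simplified reading of Eq. (4) in which each node's transformed embedding is scaled by its softmax influence before max pooling.
```
# Illustrative sketch only: dense matrices, simplified attention reading of Eq. (4).
import torch
import torch.nn as nn

class TwoStageEncoderLayer(nn.Module):
    """One node-to-hyperedge / hyperedge-to-node aggregation layer, Eqs. (1)-(2)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_E = nn.Linear(in_dim, out_dim)   # W_E^(k), b_E^(k)
        self.lin_V = nn.Linear(out_dim, out_dim)  # W_V^(k), b_V^(k)
        self.act = nn.PReLU()

    def forward(self, P, H, D_V_inv, D_E_inv):
        Q = self.act(self.lin_E(D_E_inv @ H.T @ P))       # Eq. (1)
        P_next = self.act(self.lin_V(D_V_inv @ H @ Q))    # Eq. (2)
        return P_next, Q

class CandidateScorer(nn.Module):
    """Context-aware aggregation and prediction for one hyperedge candidate, Eqs. (3)-(5)."""
    def __init__(self, dim):
        super().__init__()
        self.W_agg1 = nn.Linear(dim, dim, bias=False)   # W'_agg in Eq. (3)
        self.W_agg2 = nn.Linear(dim, dim, bias=False)   # W''_agg in Eq. (4)
        self.query = nn.Parameter(torch.randn(dim))     # x in Eq. (4)
        self.pred = nn.Linear(dim, 1)                   # predictor (followed by sigmoid)

    def forward(self, P_cand):                          # P_cand: (|e'|, d)
        scores = self.W_agg2(P_cand) @ self.query       # unnormalized influences
        alpha = torch.softmax(scores, dim=0)            # relative influences, Eq. (4)
        p_star = alpha.unsqueeze(1) * self.W_agg1(P_cand)   # influence-reflected embeddings
        q_star, _ = p_star.max(dim=0)                   # element-wise max pooling, Eq. (5)
        return torch.sigmoid(self.pred(q_star)).squeeze()

# The prediction loss of Eq. (6) is then binary cross-entropy over the scored
# positive and negative candidates, e.g. torch.nn.functional.binary_cross_entropy(y_hat, y).
```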
There are two important questions to answer in contrastive learning: **(Q1)** "_How to generate two augmented views to fully exploit the latent semantics behind the original hypergraph?_" and **(Q2)** "_What to contrast between the two augmented views?_" To answer these questions, we (1) propose a _hyperedge-aware augmentation_ method that generates two augmented views, preserving the structural properties of the original hypergraph for (Q1) (**Section III-C1**) and (2) consider both node-level and group-level contrasts in constructing the training loss (i.e., _dual contrastive loss_) for (Q2) (**Section III-C2**). #### Iv-C1 **Hypergraph augmentation** In contrastive learning, generating augmented views is crucial since the latent semantics of the original hypergraph to capture could be different depending on the views. Despite its importance, the hypergraph augmentation still remains largely under-explored. Existing works [37, 38, 40], however, adopt a simple random augmentation method that generates augmented views by (i) directly dropping hyperedges or (ii) masking random nodes (members) in hyperedges (i.e., _random membership masking_). Specifically, it uses a random binary mask of the size \(S=nnz(\mathbf{H})\), where \(nnz(\mathbf{H})\) is the number of non-zero elements in a hypergraph incidence matrix \(\mathbf{H}\). It might happen that a majority (or all) of members might be masked in _some_ hyperedges, while only a few (or none of) members are masked in others. Figure 3(a) shows a toy example that _all_ members in hyperedge \(e_{2}\) are masked (i.e., the group-wise relation disappears), while members in hyperedges \(e_{1}\) and \(e_{3}\) are not masked at all. Therefore, this random augmentation method may impair the original hypergraph structure, which results in decreasing the effect of contrastive learning eventually. Fig. 3: Comparison of (a) random membership masking method with (b) our hyperedge-aware membership masking method. From this motivation, we argue that _it is critical to generate augmented views that preserve the original hypergraph structure_ (e.g., the distribution of hyperedges). To this end, we propose a simple yet effective augmentation method that generates two augmented views, considering variable sizes of hyperedges for (Q1). Specifically, our method masks random \(p_{m}\%\) members of each hyperedge _individually_ (i.e., _hyperedge-aware membership masking_), rather than masking \(p_{m}\%\) members of all hyperedges at once. Thus, as shown in Figure 3(b), all existing group-wise relations of the original hypergraph can be preserved in the augmented views. This implies that our hyperedge-aware method is able to successfully preserve the structural properties of the original hypergraph, which enables CASH to fully exploit the latent semantics behind the original hypergrpah. We also employ _random node feature masking_ by following [37, 38, 40]. For node feature masking, we mask random \(p_{f}\%\) dimensions of node features. As a result, CASH generates two augmented views of a hypergraph, \(\mathcal{H}_{1}=(\mathbf{X}_{1},\mathbf{H}_{1})\) and \(\mathcal{H}_{2}=(\mathbf{X}_{2},\mathbf{H}_{2})\). Algorithm 1 shows the entire process of our hyperedge-aware augmentation. We will evaluate our hyperedge-aware augmentation method and its hyperparameter sensitivity to \(p_{m}\) and \(p_{f}\) in Sections IV-B2 and IV-B3, respectively. 
``` Input: Node features \(\mathbf{X}\), hypergraph \(\mathbf{H}=(V,E)\), membership and feature masking rates \(p_{m}\) and \(p_{f}\) Output: Augmented hypergraph \(\mathcal{H}^{*}\) 1Function hyperedgeAwareAugment(\(\mathbf{X}\), \(\mathbf{H}\), \(p_{m}\), \(p_{f}\)): 2\(V^{*}\leftarrow\emptyset\), \(E^{*}\leftarrow\emptyset\), \(d\leftarrow|\mathbf{X}|0|\), FeatureMask\(\leftarrow[]\) 3for\(e_{j}\in E\)do// 1. Membership masking 4\(e^{*}\leftarrow\emptyset\)// // Masked hyperedge 5for\(v_{i}\in e\)do 6if\(S\sim\mathcal{B}(1-p_{m})\)then 7\(\leftarrow\)\(e^{*}\leftarrow\cup\{v_{i}\}\), \(V^{*}\gets V^{*}\cup\{v_{i}\}\) 8\(E^{*}\gets E^{*}\cup\{e^{*}\}\) 9\(\mathbf{X}^{*}\leftarrow\mathbf{X}[V^{*},:]\), \(\mathbf{H}^{*}\leftarrow(V^{*},E^{*})\) 10for\(i=1\to d\)do// 2. Node feature masking 11if\(S\sim\mathcal{B}(1-p_{f})\)then// Generating FeatureMask 12FeatureMask.append\((1)\) 13 14else 15FeatureMask.append\((0)\) 16\(\mathbf{X}^{*}\leftarrow\mathbf{X}^{*}\otimes\)FeatureMask // Applying FeatureMask 17return\(\mathcal{H}^{*}\leftarrow(\mathbf{X}^{*},\mathbf{H}^{*})\) 18endfunction ``` **Algorithm 1**Hyperedge-Aware Augmentation #### Iv-C2 Hypergraph contrastive learning For the two augmented views, \(\mathcal{H}_{1}=(\mathbf{X}_{1},\mathbf{H}_{1})\) and \(\mathcal{H}_{2}=(\mathbf{X}_{2},\mathbf{H}_{2})\), we produce the node and hyperedge embeddings, \(\mathbf{P}_{i}\) and \(\mathbf{Q}_{i}\), respectively, where \(i=1,2\) for each augmented view. We use the same hypergraph encoder \(f(\cdot)\) as explained in Section III-B1. Then, we apply node and hyperedge projectors, \(g_{V}:\mathbb{R}^{|V|\times d}\rightarrow\mathbb{R}^{|V|\times d}\) and \(g_{E}:\mathbb{R}^{|E|\times d}\rightarrow\mathbb{R}^{|E|\times d}\), to the learned node and hyperedge embeddings (\(\mathbf{P}_{i}\) and \(\mathbf{Q}_{i}\)), in order to represent them to better fit the form in constructing the contrastive loss by following [46]. Thus, given the learned node and hyperedge embeddings for the \(i\)-th augmented view, \(\mathbf{P}_{i}\) and \(\mathbf{Q}_{i}\), their projected embeddings, \(\mathbf{Z}_{(i,V)}\) and \(\mathbf{Z}_{(i,E)}\), are defined as: \[\mathbf{Z}_{(i,V)}=g_{V}(\mathbf{P}_{i}),\quad\mathbf{Z}_{(i,E)}=g_{E}(\mathbf{ Q}_{i}). \tag{7}\] As the projector \(g_{*}(\cdot)\), we use a two-layer MLP model (\(d\times d_{proj}\times d\)) with the ELU non-linear function [47]. Then, based on the projected node and hyperedge embeddings, \(\mathbf{Z}_{(i,V)}\) and \(\mathbf{Z}_{(i,E)}\), we measure the contrast between the two contrastive views. We consider not only the _node-level_ but also _group-level_ contrasts as self-supervisory signals in constructing the contrastive loss, i.e., _dual contrastive loss_, for (Q2). These dual contrastive signals are complementary information to better learn both node-level and group-level structural information of the original hypergraph, thereby improving the accuracy in hyperedge prediction (i.e., alleviating (C2) the data sparsity problem). Formally, given the projected node and hyperedge embeddings for each augmented view, \(\mathbf{Z}_{(i,V)}\) and \(\mathbf{Z}_{(i,E)}\), the contrastive loss with dual contrasts is defined as: \[\mathcal{L}_{con}= -\underbrace{\log sim(\mathbf{Z}_{(1,V)},\mathbf{Z}_{(2,V)})}_{ \text{node-level contrast}}\] \[-\underbrace{\log sim(\mathbf{Z}_{(1,E)},\mathbf{Z}_{(2,E)})}_{ \text{group-level contrast}}), \tag{8}\] where \(sim(\cdot)\) is the cosine similarity used as a similarity function in CASH. 
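Returning briefly to Algorithm 1, a compact Python rendering of the hyperedge-aware augmentation could look as follows; a list-of-lists hyperedge representation is assumed, and, unlike Algorithm 1, the surviving node set is not re-indexed.
```
# Sketch of Algorithm 1 (hyperedge-aware membership masking + node feature masking);
# the representation and simplifications are illustrative.
import torch

def hyperedge_aware_augment(X, edges, p_m, p_f):
    # 1. Membership masking: each member of every hyperedge is dropped
    #    independently with probability p_m, so every hyperedge keeps
    #    roughly a (1 - p_m) fraction of its members.
    masked_edges = []
    for e in edges:
        keep = torch.bernoulli(torch.full((len(e),), 1.0 - p_m)).bool()
        masked_edges.append([v for v, k in zip(e, keep) if k])
    # 2. Node feature masking: each feature dimension is zeroed out
    #    with probability p_f, shared across all nodes.
    feat_mask = torch.bernoulli(torch.full((X.shape[1],), 1.0 - p_f))
    return X * feat_mask, masked_edges

# Two augmented views H_1, H_2 are obtained by calling this function twice
# with independent random masks on the same original hypergraph.
```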
Finally, we unify the two losses of the hyperedge prediction (primary task) and self-supervised contrastive learning (auxiliary task) by a weighted sum. Thus, the unified loss of CASH is finally defined as: \[\mathcal{L}=\mathcal{L}_{pred}+\beta\mathcal{L}_{con}, \tag{9}\] where \(\beta\) is a hyperparameter to control the weight of the auxiliary task. Accordingly, all model parameters of CASH are trained to jointly optimize the two tasks. We will evaluate the impact of the hyperparameter \(\beta\) on the accuracy of CASH in Section IV-B3. As a result, CASH effectively addresses the two important but under-explored challenges of hyperedge prediction by employing two strategies: (1) _context-aware node aggregation_ that considers the complex relation among nodes that would form a hyperedge for (C1) and (2) _self-supervised contrastive learning_ that provides complementary information to better learn group-wise relations for (C2). ### _Complexity Analysis_ In this section, we analyze the space and time complexity of CASH. **Space complexity.** CASH consists of (1) a hypergraph encoder, (2) a node aggregator, (3) a hyperedge predictor, and (4) a projector. The parameter size of a hypergraph encoder is \(d\times d\times k\times 2\), where \(d\) is the embedding dimensionality and \(k\) is the number of layers. The parameter sizes of a node aggregator, a hyperedge predictor, and a projector are \(d\times d\times 3\), \(d\), and \(d\times d\times 2\), respectively. In addition, the space for node and hyperedge embeddings, \(|V|\times d\) and \(|E|\times d\), is commonly required in any hyperedge prediction methods. Thus, since \(k\) is much smaller than \(d\), \(|V|\), and \(|E|\), the overall space complexity of CASH is \(O((|V|+|E|+d)\cdot d)\), i.e., linear to the hypergraph size. As a result, the space complexity of CASH is comparable to those of existing methods since the additional space for our context-aware node aggregator and projector is much smaller than the commonly required space (i.e., \(|V|+|E|\gg d\)). **Time complexity.** The computational overhead of CASH comes from (1) hypergraph encoding, (2) node aggregation, (3) hyperedge prediction, (4) projection, and (5) contrast. The computational overhead of hypergraph encoding is \(O(d\times|\mathbf{H}|\times k\times 2)\), where \(|\mathbf{H}|\) is the number of non-zero elements in the hypergraph incidence matrix. The context-aware node aggregation requires the time complexity of \(O(d^{2}\times|e^{\prime}|\times 3)\), where \(|e^{\prime}|\) is the size of a hyperedge candidate \(e^{\prime}\). The overheads of hyperedge prediction and projection are \(O(d)\) and \(O(d^{2}\times 2)\), respectively. Finally, the contrast overhead is \(O(|V|+|E|)\cdot d\) (i.e., node-level and group-level). Thus, the overall time complexity of CASH is \(O(|\mathbf{H}|+d+|V|+|E|)\cdot d\), i.e., linear to the hypergraph size, since \(|e^{\prime}|\) and \(k\) are much smaller than \(d\), \(|V|\), \(|E|\), and \(|\mathbf{H}|\), where we note the first term (the common overhead of hypergraph encoding \(O(|\mathbf{H}|\cdot d)\)) is dominant. This implies that the time complexity of CASH is also comparable to those of existing hyperedge prediction methods. We will evaluate the scalability of CASH with the increasing size of hypergraphs in Section IV-B4. ## IV Experimental Validation In this section, we comprehensively evaluate CASH by answering the following evaluation questions (EQs): * **EQ1 (Accuracy)**. 
To what extent does CASH improve the existing hyperedge prediction methods in terms of the accuracy in hyperedge prediction? * **EQ2 (Ablation study)**. How does each of our proposed strategies contributes to the model accuracy of CASH? * **EQ3 (Sensitivity)**. How sensitive is the model accuracy of CASH to the hyperparameters (\(\beta\), \(p_{f}\) and \(p_{m}\))? * **EQ4 (Scalability)**. How does the training of CASH scale up with the increasing size of hypergraphs? ### _Experimental Setup_ **Datasets.** We use six real-world hypergraphs (Table II), which were also used in [23, 25, 27]: (1) three co-citation datasets (Citeseer, Cora, and Pubmed), (2) two authorship datasets (Cora-A and DBLP-A), and (3) one collaboration dataset (DBLP). In the co-citation datasets, each node indicates a paper and each hyperedge indicates the set of papers co-cited by a paper; in the authorship dataset, each node indicates a paper and each hyperedge indicates the set of papers written by an author; in the collaboration dataset, each node indicates a researcher and each hyperedge indicates the set of researchers who wrote the same paper. For all the datasets, we use the bag-of-word features from the abstract of each paper as in [23]. **Evaluation protocol.** We evaluate CASH by using the protocol exactly same as that used in [23]. For each dataset, we use five data splits, where hyperedges (i.e., positive examples) in each split are randomly divided into the training (60%), validation (20%), and test (20%) sets. To comprehensively evaluate CASH, we use four different validation and test sets, each of which has different negative examples with various degrees of difficulty, as in [23]. Specifically, we (1) sample negative examples as many as positive examples by using four heuristic negative sampling (NS) methods [36], which are explained in Section III-B (i.e., sized NS (SNS), motif NS (MNS), clique NS (CNS), and a mixed one (MIX)), and (2) add them to each validation/test set (i.e., the ratio of positives to negatives is 1:1). As evaluation metrics, we use AUROC (area under the ROC curve) and AP (average precision), where higher values of these metrics indicate higher hyperedge prediction accuracy. Then, we (1) measure AUROC and AP on each test set at the epoch when the averaged AUROC over the four validation sets is maximized, and (2) report the averaged AUROC and AP on each test set over five runs. All datasets and their splits used in this paper are available at: [https://github.com/yy-ko/cash](https://github.com/yy-ko/cash). **Competing methods.** We compare CASH with the following four hyperedge prediction methods in our experiments. * **Expansion**[34]: Expansion represents a hypergraph via multiple _n_-projected graphs and predicts future hyperedges based on the multiple projected graphs. * **HyperSAGNN**[33]: HyperSAGNN employs a self-attention based GNN model to learn hyperedges with variable sizes and predicts whether each hyperedge candidate is formed. * **NHP**[32]: NHP applies hyperedge-aware GCNs to a hypergraph to learn the node embeddings and aggregates the embeddings of nodes in each hyperedge candidate by using max-min pooling. * **AHP**[23]: AHP, a state-of-the-art method, employs an adversarial-training-based model to generate negative examples for use in the model training and employs max-min pooling for the aggregation of the nodes in a hyperedge candidate. 
For all competing methods, we use their results reported in [23] since we follow the exactly same evaluation protocol and use the exactly same data splits as in [23]. **Implementation details.** We implement CASH by using PyTorch 1.11 and Deep Graph Library (DGL) 0.9 on Ubuntu 20.04. We run all experiments on the machine equipped with an Intel i7-9700k CPU with 64GB main memory and two NVIDIA RTX 2080 Ti GPUs, each of which has 11GB memory and is installed with CUDA 11.3 and cuDNN 8.2.1. For all datasets, we set the batch size as 32 to fully utilize the GPU memory and the dimensionality of node and hyperedge embeddings as 512, following [23, 27]. For the model training, we use the Adam optimizer [48] with the learning rate \(\eta=\) 5e-3 and the weight decay factor 5e-4 for all datasets. We use the motif NS (MNS)1[36] to select negative hyperedges in the model training with the ratio of positive examples to negative examples as 1:1 (i.e., 32 positives and 32 negatives are used in each iteration). For self-supervised learning, we adjust the control factor of the auxiliary task, \(\beta\), from 0.0 to 1.0, and the node feature masking and membership masking rates, \(p_{f}\) and \(p_{m}\), from 0.1 to 0.9 in step of 0.1 for all datasets, which will be elaborated in Section IV-B3. Footnote 1: We have also tried to use other negative samplers in the training of CASH but have observed that their impacts on the accuracy are negligible. ### _Experimental Results_ #### Iv-B1 **Accuracy (EQ1)** We first evaluate the hyperedge prediction accuracy of CASH. Table III shows the accuracies of all comparing methods in six real-world hypergraphs. The results show that CASH consistently outperforms _all_ competitive methods in _all_ datasets in both (averaged) AUROC and AP. Specifically, CASH achieves higher AUROC by up to 45.4%, 38.3%, 22.5%, and 4.78% than Expansion, HyperSAGNN, NHP, and AHP, respectively in DBLP-A. We note that these improvements of CASH over AHP (the best competitor) are remarkable, given that AHP has already improved other existing methods significantly in those datasets. Consequently, these results demonstrate that CASH is able to effectively capture the group-wise relations among nodes by addressing the two challenges of hyperedge prediction successfully, i.e., (C1) _node aggregation_ and (C2) _data sparsity_, through the proposed strategies: (1) the context-aware node aggregation for (C1) and (2) the self-supervised learning with _hyperedge-aware augmentation_ and _dual contrastive loss_ for (C2). Interestingly, the two best competitors (i.e., NHP and AHP) show very low accuracies in the CNS test set (i.e., the most difficult test set), which is similar to or even worse than the accuracy of the random prediction (\(\approx 0.5\)), while they achieve very high accuracies (almost perfect) in the SNS test set (i.e., the easiest test set). The results imply that these methods might overfit easy negative examples, thus limiting their ability to be generalized to other datasets. In other words, they do not successfully address the two challenges of hyperedge prediction that we point out - i.e., (C1) node aggregation and (C2) data sparsity - so that they fail to precisely capture the high-order information encoded in hyperedges. 
On the other hand, our CASH always achieves much higher accuracies than all competing methods in the CNS test set, e.g., up to 93.4%, 23.8%, 40.8%, and 24.6% higher AUROC than Expansion, HyperSAGNN, NHP, and AHP, respectively in DBLP, thereby demonstrating that the generalization ability of CASH is superior to all the competing methods. #### Iv-A2 **Ablation study (EQ2)** In this experiment, we verify the effectiveness of the proposed strategies of CASH individually. We compare the following four versions of CASH: * **CASH-No:** the baseline version without both strategies (i.e., neither context-aware node aggregation nor self-supervised contrastive learning). * **CASH-CL:** the version with self-supervised contrastive learning with dual contrastive loss, but without hyperedge-aware augmentation and context-aware node aggregation. * **CASH-HCL:** the version with self-supervised contrastive learning with dual contrastive loss and hyperedge-aware augmentation, but without context-aware node aggregation. * **CASH-ALL:** the original version with _all_ strategies (i.e., context-aware node aggregation and self-supervised contrastive learning with dual contrastive loss and hyperedge-aware augmentation). Table IV shows the results of our ablation study. Overall, each of our proposed strategies is always beneficial to improving the model accuracy of CASH. Specifically, when all strategies are applied to CASH (i.e., CASH-ALL), the averaged AUROC is improved by 8.91%, 15.45%, 6.04%, and 9.19% compared to the baseline (i.e., CASH-No), in Citeseer, Cora, Pubmed, and Cora-A, respectively. These results demonstrate that the two challenges of hyperedge prediction that we point out, i.e., (C1) node aggregation and (2) data sparsity, are critical for accurate hyperedge prediction and our proposed strategies employed in CASH address them successfully. Looking more closely, **(1) effect of contrastive learning**: CASH-CL outperforms CASH-No on all test sets of all datasets. This result verifies the effect of the _self-supervised contrastive learning_ of CASH, which alleviates (C2) the _data sparsity_ problem successfully by providing complementary information to better learn node and hyperedge representations, as we claimed in Section III-C. Then, **(2) effect of the hyperedge-aware augmentation**: CASH-HCL also improves CASH-CL consistently. This demonstrates that our _hyperedge-aware augmentation_ method is more beneficial to hyperedge prediction than a simple random augmentation method. Thus, our method is able to generate two augmented views preserving the structural properties of the original hypergraph, thereby enabling CASH to fully exploit the latent semantics behind the original hypergraphs as we claimed in Section III-C1. Lastly, **(3) effect of the context-aware node aggregation**: \(\mathsf{CASH}\)-\(\mathsf{ALL}\) achieves higher accuracies than \(\mathsf{CASH}\)-\(\mathsf{HCL}\) in all datasets, which verifies that our _context-aware node aggregation_ method is able to address (C1) the challenge of node aggregation effectively, by capturing the complex and subtle relations among the nodes in a hyperedge candidate for accurate hyperedge prediction. #### Iii-B3 **Sensitivity analysis (EQ3)** In this experiment, we analyze the hyperparameter sensitivity of \(\mathsf{CASH}\). First, we evaluate the impact of the auxiliary task (i.e., self-supervised contrastive learning) on the model accuracy of \(\mathsf{CASH}\) according to the control factor \(\beta\). 
We measure the model accuracy of \(\mathsf{CASH}\) on four different test sets with varying \(\beta\) from 0.0 (i.e., not used) to 1.0 (i.e., as the same as the primary task) in step of 0.1. Figure 4 shows the results, where the \(x\)-axis represents the control factor \(\beta\) and the \(y\)-axis represents the AUROC. The model accuracy of \(\mathsf{CASH}\) is significantly improved in _all_ cases when \(\beta\) is larger than 0.1, and \(\mathsf{CASH}\) achieves high prediction accuracy across a wide range of \(\beta\) values (\(\beta>=0.1\)). This result verifies that (i) self-supervised contrastive learning is consistently beneficial to improving the accuracy of \(\mathsf{CASH}\) by providing complementary information to better learn high-order information encoded in hyperedges and (ii) the accuracy of \(\mathsf{CASH}\) is insensitive to its hyperparameter \(\beta\). Then, we evaluate the impacts of the augmentation hyperparameters \(p_{m}\) and \(p_{f}\) on the model accuracy of \(\mathsf{CASH}\). As explained in Section III-C1, the hyperparameter \(p_{m}\) (\(p_{f}\)) controls how many members (dimensions) of each hyperedge (node feature vector) are masked in augmented views. Thus, as \(p_{m}\) (\(p_{f}\)) becomes larger, the more members (dimensions) of each hyperedge (node feature vector) are masked in augmented views. We measure the model accuracy of \(\mathsf{CASH}\) with varying \(p_{m}\) and \(p_{f}\) from 0.1 to 0.9 in step of 0.1. Figure 5 shows the results, where the \(x\)-axis represents the membership masking rate \(p_{m}\), the \(y\)-axis represents the node feature masking rate \(p_{f}\), and the \(z\)-axis represents the averaged AUROC. \(\mathsf{CASH}\) with \(p_{m}\) above 0.4 consistently achieves higher accuracy than \(\mathsf{CASH}\) with \(p_{m}\) below 0.4 regardless of \(p_{f}\) (i.e., the blue wide area on the surface in Figure 5). On the other hand, \(\mathsf{CASH}\) with \(p_{m}\) below 0.4 shows low hyperedge prediction accuracy (i.e., the red/orange area on the surface). Specifically, \(\mathsf{CASH}\) with \(p_{m}=0.1\) and \(p_{f}=0.1\) (i.e., memberships and features are rarely masked) shows the worst result in the Citeseer dataset. These results imply that (1) _the hyperedge membership masking is more important than the node feature masking in contrastive learning_ that aims to capture the structural information of the original hypergraph and (2) \(\mathsf{CASH}\) is able to achieve high accuracy across a wide range of values of hyperparameters. Based on these results, we believe that the accuracy of \(\mathsf{CASH}\) is _insensitive_ to the augmentation hyperparameters \(p_{m}\) and \(p_{f}\), and we recommend setting \(p_{m}\) and \(p_{f}\) as above 0.4. #### Iii-B4 **Scalability (EQ4)** Finally, we evaluate the scalability of \(\mathsf{CASH}\) in training with the increasing size of hypergraphs. We train \(\mathsf{CASH}\) for 20 training epochs with varying the ratio of the training examples (i.e., hyperedges) from 10% to 100% in step of 10%, and measure the averaged training time per epoch. For Fig. 4: The impact of the auxiliary task (i.e., dual contractive learning) on the hyperedge prediction accuracy of \(\mathsf{CASH}\) according to the control hyperparameter \(\beta\). The auxiliary task is consistently beneficial to hyperedge prediction across a wide range of \(\beta\) values (\(\beta>=0.1\)). Fig. 5: The hyperparameter sensitivity of \(\mathsf{CASH}\) to the membership and node feature masking rates \(p_{m}\) and \(p_{f}\). 
\(\mathsf{CASH}\) achieves high accuracy with a wide range of \(p_{m}\) and \(p_{f}\) values (i.e., the blue wide area on the surface). brevity, we report the relative training time per epoch, i.e., the relative time of 1 means the time per epoch for 10% of the training examples in each dataset. We have observed that CASH provides the similar scalabilities in training across six real-world hypergraphs used in this paper. We thus report the results on Citeseer and Cora in Figure 6, where the \(x\)-axis represents the ratio of the training examples and the \(y\)-axis represents the relative training time per epoch. The results reveal that the training of CASH_scales up linearly_ with the increasing number of hyperedges, which demonstrates that the time complexity of CASH is linear to the size of hypergraphs, as we explained in Section III-D. ## V Conclusion and Future Work In this paper, we point out two important but under-explored challenges of hyperedge prediction, i.e., (C1) node aggregation and (C2) data sparsity. To tackle the two challenges together, we propose a novel hyperedge prediction framework, named as CASH that employs (1) the context-aware node aggregation for (C1) and (2) the self-supervised contrastive learning for (C2). Furthermore, we propose the hyperedge-aware augmentation method to fully exploit the structural information of the original hypergraph and consider the dual contrasts to better capture the group-wise relations among nodes. Via extensive experiments on six real-world hypergraphs, we demonstrate that (1) (_Accuracy_) CASH consistently outperforms all competing methods in terms of the accuracy in hyperedge prediction, (2) (_Effectiveness_) all proposed strategies are beneficial to improving the accuracy of CASH, (3) (_Insensitivity_) CASH is able to achieve high accuracy across a wide range of values of hyperparameters (i.e., low hyperparameter sensitivity), and (4) (_Scalability_) CASH provides almost linear scalability in training with the increasing size of hypergraphs. ## Acknowledgments This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2022-00155586, 2022-0-00352, 2020-0-01373). Hanghang Tong was partially supported by NSF (1947135, 2134079, and 2324770).
2309.14695
Strong Szegő Limit Theorems for Multi-Bordered, Framed, and Multi-Framed Toeplitz Determinants
This work provides the general framework for obtaining strong Szeg\H{o} limit theorems for multi-bordered, semi-framed, framed, and multi-framed Toeplitz determinants, extending the results of Basor et al. (2022) beyond the (single) bordered Toeplitz case. For the two-bordered and also the semi-framed Toeplitz determinants, we compute the strong Szeg\H{o} limit theorems associated with certain classes of symbols, and for the $k$-bordered (${k \geq 3}$), framed, and multi-framed Toeplitz determinants we demonstrate the recursive fashion offered by the Dodgson condensation identities via which strong Szeg\H{o} limit theorems can be obtained. One instance of appearance of semi-framed Toeplitz determinants is in calculations related to the entanglement entropy for disjoint subsystems in the XX spin chain (Brightmore et al. (2020) and Jin-Korepin (2011)). In addition, in the recent work Gharakhloo and Liechty (2024) and in an unpublished work of Professor Nicholas Witte, such determinants have found relevance respectively in the study of ensembles of nonintersecting paths and in the study of off-diagonal correlations of the anisotropic square-lattice Ising model. Besides the intrinsic mathematical interest in these structured determinants, the aforementioned applications have further motivated the study of the present work.
Roozbeh Gharakhloo
2023-09-26T06:15:37Z
http://arxiv.org/abs/2309.14695v3
# Strong Szego limit theorems for multi-bordered, framed, and multi-framed Toeplitz determinants ###### Abstract. This work provides the general framework for obtaining Strong Szego Limit Theorems for multi-bordered, semi-framed, framed, and multi-framed Toeplitz determinants, extending the results of [BEGIL] beyond the (single) bordered Toeplitz case. For the two-bordered and also the semi-framed Toeplitz determinants, we compute the Strong Szego Limit Theorems associated with certain classes of symbols, and for the \(k\)-bordered (\(k\geq 3\)), framed, and multi-framed Toeplitz determinants we demonstrate the recursive fashion offered by the Dodgson Condensation identities via which Strong Szego Limit Theorems can be obtained. One instance of appearance of semi-framed Toeplitz determinants is in calculations related to the entanglement entropy for disjoint subsystems in the XX spin chain [JK, BGI+]. In addition, in the unpublished works of Professor Karl Lichetty and Professor Nicholas Witte, such determinants have found relevance respectively in the study of ensembles of nonintersecting paths and in the study of off-diagonal correlations of the anisotropic square-lattice Ising model. Besides the intrinsic mathematical interest in these structured determinants, the aforementioned applications have further motivated the study of the present work. ###### Contents * 1 Introduction * 1.1 An outline of main results * 2 Multi-Bordered Toeplitz Determinants * 2.1 Proofs of Theorems 1.6 and 1.7 * 2.2 Proof of Theorem 1.5 * 2.3 A new proof of the three term recurrence relations for BOPUC * 3 Semi-Framed, Framed and Multi-Framed Toeplitz Determinants * 3.1 The Riemann-Hilbert characterization for semi-framed Toeplitz determinants: Proof of Theorem 1.9 * 3.2 Semi-framed Toeplitz determinants involving rational frame symbols: Proofs of Theorems 1.10 and 1.11 * 3.3 Beyond the semi-framed case: framed and multi-framed Toeplitz determinants * 4 Appendix: solution of the Riemann-Hilbert problem for BOPUC with Szego-type symbols ## 1. Introduction For \(\phi\in L^{1}(\mathbb{T})\) denote the \(n\times n\) (pure) Toeplitz determinant by \[D_{n}[\phi]=\det_{0\leq j,k\leq n-1}\{\phi_{j-k}\}, \tag{1.1}\] ###### Abstract We consider the _bordered Toeplitz determinants_ of Toeplitz determinants \[\det\begin{pmatrix}a_{9}&\xi_{3,n-3}&\xi_{3,n-4}&\xi_{3,n-5}&\xi_{3,n-6}&\cdots& \xi_{3,2}&\xi_{3,1}&\xi_{3,0}&a_{10}\\ \gamma_{3,n-3}&a_{5}&\xi_{2,n-5}&\xi_{2,n-6}&\xi_{2,n-7}&\cdots&\xi_{2,1}&\xi_ {2,0}&a_{6}&\psi_{3,0}\\ \gamma_{3,n-4}&\gamma_{2,n-5}&a_{1}&\xi_{1,n-7}&\xi_{1,n-8}&\cdots&\xi_{1,0}& a_{2}&\psi_{2,0}&\psi_{3,1}\\ \gamma_{3,n-5}&\gamma_{2,n-6}&\gamma_{1,n-7}&\phi_{0}&\phi_{-1}&\cdots&\phi_{- n+7}&\psi_{1,0}&\psi_{2,1}&\psi_{3,2}\\ \gamma_{3,n-6}&\gamma_{2,n-7}&\gamma_{1,n-8}&\phi_{1}&\phi_{0}&\cdots&\phi_{- n+8}&\psi_{1,1}&\psi_{2,2}&\psi_{3,3}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\ \gamma_{3,2}&\gamma_{2,1}&\gamma_{1,0}&\phi_{n-7}&\phi_{n-8}&\cdots&\phi_{0}& \psi_{1,n-7}&\psi_{2,n-6}&\psi_{3,n-5}\\ \gamma_{3,1}&\gamma_{2,0}&a_{4}&\eta_{1,n-7}&\eta_{1,n-8}&\cdots&\eta_{1,0}& a_{3}&\psi_{2,n-5}&\psi_{3,n-4}\\ \gamma_{3,0}&a_{8}&\eta_{2,n-5}&\eta_{2,n-6}&\eta_{2,n-7}&\cdots&\eta_{2,1}& \eta_{2,0}&a_{7}&\psi_{3,n-3}\\ a_{12}&\eta_{3,n-3}&\eta_{3,n-4}&\eta_{3,n-5}&\eta_{3,n-6}&\cdots&\eta_{3,2}& \eta_{3,1}&\eta_{3,0}&a_{11}\end{pmatrix}. 
\tag{1.6}\] Our approach to conducting asymptotic analysis on multi-bordered, framed, and multi-framed Toeplitz determinants involves rewriting these structured determinants in terms of others with tractable asymptotics. Such reductions to simpler structured determinants result from utilizing the _Dodgson Condensation Identity3_, which we occasionally abbreviate as DCI (see [A, FK, B, GW] and references therein). Let \(\mathcal{M}\) be an \(n\times n\) matrix. By Footnote 3: also known as the _Desnanot–Jacobi_ identity or the _Sylvester determinant_ identity \[\mathcal{M}\begin{cases}j_{1}&j_{2}&\cdots&j_{\ell}\\ k_{1}&k_{2}&\cdots&k_{\ell}\end{cases},\] we mean the determinant of the \((n-\ell)\times(n-\ell)\) matrix obtained from \(\mathcal{M}\) by removing the rows \(j_{i}\) and the columns \(k_{i}\), \(1\leq i\leq\ell\). Although the order of writing the row and column indices is immaterial for this definition, in this work we prefer to respect the order of indices, for example we prefer to write \[\mathcal{M}\begin{cases}3&5\\ 1&4\end{cases},\] and not \[\mathcal{M}\begin{cases}5&3\\ 1&4\end{cases},\qquad\text{or}\qquad\mathcal{M}\begin{cases}3&5\\ 4&1\end{cases}\qquad\text{or}\qquad\mathcal{M}\begin{cases}5&3\\ 4&1\end{cases},\] although all of these are the same determinant. Let \(j_{1}<j_{2}\) and \(k_{1}<k_{2}\). The Dodgson Condensation identity reads \[\mathcal{M}\cdot\mathcal{M}\begin{cases}j_{1}&j_{2}\\ k_{1}&k_{2}\end{cases}=\mathcal{M}\begin{cases}j_{1}\\ k_{1}\end{cases}\cdot\mathcal{M}\begin{cases}j_{2}\\ k_{2}\end{cases}-\mathcal{M}\begin{cases}j_{1}\\ k_{2}\end{cases}\cdot\mathcal{M}\begin{cases}j_{2}\\ k_{1}\end{cases}. \tag{1.7}\] Speaking of reductions to simpler structured determinants through one or multiple applications of the DCI, it turns out that the multi-bordered Toeplitz determinants can be reduced to the pure and bordered Toeplitz determinants (1.1) and (1.3), while the framed and multi-framed Toeplitz determinants can be expressed in terms of pure Toeplitz determinants and what we refer to as _semi-framed_ Toeplitz determinants. These are determinants like \[\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{n-2}\\ \eta_{n-2}&\eta_{n-3}&\cdots&\eta_{0}&a\end{pmatrix},\quad\text{or}\quad\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{n-2}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{n-3}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{0}\\ \eta_{0}&\eta_{1}&\cdots&\eta_{n-2}&a\end{pmatrix}, \tag{1.8}\] for \(\phi,\psi,\eta\in L^{1}(\mathbb{T})\) and a parameter \(a\in\mathbb{C}\), where \(f_{j}\)'s are the Fourier coefficients of \(f\in\{\phi,\psi,\eta\}\). In the sequel we will denote the determinants in (1.8) by \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) respectively. **Remark 1.1**.: Regarding the other choices for positioning the Fourier coefficients of \(\psi\) and \(\eta\) in the last column and the last row, we will introduce two other (related) semi-framed Toeplitz determinants in §3 denoted by \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\). It turns out that such different placements of Fourier coefficients do in fact affect the leading order behavior of the asymptotics, as the size of the determinant grows to infinity (see Theorems 1.10 and 1.11).
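To make the identity (1.7) concrete, it may help to record its smallest nontrivial instance, stated here purely as an illustration. For a generic \(3\times 3\) matrix \(\mathcal{M}=(m_{pq})_{p,q=1}^{3}\), choosing \(j_{1}=k_{1}=1\) and \(j_{2}=k_{2}=3\) in (1.7) gives Dodgson's classical condensation formula
\[\det\mathcal{M}\cdot m_{22}=\det\begin{pmatrix}m_{22}&m_{23}\\ m_{32}&m_{33}\end{pmatrix}\det\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}-\det\begin{pmatrix}m_{21}&m_{22}\\ m_{31}&m_{32}\end{pmatrix}\det\begin{pmatrix}m_{12}&m_{13}\\ m_{22}&m_{23}\end{pmatrix},\]
which expresses a \(3\times 3\) determinant through \(2\times 2\) and \(1\times 1\) minors. The reductions of multi-bordered, framed, and multi-framed determinants carried out later in the paper are applications of exactly this mechanism, with the removed rows and columns chosen so that the surviving minors are pure, bordered, or semi-framed Toeplitz determinants.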
At this juncture, we would like to highlight two primary questions: 1. Given what we discussed above about the reductions of more complex structures to the bordered and semi-framed Toeplitz determinants, are the large-size asymptotics of these _simpler_ structured detereminants indeed tractable? 2. Why is it significant to delve into the asymptotic behavior of (multi-)bordered and (multi-)framed Toeplitz determinants? Before tackling these inquiries, it is worthwhile to place them in a broader perspective. The asymptotic properties of the more classical structured determinants, such as Toeplitz [BS, BS1, DIK1, CIK, K1], Hankel [C, CG, DIK, IK, K, BGM], and Toeplitz+Hankel [BE, BE1, BE2, BE3, BE4, Ch, FG, GI, BR], have been extensively and successfully explored primarily via operator theoretic and Riemann-Hilbert methods. These well-established asymptotic characteristics are recognized for their connection to fundamental questions spanning diverse fields, particularly in random matrix theory and mathematical physics. For this work it is useful to recall the existing theory for the pure Toeplitz determinants. The asymptotic behavior of Toeplitz determinants can be described by the Szego-Widom theorem [W, Sz1, BS], which is formulated as \[D_{n}[\phi]\sim G[\phi]^{n}E[\phi],\qquad n\to\infty, \tag{1.9}\] where, the terms \(G[\phi]\) and \(E[\phi]\) are defined by: \[G[\phi]=\exp\left([\log\phi]_{0}\right)\quad\text{and}\quad E[\phi]=\exp\left( \sum_{k\geq 1}k[\log\phi]_{k}[\log\phi]_{-k}\right). \tag{1.10}\] This theorem holds true when the function \(\phi\) is suitably smooth, does not vanish on the unit circle, and possesses a winding number of zero. We refer to [DIK1] for a comprehensive survey of the Szego theorem, including an intriguing account of its historical developments. Now we address question 1 mentioned above starting with bordered Toeplitz determinants. Recently in [BEGIL] it was demonstrated that an analogous Strong Szego Limit Theorem holds true for the bordered Toeplitz determinants. \[D_{n}^{B}[\phi;\psi]\sim G[\phi]^{n}E[\phi]\,F[\phi;\psi],\qquad n\to\infty, \tag{1.11}\] where \(F[\phi;\psi]\) is a constant described in Theorems 1.3 and 1.4 below. Theorem 1.3 below discusses the asymptotics of \(D_{n}^{B}[\phi;\psi]\), where \(\psi\) is of the form \[\psi(z)=q_{1}(z)\phi(z)+q_{2}(z), \tag{1.12}\] where \[q_{1}(z)=a_{0}+a_{1}z+\frac{b_{0}}{z}+\sum_{j=1}^{m}\frac{b_{j}z}{z-c_{j}}, \quad\text{and}\quad q_{2}(z)=\hat{a}_{0}+\hat{a}_{1}z+\frac{\hat{b}_{0}}{z}+ \sum_{j=1}^{m}\frac{\hat{b}_{j}}{z-c_{j}}, \tag{1.13}\] all parameters are complex and the \(c_{j}\) are nonzero and do not lie on the unit circle. In fact, this form of the border symbol was considered in [BEGIL] as an inspiration from the Ising model. It was first established in 1987 by Au-Yang and Perk [AYP] that the next-to-diagonal two point correlation function is in fact the bordered Toeplitz determinant \[\langle\sigma_{0,0}\sigma_{N-1,N}\rangle=D_{N}^{B}[\widehat{\phi};\widehat{ \psi}], \tag{1.14}\] with \[\widehat{\phi}(z)=\sqrt{\frac{1-k^{-1}z^{-1}}{1-k^{-1}z}}\quad\text{and}\quad \widehat{\psi}(z)=\frac{C_{v}z\widehat{\phi}(z)+C_{h}}{S_{v}(z-c_{*})}, \tag{1.15}\] where \(k,C_{v},C_{h},S_{v}\), and \(c_{*}\) are all physical parameters of the model. 
In the context of the low-temperature two-dimensional Ising model, the analogue of the Strong Szego Limit Theorem for bordered Toeplitz determinants (see Theorem 1.3 below) was later used in [BEGIL] to extract the leading and subleading terms of the _long-range-order_ along the next-to-diagonal direction and comparisons with the diagonal direction were made. It was concluded that although the bordered Toeplitz determinant which defines the next-to-diagonal correlation function depends on the horizontal and vertical coupling constants, its leading order asymptotics does not. More interestingly, it was established that the sensitivity to the horizontal and vertical parameters is reflected in the second-order term of the asymptotic expansion. Before recalling the Strong Szego Limit Theorems established in [BEGIL], let us define a class of symbols we are mostly concerned with in this work. **Definition 1.2**.: Throughout the paper, we will occasionally refer to a symbol as _Szego-type_, if a) it is smooth and nonzero on the unit circle, b) has no winding number, and c) admits an analytic continuation in a neighborhood of the unit circle. **Theorem 1.3**.: [BEGIL] _Let \(D_{n}^{B}[\phi;\psi]\) be the bordered Toeplitz determinant with \(\psi=q_{1}\phi+q_{2}\) given by (1.12) and (1.13), and \(\phi\) of Szego-type. Then, the following asymptotic behavior of \(D_{n}^{B}[\phi;\psi]\) as \(n\to\infty\) takes place_ \[D_{n}^{B}[\phi;\psi]=G[\phi]^{n}E[\phi]\left(F[\phi;\psi]+O\left(e^{-cn}\right)\right), \tag{1.16}\] _where \(G[\phi]\) and \(E[\phi]\) are given by (1.10),_ \[F[\phi;\psi]=a_{0}+b_{0}[\log\phi]_{1}+\sum_{\begin{subarray}{c}j=1\\ 0<|c_{j}|<1\end{subarray}}^{m}b_{j}\,\frac{\alpha(c_{j})}{\alpha(0)}+\frac{1}{\alpha(0)}\left(\hat{a}_{0}-\hat{a}_{1}[\log\phi]_{-1}-\sum_{\begin{subarray}{c}j=1\\ |c_{j}|>1\end{subarray}}^{m}\frac{\hat{b}_{j}}{c_{j}}\alpha(c_{j})\right), \tag{1.17}\] \[\alpha(z):=\exp\left[\frac{1}{2\pi i}\int_{\mathbb{T}}\frac{\ln(\phi(\tau))}{\tau-z}d\tau\right], \tag{1.18}\] _and \(\mathfrak{c}\) is some positive constant._ Transitioning beyond the category of symbols linked to the Ising model, for a broader range of border symbols, a different version of the Strong Szego Limit Theorem was proven in [BEGIL]. In this instance, the only requirement was that \(\psi\) has an analytic continuation in some neighborhood of the unit circle. **Theorem 1.4**.: [BEGIL] _Let \(\psi(z)\) be a function which admits an analytic continuation in a neighborhood of the unit circle, and let \(\phi\) be of Szego-type. Denote by \(\phi_{\pm}(z)\) the factors of a canonical Wiener-Hopf factorization of the symbol \(\phi(z)\), i.e., \(\phi=\phi_{-}\phi_{+}\). Then_ \[D_{n}^{B}[\phi;\psi]=G[\phi]^{n}E[\phi]\left(F[\phi;\psi]+O\left(e^{-cn}\right)\right), \tag{1.19}\] _where \(G[\phi]\) and \(E[\phi]\) are given by (1.10),_ \[F[\phi;\psi]=\frac{[\phi_{-}^{-1}\psi]_{0}}{[\phi_{+}]_{0}}, \tag{1.20}\] _and \(\mathfrak{c}\) is some positive constant._ It is worth mentioning that these asymptotic results for the bordered Toeplitz determinants were obtained in parallel and independently of each other using the Riemann-Hilbert and operator-theoretic methods. While as discussed above the asymptotics of bordered determinants for a general class of symbols were established in [BEGIL], the asymptotics of semi-framed Toeplitz determinants remained uncharted territory. In this work, we undertake the task of filling this gap.
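As a quick illustration of Theorem 1.4 (a consistency check rather than a statement taken from the theorem itself), take the border symbol equal to the bulk symbol, \(\psi=\phi\). Since \(\phi_{-}^{-1}\phi=\phi_{+}\), the constant (1.20) evaluates to
\[F[\phi;\phi]=\frac{[\phi_{-}^{-1}\phi]_{0}}{[\phi_{+}]_{0}}=\frac{[\phi_{+}]_{0}}{[\phi_{+}]_{0}}=1,\]
so (1.19) collapses to the classical Szego-Widom asymptotics (1.9) for \(D_{n}[\phi]\). This is exactly what one expects, because replacing the border column by the Fourier coefficients of \(\phi\) itself reproduces the pure Toeplitz determinant, \(D_{n}^{B}[\phi;\phi]=D_{n}[\phi]\) (see (2.46) below).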
A pivotal enabling factor for accessing these asymptotics lies in the connection of these objects to the system of bi-orthogonal polynomials on the unit circle (BOPUC): Just as BOPUC characterize (single) bordered determinants [BEGIL], we demonstrate that the reproducing kernel of BOPUC serves as the characterizing object for the semi-framed Toeplitz determinants. Consequently, with these characterizations in terms of bi-orthogonal polynomials and their reproducing kernel, we can employ the Riemann-Hilbert approach to BOPUC [BDJ] to attain the sought-after asymptotics for multi-bordered, framed, and multi-framed Toeplitz determinants. Now, we make an attempt to address question 2 mentioned above. The semi-framed Toeplitz determinants have already appeared in the calculations of entanglement entropy for disjoint subsystems in the XX spin chain [JK, BGI+], which we briefly recall below. To ensure consistency in notations, we closely follow the paper [BGI+]. For a more comprehensive description of the model, we refer to [BGI+], [JK], and the references therein. Consider the chain of free fermions \[H_{F}=-\sum_{j=1}^{N}b_{j}^{\dagger}b_{j+1}+b_{j}b_{j+1}^{\dagger}, \tag{1.21}\] where the Fermi operators \(b_{j}\) are defined by the anticommutation relations \[\{b_{j},b_{k}\}=0\quad\text{and}\quad\{b_{j},b_{k}^{\dagger}\}=\delta_{jk}. \tag{1.22}\] Define the quantity \[S(\rho_{P})=\lim_{\varepsilon\searrow 0}\frac{1}{2\pi i}\oint_{\Gamma_{ \varepsilon}}e(1+\varepsilon,\lambda)\frac{d}{d\lambda}\ln D(\lambda)\ d\lambda, \tag{1.23}\] where \[e(x,v):=-\frac{x+v}{2}\ln\frac{x+v}{2}-\frac{x-v}{2}\ln\frac{x-v}{2},\] the contour \(\Gamma_{\varepsilon}\) goes around the \([-1,1]\) interval once in the positive direction avoiding the cuts \((-\infty,-1-\varepsilon]\cup[1+\varepsilon,\infty)\) of \(e(1+\varepsilon,\cdot)\), and the function \(D(\lambda)\) is defined further below (in terms of semi-framed Toeplitz determinants). Let \(k,m,n\in\mathbb{N}\). The quantity in (1.23) is considered as a measure of entanglement between the subsystem \[P=\{1,2,\ldots,m\}\cup\{m+k+1,m+k+2,\ldots,m+k+n\}, \tag{1.24}\] and the rest of the chain of free fermions (1.21) in the thermodynamic limit \(N\to\infty\). The connection to the semi-framed Toeplitz determinants is through the function \(D(\lambda)\), which we define here. Let \(g\colon\mathbb{T}\to\mathbb{C}\) be defined as \[g(z)=\begin{cases}1&\Re z>0,\\ -1&\Re z<0.\end{cases} \tag{1.25}\] Consider \[A=\begin{pmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{pmatrix}\in\mathbb{C}^{(m+n)\times(m+n)}\,, \tag{1.26}\] and \[D(\lambda):=\det(\lambda I-A),\qquad\lambda\in\mathbb{C}, \tag{1.27}\] where \[A_{11}=-T_{m}[g]\in\mathbb{C}^{m\times m},\quad A_{22}=-T_{n}[g]\in\mathbb{C} ^{n\times n},\quad A_{12}=A_{21}^{T}=\left(A_{ij}(k)\right)_{i=1,\ldots,m;j=1, \ldots,n}\in\mathbb{C}^{m\times n}, \tag{1.28}\] and \[\mathcal{A}_{ij}(k)\equiv\mathcal{A}_{ij}=-\left|\begin{matrix}g_{i-j-m-k}&g_{i-m-1 }&g_{i-m-2}&\cdots&g_{i-m-k}\\ g_{1-j-k}&g_{0}&g_{-1}&\cdots&g_{1-k}\\ g_{2-j-k}&g_{1}&g_{0}&\cdots&g_{2-k}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ g_{-j}&g_{k-1}&g_{k-2}&\cdots&g_{0}\end{matrix}\right|. \tag{1.29}\] Notice that \(\mathcal{A}_{ij}(k)\) is a \((k+1)\times(k+1)\) semi-framed Toeplitz matrix, which can be written in terms of the semi-framed Toeplitz determinants \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) introduced above. 
Indeed, by multiple adjacent row and column swaps, and recalling (1.44) and (1.45), we can write \[\mathcal{A}_{ij}=-\left|\begin{matrix}g_{0}&g_{-1}&\cdots&g_{1-k}&g_{1-j-k}\\ g_{1}&g_{0}&\cdots&g_{2-k}&g_{2-j-k}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ g_{k-1}&g_{k-2}&\cdots&g_{0}&g_{-j}\\ g_{i-m-1}&g_{i-m-2}&\cdots&g_{i-m-k}&g_{i-j-m-k}\end{matrix}\right|=-\mathcal{H}_{k+1}\left[g(z);g(z)z^{j+k-1},g(z)z^{m+k-i};g_{i-j-m-k}\right]\] \[=-\mathcal{L}_{k+1}\left[g(z);\tilde{g}(z)z^{-j},\tilde{g}(z)z^{i-m-1};g_{i-j-m-k}\right], \tag{1.30}\] where \(\tilde{f}(z)=f(z^{-1})\). Even though the computation of the entanglement between the chain (1.21) and the rest of the system is of interest in the regime where all three parameters \(m,n,k\) tend towards infinity, the authors in [BGI+] specifically focused on the scenario where \(k=1\) and \(m,n\to\infty\). We quote4: Footnote 4: Also see Remark 1 in Section 3 of [BGI+]. _Our ultimate interest is to analyse \(S(\rho_{P})\) as \(k,m,n\to\infty\), however, at this point the general problem seems to be far too complicated to attack directly. Therefore we decided to start with the easier case when the gap between the two intervals is fixed to be \(k=1\). \(\cdots\) As we shall see, this simplest case already leads to a mathematically very challenging problem._ Now let us demonstrate how the findings of this work could be relevant to the goal of [BGI+] in extending the analysis to the asymptotic regime \(k\to\infty\). Indeed in §3, among other results, we prove that for general symbols the semi-framed Toeplitz determinants have a representation in terms of the solution of the BOPUC Riemann-Hilbert problem. For example, for \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) we show \[\frac{\mathcal{L}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{\bar{\eta}(z_{2})\bar{\psi}(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}}\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}, \tag{1.31}\] where \(X_{11}\) and \(X_{21}\) are the entries in the first column of the solution \(X\) to the Riemann-Hilbert problem for BOPUC associated with the orthogonality weight \(\phi\). More precisely, \(X\) solves the following Riemann-Hilbert problem (RHP) [BDJ]: * **RH-X1**\(X:\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-X2** The limits of \(X(\xi;n)\) as \(\xi\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, which are denoted by \(X_{\pm}(z;n)\) respectively and are related by the _jump condition_ (1.32) \[X_{+}(z;n)=X_{-}(z;n)\begin{pmatrix}1&z^{-n}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-X3** As \(z\to\infty\) (1.33) \[X(z;n)=\left(I+\frac{\overset{\infty}{X}_{1}(n)}{z}+\frac{\overset{\infty}{X}_{2}(n)}{z^{2}}+O\left(z^{-3}\right)\right)z^{n\sigma_{3}},\] where \[\sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\] is the third Pauli matrix. Since the symbol \(\phi=g\) given by (1.25) is a Fisher-Hartwig symbol, one may refer to [2] for the asymptotics of \(D_{n}[g]\) and the polynomials \(X_{11}(z;n)\) and \(X_{21}(z;n)\) as \(n\to\infty\), and then perform the integrations in (1.31). This approach is expected to yield the asymptotic behavior of \(\mathcal{A}_{ij}(k)\) as \(k\) approaches infinity, which could contribute to our comprehension of the entanglement in the limit where \(k,m\), and \(n\) all tend to infinity.
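For orientation, the Fourier coefficients of the piecewise constant symbol \(g\) in (1.25), which populate every block in (1.28) and (1.29), are easily computed; we record them here only for the reader's convenience. With the standard convention \(g_{j}=\int_{\mathbb{T}}g(\zeta)\zeta^{-j}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\),
\[g_{0}=0,\qquad g_{j}=\frac{2}{\pi j}\sin\frac{\pi j}{2},\quad j\neq 0,\]
so \(g_{j}\) vanishes for all even \(j\) and equals \(2(-1)^{(j-1)/2}/(\pi j)\) for odd \(j\). In particular, the entries of \(T_{m}[g]\), \(T_{n}[g]\), and of the semi-framed determinants \(\mathcal{A}_{ij}(k)\) decay only like \(1/|j|\), reflecting the jump discontinuities of \(g\) and giving another way of seeing its Fisher-Hartwig nature mentioned above.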
We would like to emphasize that this work is expected to provide the _general framework_ of translating the objects of interest in terms of the solution to the \(X\)-RHP. We do occasionally use the RHP characterizations, such as (1.31), to demonstrate how the asymptotics could be obtained, but we do not intend to exhaust all the cases. For example, as a way of explanation, we show in §3 how this scheme works in the case of Szego-type (non-Fisher-Hartwig) symbols and when \(\psi\) and \(\eta\) are either rational functions, or the product of a rational function with the bulk symbol \(\phi\). We have made such choices since they are simple and yet nontrivial enough to illustrate the procedure, but by no means do we want to convey the message that those are the only cases in which the asymptotic analysis is feasible. We plan to undertake the task of using identities like (1.31) for Fisher-Hartwig symbols in a forthcoming work, especially in connection to the entanglement problem discussed above. In addition to their relevance in the context of XX quantum spin chains, the author has recently received information on the appearance and relevance of multi-bordered, semi-framed, framed and multi-framed Toeplitz determinants in other contexts. Professor Karl Liechty has communicated to the author that these structured determinants arise in the analysis of ensembles of nonintersecting paths, via the Lindstrom-Gessel-Viennot (LGV) formula5. In a separate communication, Professor Nicholas Witte has highlighted to the author the relevance of these structures in his ongoing research on the Ising model [20]. These recent discoveries have served as additional inspiration for the current study. Footnote 5: which is going to be featured in the upcoming work [GL] ### An outline of main results #### 1.1.1. Strong Szego Limit Theorem for two-bordered Toeplitz determinants **Theorem 1.5**.: _For \(\ell=1,2\), let \(\psi_{\ell}(z)=q_{1}^{(\ell)}(z)\phi(z)+q_{2}^{(\ell)}(z)\) where_ \[q_{1}^{(\ell)}(z)=a_{0}^{(\ell)}+a_{1}^{(\ell)}z+\frac{b_{0}^{(\ell)}}{z}+\sum_{j=1}^{m_{\ell}}\frac{b_{j}^{(\ell)}z}{z-c_{j}^{(\ell)}},\quad\text{and}\quad q_{2}^{(\ell)}(z)=\hat{a}_{0}^{(\ell)}+\hat{a}_{1}^{(\ell)}z+\frac{\hat{b}_{0}^{(\ell)}}{z}+\sum_{j=1}^{m_{\ell}}\frac{\hat{b}_{j}^{(\ell)}}{z-c_{j}^{(\ell)}}, \tag{1.34}\] _and suppose that \(\phi\) is of Szego-type. Then, the associated two-bordered Toeplitz determinant has the following asymptotic behavior as \(n\to\infty\):_ \[D_{n}^{B}[\phi;\boldsymbol{\psi}_{2}]\equiv D_{n}^{B}[\phi;\psi_{1},\psi_{2}]=G^{n}[\phi]E[\phi]\left\{J_{1}[\phi,\psi_{1},\psi_{2}]+O\left(\rho^{-n}\right)\right\}, \tag{1.35}\] _where \(G[\phi]\) and \(E[\phi]\) are given by (1.10),_ \[J_{1}[\phi,\psi_{1},\psi_{2}]=\begin{vmatrix}F[\phi,\psi_{2}]&F[\phi,\psi_{1}]\\ H[\phi,\psi_{2}]&H[\phi,\psi_{1}]\end{vmatrix}, \tag{1.36}\] _in which \(F[\phi,\psi]\) is given by (1.17), and_ \[H[\phi;\psi]=a_{1}-\sum_{j=1}^{m}\frac{b_{j}}{c_{j}}+a_{0}[\log\phi]_{1}+b_{0}[\log\phi]_{2}+\frac{b_{0}}{2}[\log\phi]_{1}^{2}+\frac{1}{G[\phi]}\left(\hat{a}_{1}-\sum_{\begin{subarray}{c}j=1\\ |c_{j}|>1\end{subarray}}^{m}\frac{\hat{b}_{j}}{c_{j}^{2}}\alpha(c_{j})+\sum_{\begin{subarray}{c}j=1\\ 0<|c_{j}|<1\end{subarray}}^{m}\frac{b_{j}}{c_{j}}\alpha(c_{j})\right).
\tag{1.37}\] _In the above formula \(\alpha\) is given by (1.18), and the number \(\rho\) is such that: \(1<\rho<\min\limits_{\begin{subarray}{c}1\leq j\leq m\\ |c_{j}|>1\end{subarray}}\{|c_{j}|\}\), \(\max\limits_{\begin{subarray}{c}1\leq j\leq m\\ 0<|c_{j}|<1\end{subarray}}\{|c_{j}|\}<\rho^{-1}<1\), and \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ At the end of Section 2, in Remark 2.15 we explain how the techniques used for the two-bordered case can be recursively used to obtain the asymptotics of \(k\)-bordered Toeplitz determinants, \(k>2\). #### 1.1.2. The Riemann-Hilbert problem for BOPUC when the weight has a nonzero winding number In order to arrive at the above asymptotic results for multi-bordered Toeplitz determinants, one is invited to asymptotically analyze (single) bordered Toeplitz determinants of the form \(D_{n}^{B}[z\phi;q_{1}\phi+q_{2}]\), as a result of employing a Dodgson Condensation identity. Notice that if \(\phi\) is of Szego-type, then \(z\phi\) is not, as it does not have a zero winding number. Therefore the asymptotics of \(D_{n}^{B}[z\phi;q_{1}\phi+q_{2}]\) can not be obtained from Theorem 1.3 and thus must be treated differently. Such bordered determinants are characterized in terms of the solution to the following Riemann-Hilbert problem: * **RH-Z1**\(Z(\cdot;n):\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-Z2** The limits of \(Z(\zeta;n)\) as \(\zeta\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, and are denoted \(Z_{\pm}(z;n)\) respectively and are related by (1.38) \[Z_{+}(z;n)=Z_{-}(z;n)\begin{pmatrix}1&z^{-n+1}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-Z3** As \(z\to\infty\) \[Z(z;n)=\big{(}I+O\,(z^{-1})\big{)}z^{n\sigma_{3}}. \tag{1.39}\] This is the same as **RH-X1** through **RH-X3**, the only difference being that \(\phi\) is now replaced by \(z\phi\). However, the usual steps of the Deift-Zhou nonlinear steepest descent analysis [DZ] do not work for a symbol with nonzero winding number6. Instead, we find the explicit formulae relating the solution of the \(Z\)-RHP to the solution of the \(X\)-RHP, which is amenable to the Deift-Zhou nonlinear steepest descent analysis (see the appendix in §4). In our work, such relations are essential in proving Theorem 1.5. Footnote 6: This is because at the stage of finding the solution to the global parametrix RHP one would need to find the solution of the scalar RHP: \(\beta_{+}(z)-\beta_{-}(z)=\log(z\phi(z))\) for \(z\in\mathbb{T}\) and \(\beta(z)=1+O\,(1/z)\) as \(z\to\infty\), see §4. However the Plemelj-Sokhotskii formula can not be applied [G] as the function \(\log(z\phi(z))\) has a jump discontinuity on the unit circle, for a Szego-type \(\phi\).
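Before stating the two results, it is worth recording an elementary observation, included here only to orient the reader. The jump matrix in RH-Z2 coincides with the jump matrix of RH-X2 with \(n\) replaced by \(n-1\), since
\[z^{-n+1}\phi(z)=z^{-(n-1)}\phi(z),\qquad z\in\mathbb{T};\]
the two problems differ only through the normalization at infinity, which in RH-Z3 still involves \(z^{n\sigma_{3}}\) rather than \(z^{(n-1)\sigma_{3}}\). Theorem 1.7 below removes this mismatch by shifting the index \(n\), whereas Theorem 1.6 works with \(X(z;n)\) itself at the price of the extra right factor \(\mathrm{diag}(1,z)\).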
**Theorem 1.6**.: _The solution \(Z(z;n)\) to the Riemann-Hilbert problem **RH-Z1** through **RH-Z3** can be expressed in terms of the data extracted from the solution \(X(z;n)\) of the Riemann-Hilbert problem **RH-X1** through **RH-X3** as_ \[Z(z;n)=\left[\begin{pmatrix}\frac{\overset{\infty}{X}_{1,12}(n)X_{21}(0;n)}{X_{11}(0;n)}&-\overset{\infty}{X}_{1,12}(n)\\ -\frac{X_{21}(0;n)}{X_{11}(0;n)}&1\end{pmatrix}z^{-1}+\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\right]X(z;n)\begin{pmatrix}1&0\\ 0&z\end{pmatrix}, \tag{1.40}\] _where \(\overset{\infty}{X}_{1,12}(n)\) is the \(12\)-entry of the matrix \(\overset{\infty}{X}_{1}(n)\) in the asymptotic expansion (1.33)._ We alternatively prove another way to connect the solution of the \(Z\)-RHP to the solution of the \(X\)-RHP, described in the following theorem. **Theorem 1.7**.: _The solution \(Z(z;n)\) to the Riemann-Hilbert problem **RH-Z1** through **RH-Z3** can be expressed in terms of the data extracted from the solution \(X(z;n)\) of the Riemann-Hilbert problem **RH-X1** through **RH-X3** as_ \[Z(z;n)=\begin{pmatrix}z+\overset{\infty}{X}_{1,22}(n-1)-\frac{\overset{\infty}{X}_{2,12}(n-1)}{\overset{\infty}{X}_{1,12}(n-1)}&-\overset{\infty}{X}_{1,12}(n-1)\\ \frac{1}{\overset{\infty}{X}_{1,12}(n-1)}&0\end{pmatrix}X(z;n-1), \tag{1.41}\] _where \(\overset{\infty}{X}_{1,jk}(n)\) and \(\overset{\infty}{X}_{2,jk}(n)\) are the \(jk\)-entries of the matrices \(\overset{\infty}{X}_{1}(n)\) and \(\overset{\infty}{X}_{2}(n)\) in the asymptotic expansion (1.33)._ **Remark 1.8**.: The compatibility of these two theorems offers a new proof for the recurrence relations governing the system of bi-orthogonal polynomials on the unit circle, as detailed in Lemma 2.16. #### 1.1.3. Strong Szego Limit Theorems for semi-framed Toeplitz determinants For \(\phi,\psi,\eta\in L^{1}(\mathbb{T})\) and a parameter \(a\in\mathbb{C}\) define the \(n\times n\) semi-framed Toeplitz determinants \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) as \[\mathcal{E}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{n-2}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{n-3}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{0}\\ \eta_{n-2}&\eta_{n-3}&\cdots&\eta_{0}&a\end{pmatrix}, \tag{1.42}\] \[\mathcal{G}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{n-2}\\ \eta_{0}&\eta_{1}&\cdots&\eta_{n-2}&a\end{pmatrix}, \tag{1.43}\] \[\mathcal{H}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{n-2}\\ \eta_{n-2}&\eta_{n-3}&\cdots&\eta_{0}&a\end{pmatrix}, \tag{1.44}\] and \[\mathcal{L}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n+2}&\psi_{n-2}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{n-3}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{0}\\ \eta_{0}&\eta_{1}&\cdots&\eta_{n-2}&a\end{pmatrix}, \tag{1.45}\] where \(f_{j}\)'s are the Fourier coefficients of \(f\in\{\phi,\psi,\eta\}\).
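To see concretely how the four placements of the frame entries differ, one can specialize (1.42)-(1.45) to the smallest nontrivial size \(n=3\) (this is purely illustrative); writing \(\mathcal{E}_{3}\equiv\mathcal{E}_{3}[\phi;\psi,\eta;a]\) and similarly for the others,
\[\mathcal{E}_{3}=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\psi_{1}\\ \phi_{1}&\phi_{0}&\psi_{0}\\ \eta_{1}&\eta_{0}&a\end{pmatrix},\quad\mathcal{G}_{3}=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\psi_{0}\\ \phi_{1}&\phi_{0}&\psi_{1}\\ \eta_{0}&\eta_{1}&a\end{pmatrix},\quad\mathcal{H}_{3}=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\psi_{0}\\ \phi_{1}&\phi_{0}&\psi_{1}\\ \eta_{1}&\eta_{0}&a\end{pmatrix},\quad\mathcal{L}_{3}=\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\psi_{1}\\ \phi_{1}&\phi_{0}&\psi_{0}\\ \eta_{0}&\eta_{1}&a\end{pmatrix}.\]
All four share the same Toeplitz block and differ only in the ordering of the frame entries; at \(n=2\) they even coincide, all reducing to \(a\phi_{0}-\psi_{0}\eta_{0}\). As Theorems 1.10 and 1.11 show, this seemingly innocuous difference is already visible in the leading order of the large \(n\) asymptotics.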
Consider the reproducing kernel \[K_{n}(z,\tilde{z}):=\sum_{j=0}^{n}Q_{j}(\tilde{z})\widehat{Q}_{j}(z), \tag{1.46}\] of the system of bi-orthogonal polynomials on the unit circle associated with the symbol \(\phi\), satisfying the _bi-orthogonality_ relation \[\int_{\mathbb{T}}Q_{n}(\zeta)\widehat{Q}_{m}(\zeta^{-1})\phi(\zeta)\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\delta_{nm},\qquad n,m\in\mathbb{N}\cup\{0\}. \tag{1.47}\] In Section 3 we prove the following representation of the above semi-framed Toeplitz determinants in terms of the reproducing kernel (1.46). **Theorem 1.9**.: _The semi-framed Toeplitz determinants \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) can be represented in terms of the reproducing kernel of the system of bi-orthogonal polynomials on the unit circle associated with \(\phi\) given by (1.46) and (1.47) as_ \[\frac{\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\left[\int_{\mathbb{T}}K_{n}(z_{1},z_{2})z_{2}^{-n}\eta(z_{2})\frac{dz_{2}}{2\pi iz_{2}}\right]z_{1}^{-n}\psi(z_{1})\frac{dz_{1}}{2\pi iz_{1}}, \tag{1.48}\] \[\frac{\mathcal{G}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\left[\int_{\mathbb{T}}K_{n}(z_{1}^{-1},z_{2}^{-1})\eta(z_{2})\frac{dz_{2}}{2\pi iz_{2}}\right]\psi(z_{1})\frac{dz_{1}}{2\pi iz_{1}}, \tag{1.49}\] \[\frac{\mathcal{H}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\left[\int_{\mathbb{T}}K_{n}(z_{1}^{-1},z_{2})z_{2}^{-n}\eta(z_{2})\frac{dz_{2}}{2\pi iz_{2}}\right]\psi(z_{1})\frac{dz_{1}}{2\pi iz_{1}}, \tag{1.50}\] \[\frac{\mathcal{L}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\left[\int_{\mathbb{T}}K_{n}(z_{1},z_{2}^{-1})\eta(z_{2})\frac{dz_{2}}{2\pi iz_{2}}\right]z_{1}^{-n}\psi(z_{1})\frac{dz_{1}}{2\pi iz_{1}}, \tag{1.51}\] _where \(D_{n}[\phi]\) is given by (1.1)._ Using the Christoffel-Darboux identity for the bi-orthogonal polynomials on the unit circle, we obtain the following characterizations in terms of the solution \(X\) of RH-X1 through RH-X3 in the following Corollary.
**Corollary 1.9.1**.: _The semi-framed Toeplitz determinants \(\mathcal{H}_{n+2}\left[\phi;\psi,\eta;a\right]\), \(\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]\), \(\mathcal{G}_{n+2}\left[\phi;\psi,\eta;a\right]\), and \(\mathcal{L}_{n+2}\left[\phi;\psi,\eta;a\right]\) are encoded into the \(X\)-RHP data as_ (1.52) \[\frac{\mathcal{H}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{z_{1}^{-n}z_{2}^{-n}\eta(z_{2})\psi(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{dz_{2}}{2\pi iz_{2}}\frac{dz_{1}}{2\pi iz_{1}},\] (1.53) \[\frac{\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{z_{2}^{-n}\eta(z_{2})\bar{\psi}(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{dz_{2}}{2\pi iz_{2}}\frac{dz_{1}}{2\pi iz_{1}},\] (1.54) \[\frac{\mathcal{G}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{z_{1}^{-n}\bar{\eta}(z_{2})\psi(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{dz_{2}}{2\pi iz_{2}}\frac{dz_{1}}{2\pi iz_{1}},\] (1.55) \[\frac{\mathcal{L}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n+1}[\phi]}=a-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{\bar{\eta}(z_{2})\bar{\psi}(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{dz_{2}}{2\pi iz_{2}}\frac{dz_{1}}{2\pi iz_{1}},\] _where \(D_{n}[\phi]\) is given by (1.1), and \(X_{11}\) and \(X_{21}\) are respectively the \(11\) and \(21\) entries of the solution to RH-X1 through RH-X3._ In Section 3.2, we prove the following Strong Szego Theorems for semi-framed Toeplitz determinants for a class of _frame symbols_ \(\psi\) and \(\eta\). **Theorem 1.10**.: _Let \(\phi\) be of Szego-type, and \(c\) and \(d\) be complex numbers that do not lie on the unit circle. Then, the following Strong Szego asymptotics hold for \(\mathcal{H},\mathcal{L},\mathcal{E}\) and \(\mathcal{G}\):_ \[\mathcal{H}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}}{z-d_{j}},\sum_{k=1}^{m_{2}}\frac{B_{k}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+O(\rho^{-n})\right). \tag{1.56}\] \[\mathcal{L}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}}{z-d_{j}},\sum_{k=1}^{m_{2}}\frac{B_{k}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+O\left(\rho^{-n}\right)\right). \tag{1.57}\] \[\mathcal{E}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}}{z-d_{j}},\sum_{k=1}^{m_{2}}\frac{B_{k}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+\sum_{j=1}^{m_{1}}\sum_{k=1\atop|d_{j}|>1\atop|c_{k}|>1}^{m_{2}}A_{j}B_{k}\frac{\alpha(c_{k})}{\alpha(d_{j}^{-1})}\cdot\frac{1}{1-c_{k}d_{j}}+O\left(\rho^{-n}\right)\right). \tag{1.58}\] \[\mathcal{G}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}}{z-d_{j}},\sum_{k=1}^{m_{2}}\frac{B_{k}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+\sum_{j=1}^{m_{1}}\sum_{k=1\atop|d_{j}|>1\atop|c_{k}|>1}^{m_{2}}A_{j}B_{k}\frac{\alpha(d_{j})}{\alpha(c_{k}^{-1})}\cdot\frac{1}{1-c_{k}d_{j}}+O\left(\rho^{-n}\right)\right). \tag{1.59}\] _Here the number \(\rho\) is such that: \(1<\rho<\min_{1\leq j\leq m_{1},1\leq k\leq m_{2}\atop|d_{j}|>1,|c_{k}|>1}\{|d_{j}|,|c_{k}|\}\), \(\max_{1\leq j\leq m_{1},1\leq k\leq m_{2}\atop|d_{j}|<1,|c_{k}|<1}\{|d_{j}|,|c_{k}|\}<\rho^{-1}<1\), and \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ **Theorem 1.11**.: _Let \(\phi\) be a Szego-type symbol, and \(c\) and \(d\) be complex numbers that do not lie on the unit circle.
Then, the following Strong Szego asymptotics hold for \(\mathcal{H},\mathcal{L},\mathcal{E}\) and \(\mathcal{G}\):_ \[\mathcal{H}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}\phi}{z-d_{j}},\sum_{ k=1}^{m_{2}}\frac{B_{k}\phi}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+O\left( \rho^{-n}\right)\right). \tag{1.60}\] \[\mathcal{L}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}\bar{\phi}}{z-d_{j}}, \sum_{k=1}^{m_{2}}\frac{B_{k}\bar{\phi}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi] \left(a+O\left(\rho^{-n}\right)\right). \tag{1.61}\] \[\mathcal{E}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}\bar{\phi}}{z-d_{j}}, \sum_{k=1}^{m_{2}}\frac{B_{k}\phi}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a +\sum_{j=1}^{m_{1}}\sum_{k=1\atop|d_{j}|<1\atop|c_{k}|<1}^{m_{2}}A_{j}B_{k} \frac{\alpha(c_{k})}{\alpha(d_{j}^{-1})}\cdot\frac{1}{1-c_{k}d_{j}}+O\left( \rho^{-n}\right)\right). \tag{1.62}\] \[\mathcal{G}_{n+1}\left[\phi;\sum_{j=1}^{m_{1}}\frac{A_{j}\phi}{z-d_{j}},\sum_{ k=1}^{m_{2}}\frac{B_{k}\bar{\phi}}{z-c_{k}};a\right]=G^{n}[\phi]E[\phi]\left(a+\sum_{j=1 \atop|d_{j}|<1\atop|c_{k}|<1}^{m_{1}}A_{j}B_{k}\frac{\alpha(d_{j})}{\alpha(c _{k}^{-1})}\cdot\frac{1}{1-c_{k}d_{j}}+O\left(\rho^{-n}\right)\right). \tag{1.63}\] _Here the number \(\rho\) is such that: \(1<\rho<\min_{1\leq j\leq m_{1},1\leq k\leq m_{2}\atop|d_{j}|>1,|c_{k}|>1}\{|d_ {j}|,|c_{k}|\}\), \(\max_{1\leq j\leq m_{1},1\leq k\leq m_{2}\atop|d_{j}|<1,|c_{k}|<1}\{|d_{j}|,|c_ {k}|\}<\rho^{-1}<1\), and \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ In Section 3.3, we eventually redirect our attention to framed and multi-framed Toeplitz determinants. Our intention in this section is not to present formal proofs of asymptotic results. Instead, our goal is to present a broad framework for approaching the asymptotic analysis of these determinants, emphasizing their recursive characteristics in relation to the Dodgson Condensation identities. We will show that the semi-framed Toeplitz determinants are the building blocks for the asymptotic analysis of framed and multi-framed determinants. ## 2. Multi-Bordered Toeplitz Determinants In this section we focus on multi-bordered Toeplitz determinants \[D_{n}^{B}[\phi;\boldsymbol{\Psi}_{m}]:=\det\begin{pmatrix}\phi_{0}&\phi_{1}& \cdots&\phi_{n-m-1}&\psi_{1,n-1}&\cdots&\psi_{m,n-1}\\ \phi_{-1}&\phi_{0}&\cdots&\phi_{n-m-2}&\psi_{1,n-2}&\cdots&\psi_{m,n-2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ \phi_{-n+1}&\phi_{-n+2}&\cdots&\phi_{-m}&\psi_{1,0}&\cdots&\psi_{m,0}\end{pmatrix}, \tag{2.1}\] and their reduction to (single) bordered determinants. This reduction allows for a representation in terms of the orthogonal polynomials on the unit circle and hence a Riemann-Hilbert characterization. 
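For orientation, in the simplest genuinely multi-bordered case \(m=2\) with, say, \(n=4\), the determinant (2.1) reads (written out here only as an illustration)
\[D_{4}^{B}[\phi;\boldsymbol{\Psi}_{2}]=\det\begin{pmatrix}\phi_{0}&\phi_{1}&\psi_{1,3}&\psi_{2,3}\\ \phi_{-1}&\phi_{0}&\psi_{1,2}&\psi_{2,2}\\ \phi_{-2}&\phi_{-1}&\psi_{1,1}&\psi_{2,1}\\ \phi_{-3}&\phi_{-2}&\psi_{1,0}&\psi_{2,0}\end{pmatrix},\]
that is, a rectangular Toeplitz block built from \(\phi\) bordered on the right by the two columns of Fourier coefficients of \(\psi_{1}\) and \(\psi_{2}\).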
Let us recall the system of bi-orthogonal polynomials on the unit circle \(\{Q_{k}(z)\}_{k=0}^{\infty}\) and \(\{\widehat{Q}_{k}(z)\}_{k=0}^{\infty}\), \(\deg Q_{k}=\deg\widehat{Q}_{k}=k\), given by \[Q_{n}(z):=\frac{1}{\sqrt{D_{n}[\phi]D_{n+1}[\phi]}}\det\begin{pmatrix}\phi_{0} &\phi_{-1}&\cdots&\phi_{-n}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}\\ \vdots&\vdots&\ddots&\vdots\\ \phi_{n-1}&\phi_{n-2}&\cdots&\phi_{-1}\\ 1&z&\cdots&z^{n}\end{pmatrix}, \tag{2.2}\] and \[\widehat{Q}_{n}(z):=\frac{1}{\sqrt{D_{n}[\phi]D_{n+1}[\phi]}}\det\begin{pmatrix} \phi_{0}&\phi_{-1}&\cdots&\phi_{-n+1}&1\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+2}&z\\ \vdots&\vdots&\ddots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{1}&z^{n}\end{pmatrix}, \tag{2.3}\] satisfying the _bi-orthogonality_ relation \[\int_{\mathbb{T}}Q_{n}(\zeta)\widehat{Q}_{m}(\zeta^{-1})\phi(\zeta)\frac{ \mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\delta_{nm},\qquad n,m\in\mathbb{N} \cup\{0\}. \tag{2.4}\] The above bi-orthogonality condition is equivalent to \[\int_{\mathbb{T}}Q_{n}(\zeta)\zeta^{-m}\phi(\zeta)\frac{\mathrm{d}\zeta}{2\pi \mathrm{i}\zeta}=\varkappa_{n}^{-1}\delta_{nm},\qquad m=0,\cdots,n, \tag{2.5}\] and \[\int_{\mathbb{T}}\widehat{Q}_{n}(\zeta^{-1})\zeta^{m}\phi(\zeta)\frac{ \mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\varkappa_{n}^{-1}\delta_{nm},\qquad m =0,\cdots,n, \tag{2.6}\] where \[\varkappa_{n}=\sqrt{\frac{D_{n}[\phi]}{D_{n+1}[\phi]}},\qquad n\in\mathbb{N} \cup\{0\}, \tag{2.7}\] is the leading coefficient of both \(Q_{n}\) and \(\widehat{Q}_{n}\) (we set \(D_{0}[\phi]\equiv 1\)). The following matrix-valued function \[X(z;n):=\begin{pmatrix}\varkappa_{n}^{-1}Q_{n}(z)&\varkappa_{n}^{-1}\int_{ \mathbb{T}}\frac{Q_{n}(\zeta)}{(\zeta-z)}\frac{\phi(\zeta)\mathrm{d}\zeta}{2 \pi\mathrm{i}\zeta^{n}}\\ -\varkappa_{n-1}z^{n-1}\widehat{Q}_{n-1}(z^{-1})&-\varkappa_{n-1}\int_{ \mathbb{T}}\frac{\widehat{Q}_{n-1}(\zeta^{-1})}{(\zeta-z)}\frac{\phi(\zeta) \mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\end{pmatrix}, \tag{2.8}\] satisfies the following Riemann-Hilbert problem [BDJ], which in the subsequent parts of this text will occasionally be referred to as the \(X\)-RHP: * **RH-X1**\(X:\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-X2** The limits of \(X(\xi;n)\) as \(\xi\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, which are denoted by \(X_{\pm}(z;n)\) respectively and are related by the _jump condition_ (2.9) \[X_{+}(z;n)=X_{-}(z;n)\begin{pmatrix}1&z^{-n}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-X3** As \(z\to\infty\) (2.10) \[X(z;n)=\left(I+\frac{\infty}{X_{1}(n)}{z}+\frac{\infty}{X_{2}(n)}{z^{2}}+O(z^ {-3})\right)z^{n\sigma_{3}},\] where \[\sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\] is the third Pauli matrix. In the rest of this section, we demonstrate the utilization of the Dodgson Condensation identity in reducing (2.1) to a number of bordered Toeplitz determinants, which, in view of the results in [BEGIL], paves the way for an effective asymptotic analysis. As this paper aims to provide the general framework, we start with the simplest nontrivial case, which is \(m=2\). We will then discuss the recursive nature of our method and how large-size asymptotic analysis for higher values of \(m\) can be obtained using essentially the same ideas involved in the case \(m=2\). Our first objective in this work is to obtain a Riemann-Hilbert representation for the Toeplitz determinants with two borders. 
To this end, let us assume that \(\phi\) is of Szego-type and the border symbols \(\psi_{1}\) and \(\psi_{2}\) are analytic in a neighborhood of the unit circle and consider \[D_{n}^{B}[\phi;\boldsymbol{\psi}_{2}]=\det\begin{pmatrix}\phi_{0}&\phi_{1}& \cdots&\phi_{n-3}&\psi_{1,n-1}&\psi_{2,n-1}\\ \phi_{-1}&\phi_{0}&\cdots&\phi_{n-4}&\psi_{1,n-2}&\psi_{2,n-2}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ \phi_{-n+3}&\phi_{-n+4}&\cdots&\phi_{0}&\psi_{1,2}&\psi_{2,2}\\ \phi_{-n+2}&\phi_{-n+3}&\cdots&\phi_{-1}&\psi_{1,1}&\psi_{2,1}\\ \phi_{-n+1}&\phi_{-n+2}&\cdots&\phi_{-2}&\psi_{1,0}&\psi_{2,0}\end{pmatrix}. \tag{2.11}\] For simplicity of notation in this section we denote \[D_{n}^{B}[\phi;\boldsymbol{\psi}_{2}]\equiv\mathcal{D}.\] Let us consider \[\mathcal{D}\cdot\mathcal{D}\begin{pmatrix}0&n-1\\ n-2&n-1\end{pmatrix}=\mathcal{D}\begin{pmatrix}0\\ n-2\end{pmatrix}\cdot\mathcal{D}\begin{pmatrix}n-1\\ n-1\end{pmatrix}-\mathcal{D}\begin{pmatrix}0\\ n-1\end{pmatrix}\cdot\mathcal{D}\begin{pmatrix}n-1\\ n-2\end{pmatrix}, \tag{2.12}\] where \[\mathcal{D}\begin{pmatrix}0&n-1\\ n-2&n-1\end{pmatrix}\equiv D_{n-2}[z\phi], \tag{2.13}\] is a pure Toeplitz and all determinants on the right hand side are bordered Toeplitz determinants. Indeed, \[\mathcal{D}\begin{pmatrix}n-1\\ n-1\end{pmatrix}\equiv D_{n-1}^{B}[\phi;z^{-1}\psi_{1}],\quad\text{and}\quad \mathcal{D}\begin{pmatrix}n-1\\ n-2\end{pmatrix}\equiv D_{n-1}^{B}[\phi;z^{-1}\psi_{2}] \tag{2.14}\] However, the _bulk symbol_ for the other two bordered determinants has a nonzero winding number, more precisely we have \[\mathcal{D}\begin{pmatrix}0\\ n-2\end{pmatrix}\equiv D_{n-1}^{B}[z\phi;\psi_{2}],\quad\text{and}\quad\mathcal{D} \begin{pmatrix}0\\ n-1\end{pmatrix}\equiv D_{n-1}^{B}[z\phi;\psi_{1}]. \tag{2.15}\] Using these, we can rewrite (2.12) as \[D_{n}^{B}\left[\phi;\mathbf{\psi}_{2}\right]=D_{n-1}^{B}[\phi;z^{-1}\psi_{1}]\frac{D_ {n-1}^{B}\left[z\phi;\psi_{2}\right]}{D_{n-2}\left[z\phi\right]}-D_{n-1}^{B}[ \phi;z^{-1}\psi_{2}]\frac{D_{n-1}^{B}\left[z\phi;\psi_{1}\right]}{D_{n-2}\left[ z\phi\right]} \tag{2.16}\] **Remark 2.1**.: Alternatively, we could consider the following Dodgson Condensation identity \[\mathcal{D}\cdot\mathcal{D}\begin{cases}n-2&n-1\\ n-2&n-1\end{cases}=\mathcal{D}\begin{cases}n-2\\ n-2\end{cases}\cdot\mathcal{D}\begin{cases}n-1\\ n-1\end{cases}-\mathcal{D}\begin{cases}n-2\\ n-1\end{cases}\cdot\mathcal{D}\begin{cases}n-1\\ n-2\end{cases}. \tag{2.17}\] Notice that \(\mathcal{D}\begin{cases}n-2&n-1\\ n-2&n-1\end{cases}\) is the pure Toeplitz determinant \(D_{n-2}[\phi]\), \(\mathcal{D}\begin{cases}n-1\\ n-1\end{cases}\) and \(\mathcal{D}\begin{cases}n-1\\ n-2\end{cases}\) are respectively the bordered Toeplitz determinants \[D_{n-1}^{B}\left[\phi;z^{-1}\psi_{1}\right]\quad\text{and}\quad D_{n-1}^{B}[ \phi;z^{-1}\psi_{2}],\] and \(\mathcal{D}\begin{cases}n-2\\ n-1\end{cases}\) and \(\mathcal{D}\begin{cases}n-2\\ n-2\end{cases}\) are semi-framed Toeplitz determinants (see SS37). Therefore, this DCI has the advantage that we do not need to deal with a bulk symbol with non-zero winding number, but its disadvantage is that it relates two-bordered Toeplitz determinants to semi-framed ones, which are, as we will see in SS3, more complicated objects. This is evident in the fact that the bordered Toeplitz determinants are characterized by BOPUC themselves, while the semi-framed ones are characterized by the reproducing kernel of BOPUC (see Theorem 1.9). 
However, from the view point of obtaining the desired asymptotics, each of the two DCIs (2.12) or (2.17) can be taken as the starting point. Footnote 7: These are respectively \(E_{n-1}[\phi;z^{-2}\psi_{1},z^{-2}\phi;\psi_{1,0}]\) and \(E_{n-1}[\phi;z^{-2}\psi_{2},z^{-2}\phi;\psi_{2,0}]\), where \(\xi_{n}[\phi;\psi,\eta;a]\) is introduced in (3.1) In the rest of this section we choose to concentrate on the Dodgson Condensation identity (2.12). The asymptotics of the bordered Toeplitz determinants in (2.14) can be obtained by rather straight-forward modifications of the findings in [BEGIL]. However, the asymptotics of the bordered Toeplitz determinants in (2.15) are more challenging as the bulk symbol \(z\phi(z)\) has a nonzero winding number. Notice that this is an instance of a _non-degenerate_ Fisher Hartwig singularity at \(z=1\) with the parameters \(\beta=1\) and \(\alpha=0\) (see [DIK] for more details). We know that the asymptotics of \(D_{n}\left[z\phi\right]\) can be obtained from Lemma 2.4 of [DIK], which in particular states that \[D_{n}\left[z\phi\right]=(-1)^{n}\frac{Q_{n}(0)}{\kappa_{n}}D_{n}\left[\phi \right],\qquad n\geq N_{0}, \tag{2.18}\] provided that there exists a fixed \(N_{0}\geq 0\) such that for all \(n\geq N_{0}\) the Toeplitz determinants \(D_{n}\left[\phi\right]\) are nonzero, and \(Q_{k}(0)\neq 0\) for \(k=N_{0},N_{0}+1,\cdots,n-1\). However, for the ultimate goal of finding the asymptotics of the right hand side of (2.16), it turns out that we do not need to use (2.18) for our calculations, at least for the symbols \(\psi_{1}\) and \(\psi_{2}\) of the form (1.12)-(1.13). This is because for such symbols we can obtain the asymptotics of \[\frac{D_{n-1}^{B}\left[z\phi;\psi_{2}\right]}{D_{n-2}\left[z\phi\right]}\quad \text{and}\quad\frac{D_{n-1}^{B}\left[z\phi;\psi_{1}\right]}{D_{n-2}\left[z \phi\right]} \tag{2.19}\] in terms of the solution of an asymptotically tractable Riemann-Hilbert problem. More precisely, to find the asymptotics of ratios in (2.19), we need to find the solution of the \(X\)-RHP when \(\phi\) is replaced by \(z\phi\). We call this the \(Z\)-RHP and using two distinct approaches we prove in Theorems 1.6 and 1.7 how to construct its solution in terms of the solution to the \(X\)-RHP. Once we have all the above ingredients, we can find the desired asymptotics of \(D_{n}^{B}\left[\phi;\mathbf{\psi}_{2}\right]\). ### Proofs of Theorems 1.6 and 1.7 In this section we will write \(Z(z;n)\) to refer to the solution of the \(X\)-RHP when \(\phi\) is replaced by \(z\phi\). More precisely, \(Z(z;n)\) satisfies * **RH-Z1**\(Z(\cdot;n):\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-Z2** The limits of \(Z(\zeta;n)\) as \(\zeta\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, and are denoted \(Z_{\pm}(z;n)\) respectively and are related by (2.20) \[Z_{+}(z;n)=Z_{-}(z;n)\begin{pmatrix}1&z^{-n+1}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-Z3** As \(z\to\infty\) \[Z(z;n)=\big{(}I+O\,(z^{-1})\big{)}z^{n\sigma_{3}}. 
\tag{2.21}\] To fix the notation, let us consider the system of bi-orthogonal polynomials on the unit circle \(\{P_{k}(z)\}_{k=0}^{\infty}\) and \(\{\widehat{P}_{k}(z)\}_{k=0}^{\infty}\), \(\deg P_{k}=\deg\widehat{P}_{k}=k\), given by \[P_{n}(z):=\frac{1}{\sqrt{D_{n}\left[z\phi\right]D_{n+1}\left[z\phi\right]}} \det\begin{pmatrix}(z\phi)_{0}&(z\phi)_{-1}&\cdots&(z\phi)_{-n}\\ (z\phi)_{1}&(z\phi)_{0}&\cdots&(z\phi)_{-n+1}\\ \vdots&\vdots&\ddots&\vdots\\ (z\phi)_{n-1}&(z\phi)_{n-2}&\cdots&(z\phi)_{-1}\\ 1&z&\cdots&z^{n}\end{pmatrix}, \tag{2.22}\] and \[\widehat{P}_{n}(z):=\frac{1}{\sqrt{D_{n}\left[z\phi\right]D_{n+1}\left[z\phi \right]}}\det\begin{pmatrix}(z\phi)_{0}&(z\phi)_{-1}&\cdots&(z\phi)_{-n+1}&1\\ (z\phi)_{1}&(z\phi)_{0}&\cdots&(z\phi)_{-n+2}&z\\ \vdots&\vdots&\ddots&\vdots\\ (z\phi)_{n}&(z\phi)_{n-1}&\cdots&(z\phi)_{1}&z^{n}\end{pmatrix}, \tag{2.23}\] satisfying the bi-orthogonality relation \[\int_{\mathbb{T}}P_{n}(\zeta)\widehat{P}_{m}(\zeta^{-1})\zeta\phi(\zeta) \frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\delta_{nm},\qquad n,m\in\mathbb{ N}\cup\{0\}. \tag{2.24}\] The above bi-orthogonality condition is equivalent to \[\int_{\mathbb{T}}P_{n}(\zeta)\zeta^{-m}\zeta\phi(\zeta)\frac{\mathrm{d}\zeta }{2\pi\mathrm{i}\zeta}=\frac{1}{\varkappa_{n}\left[z\phi\right]}\delta_{nm}, \qquad m=0,\cdots,n, \tag{2.25}\] and \[\int_{\mathbb{T}}\widehat{P}_{n}(\zeta^{-1})\zeta^{m}\zeta\phi(\zeta)\frac{ \mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\frac{1}{\varkappa_{n}\left[z\phi \right]}\delta_{nm},\qquad m=0,\cdots,n, \tag{2.26}\] where \[\varkappa_{n}\left[z\phi\right]=\sqrt{\frac{D_{n}\left[z\phi\right]}{D_{n+1} \left[z\phi\right]}},\qquad n\in\mathbb{N}\cup\{0\},\qquad D_{0}\left[z\phi \right]\equiv 1. \tag{2.27}\] is the leading coefficient of both \(P_{n}\) and \(\widehat{P}_{n}\). Now, as expected, the following matrix-valued function constructed out of \(P\) and \(\widehat{P}\) satisfies the \(Z\)-RHP: \[Z(z;n)=\begin{pmatrix}\frac{1}{\varkappa_{n}\left[z\phi\right]}P_{n}(z)&\frac {1}{\varkappa_{n}\left[z\phi\right]}\int_{\mathbb{T}}\frac{P_{n}(\zeta)}{(z-z )}\frac{\zeta\phi(\zeta)\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta^{n}}\\ -\varkappa_{n-1}\left[z\phi\right]z^{n-1}\widehat{P}_{n-1}(z^{-1})&-\varkappa_ {n-1}\left[z\phi\right]\int_{\mathbb{T}}\frac{\widehat{P}_{n-1}(\zeta^{-1})} {(\zeta-z)}\frac{\zeta\phi(\zeta)\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\end{pmatrix}. \tag{2.28}\] However one can find an explicit relation relating the solution of the \(Z\)-RHP to the solution of the \(X\)-RHP which can be directly analyzed by the Deift-Zhou nonlinear steepest descent method [DZ]. This means that from the asymptotic analysis of the X-RHP we can obtain the asymptotics of the \(Z\)-RHP. One way of making this connection is shown in Theorem 1.7, which is based upon shifting in the index \(n\). Instead, there is an alternative way8 which will yield a simpler connection between the solution of the \(Z\)-RHP to the solution of the \(X\)-RHP. To describe this idea more generally, let us consider the Riemann-Hilbert problem Footnote 8: Based on the idea used in [GI2]. 
* **RH-Y1**\(Y(\cdot;n,r):\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-Y2** The limits of \(Y(\zeta;n,r)\) as \(\zeta\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, and are denoted \(Y_{\pm}(z;n,r)\) respectively and are related by (2.29) \[Y_{+}(z;n,r)=Y_{-}(z;n,r)\begin{pmatrix}1&z^{-n+r}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-Y3** As \(z\to\infty\) (2.30) \[Y(z;n,r)=\big{(}I+O\left(z^{-1}\right)\big{)}z^{n\sigma_{3}}.\] Using the standard Liouville's Theorem arguments we have the following uniqueness result. **Lemma 2.2**.: _The solution of the Riemann-Hilbert problem RH-Y1 - RH-Y3 is unique, if it exists._ Define the function \[W(z;n,r):=Y(z;n,r)\begin{pmatrix}1&0\\ 0&z^{-r}\end{pmatrix} \tag{2.31}\] It can be readily checked that \(W(z;n,r)\) satisfies the same jump condition on the unit circle as \(X(z;n)\). Therefore the function \[\mathcal{R}(z;n,r):=W(z;n,r)X^{-1}(z;n) \tag{2.32}\] must be a meromorphic function with singular behaviour only at \(z=0\) and \(\infty\). For a fixed value of \(r\in\mathbb{Z}\), one can find the function \(\mathcal{R}(z;n,r)\) explicitly in terms of the \(X\)-RHP data. The idea presented in the proof of the following theorem can be used to connect \(Y(z;n,r)\) to \(X(n,z)\) for any \(r\in\mathbb{Z}\). However, for two reasons we only consider the case \(r=1\); firstly, because it is the simplest nontrivial case (besides \(r=-1\)) for which the main idea can be brought forth, and secondly because it is naturally related to the problem of asymptotic analyis of two-bordered Toeplitz determinants considered in this section. #### 2.1.1. Proof of Theorem 1.6 Notice that \[Y(z;n,1)\equiv Z(z;n). \tag{2.33}\] We can directly see that the behavior of \(\mathcal{R}(z;n,1)\) as \(z\to 0\) and \(z\to\infty\) are respectively given by \[\mathcal{R}(z;n,1) =Z(0;n)\begin{pmatrix}0&0\\ 0&z^{-1}\end{pmatrix}X^{-1}(0;n)+O\left(1\right),\quad\text{as}\quad z\to 0, \tag{2.35}\] \[\mathcal{R}(z;n,1) =\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+O\left(z^{-1}\right),\quad\text{as}\quad z\to\infty. \tag{2.34}\] Therefore by the Liouville's theorem we have \[\mathcal{R}(z;n,1)=Z(0;n)\begin{pmatrix}0&0\\ 0&z^{-1}\end{pmatrix}X^{-1}(0;n)+\begin{pmatrix}1&0\\ 0&0\end{pmatrix}, \tag{2.36}\] or \[W(z;n,1)=\left[Z(0;n)\begin{pmatrix}0&0\\ 0&z^{-1}\end{pmatrix}X^{-1}(0;n)+\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\right]X(z;n). \tag{2.37}\] In view of (2.33) and (2.31), this can be rewritten as \[Z(z;n)=\left[Z(0;n)\begin{pmatrix}0&0\\ 0&z^{-1}\end{pmatrix}X^{-1}(0;n)+\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\right]X(z;n)\begin{pmatrix}1&0\\ 0&z\end{pmatrix}. \tag{2.38}\] Let \[Z(0;n)=\begin{pmatrix}A&B\\ C&D\end{pmatrix}. \tag{2.39}\] Since \[Z(0;n)\begin{pmatrix}0&0\\ 0&z^{-1}\end{pmatrix}=\begin{pmatrix}0&Bz^{-1}\\ 0&Dz^{-1}\end{pmatrix},\] the formula (2.38) implies that in order to express \(Z(z;n)\) purely in terms of \(X\)-RHP data, we only need to find the unknowns \(B\) and \(D\) in terms of data from the \(X\)-RHP. Indeed we can do so by requiring the right hand side of (2.38) to behave according to RH-Z3. In fact, using RH-X3, and the fact that9 Footnote 9: Note that \(\det X(z;n)\equiv 1\). \[X^{-1}(0;n)=\begin{pmatrix}X_{22}(0;n)&-X_{12}(0;n)\\ -X_{21}(0;n)&X_{11}(0;n)\end{pmatrix}, \tag{2.40}\] we find that the right hand side of (2.38) behaves like \[Z(z;n)=\left[\begin{pmatrix}1&BX_{11}(0;n)\overset{\infty}{X}_{1,12}(n)\\ 0&DX_{11}(0;n)\end{pmatrix}+O(z^{-1})\right]z^{n\sigma_{3}}. 
\tag{2.41}\] Comparing this with RH-Z3 yields \[B=-\frac{\overset{\infty}{X}_{1,12}(n)}{X_{11}(0;n)}\quad\text{and}\quad D= \frac{1}{X_{11}(0;n)}. \tag{2.42}\] Using these in (2.38) yields the desired result (1.40). #### 2.1.2. Proof of Theorem 1.7 Recalling RH-X1, it is obvious that \(Z(z;n)\) as defined by (1.41) satisfies RH-Z1. From (1.41) it is clear that \(Z(z;n)\) and \(X(z;n-1)\) satisfy the same jump condition on the unit circle since \(X(z;n-1)\) is multiplied by a holomorphic function on the left. Notice that from **RH-X2** we have \[X_{-}^{-1}(z;n-1)X_{+}(z;n-1)=\begin{pmatrix}1&z^{-n+1}\phi(z)\\ 0&1\end{pmatrix}, \tag{2.43}\] and therefore \(Z(z;n)\) as defined by (1.41) satisfies **RH-Z2**. Recalling RH-X3, as \(z\to\infty\) for the right hand side of (1.41) we have \[r.h.s.\text{ of \eqref{eq:2.41}}= \left(z+\overset{\infty}{X}_{1,22}(n-1)-\frac{\overset{\infty}{X }_{2,12}(n-1)}{\overset{\infty}{X}_{1,12}(n-1)}&-\overset{\infty}{X}_{1,12}(n -1)\\ \frac{1}{\overset{\infty}{X}_{1,12}(n-1)}&0\end{array}\right)\] \[\times \left(I+\frac{\overset{\infty}{X}_{1}(n-1)}{z}+\frac{\overset{ \infty}{X}_{2}(n-1)}{z^{2}}+O(z^{-3})\right)\begin{pmatrix}z^{-1}&0\\ 0&z\end{pmatrix}z^{n\sigma_{3}}\] \[= \left(I+O(z^{-1})\right)z^{n\sigma_{3}}. \tag{2.44}\] Therefore \(Z(z;n)\) as defined by (1.41) satisfies **RH-Z3** as well, and hence is the unique solution of the \(Z\)-RHP. **Remark 2.3**.: It is worthwhile to highlight that we would prefer (1.40) over (1.41) because in (1.40), we only need to use data from one subleading term, \(\overset{\infty}{X}_{1}\), while in (1.41), we also need to extract data from \(\overset{\infty}{X}_{2}\). The compatibility of these two solutions is expected to give rise to identities involving \(Q_{n}\) and \(\widehat{Q}\). In this case, as expected, these identities are exactly the well-known recurrence relations for the system of bi-orthogonal polynomials on the unit circle. A new proof for these identities is presented in Lemma 2.16. ### Proof of Theorem 1.5 #### 2.2.1. Bordered Toeplitz determinants of the type \(D_{n}^{B}[\phi;z^{-1}(q_{1}\phi+q_{2})]\) Let us first recall the following elementary properties of the bordered Toeplitz determinants \[D_{n}^{B}\left[\phi;\sum_{j=1}^{m}a_{j}\psi_{j}\right]=\sum_{j=1}^{m}a_{j}D_{n }^{B}[\phi,\psi_{j}], \tag{2.45}\] \[D_{n}^{B}[\phi;\phi]=D_{n}[\phi], \tag{2.46}\] \[D_{n}^{B}[\phi;1]=D_{n-1}[\phi]. \tag{2.47}\] Let us denote \[q_{0}(z):=\frac{1}{z-c},\quad\text{and}\quad\psi_{0}(z):=q_{0}(z)\phi(z). \tag{2.48}\] As a first step, it is useful to recall the description of \(D_{N}^{B}[\phi;q_{1}\phi+q_{2}]\) in terms of the solution of the \(X\)-RHP as shown in [BEGIL] which allows for an effective asymptotic analysis of such bordered Toeplitz determinants. 
**Lemma 2.4**.: _[_BEGIL_]_ _The bordered Toeplitz determinant \(D_{n+1}^{B}[\phi,q_{0}]\), is encoded into \(X\)-RHP data described by_ \[D_{n+1}^{B}[\phi;q_{0}]=\begin{cases}0,&|c|<1,\\ -c^{-n-1}D_{n}[\phi]X_{11}(c;n),&|c|>1,\end{cases} \tag{2.49}\] _where \(D_{n}[\phi]\) is given by (1.1) and \(X_{11}\) is the \(11\) entry of the solution to **RH-X1** through **RH-X3**._ **Corollary 2.4.1**.: _[_BEGIL_]_ _We have_ \[D_{n+1}^{B}\left[\phi;a+\frac{b_{0}}{z}+\sum_{j=1}^{m}\frac{b_{j}}{z-c_{j}} \right]=D_{n}[\phi]\left(a-\sum_{j=1\atop|c_{j}|>1}^{m}b_{j}c_{j}^{-n-1}X_{11 }(c_{j};n)\right), \tag{2.50}\] _and for a Szego-type \(\phi\)_ \[D_{n+1}^{B}\left[\phi;a+\frac{b_{0}}{z}+\sum_{j=1}^{m}\frac{b_{j}}{z-c_{j}} \right]=G[\phi]^{n}E[\phi]\left(a-\sum_{j=1\atop|c_{j}|>1}^{m}\frac{b_{j}}{c_ {j}}\alpha(c_{j})\right)\left(1+O(e^{-cn})\right), \tag{2.51}\] _as \(n\to\infty\), where \(\alpha\) is given by (1.18), and the constants \(G[\phi]\) and \(E[\phi]\) are given by (1.10) and \(\mathfrak{c}\) is some positive constant._ **Lemma 2.5**.: [BEGIL] _Let \(\phi\) be of Szego-type. Then, as \(n\to\infty\) we have_ \[D_{n+1}^{B}\left[\phi;z\right]=D_{n}\left[\phi\right]\left(-\frac{1}{2\pi i} \int_{\mathbb{T}}\ln(\phi(\tau))d\tau+O(e^{-\alpha n})\right), \tag{2.52}\] _for some positive constant \(\mathfrak{c}\)._ **Lemma 2.6**.: [BEGIL] _Let \(\psi_{0}\) be as defined in (2.48) with \(c\neq 0\). Then the bordered Toeplitz determinant \(D_{n}^{B}\left[\phi;\psi_{0}\right]\) can be written in terms of the following data from the solution of the X-RHP:_ \[D_{n+1}^{B}\left[\phi;\psi_{0}\right]=-\frac{1}{c}D_{n+1}\left[\phi\right]+ \frac{1}{c}D_{n}\left[\phi\right]X_{12}(c;n), \tag{2.53}\] _where \(D_{n}\left[\phi\right]\) is given by (1.1) and \(X_{12}\) is the \(12\) entry of the solution to RH-X1 through RH-X3._ **Corollary 2.6.1**.: [BEGIL] _We have_ \[D_{n+1}^{B}\left[\phi;\left(a+\sum_{j=1}^{m}\frac{b_{j}z}{z-c_{j}}\right)\phi \right]=aD_{n+1}\left[\phi\right]+D_{n}\left[\phi\right]\sum_{j=1}^{m}b_{j}X_ {12}(c_{j};n), \tag{2.54}\] _and for a Szego-type \(\phi\)_ \[D_{n+1}^{B}\left[\phi;\left(a+\sum_{j=1}^{m}\frac{b_{j}z}{z-c_{j}}\right)\phi \right]=G\left[\phi\right]^{n+1}E\left[\phi\right]\left(a+\frac{1}{G\left[\phi \right]}\sum_{j=1\atop|\varsigma_{j}|<1}^{m}b_{j}\alpha(c_{j})\right)\left(1+ O(e^{-\alpha n})\right), \tag{2.55}\] _as \(n\to\infty\), where \(\alpha\) is defined in (1.18), \(G\left[\phi\right]\) and \(E\left[\phi\right]\) are given by (1.10), and \(\mathfrak{c}\) is some positive constant._ We choose to follow the notations introduced in [BEGIL] for a smoother navigation between the papers. Recalling (1.13) we have \[z^{-1}q_{1}(z)=a_{1}+\frac{a_{0}}{z}+\frac{b_{0}}{z^{2}}+\sum_{j=1}^{m}\frac{ b_{j}}{z-c_{j}},\quad\text{and}\quad z^{-1}q_{2}(z)=\hat{a}_{1}+\frac{\hat{d}_{0}}{z }+\frac{\hat{b}_{0}}{z^{2}}+\sum_{j=1}^{m}\frac{\hat{d}_{j}}{z-c_{j}},\] where \[\hat{d}_{0}=\hat{a}_{0}-\sum_{j=1}^{m}\hat{b}_{j}c_{j}^{-1}\quad\text{and} \quad\hat{d}_{j}=\hat{b}_{j}c_{j}^{-1}.\] All contributions from all terms in \(z^{-1}q_{1}\) and \(z^{-1}q_{2}\) are expressed in Lemmas above and other results in [BEGIL], except for the contribution from \(b_{0}z^{-2}\) in \(z^{-1}q_{1}\). Notice that the term \(\hat{b}_{0}z^{-2}\) in \(z^{-1}q_{2}\) does not contribute due to Lemma 2.1 of [BEGIL]. The contribution from \(b_{0}z^{-2}\) in \(z^{-1}q_{1}\) corresponds to the bordered Toeplitz determinant \(D_{N}^{B}\left[\phi;z^{-2}\phi\right]\). 
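The constants \(\hat{d}_{0}\) and \(\hat{d}_{j}\) displayed above come from nothing more than the partial-fraction identity (spelled out here since it is used without comment)
\[\frac{1}{z}\cdot\frac{\hat{b}_{j}}{z-c_{j}}=\frac{\hat{b}_{j}}{c_{j}}\left(\frac{1}{z-c_{j}}-\frac{1}{z}\right),\]
so the \(1/z\) pieces generated by the poles at \(c_{j}\) recombine with \(\hat{a}_{0}/z\) into \(\hat{d}_{0}/z\), while each pole at \(c_{j}\) survives with the rescaled residue \(\hat{d}_{j}=\hat{b}_{j}c_{j}^{-1}\). The term \(b_{0}z^{-2}\), by contrast, admits no such reduction, which is why it leads to the determinant \(D_{N}^{B}\left[\phi;z^{-2}\phi\right]\) just mentioned.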
More generally, below we prove the following Theorem, which characterizes \(X_{12}(z;n)\) as the generating function for the objects \(D_{n+1}^{B}\left[\phi;z^{-\ell}\phi\right]/D_{n}\left[\phi\right]\), \(\ell\in\mathbb{N}\). **Theorem 2.7**.: _Let \(\ell\in\mathbb{N}\). The coefficient of \(z^{\ell}\) in the Taylor expansion of \(X_{12}(z;n)\) centered at zero, is precisely the object \(D_{n+1}^{B}\left[\phi;z^{-\ell}\phi\right]/D_{n}\left[\phi\right]\). In other words:_ \[D_{n+1}^{B}\left[\phi;z^{-\ell}\phi\right]=\frac{D_{n}\left[\phi\right]}{\ell!}\left.\frac{d^{\ell}}{dz^{\ell}}X_{12}(z;n)\right|_{z=0}, \tag{2.56}\] _where \(X_{12}(z;n)\) is the \(12\)-entry in the solution of the Riemann-Hilbert problem RH-X1 - RH-X3._ Proof.: Notice that \[\frac{\mathrm{d}^{\ell}}{\mathrm{d}z^{\ell}}\left(\zeta-z\right)^{-1}\Bigg{|}_{z=0 }=\ell!\zeta^{-\ell-1}. \tag{2.57}\] Therefore, from (2.8), we have \[\frac{D_{n}\left[\phi\right]}{\ell!}\frac{\mathrm{d}^{\ell}}{\mathrm{d}z^{\ell }}X_{12}(z;n)\Bigg{|}_{z=0}=\sqrt{D_{n+1}\left[\phi\right]D_{n}\left[\phi \right]}\int_{\mathbb{T}}Q_{n}(\zeta)\zeta^{-\ell-n}\phi(z)\frac{\mathrm{d} \zeta}{2\pi\mathrm{i}\zeta}. \tag{2.58}\] This expression can be written, in view of (2.2), as \[\begin{split}\frac{D_{n}\left[\phi\right]}{\ell!}\frac{\mathrm{d} ^{\ell}}{\mathrm{d}z^{\ell}}X_{12}(z;n)\Bigg{|}_{z=0}&=\int_{ \mathbb{T}}\det\left(\begin{matrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}\\ \vdots&\vdots&\ddots&\vdots\\ \phi_{n-1}&\phi_{n-2}&\cdots&\phi_{-1}\\ 1&\zeta&\cdots&\zeta^{n}\end{matrix}\right)\zeta^{-n-\ell}\phi(\zeta)\frac{ \mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\\ =\det\left(\begin{matrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}\\ \vdots&\vdots&\ddots&\vdots\\ \phi_{n-1}&\phi_{n-2}&\cdots&\phi_{-1}\\ \int_{\mathbb{T}}\frac{\zeta^{-\ell}\phi(\zeta)}{\zeta^{n}}\frac{\mathrm{d} \zeta}{2\pi\mathrm{i}\zeta}&\int_{\mathbb{T}}\frac{\zeta^{-\ell}\phi(\zeta)}{ \zeta^{n-1}}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}&\cdots&\int_{\mathbb{T} }\zeta^{-\ell}\phi(\zeta)\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\end{matrix} \right)\\ =\det\left(\begin{matrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}\\ \vdots&\vdots&\ddots&\vdots\\ \phi_{n-1}&\phi_{n-2}&\cdots&\phi_{-1}\\ \left[z^{-\ell}\phi\right]_{n}&\left[z^{-\ell}\phi\right]_{n-1}&\cdots&\left[z^ {-\ell}\phi\right]_{0}\end{matrix}\right)=D_{n+1}^{B}[\phi;z^{-\ell}\phi]. \tag{2.59}\] The following asymptotic result simply follows from the asymptotic analysis of the X-RHP, see (4.19). **Corollary 2.7.1**.: _For a Szego-type symbol we have_ \[D_{n+1}^{B}\left[\phi;z^{-\ell}\phi\right]=G\left[\phi\right]^{n}E\left[\phi \right]\left(\frac{\alpha^{(\ell)}\left(0\right)}{\ell!}+O\left(e^{-\alpha n} \right)\right),\quad\text{as}\quad n\to\infty, \tag{2.60}\] _where \(\alpha\) is given by (1.18), and the constants \(G\left[\phi\right]\) and \(E\left[\phi\right]\) are given by (1.10) and \(\mathfrak{c}\) is some positive constant._ **Remark 2.8**.: This is in agreement with Lemma 2.7 of [BEGIL]. For \(\ell=1\), Notice that \(\alpha^{\prime}(0)=[\log\phi]_{1}\alpha(0)=[\log\phi]_{1}G\left[\phi\right]\). **Lemma 2.9**.: _Let \(\phi\) be of Szego-type and the rational functions \(q_{1}\) and \(q_{2}\) be given by (1.13). 
Then, the following asymptotic behavior of \(D_{n}^{B}\left[\phi;z^{-1}(q_{1}\phi+q_{2})\right]\) as \(n\to\infty\) takes place_ \[D_{n}^{B}\left[\phi;z^{-1}(q_{1}\phi+q_{2})\right]=G\left[\phi\right]^{n}E\left[\phi\right]\left(H\left[\phi;\psi\right]+O\left(e^{-\mathfrak{c}n}\right)\right), \tag{2.61}\] _where \(G\left[\phi\right]\) and \(E\left[\phi\right]\) are given by (1.10), \(H[\phi;\psi]\) is given by (2.62)_ \[H[\phi;\psi]=a_{1}-\sum_{j=1}^{m}\frac{b_{j}}{c_{j}}+a_{0}[\log\phi]_{1}+b_{0}[\log\phi]_{2}+\frac{b_{0}}{2}[\log\phi]_{1}^{2}+\frac{1}{\alpha(0)}\left(\hat{a}_{1}-\sum_{j=1\atop|c_{j}|>1}^{m}\frac{\hat{b}_{j}}{c_{j}^{2}}\alpha(c_{j})+\sum_{j=1\atop 0<|c_{j}|<1}^{m}\frac{b_{j}}{c_{j}}\alpha(c_{j})\right), \tag{2.62}\] _and \(\alpha\) is given by (1.18), and \(\mathfrak{c}\) is some positive constant._

Proof.: The proof is straightforward, using (2.45), (2.46), (2.47), Lemmas cited above from [BEGIL] and Corollary 2.7.1.

#### 2.2.2. Bordered Toeplitz determinants of the type \(D_{n}^{B}[z\phi;q_{1}\phi+q_{2}]\)

Now we focus on computing the asymptotics of \(D_{n}^{B}[z\phi;q_{1}\phi+q_{2}]\) for the rational functions \(q_{1}(z)\) and \(q_{2}(z)\) given by (1.13). In view of (2.45) we need to express the following bordered Toeplitz determinants in terms of the data from the \(Z\)-RHP: \(D_{n}^{B}[z\phi;\phi]\), \(D_{n}^{B}[z\phi;\frac{1}{z}\phi]\), \(D_{n}^{B}[z\phi;z]\), and \(D_{n}^{B}[z\phi;\frac{1}{z-c}]\) with \(c\neq 0\). Notice that we already know that \[D_{N}^{B}[z\phi;z^{k}]=0,\qquad k\in\mathbb{Z}\setminus\{0,1,\cdots,n\},\] since the Fourier coefficients \((z^{k})_{j}=0\) for \(0\leq j\leq n\), \(k\in\mathbb{Z}\setminus\{0,1,\cdots,n\}\). Regarding the first two bordered Toeplitz determinants in the above list, we can use the following generalization of Theorem 2.7 which can be proven identically.

**Lemma 2.10**.: _Let \(r\in\mathbb{Z}\) and \(\ell\in\mathbb{N}\). The coefficient of \(z^{\ell}\) in the Taylor expansion of \(Y_{12}(z;n,r)\) centered at zero, is precisely the object \(D_{n+1}^{B}[z^{r}\phi;z^{r-\ell}\phi]/D_{n}[z^{r}\phi]\). In other words:_ \[D_{n+1}^{B}[z^{r}\phi;z^{r-\ell}\phi]=\frac{D_{n}[z^{r}\phi]}{\ell!}\frac{d^{\ell}}{dz^{\ell}}Y_{12}(z;n,r)\Bigg{|}_{z=0}, \tag{2.63}\] _where \(Y_{12}(z;n,r)\) is the \(12\)-entry in the solution of the Riemann-Hilbert problem RH-Y1 - RH-Y3._

**Corollary 2.10.1**.: _We have the following characterizations for \(D_{n+1}^{B}[z\phi;\phi]\) and \(D_{n+1}^{B}[z\phi;\frac{1}{z}\phi]\) in terms of the solution of the Riemann-Hilbert problem RH-Z1 - RH-Z3:_ \[D_{n+1}^{B}[z\phi;\phi]=D_{n}[z\phi]\frac{d}{dz}Z_{12}(z;n)\Bigg{|}_{z=0}, \tag{2.64}\] \[D_{n+1}^{B}[z\phi;\frac{1}{z}\phi]=\frac{D_{n}[z\phi]}{2}\frac{d^{2}}{dz^{2}}Z_{12}(z;n)\Bigg{|}_{z=0}. \tag{2.65}\] _Moreover, if \(\phi\) is a Szego-type symbol we have_ \[\frac{D_{n+1}^{B}[z\phi;\phi]}{D_{n}[z\phi]}=G[\phi]\left(1-[\log\phi]_{1}\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\right)\left(1+O(\rho^{-2n})\right), \tag{2.66}\] \[\frac{D_{n+1}^{B}[z\phi;\frac{1}{z}\phi]}{D_{n}[z\phi]}=G[\phi]\left([\log\phi]_{1}-\left([\log\phi]_{2}+\frac{[\log\phi]_{1}^{2}}{2}\right)\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\right)\left(1+O(\rho^{-2n})\right), \tag{2.67}\] _where_ \[C_{n}[\phi]:=\frac{1}{2\pi i}\int_{\Gamma_{0}}\tau^{n}\phi^{-1}(\tau)\alpha^{2}(\tau)d\tau, \tag{2.68}\] \(\alpha\) _is given by (1.18), and \(\Gamma_{0}\) is a counter-clockwise circle with radius \(\rho^{-1}<1\).
The number \(\rho>1\) is chosen such that \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._

Proof.: Using (1.40) we have \[Z_{12}(z;n)=\left(\mathcal{B}(n)+z\right)X_{12}(z;n)-\overset{\infty}{X}_{1,12}(n)X_{22}(z;n), \tag{2.69}\] where for simplicity of notation we have introduced \[\mathcal{B}(n):=\frac{\overset{\infty}{X}_{1,12}(n)X_{21}(0;n)}{X_{11}(0;n)}. \tag{2.70}\] Using (4.19), as \(n\to\infty\), uniformly for \(z\in\Omega_{0}\) we have \[X_{11}(z;n)=-R_{1,12}(z;n)\alpha^{-1}(z)\left(1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)\right), \tag{2.71}\] \[X_{12}(z;n)=\alpha(z)\left(1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)\right), \tag{2.72}\] \[X_{21}(z;n)=-\alpha^{-1}(z)\left(1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)\right), \tag{2.73}\] \[X_{22}(z;n)=R_{1,21}(z;n)\alpha(z)\left(1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)\right), \tag{2.74}\] where \(\alpha\) and \(R_{1}\) are respectively given by (4.9) and (4.18) and \(\rho^{-1}\) is the radius of the circle \(\Gamma_{0}\) shown in Figure 1. First, let us consider the large-\(n\) behavior of \(\mathcal{B}(n)\) given by (2.70). Let us first focus on \(\overset{\infty}{X}_{1,12}(n)\). From **RH-X3** we have \[\overset{\infty}{X}_{1}(n)=\lim_{z\to\infty}z\left(X(z;n)z^{-n\sigma_{3}}-I\right). \tag{2.75}\] From this, and recalling (4.19) for \(z\in\Omega_{\infty}\), (4.20) and the fact that \(\alpha(z)\to 1\) as \(z\to\infty\), we find \[\overset{\infty}{X}_{1,12}(n)=\lim_{z\to\infty}zR_{1,12}(z)+O(\rho^{-3n})=\frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{0}}\tau^{n}\phi^{-1}(\tau)\alpha^{2}(\tau)\mathrm{d}\tau\times\left(1+O(\rho^{-2n})\right)=O(\rho^{-n}), \tag{2.76}\] where we have used (4.16), (4.18) and the fact that \(R_{2\ell}(z;n)\) is diagonal and \(R_{2\ell+1}(z;n)\) is off-diagonal, \(\ell\in\mathbb{N}\cup\{0\}\). From (2.71), (2.73) and (2.76) we obtain \[\mathcal{B}(n)=-\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\times\left(1+O(\rho^{-2n})\right), \tag{2.77}\] where \(C_{n}[\phi]\equiv-R_{1,12}(0;n+1)=O(\rho^{-n})\) is given by (2.68), see (4.16). From (2.69) \[\frac{\mathrm{d}}{\mathrm{d}z}Z_{12}(z;n)\Bigg{|}_{z=0}=X_{12}(0;n)+\mathcal{B}(n)\frac{\mathrm{d}}{\mathrm{d}z}X_{12}(z;n)\Bigg{|}_{z=0}+O(\rho^{-2n})=(\alpha(0)+\mathcal{B}(n)\alpha^{\prime}(0))\left(1+O(\rho^{-2n})\right), \tag{2.78}\] \[\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}Z_{12}(z;n)\Bigg{|}_{z=0}=2\frac{\mathrm{d}}{\mathrm{d}z}X_{12}(z;n)\Bigg{|}_{z=0}+\mathcal{B}(n)\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}X_{12}(z;n)\Bigg{|}_{z=0}+O(\rho^{-2n})=(2\alpha^{\prime}(0)+\mathcal{B}(n)\alpha^{\prime\prime}(0))\left(1+O(\rho^{-2n})\right), \tag{2.79}\] recalling that \(\overset{\infty}{X}_{1,12}(n)=O(\rho^{-n})\) by (2.76), and that \(X_{22}(z;n)=O(\rho^{-n}/(1+|z|))\) by (2.74) and (4.16). Finally, we arrive at (2.66) and (2.67) recalling (1.10), (1.18) and observing that \[\alpha(0)=G[\phi], \tag{2.80}\] \[\alpha^{\prime}(0)=G[\phi]\cdot\left[\log\phi\right]_{1}, \tag{2.81}\] \[\alpha^{\prime\prime}(0)=G[\phi]\cdot\left(2[\log\phi]_{2}+[\log\phi]_{1}^{2}\right). \tag{2.82}\]

**Remark 2.11**.: The asymptotic behavior of \(C_{n}[\phi]\), and hence that of \(\mathcal{B}(n)\), depends on the analytic properties of the symbol \(\phi\), and thus detailed behavior can only be obtained for a concrete symbol. For instance, for \(\widehat{\phi}(z)=\sqrt{\frac{1-k^{-1}z^{-1}}{1-k^{-1}z}}\) which is associated to the two-point correlations of the Ising model, similar asymptotics were computed in [BEGIL] in Proposition 2.8.
Nevertheless, we will see that the knowledge of this asymptotics is not going to be needed for the leading order asymptotics of \(D_{n}^{B}[\phi;\boldsymbol{\psi}_{2}]\). Similar to Lemma 2.6 we can prove the analogous result when \(\phi\) is replaced by \(z\phi\).

**Lemma 2.12**.: _Let_ \[\eta_{0}(z):=\frac{z\phi}{z-c},\qquad\text{with}\qquad c\neq 0. \tag{2.83}\] _Then the bordered Toeplitz determinant \(D_{n}^{B}[z\phi;\eta_{0}]\) can be written in terms of the following data from the solution of the Z-RHP:_ \[D_{n+1}^{B}[z\phi;\eta_{0}]=-\frac{1}{c}D_{n+1}[z\phi]+\frac{1}{c}D_{n}[z\phi]Z_{12}(c;n), \tag{2.84}\] _where \(D_{n}[\phi]\) is given by (1.1) and \(Z_{12}\) is the \(12\) entry of the solution to RH-Z1 through RH-Z3._

**Corollary 2.12.1**.: _It holds that_ \[\frac{D_{n+1}^{B}[z\phi;\eta_{0}]}{D_{n}[z\phi]}=\frac{Z_{12}(c;n)-Z_{12}(0;n)}{c}, \tag{2.85}\] _and moreover, if \(\phi\) is a Szego-type symbol we have_ \[\frac{D_{n+1}^{B}[z\phi;\eta_{0}]}{D_{n}[z\phi]}=\frac{G[\phi]}{c}\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\times\left(1+O(\rho^{-2n})\right)+\begin{cases}\alpha(c)\left(1-c^{-1}\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\right)\left(1+O(\rho^{-2n})\right),&|c|<1\\ O(\rho^{-n}),&|c|>1,\end{cases} \tag{2.86}\] _where \(C_{n}[\phi]\) and \(\alpha\) are given by (2.68) and (1.18), respectively. The number \(\rho>1\) is such that \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\), and in addition is chosen such that \(\rho<|c|\) when \(|c|>1\), and \(\rho^{-1}>|c|\) when \(|c|<1\)._

Proof.: Let us rewrite (2.84) as \[\frac{D_{n+1}^{B}[z\phi;\eta_{0}]}{D_{n}[z\phi]}=-\frac{1}{c}\frac{D_{n+1}[z\phi]}{D_{n}[z\phi]}+\frac{1}{c}Z_{12}(c;n)=\frac{Z_{12}(c;n)-\varkappa_{n}^{-2}[z\phi]}{c}, \tag{2.87}\] where we have used (2.27). From (2.25) and (2.22) we can observe that \[\varkappa_{n}^{-2}[z\phi]=Z_{12}(0;n), \tag{2.88}\] and if we combine this with (2.87) we obtain (2.85). Rewriting (2.85) using (2.69) we have \[\frac{Z_{12}(c;n)-Z_{12}(0;n)}{c}=X_{12}(c;n)+\mathcal{B}(n)\left(\frac{X_{12}(c;n)-X_{12}(0;n)}{c}\right)-\overset{\infty}{X}_{1,12}(n)\left(\frac{X_{22}(c;n)-X_{22}(0;n)}{c}\right). \tag{2.89}\] We now obtain (2.86), using (4.19), (2.76), and (2.77). Let us recall \(q_{0}\) as defined in (2.48). The Fourier coefficients of \(q_{0}\) are given by \[q_{0,j}=\begin{cases}0,&|c|<1,\\ -(c)^{-j-1},&|c|>1,\end{cases}\qquad 0\leq j\leq n. \tag{2.90}\] The following results can be proven identically to Lemma 2.4 and Corollary 2.4.1, recalled above from [BEGIL]; they establish how \(D_{N}^{B}[z\phi;q]\) is encoded into \(Z\)-RHP data.

**Lemma 2.13**.: _The bordered Toeplitz determinant \(D_{n+1}^{B}[z\phi;\frac{1}{z-c}]\) is encoded into the \(Z\)-RHP data described by_ \[D_{n+1}^{B}[z\phi;\frac{1}{z-c}]=\begin{cases}0,&|c|<1,\\ -c^{-n-1}D_{n}[z\phi]Z_{11}(c;n),&|c|>1,\end{cases} \tag{2.91}\] _where \(D_{n}[\phi]\) is given by (1.1) and \(Z_{11}\) is the \(11\) entry of the solution to RH-Z1 through RH-Z3. Moreover, if \(\phi\) is a Szego-type symbol we have_ \[\frac{D_{n+1}^{B}\left[z\phi;\frac{1}{z-c}\right]}{D_{n}[z\phi]}=\begin{cases}0,&|c|<1,\\ -\alpha(c)\left(c^{-1}-c^{-2}\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\right)\left(1+O(\rho^{-2n})\right),&|c|>1,\end{cases} \tag{2.92}\] _where \(C_{n}[\phi]\) and \(\alpha\) are given by (2.68) and (1.18), respectively. For the case \(|c|>1\), the number \(1<\rho<|c|\) is such that \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._

Proof.: The proof of (2.91) is identical to that of (2.49).
From (1.40) we have \[Z_{11}(z;n)=\left(1+\mathcal{B}(n)z^{-1}\right)X_{11}(z;n)-X_{1,12}^{\infty}(n)z^{-1}X_{21}(z;n), \tag{2.93}\] where \(\mathcal{B}(n)\) is introduced in (2.70)10. Now, (2.92) follows from (4.19), (4.16), (2.76), and (2.77). Footnote 10: Since \(Z_{11}\) is a polynomial (see (2.28)) the coefficient of \(z^{-1}\) in (2.93) must vanish: \(\mathcal{B}(n)X_{11}(0;n)-X_{1,12}^{\infty}(n)X_{21}(0;n)=0\). This is, as expected, in agreement with (2.70). Finally, let us find the asymptotics of \(D_{n+1}^{B}[z\phi;z]\).

**Lemma 2.14**.: _It holds that_ \[D_{n+1}^{B}[z\phi;z]=D_{n}[z\phi]\lim_{z\to\infty}\left[\frac{Z_{11}(z;n)-z^{n}}{z^{n-1}}\right]. \tag{2.94}\] _Moreover, if \(\phi\) is of Szego-type, as \(n\to\infty\) we have_ \[\frac{D_{n+1}^{B}[z\phi;z]}{D_{n}[z\phi]}=\left(-[\log\phi]_{-1}-\frac{C_{n}[\phi]}{C_{n-1}[\phi]}\right)\left(1+O(\rho^{-2n})\right), \tag{2.95}\] _where \(C_{n}[\phi]\) is given by (2.68) and the number \(\rho>1\) is chosen such that \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._

Proof.: One can prove (2.94) in the exact same manner as equation (2.16) of [BEGIL]. Let us recall (2.93). Observe that \[\frac{X_{1,12}^{\infty}(n)z^{-1}X_{21}(z;n)}{z^{n-1}}=O(z^{-1})\] as \(X_{21}(z;n)\) is a polynomial of degree \(n-1\), and thus the above term in (2.93) does not contribute to the limit in (2.94). So we just focus on the first term in (2.93). Expanding \(\alpha(z)\), given by (1.18), as \(z\to\infty\) we get \[\alpha(z)=1-\frac{1}{2\pi\mathrm{i}z}\int_{\mathbb{T}}\ln\left(\phi(\tau)\right)\mathrm{d}\tau+O(z^{-2}). \tag{2.96}\] Using this in the expression for \(X_{11}(z;n)=\alpha(z)z^{n}\left(1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)\right)\) in \(\Omega_{\infty}\) (see (4.19) and Figure 1) and combining with (2.93), (2.94) and (2.77) we obtain (2.95). The following result now follows in a straightforward way from Lemmas 2.13, 2.14, Corollaries 2.10.1 and 2.12.1, and equations (2.45), (2.46) and (2.47).

**Corollary 2.14.1**.: _Let \(\psi\) be given by (1.12) and (1.13), and \(\phi\) be of Szego-type. Then, the following asymptotic behavior as \(n\to\infty\) takes place_ \[\frac{D_{n+1}^{B}[z\phi;\psi]}{D_{n}[z\phi]}=G[\phi]\left(F[\phi,\psi]-H[\phi,\psi]\frac{C_{n}[\phi]}{C_{n-1}[\phi]}+O(\rho^{-n})\right), \tag{2.97}\] _where \(F[\phi,\psi]\) is given by (1.17), and \(H[\phi,\psi]\) is given by (2.62). In the above formulae, \(C_{n}[\phi]\) and \(\alpha\) are given by (2.68) and (1.18), respectively, and the number \(\rho\) is such that: \(1<\rho<\min\limits_{\begin{subarray}{c}1\leq j\leq m\\ |c_{j}|>1\end{subarray}}\left\{|c_{j}|\right\}\), \(\max\limits_{\begin{subarray}{c}1\leq j\leq m\\ 0<|c_{j}|<1\end{subarray}}\left\{|c_{j}|\right\}<\rho^{-1}<1\), and \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ This result is the last needed asymptotics to prove Theorem 1.5. In fact from (2.16): \[D_{n}^{B}[\phi;\boldsymbol{\psi}_{2}]=D_{n-1}^{B}[\phi;z^{-1}\psi_{1}]\frac{D_{n-1}^{B}[z\phi;\psi_{2}]}{D_{n-2}[z\phi]}-D_{n-1}^{B}[\phi;z^{-1}\psi_{2}]\frac{D_{n-1}^{B}[z\phi;\psi_{1}]}{D_{n-2}[z\phi]},\] Corollary 2.14.1 and Lemma 2.9 we obtain (1.35) and (1.36). We have thus finished the proof of Theorem 1.5.

**Remark 2.15**.: It is worthwhile to highlight that the framework presented in this section can be recursively used to find the asymptotics of a \(k\)-bordered Toeplitz determinant when each \(\psi_{j}\) is of the form (1.12)-(1.13), for any finite \(k\).
For instance, let us consider the three-bordered Toeplitz determinant \[\mathcal{D}:=D_{n}^{B}[\phi;\boldsymbol{\psi}_{3}]\equiv D_{n}^{B}[\phi;\psi_{1},\psi_{2},\psi_{3}].\] Like the two-bordered case, we use the Dodgson Condensation identity (2.12), this time for \(\mathcal{D}\): \[\mathcal{D}\cdot\mathcal{D}\left\{\begin{matrix}0&n-1\\ n-2&n-1\end{matrix}\right\}=\mathcal{D}\left\{\begin{matrix}0\\ n-2\end{matrix}\right\}\cdot\mathcal{D}\left\{\begin{matrix}n-1\\ n-1\end{matrix}\right\}-\mathcal{D}\left\{\begin{matrix}0\\ n-1\end{matrix}\right\}\cdot\mathcal{D}\left\{\begin{matrix}n-1\\ n-2\end{matrix}\right\}. \tag{2.98}\] We observe that \[\mathcal{D}\left\{\begin{matrix}0&n-1\\ n-2&n-1\end{matrix}\right\}=D_{n-2}^{B}[z\phi;z^{-1}\psi_{1}] \tag{2.99}\] is a (single) bordered Toeplitz determinant, while all four determinants on the right hand side of (2.98) are two-bordered Toeplitz determinants: \[\mathcal{D}\left\{\begin{matrix}0\\ n-2\end{matrix}\right\}=D_{n-1}^{B}[z\phi;\psi_{1},\psi_{3}], \tag{2.100}\] \[\mathcal{D}\left\{\begin{matrix}n-1\\ n-1\end{matrix}\right\}=D_{n-1}^{B}[\phi;z^{-1}\psi_{1},z^{-1}\psi_{2}], \tag{2.101}\] \[\mathcal{D}\left\{\begin{matrix}0\\ n-1\end{matrix}\right\}=D_{n-1}^{B}[z\phi;\psi_{1},\psi_{2}], \tag{2.102}\] \[\mathcal{D}\left\{\begin{matrix}n-1\\ n-2\end{matrix}\right\}=D_{n-1}^{B}[\phi;z^{-1}\psi_{1},z^{-1}\psi_{3}]. \tag{2.103}\] These two-bordered Toeplitz determinants can be asymptotically analyzed using the results and methods described earlier in this section, and thus pave the way for the asymptotic analysis of \(D_{n}^{B}[\phi;\boldsymbol{\psi}_{3}]\) via (2.98).

### 2.3. A new proof of the three-term recurrence relations for BOPUC

Finally, we would like to discuss the compatibility of (1.40) and (1.41) in view of the uniqueness of the solution of the \(Z\) Riemann-Hilbert problem, see Lemma 2.2. This compatibility provides a new proof for the recurrence relations of the system of bi-orthogonal polynomials on the unit circle in the next lemma.

**Lemma 2.16**.: _[[Sz], [DIK] Lemma 2.2]_ _Let \(D_{n}[\phi]\neq 0\), \(n\geq 0\). The system of bi-orthogonal polynomials \(\{Q_{j}(z),\widehat{Q}_{j}(z)\}_{j=0}^{\infty}\) satisfies the following recurrence relations for \(n\geq 0\):_ \[\varkappa_{n}zQ_{n}(z)=\varkappa_{n+1}Q_{n+1}(z)-Q_{n+1}(0)z^{n+1}\widehat{Q}_{n+1}(z^{-1}), \tag{2.104}\] \[\varkappa_{n}z^{-1}\widehat{Q}_{n}(z^{-1})=\varkappa_{n+1}\widehat{Q}_{n+1}(z^{-1})-\widehat{Q}_{n+1}(0)z^{-n-1}Q_{n+1}(z), \tag{2.105}\] \[\varkappa_{n+1}z^{-1}\widehat{Q}_{n}(z^{-1})=\varkappa_{n}\widehat{Q}_{n+1}(z^{-1})-\widehat{Q}_{n+1}(0)z^{-n}Q_{n}(z), \tag{2.106}\] _and_ \[\varkappa_{n+1}^{2}-\varkappa_{n}^{2}=Q_{n+1}(0)\widehat{Q}_{n+1}(0).
\tag{2.107}\] Proof.: The compatibility of the 11 and 21 entries of (1.40) and (1.41) can be written as: \[\begin{split}&\begin{pmatrix}-z^{-1}\overset{\infty}{X}_{1,12}(n)&\overset{\infty}{X}_{1,12}(n-1)\\ z^{-1}&0\end{pmatrix}\begin{pmatrix}X_{21}(z;n)\\ X_{21}(z;n-1)\end{pmatrix}\\ &=\begin{pmatrix}-\frac{\overset{\infty}{X}_{1,12}(n)X_{21}(0;n)}{X_{11}(0;n)}z^{-1}-1&z+\overset{\infty}{X}_{1,22}(n-1)-\frac{\overset{\infty}{X}_{2,12}(n-1)}{\overset{\infty}{X}_{1,12}(n-1)}\\ \frac{X_{21}(0;n)}{X_{11}(0;n)}z^{-1}&\frac{1}{\overset{\infty}{X}_{1,12}(n-1)}\end{pmatrix}\begin{pmatrix}X_{11}(z;n)\\ X_{11}(z;n-1)\end{pmatrix}.\end{split} \tag{2.108}\] Solving this linear system by inverting the coefficient matrix on the left hand side in particular yields \[X_{21}(z;n)=\frac{X_{21}(0;n)}{X_{11}(0;n)}X_{11}(z;n)+\frac{1}{\overset{\infty}{X}_{1,12}(n-1)}zX_{11}(z;n-1). \tag{2.109}\] Using (2.8), shifting the index \(n\mapsto n+1\), and straightforward simplifications yield \[zQ_{n}(z)-\frac{\varkappa_{n}^{3}\overset{\infty}{X}_{1,12}(n)}{Q_{n+1}(0)}Q_{n+1}(z)=-\overset{\infty}{X}_{1,12}(n)\varkappa_{n}^{2}z^{n}\widehat{Q}_{n}(z^{-1}). \tag{2.110}\] Matching the coefficients of \(z^{n+1}\) yields the identity \[\overset{\infty}{X}_{1,12}(n)=\frac{Q_{n+1}(0)}{\varkappa_{n}^{2}\varkappa_{n+1}}, \tag{2.111}\] using which we can write (2.110) as \[zQ_{n}(z)=\frac{\varkappa_{n}}{\varkappa_{n+1}}Q_{n+1}(z)-\frac{Q_{n+1}(0)}{\varkappa_{n+1}}z^{n}\widehat{Q}_{n}(z^{-1}). \tag{2.112}\] Now, by inverting the coefficient matrix on the right hand side of (2.108) in particular we obtain \[\mathcal{D}X_{11}(z;n-1)=X_{21}(z;n-1)-X_{21}(z;n), \tag{2.113}\] for some constant \(\mathcal{D}\). Recalling (2.8) and matching the coefficients of \(z^{n-1}\) gives \[\mathcal{D}=\varkappa_{n-1}\widehat{Q}_{n-1}(0). \tag{2.114}\] Using this along with (2.8), shifting the index \(n\mapsto n+2\) and straightforward rearrangement of terms in (2.113) yield (2.105). Now, we combine (2.105) and (2.112) to obtain \[\varkappa_{n}zQ_{n}(z)=\left[\frac{\varkappa_{n}^{2}+Q_{n+1}(0)\widehat{Q}_{n+1}(0)}{\varkappa_{n+1}}\right]Q_{n+1}(z)-Q_{n+1}(0)z^{n+1}\widehat{Q}_{n+1}(z^{-1}). \tag{2.115}\] Evaluating this equation at \(z=0\) gives (2.107). Combining (2.107) and (2.115) gives (2.104). Finally, eliminating \(Q_{n+1}(z)\) from (2.104) and (2.105), and using (2.107), yields (2.106).

## 3. Semi-Framed, Framed and Multi-Framed Toeplitz Determinants

As will become clear later in the sequel, the semi-framed Toeplitz determinants form the building blocks to study the asymptotics of framed and multi-framed Toeplitz determinants. To get started in this section, it is useful to revisit the definitions of the semi-framed Toeplitz determinants which were introduced in the Introduction here again.
For \(\phi,\psi,\eta\in L^{1}(\mathbb{T})\) and a parameter \(a\in\mathbb{C}\) define the \(n\times n\) semi-framed Toeplitz determinants \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\) and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) as \[\mathcal{E}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}& \cdots&\phi_{-n+2}&\psi_{n-2}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{n-3}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{0}\\ \eta_{n-2}&\eta_{n-3}&\cdots&\eta_{0}&a\end{pmatrix}, \tag{3.1}\] \[\mathcal{G}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}& \cdots&\phi_{-n+2}&\psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{n-2}\\ \eta_{0}&\eta_{1}&\cdots&\eta_{n-2}&a\end{pmatrix}, \tag{3.2}\] \[\mathcal{H}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}& \cdots&\phi_{-n+2}&\psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{n-2}\\ \eta_{n-2}&\eta_{n-3}&\cdots&\eta_{0}&a\end{pmatrix}, \tag{3.3}\] and \[\mathcal{L}_{n}[\phi;\psi,\eta;a]:=\det\begin{pmatrix}\phi_{0}&\phi_{-1}& \cdots&\phi_{-n+2}&\psi_{n-2}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+3}&\psi_{n-3}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n-2}&\phi_{n-3}&\cdots&\phi_{0}&\psi_{0}\\ \eta_{0}&\eta_{1}&\cdots&\eta_{n-2}&a\end{pmatrix}, \tag{3.4}\] where \(f_{j}\)'s are the Fourier coefficients of \(f\in\{\phi,\psi,\eta\}\). To distinguish these framed Toeplitz matrices, it is helpful to think of them visually as \(\leftarrow\uparrow,\rightarrow\downarrow,\leftarrow\downarrow\), and \(\rightarrow\uparrow\), respectively. For example, \(\leftarrow\uparrow\) is associated with \(\mathcal{E}_{n}\) because the index of the Fourier coefficients in the last row of \(\mathcal{E}_{n}\) increase from right to left (\(\leftarrow\)) and the index of the Fourier coefficients in the last column of \(\mathcal{E}_{N}\) increase from bottom to top (\(\uparrow\)). We should mention that each of these determinants can be written in terms of any other one by a simple observation (see Lemma 3.1). It can be easily checked that \[F_{N}\left[\phi;\sum_{j=1}^{m_{1}}A_{j}\psi_{j},\sum_{k=1}^{m_{2}}B_{k}\eta_{k} ;a\right]=\sum_{j=1}^{m_{1}}\sum_{k=1}^{m_{2}}A_{j}B_{k}F_{N}\left[\phi;\psi_{ j},\eta_{k};\widehat{a}_{j,k}\right],\qquad F\in\{\mathcal{E},\mathcal{G},\mathcal{H}, \mathcal{L}\}, \tag{3.5}\] where \(\widehat{a}_{j,k}\) are complex numbers satisfying \[\sum_{j=1}^{m_{1}}\sum_{k=1}^{m_{2}}A_{j}B_{k}\widehat{a}_{j,k}=a. 
\tag{3.6}\] If \(A_{j}\) and \(B_{k}\) are nonzero, one such set of numbers is obviously \[\widehat{a}_{j,k}=\frac{a}{m_{1}m_{2}A_{j}B_{k}}. \tag{3.7}\]

**Lemma 3.1**.: _The semi-framed determinants \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\), and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) have the following representations in terms of \(\mathcal{H}_{n}[\phi;f,g;a]\):_ \[\mathcal{E}_{n}[\phi;\psi,\eta;a]=\mathcal{H}_{n}[\phi;z^{n-2}\bar{\psi},\eta;a], \tag{3.8}\] \[\mathcal{G}_{n}[\phi;\psi,\eta;a]=\mathcal{H}_{n}[\phi;\psi,z^{n-2}\bar{\eta};a], \tag{3.9}\] \[\mathcal{L}_{n}[\phi;\psi,\eta;a]=\mathcal{H}_{n}[\phi;z^{n-2}\bar{\psi},z^{n-2}\bar{\eta};a], \tag{3.10}\] _where \(\bar{f}\) denotes the function \(z\mapsto f(z^{-1})\), \(f\in\{\psi,\eta\}\)._

Proof.: It is enough to observe that \(\left(z^{n-2}\bar{f}\right)_{j}=f_{n-2-j}\).

Notice that in general the semi-framed Toeplitz determinants cannot be reduced to simpler objects like pure-Toeplitz determinants or bordered Toeplitz determinants via Dodgson Condensation identities. Let us discuss here why no such identity exists. Let \(\mathcal{M}\) be an \(N\times N\) semi-framed Toeplitz determinant. If one hopes for a Dodgson Condensation identity \[\mathcal{M}\cdot\mathcal{M}\begin{pmatrix}j_{1}&j_{2}\\ k_{1}&k_{2}\end{pmatrix}=\mathcal{M}\begin{pmatrix}j_{1}\\ k_{1}\end{pmatrix}\cdot\mathcal{M}\begin{pmatrix}j_{2}\\ k_{2}\end{pmatrix}-\mathcal{M}\begin{pmatrix}j_{1}\\ k_{2}\end{pmatrix}\cdot\mathcal{M}\begin{pmatrix}j_{2}\\ k_{1}\end{pmatrix}, \tag{3.11}\] with a simpler right hand side (free of semi-framed determinants) then one must choose \(j_{2}=k_{2}=N-1\)1. Then, it is easy to see that no choice of \(j_{1}\) and \(k_{1}\) can lead to a situation where the right hand side of the corresponding Dodgson Condensation identity is free of semi-framed determinants. For example, with the most natural choice \(j_{1}=k_{1}=N-2\), we have Footnote 1: Recall, say from (1.1), that we index the rows and columns of an \(N\times N\) matrix by \(0\leq j\leq N-1\) and \(0\leq k\leq N-1\), respectively. \[\underbrace{\mathcal{M}}_{\text{semi-framed}}\cdot\underbrace{\mathcal{M}\begin{pmatrix}N-2&N-1\\ N-2&N-1\end{pmatrix}}_{\text{pure Toeplitz}}=\underbrace{\mathcal{M}\begin{pmatrix}N-2\\ N-2\end{pmatrix}}_{\text{semi-framed}}\cdot\underbrace{\mathcal{M}\begin{pmatrix}N-1\\ N-1\end{pmatrix}}_{\text{pure Toeplitz}}-\underbrace{\mathcal{M}\begin{pmatrix}N-2\\ N-1\end{pmatrix}}_{\text{bordered Toeplitz}}\cdot\underbrace{\mathcal{M}\begin{pmatrix}N-1\\ N-2\end{pmatrix}}_{\text{bordered Toeplitz}}. \tag{3.12}\] This suggests that the semi-framed Toeplitz determinants (corresponding to generic symbols) are structured determinants which must be studied independently without the hope for their reduction to the pure Toeplitz or bordered Toeplitz determinants. In fact, to that end, it turns out that the characterizing objects for the semi-framed Toeplitz determinants \(\mathcal{E}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{G}_{n}[\phi;\psi,\eta;a]\), \(\mathcal{H}_{n}[\phi;\psi,\eta;a]\), and \(\mathcal{L}_{n}[\phi;\psi,\eta;a]\) are the reproducing kernel of the system of orthogonal polynomials associated with the symbol \(\phi\), while the characterizing objects for the bordered Toeplitz determinants \(D^{B}_{n}[\phi;\psi]\) are the orthogonal polynomials themselves (see §2 and [BEGIL]).
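Identity (3.12) is an instance of the Jacobi/Dodgson Condensation identity and therefore holds for an arbitrary matrix; what matters above is only the identification of the minors. As a small numerical sketch (illustrative only, with random data and the \(\mathcal{H}\)-orientation (3.3) for the frame), one can build a semi-framed matrix and check the identity directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7

# a random semi-framed matrix in the H-orientation (3.3): Toeplitz bulk, a psi-column, an eta-row, corner a
coeffs = {k: rng.normal() for k in range(-(N - 2), N - 1)}
M = np.zeros((N, N))
for j in range(N - 1):
    for k in range(N - 1):
        M[j, k] = coeffs[j - k]
M[:N - 1, N - 1] = rng.normal(size=N - 1)    # psi_0, ..., psi_{N-2}
M[N - 1, :N - 1] = rng.normal(size=N - 1)    # eta_{N-2}, ..., eta_0
M[N - 1, N - 1] = rng.normal()               # the corner entry a

def minor(A, rows, cols):
    """Determinant of A with the listed rows and columns removed (unsigned minor)."""
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [i for i in range(A.shape[1]) if i not in cols]
    return np.linalg.det(A[np.ix_(keep_r, keep_c)])

lhs = np.linalg.det(M) * minor(M, {N - 2, N - 1}, {N - 2, N - 1})
rhs = (minor(M, {N - 2}, {N - 2}) * minor(M, {N - 1}, {N - 1})
       - minor(M, {N - 2}, {N - 1}) * minor(M, {N - 1}, {N - 2}))
print(np.isclose(lhs, rhs))   # (3.12) holds; the first factor on the right is itself semi-framed
```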
Even though in general, as we saw above, there does not exist a Dodgson Condensation identity which can relate a single semi-framed Toeplitz determinant to a number of pure-Toeplitz and bordered-Toeplitz ones, there are particular examples where such reductions are possible. In some sense such cases are the analogues of the identity (2.47), where for a particularly simple symbol (in (2.47): \(\psi\equiv 1\)) a more complex structured determinant (in (2.47): the bordered Toeplitz determinant) can be reduced to a less complex structured determinant (in (2.47): the pure-Toeplitz determinant). In the case of semi-framed Toeplitz determinants \(\mathcal{H}_{n+1}[\phi;\psi,\eta;a]\), we get such reductions when either \(\psi\equiv c\), \(\eta\equiv c\), \(\psi\equiv cz^{n-1}\), \(\eta\equiv cz^{n-1}\), \(c\in\mathbb{C}\), as described in the following lemma. **Lemma 3.2**.: _It holds that_ \[\mathcal{H}_{n+1}\left[\phi;1,\eta;a\right]=aD_{n}\left[\phi\right]+ (-1)^{n}D_{n}^{B}\left[z^{-1}\phi;\eta\right], \tag{3.14}\] \[\mathcal{H}_{n+1}\left[\phi;\psi,1;a\right]=aD_{n}\left[\phi\right] -D_{n}^{B}\left[\tilde{\phi};z^{n-1}\tilde{\psi}\right],\] (3.15) \[\mathcal{H}_{n+1}\left[\phi;z^{n-1},\eta;a\right]=aD_{n}\left[ \phi\right]-D_{n}^{B}\left[\phi;\eta\right],\] (3.16) \[\mathcal{H}_{n+1}\left[\phi;\psi,z^{n-1};a\right]=aD_{n}\left[ \phi\right]+(-1)^{n}D_{n}^{B}\left[z^{-1}\tilde{\phi};z^{n-1}\tilde{\psi}\right] \tag{3.13}\] _where \(\tilde{f}\) denotes the function \(z\mapsto f(z^{-1})\), \(f\in\{\psi,\phi\}\)._ Proof.: These are immediate consequences of the definitions (1.3) and (3.3) and observing that \(\left(z^{n-1}\tilde{f}\right)_{j}=f_{n-1-j}\). **Remark 3.3**.: In view of Lemma 3.1 it is indeed sufficient to prove the above lemma for the \(\mathcal{H}\)-semi-framed Toeplitz determinants. ### The Riemann-Hilbert characterization for semi-framed Toeplitz determinants: Proof of Theorem 1.9 Similar to what is shown about bordered Toeplitz determinant \(D_{n}^{B}\left[\phi,\psi\right]\) in section 2 of [BEGIL], in this section we show that the semi-framed Toeplitz determinants can also be expressed in terms of the solution of the Riemann-Hilbert problem for _pure_ Toeplitz determinants. Let \(\psi=q_{1}\phi+q_{2}\) and \(\eta=q_{3}\phi+q_{4}\) where \(\phi\) is the generating function of the Toeplitz part and \(q_{j}\)'s are rational functions with simple poles, \(j=1,2,3,4\). Below we show that unlike the bordered Toeplitz determinants which are related to the orthogonal polynomials and/or their Cauchy-type transforms (see SS2), the semi-framed Toeplitz determinants are related to the reproducing kernel of the same system of orthogonal polynomials. In order to see this connection, we need to first find a determinantal representation for the reproducing kernel which would play the same role for semi-framed Toeplitz determinants, as the determinantal representation (2.2) plays for the bordered Toeplitz determinants. We follow [GW] to find this determinantal representation for the reproducing kernel. To that end, we need to recall the LU decomposition of the Toeplitz matrix \(\boldsymbol{D}_{n}\left[\phi\right]\). 
Write the polynomials \(Q_{n}(z)\) and \(\widehat{Q}_{n}(z)\) as \[Q_{n}(z)=\sum_{j=0}^{n}q_{n,j}z^{j},\quad\text{and}\quad\widehat{Q}_{n}(z)= \sum_{j=0}^{n}\hat{q}_{n,j}z^{j}, \tag{3.17}\] and let us also denote \[\boldsymbol{Z}_{n}(z):=\begin{pmatrix}1\\ z\\ \vdots\\ z^{n}\end{pmatrix}\quad\text{and}\quad\boldsymbol{F}_{n}(z):=\begin{pmatrix}F_ {0}(z)\\ F_{1}(z)\\ \vdots\\ F_{n}(z)\end{pmatrix},\qquad\boldsymbol{F}\in\{\boldsymbol{Q},\widehat{ \boldsymbol{Q}}\}. \tag{3.18}\] We thus have \[\boldsymbol{Q}_{n}(z)=\boldsymbol{A}_{n}\boldsymbol{Z}_{n}(z),\qquad\widehat {\boldsymbol{Q}}_{n}(z)=\boldsymbol{B}_{n}\boldsymbol{Z}_{n}(z), \tag{3.19}\] where \(\boldsymbol{A}_{n}\) and \(\boldsymbol{B}_{n}\) are the following \((n+1)\times(n+1)\) lower triangular matrices \[\boldsymbol{A}_{n}:=\begin{pmatrix}q_{0,0}&0&\cdots&0\\ q_{1,0}&q_{1,1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ q_{n,0}&q_{n,1}&\cdots&q_{n,n}\end{pmatrix},\qquad\boldsymbol{B}_{n}:= \begin{pmatrix}\hat{q}_{0,0}&0&\cdots&0\\ \hat{q}_{1,0}&\hat{q}_{1,1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \hat{q}_{n,0}&\hat{q}_{n,1}&\cdots&\hat{q}_{n,n}\end{pmatrix}. \tag{3.20}\] **Theorem 3.4**.: _The LU decomposition of \(\boldsymbol{D}_{n+1}\left[\phi\right]\) is given by_ \[\boldsymbol{D}_{n+1}\left[\phi\right]=\left[\boldsymbol{B}_{n}\right]^{-1} \left[\boldsymbol{A}_{n}^{T}\right]^{-1}. \tag{3.21}\] Proof.: We have \[\delta_{\nu\mu} =\int_{\mathbb{T}}Q_{\nu}(\zeta)\widehat{Q}_{\mu}(\zeta^{-1})\phi( \zeta)\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}=\sum_{m=0}^{\nu}\sum_{\ell=0} ^{\mu}q_{\nu,m}\hat{q}_{\mu,\ell}\int_{\mathbb{T}}\zeta^{m-\ell}\phi(\zeta) \frac{\mathrm{d}\zeta}{2\pi\mathrm{i}\zeta}\] \[=\sum_{m=0}^{\nu}\sum_{\ell=0}^{\mu}q_{\nu,m}\hat{q}_{\mu,\ell} \phi_{\ell-m}=\sum_{m=0}^{\nu}\sum_{\ell=0}^{\mu}\left(\mathbf{A}_{n}\right)_{\nu,m }\left(\mathbf{B}_{n}\right)_{\mu,\ell}\left(\mathbf{D}_{n+1}\left[\phi\right]\right)_{ \ell,m}\] \[=\sum_{m=0}^{\nu}\sum_{\ell=0}^{\mu}\left(\mathbf{B}_{n}\right)_{\mu, \ell}\left(\mathbf{D}_{n+1}\right)_{\ell,m}\left(\mathbf{A}_{n}^{T}\right)_{m,\nu}= \left(\mathbf{B}_{n}\mathbf{D}_{n+1}\mathbf{A}_{n}^{T}\right)_{\mu,\nu}, \tag{3.22}\] which is equivalent to (3.21). Let us now consider the reproducing kernel \[K_{n}\left(z,\hat{z}\right):=\sum_{j=0}^{n}Q_{j}(z)\widehat{Q}_{j}(z), \tag{3.23}\] and for a complex parameter \(a\) define \[\widehat{K}_{n}(z,\hat{z};a):=\frac{1}{D_{n+1}[\phi]}\det\begin{pmatrix}\phi_{0 }&\phi_{-1}&\cdots&\phi_{-n}&1\\ \phi_{1}&\phi_{0}&\cdots&\phi_{1-n}&z\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&z^{n}\\ 1&\hat{z}&\cdots&\hat{z}^{n}&a\end{pmatrix}. \tag{3.24}\] **Theorem 3.5**.: _[_GW_]_ _The reproducing kernel \(K_{n}(z,\hat{z})\) has the following semi-framed Toeplitz determinant representation_ \[K_{n}(z,\hat{z})=a-\widehat{K}_{n}(z,\hat{z};a). 
\tag{3.25}\] Proof.: Let \[\widehat{\mathbf{K}}_{n}(z,\hat{z};a):=\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots& \phi_{-n}&1\\ \phi_{1}&\phi_{0}&\cdots&\phi_{1-n}&z\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&z^{n}\\ 1&\hat{z}&\cdots&\hat{z}^{n}&a\end{pmatrix}, \tag{3.26}\] and consider the following \((n+2)\times(n+2)\) extensions of \(\mathbf{A}_{n}\) and \(\mathbf{B}_{n}\) introduced in (3.20): \[\widehat{\mathbf{A}}_{n}:=\begin{pmatrix}\mathbf{A}_{n}&\mathbf{0}_{n+1}\\ \mathbf{0}_{n+1}^{T}&1\end{pmatrix},\quad\text{and}\quad\widehat{\mathbf{B}}_{n}:= \begin{pmatrix}\mathbf{B}_{n}&\mathbf{0}_{n+1}\\ \mathbf{0}_{n+1}^{T}&1\end{pmatrix}, \tag{3.27}\] where \(\mathbf{0}_{n}^{T}\) is the \(1\times n\) vector of zeros. We now have \[\widehat{\mathbf{B}}_{n}\widetilde{\mathbf{K}}_{n}\left(z,\zeta;a\right) \widetilde{\mathbf{A}}_{n}^{T} =\begin{pmatrix}\mathbf{B}_{n}&\mathbf{0}_{n+1}\\ \mathbf{0}_{n+1}^{T}&1\end{pmatrix}\begin{pmatrix}\mathbf{D}_{n+1}\left[\phi\right]& \mathbf{Z}_{n}(z)\\ \mathbf{Z}_{n}^{T}(\hat{z})&a\end{pmatrix}\begin{pmatrix}\mathbf{A}_{n}^{T}&\mathbf{0}_{n+1 }\\ \mathbf{0}_{n+1}^{T}&1\end{pmatrix}\] \[=\begin{pmatrix}\mathbf{B}_{n}\mathbf{D}_{n+1}\left[\phi\right]\mathbf{A}_{n }^{T}&\mathbf{B}_{n}\mathbf{Z}_{n}(z)\\ \mathbf{Z}_{n}^{T}(\hat{z})\mathbf{A}_{n}^{T}&a\end{pmatrix}=\begin{pmatrix}\mathbf{I}_{n}& \widetilde{\mathbf{Q}}_{n}(z)\\ \mathbf{Q}_{n}^{T}(\hat{z})&a\end{pmatrix}. \tag{3.28}\] Taking the determinant of both sides of (3.28) yields \[\widehat{K}_{n}(z,\hat{z};a)=a-K_{n}(z,\hat{z}), \tag{3.29}\] where we have used \[\det\widehat{\mathbf{A}}_{n}=\det\widehat{\mathbf{B}}_{n}=\prod_{j=0}^{n}\varkappa_{j}= \frac{1}{\sqrt{D_{n+1}\left[\phi\right]}}. \tag{3.30}\] #### 3.1.1. Proof of Theorem 1.9 This Theorem bridges the semi-framed Toeplitz determinants (3.1) - (3.4) to the reproducing kernel \(K_{n}(z,\hat{z})\). We only prove (1.50) as the remaining identities can be proven identically. Recalling (3.24) notice that \[\int_{\mathbb{T}}\widehat{K}_{n}(z_{1}^{-1},z_{2};\hat{a})z_{2}^{-n}\eta(z_{2} )\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}}=\frac{1}{D_{n+1}\left[\phi\right] }\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&1\\ \phi_{1}&\phi_{0}&\cdots&\phi_{1-n}&z_{1}^{-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&z_{1}^{-n}\\ \eta_{n}&\eta_{n-1}&\cdots&\eta_{0}&\hat{a}\eta_{n}\end{pmatrix}. \tag{3.31}\] Therefore \[\int_{\mathbb{T}}\left(\int_{\mathbb{T}}\widehat{K}_{n}(z_{1}^{ -1},z_{2};\hat{a})z_{2}^{-n}\eta(z_{2})\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_ {2}}\right)\psi(z_{1})\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}=\frac{1}{D_ {n+1}\left[\phi\right]}\det\begin{pmatrix}\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}& \psi_{0}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{1-n}&\psi_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{n}\\ \eta_{n}&\eta_{n-1}&\cdots&\eta_{0}&\hat{a}\eta_{n}\psi_{0}\end{pmatrix}, \tag{3.32}\] or \[\frac{\mathcal{H}_{n+2}\left[\phi;\psi,\eta;\hat{a}\eta_{n}\psi_{0}\right]}{D _{n+1}\left[\phi\right]}=\int_{\mathbb{T}}\left(\int_{\mathbb{T}}\widehat{K}_ {n}(z_{1}^{-1},z_{2};\hat{a})z_{2}^{-n}\eta(z_{2})\frac{\mathrm{d}z_{2}}{2 \pi\mathrm{i}z_{2}}\right)\psi(z_{1})\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{ 1}}. 
\tag{3.33}\] Now, employing (3.25) we find \[\begin{split}\frac{\mathcal{H}_{n+2}\left[\phi;\psi,\eta;\hat{a} \eta_{n}\psi_{0}\right]}{D_{n+1}\left[\phi\right]}&=\int_{ \mathbb{T}}\left(\int_{\mathbb{T}}\left(\hat{a}-K(z_{1}^{-1},z_{2})\right)z_{2 }^{-n}\eta(z_{2})\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}}\right)\psi(z_{1} )\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}\\ &=\hat{a}\eta_{n}\psi_{0}-\int_{\mathbb{T}}\left(\int_{\mathbb{T} }K(z_{1}^{-1},z_{2})z_{2}^{-n}\eta(z_{2})\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i} z_{2}}\right)\psi(z_{1})\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}.\end{split} \tag{3.34}\] This is the desired result (1.50) if we denote \(a\equiv\hat{a}\eta_{n}\psi_{0}\). #### 3.1.2. Proof of Corollary 1.9.1 Moving on, let us recall the Christoffel-Darboux identity for the bi-orthogonal polynomials on the unit circle [Sz],[DIK]. For any \(z\neq 0\) and \(n\in\mathbb{N}\cup\{0\}\) we have \[\begin{split} K_{n}(z^{-1},z)&=\sum_{j=0}^{n}Q_{j}( z)\widehat{Q}_{j}(z^{-1})=-(n+1)Q_{n+1}(z)\widehat{Q}_{n+1}(z^{-1})\\ &+z\left(\widehat{Q}_{n+1}(z^{-1})\frac{\mathrm{d}}{\mathrm{d}z}Q _{n+1}(z)-Q_{n+1}(z)\frac{\mathrm{d}}{\mathrm{d}z}\widehat{Q}_{n+1}(z^{-1}) \right),\end{split} \tag{3.35}\] and for any \(z_{2}\), \(z_{1}\neq 0\) and \(n\in\mathbb{N}\cup\{0\}\) we have \[(1-z_{1}^{-1}z_{2})\sum_{j=0}^{n}Q_{j}(z_{2})\widehat{Q}_{j}(z_{1}^{-1})=z_{1} ^{-n-1}Q_{n+1}(z_{1})z_{2}^{n+1}\widehat{Q}_{n+1}(z_{2}^{-1})-\widehat{Q}_{n+1}( z_{1}^{-1})Q_{n+1}(z_{2}) \tag{3.36}\] Therefore, if \(z_{1}\neq z_{2}\) \[K_{n}(z_{1}^{-1},z_{2})=\frac{z_{1}^{-n-1}Q_{n+1}(z_{1})z_{2}^{n+1}\widehat{Q}_{n+ 1}(z_{2}^{-1})-\widehat{Q}_{n+1}(z_{1}^{-1})Q_{n+1}(z_{2})}{1-z_{1}^{-1}z_{2}}. \tag{3.37}\] Now observe that the equations (3.35) and (3.37) are going to be particularly useful when we attempt to use (1.50) for the \(\leftarrow\downarrow\)- or \(\mathcal{H}\)- framed determinants. In fact we readily have (1.52). To see this, let us recall the integral on the right hand side of (1.50) \[\frac{1}{4\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}K_{n}(e^{-\mathrm{i}\theta_{1} },e^{\mathrm{i}\theta_{2}})\eta(e^{\mathrm{i}\theta_{2}})\psi(e^{\mathrm{i} \theta_{1}})e^{-\mathrm{i}n\theta_{2}}\mathrm{d}\theta_{1}\mathrm{d}\theta_{2} \tag{3.38}\] Notice that the integration over the diagonal set \(\{(\theta,\theta),0\leq\theta\leq 2\pi\}\) of measure zero makes no contribution to this integral and hence we do not need to employ the identity (3.35). Recalling (2.8) the equation (3.37) can be written in terms of the \(X\)-RHP data as follows: \[K_{n}(z_{1}^{-1},z_{2})=\frac{z_{1}^{-n}}{z_{1}-z_{2}}\det\begin{pmatrix}X_{1 1}(z_{2};n+1)&X_{21}(z_{2};n+2)\\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}. \tag{3.39}\] Plugging this into (1.50) gives the desired result (1.52). The remaininig \(X\)-RHP characterizations for \(\mathcal{E},\mathcal{G}\), and \(\mathcal{L}\) can be immediately obtained using Lemma 3.1. **Remark 3.6**.: Notice that the integrand in (1.52) is well defined on the unit circle because the first column of the solution of the \(X\)-RHP is entire (see **RH-X1** and **RH-X2**). Semi-framed Toeplitz determinants involving rational frame symbols: Proofs of Theorems 1.10 and 1.11 In the following two subsections, we examine frame symbols which are either rational or are products of the bulk symbol with a rational function. It is important to note that we make these choices because they represent simple and yet nontrivial examples for illustrating the asymptotic analysis. 
Indeed, the Riemann-Hilbert characterizations provided in Corollary 1.9.1 can be further explored with more complex symbol choices, such as Fisher-Hartwig \(\phi\) and other classes of frame symbols #### 3.2.1. Proof of Theorem 1.10 In the case of rational border symbols we have a simpler representation of semi-framed Toeplitz determinants. Recall the function \(q_{0}\) introduced in (2.48). The Fourier coefficients of \(q_{0}\) are \[q_{0,j}=\begin{cases}0,&|c|<1\\ -(c)^{-j-1},&|c|>1,\end{cases}\qquad 0\leq j\leq n. \tag{3.40}\] This immediately leads to the following elementary property of semi-framed Toeplitz determinants: **Lemma 3.7**.: _Let \(c_{j}\) be complex numbers with \(|c_{j}|<1\), \(j=1,\cdots,m\). It holds that_ \[\mathcal{H}_{n+2}[\phi;\psi,\sum_{j=1}^{m}\frac{b_{j}}{z-c_{j}};a]=aD_{n+1}[ \phi], \tag{3.41}\] _and_ \[\mathcal{H}_{n+2}[\phi;\sum_{j=1}^{m}\frac{b_{j}}{z-c_{j}},\eta;a]=aD_{n+1}[ \phi]. \tag{3.42}\] **Lemma 3.8**.: _Let \(c\) and \(d\) be complex numbers that do not lie on the unit circle. The semi-framed determinant \(\mathcal{H}_{n+2}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a\big{]}\) is encoded into the X-RHP data described by_ \[\frac{\mathcal{H}_{n+2}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a \big{]}}{D_{n+1}\big{[}\phi\big{]}}\] \[=a-\frac{1}{(dc)^{n+1}}\begin{cases}0&\text{Either $|c|<1$ or $|d|<1$,}\\ \dfrac{X_{11}(c;n+1)X_{21}(d;n+2)-X_{21}(c;n+2)X_{11}(d;n+1)}{d-c},&\text{$|c|>1$ and $|d|>1$ and $c\neq d$,}\\ X_{11}(d;n+1)X_{21}^{\prime}(d;n+2)-X_{21}(d;n+2)X_{11}^{\prime}(d;n+1),&\text{$|c|>1$ and $|d|>1$ and $c=d$.}\end{cases} \tag{3.44}\] \[\frac{\mathcal{L}_{n+2}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a \big{]}}{D_{n+1}\big{[}\phi\big{]}}\] \[=a-\begin{cases}0&\text{Either $|c|<1$ or $|d|<1$,}\\ \dfrac{X_{11}(c^{-1};n+1)X_{21}(d^{-1};n+2)-X_{21}(c^{-1};n+2)X_{11}(d^{-1};n+ 1)}{d-c},&\text{$|c|>1$ and $|d|>1$ and $c\neq d$,}\\ d^{-2}\left(X_{11}(d^{-1};n+1)X_{21}^{\prime}(d^{-1};n+2)-X_{21}(d^{-1};n+2)X_{ 11}^{\prime}(d^{-1};n+1)\right),&\text{$|c|>1$ and $|d|>1$ and $c=d$.}\end{cases}\] (3.45) \[\frac{\mathcal{E}_{n+2}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a \big{]}}{D_{n+1}\big{[}\phi\big{]}}\] \[=a-\frac{1}{c^{n+1}}\begin{cases}0&\text{Either $|c|<1$ or $|d|<1$,}\\ \dfrac{X_{11}(c;n+1)X_{21}(d^{-1};n+2)-X_{21}(c;n+2)X_{11}(d^{-1};n+1)}{1-dc},& \text{$|c|>1$ and $|d|>1$,}\end{cases}\] (3.46) \[\frac{\mathcal{G}_{n+2}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a \big{]}}{D_{n+1}\big{[}\phi\big{]}}\] \[=a-\frac{1}{d^{n+1}}\begin{cases}0&\text{Either $|c|<1$ or $|d|<1$,}\\ \dfrac{X_{11}(c^{-1};n+1)X_{21}(d;n+2)-X_{21}(c^{-1};n+2)X_{11}(d;n+1)}{dc-1}, &\text{$|c|>1$ and $|d|>1$,}\end{cases} \tag{3.43}\] _Moreover, if \(\phi\) is of Szego-type, we have_ \[\mathcal{H}_{n+1}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a\big{]}=G^{n}\big{[} \phi\big{]}E\big{[}\phi\big{]}\left(a+O(\rho^{-n})\right) \tag{3.48}\] \[\mathcal{L}_{n+1}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a\big{]}=G ^{n}\big{[}\phi\big{]}E\big{[}\phi\big{]}\left(a+O(\rho^{-n})\right),\] (3.49) \[\mathcal{E}_{n+1}\big{[}\phi;\frac{1}{z-d},\frac{1}{z-c};a\big{]}=G ^{n}\big{[}\phi\big{]}E\big{[}\phi\big{]}\begin{cases}\left(a+O(\rho^{-n}) \right),&\text{Either $|c|<1$ or $|d|<1$,}\\ \left(a+\frac{\alpha(c)}{\alpha(d^{-1})}\cdot\frac{1}{1-cd}+O(\rho^{-n}) \right),&\text{$|c|>1$ and $|d|>1$,}\end{cases} \tag{3.47}\] \[\mathcal{G}_{n+1}[\phi;\frac{1}{z-d},\frac{1}{z-c};a]=G^{n}[\phi]E[\phi]\begin{cases} \left(a+O\left(\rho^{-n}\right)\right),&\text{Either $|c|<1$ or $|d|<1$,}\\ \\ 
\left(a+\frac{\alpha(d)}{\alpha(c^{-1})}\cdot\frac{1}{1-cd}+O(\rho^{-n}) \right),&|c|>1\text{ and $|d|>1$,}\end{cases} \tag{3.50}\] _Here the number \(\rho\) is such that for \(\lambda\in\{c,d\}\): a) \(1<\rho<|\lambda|\), if \(|\lambda|>1\), b) \(|\lambda|<\rho^{-1}<1\), if \(|\lambda|<1\), and c) \(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ Proof.: The statements about the cases in which \(|c|<1\) or \(|d|<1\) are obvious in view of (3.40). We only prove the statements involving \(\mathcal{H}\), since the proof of the statements about \(\mathcal{L},\mathcal{G}\) and \(\mathcal{E}\) can be obtained in a similar way. Consider the case \(|c|>1\) and \(|d|>1\). Notice that \[\mathcal{H}_{n+2}[\phi;\frac{1}{z-d},\frac{1}{z-c};a]=\det\begin{pmatrix}\phi _{0}&\phi_{-1}&\cdots&\phi_{-n}&-d^{-1}\\ \phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&-d^{-2}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&-d^{-n-1}\\ -c^{-n-1}&-c^{-n}&\cdots&-c^{-1}&a\end{pmatrix} \tag{3.51}\] Recalling (3.24) we observe that \[\mathcal{H}_{n+2}[\phi;\frac{1}{z-d},\frac{1}{z-c};a]=\frac{1}{d\cdot c^{n+1}} D_{n+1}[\phi]\widehat{K}_{n}\left(d^{-1},c;adc^{n+1}\right)=\frac{1}{d\cdot c^{n+1} }D_{n+1}[\phi]\left(adc^{n+1}-K_{n}(d^{-1},c)\right), \tag{3.52}\] where in the last step we have used Theorem 3.5. Therefore using (3.35) and (3.37) we find \[\mathcal{H}_{n+2}[\phi;\frac{1}{z-d},\frac{1}{z-c};a]=aD_{n+1}[ \phi]\] \[-\frac{D_{n+1}[\phi]}{d\cdot c^{n+1}}\begin{cases}-(n+1)Q_{n+1}(d )\widehat{Q}_{n+1}(d^{-1})+d\left(\widehat{Q}_{n+1}(d^{-1})\frac{\mathrm{d}}{ \mathrm{d}z}Q_{n+1}(z)\bigg{|}_{z=d}-Q_{n+1}(d)\frac{\mathrm{d}}{\mathrm{d}z} \widehat{Q}_{n+1}(z^{-1})\bigg{|}_{z=d}\right)&c=d\\ \frac{d^{-n-1}Q_{n+1}(d)c^{n+1}\widehat{Q}_{n+1}(c^{-1})-\widehat{Q}_{n+1}(d ^{-1})Q_{n+1}(c)}{1-d^{-1}c}&c\neq d\end{cases} \tag{3.53}\] Now, recall from (2.8) that \(Q_{n+1}(z)=\kappa_{n+1}X_{11}(z;n+1)\) and \(\widehat{Q}_{n+1}(z^{-1})=-\kappa_{n+1}^{-1}z^{-n-1}X_{21}(z;n+2)\). Plugging these into (3.53) after straightforward simplifications we get (3.43). The asymptotics (3.47) now follows from (3.43), (4.19), and The strong Szego theorem for pure Toeplitz determinants when \(\phi\) is of Szego-type. We have now arrived at the proof of Theorem 1.10, as for instance, (1.56) follows from (3.5), (3.6), and (3.47). **Remark 3.9**.: In view of Lemma 3.8, we observe that the way one positions the Fourier coefficients in the last row and the last column of a semi-framed Toeplitz determinant does in fact affect the leading order term of the large-size asymptotics. This observation motivated us to revisit the bordered Toeplitz determinants with the reverse order of positioning the Fourier coefficients. Indeed, let us denote12 Footnote 12: Compare with (1.3). (3.54) Then, for \(\psi=q_{2}\) (given by (1.13)) by the same techniques used in [BEGIL] we obtain13 Footnote 13: Notice that \(\partial_{n+1}^{B}[\phi;1]/D_{n}[\phi]=(-1)^{n}D_{n}[z\phi]/D_{n}[\phi]=\kappa_{n }^{-1}Q_{n}(0)=X_{11}(0;n)\), where we have used (2.18) and (2.8). Also, note that \(\partial_{n+1}^{B}[\phi;z]/D_{n}[\phi]\) is the coefficient of \(z\) in the polynomial \(X_{11}(z;n)\), so \(\partial_{n+1}^{B}[\phi;z]/D_{n}[\phi]=X_{11}^{\prime}(0;n)\). \[\frac{\mathcal{D}_{n+1}^{B}\left[\phi;\hat{a}_{0}+\hat{a}_{1}z+\frac{\hat{b}_{ 0}}{z}+\sum_{j=1}^{m}\frac{\hat{b}_{j}}{z-c_{j}}\right]}{D_{n}[\phi]}=\hat{a}_ {0}X_{11}(0;n)+\hat{a}_{0}X_{11}^{\prime}(0;n)-\sum_{j=1\atop|c_{j}|>1}^{m} \hat{b}_{j}c_{j}^{-1}X_{11}(c_{j}^{-1};n)\,. 
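This difference is already visible numerically at moderate sizes. The following minimal sketch is illustrative only: the display defining \(\mathcal{D}_{n+1}^{B}\) is not reproduced here, so the reversed border row \((\psi_{0},\dots,\psi_{n})\) used below is one natural reading consistent with the footnoted special cases. For \(\psi(z)=1/(z-c)\) with \(|c|>1\), the first ratio should settle near a nonzero constant while the reversed one should decay with \(n\).

```python
import numpy as np

def fourier_coeffs(symbol, n_modes, n_grid=4096):
    t = 2 * np.pi * np.arange(n_grid) / n_grid
    vals = symbol(np.exp(1j * t))
    return {k: np.mean(vals * np.exp(-1j * k * t)) for k in range(-n_modes, n_modes + 1)}

def toeplitz_det(f, n):
    return np.linalg.det(np.array([[f.get(j - k, 0.0) for k in range(n)] for j in range(n)]))

def bordered_det(f, g, n, reverse=False):
    """(n+1)x(n+1): n Toeplitz rows of f; border row (g_n,...,g_0), or (g_0,...,g_n) if reverse."""
    rows = [[f.get(j - k, 0.0) for k in range(n + 1)] for j in range(n)]
    border = [g.get(k, 0.0) for k in range(n + 1)] if reverse else [g.get(n - k, 0.0) for k in range(n + 1)]
    return np.linalg.det(np.array(rows + [border]))

phi = lambda z: np.exp(0.4 * (z + 1 / z))     # an illustrative Szego-type symbol
psi = lambda z: 1 / (z - 1.8)                 # border symbol with a pole outside the unit circle

for n in (6, 10, 14, 18):
    f, g = fourier_coeffs(phi, n + 2), fourier_coeffs(psi, n + 2)
    Dn = toeplitz_det(f, n)
    print(n, abs(bordered_det(f, g, n) / Dn), abs(bordered_det(f, g, n, reverse=True) / Dn))
```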
\tag{3.55}\] Notice that the right hand side of (3.55) is exponentially small as \(n\to\infty\) in view of (4.19) and (4.16), as opposed to \[\frac{D_{n+1}^{B}\left[\phi;\hat{a}_{0}+\hat{a}_{1}z+\frac{\hat{b}_{0}}{z}+ \sum_{j=1}^{m}\frac{\hat{b}_{j}}{z-c_{j}}\right]}{D_{n}[\phi]}=\hat{a}_{0}- \hat{a}_{1}[\log\phi]_{-1}-\sum_{j=1\atop|c_{j}|>1}^{m}\frac{\hat{b}_{j}}{c_{ j}}\alpha(c_{j})+O(e^{-cn}),\qquad\text{for some $c>0$,}\] which we have taken from Theorem 1.3 (recall that \(\alpha(0)=G[\phi]\)). So we see that different ways of positioning the Fourier coefficients in the last column of a bordered Toeplitz determinant also changes the leading order term of the large-size asymptotics. #### 3.2.2. Proof of Theorem 1.11 Throughout this section we assume that \(\phi\) is a Szego-type symbol, so that we can refer to the Riemann-Hilbert analysis reminded in the appendix in SS4. **Lemma 3.10**.: _Let \(\phi\) be a Szego-type symbol, and suppose that \(|c|,|d|\neq 1\). It holds that as \(n\to\infty\)_ \[\mathcal{H}_{n+1}\left[\phi;\frac{\phi}{z-d},\frac{\phi}{z-c};a\right]=G[ \phi]^{n}E[\phi]\left(a+O(\rho^{-n})\right), \tag{3.56}\] \[\mathcal{L}_{n+1}\left[\phi;\frac{\phi}{z-d},\frac{\phi}{z-c};a\right]=G[ \phi]^{n}E[\phi]\left(a+O(\rho^{-n})\right), \tag{3.57}\] \[\mathcal{E}_{n+1}\left[\phi;\frac{\phi}{z-d},\frac{\phi}{z-c};a\right]=G[ \phi]^{n}E[\phi]\left\{\begin{aligned} a+\frac{\alpha(c)}{\alpha(d-1)} \cdot\frac{1}{1-cd}+O(\rho^{-n}),&\text{if $\ |c|<1$ and $|d|<1$,}\\ a+O(\rho^{-n}),&\text{either $\ |c|>1$ or $|d|>1$,}\end{aligned}\right. \tag{3.58}\] _and_ \[\mathcal{G}_{n+1}\left[\phi;\frac{\phi}{z-d},\frac{\phi}{z-c};a\right]=G[ \phi]^{n}E[\phi]\left\{\begin{aligned} a+\frac{\alpha(d)}{\alpha(c-1)} \cdot\frac{1}{1-cd}+O(\rho^{-n}),&\text{if $\ |c|<1$ and $|d|<1$,}\\ a+O(\rho^{-n}),&\text{either $\ |c|>1$ or $|d|>1$.}\end{aligned}\right. \tag{3.59}\] _Here the number \(\rho\) is such that for \(\lambda\in\{c,d\}\): a) \(1<\rho<|\lambda|\), if \(|\lambda|>1\), \(b)\)\(|\lambda|<\rho^{-1}<1\), if \(|\lambda|<1\), and \(c)\)\(\phi\) is analytic in the annulus \(\{z:\rho^{-1}<|z|<\rho\}\)._ Proof.: We only prove (3.58) as (3.56), (3.57), and (3.59) can be obtained similarly. From (4.19) in \(\Omega_{1}\) (see Figure 1) we have \[X_{11}(z;n)=z^{n}\alpha(z)\phi^{-1}(z)\left(1+O(\rho^{-n})\right), \qquad n\to\infty, \tag{3.61}\] \[X_{21}(z;n)=-\alpha^{-1}(z)\left(1+O(\rho^{-n})\right),\qquad n \to\infty. 
\tag{3.60}\] Therefore, recalling (1.53), we have \[\begin{split}\frac{\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]}{D_{n +1}\left[\phi\right]}-a&=-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{ z_{2}^{-n}\eta(z_{2})\tilde{\psi}(z_{1})}{z_{1}-z_{2}}\det\begin{pmatrix}X_{11}(z_{2};n+1)&X_{21}(z_{2};n+2) \\ X_{11}(z_{1};n+1)&X_{21}(z_{1};n+2)\end{pmatrix}\frac{\mathrm{d}z_{2}}{2\pi \mathrm{i}z_{2}}\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}\\ &=-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{z_{2}^{-n}\eta(z_{2}) \tilde{\psi}(z_{1})}{z_{1}-z_{2}}\left(-z_{2}^{n+1}\alpha_{+}(z_{2})\phi^{-1}( z_{2})\alpha_{+}^{-1}(z_{1})\right)\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}} \frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}\\ &-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{z_{2}^{-n}\eta(z_{2}) \tilde{\psi}(z_{1})}{z_{1}-z_{2}}\left(z_{1}^{n+1}\alpha_{+}(z_{1})\phi^{-1}( z_{1})\alpha_{+}^{-1}(z_{2})\right)\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}} \frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}+O(\rho^{-n}).\end{split} \tag{3.62}\] In view of (4.8) we can rewrite this as \[\begin{split}\frac{\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]} {D_{n+1}\left[\phi\right]}-a=&\int_{\mathbb{T}}\int_{\mathbb{T}} \frac{\eta(z_{2})\tilde{\psi}(z_{1})}{z_{1}-z_{2}}\alpha_{+}(z_{2})\phi^{-1}( z_{2})\phi^{-1}(z_{1})\alpha_{-}^{-1}(z_{1})\frac{\mathrm{d}z_{2}}{2\pi \mathrm{i}}\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_{1}}\\ &-\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{\eta(z_{2})\tilde{\psi} (z_{1})}{z_{1}-z_{2}}\left(\frac{z_{1}}{z_{2}}\right)^{n}\alpha_{+}(z_{1}) \phi^{-1}(z_{1})\phi^{-1}(z_{2})\alpha_{-}^{-1}(z_{2})\frac{\mathrm{d}z_{2}}{2 \pi\mathrm{i}z_{2}}\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}}+O(\rho^{-n}).\end{split} \tag{3.63}\] For \(\mathfrak{r}>1\), define \(\mathbb{T}_{\pi}^{\mathfrak{r}}:=\left\{z\,:\,\left|z\right|=\mathfrak{r}^{ \pm 1}\right\}\) and \(\mathbb{D}_{\pi}^{\mathfrak{r}}:=\left\{z\,:\,\left|z\right|<\mathfrak{r}^{\pm 1 }\right\}\). We choose \(\mathfrak{r}\) so that \(\psi\), \(\eta\), and \(\phi\) are analytic in \(\mathbb{D}_{\mathfrak{r}}^{\mathfrak{r}}\setminus\overline{\mathbb{D}_{+}^{ \mathfrak{r}}}\). 
With this choice of \(\mathfrak{r}\) we deform the contours of integration to rewrite the previous equation as \[\begin{split}\frac{\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a\right]} {D_{n+1}\left[\phi\right]}-a=&\int_{\mathbb{T}_{\pi}^{\mathfrak{r }}}\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\frac{\eta(z_{2})\tilde{\psi}(z_{1})} {z_{1}-z_{2}}\alpha(z_{2})\phi^{-1}(z_{2})\phi^{-1}(z_{1})\alpha^{-1}(z_{1}) \frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}}\frac{\mathrm{d}z_{1}}{2\pi\mathrm{i}z_ {1}}\\ &-\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\int_{\mathbb{T}_{\pi}^{ \mathfrak{r}}}\frac{\eta(z_{2})\tilde{\psi}(z_{1})}{z_{1}-z_{2}}\left(\frac{z_{1 }}{z_{2}}\right)^{n}\alpha(z_{1})\phi^{-1}(z_{1})\phi^{-1}(z_{2})\alpha^{-1}(z _{2})\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}}\frac{\mathrm{d}z_{1}}{2\pi \mathrm{i}}+O(\rho^{-n})\\ =&\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\int_{\mathbb{ T}_{\pi}^{\mathfrak{r}}}z_{1}^{-1}\eta(z_{2})\tilde{\psi}(z_{1})\alpha(z_{2})\phi^{-1}( z_{2})\phi^{-1}(z_{1})\alpha^{-1}(z_{1})\left(\sum_{k=0}^{\infty}\frac{z_{k}^{k}}{z_{k}^{k}} \right)\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}}\frac{\mathrm{d}z_{1}}{2\pi \mathrm{i}z_{1}}\\ &+\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\int_{\mathbb{T}_{\pi}^{ \mathfrak{r}}}z_{2}^{-1}\eta(z_{2})\tilde{\psi}(z_{1})\alpha(z_{1})\phi^{-1}( z_{1})\phi^{-1}(z_{2})\alpha^{-1}(z_{2})\left(\sum_{k=0}^{\infty}\frac{z_{1}^{k+n}}{z_{k}^{k+n}} \right)\frac{\mathrm{d}z_{2}}{2\pi\mathrm{i}z_{2}}\frac{\mathrm{d}z_{1}}{2\pi \mathrm{i}}+O(\rho^{-n})\\ =&\sum_{k=0}^{\infty}\left[\int_{\mathbb{T}_{\pi}^{ \mathfrak{r}}}\frac{\eta(z)}{\phi(z)}\alpha(z)z^{k}\frac{\mathrm{d}z}{2\pi \mathrm{i}}\right]\left[\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\frac{\tilde{\psi}(z )}{\phi(z)}\alpha^{-1}(z)z^{-k-1}\frac{\mathrm{d}z}{2\pi\mathrm{i}z}\right]\\ &+\sum_{k=0}^{\infty}\left[\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}} \frac{\tilde{\psi}(z)}{\phi(z)}\alpha(z)z^{k+n}\frac{\mathrm{d}z}{2\pi \mathrm{i}}\right]\left[\int_{\mathbb{T}_{\pi}^{\mathfrak{r}}}\frac{\eta(z)}{ \phi(z)}\alpha^{-1}(z)z^{-n-k-1}\frac{\mathrm{d}z}{2\pi\mathrm{i}z}\right]+O( \rho^{-n}).\end{split} \tag{3.64}\] Since \(\left|c\right|\neq 1\) and \(\left|d\right|\neq 1\), we can choose \(\varepsilon>0\) small enough so that \(c\) and \(d\) do not belong to \(\mathbb{D}_{-}^{\varepsilon}\setminus\overline{\mathbb{D}_{+}^{\varepsilon}}\). 
Replacing \(\psi\) and \(\eta\) respectively by \(\frac{\tilde{\phi}}{z-d}\) and \(\frac{\phi}{z-c}\) in the last member of (3.64) gives \[\begin{split}\frac{\mathcal{E}_{n+2}\left[\phi;\frac{\bar{\phi}}{z-d}, \frac{\phi}{z-c};a\right]}{D_{n+1}[\phi]}-a\simeq&\sum_{k=0}^{ \infty}\left[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha(z)z^{k}}{z-c}\frac {\mathrm{d}z}{2\pi\mathrm{i}}\right]\left[\int_{\mathbb{T}_{+}^{\mathrm{t}}} \frac{\alpha^{-1}(z)z^{-k-1}}{z^{-1}-d}\frac{\mathrm{d}z}{2\pi\mathrm{i}z} \right]\\ &+\sum_{k=0}^{\infty}\left[\int_{\mathbb{T}_{+}^{\mathrm{t}}} \frac{\alpha(z)z^{k+n}}{z^{-1}-d}\frac{\mathrm{d}z}{2\pi\mathrm{i}}\right] \left[\int_{\mathbb{T}_{-}^{\mathrm{t}}}\frac{\alpha^{-1}(z)z^{-n-k-1}}{z-c} \frac{\mathrm{d}z}{2\pi\mathrm{i}z}\right]\\ &=\sum_{k=0}^{\infty}\left[\int_{\mathbb{T}_{+}^{\mathrm{t}}} \frac{\alpha(z)z^{k}}{z-c}\frac{\mathrm{d}z}{2\pi\mathrm{i}}\right]\left[\int_ {\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha^{-1}(z^{-1})z^{k}}{z-d}\frac{ \mathrm{d}z}{2\pi\mathrm{i}}\right]\\ &+\frac{1}{c\cdot d}\sum_{k=0}^{\infty}\left[\int_{\mathbb{T}_{+ }^{\mathrm{t}}}\frac{\alpha(z)z^{n+k+1}}{z-d^{-1}}\frac{\mathrm{d}z}{2\pi \mathrm{i}}\right]\left[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha^{-1}(z ^{-1})z^{n+k+1}}{z-c^{-1}}\frac{\mathrm{d}z}{2\pi\mathrm{i}}\right]\end{split} \tag{3.65}\] Since \(\alpha\) is analytic in \(\mathbb{C}\setminus\mathbb{T}\), we immediately have \[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha(z)z^{k}}{z-c}\frac{\mathrm{d}z} {2\pi\mathrm{i}}=\begin{cases}\alpha(c)c^{k}&|c|<1,\\ 0&|c|>1,\end{cases} \tag{3.66}\] and \[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha(z)z^{n+k+1}}{z-d^{-1}}\frac{ \mathrm{d}z}{2\pi\mathrm{i}}=\begin{cases}0&|d|<1,\\ \alpha(d^{-1})d^{-n-k-1}&|d|>1.\end{cases} \tag{3.67}\] Recall that \[\alpha(z)=\exp\left[\frac{1}{2\pi\mathrm{i}}\int_{\mathbb{T}}\frac{\ln(\phi( \tau))}{\tau-z}d\tau\right].\] Since \(\alpha(z)\neq 0\) where it is defined, \(\alpha^{-1}(z^{-1})\) is also analytic in \(\mathbb{C}\setminus\mathbb{T}\). So \[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha^{-1}(z^{-1})z^{k}}{z-d}\frac{ \mathrm{d}z}{2\pi\mathrm{i}}=\begin{cases}\alpha^{-1}(d^{-1})d^{k}&|d|<1,\\ 0&|d|>1,\end{cases} \tag{3.68}\] and \[\int_{\mathbb{T}_{+}^{\mathrm{t}}}\frac{\alpha^{-1}(z^{-1})z^{n+k+1}}{z-c^{-1 }}\frac{\mathrm{d}z}{2\pi\mathrm{i}}=\begin{cases}0&|c|<1,\\ \alpha^{-1}(c)c^{-n-k-1}&|c|>1.\end{cases} \tag{3.69}\] Using these in last member of (3.65) and summing up the resulting geometric series we obtain14 Footnote 14: We choose \(\rho>1\) so that there are no poles in the annulus with radii \(\rho\) and \(\rho^{-1}\), so \(c^{-n}=O(\rho^{-n})\) when \(|c|>1\). \[\frac{\mathcal{E}_{n+2}\left[\phi;\frac{\bar{\phi}}{z-d},\frac{\phi}{z-c};a \right]}{D_{n+1}[\phi]}-a=\begin{cases}\frac{\alpha(c)}{\alpha(d^{-1})} \cdot\frac{1}{1-cd}+O(\rho^{-n}),&\text{if }\ |c|<1\ \text{and}\ |d|<1,\\ O(\rho^{-n}),&\text{Either }\ |c|>1\ \text{or}\ |d|>1.\end{cases} \tag{3.70}\] Changing \(n\mapsto n-1\), and recalling the Strong Szego Theorem we obtain (3.58). We have now arrived at the proof of Theorem 1.11, as the asymptotic formulas (1.60) through (1.63) simply follow from the above Lemma and equations (3.5) and (3.6). ### Beyond the semi-framed case: framed and multi-framed Toeplitz determinants Finally we turn our attention to the framed and multi-framed Toeplitz determinants, which are determinants of matrices of Toeplitz structure in addition to one or multiple frames around them (recall (1.4) and (1.6)). 
As mentioned before, in this section we do not intend to prove any asymptotic results, instead we would like to highlight the general framework for how one may approach such asymptotic analysis. For a framed Toeplitz determinant, there are many ways one can place the Fourier coefficients of the symbols along the four borderes. For example, consider an \((n+3)\times(n+3)\) framed Toeplitz determinant with border symbols \(\psi,\eta,\xi\) and \(\gamma\), respectively for the right, bottom, top and left borders. Then, if we want to use the zeroth up to the \(n\)-th Fourier coefficients of these symbols along the borders, we have sixteen ways to construct such framed Toeplitz determinants15. Nevertheless, for any of these choices, a framed Toeplitz determinant can be related to four semi-framed Toeplitz determinants by the following Dodgson Condensation identity Footnote 15: Recall the semi-framed Toeplitz determinants where we had four forms (3.1) through (3.4). \[\underbrace{\mathcal{M}}_{\text{framed}}\cdot\underbrace{\mathcal{M}\left\{ \begin{matrix}0&n+2\\ 0&n+2\end{matrix}\right\}}_{\text{pure Toeplitz}}=\underbrace{\mathcal{M}\left\{ \begin{matrix}0\\ 0\end{matrix}\right\}}_{\text{semi-framed}}\cdot\underbrace{\mathcal{M}\left\{ \begin{matrix}n+2\\ n+2\end{matrix}\right\}}_{\text{semi-framed}}-\underbrace{\mathcal{M}\left\{ \begin{matrix}0\\ n+2\end{matrix}\right\}}_{\text{semi-framed}}\cdot\underbrace{\mathcal{M}\left\{ \begin{matrix}n+2\\ 0\end{matrix}\right\}}_{\text{semi-framed}}. \tag{3.71}\] For example, among the aforementioned sixteen choices, suppose we want to study the asymptotics of \[\mathcal{M}_{n+3}\left[\phi;\xi,\psi,\eta,\gamma;\mathbf{a}_{4}\right]:=\det\!\left( \begin{matrix}a_{1}&\xi_{n}&\xi_{n-1}&\cdots&\xi_{0}&a_{2}\\ \gamma_{n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{0}\\ \gamma_{n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \gamma_{0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{n}\\ a_{4}&\eta_{n}&\eta_{n-1}&\cdots&\eta_{0}&a_{3}\end{matrix}\right), \tag{3.72}\] or \[\mathcal{N}_{n+3}\left[\phi;\xi,\psi,\eta,\gamma;\mathbf{a}_{4}\right]:=\det\!\left( \begin{matrix}a_{1}&\xi_{0}&\xi_{1}&\cdots&\xi_{n}&a_{2}\\ \gamma_{0}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{n}\\ \gamma_{1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \gamma_{n}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{0}\\ a_{4}&\eta_{n}&\eta_{n-1}&\cdots&\eta_{0}&a_{3}\end{matrix}\right), \tag{3.73}\] where \(\mathbf{a}_{4}\) denotes the ordered set \(\{a_{1},a_{2},a_{3},a_{4}\}\), and \(a_{k}\)'s are arbitrary complex numbers, \(k=1,\cdots,4\). Employing the Dodgson Condensation Identity (3.71) for \(\mathcal{M}\) and \(\mathcal{N}\) we respectively obtain \[\mathcal{M}_{n+3}\left[\phi;\xi,\psi,\eta,\gamma;\mathbf{a}_{4}\right]\cdot D_{n+1 }\left[\phi\right]=\mathcal{H}_{n+2}\left[\phi;\psi,\eta;a_{3}\right]\cdot \mathcal{E}_{n+2}\left[\phi;\gamma,\xi;a_{1}\right]-\mathcal{E}_{n+2}\left[ \phi;\gamma,\eta;a_{4}\right]\cdot\mathcal{H}_{n+2}\left[\phi;\psi,\xi;a_{2} \right], \tag{3.74}\] and \[\mathcal{N}_{n+3}\left[\phi;\xi,\psi,\eta,\gamma;\mathbf{a}_{4}\right]\cdot D_{n+1 }\left[\phi\right]=\mathcal{E}_{n+2}\left[\phi;\psi,\eta;a_{3}\right]\cdot \mathcal{G}_{n+2}\left[\phi;\gamma,\xi;a_{1}\right]-\mathcal{H}_{n+2}\left[ \phi;\gamma,\eta;a_{4}\right]\cdot\mathcal{L}_{n+2}\left[\phi;\psi,\xi;a_{2} \right]. 
\tag{3.75}\] These identities and their analogues for the other fourteen framed Toeplitz determinants, provide a pathway to the asymptotics at least for the class of border symbols concidered in SS3.2, since we already know how to compute the asymptotics of the semi-framed Toeplitz determinants appearing on the right hand side (see SS3.2). When dealing with a multi-framed Toeplitz determinant, repeated application of appropriate Dodgson Condensation identities can simplify the analysis, ultimately reducing it to the asymptotic analysis of semi-framed Toeplitz determinants once again. For example, consider the following \((n+5)\times(n+5)\) two-framed Toeplitz determinant: \[\mathcal{K}_{n+5}\left[\phi;\boldsymbol{\xi},\boldsymbol{\psi},\boldsymbol{ \eta},\boldsymbol{\gamma};\boldsymbol{a}_{8}\right]:=\det\begin{pmatrix}a_{5}& \xi_{2,n+2}&\xi_{2,n+1}&\xi_{2,n}&\cdots&\xi_{2,1}&\xi_{2,0}&a_{6}\\ \gamma_{2,n+2}&a_{1}&\xi_{1,n}&\xi_{1,n-1}&\cdots&\xi_{1,0}&a_{2}&\psi_{2,0}\\ \gamma_{2,n+1}&\gamma_{1,n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{1,0}&\psi_ {2,1}\\ \gamma_{2,n}&\gamma_{1,n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1,1}& \psi_{2,2}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ \gamma_{2,1}&\gamma_{1,0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{1,n}&\psi_ {2,n+1}\\ \gamma_{2,0}&a_{4}&\eta_{1,n}&\eta_{1,n-1}&\cdots&\eta_{1,0}&a_{3}&\psi_{2,n+2} \\ a_{8}&\eta_{2,n+2}&\eta_{2,n+1}&\eta_{2,n}&\cdots&\eta_{2,1}&\eta_{2,0}&a_{7} \end{pmatrix}. \tag{3.76}\] To this two-framed Toeplitz determinant we apply \[\underbrace{\mathcal{K}}_{\text{two-framed}}\underbrace{\cdot\mathcal{K} \begin{pmatrix}0&n+4\\ 0&n+4\\ \text{framed}\end{pmatrix}}=\mathcal{K}\begin{pmatrix}0\\ 0\end{pmatrix}\cdot\mathcal{K}\begin{pmatrix}n+4\\ n+4\end{pmatrix}-\mathcal{K}\begin{pmatrix}0\\ n+4\end{pmatrix}\cdot\mathcal{K}\begin{pmatrix}n+4\\ 0\end{pmatrix}, \tag{3.77}\] where each of the determinants on the right hand side are that of a framed Toeplitz matrix with an extra _semi-frame_ around it. 
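The Dodgson Condensation identities used above are instances of the general Desnanot–Jacobi identity, which holds for an arbitrary square matrix and in particular for the framed Toeplitz matrices considered here. For readers who wish to experiment numerically, the following short sketch (Python/NumPy; the helper `minor_det` and the random test matrix are purely illustrative and not part of the analysis) verifies identity (3.71) on a random complex matrix.

```python
# Illustrative check of the Dodgson condensation (Desnanot-Jacobi) identity (3.71)
# on a random complex matrix; this is the algebraic fact exploited above.
import numpy as np

def minor_det(A, rows, cols):
    """Determinant of A with the listed rows and columns removed."""
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [j for j in range(A.shape[1]) if j not in cols]
    return np.linalg.det(A[np.ix_(keep_r, keep_c)])

rng = np.random.default_rng(0)
m = 6                                    # generic matrix size (n + 3 in the notation above)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
last = m - 1

lhs = np.linalg.det(A) * minor_det(A, [0, last], [0, last])
rhs = (minor_det(A, [0], [0]) * minor_det(A, [last], [last])
       - minor_det(A, [0], [last]) * minor_det(A, [last], [0]))

print(abs(lhs - rhs))                    # agreement up to round-off
```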
Indeed, \[\mathcal{K}\begin{pmatrix}0\\ 0\end{pmatrix}=\det\begin{pmatrix}a_{1}&\xi_{1,n}&\xi_{1,n-1}&\cdots&\xi_{1,0}& a_{2}&\psi_{2,0}\\ \gamma_{1,n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{1,0}&\psi_{2,1}\\ \gamma_{1,n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1,1}&\psi_{2,2}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ \gamma_{1,0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{1,n}&\psi_{2,n+1}\\ a_{4}&\eta_{1,n}&\eta_{1,n-1}&\cdots&\eta_{1,0}&a_{3}&\psi_{2,n+2}\\ \eta_{2,n+2}&\eta_{2,n+1}&\eta_{2,n}&\cdots&\eta_{2,1}&\eta_{2,0}&a_{7}\end{pmatrix}, \tag{3.78}\] \[\mathcal{K}\begin{pmatrix}n+4\\ n+4\end{pmatrix}=\det\begin{pmatrix}a_{5}&\xi_{2,n+2}&\xi_{2,n+1}&\xi_{2,n}& \cdots&\xi_{2,1}&\xi_{2,0}\\ \gamma_{2,n+2}&a_{1}&\xi_{1,n}&\xi_{1,n-1}&\cdots&\xi_{1,0}&a_{2}\\ \gamma_{2,n+1}&\gamma_{1,n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{1,0}\\ \gamma_{2,n}&\gamma_{1,n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1,1}\\ \gamma_{2,1}&\gamma_{1,0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{1,n}\\ \gamma_{2,0}&a_{4}&\eta_{1,n}&\eta_{1,n-1}&\cdots&\eta_{1,0}&a_{3}\end{pmatrix}, \tag{3.79}\] \[\mathcal{K}\begin{pmatrix}0\\ n+4\end{pmatrix}=\det\begin{pmatrix}\gamma_{2,n+2}&a_{1}&\xi_{1,n}&\xi_{1,n-1}& \cdots&\xi_{1,0}&a_{2}\\ \gamma_{2,n+1}&\gamma_{1,n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{1,0}\\ \gamma_{2,n}&\gamma_{1,n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1,1}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \gamma_{2,1}&\gamma_{1,0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{1,n}\\ \gamma_{2,0}&a_{4}&\eta_{1,n}&\eta_{1,n-1}&\cdots&\eta_{1,0}&a_{3}\\ a_{8}&\eta_{2,n+2}&\eta_{2,n+1}&\eta_{2,n}&\cdots&\eta_{2,1}&\eta_{2,0}\end{pmatrix}, \tag{3.80}\] and \[\mathcal{K}\begin{pmatrix}n+4\\ 0\end{pmatrix}=\det\begin{pmatrix}\xi_{2,n+2}&\xi_{2,n+1}&\xi_{2,n}&\cdots&\xi_{2,1 }&\xi_{2,0}&a_{6}\\ a_{1}&\xi_{1,n}&\xi_{1,n-1}&\cdots&\xi_{1,0}&a_{2}&\psi_{2,0}\\ \gamma_{1,n}&\phi_{0}&\phi_{-1}&\cdots&\phi_{-n}&\psi_{1,0}&\psi_{2,1}\\ \gamma_{1,n-1}&\phi_{1}&\phi_{0}&\cdots&\phi_{-n+1}&\psi_{1,1}&\psi_{2,2}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ \gamma_{1,0}&\phi_{n}&\phi_{n-1}&\cdots&\phi_{0}&\psi_{1,n}&\psi_{2,n+1}\\ a_{4}&\eta_{1,n}&\eta_{1,n-1}&\cdots&\eta_{1,0}&a_{3}&\psi_{2,n+2}\end{pmatrix}. 
\tag{3.81}\] Now consider the following auxiliary DCIs: \[\mathcal{K}\begin{pmatrix}0\\ 0\end{pmatrix}\cdot\underbrace{\mathcal{K}\begin{pmatrix}0&n+3&n+4\\ 0&n+3&n+4\\ \text{semi-framed}\end{pmatrix}}_{\text{remed}}=\underbrace{\mathcal{K} \begin{pmatrix}0&n+3\\ 0&n+4\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 0&n+4\\ 1&n+4\end{pmatrix}}_{\text{framed}}-\underbrace{\mathcal{K}\begin{pmatrix}0&n+4\\ 1&n+4\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 1&n+4\\ 0&n+4\end{pmatrix}}_{\text{framed}}, \tag{3.82}\] \[\mathcal{K}\begin{pmatrix}0\\ n+4\end{pmatrix}\cdot\underbrace{\mathcal{K}\begin{pmatrix}0&n+3&n+4\\ 0&1&n+4\end{pmatrix}}_{\text{semi-framed}}=\underbrace{\mathcal{K} \begin{pmatrix}0&n+3\\ 0&n+4\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 0&n+4\\ 1&n+4\end{pmatrix}}_{\text{framed}}-\underbrace{\mathcal{K}\begin{pmatrix}0&n+3\\ 1&n+4\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 0&n+4\\ 0&n+4\end{pmatrix}}_{\text{framed}}, \tag{3.83}\] and \[\mathcal{K}\begin{pmatrix}n+4\\ 0\end{pmatrix}\cdot\underbrace{\mathcal{K}\begin{pmatrix}0&1&n+4\\ 0&n+3&n+4\\ \text{semi-framed}\end{pmatrix}}_{\text{remed}}=\underbrace{\mathcal{K} \begin{pmatrix}0&n+4\\ 0&n+3\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 1&n+4\\ 0&n+4\end{pmatrix}}_{\text{framed}}-\underbrace{\mathcal{K}\begin{pmatrix}0&n+4\\ 0&n+4\end{pmatrix}}_{\text{framed}}\cdot\underbrace{\mathcal{K}\begin{pmatrix} 1&n+4\\ 0&n+3\end{pmatrix}}_{\text{framed}}. \tag{3.85}\] Using these we can express the objects on the right hand side of (3.77) in terms of the framed Toeplitz determinant \(\mathcal{M}\) introduced in (1.6). Indeed, from the equations (3.82) through (3.85) we respectively obtain \[\mathcal{K}\begin{pmatrix}0\\ 0\end{pmatrix}=\frac{1}{\mathcal{E}_{n+2}[\phi;\gamma_{1},\xi_{1};a_{1}]} \times\left(\mathcal{M}_{n+3}\left[\phi;\xi_{1},z^{-1}\psi_{2},z^{-1}\eta_{2}, \gamma_{1};\mathbf{a}_{4}^{(1)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;\xi_{1}, \psi_{1},\eta_{1},\gamma_{1};\mathbf{a}_{4}^{(2)}\right]\right. 
\tag{3.87}\] \[\left.-\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},z^{-1}\eta_{2 },\gamma_{1};\mathbf{a}_{4}^{(3)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;\xi_{1},z ^{-1}\psi_{2},\eta_{1},\gamma_{1};\mathbf{a}_{4}^{(4)}\right]\right), \tag{3.86}\] \[\mathcal{K}\begin{pmatrix}n+4\\ n+4\end{pmatrix}=\frac{1}{\mathcal{J}_{n+2}[\phi;\psi_{1},\eta_{1};a_{3}]} \times\left(\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},\eta_{1}, \gamma_{1};\mathbf{b}_{4}^{(1)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;z^{-1}\xi_ {2},\psi_{1},\eta_{1},z^{-1}\gamma_{2};\mathbf{b}_{4}^{(2)}\right]\right.\] \[\left.-\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},\eta_{1},z^{- 1}\gamma_{2};\mathbf{b}_{4}^{(3)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;z^{-1}\xi_ {2},\psi_{1},\eta_{1},\gamma_{1};\mathbf{b}_{4}^{(4)}\right]\right) \tag{3.88}\] \[\mathcal{K}\begin{pmatrix}0\\ n+4\end{pmatrix}=\frac{1}{(-1)^{n+1}\mathcal{J}_{n+2}[\phi;\psi_{1},\xi_{1};a_{2}]} \times\left(\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},z^{-1}\eta_{2}, \gamma_{1};\mathbf{c}_{4}^{(1)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;\xi_{1}, \psi_{1},\eta_{1},z^{-1}\gamma_{2};\mathbf{c}_{4}^{(2)}\right]\right.\] \[\left.-\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},z^{-1}\eta_{2 },z^{-1}\gamma_{2};\mathbf{c}_{4}^{(3)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;\xi_ {1},\psi_{1},\eta_{1},\gamma_{1};\mathbf{c}_{4}^{(4)}\right]\right), \tag{3.89}\] \[\mathcal{K}\begin{pmatrix}n+4\\ 0\end{pmatrix}=\frac{1}{(-1)^{n+1}\mathcal{E}_{n+2}\left[\phi;\gamma_{1},\eta_{1} ;a_{4}\right]}\times \left(\mathcal{M}_{n+3}\left[\phi;\xi_{1},z^{-1}\psi_{2},\eta_{1},\gamma_{1};\boldsymbol{d}_{4}^{(1)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;z^ {-1}\xi_{2},\psi_{1},\eta_{1},\gamma_{1};\boldsymbol{d}_{4}^{(2)}\right]\right.\] \[-\mathcal{M}_{n+3}\left[\phi;\xi_{1},\psi_{1},\eta_{1},\gamma_{1} ;\boldsymbol{d}_{4}^{(3)}\right]\cdot\mathcal{M}_{n+3}\left[\phi;z^{-1}\xi_{2},z^{-1}\psi_{2},\eta_{1},\gamma_{1};\boldsymbol{d}_{4}^{(4)}\right]\right), \tag{3.89}\] with \[\boldsymbol{a}_{4}^{(1)} =\{a_{1},\psi_{2,0},a_{7},\eta_{2,n+2}\}, \boldsymbol{b}_{4}^{(2)}=\{a_{5},\xi_{2,0},a_{3},\gamma_{2,0}\},\] \[\boldsymbol{c}_{4}^{(3)} =\{\gamma_{2,n+2},a_{2},\eta_{2,0},a_{8}\}, \boldsymbol{d}_{4}^{(4)}=\{\xi_{2,n+2},a_{6},\psi_{2,n+2},a_{4}\}\] \[\boldsymbol{a}_{4}^{(3)} =\boldsymbol{c}_{4}^{(1)}=\{a_{1},a_{2},\eta_{2,0},\eta_{2,n+2}\} \boldsymbol{a}_{4}^{(4)}=\boldsymbol{d}_{4}^{(1)}=\{a_{1},\psi_{2,0}, \psi_{2,n+2},a_{4}\}\] \[\boldsymbol{b}_{4}^{(3)} =\boldsymbol{c}_{4}^{(2)}=\{\gamma_{2,n+2},a_{2},a_{3},\gamma_{2, 0}\}, \boldsymbol{b}_{4}^{(4)}=\boldsymbol{d}_{4}^{(2)}=\{\xi_{2,n+2},\xi_{2,0},a_ {3},a_{4}\}\] \[\boldsymbol{a}_{4}^{(2)} =\boldsymbol{b}_{4}^{(1)}=\boldsymbol{c}_{4}^{(4)}=\boldsymbol{d} _{4}^{(3)}=\{a_{1},a_{2},a_{3},a_{4}\}.\] The above equations together with (3.74) and the results of Section 3.2, provides the needed pathway to compute the asymptotics of the two-framed Toeplitz determinant (3.76). ## Acknowledgements The author would like to thank Harini Desiraju, Alexander Its, Karl Liechty, and Nicholas Witte for their interest in this project and for helpful conversations. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1928930. 
The author gratefully acknowledges the Mathematical Sciences Research Institute, Berkeley California and the organizers of the semester-long program _Universality and Integrability in Random Matrix Theory and Interacting Particle Systems_ for their support in the Fall of 2021, during which part of this project was completed. ## 4. Appendix: solution of the Riemann-Hilbert problem for BOPUC with Szego-type symbols The following Riemann-Hilbert problem for BOPUC is due to J.Baik, P.Deift and K.Johansson [BDJ]. * **RH-X1**\(X(\cdot;n):\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-X2** The limits of \(X(\zeta;n)\) as \(\zeta\) tends to \(z\in\mathbb{T}\) from the inside and outside of the unit circle exist, and are denoted \(X_{\pm}(z;n)\) respectively and are related by (4.1) \[X_{+}(z;n)=X_{-}(z;n)\begin{pmatrix}1&z^{-n}\phi(z)\\ 0&1\end{pmatrix},\qquad z\in\mathbb{T},\] * **RH-X3** As \(z\to\infty\) \[X(z;n)=\big{(}I+O(z^{-1})\big{)}z^{n\sigma_{3}}, \tag{4.2}\] (see [D],[DIK],[CIK]). Below we show the standard steepest descent analysis to asymptotically solve this problem for a Szego-type symbol. We first normalize the behavior at \(\infty\) by defining \[T(z;n):=\begin{cases}X(z;n)z^{-n\sigma_{3}},&|z|>1,\\ X(z;n),&|z|<1.\end{cases} \tag{4.3}\] The function \(T\) defined above satisfies the following RH problem * **RH-T1**\(T(\cdot;n):\mathbb{C}\setminus\mathbb{T}\to\mathbb{C}^{2\times 2}\) is analytic, * **RH-T2**\(T_{+}(z;n)=T_{-}(z;n)\begin{pmatrix}z^{n}&\phi(z)\\ 0&z^{-n}\end{pmatrix},\qquad z\in\mathbb{T},\) * **RH-T3**\(T(z;n)=I+O(1/z),\qquad z\to\infty,\) So \(T\) has a highly-oscillatory jump matrix as \(n\to\infty\). The next transformation yields a Riemann Hilbert problem, normalized at infinity, having an exponentially decaying jump matrix on the _lenses_. Note that we have the following factorization of the jump matrix of the \(T\)-RHP : \[\begin{pmatrix}z^{n}&\phi(z)\\ 0&z^{-n}\end{pmatrix}=\begin{pmatrix}1&0\\ z^{-n}\phi(z)^{-1}&1\end{pmatrix}\begin{pmatrix}0&\phi(z)\\ -\phi(z)^{-1}&0\end{pmatrix}\begin{pmatrix}1&0\\ z^{n}\phi(z)^{-1}&1\end{pmatrix}\equiv J_{0}(z;n)J^{(\infty)}(z)J_{1}(z;n). \tag{4.4}\] Now, we define the following function : \[S(z;n):=\begin{cases}T(z;n)J_{1}^{-1}(z;n),&z\in\Omega_{1},\\ T(z;n)J_{0}(z;n),&z\in\Omega_{2},\\ T(z;n),&z\in\Omega_{0}\cup\Omega_{\infty}.\end{cases} \tag{4.5}\] Also introduce the following function on \(\Gamma_{S}:=\Gamma_{0}\cup\Gamma_{1}\cup\mathbb{T}\): \[J_{S}(z;n)=\begin{cases}J_{1}(z;n),&z\in\Gamma_{0},\\ J^{(\infty)}(z),&z\in\mathbb{T},\\ J_{0}(z;n),&z\in\Gamma_{1}.\end{cases} \tag{4.6}\] Therefore we have the following Riemann-Hilbert problem for \(S(z;n)\) * **RH-S1** \(S(\cdot;n):\mathbb{C}\setminus\Gamma_{S}\to\mathbb{C}^{2\times 2}\) is analytic. * **RH-S2** \(S_{+}(z;n)=S_{-}(z;n)J_{S}(z;n),\qquad z\in\Gamma_{S}.\) * **RH-S3** \(S(z;n)=I+O\left(1/z\right),\qquad\text{as $z\to\infty$}.\) Note that the matrices \(J_{0}(z;n)\) and \(J_{1}(z;n)\) tend to the identity matrix uniformly on their respective contours, exponentially fast as \(n\to\infty\). We are looking for a piecewise analytic function \(P^{(\infty)}(z):\mathbb{C}\setminus\mathbb{T}:\to\mathbb{C}^{2\times 2}\) such that * **RH-Global1** \(P^{(\infty)}\) is holomorphic in \(\mathbb{C}\setminus\mathbb{T}\). 
* **RH-Global2** for \(z\in\mathbb{T}\) we have (4.7) \[P^{(\infty)}_{+}(z)=P^{(\infty)}_{-}(z)\begin{pmatrix}0&\phi(z)\\ -\phi^{-1}(z)&0\end{pmatrix}.\] * **RH-Global3** \(P^{(\infty)}(z)=I+O\left(1/z\right),\qquad\text{as $z\to\infty$}.\) We can find a piecewise analytic function \(\alpha\) which solves the following scalar multiplicative Riemann-Hilbert problem \[\alpha_{+}(z)=\alpha_{-}(z)\phi(z)\qquad z\in\mathbb{T}. \tag{4.8}\] Figure 1. Opening of lenses: the jump contour for the \(S\)-RHP. By Plemelj-Sokhotski formula we have \[\alpha(z)=\exp\left[\frac{1}{2\pi i}\int_{\mathbb{T}}\frac{\ln(\phi(\tau))}{\tau-z }d\tau\right], \tag{4.9}\] Now, using (4.8) we have the following factorization \[\begin{pmatrix}0&\phi(z)\\ -\phi^{-1}(z)&0\end{pmatrix}=\begin{pmatrix}\alpha_{-}^{-1}(z)&0\\ 0&\alpha_{-}(z)\end{pmatrix}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\begin{pmatrix}\alpha_{+}^{-1}(z)&0\\ 0&\alpha_{+}(z)\end{pmatrix}. \tag{4.10}\] So, the function \[P^{(\infty)}(z):=\begin{cases}\begin{pmatrix}0&\alpha(z)\\ -\alpha^{-1}(z)&0\\ \alpha(z)&0\\ 0&\alpha^{-1}(z)\end{pmatrix},\quad\ |z|>1,\end{cases} \tag{4.11}\] satisfies (4.7). Also, by the properties of the Cauchy integral, \(P^{(\infty)}(z)\) is holomorphic in \(\mathbb{C}\setminus\mathbb{T}\). Moreover, \(\alpha(z)=1+O(z^{-1})\), as \(z\to\infty\) and hence \[P^{(\infty)}(z)=I+O\left(1/z\right),\qquad z\to\infty. \tag{4.12}\] Therefore \(P^{(\infty)}\) given by (4.11) is the unique solution of the global parametrix Riemann-Hilbert problem. Let us now consider the ratio \[R(z;n):=S(z;n)\left[P^{(\infty)}(z)\right]^{-1}. \tag{4.13}\] We have the following Riemann-Hilbert problem for \(R(z;n)\) * **RH-R1** \(R\) is holomorphic in \(\mathbb{C}\setminus(\Gamma_{0}\cup\Gamma_{1})\). * **RH-R2** \(R_{+}(z;n)=R_{-}(z;n)J_{R}\left(z;n\right)\), \(z\in\Gamma_{0}\cup\Gamma_{1}=:\Sigma_{R}\), * **RH-R3** \(R(z;n)=I+O\left(1/z\right)\) as \(z\to\infty\). This Riemann Hilbert problem is solvable for large \(n\) ([?],[?]) and \(R(z;n)\) can be written as \[R(z;n)=I+R_{1}(z;n)+R_{2}(z;n)+R_{3}(z;n)+\cdots,\quad\ n\geq n_{0} \tag{4.14}\] where \(R_{k}\) can be found recursively. Indeed \[R_{k}(z;n)=\frac{1}{2\pi\mathrm{i}}\int_{\Sigma_{R}}\frac{\left[R_{k-1}(\mu;n )\right]_{-}\left(J_{R}(\mu;n)-I\right)}{\mu-z}\mathrm{d}\mu,\qquad z\in \mathbb{C}\setminus\Sigma_{R},\qquad k\geq 1. \tag{4.15}\] It is easy to check that \(R_{2\ell}(z;n)\) is diagonal and \(R_{2\ell+1}(z;n)\) is off-diagonal; \(\ell\in\mathbb{N}\cup\{0\}\), and that \[R_{k,ij}(z;n)=O\left(\frac{\rho^{-kn}}{1+|z|}\right),\qquad n\to\infty,\qquad k \geq 1, \tag{4.16}\] uniformly in \(z\in\mathbb{C}\setminus\Sigma_{R}\), where \(\rho\) (resp. \(\rho^{-1}\)) is the radius of \(\Gamma_{1}\)(resp. \(\Gamma_{0}\)). Let us compute \(R_{1}(z;n)\); we have (4.17) Therefore \[R_{1}(z;n)=\begin{pmatrix}0&-\dfrac{1}{2\pi i}\int_{\Gamma_{0}}\dfrac{\tau^{n}\phi ^{-1}(\tau)\alpha^{2}(\tau)}{\tau-z}d\tau\\ \dfrac{1}{2\pi i}\int_{\Gamma_{1}}\dfrac{\tau^{-n}\phi^{-1}(\tau)\alpha^{-2}( \tau)}{\tau-z}d\tau&0\end{pmatrix}. 
\tag{4.18}\] If we trace back the Riemann-Hilbert problems \(R\mapsto S\mapsto T\mapsto X\) we will obtain \[X(z;n)=R(z;n)\begin{pmatrix}\begin{pmatrix}\alpha(z)&0\\ 0&\alpha^{-1}(z)\end{pmatrix}z^{n\sigma_{3}},&z\in\Omega_{\infty},\\ \begin{pmatrix}\alpha(z)&0\\ -z^{-n}\alpha^{-1}(z)\phi^{-1}(z)&\alpha^{-1}(z)\end{pmatrix}z^{n\sigma_{3}}, &z\in\Omega_{2},\\ \begin{pmatrix}z^{n}\alpha(z)\phi^{-1}(z)&\alpha(z)\\ -\alpha^{-1}(z)&0\end{pmatrix},&z\in\Omega_{1},\\ \begin{pmatrix}0&\alpha(z)\\ -\alpha^{-1}(z)&0\end{pmatrix},&z\in\Omega_{0},\end{pmatrix} \tag{4.19}\] where for \(z\in\mathbb{C}\setminus\Sigma_{R}\), as \(n\to\infty\), we have \[R(z;n)=\begin{pmatrix}1+O\left(\frac{\rho^{-2n}}{1+|z|}\right)&R_{1,12}(z;n)+ O\left(\frac{\rho^{-3n}}{1+|z|}\right)\\ R_{1,21}(z;n)+O\left(\frac{\rho^{-3n}}{1+|z|}\right)&1+O\left(\frac{\rho^{-2n} }{1+|z|}\right)\end{pmatrix}. \tag{4.20}\]
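As a complementary illustration of the scalar problem (4.8)–(4.9) underlying the global parametrix, the short numerical sketch below (Python/NumPy; the particular symbol \(\phi(z)=e^{0.3z+0.2/z}\), the grid size, and all helper names are chosen only for this demonstration) builds \(\alpha\) from the Fourier coefficients of \(\ln\phi\) and checks that its boundary values satisfy \(\alpha_{+}=\alpha_{-}\phi\) on the unit circle.

```python
# Numerical check of the scalar multiplicative jump (4.8) for alpha given by (4.9).
import numpy as np

N = 512
theta = 2 * np.pi * np.arange(N) / N
t = np.exp(1j * theta)                        # grid on the unit circle

def phi(z):
    return np.exp(0.3 * z + 0.2 / z)          # smooth, non-vanishing Szego-type symbol

c = np.fft.fft(np.log(phi(t))) / N            # Fourier coefficients of log(phi)
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)  # signed frequencies 0, 1, ..., -2, -1

def alpha(z):
    """alpha(z) from (4.9), evaluated inside (|z| < 1) or outside (|z| > 1) the circle."""
    if abs(z) < 1:
        return np.exp(np.sum(c[k >= 0] * z ** k[k >= 0]))
    return np.exp(-np.sum(c[k < 0] * z ** k[k < 0].astype(float)))

z0 = np.exp(0.7j)                                          # test point on the circle
ratio = alpha((1 - 1e-6) * z0) / alpha((1 + 1e-6) * z0)    # alpha_+ / alpha_-
print(abs(ratio - phi(z0)))                                # ~1e-6: the jump equals phi
```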
2301.11871
Exploiting the Generative Adversarial Network Approach to Create a Synthetic Topography Corneal Image
Corneal diseases are the most common eye disorders. Deep learning techniques are used to perform automated diagnoses of cornea. Deep learning networks require large-scale annotated datasets, which is conceded as a weakness of deep learning. In this work, a method for synthesizing medical images using conditional generative adversarial networks (CGANs) is presented. It also illustrates how produced medical images may be utilized to enrich medical data, improve clinical decisions, and boost the performance of the conventional neural network (CNN) for medical image diagnosis. The study includes using corneal topography captured using a Pentacam device from patients with corneal diseases. The dataset contained 3448 different corneal images. Furthermore, it shows how an unbalanced dataset affects the performance of classifiers, where the data are balanced using the resampling approach. Finally, the results obtained from CNN networks trained on the balanced dataset are compared to those obtained from CNN networks trained on the imbalanced dataset. For performance, the system estimated the diagnosis accuracy, precision, and F1-score metrics. Lastly, some generated images were shown to an expert for evaluation and to see how well experts could identify the type of image and its condition. The expert recognized the image as useful for medical diagnosis and for determining the severity class according to the shape and values, by generating images based on real cases that could be used as new different stages of illness between healthy and unhealthy patients.
Samer Kais Jameel, Sezgin Aydin, Nebras H. Ghaeb, Jafar Majidpour, Tarik A. Rashid, Sinan Q. Salih, P. S. JosephNg
2022-12-25T17:45:21Z
http://arxiv.org/abs/2301.11871v1
Exploiting the Generative Adversarial Network Approach to Create a Synthetic Topography Corneal Image ###### Abstract Cornel diseases are the most common eye disorders. Deep learning techniques are used to perform automated diagnoses of cornea. Deep learning networks require large-scale annotated datasets, which is conceded as a weakness of deep learning. In this work, a method for synthesizing medical images using conditional generative adversarial networks (CGANs), is presented. It also illustrates how produced medical images may be utilized to enrich medical data, improve clinical decisions, and boost the performance of the conventional neural network (CNN) for medical image diagnosis. The study includes using corneal topography captured using a Pentacam device from patients with corneal diseases. The dataset contained 3448 different corneal images. Furthermore, it shows how an unbalanced dataset affects the performance of classifiers, where the data are balanced using the resampling approach. Finally, the results obtained from CNN networks trained on the balanced dataset are compared to those obtained from CNN networks trained on the imbalanced dataset. For performance, the system estimated the diagnosis accuracy, precision, and F1-score metrics. Lastly, some generated images were shown to an expert for evaluation and to see how well experts could identify the type of image and its condition. The expert recognized the image as useful for medical diagnosis and for determining the severity class according to the shape and values, by generating images based on real cases that could be used as new different stages of illness between healthy and unhealthy patients. Keywords:conditional generative adversarial networks, transfer learning, synthesize images, corneal diseases, data augmentation + Footnote †: journal: Computer Science ## 1 Introduction Medical image datasets are one of the most important problems facing researchers in the field of machine learning [1]. The limited amount of medical data comes from the difficulty of capturing it [2]. With the problem of final ethical approval, the acquisition and labelling of medical images are time-consuming, and considerable effort needs to be spent by both researchers and specialists [3, 4]. Several studies tried to overcome the dataset scarcity challenge through the famous task in computer vision, a method called data augmentation [5]. Using classic data augmentation can give a simple extra feature where it involves simple modifications, such as rotation, translation, scaling, and flipping [6]. On the other hand, some researchers employed innovative techniques for data augmentation to improve the system training process, based on synthesizing high-quality sample images using a generative model known as generative adversarial networks (GANs) [7, 8, 9]. The GANs involved two networks; the first generates a real image from the input with the help of the noise, and the other discriminates between real and fake (generated by the first network) images. This model has been used in many studies hoping to generate realistic images, especially for medical imaging applications, such as image-to-image translation [10], image inpainting [11], segmentation-to-image translation [12], medical cross-modality translations [13], and label-to-segmentation translation [14]. 
Exploiting the GAN models by researchers led to the creation of cross-modality images, such as a PET scan, which was generated from a CT scan of the abdomen to show the presence of liver lesions. The GAN model of image inpainting has served as inspiration for many studies. Costa et al. [15] used a fully convolutional network to learn retinal vessel segmentation images. The binary vessel tree was then translated into a new retinal image. By using chest X-ray images, Dai et al. [16] generated lung and heart image segmentation by training a GAN model. Xu et al. [17] trained a model to translate brain MRI images into binary segmentation maps for brain tumour images. Nie et al. [18] trained a patch-based GAN to translate between brain CT and MRI images. As a step of image refinement, they recommended using an auto-context model. Schlegl et al. [19] trained a GAN model on normal retinal. To detect anomalies in retinal images, the model was tested on normal and abnormal data. Based on what was mentioned above, the scarcity of data needs to be resolved so that researchers can use it more freely to analyze that data and produce results that serve the scientific process. The latter motivated the authors of this paper to use GAN models with the ability to synthesize real images, increase the existing data, and overcome the problem of lacking data. In this work, high-quality corneal images based on GAN models are synthesized for a specific task of corneal disease diagnosis to improve the clinical decision by introducing different stages and predicted shapes for images with illness. As an illustrated sample of manipulation for the imaging in the cornea, the different stages of keratocons are, in most cases, unclear in borderlines. From a clinical perspective, overlapping features between stages of keratocons lead to a controversial approach to treatment. To decide the severity and clinical or surgical procedure of work per patient clinically, considerable evidence is collected from different images per case to reach the final approach. The possibility of studying the effect and weight of this evidence per case is an attractive medical training to produce a final highly medical sensation and observation for the trained physician. In more detail, thinning in pachymetry images with its location, steepening in the inferior or superior position of the tangential mapping, and the isolated land or tongue shape that may appear in elevation front and back maps, with the astigmatism axis and obliqueness of the bowtie, would improve the effectiveness of the final diagnosis. The cornea, which protects the eye from external substances and helps to control visual focus, is stiff but very sensitive to touch [20]. There are many corneal disorders, for instance, bullous keratopathy, Cogan syndrome, corneal ulcer, herpes simplex keratitis, herpes zoster ophthalmicus, etc. [21]. Any disorders in the cornea may cause ripping, discomfort, and dwindling vision clarity and, finally, may lead to blindness. On the other hand, any action on the cornea, such as vision correction, requires a diagnosis of the cornea's health before treatment [22]. Clinical decisions on the human cornea require reviewing numerous aspects, and ophthalmologists must handle this revision. Corneal topographical parameters are so extensive that it is difficult for surgeons or ophthalmologists to remember them all and make decisions [23]. 
As a consequence, based on deep learning models, we also proposed to build a sophisticated medical system using the original and the generated images (using the GAN model) for diagnosing corneal cases, to aid clinicians in the interpretation of medical images and improve clinical decision-making. Many researchers used a variety of complex and diverse medical devices to collect data, as well as a variety of diagnostic approaches. Salih and Hussein (2018) used 732 submaps as inputs to the deep learning network; a kind of deep learning technology called the VGG-16 network was utilized to predict corneal abnormalities and normality [24]. The detection of the keratoconus eyes dataset and recognition of the normal cornea was the focus of a group of authors who used 145 normal cornea cases and 312 keratoconus cases from a database of photographs. As a classification tool, they used support vector machine (SVM) and multilayer perceptron methods. The features were extracted from the input images, then passed to the classifiers [25]. A group of researchers used a compilation of data from both Placido and Scheimpug as a feature vector. The prototype was tested with and without a posterior corneal surface, and it performed well in both situations. The thickness and posterior characteristics were found to be critical tools for predicting corneal keratoconus and avoiding corneal ectasia surgery in patients with early corneal ectasia disease [26]. Researchers employed machine learning techniques, such as ANN, SVM, regression analysis, and decision tree algorithms, to identify the disease. The information was gathered from a group of patients; in total, 23 cases of ectasia after LASIK were discovered, as well as 266 stable post-LASIK cases with over a year of follow-up. They concluded that this study method still needed to be validated [27]. Samer et al. presented a method known as SWFT for diagnosing the corneal image by extracting features from the corneal image using a Wavelet and diagnosing it using an SVM classifier [28]. In 2021, Samer and his participants designed an LIP algorithm to extract corneal image features, and they evaluated their method using many classifiers. Thus, they could train a system capable of automatically classifying corneal diseases [22]. We used deep learning techniques in the current study to diagnose corneal diseases. GAN networks were used as a tool to generate realistic corneal images. On the other hand, pre-trained convolutional neural networks (CNN) [29; 30; 31] are employed in diagnosing corneal diseases, which have recently been used in many medical imaging studies and have been reported to improve performance for a broad range of medical tasks. This paper has made the following contributions: (1) Using the GAN model for creating high-quality corneal images from topographical images to solve the scarcity of the cornea dataset. (2) Examining various transfer learning methods as a based solution for the corneal diagnosis task. (3) Augmentation of the dataset to be used in training the networks, using the generated synthetic data for improved clinical decisions. (4) Solving the issue of time consumption that is suffered by deep learning networks. ## 2 Corneal Diseases Diagnosis This section begins by describing the data and its features. The architecture of the GAN model for cornea image creation is discussed after that. Due to the restricted quantity of data available for training transfer learning networks, we have presented a method for augmenting synthesized images. 
### Dataset The dataset is made up of images taken by scanning the cornea with a device called Pentacam, which generates various images and parameters known as corneal topography. Ophthalmalmologists use corneal topography to check eye conditions in clinics. Each patient's eye data, which includes four corneal maps (sagittal, corneal thickness (CT), elevation front (EF), and elevation back maps (EB)) with a set of parameters, are saved independently [32] (see Figure 1). The data were gathered using a Pentacam (OCULUS, Germany), an image Scheimpflug instrument. The camera scans the eye from many angles in a circular pattern, producing maps with information about the anterior and posterior parts of the cornea and a quick screening report. The Pentacam can be upgraded and altered to meet the user's requirements [33]. It is worth noting that the data were obtained from the Al-Amal center in Baghdad, Iraq, and the data were labelled with the help of eye specialists, Dr. Nebras H. Gareb, an Ophthalmic Consultant, and Dr. Sohaib A. Mohammed and Dr. Ali A. Al-Razaq, Senior Specialist Ophthalmologists. The images were categorized based on all four corneal maps, and each map was treated separately and labelled as normal or abnormal. As such, we have eight categories of cornea cases. The collected data contains 3448 images of the four maps that have been scientifically collected and classified. The number of images for each class is 248 Normal_Sagittal, 460 Abnormal_Sagittal, 338 Normal_Cornael Thickness, 548 Abnormal_Cornael Thickness, 765 Normal_Elevation Front, 167 Abnormal_Elevation Front, 693 Normal_Elevation Back, and 229 Abnormal_Elevation Back maps. ### Transfer Learning Models There are numerous common transfer learning models available in computer vision that are typically utilized as a tool for the categorization of medical images; however, in this study, the MobileNetv2 [34], Resnet50 [35], Xception [36], Vision Transformer (ViT) [37], Co-scale conv-attentional image Transformers (CoaT) [38], and Swin transformer (Swin-T) [39] models have been used, which are trained by the original and synthesized images to evaluate the system's effectiveness for diagnosing corneal instances. The models demonstrate the influence of synthesized and imbalanced datasets on the corneal diagnosis task; the data were manipulated, and varied numbers of data were used for training and testing. To be balanced, the data were processed using the resample method (oversampling and downsampling). After training each transfer learning model, the results are compared to the results of other approaches see tables 3 and 4. Figure 1: The four corneal maps: (**a**) sagittal, (**b**) elevation front, (**c**) corneal thickness, and (**d**) elevation back maps. The Resnet50 forecasts the delta required to get from one layer to the next and arrive at the final prediction. It addresses the vanishing gradient problem by enabling the gradient to flow through an additional shortcut path. It enables the model to skip over a CNN weight layer if it is not required. This helps to avoid the difficulty of overfitting the training set. ResNet50 is a 50-layer network [36]. The MobileNetv2 is a convolutional architecture built for usage with mobile or low-cost devices that minimizes network cost and size [40]. Segmentation, classification, and object recognition may all be performed with the MobileNetV2 model. 
In comparison to its predecessor, MobileNetV2 includes two new features occurring linearly between layers, and bottleneck shortcuts are established [41]. Xception is a depthwise separable convolutions-based deep convolutional neural network architecture; Google researchers came up with the idea. Xception has three different flows: entry, middle, and exit. The data initially pass via the entering flow, then eight times through the middle flow, and finally through the exit flow. Batch normalization is applied to all convolution and separable convolution layers [36]. [37] have investigated the possibility of using transformers for straightforward image recognition. Apart from the initial patch extraction step, this architecture does not have any image-specific inductive biases, which sets it apart from previous research leveraging self-attention in computer vision. Instead, [37, 42] employs a standard transformer encoder seen in natural language processing to decode an image as a sequence of patches. With pre-training on massive datasets, this straightforward approach scales remarkably well. Therefore, vision transformer competes with or outperforms the state-of-the-art on many picture classification datasets, while only requiring a little initial investment. CoaT, an image classifier based on the transformer, features cross-scale attention and efficient conv-attention operations, and is given in [38]. CoaT models achieve strong classification results on ImageNet, and their utility for subsequent computer vision tasks, such as object detection and instance segmentation, has been established. In [39], a novel vision Transformer called Swin-T is introduced; it generates a hierarchical feature representation and scales computationally linearly with the size of the input image. For all models, corneal images were fed into the networks to train the models and extract the weights. For 20 epochs, we used a batch size of 32. Moreover, we employed the Adam optimization approach, with a learning rate of 0.001, to iteratively modify network weights. Table 1 displays all of the parameter values utilized by the various classifiers. ### Generating Synthetic Cornea Images The diagnostic ratio is negatively affected by a lack of data [43], and this is the fundamental challenge with model training [44]. We synthesized new examples that were learned from existing data examples using a new way of producing synthetic corneal images using generative adversarial networks (GANs) to expand the training data and enhance diagnostic rates. GANs are deep CNN networks that generate new data from previously trained data such as images [45]. For synthesizing labeled images of the cornea, we employed conditional GANs [46]. The structure of the CGAN model used in this work (see Figure 2) is two networks that compete against one another to achieve a common goal, which is to learn the distribution of data \(p_{data}\) from samples (images in our work). Whereas in the first network, called the generator G network, an image G(x) is generated, \begin{table} \begin{tabular}{c c c} \hline \hline **Method.** & **Image size** & **Parameters** \\ \hline MobilenetV2 & 224\(\times\)224 & 3.5M \\ Resnet50 & 224\(\times\)224 & 25.6 \\ Xception & 299\(\times\)299 & 22.9 \\ ViT & 128\(\times\)128 & 36.3 \\ CoaT & 224\(\times\)224 & 22M \\ Swin-T & 224\(\times\)224 & 29M \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the parameters used in the classifiers. 
usually from noise shaped by the uniform distribution \(P_{x}\), which is close to the target image, as it produces an image representing the class you want to generate, in addition to noise, to function as an assistant factor that aids the model in synthesizing images that are close to reality. On the other hand, the second network, dubbed Discriminator \(D\), tries to discern between real and fake images entered into the network; in other words, the input is \(x\), whereas the output is D(x). It compares the image created by the rival network to the actual image. The loss function, shown in equation (1), is optimized to train adversarial networks [47]. \[\min_{G}\max_{D}\;\mathbb{E}_{x\sim p_{data}}\left[\log D(x)\right]+\mathbb{E}_{z\sim p_{z}}\left[\log\left(1-D\big(G(z)\big)\right)\right] \tag{1}\] where \(D\) is trained to maximize \(D(x)\) for images derived from the real data and to minimize it for images that do not come from the real data. On the other hand, the Generator seeks to trick the Discriminator by generating an image \(G(z)\), which calls for maximizing the value of \(D(G(z))\). These two networks are still in competition during the training phase, with the Generator attempting to improve its performance to deceive the Discriminator, while the latter distinguishes between the real and fake images. The generator accepts a vector of random numbers with a size of 100 created by a uniform distribution, and this vector is reshaped into 4x4x1024. The architecture involved four deconvolution layers to up-sample the image using a 5x5 filter size. Finally, the output is an image with a size of 64x64x3. Except for the last layer, batch normalization and ReLU activation functions are used. The Discriminator issues the class label, in addition to the real-or-fake decision, from a corneal image of size 64x64x3, using four convolutional layers with a 5x5 filter size. To reduce the spatial dimensionality, stride convolution is used in each layer. Batch normalization and ReLU were also applied in each layer (except the fully connected layer). The training of the CGAN was conducted separately to generate every corneal image category, as well as iteratively for the Discriminator and Generator. The noise sample \(Z^{1}...Z^{n}\) derives from a uniform distribution in the range [-1, 1], \(n=100\). The slope of the leaky ReLU was equal to 0.2. A zero-centered normal distribution was employed to initialize the weights with a standard deviation of 0.02. Moreover, for 20 epochs, we used the Adam optimizer, and the learning rate was equal to 0.0001. Figure 2 illustrates the structure of the proposed system. Figure 2: _Structure of the proposed system._ ## 3 Results The goal of this research, in which all of the steps have been outlined in detail in Table 2, is to find out to what extent generated data affect the diagnosis of corneal diseases, and how well classifiers can classify them. Therefore, the CGAN model has been trained to deal with data disparities; in other words, high-quality topographical images are generated separately for each corneal disease, using fine-tuned parameters to address the scarcity of the cornea dataset. For clinical decision-making, transfer learning methods have been exploited, where the augmented dataset is used in training the networks. The results of diagnosing corneal diseases are reported using different types of transfer learning models, such as MobileNetv2, Resnet50, and Xception.
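For concreteness, the generator and discriminator described in the previous section (100-dimensional uniform noise projected to 4x4x1024, four 5x5 deconvolution layers up to a 64x64x3 image, four strided 5x5 convolutions with leaky-ReLU slope 0.2 in the discriminator, N(0, 0.02) weight initialization, Adam with learning rate 0.0001) can be summarized in a minimal PyTorch sketch. The exact label-conditioning mechanism and the output activation are not fully specified above, so the embedding-based conditioning, the tanh output, and the two discriminator heads below are assumptions made only for illustration; the experiments themselves were run in MATLAB.

```python
# Minimal sketch of the CGAN layout described above; conditioning, output activation
# and head structure are assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

NUM_CLASSES, NOISE_DIM = 8, 100   # eight corneal-case classes, 100-d uniform noise

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NOISE_DIM)        # label conditioning (assumed)
        self.project = nn.Linear(2 * NOISE_DIM, 4 * 4 * 1024)    # noise+label -> 4x4x1024
        def up(cin, cout, last=False):
            layers = [nn.ConvTranspose2d(cin, cout, 5, stride=2, padding=2, output_padding=1)]
            if not last:
                layers += [nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
            return layers
        self.net = nn.Sequential(*up(1024, 512), *up(512, 256), *up(256, 128),
                                 *up(128, 3, last=True), nn.Tanh())   # 4 -> 8 -> 16 -> 32 -> 64
    def forward(self, z, labels):
        h = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(self.project(h).view(-1, 1024, 4, 4))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        def down(cin, cout, bn=True):
            layers = [nn.Conv2d(cin, cout, 5, stride=2, padding=2)]
            if bn:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.features = nn.Sequential(*down(3, 64, bn=False), *down(64, 128),
                                      *down(128, 256), *down(256, 512))   # 64 -> 4
        self.adv = nn.Linear(512 * 4 * 4, 1)             # real / fake score
        self.cls = nn.Linear(512 * 4 * 4, NUM_CLASSES)   # class-label head
    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.adv(h), self.cls(h)

def init_weights(m):                                     # N(0, 0.02) init for conv/linear weights
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.normal_(m.weight, 0.0, 0.02)

G, D = Generator().apply(init_weights), Discriminator().apply(init_weights)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

# One forward pass: uniform noise in [-1, 1] plus a batch of class labels.
z = torch.rand(16, NOISE_DIM) * 2 - 1
y = torch.randint(0, NUM_CLASSES, (16,))
fake = G(z, y)                                           # -> (16, 3, 64, 64)
adv_score, cls_logits = D(fake)
```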
To detect the importance of data generation, as well as its effect on classification tasks, we used the original dataset to train and test each classifier with and without corneal-generated images. On the other hand, to assess the strength of the synthesis model and its ability to synthesize convergent data in a particular category and divergent from other categories, each classifier was trained on the synthesized data without using the original data. We employed eight-fold cross-validation with case separation at the patient level in all of our experiments and evaluations. The used examples contained the corneal cases (normal or abnormal for each corneal map). For each batch of data images, we trained the network and assessed the outcomes individually. The CGAN architecture is used to train each corneal-case class separately, utilizing the same eight-fold cross-validation method and data split. Following training, the generator is capable of creating realistic corneal case images separately using a vector of noise formed by uniform distributions (see Figure 3). Accordingly, the model synthesized eight different cases of corneal images: normal and abnormal cases for sagittal, corneal thickness, elevation front, and elevation back images. We employed two main kinds of metrics in our research. First, we used observational error metrics such as accuracy, precision, recall, and F1-score metrics to evaluate classification accuracy (equations 2, 3, 4, and 5, respectively). Second, we used equations 6 and 7 to evaluate the synthesized image's quality with the original images via the structural similarity index method (SSIM) [48] and the peak signal-to-noise ratio (PSNR) [49]. \(Accuracy=\frac{TP+TN}{TP+FP+FN}\) (2) \(Precision=\frac{TP}{TP+FP}\) (3) \(Recall=\frac{TP}{TP+FN}\) (4) \(F1\_Score=\frac{2TP}{2TP+FP+FN}\) (5) where TP = true positives, TN = true negatives, FP = false positives, and FN = false negatives. \begin{table} \begin{tabular}{c c} \hline \hline & _Inputs: D: Dataset, img: a cornea’s image which is selected from the D:_ \\ \hline 1 & _GI = Build a model M which generate images from noise and targeting D_ \\ 2 & _For I = 1: CNN classifiers_ // (_MobilenetV2, Resnet50, Xception, ViT, CoT, and Swin-T_) \\ 3 & _[accuracy, precision, recall, f1-score] = Calculate metrics [Accuracy, Precision, Recall, F1-score] from GI_ \\ 4 & _End for_ \\ 5 & _[SSIM, MSE, PSNR, FID] = Calculate [SSIM, MSE, PSNR, FID] between an image from GI and D_ \\ 6 & _End_ \\ \hline \hline \end{tabular} \end{table} Table 2: Proposed Method: Structural similarity (SSIM) [48] is an image quality measurement based on equation (6) between the approximated image \(y_{t}^{t}\) and the ground truth image \(y_{t}^{t}\). \[SSIM(y_{t}^{L},y_{e}^{L})=\frac{1}{\pi}\sum_{l=1}^{M}\frac{(2\mu_{l}\mu_{l}\mu_{ e}+\epsilon_{4})(2\sigma_{l\mu_{e}}+\epsilon_{2})}{(\alpha^{2}\mu_{l}+\alpha^{2} \mu_{e}+\epsilon_{1})(\alpha^{2}\mu_{l}+\alpha^{2}\mu_{j}+\epsilon_{2})} \tag{6}\] In contrast, peak signal-to-noise ratio (PSNR) [49] is an objective assessment based on comparisons using particular numerical criteria [50, 51]; a higher PSNR value indicates better image quality. Images generated by equation (7) have significant numerical differences at the low end of the PSNR scale [52, 53]. \[PSNR(f,g)=10log_{10}(\frac{255^{2}}{MSE(f,g)})) \tag{7}\] MATLAB2020b is used for the implementation of corneal diagnosing. All training processes were performed using an NVIDIA GeForce GTX 1660 GPU. 
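A compact sketch of these evaluation measures is given below (Python; all inputs are placeholder values used only for illustration, since the experiments were run in MATLAB). It uses the standard confusion-matrix definitions corresponding to equations (2)-(5), with true negatives included in the accuracy denominator, the PSNR of equation (7), and SSIM as provided by scikit-image's `structural_similarity`.

```python
# Classification metrics from confusion counts, PSNR as in (7), and SSIM via scikit-image.
import numpy as np
from skimage.metrics import structural_similarity

def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, recall, f1

def psnr(original, generated, peak=255.0):
    mse = np.mean((original.astype(float) - generated.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Example with two 8-bit images of the same size (stand-ins for a real and a generated map).
real = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
fake = np.clip(real.astype(int) + np.random.randint(-5, 6, real.shape), 0, 255).astype(np.uint8)

print(classification_metrics(tp=90, tn=85, fp=10, fn=15))
print(psnr(real, fake))
print(structural_similarity(real, fake, channel_axis=-1, data_range=255))
```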
Using the above-mentioned metrics for different classifiers, few results were recorded when no synthesized data were used; this might be due to overfitting over the smaller number of training images. Conversely, using the CGAN model, the results improved as the number of training instances grew (see Table 3). Since our data images are unbalanced, we suggested revealing how the corneal diagnosis would be affected if a balanced dataset was available. Therefore, we used the traditional data balancing methods, where we conducted data resampling using both approaches to make a balanced dataset out of an imbalanced one. The first approach was undersampling (keeping all samples in the rare class and randomly selecting an equal number of samples in the abundant class); the second approach was oversampling (increasing the size of rare samples using repetition). These two approaches were applied to the data before and after generating images. Results reported that, generally, when applying data resampling on the original data (before using the CGAN model), the classifiers achieved a moral performance, while the data were balanced. Moreover, training by oversampling synthesized data for all classifiers outperforms training by underdamped synthesized data. On the other hand, applying oversampled data on the generated image (after implementing the CGAN model) will not affect the classifier results since the data are vast enough to train the models correctly. In contrast, undersampling negatively affected the achievement of classifiers due to the data being decreased again (see Table 4). \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Classifier** & **Data** & **Accuracy** & **Precision** & **Recall** & **F1-score** \\ \hline \multirow{2}{*}{MobilenetV2} & Original & 75.2 & 72.4 & 73.2 & 72.3 \\ & Synthesized & 88.6 & 86.5 & 89.8 & 87.5 \\ \multirow{2}{*}{Resnet50} & Original & 77.13 & 74.6 & 74.6 & 74.3 \\ & Synthesized & 90.5 & 90 & 90.4 & 90.1 \\ \multirow{2}{*}{Xception} & Original & 78.9 & 75.6 & 75.7 & 75.1 \\ & Synthesized & 90.7 & 90 & 90.6 & 90.2 \\ \multirow{2}{*}{ViT} & Original & 71.2 & 68.2 & 68.1 & 67 \\ & Synthesized & 88.7 & 90.7 & 84.4 & 86.2 \\ \multirow{2}{*}{CoaT} & Original & 65.6 & 64.9 & 65.2 & 65.1 \\ & Synthesized & 69.3 & 68.1 & 68.4 & 68.2 \\ \multirow{2}{*}{Swin-T} & Original & 58.4 & 56.3 & 57.5 & 56.9 \\ & Synthesized & 63.4 & 62.5 & 62.7 & 62.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison for classification of corneal conditions among obstetric models (%). \begin{table} \begin{tabular}{c c c} \hline \hline & **Sagittal Images** & **CT Images** & **EF and EB Images** \\ \hline Diagnosis by an Expert & 0.94 & 0.98 & 0.93 \\ \hline \hline \end{tabular} \end{table} Table 6: Average of Structural Similarity Index (SSIM) and peak signal-to-noise ratio (PSNR) for 100 random images. 
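The two resampling strategies described above can be sketched as follows (Python; the class names and counts are taken from the dataset description, everything else is illustrative): undersampling keeps, for every class, only as many randomly chosen samples as the rarest class has, while oversampling repeats samples of the rare classes until each class matches the most frequent one.

```python
# Random undersampling and oversampling (by repetition) over per-class index lists.
import numpy as np

def undersample(indices_by_class, rng):
    n_min = min(len(v) for v in indices_by_class.values())
    return {c: rng.choice(np.asarray(v), size=n_min, replace=False)
            for c, v in indices_by_class.items()}

def oversample(indices_by_class, rng):
    n_max = max(len(v) for v in indices_by_class.values())
    out = {}
    for c, v in indices_by_class.items():
        v = np.asarray(v)
        extra = rng.choice(v, size=n_max - len(v), replace=True)   # repeat rare-class samples
        out[c] = np.concatenate([v, extra])
    return out

rng = np.random.default_rng(0)
# Class sizes as reported for the original dataset, e.g. elevation-front maps.
by_class = {"Normal_EF": np.arange(765), "Abnormal_EF": np.arange(167)}
print({c: len(v) for c, v in undersample(by_class, rng).items()})   # both 167
print({c: len(v) for c, v in oversample(by_class, rng).items()})    # both 765
```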
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Classifier**} & \multirow{2}{*}{**Data**} & \multicolumn{2}{c}{**Accuracy**} & \multicolumn{2}{c}{**Precision**} & \multicolumn{2}{c}{**Recall**} & \multicolumn{2}{c}{**F1-score**} \\ \cline{3-10} & & **OVS** & **UNS** & **OVS** & **UNS** & **OVS** & **UNS** & **OVS** & **UNS** \\ \cline{3-10} MobilenetV2 & Original & 85.5 & 75.4 & 85.7 & 76.2 & 85.5 & 75.4 & 85.3 & 75.4 \\ Synthesized & Synthesized & 88.5 & 81.1 & 88.8 & 81.6 & 88.4 & 81.1 & 88.4 & 81 \\ Resnet50 & Original & 86.36 & 75.7 & 86.3 & 76 & 86.6 & 75.7 & 86.3 & 75.5 \\ Synthesized & Synthesized & 90.2 & 82.8 & 90.8 & 83 & 90.8 & 82.8 & 90.8 & 82.7 \\ Xception & Original & 86 & 77.3 & 86.2 & 78 & 86 & 77.3 & 85.9 & 76.7 \\ Synthesized & Synthesized & 90 & 82.7 & 90.3 & 82.7 & 90 & 82.6 & 90 & 82.6 \\ ViT & Original & 74.5 & 70.9 & 73.2 & 68.4 & 72.8 & 69.5 & 73 & 68.9 \\ Synthesized & Synthesized & 89.8 & 86.1 & 88.2 & 85.5 & 88.9 & 85.6 & 88.5 & 85.6 \\ CoaT & Original & 69.4 & 63.7 & 69.1 & 62.6 & 68.9 & 62.8 & 69 & 62.7 \\ Synthesized & Synthesized & 73.8 & 66.8 & 72.6 & 65.7 & 72.9 & 65.9 & 72.7 & 65.8 \\ Swin-T & Original & 60.2 & 56.9 & 59.8 & 56.5 & 58.9 & 56.5 & 59.3 & 56.5 \\ Synthesized & Synthesized & 65.6 & 61.7 & 64.6 & 60.8 & 64 & 60 & 64.3 & 60.4 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison for classifying corneal conditions among obstetric models after balancing data (%). Figure 3: Samples of the generated image using the Conditional Generative Adversarial Network (CGAN) model. \begin{table} \begin{tabular}{c c c c} \hline \hline & **Sagittal Images** & **CT Images** & **EF and EB Images** \\ \hline Diagnosis by an Expert & 0.94 & 0.98 & 0.93 \\ \hline \hline \end{tabular} \end{table} Table 5: The results from the expert (%). images very close to the original. Therefore, we can consider those images to be legitimate for training CNNs models, and ophthalmologists can use them in clinical research. The CNN classifiers are repeatedly tested in this work to determine the testing process. The suggested model can be applied in real-time, where testing images only takes a few moments, according to Table 7. While the CoaT model requires the longest ATT, the ATT for ViT beats the other classifiers. The high quality of the images can be seen in the images synthesized from the test images using the CGAN model, which are displayed in Figure 4. It is also possible to notice the stability of the structures and morphologies of the images. ## 4 Discussion The objectives of this work were to apply the CGAN model to generate synthetic medical images for data augmentation to expand limited datasets and improve clinical decision-making for corneal diseases. Thus, we investigated the extent to which synthetic corneal images help another system perform better behind the scenes. The study used a small dataset comprising the sagittal, corneal thickness, elevation front, and elevation back of corneal images. Each class has its distinct characteristics, although there is considerable intra-class variation. Our diagnosis was based on the four maps, each of which was examined to determine whether it was normal or diseased. To identify corneal disorders, a variety of transfer learning architectures were employed. We discovered that by utilizing the CGAN model to synthesize extra realistic images, we could increase the size of the training data groups, thus boosting the clinical decision. 
The diagnostic outcomes for mobilenetV2, Resnet50, Xception, ViT, CoaT, and Swin-T classifiers improved from 75.2 % to 88.6 %, 77.13% to 90.5%, 78.9% to 90.7 %, 71.2% to 88.7%, 65.6% to 69.3%, and 58.4% to 63.4%, respectively. Results from Table 3 show that the synthetic data samples generated can increase the variability of the input dataset, resulting in more accurate clinical decisions. The scores demonstrate that the synthesized images have useful visuals and, more crucially, useful characteristics that may be used in computer-aided diagnosis. The other aspect of this research is to test the effect of data balance on diagnostic results, where we used the resampling method to make the dataset balanced. The results showed that training the model before generating a new set of data on a balanced dataset is very important, especially in circumstances where data are scarce. On the contrary, we did not notice a significant impact on the performance of the classifiers when using the data resampling \begin{table} \begin{tabular}{c c c c c c} \hline \hline **MobilenetV2** & **Resnet50** & **Xception** & **ViT** & **CoaT** & **Swin-T** \\ \hline 0.0258 & 0.0187 & 0.0152 & 0.0108 & 0.0342 & 0.0203 \\ \hline \hline \end{tabular} \end{table} Table 7: Convolutional neural network **(CNN)** classifier’s average time test (ATT) (sec.). Figure 4: Example of original and synthesis images. on the generated data because the data was sufficient and suitable for training the models without the need to balance them using data balancing methods. This is clear evidence of the importance of the model proposed in this paper. In a final experiment, we compared the performance of the classifiers-based systems employed in this study for clinical decision-making (Table 4). The highest performance was derived from synthesized data in the Xception classifier, whereas the best performance came from using balance data in Resnet50 when using the oversampling approach, but the ViT model while using the undersampling approach. This work has several limitations. For example, the training complexity was enhanced by training distinct GANs for each corneal case class. It might be useful to look into GAN designs that produce multi-class samples at the same time. Another type of GAN learning process might increase the quality of the corneal image. It is also possible to do more research to improve the training loss function by adding regularization terms. Because the human factor is critical in evaluating the proposed model's outputs, an expert opinion was obtained after providing him with a set of generated corneal images containing a randomly selected set of normal and abnormal corneal images. The following was the expert's opinion: "Creating a new template for the corneal topographical of four refractive maps is considered an interesting subject as it enriched the overall expected shapes that could be seen during the daily clinic. These new images which created based on real cases collected previously and diagnosed that the new images are still inside the reality borderlines. Gain good experience with the new shapes and specify the further required steps of a diagnosis other than the topographical maps that could be specified advanced for predicted out-of-skim cases. In such a way, offline training for the new ophthalmologists and improving the skill of diagnosis with the preparation for new unseen cases could be done." 
In the future, we look to develop our research to exploit other GANs that might benefit from corneal image synthesis for better achievement. ## 5 Conclusion In conclusion, we proposed a strategy for improving performance in a medical issue with little data by generating synthetic medical images for data augmentation. On a corneal diseases diagnosis task, we discovered that synthetic data augmentation beat traditional data augmentation in accuracy by roughly 13%. Additionally, we investigated the performance of the classifiers in different conditions, and we found that while working with cornea images to diagnose diseases, the Xception classifier is more responsive than the rest of the used classifiers. We anticipate that synthetic augmentation can help with a variety of medical issues and that the method we have outlined can lead to more powerful and reliable support systems. **Author Contributions:** "Conceptualization, S. K. J. and S. A.; methodology, S. K. J., S. A., N. H. G., and J. M.; software, S. K. J. and S. A.; validation, S. A., N. H. G., and J. M.; formal analysis, S. K. J., and S. A.; investigation, N. H. G., and J. M.; resources, S. K. J., S. A., N. H. G., and J. M.; data curation, S. K. J., S. A., N. H. G., and J. M.; writing - original draft preparation, S. K. J., S. A., N. H. G., and J. M.; writing - review and editing. T. A. R. and S. Q. S.; visualization, T. A. R.; supervision, T. A. R.; project administration; funding acquisition, S. Q. S. and P. S. J. All authors have read and agreed to the published version of the manuscript." **Funding:** Dr.P. S. JosephNg, Faculty of Data Science & Information Technology, INTI International University, Persiaran Perdana BBN, 71800 Nilai, Negeri Sembilan, Malaysia. **Institutional Review Board Statement:** The manuscript is conducted within the ethical manner advised by the targeted journal. **Informed Consent Statement:** Not applicable. **Data Availability Statement:** Data can be shared upon request from the corresponding author. **Acknowledgments:** None. **Conflicts of Interest:** The authors declare no conflict of interest to any party.
2309.07831
Intense high-order harmonic generation in giant fullerene molecule C$_{240}$
In this work the extreme nonlinear optical response of the giant fullerene molecule C$_{240}$ in a strong laser field is studied. High-order harmonic generation in this quantum nanostructure is investigated by modeling the C$_{240}$ molecule and its interaction with the laser field within the tight-binding mean-field approach. The electron-electron interaction is modeled by the parametrized Ohno potential, which takes into account the long-range Coulomb interaction. The essential role of the many-body Coulomb interaction in determining the harmonic intensities is demonstrated. We also consider the vacancy-defected molecule C$_{240}$. The presence of a single vacancy breaks the icosahedral symmetry, leading to the emergence of intense even-order harmonics. We examine the dependence of moderate harmonics on the laser frequency, which reveals the multiphoton resonant nature of high harmonic generation. The dependence of the cutoff harmonics on both laser intensity and frequency is examined as well.
H. K. Avetissian, S. Sukiasyan, T. M. Markosyan, G. F. Mkrtchian
2023-09-14T16:20:42Z
http://arxiv.org/abs/2309.07831v1
# Intense high-order harmonic generation in giant fullerene molecule C\({}_{240}\) ###### Abstract In this work the extreme nonlinear optical response of the giant fullerene molecule C\({}_{240}\) in a strong laser field is studied. High-order harmonic generation in this quantum nanostructure is investigated by modeling the C\({}_{240}\) molecule and its interaction with the laser field within the tight-binding mean-field approach. The electron-electron interaction is modeled by the parametrized Ohno potential, which takes into account the long-range Coulomb interaction. The essential role of the many-body Coulomb interaction in determining the harmonic intensities is demonstrated. We also consider the vacancy-defected molecule C\({}_{240}\). The presence of a single vacancy breaks the icosahedral symmetry, leading to the emergence of intense even-order harmonics. We examine the dependence of moderate harmonics on the laser frequency, which reveals the multiphoton resonant nature of high harmonic generation. The dependence of the cutoff harmonics on both laser intensity and frequency is examined as well. ## I Introduction Intense light interaction with nanostructures can excite the electrons of the system through multiphoton channels, leading to extreme nonequilibrium states [1]. The excited electrons subsequently emit coherent electromagnetic radiation, encompassing tens to hundreds of harmonics of the incident light [2; 3]. This fundamental process in intense laser-matter interaction is known as the high harmonic generation (HHG) phenomenon [4; 5]. In atoms, HHG has been widely used to produce coherent extreme ultraviolet radiation, allowing access to the extreme time resolution of the underlying quantum processes and enabling attosecond physics [6; 7]. Among the diverse range of nanostructured materials suitable for extreme nonlinear optical applications, carbon allotropes hold a central position [8; 9]. One of the carbon allotropes is the family of fullerenes [10], which are large molecules formed by closing a graphite sheet, where the required curvature is achieved by incorporating twelve pentagons among a given number of graphene hexagons. The most well-known fullerene is the buckminsterfullerene C\({}_{60}\)[11], which possesses icosahedral symmetry. The discovery of fullerene C\({}_{60}\) through laser evaporation of graphite triggered the study of many other fullerene molecules. Larger fullerenes, often referred to as giant fullerenes, can also be constructed with icosahedral symmetry [12]. These large fullerenes can be visualized as cut-out pieces of graphene that are folded into an icosahedron. Consequently, they exhibit properties similar to graphene [13] or graphene quantum dots [14], while remaining stable due to their closed topological structure. Note that in the continuum limit C\({}_{60}\) and related molecules are well described by the Dirac equation in curved space and in the field of a monopole [15; 16]. Giant or large fullerenes have been the subject of active research since the 1990s. For a more comprehensive overview, we refer the reader to references [17; 18; 19; 20; 21; 22; 23] for earlier studies and references [24; 25; 26; 27; 28; 29; 30] for more recent investigations. In the field of HHG, enhancing the conversion efficiency is of utmost importance. This efficiency strongly relies on the density of emitters and the density of states of these emitters. 
To this end, molecular systems, clusters, and crystals have shown potential in significantly increasing harmonic intensity compared to atomic systems, as they can exploit multiple excitation channels [31; 32; 33]. As a result, there has been a growing interest in extending HHG to carbon-based materials, such as semimetallic graphene [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50], graphene quantum dots [51; 52; 53; 54], and fullerenes [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. Experimental studies, namely Refs. [59; 60], have reported a robust harmonic signal from C\({}_{60}\) plasma. Additionally, theoretical works have predicted strong HHG from both C\({}_{60}\)[56; 57; 65; 66] and C\({}_{70}\) molecules [65] and from solid C\({}_{60}\)[64]. Notably, the increase in conducting electrons in fullerene molecules leads to a subsequent rise in the density of states, thereby opening up new channels that can amplify the HHG signal. Consequently, exploring the HHG process in giant fullerenes becomes a compelling area of interest. With increasing fullerene size, the molecules are subject to various types of defects. Therefore, investigating the impact of defects on HHG in large fullerenes holds significance. Recent research involves the effects of disorder, impurities, and vacancies on HHG in solids [67; 68; 69; 70; 71; 72; 73; 74; 75]. These studies have revealed that an imperfect lattice can enhance HHG compared to a perfect lattice, especially when considering doping-type impurities or disorders. For C\({}_{60}\) and C\({}_{180}\), it has been shown that both diagonal and off-diagonal disorders break inversion symmetry, lift the degeneracy of states, and create new channels for interband transitions, resulting in enhanced high harmonic emission [66]. This raises intriguing questions about how vacancies specifically affect the HHG spectra in large fullerenes. Vacancies can occur naturally or be introduced in fullerenes through laser or ion/electron irradiation [76; 77]. Taking into account that vacancy defects introduce localized electronic states [78] and that the HHG process is highly sensitive to the electron wave functions, we can expect new effects in the HHG process when vacancy-defected fullerenes are considered. In this study, we present a microscopic theory that explores the extreme nonlinear interaction of normal and single-vacancy-defected fullerene C\({}_{240}\) with strong electromagnetic radiation. In particular, we consider coherent interaction with linearly polarized electromagnetic radiation, taking into account collective electron-electron interactions. Employing the dynamical Hartree-Fock approximation, we reveal the general structure of the HHG spectrum and its relation to molecular excitations and to the icosahedral symmetry breaking of giant molecules. The paper is organized as follows. In Sec. II, the model and the basic equations are formulated. In Sec. III, we present the main results. Finally, conclusions are given in Sec. IV. ## II The model and theoretical approach We start by describing the model and theoretical approach. The fullerene molecule C\({}_{240}\) and C\({}_{240}\) with a monovacancy are assumed to interact with a mid-infrared or visible laser pulse that excites coherent electron dynamics. For brevity we refer to the vacancy-defected C\({}_{240}\) molecule as C\({}_{239}\). The schematic structures of these fullerene molecules are shown in Fig. 1. We assume neutral molecules, which will be described within the tight-binding (TB) theory. 
The electron-electron interaction (EEI) is described in the extended Hubbard approximation [65; 79; 80]. Hence, the total Hamiltonian reads: \[\widehat{H}=\widehat{H}_{0}+\widehat{H}_{\rm int}, \tag{1}\] where \[\widehat{H}_{0}=-\sum_{\langle i,j\rangle\sigma}t_{ij}c_{i\sigma}^{\dagger}c_ {j\sigma}+\frac{U}{2}\sum_{i\sigma}n_{i\sigma}n_{i\overline{\sigma}}+\frac{1} {2}\sum_{i,j}V_{ij}n_{i}n_{j} \tag{2}\] is the free fullerene Hamiltonian. Here \(c_{i\sigma}^{\dagger}\) creates an electron with spin polarization \(\sigma=\{\uparrow,\downarrow\}\) at site \(i\) (\(\overline{\sigma}\) is the spin polarization opposite to \(\sigma\)), and \(\langle i,j\rangle\) runs over all the first-nearest-neighbor hopping sites with the hopping integral \(t_{ij}\) between the nearest-neighbor atoms at positions \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\). The density operator is \(n_{i\sigma}=c_{i\sigma}^{\dagger}c_{i\sigma}\), and the total electron density for the site \(i\) is \(n_{i}=n_{i\uparrow}+n_{i\downarrow}\). The second and third terms in Eq. (2) describe the EEI Hamiltonian, with the parameters \(U\) and \(V_{ij}\) representing the on-site and the long-range Coulomb interactions, respectively. The involved molecules contain single and double carbon bonds, for which the model Hamiltonian (2) has been parameterized extensively over the years. The input Cartesian coordinates for C\({}_{240}\) are obtained from the Yoshida database [81]. In the present paper, as a first approximation, the monovacancy is simulated by removing one carbon atom. The initial structures are further optimized with the help of the IQmol program [82]. Hence, in the vicinity of the vacancy the bond lengths are changed. There is also a scenario in which the structure undergoes a bond reconstruction in the vicinity of the vacancy [83]. In either case, a local distortion of the lattice takes place, resulting in states that are strongly localized around the defects [84; 85]. For the one-electron hopping matrix elements, which in this work have been restricted to the nearest neighbors, we use values close to the graphene hopping matrix elements. The common choice of the hopping matrix element is \(t_{0}=2.7\) eV, corresponding to the C-C bond length of \(d_{0}=1.42\) Å, while for shorter or longer bonds its value is extrapolated using the linear relationship \(t_{ij}=t_{0}+\alpha\left(d_{0}-|\mathbf{r}_{i}-\mathbf{r}_{j}|\right)\), with \(\alpha=3.5\) eV/Å being the electron-phonon coupling constant. The EEI is modeled by the Ohno potential [86]: \[V_{ij}=\frac{U}{\sqrt{1+\frac{U^{2}|\mathbf{r}_{i}-\mathbf{r}_{j}|^{2}}{V^{2} d_{m}^{2}}}}, \tag{3}\] where \(V\) denotes the strength of the long-range Coulomb interaction, and \(d_{m}\) is the average bond length. Depending on the screening effects, a popular choice of parameters for the Coulomb interactions is \(0\leq U\leq 4t_{0}\), and \(V=0.5U\)[80; 87]. The light-matter interaction is described in the length gauge \[\widehat{H}_{\rm int}=e\sum_{i\sigma}\mathbf{r}_{i}\cdot\mathbf{E}\left(t \right)c_{i\sigma}^{\dagger}c_{i\sigma}, \tag{4}\] where \(\mathbf{E}\left(t\right)=f\left(t\right)E_{0}\mathbf{\hat{e}}\cos\omega t\) is the electric field strength, with the amplitude \(E_{0}\), frequency \(\omega\), polarization unit vector \(\mathbf{\hat{e}}\), and pulse envelope \(f\left(t\right)=\sin^{2}\left(\pi t/\mathcal{T}\right)\). The pulse duration \(\mathcal{T}\) is taken to be 10 wave cycles: \(\mathcal{T}=20\pi/\omega\). 
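As an illustration of how the hopping integrals \(t_{ij}\) and the Ohno potential (3) could be assembled from a list of carbon coordinates, a minimal sketch is given below. It is our own illustration rather than the authors' code; the nearest-neighbour cutoff and the coordinate array are assumptions, while the numerical parameters follow the values quoted in the text.

```python
import numpy as np

t0, alpha, d0 = 2.7, 3.5, 1.42   # eV, eV/Angstrom, Angstrom (values from the text)
U = 6.0                          # on-site Coulomb energy used in the calculations (eV)
V = 0.5 * U                      # long-range Coulomb strength, V = 0.5 U (eV)

def build_tb_and_ohno(coords, nn_cutoff=1.7):
    """coords: (N, 3) array of carbon positions in Angstrom (e.g. the optimized
    C240 geometry).  nn_cutoff is an assumed bond-length threshold used to
    select nearest neighbours."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    nn = (dist > 0.0) & (dist < nn_cutoff)
    # hopping integrals t_ij = t0 + alpha (d0 - |r_i - r_j|) on nearest-neighbour bonds
    t = np.where(nn, t0 + alpha * (d0 - dist), 0.0)
    # average bond length d_m entering the Ohno potential (3)
    d_m = dist[nn].mean()
    # Ohno potential V_ij of Eq. (3); at i = j it reduces to the on-site value U
    V_ohno = U / np.sqrt(1.0 + (U * dist / (V * d_m)) ** 2)
    return t, V_ohno

# usage sketch:  t_ij, V_ij = build_tb_and_ohno(coords)
```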
From the Heisenberg equation, under the Hartree-Fock approximation one can obtain evolutionary equations for the single-particle density matrix \(\rho_{ij}^{(\sigma)}=\left\langle c_{j\sigma}^{\dagger}c_{i\sigma}\right\rangle\)[65]: \[i\hbar\frac{\partial\rho_{ij}^{(\sigma)}}{\partial t}=\sum_{k}\left(\tau_{kj \sigma}\rho_{ik}^{(\sigma)}-\tau_{ik\sigma}\rho_{kj}^{(\sigma)}\right)+\left( V_{i\sigma}-V_{j\sigma}\right)\rho_{ij}^{(\sigma)}\] \[+e\mathbf{E}\left(t\right)\left(\mathbf{r}_{i}-\mathbf{r}_{j}\right)\rho_{ij} ^{(\sigma)}-i\hbar\gamma\left(\rho_{ij}^{(\sigma)}-\rho_{0ij}^{(\sigma)}\right), \tag{5}\] where \(V_{i\sigma}\) and \(\tau_{ij\sigma}\) are defined via the density matrix \(\rho_{ij}^{(\sigma)}\) and its initial value: \[V_{i\sigma}=\sum_{j\alpha}V_{ij}\left(\rho_{jj}^{(\alpha)}-\rho_{0jj}^{( \alpha)}\right)+U\left(\rho_{ii}^{(\overline{\sigma})}-\rho_{0ii}^{(\overline{ \sigma})}\right), \tag{6}\] \[\tau_{ij\sigma}=t_{ij}+V_{ij}\left(\rho_{ji}^{(\sigma)}-\rho_{0ji}^{(\sigma)}\right). \tag{7}\] In addition, we assume that the system relaxes at a rate \(\gamma\) to the equilibrium distribution \(\rho_{0ij}^{(\sigma)}\). As we see, due to the mean-field modification the hopping integrals (7) become non-zero between remote nodes, irrespective of the distance. ## III Results Now we discuss the full numerical solution of the evolutionary equations for the single-particle density matrix (5), and to get more physical insight we first study which effects can already be observed in the linear regime of interaction. The time propagation of Eq. (5) is performed by the 8th-order Runge-Kutta algorithm. As an initial density matrix we take a fully occupied valence band and a completely empty conduction band. To study the HHG process in the giant fullerene molecule we evaluate the high-harmonic spectrum by Fourier transformation of the dipole acceleration \(\mathbf{a}\left(t\right)=d^{2}\mathbf{d}(t)/dt^{2}\), where the dipole momentum is defined as \(\mathbf{d}\left(t\right)=e\sum_{i\sigma}\mathbf{r}_{i}\rho_{ii}^{(\sigma)} \left(t\right)\): \[\mathbf{a}\left(\Omega\right)=\int_{0}^{\mathcal{T}}\mathbf{a}\left(t\right)e ^{i\Omega t}W\left(t\right)dt,\] where \(W\left(t\right)\) is the window function used to suppress small fluctuations [88] and to decrease the overall background (noise level) of the harmonic signal. As a window function we take the pulse envelope \(f\left(t\right)\). To obtain a mean picture which does not depend on the orientation of the molecule with respect to the laser polarization, we take the wave polarization unit vector as \(\mathbf{\hat{e}}=\left(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3}\right)\). We begin by examining the effect of the vacancy on the states near the Fermi level. In Fig. 1, the electron probability density corresponding to the highest energy level in the valence band is shown on the 3D color-mapped molecular structures. As is seen from this figure, for the vacancy-defected case we have a state strongly localized around the vacancy. Thus, the presence of a single vacancy also breaks the icosahedral symmetry. To examine intrinsic molecular transitions, we consider the extreme case of an external electric field that has the shape of a delta-like impulse in time, exciting all electronic eigenmodes of the systems considered. In this case the relaxation rate is taken to be very small, \(\hbar\gamma=0.5\) meV, to resolve the transitions as much as possible. The right panels of Fig. 1 show the linear absorption spectra (in arbitrary units) with the Coulomb interaction turned on and off. 
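The final step of the calculation, the windowed Fourier transform of the dipole acceleration, can be sketched as follows. This is a schematic illustration only: the dipole acceleration array is assumed to come from propagating Eq. (5), and the spectrum is evaluated by direct quadrature at multiples of the laser frequency.

```python
import numpy as np

hbar_eV_fs = 0.6582                 # hbar in eV*fs
omega = 1.2 / hbar_eV_fs            # laser frequency (rad/fs) for hbar*omega = 1.2 eV
T = 20 * np.pi / omega              # 10-cycle pulse duration, T = 20*pi/omega
t = np.linspace(0.0, T, 8192)       # time grid (fs)
window = np.sin(np.pi * t / T) ** 2 # pulse envelope f(t), reused as the window W(t)

def hhg_spectrum(a_t, harmonics=np.arange(1.0, 41.0, 0.05)):
    """Return |a(Omega)|^2 at Omega = N*omega for the given dipole acceleration
    a_t sampled on the grid t (a_t is assumed to come from Eq. (5))."""
    spec = []
    for N in harmonics:
        integrand = a_t * np.exp(1j * N * omega * t) * window
        spec.append(np.abs(np.trapz(integrand, t)) ** 2)
    return harmonics, np.array(spec)

# usage sketch:  harmonics, spectrum = hhg_spectrum(dipole_acceleration)
```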
The peaks are intrinsic molecular excitation lines and the area of a particular peak defines the weight of the oscillator strengths. The effects of the EEI are similar to those in the fullerene molecule C\({}_{60}\)[80]. The Coulomb interaction shifts the peaks to higher energies, and oscillator strengths at higher energies have relatively larger weight than in the free-electron case. These effects are due to the fact that the long-range Coulomb interactions (3) give rise to large hopping integrals between remote nodes (7) in the Hartree-Fock approximation. For the vacancy-defected case the transitions are overall suppressed compared to the intrinsic case, although the low-energy transitions are strongly modified. From this figure we also see that the optical gap in the fullerene molecule C\({}_{240}\) is approximately 1.7 eV, which is narrower than that in C\({}_{60}\) (2.8 eV). Notably, in both cases the absorption spectra exhibit many peaks up to high energies, suggesting the presence of efficient multiphoton excitation channels and subsequent high-energy single-photon transitions. These factors play a significant role in shaping the HHG spectrum, as we will explore in the following. Next, we study more comprehensively the extreme nonlinear response of the giant fullerene molecule C\({}_{240}\) and its vacancy-defected counterpart C\({}_{239}\). For all further calculations, except for Fig. 7, the relaxation rate is taken to be \(\hbar\gamma=0.1\) eV. For convenience, we normalize the dipole acceleration by the factor \(a_{0}=e\overline{\omega}^{2}\overline{d}\), where \(\overline{\omega}=1\) eV/\(\hbar\) and \(\overline{d}=1\) Å. The power radiated at the given frequency is proportional to \(\left|\mathbf{a}\left(\Omega\right)\right|^{2}\). In Fig. 2, we show the typical HHG spectra in the strong-field regime (\(E_{0}=0.5\) V/Å) for both molecules. For the C\({}_{240}\) molecule, the presence of inversion symmetry restricts the HHG spectrum to odd harmonics only. In contrast, the introduction of a single vacancy in the C\({}_{239}\) molecule disrupts its icosahedral symmetry, resulting in the prominent emergence of even-order harmonics with enhanced intensity. Besides, we see a strongly nonlinear picture, where the strength of the 9th harmonic surpasses that of the 5th and 7th harmonics. Additionally, a distinctive plateau spanning from the 11th to the 21st harmonics exhibits comparable strengths. Notably, for the C\({}_{239}\) molecule, the harmonics near the cutoff display a slight suppression relative to the C\({}_{240}\) ones. This disparity is attributed to the differing effectiveness of the excitation channels, which favors enhanced harmonics in the case of the C\({}_{240}\) molecule (see Fig. 1). Let us now consider the influence of the pump wave frequency on the HHG process within the energy range \(\hbar\omega=1-2\) eV. This analysis is presented in Fig. 3, which illustrates the frequency-dependent HHG spectra. Notably, we discern that the position of the cutoff harmonic \(N_{\mathrm{cut}}\) demonstrates a relatively gradual response to changes in the wave field frequency \(\omega\). Additionally, this cutoff exhibits distinctive peaks within the mid-frequency range. It is worth noting that in atomic HHG processes involving free continua, the cutoff harmonic position scales as \(N_{\mathrm{cut}}\sim\omega^{-3}\)[5]. Furthermore, a noteworthy feature emerges when considering the C\({}_{239}\) molecule: even-order harmonics are suppressed for higher-frequency pump waves. 
This phenomenon can be attributed to the fact that with higher-frequency pump waves, the excitation and recombination channels predominantly involve highly excited states that still retain the inversion symmetry. Of particular interest is the plateau region within the spectra. Here, a pattern of alternating variation with frequency becomes evident, a hallmark of multiphoton resonant transitions between the valence and conduction bands. This resonant behavior is further illuminated by Figs. 4 and 5, where we visualize the dependence of the emission strength of the preplateau harmonics on the pump wave frequency. It is apparent that these harmonics exhibit resonant behavior. Upon a closer examination of Fig. 1, we discern that the molecular excitations exhibit peaks coinciding with these resonant frequencies, providing supplementary evidence for the multiphoton resonant transitions. For instance, in the case of molecule C\({}_{240}\), the highest peak for the 5th harmonic emerges at around \(1.3\ \mathrm{eV}\). This frequency aligns with the local peak at \(5\omega\sim 6.5\ \mathrm{eV}/\hbar\) in Fig. 1, accompanied by multiple excitation channels. Similarly, considering molecule C\({}_{239}\), the peak for the 6th harmonic is proximate to \(1.18\ \mathrm{eV}\), in accordance with the local peak at \(6\omega\sim 7\ \mathrm{eV}/\hbar\) in Fig. 1. The peaks displayed in Figs. 4 and 5 correspond to similar peaks in the molecular excitation spectra, as depicted in Fig. 1. The multiphoton resonance-driven characteristics are further supported by the evident alteration in the population of energy levels within the valence and conduction bands, as highlighted in Fig. 6. This figure presents the post-interaction population distribution of energy levels, demonstrating a marked departure from the equilibrium distribution. This discrepancy underscores the substantial impact of multiphoton resonant transitions within the HHG process of the giant fullerene C\({}_{240}\) under the influence of intense near-infrared laser fields. Continuing our exploration, let us examine the influence of the relaxation rate on the HHG phenomenon across the span \(\hbar\gamma=0.1-0.2\) eV. The corresponding dependencies of the HHG spectra on the relaxation rate are presented in Fig. 7. 

Figure 1: The top and bottom panels represent the C\({}_{240}\) fullerene and C\({}_{240}\) with a monovacancy, respectively. For brevity we refer to the latter as C\({}_{239}\). Within each row, the following visualizations are presented from left to right: the electron probability density corresponding to the highest energy level in the valence band on the 3D color-mapped molecular structures, and the linear absorption spectra with the Coulomb interaction turned on and off.

Figure 2: The HHG spectra in the strong-field regime in logarithmic scale via the normalized dipole acceleration Fourier transformation \(a\left(\Omega\right)/a_{0}\) (in arbitrary units) for C\({}_{240}\) and for C\({}_{239}\). The laser frequency is \(\omega=1.2\) eV/\(\hbar\). The spectra are shown for the EEI energy \(U=6\) eV.

Figure 3: The dependence of the HHG spectra on the wave field frequency, illustrated for C\({}_{240}\) (top) and C\({}_{239}\) (bottom) using the normalized dipole acceleration Fourier transformation \(a\left(\Omega\right)/a_{0}\), plotted on a logarithmic scale. The wave amplitude is taken to be \(E_{0}=0.5\) V/Å. The relaxation rate is set to \(\hbar\gamma=0.1\) eV. The EEI energy is \(U=6\) eV. 
It is discernible that HHG exhibits resistance to relaxation processes, with the preplateau harmonics, in particular, displaying notable robustness. As has been seen from Fig. 1, the positions of the molecular excitonic lines and their relative intensities depend on the EEI. The HHG yield is also expected to change due to the EEI. The latter is shown in Fig. 8, where the HHG spectra in the strong-field regime for different EEI energies are shown for the fullerene C\({}_{240}\) molecule. A similar picture is obtained for the C\({}_{239}\) molecule. As is seen, the HHG yield strongly depends on the EEI energy. The inclusion of the Coulomb interaction leads to two noteworthy characteristics in the HHG spectra: (a) the most prominent feature is a substantial increase in the HHG signal by several orders of magnitude near the cutoff regime compared to the case of free quasiparticles; (b) the cutoff frequency is significantly enhanced. The significant enhancement in the HHG signal can be explained by the strong modification of the hopping integrals (7) and the resulting level dressing due to the mean-field effect. This observation gains further support from the noticeable prominence of these features in the case of the giant fullerene C\({}_{240}\), in stark contrast to the behavior observed in the C\({}_{60}\) molecule [65]. Another notable aspect of the HHG signals in giant fullerene molecules is their dependence on the size of the molecule. The HHG signals per particle for C\({}_{240}\) and C\({}_{60}\) are compared in Fig. 9. As demonstrated, there is a significant increase in the HHG signal for the C\({}_{240}\) molecule, a result also observed for the C\({}_{70}\) molecule according to previous studies [65]. This enhancement may be attributed to the density of states, which is indirectly reflected in Fig. 1 via the absorption spectra. The inset in Fig. 9 shows the linear absorption spectrum for the C\({}_{60}\) molecule obtained in the same way as in Fig. 1. This figure reveals that the C\({}_{240}\) molecule has substantially more transition channels than the C\({}_{60}\) one. Finally, note that within the scope of the described methodology we have explored the correlation between the cutoff frequency and the intensity of the pump wave by analysing the HHG spectra for various intensities. The relationship between the HHG spectra and the amplitude of the wave field for both giant molecules is visually represented in Fig. 10. This figure prominently illustrates the nonlinear connection between the pre-plateau harmonics and the amplitude of the pump wave. The analysis of the obtained results reveals that for high intensities the positions of the cutoff harmonics can be adequately described by scaling with the square root of the field strength amplitude. The solid lines superimposed on the density plot in Fig. 10 represent envelopes (\(\sim\sqrt{E_{0}}\)) that determine the positions of the cutoff harmonics. Notably, it is evident that these envelopes provide a reasonably accurate approximation for the cutoff harmonics at large field strengths. 

Figure 4: The dependence of the emission strength in the case of C\({}_{240}\) for the 3rd, 5th, 7th, and 9th harmonics on the pump wave frequency for the setup of Fig. 3.

Figure 5: The dependence of the emission strength in the case of C\({}_{239}\) for the 2nd, 4th, 6th, and 8th harmonics on the pump wave frequency for the setup of Fig. 3.

Figure 6: The residual population of levels for the setup of Fig. 3. 
## IV Conclusion We have carried out an extensive exploration of the highly nonlinear optical response of giant fullerene molecules, with a particular emphasis on C\({}_{240}\), which possesses the characteristic icosahedral point-group symmetry often encountered in such molecular systems. To disclose the complete physical picture of the HHG process in giant fullerene molecules with the mentioned icosahedral symmetry, we have also investigated a vacancy-defected molecule, C\({}_{239}\). Our investigation employed consistent quantum/analytic and numerical calculation of the HHG spectra using a mean-field methodology that rigorously accounts for long-range many-body Coulomb interactions. Through the solution of the evolutionary equations governing the single-particle density matrix we have disclosed resonant effects within the HHG spectra and have demonstrated the fundamental role of the Coulomb interaction in shaping the intensities of the harmonics. A significant enhancement in the HHG yield, as compared with the fullerene molecule C\({}_{60}\), has been established. Moreover, our research has elucidated that the presence of a single vacancy, causing the breakdown of the icosahedral symmetry, stimulates the appearance of pronounced even-order harmonics. In terms of the dependence of the cutoff harmonics on the intensity of the wave field, we have established that this relationship can be approximated with good accuracy by scaling with the square root of the pump wave field amplitude. 

Figure 7: The dependencies of the HHG spectra on the relaxation rate, illustrated for C\({}_{240}\) (top) and C\({}_{239}\) (bottom). The spectra are shown for the EEI energy \(U=6\) eV. The pump wave frequency is \(\omega=1.5\) eV/\(\hbar\). The wave amplitude is taken to be \(E_{0}=0.5\) V/Å. The color bar shows the relaxation rate in eV/\(\hbar\).

Figure 8: The comparison of HHG signals for C\({}_{240}\) at different EEI energies. The pump wave frequency is \(\omega=1.2\) eV/\(\hbar\) and the wave amplitude is 0.5 V/Å. The relaxation rate is set to \(\hbar\gamma=0.1\) eV.

Figure 9: The comparison of HHG signals per particle for C\({}_{240}\) and C\({}_{60}\). The pump wave frequency is \(\omega=1.1\) eV/\(\hbar\) and the wave amplitude is 0.5 V/Å. The relaxation rate is set to \(\hbar\gamma=0.1\) eV. The inset shows the linear absorption spectrum for C\({}_{60}\) obtained in the same way as in Fig. 1.

###### Acknowledgements. The work was supported by the Science Committee of the Republic of Armenia, project No. 21AG-1C014.
2302.14281
Periodic and open classical spin Calogero-Moser chains
We construct a class of interacting spin Calogero-Moser type systems. They can be regarded as a many particle system with spin degrees of freedom and as an integrable spin chain of Gaudin type. We prove that these Hamiltonian systems are superintegrable.
Nicolai Reshetikhin
2023-02-28T03:29:42Z
http://arxiv.org/abs/2302.14281v1
# Periodic and open classical spin Calogero-Moser chains ###### Abstract. We construct a class of interacting spin Calogero-Moser type systems. They can be regarded as a many particle system with spin degrees of freedom and as an integrable spin chain of Gaudin type. We prove that these Hamiltonian systems are superintegrable. ## Introduction **1.** Classical Calogero-Moser (CM) systems were among the first integrable \(N\)-particle systems of one-dimensional particles [3][25], with the potential \(1/(q_{i}-q_{j})^{2}\). This model was generalized to the potential \(1/sh^{2}(q_{i}-q_{j})\) in [36]. It was then extended to other root systems and to elliptic potentials in [28], and to a model involving spin degrees of freedom in [16]. There is an extensive literature on spin versions of CM systems. For example, in [21][22] solutions to the equations of motion of the elliptic spin Calogero-Moser system were related to special elliptic solutions of the matrix KP hierarchy. The relation to gauge theories was explored in many papers, see for example [27][15]. A variety of spin CM systems were obtained by L. Feher, see for example [10][11][12]; in particular, he derived important examples related to homogeneous spaces. Two spin CM systems were studied in [18][19]. Integrable chains of relativistic spin CM type systems were studied in [5][2]. Superintegrability of spin CM systems and of spin Ruijsenaars systems was established in [29]. In [31] the superintegrability of spin CM systems on homogeneous spaces was established. A family of superintegrable systems on moduli spaces of flat connections was constructed in [1]. This family includes the systems studied in [5][2]. In this particular case the system is also Liouville integrable. In this paper we describe classical superintegrable systems which we call spin Calogero-Moser (CM) chains. We call them spin CM chains because they combine features of many-particle systems (as in CM systems) and of spin chains. We distinguish two cases: a _periodic chain_ and an _open chain_. The periodic case is the classical version of a quantum integrable system where joint eigenfunctions of quantum commuting Hamiltonians are trace functions, see [7]. In this case the spin part of the system resembles a spin chain with periodic boundary conditions. In the case of rank 1 orbits for \(\mathfrak{sl}_{n}\) these systems are linearized versions of [5] and [2]. In the open case they are a classical version of the quantum integrable systems constructed in [35][33]. For these systems the spin part of the system is similar to an open spin chain. In both cases, i.e. in the periodic and in the open spin Calogero-Moser chains, the phase space is a stratified symplectic space [23], which, in some cases, has only one stratum and becomes a symplectic manifold. **2.** Recall that a _superintegrable system_ is the structure on a symplectic manifold \(\mathcal{M}\) that consists of a Poisson manifold \(\mathcal{P}\), a Poisson manifold \(\mathcal{B}\) with the trivial Poisson structure (i.e. zero Poisson tensor) and two surjective Poisson projections \[\mathcal{M}\overset{p_{1}}{\to}\mathcal{P}\overset{p_{2}}{\to}\mathcal{B} \tag{1}\] such that \(\dim(\mathcal{M})=\dim(\mathcal{P})+\dim(\mathcal{B})\). For a superintegrable system a generic fiber of \(p_{1}\) is an isotropic submanifold of dimension \(\dim(\mathcal{B})\), and a generic fiber of \(p_{2}\) is a disjoint union of symplectic leaves of \(\mathcal{P}\). For details see [26], [30] and references therein. 
Here we adapt this notion to the case of stratified symplectic and Poisson spaces, in which case \(p_{1}\) and \(p_{2}\) are Poisson mappings between stratified spaces. In this paper superintegrability means the balance of dimensions for the big stratum. How the system behaves at the smaller strata will be the subject of a separate publication. In the algebraic case, the appropriate setting is symplectic and Poisson stacks. Let \(I\) be a Poisson commutative subalgebra of \(A=C^{\infty}(\mathcal{M})\) that consists of functions which are constant on fibers of \(p_{2}\circ p_{1}\) (the pull-back of functions on \(\mathcal{B}\) to functions on \(\mathcal{M}\)) and \(J\) be the Poisson algebra of functions which are constant on fibers of \(p_{1}\) (the pull-back of functions on \(\mathcal{P}\)). The condition on \((\mathcal{M},\mathcal{P},\mathcal{B})\) for being a superintegrable system is equivalent to the following condition on \(I\subset J\subset A\). The Poisson algebra \(A\) has trivial center, and \(I\subset A\) is a Poisson commutative subalgebra such that its centralizer \(J\) in \(A\) has the maximal possible Gelfand-Kirillov dimension for the given Gelfand-Kirillov dimension of \(I\). The Hamiltonian dynamics generated by a function \(H\in I\) is called _superintegrable_. Any function from \(J\) is constant along flow lines of the vector field generated by \(H\) and is thus an integral of motion for the Hamiltonian dynamics generated by \(H\). This is why we call elements of the Poisson commutative subalgebra \(I\) Hamiltonians and elements of \(J\) conservation laws. **3.** Throughout this paper \(G\) is a split real connected semisimple Lie group with finite center which admits a complexification, and \(\Theta\in\operatorname{Aut}(G)\) is a Cartan involution. We denote by \(K=G^{\Theta}\) the closed subgroup of fixed points of \(\Theta\), which is connected and maximal compact. Let \(\theta\) be the corresponding Cartan involution1 of \(\mathfrak{g}\), and \(\mathfrak{k}\) the Lie algebra of \(K\). The associated Cartan decomposition of \(\mathfrak{g}\) is \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\), with \(\mathfrak{p}\) the \((-1)\)-eigenspace of \(\theta\). Footnote 1: Recall that an involution \(\theta:\mathfrak{g}\to\mathfrak{g}\) is a Cartan involution when the bilinear form \((-\theta(x),y)\) on \(\mathfrak{g}\) is positive definite. Here \((\cdot,\cdot)\) is the Killing form. Let \(\mathfrak{a}\subset\mathfrak{g}\) be a maximally noncompact \(\theta\)-stable Cartan subalgebra of \(\mathfrak{g}\). Since \(\mathfrak{g}\) is split we have \(\mathfrak{a}\subseteq\mathfrak{p}\). On the Lie group level, \(A=\exp(\mathfrak{a})\subset G\) is a maximal real split torus in \(G\) and \(H:=Z_{G}(A)\), the centraliser of \(A\) in \(G\), is a Cartan subgroup in \(G\) containing \(A\). The exponential map provides an isomorphism \(\mathfrak{a}\overset{\sim}{\longrightarrow}A\), whose inverse we denote by \(\log:A\to\mathfrak{a}\). Consider the root decomposition of \(\mathfrak{g}\) with respect to the Cartan subalgebra \(\mathfrak{a}\), \[\mathfrak{g}=\mathfrak{a}\oplus\bigoplus_{\alpha\in R}\mathfrak{g}_{\alpha}\] where \(R\subset\mathfrak{a}^{*}\) is the root system of \(\mathfrak{g}\) relative to \(\mathfrak{a}\). Choose \(e_{\alpha}\in\mathfrak{g}_{\alpha}\) such that \[\theta(e_{\alpha})=-e_{-\alpha} \tag{2}\] and \((e_{\alpha},e_{-\alpha})=1\) for each \(\alpha\in R\), and choose a subset \(R_{+}\subset R\) of positive roots. 
Let \(W\subset\operatorname{GL}(\mathfrak{a}^{*})\) be the Weyl group of \(R\). The Weyl group \(W\) is isomorphic to \(N_{G}(A)/H\), where \(N_{G}(A)\) is the normaliser of \(A\) in \(G\). Denote by \(A_{reg}\) the set of regular elements in \(a\in A\), \[A_{reg}:=\{a\in A\mid a_{\alpha}:=e^{\alpha(\log(a))}\neq 1\text{ for all }\alpha\in R\}.\] It is the union of all the regular \(W\)-orbits in \(A\). A fundamental domain for the \(W\)-action on \(A_{reg}\)2 is the positive Weyl chamber Footnote 2: In case of \(SL_{n}(\mathbb{R})\) one can take \(A\) to be the diagonal unimodular matrices with positive real entries, and \(A_{reg}\) consists of those with distinct diagonal entries. \[A_{+}=\{a\in A\mid a_{\alpha}:=e^{\alpha(\log(a))}>1\text{ for any }\alpha\in R_{+}\}.\] Let \(G^{\prime}\subset G\) be the set of elements \(g\in G\) which are \(G\)-conjugate to some element in \(A_{reg}\). The inclusion \(A_{reg}\hookrightarrow G^{\prime}\) induces a bijection \(A_{reg}/W\stackrel{{\sim}}{{\longrightarrow}}G^{\prime}/G\), with \(A_{reg}/W\) the set of \(W\)-orbits in \(A_{reg}\) and \(G^{\prime}/G\) the set of \(G\)-conjugacy classes in \(G^{\prime}\). The Weyl group \(W\) is also isomorphic to \(N_{K}(A)/M\), where \(N_{K}(A)=N_{G}(A)\cap K\) and \(M=H\cap K\) (note that \(M\) is a finite group since \(G\) is split). The inclusion map \(A\hookrightarrow G\) induces an isomorphism \(A/W\stackrel{{\sim}}{{\longrightarrow}}K\backslash G/K\). We write \(G_{reg}=KA_{+}K\) for the union of the double \((K,K)\)-cosets intersecting \(A_{reg}\). **4.** The phase space of a periodic spin Calogero-Moser chain corresponding to a collection \(\mathcal{O}=\{\mathcal{O}_{1},\ldots,\mathcal{O}_{n}\}\) of coadjoint orbits \(\mathcal{O}_{i}\subset\mathfrak{g}^{*}\) is the regular part of the symplectic leaf \(\mathcal{S}(\mathcal{O})\) of the stratified Poisson space \(T^{*}(G^{\times n})/G_{n}\), with the action of the gauge group \(G_{n}=G^{\times n}\) the lift of a twisted conjugation action on \(G^{\times n}\)(see section 1.1).3 Here we assume that each of \(\mathcal{O}_{i}\) is non-trivial, i.e. \(\mathcal{O}_{i}\neq\{0\}\) These symplectic leaves are obtained by the Hamiltonian reduction, as it is described in section 1.1. As a stratified symplectic space Footnote 3: In this paper we assume that all quotients \(X/H\) are GIT quotients. \[\mathcal{S}(\mathcal{O})\simeq\{(x_{1},\ldots,x_{n},g)\in\mathfrak{g}^{* \times n}\times G\mid x_{1}-\operatorname{Ad}_{g^{-1}}^{*}x_{n}\in\mathcal{O} _{1},\;x_{i}-x_{i-1}\in\mathcal{O}_{i},\;i=2,\ldots,n\}/G,\] see section 1.3.4 Its regular part is defined as the intersection \(\mathcal{S}(\mathcal{O})_{reg}=\mathcal{S}(\mathcal{O})\cap(\mathfrak{g}^{* \times n}\times G^{\prime})/G\).5 The regular part has the following structure as a symplectic manifold, \(\mathcal{S}(\mathcal{O})_{reg}\simeq\big{(}\nu_{\mathcal{O}}^{-1}(0)/H\times T ^{*}A_{reg}\big{)}/W\), where \(\nu_{\mathcal{O}}:\mathcal{O}_{1}\times\cdots\mathcal{O}_{n}\to\mathfrak{a}^ {*}\) is the moment map for the diagonal coadjoint action of \(H\) on \(\mathcal{O}_{1}\times\cdots\times\mathcal{O}_{n}\) (see section 1.3). Footnote 4: There are \(n\) such natural isomorphisms \(\varphi_{j}\) (\(1\leq j\leq n\)), see section 1.2. In the introduction we use \(\varphi_{n}\). 
Trivialization of \(T^{*}G\) by right translations gives an isomorphism \(T^{*}(G^{\times n})\simeq\mathfrak{g}^{*\times n}\times G^{\times n}\) and the Poisson projection \(T^{*}(G^{\times n})/G_{n}\to(\mathfrak{g}^{*}/G)^{\times n}\), which is the projection to the cotangent directions followed by the quotient with respect to the coadjoint action of \(G^{\times n}\). Poisson commuting Hamiltonians of the periodic spin Calogero-Moser system are functions on \(T^{*}(G^{\times n})/G_{n}\) which are constant on fibers of this Poisson projection. More precisely, the Poisson commuting Hamiltonians are such functions restricted to \(\mathcal{S}(\mathcal{O})_{reg}\). Consider the increasing set of natural numbers \(2=d_{1}\leq\cdots\leq d_{r}\) with \(r=\operatorname{rank}(\mathfrak{g})\) and \(d_{k}-1\) the exponents of \(\mathfrak{g}\). Let \(c_{d_{k}}\) be the nonzero coadjoint invariant functions on \(\mathfrak{g}^{*}\) of degree \(d_{k}\), known as Casimir functions. The function \(c_{2}\) is the quadratic Casimir of \(\mathfrak{g}\). Let \(H^{(l)}_{d_{k}}\) be the function on \((\mathfrak{g}^{*}/G)^{\times n}\) which is \(c_{d_{k}}\) on the \(l\)-th factor and constant on all other factors. Let us denote vectors in \(\mathcal{O}_{k}\subset\mathfrak{g}^{*}\) by \(\mu^{(k)}\), their Cartan components by \(\mu^{(k)}_{0}\), and set \(\mu^{(k)}_{\alpha}=\mu^{(k)}(e_{-\alpha})\) for \(\alpha\in R\). Denote by \((p,a)\) points on \(T^{*}A\simeq\mathfrak{a}^{*}\times A\). Now let us describe the quadratic Hamiltonians in terms of these variables. The \(n\)-th quadratic Hamiltonian is the spin Calogero-Moser Hamiltonian. It has a particularly simple form: \[H_{2}^{(n)}=\frac{1}{2}(p,p)-\sum_{\alpha>0}\frac{\mu_{\alpha}\mu_{-\alpha}}{2 \operatorname{sh}^{2}(q_{\alpha})}\] where we used the parametrization \(a_{\alpha}=e^{q_{\alpha}}\) (so \(q_{\alpha}=\alpha(\log(a))\)) and \(\mu_{\alpha}=\mu_{\alpha}^{(1)}+\cdots+\mu_{\alpha}^{(n)}\), and \((\cdot,\cdot)\) is the Euclidean form on \(\mathfrak{a}^{*}\) obtained by dualising the restriction of the Killing form of \(\mathfrak{g}\) to \(\mathfrak{a}\). The differences \(D_{k}=H_{2}^{(k)}-H_{2}^{(k-1)}\) for \(1<k\leq n\) are classical analogs of topological Knizhnik-Zamolodchikov-Bernard differential operators, \[D_{k}=(\mu_{0}^{(k)},p)-\sum_{l=1}^{k-1}r_{lk}+\sum_{l=k+1}^{n}r_{kl} \tag{3}\] where \(r_{kl}\) for \(k\neq l\) is a classical version of Felder's dynamical \(r\)-matrix [13], \[r_{kl}=-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})+\sum_{\alpha}\frac{\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}-1} \tag{4}\] and \(\sum_{\alpha}\) stands for the sum over all the roots \(\alpha\in R\). This explicit form of \(D_{k}\) is derived in section 1.3. The superintegrability of this system is described in section 1.5. The projection method for constructing solutions of the equations of motion and the angle variables are described in section 1.6. One can choose \(G\) to be the maximal compact real form of the complexification \(G_{\mathbb{C}}\). In this case the integrable system is similar, but hyperbolic functions get replaced by trigonometric ones. The structure of the phase space is again a stratified symplectic space. The superintegrability of the quantum counterpart of this compact case is proven in [32]. 
**5.** The phase space of an open Calogero-Moser spin chain is the regular part of a symplectic leaf of the Poisson manifold \(T^{*}(G^{\times n+1})/(K\times G^{\times n}\times K)\), where the action of the gauge group \(K\times G^{\times n}\times K\) is described in section 2.3, and \(K\subset G\) is as above. Such symplectic leaves are given by the Hamiltonian reduction. They are parametrized by collections of coadjoint orbits \(\mathcal{O}=\{\mathcal{O}_{\ell}^{K},\mathcal{O}_{1},\ldots,\mathcal{O}_{n}, \mathcal{O}_{r}^{K}\}\) where \(\mathcal{O}_{i}\subset\mathfrak{g}^{*}\) and \(\mathcal{O}_{\ell,r}^{K}\subset\mathfrak{k}^{*}\subset\mathfrak{g}^{*}\) are coadjoint orbits. We assume that none of the \(\mathcal{O}_{i}\) is trivial, i.e. \(\mathcal{O}_{i}\neq\{0\}\). We denote the corresponding symplectic leaf by \(\mathcal{S}(\mathcal{O})\). It is a stratified symplectic space. Using the Cartan decomposition \(G=KAK\) and a gauge fixing, we define the regular part \(\mathcal{S}(\mathcal{O})_{reg}\) of \(\mathcal{S}(\mathcal{O})\) as the stratum \[\mathcal{S}(\mathcal{O})_{reg}\simeq(T^{*}A_{reg}\times\mathcal{O}_{\ell}^{K}\times \mathcal{O}_{1}\times\cdots\times\mathcal{O}_{n}\times\mathcal{O}_{r}^{K})/N_ {K}(A),\] where on the right we have a natural product symplectic structure. Similarly to the periodic case, the quadratic Hamiltonians can be computed explicitly in terms of the Cartan components \(\mu_{0}^{(k)}\) and root coordinates \(\mu_{\alpha}^{(k)}\) of the vectors \(\mu^{(k)}\in\mathcal{O}_{k}\), the coordinates \(\mu_{[\alpha]}^{\prime},\mu_{[\alpha]}^{\prime\prime}\) on \(\mathcal{O}_{\ell}^{K}\) and \(\mathcal{O}_{r}^{K}\) respectively (in the basis elements \(e_{[\alpha]}=e_{-\alpha}-e_{\alpha}\in\mathfrak{k}\subset\mathfrak{g}\) for \(\alpha\in R_{+}\)), and \((p,a)\in T^{*}A_{reg}\). Assuming the gauge fixing \(\phi_{n}\) (see section 2.3) we have \[H_{2}^{(n)}=\frac{1}{2}(p,p)+\sum_{\alpha>0}\frac{(a_{\alpha}\mu_{[\alpha]}^{ \prime}+\mu_{[\alpha]}^{\prime\prime}+a_{\alpha}(\mu_{\alpha}-\mu_{-\alpha}))( a_{\alpha}^{-1}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime\prime}+a_{\alpha}^{-1}( \mu_{\alpha}-\mu_{-\alpha}))}{(a_{\alpha}-a_{-\alpha})^{2}}\] For the other quadratic Hamiltonians the differences \[D_{k}=H_{2}^{(k)}-H_{2}^{(k-1)}\qquad\qquad(1\leq k\leq n)\] are more interesting. They are classical analogs of the boundary Knizhnik-Zamolodchikov-Bernard differential operators [35][33]. We have the following formula for \(D_{k}\): \[D_{k}=(\mu_{0}^{(k)},p)-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_{l}})+(\sum_{ \alpha}K_{\alpha}\mu_{-\alpha}^{(k)}-\kappa_{k})+\sum_{l=k+1}^{n}(r_{kl}-r_{ kl}^{\theta_{k}}).\] Here \(r_{kl}\) for \(k\neq l\) is now the Felder dynamical \(r\)-matrix rescaled in \(a\in A_{reg}\), \[r_{kl}=-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})+\sum_{\alpha}\frac{\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}^{2}-1}, \tag{5}\] \(\theta_{k}\) is the transpose of the Cartan involution acting on \(\mu^{(k)}\), \[\kappa_{k}=\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(k)})+\sum_{\alpha}\frac{(\mu_{ \alpha}^{(k)})^{2}}{1-a_{\alpha}^{2}}\] and \[K_{\alpha}=\frac{a_{\alpha}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime \prime}}{a_{\alpha}-a_{\alpha}^{-1}}. \tag{6}\] The differences \(D_{k}=H_{2}^{(k)}-H_{2}^{(k-1)}\) are classical analogs of the boundary KZB operators derived in [35][33]. The superintegrability of open spin CM chains is proven in section 2.6. The projection method for solving the equations of motion and the angle coordinates are described in section 2.7. The structure of the paper is as follows. 
In section 1 we construct periodic spin CM chains by the Hamiltonian reduction and prove their superintegrability. In section 1.1 we describe the phase space of such a system. In sections 1.2, 1.3 we describe the regular part of the phase space. The Hamiltonians of a periodic spin CM chain, restricted to the regular part of the phase space, are described in section 1.4. The superintegrability of a periodic spin CM chain is proven in section 1.5. In section 1.6 solutions to the equations of motion are described algebraically by the projection method, and the angle variables are described. In section 2 we focus on open spin CM chains. In section 2.1 we describe the phase spaces. In sections 2.3, 2.4 we describe the regular part of the phase space. The Hamiltonians of an open spin CM chain, restricted to the regular part of the phase space, are described in section 2.5. The superintegrability of an open spin CM chain is proven in section 2.6. In section 2.7 solutions to the equations of motion are described algebraically by the projection method, and the angle variables are described. In the conclusion (section 3) we discuss some open problems and describe in detail the periodic spin CM chain for \(SL_{N}\) with orbits of rank 1. In Appendix A we compare our symplectic leaves with the ones from [31]. Throughout this paper we will focus on split real semisimple Lie groups. However, since all constructions are algebraic, they extend (with appropriate modifications) to the complex algebraic case. The non-split real case will be the subject of a separate publication (see [33] for the quantum case). Another important real case is when \(G\) is compact, which can be deduced from the complex algebraic case by restriction to a compact real form. The structure of phase spaces as stratified symplectic spaces will be explored further in [6]. ### Acknowledgments This paper was started as a joint project with Jasper Stokman. The author is grateful to Jasper for many discussions and for the collaboration on this paper. He would also like to thank Vladimir Fock, Eva Miranda and Hessel Postuma for important discussions and remarks, and Zhuo Chen, Kai Jiang and Husileng Xiao for discussions on stratified symplectic spaces. N.R. wants to thank ITS-ETH for the hospitality; the bulk of this work was completed there. The work of N.R. was supported by the NSF grant DMS-1902226, by the Dutch research council (NWO 613.009.126), and by the grant RFBR No. 18-01-00916. ## 1. Periodic spin Calogero-Moser chains ### The phase space as the Hamiltonian reduction Here we will describe the phase space of a periodic spin Calogero-Moser chain as a Hamiltonian reduction of \(T^{*}(G^{\times n})\). Let us start with the description of these symplectic spaces. Consider the manifold \(T^{*}(G^{\times n})\) with the standard symplectic structure. The cotangent bundle over a Lie group can be trivialized by right translations, which gives an isomorphism of vector bundles \[T^{*}(G^{\times n})\simeq(T^{*}G)^{\times n}\simeq\mathfrak{g}^{*\times n} \times G^{\times n}\] We will choose this trivialization throughout the paper. The Lie group \(G_{n}:=G^{\times n}\) acts naturally on itself by left and right translations. 
Lifting these actions to \(T^{*}(G^{\times n})\), after the trivialization of the cotangent bundle, we can write the action by left translations as: \[h_{L}(x,g)=(Ad^{*}_{h_{1}}(x_{1}),Ad^{*}_{h_{2}}(x_{2})\ldots,Ad^{*}_{h_{n}}( x_{n}),h_{1}g_{1},h_{2}g_{2},\ldots,h_{n}g_{n})\] and the action by right translations as \[h_{R}(x,g)=(x_{1},\ldots,x_{n},g_{1}h_{1}^{-1},\ldots,g_{n}h_{n}^{-1})\] Both these actions are Hamiltonian with moment maps \[\mu_{L}(x,g)=(x_{1},x_{2},\ldots,x_{n})\] and \[\mu_{R}(x,g)=(-Ad^{*}_{g_{1}}(x_{1}),\ldots,-Ad^{*}_{g_{n}}(x_{n}))\] respectively. Actions by left and right translations can be twisted by permutations. In particular, we can twist the action by left translations by a cyclic permutation. Combining the twisted left action with the non-twisted right action we obtain the "gauge action" of \(G_{n}\) on \(G^{\times n}\)6 Footnote 6: One can twist both left and right actions by a permutation. This leads to other superintegrable systems. \[h(g_{1},\ldots,g_{n})=(h_{1}g_{1}h_{2}^{-1},h_{2}g_{2}h_{3}^{-1},\ldots,h_{n}g _{n}h_{1}^{-1})\] Lifting the twisted conjugation action of \(G_{n}\) on \(G^{\times n}\) to \(T^{*}(G^{\times n})\) we obtain the "gauge action" on the cotangent bundle: \[h(x,g)=(Ad^{*}_{h_{1}}(x_{1}),Ad^{*}_{h_{2}}(x_{2}),\ldots,Ad^{*}_{h_{n}}(x_{n }),h_{1}g_{1}h_{2}^{-1},h_{2}g_{2}h_{3}^{-1},\ldots,h_{n}g_{n}h_{1}^{-1}) \tag{7}\] Because this is the diagonal action for two Hamiltonian actions, the gauge action is also Hamiltonian with the moment map \(\mu:T^{*}(G^{\times n})\rightarrow\mathfrak{g}^{*\times n}\): \[\mu(x,g)=\mu_{L}(x,g)+\mu_{R}^{tw}(x,g)=(x_{1}-Ad^{*}_{g_{n}}(x_{n}),x_{2}-Ad^ {*}_{g_{1}^{-1}}(x_{1}),\ldots,x_{n}-Ad^{*}_{g_{n-1}}(x_{n-1})) \tag{8}\] where \(\mu_{R}^{tw}\) is the right moment map, twisted by cyclic permutation. Because the gauge action (7) of \(G_{n}\) is Hamiltonian, the quotient space \(T^{*}(G^{\times n})/G_{n}\) is a Poisson space.7 Symplectic leaves of \(T^{*}(G^{\times n})/G_{n}\) are given by the Hamiltonian reduction with respect to the moment map (8). Let \(\mathcal{O}_{1},\ldots,\mathcal{O}_{n}\) be coadjoint orbits in \(\mathfrak{g}^{*}\), then the corresponding symplectic leaf in \(T^{*}(G^{\times n})/G_{n}\) is \[\mathcal{S}(\mathcal{O})=\mu^{-1}(\mathcal{O}_{1}\times\cdots\times\mathcal{O} _{n})/G_{n}=\{(x,g)\in\mathfrak{g}^{*\times n}\times G^{\times n}|x_{i}-Ad_{g _{i-1}}^{*}\,(x_{i-1})\in\mathcal{O}_{i}\}/G_{n} \tag{9}\] where \(G_{n}\) acts by the gauge transformations (7) and the indices \(i\) should be taken modulo \(n\). On each of these symplectic leaves we will construct a superintegrable system which we will call a _periodic spin Calogero-Moser chain_. ### The gauge fixing Let us fix \(i\in 1,\ldots,n\) and \(g=(g_{1},\ldots,g_{n})\in G^{\times n}\). Let \(h\in G_{n}\) such that \[h_{j}=\begin{cases}h_{i}g_{i-1}^{-1}\cdots g_{j+1}^{-1}g_{j}^{-1}&\text{for } \ 1\leq j<i,\\ h_{i}g_{i-1}^{-1}\cdots g_{2}^{-1}g_{1}^{-1}g_{n}^{-1}\cdots g_{j+1}^{-1}g_{j }^{-1}&\text{for }\ i<j\leq n.\end{cases}\] Denote such element of \(G_{n}\) by \(h_{g}\) (we suppress the dependence on \(i\)). It is easy to check that the gauge transformation of \(g=(g_{1},\ldots,g_{n})\) by the element \(h_{g}\) brings it to \((1,\ldots,1,h_{i}(g_{i}g_{i+1}\cdots g_{n}g_{1}g_{2},\ldots g_{i-1})h_{i}^{-1}, 1,\ldots,1)\), with the \(i^{\text{th}}\)-entry being the nontrivial entry. This identifies the \(G_{n}\) gauge orbit through \(g=(g_{1},\ldots,g_{n})\) with the \(G\)-conjugation orbit through \(g_{1}\cdots g_{n}\). 
It thus gives an (\(i\)-independent) isomorphism \[G^{\times n}/G_{n}\stackrel{{\sim}}{{\longrightarrow}}G/G,\] where \(G/G\) denotes the set of conjugacy classes in \(G\). On the cotangent bundles the gauge fixing with gives the isomorphism \(\varphi_{i}\colon\big{(}\mathfrak{g}^{*\times n}\times G^{\times n}\big{)}/G _{n}\stackrel{{\sim}}{{\longrightarrow}}\big{(}\mathfrak{g}^{* \times n}\times G\big{)}/G\) mapping the \(G_{n}\)-orbit \(G_{n}(x,g)\) through \((x,g)\in\mathfrak{g}^{*\times n}\times G^{\times n}\) to the \(G\)-orbit through \[\big{(}Ad_{g_{i}^{-1}\cdots g_{1}^{-1}}^{*}(x_{1}),\ldots,Ad_{g_{i}^{-1}}^{*} (x_{i}),Ad_{g_{i}^{-1}\cdots g_{1}^{-1}g_{n}^{-1}\cdots g_{i+1}^{-1}}^{*}(x_{i +1}),\ldots,Ad_{g_{i}^{-1}\cdots g_{1}^{-1}g_{n}^{-1}}^{*}(x_{n}),g_{i+1}\cdots g _{n}g_{1}\cdots g_{i}\big{)}\] (for \(i=n\) this should be read as \(\big{(}Ad_{g_{n}^{-1}\cdots g_{1}^{-1}}^{*}(x_{1}),\ldots,Ad_{g_{n}^{-1}}^{*} (x_{n}),g_{1}\cdots g_{n}\big{)}\)). Here \(G\) is acting diagonally on \(\mathfrak{g}^{*\times n}\times G\) via the coadjoint action on \(\mathfrak{g}^{*}\) and the conjugation action on \(G\). From now on we will work with the isomorphism \(\varphi_{n}\). ### The regular part of the phase space The image of the symplectic leaf \(\mathcal{S}(\mathcal{O})\) under the isomorphism \(\varphi_{n}\) is \[\mathcal{S}(\mathcal{O})=\{(z_{1},\ldots,z_{n},g)\in\mathfrak{g}^{*\times n} \times G\mid z_{1}-Ad_{g^{-1}}^{*}z_{n}\in\mathcal{O}_{1},\ z_{i}-z_{i-1}\in \mathcal{O}_{i},\ i=2,\ldots,n\}/G.\] Define the _regular_ part \(\mathcal{S}(\mathcal{O})_{reg}\subset\mathcal{S}(\mathcal{O})\) of the phase space as \(\mathcal{S}(\mathcal{O})\cap(\mathfrak{g}^{*\times n}\times G^{\prime})/G\). On \(\mathcal{S}(\mathcal{O})_{reg}\) we can choose a representative where \(g\) is in the regular part \(A_{reg}\) of the real split torus \(A\) in \(G\): \(g=bab^{-1},z_{i}=\text{Ad}_{p}^{*}x^{(i)}\) with \(a\in A_{reg}\). Then we have \[\mathcal{S}(\mathcal{O})_{reg}=\{(x^{(1)},\ldots,x^{(n)},a)\in \mathfrak{g}^{*\times n}\times A_{reg}\mid x^{(1)}-Ad_{a^{-1}}^{*}x^{(n)}\in \mathcal{O}_{1},\] \[x^{(i)}-x^{(i-1)}\in\mathcal{O}_{i},\ i=2,\ldots,n\}/N_{G}(A).\] Identify \(\mathfrak{g}^{*}\simeq\mathfrak{g}\) and \(\mathfrak{a}^{*}\simeq\mathfrak{a}\) via the Killing form of \(\mathfrak{g}\). The element \(y\in\mathfrak{g}^{*}\) then corresponds to \(y_{0}+\sum_{\alpha}y_{\alpha}e_{\alpha}\), where \(y_{0}\) is the element in \(\mathfrak{a}\) corresponding to \(y|_{\mathfrak{a}}\) and \(y_{\alpha}=y(e_{-\alpha})\). Let \(\mu^{(j)}\in\mathcal{O}_{j}\) be vectors \(\mu^{(1)}=x^{(1)}-Ad_{a^{-1}}^{*}x^{(n)}\) and \(\mu^{(i)}=x^{(i)}-x^{(i-1)}\) for \(i=2,\ldots,n\). For coordinates \(x_{\alpha}^{(i)}\) and \(\mu_{\alpha}^{(i)}\) of vectors \(x^{(i)}\) and \(\mu^{(i)}\) we then have \[x_{\alpha}^{(1)}-a_{\alpha}^{-1}x_{\alpha}^{(n)}=\mu_{\alpha}^{(1)},\qquad\quad x _{\alpha}^{(i)}-x_{\alpha}^{(i-1)}=\mu_{\alpha}^{(i)},\qquad i=2,\ldots,n.\] For the Cartan components we have \[x_{0}^{(i)}-x_{0}^{(i-1)}=\mu_{0}^{(i)},\qquad i=1,\ldots,n,\] with the index \(i\) taken to be modulo \(n\). Solving these equations for \(x^{(i)}\) we have \[x_{\alpha}^{(i)} =\frac{a_{\alpha}(\mu_{\alpha}^{(1)}+\mu_{\alpha}^{(2)}+\cdots+\mu _{\alpha}^{(i)})+\mu_{\alpha}^{(i+1)}+\mu^{(i+2)}+\cdots+\mu_{\alpha}^{(n)}}{a _{\alpha}-1},\] \[x_{0}^{(i)} =x_{0}^{(1)}+\mu_{0}^{(2)}+\cdots+\mu_{0}^{(i)}=x_{0}^{(n)}-\mu_{ 0}^{(n)}-\cdots-\mu_{0}^{(i+1)} \tag{10}\] and we have the constraint \[\mu_{0}^{(1)}+\cdots+\mu_{0}^{(n)}=0. 
\tag{11}\] This gives an isomorphism \[\mathcal{S}(\mathcal{O})_{reg}\stackrel{{\sim}}{{ \longrightarrow}}\big{(}\nu_{\mathcal{O}}^{-1}(0)/H\times T^{*}A_{reg}\big{)}/W \tag{12}\] which preserves the natural symplectic structures, where \(\nu_{\mathcal{O}}:\mathcal{O}_{1}\times\cdots\times\mathcal{O}_{n}\to \mathfrak{a}^{*}\) is the moment map \((\mu^{(1)},\ldots,\mu^{(n)})\mapsto(\mu^{(1)}+\cdots+\mu^{(n)})|_{\mathfrak{a}}\) for the diagonal action of \(H\) on the product \(\mathcal{O}_{1}\times\cdots\times\mathcal{O}_{n}\) of coadjoint orbits, and \(W=N_{G}(A)/H\) acts diagonally on \(\nu_{\mathcal{O}}^{-1}(0)/H\times T^{*}A_{reg}\). The isomorphism (12) maps the \(N_{G}(A)\)-orbit through \(\big{(}x^{(1)},\ldots,x^{(n)},a\big{)}\) to the \(W\)-orbit through \(\big{(}H(x^{(1)}-Ad_{\alpha-1}^{*}x^{(n)},x^{(2)}-x^{(1)},\ldots,x^{(n)}-x^{(n- 1)}),x_{0}^{(n)},a\big{)}\), where we used the trivialisation \(T^{*}A_{reg}\simeq\mathfrak{a}\times A_{reg}\). The inverse maps the \(W\)-orbit through \(\big{(}H(\mu^{(1)},\ldots,\mu^{(n)}),p,a\big{)}\) to the \(N_{G}(A)\)-orbit through \((x^{(1)},\ldots,x^{(n)},a)\), with \[x^{(i)}=p-\mu_{0}^{(n)}-\cdots-\mu_{0}^{(i+1)}+\sum_{\alpha}\left(\frac{a_{ \alpha}(\mu_{\alpha}^{(1)}+\cdots+\mu_{\alpha}^{(i)})+\mu_{\alpha}^{(i+1)}+ \cdots+\mu_{\alpha}^{(n)}}{a_{\alpha}-1}\right)e_{\alpha}, \tag{13}\] where we use the identification \(\mathfrak{g}\simeq\mathfrak{g}^{*}\) via the Killing form. ### Hamiltonians of a periodic spin CM chain After the trivialization of the cotangent bundle by translations, we have a natural projection: \[T^{*}(G^{\times n})\simeq\mathfrak{g}^{*\times n}\times G^{\times n}\to \mathfrak{g}^{*\times n} \tag{14}\] which is simply the projection to the first factor. This projection depends on the trivialization. In this paper we alway assume that we use the trivialization by right translations. However, the corresponding projection of quotient spaces \[T^{*}(G^{\times n})/G_{n}\to(\mathfrak{g}^{*}/G)^{\times n} \tag{15}\] does not depend on the trivialization and in this sense is canonical. The projection (15) is Poisson8 with the trivial Poisson structure on \((\mathfrak{g}^{*}/G)^{\times n}\). Thus the \(G^{\times n}\)-invariant functions on \(\mathfrak{g}^{*\times n}\) give rise to a Poisson commutative subalgebra in the algebra of functions on \(T^{*}(G^{\times n})/G_{n}\). The restriction of these functions to the symplectic leaf \(\mathcal{S}(\mathcal{O})\) gives the algebra of Poisson commuting functions on it. This is the subalgebra of _Hamiltonians of the periodic spin Calogero-Moser chain_. Now let us describe the restriction of the Hamiltonians corresponding to quadratic Casimir functions \[H_{2}^{(k)}(x,g)=\frac{1}{2}\big{(}x^{(k)},x^{(k)}\big{)}=\frac{1}{2}\big{(}x_{0} ^{(k)},x_{0}^{(k)}\big{)}+\sum_{\alpha>0}x_{\alpha}^{(k)}x_{-\alpha}^{(k)}\qquad (1\leq k\leq n)\] to the regular part of \(\mathcal{S}(\mathcal{O})\), where \(x=(x^{(1)},\ldots,x^{(n)})\in\mathfrak{g}^{*\times n}\) and \(g\in G^{\times n}\). Consider the functions \[D_{k}=H_{2}^{(k)}-H_{2}^{(k-1)}\qquad\qquad(1<k\leq n)\] which we call _Knizhnik-Zamolodchikov-Bernard (KZB) Hamiltonians9_. Footnote 9: The proper name would be _constant Knizhnik-Zamolodchikov-Bernard Hamiltonians_ emphasizing the fact that they are related to finite dimensional simple Lie algebras, not to the affine Kac-Moody algebras. See for example references [13][8][9][34]. 
**Theorem 1**.: _The restriction of the KZB Hamiltonians to \(\mathcal{S}(\mathcal{O})_{\text{reg}}\) can be written as_ \[D_{k}=(\mu_{0}^{(k)},p)-\sum_{l=1}^{k-1}r_{lk}+\sum_{l=k+1}^{n}r_{kl} \tag{16}\] _where \(r_{kl}\) for \(k\neq l\) is the classical version of the Felder's dynamical r-matrix [13]:_ \[r_{kl}=-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})+\sum_{\alpha}\frac{\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}-1}. \tag{17}\] **Remark 1**.: _Note that (17) can also be written as_ \[r_{kl}=-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})+\sum_{\alpha>0}\frac{\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}-1}-\sum_{\alpha>0}\frac{a_{\alpha} \mu_{\alpha}^{(k)}\mu_{-\alpha}^{(l)}}{a_{\alpha}-1}.\] Proof.: We need to show that formula (16) gives the expression of \(D_{k}\) in terms of the coordinates on \(\mathcal{S}(\mathcal{O})_{\text{reg}}\) of \(\big{(}\nu_{\mathcal{O}}^{-1}(0)/H\times T^{*}A_{\text{reg}}\big{)}/W\), obtained from the isomorphism (12). In particular, let \(\big{(}H(\mu^{(1)},\ldots,\mu^{(n)}),p,a\big{)}\in\nu_{-\mathcal{O}}^{-1}(0)/ H\times\mathfrak{a}^{*}\times A_{\text{reg}}\) and let \(\big{(}(x^{(1)},\ldots,x^{(n)}),(1,\ldots,1,a)\big{)}\) be the corresponding point in \(\mathfrak{g}^{*\times n}\times G^{\times n}\), with \(x^{(i)}\) given by (13). Taking into account the relation \(x^{(k)}-x^{(k-1)}=\mu^{(k)}\) between the \(x^{(i)}\) and the \(\mu^{(j)}\) we have \[D_{k} =\big{(}\mu^{(k)},x^{(k-1)}\big{)}+\frac{1}{2}\big{(}\mu^{(k)}, \mu^{(k)}\big{)}\] \[=\big{(}\mu_{0}^{(k)},x_{0}^{(k-1)}+\frac{1}{2}\mu_{0}^{(k)} \big{)}+\sum_{\alpha}x_{\alpha}^{(k-1)}\mu_{-\alpha}^{(k)}+\sum_{\alpha>0}\mu _{\alpha}^{(k)}\mu_{-\alpha}^{(k)}.\] Substitute here the expression (10) for \(x_{\alpha}^{(k-1)}\) in terms of the \(\mu_{\alpha}^{(j)}\): \[D_{k}=\big{(}\mu_{0}^{(k)},x_{0}^{(k-1)}+\frac{1}{2}\mu_{0}^{(k)}\big{)}+\sum_ {l=1}^{k-1}\sum_{\alpha}\frac{a_{\alpha}\mu_{\alpha}^{(l)}\mu_{-\alpha}^{(k)}} {a_{\alpha}-1}+\sum_{l=k+1}^{n}\sum_{\alpha}\frac{\mu_{\alpha}^{(l)}\mu_{- \alpha}^{(k)}}{a_{\alpha}-1}.\] From here using the identities \[\sum_{\alpha}\frac{a_{\alpha}\mu_{\alpha}^{(l)}\mu_{-\alpha}^{(k)}}{a_{\alpha} -1}=-r_{lk}-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})\] and \[\sum_{\alpha}\frac{\mu_{\alpha}^{(l)}\mu_{-\alpha}^{(k)}}{a_{\alpha}-1}=r_{ kl}+\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})\] we conclude \[D_{k}=\Big{(}\mu_{0}^{(k)},x_{0}^{(k-1)}+\frac{1}{2}\sum_{l=k}^{n}\mu_{0}^{(l)}- \frac{1}{2}\sum_{l=1}^{k-1}\mu_{0}^{(l)}\Big{)}-\sum_{l=1}^{k-1}r_{lk}+\sum_{l=k +1}^{n}r_{kl}.\] Using \(x_{0}^{(k-1)}=p-\mu_{0}^{(n)}-\cdots-\mu_{0}^{(k)}\) (see (13)) and the constraint (11) we obtain (16). A particularly simple expression has the quadratic Hamiltonian \(H_{2}^{(n)}\) on \(\mathcal{S}(\mathcal{O})_{reg}\), \[H_{2}^{(n)}=\frac{1}{2}(p,p)+\sum_{\alpha}\frac{\mu_{\alpha}\mu_{-\alpha}}{(1- a_{\alpha})(1-a_{\alpha}^{-1})}.\] Here \(\mu_{\alpha}=\mu_{\alpha}^{(1)}+\cdots+\mu_{\alpha}^{(n)}\). Setting \(q_{\alpha}=\alpha(\log(a))\) this formula becomes a familiar formula for the spin Calogero-Moser Hamiltonian, \[H_{2}^{(n)}=\frac{1}{2}(p,p)-\sum_{\alpha>0}\frac{\mu_{\alpha}\mu_{-\alpha}}{ 2sh^{2}(q_{\alpha})}. \tag{18}\] Note that the periodic spin CM chain is the classical version of the dynamical Knizhnik-Zamolodchikov equation from [8][9]. ### Periodic spin Calogero-Moser chain as a superintegrable system Now let us establish the superintegrability of the periodic spin CM chain. For this we should construct an intermediate Poisson manifold and projections as in [26][30]. 
Observe that we have natural Poisson projections: \[T^{*}(G^{\times n})/G_{n}\overset{p_{1}}{\rightarrow}\mathcal{P}_{n}\overset{ p_{2}}{\rightarrow}\mathcal{B}_{n} \tag{19}\] Firstly, \(\mathcal{P}_{n}=(\mathfrak{g}^{*\times n}\times_{(\mathfrak{g}^{*}/G)^{ \times n}}\mathfrak{g}^{*\times n})/G_{n}\) with \[\mathfrak{g}^{*\times n}\times_{(\mathfrak{g}^{*}/G)^{\times n}}\mathfrak{g}^{ *\times n}:=\{(x,y)\in\mathfrak{g}^{*\times n}\times\mathfrak{g}^{*\times n}|Gy_{i}=-Gx_{i-1 }\},\] where \(Gz\) is the coadjoint orbit through \(z\in\mathfrak{g}^{*}\) and the indices \(i\) are taken modulo \(n\), and \(G_{n}\) is acting by \[g(x,y):=(Ad_{g_{1}}^{*}(x_{1}),\ldots,Ad_{g_{n}}^{*}(x_{n}),Ad_{g_{1}}^{*}(y_{ 1}),\ldots,Ad_{g_{n}}^{*}(x_{n})). \tag{20}\] The map \(p_{1}\) is the map induced from the \(G_{n}\)-equivariant map \(\mu_{L}\times\mu_{R}^{tw}\). Explicitly, the mapping \(p_{1}\) acts as \[\begin{split} p_{1}:G_{n}(x,g)&\mapsto G_{n}(\mu_{L} (x,g),\mu_{R}^{tw}(x,g))\\ &=G_{n}(x_{1},x_{2},\ldots,x_{n},-Ad_{g_{n}}^{*}(x_{n}),-Ad_{g_{1 }^{-1}}(x_{1}),\ldots,-Ad_{g_{n-1}^{-1}}(x_{n-1})).\end{split} \tag{21}\] Secondly, \[\mathcal{B}_{n}=(\mathfrak{g}^{*}/G)^{\times n}\] and the map \(p_{2}\) is the projection to the first factor. Restricting projection \(p_{1}\) to the symplectic leaf \(\mathcal{S}(\mathcal{O})\) (see (9)), we obtain the surjective Poisson projection \[p_{1,\mathcal{O}}:\mathcal{S}(\mathcal{O})\rightarrow\mathcal{P}(\mathcal{O})\] where \[\mathcal{P}(\mathcal{O})=\{(x,y)\in\mathfrak{g}^{*\times n}\times_{(\mathfrak{ g}^{*}/G)^{\times n}}\mathfrak{g}^{*\times n}\mid x_{i}+y_{i}\in\mathcal{O}_{i}\}/G_{n} \subset\mathcal{P}_{n}\] with the \(G_{n}\)-action described by (20). Restricting the second projection \(p_{2}\) to \(\mathcal{P}(\mathcal{O})\) we have the Poisson projection \[p_{2,\mathcal{O}}:\mathcal{P}(\mathcal{O})\to\mathcal{B}(\mathcal{O})\subset \mathcal{B}_{n},\qquad G_{n}(x,y)\mapsto(Gx_{1},\ldots,Gx_{n})\] where \(\mathcal{B}(\mathcal{O})\) is the image of \(p_{2,\mathcal{O}}\). It can be explicitly described as \[\mathcal{B}(\mathcal{O})=\{(\mathcal{O}^{(1)},\ldots,\mathcal{O}^{(n)})\in( \mathfrak{g}^{*}/G)^{\times n}\mid\mathcal{O}_{i}\subseteq\mathcal{O}^{(i)}- \mathcal{O}^{(i-1)}\},\] with the indices \(i\) taken modulo \(n\). **Lemma 1**.: _The dimension of \(\mathcal{B}(\mathcal{O})\) is nr where \(r\) is the rank of the Lie algebra \(\mathfrak{g}\)._ Proof.: Let \(\mathfrak{h}_{\geq 0}^{*}\) be the positive Weyl chamber in the dual space to Cartan subalgebra of \(\mathfrak{g}\). For each generic orbit \(\mathcal{O}\) there is a unique representative \(y\in\mathcal{O}\cup\mathfrak{h}^{*}{>0}\). Let \(x_{1}\) be such representative of \(\mathcal{O}^{(1)}\). Let us describe orbits \(\mathcal{O}^{(2)}\) such that \(x_{1}+y_{1}\in\mathcal{O}^{(2)}\) for some \(y_{1}\in\mathcal{O}_{1}\). Assume that \(\mathcal{O}^{(1)}\) is very large, i.e. \(||x_{1}||>>1\). Because \(\mathcal{O}_{1}\) is compact \(||y_{1}||<C_{1}\) for some constant \(C_{1}\) determined by the orbit \(\mathcal{O}_{1}\). Let \(c_{k}^{(i)}\) be the value of \(k\)-th Casimir function on the orbit \(\mathcal{O}^{(i)}\). 
For \(k\)-th Casimir function \(c_{k}^{(2)}\) we have: \[c_{k}^{(2)}=c_{k}(x_{1}+y_{1})=c_{k}^{(1)}+\sum_{i=1}^{r}\frac{\partial c_{k}( h)}{\partial h_{i}}|_{h=x_{1}}(y_{1})_{i}+O(y^{2})\] Because the matrix \(\frac{\partial c_{k}(h)}{\partial h_{i}}\) is nondegenerate for generic \(h\), possible values of the Euclidean vector with components \(c_{k}^{(2)}\) span an \(r\)-dimensional neighborhood of \(\{c_{k}^{(1)}\}\). Repeating this argument for each \(\mathcal{O}^{(i)}\) we conclude that each of \(\mathcal{O}_{i}\) is non-zero, \(\dim(\mathcal{B}(\mathcal{O}))=nr\). Now let us describe the fiber \(\mathcal{P}(\mathcal{O};\mathcal{O}^{(1)},\ldots,\mathcal{O}^{(n)})\) of \(p_{2,\mathcal{O}}\) over \((\mathcal{O}^{(1)},\ldots,\mathcal{O}^{(n)})\in\mathcal{B}(\mathcal{O})\): \[\mathcal{P}(\mathcal{O};\mathcal{O}^{(1)},\ldots,\mathcal{O}^{(n )}) =\{(x,y)\in\mathfrak{g}^{*\times n}\times\mathfrak{g}^{*\times n}|x_{i}+y _{i}\in\mathcal{O}_{i},\;\;x_{i}\in\mathcal{O}^{(i)},y_{i}\in-\mathcal{O}^{(i- 1)}\}/G_{n}\] \[=\prod_{i=1}^{n}\{(x_{i},y_{i})\in\mathcal{O}^{(i)}\times- \mathcal{O}^{(i-1)}\mid x_{i}+y_{i}\in\mathcal{O}_{i}\}/G\] with the index \(i\) taken modulo \(n\) and with \(G\) acting by the diagonal coadjoint action on \(\mathcal{O}^{(i)}\times-\mathcal{O}^{(i-1)}\). Set \[\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})=\{(x,y,z) \in\mathcal{O}^{(1)}\times\mathcal{O}^{(2)}\times\mathcal{O}^{(3)}\mid x+y+z=0 \}/G, \tag{22}\] with \(G\) acting by the diagonal coadjoint action. Then \[\big{\{}(x_{i},y_{i})\in\mathcal{O}^{(i)}\times-\mathcal{O}^{(i-1 )}\mid x_{i}+y_{i}\in\mathcal{O}_{i}\big{\}}/G\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{(i-1)}, \mathcal{O}_{i})\] \[\qquad\qquad G(x_{i},y_{i})\mapsto G(-x_{i},-y_{i},x_{i}+y_{i}), \tag{23}\] and hence we conclude that \[\mathcal{P}(\mathcal{O};\mathcal{O}^{(1)},\ldots,\mathcal{O}^{(n)})\simeq\prod _{i=1}^{n}\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{(i-1)},\mathcal{O}_{i}), \tag{24}\] with the index \(i\) taken modulo \(n\). **Lemma 2**.: _Let \(\mathcal{O}^{(1)},\mathcal{O}^{(2)}\) be generic, sufficiently large coadjoint orbits and \(\mathcal{O}^{(3)}\neq 0\), then_ \[\dim(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)}))=\dim (\mathcal{O}^{(3)})-2r.\] Proof.: Let \(x\in\mathcal{O}^{(1)}\) be the unique representative which lies in the positive Weyl chamber. Assume this orbit is "big", i.e. \(||x||>>1\). The condition \(x+y+z=0\) for \(y\in\mathcal{O}^{(2)}\) and \(z\in\mathcal{O}^{(3)}\) for a large orbit \(\mathcal{O}^{(2)}\) means that we have \(r\) constraints \(c_{k}(-x-y)=c_{k}^{(3)}\) on \(y\). For large orbits \(\mathcal{O}^{(1)}\) and \(\mathcal{O}^{(2)}\) these constraints are independent. Taking into account that we are quotiening by \(H\) we have \(\dim(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})=\dim( \mathcal{O}^{(3)}-2r.\) **Corollary 1**.: _Thus the dimension of the fiber is \(\dim(\mathcal{P}(\mathcal{O};\mathcal{O}^{(1)},\dots,\mathcal{O}^{(n)}))= \sum_{i=1}^{n}\dim(\mathcal{O}_{i})-2nr.\)_ Each of the factors in (24) is the Hamiltonian reduction of the product of the three coadjoint orbits relative to the moment map of the diagonal coadjoint \(G\)-action, and therefore carries a natural symplectic structure. Moduli spaces \(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})\) and therefore fibers of \(p_{2,\mathcal{O}}\) are stratified symplectic spaces. 
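For orientation, a minimal illustration of Lemma 2: for \(\mathfrak{g}=\mathfrak{sl}_{2}(\mathbb{R})\) we have \(r=1\) and generic coadjoint orbits are \(2\)-dimensional, so
\[\dim(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)}))=\dim(\mathcal{O}^{(3)})-2r=2-2=0,\]
i.e. for generic large orbits \(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})\) is a finite set of points; this is consistent with the argument above, since fixing the value of the quadratic Casimir on \(-x-y\) cuts the two-dimensional orbit \(\mathcal{O}^{(2)}\) down to a curve on which the one-dimensional stabilizer of \(x\) acts.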
**Theorem 2**.: _The Hamiltonian system generated by any Hamiltonian for the periodic spin \(CM\) chain described in section 1.4 is superintegrable with the superintegrable structure described by the Poisson maps_ \[\mathcal{S}(\mathcal{O})\stackrel{{ p_{1,\mathcal{O}}}}{{ \longrightarrow}}\mathcal{P}(\mathcal{O})\stackrel{{ p_{2, \mathcal{O}}}}{{\longrightarrow}}\mathcal{B}(\mathcal{O})\] _as introduced earlier in this section._ Here, as everywhere above, we assume that \(\mathcal{O}_{i}\neq\{0\}\) for each \(i=1,\dots,n\). Proof.: For \(G_{n}(x,y)\in\mathcal{P}(\mathcal{O})\) let \(\widetilde{g_{i}}\in G\) such that \(y_{i+1}=-Ad_{\widetilde{g_{i}}^{-1}}^{*}(x_{i})\). Then \[p_{1,\mathcal{O}}^{-1}\big{(}G_{n}(x,y)\big{)}=\{G_{n}(x,g)\in\mathcal{S}( \mathcal{O})\mid g_{i}\in\widetilde{g}_{i}Z_{G}(y_{i+1})\}\] (index \(i\) taken modulo \(n\)) which, generically, is isotropic and of dimension \(nr=\dim(\mathcal{B}(\mathcal{O}))\). It remains to check the balance of dimensions. It follows from (12) that \[\dim(\mathcal{S}(\mathcal{O}))=\sum_{i=1}^{n}\dim(\mathcal{O}_{i}).\] By Remark 2 we have, for generic \((\mathcal{O}^{(1)},\dots,\mathcal{O}^{(n)})\in(\mathfrak{g}^{*}/G)^{\times n}\), \[\dim(\mathcal{P}(\mathcal{O})) =\dim(\mathcal{B}(\mathcal{O}))+\dim(\mathcal{P}(\mathcal{O}; \mathcal{O}^{(1)},\dots,\mathcal{O}^{(n)}))\] \[=\dim(\mathcal{B}(\mathcal{O}))+\sum_{i=1}^{n}\dim(\mathcal{O}_{ i})-2nr.\] Then \[\dim(\mathcal{P}(\mathcal{O}))+\dim(\mathcal{B}(\mathcal{O}))=\sum_{i=1}^{n} \dim(\mathcal{O}_{i})=\dim(\mathcal{S}(\mathcal{O})),\] as desired. **Remark 2**.: _In the compact case, the quantum version of functions on \(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})\) is the algebra of endomorphisms \(End((V_{\lambda_{1}}\otimes V_{\lambda_{2}}\otimes V_{\lambda_{3}})^{G})\) of the subspace of \(G\)-invariant vectors in the tensor product \(V_{\lambda_{1}}\otimes V_{\lambda_{2}}\otimes V_{\lambda_{3}}\) with \(V_{\lambda_{i}}\) the representation corresponding to \(\mathcal{O}^{(i)}\)._ _The quantum version of the algebra of functions on the fiber \(\mathcal{P}(\mathcal{O};\mathcal{O}^{(1)},\dots,\mathcal{O}^{(n)})\) is the algebra of endomorphisms of the vector space_ \[\operatorname{Hom}_{G}(V_{\lambda_{1}},V_{\lambda_{n}}\otimes V_{1})\otimes \operatorname{Hom}_{G}(V_{\lambda_{2}},V_{\lambda_{1}}\otimes V_{2})\otimes \dots\otimes\operatorname{Hom}_{G}(V_{\lambda_{n}},V_{\lambda_{n-1}}\otimes V _{n}).\] _Here the orbits \(\mathcal{O}_{i}\) correspond to \(V_{i}\), and \(\operatorname{Hom}_{G}(V_{\lambda_{i}},V_{\lambda_{i-1}}\otimes V_{i})\) is the space of \(G\)-linear intertwiners \(V_{\lambda_{i}}\to V_{\lambda_{i-1}}\otimes V_{i}\). For details see [2]._ ### Constructing solutions by the projection method and angle variables For \(\mathcal{H}\) a \(G\)-invariant function on \(\mathfrak{g}^{*}\), write \(\mathcal{H}^{(i)}\) for the \(G_{n}\)-invariant function on \(T^{*}(G^{\times n})\) defined by \(\mathcal{H}^{(i)}(x,g):=\mathcal{H}(x_{i})\). The Hamiltonian flow through \((x,g)\in\mathfrak{g}^{*\times n}\times G^{\times n}\) generated by \(\mathcal{H}^{(i)}\) is \[(x(t_{i}),g(t_{i}))=(x_{1},\ldots,x_{n},g_{1},\ldots,g_{i-1},e^{\nabla\mathcal{ H}(x_{i})t_{i}}g_{i},g_{i+1}\ldots,g_{n}) \tag{25}\] where \(\nabla\mathcal{H}(x)\in\mathfrak{g}\) is the gradient of \(\mathcal{H}\) at \(x\in\mathfrak{g}^{*}\), i.e., \[y(\nabla\mathcal{H}(x))=\frac{d}{dt}\mathcal{H}(x+ty)|_{t=0}\] for all \(y\in\mathfrak{g}^{*}\). 
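As an illustration of this dimension balance (anticipating the rank one example of section 3), take \(G=\mathrm{SL}_{N}(\mathbb{R})\), so \(r=N-1\), and let every \(\mathcal{O}_{i}\) be a rank one coadjoint orbit of dimension \(2(N-1)\). Then
\[\dim(\mathcal{S}(\mathcal{O}))=2n(N-1),\qquad\dim(\mathcal{B}(\mathcal{O}))=n(N-1),\qquad\dim(\mathcal{P}(\mathcal{O}))=n(N-1),\]
so \(\dim(\mathcal{P}(\mathcal{O}))+\dim(\mathcal{B}(\mathcal{O}))=\dim(\mathcal{S}(\mathcal{O}))\), and the generic fibers of \(p_{1,\mathcal{O}}\) have dimension \(n(N-1)\), half the dimension of the phase space.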
The projection of such flow line to \(T^{*}(G^{\times n})/G^{\times n}\) and further restricted to \(\mathcal{S}(\mathcal{O})\subset T^{*}(G^{\times n})/G_{n}\) is a flow line of the Hamiltonian vector field generated by the restriction of \(\mathcal{H}^{(i)}\) to \(\mathcal{S}(\mathcal{O})\). Now let us construct angle variables, i.e., functions on \(S(\mathcal{O})\) which evolve linearly with respect to the evolution (25) for each \(i=1,\ldots,n\). Write \(\mathfrak{a}^{*}_{+}\subset\mathfrak{g}^{*}\) for the elements \(x\in\mathfrak{g}^{*}\) which vanish on root spaces and satisfy \((x,\alpha)>0\) for \(\alpha\in R_{+}\), where \((\cdot,\cdot)\) is the bilinear form on \(\mathfrak{g}^{*}\) induced by the Killing form. Write \(\mathfrak{g}^{\prime*}\) for the elements in \(\mathfrak{g}^{*}\) which are \(G\)-conjugate to an element in \(\mathfrak{a}^{*}_{+}\), relative to the coadjoint action. For \((x,g)\in\mathfrak{g}^{\prime*\times n}\times G^{\times n}\) define \(s_{i}\in G\) by the property \(Ad^{*}_{s_{i}}(x_{i})\in\mathfrak{a}^{*}_{+}\). These elements are defined only up to \(s_{i}\mapsto a_{i}s_{i}\) where \(a_{i}\in H\). Gauge transformations \(h\in G_{n}\) act by \((s_{1},\ldots,s_{n})\mapsto(s_{1}h_{1}^{-1},\ldots,s_{n}h_{n}^{-1})\). Let \(G_{\mathbb{C}}\) be a complexification of \(G\), which we take to be connected. Let \(H_{\mathbb{C}}\subset G_{\mathbb{C}}\) be the Cartan subgroup \(Z_{G_{\mathbb{C}}}(\mathfrak{h})\), where \(\mathfrak{h}\) is the Cartan subalgebra \(\mathfrak{a}\oplus i\mathfrak{a}\) of the Lie algebra \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\oplus i\mathfrak{g}\) of \(G_{\mathbb{C}}\). Then \(A\subseteq H\subset H_{\mathbb{C}}\). We identify \(\mathfrak{g}^{*}\) with the real subspace of \(\mathfrak{g}^{*}_{\mathbb{C}}\) consisting of the complex linear functionals that take real values on \(\mathfrak{g}\). For finite dimensional \(G_{\mathbb{C}}\)-representations \(V_{1},\ldots,V_{n}\) choose vector \(v_{i}\in V_{i}\) of \(H_{\mathbb{C}}\)-weight \(\lambda_{i+1}\) and linear functionals \(u^{*}_{i}\in V^{*}_{i}\) of \(H_{\mathbb{C}}\)-weight \(-\lambda_{i}\) (indices \(i\) taken modulo \(n\)).10 Define Footnote 10: Recall that \(G\) acts on dual vectors as \((gu^{*})(v)=u^{*}(g^{-1}v)\). \[f_{u,v}(x,g)=u^{*}_{1}(s_{1}g_{1}s_{2}^{-1}v_{1})u^{*}_{2}(s_{2}g_{2}s_{3}^{-1} v_{2})\ldots u^{*}_{n}(s_{n}g_{n}s_{1}^{-1}v_{n}) \tag{26}\] for \((x,g)\in(\mathfrak{g}^{\prime*}\times G)^{\times n}\). This expression is well defined (i.e., invariant with respect to transformations \(s_{i}\to a_{i}s_{i}\) with \(a_{i}\in H\)), and invariant with respect to gauge transformations. Thus, it defines a function on the subset \((\mathfrak{g}^{\prime*\times n}\times G^{\times n})/G_{n}\) of \(T^{*}(G^{\times n})/G_{n}\). From the \(G\)-invariance of \(\mathcal{H}\) we have the identity \[u^{*}_{i}(s_{i}e^{t^{i}\nabla\mathcal{H}(x_{i})}g_{i}s_{i+1}^{-1}v_{i})=e^{t ^{i}\lambda_{i}(\nabla\mathcal{H}(y_{i}))}u^{*}_{i}(s_{i}g_{i}s_{i+1}^{-1}v_{i})\] where \(y_{i}=Ad^{*}_{s_{i}}(x_{i})\in\mathfrak{a}^{*}_{+}\), and consequently \[f_{u,v}(x(t_{i}),g(t_{i}))=e^{t^{i}\lambda_{i}(\nabla\mathcal{H}(y_{i}))}f_{u, v}(x,g). \tag{27}\] Logarithms of these functions evolve linearly, and hence they produce angle variables for the Hamiltonians \(\mathcal{H}^{(i)}\) on \(S(\mathcal{O})\cap(\mathfrak{g}^{\prime*\times n}\times G^{\times n})/G_{n}\). ## 2. 
Open spin Calogero-Moser chains Recall from the introduction that \(G\) is a split real connected Lie group with finite center which admits a complexification, and \(K\subset G\) is the subgroup of fixed points of a fixed Cartan involution \(\Theta\) of \(G\). Recall furthermore the root space decomposition \(\mathfrak{g}=\mathfrak{a}\oplus\bigoplus_{\alpha>0}(\mathbb{R}e_{\alpha} \oplus\mathbb{R}e_{-\alpha})\) with the Cartan subalgebra \(\mathfrak{a}\subset\mathfrak{g}\) and the root vectors \(e_{\alpha}\in\mathfrak{g}_{\alpha}\) such that the infinitesimal Cartan involution \(\theta\) acts as \[\theta(h)=-h,\ \ \theta(e_{\alpha})=-e_{-\alpha}\] for \(h\in\mathfrak{a}\) and \(\alpha\in R\). We will furthermore normalise the root vectors in such a way that \((e_{\alpha},e_{-\alpha})=1\), with \((\cdot,\cdot)\) the Killing form of \(\mathfrak{g}\). To avoid cumbersome notations, we will not always indicate in notations that we are in the open case. This leads to an overlap of some of the notations with the ones for the periodic case. For instance, the moment maps, Poisson spaces and Poisson projections will be denoted in the same way. ### The phase space as the Hamiltonian reduction Consider for \(n\geq 0\) the manifold \(T^{*}(G^{\times n+1})\) with the standard symplectic structure. We trivialize the cotangent bundle \(T^{*}(G^{\times n+1})\) by right translations: \[T^{*}(G^{\times n+1})\simeq(T^{*}G)^{\times n+1}\simeq\mathfrak{g}^{*\times n +1}\times G^{\times n+1} \tag{28}\] We have a natural action of \(K\times G^{\times n}\) on \(G^{\times n+1}\) by left translations: \[(k_{\ell},h_{1},\dots,h_{n})_{L}(g_{0},g_{1},\dots,g_{n})=(k_{\ell}g_{0},h_{1 }g_{1},\dots,h_{n}g_{n})\] This action lifts to the following Hamiltonian action on \(T^{*}G^{\times n+1}\), \[(k_{\ell},h_{1},\dots,h_{n})_{L}(x_{0},\dots,x_{n},g_{0},g_{1}, \dots,g_{n})=\] \[=(Ad^{*}_{k_{\ell}}(x_{0}),Ad^{*}_{h_{1}}(x_{1}),\dots,Ad^{*}_{h_ {n}}(x_{n}),k_{\ell}g_{0},h_{1}g_{1},\dots,h_{n}g_{n})\] with the moment map \[\mu_{L}(x,g)=(\pi(x_{0}),x_{1},\dots,x_{n})\] where the projection \(\pi:\mathfrak{g}^{*}\to\mathfrak{k}^{*}\) is the dual map dual to the embedding \(\mathfrak{k}\hookrightarrow\mathfrak{g}\). Similarly, the action of \(G^{\times n}\times K\) on \(G^{\times n+1}\) by right translations \[(h_{1},\dots,h_{n},k_{r})_{R}(g_{0},g_{1},\dots,g_{n})=(g_{0}h_{1}^{-1},g_{1} h_{2}^{-1},\dots,g_{n-1}h_{n}^{-1},g_{n}k_{r}^{-1})\] lifts to the following Hamiltonian action on \(T^{*}G^{\times n+1}\), \[(h_{1},\dots,h_{n},k_{r})_{R}(x_{0},\dots,x_{n},g_{0},g_{1},\dots,g_{n})\] \[=(x_{0},x_{1},\dots,x_{n},g_{0}h_{1}^{-1},g_{1}h_{2}^{-1},\dots,g _{n-1}h_{n}^{-1},g_{n}k_{r}^{-1}),\] with the moment map \[\mu_{R}(x,g)=(-Ad^{*}_{g_{0}^{-1}}(x_{0}),-Ad^{*}_{g_{1}^{-1}}(x_{1}),\dots, -Ad^{*}_{g_{n-1}^{-1}}(x_{n-1}),-\pi(Ad^{*}_{g_{n}^{-1}}(x_{n}))).\] As a result, the group \(G_{n,K}:=K\times G^{\times n}\times K\) acts on \(T^{*}(G^{\times n+1})\) as \[(k_{\ell},h_{1},\dots, h_{n},k_{r})(x_{0},\dots,x_{n},g_{0},\dots,g_{n})=\] \[= (Ad^{*}_{k_{\ell}}(x_{0}),Ad^{*}_{h_{1}}(x_{1}),\dots,Ad^{*}_{h_ {n}}(x_{n}),k_{\ell}g_{0}h_{1}^{-1},h_{1}g_{1}h_{2}^{-1},\dots,h_{n}g_{n}k_{r }^{-1}) \tag{29}\] with \(k_{\ell},k_{r}\in K\) and \(h_{i}\in G\). 
This action is Hamiltonian with the moment map \(\mu:T^{*}(G^{\times n+1})\to\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*}\) given by
\[\mu(x_{0},\dots,x_{n},g_{0},\dots,g_{n})=(\mu_{L}(x,g),0)+(0,\mu_{R}(x,g))=\]
\[=(\pi(x_{0}),x_{1}-Ad^{*}_{g_{0}^{-1}}(x_{0}),x_{2}-Ad^{*}_{g_{1}^{-1}}(x_{1}),\dots,x_{n}-Ad^{*}_{g_{n-1}^{-1}}(x_{n-1}),-\pi(Ad^{*}_{g_{n}^{-1}}(x_{n}))). \tag{30}\]
For \(n=0\) this is the \(K\times K\) action \((k_{\ell},k_{r})(x,g)=(Ad^{*}_{k_{\ell}}(x),k_{\ell}gk_{r}^{-1})\) on \(T^{*}G\), with the moment map \((x,g)\mapsto(\pi(x),-\pi(Ad^{*}_{g^{-1}}(x)))\). It is easy to check explicitly that this moment map intertwines the action of \(G_{n,K}\) on \(T^{*}(G^{\times n+1})\) given by (29) with its diagonal coadjoint action on \(\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*}\). Because the action of \(G_{n,K}\) on \(T^{*}(G^{\times n+1})\) is Hamiltonian, the space \(T^{*}(G^{\times n+1})/G_{n,K}\)11 is Poisson with symplectic leaves given by the Hamiltonian reduction with respect to the moment map (30). Let \(\mathcal{O}=(\mathcal{O}_{\ell}^{K},\mathcal{O}_{1},\ldots,\mathcal{O}_{n},\mathcal{O}_{r}^{K})\) with \(\mathcal{O}_{i}\subset\mathfrak{g}^{*}\) coadjoint \(G\)-orbits and \(\mathcal{O}_{\ell}^{K},\mathcal{O}_{r}^{K}\subset\mathfrak{k}^{*}\) coadjoint \(K\)-orbits; then the corresponding symplectic leaf in \(T^{*}(G^{\times n+1})/G_{n,K}\) is

Footnote 11: Recall that here and everywhere else in this paper \(X/H\) means the GIT quotient for a Lie group \(H\) action on a manifold \(X\).

\[\begin{split}\mathcal{S}(\mathcal{O})&=\mu^{-1}(\mathcal{O})/G_{n,K}\\ &=\big\{(x,g)\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\mid\pi(x_{0})\in\mathcal{O}_{\ell}^{K},\ -\pi\big(Ad^{*}_{g_{n}^{-1}}(x_{n})\big)\in\mathcal{O}_{r}^{K},\\ &\qquad\quad x_{1}-Ad^{*}_{g_{0}^{-1}}(x_{0})\in\mathcal{O}_{1},\ldots,x_{n}-Ad^{*}_{g_{n-1}^{-1}}(x_{n-1})\in\mathcal{O}_{n}\big\}/G_{n,K}.\end{split} \tag{31}\]
Each symplectic leaf \(\mathcal{S}(\mathcal{O})\) is a stratified symplectic space and is the phase space for the corresponding open spin Calogero-Moser chain. We will describe the largest stratum of \(\mathcal{S}(\mathcal{O})\) later.

### The Hamiltonians of the open spin Calogero-Moser chain

After the trivialization (28) of \(T^{*}(G^{\times n+1})\) by right translations we have a natural Poisson projection \(T^{*}(G^{\times n+1})\to\mathfrak{g}^{*\times n+1}\) to the first factor. It is \(G_{n,K}\)-equivariant with the following action of \(G_{n,K}\) on \(\mathfrak{g}^{*\times n+1}\)
\[(k_{\ell},h_{1},\ldots,h_{n},k_{r}):(x_{0},x_{1},\ldots,x_{n})\mapsto(Ad^{*}_{k_{\ell}}(x_{0}),Ad^{*}_{h_{1}}(x_{1}),\ldots,Ad^{*}_{h_{n}}(x_{n})).\]
This gives rise to the projection
\[p:T^{*}(G^{\times n+1})/G_{n,K}\to(\mathfrak{g}^{*}/G)^{\times n+1}\]
which is Poisson because it is the composition of natural Poisson projections
\[T^{*}(G^{\times n+1})/G_{n,K}\to(\mathfrak{g}^{*\times n+1})/G_{n,K}=\mathfrak{g}^{*}/K\times(\mathfrak{g}^{*}/G)^{\times n}\to(\mathfrak{g}^{*}/G)^{\times n+1}. \tag{32}\]
Here the Poisson structure on the right is trivial (the Poisson tensor is zero). The last projection is a consequence of the embedding \(K\hookrightarrow G\).
Restricting this projection to the symplectic leaf \(\mathcal{S}(\mathcal{O})\) we have the Poisson projection \[p_{\mathcal{O}}:\mathcal{S}(\mathcal{O})\to\mathcal{B}(\mathcal{O}),\quad G_ {n,K}(x_{0},\ldots,x_{n},g_{0},\ldots,g_{n})\mapsto(Gx_{0},\ldots,Gx_{n}) \tag{33}\] where \(\mathcal{B}(\mathcal{O})\subset(\mathfrak{g}^{*}/G)^{\times n+1}\) is, by definition, the image of \(\mathcal{S}(\mathcal{O})\). It can be described explicitly from the description of \(\mathcal{S}(\mathcal{O})\) as \[\begin{split}\mathcal{B}(\mathcal{O})=\big{\{}(\mathcal{O}^{(0)}, \ldots,\mathcal{O}^{(n)})\in(\mathfrak{g}^{*}/G)^{\times n+1}\mid\mathcal{O}_{ \ell}^{K}\subseteq\pi(\mathcal{O}^{(0)}),\mathcal{O}_{r}^{K}\subseteq-\pi( \mathcal{O}^{(n)}),\\ \mathcal{O}_{1}\subseteq\mathcal{O}^{(1)}-\mathcal{O}^{(0)}, \ldots,\mathcal{O}_{n}\subseteq\mathcal{O}^{(n)}-\mathcal{O}^{(n-1)}\big{\}}. \end{split} \tag{34}\] Hamiltonians of the open spin Calogero-Moser system are pull back \(p^{*}\) of functions on \((\mathfrak{g}^{*}/G)^{\times n+1}\) restricted to \(\mathcal{S}(\mathcal{O})\). The subalgebra of Hamiltonians is a Poisson commuting subalgebra. Quadratic Hamiltonians are given by Casimir functions. We will compute the radial components of the quadratic Casimirs explicitly in section 2.5. Hamiltonians are constant on fibers of the projection \(p_{\mathcal{O}}\). ### The gauge fixing Fix \(i\in\{0,\dots,n\}\). For \(g=(g_{0},\dots,g_{n})\in G^{\times n+1}\) and \(k_{\ell},k_{r}\in K\) define \(h\in G^{\times n}\) (depending on \(i,g,k_{\ell},k_{r}\)) by \[h_{j}=\begin{cases}k_{\ell}g_{0}g_{1}\cdots g_{j-1}&\text{if }\ j\leq i,\\ k_{r}g_{n}^{-1}g_{n-1}^{-1}\cdots g_{j}^{-1}&\text{if }\ j>i.\end{cases}\] Then \((k_{\ell},h_{1},\dots,h_{n},k_{r})\) acts on \(g\in G^{\times n+1}\) as \[g\mapsto(1,\dots,1,k_{\ell}g_{0}g_{1}\dots g_{n}k_{r}^{-1},1,\dots,1),\] Here the nontrivial entry is at the position \(i\). This gives an \(i\)-independent isomorphism \[G^{\times n+1}/G_{n,K}\to K\backslash G/K,\qquad G_{n,K}(g_{0},\dots,g_{n}) \mapsto Kg_{0}g_{1}\dots g_{n}K. \tag{35}\] For the cotangent bundles the gauge fixing gives an isomorphism \[\phi_{i}:\bigl{(}\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\bigr{)}/G_{ n,K}\stackrel{{\sim}}{{\to}}K\backslash\bigl{(}\mathfrak{g}^{* \times n+1}\times G\bigr{)}/K \tag{36}\] mapping the \(G_{n,K}\)-orbit through \((x,g)\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\) to the double \(K\)-coset through \[\bigl{(}x_{0},Ad_{g_{0}}^{*}(x_{1}),\dots,Ad_{g_{0}\cdots g_{i-1}}^{*}(x_{i}),Ad_{g_{n}^{-1}\cdots g_{i+1}^{-1}}^{*}(x_{i+1}),\dots,Ad_{g_{n}^{-1}}^{*}(x_ {n}),g_{0}g_{1}\cdots g_{n}\bigr{)}\] For example for \(i=n\) this expression is \(\bigl{(}x_{0},Ad_{g_{0}}^{*}(x_{1}),\dots,Ad_{g_{0}\cdots g_{n-1}}^{*}(x_{n}),g_{0}g_{1}\cdots g_{n}\bigr{)}\). In (36) the double \(K\)-cosets in the codomain of \(\phi_{i}\) are taken relative to the \(i\)-dependent \(K\times K\)-action \[(k_{\ell},k_{r})(x_{0},\dots,x_{n},g)=(Ad_{k_{\ell}}^{*}(x_{0}),\cdots,Ad_{k_{ \ell}}^{*}(x_{i}),Ad_{k_{r}}^{*}(x_{i+1}),\dots,Ad_{k_{r}}^{*}(x_{n}),k_{\ell }gk_{r}^{-1})\] on \(\mathfrak{g}^{*\times n+1}\times G\). 
Now we can describe the symplectic leaf \(\mathcal{S}(\mathcal{O})\) as a subvariety in \(K\backslash(\mathfrak{g}^{*\times n+1}\times G)/K\) through the isomorphism \(\varphi_{n}\) as
\[\begin{split}\mathcal{S}(\mathcal{O})=K\backslash\bigl\{(y_{0},y_{1},\dots,y_{n},g)\in\mathfrak{g}^{*\times n+1}\times G\mid&\pi(y_{0})\in\mathcal{O}_{\ell}^{K},-\pi\bigl(Ad_{g^{-1}}^{*}(y_{n})\bigr)\in\mathcal{O}_{r}^{K},\\ & y_{1}-y_{0}\in\mathcal{O}_{1},\dots,y_{n}-y_{n-1}\in\mathcal{O}_{n}\bigr\}/K.\end{split} \tag{37}\]
Note that, as in the periodic case, \(\mathcal{S}(\mathcal{O})\) is a symplectic stratified space. From now on we will focus mostly on the largest stratum \(\mathcal{S}(\mathcal{O})_{reg}\).

### The regular part of the symplectic leaf \(\mathcal{S}(\mathcal{O})\)

We use the gauge fixing isomorphism \(\varphi_{n}\) in the remainder of the text. We now use \(K\backslash G/K\simeq A/W\) with \(W=N_{K}(A)/M\) the Weyl group of \(G\) (see subsection §3 of the introduction) to describe the regular part of the symplectic leaf \(\mathcal{S}(\mathcal{O})\) in radial coordinates. Define the regular part of the phase space \(\mathcal{S}(\mathcal{O})\) (see (37)) as
\[\mathcal{S}(\mathcal{O})_{reg}=\mathcal{S}(\mathcal{O})\cap K\backslash(\mathfrak{g}^{*\times n+1}\times G_{reg})/K.\]
The regular part \(\mathcal{S}(\mathcal{O})_{reg}\subset\mathcal{S}(\mathcal{O})\) is the largest stratum of the stratified symplectic space \(\mathcal{S}(\mathcal{O})\). We can then choose a representative of \(K(y_{0},\dots,y_{n},g)K\in\mathcal{S}(\mathcal{O})_{reg}\) with the \(G\)-component in \(A_{reg}\) by writing \(g=k_{\ell}ak_{r}^{-1}\) and \(y_{i}=Ad_{k_{\ell}}^{*}(x^{(i)})\) with \(k_{\ell},k_{r}\in K\) and \(a\in A_{reg}\). It follows that
\[\begin{split}\mathcal{S}(\mathcal{O})_{reg}\simeq\bigl\{(x^{(0)},\dots,x^{(n)},a)\in\mathfrak{g}^{*\times n+1}\times& A_{reg}\mid\pi(x^{(0)})\in\mathcal{O}_{\ell}^{K},-\pi(Ad_{a^{-1}}^{*}(x^{(n)}))\in\mathcal{O}_{r}^{K},\\ & x^{(1)}-x^{(0)}\in\mathcal{O}_{1},\dots,x^{(n)}-x^{(n-1)}\in\mathcal{O}_{n}\bigr\}/N_{K}(A).\end{split}\]
Here \(N_{K}(A)\) acts diagonally on \(\mathfrak{g}^{*\times n+1}\times A_{reg}\) via the coadjoint action on \(\mathfrak{g}^{*}\) and the conjugation action on \(A_{reg}\).12 We can now also divide out the action of \(M=Z_{K}(A)\), to obtain the isomorphism

Footnote 12: We use here the fact that \(k_{\ell}ak_{r}^{-1}=k_{\ell}^{\prime}a^{\prime}k_{r}^{\prime-1}\) for \(k_{\ell},k_{\ell}^{\prime},k_{r},k_{r}^{\prime}\in K\) and \(a,a^{\prime}\in A_{reg}\) implies that \(k_{\ell}^{-1}k_{\ell}^{\prime}=k_{r}^{-1}k_{r}^{\prime}\in N_{K}(A)\), cf., e.g., [20, §VII.3]. This essentially follows from the global Cartan decomposition of \(G\).

\[\mathcal{S}(\mathcal{O})_{reg}\simeq\big\{(M(x^{(0)},\ldots,x^{(n)}),a)\in\mathfrak{g}^{*\times n+1}/M\times A_{reg}\mid\pi(x^{(0)})\in\mathcal{O}_{\ell}^{K},-\pi(Ad_{a^{-1}}^{*}(x^{(n)}))\in\mathcal{O}_{r}^{K},\]
\[x^{(1)}-x^{(0)}\in\mathcal{O}_{1},\ldots,x^{(n)}-x^{(n-1)}\in\mathcal{O}_{n}\big\}/W,\]
where \(M\) acts by the diagonal coadjoint action on \(\mathfrak{g}^{*\times n+1}\) and \(W\) acts diagonally on the space \(\mathfrak{g}^{*\times n+1}/M\times A_{reg}\) in the natural way.
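For orientation, in the extreme case \(n=0\) (no bulk orbits) this description reduces to
\[\mathcal{S}(\mathcal{O}_{\ell}^{K},\mathcal{O}_{r}^{K})_{reg}\simeq\big\{(x^{(0)},a)\in\mathfrak{g}^{*}\times A_{reg}\mid\pi(x^{(0)})\in\mathcal{O}_{\ell}^{K},\ -\pi\big(Ad_{a^{-1}}^{*}(x^{(0)})\big)\in\mathcal{O}_{r}^{K}\big\}/N_{K}(A),\]
so all the spin degrees of freedom sit in the two boundary \(K\)-orbits.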
Recall that we identified \(\mathfrak{g}\simeq\mathfrak{g}^{*}\) and \(\mathfrak{a}\simeq\mathfrak{a}^{*}\) via the Killing form, so that \(x\in\mathfrak{g}^{*}\) corresponds to \(x_{0}+\sum_{\alpha}x_{\alpha}e_{\alpha}\) with \(x_{0}\) the element in \(\mathfrak{a}\) corresponding to \(x|_{\mathfrak{a}}\in\mathfrak{a}^{*}\) and \(x_{\alpha}=x(e_{-\alpha})\). Denote by \(x_{0}^{(k)},x_{\alpha}^{(k)}\) the components of vectors \(x\in\mathfrak{g}^{*\times n+1}\) from the \(k\)-th factor of \(\mathfrak{g}^{*n+1}\), and \(\mu_{0}^{(k)},\mu_{\alpha}^{(k)}\) the components of \(\mu\in\mathcal{O}_{k}\). For \(y\in\mathfrak{k}^{*}\) we write \(y_{[\alpha]}=y(e_{-\alpha}-e_{\alpha})\), so that \(y_{[\alpha]}=-y_{[-\alpha]}\). Consider \[T(\mathcal{O})_{reg}=\big{\{}(x^{(0)},\ldots,x^{(n)},a)\in \mathfrak{g}^{*\times n+1}\times A_{reg}\mid\pi(x^{(0)})\in\mathcal{O}_{\ell}^ {K},-\pi(Ad_{a^{-1}}^{*}(x^{(n)}))\in\mathcal{O}_{r}^{K},\] \[x^{(1)}-x^{(0)}\in\mathcal{O}_{1},\ldots,x^{(n)}-x^{(n-1)}\in \mathcal{O}_{n}\big{\}}.\] Clearly \(S(\mathcal{O})_{reg}=T(\mathcal{O})_{reg}/N_{K}(A)\). For \((x^{(0)},\ldots,x^{(n)},a)\in T(\mathcal{O})_{reg}\) write \(\mu^{\prime}=\pi(x^{(0)})\in\mathcal{O}_{\ell}^{K}\), \(\mu^{\prime\prime}=-\pi(Ad_{a^{-1}}^{*}(x^{(n)}))\in\mathcal{O}_{r}^{K}\) and \(\mu^{(i)}=x^{(i)}-x^{(i-1)}\in\mathcal{O}_{i}\) for \(i=1,\ldots,n\). The Cartan components of \(x^{(k)}\) and their root coordinates then satisfy \[x_{\alpha}^{(0)}-x_{-\alpha}^{(0)}=\mu_{[\alpha]}^{\prime}, \qquad a_{\alpha}x_{-\alpha}^{(n)}-a_{\alpha}^{-1}x_{\alpha}^{(n)}= \mu_{[\alpha]}^{\prime\prime},\] \[x_{\alpha}^{(i)}-x_{\alpha}^{(i-1)}=\mu_{\alpha}^{(i)}, \qquad x_{0}^{(i)}-x_{0}^{(i-1)}=\mu_{0}^{(i)} \tag{38}\] for \(i=1,\ldots,n\). It is easy to solve the equations for Cartan parts \(x_{0}^{(i)}\) (\(0<i<n\)) in terms of Catran components of \(x^{(0)},x^{(n)}\) and \(\mu^{(j)}\), \[x_{0}^{(i)}=\mu_{0}^{(i)}+\cdots+\mu_{0}^{(1)}+x_{0}^{(0)}=x_{0}^{(n)}-\mu_{0}^ {(n)}-\cdots-\mu_{0}^{(i+1)} \tag{39}\] **Proposition 1**.: _The following identities hold for \(\alpha\in R\) and \(k=0,1,\ldots,n\):_ \[x_{\alpha}^{(k)}=K_{\alpha}+\sum_{l=1}^{k}\frac{a_{\alpha}\mu_{\alpha}^{(l)}-a _{\alpha}\mu_{-\alpha}^{(l)}}{a_{\alpha}-a_{\alpha}^{-1}}+\sum_{l=k+1}^{n} \frac{a_{\alpha}^{-1}\mu_{\alpha}^{(l)}-a_{\alpha}\mu_{-\alpha}^{(l)}}{a_{ \alpha}-a_{\alpha}^{-1}} \tag{40}\] _where_ \[K_{\alpha}=\frac{a_{\alpha}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime \prime}}{a_{\alpha}-a_{\alpha}^{-1}}. \tag{41}\] Proof.: Denote \[\mu=\mu^{(1)}+\cdots+\mu^{(n)}. \tag{42}\] Note that \(x^{(n)}-x^{(0)}=\mu\). Fix \(\beta\in R_{+}\). 
For \(x_{\pm\beta}^{(1)}\) and \(x_{\pm\beta}^{(n)}\) the formula \(x^{(n)}-x^{(0)}=\mu\) implies \[x_{\beta}^{(n)}-x_{\beta}^{(0)}=\mu_{\beta},\ \ x_{-\beta}^{(n)}-x_{-\beta}^{(0)}=\mu_{- \beta}.\] Combined with the first line of (38) we end up with four linear equations in \(x_{\beta}^{(0)},x_{-\beta}^{(0)},x_{\beta}^{(n)},x_{-\beta}^{(n)}\) which, by the assumption that \(a\) is regular, are uniquely solved by \[\begin{split} x_{\alpha}^{(0)}&=\frac{a_{\alpha}\mu_ {[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime\prime}+(a_{\alpha}^{-1}\mu_{\alpha }-a_{\alpha}\mu_{-\alpha})}{a_{\alpha}-a_{\alpha}^{-1}},\\ x_{\alpha}^{(n)}&=\frac{a_{\alpha}\mu_{[\alpha]}^{ \prime}+\mu_{[\alpha]}^{\prime\prime}+(a_{\alpha}\mu_{\alpha}-a_{\alpha}\mu_{ -\alpha})}{a_{\alpha}-a_{\alpha}^{-1}}\end{split} \tag{43}\] for \(\alpha=\beta,-\beta\) (here we used that \(a_{-\beta}=a_{\beta}^{-1}\), \(\mu_{[-\beta]}^{\prime}=-\mu_{[\beta]}^{\prime}\) and \(\mu_{[-\beta]}^{\prime\prime}=-\mu_{[\beta]}^{\prime\prime}\)). By the second line of (38) we then obtain \[x_{\alpha}^{(k)}=\frac{a_{\alpha}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{ \prime\prime}+(a_{\alpha}^{-1}\mu_{\alpha}-a_{\alpha}\mu_{-\alpha})+(a_{ \alpha}-a_{\alpha}^{-1})(\mu_{\alpha}^{(1)}+\cdots+\mu_{\alpha}^{(k)})}{a_{ \alpha}-a_{\alpha}^{-1}}\] for \(k=0,1,\ldots,n\). Substituting (42) it is now easy to see that this is exactly what we wanted to prove. The proposition and (39) give an isomorphism \[S(\mathcal{O})_{reg}\simeq\big{(}(\mathcal{O}_{\ell}^{K}\times\mathcal{O}_{1 }\times\cdots\times\mathcal{O}_{n}\times\mathcal{O}_{r}^{K})/M\times T^{*}A_{ reg}\big{)}/W, \tag{44}\] mapping \(N_{K}(A)(x^{(0)},\ldots,x^{(n)},a)\) to the \(W\)-orbit of \((M(\mu^{\prime},\mu^{(1)},\ldots,\mu^{(n)},\mu^{\prime\prime}),x_{0}^{(n)},a)\), which preserves the natural symplectic structures. Here the finite discrete group \(M=Z_{K}(A)\subset K\) acts diagonally via the coadjoint action, and \(W=N_{K}(A)/M\) acts diagonally. The quantum version of this isomorphism is described in [35]. ### Quadratic Hamiltonians of open spin Calogero-Moser chain on the regular part of the phase space In this section we compute the restriction of the Hamiltonian corresponding to the quadratic Casimir function on \(\mathfrak{g}^{*}\), \[H_{2}^{(k)}(x,g)=\frac{1}{2}(x^{(k)},x^{(k)})=\frac{1}{2}(x_{0}^{(k)},x_{0}^{( k)})+\sum_{\alpha>0}x_{\alpha}^{(k)}x_{-\alpha}^{(k)}\] to the regular part of \(\mathcal{S}(\mathcal{O})\) (see (31)) for \(k=0,\ldots,n\), where \((x,g)\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\). Here \((\cdot,\cdot)\) is the Killing form and \(x_{\alpha}^{(i)},x_{0}^{(i)}\) are the components of \(x^{(i)}\) which were computed in the previous section on the regular part of the phase space. We first consider the differences, which we will call the _boundary Knizhnik-Zamolodchikov-Bernard (bKZB) Hamiltonians_, \[D_{k}=H_{2}^{(k)}-H_{2}^{(k-1)}\qquad\qquad(1\leq k\leq n).\] **Theorem 3**.: _For the bKZB Hamiltonians we have the following formula:_ \[D_{k}=(\mu_{0}^{(k)},x_{0}^{(n)})-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_{l}}) +(\sum_{\alpha}K_{\alpha}\mu_{-\alpha}^{(k)}-\kappa_{k})+\sum_{l=k+1}^{n}(r_{ kl}-r_{kl}^{\theta_{k}}). 
\tag{45}\] _Here \(r_{kl}\) for \(k\neq l\) is Felder's rescaled dynamical \(r\)-matrix_ \[r_{kl}=-\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})+\sum_{\alpha}\frac{\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}^{2}-1}, \tag{46}\] \(\theta_{k}\) is the transpose of the Chevalley involution \(\theta\) acting on \(\mu^{(k)}\),_ \[\kappa_{k}=\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(k)})+\sum_{\alpha}\frac{(\mu_{ \alpha}^{(k)})^{2}}{1-a_{\alpha}^{2}}\] _is the core quadratic classical dynamical \(k\)-matrix and \(K_{\alpha}\) is given by (41)._ Proof.: The first step of the proof is the same as in the proof of Theorem 1, resulting in the expression \[D_{k}=(\mu_{0}^{(k)},x_{0}^{(k-1)}+\frac{1}{2}\mu_{0}^{(k)})+\sum_{\alpha}x_{ \alpha}^{(k-1)}\mu_{-\alpha}^{(k)}+\sum_{\alpha>0}\mu_{\alpha}^{(k)}\mu_{- \alpha}^{(k)}. \tag{47}\] Now let us the formula (40) for \(x_{\alpha}^{(k-1)}\), \[\begin{split}\sum_{\alpha}x_{\alpha}^{(k-1)}\mu_{-\alpha}^{(k)}& =\sum_{l=1}^{k-1}\sum_{\alpha}\frac{a_{\alpha}\mu_{\alpha}^{(l)} \mu_{-\alpha}^{(k)}-a_{\alpha}\mu_{-\alpha}^{(l)}\mu_{-\alpha}^{(k)}}{a_{ \alpha}-a_{\alpha}^{-1}}\\ &\quad+\sum_{\alpha}K_{\alpha}\mu_{-\alpha}^{(k)}+\sum_{\alpha} \frac{a_{\alpha}^{-1}\mu_{\alpha}^{(k)}\mu_{-\alpha}^{(k)}-a_{\alpha}\mu_{- \alpha}^{(k)}\mu_{-\alpha}^{(k)}}{a_{\alpha}-a_{\alpha}^{-1}}\\ &\quad+\sum_{l=k+1}^{n}\sum_{\alpha}\frac{a_{\alpha}^{-1}\mu_{- \alpha}^{(k)}\mu_{\alpha}^{(l)}-a_{\alpha}\mu_{-\alpha}^{(k)}\mu_{-\alpha}^{( l)}}{a_{\alpha}-a_{\alpha}^{-1}}.\end{split} \tag{48}\] We express the different terms in the right hand side of (48) in terms of the dynamical \(r\)-matrix and \(k\)-matrix. Note first that \[r_{kl}^{\theta_{k}}=\frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(l)})-\sum_{\alpha} \frac{\mu_{\alpha}^{(k)}\mu_{\alpha}^{(l)}}{a_{\alpha}^{2}-1}=r_{lk}^{\theta_ {l}}.\] Then the terms in the right hand side of (48) with \(l\) strictly smaller than \(k\) can be rewritten as \[\sum_{\alpha}\frac{a_{\alpha}\mu_{\alpha}^{(l)}\mu_{-\alpha}^{(k)}-a_{\alpha} \mu_{-\alpha}^{(l)}\mu_{-\alpha}^{(k)}}{a_{\alpha}-a_{\alpha}^{-1}}=-(r_{lk}+ r_{lk}^{\theta_{l}})\] while the terms in the right hand side of (48) with \(l\) strictly larger than \(k\) reduce to \[\sum_{\alpha}\frac{a_{\alpha}^{-1}\mu_{-\alpha}^{(k)}\mu_{\alpha}^{(l)}-a_{ \alpha}\mu_{-\alpha}^{(k)}\mu_{-\alpha}^{(l)}}{a_{\alpha}-a_{\alpha}^{-1}}=( \mu_{0}^{(k)},\mu_{0}^{(l)})+(r_{kl}-r_{kl}^{\theta_{k}}).\] Finally, for the middle term in (48) a direct computation shows that \[\sum_{\alpha}\frac{a_{\alpha}^{-1}\mu_{\alpha}^{(k)}\mu_{-\alpha}^{(k)}-a_{ \alpha}\mu_{-\alpha}^{(k)}\mu_{-\alpha}^{(k)}}{a_{\alpha}-a_{\alpha}^{-1}}= \frac{1}{2}(\mu_{0}^{(k)},\mu_{0}^{(k)})-\sum_{\alpha>0}\mu_{\alpha}^{(k)}\mu _{-\alpha}^{(k)}-\kappa_{k}.\] Substitute these formulas in (48), then the resulting formula (47) for \(D_{k}\) becomes \[D_{k} =(\mu_{0}^{(k)},x_{0}^{(k-1)}+\mu_{0}^{(k)}+\mu_{0}^{(k+1)}+\cdots +\mu_{0}^{(n)})\] \[\quad-\sum_{l=1}^{k-1}(r_{lk}+r_{lk}^{\theta_{l}})+(\sum_{\alpha} K_{\alpha}\mu_{-\alpha}^{(k)}-\kappa_{k})+\sum_{l=k+1}^{n}(r_{kl}-r_{kl}^{\theta_{ k}}).\] By (39) this reduces to the formula (45). The quantum versions of the boundary KZB Hamiltonians in the present context were obtained in [35, SS6]. It was extended to the case of non-split real semisimple Lie groups \(G\) in [33]. 
For the Hamiltonian \(H_{2}^{(n)}\) we obtain by (43) the expression
\[H_{2}^{(n)}=\frac{1}{2}(p,p)+\sum_{\alpha>0}\frac{(a_{\alpha}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime\prime}+a_{\alpha}(\mu_{\alpha}-\mu_{-\alpha}))(a_{\alpha}^{-1}\mu_{[\alpha]}^{\prime}+\mu_{[\alpha]}^{\prime\prime}+a_{\alpha}^{-1}(\mu_{\alpha}-\mu_{-\alpha}))}{(a_{\alpha}-a_{\alpha}^{-1})^{2}}\]
on \(\mathcal{S}(\mathcal{O})_{\text{reg}}\), where \(\mu=\mu^{(1)}+\cdots+\mu^{(n)}\). Here we use the notation \(p=x_{0}^{(n)}\) for the cotangent vectors to \(A_{\text{reg}}\) in formula (44). Note that the potential term only depends on the restrictions \(\pi(\mu^{(i)})\) of \(\mu^{(i)}\in\mathfrak{g}^{*}\) to \(\mathfrak{k}\), since \(\mu_{\alpha}-\mu_{-\alpha}=-(\pi(\mu))_{[\alpha]}\). The radial component of the quantum quadratic Hamiltonian in the current open context was obtained in [35, §6].

### The superintegrability of the open spin CM chain

In this section we will prove that the Poisson commutative subalgebra of Hamiltonians constructed in section 2.2 defines a superintegrable system. Fix \(\mathcal{O}=(\mathcal{O}_{\ell}^{K},\mathcal{O}_{1},\ldots,\mathcal{O}_{n},\mathcal{O}_{r}^{K})\in\mathfrak{k}^{*}/K\times(\mathfrak{g}^{*}/G)^{\times n}\times\mathfrak{k}^{*}/K\). We will construct Poisson projections
\[\mathcal{S}(\mathcal{O})\stackrel{{ p_{1,\mathcal{O}}}}{{\longrightarrow}}\mathcal{P}(\mathcal{O})\stackrel{{ p_{2,\mathcal{O}}}}{{\longrightarrow}}\mathcal{B}(\mathcal{O}) \tag{49}\]
such that \(p_{\mathcal{O}}=p_{2,\mathcal{O}}\circ p_{1,\mathcal{O}}\) (see (33)), satisfying the desired properties. Let \((\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^{*}/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\) be the subset of \((\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n})\times(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\) consisting of elements \((z_{\ell},x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})\) satisfying
\[z_{\ell}\in-\pi(Gy_{1}),\quad x_{i}\in-Gy_{i+1}\ (1\leq i<n),\quad z_{r}\in-\pi(Gx_{n}).\]
The gauge group \(G_{n,K}\) acts on \(\big(\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n}\big)\times_{(\mathfrak{g}^{*}/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\) by
\[\begin{split}(k_{\ell},h_{1},\ldots,h_{n},k_{r})&(z_{\ell},x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})=\\ &=(Ad_{k_{\ell}}^{*}z_{\ell},Ad_{h_{1}}^{*}x_{1},\ldots,Ad_{h_{n}}^{*}x_{n},Ad_{h_{1}}^{*}y_{1},\ldots,Ad_{h_{n}}^{*}y_{n},Ad_{k_{r}}^{*}z_{r}).\end{split} \tag{50}\]
Consider the resulting Poisson space
\[\mathcal{P}=\big((\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^{*}/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\big)/G_{n,K}\]
and define the Poisson map
\[p_{1}:T^{*}(G^{\times n+1})/G_{n,K}\to\mathcal{P}\]
by
\[\begin{split} p_{1}\big(G_{n,K}(x,g)\big)&=G_{n,K}\big(\mu_{L}(x,g),\mu_{R}(x,g)\big)\\ &=G_{n,K}\big(\pi(x_{0}),x_{1},\ldots,x_{n},-Ad_{g_{0}^{-1}}^{*}(x_{0}),\ldots,-Ad_{g_{n-1}^{-1}}^{*}(x_{n-1}),-\pi(Ad_{g_{n}^{-1}}^{*}(x_{n}))\big).\end{split}\]
Here \((x,g)=(x_{0},\ldots,x_{n},g_{0},\ldots,g_{n})\in\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\simeq T^{*}(G^{\times n+1})\).
Define the Poisson projection
\[p_{2}:\mathcal{P}\to(\mathfrak{g}^{*}/G)^{\times n+1}\]
by
\[p_{2}\big(G_{n,K}(z_{\ell},x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})\big)=(-Gy_{1},Gx_{1},\ldots,Gx_{n}),\]
with the trivial Poisson structure on the target space. The restriction of the Poisson projection \(p_{1}\) to the symplectic leaf \(\mathcal{S}(\mathcal{O})\subset T^{*}(G^{\times n+1})/G_{n,K}\) (see (31)) gives the Poisson projection
\[p_{1,\mathcal{O}}:\mathcal{S}(\mathcal{O})\to\mathcal{P}(\mathcal{O})\]
where
\[\mathcal{P}(\mathcal{O})=(\mu_{L}\times\mu_{R})(\mu^{-1}(\mathcal{O}))/G_{n,K},\]
or, more explicitly,
\[\begin{split}\mathcal{P}(\mathcal{O})=\big\{(z_{\ell},x_{1},&\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})\in(\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n})\times_{(\mathfrak{g}^{*}/G)^{\times n+1}}(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\mid\\ z_{\ell}\in\mathcal{O}_{\ell}^{K},&\ x_{1}+y_{1}\in\mathcal{O}_{1},\ldots,x_{n}+y_{n}\in\mathcal{O}_{n},\,z_{r}\in\mathcal{O}_{r}^{K}\big\}/G_{n,K}.\end{split} \tag{51}\]
The generic fibers of this mapping are isotropic submanifolds in \(\mathcal{S}(\mathcal{O})\). The restriction of the Poisson projection \(p_{2}\) to \(\mathcal{P}(\mathcal{O})\) gives a surjective Poisson projection
\[p_{2,\mathcal{O}}:\mathcal{P}(\mathcal{O})\to\mathcal{B}(\mathcal{O}),\]
with \(\mathcal{B}(\mathcal{O})\) given by (34). Clearly, the composition \(p_{2,\mathcal{O}}\circ p_{1,\mathcal{O}}:\mathcal{S}(\mathcal{O})\to\mathcal{B}(\mathcal{O})\) is the projection \(p_{\mathcal{O}}\) as given by (33). Now let us describe the fibers of \(p_{2,\mathcal{O}}\), which are symplectic leaves of \(\mathcal{P}(\mathcal{O})\).

**Lemma 3**.: _We have the following symplectomorphism_
\[p_{2,\mathcal{O}}^{-1}((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)}))\stackrel{{\sim}}{{\longrightarrow}}\mathcal{M}(-\mathcal{O}^{(0)},\mathcal{O}_{\ell}^{K})\times\prod_{i=1}^{n}\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{(i-1)},\mathcal{O}_{i})\times\mathcal{M}(\mathcal{O}^{(n)},\mathcal{O}_{r}^{K}) \tag{52}\]
_where the symplectic spaces \(\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{(i-1)},\mathcal{O}_{i})\) are defined in (22) and_
\[\mathcal{M}(\mathcal{O}^{\prime},\mathcal{O}^{K})=\{(x,z)\in\mathcal{O}^{\prime}\times\mathcal{O}^{K}\mid\pi(x)+z=0\}/K.\]
_Here \(\mathcal{O}^{\prime}\subset\mathfrak{g}^{*}\) is a \(G\)-coadjoint orbit and \(\mathcal{O}^{K}\subset\mathfrak{k}^{*}\) is a \(K\)-coadjoint orbit. It has a natural symplectic structure because it is the Hamiltonian reduction of \(\mathcal{O}^{\prime}\times\mathcal{O}^{K}\) with respect to the Hamiltonian diagonal action of \(K\)._

Proof.: Let \((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)})\in\mathcal{B}(\mathcal{O})\). By a direct computation, the fiber \(p_{2,\mathcal{O}}^{-1}((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)}))\) consists of the \(G_{n,K}\)-orbits in \(\mathcal{P}\) with representatives
\[(z_{\ell},x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})\in(\mathfrak{k}^{*}\times\mathfrak{g}^{*\times n})\times(\mathfrak{g}^{*\times n}\times\mathfrak{k}^{*})\]
satisfying the following conditions
\[\begin{array}{ccc}z_{\ell}\in\mathcal{O}_{\ell}^{K}\cap\pi(\mathcal{O}^{(0)}),&x_{i}+y_{i}\in\mathcal{O}_{i}\quad(1\leq i\leq n),&z_{r}\in\mathcal{O}_{r}^{K}\cap\big(-\pi(\mathcal{O}^{(n)})\big),\\ -y_{1}\in\mathcal{O}^{(0)},&x_{i}\in\mathcal{O}^{(i)},\ -y_{i+1}\in\mathcal{O}^{(i)}\quad(1\leq i\leq n-1),&x_{n}\in\mathcal{O}^{(n)}.\end{array}\]
Using this explicit description of the fiber, we can write it as a direct product of symplectic spaces.
The isomorphism (23) for symplectic spaces \(\mathcal{M}(\mathcal{O}^{(1)},\mathcal{O}^{(2)},\mathcal{O}^{(3)})\) defined by (22) gives factors \(\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{(i-1)},\mathcal{O}_{i})\). The space \[\mathcal{M}(\mathcal{O}^{\prime},\mathcal{O}^{K})=\{(x,z)\in\mathcal{O}^{ \prime}\times\mathcal{O}^{K}\mid\pi(x)+z=0\}/K\] with \(K\) acting by the diagonal coadjoint action is symplectic because, see above. The isomorphism \[\mathcal{M}(\mathcal{O}^{\prime},\mathcal{O}^{K})\stackrel{{ \sim}}{{\longrightarrow}}(\mathcal{O}^{\prime}\cap\pi^{-1}(\mathcal{O}^{K}))/K,\] completes the proof. The isomorphism maps \(G_{n,K}(z_{\ell},x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z_{r})\in p_{2,\mathcal{ O}}^{-1}((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)}))\) to \[(K(\widetilde{y}_{1},z_{\ell}),G(-x_{1},-y_{1},x_{1}+y_{1}),\ldots,G(-x_{n},-y_ {n},x_{n}+y_{n}),K(\widetilde{x}_{n},z_{r})),\] with \(\widetilde{y}_{1}\in Gy_{1}=-\mathcal{O}^{(0)}\) such that \(-\pi(\widetilde{y}_{1})=z_{\ell}\) and \(\widetilde{x}_{n}\in Gx_{n}=\mathcal{O}^{(n)}\) such that \(-\pi(\widetilde{x}_{n})=z_{r}\). Note that for generic \(\mathcal{O}^{\prime}\), the symplectic space \(\mathcal{M}(\mathcal{O}^{\prime},\mathcal{O}^{K})\) is of dimension \(\dim(\mathcal{O}^{K})\). **Remark 3**.: _In the compact case, the algebra of function on the fiber of \(p_{2,\mathcal{O}}\) has the algebra of endomorphisms of the vector space_ \[\operatorname{Hom}_{K}(V_{\lambda_{0}},U_{\nu_{t}})\otimes\operatorname{Hom}_{G }(V_{\lambda_{1}},V_{\lambda_{0}}\otimes V_{\mu_{1}})\otimes\cdots\otimes \operatorname{Hom}_{G}(V_{\lambda_{n}},V_{\lambda_{n-1}}\otimes V_{\mu_{n}}) \otimes\operatorname{Hom}_{K}(U_{\nu_{r}},V_{\lambda_{n}})\] _as natural quantization, where the finite dimensional \(G\)-representation \(V_{\lambda_{i}}\) (resp. \(V_{\mu_{i}}\)) corresponds to \(\mathcal{O}^{(i)}\) (resp. \(\mathcal{O}_{i}\)) and the \(K\)-representation \(U_{\nu_{t}}\) (resp. \(U_{\nu_{r}}\)) corresponds to \(\mathcal{O}_{\ell}^{K}\) (resp. \(\mathcal{O}_{r}^{K}\)), compare with Remark 2 in the cyclic case. For details see [35] (which treats the noncompact case) and [2]._ **Lemma 4**.: _Dimensions of spaces \(\mathcal{B}(\mathcal{O})\) and \(\mathcal{P}(\mathcal{O})\) are_ \[\dim(\mathcal{B}(\mathcal{O}))=(n+1)r,\ \ \dim(\mathcal{P}(\mathcal{O}))=\dim( \mathcal{O})-2nr\] _where we define \(\dim(\mathcal{O})\) as \(\dim(\mathcal{O}_{\ell}^{K})+\sum_{i=1}^{n}\dim(\mathcal{O}_{i})+\dim( \mathcal{O}_{r}^{K})\)._ Proof.: The proof of the dimension formula for \(\mathcal{B}(\mathcal{O})\) is completely similar to the periodic case. It is enough to consider large orbits. For the dimension of \(\mathcal{P}(\mathcal{O})\) we have: \[\dim(\mathcal{P}(\mathcal{O}))=\dim(\mathcal{B}(\mathcal{O}))+\dim\bigl{(}p_ {2,\mathcal{O}}^{-1}\bigl{(}(\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)}) \bigr{)}\] and by (52) the dimension of \(p_{2,\mathcal{O}}^{-1}((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)}))\) is equal to \[\dim(\mathcal{M}(-\mathcal{O}^{(0)},\mathcal{O}_{\ell}^{K})) +\sum_{i=1}^{n}\dim(\mathcal{M}(-\mathcal{O}^{(i)},\mathcal{O}^{( i-1)},\mathcal{O}_{i}))+\dim(\mathcal{M}(\mathcal{O}^{(n)},\mathcal{O}_{r}^{K}))=\] \[=\dim(\mathcal{O}_{\ell}^{K})+\sum_{i=1}^{n}(\dim(\mathcal{O}_{i })-2r)+\dim(\mathcal{O}_{r}^{K})=\dim(\mathcal{O})-2nr.\] This finishes the proof. We now have the following main result of this section. 
**Theorem 4**.: _The Hamiltonian system generated by any Hamiltonian for the open spin CM chain described in section 2.2 is superintegrable with the superintegrable structure described by the surjective Poisson maps_
\[\mathcal{S}(\mathcal{O})\stackrel{{ p_{1,\mathcal{O}}}}{{\longrightarrow}}\mathcal{P}(\mathcal{O})\stackrel{{ p_{2,\mathcal{O}}}}{{\longrightarrow}}\mathcal{B}(\mathcal{O})\]
_as introduced earlier in this section._

Recall that \(\mathcal{O}_{i}\neq\{0\}\) for all \(i=1,\ldots,n\).

Proof.: We already verified most of the conditions. What remains to show is the matching of dimensions,
\[\dim(\mathcal{S}(\mathcal{O}))=\dim(\mathcal{P}(\mathcal{O}))+\dim(\mathcal{B}(\mathcal{O})). \tag{53}\]
For the collection \(\mathcal{O}=(\mathcal{O}_{\ell}^{K},\mathcal{O}_{1},\ldots,\mathcal{O}_{n},\mathcal{O}_{r}^{K})\) of coadjoint orbits we write \(\dim(\mathcal{O})\) for the sum of the dimensions of the coadjoint orbits. For the dimension of the symplectic leaf \(\mathcal{S}(\mathcal{O})\) we have, using (44),
\[\dim(\mathcal{S}(\mathcal{O}))=2r+\dim(\mathcal{O}),\]
with \(r\) the rank of \(\mathfrak{g}\). For \(\mathcal{P}(\mathcal{O})\) we obtain, for generic \((\mathcal{O}^{(0)},\ldots,\mathcal{O}^{(n)})\in\mathcal{B}(\mathcal{O})\),
\[\dim(\mathcal{P}(\mathcal{O}))=\dim(\mathcal{B}(\mathcal{O}))+\dim(\mathcal{O})-2nr\]
by Lemma 4. Because
\[\dim(\mathcal{B}(\mathcal{O}))=(n+1)r,\]
we have
\[\dim(\mathcal{P}(\mathcal{O}))+\dim(\mathcal{B}(\mathcal{O})) =\dim(\mathcal{O})-2nr+2\dim(\mathcal{B}(\mathcal{O}))\]
\[=\dim(\mathcal{O})+2r\]
\[=\dim(\mathcal{S}(\mathcal{O})),\]
as desired.

### Constructing solutions by the projection method and angle variables

Let \(\mathcal{H}\) be a \(G\)-invariant function on \(\mathfrak{g}^{*}\) and \(\mathcal{H}^{(i)}\) for \(i=0,\ldots,n\) the associated \(G_{n,K}\)-invariant function \((x,g)\mapsto\mathcal{H}(x_{i})\) on \(T^{*}(G^{\times n+1})\) (cf. section 1.6). The Hamiltonian flow generated by \(\mathcal{H}^{(i)}\) on \(T^{*}(G^{\times n+1})\simeq\mathfrak{g}^{*\times n+1}\times G^{\times n+1}\) was already described in section 1.6. The flow line passing through \((x,g)\) at \(t=0\) is
\[(x(t_{i}),g(t_{i}))=(x_{0},\ldots,x_{n},g_{0},\ldots,g_{i-1},e^{\nabla\mathcal{H}(x_{i})t_{i}}g_{i},g_{i+1},\ldots,g_{n}). \tag{54}\]
The corresponding Hamiltonian flow on the symplectic leaf \(\mathcal{S}(\mathcal{O})\subset T^{*}(G^{\times n+1})/G_{n,K}\) is obtained by projecting the flow (54) to \(T^{*}(G^{\times n+1})/G_{n,K}\) and restricting it to \(\mathcal{S}(\mathcal{O})\). Thus, we have reformulated the problem of solving the nonlinear differential equations of motion for open spin Calogero-Moser chains as a problem of linear algebra. This is a version of the original projection method which goes back to earlier papers on Calogero-Moser type systems [28]. Now let us describe angle variables for this integrable dynamics. We use the notations from section 1.6. Fix \((x,g)\in\mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}\). For \(i=1,\ldots,n\) define elements \(s_{i}\in G\) by the condition \(Ad^{*}_{s_{i}}(x_{i})\in\mathfrak{a}^{*}_{+}\). As in the periodic case (see section 1.6), \(s_{i}\) is defined only up to \(s_{i}\mapsto a_{i}s_{i}\) with \(a_{i}\in H\subset G\), where \(H\subset G\) is the Cartan subgroup containing \(A\). Define \(s_{0}\in K\) such that \(Ad^{*}_{s_{0}}(x_{0})|_{\mathfrak{p}}\in\mathfrak{a}^{*}_{+}\) (where we view \(\mathfrak{a}^{*}_{+}\) now as a subset of \(\mathfrak{p}^{*}\) in the natural manner). The element \(s_{0}\) is defined up to \(s_{0}\mapsto ms_{0}\) with \(m\in M=Z_{K}(A)\).
Similarly, we define \(s_{n+1}\in K\) such that \(Ad^{*}_{s_{n+1}}(x_{n})|_{\mathfrak{p}}\in\mathfrak{a}^{*}_{+}\). Choose finite dimensional representations \(V_{0},V_{1},\ldots,V_{n}\) of \(G_{\mathbb{C}}\), \(H_{\mathbb{C}}\)-weight vectors \(v_{i}\in V_{i}\) of weight \(\lambda_{i+1}\) for \(0\leq i<n\) and \(H_{\mathbb{C}}\)-weight vectors \(u^{*}_{j}\in V^{*}_{j}\) of weight \(-\lambda_{j}\) for \(0\leq j\leq n\). Finally, we choose \(M\)-invariant vectors \(u^{*}_{0}\in V^{*}_{0}\) and \(v_{n}\in V_{n}\) (i.e., \(mu^{*}_{0}=u^{*}_{0}\) and \(mv_{n}=v_{n}\) for all \(m\in M\)). Define \[f_{u,v}(x,g)=u^{*}_{0}(s_{0}g_{0}s_{1}^{-1}v_{0})u^{*}_{1}(s_{1}g_{1}s_{2}^{-1 }v_{1})\cdots u^{*}_{n-1}(s_{n-1}g_{n-1}s_{n}^{-1}v_{n-1})u^{*}_{n}(s_{n}g_{n}s _{n+1}^{-1}v_{n}). \tag{55}\] It is an easy check that \(f_{u,v}(x,g)\) is a well defined \(G_{n,K}\)-invariant function on \(\mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}\). Similarly as in the periodic case (see section 1.6) we then have for \(i=1,\ldots,n\), \[f_{u,v}(x(t_{i}),g(t_{i}))=e^{t_{i}\lambda_{i}(\nabla\mathcal{H}(y_{i}))}f_{u,v}(x,g) \tag{56}\] with \(y_{i}=Ad^{*}_{s_{i}}(x_{i})\in\mathfrak{a}^{*}_{+}\). Logarithms of these functions thus evolve linearly, and hence give rise to angle variables for the Hamiltonians \(\mathcal{H}^{(i)}\) on \(\mathcal{S}(\mathcal{O})\cap(\mathfrak{g}^{\prime*\times n+1}\times G^{ \times n+1})/G_{n,K}\). For \(i=0\) we need to restrict further to \((x,g)\in\mathfrak{g}^{\prime*\times n+1}\times G^{\times n+1}\) with \(x_{0}\in\mathfrak{p}\), and assume that \(u^{*}_{0}\in V^{*}_{0}\) is not only \(M\)-invariant but also a \(H_{\mathbb{C}}\)-weight vector, say of weight \(-\lambda_{0}\). In this case \(\operatorname{Ad}^{*}_{s_{0}}(x_{0})=y_{0}\in\mathfrak{a}^{*}_{+}\) and hence \[u^{*}_{0}(s_{0}e^{t_{0}\nabla\mathcal{H}(x_{0})}g_{0}s_{1}^{-1}v_{0})=e^{t_{0} \lambda_{0}(\mathcal{H}(y_{0}))}u^{*}_{0}(s_{0}g_{0}s_{1}^{-1}v_{0}).\] As a consequence (56) then also holds true for \(i=0\), and the logarithm of \(f_{u,v}(x,g)\) becomes a linear functions of time \(t_{0}\). ## 3. A Liouville integrable example of a periodic spin Calogero-Moser example for orbits of rank 1. Let us briefly discuss a particular case of periodic spin CM chain corresponding to \(G=SL_{N}(\mathbb{R})\) with rank one orbits \(\mathcal{O}_{k}\). This case is related to the original paper [16] where spin CM systems were first introduced. Take \(\mathfrak{a}\subset\mathfrak{sl}_{N}\) the Cartan subalgebra consisting of diagonal matrices, and denote the roots by \(\{\varepsilon_{i}-\varepsilon_{j}\}_{i\neq j}\subset\mathfrak{a}^{*}\) with \(\varepsilon_{i}\in\mathfrak{a}^{*}\) the linear functional picking out the \(i^{th}\) diagonal entry. We identify \(\mathfrak{sl}_{N}\) with its dual via the Killing form \((x,y)=2N\operatorname{Tr}(xy)\). Then for \(p\in\mathfrak{a}^{*}\simeq\mathfrak{a}\) we have \((p,p)=2N\sum_{i=1}^{N}p_{i}^{2}\), with \(p_{i}\) the \(i^{th}\) diagonal entry of the diagonal matrix \(p\). For \(y\in\mathfrak{sl}_{N}^{*}\simeq\mathfrak{sl}_{N}\) and \(i\neq j\) we have \(y_{\varepsilon_{i}-\varepsilon_{j}}=\sqrt{2N}y_{ij}\), with \(y_{ij}\) the \((i,j)^{th}\) entry of the matrix \(y\). For \(\xi\in\mathbb{R}\) set \[\mathcal{O}^{(\xi)}=\Big{\{}x-\frac{\xi}{N}\mathrm{id}_{N}\ |\ x\ \text{is a rank one $N \times N$ matrix with }\ \operatorname{Tr}(x)=\xi\Big{\}}.\] Then \(\mathcal{O}^{(\xi)}\) is a coadjoint orbit in \(\mathfrak{sl}_{N}\simeq\mathfrak{sl}_{N}^{*}\) of dimension \(2(N-1)\). 
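As a quick check of the definitions: for \(\xi\neq 0\) the rank one matrix \(x=\mathrm{diag}(\xi,0,\ldots,0)\) gives the element
\[x-\frac{\xi}{N}\mathrm{id}_{N}=\mathrm{diag}\Big(\xi\big(1-\tfrac{1}{N}\big),-\tfrac{\xi}{N},\ldots,-\tfrac{\xi}{N}\Big)\in\mathcal{O}^{(\xi)},\]
whose square trace equals \(\xi^{2}\big(1-\tfrac{1}{N}\big)^{2}+(N-1)\tfrac{\xi^{2}}{N^{2}}=\xi^{2}\big(1-\tfrac{1}{N}\big)\), in agreement with the value of the quadratic Casimir on \(\mathcal{O}^{(\xi)}\) computed below.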
Viewing elements in \(\mathbb{R}^{N}\) as column vectors, we have a natural mapping
\[\big\{(a,b)\in\mathbb{R}^{N}\times\mathbb{R}^{N}\ |\ a^{\prime}b=\xi\big\}/\mathbb{R}^{\times}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{O}^{(\xi)},\qquad\mathbb{R}^{\times}(a,b)\mapsto ba^{\prime}-\frac{\xi}{N}\mathrm{id}_{N} \tag{57}\]
where \(\lambda\in\mathbb{R}^{\times}\) acts by \((a,b)\mapsto(\lambda a,\lambda^{-1}b)\) and \(a^{\prime}\) is the transpose of \(a\in\mathbb{R}^{N}\). Because of the rank one condition, this is an isomorphism. It is easy to check that this is a symplectomorphism, with the Poisson brackets of the coordinate functions \(a_{i}\) and \(b_{j}\) of \((a,b)\in\mathbb{R}^{N}\times\mathbb{R}^{N}\) given by
\[\{b_{i},a_{j}\}=\delta_{ij},\ \ \{a_{i},a_{j}\}=0=\{b_{i},b_{j}\}.\]
The value of the quadratic Casimir function \(y\mapsto\frac{(y,y)}{2N}=\sum_{i,j=1}^{N}y_{ij}y_{ji}\) on \(\mathcal{O}^{(\xi)}\) is easily computed using (57):
\[\sum_{i,j=1}^{N}\mu_{ij}\mu_{ji}=\xi^{2}\Big(1-\frac{1}{N}\Big),\qquad\qquad\mu\in\mathcal{O}^{(\xi)}.\]
Here we use the notation \(\mu\) for a point of \(\mathcal{O}^{(\xi)}\). The \(n\)-th quadratic Hamiltonian \(H_{2}^{(n)}\) in radial coordinates, rescaled by the factor \(2N\), is then
\[H_{2}=\frac{1}{2}\sum_{i=1}^{N}p_{i}^{2}-\sum_{i<j}\frac{\mu_{ij}\mu_{ji}}{2\mathrm{sh}^{2}(q_{i}-q_{j})} \tag{58}\]
(see (18)), where \(q_{i}=\varepsilon_{i}(\log(a))\) and
\[\mu_{ij}=\sum_{k=1}^{n}\mu_{ij}^{(k)}.\]
We now consider the Hamiltonian (58) on \(\mathcal{S}(\mathcal{O})_{reg}\simeq(\nu_{\mathcal{O}}^{-1}(0)/H\times T^{*}A_{reg})/W\) (see (12)) with \(\mathcal{O}=(\mathcal{O}^{(\xi_{1})},\dots,\mathcal{O}^{(\xi_{n})})\) a collection of \(n\) rank one orbits. Here \(H\subset\mathrm{SL}_{N}(\mathbb{R})\) is the Cartan subgroup of diagonal matrices and \(\nu_{\mathcal{O}}^{-1}(0)\subset\mathcal{O}^{(\xi_{1})}\times\dots\times\mathcal{O}^{(\xi_{n})}\) consists of the \(n\)-tuples of rank one matrices
\[(\mu^{(1)},\dots,\mu^{(n)})=\Big(b^{(1)}a^{(1)t}-\frac{\xi_{1}}{N}\mathrm{id}_{N},\dots,b^{(n)}a^{(n)t}-\frac{\xi_{n}}{N}\mathrm{id}_{N}\Big) \tag{59}\]
where the diagonal action of \(h\in H\) is given by \(a_{i}^{(k)}\to h_{i}a_{i}^{(k)}\), \(b_{j}^{(k)}\to h_{j}^{-1}b_{j}^{(k)}\), and the vectors \(a^{(k)},b^{(k)}\in\mathbb{R}^{N}\) satisfy the relations
\[\sum_{i=1}^{N}a_{i}^{(k)}b_{i}^{(k)}=\xi_{k}\quad(1\leq k\leq n),\qquad\sum_{k=1}^{n}a_{i}^{(k)}b_{i}^{(k)}=\frac{\boldsymbol{\xi}}{N}\quad(1\leq i\leq N). \tag{60}\]
Here \(\boldsymbol{\xi}=\sum_{k=1}^{n}\xi_{k}\). In other words, \(\mu_{ij}^{(k)}=b_{i}^{(k)}a_{j}^{(k)}-\delta_{ij}\xi_{k}/N\), where \(b_{i}^{(k)},a_{j}^{(k)}\) are as above. In terms of the variables \(a^{(k)}\) and \(b^{(k)}\) the Hamiltonian \(H_{2}\) on \(\mathcal{S}(\mathcal{O})_{reg}\) can be rewritten as
\[H_{2}=\frac{1}{2}\sum_{i=1}^{N}p_{i}^{2}-\sum_{i<j}\frac{\sum_{k,\ell=1}^{n}b_{i}^{(k)}a_{j}^{(k)}b_{j}^{(\ell)}a_{i}^{(\ell)}}{2\mathrm{sh}^{2}(q_{i}-q_{j})}. \tag{61}\]
Here we will rewrite the Hamiltonian (61) in terms of variables attached to each \(q_{i}\). It is natural to think of these variables as spin variables attached to a one-dimensional particle at position \(q_{i}\). They are defined as follows.
For \(\xi\in\mathbb{R}\) denote by \(\widetilde{\mathcal{O}}^{(\xi)}\) the rank one coadjoint \(\mathrm{SL}_{n}(\mathbb{R})\)-orbit defined as

\[\widetilde{\mathcal{O}}^{(\xi)}=\Big{\{}x-\frac{\xi}{n}\mathrm{id}_{n}\ |\ x\ \text{is a rank one $n\times n$ matrix with $\operatorname{Tr}(x)=\xi$}\,\Big{\}},\]

and set

\[\widetilde{\mathcal{O}}:=(\underbrace{\widetilde{\mathcal{O}}^{(\boldsymbol{\xi}/N)},\ldots,\widetilde{\mathcal{O}}^{(\boldsymbol{\xi}/N)}}_{N}).\]

The coadjoint action of the Cartan subgroup \(\widetilde{H}\subset\mathrm{SL}_{n}(\mathbb{R})\) is Hamiltonian and gives the moment map

\[\widetilde{\nu}_{\widetilde{\mathcal{O}}}:\underbrace{\widetilde{\mathcal{O}}^{(\boldsymbol{\xi}/N)}\times\cdots\times\widetilde{\mathcal{O}}^{(\boldsymbol{\xi}/N)}}_{N}\to\widetilde{\mathfrak{a}}^{*},\qquad(g_{1},\ldots,g_{N})\mapsto\big{(}g_{1}+\cdots+g_{N}\big{)}_{0}\]

where \(\widetilde{\mathfrak{a}}=\mathrm{Lie}(\widetilde{H})\). Finally, consider the traceless diagonal \(n\times n\)-matrix

\[t_{\underline{\xi}}=\mathrm{diag}\Big{(}\xi_{1}-\frac{\boldsymbol{\xi}}{n},\ldots,\xi_{n}-\frac{\boldsymbol{\xi}}{n}\Big{)}\in\widetilde{\mathfrak{a}}.\]

From the above we immediately have the following statement.

**Lemma 5**.: _We have the following isomorphism of \(2(n-1)(N-1)\)-dimensional symplectic varieties_

\[\begin{split}\nu_{\mathcal{O}}^{-1}(0)/H&\stackrel{{\sim}}{{\longrightarrow}}\widetilde{\nu}_{\widetilde{\mathcal{O}}}^{-1}(t_{\underline{\xi}})/\widetilde{H},\\ H(\mu^{(1)},\ldots,\mu^{(n)})&\mapsto\widetilde{H}(g^{(1)},\ldots,g^{(N)}).\end{split}\]

_Here \(\mu_{ij}^{(k)}=b_{i}^{(k)}a_{j}^{(k)}-\delta_{ij}\xi_{k}/N\) and the local spin variables \(g_{k\ell}^{(i)}\) are_

\[g_{k\ell}^{(i)}=b_{i}^{(k)}a_{i}^{(\ell)}-\delta_{k\ell}\frac{\boldsymbol{\xi}}{Nn}.\]

It is easy to check that if \(i\neq j\) the following identity holds:

\[\sum_{k,\ell=1}^{n}g_{k\ell}^{(i)}g_{\ell k}^{(j)}=\mu_{ij}\mu_{ji}-\frac{\boldsymbol{\xi}^{2}}{N^{2}n}. \tag{62}\]

Thus, we can rewrite the Hamiltonian (58) in terms of spin variables from \(\widetilde{\nu}_{\widetilde{\mathcal{O}}}^{-1}(t_{\underline{\xi}})/\widetilde{H}\times T^{*}A_{reg}\) as

\[H_{2}=\frac{1}{2}\sum_{i=1}^{N}p_{i}^{2}-\sum_{i<j}\frac{\operatorname{Tr}(g^{(i)}g^{(j)})+\frac{\boldsymbol{\xi}^{2}}{N^{2}n}}{2\mathrm{sh}^{2}(q_{i}-q_{j})}. \tag{63}\]

This Hamiltonian describes \(N\) classical particles each carrying a "spin" from a rank one coadjoint orbit in \(\mathfrak{sl}_{n}^{*}\) with the Casimir value given by

\[\sum_{\alpha,\beta=1}^{n}(g^{(i)})_{\alpha\beta}(g^{(i)})_{\beta\alpha}=\frac{\boldsymbol{\xi}^{2}}{N^{2}}\Big{(}1-\frac{1}{n}\Big{)}. \tag{64}\]

The system is Liouville integrable since we constructed \(n(N-1)\) integrals for the periodic spin chain earlier (see the proof of Theorem 2). The integrable systems described above are closely related to those of [16] and [21]. This project, together with results of [1], is the first step towards constructing superintegrable systems on moduli spaces of flat connections on a surface where on part of the boundary the gauge group \(G\) is constrained to \(K\). When the boundary gauge group is not constrained, the corresponding integrable systems are described in [1]. We expect that such moduli spaces have the structure of a cluster variety similar to the one described in [14]. It would be interesting to extend the construction of spin CM chains to the elliptic case as it was done for \(N=1\) in [21].

## Appendix A Comparison with the \(n=2\) case from [31]

Consider the periodic spin CM chain from section 1 for \(n=2\).
The symplectic leaves of \(T^{*}(G^{\times 2})/G_{2}\) are then

\[\mathcal{S}(\mathcal{O}_{1},\mathcal{O}_{2})=\{(x_{1},x_{2},g_{1},g_{2})\mid x_{1}-Ad_{g_{2}^{-1}}^{*}(x_{2})\in\mathcal{O}_{1},\;\;x_{2}-Ad_{g_{1}^{-1}}^{*}(x_{1})\in\mathcal{O}_{2}\}/G_{2} \tag{65}\]

where \(\mathcal{O}_{1},\mathcal{O}_{2}\) are coadjoint orbits in \(\mathfrak{g}^{*}\), relative to the gauge action

\[(h_{1},h_{2})(x_{1},x_{2},g_{1},g_{2})=(Ad_{h_{1}}^{*}(x_{1}),Ad_{h_{2}}^{*}(x_{2}),h_{1}g_{1}h_{2}^{-1},h_{2}g_{2}h_{1}^{-1}).\]

In [31, §3 & App. C] the following Hamiltonian action of \(G_{2}\) on \(T^{*}(G^{\times 2})\) is considered,

\[(h_{1},h_{2})_{*}(x_{1},x_{2},g_{1},g_{2})=(Ad_{h_{1}}^{*}(x_{1}),Ad_{h_{1}}^{*}(x_{2}),h_{1}g_{1}h_{2}^{-1},h_{1}g_{2}h_{2}^{-1}), \tag{66}\]

with corresponding moment map \(\mu_{*}:T^{*}(G^{\times 2})\to\mathfrak{g}^{*\times 2}\) given by

\[\mu_{*}(x_{1},x_{2},g_{1},g_{2})=(x_{1}+x_{2},-Ad_{g_{1}^{-1}}^{*}(x_{1})-Ad_{g_{2}^{-1}}^{*}(x_{2})).\]

The corresponding symplectic leaves are

\[\mathcal{S}_{*}(\mathcal{O}_{1},\mathcal{O}_{2})=\mu_{*}^{-1}(\mathcal{O}_{1},\mathcal{O}_{2})/G_{2}=\big{\{}(x_{1},x_{2},g_{1},g_{2})\mid x_{1}+x_{2}\in\mathcal{O}_{1},\;-Ad_{g_{1}^{-1}}^{*}(x_{1})-Ad_{g_{2}^{-1}}^{*}(x_{2})\in\mathcal{O}_{2}\big{\}}/G_{2},\]

with the gauge group \(G_{2}\) now acting by (66). These symplectic leaves were used in [31]. They are related to the symplectic leaves \(\mathcal{S}(\mathcal{O}_{1},\mathcal{O}_{2})\) in the following way. Consider the map \(\psi:T^{*}(G^{\times 2})\to T^{*}(G^{\times 2})\), defined by

\[\psi(x_{1},x_{2},g_{1},g_{2})=(-x_{1},Ad_{g_{1}}^{*}(x_{2}),g_{1},g_{1}g_{2}g_{1}).\]

Then \(\psi\) is \(G_{2}\)-equivariant,

\[\psi((h_{1},h_{2})(x_{1},x_{2},g_{1},g_{2}))=(h_{1},h_{2})_{*}\psi(x_{1},x_{2},g_{1},g_{2}),\]

and the resulting map on the \(G_{2}\)-orbits restricts to an isomorphism

\[\mathcal{S}(\mathcal{O}_{1},\mathcal{O}_{2})\stackrel{{\sim}}{{\longrightarrow}}\mathcal{S}_{*}(\mathcal{O}_{2},\mathcal{O}_{1}).\]
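The equivariance of \(\psi\) is also easy to confirm numerically. The following sketch (our own check, assuming NumPy and SciPy are available) realises the coadjoint action by conjugation, \(Ad^{*}_{h}(x)=hxh^{-1}\), after identifying \(\mathfrak{g}^{*}\) with \(\mathfrak{g}\) via the trace form, and verifies \(\psi((h_{1},h_{2})(x,g))=(h_{1},h_{2})_{*}\psi(x,g)\) for random elements of \(SL_{3}(\mathbb{R})\).

```python
import numpy as np
from scipy.linalg import expm

# Numerical check (ours): G_2-equivariance of the map psi for G = SL_3(R),
# with g* identified with g via the trace form, so Ad*_h(x) = h x h^{-1}.
rng = np.random.default_rng(1)
N = 3

def sl(scale=1.0):                      # random traceless matrix (element of sl_N)
    A = rng.normal(size=(N, N)) * scale
    return A - np.trace(A)/N * np.eye(N)

def SL():                               # random element of SL_N(R)
    return expm(sl(0.3))

def gauge(h1, h2, x1, x2, g1, g2):      # gauge action used for S(O_1, O_2)
    return (h1@x1@np.linalg.inv(h1), h2@x2@np.linalg.inv(h2),
            h1@g1@np.linalg.inv(h2), h2@g2@np.linalg.inv(h1))

def gauge_star(h1, h2, x1, x2, g1, g2): # the action (66) used in [31]
    return (h1@x1@np.linalg.inv(h1), h1@x2@np.linalg.inv(h1),
            h1@g1@np.linalg.inv(h2), h1@g2@np.linalg.inv(h2))

def psi_map(x1, x2, g1, g2):            # psi = (-x1, Ad*_{g1} x2, g1, g1 g2 g1)
    return (-x1, g1@x2@np.linalg.inv(g1), g1, g1@g2@g1)

x1, x2, g1, g2 = sl(), sl(), SL(), SL()
h1, h2 = SL(), SL()
lhs = psi_map(*gauge(h1, h2, x1, x2, g1, g2))
rhs = gauge_star(h1, h2, *psi_map(x1, x2, g1, g2))
print("equivariance defect:", max(np.max(abs(a - b)) for a, b in zip(lhs, rhs)))
```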
2309.10395
How (Not) to Understand Weak Measurements of Velocities
To-date, the most elaborated attempt to complete quantum mechanics by the addition of hidden variables is the de Broglie-Bohm (pilot wave) theory (dBBT). It endows particles with definite positions at all times. Their evolution is governed by a deterministic dynamics. By construction, however, the individual particle trajectories generically defy detectability in principle. Of late, this lore might seem to have been called into question in light of so-called weak measurements. Due to their characteristic weak coupling between the measurement device and the system under study, they permit the experimental probing of quantum systems without essentially disturbing them. It's natural therefore to think that weak measurements of velocity in particular offer to actually observe the particle trajectories. If true, such a claim would not only experimentally demonstrate the incompleteness of quantum mechanics: it would provide support of dBBT in its standard form, singling it out from an infinitude of empirically equivalent alternative choices for the particle dynamics. Here we examine this possibility. Our result is deflationary: weak velocity measurements constitute no new arguments, let alone empirical evidence, in favour of standard dBBT; one mustn't naïvely identify weak and actual positions. Weak velocity measurements admit of a straightforward standard quantum mechanical interpretation, independent of any commitment to particle trajectories and velocities. This is revealed by a careful reconstruction of the physical arguments on which the description of weak velocity measurements rests. It turns out that for weak velocity measurements to be reliable, one must already presuppose dBBT in its standard form: in this sense, they can provide no new argument, empirical or otherwise, for dBBT and its standard guidance equation.
Johannes Fankhauser, Patrick M. Dürr
2023-09-19T07:51:05Z
http://arxiv.org/abs/2309.10395v1
# How (Not) to Understand Weak Measurements of Velocities ###### Abstract To-date, the most elaborated attempt to complete quantum mechanics by the addition of hidden variables is the de Broglie-Bohm (pilot wave) theory (dBBT). It endows particles with definite positions at all times. Their evolution is governed by a deterministic dynamics. By construction, however, the individual particle trajectories generically defy detectability in principle. Of late, this lore might seem to have been called into question in light of so-called weak measurements. Due to their characteristic weak coupling between the measurement device and the system under study, they permit the experimental probing of quantum systems without essentially disturbing them. It's natural therefore to think that weak measurements of velocity in particular offer to actually observe the particle trajectories. If true, such a claim would not only experimentally demonstrate the incompleteness of quantum mechanics: it would provide support of dBBT in its standard form, singling it out from an infinitude of empirically equivalent alternative choices for the particle dynamics. Here we examine this possibility. Our result is deflationary: weak velocity measurements constitute no new arguments, let alone empirical evidence, in favour of standard dBBT; One mustn't naively identify weak and actual positions. Weak velocity measurements admit of a straightforward standard quantum mechanical interpretation, independent of any commitment to particle trajectories and velocities. This is revealed by a careful reconstruction of the physical arguments on which the description of weak velocity measurements rests. It turns out that for weak velocity measurements to be reliable, one must already presuppose dBBT in its standard form: in this sense, they can provide no new argument, empirical or otherwise, for dBBT and its standard guidance equation. _Keywords--_ weak values, quantum mechanics, particle trajectories, underdetermination, measurement ## 1 Introduction Since its inception, Quantum Mechanics (QM) has faced three major interpretative conundrums (see e.g. Lewis 2016; Myrvold 2018). The first is the so-called Measurement Problem (see e.g. Maudlin 1995): how are we to make sense of the superpositions of states which the formalism of QM (if assumed to be universally valid) appears to attribute to objects? The second pertains to the interpretation of Heisenberg's uncertainty relations (see e.g. Hilgevoord and Uffink 2016): do they circumscribe an absolute limit of simultaneous knowledge of, say, a particle's momentum and position? Or does it reflect an _ontological_ indeterminacy? Finally, how should one understand entanglement (see e.g. Ney and Albert 2013) -- the fact that generically, composite systems appear to defy an unambiguous description of their individual constituent parts? These three puzzles culminate in the so-called EPR paradox (see e.g. Redhead 1987, Chapter 3 or Fine 2017). Suppose one widely separates the partners of an entangled pair of particles. They can then no longer interact. Hence we may, according to Einstein, Podolsky and Rosen, "without in any way disturbing the system" perform (and expect a well-defined outcome of) a position measurement on one partner, and a simultaneous momentum measurement on the other (Einstein et al., 1935, p. 777). Prima facie, it looks as if thereby we can bypass the uncertainty relations. 
This raises the question whether QM in its current form is complete: does every element of physical reality have a counterpart in the description of the QM formalism? Famously, Einstein thought otherwise (see e.g. Lehner 2014). He was "[...] firmly convinced that the essentially statistical character of contemporary quantum theory is solely to be ascribed to the fact that this [theory] operates with an incomplete description of physical systems" (Einstein, 1949, p. 666). To-date, the most elaborated attempt to thus "complete" (cf. Goldstein 2017, Section 4) QM dates back to Bohm (1952a,b) -- "Bohmian Mechanics" or, in recognition of de Broglie's earlier proposal, "de Broglie-Bohm theory" (dBBT).1 (We'll stick to the latter term throughout.) Footnote 1: There exist two _distinct_ variants of dBBT: the “quantum potential” school (expounded e.g. by Bohm and Hiley 2006, or Holland 1995), and the “1\({}^{st}\)-order formulation”, canonised in the oeuvre of Dür, Goldstein, Zanghi and their collaborators. The present paper will only be concerned with the latter; "dBBT” will exclusively refer to this variant of dBBT, throughout. It supplements the QM formalism by a deterministic, but manifestly non-local dynamics for particles. At all times, they occupy determinate positions, evolving continuously in time. Only the particles' initial exact distribution (and the initial wave function) is unknown. Due to this fact, QM emerges from dBBT in a manner "approximately analogous [...] to the statistical mechanics within the framework of classical mechanics" -- as Einstein (ibid) had hoped. But dBBT isn't free of problems. From its early days on, a principal objection to it2 targets the unobservability of its particle dynamics. By construction, in dBBT the individual particle trajectories seem to be undetectable _in principle_. Only their statistical averages are observable. They coincide with the standard quantum mechanical predictions. Thereby, standard dBBT achieves empirical equivalence with QM.3 Footnote 2: For subtleties in the early objections to dBBT, related to dBBT’s unobservability, we refer to Myrvold (2003) Footnote 3: Here, we’ll set aside possible subtleties, see Arageorgis and Earman (2017). Recently, this lore seems to have been called into question in light of a novel type of measurements -- so-called weak measurements (Aharonov et al., 1988). These denote setups in which some observable is measured, without significantly disturbing the state. Inspired by Wiseman (2007), eminent advocates of standard dBBT seem to have touted such weak measurements as a means of actually observing individual trajectories in standard dBBT (e.g. Goldstein 2017, Section 4). Moreover, they point to already performed experiments (e.g. Kocsis et al. 2011; Mahler et al. 2016) that appear to corroborate dBBT's predictions and claim to show the particle trajectories. The present paper will critically examine those claims. Should they hold up to scrutiny, they would not only establish the incompleteness of QM. Almost more spectacularly, they would also furnish the remedy: they would vindicate dBBT in its standard form. Those claims, we contend, are mistaken: weak measurements constitute no new arguments, let alone empirical evidence in favour of dBBT's guidance equation. To show this, we'll carefully reconstruct the physical arguments on which the description of weak measurement rests. 
dBBT is entirely dispensable for a coherent treatment and interpretation of weak measurements; they receive a natural interpretation within standard QM as observational manifestations of the gradient of the wave function's phase. For weak velocity measurements to disclose the particles' actual velocities, one must not only presuppose the prior existence of deterministic (and differentiable) trajectories, but also the specific form of standard dBBT's particle dynamics. We contest Durr et al.'s suggestion of a legitimate sense in which weak velocity measurements allow a genuine measurement of particle trajectories. We'll proceed as follows. §2 will revisit de Broglie-Bohm theory -- its basics (§2.1), and one of its principal challenges, its empirical underdetermination (§2.2). In §3, we'll turn to weak velocity values. §3.1 will introduce Wiseman's measurement protocol for so-called weak velocity measurements. We'll subsequently illustrate it in the double-slit experiment (§3.2). Our main analysis of the significance of weak measurements for velocities in de Broglie-Bohm theory will form the subject of §4. We'll first elaborate when actual velocities and weak ones (as ascertained in Wiseman's measurement protocol) coincide (§4.1). This will enable a critical evaluation both of Durr et al.'s claim that weak velocity measurements are in some sense genuine (§4.2), and of the idea that they provide _non_-empirical support for standard dBBT (§4.3). Our findings will be summarised in §5. A mathematical appendix (§6) contains a concise review of weak interactions within the von Neumann measurement scheme (§6.1), as well as of post-selection and the two-vector-formalism (§6.2).

## 2 De Broglie-Bohm Theory

### Basics

dBBT is best conceived of as an example of what Popper (1967) dubbed a "quantum theory without observer" (cf. Goldstein 1998; Allori et al. 2008, esp. Section 8): it aspires to provide an understanding of quantum phenomena without fundamental recourse to non-objective (i.e. subjective or epistemic) notions. Such endeavours grew out of the dissatisfaction with influential presentations of QM, notably by von Neumann, Heisenberg and (common readings of) Bohr (see e.g. Jammer 1974; Scheibe 2006, Ch. VIII, IX; Cushing 1996). In its non-relativistic form, dBBT is a theory about (massive, charged, etc.4) particles. At all times, they occupy definite positions. Across time, the particles follow deterministic trajectories. Like a "pilot wave", the quantum mechanical wave function guides them along those paths. Assuming a particular initial distribution of the particles, one recovers the empirical content of QM. Footnote 4: For our present purposes, we'll elide subtleties concerning the ascription of such intrinsic properties (cf. Brown et al. 1995; Brown 1996; Brown et al. 1996). We'll also set aside Esfeld's (2014; 2017) "Humeanism without properties" (the ontology and ideology of which is limited to primitively occupied spacetime points and the spatiotemporal relations). More precisely, for an \(N\)-particle system, dBBT can be taken to consist of three postulates. (We closely follow Durr and Teufel (2009), to whom we refer for all details.)
* The wave function \(\Psi\colon\mathbb{R}^{3N}\times\mathbb{R}\to\mathbb{C}\) satisfies the standard \(N\)-particle Schrodinger Equation (SEQ) in the position representation: \[i\hbar\frac{\partial}{\partial t}\Psi(\boldsymbol{Q},t)=\hat{H}\Psi(\boldsymbol{Q},t)\] (2.1) with the \(N\)-particle Hamiltonian \(\hat{H}=-\sum\limits_{i=1}^{N}\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2}+V(\boldsymbol{Q},t)\), where \(\nabla_{i}=\frac{\partial}{\partial\boldsymbol{Q}_{i}}\), \(i=1,...,N\) acts on the \(i\)-th position variable \(\boldsymbol{Q_{i}}\) and \(\boldsymbol{Q}:=(\boldsymbol{Q_{1}},...,\boldsymbol{Q_{N}})\).
* The continuous evolution of the \(i\)-th particle's position \(\boldsymbol{Q_{i}}(t)\colon\mathbb{R}\to\mathbb{R}^{3}\) in 3-dimensional Euclidean space is generated by the flow of the velocity field5 Footnote 5: For motivations, see Passon 2004, Chapter 4. \[v_{i}^{\Psi}:=\frac{\hbar}{m_{i}}\operatorname{Im}\frac{\nabla_{i}\Psi}{\Psi}|_{(\boldsymbol{Q_{1}}(t),...,\boldsymbol{Q_{N}}(t))}.\] (2.2) That is, the particle position \(\boldsymbol{Q_{i}}\) obeys the so-called guidance equation (GEQ) \[\boldsymbol{\dot{Q}_{i}}=v_{i}^{\Psi}.\] (2.3) For all relevant types of potentials, unique solutions (up to sets of initial conditions of measure zero) have been shown to exist (Teufel and Tumulka, 2005). Notice that \(v_{i}^{\Psi}\) depends on all particle positions simultaneously. This is the source of dBBT's manifest action-at-a-distance in the form of an instantaneous non-locality (see e.g. Goldstein 2017, Section 13).
* The wave function induces a natural (and, under suitable assumptions, unique, see Goldstein and Struyve (2007)) measure on configuration space, the so-called Born measure: \[\mathbb{P}^{\Psi}(d^{3N}\boldsymbol{Q}):=|\Psi|^{2}d^{3N}\boldsymbol{Q}.\] (2.4) It quantifies which (measurable) sets of particle configurations \(\mathcal{Q}\subseteq\mathbb{R}^{3N}\) count as large ("typical"). That is: \[\int_{\mathcal{Q}}d^{3N}\boldsymbol{Q}|\Psi(\boldsymbol{Q})|^{2}=1-\varepsilon,\] for some small \(\varepsilon>0\) (see Maudlin 2011; Durr and Struyve 2019; Lazarovici and Reichert 2015 for details; cf. Frigg 2009, 2011).6 This definition of typicality respects a generalised sense of time-independence. A universe typical in this sense is said to be in quantum equilibrium (see Durr et al. 1992 for further details). The continuity equation for \(|\Psi|^{2}\) obtained from the Schrodinger Equation implies that a system is in quantum equilibrium at all times, if and only if in equilibrium at _some_ point in time. This is called the Quantum Equilibrium Hypothesis (QEH). Footnote 6: Typicality raises intriguing questions about whether appeal to it is explanatory (and if so, in which sense). For a recent account, see Wilhelm (2019).

Consider now a de Broglie-Bohmian \(N\)-particle universe, satisfying these three axioms. An \(M\)-particle subsystem is said to possess an "effective" wave function \(\psi\), if the universal wave function (i.e. the wave function of the universe) \(\Psi\colon X\times Y\to\mathbb{C}\), with \(X\) and \(Y\) denoting the configuration space of the subsystem and its environment, respectively, can be decomposed as \[\forall(x,y)\in X\times Y\colon\Psi(x,y)=\psi(x)\Phi(y)+\Psi_{\perp}(x,y).\] Here \(\Phi\) and \(\Psi_{\perp}\) have macroscopically disjoint \(y\)-support, and the actual configuration of the environment lies in \(\operatorname{supp}(\Phi)\). That is, the configurations in which \(\Phi\) and \(\Psi_{\perp}\) do not vanish are macroscopically distinct (e.g. correspond to distinct pointer positions).
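To see the guidance equation at work, here is a minimal numerical sketch (our own illustration, not the authors'; units with \(\hbar=m=1\), free 1D dynamics, and all grid parameters chosen for convenience). It evolves a superposition of two Gaussian packets with a split-step FFT scheme and integrates Equation 2.3 for a handful of initial positions sampled from \(|\psi|^{2}\). Near nodes of the wave function the velocity field becomes stiff, so the simple Euler step is purely illustrative.

```python
import numpy as np

# Minimal sketch (not from the paper): guidance equation (2.3) for one particle in 1D,
# with psi evolved by the free Schroedinger equation via a split-step FFT method.
hbar = m = 1.0
L, n = 40.0, 2048
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=x[1]-x[0])
dt, steps = 0.002, 1000

def gauss(x0, s):
    return np.exp(-(x - x0)**2 / (4*s**2))

# "Two-slit"-like initial state: superposition of two Gaussian packets.
psi = (gauss(-2.0, 0.5) + gauss(+2.0, 0.5)).astype(complex)
psi /= np.sqrt(np.trapz(abs(psi)**2, x))

def velocity(psi):
    # v = (hbar/m) Im( d_x psi / psi ), cf. Equation 2.2 (regularised at nodes)
    dpsi = np.gradient(psi, x)
    return (hbar/m) * np.imag(dpsi * np.conj(psi)) / (abs(psi)**2 + 1e-12)

# A few Bohmian trajectories with initial positions sampled from |psi|^2.
rng = np.random.default_rng(0)
prob = abs(psi)**2 / np.sum(abs(psi)**2)
Q = rng.choice(x, size=10, p=prob)

for _ in range(steps):
    Q = Q + dt * np.interp(Q, x, velocity(psi))   # Euler step of the guidance equation
    psi = np.fft.ifft(np.exp(-1j*hbar*k**2*dt/(2*m)) * np.fft.fft(psi))  # free evolution

print("sample trajectory endpoints:", np.round(np.sort(Q), 2))
```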
For negligible interaction with their environment, the effective wave function \(\psi\) of subsystems can be shown to satisfy the Schrodinger Equation itself. ### Underdetermination Empirically, the guidance equation 2.3 isn't the only option. More precisely, for empirical equivalence with QM, the specific guidance equation 2.3 isn't necessary. Infinitely many different choices \[\mathbf{v}^{\Psi}\mapsto\mathbf{v}^{\Psi}+|\Psi|^{-2}\mathbf{j} \tag{2.5}\] are equally possible for otherwise arbitrary vector fields \(\mathbf{j}\) whose divergence vanishes, \(\nabla\cdot\mathbf{j}=0\). They yield coherent alternative dynamics with distinct particle trajectories, whilst leaving the predictive-statistical content unaltered (Deotto and Ghirardi, 1998). One needn't even restrict oneself to a deterministic dynamics (an option expressly countenanced by e.g. Durr and Teufel 2009, Chapter 1.2): a stochastic dynamics, with \(|\Psi|^{-2}\mathbf{j}\) corresponding to a suitable random variable can also be introduced. As a result the particles would perform random walks, with the r.h.s. of the integral equation \[\mathbf{Q}(t)-\mathbf{Q}(t_{0})=\int\limits_{t_{0}}^{t}\mathbf{v}^{\Psi}d\tau\] containing a diffusion term. A proposal of this type is Nelson Stochastics (see e.g. Goldstein 1987; Bacciagaluppi 2005). In short: by construction, dBBT's individual particle trajectories are observationally inaccessible. In consequence, dBBT is vastly underdetermined by empirical data: all versions of dBBT with guidance equations of the type 2.5 are experimentally indistinguishable. Yet, the worlds described by them clearly differ. (We illustrate this in Figure 1.) This underdetermination poses a challenge to a realist understanding of dBBT (cf. for example Stanford 2017, Chapter 3.2) For the purposes of this paper, we'll confine the class of considered choices to the family of de Broglie-Bohmian-like theories (cf. Durr and Ehmann 2020, Section 3.4) -- i.e. particle theories within the Primitive Ontology framework (see, e.g. Allori et al. 2008; Allori 2013, 2015). It encompasses e.g. the "identity-based Bohmian Mechanics" (Goldstein et al., 2005) or "Newtonian QM" (Sebens, 2015). Let's even further whittle down the list of candidate theories to deterministic variants of dBBT with differentiable paths, i.e. to variants of dBBT that differ only with respect to their vector field of the type in Equation 2.5. Still, the underdetermination persists; its severity is scarcely diminished: how to justify the particular choice for the standard guidance equation amongst the uncountably infinite alternatives? An argument frequently cited in response is a result by Durr et al. (1992, p. 852): "The standard guidance equation is the simplest first-order equation that respects Galilei covariance and time-reversal invariance." But this is not decisive. First, individually neither desideratum of Durr et al.'s theorem seems compelling -- unless one is already guided by intuitions, shaped by either classical physics or by standard dBBT itself. In particular, one may reject the ab initio requirement of Galilei covariance as implausible: Galilei covariance is the symmetry group of _Classical_ Mechanics.7 Footnote 7: Not even this is entirely obvious. On the one hand, at least once one incorporates Newtonian gravity, the most perspicuous spacetime setting is no longer Galilei spacetime (e.g. Pooley 2013). 
On the other hand, one may be attracted to the idea of a theory of classical mechanics that incorporates the Leibniz Group as its symmetry group, such as in the Barbour-Bertotti theory (ibid., Section 6.2). (Notice that recently attempts have indeed been made, e.g. by Vassallo (2015), to combine Barbourian shape dynamics with dBBT.) Why impose it on a more fundamental theory -- dBBT -- which is supposed to _supersede_ Classical Mechanics?8 Footnote 8: From the perspective of the so-called dynamical approach (e.g. Brown 2005; Brown and Read 2018) to spacetime symmetries, this requirement lacks a priori force. According to advocates of this approach, spacetime symmetries merely codify (are reducible to) the symmetries of the matter dynamics. For them, therefore, to demand any particular spacetime symmetry as an ab initio constraint on a possible, fundamental matter dynamics is to put the cart before the horse (cf. Acuna 2016; Myrvold 2017). Secondly, Durr et al.'s argument rests on an assumption about how the Galilei group acts on the wave function. As Skow (2010) has argued, such an assumption is essentially unwarranted.

Figure 1: A particle follows different trajectories corresponding to different/non-standard guidance equations. **(a)** The familiar wiggly deterministic trajectories that lead to the interference pattern in a double-slit experiment determined by the standard guidance equation. **(b)** Alternative trajectories obtained from adding a divergence-free vector field \(\mathbf{j}:=\frac{1}{x^{2}+y^{2}}\left(\begin{array}{c}-y\\ x\end{array}\right)\) to the standard Bohmian velocity field. **(c)** A single stochastic trajectory generated by a random variable sampled according to \(|\psi|^{2}\). For illustration, the probability density \(|\psi|^{2}\) is shown at three different snapshots in time. All choices of the dynamics (i.e. (a), (b), (c)) are observationally indiscernible: The resulting measurable distributions at the screen at the top of each figure are the same.

Thirdly, let's grant that a satisfactory answer can be given to the preceding two questions. Durr et al.'s argument pivotally turns on mathematical simplicity. We confess, we'd be hard-pressed to pinpoint what _mathematical_ (rather than, say, ontological) simplicity precisely means. Whatever it may be, as a super-empirical criterion, it may well be felt a dubious indicator of truth (see e.g. Van Fraassen 1980, Chapter 4.4; Norton 2000, Norton 2018, Chapter 5-7; Ivanova 2014, 2020). At best we are inclined to regard it as a pragmatic criterion, at worst an aesthetic one for theory acceptance. Can a realist legitimately invoke it to argue that one theory is more likely to be true than an otherwise equally good alternative? This context -- underdetermination -- renders weak value measurements particularly interesting. By (prima facie) allowing measurements of individual particle trajectories, they appear to directly overcome dBBT's underdetermination. But wouldn't that contradict the empirical inaccessibility of the trajectories? Let us see.

## 3 Weak velocity values

This section will offer a concise review of so-called weak values. We'll first outline how they are harnessed in Wiseman's measurement protocol for weak velocity measurements (§3.1). An application to the double-slit experiment will further illustrate the salient points (§3.2). This will pave the way for our subsequent discussion in §4.

### Wiseman's measurement protocol for weak velocity measurements
Following Aharonov et al. 1988, weak measurements are measurement processes (modelled via the von Neumann scheme, see §6.1) in which the interaction between the measurement apparatus ("pointer device") and the particle ("system") is weak: it disturbs the wave function only slightly. As a result, one can extract still further information about the particles (say, regarding their initial momenta) via a subsequent ordinary "strong" (or projective) measurement (say, regarding their positions). More precisely: after a weak interaction (say, at \(t=0\)), the pointer states aren't unambiguously correlated with eigenstates of the system under investigation. In contradistinction to strong measurements, the system doesn't (effectively) "collapse" onto eigenstates; the particles can't be (say) located very precisely in a single run of an experiment. This apparent shortcoming is compensated for when combined with a strong measurement a tiny bit _after_ the weak interaction: the experimenter is then able not only to ascertain the individual particle's precise location (via the strong measurement); for a sufficiently large ensemble of identically prepared particles with initial state \(\psi_{in}\) (viz. Gaussian wave packets with a large spread), she can also gain statistical access to the probability amplitude of all subensembles whose final states -- the so-called "post-selected" states -- have been detected (in the strong measurement) to be \(\psi_{fin}\):

\[\langle\hat{x}\rangle_{w}:=\frac{\langle\psi_{fin}|\hat{x}|\psi_{in}\rangle}{\langle\psi_{fin}|\psi_{in}\rangle} \tag{3.1}\]

This quantity is called the "weak position value" (for the position operator \(\hat{x}\)). (The concept is straightforwardly applied also to other operators, mutatis mutandis.) It can be shown (see §6.2) that after many runs, the pointer's average position will have shifted by \(\langle\hat{x}\rangle_{w}\). Specifically, if we characterise the final/post-selected state via position eigenstates \(\left|x\right\rangle\), determined in a strong position measurement and unitary evolution of the initial state, we obtain

\[\langle\hat{x}(\tau)\rangle_{w}=\text{Re}\left(\frac{\langle x|\hat{U}(\tau)\hat{x}|\psi_{in}\rangle}{\langle x|\hat{U}(\tau)|\psi_{in}\rangle}\right), \tag{3.2}\]

where \(\hat{U}(\tau)\) denotes the unitary time evolution operator during the time interval \([0;\tau]\). Following Wiseman 2007, it's suggestive to construe \(\langle\hat{x}(\tau)\rangle_{w}\) as the mean displacement of particles whose position was found (in a strong position measurement at \(t=\tau\)) to be at \(x\). From this displacement, a natural definition of a velocity field ensues:

\[\mathbf{v}(\mathbf{x},t)=\lim_{\tau\to 0}\frac{1}{\tau}(\mathbf{x}-\langle\hat{x}(\tau)\rangle_{w}). \tag{3.3}\]

Note that all three quantities entering this velocity field -- \(\tau\), \(x\) and \(\langle\hat{x}(\tau)\rangle_{w}\) -- are experimentally accessible. In this sense, the velocity field is "defined operationally" (Wiseman). In what follows, we'll refer to the application of this measurement scheme -- a strong position measurement in short succession upon a particle's weak interaction with the pointer -- for the associated "operationally defined" velocity field as "Wiseman's measurement protocol for weak velocity measurements", or simply _"weak velocity measurements"_. For a better grasp of its salient points, let's now spell out such weak velocity measurements in the context of the double-slit experiment.
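For concreteness, the weak position value of Equation 3.2 and the operational velocity of Equation 3.3 can be evaluated numerically (our own sketch, with \(\hbar=m=1\) and free evolution; none of the parameter choices come from the paper) by propagating both \(\hat{x}\psi_{in}\) and \(\psi_{in}\) for a short time \(\tau\). The result can then be compared with the phase gradient \((\hbar/m)\partial_{x}S\), the quantity which, as noted below in connection with Equation 3.6, weak velocity measurements track according to standard QM.

```python
import numpy as np

# Sketch (ours): weak position value of Eq. (3.2) on a grid, and the operational
# velocity of Eq. (3.3), compared with the phase gradient (hbar/m) dS/dx.
hbar = m = 1.0
L, n = 40.0, 4096
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=x[1]-x[0])

def evolve(f, tau):                       # free evolution U(tau) applied to f(x)
    return np.fft.ifft(np.exp(-1j*hbar*k**2*tau/(2*m)) * np.fft.fft(f))

# Initial state: Gaussian packet with a position-dependent phase.
psi = np.exp(-(x + 1.0)**2/4.0 + 1j*(0.7*x + 0.05*x**2))
psi /= np.sqrt(np.trapz(abs(psi)**2, x))

tau = 1e-3
num = evolve(x*psi, tau)                  # <x| U(tau) x_hat |psi_in>
den = evolve(psi, tau)                    # <x| U(tau) |psi_in>
x_weak = np.real(num/den)                 # weak value, post-selected on position x
v_op = (x - x_weak)/tau                   # operational velocity, Eq. (3.3)

S = np.unwrap(np.angle(psi))              # phase of psi
v_phase = (hbar/m)*np.gradient(S, x)

mask = abs(psi) > 1e-3*abs(psi).max()     # compare only where psi is non-negligible
print("max |v_op - v_phase| on the bulk of the packet:",
      np.max(np.abs(v_op - v_phase)[mask]))
```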
In §4, it will prove useful to occasionally refer back to this concrete setup.

### Weak measurements in the double-slit experiment

Consider the standard double-slit experiment with, say, electrons hitting a screen. It enables a detection of the electrons' positions. This constitutes a strong position measurement. Accordingly, we'll dub this screen the _strong screen_. Between the strong screen and the two slits from which the particles emerge, let a weak measurement of position be performed. Let this be called the _weak screen_. The two screens can be moved to perform measurements at various distances from the double-slit. Suppose that it takes the particles some time \(\tau>0\) to travel from the weak to the strong screen. After passing through the slits, the electron will be described by the wave function \(\left|\psi\right\rangle=\int\psi(x,t)\left|x\right\rangle dx\). This leads to the familiar double-slit interference fringes. We assume that the weak screen, i.e. the pointer variable, is in a Gaussian ready state with width \(\sigma\), peaked around some initial position. After the particles have interacted with the measurement device (at time \(t=0\)), the composite wave function \(\left|\Psi(0)\right\rangle\) of particle-_cum_-weak screen is

\[\left|\Psi(0)\right\rangle=\int\psi(x,0)\left|x\right\rangle\otimes\varphi(y-x)\left|y\right\rangle dxdy. \tag{3.4}\]

Here, \(\left|\varphi\right\rangle\) denotes the wave function of the weak screen, and \(y\) its free variable (e.g. the position of some pointer device). The wave function then evolves unitarily for some time \(\tau\), according to the particle Hamiltonian \(\hat{H}\):

\[\left|\Psi(\tau)\right\rangle=\hat{U}(\tau)\left|\Psi(0)\right\rangle=e^{-\frac{i}{\hbar}\hat{H}\tau}\left|\Psi(0)\right\rangle. \tag{3.5}\]

After weakly interacting, the particle and pointer are entangled. Hence, only the composite wave function -- _not_ the reduced state of the pointer -- evolves unitarily during time \(\tau\). The unitary operator \(\hat{U}(\tau):=e^{-\frac{i}{\hbar}\hat{H}\tau}\) only acts on \(x\) (not on \(y\)). Due to this evolution, the post-selected position \(\mathbf{x}\) on the strong screen will in general differ from the weak value \(\langle\hat{x}(\tau)\rangle_{w}\), obtained from averaging the conditional distribution of the pointer of the weak screen. The procedure is depicted in Figure 3. On both screens the wave function is slightly washed out. It evidently differs from an undisturbed state (i.e. in the absence of the weak screen). To obtain the two position values -- the weak and the strong one -- strong measurements are now performed both at the weak and the strong screen (i.e. on the pointer variable and on the target system). For each position outcome \(x\) at the strong screen, let's select a subensemble. For any such subensemble, we then read out the statistical distribution of the position measurement outcomes at the weak screen. We have thus assembled all three observable quantities needed for Wiseman's operationally defined velocity 3.3: the time \(\tau\) that elapsed between the two measurements, the positions \(x\) (obtained as values at the strong screen), and the average value of all positions of the subensemble, associated with (i.e. post-selected for) \(x\). This may now be done for different positions \(x\) on the strong screen. To that end, move the screens to different locations; there repeat the measurements.
With this method one can eventually map the velocity field, for a sufficiently large number of measurements. We'll defer the discussion of how to construe this result to the next section. For now, let's rest content with stating it as a calculational fact, suspending any further conclusions. Kocsis et al. have indeed performed an experiment of a similar kind, using weak measurements of momentum. Their result, depicted in Figure 2, qualitatively reproduces the trajectories of standard dBBT. (We'll return to this experiment and how to understand it in §4. Here, we mention it primarily to convey an impression of the qualitative features of Wiseman's operational velocity, when experimentally realised.)

Figure 2: A weak velocity measurement for photons allows the reconstruction of trajectories, qualitatively identical to those of particles in standard dBBT. Particle trajectories in a double-slit experiment performed by Kocsis et al. 2011.

Moreover, it can be shown (cf. §6.3) that weak velocity measurements are measurements of the gradient of the phase of the wave function. Thus, they coincide with the definition of the standard Bohmian velocities in the guidance equation

\[v=\frac{\hbar}{m}\nabla S, \tag{3.6}\]

where \(S\) is the phase of the wave function, \(\psi(x)=|\psi|e^{iS(x)}\). Notice that for this, only the standard quantum-mechanical formalism has been utilised. Therefore, we may conclude that -- based solely on standard QM -- weak velocity measurements permit experimental access to the gradient of the wave function's phase. Next, we'll ponder whether commitment to _further_, generic and supposedly mild interpretative choices (viz. the adoption of a de Broglie-Bohmian framework) might grant us a peep into an allegedly deeper reality, veiled under this standard quantum mechanical interpretation.

## 4 Why weak velocity measurements _do not_ measure velocities

Suggestive as these results are, we will now show that such measurements could not provide direct experimental evidence displaying the shape of particle trajectories, even if it is assumed that some deterministic particle trajectories exist. They cannot, that is, go any way to experimentally resolving the underdetermination in putative dBBT guidance equations mentioned previously. First (§4.1), we'll analyse the relation between Wiseman's operationally defined velocity Equation 3.3 and the particle's actual velocity. In particular, we'll show that a strong assumption is required that would render it question-begging to employ weak velocity measurements in order to infer the particles' actual velocities. This analysis will subsequently allow us to critically evaluate two stances regarding the significance of weak velocity values for dBBT -- Durr et al.'s portrayal of weak velocity measurements as allegedly "genuine" measurements (§4.2), and a view of weak velocity measurements as _non_-empirical support of standard dBBT (§4.3).

### When do weak and actual velocities coincide?

Here, we'll address the question of whether -- or rather: _when_ -- weak velocities coincide with the particles' actual velocities, assuming that they exist. That is, we'll explicate the conditions under which weak velocity measurements count as reliable. That, we'll argue, turns out to _presuppose_ standard dBBT. In the following, \(x\) and \(y\) will denote the position variables of the individual particles to be measured, and the measurement apparatus, respectively. For simplicity, we'll work in one dimension only.
Let the particles be prepared in the initial state \[\left|\psi\right\rangle=\int dx\ \psi(x)\left|x\right\rangle. \tag{4.1}\] Furthermore, let the pointer device (i.e. the weak screen of the double-slit version of weak measurements in SS3.2) be prepared in the initial state given by a Gaussian with large spread \(\sigma\), centred around \(0\): \[\left|\varphi\right\rangle=\int dy\ \varphi(y)\left|y\right\rangle=N\int dye^{- \frac{y^{2}}{4\sigma^{2}}}\left|y\right\rangle, \tag{4.2}\] where \(N\) is a suitable normalization factor. Together, the particle and the pointer form the compound system with the joint initial state \[\left|\psi\right\rangle\otimes\left|\varphi\right\rangle=\int dxdy\ \psi(x) \varphi(y)\left|x\right\rangle\otimes\left|y\right\rangle. \tag{4.3}\] Now consider the first -- the weak -- measurement process. It consists of an interaction between the particle and the pointer. Upon completion of this process (say at \(t=0\)), the compound system ends up in the entangled state \[\left|\Psi(x,y,0)\right\rangle=\int dxdy\ \psi(x)\varphi(y-x)\left|x\right\rangle \otimes\left|y\right\rangle. \tag{4.4}\] The probability distribution for the pointer variable \(y\), _given_ some position \(X\) of the particle, is therefore: \[\rho_{X}(y)=\frac{|\Psi_{0}(X,y)|^{2}}{|\psi(X)|^{2}}=|\varphi(y-X)|^{2}. \tag{4.5}\] This probability density determines the expectation value \[\mathbb{E}(y|x=X)=\int dy\ y\rho_{X}(y)=X. \tag{4.6}\] That is, the mean value of the pointer distribution, conditional on the particle occupying position \(X\), coincides with that position. This underwrites the following counterfactual: **(C\({}_{\mathbf{0}}\))**: _If one were to perform an ordinary (strong) position measurement on the particles immediately after the weak interaction, the expectation value would yield the actual position of the particle._ Via \(\mathbb{E}(y|x=X)\), the particle position thus is empirically accessible through the statistics of large ensembles of identically prepared particles from which we cull post-selected outcomes \(x=X\). This thought is further exploited in the final steps of Wiseman's procedure. In the foregoing considerations, the strong measurement was performed immediately upon the weak one. Instead, we'll now allow for a small delay. That is, after the particle and the pointer have (weakly) interacted, the total system evolves freely for some small, but finite time \(\tau\). Its state then is \[\left|\Psi(x,y,\tau)\right\rangle=e^{-\frac{i}{\hbar}\tau\hat{H}_{0}}\left| \Psi(x,y,0)\right\rangle, \tag{4.7}\] where \(\hat{H}_{0}\) denotes the system's free Hamiltonian. Eventually, we perform a strong measurement of the particle's position \(X_{\tau}\) at \(t=\tau\). (The strong coupling between the measurement device and the particle enables a precise detection of the latter's actual position.) We thus get the expectation value for the pointer variable, conditional on the particle occupying the position \(X_{\tau}\) at \(t=\tau\): \[\mathbb{E}(y|x=X_{\tau})=\int dy\ y|\Psi(X_{\tau},y,\tau)|^{2}. \tag{4.8}\] Through the statistics of a sub-ensemble of particles whose strong position measurements yielded \(X_{\tau}\), this expectation value is empirically accessible. In _analogy_ to Equation 4.6, let's define the position: \[X_{0}:=\mathbb{E}(y|x=X_{\tau}). \tag{4.9}\] Combined with the particle position \(X_{\tau}\), obtained from the strong measurement at \(t=\tau\), it thus appears as if we have access to particle positions at two successive moments. 
Using Equation 4.9, the associated displacement is \[X_{\tau}-X_{0}=X_{\tau}-\mathbb{E}(y|x=X_{\tau}) \tag{4.10}\] Let's grant one can make it plausible that the particles' trajectories are differentiable. Then, the displacement (Equation 4.10) gives rise to the velocity field \[v(X_{0}):=\lim_{\tau\to 0}\frac{1}{\tau}(X_{\tau}-\mathbb{E}(y|x=X_{\tau})). \tag{4.11}\] Note that all terms on the r.h.s. of Equation 4.11 are observable. (Hence, presumably, Wiseman's labelling 4.11 as an "operational definition".) In conclusion, it seems, via the statistics of an experimental setup implementing Wiseman's procedure, we are able to empirically probe this velocity field. But what does this velocity field signify? It's tempting to identify it with the particles' actual velocities. That is, should this be true, the flow of Equation 4.11 generates the particles' trajectories (assumed to be deterministic and differentiable). Is this identification justified? By _defining_ an \(X_{0}\stackrel{{ def}}{{=}}\mathbb{E}(y|x=X_{\tau})\) via Equation 4.9, our notation certainly suggests so. Let's indeed assume that this is correct. We'll dub this the "Correspondence Assumption" (COR). That is, suppose that the actual particle position \(X_{\tau}\) at \(t=\tau\) is connected with the earlier particle position \(x(0)=X_{0}=\hat{T}_{-\tau}X_{\tau}\) at \(t=0\), where \(\hat{T}_{-\tau}\) denotes the shift operator that backwards-evolves particle positions by \(\tau\). (In other words: for arbitrary initial positions, \(\hat{T}_{\tau}\) supplies the full trajectory.) Then, according to (COR), the expectation value (4.9) corresponds to the particles' position at \(t=0\). For post-selection of subensembles with \(x(\tau)=X_{\tau}\), (COR) thus takes the form (in the limit of large spread \(\sigma\)): \[\textbf{(COR)}\ \mathbb{E}(y|x(\tau)=X_{\tau})=\hat{T}_{-\tau}X_{\tau}. \tag{4.12}\] In other words, (COR) implies the counterfactual: * _If one were to perform a strong position measurement at \(t=\tau\) (with the weak interaction taking place at \(t=0\)), yielding the particles' position at \(x(\tau)=X_{\tau}\), the weak value would be directly correlated with the particles' earlier position \(\hat{T}_{-\tau}X_{\tau}\). That is, upon a strong measurement at \(t=\tau\), the expectation value would reveal the particles' true positions:_ \[\mathbb{E}(y|x(\tau)=X_{\tau})=\hat{T}_{-\tau}X_{\tau}.\] (4.13) On (COR), the weak value thus gives the particle's _actual_ position at the weak screen: the expectation value on the l.h.s. is reliably correlated with the particle's earlier positions. But most importantly, this is an _if and only if condition_: If (COR) is satisfied, then we recover the actual position, but if it is not, we don't. As a result one ought to have to assume that (COR) is true for weak position measurements to yield actual particle positions. Thereby, any set of data compatible with QM appears to corroborate standard dBBT: _given (COR)_, weak velocity measurements yield standard dBBT's velocity field. It thus seems as if standard dBBT's empirical underdetermination has been overcome. Such an apparent possibility of confirming standard dBBT would be remarkable. It crucially hinges, however, on the soundness of (COR). Why believe that it's true? We'll first refute a prima facie cogent argument for (COR). We'll then give a more general argument why (COR) is generically false. This will eventually be illustrated with a simple counterexample. 
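The following small grid simulation (our own construction, with \(\hbar=m=1\), a free particle Hamiltonian and arbitrarily chosen parameters) makes the procedure concrete: it prepares the entangled state of Equation 4.4 with a wide Gaussian pointer, evolves it freely in \(x\) for a short time \(\tau\) as in Equation 4.7, computes the conditional pointer mean of Equation 4.8 for one post-selected \(X_{\tau}\), and evaluates the velocity of Equation 4.11. The number it returns approximates the phase gradient \((\hbar/m)\partial_{x}S\); as argued in this section, nothing in this output by itself certifies that it is a particle's actual velocity.

```python
import numpy as np

# Sketch (ours, not the authors' code): grid model of Eqs. (4.4)-(4.11).
hbar = m = 1.0
nx, ny = 1024, 256
x = np.linspace(-20, 20, nx, endpoint=False)
y = np.linspace(-40, 40, ny, endpoint=False)
kx = 2*np.pi*np.fft.fftfreq(nx, d=x[1]-x[0])

sigma = 5.0                                  # wide ("weak") pointer, Eq. (4.2)
psi = np.exp(-(x - 0.0)**2/4.0 + 1j*1.0*x)   # particle packet with momentum ~ 1
psi /= np.sqrt(np.trapz(abs(psi)**2, x))
X, Y = np.meshgrid(x, y, indexing="ij")
Psi0 = psi[:, None] * np.exp(-(Y - X)**2/(4*sigma**2))   # Eq. (4.4)

tau = 0.05                                   # short free evolution, Eq. (4.7)
Psit = np.fft.ifft(np.exp(-1j*hbar*kx**2*tau/(2*m))[:, None]
                   * np.fft.fft(Psi0, axis=0), axis=0)

ix = np.argmin(abs(x - 1.0))                 # post-select X_tau near 1.0
w = abs(Psit[ix, :])**2
E_y = np.sum(y*w)/np.sum(w)                  # Eq. (4.8), conditional pointer mean
v_op = (x[ix] - E_y)/tau                     # Eq. (4.11)

S = np.unwrap(np.angle(psi))
v_phase = (hbar/m)*np.gradient(S, x)[ix]     # phase-gradient velocity at X_tau
print(f"operational velocity {v_op:.3f}  vs  phase gradient {v_phase:.3f}")
```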
Prima facie, (COR) looks like a plausible extrapolation of a strong measurement immediately after the weak interaction (i.e. at \(t=0\)). This idea may be developed in three steps. First, (COR) indeed holds in the limit \(\tau\to 0^{+}\). Next, in a deterministic world, it would seem that \[\mathbb{E}(y|x(\tau)=\hat{T}_{\tau}\kappa)=\mathbb{E}(y|x(0)=\kappa), \tag{4.14}\] where \(\kappa\in\mathbb{R}\) denotes a position. By appeal to \(C_{0}\), this would then yield \[\mathbb{E}(y|x(\tau)=\hat{T}_{\tau}\kappa)=\mathbb{E}(y|x(0)=\kappa)=\kappa, \tag{4.15}\] as desired. At first blush, this argument looks watertight. Its first step ensues from the standard rules of QM (see Equation 4.6). Its third step, too, seems innocuous: only a few lines earlier, we derived (\(C_{0}\)) from the standard QM formalism. Let's therefore throw a closer glance at the second step. It's convenient to cast it in terms of the probability densities, associated with the expectation values: \[\mathbb{P}(y|x(\tau)=\hat{T}_{\tau}\kappa)=\mathbb{P}(y|x(0)=\kappa). \tag{4.16}\] Prima facie, given determinism, this identity stands to reason: all else being equal, the probability of craving a biscuit around 5 pm, given our momentary glucose levels, isn't altered by conditioning on our glucose levels a few minutes earlier (provided that they evolve deterministically). Determinism ensures that those physiological states (and _only_ they) evolve into the physiological states, considered initially. By the same token, one might think, the events \(\{(x(\tau),y)\in\mathbb{R}\times\mathbb{R}:x(\tau)=\hat{T}_{\tau}\kappa\}\) and \(\{(x(0),y)\in\mathbb{R}\times\mathbb{R}:x(0)=\kappa\}\) refer to the same events of our probability space (i.e. the same diachronically identical configurations, as it were, merely pointed to via (inessential) different time indices) and _therefore_ are assigned the same probability measure. Yet, this inference is illicit. While it's true that \(\{(x(\tau),y)\in\mathbb{R}\times\mathbb{R}:x(\tau)=\hat{T}_{\tau}\kappa\}\) and \(\{(x(0),y)\in\mathbb{R}\times\mathbb{R}:x(0)=\kappa\}\) contains the same pointer configurations, this _doesn't_ imply that \(\mathbb{P}(y|x(\tau)=\hat{T}_{\tau}\kappa)=\mathbb{P}(y|x(0)=\kappa)\). For this to hold, the conditional probabilities -- as defined via post-selection -- on both sides must be well-defined. That is, \[\frac{\mathbb{P}(y\&x(\tau)=\hat{T}_{\tau}\kappa)}{\mathbb{P}(x(\tau)=\hat{T}_ {\tau}\kappa)}\text{ and }\frac{\mathbb{P}(y\&x(0)=\kappa)}{\mathbb{P}(x(0)=\kappa)} \tag{4.17}\] must exist (and coincide). In classical Statistical Mechanics, one may take this for granted. In a _quantum_ context, entanglement complicates the situation: it compromises the ascription of probability measures to certain events. One must heed the time with respect to which the assigned probability measure is defined. This is the case with weak velocity measurements. Recall that in Wiseman's measurement protocol, the strong measurement is only performed at \(t=\tau\). This precludes defining the second term in 4.17! That is, no strong measurement is performed -- and no attendant "effective collapse" of the wave function occurs -- at an _earlier_ time (viz. at \(t=0\)). As a result, at the time of the weak interaction (\(t=0\)), the wave function of the pointer and that of particles are entangled. 
That means, however, that we _can't_ naively assign the event of any particular particle position at \(t=0\) an objective, individual probability measure9; that would require post-selection at \(t=0\). Only the entangled pointer-_cum_-particle system as whole has a physically grounded, objective probability measure. Footnote 9: In this regard, one should bear in mind that, on the mainstream view of dBBT (espoused by DGZ), probabilistic statements about subsystems (construed in terms of typicality) should be _derived_ from dBBT’s axioms of §2 (cf. Teufel and Tumulka 2005, Chapter 4, 9; Lazarovici and Reichert 2015; Lazarovici et al. 2018, Section 3). This follows from the fact that \(\mathbb{P}(x(0)=\kappa)\) is obtained from the pointer-_cum_-particle system's reduced density matrix (i.e. by partially tracing out the pointer's degrees of freedom). But this transition from the density matrix of a pure state to the reduced density matrix of an "improper mixture" (d'Espagnat, 2018, Chapter 7) lacks objective-physical justification (see, e.g., Mittelstaedt 2004, Chapter 3-4). Contrast that with the situation of \(\frac{\mathbb{P}(y\&x(\tau)=X_{\tau})}{\mathbb{P}(x(\tau)=X_{\tau})}\): this _is_ well-defined via post-selection. That is, due to the "effective collapse" (see, e.g., Durr and Teufel 2009, Chapter 9.2), induced by the strong measurement at \(t=\tau\), the event \(x(\tau)=X_{\tau}\)_can_ be assigned a well-defined probability measure. In d'Espagnat's terminology, we are dealing with a "proper mixture". In short: Owing to the pointer's entanglement with the particle, determinism _doesn't_ imply \(\mathbb{E}(y|x(\tau)=\hat{T}_{\tau}\kappa)=\mathbb{E}(y|x(0)=\kappa)\). The initially auspicious argument for (COR) therefore fails. From its failure, we gain also a wider-reaching insight: unless (at \(t=0\)) the strong measurement is _actually_ performed (unlike in Wiseman's measurement protocol), the conditional probabilities \(\mathbb{P}(y|x(0)=\kappa)\) (or equivalently: their associated expectation values) aren't objectively defined -- _if_ one adopts their usual definition in terms of post-selection. Strictly speaking, the _un_realised measurement renders \(\mathbb{P}(y|x(0)=\kappa)\), thus defined, meaningless.10 Footnote 10: In this proxy, one might descry a possible loophole. Why not define the prerequisite probability measures \(\mathbb{P}(y\&x(0)=\kappa)\) and \(\mathbb{P}(y|x(0)=\kappa)\)_indirectly_ — via their respective _later_ states? That is, instead of the direct definition via post-selection at \(t=0\), one might _stipulate_ the following probability measures (cf. Steinberg 1995) No _independent_ reasons have been given so far for believing that (COR) is true, though. (Conversely, the lack of independent reasons for standard dBBT (rather than any of its non-standard variants), especially in light of its empirical underdetermination, was our major motivation for applying weak velocities in the context of de Broglie-Bohmian theories.) Consequently, counterexamples to (COR) abound -- and are perfectly familiar: _any_ non-standard variant of dBBT of the type of Equation 2.5 (i.e. with non-vanishing, divergence-free vector field \(\mathbf{j}\)). In them, the particle's trajectory generically crosses the weak screen at a point _distinct_ from what the weak velocity measurements would make us believe. Figure 3 illustrates this. 
(Wiseman's operationally defined velocity 3.3 uniquely picks out a guidance equation -- that of standard dBBT. Conversely, suppose standard dBBT to be true. A weak velocity measurement then discloses the actual particle velocities. Thus, \(C_{t}\) holds true.)
In conclusion: Here, we argued that a particle's weak velocity coincides with its actual velocity (provided one is willing to attribute deterministic, differentiable paths to the particles), if and only if standard dBBT is true. But this coincidence is a sine qua non for deploying weak velocity measurements in _support_ of standard dBBT. To attempt to do so -- absent independent arguments for the reliability of weak velocity measurements -- one would thus incur circularity. This analysis permits us to evaluate two verdicts on the significance of weak velocity measurements for standard dBBT, found in the literature. Let's start with Durr, Goldstein and Zanghi's claim that they enable genuine measurements.

### Weak measurements as genuine measurements?

The foregoing analysis sheds light on a recent claim by Durr, Goldstein, and Zanghi (2009). These authors (henceforth abbreviated as "DGZ") aver that Wiseman's measurement protocol for weak velocities allows "in a reasonable sense, a _genuine_ measurement of velocity" in standard dBBT (ibid., p. 1025, DGZ's emphasis). Such a statement, we maintain, is misleading. DGZ themselves identify a condition as crucial for their claim. This identification, too, we deem the source of further potential confusion.

Figure 3: The weak measurement procedure for a given post-selected state \(x(\tau)=X_{\tau}\). The weak value is obtained from the distribution on the weak screen. When the velocity field is that of standard dBBT (\(\mathbf{j}=0\)), the actual position of the particle \(x(0)\) matches the weak value \(x_{w}\). For an alternative guidance equation (\(\mathbf{j}\neq 0\)), it doesn't: the particle crosses the weak screen at a point \(x^{\prime}(0)\), other than the weak value. This shows that depending on which guidance equation one chooses, the weak value needn't yield the actual position of the particle at time \(0\).

The crucial -- but in DGZ's account _tacit_ -- condition for weak velocity measurements to be reliable, as we saw in the previous sections, is (COR). But (COR) is equivalent to assuming the standard form of the guidance equation. The essential equivalence between (COR) and dBBT's standard guidance equation impinges upon the significance of weak measurements for dBBT: whether we regard weak velocity measurements as enabling genuine measurements of the particle's actual velocity is essentially equivalent to an _antecedent_ commitment to standard dBBT. Pace DGZ, this curtails the significance of weak velocities as genuine. Yet, albeit misplaced in the context of weak measurements, DGZ's (misleadingly) identified crucial condition might open up a potentially illuminating perspective on standard dBBT. DGZ assert that weak velocity measurements, as realised by Wiseman's measurement protocol, constitute real measurements in standard dBBT (cf. Durr et al. 2004, Section 3.7). What is more, in his authoritative review of dBBT, Goldstein (2017, Section 4) writes: "In fact, quite recently Kocsis et al. (2011) have used weak measurements to reconstruct the trajectories for single photons 'as they undergo two-slit interference, finding those predicted in the Bohm-de Broglie interpretation of quantum mechanics' " (cf. Durr and Lazarovici 2018, p. 142 for a similar statement). DGZ are aware of the fact that such a claim needs an additional assumption; they (as we'll show) misidentify that "crucial condition" (Durr et al., 2009, p. 1026, 1030).
Before adverting to DGZ's declaration of weak velocity measurements as genuine, a repudiation is apposite of the claim that such weak velocity measurements have actually been performed, _in accordance with dBBT's predictions._ Figure 2 displays the weak velocities measurements ascertained in Kocsis et al.'s double-slit experiment. Indeed, they qualitatively tally with the trajectories of standard dBBT (cf., for instance, Figure 5.7 in Holland 1995, p. 184). Still, _nothing_ immediately follows from that regarding the status of standard dBBT (see also Flack and Hiley 2014; Flack and Hiley 2016; Bricmont 2016, p. 181)! Kocsis et al.'s experiment has been performed for (massless) _photons_. Standard dBBT, however, is a non-relativistic quantum theory for massive particles: as such, it can't handle photons.11 Kocsis et al.'s experiment hence has no direct bearing on dBBT's status.12 Footnote 11: The treatment of photons within field-theoretic _extensions_ of dBBT, capable of dealing with photons (or bosons, more generally), is a delicate matter, outside the present paper’s ambit. We refer the interested reader to e.g. Holland 1995, Chapter 11 and Dür et al. 2012, Chapter 10 (also for further references). Footnote 12: Rather than the trajectories of _individual_ photons, Flack and Hiley 2014 and Flack and Hiley 2016 have argued that Kocsis et al.’s experiments measure mean momentum flow lines. This interpretation (as we saw in Equation 6.14) has a counterpart in weak velocity measurements of the electrons of the present setup: per se, the weak velocity measurements only allow experimental access to the gradient of the wave function’s phase. (This view on weak velocity values remains neutral, though, vis-à-vis any interpretation of the wave function. In particular, it’s not necessarily committed to a statistical/ensemble interpretation.) Now to DGZ's main claim, as we understand it: that for a coherent application of weak velocity measurements to the Bohmian framework as reliable velocity measurements, one needs an assumption on the disturbance of actual velocities is needed. Only standard dBBT, so the story goes, has this feature. In turn it appears that weak velocity measurements can constitute genuine measurements of the particle's actual velocities only in standard dBBT. DGZ's considerations seem to _start from_ the reliability of weak velocity measurements; they are predicated on (COR). DGZ (correctly) state that only standard dBBT is consistent with that. As the "crucial condition", responsible for that result, they identify a characteristic feature of standard dBBT's velocity field. **(SPE)**: _Whenever the particle-cum-pointer compound system has the form_ \(\psi(x)\otimes\phi(y-x)\)_, the particle's velocity field_ \(v\) _(conceived of as a function of the compound system's wave function_ \(\psi\otimes\phi\)_) is supposed to depend only on the particle's wave function_ \(\phi\)_:_ \(v[\psi\otimes\phi]=v[\phi]\)_._ We'll dub this condition "separability of particle evolution" (SPE). It uniquely singles out standard dBBT (Durr et al., 2009, Section 4). DGZ's mathematical proof of this latter claim is beyond dispute. Their identification of (SPE) as a _physically_ essential condition, however, is wrong-headed: (SPE) in fact plays no obvious role in the attempt to exploit weak velocity measurements for standard dBBT (see SS3 and SS4.1): nowhere is it invoked explicitly. 
Moreover, it remains elusive how (SPE) _could_ enter that analysis: (SPE) is an _exact_ equality, postulated to hold, whenever the composite particle-pointer wave function is factorisable. By contrast, DGZ's decisive equations (viz. (21) and (22) in their paper) are only approximations, valid at \(t=\tau\). Their terms linear in \(\tau\)_don't_ take a factorisable form (nor do they vanish). Not even at \(t=0\) is the pointer-particle wave function factorisable. Hence, (SPE) doesn't seem to be applicable from the outset. To call (SPE) "crucial" -- understood as _directly_ responsible -- for the reliability of weak velocity measurements in dBBT muddies the waters: _it's solely in virtue of (SPE)'s essential equivalence with standard dBBT_ that (SPE) is relevant at all. That (SPE) singles out standard dBBT is salient of the (mathematical) form of the standard guidance equation: the latter is uniquely characterised by the factorisation of velocities at \(t=0\), as asserted by (SPE). As a result, only because (COR) presupposes standard dBBT, _and_ because the latter is essentially equivalent to (SPE) (recall our remark at the end of SS4.1), is (SPE) "crucial" -- in the sense of necessarily satisfied for (COR) to hold. In short: (COR), (SPE) and standard dBBT's guidance equation are essentially equivalent. That is: \[(COR)\wedge(DIF)\wedge(DET) \Longleftrightarrow\text{dBBT's standard guidance equation}\] \[\Longleftrightarrow(SPE)\wedge(DIF)\wedge(DET),\] where (DET) and (DIF) denote the assumption of deterministic and differentiable particle trajectories, respectively. For weak velocity measurements to reveal the particles' actual trajectories (assuming determinism and differentiability, that is) -- i.e. for weak velocity measurements to be reliable -- (COR) _not_ (SPE) -- is the crucial condition that must be satisfied: without it, the counterfactual \(C_{t}\) no longer holds (recall 4.1); the particle's later positions can't be inferred from the weak measurements. In particular given (COR)'s essential equivalence with standard dBBT or (SPE), this means that if weak velocity measurements are reliable, (SPE) needn't be assumed separately: it's implied by (COR). We thus reject DGZ's identification of (SPE) as the crucial condition for the reliability of weak measurements. Pace DGZ, one might hence baulk at calling them genuine in a sufficiently _robust_ sense. Unless _independent_ reasons for (SPE), (COR) or standard dBBT are forthcoming, weak velocity measurements lack epistemic significance for gauging the status of dBBT. The analysis of weak measurements in a de Broglie-Bohmian framework _doesn't_ rely on (SPE). DGZ are right, however, in observing that if standard dBBT is true, weak measurements are reliable (i.e. weak position values and actual position values coincide). DGZ's purely mathematical result -- the equivalence of (SPE) and standard dBBT -- hints at an alluring possibility (completely independently of weak measurements): it _might_ serve as a prima facie interesting avenue for justifying (or, at least, motivating) standard dBBT. Underlying (SPE) seems to be the hunch that for particle-pointer systems with separable (factorisable) quantum states, the particle is supposed to be guided exclusively by the particle's wave function -- not by that of the pointer. 
More generally, due to (SPE), whenever a quantum system is prepared as separable, the dynamics for the particles of one subsystem doesn't depend on the quantum state of other subsystem(s).13 Footnote 13: This is somewhat reminiscent of so-called Preparation Independence, a key assumption in the Pusey-Barrett-Rudolph Theorem (see e.g. Leifer 2014, sect. 7; specifically for the theorem in the context of standard dBBT, see Drezet 2014). Roughly speaking, Preparation Independence asserts that in hidden-variable theories, the so-called "ontic states" (i.e. the states represented by the two systems' "hidden" variables) should be statistically independent, if their joint quantum state is separable. For hidden-variable theories, this looks like a natural desideratum: it expresses how the separable systems' independence at the quantum level (cf. e.g. Howard 1989, 1992) percolates to (i.e. constrains) the more fundamental level of the hidden-variables (cf. Leifer 2014, sect. 7.3). As a desideratum, (SPE) implements the expectation that the statistical independence at the quantum level percolates to the level of the behaviour (i.e. dynamics) of the hidden-variables: whenever the quantum states of a composite system \(A\&B\) are independent, the dynamics of the particles constituting \(A\) shouldn't be affected by \(B\)'s quantum state. One may deem this a plausible (albeit defeasible) heuristic principle for the construction of hidden-variable theories: it aligns the statistical independence of the known (empirically accessible) realm of the quantum formalism (for separable quantum states) with the independence of the unknown (empirically inaccessible) realm of the (putatively) more fundamental hidden-variables' dynamics. A dynamics respecting this alignment, one might feel, "naturally" explains the statistical independence at the coarse-grained quantum level. On the other hand, one may well query the status of (SPE). The separability of quantum states is arguably related to their individuation (see e.g. Howard 1985, 1989; Brown 2005, Appendix B3): for composite systems with separable quantum states, subsystems have distinct quantum states. But why deem the individuation of quantum states -- usually construed in this context as encoding statistical, _coarse-grained_ properties to which our empirical knowledge seems limited -- relevant for a constraint on the (putatively more) fundamental particle dynamics? Even if the particle and the pointer possess distinct (individual) quantum states, why should it follow that the particle's dynamics should depend only on the particle's wave function? What might seem to suggest that is that (SPE) encodes a form of locality. (_Standard_ (Bell-)locality forbids an action-at-a-distance. The kind of locality enshrined in (SPE) forbids that a particle's dynamics depends on the pointer's quantum state, even if the joint quantum state of the particle and the pointer is separable.) But standard non-locality is a manifest, distinctive feature of dBBT. The type of locality that (SPE) asserts doesn't restore standard locality. What then is it supposed to achieve? We leave the prospects of (SPE) as a potentially promising motivation for standard dBBT to future inquiry. This section afforded two main lessons. 1. Standard dBBT is mathematically uniquely characterised by a factorisation condition on the velocity field. We argued that DGZ's identification of that condition as "crucial" for the reliability of weak measurements was misleading. 2.
Weak velocities coincide with the particle's actual velocities, if and only if standard dBBT is true. It thus remains questionable what argument (if any) weak velocity measurements provide in support of standard Bohmian trajectories or any other Bohmian theory. On their own, weak velocity measurements thus don't provide any empirical support for standard dBBT. What about _non_-empirical inferential support, though? ### Non-empirical support for dBBT? The main result of Wiseman's original paper can be read as a conditional claim: _if_ one adopts his operationally defined velocity, _and_ assumes deterministic, differential particle trajectories, the latter is uniquely determined as that of standard dBBT; on this reading, Wiseman remains neutral vis-a-vis this claim's premises -- whether they are plausibly satisfied (or not). Stated thus, Wiseman's stance is impeccable. More exciting, however, would be the prospect of learning something novel about the status of standard dBBT from weak measurements (granting certain background assumptions). We'll now examine such a stronger interpretation of Wiseman's result: as a non-empirical justification of standard dBBT. We flesh out three possible variants of such an argument.14 Footnote 14: This may be thought of as an eliminative induction (see e.g. Norton 1995), where one eliminates from a universe of candidate theories all but one with the help of background assumptions and principles. The starting point of the envisioned reasoning will be two tenets, explicitly endorsed by Wiseman: 1. One should construe the weak value in Wiseman's weak measurement protocol of SS3.3 as the average velocity of a large ensemble of particles (Wiseman, 2007, sect. 3). 2. Albeit not per se referring to individual particles, this _statistical_ property provides a "justification for [standard dBBT's] law of motion [i.e. the standard guidance equation]" (ibid., p. 2). According to tenet (1), the weak value, obtained in Wiseman's setup, by itself corresponds to a real property only of an ensemble of particles -- rather than one naively ascribable to the individual particles: "Thus strictly the weak value [...] should be interpreted [...] only as the mean velocity in configuration space -- this noise could be masking variations in the velocity between individual systems that have the same Bohmian configuration \(x\) at time \(t\)." (Wiseman 2007, p. 5). One of the premises in the conditional claim is determinism. With that assumption in place, weak values within a de Broglie-Bohmian framework are plausibly interpreted as first and foremost statistical properties of ensembles, as asserted in (1): formally, weak values are (normalised) transition amplitudes (cf. Kastner 2017; pace, Vaidman 1996). Hence, the usual interpretation of probability amplitudes within dBBT as statistical (ensemble) properties applies (see e.g. Holland 1995, Chapter 3.8; Bohm and Hiley 2006, Chapter 9.3).15 Footnote 15: Outside of the framework of dBBT — in particular outside of epistemic interpretations of the Born Rule-based quantum formalism — the interpretation of weak values as ensemble properties is more tenuous (see Matzkin 2019). Here, we’ll set aside such quibbles. Tenet (2) purports that _in virtue of this statistical (ensemble) property_ dBBT's standard form "is preferred over all other on physical grounds" (Wiseman 2007, p. 12). 
That is, although other velocity fields generate the same (statistically-empirically accessible) mean velocity, we ought to believe that the standard velocity field is true -- rather than any of its alternatives: for Wiseman, (2) serves as a non-empirical rule of inference16, "justifying [dBBT's] foundations" (ibid., p. 12). Footnote 16: The intended form of non-empirical support is distinct from Dawid’s (2013; 2019) ideas of non-empirical confirmation. Regardless of how convincing one finds Dawid’s proposal, it doesn’t apply to the present case. As Wiseman reiterates, no experiment can discriminate between dBBT's standard velocity field and alternative choices. How then is the envisaged non-empirical justification supposed to work? What undergirds (2)? Three strategies (intimated to some extent by Wiseman and his commentators) spring to mind: _(A)_ some variant of operationalism, _(B)_ simplicity and/or parsimony, and _(C)_ some variant of inference to the best explanation. _(A)_ The first invokes some form of operationalism in the spirit of Bridgman 1927. In its crudest form, it demands that all theoretical quantities be operationalisable: there must exist suitable measurement instructions for them. Yet, operationalism "[...] is nowadays commonly regarded as an extreme and outmoded position" (Chang 2009, also for a compilation of the arguments against operationalism). We'll therefore not discuss it further. Perhaps an attenuated form fares better -- one according to which (ceteris paribus) it's merely _desirable_ that theoretical quantities be operationalisable. Wiseman seems to cherish the desideratum that "the [Bohmian particle] dynamics are deterministic, and that the velocity-field of the [hidden variable, i.e. the particle positions] should be naively observable [...]". But what would buttress such a desideratum? In particular, why believe that a theory that satisfies it is more likely to be true than empirically equivalent rival theories that don't? _(B)_ A second strategy (expressly disavowed by Wiseman) might turn on simplicity. Wiseman's operational definition, on this line of thought, should be regarded as distinguished -- as particularly simple. Even if we set aside both Wiseman's concern that "simplicity is not a property that can be rigorously defined" (Wiseman, 2007, p. 9), and the problematic assumption that simplicity is truth-conductive, an appeal to simplicity isn't promising: simplicity and postulating that individual particle trajectories coincide with their statistical averages are unrelated. Although intuitively it may prima facie appear _simple_, if the individual trajectories are chosen so as to coincide with their statistical averages, the precise sense of simplicity turns out to be elusive: neither the theory's qualitative nor its qualitative parsimony are affected by that choice. That is, neither new or additional kinds/types of entities are introduced or eliminated in the theory's ontology, nor is the overall number of entities multiplied or reduced. To appeal to parsimony would likewise be of no avail: neither in terms of quantitative (i.e. with respect to numbers of individual entities postulated) nor qualitative (i.e. with respect to numbers of types or kinds postulated) parsimony does such a postulate seem privileged. _(C)_ A third attempt to defend (2) might appeal to an Inference to the Best Explanation (IBE) (see e.g. 
Lipton 2003; Bartelborth 2012, Chapter 4): standard dBBT, on this view, provides the best explanation for the observational facts in Wiseman's protocol. Again, let's grant that IBEs are generically justifiable (pace e.g. Van Fraassen 1980, Chapter 2; Van Fraassen 1989, Part II). Yet, in light of the foregoing comments on parsimony and simplicity, it's opaque in which sense standard dBBT could explain (or help us understand) the empirical phenomena in any _better_ way than versions with non-standard velocity fields; both are _equally_ capable of accommodating the empirical phenomena. A variant of this appeal to an IBE17, found in the literature, fixates on Wiseman's emphasis of the allegedly natural character of his proposal to operationally define velocities via weak values: "(Standard dBBT) delivers thus the most natural explanation of the experiments described" (Durr and Lazarovici 2018, p. 145, our translation). Footnote 17: The variant of an IBE is known as "Inference to the _Loveliest_ Explanation" (see Lipton (2003), passim). Instances of the latter are supposed to provide the best _understanding_ of the phenomena. Even if one grants the controversial assumption that loveliness confers also likeliness to be true, the objection in the main text carries through: in no palpable, non-arbitrary way does it strike us as particularly "lucky", if the particles' mean velocity coincides with that of their individual ones. It certainly doesn't _enhance_ our understanding. Three reasons also militate against this view. First, the intended notion of a natural explanation is to our minds vague. Hence, it's difficult to fathom its argumentative force. At best, it seems an aesthetic criterion. As such, its suitability for assessing theories is suspect (cf. Ivanova 2020, 2017; Hossenfelder 2018). Secondly, in light of the highly _unnatural_ consequences of the same reasoning in other contexts, one may well debate whether Wiseman's operationally defined velocity is indeed natural after all. Aharanov and Rohrlich (2005, p. 223) -- presumably against the authors' intentions -- summarise the generic "unnaturalness" of weak values: "weak values offer intuition about a quantum world that is freer than we imagined -- a world in which particles travel faster than light, carry unbounded spin, and have negative kinetic energy." Thirdly, and quite generally, in §4.1 and §4.2 we have seen that in the present case the allegedly natural explanation would at any rate be deceitful: one mustn't _naively_ take it for granted that weak values reveal the actual particle positions. Leavens (2005) draws attention to the fact that under certain experimental circumstances "[...] there is no possibility of the weak value [...] reliably corresponding in general, even on average, to a successfully post-selected particle being found near (the weak value) at time \(t=0\) when the impulsive weak position measurement begins and being found near (the post-selected value) an instant after it ends" (p. 477). The perils of naive (i.e. literal) realism about weak position values are drastically demonstrated in the so-called Three-Box-Paradox (Aharonov and Vaidman 1991; Aharanov and Rohrlich 2005, Chapter 16.5; Maroney 2017). Imagine a particle and three boxes, labelled \(A\), \(B\), and \(C\). Let the particle's initial state be \[|\psi_{i}\rangle=\frac{1}{\sqrt{3}}(|A\rangle+|B\rangle+|C\rangle), \tag{4.21}\] where \(|A\rangle\) denotes the state in which the particle is in box \(A\), and similarly for \(|B\rangle\) and \(|C\rangle\).
For its final state, on which we'll post-select, choose \[|\psi_{f}\rangle=\frac{1}{\sqrt{3}}(|A\rangle+|B\rangle-|C\rangle). \tag{4.22}\] Via the definition of weak values (see 6.2), one then obtains the resulting weak values for the projectors onto state \(i\in A,B,C\), \(\hat{P}_{i}:=\left|i\right\rangle\left\langle i\right|\): \[\langle\hat{P}_{A}\rangle_{w} =1\] \[\langle\hat{P}_{B}\rangle_{w} =1\] \[\langle\hat{P}_{C}\rangle_{w} =-1. \tag{4.23}\] If one were to believe that weak values invariably reveal the real positions of particles, one would have to conclude that box \(C\) contains \(-1\) particle! Within the ontology of dBBT (in any of its variants), this is an absurd conclusion: particles in dBBT either occupy a position or they don't; the respective position projectors take values only in \(\{0,1\}\). Consequently, it's imperative that adherents of dBBT be wary of interpreting weak values as real position values without qualification. Our analyses in SS4.1 and SS4.2 underscore this: the reliability of weak position (or velocity) measurements is a non-trivial (and generically _false_) assumption. In conclusion, our hopes were dashed that the velocity measurement in Wiseman's protocol supports dBBT in any robust, non-empirical sense. Neither the alleged merits of operationalisability per se nor considerations of simplicity or parsimony warrant it. An IBE proved implausible. Unqualified realism about weak position values inevitably conflicts with dBBT's default ontology. We are thus left with at best a considerably weaker position, one close to Bricmont's (2016, p. 136): "[Weak velocity measurements via Wiseman's protocol] (are) not meant to 'prove' that the de-Broglie-Bohm theory is correct', because other theories will make the same predictions, but the result is nevertheless suggestive, because the predictions made here by the de Broglie-Bohm theory is [sic] very natural within that theory [...]." Understanding that suggestiveness and "naturalness" possess scant epistemic or even non-subjective import, we concur. With such a verdict, however, one has relinquished the initial hope that weak measurements per se have a fundamental bearing on whether standard dBBT or one of its alternative versions are true. ## 5 Conclusion Let's recapitulate the findings of this paper. We started from the empirical underdetermination of dBBT's guidance equation. It poses a impediment to insouciant realism about the particles' trajectories, postulated by standard dBBT. We scrutinised whether Wiseman's measurement protocol for weak velocities is able to remedy this underdetermination by empirical or non-empirical means. Our result was negative. We elaborated that the reliability of weak velocities -- the fact that they coincide with the particles' real velocities -- presupposes standard dBBT. For non-standard versions of dBBT, its presumption is generically false. Hence, weak velocity measurements don't qualify as evidence or confirmation in favour of the velocity field, postulated by standard dBBT. Weak velocity measurements thus don't allow for genuine measurements in any robust sense (at least given the present knowledge). Finally, we critiqued an interpretation of Wiseman's measurement protocol as a non-empirical argument for standard dBBT in terms of alleged theoretical virtues. Even if one grants the questionable appeal to some popular virtues, it remains equivocal that in the context of weak velocity measurements standard dBBT actually exemplifies them. 
Most importantly, the 3-Box Paradox demonstrated the dangers of _any_ naive realism about weak _position_ values. In conclusion, our paper has, we hope, elucidated the status of weak velocity measurements in two regards. On the one hand, they are indubitably an interesting application of QM in a novel experimental regime (viz. that of weak pointer-system couplings). They allow us to empirically probe the gradient of the system's wave function -- irrespective of any particular interpretation of the quantum formalism. On the other hand, however, with respect to the significance of weak velocity mea surements, we proffered a deflationary account: per se, weak velocity measurements shed no light on the status of standard dBBT. In particular, on their own, they don't provide any convincing support -- empirical or non-empirical -- for standard dBBT over any of its alternative versions. ## 6 Appendix: Weak measurements and weak values Methods of weak measurement have opened up a flourishing new field of theoretical and experimental developments (see e.g. Aharanov and Rohrlich 2005; Tamir and Cohen 2013; Svensson 2013; Dressel et al. 2014. Broadly speaking, weak measurements generalise strong measurements in that the final states of measured systems need no longer be eigenstates. In this appendix, we'll first provide a concise overview of weak measurements (SS6.1). In particular, we'll expound how they differ from the more familiar strong ones. In SS6.2, we'll introduce notion of a weak value. ### Strong versus weak Strong or ideal measurements are closely related to the conventional interpretation of the Born Rule. Consider a quantum system \(\mathcal{S}\) and a measuring device \(\mathcal{M}\) with Hilbert spaces \(\mathcal{H}_{\mathcal{S}}\) and \(\mathcal{H}_{\mathcal{M}}\), respectively. The Hilbert space of the total system is \(\mathcal{H}=\mathcal{H}_{\mathcal{S}}\otimes\mathcal{H}_{\mathcal{M}}\). The system be in a normalized state \(\left|\psi\right\rangle\) before the measurement. We are interested in measuring an observable \(A\) represented by the self-adjoint operator \(\hat{A}\), which has a complete and orthonormal eigenbasis \(\{\left|c_{i}\right\rangle\}\). In that basis the system's state reads \(\psi=\sum\limits_{i}\alpha_{i}\left|c_{i}\right\rangle\) for some \(\alpha_{i}\). Furthermore, we assume for simplicity the eigenstates are non-degenerate, i.e. have distinct eigenvalues. The only possible outcome of a strong measurement on this system is one of the eigenstates \(\left|c_{i}\right\rangle\). The corresponding probabilities to observe \(\left|c_{i}\right\rangle\) are \[p_{i}=|\left\langle c_{i}|\psi\right\rangle|^{2}=|\alpha_{i}|^{2}. \tag{6.1}\] After the measurement was performed the system ends up in the final state \(\left|c_{i}\right\rangle\). This procedure is known as the _von Neumann measurement_ (cf., for example, see the reprint (Von Neumann, 2018)). In a weak measurement the interaction of system and measurement device is modelled quantum mechanically with the pointer device as an ancillary system on which a strong measurement is performed after the interaction. That is, assume that system and pointer interact via a von Neumann Hamiltonian \[\hat{H}=g(t)\hat{A}\otimes\hat{P}_{M}, \tag{6.2}\] , where \(\hat{P}_{M}\) is conjugate to the pointer variable \(\hat{X}_{M}\), i.e. \([\hat{X}_{M},\hat{P}_{M}]=i\hbar\). 
As before \(\hat{A}\) is the quantum operator of the observable to be measured, and \(g(t)\) a coupling constant satisfying \(\int\limits_{0}^{T}g(t)dt=1\). For simplicity, take a single qubit prepared in initial state \[\left|\psi\right\rangle=\sum\limits_{i}\alpha_{i}\left|c_{i}\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle. \tag{6.3}\] We stipulate that the eigenstates of \(\hat{A}\) satisfy \(\hat{A}\left|0\right\rangle=\left|0\right\rangle\) and \(\hat{A}\left|1\right\rangle=-\left|1\right\rangle\). Suppose that the pointer that is to be coupled weakly to the qubit will initially be in a Gaussian ready state with spread \(\sigma\) peaked around \(0\), i.e. \[\left|\varphi\right\rangle=\int\varphi(x)\left|x\right\rangle dx=\int Ne^{-\left(\frac{x}{2\sigma}\right)^{2}}\left|x\right\rangle dx, \tag{6.4}\] with \(N\) a normalization factor. During the unitary interaction, the total initial state \(\ket{\Phi(0)}=\ket{\psi}\otimes\ket{\varphi}\) of system and pointer evolves according to Schrodinger's equation: \[\ket{\Phi(T)} =e^{-\frac{i}{\hbar}\int\limits_{0}^{T}\hat{H}\,dt}\ket{\Phi(0)} \tag{6.5}\] \[=e^{-\frac{i}{\hbar}\hat{A}\otimes\hat{P}_{M}}\ket{\Phi(0)}\] \[=e^{-\frac{i}{\hbar}\hat{A}\otimes\hat{P}_{M}}\left(\alpha\ket{0}+\beta\ket{1}\right)\otimes\int\varphi(x)\ket{x}dx\] \[=N\int\left(\alpha e^{-\left(\frac{x-1}{2\sigma}\right)^{2}}\ket{0}+\beta e^{-\left(\frac{x+1}{2\sigma}\right)^{2}}\ket{1}\right)\otimes\ket{x}dx.\] Recall that the momentum operator acts as a shift operator (\(e^{-\frac{i}{\hbar}a\hat{P}_{M}}\varphi(x)=\varphi(x-a)\)). If the Gaussian peaks are narrowly localized and non-overlapping (to a good approximation), one can infer the state of the system from the pointer measurement. However, for weak measurements the Gaussians are assumed to be widely spread over the pointer variable. The measurement outcome of the pointer is therefore consistent with the system being in states that are not eigenstates of the operator. This is read off from Equation 6.5. If, say, the pointer ends up at position \(0\), for example, we recover the initial state \(\ket{\psi}\) up to an overall factor. The two Gaussian amplitudes reduce to the same value. For arbitrary systems with finite Hilbert space, the interaction generalises to \[\ket{\Phi(T)}=\sum_{i}\alpha_{i}\ket{c_{i}}\otimes\int\varphi(x-a_{i})\ket{x}dx, \tag{6.6}\] where \(a_{i}\) are the eigenvalues of the measurement operator \(\hat{A}\). For simplicity, the free evolution Hamiltonian of system and pointer has been omitted; it would only give rise to additional total phases. So far the measurement scheme was standard. In particular, no weakness is yet involved in Equation 6.6. It becomes a weak measurement if the initial state of the pointer variable \(X_{M}\) has a large spread \(\sigma\). That is, the result of (strong) measurement on the pointer is not a projection onto eigenstates of the system.

### Post-selection and two-vector-formalism

We may now introduce the notion of a weak value. A weak value of an observable \(\hat{A}\) is the result of an effective interaction with the system in the limit of weak coupling and a subsequent post-selection. Coming back to the simple case of the qubit, if the state in Equation 6.5 is post-selected on \(\ket{0}\), for instance, the pointer ends up in a Gaussian lump centered around \(1\). Similarly, conditioned on \(\ket{1}\) the pointer is centered around \(-1\), as one would expect from a strong measurement as well.
Depending on the choice of the post-selected state, however, the pointer states are "reshuffled" and can be concentrated around mean values that can be far away from the eigenvalues of the observable \(\hat{A}\). In the limit of large standard deviation \(\sigma\) the distribution is again Gaussian though. For post-selecting \(\ket{+}:=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\), for example, the distribution of the measurement device peaks around \[a_{w}=\frac{\alpha-\beta}{\alpha+\beta}. \tag{6.7}\] This is easily obtained by observing \[\ket{+}\bra{+}\otimes\mathds{1}\ket{\Phi(T)}=\ket{+}\otimes\frac{N}{\sqrt{2}}\int\left(\alpha e^{-\left(\frac{x-1}{2\sigma}\right)^{2}}+\beta e^{-\left(\frac{x+1}{2\sigma}\right)^{2}}\right)\ket{x}dx. \tag{6.8}\] In the weak limit \(\sigma\gg 1\) this gives \[\approx\left|+\right\rangle\otimes\frac{N}{\sqrt{2}}\int(\alpha+\beta)e^{-\left(\frac{x-\frac{\alpha-\beta}{\alpha+\beta}}{2\sigma}\right)^{2}}\left|x\right\rangle dx. \tag{6.9}\] Importantly, the measurements on the pointer and the ones to find a post-selected state are _strong measurements_ in the sense defined above. For arbitrary post-selection on a final state \(\left|\psi_{f}\right\rangle\) the state of the total system evolves according to \[\left|\psi_{f}\right\rangle\left\langle\psi_{f}\right|e^{-\frac{i}{\hbar}\int\limits_{0}^{T}\hat{H}dt}\left|\psi_{i}\right\rangle\otimes\left|\varphi\right\rangle. \tag{6.10}\] Since the spread \(\sigma\) is large, the interaction Hamiltonian, which produces a shift in the pointer's wave function, can be effectively approximated by \(e^{-\frac{i}{\hbar}\int\limits_{0}^{T}\hat{H}dt}\approx 1-\frac{i}{\hbar}\hat{A}\otimes\hat{P}_{M}T\). Thus, the final state reads \[\approx\left|\psi_{f}\right\rangle\otimes\left\langle\psi_{f}|\psi_{i}\right\rangle\left(1-\frac{i}{\hbar}a_{w}\hat{P}_{M}T\right)\left|\varphi\right\rangle\] \[\approx\left|\psi_{f}\right\rangle\otimes\left\langle\psi_{f}|\psi_{i}\right\rangle e^{-\frac{i}{\hbar}a_{w}\hat{P}_{M}}\left|\varphi\right\rangle\,, \tag{6.11}\] where \[\left\langle\hat{A}_{w}\right\rangle:=a_{w}=\frac{\left\langle\psi_{f}\right|\hat{A}\left|\psi_{i}\right\rangle}{\left\langle\psi_{f}|\psi_{i}\right\rangle} \tag{6.12}\] is the salient quantity, the weak value of the observable operator \(\hat{A}\). That is, after many runs, the pointer's average position is \(a_{w}\)18. In other words, \(\left|\varphi\right\rangle\) experiences the shift \(\varphi(x)\mapsto\varphi(x-a_{w})\). Note that the probability to obtain \(\left|\psi_{f}\right\rangle\) in the post-selection is \(p=|\left\langle\psi_{f}|\psi_{i}\right\rangle|^{2}\). If the initial and final state of \(S\) are nearly orthogonal, the measurement may require many runs to find \(a_{w}\) as the post-selected state occurs only rarely. If there is time evolution of the target system between the weak interaction and the final measurement of \(\left\langle\psi_{f}\right|\), then the expression would include \(\left\langle\psi_{f}\right|U\), where \(U\) is the unitary evolution operator: Footnote 18: There are cases in which \(a_{w}\) is complex. Then, besides the position, the momentum is shifted too. \[\left\langle\hat{A}_{w}\right\rangle:=a_{w}=\frac{\left\langle\psi_{f}\right|U\hat{A}\left|\psi_{i}\right\rangle}{\left\langle\psi_{f}\right|U\left|\psi_{i}\right\rangle}. \tag{6.13}\] For a derivation, we refer the interested reader to the literature.
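As a quick numerical illustration of Eq. (6.12) -- our own sketch in Python/NumPy, not part of the original text -- the snippet below evaluates weak values directly from the defining formula. It reproduces the qubit value of Eq. (6.7) for post-selection on \(\ket{+}\) and the three-box projector values \(1\), \(1\), \(-1\) of Eqs. (4.21)-(4.23); the concrete amplitudes are illustrative choices, not prescribed anywhere above.

```python
import numpy as np

def weak_value(A, psi_i, psi_f):
    """Weak value <A>_w = <psi_f| A |psi_i> / <psi_f|psi_i>, cf. Eq. (6.12)."""
    return (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)

# Qubit of Eqs. (6.3)-(6.7): A|0> = |0>, A|1> = -|1>, post-selection on |+>.
alpha, beta = 0.8, 0.6                              # illustrative, |alpha|^2 + |beta|^2 = 1
psi_i = np.array([alpha, beta], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
A = np.diag([1.0, -1.0]).astype(complex)
print(weak_value(A, psi_i, plus).real,              # 0.142857...
      (alpha - beta) / (alpha + beta))              # same value, Eq. (6.7)

# Three-box example: weak values of the box projectors come out as 1, 1 and -1.
psi_i = np.ones(3, dtype=complex) / np.sqrt(3.0)
psi_f = np.array([1.0, 1.0, -1.0], dtype=complex) / np.sqrt(3.0)
for label, idx in zip("ABC", range(3)):
    P = np.zeros((3, 3), dtype=complex)
    P[idx, idx] = 1.0                               # projector onto box "label"
    print(label, weak_value(P, psi_i, psi_f).real)
```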
### Weak velocity and the gradient of the phase

We can manipulate the definition of the operationally defined weak velocity to give us the velocity of the guidance equation of standard dBBT. That is, for the unitary evolution \(\hat{U}(\tau)=e^{-i\hat{H}\tau/\hbar}\) during time \(\tau\) (with the non-relativistic Hamiltonian of a massive particle \(\hat{H}=\frac{\mathbf{p}^{2}}{2m}+V(x)\)), the expression for Wiseman's operationally defined velocity reduces to (Wiseman, 2007, p. 5) \[\mathbf{v}(\mathbf{x},t) =\lim_{\tau\to 0}\frac{1}{\tau}(\mathbf{x}-\langle\hat{x}_{w}\rangle)\] \[=\lim_{\tau\to 0}\frac{1}{\tau}\Big(\mathbf{x}-\mathrm{Re}\,\frac{\langle\mathbf{x}|\,\hat{U}(\tau)\hat{\mathbf{x}}\,|\psi\rangle}{\langle\mathbf{x}|\,\hat{U}(\tau)\,|\psi\rangle}\Big)\] \[=\lim_{\tau\to 0}\frac{1}{\tau}\Big(\mathrm{Re}\,\frac{\langle\mathbf{x}|\,\hat{\mathbf{x}}\hat{U}(\tau)\,|\psi\rangle-\langle\mathbf{x}|\,\hat{U}(\tau)\hat{\mathbf{x}}\,|\psi\rangle}{\langle\mathbf{x}|\,\hat{U}(\tau)\,|\psi\rangle}\Big)\] \[=\lim_{\tau\to 0}\frac{1}{\tau}\Big(\mathrm{Re}\,\frac{\langle\mathbf{x}|\,[\hat{\mathbf{x}},\hat{U}(\tau)]\,|\psi\rangle}{\langle\mathbf{x}|\,\hat{U}(\tau)\,|\psi\rangle}\Big)\] \[=\lim_{\tau\to 0}\frac{1}{\tau}\Big(\mathrm{Re}\,\frac{\langle\mathbf{x}|\,[\hat{\mathbf{x}},\mathds{1}-\frac{i}{\hbar}\hat{H}\tau+\mathcal{O}(\tau^{2})]\,|\psi\rangle}{\langle\mathbf{x}|\,\mathds{1}-\frac{i}{\hbar}\hat{H}\tau+\mathcal{O}(\tau^{2})\,|\psi\rangle}\Big)\] \[=\mathrm{Re}\,\frac{\langle\mathbf{x}|\,[\hat{\mathbf{x}},-\frac{i}{\hbar}\frac{\hat{p}^{2}}{2m}]\,|\psi\rangle}{\psi(x)}\] \[=\mathrm{Re}\,\frac{\langle\mathbf{x}|\,\frac{\hat{p}}{m}\,|\psi\rangle}{\psi(x)}\] \[=\frac{\hbar}{m}\,\mathrm{Im}\,\frac{\nabla\psi(x)}{\psi(x)}=\frac{\hbar}{m}\nabla S(x), \tag{6.14}\] where \(\nabla S(x)\) is the gradient of the phase of the wave function \(\psi(x)\).
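To make Eq. (6.14) concrete, here is a small self-contained numerical check (our own sketch, not from the paper): for a free particle with \(\hbar=m=1\), the operationally defined weak velocity at small \(\tau\) is compared against \((\hbar/m)\,\mathrm{Im}\,\partial_{x}\psi/\psi\) on a grid. The wave packet, the grid parameters and the FFT-based propagator are illustrative assumptions, not anything prescribed by the text.

```python
import numpy as np

hbar, m, tau = 1.0, 1.0, 1e-4            # units and a small post-selection delay
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)

# Illustrative packet psi(x) = R(x) * exp(i S(x)) with a non-trivial phase S(x).
psi = np.exp(-(x - 1.0) ** 2 / 4.0) * np.exp(1j * (0.7 * x + 0.05 * x ** 2))

def propagate(phi):
    """Free-particle evolution U(tau) = exp(-i p^2 tau / (2 m hbar)), via FFT."""
    return np.fft.ifft(np.exp(-1j * hbar * k ** 2 * tau / (2.0 * m)) * np.fft.fft(phi))

# Weak position value conditioned on finding the particle at x a time tau later:
# x_w(x) = Re[ <x|U(tau) x_hat|psi> / <x|U(tau)|psi> ], and the resulting velocity.
x_w = np.real(propagate(x * psi) / propagate(psi))
v_weak = (x - x_w) / tau

# de Broglie-Bohm velocity (hbar/m) * dS/dx = (hbar/m) * Im[psi'/psi] on the grid.
v_bohm = (hbar / m) * np.imag(np.gradient(psi, dx) / psi)

mask = np.abs(x - 1.0) < 3.0             # compare only where |psi| is appreciable
print(np.max(np.abs(v_weak[mask] - v_bohm[mask])))   # small (O(tau) plus grid error)
```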
2309.07819
Decomposition of linear tensor transformations
One of the main issues in computing a tensor decomposition is how to choose the number of rank-one components, since there are no finite algorithms for determining the rank of a tensor. A commonly used approach for this purpose is to find a low-dimensional subspace by solving an optimization problem and assuming the number of components is fixed. However, even though this approach is efficient and easy to implement, it often converges to poor local minima and suffers from outliers and noise. The aim of this paper is to develop a mathematical framework for exact tensor decomposition that is able to represent a tensor as the sum of a finite number of low-rank tensors. In the paper three different problems will be carried out to derive: i) the decomposition of a non-negative self-adjoint tensor operator; ii) the decomposition of a linear tensor transformation; iii) the decomposition of a generic tensor.
Claudio Turchetti
2023-09-14T16:14:38Z
http://arxiv.org/abs/2309.07819v1
# Decomposition of Linear Tensor Transformations ###### Abstract One of the main issues in computing a tensor decomposition is how to choose the number of rank-one components, since there is no finite algorithms for determining the rank of a tensor. A commonly used approach for this purpose is to find a low-dimensional subspace by solving an optimization problem and assuming the number of components is fixed. However, even though this algorithm is efficient and easy to implement, it often converges to poor local minima and suffers from outliers and noise. The aim of this paper is to develop a mathematical framework for exact tensor decomposition that is able to represent a tensor as the sum of a finite number of low-rank tensors. In the paper three different problems will be carried out to derive: i) the decomposition of a non-negative self-adjoint tensor operator; ii) the decomposition of a linear tensor transformation; iii) the decomposition of a generic tensor. Machine learning, Computer vision, tensor PCA, tensor decomposition, self-adjoint operator, tensor basis, eigentensors ## 1 Introduction Tensors are the higher order generalization of vectors and matrices, and can consequently be treated as arrays indexed by multiple indices [1], [2], [3]. Thanks to their ability to represent a wide range of real-world data, tensors have expanded quickly from psychometrics [4], [5] and chemometrics [6], [1] to image analysis [7], [8], [9], [10], [11], [12], [13], big data representation [14], machine learning, [15], [16], [17], [18], [19], [20], sensor array processing [21], and much more [22], [23], [24], [22], [25], [26]. As the order of tensor increases, the number of entries in the array increases exponentially thus involving prohibitively computational and storage costs. To prevent these limitations, techniques for dimensionality reduction are required to make possible the application of tensors to real data. A widely adopted tecnique for this purpose is to represent high-dimensional tensors as linear combination of low-dimensional tensors. If \(\mathcal{A}\) is an order- tensor, to reach this goal is essential to find low-order tensors \(\mathcal{U}_{i}\) so that \[\mathcal{A}=\sum_{i=1}^{r}\sigma_{i}\mathcal{U}_{i}, \tag{1}\] where \(\sigma_{i}\) are positive scalars. In this way a low-dimensional representation can be derived by simply retaining the principal components of such a decomposition, that is the ones corresponding to the highest values of \(\sigma_{i}\). In this context several very powerful tensor decomposition approaches have been developed in the past [27], [28], [29], [30], in which tensors \(\mathcal{U}_{i}\) in the summation (1) are rank-one tensors. Among these, Tucker decomposition [31] and CANDECOMP/PARAFAC (CP) decomposition [32], [5] are the most popular and fundamental models. Tucker model decomposes a tensor into a core tensor multiplied by a factor matrix along each mode, while CP model factorizes a tensor into a weighted sum of rank-1 tensors. Following these two seminal approaches, several extensions for low rank tensor decomposition have been proposed in the past few decades. Some of the most relevant developments in this field are: multilinear principal component analysis (MPCA) [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], tensor rank-one decomposition (TROD) [43], [44], [32], [45], [46], hierarchical Tucker PCA (HT-PCA) [47], [48], [49] and tensor-train PCA (TT-PCA) [50], [51]. 
One of the main issues in computing a tensor decomposition is how to choose the number of rank-one components, since there is no finite algorithms for determining the rank of a tensor [45], [52], defined as the smallest number of rank-one tensors that generate \(\mathcal{A}\) as their sum. As a consequence, all the above methods for tensor decomposition try to find a low-dimensional subspace by solving an optimization problem and assuming the number of components is fixed. A commonly used approach for this purpose is the alternating least square (ALS) method [53], which assumes the solution of all but one mode is known and then estimating the unknown parameter set of the remaining mode. However, even though this algorithm is efficient and easy to implement, it often converges to poor local minima and suffers from outliers and noise [54]. The aim of this paper is to develop a mathematical framework for exact tensor decomposition that is able to represent a tensor as the sum of a finite number of low-rank tensors. The core of the proposed approach is the derivation of a decomposition for _non-negative self-adjoint tensor operators_. In particular it will be proven that a correspondence exists between the spectral decomposition of a symmetric matrix, a result that is the basis of PCA and SVD development in matrix analysis, and the decomposition of a self-adjoint tensor operator. In the paper three different problems will be carried out to derive: i) the decomposition of a non-negative self-adjoint tensor operator; ii) the decomposition of a linear tensor transformation; iii) the decomposition of a generic tensor. In particular, with reference to the first issue, the properties of a self-adjoint tensor operator will be studied and the equivalence between eigenvalue equation for this operator and standard matrix eigenvalue equation will be proven. The paper is organized as follows. Section 2 deals with the finite linear space of tensors and some fundamental concepts on linear tensor transformations. In Section 3 the mathematical framework for exact tensor decomposition is developed. Section 4 reports some numerical results to validate the mathematical framework previously presented. ## 2 The linear space of tensors Let us refer to the set \(\mathcal{L}_{d}\) of _real order-\(d\) tensors_\(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times...\times I_{d}}\) whose generic element will be denoted as \[\mathcal{X}_{\mathbf{i}}=\mathcal{X}_{i_{1},\ldots,i_{d}},\ \ \mathbf{i}=(i_{1},\ldots,i_{d})\quad. \tag{2}\] The vector \(\mathbf{i}\), in bold font, is a _subscript vector_ where the index \(i_{k}\) in the \(k\)th mode ranges in the interval \(1\leq i_{k}\leq I_{k},\ \ k=1:d\). As \(\mathcal{L}_{d}\) is closed under addition and multiplication by scalar, this set forms a linear space, called the _tensor space_. In the following we will refer to real tensors alone, thus for simplicity the term real will be omitted being always implied. Inner productThe inner product of two tensors of the same size \(\mathcal{X}_{\mathbf{i}},\mathcal{Y}_{\mathbf{i}}\in\mathbb{R}^{I_{1}\times I _{2}\times\ldots I_{d}}\) is a real scalar defined as \[\langle\mathcal{X},\mathcal{Y}\rangle=\mathcal{X}_{\mathbf{i}}\mathcal{Y}_{ \mathbf{i}}. \tag{3}\] Here the Einstein summation convention, that interprets repeated subscript as summation over that index, has been used. 
Following this convention, (3) is equivalent to \[\langle\mathcal{X},\mathcal{Y}\rangle=\sum_{i_{1}=1}^{I_{1}}...\sum_{i_{d}=1 }^{I_{d}}\mathcal{X}_{i_{1},\ldots,i_{d}}\mathcal{Y}_{i_{1},\ldots,i_{d}} \tag{4}\] In the paper we will extensively use the Einstein convention to simplify mathematical notation. From this definition \(\mathcal{X},\mathcal{Y}\) are said to be orthogonal if \(\langle\mathcal{X},\mathcal{Y}\rangle=0\), and it follows that the norm of a tensor is given by \[\|\mathcal{X}\|=\sqrt{\langle\mathcal{X},\mathcal{X}\rangle} \tag{5}\] SubspaceA _subspace_\(S\) of \(\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{d}}\) is a subset that is also a tensor space. Given a collection of tensors \(\mathcal{U}_{1},\ldots\mathcal{U}_{n}\in\mathbb{R}^{I_{1}\times I_{2}\times \ldots\times I_{d}}\), the set of all linear combinations of these vectors is a subspace referred to as the \(span\{\mathcal{U}_{1},\ldots\mathcal{U}_{n}\}\): \[span\{\mathcal{U}_{1},\ldots\mathcal{U}_{n}\}=\{\alpha_{i}\mathcal{U}_{i}:\ \ \alpha_{i}\in\mathbb{R},\ \ i=1:n\}. \tag{6}\] ContractionGiven two tensors \[\mathcal{X}_{\mathbf{i}}\in\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I _{d}},\quad\mathcal{Y}_{\mathbf{j}}\in\mathbb{R}^{J_{1}\times J_{2}\times \ldots\times J_{e}} \tag{7}\] and assuming a common vector index \(\mathbf{k}\) exists such that \(\mathbf{i},\mathbf{j}\) can be partitioned as \[\mathbf{i}=(\mathbf{l},\mathbf{k},\mathbf{m}),\quad\mathbf{j}=(\mathbf{p}, \mathbf{k},\mathbf{q}) \tag{8}\] the tensor product of \(\mathcal{X}\) and \(\mathcal{Y}\) along the multi-index \(\mathbf{k}\) combines the two tensors to give a third tensor \(\mathcal{Z}\), whose generic element is given by \[\mathcal{Z}_{\mathbf{l},\mathbf{m},\mathbf{p},\mathbf{q}}=\mathcal{X}_{ \mathbf{l},\mathbf{k},\mathbf{m}}\mathcal{Y}_{\mathbf{p},\mathbf{k},\mathbf{q}} \tag{9}\] that implies summation over the vector index \(\mathbf{k}\). We refer to this operation as _contraction_ of index \(\mathbf{k}\). Outer productThe outer product of tensor \(\mathcal{X}_{\mathbf{i}}\in\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{p}}\) with tensor \(\mathcal{Y}_{\mathbf{j}}\in\mathbb{R}^{J_{1}\times J_{2}\times\ldots\times J_{ q}}\) is the order-\((p+q)\) tensor \(\mathcal{Z}\) defined as \[\mathcal{Z}=\mathcal{X}\circ\mathcal{Y}, \tag{10}\] where the generic element of \(\mathcal{Z}\) is \[\mathcal{Z}_{\mathbf{i},\mathbf{j}}=\mathcal{X}_{\mathbf{i}}\mathcal{Y}_{ \mathbf{j}},\ \ \mathbf{i}=(i_{1},\ldots,i_{p}),\ \ \mathbf{j}=(j_{1},\ldots,j_{q}) \tag{11}\] In particular for two vectors \(x,y\in R^{I_{1}}\) the outer product is the matrix \(Z=x\circ y\) whose generic element is \[z_{ij}=x_{i}\,y_{j}\ \ . \tag{12}\] Vector-to-linear index transformationWith reference to an order-\((p+q)\) tensor \[\mathcal{X}_{\mathbf{j},\mathbf{i}},\ \ \mathbf{j}=(j_{1},\ldots,j_{p}),\ \ \mathbf{i}=(i_{1},\ldots,i_{q}) \tag{13}\] it is worth to notice that the indices \(i_{1},\ldots,i_{q}\) can be arranged in \(M=I_{1}I_{2}\ldots I_{q}\) different ways (the number of permutations with repetition), so that a correspondence \(\alpha\) \[m=\alpha(\mathbf{i}),\ \ 1\leq m\leq M,\ \ 1\leq i_{k}\leq I_{k},\ \ k=1:q \tag{14}\] between \(m\) and \(\mathbf{i}\) holds. 
The inverse transformation exists, since the correspondence is one-to-one, and is denoted by \[\mathbf{i}=\alpha^{-1}(m),\ \ 1\leq m\leq M,\ \ \ \ 1\leq i_{k}\leq I_{k},\ \ k=1:q \tag{15}\] By defining the \((M\times q)\) matrix \(T\) whose \(m\)th row represents the corresponding vector index \(\mathbf{i}(m)\), then the inverse transformation (15) is formally given by \[\mathbf{i}=T(m,:),\ \ 1\leq m\leq M \tag{16}\] As a result this operation, transforming the vector index \(\mathbf{i}\) to the linear index \(m\), turns the order-\((p+q)\) tensor to the order-\((p+1)\) tensor \[\mathcal{X}_{\mathbf{j},m},\ \ \mathbf{j}=(j_{1},\ldots,j_{p}),\ \ m=1:M. \tag{17}\] The vector-to-linear-index transformation can be applied to more than one index. As an example, by transforming the two vector indices of \(\mathcal{X}_{\mathbf{j},\mathbf{i}}\) gives rise to the matrix \(\mathcal{X}_{\mathbf{j}(n),\mathbf{i}(m)}=X_{n,m},\ \ n=1:N,m=1:M\), with \(N=J_{1}J_{2}\ldots J_{p}\), \(M=I_{1}I_{2}\ldots I_{q}\). Linear transformationsA two-index tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in R^{I\times J}\), where \(I=I_{1}\times\ldots\times I_{d}\), \(J=J_{1}\times\ldots\times J_{e}\), defines a _linear transformation_\(\mathcal{A}:\mathbb{R}^{J}\rightarrow\mathbb{R}^{I}\) from \(\mathbb{R}^{J}\) to \(\mathbb{R}^{I}\) as the product \[\mathcal{Y}_{\mathbf{i}}=\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{X}_{ \mathbf{j}} \tag{18}\] that transforms a tensor \(\mathcal{X}_{\mathbf{j}}\in R^{J}\) to the tensor \(\mathcal{Y}_{\mathbf{i}}\in R^{I}\). Among the possible transformations a particular role is assumed by transformations from \(\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{d}}\) to itself. In this particular case the tensors \(\mathcal{Y}_{\mathbf{i}}\) and \(\mathcal{X}_{\mathbf{j}}\) are both in \(\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{d}}\) and the order-\(2d\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\) is said to be a tensor _operator_ (or _endomorphism_). Thus, the operator \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\) establishes a transformation from \(\mathcal{L}_{d}\) to \(\mathcal{L}_{d}\) and (18) can be rewritten without ambiguity as \[\mathcal{Y}=\mathcal{AX} \tag{19}\] A tensor \(\mathcal{V}\) in \(\mathcal{L}_{d}\) is said to be an _eigentensor_ if \(\mathcal{V}\neq 0\) and if for some scalar \(\lambda\) the following equation \[\mathcal{A}\mathcal{V}=\lambda\mathcal{V} \tag{20}\] is satisfied. The scalar \(\lambda\) is known as the _eigenvalue_ of \(\mathcal{A}\) associated with the eigentensor \(\mathcal{V}\), Self-adjoint non-negative definite operatorsA _self-adjoint_ operator is an operator for which the following property \[\langle\mathcal{A}\mathcal{Y},\mathcal{Z}\rangle=\langle\mathcal{Y},\mathcal{ AZ}\rangle \tag{21}\] holds. As an example, an order-\(2d\) tensor \[\mathcal{A}_{\mathbf{i}\mathbf{j}},\ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \mathbf{j}=(j_{1},\ldots,j_{d}) \tag{22}\] such that \[\mathcal{A}_{\mathbf{i}\mathbf{j}}=\mathcal{A}_{\mathbf{j}\mathbf{i}} \tag{23}\] is a self-adjoint operator, since it results \[\langle\mathcal{A}\mathcal{Y},\mathcal{Z}\rangle=\mathcal{A}_{\mathbf{i} \mathbf{j}}\mathcal{Y}_{\mathbf{j}}\mathcal{Z}_{\mathbf{i}}=\mathcal{Y}_{ \mathbf{j}}\mathcal{A}_{\mathbf{j}\mathbf{i}}\mathcal{Z}_{\mathbf{i}}=\langle \mathcal{Y},\mathcal{AZ}\rangle\,. 
\tag{24}\] In the particular case the operator acts on the order-\(1\) tensor space \(R^{I_{1}}\), that corresponds to \(d=1\) in (22), the self-adjoint operator (23) reduces to a _symmetric_ matrix \(A_{i_{1},j_{1}}=A_{j_{1},i_{1}}\). A _non-negative definite_ operator is such that the following property \[\langle\mathcal{V},\mathcal{A}\mathcal{V}\rangle\geq 0 \tag{25}\] holds for every tensor \(\mathcal{V}\). An important class of operators is the class of _self-adjoint non-negative definite_ (_SA-NND_) tensor operators that are both self-adjoint and non-negative. For such tensors, on the basis of the properties stated before, it follows that eigenvalues of an (_SA-NND_) tensor are non-negative and the eigentensors belonging to distinct eigenvalues are orthogonal. Indeed, combining (20) and (21) yields the scalar \[\lambda=\left\langle\mathcal{V},\mathcal{AV}\right\rangle/\left\langle\mathcal{ V},\mathcal{V}\right\rangle \tag{26}\] that is non-negative due to property (25). As for the eigentensor orthogonality, suppose that \(\mathcal{AV}_{1}=\lambda_{1}\mathcal{V}_{1}\) and \(\mathcal{AV}_{2}=\lambda_{2}\mathcal{V}_{2}\) for \(\lambda_{1}\neq\lambda_{2}\), if \(\mathcal{A}\) is self-adjoint, then \[\left\langle\mathcal{AV}_{1},\mathcal{V}_{2}\right\rangle=\lambda_{1}\left\langle \mathcal{V}_{1},\mathcal{V}_{2}\right\rangle \tag{27}\] and also \[\left\langle\mathcal{AV}_{1},\mathcal{V}_{2}\right\rangle=\left\langle \mathcal{V}_{1},\mathcal{AV}_{2}\right\rangle=\lambda_{2}\left\langle \mathcal{V}_{1},\mathcal{V}_{2}\right\rangle \tag{28}\] Therefore, \(\left(\lambda_{1}-\lambda_{2}\right)\left\langle\mathcal{V}_{1},\mathcal{V}_ {2}\right\rangle=0\), and hence \(\left\langle\mathcal{V}_{1},\mathcal{V}_{2}\right\rangle=0\), since \(\lambda_{1}\neq\lambda_{2}\). An SA-NND operator can be easily derived from a tensor \(\mathcal{A}_{\textbf{i,j}}\) as follows \[\mathcal{G}_{\textbf{i,j}}=\mathcal{A}_{\textbf{k,j}}\mathcal{A}_{\textbf{k,j}} \tag{29}\] The operator \(\mathcal{G}_{\textbf{i,j}}\) is self-adjoint, in fact we have \[\mathcal{G}_{\textbf{i,j}}=\mathcal{A}_{\textbf{k,i}}\mathcal{A}_{\textbf{k,j }}=\mathcal{A}_{\textbf{k,j}}\mathcal{A}_{\textbf{k,i}}=\mathcal{G}_{\textbf{ j,j}} \tag{30}\] The eigenvalues of \(\mathcal{G}_{\textbf{i,j}}\) are non-negative, since \[\left\langle\mathcal{V},\mathcal{GV}\right\rangle=\mathcal{V}_{\textbf{i}} \mathcal{G}_{\textbf{i,j}}\mathcal{V}_{\textbf{j}}=\mathcal{A}_{\textbf{k,i}} \mathcal{V}_{\textbf{i}}(\mathcal{A}_{\textbf{k,j}}\mathcal{V}_{\textbf{j}})= \mathcal{Z}_{\textbf{k}}\mathcal{Z}_{\textbf{k}}\geq 0 \tag{31}\] where \(\mathcal{Z}_{\textbf{k}}=\mathcal{A}_{\textbf{k,i}}\mathcal{V}_{\textbf{i}}\), so that from (26) it results \(\lambda\geq 0\). The operator \(\mathcal{G}^{\prime}{}_{\textbf{i,j}}=\mathcal{A}_{\textbf{i,k}}\mathcal{A}_{ \textbf{j,k}}\) satisfies this property as well. In particular for linear indices \(\textbf{i}=i,\ \textbf{j}=j\), \(\textbf{k}=k\), the operator reduces to the symmetric matrix \[G_{i,j}=A_{k,i}A_{k,j}=(A_{i,k}^{T})A_{k,j} \tag{32}\] which can be written as \(G=A^{T}A\). Similarly, the operator \(\mathcal{G}^{\prime}{}_{\textbf{i,j}}\) reduces to the matrix \(G^{T}=AA^{T}\). The matrix \(G\) represents the well known Gram-matrix, while matrix \(G^{T}\) its transpose. Thus, tensor \(\mathcal{G}_{\textbf{i,j}}\) is the analog in tensor space of Gram-matrix \(G\). Assuming \(G\) has size \((L\times L)\) and rank \(r\), i.e. 
\(r=rank(G)\), it is well known that, since \(G\) is a symmetric non-negative definite matrix, it admits \(L\) eigenvalues \(\{\lambda_{i},\ i=1:L\}\) such that \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{r}>0=\lambda_{r+1}=\cdots= \lambda_{L}\). Thus \(G\) can be diagonalized as \[G_{n,m}U_{m,p}=U_{n,q}\Lambda_{q,p},\ \ q,p=1:r \tag{33}\] where \(U=(u_{1},\ldots,u_{r})\) is the matrix whose columns are the eigenvectors corresponding to the non-zero eigenvalues in the matrix \(\Lambda=diag(\lambda_{1},\ldots,\lambda_{r})\). Using the orthogonality of the matrix \(U\), i.e. \(U_{m,p}U_{m,p^{\prime}}=\delta_{p,p^{\prime}}\), and \(\Lambda_{q,p}=\lambda_{p}\delta_{q,p}\), (33) becomes \[G_{n,m}=\lambda_{p}U_{n,p}U_{m,p}. \tag{34}\] This relationship is called the _spectral decomposition_ of the matrix \(G\) and represents a fundamental result in matrix analysis, since both principal component analysis (PCA) and singular value decomposition (SVD) are based on this decomposition. Thus it is natural to ask whether a similar relationship can be derived for the class of _SA-NND_ tensors. ## 3 Tensor decomposition Having proven that there is a correspondence between self-adjoint tensor operators and symmetric matrices, the aim of this section is to show that a counterpart of the spectral decomposition (34) exists in tensor space. To take advantage of these considerations, the tensor \(\mathcal{A}\) to be decomposed will be treated as a linear transformation from tensor space \(\mathbb{R}^{J}\) to tensor space \(\mathbb{R}^{I}\), that is \(\mathcal{A}:\mathbb{R}^{J}\rightarrow\mathbb{R}^{I}\), which for linear indices reduces to the matrix \(A\). On the basis of this assumption, two cases will be considered first: i) the tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times I}\) is a real self-adjoint operator, and ii) the tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times J}\), with \(I\neq J\), is a linear transformation. Using these results, the most general case of a generic tensor will be treated at the end of the section. ### Decomposition of a self-adjoint non-negative tensor operator The first case studied here deals with an order-\(2d\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times I}\) that represents a linear operator (or endomorphism) \(\mathcal{A}:\mathbb{R}^{I}\rightarrow\mathbb{R}^{I}\). Thus, with reference to such a tensor, we state the following proposition. **Proposition 1**: Let \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times I},\ \ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \ \mathbf{j}=(j_{1},\ldots,j_{d}) \tag{35}\] be a non-negative self-adjoint order-\(2d\) tensor operator with \(I=I_{1}\times I_{2}\times\ldots\times I_{d}\). Then there exist a set of scalars \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{r}>0\) and a set of orthogonal tensors \(\mathcal{U}=\{\mathcal{U}_{\mathbf{i},p},\ \ p=1:r\}\) such that the following decomposition \[\mathcal{A}_{\mathbf{i}\mathbf{j}}=\lambda_{p}\mathcal{U}_{\mathbf{i},p} \mathcal{U}_{\mathbf{j},p},\ \ p=1:r \tag{36}\] holds. **Proof** First, we want to prove that the eigenvalue equation \[\mathcal{A}\mathcal{U}=\lambda\mathcal{U}, \tag{37}\] where \(\mathcal{A}\) is a self-adjoint operator, admits solutions with \(\mathcal{U}\in\mathbb{R}^{I}\) and \(\lambda>0\). Using the index convention introduced in Section 2, Eq.
(37) is equivalent to the \(L\) equations \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{U}_{\mathbf{j}}=\lambda\mathcal{U} _{\mathbf{i}},\ \ \mathbf{i}\in I,\ \ L=dim(I)=I_{1}I_{2}\ldots I_{d} \tag{38}\] which can be written as
\[({\cal A}_{\bf i,j}-\lambda\delta_{\bf i,j}){\cal U}_{\bf j}=0, \tag{39}\] where the term on the left, \({\cal Y}_{\bf i}=({\cal A}_{\bf i,j}-\lambda\delta_{\bf i,j}){\cal U}_{\bf j}\), represents an order-\(d\) tensor. The solutions of (39) do not depend on how the terms in the equation are arranged as entries of the tensor \({\cal Y}_{\bf i}\), meaning that (39) is invariant under a one-to-one transformation of the subscript vectors. Assuming the indices \({\bf i},{\bf j}\) are obtained by the inverse vector-to-linear index transformation (16), then, substituting the vector indices \({\bf i}(n)=T(n,:)\), \({\bf j}(m)=T(m,:)\), (39) becomes \[({\cal A}_{{\bf i}(n),{\bf j}(m)}-\lambda\delta_{{\bf i}(n),{\bf j}(m)}){\cal U }_{{\bf j}(m)}=0 \tag{40}\] By defining the \((L\times L)\) matrices \[a_{n,m}={\cal A}_{{\bf i}(n),{\bf j}(m)},\ \ \delta_{n,m}=\delta_{{\bf i}(n),{ \bf j}(m)} \tag{41}\] and the vector \[u_{m}={\cal U}_{{\bf j}(m)}, \tag{42}\] (40) can be rearranged as \[(a_{n,m}-\lambda\delta_{n,m})u_{m}=0 \tag{43}\] or equivalently \[a_{n,m}u_{m}=\lambda u_{n},\ \ m,n=1:L \tag{44}\] Equation (44) is equivalent to \[Au=\lambda u, \tag{45}\] where the matrix \([A]_{n,m}=a_{n,m}\) is a symmetric non-negative definite \((L\times L)\) matrix.
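The correspondence between the tensor eigenproblem (37) and the matrix eigenproblem (45) is easy to exercise numerically: flattening the multi-indices turns the order-\(2d\) operator into an \(L\times L\) symmetric matrix, and the eigenvectors of that matrix, reshaped back, are the eigentensors. The following is a minimal numpy sketch of this correspondence; the array names and sizes are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative shape I = I_1 x I_2 (d = 2); L = I_1 * I_2.
I = (4, 3)
L = int(np.prod(I))

# Build a random self-adjoint non-negative definite order-2d operator:
# form B^T B on the flattened space and reshape back to shape I x I.
B = np.random.rand(L, L)
A = (B.T @ B).reshape(I + I)          # A[i1, i2, j1, j2]

# Vector-to-linear index transformation: reshape gives the L x L matrix a_{n,m}.
A_mat = A.reshape(L, L)               # symmetric non-negative definite

# Solve the matrix eigenvalue problem (45).
lam, U = np.linalg.eigh(A_mat)        # columns of U are the eigenvectors u_p

# Inverse transformation (16): each eigenvector becomes an eigentensor U_{i,p}.
U_tens = U.reshape(I + (L,))          # U_tens[i1, i2, p]

# Check the decomposition A_{ij} = sum_p lam_p U_{i,p} U_{j,p} (cf. (36)).
A_rec = np.einsum('p,abp,cdp->abcd', lam, U_tens, U_tens)
print(np.allclose(A_rec, A))          # True up to numerical error
```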
In fact, since \({\cal A}_{\bf i,j}\) is self-adjoint, we have \[a_{n,m}={\cal A}_{{\bf i}(n),{\bf j}(m)}={\cal A}_{{\bf j}(m),{\bf i}(n)}=a_{m,n} \tag{46}\] which proves that the matrix \(a_{n,m}\) is symmetric (or self-adjoint). In addition, since \(\langle{\cal V},{\cal A}{\cal V}\rangle\geq 0\) for every tensor \({\cal V}\), we have \[0\leq{\cal V}_{\bf i}{\cal A}_{\bf i,j}{\cal V}_{\bf j}={\cal V}_{{\bf i}(m)}{ \cal A}_{{\bf i}(m),{\bf j}(n)}{\cal V}_{{\bf j}(n)}=v_{m}A_{m,n}v_{n} \tag{47}\] for every vector \([v]_{n}=v_{n}\), thus proving that the matrix \(A\) is non-negative definite. Assuming \(r=rank(A)\), there exist scalars \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{r}>0\) such that (45) holds, and (45) can be rewritten as \[A_{n,m}U_{m,p}=U_{n,q}\Lambda_{q,p} \tag{48}\] where \(U=(u_{1},\ldots,u_{r})\) and \(\Lambda=diag(\lambda_{1},\ldots,\lambda_{r})\). Using the inverse vector-to-linear index transformation (16) we have \[U_{n(\mathbf{i}),q}=\mathcal{U}_{\mathbf{i},q},\ \ A_{n(\mathbf{i}),m(\mathbf{j})}= \mathcal{A}_{\mathbf{i}\mathbf{j}} \tag{49}\] and (48) becomes \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{U}_{\mathbf{j},p}=\mathcal{U}_{ \mathbf{i},q}\Lambda_{q,p}, \tag{50}\] or, equivalently, \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{U}_{\mathbf{j},p}=\lambda_{p} \mathcal{U}_{\mathbf{i},p},\ \ p=1:r \tag{51}\] This proves that (37) admits solutions with \(\lambda>0\). To decompose \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\), the tensor product of (51) and \(\mathcal{U}_{\mathbf{j},p^{\prime}}\) yields \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{U}_{\mathbf{j},p}\mathcal{U}_{ \mathbf{j},p^{\prime}}=\lambda_{p}\mathcal{U}_{\mathbf{i},p}\mathcal{U}_{ \mathbf{j},p^{\prime}},\ \ p=1:r \tag{52}\] and from the orthogonality \(\mathcal{U}_{\mathbf{i},p}\mathcal{U}_{\mathbf{j},p^{\prime}}=\delta_{p,p^{ \prime}}\), it follows that \[\mathcal{A}_{\mathbf{i}\mathbf{j}}\delta_{p,p^{\prime}}=\lambda_{p}\mathcal{U }_{\mathbf{i},p}\mathcal{U}_{\mathbf{j},p^{\prime}},\ \ p=1:r \tag{53}\] that is \[\mathcal{A}_{\mathbf{i}\mathbf{j}}=\lambda_{p}\mathcal{U}_{\mathbf{i},p} \mathcal{U}_{\mathbf{j},p},\ \ p=1:r \tag{54}\] and this concludes the proof. Pseudo-code for the procedure described above to derive the decomposition of a self-adjoint non-negative definite operator is given in Algorithm 1. INPUT: order-\(2d\) self-adjoint non-negative definite operator \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times I},L=dim(I)\) \(\mathcal{A}_{\mathbf{i}\mathbf{j}},\ \ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \ \mathbf{j}=(j_{1},\ldots,j_{d})\) 1. Derive the (\(L\times d\)) matrix \(T\) defining the vector-to-linear transformation \(\mathbf{i}(n)=T(n,:),\ \ \mathbf{j}(m)=T(m,:),\ \ m,n=1:L\) 2. Compute the symmetric non-negative definite \((L\times L)\) matrix \(A\) \(A_{n,m}=\mathcal{A}_{\mathbf{i}(n),\mathbf{j}(m)},\ \ m,n=1:L\) 3. Solve the eigenvalue equation \(A_{n,m}U_{m,p}=U_{n,q}\Lambda_{q,p},\ \ q,p=1:r,\ \ \Lambda=diag(\lambda_{1}, \ldots,\lambda_{r}),\ \ r=rank(A)\). 4.
Convert matrix \(U_{n,q}\) to tensor \(\mathcal{U}_{\mathbf{i},q}\) by inverse vector-to-linear transformation \(U_{n(\mathbf{i}),q}=\mathcal{U}_{\mathbf{i},q}\) OUTPUT: the decomposition \(\mathcal{A}_{\mathbf{i}\mathbf{j}}=\lambda_{p}\mathcal{U}_{\mathbf{i},p} \mathcal{U}_{\mathbf{j},p},\ \ p=1:r\) **Algorithm 1** Decomposition of self-adjoint non-negative definite tensor operator ### Decomposition of a linear tensor transformation The second case refers to the order-\((d+e)\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times J}\) that represents a linear transformation \(\mathcal{A}:\mathbb{R}^{J}\rightarrow\mathbb{R}^{I}\). Thus, we state the following proposition. \[\mathcal{U}=\{\mathcal{U}_{\mathbf{i},q},\ \ q=1:r\} \tag{63}\] is a set of orthogonal tensors in \(\mathbb{R}^{I}\). The inner product of two generic terms \(\mathcal{U}_{\mathbf{i},s}\) and \(\mathcal{U}_{\mathbf{i},q}\) gives \[\mathcal{U}_{\mathbf{i},s}\mathcal{U}_{\mathbf{i},q}=\mathcal{U}_{\mathbf{i}, s}\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{V}_{\mathbf{j},p}S_{p,q}^{-1} \tag{64}\] and for orthogonality, that is \(\mathcal{U}_{\mathbf{i},s}\mathcal{U}_{\mathbf{i},q}=\delta_{s,q}\), it results \[S_{q,p}=\mathcal{U}_{\mathbf{i},q}\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{V }_{\mathbf{j},p} \tag{65}\] Using \(S_{q,p}=\sigma_{p}\delta_{q,p}\) and the orthogonality of \(\mathcal{U}_{\mathbf{i},q}\) and \(\mathcal{V}_{\mathbf{j},p}\), (65) is equivalent to \[\mathcal{A}_{\mathbf{i}\mathbf{j}}=\sigma_{p}\mathcal{U}_{\mathbf{i},p} \mathcal{V}_{\mathbf{j},p},\ \ p=1:r \tag{66}\] and this concludes the proof. A pseudo-code of the procedure previously described to derive a decomposition of a linear tensor transformation is described in Algorithm 2. INPUT: order-\((d+e)\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times J}\) \(\mathcal{A}_{\mathbf{i}\mathbf{j}},\ \ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \ \mathbf{j}=(j_{1},\ldots,j_{e})\) 1. Derive non-negative self-adjoint operator \(\mathcal{G}_{\mathbf{i}\mathbf{j}}=\mathcal{A}_{\mathbf{i}\mathbf{j}} \mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{J\times J}\) 2. Solve the eigenvalue equation using Algorithm 1 \(\mathcal{G}_{\mathbf{i}\mathbf{j}}\mathcal{V}_{\mathbf{j},p}=\mathcal{V}_{ \mathbf{i},q}S_{q,p}^{2},\ \ q,p=1:r,\ \ S_{q,p}=diag(\sigma_{1},\ldots,\sigma_{r})\) 3. Derive the orthogonal tensors \(\mathcal{U}_{\mathbf{i},q}=\mathcal{A}_{\mathbf{i}\mathbf{j}}\mathcal{V}_{ \mathbf{j},p}S_{p,q}^{-1}\) OUTPUT: the decomposition \(\mathcal{A}_{\mathbf{i}\mathbf{j}}=\sigma_{p}\mathcal{U}_{\mathbf{i},p} \mathcal{V}_{\mathbf{j},p},\ \ p=1:r\) **Algorithm 2** Decomposition of a linear tensor transformation ### Decomposition of a generic tensor The previous result can be generalized to an order-\((d+e+f)\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\in\mathbb{R}^{I\times J\times K}\). In this case \(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\) does not unequivocally represents a transformation. With reference to this tensor we state the following proposition. 
**Proposition3**: Given \[\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\in\mathbb{R}^{I\times J\times K},\ \ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \ \mathbf{j}=(j_{1},\ldots,j_{e}),\ \ \mathbf{k}=(k_{1},\ldots,k_{f}) \tag{67}\] where \(I=I_{1}\times I_{2}\times\ldots\times I_{d}\), \(J=J_{1}\times J_{2}\times\ldots\times J_{e}\) and \(K=K_{1}\times K_{2}\times\ldots\times K_{f}\), there exist sets of orthogonal tensors \(\mathcal{U}=\{\mathcal{U}_{\mathbf{i},m}\in\mathbb{R}^{I},\ m=1:M\}\), \(\mathcal{Z}=\{\mathcal{Z}_{\mathbf{j},m}\in\mathbb{R}^{J},\ m=1:M\}\), \(\mathcal{W}=\{\mathcal{W}_{\mathbf{k},m}\in\mathbb{R}^{K},\ \ m=1:M\}\), and a set of scalars \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{M}>0\) with \(M=r_{1}r_{2}\), such that \(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\) can be decomposed as \[\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}=\lambda_{m}\mathcal{U}_{\mathbf{ i},m}\mathcal{Z}_{\mathbf{j},m}\mathcal{W}_{\mathbf{k},m},\ \ m=1:M. \tag{68}\] **Proof** The product \(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\mathcal{A}_{\mathbf{i}\mathbf{j },\mathbf{k}}\) is a self-adjoint operator on \(\mathbb{R}^{I}\), thus the eigenvalue equation \[\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\mathcal{A}_{\mathbf{i}\mathbf{j },\mathbf{k}}\mathcal{\hat{U}}_{\mathbf{i},p}=\mathcal{\hat{U}}_{\mathbf{i},q }S_{q,p}^{2},\ \ q,p=1:r_{1} \tag{69}\] admits a set of solutions \(\mathcal{U}=\{\mathcal{\hat{U}}_{\mathbf{i},q},\ \ q=1:r_{1}\}\) with \(S_{q,p}=diag(\sigma_{1},\ldots,\sigma_{r_{1}})\), \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r_{1}}>0\). Since the matrix \(S_{q,p}\) is diagonal full-rank, thus \(S_{q,p}^{-1}\) exists and multiplying by \(\mathcal{\hat{U}}_{\mathbf{i},q}\) (69) can be rewritten as \[\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\mathcal{\hat{U}}_{\mathbf{i},q}S _{q,p}^{-1}(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\mathcal{\hat{U}}_{ \mathbf{i},p}S_{q,p}^{-1})=\delta_{p,q}, \tag{70}\] that represents the orthogonal condition for the tensor \[\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},q}=\mathcal{A}_{\mathbf{j}\mathbf{j} \mathbf{k}}\hat{\mathcal{U}}_{\mathbf{l},p}S_{p,q}^{-1}\in\mathbb{R}^{J\times K}. \tag{71}\] Due to orthogonality of \(\hat{\mathcal{U}}_{\mathbf{l},p}\) we have \[\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}=\hat{\mathcal{U}}_{\mathbf{i},q}S _{q,p}\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p} \tag{72}\] that represents the decomposition of linear transformation \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}\) from \(\mathbb{R}^{J\times K}\) to \(\mathbb{R}^{I}\). (72) can be further decomposed since the product \(\mathcal{V}_{\mathbf{j}\mathbf{k},p}\mathcal{V}_{\mathbf{l}\mathbf{k},p}\) is a self-adjoint operator on \(\mathbb{R}^{J}\), thus the eigenvalue equation \[\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}\hat{\mathcal{V}}_{\mathbf{l} \mathbf{k},p}\hat{\mathcal{Z}}_{\mathbf{l},r}=\hat{\mathcal{Z}}_{\mathbf{j} \mathbf{,s}}\Lambda_{s,r}^{2},\;\;r,s=1:r_{2} \tag{73}\] admits a set of solutions \(\hat{\mathcal{Z}}=\{\hat{\mathcal{Z}}_{\mathbf{j}\mathbf{,s}},\;\;s=1:r_{2}\}\) with \(\Lambda_{r,s}=diag(\gamma_{1},\ldots,\gamma_{r_{2}})\), \(\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{r_{2}}>0\). 
The matrix \(\Lambda_{s,r}^{-1}\) exists and (73) can be rewritten as \[\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}\hat{\mathcal{Z}}_{\mathbf{j},s} \Lambda_{s,r}^{-1}(\hat{\mathcal{V}}_{\mathbf{l}\mathbf{k},p}\hat{\mathcal{Z }}_{\mathbf{l},r}\Lambda_{r,s}^{-1})=\delta_{r,s} \tag{74}\] that represents the orthogonality condition for the tensor \[\hat{\mathcal{W}}_{\mathbf{k},p,r}=\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p} \hat{\mathcal{Z}}_{\mathbf{j},s}\Lambda_{s,r}^{-1}. \tag{75}\] By deriving \(\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}\) we have \[\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}=\hat{\mathcal{Z}}_{\mathbf{j},s} \Lambda_{s,r}\hat{\mathcal{W}}_{\mathbf{k},p,r} \tag{76}\] that represents the decomposition of linear transformation \(\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}\) from \(\mathbb{R}^{K}\) to \(\mathbb{R}^{J}\). Finally combining (72) and (76) yields \[\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{,k}}=S_{q,p}\Lambda_{s,r}\hat{ \mathcal{U}}_{\mathbf{i},q}\hat{\mathcal{Z}}_{\mathbf{j},s}\hat{\mathcal{W}}_ {\mathbf{k},p,r}, \tag{77}\] and more concisely \[\mathcal{A}_{\mathbf{i}\mathbf{,j}\mathbf{,k}}=\sigma_{p}\gamma_{p}\hat{ \mathcal{U}}_{\mathbf{i},p}\hat{\mathcal{Z}}_{\mathbf{j},r}\hat{\mathcal{W}}_ {\mathbf{k},p,r}, \tag{78}\] since \(S_{q,p}=\sigma_{p}\delta_{q,p}\) and \(\Lambda_{s,r}=\gamma_{r}\delta_{s,r}\). Now, let us derive the matrix \(T\) that establishes the transformation between the vector index \(\mathbf{u}=(p,r),p=1:r_{1},r=1:r_{2}\) and the linear index \(m=1:M\) with \(M=r_{1}r_{2}\), such that \[(p,r)=\mathbf{u}(m)=(u_{1}(m),u_{2}(m)). \tag{79}\] Then (78) becomes \[\mathcal{A}_{\mathbf{i}\mathbf{,j}\mathbf{,k}}=\sigma_{u_{1}(m)}\gamma_{u_{2} (m)}\hat{\mathcal{U}}_{\mathbf{i},u_{1}(m)}\hat{\mathcal{Z}}_{\mathbf{j},u_{2 }(m)}\hat{\mathcal{W}}_{\mathbf{k},u_{1}(m),u_{2}(m)}, \tag{80}\] By defining \[\lambda_{m}=\sigma_{u_{1}(m)}\gamma_{u_{2}(m)}\] \[\mathcal{U}_{\mathbf{i},m}=\hat{\mathcal{U}}_{\mathbf{i},u_{1}(m)}\] \[\mathcal{Z}_{\mathbf{j},m}=\hat{\mathcal{Z}}_{\mathbf{j},u_{2}(m)}\] \[\mathcal{W}_{\mathbf{k},m}=\hat{\mathcal{W}}_{\mathbf{k},u_{1}(m), u_{2}(m)}, \tag{81}\] from (79) we have \[\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}=\lambda_{m}\mathcal{U}_{\mathbf{ i},m}\mathcal{Z}_{\mathbf{j},m}\mathcal{W}_{\mathbf{k},m}, \tag{82}\] and this concludes the proof. A pseudo-code of the procedure previously described to derive a decomposition of a generic tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}\) is described in Algorithm 3. ``` INPUT: order-\((d+e+f)\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}\in\mathbb{R}^{I\times J\times K}\) \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}},\ \ \mathbf{i}=(i_{1},\ldots,i_{d}),\ \ \mathbf{j}=(j_{1},\ldots,j_{e}),\ \ \mathbf{k}=(k_{1}, \ldots,k_{f})\) 1. Solve eigenvalue equation \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}\),\(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}\mathcal{U}_{\mathbf{i},p}=\hat{ \mathcal{U}}_{\mathbf{i},q}S_{q,p}^{2},\ \ q,p=1:r_{1}\) 2. Derive the orthogonal tensors \(\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},q}=\mathcal{A}_{\mathbf{i}\mathbf{j} \mathbf{k}}\hat{\mathcal{U}}_{\mathbf{i},p}S_{p,q}^{-1}\in\mathbb{R}^{J\times K}\) 3. Solve the eigenvalue equation \(\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p}\hat{\mathcal{V}}_{\mathbf{i}\mathbf{ k},p}\hat{\mathcal{Z}}_{\mathbf{1},r}=\hat{\mathcal{Z}}_{\mathbf{j}, \mathbf{s}}\Lambda_{s,r}^{2},\ \ r,s=1:r_{2}\) 4. 
Derive the orthogonal tensors \(\hat{\mathcal{W}}_{\mathbf{k},p,r}=\hat{\mathcal{V}}_{\mathbf{j}\mathbf{k},p} \hat{\mathcal{Z}}_{\mathbf{j},\mathbf{s}}\Lambda_{s,r}^{-1}\) 5. Decompose the tensor \(\mathcal{A}\) \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}=\sigma_{p}\gamma_{p}\hat{ \mathcal{U}}_{\mathbf{i},p}\hat{\mathcal{Z}}_{\mathbf{j},r}\hat{\mathcal{W}}_{ \mathbf{k},p,r}\) 6. Apply the inverse vector-to-linear transformation to the vector index \(\mathbf{u}\) \((p,r)=\mathbf{u}(m)=(u_{1}(m),u_{2}(m))\) 7. Define \(\lambda_{m}=\sigma_{u_{1}(m)}\gamma_{u_{2}(m)}\) \(\mathcal{U}_{\mathbf{i},m}=\hat{\mathcal{U}}_{\mathbf{i},u_{1}(m)}\) \(\mathcal{Z}_{\mathbf{j},m}=\hat{\mathcal{Z}}_{\mathbf{j},u_{2}(m)}\) \(\mathcal{W}_{\mathbf{k},m}=\hat{\mathcal{W}}_{\mathbf{k},u_{1}(m),u_{2}(m)}\) OUTPUT: the decomposition \(\mathcal{A}_{\mathbf{i}\mathbf{j}\mathbf{k}}=\lambda_{m}\mathcal{U}_{\mathbf{ i},m}\mathcal{Z}_{\mathbf{j},m}\mathcal{W}_{\mathbf{k},m}\) ``` **Algorithm 3** Decomposition of a generic tensor ## 4 Numerical experiments The aim of this section is to present some numerical experiments to validate the theoretical results previously derived for the decomposition of tensor transformations. The first experiment refers to an order-\(2d\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times I}\) with \(d=3\) and \(I=[16,16,3]\), to validate the decomposition of an _SA-NND_ tensor operator, proven in Proposition 1. The entries of the tensor \(\mathcal{A}\) have been randomly generated and the behavior of the eigenvalues obtained by Algorithm 1 is shown in Fig. 1. As can be seen, all the eigenvalues are positive and they decrease rapidly as the index \(n\) increases. The tensor \(\mathcal{A}\) can be reconstructed from decomposition (36) to within a numerical error of \(2.6485\times 10^{-10}\). The second experiment deals with the decomposition of an order-\((1+2)\) tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j}}\in\mathbb{R}^{I\times J}\), with \(I=[64]\) and \(J=[8,4]\). In this case the decomposition is obtained by applying Algorithm 2. Fig. 2 shows the behavior of the eigenvalues and, using the decomposition (58), the tensor \(\mathcal{A}\) can be reconstructed to within a numerical error of \(2.4618\times 10^{-10}\). Finally, the third experiment refers to a tensor \(\mathcal{A}_{\mathbf{i}\mathbf{j},\mathbf{k}}\in\mathbb{R}^{I\times J\times K}\) with \(I=[64]\), \(J=[16]\), \(K=[3]\) and the decomposition described in Algorithm 3. The behavior of the eigenvalues is shown in Fig. 3 and the reconstruction of the tensor \(\mathcal{A}\) is obtained with an error of \(5.2411\times 10^{-14}\). ## 5 Conclusion In this paper a mathematical framework for exact tensor decomposition, which represents a tensor as the sum of a finite number of low-rank tensors, has been developed. The core of the proposed approach is the derivation of a decomposition for non-negative self-adjoint tensor operators. In particular, it has been proven that a correspondence exists between the spectral decomposition of a symmetric matrix and the decomposition of a self-adjoint tensor operator.

Figure 3: The behavior of eigenvalues as obtained from eq. (69)

Figure 2: The behavior of eigenvalues as obtained from eq. (58)
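As a quick numerical check of the linear-transformation decomposition used in the second experiment, note that under the same index-flattening correspondence it reduces to an ordinary singular value decomposition of the reshaped tensor. A minimal numpy sketch, with small illustrative sizes rather than those used in the experiments, is:

```python
import numpy as np

# Illustrative sizes: A maps R^J -> R^I with I = (5,) and J = (4, 3).
I, J = (5,), (4, 3)
dimI, dimJ = int(np.prod(I)), int(np.prod(J))

A = np.random.rand(*(I + J))                  # order-(1+2) tensor A_{i,j}

# Flatten both index groups and take an ordinary SVD.
U, sigma, Vt = np.linalg.svd(A.reshape(dimI, dimJ), full_matrices=False)
r = int(np.sum(sigma > 1e-12))                # numerical rank

# Reshape the factors back into orthogonal tensors U_{i,p} and V_{j,p}.
U_tens = U[:, :r].reshape(I + (r,))
V_tens = Vt[:r, :].T.reshape(J + (r,))

# Reconstruction A_{i,j} = sum_p sigma_p U_{i,p} V_{j,p} (cf. (66)).
A_rec = np.einsum('p,ap,bcp->abc', sigma[:r], U_tens, V_tens)
print(np.max(np.abs(A_rec - A)))              # reconstruction error ~ 1e-15
```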
2309.16914
Bounding the Price-of-Fair-Sharing using Knapsack-Cover Constraints to guide Near-Optimal Cost-Recovery Algorithms
We consider the problem of fairly allocating the cost of providing a service among a set of users, where the service cost is formulated by an NP-hard {\it covering integer program (CIP)}. The central issue is to determine a cost allocation to each user that, in total, recovers as much as possible of the actual cost while satisfying a stabilizing condition known as the {\it core property}. The ratio between the total service cost and the cost recovered from users has been studied previously, with seminal papers of Deng, Ibaraki, \& Nagamochi and Goemans \& Skutella linking this {\it price-of-fair-sharing} to the integrality gap of an associated LP relaxation. Motivated by an application of cost allocation for network design for LPWANs, an emerging IoT technology, we investigate a general class of CIPs and give the first non-trivial price-of-fair-sharing bounds by using the natural LP relaxation strengthened with knapsack-cover inequalities. Furthermore, we demonstrate that these LP-based methods outperform previously known methods on an LPWAN-derived CIP data set. We also obtain analogous results for a more general setting in which the service provider also gets to select the subset of users, and the mechanism to elicit users' private utilities should be group-strategyproof. The key to obtaining this result is a simplified and improved analysis for a cross-monotone cost-allocation mechanism.
Sander Aarts, Jacob Dentes, Manxi Wu, David Shmoys
2023-09-29T00:58:42Z
http://arxiv.org/abs/2309.16914v2
# Sharing the Cost of IoT Wireless Coverage with a Strengthened Linear Programming Formulation+ ###### Abstract Sharing infrastructure between many users is often advantageous; however, finding a fair and reasonable way to allocate its cost between its users can be challenging. This is particularly true for LPWANs, a popular Internet of Things solution for wirelessly connecting devices to the internet. We study cost-allocation of LPWANs using a covering integer program. Standard cost-allocation methods are inapplicable in this model, because the integrality gap of its natural LP-relaxation is unbounded. We overcome this challenge by strengthening the natural LP with knapsack-cover inequalities. Our main result is that all dual-feasible solutions to the strengthened LP produce cost-allocations that satisfy the core property. This reduces the problem of finding a cost-allocation to that of finding a strengthened-LP-relative approximation algorithm. Existing algorithms imply improved cost-recovery ratios for families of sparse CIP instances. Finally, we show that the strengthened formulation simplifies and improves the analysis of a cross-monotone cost-allocation mechanism as well. Keywords: Cost sharing, IoT, Covering problems, Computational social choice ## 1 Introduction It is often beneficial for a group of users to share infrastructure. However, finding a fair and reasonable way to allocate the costs of shared infrastructure between users can be challenging. For instance, the last decade has seen wireless communication networks for the Internet of Things (IoT) proliferate, now connecting countless wireless devices to the internet. While there are many heterogeneous IoT protocols, such as Low-Power Wide-Area Networks (LPWANs) including LoRaWAN, Sigfox, and NB-IoT, most require considerable investment in specialized wireless infrastructure, particularly wireless receivers that demodulate radio transmissions from the Things and pass them on as data to the Internet. Building and maintaining receivers is a major cost-driver. Indeed, some LPWAN applications require coverage from multiple receivers in order to ensure adequate robustness and reception rates. Meanwhile, these wireless receivers can typically support large volumes of traffic, far in excess of what a single user requires [11]. Therefore, sharing the network among multiple users is among the most effective means to ensure sufficient service quality and lower per-user costs, provided a mutually beneficial cost-sharing scheme can be orchestrated. The cost of IoT coverage is shared in drastically different ways by a diverse cast of network operators. There are large shared IoT networks operated by for-profit companies, non-profits, as well as municipal organizations, each employing different cost-sharing structures. For LoRaWAN alone, operators range from private connectivity-as-a-service providers such as KPN that charge a subscription, to municipal networks funded by taxpayers, to crowd-sourced free-to-use sharing solutions such as The Things Network. Despite this diversity, little research has been conducted on cost-sharing in IoT networks, and in particular on the incentives and computational challenges underlying the problem of IoT network design and cost-sharing. This paper formulates a model for IoT wireless coverage provision, and studies the computational aspects of cost-sharing in LPWANs. We model the wireless coverage provision problem as a covering integer program (CIP).
The model contains a set of \(m\) users, and a set of \(n\) potential wireless receivers, or facilities. Users have heterogeneous service requirements \(\mathbf{r}\in\mathbb{R}^{m}\); some users may need more than one receiver to satisfy their requirements. Facilities have costs \(\mathbf{c}\in\mathbb{R}^{n}\), and each facility-user pair has a contribution \(a_{ij}\) that specifies the amount of coverage facility \(i\) provides user \(j\) if opened. These are collected in the \(n\times m\) matrix \(\mathbf{A}\). The objective is to minimize cost, subject to providing sufficient coverage. \[c^{*}=\min\{\mathbf{c}^{T}\mathbf{x}:\mathbf{Ax}\geq\mathbf{r},\ \mathbf{x}\in\{0,1\}^{n}\} \tag{1}\] The binary decision variables \(x_{i}\in\{0,1\}\) indicate whether facility \(i\) is opened or not1. If inputs \(\mathbf{A}\) and \(\mathbf{r}\) are binary, the CIP reduces to the set cover problem; if \(\mathbf{A}\) and \(\mathbf{r}\) are integral, they form a multi-set multi-cover instance. Finding an optimal solution is computationally hard [13, 12]. However, there are approximation algorithms that find near-optimal solutions to the general problem [5, 7, 19]. Footnote 1: Some formulations of CIPs permit multiplicity constraints \(x_{i}\in\{0,1,\ldots,d_{i}\}\) for any integers \(d_{i}\), including infinity. It is straightforward to extend our results to this setting. The goal of this work is twofold: efficiently solve the above problem, and allocate the cost of a solution to the served users in a fair and stable manner. Our focus is on cost-allocations that satisfy the _core property_. These are well-understood concepts in cost-sharing, with strong ties to infrastructure economics [17, 28]. We say a cost-allocation satisfies the core property if no group of served users is allocated more cost than what it would minimally cost the group to provide coverage for themselves. This is a minimal requirement to make users want the service. If the users' valuations are unknown, there is often interest in _cross-monotone_ cost-allocations, which can be used to elicit them via a bidding mechanism [26]. A cost-allocation is cross-monotone whenever no existing user is allocated more cost when an additional user joins the network. An operator's goal is to maximize the cost recovered subject to satisfying the core property. A cost allocation is \(\alpha\)-_budget-balanced_ if the costs charged to the users add up to at least an \(\alpha\)-fraction of the total cost. Cost allocation in CIPs is challenging due to the presence of multi-cover constraints and variable contributions. Without multi-cover requirements, cost allocation is well-understood via linear programming duality [9]. For instance, for the set cover problem, the dual variables produce cost-shares satisfying the core property that recover \(1/\ln(n)\) of the cost. This bound is tight; the duality gap for set cover is \(\ln(n)\), and any cost-shares satisfying the core property can be shown to be dual variables. Indeed, it is considered folklore that one can recover at most a fraction \(\alpha\) of the cost whenever the integrality gap of the natural LP relaxation is \(1/\alpha\). The CIP is in direct contradiction with this understanding. On the one hand, there is an algorithm that produces cost-allocations that recover at least \(1/(1+\ln(n+1))\) of the cost, without explicitly using duality [24]. On the other hand, the integrality gap of CIP is unbounded [5].
This is at odds with the folklore, and invites the question of what role, if any, duality has in sharing the cost of a CIP solution. This paper develops a general method for finding provably optimal cost allocations for the CIP via the dual of a strengthened linear programming formulation. Key to our approach are knapsack-cover inequalities that strengthen the linear programming formulation and its dual, shrinking the integrality gap [5]. This LP formulation underlies many state-of-the-art approximation algorithms for CIP [5, 6, 7, 19]. Our main result proves that _all_ dual-feasible solutions to the strengthened LP produce cost-allocations that satisfy the core property. These cost-allocations are optimal, in that they attain the maximum cost-recovery ratio on sub-problems for which an upper bound is known. Moreover, our framework reduces the problem of finding cost-allocations to that of finding a strengthened-LP-relative approximation algorithm. By invoking known results for CIP approximation algorithms, this immediately implies improvements over the best existing cost-recovery ratio for families of sparse CIP instances. Finally, we also employ the strengthened LP formulation to connect the analysis of Li _et al._'s [24] algorithm for group strategyproof cost-allocations to LP-duality, radically simplifying their proof, and improving the cost-recovery ratio on instances with bounded coverage per facility. ## 2 Background and model This section covers, respectively, background on IoT networks and LPWANs, our cost-sharing model in the context of LPWANs, background on cost-sharing in covering problems, and knapsack-cover inequalities. The first subsection can be skipped in case IoT is not of interest. The subsection on knapsack-cover inequalities, however, is critical to understanding our results. ### LoRaWAN particulars Low-Power Wide-Area Networks (LPWANs) are a popular IoT solution for connecting devices to the internet over long distances using little power. LoRaWAN is one of the most widely used LPWAN protocols; see e.g., Alumhaya _et al._[1] for an overview. LoRaWAN is designed for the transmission of small packets of data from battery-driven sensors to wireless receivers called _gateways_. These forward the data to the internet over a broadband backhaul [31]. LoRaWAN applications include road-surface monitoring, sensing fill-levels of municipal garbage containers, and tracking livestock locations, which are used, respectively, to inform safe traffic routing, optimize garbage pickup, and reduce the costs of livestock monitoring. While LoRaWAN is the primary motivating application of this work, the covering integer program we study is quite general, and therefore applicable to other technologies, not to mention problems entirely divorced from wireless networks and IoT. A property of LoRaWAN that is particularly relevant to cost-sharing is its low barriers to entry. Specifically, LoRaWAN uses the ISM band of the wireless spectrum [31]. This is an unlicensed band; anyone can operate a gateway free of charge, without a license or certification. Secondly, LoRaWAN gateways are relatively inexpensive, with some models costing just a few hundred US dollars. While the cost of carrier-grade gateways that support large volumes of traffic can be higher, individuals or small groups of users can serve their own needs at relatively low costs.
Finally, organizations such as TTN provide open-source software and free online support for managing the server and software end of LoRaWAN networks, which further increases accessibility. Altogether, these low barriers give users considerable leverage over the cost of wireless coverage. This is in stark contrast with other wireless technologies and natural monopolies, such as 5G, that require costly licenses and hardware. Another unique and critical feature of LoRaWAN is the use of redundant gateways for improving the quality of coverage. LoRaWAN is _association-free_, in that a single device does not transmit to any particular receiver [31]. Instead, transmissions are broadcast to all receivers, and demodulated by each one that successfully receives the packet. Duplicates are pruned afterwards by the network server. This feature is critical for achieving robustness through redundancy; a transmitted packet is lost only if all receivers fail to demodulate it. Studies on LoRaWAN connectivity find that transmissions often fail at random, but the success rate varies predictably with distance, terrain, and wireless parameters [2, 3, 29]. When a device is covered by multiple gateways, the probability of packet loss can be kept minimal even though individual connections frequently fail. The CIP model accommodates user reception rate requirements, gateway redundancy, and heterogeneity in connection qualities. ### A covering integer program We model the problem of LoRaWAN coverage provision as a covering integer program (CIP). The model is quite general, and captures many covering problems in which there are additive multi-cover requirements. The main model components are facilities, users, and the contributions of each facility to each user. Let \(\mathcal{F}\) denote a set of \(n\) facilities; these are sites at which a receiver can be installed. Each facility has a positive installation cost, \(\mathbf{c}=(c_{1},\ldots,c_{n})\). Costs represent hardware and installation costs, as well as amortized rents and expenses. Let \(\mathcal{U}\) denote a set of \(m\) users. These can be viewed as individual devices, or as representative points for small areas, that require service. The service requirements \(\mathbf{r}=(r_{1},\ldots,r_{m})\) are positive real numbers that specify the minimum quality of service at which each user \(j\) is satisfied. Each facility contributes a specified amount of coverage, \(a_{ij}\geq 0\), to each user. This captures the heterogeneous connection qualities typical of LoRaWAN. We denote by \(\mathbf{A}\) the \(n\times m\) matrix of connectivities; \(A_{i}\) denotes the \(i\)th row, and \(\mathbf{a}_{j}=(a_{1j},\ldots,a_{nj})\) the \(j\)th column. The tuple \((\mathbf{A},\mathbf{c},\mathbf{r})\) defines a CIP instance. Finally, let \(\Gamma\) denote the row-sparsity, i.e., the maximum number of non-zero entries in any row of \(\mathbf{A}\), and \(\Delta\) the corresponding column-sparsity. The objective in the covering integer program is to provide sufficient coverage to all users at minimum cost. The linear programming (LP) relaxation of the problem is \[\min_{x} \sum_{i\in F}c_{i}x_{i}\] (LP) s.t. \[\sum_{i\in F}a_{ij}x_{i}\geq r_{j}, \forall j\in U,\] \[0\leq x_{i}\leq 1, \forall i\in\mathcal{F},\] where the variables \(x_{i}\) indicate whether facility \(i\) is purchased or not. The model assumes that the cost, and the total contribution to each user, are additive in the opened facilities.
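For small instances, the covering integer program above can be set up and solved directly with an off-the-shelf mixed-integer solver. The sketch below uses scipy.optimize.milp; the instance data are made-up illustrative values, not taken from the paper or its data set.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative instance: n = 4 candidate facilities, m = 3 users.
c = np.array([3.0, 2.0, 4.0, 1.0])            # facility costs
r = np.array([2.0, 1.5, 1.0])                 # user requirements
A = np.array([[1.0, 0.5, 0.0],                # A[i, j] = contribution of
              [1.5, 0.0, 1.0],                #   facility i to user j
              [0.0, 2.0, 0.5],
              [0.5, 0.5, 0.5]])

# Covering constraints: for each user j, sum_i A[i, j] * x[i] >= r[j].
coverage = LinearConstraint(A.T, lb=r, ub=np.inf)

res = milp(c=c,
           constraints=coverage,
           integrality=np.ones_like(c),       # x_i integer ...
           bounds=Bounds(0, 1))               # ... and 0 <= x_i <= 1

print(res.x, res.fun)                         # opened facilities and cost c*
```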
The constraints \(x_{i}\leq 1\) are multiplicity constraints; these prohibit the purchase of a facility multiple times. Solving a CIP to optimality is computationally hard. Even when the requirements are 1 and contributions \(a_{ij}\) are binary, the problem is a set cover problem, which is NP-hard and inapproximable within a factor of \(c\ln(n)\) of the optimum, for any \(c<1\), under standard complexity assumptions [12, 13]. The additive coverage constraints in the CIP can capture requirements in terms of packet reception probabilities via a simple reduction. Assume that the successful reception from user \(j\) to facility \(i\) is a binary variable, with given success, or reception, rate \(\rho_{ij}\). In this case, one can view coverage provision as a fuzzy set cover problem. Chian, Hwang, and Liu [8] reduce fuzzy set covering to a CIP. Assume that failures of links are independent, and let \(\mathbf{x}=(x_{1},\ldots,x_{n})\) be a binary vector encoding a subset of built facilities \(F\subseteq\mathcal{F}\). Then the failure rate of a transmission from user \(j\) is \[\Pr[\text{Failure}\mid F]=\prod_{i\in\mathcal{F}}(1-\rho_{ij})^{x_{i}}.\] Taking logarithms, this yields an expression that is additive in the variables \(x_{i}\). \[\log\Pr[\text{Failure}\mid x]=\sum_{i\in\mathcal{F}}\log(1-\rho_{ij})x_{i}\] A constraint limiting the maximum packet error rate can therefore be expressed as a covering constraint in the CIP. While independence is a strong assumption, this reduction adds intuition to the CIP, and can be viewed as a stylized approximation to a dependent system, e.g., by choosing more "pessimistic" estimates for \(\rho_{ij}\), or higher service requirements \(r_{j}\). ### Cost-sharing, the core property, and budget-balance Cost-allocation is the problem of allocating the cost of a solution to the users in a fair and reasonable way. This work focuses on the _core_: a set of cost-allocations that are considered both fair and stable. We refer to Jain and Mahdian [17] for further context. A cost allocation \(\xi:\mathcal{U}\times 2^{\mathcal{U}}\to\mathbb{R}\) assigns, given a group \(U\subseteq\mathcal{U}\) of served users, each user \(j\in\mathcal{U}\) a non-negative cost to pay. If a user \(j\) is not served, they are allocated zero cost. A cost-allocation \(\xi\) is in the _core_ if it satisfies the _core-property_ and _budget balance_. The core property is said to hold for cost allocation \(\xi\) if, for all served groups \(U\subseteq\mathcal{U}\), no sub-group \(J\subseteq U\) can profitably deviate and serve only themselves: \[\sum_{j\in J}\xi(j,U)\leq c(J)\quad\forall J\subseteq U, \tag{2}\] where \(c(J)\) is the cost of serving group \(J\). Meanwhile, a cost allocation is budget-balanced whenever the total cost shares paid by the served users, \(\sum_{j\in U}\xi(j,U)\), cover the cost of serving them. It is not always possible to both satisfy the core property and recover the full cost. In particular, there are simple set cover instances that have no cost-allocations in the core (see e.g. Li _et al._[24]). Moreover, Deng, Ibaraki, and Nagamochi [9] show that a general covering problem has a non-empty core if and only if its natural LP-relaxation has no integrality gap (i.e., if the LP is guaranteed to provide a tight bound on the optimum). If the core is empty, it is common to maximize the cost recovered, subject to the core property [14, 17].
To this end, we say cost-allocation \(\xi\) is \(\alpha\)-budget-balanced whenever it recovers a fraction \(\alpha\in[0,1]\) of the cost, \[\sum_{j\in U}\xi(j,U)\geq\alpha\cdot c(U). \tag{3}\] The parameter \(\alpha\) is called the _cost-recovery ratio_. A good cost-allocation is one that guarantees a high cost-recovery ratio. The cost-recovery ratio is well understood via linear programming duality. On the one hand, dual feasible solutions yield cost shares satisfying the core property with provably large cost-recovery ratios for a number of problems [9, 10, 14, 15, 21, 23, 25, 27]. On the other hand, seminal work by Deng, Ibaraki, and Nagamochi [9], and Goemans and Skutella [14], shows for the set cover and facility location problems, respectively, that all cost-allocations satisfying the core property are dual feasible solutions. Because the cost-allocation mechanism purchases an integer solution, but only allocates costs as a fractional solution, weak duality implies that the cost-recovery ratio is bounded above by the reciprocal of the integrality gap of the problem; it is folklore that one can recover at most a fraction \(\alpha\) of the cost, whenever the integrality gap of the natural LP relaxation is \(1/\alpha\)[17, 25]. The covering integer program appears to be in direct contradiction with the folklore on duality and cost-recovery. Li _et al._[24] develop a greedy algorithm for multi-set multi-cover that produces cost-allocations that satisfy the core property and recover \(\mathcal{O}\left(1/\ln\Delta\right)\) of the cost. This is accomplished without explicitly using duality. Meanwhile, the integrality gap of the natural linear programming relaxation of CIP is known to be unbounded, even when all inputs are integral [5]. We show that the folklore continues to apply, provided one uses the "right", and not necessarily natural, linear programming formulation (see Theorem 1.1). The strengthened linear programming formulation we use is covered in Subsection 2.5. ### Group strategyproofness and cross-monotonicity So far it has been assumed that the group of users to be served is given; sometimes this choice also falls on the service provider. In an extended setting the goal of the service provider is to elicit private user preferences and design a mechanism for choosing who to serve, in addition to allocating costs to those served. See Jain and Mahdian [17] for more details on definitions. Assume that each user \(j\in\mathcal{U}\) has a private utility \(u_{j}\) for receiving service, and has the option of opting out for a utility of \(0\). When utilities are unknown, they must be elicited by a mechanism. This creates a challenge, because users and groups of users may be able to strategically misrepresent their utilities. A mechanism is said to be _strategyproof_ if individual users cannot benefit from misrepresenting their utility, and _group strategyproof_ if no group of users can benefit by colluding to misrepresent their utilities. The goal of a mechanism in this setting is to elicit preferences in a group strategyproof manner, decide who to serve, and allocate costs in a way that maximizes the cost-recovery ratio. To find a group strategyproof mechanism, Moulin and Shenker [26] prove that it suffices to have a cross-monotonic cost-allocation mechanism.
Cost-shares \(\xi:\mathcal{U}\times 2^{\mathcal{U}}\to\mathbb{R}\) are _cross-monotonic_ if the cost allocated to user \(j\) does not increase when more users are served: \[\xi(j,U)\leq\xi(j,J)\quad\forall j\in J\subseteq U\subseteq\mathcal{U}. \tag{4}\] Not all cost-shares that satisfy the core property are cross-monotonic. Pal and Tardos [27] show a general approach for using primal-dual algorithms to find cross-monotonic cost shares in the facility location and rent-or-buy problems. This approach has been used for many other problems as well [18, 20, 21, 23]. Interestingly, imposing cross-monotonicity usually lowers the best attainable cost-recovery ratio [27]. Indeed, Immorlica, Mahdian, and Mirrokni [16] upper bound the cost-recovery ratio for cross-monotonic cost allocations using an elegant probabilistic method. In particular, a cross-monotonic cost allocation for set cover can recover at most \(1/\Delta\) of the cost, where \(\Delta\) is the size of the largest set. Moreover, even if all elements are covered by at most two sets, the cost recovery ratio is at most \(\mathcal{O}\left((2+\epsilon)/n^{1/3}\right)\). Li _et al._[24] describe a cross-monotonic cost allocation for the CIP that recovers \(1/(2n)\) of the cost without explicit use of duality; we simplify their proof using a strengthened dual, generalize the algorithm to the CIP, and improve this to \(1/(2\Delta)\). ### Knapsack-cover inequalities and approximation algorithms Working with the natural linear programming relaxation of CIP is challenging because the integrality gap is unbounded. Carr _et al._[5] provide a simple example using only one user. Let \(R>0\) be a large integer requirement for the one user. Let there be two items. Item A provides \(R-1\) units of coverage at zero cost; Item B provides \(R\) units of coverage at a cost of \(1\). Clearly, a feasible integer solution must include item B, with a total cost of \(1\). A fractional solution, however, can select a full unit of item A, and only a \(1/R\) fraction of item B, with a cost of \(1/R\). The cost ratio of the integer solution to the fractional solution is \(R\), which can be chosen to be arbitrarily large. Carr _et al._[5] strengthen the linear programming formulation by adding an exponentially large family of valid _knapsack-cover_ (KC) inequalities. Suppose facilities \(S\subseteq\mathcal{F}\) have been built. Then user \(j\) has _residual demand_\(r_{j}^{S}=\max\left\{r_{j}-\sum_{i\in S}a_{ij},0\right\}\). That is, to satisfy her requirement, user \(j\) must collect a contribution of at least \(r_{j}^{S}\) from the remaining unbuilt facilities \(\mathcal{F}\backslash S\). In addition, there is no benefit to exceeding \(r_{j}^{S}\), so contributions \(a_{ij}\) that exceed \(r_{j}^{S}\) can be reduced to \(r_{j}^{S}\). We call \(a_{ij}^{S}=\min\left\{a_{ij},r_{j}^{S}\right\}\) the _residual contribution_, and set it to zero whenever \(i\) is in \(S\). This defines a knapsack-cover inequality for every subset \(S\subseteq\mathcal{F}\) and user \(j\in\mathcal{U}\) \[\sum_{i\in F\backslash S}a_{ij}^{S}x_{i}\geq r_{j}^{S}. \tag{5}\] There are \(|\mathcal{U}|\times|2^{\mathcal{F}}|=m2^{n}\) KC-inequalities. It is critical to note that adding these inequalities does not change the set of feasible _integer_ solutions. The KC-inequalities define a strengthened linear programming relaxation with a corresponding strengthened dual program. We call the strengthened primal program the knapsack-cover linear program (KC-LP).
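The residual quantities entering (5), and the strengthened program written out in full below, are straightforward to compute for a given built set \(S\). A minimal sketch (the function and variable names are our own, not from the paper) is:

```python
import numpy as np

def residuals(A, r, S):
    """Residual demands r_j^S and residual contributions a_ij^S for an
    already-built facility set S (cf. the knapsack-cover inequalities (5));
    A is the n x m contribution matrix, r the vector of requirements."""
    S = np.asarray(sorted(S), dtype=int)
    built = A[S].sum(axis=0)                  # coverage already provided per user
    r_S = np.maximum(r - built, 0.0)          # residual demand per user
    a_S = np.minimum(A, r_S[None, :])         # truncate contributions at r_j^S
    a_S[S, :] = 0.0                           # built facilities contribute nothing
    return r_S, a_S

# Example with the same illustrative instance as in the earlier sketch.
A = np.array([[1.0, 0.5, 0.0],
              [1.5, 0.0, 1.0],
              [0.0, 2.0, 0.5],
              [0.5, 0.5, 0.5]])
r = np.array([2.0, 1.5, 1.0])
r_S, a_S = residuals(A, r, S={1})             # suppose facility 1 is built
print(r_S)                                    # [0.5, 1.5, 0.0]
```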
For a given subset of users \(U\subseteq\mathcal{U}\), the program is: \[\min_{x} \sum_{i\in\mathcal{F}}c_{i}x_{i},\] \[s.t. \sum_{i\in F\backslash S}a_{ij}^{S}x_{i}\geq r_{j}^{S}, \forall(j,S)\in U\times 2^{\mathcal{F}},\] (KC-LP) \[x_{i}\geq 0, \forall i\in\mathcal{F}.\] This is a pure covering program, as the multiplicity constraints are no longer necessary2. Associated with this primal is a dual packing program: the knapsack-cover dual program (KC-DP). The dual has one constraint for each facility in \(\mathcal{F}\), and an exponential number of variables \(y_{j}^{S}\), each corresponding to one KC-inequality. For a fixed facility \(i\), the main constraint requires that the variables corresponding to selections \(S\) not including facility \(i\), weighted by the residual contributions, add up to at most the cost of the facility. Footnote 2: They are implicit in setting \(a_{ij}^{S}\) to zero whenever \(i\in S\). \[\max_{y} \sum_{j\in U}\sum_{S\subseteq F}r_{j}^{S}y_{j}^{S},\] \[s.t. \sum_{j\in U}\sum_{S\subseteq F\backslash\{i\}}a_{ij}^{S}y_{j}^{S }\leq c_{i}, \forall i\in F,\] (KC-DP) \[y_{j}^{S}\geq 0, \forall(j,S)\in U\times 2^{F}.\] There are state-of-the-art approximation algorithms that produce integer solutions with costs within a multiplicative factor of the KC-LP optimum. Some also produce feasible solutions to the strengthened dual. Carr _et al._[5] describe a \(\Gamma\)-approximation algorithm based on rounding a solution to KC-LP. Conversely, using a more involved rounding approach, Kolliopoulos and Young [19] give a \(\mathcal{O}\left(\log(1+\Delta)\right)\) approximation algorithm. Together, these upper bound the strengthened integrality gap by \(\Gamma\) and \(\mathcal{O}\left(1+\log\Delta\right)\). More recently, KC-LP rounding algorithms achieve approximation ratios that are logarithmic in the maximum column sum \(\Delta_{1}=\max_{i\in\mathcal{F}}\sum_{j\in U}a_{ij}/r_{j}\) rather than \(\Delta\); the former is at most \(\Delta\), and often much smaller [6, 7]. However, these solutions may violate the multiplicity constraints by a small amount.3 Finally, Carnes and Shmoys [4] develop a primal-dual algorithm that uses the strengthened dual, providing a 2-approximation for the minimum-cost knapsack problem. While this algorithm produces feasible dual solutions, the rounding algorithms require solving the strengthened LP and its dual first. Footnote 3: These are nevertheless relevant to cost-sharing applications that have no multiplicity constraints, or in which small violations can be tolerated. The strengthened linear program and its dual can be efficiently solved to near optimality despite the large number of constraints. Chekuri and Quanrud [6] develop a multiplicative weights method that returns approximately optimal primal and dual solutions in near linear time, although with a dependence on the error term on the order of \(\mathcal{O}\left(1/\epsilon^{5}\right)\). This approach builds on the approach of Plotkin, Shmoys, and Tardos [30], and the solution method outlined by Carr _et al._[5]. The returned dual solutions are feasible, within a \(\frac{1}{1-\epsilon}\)-factor of the optimum, and have a polynomially bounded number of non-zero variables [6]. In practice, the primal and dual problems can often be solved exactly using column generation; the problem of finding a _most violated inequality_ can be reduced to solving a sequence of pseudo-polynomially-many minimum-cost knapsack problems, each of which admits a pseudo-polynomial exact algorithm (e.g.
by adapting the algorithm in [22]). ## 3 Cost-shares and strengthened LP Our main result shows that any feasible solution to the strengthened dual (KC-DP) produces natural cost-allocations that satisfy the core property. This enables the use of existing KC-LP-relative approximation algorithms to find cost-shares with good cost-recovery ratios. Theorem 3.1: _Fix users \(U\subseteq\mathcal{U}\). Let \(\mathbf{y}=(y_{j}^{S})_{S\subseteq\mathcal{F},j\in U}\) be a KC-LP dual-feasible solution. Then, cost shares_ \[\xi(j,U)=\sum_{S\subseteq\mathcal{F}}r_{j}^{S}y_{j}^{S}\qquad\forall j\in U, \tag{6}\] _with \(\xi(j,U)=0\) for \(j\notin U\), satisfy the core property._ Clearly, the sum of cost shares over all users is exactly the strengthened dual objective in (KC-DP). The theorem can be used in two ways. First, it can be used to exploit approximation algorithms that find integer solutions with costs bounded in terms of a corresponding dual-feasible solution. Corollary 1: _Fix users \(U\subseteq\mathcal{U}\), and let \(X\subseteq\mathcal{F}\) be a feasible integral solution serving users \(U\), and \(\mathbf{y}\) a KC-LP dual feasible solution for the problem defined on users \(U\). If the solution \(X\) is a KC-LP-relative \(\alpha\)-approximation, i.e.,_ \[\sum_{i\in X}c_{i}\leq\alpha\sum_{j\in U}\sum_{S\subseteq\mathcal{F}}r_{j}^{S} y_{j}^{S},\] _then the cost shares in Theorem 3.1 recover at least \(1/\alpha\) of the cost._ This immediately implies that the Carr _et al._[5] algorithm can produce cost shares that recover \(1/\Gamma\) of the cost, and that the rounding approach of Kolliopoulos and Young [19] produces a cost-recovery ratio of \(\mathcal{O}\left(1/\ln\Delta\right)\). Moreover, if small violations of the multiplicity bounds are tolerated, this is improved to \(\mathcal{O}\left(1/\ln\Delta_{1}\right)\). The \(1/\Gamma\) bound is new. The latter matches the \(\mathcal{O}\left(1/\ln\Delta\right)\) ratio of Li _et al._[24] for the multi-set multi-cover problem, and applies more generally, including when inputs are fractional. The second use of the result is to completely decouple the search for an integral primal solution from the search for a dual solution; for any integer solution, one can solve the dual (KC-DP) to near optimality and use Theorem 1 to produce a cost-allocation in the core. We proceed by proving the theorem. The proof is relatively simple; it naturally uses dual feasibility, and a rearrangement of summations that is standard in the analysis of primal-dual schema [33]. Fix the users \(U\subseteq\mathcal{U}\). We need to show that for any coalition \(J\subseteq U\), \[c(X_{J}^{*})\geq\sum_{j\in J}\xi(j,U),\] where \(c(X_{J}^{*})\) is the minimum cost of an integer solution serving coalition \(J\). Let \(\mathbf{y}\) be a dual (KC-DP) feasible solution for users \(U\), with corresponding cost-shares \(\xi(j,U)\) computed from \(\mathbf{y}\) as in the statement of Theorem 1. Now consider a subset of users \(J\subseteq U\). To prove the main theorem, we must argue that the total cost allocated to group \(J\) is no more than the cost of serving the coalition \(J\) optimally. To this end, let \(X_{J}^{*}\subseteq\mathcal{F}\) be a minimum-cost solution for serving group \(J\).
Using KC-DP feasibility of \(\mathbf{y}\), the minimum cost of serving group \(J\) is bounded below by \[c(X_{J}^{*})=\sum_{i\in X_{J}^{*}}c_{i}\geq\sum_{i\in X_{J}^{*}}\left(\sum_{j\in U}\sum_{S\subseteq F\setminus\{i\}}a_{ij}^{S}y_{j}^{S}\right).\] On the right-hand side, the middle sum is over all users \(U\), while we are more interested in the coalition \(J\). However, because all data and variables are non-negative, the inequality is preserved if we sum only over the users in \(J\). It suffices to show that the right-hand side, restricted to \(J\), is an upper bound on the total cost allocation of coalition \(J\); a standard change in the order of summation completes the proof. The right-hand-side term counts, for every item \(i\in X_{J}^{*}\), the sum over subsets \(S\subseteq F\) that do not contain \(i\). Equivalently, one can sum, for each subset \(S\subseteq F\), over every item in \(X_{J}^{*}\) not in \(S\): \[\sum_{i\in X_{J}^{*}}\sum_{j\in J}\sum_{S\subseteq F\setminus\{i\}}a_{ij}^{S}y_{j}^{S}=\sum_{j\in J}\sum_{S\subseteq F}y_{j}^{S}\left(\sum_{i\in X_{J}^{*}\setminus S}a_{ij}^{S}\right).\] Recall now that \(X_{J}^{*}\) is a feasible solution to the sub-problem induced by coalition \(J\). In other words, the residual demand at each subset \(S\) is satisfied by \(X_{J}^{*}\) for all users in \(J\); \[\sum_{i\in X_{J}^{*}\setminus S}a_{ij}^{S}\geq r_{j}^{S},\quad\forall j\in J.\] Substituting this bound, the expression above is at least the cost allocation to group \(J\). We have shown \[c(X_{J}^{*})\geq\sum_{j\in J}\sum_{S\subseteq F}r_{j}^{S}y_{j}^{S}=\sum_{j\in J}\xi(j,U). \tag{7}\] In other words, any KC-DP dual solution \(\mathbf{y}\) produces cost-shares satisfying the core property. ### Tightness of the cost-recovery ratio A natural question is whether there are cost-allocations, perhaps not derived from dual variables, that have higher worst-case cost-recovery ratios than ours. For CIP the answer is _no_, at least with respect to the parameters \(\Gamma\) and \(\Delta\). In particular, when all contributions and requirements are binary, the KC-LP and its dual reduce to the standard set cover linear programs. Here, the folk theorem applies, and the cost-recovery ratio is bounded by the integrality gap [9]. For Set Cover, and hence for CIP as well, the integrality gap can be as large as \(\Gamma\) and \(\ln\Delta\)[12, 32]. All in all, this shows that the KC-LP formulation provides a simple framework for finding cost allocations satisfying the core property, with optimal cost-recovery ratios. However, these cost allocations need not satisfy cross-monotonicity, and hence need not produce a group strategyproof mechanism. ## 4 A group strategyproof primal-dual algorithm Our strengthened linear programming approach can also be used to find cross-monotonic cost shares. Our mechanism uses a primal-dual algorithm following the general framework of Pal and Tardos [27]. While our algorithm is equivalent to that of Li _et al._[24], it differs in its analysis. In particular, the connection to primal-dual algorithms is new. Moreover, we believe our analysis is simpler, and it improves on the worst-case \(1/(2n)\) cost-recovery ratio of Li _et al._, to \(1/(2\Delta)\), with \(\Delta\leq n\). The following theorem summarizes the performance of the mechanism. Theorem 4.1: _Fix users \(U\subseteq\mathcal{U}\).
The mechanism (Algorithm 1) produces a feasible solution \(X\) for users \(U\), and cross-monotone cost shares \(\xi(j,J)\) that satisfy \(\sum_{j\in J}\xi(j,J)\geq\left(\frac{1}{2\Delta}\right)c(X)\)._ The main algorithmic idea behind the mechanism is to let each user independently select facilities in complete isolation from the other users. Cross-monotonicity is ensured by preventing any interaction between users' dual variables. Moreover, the problem faced by each individual user is a minimum-cost knapsack problem, which can be solved with a primal-dual algorithm (Algorithm 2) [4]. This produces, for each user \(j\in U\), a selection of facilities \(X_{j}\), and a KC-LP dual solution \(\mathbf{y}_{j}\) that is feasible for the individual problem in which there are constraints for user \(j\) only, and no variables corresponding to other users. Finally, our mechanism selects the union of all selected facilities \((X_{j})_{j\in U}\), and scales down the individual dual variables \(\mathbf{y}_{j}\) by the column sparsity \(\Delta\). The procedure is summarized in Algorithm 1, in which MinCostKnapsackPrimalDual is Algorithm 2. ``` Input: \((\mathcal{F},U,\mathbf{r},\mathbf{c},\mathbf{A})\) for all users \(j\in U\) independently do \(X_{j},\mathbf{y}_{j}^{\prime}\leftarrow\) MinCostKnapsackPrimalDual\((\mathcal{F},\mathbf{c},r_{j},\mathbf{a}_{j})\) \(\mathbf{y}_{j}\leftarrow\mathbf{y}_{j}^{\prime}/\Delta\) \(X\leftarrow\cup_{j\in U}X_{j}\) \(\xi(j,U)\leftarrow\sum_{S\subseteq\mathcal{F}}r_{j}^{S}y_{j}^{S}\) for all \(j\in U\) return \(X,\xi\) ``` **Algorithm 1**A cross-monotonic primal-dual algorithm Carnes and Shmoys [4] develop and analyze a primal-dual algorithm for the minimum-cost knapsack problem based on the KC-LP formulation. Fix a single user \(j\in\mathcal{U}\) and let \(\mathbf{a}_{j}=(a_{1j},\ldots,a_{nj})\) denote their contributions. The user starts with an all-zero dual solution \(\mathbf{y}_{j}\), and an empty selection \(X=\emptyset\). While the residual demand \(r_{j}^{X}\) is positive, they increase the dual variable \(y_{j}^{X}\). Eventually, some constraint \(\sum_{S\subseteq\mathcal{F}\setminus\{i\}}a_{ij}^{S}y_{j}^{S}\leq c_{i}\) becomes tight for a facility \(i\). This facility is added to the selection \(X\), and the process repeats. This procedure is summarized in Algorithm 2. The algorithm returns a selection of facilities \(X_{j}\) and a feasible KC-LP dual solution. Critically, the cost of \(X_{j}\) is at most twice the KC-LP dual objective under \(\mathbf{y}_{j}\). Theorem 4.2 (Carnes and Shmoys [4]): _Let \(X_{j}\subseteq\mathcal{F}\) and \(\mathbf{y}_{j}\) be a selection of facilities, and the corresponding dual solution returned by Algorithm 2. These satisfy the inequality_ \[\sum_{i\in X_{j}}c_{i}\leq 2\sum_{S\subseteq\mathcal{F}}r_{j}^{S}y_{j}^{S}.\] This result is used in two parts of our proof. First, we use the dual feasibility of the individual dual variables \(\mathbf{y}_{j}\) to construct a feasible dual solution to the master CIP, which gives us the core property via Theorem 3.1. Secondly, the approximation ratio is used to derive our cost-recovery ratio. ``` Input: \((\mathcal{F},\mathbf{c},r_{j},\mathbf{a}_{j})\) \(X,\mathbf{y}_{j}\leftarrow\emptyset,\mathbf{0}\) while \(r_{j}^{X}>0\) do Increase \(y_{j}^{X}\) until for some \(i\in\mathcal{F}\backslash X\): \[\sum_{S\subseteq\mathcal{F}\setminus\{i\}}a_{ij}^{S}y_{j}^{S}=c_{i}\] (8) \(X\gets X\cup\{i\}\) return \(X\), \(\mathbf{y}_{j}\) ``` **Algorithm 2**Minimum-Cost Knapsack Primal-Dual Algorithm ### Proof of the theorem The proof of our result is remarkably simple.
We only need to argue for cross-monotonicity, the core property, and cost recovery. We argue for cross-monotonicity first. Clearly, the dual variables \(\mathbf{y}_{j}^{\prime}\) of each user are entirely independent of the other users. Meanwhile, the maximum number of users in \(U\) served by any facility, \(\Delta\), is monotonically increasing in the size of \(U\), so the dual variables \(\mathbf{y}_{j}=\mathbf{y}_{j}^{\prime}/\Delta\) are monotonically decreasing in \(U\), as are the induced cost-shares. The core property is easy to prove using dual feasibility and our Theorem 3.1. Consider some selected facility \(i\in X\). Then \[\sum_{j\in U}\sum_{S\subseteq F\backslash\{i\}}a_{ij}^{S}y_{j}^{S}=\frac{1}{\Delta}\sum_{\{j\in U:a_{ij}>0\}}\left(\sum_{S\subseteq F\backslash\{i\}}a_{ij}^{S}y_{j}^{\prime S}\right)\leq\frac{1}{\Delta}\sum_{\{j\in U:a_{ij}>0\}}c_{i}\leq c_{i}.\] The equality follows from dropping users not served by facility \(i\), together with the definition \(\mathbf{y}_{j}=\mathbf{y}_{j}^{\prime}/\Delta\). The first inequality uses the fact that the dual variables are individually feasible for the min-cost knapsack LP of each user \(j\). The final inequality follows from the definition of \(\Delta\). The above shows that the dual variables \((\mathbf{y}_{j})_{j\in U}\) are feasible for the CIP induced by users \(U\), and so Theorem 3.1 implies that the core property is satisfied by the accompanying cost shares \((\xi(j,U))_{j\in U}\). Finally, the cost-recovery ratio follows from the approximation ratio of the minimum-cost knapsack algorithm. In particular, observe that \[\sum_{i\in X}c_{i}\leq\sum_{j\in U}\sum_{i\in X_{j}}c_{i}\leq 2\sum_{j\in U}\sum_{S\subseteq\mathcal{F}}r_{j}^{S}y_{j}^{\prime S}=2\Delta\sum_{j\in U}\sum_{S\subseteq\mathcal{F}}r_{j}^{S}y_{j}^{S}.\] The first inequality is obvious; the second follows from applying Theorem 4.2 to each user \(j\) in \(U\) individually. The last equality follows from the definition of \(\mathbf{y}_{j}\). This proves that the cost-recovery ratio is at least \(1/(2\Delta)\), as claimed. Finally, the proof suggests there is potential for improvement. In fact, whenever the contributions \(\mathbf{A}\) are binary, the min-cost knapsack algorithm is exact, in which case the cost-recovery ratio is \(1/\Delta\)[24]. Moreover, if we know that each selected facility in \(X\) is always selected by at least two users, it also follows that the cost-recovery ratio is a factor of 2 larger, i.e., \(1/\Delta\). On the other hand, no group strategyproof mechanism can recover more than \(1/\Delta\) of the cost in general [16]. Whether the \(1/(2\Delta)\) cost-recovery ratio is tight when contributions are non-binary remains an open problem [24].
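As an illustration of the mechanism described in this section, the following sketch implements the per-user primal-dual procedure (Algorithm 2) and assembles the cross-monotonic cost shares of Algorithm 1 for a small CIP instance. This is our own minimal reconstruction, not the authors' code: it assumes a feasible instance with non-negative data, and all function and variable names are our own.

```python
# Minimal illustrative sketch (ours, not the authors' code) of Algorithm 2 and
# the cost shares assembled in Algorithm 1.  Facilities are indexed 0..n-1 with
# costs c[i]; a[i, j] is the contribution of facility i to user j; r[j] is the
# requirement of user j.  Only the dual variables actually raised are stored.
import numpy as np

def residual(r_j, a_j, S):
    """Residual demand r_j^S = max(r_j - sum_{i in S} a_ij, 0)."""
    return max(r_j - sum(a_j[i] for i in S), 0.0)

def truncated(a_j, r_S):
    """Truncated contributions a_ij^S = min(a_ij, r_j^S)."""
    return np.minimum(a_j, r_S)

def min_cost_knapsack_primal_dual(c, a_j, r_j):
    """Algorithm 2 for a single user: returns the selection X_j and a dict of
    raised dual values y_j^S keyed by the selection S (a frozenset)."""
    n = len(c)
    X, y = set(), {}
    load = np.zeros(n)                 # accumulated left-hand side per facility
    while residual(r_j, a_j, X) > 0:
        r_S = residual(r_j, a_j, X)
        a_S = truncated(a_j, r_S)
        best_i, best_delta = None, np.inf
        for i in range(n):             # smallest increase making a constraint tight
            if i not in X and a_S[i] > 0:
                delta = (c[i] - load[i]) / a_S[i]
                if delta < best_delta:
                    best_i, best_delta = i, delta
        y[frozenset(X)] = best_delta   # raise y_j^X by exactly this amount
        for i in range(n):
            if i not in X:
                load[i] += a_S[i] * best_delta
        X.add(best_i)
    return X, y

def cross_monotonic_mechanism(c, a, r):
    """Algorithm 1: per-user primal-dual runs, duals scaled by the column
    sparsity Delta, cost shares xi(j, U) = sum_S r_j^S * y_j^S."""
    n, m = a.shape
    Delta = max(int((a[i] > 0).sum()) for i in range(n))
    X, xi = set(), np.zeros(m)
    for j in range(m):
        X_j, y_j = min_cost_knapsack_primal_dual(c, a[:, j], r[j])
        X |= X_j
        xi[j] = sum(residual(r[j], a[:, j], S) * v for S, v in y_j.items()) / Delta
    return X, xi

# toy instance: 3 facilities, 2 users
c = np.array([3.0, 2.0, 2.5])
a = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 2.0]])
r = np.array([2.0, 2.0])
print(cross_monotonic_mechanism(c, a, r))
```

Only the dual variables that the algorithm actually raises are stored, keyed by the selection at which they were raised; all other components of the exponential-size dual remain implicitly zero.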
2309.08294
Speech-dependent Modeling of Own Voice Transfer Characteristics for In-ear Microphones in Hearables
Many hearables contain an in-ear microphone, which may be used to capture the own voice of its user in noisy environments. Since the in-ear microphone mostly records body-conducted speech due to ear canal occlusion, it suffers from band-limitation effects while only capturing a limited amount of external noise. To enhance the quality of the in-ear microphone signal using algorithms aiming at joint bandwidth extension, equalization, and noise reduction, it is desirable to have an accurate model of the own voice transfer characteristics between the entrance of the ear canal and the in-ear microphone. Such a model can be used, e.g., to simulate a large amount of in-ear recordings to train supervised learning-based algorithms. Since previous research on ear canal occlusion suggests that own voice transfer characteristics depend on speech content, in this contribution we propose a speech-dependent system identification model based on phoneme recognition. We assess the accuracy of simulating own voice speech by speech-dependent and speech-independent modeling and investigate how well modeling approaches are able to generalize to different talkers. Simulation results show that using the proposed speech-dependent model is preferable for simulating in-ear recordings compared to using a speech-independent model.
Mattes Ohlenbusch, Christian Rollwage, Simon Doclo
2023-09-15T10:19:06Z
http://arxiv.org/abs/2309.08294v1
# Speech-dependent modeling of own voice transfer characteristics for in-ear microphones in hearables ###### Abstract Many hearables contain an in-ear microphone, which may be used to capture the own voice of its user in noisy environments. Since the in-ear microphone mostly records body-conducted speech due to ear canal occlusion, it suffers from band-limitation effects while only capturing a limited amount of external noise. To enhance the quality of the in-ear microphone signal using algorithms aiming at joint bandwidth extension, equalization, and noise reduction, it is desirable to have an accurate model of the own voice transfer characteristics between the entrance of the ear canal and the in-ear microphone. Such a model can be used, e.g., to simulate a large amount of in-ear recordings to train supervised learning-based algorithms. Since previous research on ear canal occlusion suggests that own voice transfer characteristics depend on speech content, in this contribution we propose a speech-dependent system identification model based on phoneme recognition. We assess the accuracy of simulating own voice speech by speech-dependent and speech-independent modeling and investigate how well modeling approaches are able to generalize to different talkers. Simulation results show that using the proposed speech-dependent model is preferable for simulating in-ear recordings compared to using a speech-independent model. **Keywords:**_hearables, own voice, system identification, acoustic modeling, relative transfer function_ \(10^{\text{th}}\) Convention of the European Acoustics Association Turin, Italy \({}^{11}-15^{\text{th}}\) September 2023 \({}^{*}\) Politecnico di Torino ## 1 Introduction Hearables, i.e. smart earbuds containing a loudspeaker and one or more microphones, are often used in everyday noisy environments. Although hearables are frequently used to enhance the voice of a person the hearable user is communicating with in a noisy environment, the scenario we are considering in this paper is to enhance the own voice of the user while talking in a noisy environment (e.g., to be transmitted via a wireless link to a mobile phone or another hearable). In-ear microphones may offer benefits for own voice pickup since external noise is attenuated due to ear canal occlusion. However, own voice recorded inside the occluded ear suffers from amplification below 1 kHz and heavy attenuation above 2 kHz, leading to a limited bandwidth [1]. The occlusion effect is affected by the ratio between the airborne and body-conducted component of own voice, which depends on the phonemes uttered by the user [2, 3]. Body conduction from different places of excitation and mouth movements during articulation likely influence this transmission behavior, which we refer to as transfer characteristics in this work. Additionally, body-produced noise (e.g., breathing sounds, heartbeats) may be recorded by an in-ear microphone [4]. To enhance the quality of the in-ear microphone signal, several approaches have been proposed aiming at bandwidth extension, equalization and/or noise reduction, either based on signal processing [1] or supervised learning [5]. For supervised learning-based approaches large amounts of training data are typically required, which may be hard to obtain for realistic in-ear recordings. Similar requirements have been addressed in supervised learning-based speech enhancement approaches using bone-conduction sensors. 
In [6], it has been proposed to use a device-specific corpus of recordings to train a deep neural network (DNN) combining bone- and air-conducted speech recordings. In [7], a semi-supervised training scheme has been utilized to jointly train a DNN simulating bone-conduction and a multi-modal enhancement DNN together. In [8], it has been proposed to convert airborne to bone-conducted speech using a DNN that accounts for individual differences between talkers using a speaker identification system. In previous work, we have proposed to estimate the transfer characteristics between the entrance of the ear canal and the in-ear microphone using a time-invariant linear model to simulate short segments of in-ear speech for data augmentation in DNN training [5]. In this paper, we propose to model own voice transfer characteristics using a phoneme-dependent system identification approach, where for each phoneme a different linear filter is estimated. The proposed approach can be utilized to simulate speech at an in-ear microphone from regular speech recordings. ## 2 Signal Model Fig. 1 depicts the considered scenario, where a talker is wearing a hearable device equipped with an in-ear microphone and an outer microphone, denoted by subscripts \(\mathrm{i}\) and \(\mathrm{o}\), respectively. In the short time Fourier transform (STFT) domain, the recorded own voice signal at the outer microphone of talker \(a\) is denoted by \(Y_{\mathrm{o}}^{a}(k,l)\) with \(k\) the frequency bin index and \(l\) the time frame index. The signal recorded at the in-ear microphone \(Y_{\mathrm{i}}^{a}\) is assumed to contain an in-ear own voice speech component \(S_{\mathrm{i}}^{a}\) and a body-noise component \(V_{\mathrm{i}}^{a}\), i.e. \[Y_{\mathrm{i}}^{a}(k,l)=S_{\mathrm{i}}^{a}(k,l)+V_{\mathrm{i}}^{a}(k,l) \tag{1}\] where \(S_{\mathrm{i}}^{a}\) and \(V_{\mathrm{i}}^{a}\) are assumed to be uncorrelated. The own voice speech at the in-ear microphone \(S_{\mathrm{i}}^{a}(k,l)\) is related to the own voice speech at the outer microphone \(Y_{\mathrm{o}}^{a}(k,l)\) by the transfer characteristics \(T^{a}\{\cdot\}\), i.e. \[S_{\mathrm{i}}^{a}(k,l)=T^{a}\{Y_{\mathrm{o}}^{a}(k,l)\}. \tag{2}\] We assume these transfer characteristics to be linear, time-varying and individual. The goal in this paper is to obtain an accurate transfer characteristics model, which is robust against talker mismatch. ## 3 Modeling Own Voice Transfer Characteristics In this section, we present two system identification-based approaches to model the transfer characteristics (see Fig. 2). During identification, recordings of talker \(a\) are used to obtain the model \(\hat{T}^{a}\). During simulation, this model is used to obtain an estimate \(\hat{S}_{\mathrm{i}}^{b}(k,l)\) of the own voice of talker \(b\). If \(a=b\), the model is applied to the same talker as in identification. If \(a\neq b\), the model is applied to a different talker, i.e. talker mismatch is present. In Section 3.1, we present a time-invariant speech-independent model; in Section 3.2 we propose a time-varying speech-dependent model. ### Speech-independent model Assuming a linear time-invariant filter, the transfer characteristics \(T^{a}\) of the speech-independent model are modeled as a relative transfer function (RTF) between the outer microphone and the in-ear microphone.
Figure 1: The signal model of own voice transfer characteristics considered in this paper.

Figure 2: Overview of the identification and simulation steps for modeling own voice transfer characteristics.

Since the outer microphone signal does not contain any additive noise, the RTF \(\hat{H}^{a}(k)\) can be estimated using the well-known least squares approach [9], i.e. \[\hat{H}^{a}(k)=\frac{\sum_{l}Y_{\mathrm{o}}^{a,*}(k,l)\cdot Y_{\mathrm{i}}^{a}(k,l)}{\sum_{l}|Y_{\mathrm{o}}^{a}(k,l)|^{2}} \tag{3}\] where \({}^{*}\) denotes complex conjugation. For simulation, own voice speech of talker \(b\) recorded at the outer microphone is filtered in the STFT domain as \[\hat{S}_{\mathrm{i}}^{b}(k,l)=\hat{H}^{a}(k)\cdot Y_{\mathrm{o}}^{b}(k,l). \tag{4}\] ### Speech-dependent model Since own voice transfer characteristics likely depend on speech content, we propose a time-varying speech-dependent model for the transfer characteristics \(T^{a}\). Using a phoneme recognition system, we first obtain a frame-wise phoneme annotation \(p(l)\in\{1,\ldots,P\}\) of a recorded speech signal, with \(P\) possible phoneme classes. For each unique phoneme \(p^{\prime}\), an RTF is then estimated over all detected occurrences of this phoneme within the identification utterances of talker \(a\), i.e. \[\hat{H}_{p^{\prime}}^{a}(k)=\frac{\sum_{p(l)=p^{\prime}}Y_{\mathrm{o}}^{a,*}(k,l)\cdot Y_{\mathrm{i}}^{a}(k,l)}{\sum_{p(l)=p^{\prime}}|Y_{\mathrm{o}}^{a}(k,l)|^{2}}. \tag{5}\] In total, the speech-dependent model hence consists of a database of \(P\) RTFs. For simulation, the phoneme sequence \(p^{b}(l)\) is first determined on the own voice speech of talker \(b\) recorded at the outer microphone. For each frame, the corresponding RTF \(\hat{H}_{p^{b}(l)}^{a}(k)\) is then used to filter this signal in the STFT domain, i.e. \[\hat{S}_{\mathrm{i}}^{b}(k,l)=\hat{H}_{p^{b}(l)}^{a}(k)\cdot Y_{\mathrm{o}}^{b}(k,l). \tag{6}\] ## 4 Technical evaluation In this section, the previously described transfer characteristics models are evaluated in terms of their accuracy in predicting own voice signals at an in-ear microphone. ### Evaluation data and setup A dataset of own voice speech from 14 native German talkers with approximately 30 minutes of speech in total is utilized in the evaluation. The hearable device used for recording is the closed-vent variant of the Hearpiece [10]. Approximately 23 utterances per talker were recorded, resulting in a total of 329 utterances. For the speech-dependent model, an in-house proprietary phoneme recognition system with \(P=62\) phoneme classes was utilized, which was trained on German speech. Both the speech-independent and the speech-dependent model were estimated on all utterances per talker. For simulation, in-ear speech was predicted for each utterance of 5 s length. No voice activity detection was employed, so that utterances may contain small pauses between words. The estimation accuracy is computed in two conditions: estimating speech of the same talker (\(a=b\)) for each utterance of each talker, and estimating speech with talker mismatch (\(a\neq b\)), where each utterance of each talker is simulated using the model of a randomly assigned different talker. The simulations were carried out at a sampling frequency of \(5\) kHz to account for the reduced bandwidth of the in-ear speech. An STFT framework with a frame length of \(K=256\) (corresponding to \(51.2\) ms), a frame overlap of 50 % and a square-root Hann window for analysis and synthesis was used.
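As a concrete illustration of the identification and simulation steps of Section 3 within this STFT framework, a minimal sketch is given below. It is our own illustrative code rather than the evaluation pipeline used here; the array shapes, function names, and the handling of unseen phonemes are assumptions.

```python
# Minimal illustrative sketch (ours, not the evaluation code used here) of the
# identification and simulation steps of Section 3.  Y_o and Y_i are STFTs of
# the outer and in-ear microphone signals with shape (K, L); p is a frame-wise
# integer phoneme label sequence of length L.
import numpy as np

def estimate_rtf(Y_o, Y_i, eps=1e-12):
    """Speech-independent RTF estimate, Eq. (3)."""
    num = np.sum(np.conj(Y_o) * Y_i, axis=1)
    den = np.sum(np.abs(Y_o) ** 2, axis=1) + eps
    return num / den                           # shape (K,)

def estimate_phoneme_rtfs(Y_o, Y_i, p, num_phonemes):
    """Speech-dependent model, Eq. (5): one RTF per phoneme class."""
    H = np.zeros((num_phonemes, Y_o.shape[0]), dtype=complex)
    for ph in range(num_phonemes):
        frames = np.flatnonzero(p == ph)
        if frames.size:                        # unseen phonemes keep a zero RTF
            H[ph] = estimate_rtf(Y_o[:, frames], Y_i[:, frames])
    return H

def simulate_in_ear(Y_o, H, p=None):
    """Eq. (4) with a single RTF, or Eq. (6) with per-phoneme RTFs and a
    frame-wise phoneme sequence recognized on the outer microphone signal."""
    if p is None:
        return H[:, None] * Y_o                # speech-independent simulation
    return H[p].T * Y_o                        # speech-dependent simulation
```

In practice, the STFT matrices would come from the outer and in-ear recordings of the identification talker, and the frame-wise labels from the phoneme recognition system described above.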
Since body-conducted speech travels faster than airborne speech, a prediction delay of 11 samples was applied to the in-ear signals prior to identification and simulation to enable causal modeling. As the evaluation metric, we utilize the log-spectral distance (LSD) [11] between the real and simulated in-ear recordings, where a lower value indicates a more accurate estimate. ### Results and discussion

Figure 3: LSD results for the same-talker condition.

The results for the same-talker condition are shown in Fig. 3. It can be observed that the in-ear speech signals can be predicted much better using the time-varying speech-dependent model than using the time-invariant speech-independent model. Since in-ear recordings are assumed to also contain body noise uncorrelated with the outer microphone signals, remaining modeling errors are expected because the models do not account for body noise. The results for the talker mismatch condition are shown in Fig. 4. For both models, the performance is worse than in the same-talker condition; especially for the speech-independent model, there is a larger spread of LSD scores in the talker mismatch condition than in the same-talker condition. Nevertheless, also for the talker mismatch condition the speech-dependent model clearly outperforms the speech-independent model. ## 5 Conclusion In this paper, two approaches to model own voice transfer characteristics in hearables have been investigated. Results indicate that speech-dependent modeling is beneficial compared to speech-independent modeling. ## 6 Acknowledgments The Oldenburg Branch for Hearing, Speech and Audio Technology HSA is funded in the program »Vorab« by the Lower Saxony Ministry of Science and Culture (MWK) and the Volkswagen Foundation for its further development. Part of this work was funded by the German Ministry of Science and Education BMBF FK 16SV8811. This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project ID 352015383 (SFB 1330 C1).
2306.17531
Single and double quantum transitions in spin-mixed states under photo-excitation
Electronic spins associated with the Nitrogen-Vacancy (NV) center in diamond offer an opportunity to study spin-related phenomena with extremely high sensitivity owing to their high degree of optical polarization. Here, we study both single- and double-quantum transitions (SQT and DQT) in NV centers between spin-mixed states, which arise from magnetic fields that are non-collinear to the NV axis. We demonstrate the amplification of the ESR signal from both these types of transition under laser illumination. We obtain hyperfine-resolved X-band ESR signal as a function of both excitation laser power and misalignment of static magnetic field with the NV axis. This combined with our analysis using a seven-level model that incorporates thermal polarization and double quantum relaxation allows us to comprehensively analyze the polarization of NV spins under off-axis fields. Such detailed understanding of spin-mixed states in NV centers under photo-excitation can help greatly in realizing NV-diamond platform's potential in sensing correlated magnets and biological samples, as well as other emerging applications, such as masing and nuclear hyperpolarization.
Anand Patel, Zainab Chowdhry, Anil Prabhakar, A. Rathi, V. P. Bhallamudi
2023-06-30T10:42:54Z
http://arxiv.org/abs/2306.17531v1
# Single and double quantum transitions in spin-mixed states under photo-excitation ###### Abstract Electronic spins associated with the Nitrogen-Vacancy (NV) center in diamond offer an opportunity to study spin-related phenomena with extremely high sensitivity owing to their high degree of optical polarization. Here, we study both single- and double-quantum transitions (SQT and DQT) in NV centers between spin-mixed states, which arise from magnetic fields that are non-collinear to the NV axis. We demonstrate the amplification of the ESR signal from both these types of transition under laser illumination. We obtain hyperfine-resolved X-band ESR signal as a function of both excitation laser power and misalignment of static magnetic field with the NV axis. This combined with our analysis using a seven-level model that incorporates thermal polarization and double quantum relaxation allows us to comprehensively analyze the polarization of NV spins under off-axis fields. Such detailed understanding of spin-mixed states in NV centers under photo-excitation can help greatly in realizing NV-diamond platform's potential in sensing correlated magnets and biological samples, as well as other emerging applications, such as masing and nuclear hyperpolarization. ## I Introduction The negatively charged NV center in diamond [1] is a magneto-optically active defect that possesses an electronic spin-triplet (S = 1) with an exceptionally long spin lifetime [2]. A combination of remarkable properties make NV centers extremely promising for a wide range of applications such as magnetic sensing and imaging [3; 4; 5], quantum communication [6; 7], lasing [8], masing [9] and nuclear hyperpolarization [10; 11; 12]. A key feature enabling these is the optical polarization of the NV spins, which can be orders of magnitude greater than thermal polarization even at room temperature. While a majority of work done with NV centers has used a static field collinear to NV-axis to maximize the signal, to truly develop the NV spin as a universal magnetic sensor, we should be able to apply off-axis field, _i.e._, fields not collinear with the NV axis. Expanding the scope of NV research with off-axis fields opens up new possibilities for sensing magnetic fields from various directions relative to the NV axis, facilitating three-dimensional mapping of magnetic field distributions. This is particularly significant in the study of correlated electron systems [13; 14]. Similarly, biosensing with nanodiamonds [15; 16; 5; 17] may have off-axis fields given the challenges in controlling the orientation of nanodiamonds in biological samples. Understanding the dependence of polarization on field orientation may also be useful for NV-based amplifier or maser [9; 18], aiding in tunability and performance comprehension. An off-axis (static) field also enables double (\(\Delta m=2\)) quantum transitions (DQT) to be driven by ac magnetic/electric fields [19], which is otherwise forbidden in aligned magnetic fields due to magnetic dipole selection rules. DQT offer increased sensitivity compared to conventional single quantum (\(\Delta m=1\)) transitions (SQT). Furthermore, DQT exhibits more robustness against field misalignment (off-axis noise) [20] and strain effect [21]. DQT, being magnetic-dipole-forbidden, can securely probe the magnetic and electric noise [22]. 
In the above context, studying the spin polarization of DQT under laser illumination while accounting for the extent of magnetic field misalignment provides an additional degree of freedom for NV-based diamond magnetometry. Optically induced spin polarization of NV has been studied in literature [9; 18; 23] using conventional inductively detected electron spin resonance (ESR) experiments. Such measurement can specifically focus on the effect of the off-axis fields on the polarization process, without the complication of changes in optical spin-readout due to off-axis fields, as in an optically detected magnetic resonance experiment. Meanwhile, a comprehensive study analyzing the effect of both the knobs, off-axis field and laser intensity, is still lacking, especially for samples with low NV concentration (\(<0.5\) ppm). Such samples, with extended spin lifetimes, are desirable for wide-field magnetic imaging applications. In this work, we perform ESR spectroscopy on such a sample, studying the effects of varying laser intensity and magnetic field misalignment with the NV-axis. Our observations include both single and double quantum transitions. The experimental findings are interpreted within the theoretical framework of NV spin mixing in its seven-level electronic structure. ## II Spin mixing in NV centers NV center is an atom-like defect in diamond lattice (see figure 1a) with the property of getting spin polarized by optical pumping with green laser. This occurs because of its seven-level electronic structure (see figure 1b) where both the ground and excited states are spin triplets, and there is an additional singlet state (\(\ket{7}\)). The singlet state enables a non-radiative, spin-selective inter-system crossing, which play the central role in NV spin dynamics under laser illumination [12; 24; 25]. When optically excited, the \(\ket{\mathrm{m_{s}}=0}\) preferentially decay in a spin-conserving radiative process, whereas \(\ket{\mathrm{m_{s}}=\pm 1}\) also selectively populates \(\ket{\mathrm{m_{s}}=0}\) ground state via singlet state (\(\ket{7}\)). This leads to the polarization of NV spins into the \(\ket{\mathrm{m_{s}}=0}\) sublevel, even at room temperature, which is significantly higher than the polarization achievable through thermal equilibrium. The ground-state triplet, whose spin population/polarization will be measured in the experiment, is described by following Hamiltonian: \[\mathcal{H}_{\mathrm{gs}}=hD_{\mathrm{gs}}(\hat{S_{z}}^{2}+\frac{2}{3})+g\mu_{ \mathrm{B}}\mathbf{B}\cdot\mathbf{\hat{S}}+\mathbf{\hat{S}}\mathbf{\hat{A}} \mathbf{\hat{I}} \tag{1}\] where, \(h\) is the Plank's constant, \(D_{\mathrm{gs}}=2.87\) GHz is the zero-field splitting, \(g\approx 2\) is the Land\(\acute{e}\) g-factor and \(\mu_{\mathrm{B}}\) is the Bohr magneton. \(\mathbf{B}\) and \(\mathbf{\hat{S}}\) are the applied magnetic field and spin vectors, respectively. The excited state is also described by a similar Hamiltonian (\(\mathcal{H}_{\mathrm{es}}\)) with \(D_{\mathrm{es}}=1.42\) GHz. The zero-field term accounts for the magnetic anisotropy in the system. This defines the spin quantization axis to be along the NV-axis (see figure 1a) which lies along one of the four \(\bra{111}\) crystallographic directions in diamond lattice. 
The third term represents the hyperfine interaction between the NV electron and the \({}^{14}\)N nuclear spin. \(\mathbf{A}\) and \(\mathbf{\hat{I}}\) are the nuclear hyperfine tensor and nuclear spin vector, respectively. We have ignored other terms in the Hamiltonian that are not important for the current study.

Figure 1: **Mixing of Nitrogen Vacancy (NV) spin sub-levels in the seven-level (kinetic) model used in this study.** (a) Atomic configuration of the NV center with a substitutional nitrogen (N) atom adjacent to a vacancy (V) defect in the diamond lattice. The line joining the two, called the NV-axis, is used as the reference for the degree of NV defect misalignment (polar angle \(\theta\)) with the applied magnetic field \(\mathbf{B}\). Inset shows the real-time (red) fluorescence image of the NV-containing diamond single crystal under (green) laser illumination. (b) Schematic seven-level electronic structure of the NV center, showing optical excitation, spin-conserving radiative decay (solid lines) and nonradiative, spin-selective intersystem crossing (dashed lines) via the metastable singlet state \(\ket{7}\). The labels with a superscript \({}^{0}\) represent the seven zero-field eigen states for each spin-pure sub-level (\(m_{s}=0,\pm 1\)). For the simplest case of aligned magnetic fields, \(\mathbf{B}\)(B, \(\theta=0^{\circ}\)), a Zeeman splitting of the ground state (degenerate) spin sub-levels is also shown, with possible SQT (violet) and DQT (orange dashed lines) between them under (resonant) microwave excitation. DQT is only weakly allowed in the \(\theta=0^{\circ}\) case under local strain effects. (c) The calculated spin-mixing of (ground-state) spin sub-levels as a function of \(\mathbf{B}\)(B, \(\theta\)). Green, blue, and red represent the \(\ket{1^{0}},\ket{2^{0}},\ket{3^{0}}\) character, i.e. the \(|\alpha_{i1}|^{2},|\alpha_{i2}|^{2},|\alpha_{i3}|^{2}\) coefficients (see equation (2)), respectively, where \(\ket{i}\) is the given state.

In the absence of an external magnetic field, the NV center has spin-pure states with \(\mathrm{m_{s}}=0\) (\(\ket{1^{0}},\ket{4^{0}}\)), \(-1\) (\(\ket{2^{0}},\ket{5^{0}}\)) and \(+1\) (\(\ket{3^{0}},\ket{6^{0}}\)) characters (see figure 1b). When the magnetic field is precisely applied along the NV axis, specifically for \(\mathbf{B}\)(B, \(\theta=0^{\mathrm{o}}\)), the spin purity remains intact, except at the level anti-crossings at B = 102.5 mT, where the spin character gets swapped within the levels crossing over. Any off-axis field, \(\mathbf{B}\)(B, \(\theta\neq 0^{\mathrm{o}}\)), will result in mixing of the spin sub-levels. This spin mixing can be modelled by expressing each of the eigen-states as a linear combination of all the zero-field (spin-pure) eigen-states (see figure 1b), as follows [26]. \[\ket{i}=\sum_{j=1}^{7}\alpha_{ij}(\mathbf{B})\ket{j^{0}} \tag{2}\] where \(\alpha_{ij}(\mathbf{B})\) are complex coefficients. These can be numerically computed for both the ground and excited states using \(\mathcal{H}_{\mathrm{gs}}\) and \(\mathcal{H}_{\mathrm{es}}\), respectively. Figure 1c represents the evolution of the spin-pure states \(i\in\{1,2,3\}\) at \(\theta=0^{\mathrm{o}}\) into spin-mixed states in off-axis magnetic fields, \(\mathbf{B}\)(B, \(\theta\neq 0^{\mathrm{o}}\)), as a function of both the field magnitude (B) and its orientation (\(\theta\)). The color indicates the proportion of the zero-field states. The strong spin-mixing, especially at the fields and angles explored in this work, is clearly seen.
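To make the spin-mixing calculation of equation (2) concrete, the following minimal sketch (ours, not the authors' code) diagonalizes the electronic part of the ground-state Hamiltonian of equation (1) for the spin triplet alone; the hyperfine term and constant energy offsets are neglected since they do not affect the eigenvectors, and all numerical constants and names below are our own.

```python
# Minimal illustrative sketch (ours) of the spin mixing of Eq. (2): diagonalize
# the electronic part of Eq. (1) for the ground-state triplet and read off the
# |alpha_ij|^2 coefficients plotted in Fig. 1c.
import numpy as np

D_GS = 2.87e9              # ground-state zero-field splitting, Hz
GAMMA = 28.0e9             # g * mu_B / h, approximately 28 GHz per tesla

# spin-1 operators in the |m_s = +1, 0, -1> basis
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]) / np.sqrt(2)

def ground_state_mixing(B, theta):
    """Eigenfrequencies (Hz) and |alpha_ij|^2 for a field of magnitude B (tesla)
    at polar angle theta (radians) from the NV axis (field in the x-z plane)."""
    H = D_GS * (Sz @ Sz) + GAMMA * B * (np.cos(theta) * Sz + np.sin(theta) * Sx)
    freqs, vecs = np.linalg.eigh(H)            # columns are the eigenstates
    # the zero-field eigenbasis is the m_s basis itself, so squared eigenvector
    # components are the mixing coefficients of Eq. (2) for the triplet
    return freqs, (np.abs(vecs) ** 2).T        # row i corresponds to state |i>

# example: strong mixing at B = 300 mT, theta = 20 degrees off the NV axis
freqs, mixing = ground_state_mixing(0.3, np.deg2rad(20.0))
print(np.round(mixing, 3))
```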
## III Results Figure 2a depicts the schematic of our experiment utilizing a commercial ESR spectrometer, featuring the capability of photoexciting NV spins during measurements. We measure the ESR spectra of a diamond sample containing NV centers, varying their orientation (\(\theta\)) relative to the applied static magnetic field and laser illumination power. More details of the sample and experimental procedure are provided in Section V: Methods. Figure 2b displays the differential ESR spectra (top panel) acquired for the \(\ket{1}\leftrightarrow\ket{2}\) SQT for \(\theta=0^{\mathrm{o}}\) with the laser illumination on and off. The spectra are integrated, and a background is removed (bottom panel) prior to further analysis (see supplementary information for baseline correction in figure S1). The three peaks correspond to the hyperfine levels of this transition resulting from the coupling between the NV electron spin and the \({}^{14}\)N nuclei spin (I = 1). The linewidth analysis of the hyperfine peaks gives an upper bound on \(T_{2}^{*}\) of \(\simeq 5.6\)\(\mu\)s, which is in close agreement with value provided by the sample manufacturer. The ESR spectrum obtained in the dark condition (with the laser turned off) represents the Boltzmann distribution of the spin population in thermal equilibrium. Conversely, the second spectrum acquired under laser illumination at a low intensity of \(\sim 1\) mW/mm\({}^{2}\) displays a notable increase in the signal, which signifies the polarization of (ground state) spins driven by photo-excitation. This optically-induced polarization also results in inversion of the peaks [9], reflecting a population inversion (_i.e._ higher spin population in the higher energy spin sublevel) as expected for the \(\ket{1}\leftrightarrow\ket{2}\) transition above a field of 102.5 mT (see figure 1b). ESR results, along with the deduced degree of spin polarization (with comparison to theory), for this and another SQT (\(\ket{2}\leftrightarrow\ket{3}\)) for \(\theta=0^{\mathrm{o}}\) and in off-axis magnetic fields (enabling \(\ket{1}\leftrightarrow\ket{3}\) DQT also) are described below. Figure 2: **Schematic for ESR spectroscopy of NV spin-mixed states under photo-excitation.** (a) The (100) diamond crystal was oriented at \(45^{\circ}\) and glued to the quartz tube (yellow), which is inserted into a X-Band TE\({}_{011}\) cylindrical cavity (blue). It can then be rotated about its axis to achieve the desired degree of NV defect misalignment (\(\theta\)) with external \(\mathbf{B}\) (see Section V for details). The induced spin-mixed states is studied by applying (resonant) microwaves (in dashed lines) in dark and under laser illumination via the optical access of the cavity. (b) Representative ESR spectra demonstrating the inversion and amplification of NV single-quantum ESR (\(\ket{1}\leftrightarrow\ket{2}\)) signal under photo-excitation. ### Single Quantum ESR Spectroscopy In figure 3a, the ESR spectra are shown for the experimental configuration with \(\mathbf{B}\) pointing along one (\(\theta=0^{o}\)) of the four possible orientations of NV in diamond, under high-intensity laser illumination of 83 mW/mm\({}^{2}\). In such a case, the remaining NV centres in sample lies at \(\theta=70.5^{o}\) relative to \(\mathbf{B}\). The two outermost peaks in the ESR spectra represents the \(|1\rangle\leftrightarrow|2\rangle\) and \(|2\rangle\leftrightarrow|3\rangle\) SQTs for NVs at \(\theta=0^{o}\), while the inner pair of peaks corresponds to the SQTs at \(\theta=70.5^{o}\). 
In addition to \(|1\rangle\leftrightarrow|2\rangle\) SQT at \(\theta=0^{o}\), the \(|2\rangle\leftrightarrow|3\rangle\) SQT at \(\theta=70.5^{o}\) also show a population inversion (negative polarity), which is facilitated by spin-mixing in off-axis magnetic fields (see figure 1b) as explored in subsection III.2. A key feature to notice is the unequal amplitudes of the hyperfine peaks at \(\theta=0^{o}\), which indicates the capture of a notable polarization of the nuclear spins via the NV electron spin. Such a effect is minimal for hyperfine peaks under a much lower laser intensity of \(\sim\) 1 mW/mm\({}^{2}\) (in figure 2b). The picture becomes more complex in ESR signal measured for the case of \(\theta=70.5^{o}\). The \(|1\rangle\leftrightarrow|2\rangle\) and \(|2\rangle\leftrightarrow|3\rangle\) SQTs show a marked difference in intensity, which can be attributed to a significant change in microwave coupling for two SQTs at that angle (see figure S3 in the supplementary information). Furthermore, the hyperfine peaks for \(|2\rangle\leftrightarrow|3\rangle\) SQT tend to merge, while those for \(|1\rangle\leftrightarrow|2\rangle\) merge into a single broader peak. This may arise from the imperfect experimental configuration (\(1^{o}\) resolution of the setup) in orientation of NVs at the intended \(\theta=70.5^{o}\) in three (possible) different directions, considering the strong variation of the resonant field with field orientation at this angle. The electronic transition peak (for \(|1\rangle\leftrightarrow|2\rangle\)) is accompanied by two additional peaks (marked by *), which may originate from the forbidden double-quantum nuclear transitions or hyper-polarization of the nuclear spins. We next acquired the ESR data by rotating the sample for the desired degree (\(\theta\)) of misalignment of \(\mathbf{B}(\theta)\) with the NVs in diamond, as described in section V: Methods. The ESR data for both the SQTs at all \(\theta\) (including \(\theta=0^{o}\)) is presented in figure S2 in the supplementary information. The ESR spectra are fitted to multiple Lorentzian peaks, with the constraint of equal magnitude for the hyperfine peaks (e.g., for \(\theta=0^{o}\) and \(70.5^{o}\) in figure 3a). The (average) resonant fields obtained by fitting the data at all \(\theta\) are plotted in figure 3(b), which agree well with the theoretically expected values by solving the Hamiltonian for NV spins in ground state (equation (1)). This validates the Hamiltonian and subsequently spin-mixing calculations (see figure 1c), which provide the basis for the analysis of degree of spin polarization in subsection III.2 as follows. ### Spin Polarization We next focus on extracting the spin polarization, \(S_{z}^{ij}=(n_{i}-n_{j})/(\sum_{p=1}^{7}n_{p})\) between any two spin sublevels with eigen states \(i,j\), with \(n_{i}\) being the population of state \(i\). Using the ESR data, the degree of spin polarization under laser illumination (with optical pumping strength \(\beta\)) is calculated using the relation as follows. 
\[S_{z}^{ij}(\beta,\mathbf{B}(\theta))=\frac{A_{ij}(\beta,\mathbf{B}(\theta))}{ C_{ij}(\theta)}\cdot\frac{S_{z}^{12}(0,\mathbf{B}(0^{o}))}{A_{12}(0, \mathbf{B}(0^{o}))} \tag{3}\] For the specific transition (\(|i\rangle\leftrightarrow|j\rangle\)), \(A_{ij}\) represents the area under the peak (summed over all the hyperfine Figure 3: **Single-quantum ESR spectroscopy of NV spin-mixed states under laser illumination.****(a)** (Integrated) single-quantum ESR spectra of NV centers (in blue symbols) measured under the maximum laser intensity of 83 mW/mm\({}^{2}\) at selected degrees (\(\theta=0^{o}\) and \(\simeq 70.5^{o}\)) of NVs orientation relative to \(\mathbf{B}\) (see supplementary information for ESR spectra at all \(\theta\) in figure S2). The ESR intensity is analyzed by fitting to multiple Lorentzian peaks (in red lines), associated with hyperfine splitting of the electronic transition. (b) The extracted resonant fields for the SQTs at all \(\theta\) (in black and blue symbols) are compared to the theoretical values (in red lines) obtained by solving the ground state Hamiltonian (equation (1)). peaks) and \(C_{ij}\) accounts the change in microwave coupling at that \(\theta\) relative to that for \(|1\rangle\leftrightarrow|2\rangle\) SQT at \(\theta=0^{\circ}\), which can be found in Figure S3 in the supplementary information. \(S_{2}^{12}(0,\mathbf{B}(0^{\circ}))\) refers to the thermal spin polarization, which is calculated by quantifying the Boltzmann's distribution of NV spins in two sublevels (\(|1\rangle\), \(|2\rangle\)) using ESR data acquired for SQT between them at \(\theta=0^{\circ}\) in dark condition (\(\beta=0\)), as presented in figure 2b. The optically-induced spin polarization thus calculated, using equation (3), for two SQTs are plotted in figure 4 with varying \(\theta\) (at maximum laser intensity of 83 mW/mm\({}^{2}\)) as well as the laser intensity (at specific \(\theta=0^{\circ}\) and \(70.5^{\circ}\)). To make a quantitative understanding of optically-induced spin polarization in the framework of spin-mixing, we have also calculated it computationally using seven-level kinetic model (presented in figure 1b) by extracting the steady state populations for spin-mixed eigen states (\(i,j\)) from classical rate equation as follows. \[\frac{dn_{i}}{dt}=\sum_{j=1}^{7}(k_{ij}n_{j}-k_{ji}n_{i}) \tag{4}\] Here, the optical transition rates, \(k_{ij}(\mathbf{B})\) between spin-maxed states for the specific (\(|i\rangle\leftrightarrow|j\rangle\)) transition can be obtained using the following relation: \[k_{ij}(\mathbf{B})=\sum_{p=1}^{7}\sum_{q=1}^{7}\lvert\alpha_{ip}\rvert^{2} \lvert\alpha_{jq}\rvert^{2}k_{pq}^{0} \tag{5}\] The coefficients, \(\alpha_{ij}(\mathbf{B})\) are computed by solving the Hamiltonian (equation (1)) at resonant fields for the respective transition (see figure S4 in the supplementary information). Here \(k_{pq}^{0}\) are the zero-field optical transition rates except \(k_{21,12,31,13}^{0}=1/2T_{1}\) where \(T_{1}\) is the longitudinal spin relaxation rate. While zero-field optical decay rates, including the intersystem crossing, are measured experimentally in literature (see Table 1), the optical pumping rates can be calculated from the radiative decay rates (\(k_{r}^{0}\)) using the relation \(k_{14,25,36}^{0}=\beta k_{r}^{0}\). Here \(\beta\) is a dimensionless parameter related to the optical pumping rate as follows. 
\[\beta=\frac{\sigma}{4\cdot k_{r}^{0}\cdot h\nu}\times\text{laser intensity}, \tag{6}\] where \(\sigma\) is the absorption cross-section of NV centres under 532 nm laser illumination, \(\nu\) is the laser frequency. A factor of 4 accounts the optical excitation of NVs in only one of the four possible orientations in diamond. In steady state, one of the classical rate equations become redundant and can be replaced with \(\sum n_{i}=1\). In turn, this normalizes the solution w.r.t. the total NV concentration (as in the definition of spin polarization). To account the thermal effect, equation (4) can be solved by expressing it in matrix form: \[\mathbf{A}n=B \tag{7}\] where \(\mathbf{A}_{7\times 7}\) is the matrix of transition rate coefficients, \(k_{ij}\), while \(n_{7\times 1}\) is a column matrix of unknown populations. At a given temperature i.e., room temperature in current study, another column matrix, \(B_{7\times 1}\) can be calculated by setting \(\beta\) to 0 (dark) in transition rate matrix \(A\) and replacing \(n\) with the dark state populations given by the Boltzmann's distribution assuming that all the NV spins are in the ground state manifold at room temperature, thus \(i\) or \(j\in\{1,2,3\}\). Figure 4: **Optically-induced spin polarization of single-quantum transitions in NV centers.** Spin polarization between the eigen-states of SQTs calculated from the ESR line intensities (in symbols) using equation (3) as a function of (a) the degree of magnetic field misalignment, \(\theta\) under the highest laser intensity of 83 mW/mm\({}^{2}\) and (b) the laser intensity for selected \(\theta=0^{\circ}\) and \(70.5^{\circ}\). The experimental values are compared to the spin polarization values (in red lines) calculated from the seven level kinetic model using equation (7) with parameters given in table 1. Using the parameters provided in table 1, the obtained spin polarization are plotted in figure 4. The 7-level energy model of NV centres successfully predicts the trend of angular and laser power dependence of spin polarization in experimental results. The quantitative agreement between simulated and experimental polarization is discussed in Section IV. ### Double Quantum Transitions In the ESR data acquired for experimental configuration with NV centers in off-axis \(\mathbf{B}\)(B, \(\theta\neq 0^{\circ}\)) under photo-excitation, we made a notable observation of emergence of the ESR signal corresponding to \(|1\rangle\leftrightarrow|3\rangle\) DQT (e.g., for \(\theta=20^{\circ}\) in figure 5a). This finding is confirmed by comparing the resonant fields extracted from the ESR signal at all non-zero \(\theta\) (through multi-peak Lorentzian fits) with the values obtained by solving the Hamiltonian (equation (1)) for this transition (see 5b). This is an intriguing finding because under perfect alignment (\(\theta=0^{\circ}\)), this particular transition between \(\mathrm{m_{s}}=-1\) (\(|1^{0}\rangle\)) and \(+1\) (\(|3^{0}\rangle\)) (above 102.5 mT) is not allowed. It is only under the strong spin-mixing regime (see figure 1c) where these transitions are allowed and are visible in the ESR spectra. Nevertheless, the observation of the DQT signal can provide valuable insights into the nature (angle) and strength of the misaligned magnetic fields relative to NV centers. This can help improving the performance of their various applications particularly quantum sensing, where precise information of the magnetic field is essential. 
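As an illustration of the kinetic-model calculation used throughout this section, the following sketch (ours, not the authors' code) shows how the rate mixing of equation (5) and the steady-state solution of equations (4) and (7) can be carried out numerically. The rate values shown are rough placeholders, and the index convention for the rates is stated explicitly in the comments; the values actually used in this work are those of Table 1, and the thermal correction and \(T_{1}\) terms described above are omitted for brevity.

```python
# Minimal illustrative sketch (ours, not the authors' code) of the kinetic-model
# calculation of Eqs. (4)-(7).  Convention assumed here: K[i, j] is the rate
# from state j into state i, with states 1-7 of Fig. 1b mapped to indices 0-6.
import numpy as np

def mix_rates(K0, alpha_sq):
    """Eq. (5): k_ij(B) = sum_pq |alpha_ip|^2 |alpha_jq|^2 k0_pq,
    with alpha_sq[i, p] = |alpha_ip|^2 (a 7 x 7 array)."""
    return alpha_sq @ K0 @ alpha_sq.T

def steady_state(K):
    """Eq. (4) in steady state, with one equation replaced by sum_i n_i = 1."""
    G = K - np.diag(K.sum(axis=0))        # generator matrix, dn/dt = G @ n
    A = np.vstack([G[:-1], np.ones(7)])
    b = np.zeros(7); b[-1] = 1.0
    return np.linalg.solve(A, b)

def polarization(n, i, j):
    """S_z^ij = (n_i - n_j) / sum_p n_p, with states numbered from 1."""
    return (n[i - 1] - n[j - 1]) / n.sum()

# placeholder zero-field rates in MHz (only ratios matter for the steady state)
K0 = np.zeros((7, 7))
k_r, beta = 63.0, 0.01                    # radiative decay rate, pumping strength
for g, e in [(0, 3), (1, 4), (2, 5)]:     # pairs (|1>,|4>), (|2>,|5>), (|3>,|6>)
    K0[g, e] = k_r                        # spin-conserving radiative decay
    K0[e, g] = beta * k_r                 # optical pumping, Eq. (6)
K0[6, 4] = K0[6, 5] = 80.0                # intersystem crossing |5>,|6> -> |7>
K0[6, 3] = 13.0                           # weaker crossing |4> -> |7>
K0[0, 6], K0[1, 6], K0[2, 6] = 3.5, 1.1, 1.1   # singlet decay |7> -> |1>,|2>,|3>

n = steady_state(K0)                      # aligned field: no mixing applied
# off-axis field: first transform, e.g. steady_state(mix_rates(K0, alpha_sq))
print(polarization(n, 1, 2))
```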
We also calculated the optically-induced spin polarization, both, experimentally from the ESR data (using equation (3)) and computationally by solving the seven-level rate equation into matrix formalism (using equation (7)). The extracted values are shown in figure 5c. A maximum (experimental) spin polarization is obtained within the \(\theta\) range of \(10^{\circ}\)-\(35^{\circ}\). While qualitatively the trends of the experimental data (markers) and fits (solid lines) match well, we cannot obtain as good a quantitative fit for larger \(\theta\) using the same parameters as the SQT case, as further discussed in Section IV. ## IV Discussion and Conclusions The commonly used seven-level kinetic model of NV centers was extended by [26] to incorporate spin-mixing effects in off-axis magnetic fields. Authors primarily focus on the low magnetic field regime and specific angles. Further studies [18; 9] have investigated NV spin polarization under photo-excitation with seven-level model analysis at high fields, but specifically for aligned fields. In other work, [23] examined the angular dependence of spin polarization in high off-axis magnetic fields. Their approach accounted for this by applying the Wigner rotation exclusively to the ground state Hamiltonian while considering equal population for \(m_{\mathrm{s}}=\pm 1\) levels and assuming a fixed polarization for the aligned case based on experimental data. In contrast, seven-level kinetic model, further incorporating thermal polarization, employed by us also derive the aligned case spin polarization from first principles and considers the differences in \begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline \(k^{0}_{41,52,63}\) & 62.7 MHz \\ \(k^{0}_{47,67}\) & 80 MHz \\ \(k^{0}_{57}\) & 12.97 MHz \\ \(k^{0}_{1,73}\) & 1.08 MHz \\ \(k^{\prime}_{72}\) & 3.45 MHz \\ \(T_{1}\) & 5.5 ms \\ \(\sigma\) & \(9.3\times 10^{-17}\) cm\({}^{2}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters used in seven-level kinetic model for calculation of spin polarization values using equation (7). Zero-field transition rates are taken from [27]. Figure 5: **Double-quantum ESR spectroscopy of NV centers in off-axis magnetic fields.** (a) (Integrated) ESR spectra under the laser illumination of 83 mW/mm\({}^{2}\), after baseline correction, of DQT between spin-mixed states at \(\theta=20^{\circ}\) (in blue symbols), with fitting into multiple hyperfine peaks with a Lorentzian profile (in red line). (b) The extracted resonant fields at different \(\theta\) are plotted against the theoretical values (in red line) derived from the ground state Hamiltonian (equation (1)). (c) The experimentally calculated spin polarization (in symbols) using equation (3) for DQT at different \(\theta\) are compared to the values obtained from the kinetic calculations using equation (7) with parameters given in table 1. zero-field splittings (\(D_{\rm gs}\) and \(D_{\rm es}\)) for the ground and excited states, which gives different spin mixing conditions. This is evident from the observation of DQT in our ESR data measured in off-axis field, demonstrating the distinct populations of \(m_{\rm s}=\pm 1\) sub-levels. Moreover, none of the previous studies have investigated the amplification of double quantum transitions (DQT) under laser illumination for high off-axis field conditions. 
Our objective is to comprehensively explore the phase-space of these experimental parameters, enabling a more thorough understanding of the influence of laser illumination on ground state spins. In this work, we conducted systematic ESR experiments on a diamond sample with a relatively lower NV concentration (\(\approx\) 0.2 ppm) compared to previous such studies (\(\geq\) 0.49 ppm in [18] and \(\geq\) 1.9 ppm in [23]). The lower concentration with long \(T_{2}^{*}\) of \(\simeq\) 5.6 \(\mu\)s (vs 29 ns in [18]) gives rise to fully resolved \({}^{14}\)N hyperfine levels in our ESR data. This longer \(T_{2}^{*}\) is advantageous for magnetic sensing applications. In contrast, the higher NV density samples used in [18; 23] exhibited broader linewidths, which are more desirable for diamond maser applications, specifically to achieve the high bandwidth of operation, as highlighted by Sherman et al. [18]. We compared our experimental results of optically-induced enhancement of spin polarization (vs thermal polarization) for \(\theta\) = 0\({}^{\circ}\) case with previous works. Our data demonstrated a high amplification of \(\times\) 685 for SQTs at laser intensity of 83 mW/mm\({}^{2}\). This surpasses the maximum amplification of \(\approx\)\(\times\) 400 obtained Sherman et al. [18] at similar illumination intensity for their lowest NV density (0.49 ppm) sample. The measured data in their study indicates a lowering of enhancement and \(T_{1}\) as NV density increases in the sample. Subsequent kinetic model calculations, considering experimental \(T_{1}\) values, clearly emphasize the noteworthy impact of \(T_{1}\) on the spin polarization of the sample. Based on this analysis, the higher amplification in our low NV density sample compared to [18] can be attributed to a relatively higher \(T_{1}\) for our sample. In other work, Drake et al. [23] reported a saturation of amplification, reaching \(\approx\)\(\times\) 350 at a much lower laser intensity of \(\approx\) 30 mW/mm\({}^{2}\). While above-stated works, including our, are performed at room temperature, Degen and co-workers [28] conducted ESR experiments at 200 K using a higher NV density sample (9 ppm). Their observations followed a similar trend as [23], revealing a optical saturation effect at even lower laser intensity of \(\approx\) 10 mW/mm\({}^{2}\) and a maximum enhancement of \(\times\) 170. The team accounted this saturation effect in their kinetic model calculations by considering the NV (negatively charged \(\leftrightarrow\) neutral) charge state conversion processes, which are recognized to be stimulated by photo-excitation [29; 30]. In contrast to the findings in [23; 28], our experimental data, up to 83 mW/mm\({}^{2}\), does not exhibit any signs of saturation. This excludes a significant photo-ionization of (negatively charged) NV charge state in our sample. Conversely, our data reveals a higher experimental spin polarization for SQTs compared to that predicted by kinetic model calculations (even without considering photo-ionization processes) using typical values of different parameters reported in the literature, as discussed in figure S5 in the supplementary information. Lastly, we analyze our experimental findings for both SQTs and DQT in the framework of seven-level kinetic model calculations. To start with, zero-field optical decay rates, including the intersystem crossing, for NV spin have been experimentally measured by different groups [9; 26; 27; 31; 32]. 
Specific values for different models are provided in Table S1 in supplementary information. A comprehensive analysis of the experimental results for different models are provided in figure S5 in the supplementary information. This reveals that the model 3, adapted from [27], describes well the angular (\(\theta\)) dependence for SQTs. However, when model 3 is used for DQT, although calculated values shows a similar trend to the experimental data, quantitatively, they tend to overestimate the spin polarization. This discrepancy can be attributed to the significantly lower \(T_{1}\) value for DQT compared to SQTs, as previously reported by Sangtawesin et al. [33]. In summary, our comprehensive study of NV transitions under off-axis fields has significant implications for advancing NV sensing for a broad variety of emerging quantum materials based on correlated electron systems as well as biological samples. To our knowledge this is the first experimental observation of the amplification of the double quantum transitions in NV centers and we likely have also seen the amplification of the signal from the nuclear double quantum transitions as well. These findings can also offer valuable insights into nuclear hyperpolarization using NV centers. ## V Methods The study employed a commercial continuous-wave Electron Spin Resonance (ESR) Spectrometer (JEOL JES-FA200) operating in the X-Band (\(\sim\) 9.43 GHz) with a cylindrical TE\({}_{011}\) cavity. The (100)- single crystal diamond (DNV-B1 from Element Six, NV concentration: 200 ppb) was fixed to a quartz tube at 45\({}^{\circ}\) and placed in the cavity such that the \(\langle 110\rangle\) axis of the diamond crystal is parallel to the quartz tube. The quartz tube was then rotated about its axis (along the lab vertical) to change the orientation of the NV-axis w.r.t the applied static field. This ensures that 2 of the 4 possible orientations of NV in diamond are always perpendicular to the microwave field and the applied magnetic field stays in the [110] plane. The angle was adjusted manually using a goniometer with 1\({}^{\circ}\) resolution. The sample was illuminated by a 532 nm laser via an optical access to the cavity. All the ESR measurements were performed at room temperature and with a low microwave power of 10 \(\mu\)W, after verifying the linear response of the ESR signal at this power level. ## Acknowledgements This work was supported by DST, India, under QuST program vide sanction no. DST/ICPS/QuST/Theme-2/2019/General and IIT Madras via exploratory research and team research grants. The authors extend their thanks to the staff at Sophisticated Analytical Instruments Facility (SAIF), IIT Madras for making the EPR facility available for our use.
2307.16506
Explainable Equivariant Neural Networks for Particle Physics: PELICAN
PELICAN is a novel permutation equivariant and Lorentz invariant or covariant aggregator network designed to overcome common limitations found in architectures applied to particle physics problems. Compared to many approaches that use non-specialized architectures that neglect underlying physics principles and require very large numbers of parameters, PELICAN employs a fundamentally symmetry group-based architecture that demonstrates benefits in terms of reduced complexity, increased interpretability, and raw performance. We present a comprehensive study of the PELICAN algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of specifically identifying and measuring the $W$-boson inside the dense environment of the Lorentz-boosted top-quark hadronic final state. We also extend the application of PELICAN to the tasks of identifying quark-initiated vs.~gluon-initiated jets, and a multi-class identification across five separate target categories of jets. When tested on the standard task of Lorentz-boosted top-quark tagging, PELICAN outperforms existing competitors with much lower model complexity and high sample efficiency. On the less common and more complex task of 4-momentum regression, PELICAN also outperforms hand-crafted, non-machine learning algorithms. We discuss the implications of symmetry-restricted architectures for the wider field of machine learning for physics.
Alexander Bogatskiy, Timothy Hoffman, David W. Miller, Jan T. Offermann, Xiaoyang Liu
2023-07-31T09:08:40Z
http://arxiv.org/abs/2307.16506v4
# Explainable Equivariant Neural Networks for Particle Physics: PELICAN ###### Abstract We present a comprehensive study of the PELICAN machine learning algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of specifically identifying and measuring the \(W\)-boson inside the dense environment of the boosted hadronic final state. PELICAN is a novel permutation equivariant and Lorentz invariant or covariant aggregator network designed to overcome common limitations found in architectures applied to particle physics problems. Compared to many approaches that use non-specialized architectures that neglect underlying physics principles and require very large numbers of parameters, PELICAN employs a fundamentally symmetry group-based architecture that demonstrates benefits in terms of reduced complexity, increased interpretability, and raw performance. When tested on the standard task of Lorentz-boosted top quark tagging, PELICAN outperforms existing competitors with much lower model complexity and high sample efficiency. On the less common and more complex task of 4-momentum regression, PELICAN also outperforms hand-crafted, non-machine learning algorithms. We discuss the implications of symmetry-restricted architectures for the wider field of machine learning for physics. ###### Contents * 1 Introduction * 2 Equivariance and jet physics * 3 PELICAN architecture * 4 Tagging jets from Lorentz boosted top quarks * 5 \(W\)-boson 4-momentum reconstruction * 6 \(W\)-boson mass measurement * 7 PELICAN explainability * 8 IRC-safety and PELICAN * 9 Conclusion * 10 Acknowledgements * A Additional results and plots * B IRC-safety and Lorentz symmetry ## 1 Introduction Identifying, reconstructing, and measuring the properties and dynamics of high-energy, short-distance particle phenomena is inherently an inference task, since direct access to the fundamental processes is often impossible due to the time and length scales at which they occur. The suite of detection techniques, pattern recognition algorithms, and measurement approaches used to perform this task inevitably imposes constraints on both the nature of the information used as well as on the form and structure of the results. Such constraints play a crucial role in the context of jet substructure measurements, in which detailed analysis is performed on the long-distance features of Lorentz-boosted particle decays, parton showering, and radiation patterns found in the collimated sprays of particles that form the jets themselves. In this work, we present a comprehensive analysis of a new approach to multiple jet substructure-based inference tasks using a machine learning (ML) architecture that fundamentally respects permutation and Lorentz-group symmetries: PELICAN, the permutation equivariant and Lorentz invariant or covariant aggregator network. Our approach thus imposes explicit physics-informed constraints on the system and consequently yields new insights and capabilities. 
Decades of jet substructure research have yielded a wide range of approaches to performing inference tasks such as: distinguishing quark-initiated from gluon-initiated jets [1; 2; 3; 4; 5]; discriminating jets formed from Lorentz-boosted top quarks, Higgs and \(W\)-bosons, from the continuum background of jets formed from light-quarks and gluons [6; 7; 8; 9]; dissecting and measuring the parton-shower structure of light-quark and gluon jets themselves [10; 11; 12; 13; 14; 15]. Many approaches have been adopted to perform these tasks, including the direct use of discriminating high-level observables and multi-variate methods [16; 17; 18], as well as a growing number of ML architectures using a variety of latent-space representations. For a comprehensive overview of jet substructure measurements, see Refs. [19; 20], as well as Ref. [21] for a general review of ML methods in high-energy physics (including substructure measurements). As the model complexity has grown, so too have questions regarding the relationship of both the methods and the constraints that they impose to the fundamental physical processes that they are used to model. In particular, the use of observables, architectures, and latent space representations that adhere closely to the structure and dynamics of the physics processes under study has been found to provide not only enhanced performance, but also significant insights and improvements in interpreting the results [18, 22, 23]. Imbuing these models with knowledge of, or even fundamental respect for, the symmetry group structures of the system under study has thus become increasingly impactful in the study of jet substructure, especially in the context of ML models and various neural network (NN) architectures [24, 25, 26]. There are several common approaches to enforcing continuous symmetries in NNs. Data augmentation can be used to train a model to have a particular sparsity structure and become approximately symmetric. However, when model complexity and interpretability are of concern, as is the case in particle physics and jet substructure analyses, a different approach is helpful. Similar issues arise with approaches that use preprocessing/normalization, which moreover come with inherent ambiguities and discontinuities that can be detrimental for sufficiently complex tasks. Traditionally, ML algorithms are evaluated based on basic performance metrics such as accuracy and computational cost. However, in contexts where the trained algorithms are treated not only as predictors or generators, but as actual models for some process (which is especially true in scientific applications), other metrics of model quality are valuable. Model complexity (defined as e.g. the number of parameters), explainability and interpretability can be important for making a viable physics model out of an ML algorithm. Further, certain problem-specific properties such as symmetries can be critical as well. Symmetries in ML are known to produce less complex models which respect basic geometrical rules and arguably provide more opportunities for interpretability and explainability (e.g. convolutional neural network (CNN) kernels are often interpreted as visual features). Even in realistic settings where the symmetries are merely approximate, symmetry-constrained architectures often outperform more general architectures in terms of pure accuracy (see e.g. Section 4), but even in cases when that is not true, symmetric architectures should not be discounted due to their other benefits. 
For these reasons, as advocated for in Ref. [27], we have adopted an approach of building all symmetries directly into the PELICAN network architecture itself, similar to the inherent translational symmetry of CNNs. ### Summary of results In Section 2 we discuss equivariance in jet physics and introduce the tools we need to build an efficient equivariant architecture. In Section 3 we describe the architectures of PELICAN classifiers and regressors. Now we briefly summarize the main results presented in this work, corresponding to Sections 4 through 8. Top-tagging with a PELICAN classifier. We train PELICAN top taggers using a public benchmark dataset, to distinguish between top quark jets, and light quark and gluon jets. These taggers achieve state-of-the-art performance on the benchmark with fewer learnable parameters than the previous highest-performing network. PELICAN top taggers with as few as 11k parameters outperform all non-equivariant networks in the benchmark. See Section 4 for details. \(W\)-boson 4-momentum reconstruction with PELICAN. We train a PELICAN model using a custom dataset [28] of fully-hadronic top decays to reconstruct the full 4-momentum of the intermediate \(W\)-bosons. Specifically, PELICAN uses 4-momenta of the top quark jet constituents as inputs. PELICAN performs favorably in reconstructing the full \(W\) momentum when compared with the Johns Hopkins (JH) top tagger [7], which produces \(W\) candidates for the subset of jets that pass its tagging. PELICAN achieves better \(p_{T}\), mass, and angular resolutions on JH top-tagged jets - and achieves comparable resolutions to the JH tagger even when evaluated on the full dataset. Additionally, we train a PELICAN model to reconstruct the 4-momentum of only the products of the \(W\to qq^{\prime}\) decay which are contained within the jet. We discuss differences in performance and effects of this choice in reconstruction targets in Section 5. \(W\)-boson mass reconstruction with PELICAN. Mass reconstruction is a common particle physics analysis task, and any reconstruction algorithm should be robust and relatively free of bias. In Section 6 we discuss the nuances of PELICAN mass reconstruction targeting the \(W\)-bosons in the above-mentioned dataset [28] as an example. The results show that eliminating bias in the underlying dataset is required to produce an unbiased final algorithm. In the case of \(W\) mass reconstruction, this is achieved by training PELICAN on a dataset with multiple values of \(m_{W}\). Explaining PELICAN weights. PELICAN's respect of the particle permutation and Lorentz symmetries inherent to particle datasets provides it with explainability and interpretability rarely found in particle physics machine learning applications. In Section 7 we investigate the rich penultimate layer of PELICAN and its discriminatory power. In particular, we discuss interpretations of PELICAN as a soft clustering and detector-unfolding algorithm of sorts. IRC-safety and PELICAN. In particle physics, IRC-safety is an algorithmic concern which restricts tools to be robust with respect to soft-particle emissions (infrared - IR) and collinear (C) splits due to divergences in perturbative quantum chromodynamics (QCD). In Section 8 we modify PELICAN into IR-safe and IRC-safe versions and discuss their relative performances. 
## 2 Equivariance and jet physics This section aims to establish a clear connection between the group theory that underlies the PELICAN architecture and the implementation of this approach for both classification and regression, as described in Section 3. Given a symmetry group \(G\) and two sets \(X,Y\) on which an action of \(G\) is defined, a mapping \(F:X\to Y\) is called _\(G\)-equivariant_ if \(F(g\cdot x)=g\cdot F(x)\) for any \(x\in X\) and \(g\in G\). In particular, if the action of \(G\) on \(Y\) happens to be trivial (i.e. \(g\cdot y=y\) for all \(g,y\)), then \(F\) is called _invariant_. In relativistic physics, equivariant maps are typically represented by tensors with equivariant spacetime indices treated via Einstein notation. For instance, the electromagnetic field tensor \(F^{\mu\nu}\) can be viewed as a Lorentz-equivariant mapping from covariant vector fields to contravariant ones. In this work we will be interested in tasks from jet physics that can be reduced to learning a Lorentz-equivariant map. In this section we review some basics of the Lorentz symmetry in the context of such tasks. ### Lorentz symmetry and jets Lorentz symmetry is one of the fundamental symmetries of the Standard Model of particle physics. The full Lorentz group \(\mathrm{O}(1,3)\) can be defined as the set of linear transformations of the 4-dimensional spacetime that preserve the Minkowski metric \(\eta=\mathrm{diag}(1,-1,-1,-1)\). However, in this work we will restrict ourselves to the _proper orthochronous_ subgroup \(\mathrm{SO}^{+}(1,3)\) that preserves spatial and temporal orientations. Lorentz invariance is the mathematical encapsulation of the fact that the outcomes of physical phenomena don't depend on the inertial frame of the observer. In the context of particle accelerators, this boils down to the observation that all initial and final states of a particle interaction are the same in all inertial frames. This is formally reflected in the fact that the Standard Model of particle physics is Lorentz-invariant, and therefore any model of any physically relevant processes encompassed by the Standard Model can be as well. A couple subtle points are worth addressing before applying Lorentz symmetry to experimental tasks in jet physics. Neither the actual particle detectors nor the software simulating particle decays and their detection are Lorentz-invariant. Reasons for this include: non-invariant corrections to perturbative computations in quantum chromodynamics (QCD); non-invariance of jet clustering algorithms; practical limitations of detectors such as finite resolution and energy cutoffs. Nevertheless, it is still valid to learn Lorentz-invariant models from data obtained this way. Firstly, QCD is globally Lorentz-invariant and boosting the _entire_ event does not change the outcome of the decay process. As long as inference is performed on data obtained in conditions similar to the conditions of the perturbative simulation, corrections from such effects as the running of the couplings with varying momentum scales are not a concern either. The same applies to jet clustering algorithms and the finite detection resolution: as long as the data used for inference was obtained in the same reference frame as the data used for training, the inference is valid and the outputs are expected to be Lorentz-equivariant. 
Finally, the fact that the detector itself introduces a fixed reference frame can be fully addressed without breaking the symmetry of the model by including detector geometry among its inputs. This will be discussed below in Section 3.1. ### Lorentz invariance The classification task considered in this work is exactly Lorentz invariant. The physical content of this statement will be discussed below, but mathematically it simply means the following. If the inputs to the network are a collection of 4-vectors (energy-momentum vectors in our case) \(p_{1},\ldots,p_{N}\), the output is \(F(p_{1},\ldots,p_{N})\), and \(\Lambda\in\text{SO}^{+}(1,3)\) is a Lorentz transformation, then \[F\left(\Lambda p_{1},\ldots,\Lambda p_{N}\right)=F\left(p_{1},\ldots,p_{N} \right). \tag{1}\] There are a few ways of constructing a machine learning model that satisfies a constraint of this kind. The simplest one is to hand-pick a set of invariant observables (such as particle masses, relative masses, particle identification labels and charge) and feed them into a generic neural network architecture. Another approach inspired by convolutional networks is to preserve group-equivariant latent representations in the hidden layers. In this case the neuron nonlinearity must be a Lorentz-equivariant operation, and examples of this can be found in both the Lorentz Group Network (LGN) [25] and LorentzNet [26]. As in traditional CNN's used in image processing, equivariant latent representations, as opposed to invariant ones, can regularize the network via efficient weight-sharing and improve training. Here, we take a slightly different approach. Given a set of 4-vector inputs \(p_{1},\ldots,p_{N}\), we compute a _complete_ set of Lorentz invariants on that set. For classical groups, including the Lorentz group, the space of invariants constructed out of a collection of vectors in the fundamental representation are functions of only the pairwise invariant dot products (using the appropriate invariant quadratic form for the given symmetry group) and of square determinants (of, say, 4 column-vectors for the Lorentz group) [29]. Furthermore, if the invariant is required to be symmetric in the vector inputs, then it's _only_ a function of the dot products (see also the discussion in Ref. [30]). In short, all totally symmetric Lorentz invariants can be written in the following form: \[I(p_{1},\ldots,p_{N})=f\left(\{p_{i}\cdot p_{j}\}_{i,j}\right). \tag{2}\] This is the first key idea used in our architecture. The first step performed by the input layer is the computation of the \(N\times N\) array of dot products between the particle 4-momenta (also known as the Gram matrix). Note, however, that from simple dimension counting it's clear that the \(N(N-1)/2\) components of the Gram matrix \(\{p_{i}\cdot p_{j}\}\) can't be independent. The physical manifold inside this high-dimensional space is defined by the set of constraints \(\det M_{5}=0\) for _every_ 5-minor \(M_{5}\) of the Gram matrix (that is, any matrix obtained from the original one by crossing out \(N-5\) rows and \(N-5\) columns). Moreover, a causally related set of points such as a particle jet will always satisfy \(p_{i}\cdot p_{j}\geqslant 0\) for all \(i,j\). Therefore a neural network whose input is an \(N\times N\) matrix will learn the task only on this \((4N-6)\)-dimensional submanifold of \(\mathbb{R}^{N^{2}}\). The outputs of the trained model on the rest of the space will be uncontrollable and physically meaningless. 
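To make this input structure concrete, the following is a minimal NumPy sketch (illustrative only, not the released PELICAN implementation; the array names and toy numbers are our own assumptions) of the Gram matrix of pairwise Minkowski dot products, together with a quick numerical check that it is unchanged when all input 4-vectors are boosted to a different frame.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

def gram_matrix(p):
    """Pairwise Lorentz-invariant dot products d_ij = p_i . p_j for an array p of
    shape (N, 4) holding 4-momenta ordered as (E, px, py, pz)."""
    return p @ ETA @ p.T

def boost_z(beta):
    """Lorentz boost along the z-axis with velocity beta (used only for the check below)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[3, 3] = gamma
    L[0, 3] = L[3, 0] = gamma * beta
    return L

# Toy check: the Gram matrix is unchanged when every input 4-vector is boosted.
rng = np.random.default_rng(0)
p3 = rng.normal(size=(5, 3))
p = np.column_stack([np.sqrt((p3**2).sum(axis=1) + 0.14**2), p3])   # small mass shell
L = boost_z(0.6)
assert np.allclose(gram_matrix(p), gram_matrix(p @ L.T))
```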
### Permutation equivariance Particle data are often interpreted as a point cloud since there is no natural ordering on the vectors. For such problems it makes sense to use one of the permutation-invariant or equivariant architectures. One of the simplest approaches is called Deep Sets [31], which has been applied to jet tagging [24] and even heavy-flavor tagging [32]. The fundamental fact used in deep sets is that any permutation-invariant continuous mapping of inputs \(x_{1},\ldots,x_{N}\) can be written in the form \(\psi\left(\sum_{i}\varphi(x_{i})\right)\), where \(\psi\) and \(\varphi\) can be approximated by neural networks. The main limitation of permutation-invariant architectures such as Deep Sets is the difficulty of training. Since aggregation (summation over the particle index) happens only once, the Deep Sets architecture can struggle with modeling complex higher-order interactions between the particles [33]. The network representing \(\psi\) is forced to be a relatively wide fully connected network, which makes it difficult to train. The alternative to permutation-invariant architectures is provided by permutation-_equivariant_ ones. Given a symmetry group \(G\) (e.g. the group of permutations), a representation \((V,\rho)\) is a tuple where \(V\) is a set and \(\rho:G\times V\to V\) is a map that becomes a bijection \(\rho_{g}=\rho(g,\cdot):V\to V\) for any fixed value of the first argument, \(\rho_{e}=\text{id}\), and \(\rho_{g^{-1}}=\rho_{g}^{-1}\). Given two representations \((V,\rho)\) and \((V^{\prime},\rho^{\prime})\) of a group \(G\), a map \(F:V\to V^{\prime}\) is called equivariant if it _intertwines_ the two representations, that is: \[F(\rho_{g}(v))=\rho^{\prime}_{g}\left(F(v)\right),\quad v\in V,\ g\in G. \tag{3}\] Equivariance is a key property of all convolutional networks - for example, in CNN's the convolution operation is inherently equivariant with respect to translations (up to edge effects). Similarly, Graph Neural Networks (GNN's) use permutation equivariance to force architectures that respect the underlying graph structure and don't exhibit false implicit biases that produce different outputs after a mere renaming of the graph vertices. In this context, we review the standard definition of a message passing layer where the particles are treated as nodes in a graph (for example, the fully connected graph), and every layer of the network only updates the activation at every node. If we denote by \(f_{i}\) the data assigned to node \(i\), then the message passing layer will typically construct "messages" \(m_{ij}=m(f_{i},f_{j})\) and then update each node by aggregating the messages coming from all neighbors of that node and combining the result with the original state of the node: \(f_{i}^{\prime}=\psi(f_{i},\sum_{j}m_{ji})\). Sometimes the graph also possesses "edge data" \(D_{ij}\) that can be incorporated into the message-forming stage. Message passing architectures have been successfully applied to jet tagging, most prominently in Refs. [25, 26]. However, attempts to combine message passing with Lorentz invariance runs into a major obstacle: as we have seen, the inputs to the network consist of _nothing but_ edge data \(d_{ij}=p_{i}\cdot p_{j}\). Traditional message passing would require a reduction of this set of inputs to a point cloud (with only one particle index), potentially restricting the set of possible higher-order interactions between the points. 
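Before moving on, a purely illustrative NumPy sketch of the Deep Sets form \(\psi\left(\sum_{i}\varphi(x_{i})\right)\) mentioned above may be helpful; the weights and shapes here are toy choices and are not part of PELICAN.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)   # toy per-particle feature map phi
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)    # toy set-level readout psi

def phi(x):
    return np.tanh(x @ W1 + b1)

def psi(z):
    return z @ W2 + b2

x = rng.normal(size=(7, 4))                 # 7 particles, 4 features each
out = psi(phi(x).sum(axis=0))               # Deep Sets: psi(sum_i phi(x_i))
perm = rng.permutation(7)
assert np.allclose(out, psi(phi(x[perm]).sum(axis=0)))   # invariant under reordering
```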
To avoid making these unnecessary choices, we employ the general permutation-equivariant layers suggested in Refs. [34, 35]. In the general setting, permutation equivariance is a constraint on mappings \(F\) between arrays \(T_{i_{1}i_{2}\cdots i_{r}}\) of any rank \(r\), every index \(i_{k}\in\{1,\ldots,N\}\) referring to a particle label, whereby permutations of the particles "commute" with the map: \[F\left(\pi\circ T_{i_{1}i_{2}\cdots i_{r}}\right)=\pi\circ F\left(T_{i_{1}i_{2}\cdots i_{r}}\right),\quad\pi\in S_{N}. \tag{4}\] Here, the action of permutations is "diagonal": \(\pi\circ T_{i_{1}i_{2}\cdots i_{r}}=T_{\pi(i_{1})\cdots\pi(i_{r})}\). Graph Neural Networks explicitly implement this constraint for rank 1 arrays (node information). A higher-order generalization of the Message Passing layer can be defined as \[\text{\bf Equivariant Layer:}\quad T^{(\ell+1)}=\text{\bf Agg}\circ\text{\bf Msg}\left(T^{(\ell)}\right). \tag{5}\] Here, Msg is a node-wise nonlinear map ("message forming") shared between all nodes, and Agg is a general permutation-equivariant linear mapping ("aggregation") acting on the particle indices of \(T\). Note that whether Msg is node-wise and whether Agg is linear is somewhat ambiguous based on how one separates the mappings into their components, which is why, in particular, the traditional formulation of message passing allows messages to be functions of pairs of nodes. In practice, our aggregation block will also involve a nonlinear activation function. ### Elementary equivariant aggregators It only remains to describe the exact structure of the equivariant aggregation layers defined above. Since the general case is presented in Refs. [34, 35], here we will only present the layers that we need for jet tagging. Since the input is an array of rank 2, the main equivariant layer for us is one that transforms arrays of rank 2 to other arrays of the same rank: \(T_{ij}\mapsto T^{\prime}_{ij}\). The space of all linear maps of this type turns out to be 15-dimensional. The basis elements of this space can be conveniently illustrated using binary arrays of rank 4. There are 15 such arrays \(B^{a}_{ijkl},a=1,\ldots,15\), and the action of the equivariant layer can be written as \[T^{\prime a}_{ij}=\sum_{k,l=1}^{N}B^{a}_{ijkl}T_{kl}. \tag{6}\] The 15 aggregators \(B^{a}\) are easy to visualize. This is done below for \(N=2\). The smallest squares represent components of the input \(2\times 2\) array, and the larger \(2\times 2\) squares represent components of the output array. Dots represent the non-zero components of the binary tensors \(B^{a}\), and every component of the output tensor is the result of aggregation over all inputs marked by the dots. Output components that lack any dots are set to be a fixed constant, by default zero (the affine versions of these mappings include two such parameters: one constant for the diagonal and another for the remaining components). By "aggregation" we mean, in general, any symmetric function, but in practice it is usually a sum or mean. For example, the first aggregator is simply the identity map on matrices: the \(ij\)'th component of the output array is the result of aggregation over only the \(ij\)'th component of the input. The second aggregator realizes the transposition of arrays \(T^{\prime}_{ij}=T_{ji}\). The following three aggregators represent various ways of embedding the diagonal of the input array in an equivariant way. 
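Before describing the remaining aggregators, here is a minimal NumPy sketch (our own illustration, covering only a handful of the 15 maps and using sums as the symmetric function) together with a numerical check of their permutation equivariance.

```python
import numpy as np

def eq2to2_examples(T):
    """A handful of the 15 linear permutation-equivariant maps from (N, N) arrays to
    (N, N) arrays (illustrative subset only; sums are used as the symmetric function)."""
    N = T.shape[0]
    return {
        "identity":           T,                                                # T'_ij = T_ij
        "transpose":          T.T,                                              # T'_ij = T_ji
        "diag_on_diag":       np.diag(np.diag(T)),                              # keep the diagonal
        "row_sums_broadcast": np.tile(T.sum(axis=1, keepdims=True), (1, N)),    # T'_ij = sum_k T_ik
        "col_sums_broadcast": np.tile(T.sum(axis=0, keepdims=True), (N, 1)),    # T'_ij = sum_k T_kj
        "trace_on_diag":      np.trace(T) * np.eye(N),                          # trace on the diagonal
        "total_sum":          T.sum() * np.ones((N, N)),                        # T'_ij = sum_kl T_kl
    }

# Equivariance check: permuting rows and columns of the input permutes every output the same way.
rng = np.random.default_rng(2)
T = rng.normal(size=(5, 5))
perm = rng.permutation(5)
for name, out in eq2to2_examples(T).items():
    out_of_permuted = eq2to2_examples(T[np.ix_(perm, perm)])[name]
    assert np.allclose(out_of_permuted, out[np.ix_(perm, perm)]), name
```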
It is easy to see that simultaneously swapping the two rows and the two columns of the input is equivalent to doing the same to the output, which confirms equivariance. These first 5 aggregators are "order zero" in \(N\) because they do not actually perform any aggregation. Instead, they can be thought of as permutation-equivariant skip-connections. The second group of 8 "order one" aggregators aggregate over \(N\) components of the input by aggregating either over rows, columns, or the diagonal, and then embedding the result into the output array in all possible equivariant ways. Finally, the last 2 aggregators are the "order two" aggregators that aggregate over all \(N^{2}\) components of the input. If we allow aggregators to be nonlinear, then they can take the following form: the binary array \(B^{a}\) selects a subset of the components of the input array, and then a general symmetric function \(S^{a}\) is applied to that subset: \[T^{\prime a}_{ij}=S^{a}\left(\{T_{kl}\mid k,l:B^{a}_{ijkl}\neq 0\}\right). \tag{7}\] In practice we define \(S^{a}\) as the mean of its inputs followed by an additional scaling by a factor of \(N^{\alpha_{a}}/\bar{N}^{\alpha_{a}}\) with learnable exponents \(\alpha_{a}\), where \(\bar{N}\) is a constant representing the typical number of input vectors expected in the dataset, provided to the model as a hyperparameter. ### Equivariance and Jet Physics There are several reasons for enforcing the full Lorentz symmetry in our ML models. First and foremost, it is a fundamental symmetry of the space to which the inputs belong. Lorentz transformations represent the effect of switching between different inertial frames, and most fundamental processes in physics are independent of the choice of the observer's inertial frame: if a given collection of particles consists of products of a decay of a top quark for one observer, then the same is true for all other observers. Nevertheless, some processes involved in generating and observing high-energy collision events break the Lorentz symmetry in some subtle ways. At the fundamental level, the running of the couplings in QCD can cause Lorentz symmetry breaking in the parton shower distribution functions. Even the amount of final decay products depends on the transversal boost of the initial parton-level particles. However, there is no question that both the original protons and the final (asymptotic) decay products are accurately represented by a collection of 4-vectors subject to the spacetime Lorentz symmetry: the asymptotic outcome of a collision event is independent of the observer's reference frame. Another reason for symmetry-restricted modeling is that, from the geometric perspective, only some mathematical operations are permissible when working with objects that transform in a certain way under a symmetry group. A non-equivariant neural network effectively neglects the vector nature of the inputs by treating individual components of the input vectors as scalars. While improving network expressivity, non-equivariance fails to deliver physically interpretable models. Ultimately, a statement about equivariance is a statement about what the basic _features_ of the data are - e.g. vectors are features, but the individual components of those vectors are not. More relevant to the applications is the fact that both the simulation and the observation of collisions inevitably involves some degree of _clustering_. A particle detector is made of cells (e.g. 
calorimeters) of finite size and as such is unable to distinguish between some particles that are collinear or very close to collinear. Similarly, the standard algorithms for collision simulation typically perform _jet clustering_ to closely reproduce the detector behavior. Clustering of course is not a Lorentz-invariant procedure: particle tracks that diverge by a small angle in one frame will diverge by a large angle in another highly boosted frame. However, this limitation of Lorentz-invariant architectures is fairly minor. Since clustering is always done in a fixed laboratory frame, it is still reasonable to impose the full Lorentz symmetry on the resulting 4-vector data. So unless the pre-clustering data itself is coming from multiple significantly different inertial frames, clustering is not interfering with the fundamental symmetry. Simply put, however a given set of 4-vectors is obtained and represented in a specific inertial frame, those vectors will respect the Lorentz symmetry. ## 3 PELICAN architecture The PELICAN architecture is simplified with respect to LGN due to the use of the complete set of dot products between the input 4-momenta (see Section 2), and this has significant implications for both the overall architecture as well as the ease of training and interpretability of the network. This section discusses each of the primary components of the network, including the inputs and their embedding, the permutation and Lorentz equivariant blocks, and the output layers that determine the structure of the result, namely classification or 4-vector regression. ### Inputs and embeddings Dot Products and Beams. On the input side of the architecture, the first step is to compute all pairwise dot products of the input 4-momenta. Appended to the list of these 4-momenta are two auxiliary beam particles with 4-momenta \((1,0,0,\pm 1)\). This is helpful since the datasets we are using are all simulated in a fixed laboratory frame where the original proton-proton collision happens along the \(z\)-axis, and the auxiliary inputs restore this orientational knowledge. In particular, the dot products between constituents and beams give PELICAN access to the energies and transverse momenta of all constituents. It is worth emphasizing that introducing beams in this manner allows us to fix a particular spatial orientation of the events without restricting or violating the global Lorentz symmetry inherent in the architecture. Indeed, if one were to treat the auxiliary beams as constant vectors of hyperparameters, then this action would reduce the full Lorentz symmetry to merely rotations in the \(xy\)-plane and \(z\)-boosts. However, due to the fact that the beams are fed into the network on equal footing with all other inputs, they are properly treated as full-fledged 4-vectors that should also transform under the global Lorentz symmetry. Thus, counter-intuitively, we let the network access individual energies, transverse momenta and \(z\)-momenta while still preserving full Lorentz symmetry and all the computational benefits that come with it. Input Embedding. Next, there is an embedding layer that applies the function \(f_{\alpha}(x)=((1+x)^{\alpha}-1)/\alpha\) to each dot product with \(C^{0}-2\) different values of the trainable parameter \(\alpha\) (initialized to span the interval \([0.05,0.5]\)). Then the result goes through a masked BatchNorm2D layer. 
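A minimal sketch (our own illustration, not the released implementation) of this embedding step, applied to a toy Gram matrix that already includes the two beams; the masked BatchNorm2D and the beam-label concatenation described next are omitted.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

def embed_dot_products(d, alphas):
    """Apply f_alpha(x) = ((1 + x)**alpha - 1) / alpha channel-wise to an (N, N) array of
    dot products. In PELICAN the exponents are trainable (initialized to span [0.05, 0.5]);
    here they are simply fixed numbers."""
    a = np.asarray(alphas)[None, None, :]
    return ((1.0 + d[..., None]) ** a - 1.0) / a

# Toy event: two constituents plus the two auxiliary beams (1, 0, 0, +-1).
constituents = np.array([[50.0, 10.0, 5.0, 48.0],
                         [30.0, -8.0, 2.0, 28.0]])
beams = np.array([[1.0, 0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0, -1.0]])
p = np.vstack([constituents, beams])
d = p @ ETA @ p.T                               # Gram matrix, including the beams
features = embed_dot_products(d, np.linspace(0.05, 0.5, 6))
print(features.shape)                           # (4, 4, 6)
```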
Finally, this array of scalars gets concatenated with two labels \(\mathrm{L}_{i}\), \(\mathrm{L}_{j}\) per dot product \(d_{ij}=p_{i}\cdot p_{j}\) that indicate whether each of particles \(i\) and \(j\) is a beam or not. The label for a beam is chosen to be 1 and the label for all other particles is 0. At the end of this input block, we have a tensor of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{0}]\) where the feature vector for each particle pair has the form \(\left(\mathrm{BatchNorm2D}\left(f_{\alpha_{1}}(d_{ij}),\ldots,f_{\alpha_{C^{0}-2}}(d_{ij})\right),\mathrm{L}_{i},\mathrm{L}_{j}\right)\). ### Permutation equivariant blocks The main element of the equivariant architecture is the permutation-equivariant block transforming arrays of rank 2. Namely, we assume that the input tensor to the block has shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{l}]\), where \(B\) is the batch size, \(N_{\mathrm{max}}\) is the maximum number of jet constituents per event (with zero padding for events with fewer constituents), and \(C^{l}\) is the number of input channels. We also use a binary mask of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}}]\) to appropriately exclude the zero padding from operations like BatchNorm and aggregation. The output of the block will be a similar tensor of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{l+1}]\) with the same mask. As outlined above, the equivariant layer consists of a message block and an aggregation block. The message block is chosen to be a dense multilayer perceptron (MLP) acting on the channel dimension with a LeakyReLU activation and BatchNorm2D (normalization over the first three dimensions of the tensor, for each channel separately, followed by an affine transform with two learnable parameters per channel). Here we use a masked implementation of batch normalization so that the variable particle number is respected. The message block is then followed by Dropout that zeroes out each of the \(B\times N_{\mathrm{max}}^{2}\times C_{\mathrm{eq}}^{l}\) components independently with a certain probability. The aggregation block applies 15 linear aggregation functions (LinEq\({}_{2\to 2}\)) which, for each component of the output tensor, compute the mean over some subset of the components of the input tensor, as explained in Section 2.4. Note that this is a non-parametric transformation performed on each channel separately. Each of the \(C^{l}_{\text{eq}}\times 15\) resulting aggregation values is then independently multiplied by \(N^{\alpha}/\tilde{N}^{\alpha}\) with a trainable exponent \(\alpha\) (initialized as a random float in \([0,1]\)), where \(N\) is the number of particles in the corresponding event. This allows for some flexibility in the aggregation process, for example \(\alpha=1\) returns the sum aggregation function, and combining multiple aggregators is known to boost accuracy [34]. Aggregation is followed by a dense layer that mixes the \(C^{l}_{\text{eq}}\times 15\) aggregators down to \(C^{l+1}\) features. Due to the size of this layer, we employ a simple factorization to reduce the number of parameters. Namely, the weight tensor \(W_{abc}\), where \(a\) is the input channel index, \(b\) is the basis index (1 to 15), and \(c\) is the output channel index, can be replaced by the following combination: \[W_{abc}=W^{0}_{ab}W^{1}_{ac}+W^{2}_{cb}W^{3}_{ac}. \tag{1}\] Here, the first term first mixes the 15 aggregators among each other for each output channel, and then mixes the channels. 
Similarly, the second term first mixes the 15 aggregators for each input channel, and then mixes the channels. The final result is a tensor of shape \([B,N_{\text{max}},N_{\text{max}},C^{l+1}]\), so these equivariant layers can be stacked multiple times. ### Classification and 4-vector regression outputs One of the strengths of the PELICAN architecture is the ability to easily switch between serving as a classification tool for discriminating between Lorentz-boosted top quarks and the QCD background, and providing 4-vector regression results, such as momentum reconstruction. PELICAN classifier. To build a classifier, aside from the Eq\({}_{2\to 2}\) equivariant layer one needs an Eq\({}_{2\to 0}\) layer that reduces the rank 2 array to permutation-invariant scalars. This layer involves just 2 aggregation functions instead of 15 (the trace and the total sum of the input square matrix), but is otherwise identical to the equivariant layer described in the last section. \[\{d_{ij}\}\rightarrow\boxed{\text{Emb}\rightarrow[\text{Eq}_{2\to 2}]^{L}\rightarrow\text{Eq}_{2\to 0}\rightarrow\text{MLP}}\rightarrow\{w_{c}\} \tag{2}\] From the input block, the tensor is passed through \(L\) equivariant Eq\({}_{2\to 2}\) layers, and the Eq\({}_{2\to 0}\) layer with dropout. This produces a tensor of shape \([B,C_{\text{out}}]\). One final MLP mixes this down to just 2 classification weights per event. A cross-entropy loss function is then used for optimization. Figure 1: The PELICAN equivariant block updating square arrays. PELICAN 4-vector regression. The same architecture can also be easily adapted for 4-vector regression tasks, such as momentum reconstruction. Any Lorentz-equivariant map from a collection of 4-vectors \(p_{1},\ldots,p_{N}\) to one (or several) 4-vectors has the form \[F(p_{1},\ldots,p_{N})=\sum_{i=1}^{N}f_{i}(p_{1},\ldots,p_{N})\cdot p_{i}, \tag{10}\] where the \(f_{i}\)'s are Lorentz-invariant functions [36]. Combining this with permutation invariance, we conclude that the multi-valued map \((p_{1},\ldots,p_{N})\mapsto(f_{1},\ldots,f_{N})\) must also be equivariant with respect to the permutations of the inputs. The only change required to the architecture we've introduced for classification is that \(\text{Eq}_{2\to 0}\) must be replaced with \(\text{Eq}_{2\to 1}\) and the final output layer must have only one output channel (assuming we are regressing on a single 4-vector). The \(\text{Eq}_{2\to 1}\) layer is again identical to \(\text{Eq}_{2\to 2}\) except that it uses only 4 linear aggregators: row sums, column sums, trace, and full sum. The architecture is summarized by the following diagram, where we treat \(d_{ij}\) as the inputs and \(f_{i}\) as the outputs, keeping in mind formula (10) that lets us recover the final predicted vector. \[\{d_{ij}\}\rightarrow\boxed{\text{Emb}\rightarrow[\text{Eq}_{2\to 2}]^{L}\rightarrow\text{Eq}_{2\to 1}\rightarrow\text{MLP}}\rightarrow\{f_{i}\}_{i=1}^{N} \tag{11}\] ## 4 Tagging jets from Lorentz boosted top quarks This section presents the dataset, training approach, and results of using PELICAN as a classifier in the context of identifying Lorentz-boosted top quarks. Three different versions of PELICAN are discussed, each with a different size in terms of both the width of the network and the number of trainable parameters. 
Lastly, the dependence of the performance on the size of the training dataset is also presented, providing a quantitative relationship between the size of the network, the training dataset efficiency, and the resulting performance. ### Classification dataset We perform top-tagging on the reference dataset [37], which was also used in Ref. [8]. This dataset consists of 2M entries, each entry corresponding with a single hadronic top jet or the leading jet from a QCD dijet event. There are 1.2M training entries, 400k validation entries and 400k testing entries. The events were generated with Pythia8, and the Delphes framework [38] was used for fast detector simulation in order to incorporate detector effects. For each jet, the 4-momenta of the 200 leading constituents are stored in Cartesian coordinates \((E,p_{x},p_{y},p_{z})\), in order of decreasing \(p_{T}\). This list is zero-padded, and all jets in the dataset have fewer than 200 constituents. The dataset does not contain any other information on the jet constituents, such as charge or spin. ### Classification training procedure The top-tagging model contains five \(\text{Eq}_{2\to 2}\) blocks of identical shapes. We train three different versions of the model with different widths. The widest model has 132 input and 78 output channels on every messaging layer (the equivariant layer then produces \(132\times 15\) quantities which get mixed down to 78 channels by a fully connected linear layer). The output MLP is just one layer that mixes 132 channels down to 2 classification weights. The number of jet constituents was capped at 80 (no noticeable performance gain was seen beyond that number). The dropout rate was 0.025, and the model was optimized using the AdamW optimizer [39] with weight decay of 0.005. The training on the full dataset went on for 35 epochs with the same learning rate schedule as in Ref. [26]: 4 epochs of linear warm-up up to a learning rate of 0.001, followed by 28 epochs of \(\text{CosineAnnealingLR}\) with \(T_{0}\) of 4 epochs and \(T_{\text{mult}}=2\), and then 3 epochs of exponentially decaying learning rate with exponent \(\gamma=0.5\) per epoch. The three models were trained on Nvidia H100 GPUs with a batch size of 100, taking 0.43, 0.17, or 0.08 seconds per batch, respectively. Inference took 0.17, 0.07, or 0.04 seconds per batch. Batches were shuffled between epochs. ### Classification results Figure 2 shows the _receiver operating characteristic_, here represented by the background rejection as a function of the signal efficiency, for the classification performance. In Table 1 we compare the accuracy, area under the curve (AUC), and background rejection values at 30% signal efficiency between PELICAN and multiple existing ML top-taggers, including the previous state-of-the-art LorentzNet [26]. We trained three PELICAN top-taggers with layers of differing widths, with 208k, 48k, and 11k trainable parameters respectively. The results are averaged over 5 random initialization seeds, and the uncertainties are given by the standard deviation. The large PELICAN model improves upon the LorentzNet result with a comparable number of parameters, and the medium model roughly matches LorentzNet despite having 5 times fewer parameters. Perhaps most remarkably, the small model with 11k parameters beats every pre-LorentzNet competitor despite having several times fewer parameters, and up to 130 times fewer parameters, than other networks. In addition to different model sizes, we also explore sample efficiency. 
Each of the three models above was trained on 0.5%, 1% and 5% of the training data and compared to the original. For these, the training went on for 70 epochs with 60 epochs of CosineAnnealingLR instead of 28, and 6 epochs of exponential decay instead of 3. The results can be found in Table 2. Notice that at lower amounts of training data the differences in performance between models of different width become much less significant, and at 1% and 0.5% of training data all three models fall within each other's uncertainty ranges. These results suggest that the larger PELICAN networks are likely able to learn a greater range of more subtle features from the training data and thus benefit from seeing a larger training dataset. On the other hand, the primary features are already learnable with just a few percent of the data. In particular, with 5% of the training data and only 11k learnable parameters, the PELICAN\({}_{25/15}\) version of the network appears to achieve similar background rejection performance to ResNeXt, which uses 1.46M parameters learning on the full dataset. \begin{table} \begin{tabular}{l c c c c} \hline \hline Architecture & Accuracy & AUC & \(1/\epsilon_{B}\) & \# Params \\ \hline TopoDNN[40] & 0.916 & 0.972 & 382\(\pm\) 5 & 59k \\ LGN[25] & 0.929(1) & 0.964(14) & 424 \(\pm\) 82 & 4.5k \\ PFN[24] & 0.932 & 0.982 & 891 \(\pm\) 18 & 82k \\ ResNeXt[8] & 0.936 & 0.984 & 1122 \(\pm\) 47 & 1.46M \\ ParticleNet[41] & 0.938 & 0.985 & 1298 \(\pm\) 46 & 498k \\ LorentzNet[26] & 0.942 & 0.9868 & 2195 \(\pm\) 173 & 220k \\ \hline PELICAN\({}_{132/78}\) & 0.9425(1) & 0.9870(1) & 2250 \(\pm\) 75 & 208k \\ PELICAN\({}_{60/35}\) & 0.9423(1) & 0.9868(1) & 2133 \(\pm\) 148 & 48k \\ PELICAN\({}_{25/15}\) & 0.9410(3) & 0.9858(4) & 1879 \(\pm\) 103 & 11k \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of PELICAN top-taggers with existing ML architectures on the top-tagging benchmark. The subscripts indicate the width of the network: e.g. 132/78 means each Msg layer has 132 input and 78 output channels. Figure 2: Performance of various machine learning architectures represented by the background rejection as a function of the signal efficiency. ## 5 \(W\)-boson 4-momentum reconstruction To test the equivariant regression architecture described in Section 3, we chose a task where the aim is to reconstruct (or _predict_) the full 4-momentum of the \(W\)-boson within the Lorentz-boosted top quark decay products. Specifically, we consider the same hadronic top quark decay that constitutes the signal in the top-tagging dataset, which uses the \(t\to bW\to bqq\) two-step decay, followed by hadronization, showering, and detection. Our aim is to reconstruct the true 4-momentum of the \(W\)-boson given the full set of observed final state particles of the top decay, as represented by the jet constituents. ### Regression dataset The dataset used for the regression task consists of 1.5M \(t\bar{t}\) events simulated with PYTHIA8, with 700k events for training, 200k events for validation, and 500k events for testing (with an additional 100k events set aside in a second testing set). From each event, we cluster anti-\(k_{T}\) jets with \(R=0.8\) using FastJet [42] and we select the jet nearest to the truth-level top quark in \((\eta,\phi)\), requiring the distance between the top quark and the jet to satisfy \(\Delta R\) (top quark, jet) \(<0.8\). 
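For reference, a minimal sketch (our own toy numbers, not part of the dataset tooling) of the \(\Delta R\) matching criterion just described; the same quantity is used below to define full containment.

```python
import numpy as np

def eta_phi(p):
    """Pseudorapidity and azimuth of a 4-momentum (E, px, py, pz)."""
    _, px, py, pz = p
    pt = np.hypot(px, py)
    return np.arcsinh(pz / pt), np.arctan2(py, px)

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into (-pi, pi]."""
    d_phi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, d_phi)

# Toy numbers: keep the jet only if it lies within Delta R < 0.8 of the truth-level top quark.
top = np.array([500.0, 120.0, 40.0, 460.0])
jet = np.array([480.0, 118.0, 36.0, 450.0])
print(delta_r(*eta_phi(top), *eta_phi(jet)) < 0.8)   # True for these numbers
```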
This jet clustering is done both at truth-level, and using calorimeter tower objects produced by running the event through Delphes fast detector simulation using the ATLAS detector card. Thus, each _event_ in the dataset corresponds to a single jet, and includes information for truth-level particles such as the truth-level top quark - we may therefore use the terms _jet_ and _event_ interchangeably below with the understanding that each event in the dataset has one and only one jet recorded. This dataset is publicly available via Zenodo [28], where a full description of the various data fields is provided. Here we provide only an overview of some key features: 1. There are two versions of the dataset, corresponding with truth- and reconstruction-level (Delphes) jets. The events are the same between versions, so the two can be compared event-by-event to study the effects of detector reconstruction on network training and performance. 2. The input data for the network are the 4-momenta of the 200 leading jet constituents. For use as possible regression targets and for defining jet containment (explained below), each event contains 1. the truth-level top quark that initiated the jet, 2. the bottom quark from top quark decay, 3. the \(W\)-boson from top quark decay, 4. the two quarks from subsequent \(W\)-boson decay (\(W\to qq^{\prime}\)). In addition, the event contains the stable \(W\)-boson daughter particles. These are the truth-level, final state particles that are traced back to the \(W\)-boson by PYTHIA. 3. Each jet is tagged with the Johns Hopkins top tagger [7] (JH), as implemented in FastJet. This allows us to define a subpopulation of JH-tagged events, which we shall sometimes refer to as \(JH\)_events_. For jets that it tags as top quark jets, JH provides a \(W\)-boson candidate constructed from subjets. 4. Each jet is also tagged as whether or not it is _fully-contained_ (FC). We define FC events as those where the \(b\)-quark, as well as the two quarks from \(W\to qq^{\prime}\) decay, are within \(\Delta R<0.6\) of the jet centroid (i.e. within 75% of the jet radius). In such cases almost all of the \(W\) daughters are contained within the jet and we can expect a good reconstruction of the \(W\) momentum. FC events comprise 75% of the dataset. \begin{table} \begin{tabular}{l l l l l} \hline \hline Model & \% training data & Accuracy & AUC & \(1/\epsilon_{B}\) \\ \hline PELICAN\({}_{132/78}\) & 100\% & 0.9425(1) & 0.9870(1) & 2250 \(\pm\) 75 \\ & 5\% & 0.9366(3) & 0.9841(1) & 1213 \(\pm\) 79 \\ & 1\% & 0.9316(6) & 0.9810(5) & 789 \(\pm\) 49 \\ & 0.5\% & 0.9289(11) & 0.9800(5) & 633 \(\pm\) 28 \\ \hline PELICAN\({}_{60/35}\) & 100\% & 0.9423(1) & 0.9868(1) & 2133 \(\pm\) 148 \\ & 5\% & 0.9368(2) & 0.9841(1) & 1148 \(\pm\) 49 \\ & 1\% & 0.9323(3) & 0.9813(4) & 799 \(\pm\) 52 \\ & 0.5\% & 0.9289(9) & 0.9795(5) & 637 \(\pm\) 105 \\ \hline PELICAN\({}_{25/15}\) & 100\% & 0.9410(3) & 0.9858(4) & 1879 \(\pm\) 103 \\ & 5\% & 0.9361(5) & 0.9835(2) & 1122 \(\pm\) 44 \\ & 1\% & 0.9316(1) & 0.9810(5) & 798 \(\pm\) 116 \\ & 0.5\% & 0.9286(11) & 0.9795(6) & 615 \(\pm\) 133 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of PELICAN models trained on different fractions of the training data. ### Regression training procedure Our model has 4 equivariant \(\mathrm{Eq}_{2\to 2}\) blocks. Each messaging layer takes in 132 channels and outputs 78 channels. Conversely, each equivariant aggregation layer has 78 input channels and outputs 132 channels. 
The \(\mathrm{Eq}_{2\to 1}\) block has the same shape, and the final fully connected layer has the shape \(1\times 132\). There are 210k parameters in total. Assuming \(N\) non-zero input jet constituents, this produces \(N\) scalar coefficients \(c_{i}\) with zero-padding, which are the Lorentz invariants introduced in (11). The reconstructed 4-momentum is then computed via \[p_{\mathrm{reco}}=\sum_{i}c_{i}p_{i}. \tag{12}\] The training regime for this task is essentially identical to the one for top-tagging: AdamW optimizer with weight decay of 0.01, 35 epochs in total with 4 epochs of warm-up and exponential learning rate decay for the last 3 epochs. The main difference is in the choice of the loss function \(L(p_{\mathrm{reco}},p_{\mathrm{target}})\). Spacetime geometry allows for many choices of this function, which will affect the shape of the landscape near \(p_{\mathrm{target}}\) and in turn the precision of various reconstructed features of the vector, such as the mass, energy, spatial momentum, transverse momentum, and direction. It is even possible to construct Lorentz-invariant loss functions to make the training process itself equivariant. Nevertheless, for the purpose of simultaneous reconstruction of the direction and the mass of the \(W\)-boson, \(m_{W}\), we found \[L=0.01\|\mathbf{p}_{\mathrm{reco}}-\mathbf{p}_{\mathrm{target}}\|+0.05|m_{\mathrm{reco}}-m_{\mathrm{target}}| \tag{13}\] to be very effective. It uses all 4 components of the target vector and strikes a good balance between the precision of the reconstructed mass and spatial momentum. A rarely discussed feature of this task is the choice of the target vector \(p_{\mathrm{target}}\). Even though our ultimate inference target is the true \(W\) momentum \(p_{\mathrm{true}}^{W}\), it is not necessarily the best training target given the nature of the dataset. Detection and jet clustering include multiple energy, momentum, and spatial cuts that exclude some decay products from the final jet. For instance, one of the three quarks in \(t\to bqq\) might fall outside of the \(R=0.8\) radius of the jet clustering algorithm, in which case most of the decay products of that quark are likely to be absent from the event record. If many of the decay products of the \(W\)-boson are missing, then we lack the information necessary to make an accurate estimate of its true momentum, or even to identify which of the jet constituents belong to the \(W\). This effect is often referred to as an _acceptance_ issue due to the finite purview of the final state reconstruction. To alleviate this issue and provide better control over the inference stage, we propose an alternative target 4-vector that we call the _contained true \(W\) momentum_ \(p_{\mathrm{cont}}^{W}\), equal to the total 4-momentum of the _truth-level \(W\) decay products_ that fall within the radius of the final reconstructed top jet. In the truth-level dataset, this is simply \(p_{\mathrm{cont}}^{W}=\sum_{k}p_{i_{k}}\) where \(i_{k}\) are the indices of the constituents whose parent is the \(W\)-boson and not the \(b\)-quark. In the Delphes dataset, however, there is no simple analytic relationship between \(p_{\mathrm{cont}}^{W}\) and the jet constituents \(p_{i}\). That is to say that the mapping of the truth-level information to the detector-level reconstruction is highly non-linear. 
Nonetheless, in either dataset this vector more accurately reflects the available information about the \(W\)-boson and allows us to make inferences not only about the \(W\)-boson itself, but also about the containment qualities of the event. This will be discussed further in Section 5.5 below. For reference, the true mass spectra of both \(p_{\mathrm{true}}^{W}\) and \(p_{\mathrm{cont}}^{W}\) are shown in Fig. 3. For fully-contained (FC) events, the mass spectra are similar between the true and the contained \(W\) mass as expected. Non-FC events are mostly confined to a clear second peak at 13 GeV corresponding to \(qb\) and \(q\) jets (where one of the quarks from \(W\to qq\) fell outside the jet), and a minor peak at \(m_{\rm cont}^{W}=0\) corresponding to \(b\) jets. Given the above observations, we prepared two PELICAN models, one trained to reconstruct \(p_{\rm true}^{W}\), and another trained to reconstruct \(p_{\rm cont}^{W}\). Otherwise the two models are identical and are trained in the same way and with the same loss function. We then compare the outputs of each model to \(p_{\rm true}^{W}\) and analyze the benefits of the two choices of the target. ### Regression results for \(p_{\rm true}^{W}\) reconstruction The results are summarized in Table 3. We quantify the precision of the reconstruction by the transverse momentum, \(p_{T}\), and mass resolutions, given by half of the central 68th interquantile range of (\(x_{\rm predict}-x_{\rm true}\))/\(x_{\rm true}\), where \(x\) is \(m\) or \(p_{T}\). In addition we report the lower 68th interquantile range for \(\Delta R\), the \(z\)-boost-invariant spatial angle between predicted and true momenta. Figure 3: Stacked histogram _with proportional bin heights_ showing the mass spectrum of the two targets, the true \(W\)-boson \(p_{\rm true}^{W}\), and the contained true \(W\) momentum \(p_{\rm cont}^{W}\). The top curve represents the spectrum over the entire dataset on log scale and the bottom curve shows the spectrum over FC events only, _scaled linearly_ relative to the top curve, i.e. the fraction of FC events in a given bin is given by the apparent height of the FC curve divided by the total height of the bin (heights are measured from the \(x\)-axis). The two mass spectra of FC events, in fact, match. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{PELICAN} & JH & 0.66\% & 1.26\% & 0.216 \\ & PELICAN\(|\)JH & 0.26\% & 0.57\% & 0.113 \\ & PELICAN\(|\)FC & 0.30\% & 0.71\% & 0.139 \\ & PELICAN & 0.79\% & 1.12\% & 0.473 \\ \hline \multirow{4}{*}{PELICAN} & JH & 9.8 \% & 8.3 \% & 9.6 \\ & PELICAN\(|\)JH & 3.5 \% & 2.6 \% & 2.8 \\ \cline{1-1} & PELICAN\(|\)FC & 4.0 \% & 2.9 \% & 3.1 \\ \cline{1-1} & PELICAN & 5.1 \% & 3.0 \% & 4.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Momentum reconstruction results for the Johns Hopkins (JH) tagger and PELICAN trained to reconstruct \(p_{\rm true}^{W}\). We report the relative \(p_{T}\) and mass resolutions, and the interquantile range for the angle \(\Delta R\) between predicted and true momenta. PELICAN uncertainties are within the last significant digit. Since there are no ML-based methods for this task, we use the \(W\)-boson identification of the Johns Hopkins top tagger [43] implemented in FastJet [42] for the baseline comparison. The tagger has a 36% efficiency on the truth-level dataset and 31% on the Delphes one. 
It can only identify \(W\)-boson candidates for jets it tags, so we report PELICAN results both on the JH-tagged jets only (PELICAN|JH) and on the full dataset (PELICAN). Moreover, we evaluate PELICAN on the population of FC events (PELICAN|FC). More than 99.9% of JH-tagged events contain all three true quarks \(bqq\) within the jet radius, so this population represents an especially restricted and 'ideal' type of event. The results were evaluated over 5 training runs initialized with different random seeds, and the resolutions reported in Table 3 are consistent across the runs. There are significant differences in PELICAN's performance on the different sub-populations of events. In the direct comparison with the JH tagger, PELICAN|JH is 2-4 times more precise. However, even on the much larger class of FC events, PELICAN produces predictions with almost the same precision. The highest loss of precision happens on non-FC events where many of the \(W\) decay products are missing from the jet, leading to lower average precision on the entire dataset. As we will discuss in Section 6, this result can be _explained_ by interrogating the PELICAN weights and kinematic information directly. In Fig. 4 we show the relative reconstructed \(W\) masses for two of the models, one trained on truth data, and one on Delphes data. The results also include the curve for the JH tagger's reconstruction, as well as PELICAN|JH and PELICAN|FC. The 68\({}^{\text{th}}\) interquantile ranges of these curves match the numbers in the \(\sigma_{m}\) column of Table 3. See Section 7 for further details on the causes of performance degradation in the Delphes case. For the complete set of results see Appendix A. Figure 4: Reconstructed \(W\) mass relative to true \(W\) mass for the PELICAN model trained on truth (left) or Delphes (right) data, and targeting \(p_{\text{true}}^{W}\). ### Regression results for \(p_{\text{cont}}^{W}\) reconstruction Now we train new models with the target vector set to the contained true \(W\) momentum \(p_{\text{cont}}^{W}\), evaluate their precision by comparing the outputs to the true \(W\) momentum \(p_{\text{true}}^{W}\), and compare the results to Table 3. As shown in Table 4, the resolutions for these models on JH-tagged and FC events are slightly worse than the first set of models, in the Delphes case by 5-15%. The largest change is in non-FC events, leading to poor average resolutions on the whole dataset. Despite this, as we will now show, these models can in fact be better suited for real-world applications. ### Discussion To see the main benefit of this model, we present the behavior of the relative reconstructed mass shown in Fig. 5. PELICAN-reconstructed masses within the range of true \(W\) masses are almost as precise on the full dataset as they are on FC events (see Fig. 5 near the peak at 1). The most prominent feature obvious from these results is that, despite the slightly lower accuracies on FC events (at fixed width and depth of the network), the model trained to reconstruct \(p_{\text{cont}}^{W}\) accurately reproduces the mass spectrum of \(m_{\text{cont}}^{W}\) in Fig. 3 and therefore discriminates between FC and non-FC events, allowing us to perform post-inference event selections. For instance, in the Delphes case, choosing a 55 GeV cutoff, 97% of all FC events have \(m_{\text{reco}}>55\) GeV, and vice versa, 97% of all events with \(m_{\text{reco}}>55\) GeV are FC. 
In this manner we can significantly improve the accuracy of the reconstruction without accessing truth-level information that is needed to identify FC events. This comes at the cost of a modest reduction in signal efficiency - from the ostensible 100% down to 75%. Note that in the Delphes case, the set of FC events is contaminated with a small number of events with significant losses of \(W\) decay products due to detector effects, but it can be refined by reducing the jet radius used in the definition of full containment. Consequently, we propose the following simple routine for real-world applications of these models. First, use the model trained targeting \(p_{\text{cont}}^{W}\) as an FC-tagger to refine the data. Then, apply the model targeting \(p_{\text{true}}^{W}\) to reconstruct the \(W\)-boson. We conclude that \(p_{\text{cont}}^{W}\) is the better target for many common reconstruction tasks where one is willing to sacrifice some signal efficiency - or to only fully measure the 4-momentum on a sub-sample of the identified events - to gain improved accuracy. In the following sections we will not present models trained on both targets, however a complete set of metrics and results, including models targeting \(p_{\text{true}}^{W}\), can be found in Appendix A. \begin{table} \begin{tabular}{c c c c c} \hline \hline  & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{Truth} & JH & 0.66\% & 1.26\% & 0.216 \\  & PELICAN\(|\)JH & 0.27\% & 0.62\% & 0.113 \\  & PELICAN\(|\)FC & 0.34\% & 0.86\% & 0.142 \\  & PELICAN & 2.37\% & 38.93\% & 0.681 \\ \hline \multirow{4}{*}{Delphes} & JH & 9.8 \% & 8.3 \% & 9.6 \\  & PELICAN\(|\)JH & 3.6 \% & 2.8 \% & 3.1 \\  & PELICAN\(|\)FC & 4.2 \% & 3.6 \% & 3.4 \\  & PELICAN & 6.2 \% & 39.6 \% & 5.6 \\ \hline \hline \end{tabular} \end{table} Table 4: PELICAN resolutions for models trained to reconstruct \(p_{\text{cont}}^{W}\). Resolutions are still obtained by comparing the model predictions to \(p_{\text{true}}^{W}\). Figure 5: Reconstructed \(W\) mass relative to true \(W\) mass for the PELICAN model trained (on truth or Delphes data) targeting \(p_{\text{cont}}^{W}\). ## 6 \(W\)-boson mass measurement As we saw above, PELICAN is able to reconstruct the mass of the \(W\)-boson, \(m_{W}\), found within the dense environment of the complete decay products of a top quark jet. For truth-level datasets, the resolution of this reconstruction is below the natural width of the mass spectrum. In the Delphes case, the resolution is too wide to produce any substantial correlation between the true and reconstructed masses (see Appendix A for figures that demonstrate this). Because the true masses in that dataset are highly concentrated around 80 GeV, PELICAN could in part achieve this performance by effectively _memorizing_ a single number: the \(W\)-boson mass. We would like to eliminate this possibility. In this section we examine a more realistic reconstruction task, where the true mass of the target particle is unknown, and the dataset uniformly covers a wide range of its masses. The reconstruction task is still identical to that of Section 5. Even though we could use a scalar-valued version of PELICAN to target the mass of the \(W\)-boson, the accuracy of that reconstruction would in fact suffer in comparison with the 4-vector model. 
This is simply due to the fact that the 4-momentum contains more relevant information than the mass alone, since the direction and the energy of the particle are, in part, correlated with the mass. Thus the only new element in this experiment will be the dataset, which will now involve \(W\)-bosons of varying masses uniformly covering the range \(m_{W}\in[65,95]\) GeV. The dataset is also identical to that used in Section 5, except that the \(W\)-boson mass is set to be variable. This is achieved by combining multiple independently-produced datasets where the generator-level value of \(m_{W}\) was modified from its default value. Fig. 6 shows the resulting distribution of \(W\)-boson masses, as well as that of the sum of \(W\) daughters contained within each jet. Figure 6: Stacked histogram with proportional bin heights (see description in Fig. 3) showing the mass spectrum of the two targets, \(p_{\text{true}}^{W}\) and \(p_{\text{cont}}^{W}\), in the variable \(W\) mass dataset. ### Regression results for \(m_{W}\) reconstruction The hyperparameters and the training regime used here are the same as in Section 5. Here we focus on the model trained to reconstruct the contained momentum \(p_{\text{cont}}^{W}\) (see Appendix A to find the results for the model targeting \(p_{\text{true}}^{W}\)). The outputs are then compared to the true \(W\)-boson \(p_{\text{true}}^{W}\). The accuracies for the full 4-vector reconstruction are presented in Table 5. \begin{table} \begin{tabular}{c c c c c} \hline \hline  & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{Truth} & JH & 7.98\% & 4.75\% & 22.180 \\  & PELICAN\(|\)JH & 0.27\% & 0.63\% & 0.111 \\  & PELICAN\(|\)FC & 0.35\% & 0.89\% & 0.143 \\  & PELICAN & 2.64\% & 39.00\% & 0.744 \\ \hline \multirow{4}{*}{Delphes} & JH & 16.0 \% & 12.0 \% & 25.4 \\  & PELICAN\(|\)JH & 4.2 \% & 6.5 \% & 3.4 \\  & PELICAN\(|\)FC & 4.9 \% & 8.0 \% & 3.8 \\  & PELICAN & 7.3 \% & 40.7 \% & 6.7 \\ \hline \hline \end{tabular} \end{table} Table 5: PELICAN resolutions for models trained to reconstruct \(p_{\text{cont}}^{W}\) with variable \(m_{W}\). Resolutions are still obtained by comparing the model predictions to \(p_{\text{true}}^{W}\). The largest loss of accuracy relative to Section 5 is, unsurprisingly, in the mass column. However, since the true mass now covers a much wider range, this still presents a significant improvement in the mass reconstruction capability. To demonstrate this better, we show the 2D correlations between target and reconstructed masses in Figures 7 and 8 for the models trained targeting \(p_{\rm true}^{W}\) and \(p_{\rm cont}^{W}\), respectively. We also differentiate between non-FC (left) and FC (right) events in the two sides of each of the panels in each figure. ### Model complexity The model examined above has 210k trainable parameters, however even significantly smaller models achieve good accuracy. As an illustration, we compare the resolutions of three PELICAN models trained on the variable mass dataset targeting \(p_{\rm true}^{W}\). They are obtained from the original model by a proportional rescaling of the widths of all layers. The first model is the 210k parameter one, with 132/78 channels, i.e. each messaging layer has 132 input and 78 output channels. The second model has 60/35 channels and 49k parameters. The third model has 25/15 channels and 11k parameters. 
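Width-rescaled variants like these can be compared for size with the standard PyTorch parameter-counting idiom; a generic snippet (the model names in the commented usage are hypothetical placeholders, not objects from the PELICAN code):

```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    """Standard PyTorch idiom for counting trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# e.g., comparing hypothetical width-rescaled variants:
# for m in (pelican_132_78, pelican_60_35, pelican_25_15):
#     print(count_trainable_parameters(m))
```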
The resolutions over the Delphes test dataset are reported in Table 6, and we observe that even the 11k-parameter model handily beats the JH method. \begin{table} \begin{tabular}{c c c c} \hline \hline \# params & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline 210k & 6.1 \% & 8.2 \% & 2.8 \\ 49k & 6.5 \% & 8.6 \% & 3.2 \\ 11k & 7.4 \% & 9.5 \% & 3.8 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of PELICAN models of three different network widths trained to reconstruct \(p_{\rm true}^{W}\) with variable \(W\) mass. Tested on Delphes data. Figure 7: 2D histograms of true vs. reconstructed masses for models trained on the variable mass dataset targeting \(p_{\rm true}^{W}\) (top: truth data; bottom: Delphes data), broken up into two populations based on jet containment (left: non-FC events; right: FC events). ### Discussion In the Delphes dataset, we observe that for non-FC events (bottom left of Fig. 8), the reconstructed contained mass is only weakly correlated with the true contained mass (or with the true \(W\) mass, as shown in Fig. 22 in Appendix A). However, in the quadrant where both masses exceed 55 GeV, we find a 65% correlation on FC events in the Delphes case. The most important type of error PELICAN makes here is when a non-FC event gets assigned a high reconstructed mass, meaning that a mass near that of the true \(W\) was assigned to a jet containing few of the \(W\) decay products. Among all events with \(m_{\rm reco}>55\) GeV, 3.6% are non-FC, and they bring the correlation among that population down to 51% (\(p_{T}\), mass, and angular resolutions on this population closely track those of PELICAN|FC above). But since in practice we are interested in \(m_{\text{true}}^{W}\), the correlation between that and \(m_{\text{reco}}\) is higher, at 59% among events with \(m_{\text{reco}}>55\) GeV. This is a significant improvement over the model trained on the original \(m_{\text{true}}^{W}\sim 80\) GeV Delphes dataset, and especially over non-ML methods such as the JH tagger (see Fig. 9). However, even a model trained on Delphes data to reconstruct \(p_{\text{true}}^{W}\), in fact, achieves a 40% correlation with \(m_{\text{true}}^{W}\) on non-FC events (see Fig. 7), so FC-tagging may not be necessary. Overall, PELICAN provides a viable method for estimating particle masses. Figure 8: 2D histograms of target vs. reconstructed masses for models trained targeting \(p_{\rm cont}^{W}\) (top: truth data; bottom: Delphes data), broken up into two populations based on jet containment (left: non-FC events; right: FC events). ## 7 PELICAN explainability Despite the output of the PELICAN regression model ostensibly being a 4-vector (or multiple 4-vectors), the richer and more natural object to treat as the output is the set of PELICAN weights \(\{c_{i}\}\) introduced in Eq. 11. Each \(c_{i}\) is attached to its corresponding input constituent \(p_{i}\) due to permutation equivariance and therefore encodes _a scalar feature of that particle within the event_. As we will show in this section, the behavior of these weights is key to the unique explainability and visualization features of the PELICAN architecture. In essence, PELICAN is able to take a set of \(N\) input 4-vectors and assign \(N\) scalar features to them (of course there can be several features per input as well) in a Lorentz-invariant way. 
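The readout implied here, per-constituent Lorentz-invariant weights \(c_{i}\) multiplying the input 4-vectors, can be illustrated in a few lines of NumPy; this is our own sketch of that contraction rather than the PELICAN implementation, and the numbers are toy values:

```python
import numpy as np

def weighted_vector_output(weights: np.ndarray, constituents: np.ndarray) -> np.ndarray:
    """Combine per-constituent scalars c_i with input 4-vectors p_i into sum_i c_i p_i.

    weights:      (N,)   Lorentz-invariant scalars produced by the network
    constituents: (N, 4) input 4-momenta (E, px, py, pz)
    """
    return np.einsum("i,ij->j", weights, constituents)

# toy example: weights close to the "clustering" solution pick out a subset
p = np.array([[50.0, 0.0, 0.0, 50.0],     # illustrative W daughter
              [40.0, 10.0, 0.0, 38.0],    # illustrative W daughter
              [60.0, -5.0, 5.0, 59.0]])   # illustrative b daughter
c = np.array([1.0, 1.0, 0.0])
print(weighted_vector_output(c, p))        # ~ the sum of the first two vectors
```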
This can be powerful in a variety of applications, but in the context of particle reconstruction the problem of finding the right values of the weights is similar to a soft clustering problem. Assuming an idealized dataset with perfect information about the decay products, the model should identify the decay products of the \(W\)-boson, assign \(c_{i}=1\) to them, and zero to all other constituents. This is analogous to what taggers like the Johns Hopkins top-tagger aim to do via jet clustering. However, since any five 4-vectors are linearly dependent, there is a continuum family of solutions \(\{c_{i}\}\) and it is not clear that PELICAN will prefer the clustering solution. ### Distributions of PELICAN weights In Fig. 10 we display the distributions of all PELICAN weights for models from Section 5 trained targeting \(p_{\text{true}}^{W}\). We also mark each constituent as either a \(W\)- or a \(b\)-daughter. This yields several observations. Figure 9: JH tagger’s reconstruction of the \(W\) mass on the variable \(W\) mass dataset, truth-level and Delpres versions. The correlation values are 47% and 25%, correspondingly. Firstly, nearly all weights are either non-negative or very slightly negative (e.g. above \(-0.1\)) with a very sharp peak at zero (the peak is entirely to the left of zero to very high precision3). This is the first feature that justifies the interpretation of PELICAN as a _soft clustering_ method. Since our inputs represent realistic events, all input 4-vectors in them are causally related, and in particular they belong to the future light cone, as does the target vector. This implies that no linear combination of these vectors with positive coefficients can produce a zero vector. The distributions, therefore, show that PELICAN weights assigned to \(b\)-daughters are not "contaminated" with these degenerate combinations. Footnote 3: The bin \([-10^{-6},0)\) contains about 100 times more constituents than the bin \([0,10^{-6})\). Secondly, the truth-level distribution is highly concentrated at 0 and 1 and very closely matches the binary clustering solution. That is, almost all constituents assigned weight 0 are \(b\)-daughters, and almost all of those assigned 1 are \(W\)-daughters. Nevertheless, 30% of \(b\)-daughters are assigned positive weights, prompting further investigation. Moreover, the distribution of \(W\)-daughter weights in the Delphes case is so spread out that it becomes difficult to explain it by a mere analogy with clustering. We can delve more deeply into the weight distribution by evaluating the sub-populations of weights based on jet containment. Fig. 11 shows the distributions of weights for \(bqq\), \(qq\), and non-FC events. The majority of constituents at the high end of the weight scale belong to non-FC events. Similarly, the weights Figure 11: Stacked histograms with proportional bin heights of all PELICAN weights computed over the testing dataset for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\mathrm{true}}^{W}\). Broken up into three populations by jet containment: \(bqq\) events (all three truth-level quarks from the \(t\to bW\to bqq\) process fall within the jet clustering radius); \(qq\) events (only the \(b\)-quark fell outside of the jet); and non-FC events, which include \(bq\), \(b\), and \(q\) events. 
Figure 10: Stacked histograms with proportional bin heights of all PELICAN weights computed over the testing dataset for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\mathrm{true}}^{W}\). Broken up into two populations – \(W\)-boson products and \(b\)-quark products. In the Delphes case, a constituent is considered a \(W\) product if the corresponding calorimeter cell detected at least one true \(W\) daughter. produced by the models trained targeting \(p_{\text{cont}}^{W}\), shown in Fig. 12, are more highly concentrated at 0 and 1, and have much lower and shorter "tails" on the right, especially among \(b\)-daughters. This is the first indication that PELICAN tends to up-weight some constituents in events where it doesn't have enough information for an accurate reconstruction. This approach allows to characterize the constituents that are being up-weighted. Fig. 13 shows the constituent weights as a function of the constituent's \(p_{T}\). The main observation here is that among high-energy ("hard") constituents with \(p_{T}>100\) GeV the weight distribution is much more binary, and the vast majority of constituents with weights falling away from the two peaks are soft, below \(20\) GeV. In the Delphes case PELICAN appears to down-weight high-energy \(W\)-daughters and up-weight soft constituents. Once again, loss of information in the form of detector effects appears to lead to PELICAN up-weighting soft constituents. ### Detector effects on PELICAN weights While the truth-level PELICAN models reliably converge to a binary clustering solution, the weights in the Delphes case do not permit such a straightforward interpretation. To better understand their behavior, we Figure 12: Stacked histograms with proportional bin heights of all PELICAN weights for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\text{cont}}^{W}\). Broken up into two populations by parent type. Figure 13: 2D histogram of PELICAN weights vs constituent transverse momentum for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\text{cont}}^{W}\). Only FC events shown here. ran additional experiments using custom datasets that exclude different components of the Delphes detector simulation one by one. Delphes performs the following steps: simulate the effect of the magnetic field \(B_{z}\) on charged final-state particles; aggregate truth-level particle energies within each electromagnetic calorimeter (ECAL) and hadronic calorimeter (HCAL) detector cell; apply energy smearing by sampling a lognormal distribution; unify the ECAL and HCAL cells; apply spatial smearing by picking a uniformly random point within the detector cell; construct the spatial momentum so that the resulting 4-vector, which represents a detector cell, is massless. We found that while each of these steps contributes to smearing out the truth-level distribution of PELICAN weights and shifting the peak downwards, the magnetic field is responsible for almost all of the differences between truth and Delphes results. The simulated magnetic field is able to deflect charged particles very significantly, enough to account for most of the error in PELICAN's reconstruction relative to the truth-level reconstruction. Our hypothesis for why this leads to lower PELICAN weights for hard constituents is the following. 
Deflected hard particles produce large errors in the direction but not the energy of the reconstruction, therefore one can down-weight them and compensate for the energy deficit using softer constituents. Moreover, by up-weighting softer constituents PELICAN can in fact partially correct the error in the direction since the deflections of positively charged particles can be partially cancelled out by those of negatively charged particles. An extra piece of evidence in support of this hypothesis can be found by modifying the loss function. If we re-train the model on the same Delphes dataset using a loss function consisting of a single energy term \(|E_{\rm reco}-E_{\rm true}|\), we find a distribution of weights (see Fig. 14) nearly as bimodal as the original one trained on Figure 14: Same as Fig. 12 but this time the model is trained using a single-term loss function proportional to \(|E_{\rm reco}-E_{\rm cont}|\). Figure 15: A single event viewed in the \(\eta,\phi\) plane with color and size dependent on energy. The central cross marks the true \(W\) boson, and the other three crosses mark the three true quarks from the process \(t\to bqq\). truth-level data (see Fig. 12). This indicates that the source of the error in PELICAN's reconstruction on Delphes data is overwhelmingly _spatial_. Out of all the steps that Delphes performs, only two are purely spatial: momentum smearing within one cell, and the simulated magnetic field. However, the detector cells (approximately \(0.02\times 0.02\) in \((\eta,\phi)\)) are much smaller than the magnitude of PELICAN's typical angular error, and thus smearing cannot explain the error. ### Event visualization As we discussed above, despite being a single-vector regression model, PELICAN produces one feature _per input constituent_ (namely the weight \(c_{i}\)), and these features become interpretable by virtue of Eq. 1. This gives us a unique opportunity to make event-level visualizations that provide insight into how PELICAN treats jet topology and how it compares to conventional methods such as the JH tagger's jet clustering. In Fig. 16 we show an amalgamation of 200 events from the Delphes dataset from Section 5 projected onto the unit sphere. Each event was spatially rotated so that the position of the true \(W\) within the image is fixed and the true \(b\)-quark is located in the negative \(\phi\) direction. In one display the constituents are colored according to their parent being either the \(W\) boson or the \(b\)-quark, and in the other they're colored based on their assigned PELICAN weight. The correlation between the two images is clear: \(b\)-daughters tend to be correctly assigned zero weight, whereas \(W\)-daughters have positive weights with the hardest constituents having weights between \(0.4\) and \(0.8\). In Fig. 15 we show a single event in the \((\eta,\phi)\) plane, with dot color and size dependent on the constituent energy. Note the reduced number of constituents in the Delphes display, and how some of the constituents get strongly deflected by the simulated magnetic field. The same event can be visualized in three more helpful ways. In addition to parent type and PELICAN visualizations introduced in Fig. 16, we can also extract the list of constituents that the JH tagger identifies as belonging to the \(W\) boson and highlight them. Fig. 17 displays the same single event in all three ways. 
In addition, we add a special marker for the direction of the Figure 16: Composite event display of 200 events from the Delphes dataset from Section 5. Each event is transformed using a 3D rotation matrix such that the true \(W\) boson ends up at \((\theta,\phi)=(\pi/2,0)\) (white cross), and the true \(b\)-quark is directly below. PELICAN is rotationally invariant, so its output is unaffected by the normalization. Each dot is a Delphes constituent and the dot size increases logarithmically with constituent energy. (a) Color reflects parent type: constituents that are fully derived from \(W\)-daughters are orange and those from \(b\)-daughters are purple; in the rare cases when the fraction of \(W\)-derived energy in a given calorimeter cell is between \(0\) and \(1\), the corresponding color is taken from the color scale in the right pane. (b) Color reflects the value of the PELICAN weight, clipped to the interval \([0,1]\), as shown in the legend. Note how the hardest \(W\) constituents (largest dots) tend to have PELICAN weights between \(0.5\) and \(1\). reconstructed \(W\) boson. In the parent type pane, this reconstruction is defined as \(\sum_{i=1}^{N}r_{i}p_{i}\) where \(r_{i}\) is the energy of the true \(W\)-daughters within that constituent divided by the actual energy of the constituent. In the JH and PELICAN panes, the marker corresponds to the corresponding reconstructions obtained by those methods. ## 8 IRC-safety and PELICAN Perturbative computations in QCD suffer from a divergence caused by two types of processes: soft emission and collinear splittings. As a consequence, meaningful observables in this theory need to be insensitive to such processes, and this requirement is known as IRC-safety. In this section we provide a precise definition, give a characterization of IRC-safe Lorentz-invariant observables (see details in Appendix B), and describe modifications to the PELICAN architecture that make it IR-safe or IRC-safe. Infrared safety (IR-safety) guarantees insensitivity to soft emissions, i.e. particles with relatively low energies and momenta. A family of continuous symmetric observables \(f^{(N)}(p_{1},\ldots,p_{N})\) is said to define an IR-safe observable \(f\) if \[\lim_{\epsilon\to 0}f^{(N+1)}(p_{1},\ldots,p_{N},\epsilon p)=f^{(N)}(p_{1}, \ldots,p_{N}) \tag{10}\] for any \(N\) and any \(p_{1},\ldots,p_{N}\), \(p\), where \(\epsilon\) controls how infinitesimally small the considered soft emission \(p\) is. Figure 17: The same event as in Fig. 15 in the \((\eta,\phi)\) plane. (a) Constituents are colored according to the actual parent type; size increases with energy; the yellow cross marks the reconstruction obtained by summing all of the constituents that belong to the \(W\) boson. (b) Constituents are colored according to how they are tagged by the JH-tagger as either \(W\)-daughters or not; size increases with energy; the yellow cross marks the JH-reconstructed \(W\) boson. (c) Constituents are colored according to their PELICAN weight clipped to the interval \([0,1]\); size increases as the weight goes from 0 to 1 and decreases after that. Note how the soft Delphes \(W\)-constituents get assigned high PELICAN weights. Collinear safety (C-safety) is a restriction on observables in perturbative QCD that arises from the divergent contributions of collinear emissions of gluons. Positive gluon mass would prevent such divergences, which is why C-safety concerns only massless particles. 
We can define C-safety formally as follows: an observable \(f(p_{1},\ldots,p_{N})\) is C-safe if, whenever two massless 4-momenta \(p_{1}\) and \(p_{2}\) become collinear (which happens for massless particles iff \(p_{1}\cdot p_{2}=0\)), the value of \(f\) depends only on the total momentum \(p_{1}+p_{2}\). Expressed even more explicitly, C-safety says that setting \(p_{1}=\lambda p\) and \(p_{2}=(1-\lambda)p\) with some 4-vector \(p\) such that \(p^{2}=0\) must lead to the same output regardless of the value of \(\lambda\), i.e. \[C_{12}(p)f=\partial_{\lambda}f(\lambda p,(1-\lambda)p,p_{3},\ldots,p_{N})=0. \tag{101}\] In Appendix B we characterize IRC-safe Lorentz-invariant observables, but the following summary will suffice for the purpose of designing an IRC-safe version of PELICAN. First, a Lorentz-invariant observable (assumed to be consistently defined for any finite number \(N\) of 4-vector inputs) is IR-safe if and only if it has no explicit dependence on the multiplicity \(N\). More precisely, adding the zero 4-vector to the list of inputs should leave the output value invariant. Second, an IRC-safe Lorentz-invariant observable is one that is IR-safe and moreover depends on any of its massless inputs only through their total. E.g. if \(p_{1},p_{2},p_{3}\) are fixed to be massless, then \(f(p_{1},p_{2},p_{3},p_{4},\ldots)\) must depend only on \(p_{1}+p_{2}+p_{3}\). In particular, if all inputs are massless, then all IRC-safe invariant observables are functions of only the jet mass \(M^{2}=(\sum_{i}p_{i})^{2}\). Note, however, that such an observable can still depend on these vectors in an arbitrarily complex fashion away from the massless manifold. The original PELICAN architecture as introduced above is neither IR- nor C-safe. Below we modify the architecture to make it exactly IR-safe or IRC-safe and evaluate the implications. ### IR-safe PELICAN As shown above, IR-safety in Lorentz-invariant networks essentially requires the outputs to be independent of the multiplicity \(N\). There are four ways in which the multiplicity shows up in PELICAN: 1. Scaling with \(N^{\alpha}/\bar{N}^{\alpha}\) in the equivariant block. This must be disabled for IR-safety. 2. Non-zero bias values in linear layers. Since the network is permutation-equivariant, the bias values are shared across jet constituents, which means that upon aggregation in the following equivariant layer they contribute multiples of \(N\). All biases in all linear layers must be disabled for IR-safety. 3. The input embedding must map zero to zero, but our original choice already satisfies this. In addition, the activation function must also have a fixed point at zero. Our default choice, LeakyReLU, also satisfies this. 4. Following an application of a PELICAN equivariant block, rows and columns corresponding to soft constituents will contain a combination of sums over all constituents. Even in the absence of biasing constants, this effectively increases the multiplicity with which these values enter in the later aggregations. This can be resolved by making sure that rows and columns that are soft at the input remain soft throughout the whole network. Therefore we introduce _soft masking_, whereby the last 12 equivariant aggregators (those don't preserve the softness of rows/columns) are followed by a multiplication by the vector of values \(J\cdot p_{i}\), where \(J=\sum_{i=1}^{N}p_{i}\), scaled and clipped to be within \([-1,1]\). 
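A minimal sketch of such a soft mask is given below; the text specifies the mask values (\(J\cdot p_{i}\) for the IR mask, \(m_{i}^{2}\) for the C-safe variant, scaled and clipped to \([-1,1]\)) but not the scale, so the scale factor and the function names here are our own assumptions:

```python
import torch

def soft_mask(p: torch.Tensor, mode: str = "ir", scale: float = 1.0) -> torch.Tensor:
    """Per-constituent soft mask values, clipped to [-1, 1].

    p: (N, 4) 4-momenta in (E, px, py, pz).  mode="ir" uses J . p_i with
    J = sum_i p_i; mode="c" uses the squared masses m_i^2.  The scale factor
    is a free choice in this sketch.
    """
    metric = torch.tensor([1.0, -1.0, -1.0, -1.0])
    if mode == "ir":
        J = p.sum(dim=0)
        vals = (p * metric * J).sum(dim=-1)    # Minkowski product J . p_i
    else:
        vals = (p * metric * p).sum(dim=-1)    # squared masses m_i^2
    return torch.clamp(scale * vals, -1.0, 1.0)
```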
In \(\text{Eq}_{2\to 2}\) this multiplication is applied both row-wise and column-wise, and in \(\text{Eq}_{2\to 1}\) it's component-wise. With these modifications, PELICAN becomes IR-safe. As we will see, this restriction leads to a modest reduction in the performance of PELICAN's predictions in our tasks of interest. ### IRC-safe PELICAN Adding C-safety to the architecture is much simpler. As stated above, the necessary requirement is that the output depend on massless inputs only through their sum. In PELICAN this can be achieved by inserting a linear permutation-equivariant layer with a mass-based soft mask immediately at the input (any nonlinear embedding has to be done later). Consider a case where \(p_{1}\), \(p_{2}\) are massless and the dot product matrix \(\{d_{ij}\}\) is fed into such an equivariant layer. Most of the aggregators will compute sums over rows or columns, thus immediately producing C-safe quantities. However, several of the aggregators, including the identity, will preserve individual information about each \(p_{i}\), therefore their output rows and columns corresponding to \(p_{1}\) and \(p_{2}\) need to be thrown out. This can be done by a soft mask that turns to zero as the mass of any input goes to zero. This mask is defined in the same way as the IR mask except using \(m_{i}^{2}\) instead of \(J\cdot p_{i}\). It needs to be applied only to the first 2 order zero and the first 7 order one aggregators. Coincidentally, this soft mask can also be used in place of an IR mask, which means that we only need the C-safe soft mask to make a fully IRC-safe PELICAN architecture. Altogether it gets applied to all equivariant aggregators except the third one (which extracts the diagonal and is thus IRC-safe by definition). ### Testing IR/C-safe PELICAN models First we quantify the deviation in PELICAN's outputs that occurs under soft and collinear splittings and observe how training affects them. We define an IR-splitting as adding a zero 4-vector to the list of input constituents. Then PELICAN's output on IR-split data is directly compared to the original output. Defining a C-splitting is more difficult since realistic events never contain any exactly collinear constituents, and we want to avoid changing the number of particles so as to make this test independent of IR-safety. Therefore we prepare the data by inserting two copies of the vector \((1,0,0,1)\) to each event. Then the C-splitting will amount to replacing these two vectors with \((2,0,0,2)\) and \((0,0,0,0)\). The outputs on the same event prepared in these two ways can be directly compared. To compare two outputs \(p_{\text{reco}}\), \(p_{\text{reco}}^{\prime}\) we compute the relative deviation \(|(p_{\text{reco}}^{\prime}-p_{\text{reco}})/p_{\text{reco}}|\), where the division is component-wise. To estimate the effect of an IR- or C-splitting on PELICAN's predictions, we take the median value of this deviation over a batch of events. The same can also be done with PELICAN weights as the outputs. The splittings are applied to 100-event batches of events from one of our datasets and the median deviations are averaged over 300 batches. We test 5 randomly-initialized models and 5 models trained on the full variable \(W\) mass dataset from Section 6. We find that a randomly-initialized PELICAN regression model's output 4-vector deviates by 0.5%-7% under an IR-split, and the PELICAN weights deviate by up to 5%. 
However, regression models trained to reconstruct \(p_{\text{cont}}^{W}\) have both deviations under 5%, potentially indicating a slight improvement in IR-safety due to training. The resolutions \(\sigma_{p_{T}},\sigma_{m}\), and \(\sigma_{\Delta R}\) of the trained IR-safe truth-level models are about 20%-35% worse (larger) than the original models, and similarly the Delphes resolutions get 40%-50% worse. We also note that IR-safe PELICAN models appear to be slightly more rigid under C-splits, showing 5%-16% deviations that enhance slightly to at most 11% after training. Under a C-splitting, the randomly-initialized regression model's outputs (both the 4-vector and the weights) consistently deviate by 3%-8%, and the same range of deviations was observed on fully trained models as well. The resolutions of trained IRC-safe truth-level models suffer significantly in comparison to the regular models, exhibiting 5-6 times higher values of \(\sigma_{p_{T}},\sigma_{m}\), and \(\sigma_{\Delta R}\). We do not perform this comparison for Delphes models since the jet constituents coming out of Delphes are massless, so only functions of the jet mass are expressible by IRC-safe PELICAN on that data. ## 9 Conclusion We have presented a full description of PELICAN: a network that respects particle permutation and Lorentz symmetries important in particle physics. PELICAN is a general network which is performant at 4-vector regression and provides state-of-the-art performance in the task of top-tagging. To demonstrate PELICAN's regression capabilities, we chose the reconstruction of the \(W\)-boson's 4-momentum from a full top quark jet, and to our knowledge PELICAN is the first ML method applied to this problem. Even within these tasks there is room to improve PELICAN's performance by introducing additional scalar information such as particle charges, which would allow the network to account for the simulated collider's magnetic field. PELICAN's architecture, its flexibility, and generalizability may also allow for future applications to charged-particle track reconstruction, pile-up identification, and full-event reconstruction. Being a general architecture, PELICAN is not limited to top quark decays or even jets. This network inherently provides powerful tools for investigating its own behavior due to the equivariant architecture and shows promise as a tool which can be thoroughly investigated if deployed in real world scenarios. Acknowledgements The authors would like to thank the Data Science Institute at the University of Chicago for its generous support of this research. TH is supported by the Department of Physics at the University of Chicago. DWM and JTO are supported by the National Science Foundation under Grant PHY-2013010. The computations in this work were, in part, run at facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. The Center for Computational Mathematics at the Flatiron Institute is supported by the Simons Foundation.
2308.16383
Separate and Locate: Rethink the Text in Text-based Visual Question Answering
Text-based Visual Question Answering (TextVQA) aims at answering questions about the text in images. Most works in this field focus on designing network structures or pre-training tasks. All these methods list the OCR texts in reading order (from left to right and top to bottom) to form a sequence, which is treated as a natural language ``sentence''. However, they ignore the fact that most OCR words in the TextVQA task do not have a semantical contextual relationship. In addition, these approaches use 1-D position embedding to construct the spatial relation between OCR tokens sequentially, which is not reasonable. The 1-D position embedding can only represent the left-right sequence relationship between words in a sentence, but not the complex spatial position relationship. To tackle these problems, we propose a novel method named Separate and Locate (SaL) that explores text contextual cues and designs spatial position embedding to construct spatial relations between OCR texts. Specifically, we propose a Text Semantic Separate (TSS) module that helps the model recognize whether words have semantic contextual relations. Then, we introduce a Spatial Circle Position (SCP) module that helps the model better construct and reason the spatial position relationships between OCR texts. Our SaL model outperforms the baseline model by 4.44% and 3.96% accuracy on TextVQA and ST-VQA datasets. Compared with the pre-training state-of-the-art method pre-trained on 64 million pre-training samples, our method, without any pre-training tasks, still achieves 2.68% and 2.52% accuracy improvement on TextVQA and ST-VQA. Our code and models will be released at https://github.com/fangbufang/SaL.
Chengyang Fang, Jiangnan Li, Liang Li, Can Ma, Dayong Hu
2023-08-31T01:00:59Z
http://arxiv.org/abs/2308.16383v1
# Separate and Locate: Rethink the Text in Text-based Visual Question Answering ###### Abstract. Text-based Visual Question Answering (TextVQA) aims at answering questions about the text in images. Most works in this field focus on designing network structures or pre-training tasks. All these methods list the OCR texts in reading order (from left to right and top to bottom) to form a sequence, which is treated as a natural language "sentence". However, they ignore the fact that most OCR words in the TextVQA task **do not have a semantical contextual relationship**. In addition, these approaches use 1-D position embedding to construct the spatial relation between OCR tokens sequentially, which is not reasonable. The 1-D position embedding **can only represent the left-right sequence relationship between words in a sentence**, but not the complex spatial position relationship. To tackle these problems, we propose a novel method named Separate and Locate (SaL) that explores text contextual cues and designs spatial position embedding to construct spatial relations between OCR texts. Specifically, we propose a Text Semantic Separate (TSS) module that helps the model recognize whether words have semantic contextual relations. Then, we introduce a Spatial Circle Position (SCP) module that helps the model better construct and reason the spatial position relationships between OCR texts. Our SaL model outperforms the baseline model by 4.44% and 3.96% accuracy on TextVQA and ST-VQA datasets. Compared with the pre-training state-of-the-art method pre-trained on 64 million pre-training samples, our method, without any pre-training tasks, still achieves 2.68% and 2.52% accuracy improvement on TextVQA and ST-VQA. Our code and models will be released at [https://github.com/fangbufang/SaL](https://github.com/fangbufang/SaL). 
TextVQA, Multimodal Information, Scene Understanding 
## 1. Introduction

A natural language sentence is read in a fixed order as a coherent whole, and the words in the sentence have semantic relevance. Conversely, the OCR texts "Pepsi GATORADE Ford at&t" do not actually have contextual semantic associations. However, all the previous works in TextVQA ignore that texts in an image are different from the sentence in NLP. The OCR text sequence "Pepsi GATORADE Ford at&t" is regarded as a sentence in current methods, even though the words in it do not actually have contextual semantic associations. Forcing these irrelevant OCR texts to form a sentence forces the model to construct contextual relationships that should not exist between these texts, adding harmful noise to the learning process of the model. Another difference is that natural language text input has a reading order from left to right and top to bottom. The words and sentences in natural language inherently have semantic associations, so the text can be spliced in reading order to form a linguistic sequence. There is no problem in using absolute or relative position embeddings to indicate the sequential-position relationships between different words or sentences. However, OCR texts recognized in a scene image cannot simply be spliced from left to right and top to bottom. We attribute this to the fact that **OCR texts show strong spatial-position relations between each other**. The 1-D relative or absolute position encoding cannot express the complex 2-D spatial relationships in the image. It is not reasonable to input the concatenated OCR texts into the model and then use the original 1-D position embedding for position modeling. This will lead to some OCR texts that are spatially close in the image being placed far away from each other by the 1-D position embedding. Intuitively, OCR texts that are adjacent left and right, or up and down, in an image are more likely to have direct semantic associations. For example, in Figure 1 (b), current methods take in "Construction Card Stock Paper Fabric 3 pieces" with 1-D position embeddings added to them, which will cause 'Construction' to be close to 'Card' while the distance between 'Construction' and 'Paper' is far. It will also lead to '3 pieces' being close to 'Fabric' but farther away from 'Card Stock'. However, '3 pieces' is more semantically related to 'Card Stock', which is spatially close. Therefore, better spatial-aware position modeling is appealing. To alleviate the above problems, we rethink the text in Text-based Visual Question Answering and propose a new method called Separate and Locate (SaL). 
Aiming at the problem that there is no apparent semantic association between the text in the image, we introduce a Text Semantic Separate (TSS) module. Compared with directly combining unrelated OCR texts into a sentence, TSS can learn to reduce the noise during training by separating OCR texts that do not have semantic relevance. In this way, the model is promoted to better learn relationships between different OCR texts and help the subsequent reasoning of answers to text-related questions. As for the problem that 1-D position encoding cannot properly express the spatial position relationships between OCR texts, we introduce a Spatial Circle Position (SCP) module. SCP provides each text in the image with representations indicating the spatial relative distances between it and all other texts. Specifically, we follow the Pythagorean theorem [23] to calculate spatial relative distances between two texts. Benefiting from the two modules, our proposed SaL properly overcomes the problems of ambiguous semantic associations of OCR texts and inaccurate positional representations. With better correlation capturing and the spatial-position realization between different OCR texts, SaL enhances the model's feature fusion ability and multi-modal information reasoning ability. We conduct experiments on two benchmarks: TextVQA [29] and ST-VQA [5]. SaL, without any pretraining tasks that are adapted in previous works[4, 35, 22], outperforms the SOTA methods even including those pre-training ones. The reason for not adapting pre-training is that OCR annotations of the SOTA pre-training dataset with 64 million samples are not open-source. We believe Figure 1. (a) Previous methods spliced the OCR text into a sentence according to the reading order. This sentence is not consistent with the sentence with the complete contextual semantic relationship input in the NLP task. There is no semantic relationship between many OCR texts, which brings noises. (b) Previous methods use 1-D position encoding according to the reading order, which cannot well represent spatial position relationships between OCR texts. that our performance can be further improved by pre-training on the large-scale pre-training data. In summary, our contributions are three-folded: 1. We are the first to claim that the text input in TextVQA is different from that in NLP. Most of the OCR texts do not have semantic associations and should not be stitched together. For this, we designed a Text Semantic Separate (TSS) module. 2. We propose a Spatial Circle Position (SCP) module to help the model realize the spatial relative-position information between OCR texts. It can solve the problem that 1-D position embedding cannot well represent the text position in the image. 3. Extensive experiments demonstrate the effectiveness of our method. Sal. outperforms the current SOTA method by 2.68% and 2.52% on TextVQA and ST-VQA respectively (The SOTA method uses 64 million pre-training data but we do not use any pre-training data). ## 2. Related Work ### Vision-language tasks incorporating scene text As scene-text visual question answering (Sang et al., 2016; Liu et al., 2017) has gradually gained attention, in order to enhance the scene-text understanding ability of VQA models (Beng et al., 2015; Liu et al., 2017), several datasets (Sang et al., 2016; Liu et al., 2017; Liu et al., 2017) have been proposed to promote the development of this field. 
Previous works (Sang et al., 2016; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017) realize that texts play an important role in answering text-related questions. CRN (Liu et al., 2017) focuses on the interaction between text and visual objects. LaAP-Net (Liu et al., 2017) gives the bounding box of the generated answer to guide the process of answer generation. SMA (Liu et al., 2017) and SA-M4C (Liu et al., 2017) build graphs to build relations between OCR texts and objects. TAG (Liu et al., 2017) introduces a data augmentation method for TextVQA. TAP (Liu et al., 2017), LOGOS (Liu et al., 2017), LaTr (Sang et al., 2016), and PreSTU (Liu et al., 2017) propose different pre-training tasks to promote the development of scene-text visual question answering. Specifically, TAP proposed a text-aware pre-training task that predicts the relative spatial relation between OCR texts. LOGOS introduces a question-visual grounding task to enhance the connection between text and image regions. LaTr is based on the T5 model (Liu et al., 2017) and proposes a layout-aware pre-training task that incorporation of the layout information. PreSTU is based on the mT5 model and designed a simple pre-training recipe for scene-text understanding. However, all of them ignore the irrelevance between different OCR texts and the poor ability of original 1-D position encoding. Our model separates different OCR words according to their semantic contextual information and can realize the difference between OCR texts and complete semantically related sentences. Furthermore, it can establish accurate relative spatial relationships between each OCR word, which facilitates the answering process. ### Spatial position encoding methods After the Transformer (Liu et al., 2017) becomes the common paradigm of NLP, the 1-D absolute position encoding (Liu et al., 2017) and relative position encoding (Liu et al., 2017) are proposed to identify the position relation between the different words in the sentence. After Vit (Vaswani et al., 2017) uses the transformer to process the image task, Standalone self-attention (Tang et al., 2016) proposes an encoding method for 2-D images. The idea is simple. It divides the 2-D relative encoding into horizontal and vertical directions, such that each direction can be modeled by a 1-D encoding. However, it only gives each image region a simple absolute spatial position encoding, and cannot directly construct the relative spatial relationship and distance between images. Simply summing two one-dimensional positional embeddings of x and y to represent the positional relationship of regions in an image is the main limitation that prevents the model from learning spatial relationships. LaAP-Net (Liu et al., 2017), SMA (Liu et al., 2017), SA-M4C (Liu et al., 2017), and LaTr (Sang et al., 2016) prove the critical role of the position of OCR texts in the TextVQA field. They proposed various network structures and pre-training tasks to make the model learn the different spatial position relations between OCR texts. LaAP-Net, SMA, and SA-M4C still use traditional 1-D absolute position encoding in NLP to build spatial associations between OCR texts. 
Although LaTr uses six learnable lookup tables to construct the spatial layout information of OCR text, it does not improve much in the absence of a large amount of data pre-training, because it still uses the simple addition of multiple original 1-D position encoding to represent spatial position information. Our method models the spatial relative distance and implicit angle of each OCR text in the image to all other OCR texts, which is a more direct and reasonable spatial encoding representation. Figure 2. The pipeline of our model. Same-shape markers represent features of the same modality, and marks of the same color represent features from the same text mark or image region. ## 3. Method SaL analyzes the difference between OCR texts in TextVQA and complete sentences in NLP. In terms of semantic relations, SaL proposes a Text Semantic Separate (TSS) module that explicitly separates OCR texts without semantic contextual relations. In terms of spatial-position relations, SaL introduces a Spatial Circle Position (SCP) module that models 2-D relative spatial-position relations for OCR texts. With these two modules, our method can separate semantic irrelevance OCR texts and locate the accurate spatial position of OCR texts in the image. In this section, we introduce the whole process of our model. Specifically, we elaborate on the TSS and SCP modules in Sections 3.2 and 3.3 respectively. ### Multimodal Feature Embedding Following LaTr, we utilize a transformer of Text-to-Text Transfer Transformer (T5) as the backbone. As shown in Figure 2(a), the original data in each sample in the dataset is a image and the corresponding question. Next, as shown in Figure 2(b), we use the FRCNN model and the T5 token embedding layer to process visual and text information respectively. Finally, as shown in Figure 2(c), the question, OCR text, and object features are concatenated together and input into the model. The specific process is as follows: **Question Features.** Following LaTr, the question words are indexed as the question feature embeddings \(Q=\{q_{i}\}_{i=1}^{L}\) by T5 token embedding layer, where \(q_{i}\in\mathbb{R}^{d}\) is the embedding of the \(i\)-th question word, \(L\) is the length of the question, and \(d\) is the dimension of the feature. **OCR Features.** For texts recognized in input images by the OCR system, we have three different features: 1) visual features extracted by Faster R-CNN \(x_{i}^{ocr,f_{r}}\). 2) the corresponding bounding box of visual features \(x_{i}^{ocr,bx}=[\frac{x_{i}^{min}}{W},\frac{y_{i}^{min}}{H},\frac{x_{i}^{max} }{W},\frac{y_{i}^{max}}{H}]\). 3) the text embedding \(x_{i}^{ocr,t5}\) produced by the T5 embedding layer. The final OCR feature is: \[x_{i}^{ocr}=T5LN(W_{fr}x_{i}^{ocr,fr})+T5LN(W_{bx}x_{i}^{ocr,bx})+W_{t5}x_{i}^{ ocr,t5} \tag{1}\] **Object Features.** To get object region features, we apply the same Faster R-CNN model mentioned in OCR features: \[x_{i}^{obj}=T5LN(W_{fr}^{r}x_{i}^{obj,fr})+T5LN(W_{kr}^{r}x_{i}^{obj,bx})+W_{ t5}^{r}x_{i}^{obj,t5} \tag{2}\] where \(x_{i}^{obj,fr}\) is the appearance feature, \(x_{i}^{obj,bx}\) is the bounding box feature and \(x_{i}^{obj,t5}\) is the t5 word embedding corresponding to the object label. Therefore, the input embedding is: \[input=cat(X^{q},X^{ocr},X^{obj}) \tag{3}\] where \(X^{q}\) is the question embeddings, \(X^{ocr}\) is the OCR embeddings, \(X^{obj}\) is the object embeddings. The \(cat\) means the concatenating function. 
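A possible PyTorch sketch of the OCR embedding in Eq. (1) is shown below; the feature dimensions (2048 for Faster R-CNN features, 4 for boxes, 768 for T5) and the RMS-style T5 layer norm are common choices we assume here rather than details stated in the text:

```python
import torch
import torch.nn as nn

class T5LayerNorm(nn.Module):
    """RMS-style layer norm (no bias, no mean subtraction), as used in T5."""

    def __init__(self, d: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        var = x.pow(2).mean(-1, keepdim=True)
        return self.weight * x * torch.rsqrt(var + self.eps)

class OCREmbedding(nn.Module):
    """Sketch of Eq. (1): fuse Faster R-CNN, bounding-box, and T5 word features."""

    def __init__(self, d_fr: int = 2048, d_box: int = 4, d_model: int = 768):
        super().__init__()
        self.w_fr = nn.Linear(d_fr, d_model)
        self.w_bx = nn.Linear(d_box, d_model)
        self.ln_fr = T5LayerNorm(d_model)
        self.ln_bx = T5LayerNorm(d_model)

    def forward(self, x_fr, x_bx, x_t5):
        # x_t5: T5 token embedding of the OCR word, already in model space
        return self.ln_fr(self.w_fr(x_fr)) + self.ln_bx(self.w_bx(x_bx)) + x_t5
```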
### Text Semantic Separate Module Unlike words in a sentence, scene texts in images are distributed in various positions of the image. They are distributed on different text carriers and backgrounds of different materials, which naturally leads to no contextual relationship between most texts. Since previous works did not realize this, OCR texts are directly spliced into a sentence and input into the model, which makes the learning process of the model suffer from noise. To this end, the Text Semantic Separate (TSS) module separates different OCR texts according to their visual and spatial information, so that the model can correctly recognize the differences between OCR texts and a natural sentence, which helps the model to express and fuse the features of the question, the OCR texts, and the objects. Specifically, we can look at the bottom of Figure 3(a). Without the TSS module, the model stitches the OCR texts together, making the model assume that two adjacent words such as 'Paper' and 'Fabric' may have a contextual relationship. However, this is not the case. Our TSS module inserts a context-separation token <context> after the last token of each OCR text. Every <context> token is represented by its learnable reserved embedding from the T5 lookup table. Then, we add the visual feature and bounding-box coordinates of each OCR text to its corresponding context-separation token (shown in Figure 3 (a)). Finally, each context-separation token can interact with all other OCR texts and distinguish whether there is a semantic relationship through visual and coordinate relations. The benefits of this are: 1) The model can learn that there is no contextual relationship between different OCR texts, which helps the model's reasoning and feature fusion. 2) <context> can combine the OCR texts before and after it to learn the difference between different OCR texts. 3) Compared with directly splicing OCR texts into sentences, this method reduces noise. ### Spatial Circle Position Module The spatial position relationship of text in natural scenes is extremely important for solving text-related questions. How to construct the complex spatial position relationship between texts has become one of the urgent problems to be solved. We, therefore, propose the SCP module to construct precise inter-textual spatial position relationships. Specifically, SCP includes the following three steps: 1) divide the scene image into \(11\times 11\) image areas, and assign all OCR texts to the corresponding areas by their coordinates; 2) calculate the spatial distance between each OCR text and all other texts through the Pythagorean theorem; 3) assign spatial position embeddings between the OCR text and other OCR texts based on the calculated spatial distances and feed them into the spatial circle position-aware self-attention. The formulas for this process are as follows: \[p_{i}^{ocr}=Patch(x_{i}^{ocr,bx}) \tag{4}\] \[dist_{i,j}^{ocr}=(Pytha(p_{i}^{ocr},p_{j}^{ocr})*2).long() \tag{5}\] \[distEmbed_{i,j}^{ocr}=Embedding(dist_{i,j}^{ocr}) \tag{6}\] \[att_{i,j}=\frac{(W_{q}*ocInput)(W_{k}*ocInput)^{T}+b_{i,j}}{\sqrt{d}} \tag{7}\] \[\alpha_{i,j}=softmax(att_{i,j}+distEmbed_{i,j}^{ocr}) \tag{8}\] \[ocOutput_{i,j}=\alpha_{i,j}*(W_{p}*ocInput) \tag{9}\] where Patch is a function that aligns each OCR text to its image-patch coordinates, Pytha is a function that calculates the spatial distance between OCR texts, and Embedding is a PyTorch embedding layer implemented as a \(32\times 12\) look-up table. 
\(W_{q},W_{k},W_{o}\) are learnable parameters of the self-attention layer and \(\frac{1}{\sqrt{d}}\) is a scaling factor. Using the SCP module has the following advantages: 1) Compared with models such as SMA and SA-M4C, which construct an attention layer that specifically handles a variety of predefined spatial position relationships, SCP explicitly constructs all spatial relationships of each OCR text with the other OCR texts through spatial distance. In addition, since various angle relationships implicitly exist between the image blocks produced in the first step of Figure 3 (b), the OCR texts assigned to different image blocks also implicitly carry various angle relationships. SCP does not require additional consumption of model parameters; only a \(32\times 12\) learnable look-up table is required. 2) Compared with LaTr, which uses 6 learnable look-up tables of size \(1000\times 768\) and a large amount of pre-training data, the explicit construction method of SCP is much better (see Table 5 for details). 3) Previous methods build global spatial positional relationships of all OCR texts and cannot properly construct the relative spatial positional relationship between each OCR text and all other OCR texts. SCP takes into account the implicit angle and spatial distance between each OCR text and all other OCR texts, and can more accurately locate the position of the text in the image. As we can see in Figure 3 (a), the previous methods tend to generate many unreasonable positional relationships due to the lack of spatial position representation capabilities. For example, 'Construction' is closer to 'Card' but 'Construction' is farther away from 'Paper'. This obviously does not match what we see in the image. Our approach solves this problem very well. Specifically, as shown in Figure 3 (b3), we added our SCP module to the attention layer, so that the model can capture the various angle and spatial position relationships between OCR texts. ### Training Loss Following previous works, we use the binary cross-entropy loss to train our model. Since there are several answers to the question in each sample, the task can be converted into a multi-label classification problem. The binary cross-entropy loss can be defined as follows: \[pred=\frac{1}{1+exp(-y_{pred})}\] \[L_{bce}=-(y_{gt}log(pred)+(1-y_{gt})log(1-pred)) \tag{10}\] where \(y_{pred}\) is the prediction and \(y_{gt}\) is the ground-truth target. ## 4. Experiments In this section, we verify the effectiveness of our method on two main benchmarks, TextVQA and ST-VQA. In Section 4.1, we introduce these datasets and the evaluation metrics. In Sections 4.2 and 4.3, we compare our method with SOTA methods and conduct ablation experiments. Finally, in Section 4.4 we present the visualization results. Following LaTr, we use \(\ddagger\) to refer to the models trained on both TextVQA and ST-VQA. 'Base' and 'Large' model sizes refer to architectures that have 12+12 and 24+24 layers of transformers in the encoder and decoder, respectively. ### Datasets and Evaluation Metrics **TextVQA**[(29)] is the main dataset for text-based visual question answering. It is a subset of Open Images [(20)] and is annotated with 10 ground-truth answers for each scene-text-related question. It includes 28,408 images with 45,336 questions. Following previous settings, we split the dataset into 21,953 images, 3,166 images, and 3,289 images respectively for the train, validation, and test sets. The methods are evaluated by the soft-voting accuracy of 10 answers. 
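As a reference for how this metric is computed, below is a minimal sketch of the soft-voting accuracy over the 10 annotated answers. The min(matches/3, 1) rule is the standard VQA convention; the official evaluator additionally normalises answer strings and averages over annotator subsets, both of which are omitted here.

```python
def soft_voting_accuracy(prediction: str, gt_answers: list) -> float:
    """VQA-style soft accuracy over 10 annotated answers: a prediction is fully
    correct when at least 3 annotators gave it (simplified; answer normalisation
    and subset averaging from the official evaluator are omitted)."""
    norm = lambda s: s.strip().lower()
    matches = sum(norm(a) == norm(prediction) for a in gt_answers)
    return min(1.0, matches / 3.0)
```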
**ST-VQA** is similar to TextVQA and contains 23,038 images with 31,791 questions. We follow the setting from M4C and split the dataset into train, validation, and test splits with 17,028, 1,893, and 2,971 images respectively. The data in ST-VQA is collected from the Coco-Text [(32)], Visual Genome [(19)], VizWiz [(3)], ICDAR [(16; 17)], ImageNet [(7)], and IIIT-STR [(24)] datasets. Different from TextVQA, we report both soft-voting accuracy and Average Normalized Levenshtein Similarity (ANLS) on this dataset. ANLS is defined as \(1-d_{L}(pred,gt)/max(|pred|,|gt|)\), where \(pred\) is the prediction, \(gt\) is the ground-truth answer, and \(d_{L}\) is the edit distance. Figure 3. (a) Text Semantic Separation module: OCR texts are split by the <context> token, which learns whether two adjacent tokens are semantically related. (b) Spatial Circle Position module: (b1) splits the image into blocks, (b2) computes the spatial circle distances between the blocks at which the OCR texts are located, and (b3) adds the corresponding position embedding into the SCP-aware Self-Attention. ### Experiment Results #### 4.2.1. TextVQA Results For a fair comparison with previous methods, we divide them into non-pre-trained methods and methods pre-trained on large amounts of data. Since the accuracy of the OCR system for recognizing text has a great influence on the performance of the model, our method uses the classic Rosetta OCR and Amazon OCR to make a fairer comparison with the previous methods. As shown in Table 1, in the case of using Rosetta OCR, our method reaches an accuracy of 48.67% on the validation set of TextVQA, which exceeds the previous SOTA method LaTr by 4.61%. In the case of using Amazon OCR, our method uses the same T5-base as the previous SOTA LaTr-base as the model structure, and our method outperforms LaTr by 3.32% and 3.63% accuracy on the validation set and test set of TextVQA, respectively. In the case of using T5-large and training on both the TextVQA and ST-VQA datasets, our method achieves new best accuracies of 64.58% and 64.28% on the validation set and test set of TextVQA respectively. Our method outperforms the SOTA LaTr-large method with the same configuration by 3.53% and 2.68% respectively. It is worth noting that the additional pre-training dataset IDL (Dosov et al., 2017) containing 64 million samples used by LaTr is not open source, which prevents our method from pre-training on it. This demonstrates the effectiveness and efficiency of our method. Since this pre-training method is orthogonal to our method, we believe that our method will be further improved by applying the pre-training task of LaTr. #### 4.2.2. ST-VQA Results As we can see in Table 2, under the unconstrained setting, SaL\(\ddagger\)-Large achieves 64.16% accuracy and 0.722 ANLS on the validation set of ST-VQA, and an ANLS of 0.717 on the test set. Our model exceeds SOTA LaTr\(\ddagger\)-Large by 2.52% accuracy and 2.0% ANLS on the ST-VQA validation set, and 2.1% ANLS on the ST-VQA test set. Likewise, SaL-base also outperforms LaTr-base by a large margin. From Table 1 and Table 2, we can observe that, when training on both TextVQA and ST-VQA, the improvement on TextVQA (from 62.42% to 62.85% on SaL-base, from 63.88% to 64.58% on SaL-large) is not as significant as the improvement on the ST-VQA dataset (from 59.74% to 62.29% on SaL-base and from 61.45% to 64.16% on SaL-large). We will analyze this phenomenon in Appendix B. 
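For completeness, here is a minimal sketch of the ANLS metric used above, following the definition \(1-d_{L}(pred,gt)/max(|pred|,|gt|)\). Taking the maximum over the available ground-truth answers and zeroing scores below 0.5 follow the common ST-VQA protocol and are assumptions here, not necessarily the exact official implementation.

```python
def anls(prediction: str, gt_answers: list, threshold: float = 0.5) -> float:
    """ANLS for one question: 1 - d_L(pred, gt) / max(|pred|, |gt|), maximised over
    the ground-truth answers; scores below the threshold are set to zero."""
    def levenshtein(a: str, b: str) -> int:
        # Standard dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[len(b)]

    best = max(1.0 - levenshtein(prediction, gt) / max(len(prediction), len(gt), 1)
               for gt in gt_answers)
    return best if best >= threshold else 0.0
```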
### Ablation Study In this section, we provide insightful experiments that we deem influential for the TextVQA task and its future development. We first analyze the performance improvement of the different modules we propose on the TextVQA dataset. Afterward, we analyze the effect of different input information on the model performance. Then, to prove the effectiveness of our SCP module, we compare our SCP module and layout embedding in LaTr. Finally, for different methods of OCR text separation, we conduct experiments and further emphasize that our TSS module is the most effective choice. #### 4.3.1. The Effective Of Modules In order to prove the effectiveness of our proposed method, we follow LaTr to do ablation experiments on TextVQA and ST-VQA based on T5-Base. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \# & Model & OCR system & Pre-Training Data & Extra Finetune & Val Acc. (\%) & Test Acc. (\%) \\ \hline 1 & M4C (Kumar et al., 2018) & Rosetta-en & - & - & 39.40 & 39.01 \\ 2 & SMA (Kumar et al., 2018) & Rosetta-en & - & - & 40.05 & 40.66 \\ 3 & LaAP-Net (Kumar et al., 2018) & Rosetta-en & - & - & 40.68 & 40.54 \\ 4 & CRN (Kumar et al., 2018) & Rosetta-en & - & - & 40.39 & 40.96 \\ 5 & BOV (Kumar et al., 2018) & Rosetta-en & - & - & 40.90 & 41.23 \\ 6 & SC-Net (Kumar et al., 2018) & Rosetta-en & - & - & 41.17 & 41.42 \\ 7 & TAP (Kumar et al., 2018) & Rosetta-en & TextVQA & - & 44.06 & - \\ 8 & LaTr-Base (Dosov et al., 2017) & Rosetta-en & - & - & 44.06 & - \\ 9 & SaL-Base & Rosetta-en & - & - & **48.67** & - \\ \hline 10 & SA-M4C (Kumar et al., 2018) & Google-OCR & - & ST-VQA & 45.40 & 44.60 \\ 11 & SMA (Kumar et al., 2018) & SBD-Trans OCR & - & ST-VQA & - & 45.51 \\ 12 & LOGOS (Kumar et al., 2018) & Microsoft-OCR & - & ST-VQA & 51.53 & 51.08 \\ 13 & TWA (Dosov et al., 2017) & Microsoft-OCR & - & ST-VQA & 52.70 & 52.40 \\ 14 & TAG (Kumar et al., 2018) & Microsoft-OCR & TextVQA,ST-VQA & ST-VQA & 53.63 & 53.69 \\ 15 & TAP (Kumar et al., 2018) & Microsoft-OCR & TextVQA,ST-VQA,TextCaps,OCR-CC & ST-VQA & 54.71 & 53.97 \\ 16 & LaTr-Base (Dosov et al., 2017) & Amazon-OCR & IDL & - & 58.03 & 58.86 \\ 17 & LaTr\(\ddagger\)-Base (Dosov et al., 2017) & Amazon-OCR & IDL & ST-VQA & 59.53 & 59.55 \\ 18 & LaTr\(\ddagger\)-Large (Dosov et al., 2017) & Amazon-OCR & IDL & ST-VQA & 61.05 & 61.60 \\ 19 & SaL-Base & Amazon-OCR & - & - & 62.42 & 62.35 \\ 20 & SaL-Base & Amazon-OCR & - & ST-VQA & 62.85 & 63.18 \\ 21 & SaL-Large & Amazon-OCR & - & - & 63.88 & 63.92 \\ 22 & SaL\(\ddagger\)-Large & Amazon-OCR & - & ST-VQA & **64.58** & **64.28** \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison on the TextVQA dataset. For a fair comparison, the top of the table is the result of using Rosetta OCR and only using the TextVQA dataset training. The bottom of the table is the result of the unrestricted setting. \(\ddagger\) to refer to the models trained on TextVQA and ST-VQA. As shown in the first row of Table 3, our baseline model (removing all of our proposed modules) shows the worst performance compared to our full model and other ablated ones. As shown in the second row of Table 3, with the help of the TSS module, the accuracy of our baseline on TextVQA increased from 57.98% to 61.55%, and the accuracy on ST-VQA increased from 55.78% to 58.29%. This proves that the TSS module can make the model realize whether there is a semantic context relationship directly between different OCR texts, reducing the noise generated by treating all OCR texts as a sentence in previous work. 
When adding the SCP module, the performance of the baseline on TextVQA and ST-VQA increased from 57.98% to 60.98% and from 55.78% to 57.80% respectively. At the same time, the ANLS metric also improved by 1.7% on ST-VQA. This proves that the SCP module can help the model better locate OCR texts in different spatial positions in the image, and construct the spatial relationship between OCR texts in different positions. Adapting TSS and SCP modules at the same time (our full model), compared with the baseline, the accuracy on TextVQA and ST-VQA is increased by 4.44% and 3.96% respectively. The ANLS of our model on ST-VQA maintained the same trend as the accuracy and increased by 3.40%, indicating the importance of both modules. In the visualization section, we will more intuitively show the impact of these two modules on model inference. #### 4.3.2. The Influence Of Different Input Information We explore the effect of different input information on the model. We use the coordinate information of OCR and objects in the image by default. The results are shown in Table 4. It can be seen that OCR information (including both text and visual information) has the greatest impact on the performance of the model. Adding OCR text and its visual information extracted by FRCNN achieves an accuracy of 61.31%, which drastically improve the model with only question input. In addition, different from the conclusion in LaTr, when using the same T5-Base model as the model structure, the performance of OCR using FRCNN visual features is much better than using ViT visual features in our experiments. For this, two reasons can be regarded. The first one is that compared to ViT, we bind FRCNN features with OCR text features through addition operations. This means that for each OCR text, we can accurately fuse its text and visual features. However, using ViT visual features requires the model to be trained to match each OCR text with its associated image patch. There is no supervisory signal to help OCR text to match ViT features during training, making the performance of using ViT visual features much worse than that of using FRCNN features. The second reason is that the ViT model needs to resize the image to 224*224, which greatly reduces the resolution of the image, making the ViT feature unable to express the visual information of the original image well. #### 4.3.3. Effectiveness Of Spatial Circle Position Module In order to further prove the importance of the relative spatial position and distance between the OCR texts in the image and the effectiveness of our SCP module, we compare the LaTr method that represents the absolute spatial position of the OCR text with our SCP module. For a fair comparison, both our model and LaTr only input questions \begin{table} \begin{tabular}{c c c c c} \hline \hline \# & Model & Module & Val Acc. (\%) \\ \hline 1 & LaTr \(\circ\)[(4)] & - & 50.37\% \\ 2 & LaTr \(\circ\)[(4)] & layout embedding & 51.22\% \\ 3 & SaL \(\circ\) & - & 53.51\% \\ 4 & SaL \(\circ\) & layout embedding & 54.56\% \\ 5 & SaL \(\circ\) & SCP & 55.95\% \\ \hline \hline \end{tabular} \end{table} Table 5. Effectiveness of spatial circle position module. \(\circ\) represent only use input question and OCR text. \begin{table} \begin{tabular}{c c c c} \hline \hline \# & Model & Val Acc. 
(\%) & Val ANLS & Test ANLS \\ \hline 1 & M4C [(13)] & 38.05 & 0.472 & 0.462 \\ 2 & SA-M4C [(15)] & 42.23 & 0.512 & 0.504 \\ 3 & SMA [(11)] & - & - & 0.466 \\ 4 & CRN [(21)] & - & - & 0.483 \\ 5 & LaAP-Net [(12)] & 39.74 & 0.497 & 0.485 \\ 6 & LOGOS [(22)] & 48.63 & 0.581 & 0.579 \\ 7 & TAP [(35)] & 50.83 & 0.598 & 0.597 \\ 8 & LaTr-Base [(4)] & 58.41 & 0.675 & 0.668 \\ 9 & LaTr-Base [(4)] & 59.09 & 0.683 & 0.684 \\ 10 & LaTr\(\ddagger\)-Large [(4)] & 61.64 & 0.702 & 0.696 \\ 11 & SaL-Base & 59.74 & 0.683 & 0.673 \\ 12 & SaL\(\ddagger\)-Base & 62.29 & 0.708 & 0.697 \\ 13 & SaL-Large & 61.45 & 0.699 & 0.685 \\ 14 & SaL\(\ddagger\)-Large & **64.16** & **0.722** & **0.717** \\ \hline \hline \end{tabular} \end{table} Table 2. Results on the ST-VQA Dataset. SaL outperforms the state-of-the-art by 2.52% accuracy without extra pretraining data. \begin{table} \begin{tabular}{c c c c c} \hline \hline \# & Model & Module & TextVQA & ST-VQA \\ & & Acc.(\%) & Acc.(\%) & ANLS \\ \hline 1 & Baseline & - & 57.98 & 55.78 & 0.649 \\ 2 & Baseline & TSS & 61.55 & 58.29 & 0.667 \\ 3 & Baseline & SCP & 60.98 & 57.80 & 0.666 \\ 4 & Baseline & TSS + SCP & 62.42 & 59.74 & 0.683 \\ \hline \hline \end{tabular} \end{table} Table 3. Ablation studies of different modules on TextVQA and ST-VQA datasets. TSS and SCP present text semantic separate module and spatial circle position module, respectively. and OCR text (no visual features). As shown in Table 5, the use of layout embedding to represent the spatial position of the OCR text in the image increases the accuracy of the LaTr model in the TextVQA validation set by 0.85%. We use layout embedding in SaI, and the accuracy increased from 53.51% to 54.56%. In contrast, when the SCP module is used to represent the relative spatial position relationship between each OCR text in the image, the accuracy of SaI is increased from 53.51% to 55.95%. #### 4.3.4. Different OCR Text Separation Methods This subsection studies the effect of different OCR text separation methods. We implement two method variants: Tag and Index. Tag add \(<\)context\(>\) and OCR visual feature to the last token of every OCR text in the image to distinguish each OCR text instead of separating possible phrases in OCR texts. As for Index, it separates each OCR text by directly shifting its position id for 1-D position encoding instead of inserting \(<\)context\(>\). The Tag variant provides the model with an embedding that learns the context between OCR texts, and the Index variant tells the model that the distance between different OCR texts should be appropriately distanced. As shown in Table 6, both variants improve the performance, which further emphasizes our motivation of separating OCR texts. Compared with these variants, Our TSS achieves the best performance, indicating its superiority, as our TSS can satisfy the goals of both variants. ### Visualization To further verify the effectiveness of our method, we illustrate some visual cases from the TextVQA validation set. As shown in Figure 4 (a), since neither the m4c model nor the baseline model considers whether the OCR text has semantic relevance, OCR texts are directly processed as a sentence. Therefore, the m4c model and the baseline take the text "fine food & spirits" belonging to the same line as the answer. With the help of the TSS module, our model learned whether there is a semantic relationship between each OCR text, and gave the correct answer "fine food". 
It can be seen from Figure 4 (c, d) that for images with multiple OCR texts, our model can better model the spatial position relationships between them, and obtain the correct answer through reasoning. Specifically, in Figure 4(c), the spatial position relationship between '32' and 'NO' is closer, and the baseline uses '30' as the answer as it is closer to 'NO' according to the reading order. Figure 4 (b) can better reflect the role of our TSS module and SCP module. It can be seen that the baseline model directly stitches "dr. dr. dr. er's after" together as the answer in accordance with the reading order, while our model takes into account the text semantics and the spatial positional relationships between OCR texts to give the correct answer. More qualitative examples and error cases can be found in Appendix C. ## 5. Conclusion For the TextVQA community, we showed that the way previous works process OCR text is unreasonable: they add noise to the reasoning process of the model by forcibly splicing all OCR texts into a single sentence. Our proposed Text Semantic Separate module effectively separates different OCR texts and enables the model to learn whether there is a semantic context relationship between different OCR texts. In addition, the spatial position relationship of OCR texts is very important for the model to understand OCR texts at different positions in the image, but the 1-D position embedding used in previous work cannot reasonably express this relationship. The Spatial Circle Position module we propose better reflects the spatial position relationship between each OCR text and the other OCR texts in the image, and allows the model to locate OCR texts in the image more accurately. With these two modules, our proposed SaL model achieves SOTA performance on the TextVQA and ST-VQA datasets without any pre-training tasks. Finally, we call on the community to rethink how to use the text information in the scene more reasonably. \begin{table} \begin{tabular}{c c c c} \hline \hline \# & Model & Method & Val Acc. (\%) \\ \hline 1 & Baseline & - & 57.98 \\ 2 & Baseline & Tag & 59.74 \\ 3 & Baseline & Index & 60.07 \\ 4 & Baseline & TSS & 61.55 \\ \hline \hline \end{tabular} \end{table} Table 6. Ablation studies of different OCR text separation methods. Figure 4. Some cases of SaL compared to the baseline and M4C. SaL can distinguish whether there is a contextual relationship between OCR texts and can better model the spatial position relationship between each OCR text and other OCR texts.
2309.14865
Unsupervised Multi-Person 3D Human Pose Estimation From 2D Poses Alone
Current unsupervised 2D-3D human pose estimation (HPE) methods do not work in multi-person scenarios due to perspective ambiguity in monocular images. Therefore, we present one of the first studies investigating the feasibility of unsupervised multi-person 2D-3D HPE from just 2D poses alone, focusing on reconstructing human interactions. To address the issue of perspective ambiguity, we expand upon prior work by predicting the cameras' elevation angle relative to the subjects' pelvis. This allows us to rotate the predicted poses to be level with the ground plane, while obtaining an estimate for the vertical offset in 3D between individuals. Our method involves independently lifting each subject's 2D pose to 3D, before combining them in a shared 3D coordinate system. The poses are then rotated and offset by the predicted elevation angle before being scaled. This by itself enables us to retrieve an accurate 3D reconstruction of their poses. We present our results on the CHI3D dataset, introducing its use for unsupervised 2D-3D pose estimation with three new quantitative metrics, and establishing a benchmark for future research.
Peter Hardy, Hansung Kim
2023-09-26T11:42:56Z
http://arxiv.org/abs/2309.14865v3
# Unsupervised reconstruction of 3D human pose interactions from 2D poses alone ###### Abstract Current unsupervised 2D-3D human pose estimation (HPE) methods do not work in multi-person scenarios due to perspective ambiguity in monocular images. Therefore, we present one of the first studies investigating the feasibility of unsupervised multi-person 2D-3D HPE from just 2D poses alone, focusing on reconstructing human interactions. To address the issue of perspective ambiguity, we expand upon prior work by predicting the cameras' elevation angle relative to the subjects' pelvis. This allows us to rotate the predicted poses to be level with the ground plane, while obtaining an estimate for the vertical offset in 3D between individuals. Our method involves independently lifting each subject's 2D pose to 3D, before combining them in a shared 3D coordinate system. The poses are then rotated and offset by the predicted elevation angle before being scaled. This by itself enables us to retrieve an accurate 3D reconstruction of their poses. We present our results on the CHI3D dataset, introducing its use for unsupervised 2D-3D pose estimation with three new quantitative metrics, and establishing a benchmark for future research. Peter Hardy and Hansung Kim University of Southampton Vision Learning and Control, ECS [email protected] Multi-Person 3D Human Pose Estimation, 3D Scene Reconstruction, Unsupervised Learning ## 1 Introduction Monocular 3D human pose estimation (HPE) is known to be an ill-posed inverse problem, as multiple different 3D poses can correspond to the same 2D pose. Despite this, unsupervised algorithms have developed rapidly in tackling single-person 2D-3D HPE from a single image, where they attempt to lift a 2D skeleton to 3D via some form of reprojected 2D pose likelihood [1, 2, 3, 4, 5]. Due to fundamental perspective ambiguity, however, absolute metric depth cannot be obtained from a single view alone [3, 6]. To deal with this, unsupervised approaches centre the detected 2D pose around a root joint (typically the pelvis), while also setting the 3D prediction of the root to be a fixed unit of \(c\) from the camera. This means that instead of absolute depth, these models learn to predict the 3D depth offset from the root joint when the root is assumed to be \(c\) units from the camera. Although this works well in scenarios where we want to lift a single person, if we adapt this approach to lift multiple people simultaneously we obtain errors both in terms of the pose, as well as in the 3D distance between the two people as seen in Fig. 1. Figure 1: Errors obtained when trying to use current unsupervised 2D-3D lifting approaches to lift multiple people to 3D. In the above scenario, the root coordinate is the mid-point between each person's pelvis in 2D. We show a side view of the GT and Predicted 3D to highlight both pose prediction and 3D distance errors. Note how the person further back in the image appears to be floating and smaller in the predicted 3D when compared to the GT; this is due to the depth ambiguity in a perspective projection setting. Therefore, the aim of this paper is to obtain an accurate reconstruction of 3D human poses interacting within a shared coordinate system, relying solely on their 2D poses obtained from a single image. Additionally, in our extensive survey of prior literature, we found no prior work tackling unsupervised multi-person 2D-3D HPE from 2D poses alone. 
Therefore, we take the first leap in exploring if it is feasible to reconstruct an accurate 3D estimate of two people interacting from their 2D poses alone. ## 2 Methodology In this section, we present our unsupervised learning approach to independently lift 2D poses to 3D, combining them to a shared coordinate space and predicting the relative elevation angle of each person which is used for elevation and rotation compensation. An illustrative depiction of our approach is provided in Fig.2. Our 2D poses consisted of \(N\) keypoints, \((x_{i},y_{i})\), \(i=1...N\), where the root keypoint, located at the origin \((0,0)\), was the midpoint between the left and right hip joint (pelvis). Similar to prior work we adopted the practice of fixing the distance of the pose from the camera at a constant \(c\) units and normalising such that the average distance from the head to the root keypoint was \(\frac{1}{c}\) units in 2D, with \(c\) being set to 10 as is consistent with previous research [3, 2, 4, 1]. ### Independent Lifting and Pose Combining Our lifting networks were trained to predict the 3D depth offset (\(\hat{d}\)) from the poses root keypoint for each 2D keypoint \((x,y)\). To compute the final 3D location of a specific keypoint, \(\mathbf{x}_{i}\), we employed perspective projection, as defined by: \[\begin{split}\mathbf{x}_{i}&=(x_{i}\hat{z}_{i},y_{ i}\hat{z}_{i},\hat{z}_{i}),\\ \textbf{where}&\hat{z}_{i}&=\max(1, \hat{d}_{i}+c).\end{split} \tag{1}\] Here, \(d_{i}\) represents the depth-offset prediction made by our lifting network for keypoint \(i\). Each 2D pose obtained from an image was lifted into 3D independently. This approach effectively mitigated the 3D pose estimation errors present when lifting both poses together, as demonstrated in Fig.1. Since both 3D poses shared the same root location, they were combined into a unified coordinate system, as depicted in the 'Combined 3D Poses' section in Fig.2. For our lifting network, we adopted the LInKs algorithm, originally introduced by Hardy and Kim [1]. We extended this algorithm to lift two additional keypoints, specifically the left and right hands. This inclusion was motivated by the significance of hand movements in contact-based interactions. It is worth noting that these additional keypoints are not typically present in most 3D pose datasets, such as Human3.6M [7] and MPI-INF-3DHP [8]. ### 3D Elevation and Rotation Compensation As natural human behaviour places the subjects of interest in the centre of an image while holding the camera horizontally, and "narrow-angle" lenses typically have little or no horizontal distortion [9, 2], we can assume the horizontal displacement of the 2D poses can correspond to the horizontal displacement in 3D once scaled. However, if we naively used the elevation displacement in the 2D poses, and then applied scaling, we would obtain substantial errors, as depicted in Fig.3. These errors predominantly stem from the variable elevation angle of the camera, which when angled up or down exaggerates the perceived height differences for people in a scene. Moreover, any predicted 3D pose will be tilted to or from the camera depending on the camera's elevation angle, further increasing the error. To tackle this, we expanded the work of Wandt _et al._[2], who noticed that using a random elevation angle during 3D rotation and 2D reprojection can lead to unnatural 2D poses. 
Therefore, they sought to compensate by learning the elevation angle that would align the camera with the root of the pose prior to any further 3D transformations. However, it was not considered in their work how this predicted elevation angle could be used to calculate the elevation offset for two poses from the same scene. Therefore we include the same elevation angle prediction branch in our own lifting network while using the elevation angle prediction for the following additional steps. Let us have two predicted 3D poses, \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), along with their respective elevation angle predictions, \(\theta_{1}\) and \(\theta_{2}\). As the root of both poses is predicted to be \(c\) units from the camera, the vertical distance the camera needs to move to align with the root of pose \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), is given by \(c\cdot\tan(\theta_{1})\) and \(c\cdot\tan(\theta_{2})\) respectively. Consequently, we estimated the vertical offset \(\Delta h\) of \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) by considering the difference in vertical distance the camera has to move to align with the root of each pose: \[\Delta h=c\cdot(\tan(\theta_{1})-\tan(\theta_{2})) \tag{2}\] To solve the error of our 3D poses being tilted depending on the camera's elevation angle, we also introduced a rotational compensation for each pose around the \(x\) axis. To do this we created the rotation matrices \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\) from each pose's respective \(\theta\), where: \[\mathbf{R}=\begin{bmatrix}1&0&0\\ 0&\cos(\theta)&-\sin(\theta)\\ 0&\sin(\theta)&\cos(\theta)\end{bmatrix} \tag{3}\] In summary, we obtained the horizontal displacement of \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) from the image, and the elevation displacement was obtained via trigonometry using the predicted elevation angle of each pose. First, as both poses are centred around the same origin, we rotated them by their respective rotation matrix \(\mathbf{R}\). We then displaced both poses along the \(x\) and \(y\) axes, based on the horizontal distance in the image and the predicted vertical distance via elevation compensation. Lastly, we scaled each pose so that the lower of their two feet were aligned on the \(y\) plane. This entire process led to the complete 3D reconstruction of the scene. Figure 2: Overview of our multi-person pose estimation approach. Given two or more detected 2D poses our lifting network [1] predicts the 3D location for each joint for each pose independently. The 3D poses are then combined in their own global coordinate system. An elevation compensation approach accurately predicts the offset of each person's pelvis in a 3D setting. Lastly, each pose is scaled so that their feet are on the same ground plane which produces our final prediction. ## 3 Evaluation The Close Human Interactions (CHI3D) dataset introduced by Fieraru _et al._[10] was one of the first 3D human interaction datasets and was new at the time of writing. It contains 3D ground truth (GT) joints obtained from mocap, alongside images taken by 4 cameras. Each sequence contains 2 people in various interactions such as grabbing, pushing, or holding hands. In our extensive literature review, we found that only two previous publications mentioned the CHI3D dataset in their writing. These publications include the original dataset release, and subsequent research work led by the same authors [11]. 
Additionally, we found two other studies that submitted their results for evaluation on the CHI3D webpage [12, 13]. All of these approaches relied on imagery or video, and estimated the GHUM [14] and SMPLX [15] body models for evaluation. Furthermore, out of the 5 sequences that make up CHI3D the 3D data pertaining to sequences 1 and 5, as employed for evaluation in prior research [11, 10, 12, 13], has not been publicly available at the time of writing. As we do not use imagery or videos in our study, but just 2D poses, we are unable to train or evaluate our approach using the same protocols. Therefore we first detail our training approach as well as the four evaluation metrics we use and their definitions, followed by our results on the CHI3D dataset. ### Training Approach and Error Metrics To train our lifting models and normalising flows we use the 2D pose data in sequences 2 and 3. Sequence 4 is then used for evaluation. As the relative size of the CHI3D dataset is much smaller than traditional HPE datasets, we do not use "interesting" frames depending on the subjects' movement, but use all frames for training and evaluation. We pre-train our normalising flows for 100 epochs and train our lifting network for 40 epochs. We use the Adam Optimiser with an initial learning rate of \(2\times 10^{-4}\) which decayed exponentially by 0.95 every epoch. We used an identical architecture for our flows and lifting networks as detailed within Hardy and Kim [1]. Additionally, as our predicted poses are within a normalised 3D coordinate system, we aligned them to the GT via Procrustes alignment prior to evaluation. Note that we treat the poses within our scene as one rigid structure during alignment, meaning that if pose 1 was scaled by \(s\) and translated by \(t\) the same would happen to pose 2. The evaluation metrics we used are: * **PA-MPJPE:** PA-MPJPE is the mean per joint position error in millimetres (mm), representing the Euclidean distance between the predicted and the GT 3D keypoints. Unlike prior approaches, we report this error collectively for all poses within a scene instead of for each pose individually. This inflates the error but provides a more accurate and comprehensive depiction of the joint errors within the reconstructed 3D scene. Figure 3: Top right shows the errors obtained in both scaling and displacement when we assume that the original vertical 2D displacement of the poses (top left) accurately represents the height offset in the real world. The bottom right image shows our proposed elevation compensation approach to displacement and scaling, allowing for more accurate depth offset and scaling to be predicted. * **Scale Error (SE):** SE represents the mean difference in mm between the L2 norm of the poses within our predicted and GT scenes. In other words, it assesses how much the total size of the poses in our predicted scene deviates from the poses in the GT scene. SE offers a detailed evaluation of scaling accuracy by focusing specifically on pose size instead of the overall scene size. * **Translation Error (TE):** TE represents the L2 norm of the mean absolute translation error in mm between our predicted and GT 3D scenes. It provides insight into the accuracy of our 3D reconstruction with respect to translation. * **Root Displacement Error (RDE):** RDE quantifies the mean error in mm between the pelvis displacement in the GT scene and the pelvis displacement in our predicted 3D scene. 
The RDE metric assesses whether our predicted 3D poses are displaced by the correct amount within our reconstruction. ### Results and Limitations We show the results of our model using 2D image displacement, our elevation compensation approach for displacement, and elevation and rotation compensation in Table 1. Qualitative results can be seen in Fig. 4. Our results show that both of our changes improved performance, with the elevation compensation alone reducing the PA-MPJPE error by 23.4%. Furthermore, both of our changes reduced the error in scaling and displacement between the predicted and GT poses, showing how much the elevation angle of the camera can exaggerate the size and displacement of people within a scene. The main limitation of our approach is that it relies on an accurate 2D pose estimate to perform optimally, particularly an accurate pelvis keypoint, as the elevation angle prediction depends on this keypoint being accurate. Furthermore, we find multiple discrepancies in the CHI3D dataset between the 2D annotation, images and 3D poses. For example, when there is contact between subjects, the image and GT 3D poses showed a negligible vertical displacement in the pelvises along the \(y\) axis. However, the 2D annotation often showed a large displacement. To mitigate this, when the vertical pelvis distance between both poses in the image was 100 pixels or less, we assumed \(\theta_{1}=\theta_{2}\). To remove this constraint, in future work we plan on combining our elevation compensation approach with a contact detector. This would allow us to use the contact point as a reference when displacing and scaling the poses.
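To make the evaluation protocol of Section 3.1 concrete, the following NumPy sketch aligns the whole predicted scene (both people's joints stacked into a single array) to the ground truth with one shared similarity transform before computing PA-MPJPE. This is our reading of the rigid-scene alignment described above, assuming inputs in millimetres; it is not the released evaluation code.

```python
import numpy as np

def align_scene(pred, gt):
    """Similarity (Procrustes) alignment of the full predicted scene to the ground
    truth: the joints of both people form one (N, 3) array so a single rotation,
    scale and translation is shared by both poses (Umeyama-style closed form)."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    cov = g.T @ p / len(pred)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (p ** 2).sum() * len(pred)
    t = mu_g - scale * R @ mu_p
    return scale * pred @ R.T + t

def pa_mpjpe(pred_scene, gt_scene):
    """Mean per-joint position error (mm) over every joint of every person in the
    scene after the rigid-scene alignment above."""
    aligned = align_scene(pred_scene, gt_scene)
    return np.linalg.norm(aligned - gt_scene, axis=-1).mean()
```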
2304.00146
On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods
Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In this survey, we begin by providing an example of this with the parallels between the development trajectories of graph neural network acceleration for physical simulations and particle-based approaches. We then give an overview of simulation approaches, which have not yet found their way into state-of-the-art Machine Learning methods and hold the potential to make Machine Learning approaches more accurate and more efficient. We conclude by presenting an outlook on the potential of these approaches for making Machine Learning models for science more efficient.
Artur P. Toshev, Ludger Paehler, Andrea Panizza, Nikolaus A. Adams
2023-03-31T21:51:00Z
http://arxiv.org/abs/2304.00146v1
On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods ###### Abstract Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In this survey we begin by providing an example of this with the parallels between the development trajectories of graph neural network acceleration for physical simulations and particle-based approaches. We then give an overview of simulation approaches, which have not yet found their way into state-of-the-art Machine Learning methods and hold the potential to make Machine Learning approaches more accurate and more efficient. We conclude by presenting an outlook on the potential of these approaches for making Machine Learning models for science more efficient. Machine Learning, Neural Networks, Classical Numerical Methods ## 1 Introduction Recent years have seen an ever-larger push towards the application of Machine Learning to problems from the physical sciences such as Molecular Dynamics (Musaelian et al., 2022), coarse-graining (Wang et al., 2022), the time-evolution of incompressible fluid flows (Wang et al., 2020), learning governing equations from data (Brunton et al., 2016; Cranmer et al., 2020), large-scale transformer models for chemistry (Frey et al., 2022), and the acceleration of numerical simulations with machine learning techniques (Kochkov et al., 2021). All of these algorithms build on the infrastructure underpinning modern Machine Learning in combining state-of-the-art approaches with a deep understanding of the physical problems at hand. This begs the question whether there are more insights and tricks hidden in existing, classical approaches in the physical sciences, with the potential to make not only the algorithms for a particular problem class more efficient, but perhaps Machine Learning in general. Inspired by recent theoretical advances in the algorithmic alignment between Graph Neural Networks (GNNs) and dynamic programming (Xu et al., 2020; Velickovic et al., 2020), we surmise that the extension of this analysis to classical PDE solvers, and the physical considerations they incorporate, enables us to learn from the development trajectory in the physical sciences to inform the development of new algorithms. In this workshop paper we make the following contributions towards this goal: * A comparison of the development of graph-based learned solvers, and the proximity of their development ideas to the development of Smoothed Particle Hydrodynamics starting from Molecular Dynamics in the physical sciences. * An analysis of classical numerical solvers, and their algorithmic features to inform new ideas for new algorithms. ## 2 MeshGraphNets and its relation to classical methods An excellent example of the parallels between the development of Machine Learning methods for the sciences and the development of classical approaches is the recent development of graph-based simulators. When we relate their inherent assumptions and techniques to the development of particle-based methods, starting with Molecular Dynamics, a great many parallels arise. Figure 1: Characterization of the physical scales the example methods of section 2 operate on. The Graph Network-based approaches MeshGraphNets, and Graph Network-based Simulators are placed in relation to their classical counterparts. 
For an impression of the scales the classical methods operate on, and where graph-based simulators are placed in relation, please refer to Figure 1. In this section, we analyze the structure of two of the first mature learned solvers (GNS (Sanchez-Gonzalez et al., 2020) and MeshGraphNets (Pfaff et al., 2021)) and how these two approaches align with three of the classical methods (MD, FPM, SPH). We select these learned algorithms because they were among the first of their kind to show promising results on real world data. Also, GNS is trained directly on SPH data which further motivates an algorithmic comparison. ### Graph Neural Network-based Approaches to Simulation The Graph Network (GN) Battaglia et al. (2018) is a framework that generalizes graph-based learning and specifically the Graph Neural Network (GNN) architecture by Scarselli et al. (2008). However, in this work, we use the terms GN and GNN interchangeably. Adopting the Graph Network formulation, the main design choices are the choice of update function and aggregation function. For physics-informed modeling this gives us the ability to blur the line between classical methods and graph-based methods by including biases similar to CNNs for non-regular grids, as well as encoding physical laws into our network structure with the help of spatial equivariance/invariance, local interactions, the superposition principle, and differential equations. E.g. translational equivariance can easily be incorporated using relative positions between neighboring nodes, or the superposition principle can be encoded in graphs by using the summation aggregation over the representation of forces as edge features. Viewing MeshGraphNets Pfaff et al. (2021) from a physics-motivated perspective, we argue that MeshGraphNets originate from Molecular Dynamics. To present this argument in all its clarity, we have to begin with its predecessor: the Graph Network-based Simulators (GNS) Sanchez-Gonzalez et al. (2020). #### 2.1.1 Graph Network-based Simulators The Graph Network-based Simulator builds on the encoder-processor-decoder approach, where Graph Networks are applied iteratively on the encoded space. Proving GNS' ability to simulate systems with up to 85k particles, their approach can be summarized as follows. Let \(X^{t}\) denote the states of a particle system at time \(t\). \(X\) might contain the position, velocity, type of particle, or any other physical information specific to a material particle. A set of \(k+1\) subsequent past states \[\mathbf{X}^{t_{0:k}}=\left\{X^{t_{0}},X^{t_{1}},\ldots,X^{t_{k}}\right\}\] is given to the network. The core task is to then learn the differential operator \(d_{\theta}\), which approximates the dynamics \[d_{\theta}:X^{t_{k}}\longrightarrow Y^{t_{k}},\quad X^{t_{k+1}}=\text{Update} \left\{X^{t_{k}},d_{\theta}\right\}.\] Here, \(Y^{t}\) is the acceleration, which is used to obtain the next state \(X^{t+1}\) via integration using a deterministic "Update" routine, e.g. a semi-implicit Euler scheme. The differential operator \(d_{\theta}\) is learned with the encoder-processor-decoder approach where the encoder takes in 1 to 10 previous states, and encodes them into a graph. This graph consists of nodes - latent representations of the states \(X\) - and edges - for each pair of particles closer than some cut-off radius there is a latent edge vector, which initially contains the distance or displacement information. 
The processor is then a multilayer Graph Network, in which the exact number of message-passing layers is a hyperparameter. The result on the graph-space is then decoded back to physical space. The loss is computed as the mean-squared error between the learned acceleration, and the target acceleration. While the approach showed promising results for fluid simulations, and fluid-solid interactions, it struggled on deforming meshes, such as thin shells. #### 2.1.2 MeshGraphNets To better represent meshes, MeshGraphNets (Pfaff et al., 2021) supplemented the Graph Network simulation with an additional set of edges to define a mesh, on which interactions can be learned. Closely related to the superposition principle in physics, the principle of splitting a complicated function into the sum of multiple simpler ones, the interaction function is split into the interaction of mesh-type edges and collision-type edges. Following the widespread use of remeshing in engineering, MeshGraphNets have the ability to adaptively remesh to model a wider spectrum of dynamics. Mesh deformation without adaptive remeshing would lead to the loss of high frequency information. The last major improvement of MeshGraphNets over GNS is extending the output vector \(Y\) with additional components to also predict further targets, such as the stress field. In contrast to the Graph Network-based Simulators, the input here includes a predefined mesh and the output is extended to contain dynamical features like pressure. ### Similarities between the Development Trajectories of Particle-based Methods and Graph Neural Network-based Approaches to Simulations Beginning with Molecular Dynamics, the earliest and most fundamental particle-based method, we will now outline the similarities between the development trajectories, and the derivations inherent to them, of MeshGraphNets and the development of particle-based methods in physics. #### 2.2.1 Similarities to Molecular Dynamics Molecular Dynamics is a widely used simulation method which generates the trajectories of an N-body atomic system. For the sake of intellectual clarity we restrict ourselves to its simplest form, the unconstrained Hamiltonian mechanics description. The construction of connections and edges is one of the clearest similarities between Molecular Dynamics and MeshGraphNets. Both can potentially have a mesh as an input, and both compute the interactions based on spatial distances up to a fixed threshold. Iterative updates, or the repeated application of Graph Network layers in the MeshGraphNets, extend the effective interaction radius beyond the immediate neighbourhood of a particle such that all particles can be interacted with. Both approaches are at the same time translationally invariant w.r.t. accelerations, and permutation equivariant w.r.t. the particles, and use a symplectic time-integrator. While there are theoretical reasons for this choice in Molecular Dynamics, it is a choice of convenience in the context of learned approaches. The main difference between the two approaches lies in the computation of the accelerations. In Molecular Dynamics the derivative of a predefined potential function is evaluated, whereas a learned model is used in the Graph Network-based Simulators. #### 2.2.2 Similarities to Smoothed Particle Hydrodynamics A closer relative to the Graph Network-based Simulators is the Smoothed Particle Hydrodynamics algorithm originating from astrophysics (Lucy, 1977; Gingold and Monaghan, 1977). 
Smoothed Particle Hydrodynamics discretizes the governing equations of fluid dynamics, the Navier-Stokes equations, with kernels such that the discrete particles follow Newtonian mechanics with the equivalent of a prescribed molecular potential. Both Smoothed Particle Hydrodynamics and Graph Network-based Simulators obey the continuum assumption, whereas Molecular Dynamics presumes a discrete particle distribution, and is constrained to extremely short time intervals. #### 2.2.3 The Differences Summarizing the key differences between the closely related approaches, Molecular Dynamics and Smoothed Particle Hydrodynamics both take one past state \(X^{t}\) as an input, whereas Graph-based approaches require a history of \(k\) states \(\mathbf{X}^{t_{0:k}}\). Molecular Dynamics encodes geometric relations in the potential, MeshGraphNets encode the geometry in the mesh, while there exists no direct way for inclusion in the other two approaches. Molecular Dynamics and Smoothed Particle Hydrodynamics explicitly encode physical laws, whereas for learned methods all these parameters and relations have to be learned from data. A key advancement of MeshGraphNets, coming from the Graph Network-based Simulators, is the explicit superimposition of solutions on both sets of edges, which far outperforms the implicit distinction of interactions. This approach is equally applicable to all conventional particle-, and mesh-based simulations in engineering. Borrowing the Fluid Particle Model from fluid mechanics, we can subsequently connect the classical methods with the learned approaches by viewing meshes and particles as the same entity under the fluid-particle paradigm. ### 2.4 Connecting MeshGraphNets to Graph Neural Network-based Simulations with the Fluid Particle Model The Fluid Particle Model (Espanol, 1998) is a mesoscopic Newtonian model, as seen in Figure 1, situated on an intermediate scale between the microscopic Molecular Dynamics and the macroscopic Smoothed Particle Hydrodynamics. It views particles from the point of view of a Voronoi tessellation of the molecular fluid, see Figure 3. The Voronoi tessellation coarse-grains the atomistic system to a pseudoparticle system with ensembles of atoms in thermal equilibrium summarized as pseudoparticles. This pseudoparticle construction is closely related to the MeshGraphNets construction, where each mesh node also corresponds to the cell center of a simulated pseudoparticle. Figure 2: Illustration of the MeshGraphNets scheme with a decomposition of its algorithm into the encoder, processor, and decoder (Image source: Pfaff et al. (2021)). Smoothed Particle Hydrodynamics as well as Dissipative Particle Dynamics (Hoogerbrugge and Koelman, 1992) also both operate on pseudoparticles. All of these approaches share that they have to presume a large enough number of atoms per pseudoparticle to be viewed as a thermodynamic system. Especially in Dissipative Particle Dynamics one injects Gaussian noise to approximate a physical system, just as is done for Graph Network-based Simulators and MeshGraphNets to stabilize the training. We surmise that this injection of noise into graph-based simulators amounts to forcing the learned model to predict the true output despite the noisy inputs, hence leading the model to converge to the central limit of the estimated conditional distribution of the acceleration.
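As a concrete illustration of this noise injection, the following is a minimal training-step sketch for a generic learned simulator: the inputs are corrupted with Gaussian noise while the target stays clean. The noise scale, the single-state perturbation (GNS and MeshGraphNets accumulate a random walk over the whole input history and adjust the targets accordingly), and the mean-squared loss are simplifying assumptions, not the published hyperparameters.

```python
import torch

def noisy_training_step(model, positions, target_acceleration, sigma=1e-3):
    """One training step with GNS/MeshGraphNets-style input corruption: the learned
    simulator sees noisy particle positions but must still predict the clean target
    acceleration, pushing it towards the mean of the conditional distribution."""
    noise = sigma * torch.randn_like(positions)   # sigma is the manually tuned noise scale
    pred_acc = model(positions + noise)           # model sees corrupted inputs ...
    loss = torch.mean((pred_acc - target_acceleration) ** 2)  # ... but must hit the clean target
    return loss
```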
The construction of Voronoi tessellations dictates that the size of the cells is inversely proportional to the variation in their properties, hence leading to more sampling in regions with high property variation. The very same argument, with curvature as a heuristic, is used to derive the mesh refinement of the MeshGraphNets algorithm. ## 3 Relation to Numerical Schemes After the recent success of Neural ODE solvers (Chen et al., 2018), it has taken almost four years to start considering Neural PDEs in general (Brandstetter et al., 2022). By definition, PDEs involve derivatives with respect to multiple variables, whereas ODEs involve only one. As a result, typical numerical approximations of PDEs are much more diverse depending on the peculiarities of the PDE of interest. Typical PDE solvers operating on grids (Eulerian description) include Finite Difference Methods (FDM), Finite Volume Methods (FVM), and Finite Element Methods (FEM), whereas other methods follow the trajectory of irregularly spaced points (Lagrangian description) like Smoothed Particle Hydrodynamics (SPH), Fluid Particle Model (FPM), Dissipative Particle Dynamics (DPD) (Hoogerbrugge and Koelman, 1992), Volume of Fluid Method (VOF) (Hirt and Nichols, 1981), Particle-in-Cell (PIC) (Brackbill and Ruppel, 1986), Material Point Method (MPM) (Sulsky et al., 1993), Discrete Element Method (DEM) (Cundall and Strack, 1979), and Meshless FEM (MFEM). Finally, there are also approaches to solving PDEs without any discretization as in Sawhney et al. (2022). Each of these methods works best for a specific type of PDE, boundary/initial conditions, and parameter range. In this section we compare concepts from these classical methods to state-of-the-art learned algorithms. ### Data augmentation with white noise Two popular papers corrupting training inputs with additive Gaussian noise include Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021), as described before. The goal of this approach is to force the model to deal with accumulating noise leading to a distribution shift during longer rollouts. Thus, the noise acts as an effective regularization technique, which in these two papers allows for much longer trajectories than seen during training. However, one major issue with this approach is that the scale of the noise is represented by two new hyperparameters, which have to be tuned manually (Pfaff et al. (2021), Appendix 2.2). A perspective on noise injection coming from the physical sciences is to see it through the lens of mesoscopic particle methods like the Fluid Particle Model and Dissipative Particle Dynamics, in which the noise originates from the Brownian motion at small scales. Although GNS and MeshGraphNets operate on scales too large for the relevance of Brownian motion, the Fluid Particle Model provides a principled way of relating particle size and noise scale. The underlying considerations from statistical mechanics might aid a better understanding of the influence of training noise and in turn make approaches based on it more efficient. ### Data augmentation by multi-step loss Another way of dealing with the distribution shift is by training a model to correct its own mistakes via some form of a multi-step loss, i.e. during training a short trajectory is generated and the loss is summed over one or multiple past steps (Tompson et al., 2017; Um et al., 2020; Ummenhofer et al., 2020; Brandstetter et al., 2022). 
The results on this vary with some researchers reporting better performance than with noise injection (Brandstetter et al., 2022), while others report the opposite experience (Sanchez-Gonzalez et al., 2020). Looking at classical solvers for something related to the multi-step loss, it is natural to think of adaptive time integrators used by default in ODE routines like ODE45 in Matlab (Dormand and Prince, 1980). Adaptive integrators work by generating two short trajectories of the same time length, but with different step sizes, and as long as the outcome with larger steps differs within some bounds, then the step size is increased. This guarantees some level of long-term rollout stability just as attempted with the multi-step loss, but the Figure 3: Single points (left), Delaunay triangulation (middle), and Voronoi diagram (right)(Image source: Rokicki and Gawell (2016) multi-step loss forces the network to implicitly correct for future deviations of the trajectory without actually changing the step size. The adaptive step-size idea has gained popularity in ML with the introduction of Neural ODEs (Chen et al., 2018). ### Equivariance bias Numerical PDE solvers come in two flavors: stencil-based and kernel-based, both of which are equivariant to translation, rotation, and reflection in space (Euclidean group equivariance), as well as translation in time (by Noether's theorem). These properties arise from the conservation of energy, which is a fundamental principle in physics. While equivariance, with respect to the Euclidean group, has been around for a couple of years on grids (Weiler et al., 2018), its extension to the grid-free (Lagrangian) setting is gaining popularity just recently (Brandstetter et al., 2021; Schutt et al., 2021; Batzner et al., 2022; Musaelian et al., 2022). Here, we talk about equivariance in terms of a neural net operation on vectors, which rotates the output exactly the same way as the input is rotated, as opposed to working with scalar values, which is called an invariant operation, e.g. SchNet (Schutt et al., 2017). The performance boost by including equivariant features is significant and reaches up to an order of magnitude compared to invariant methods (Batzner et al., 2022). ### Input multiple past steps Another common performance improvement in neural net training is observed by stacking multiple past states as an input (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021; Brandstetter et al., 2022). One argument supporting this approach is overfitting prevention by inputting more data (Pfaff et al., 2021). Looking at conventional solvers we very rarely see multiple past states as input and this is done for materials with memory property, e.g. rheological fluids or "smart" materials. Thus, providing multiple past states implicitly assumes that there is some nonphysical non-Markovian retardation process, which in most cases does not correspond to the physics used for training data generated. The only physical justification of a multi-step input we are aware of arises if we train the model to learn a coarse-grained representation of the system. Li et al. (2015) showed that explicit memory effects are necessary in Dissipative Particle Dynamics for the correct coarse-graining of a complex dynamical system using the Mori-Zwanzig formalism. Given that papers like GNS and MeshGraphNets do not make use of coarse-graining, it is questionable why we observe improvement in performance and whether this trick generalizes well to different settings. 
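For completeness, a minimal sketch of how such a history-based input is typically assembled: the last \(k\) finite-difference velocities are concatenated into each node's feature vector. The window length and feature layout here are illustrative assumptions, not the exact choices of the cited papers.

```python
import numpy as np

def node_features_from_history(positions, k=5):
    """Build per-particle input features from the k most recent states.

    positions: (t, n, d) trajectory of n particles in d dimensions, with t >= k + 1
    returns:   (n, k * d) finite-difference velocities, flattened per node
    """
    recent = positions[-(k + 1):]             # (k+1, n, d) most recent positions
    velocities = np.diff(recent, axis=0)      # (k, n, d) one velocity per past step
    return velocities.transpose(1, 0, 2).reshape(positions.shape[1], -1)
```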
### Spatial multi-scale modeling Conventional multi-scale methods include, among others, all types of coarse-graining, Wavelet-based methods (e.g. Kim et al. (2008)), and the Fast Multipole Method (Rokhlin, 1985). Graph Networks seem especially suitable for tasks like coarse-graining as they are designed to work on unstructured domains, opposed for example to approaches using Wavelet or Fourier transforms, which require regular grids. GNNs seem especially promising with many applications in Molecular Dynamics (Husic et al., 2020) and engineering (Lino et al., 2021; Valencia et al., 2022; Migus et al., 2022; Han et al., 2022). It is particularly interesting to see works like Migus et al. (2022) inspired by multi-resolution methods and Valencia et al. (2022) resembling geometric coarse-graining by weighted averaging. All these methods rely on the fact that physical systems exhibit multi-scale behavior, meaning that the trajectory of a particle depends on its closest neighbors, but also on more far-reaching weaker forces. Splitting the scales and combining their contributions can greatly reduce computation. One of the great advantages of GNNs is their capability to operate on irregularly spaced data, which is necessary for most coarse-graining approaches. ### Locality of interactions In most cases, graph-based approaches to solving PDEs define the edges in the graph, based on an interaction radius. Methods using the Graph Network architecture (Battaglia et al., 2018) effectively expand the receptive field of each node with every further layer, in the extreme case resulting in the phenomenon known as over-smoothing. But if we keep the number of layers reasonably low, the receptive field will always be larger compared to a conventional simulation with the same radius. Until recently, it was thought that a large receptive field is the reason for the success of learned simulators, but Musaelian et al. (2022) question that assumption. In this paper, an equivariant graph network with fixed interaction neighbors performs on a par with the very similar Graph Network-based method NguIP (Batzner et al., 2022) on molecular property prediction tasks. This finding supports the physics-based argument about the locality of interactions. ### Mesh vs Particle GNN-based simulation approaches offer the flexibility to combine particles and meshes out-of-the-box. If we then train one neural network to reproduce the results of a Finite Element solution on a mesh and Smoothed Particle Hydrodynamics solution over particles, this is where learned methods really shine. This was achieved with the MeshGraphNets framework (Pfaff et al., 2021). We argue that the transition from particles to meshes is a direct result of a coarse-graining procedure using Voronoi tessellation, which is related to the derivation of the Fluid Particle Model. The main assumption in this derivation is that each mesh cell should be small enough that it can be treated as being in equilibrium - similar to the assumption made when discretizing a domain with points. ### Stencils We talk about stencils when operating on regular grids. Although this is not the main strength of GNNs, there are some useful concepts from stencil-based simulations, which are conventionally nontrivial to generalize to particles, but can easily be adapted with GNNs. Brandstetter et al. (2022) state that their paper is motivated by the observation that the Weighted Essentially Non-Oscillatory scheme (WENO) (Shu, 1998) can be written as a special case of a GNN. 
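To illustrate the flavor of that observation, the toy sketch below phrases a first-order upwind stencil for the linear advection equation as a gather over directed edges of a line graph; this is our own simplified example, not the WENO construction of the cited paper.

```python
import numpy as np

def upwind_step_as_message_passing(u, c, dx, dt):
    """Advance u_t + c u_x = 0 by one step with a first-order upwind stencil,
    written as message passing on a periodic 1D line graph (assumes c > 0)."""
    n = len(u)
    senders = np.arange(-1, n - 1) % n     # each node receives from its left neighbour
    messages = u[senders]                  # edge "messages" carry the neighbour value
    flux = c * (u - messages) / dx         # per-node aggregation reproduces the stencil
    return u - dt * flux                   # node update
```

A learned solver can be viewed as replacing the fixed aggregation and update rules above with trainable functions.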
Another work, inspired by the general idea of the Finite Volume Method, looking at the fluxes at the left and right cell boundary, was developed by Praditia et al. (2021). Inspired by the Finite Element Method, finite element networks were introduced by weighting the contributions of neighbouring cells by their volume, as is done in Finite Element analysis (Lienen and Gunnemann, 2022). ### Integration schemes In addition to the time-step adaptation mentioned in relation to multi-step losses, another topic investigated in literature is the order of the integrator (Sanchez-Gonzalez et al., 2019). This work points to the fact that higher order integrators lead to much better robustness, with respect to the choice of an integration time step. Another interesting question discussed in this paper is whether symplectic integrators improve performance of a learned Hamiltonian neural net. The answer seems to be that the symplectic property is much less important than the order of the integrator, which is in contrast with conventional Molecular Dynamics integrators, which work extremely poorly if not symplectic. ## 4 Untapped Ideas from Classical Approaches In this subsection, we introduce potentially useful ideas from conventional differential equation solvers in science, which to the best of our knowledge have not been adapted in main-stream learned PDE solvers yet. Figure 4 is a collection of these concepts in the form of a word cloud. ### Noise during inference Adding noise to the inputs during training has proven to be useful, but has not been done during testing. One idea would be to use noise during inference to emulate Brownian motion. And one further topic we already mentioned is the relation of the noise scale to particle mass. From mesoscopic methods and the Fluctuation-dissipation theorem we would expect the noise to scale as \(1/\sqrt{m}\) if a coarser representation is used. ### Multiple time steps Learned Molecular Dynamics simulations stick to using only the last past state and doing the same for larger-scale simulations might partially explain the unphysical behavior of the GNS method demonstrated in Klimesch et al. (2022). For coarse-graining though a longer history might be helpful. ### Feature Engineering From the Volume of Fluid Method we could adapt the idea of including features corresponding to the ratio of different material, if we are interested in simulating multi-material flows. The Discrete Element Method suggests encoding much more features like rotational degree of freedom (in magnetic field or simulating friction), stateful contact information (contact simulations), and often complicated geometry (for non-spherical, e.g. granular particles). Inspired by shock-capturing methods used routinely for the solution of nonlinear fluid dynamics problems (Ketcheson et al., 2020), one could think of further hand-tuned node features indicating the presence of a shock. Figure 4: Overview of the currently under-utilized ideas discussed in Section 4 for Machine Learning approaches for the physical sciences. ### Particles and Grid There are a number of methods using the best of both particle and grid worlds like the Particle-in-Cell method and its successor Material Point Method. The idea of updating the node features and from time to time also based on the grid cell they belong to, might speed up simulations and is worth exploring. 
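A minimal sketch of what such a particle–grid coupling could look like is given below (the cell size, averaging rule, and feature choice are our own assumptions): particles are binned into grid cells and each particle additionally receives the mean feature of its cell, loosely mimicking the Particle-in-Cell transfer step.

```python
import numpy as np

def add_cell_mean_feature(positions, features, cell_size):
    """Append to each particle the mean feature of the grid cell it falls into.

    positions: (n, d) particle positions
    features:  (n, f) per-particle features
    returns:   (n, 2f) original features concatenated with their cell averages
    """
    cells = np.floor(positions / cell_size).astype(np.int64)        # (n, d) integer cell ids
    _, inverse = np.unique(cells, axis=0, return_inverse=True)      # particle -> cell index
    inverse = inverse.reshape(-1)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, features.shape[1]))
    np.add.at(sums, inverse, features)                               # scatter-add per cell
    counts = np.bincount(inverse, minlength=n_cells)[:, None]
    cell_mean = sums / counts
    return np.concatenate([features, cell_mean[inverse]], axis=1)
```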
Now, if we restrict ourselves to regularly spaced particles, respectively grid cells, our solver toolkit becomes much richer with methods like the Fast Fourier Transform (which has already seen great success with the Fourier Neural Operator (Li et al., 2020)) and the Wavelet Transform (as used in the PDE-Net (Long et al., 2018)) at our disposal, as mentioned above in the context of multi-scale modeling. ### Integrator Taking the perspective of Neural ODEs (Chen et al., 2018) with the neural network learning the perfect acceleration, one could arguably expect the next evolutionary step to be the combination of learned integrators with adaptive integration schemes. Incorporating insights from classical numerical methods, one should possibly seek to define an equivalent stability criterion for learned methods as the Courant-Friedrichs-Lewy (CFL) condition for classical numerical methods. This would in turn aid in bounding the time, and subsequently explore time steps smaller than the critical value. ## 5 Conclusion & Discussion In this article, we claim that studying classical PDE solvers and their past development offers a direct path to the acceleration of the development of learned PDE solvers. Examples in literature show that biasing a learned solver by means of architectural design, data augmentation, feature engineering, etc. incorporating existing knowledge from classical solvers can greatly improve performance, explainability, and data-efficiency. In Section 2 we show how this development has already subconsciously played out in the development of graph-based learned solvers following the same development as particle-based methods such as Molecular Dynamics, Smoothed Particle Hydrodynamics, and the Fluid-Particle Model. This investigation is revisited for algorithmic comparisons and illustrations of the limitations of classical solvers later on. In Section 3 we then focus on ideas from classical approaches which have found their way into recent learned solver literature, and discuss the physical interpretation of these developments. In the discussed examples, the included physically motivated biases are used to improve robustness w.r.t. hyperparameter choices, lower errors, and speed-up inference. Section 4 takes a glimpse into a possible version of the future with ideas which have, to the best of our knowledge, not yet been integrated in learned methods. Given the elaborate history of classical methods, and the short, but highly dynamic history of learned approaches, there is still a lot of potential to be realized within the latter by incorporating insights from the former. Going further, many exciting problems in the physical sciences, such as simulations involving multiple spatial scales, multiple temporal scales, non-Newtonian fluids, or phase-changing materials, are heavily data-constrained and will hence have to rely on insights from classical methods for Machine Learning approaches to become feasible.
2309.16900
Prospective of Zr$^{3+}$ ion as a THz atomic clock
We demonstrate that the transition between the fine-structure levels of the ground state of triply ionized zirconium (Zr IV) is suitable for a terahertz (THz) atomic clock. Its transition frequency is about 37.52 THz; the transition is mainly of magnetic dipole (M1) character and is accessible with readily available lasers. We suggest considering the stable even isotopes of Zr and the $M_J= \pm 1/2$ sublevels (i.e., the $|4D_{3/2},M_J=\pm 1/2\rangle \rightarrow |4D_{5/2},M_J=\pm 1/2\rangle$ clock transition) for experimental advantage. By performing the necessary calculations, we have estimated possible systematics due to blackbody radiation, ac Stark, electric quadrupole and second-order Zeeman shifts, along with shifts due to the second-order Doppler effect. The proposed THz atomic clock can be very useful in quantum thermometry and frequency metrology.
Jyoti, A. Chakraborty, Yan-mei Yu, Jingbiao Chen, Bindiya Arora, B. K. Sahoo
2023-09-28T23:58:30Z
http://arxiv.org/abs/2309.16900v1
# Prospective of Zr\({}^{3+}\) ion as a THz atomic clock ###### Abstract We demonstrate transition between the fine structure splitting of the ground state of triply ionized zirconium (Zr IV) is suitable for a terahertz (THz) atomic clock. Its transition frequency is about 37.52 THz and is mainly guided by the magnetic dipole (M1) transition and can be accessible by a readily available laser. We suggest to consider stable even isotopes of Zr and \(M_{J}=\pm 1/2\) sublevels (i.e. \(|4D_{3/2},M_{J}=\pm 1/2\rangle\rightarrow|4D_{5/2},M_{J}=\pm 1/2\rangle\) clock transition) for the experimental advantage. By performing necessary calculations, we have estimated possible systematics due to blackbody radiation, ac Stark, electric quadrupole and second-order Zeeman shifts along with shifts due to the second-order Doppler effects. The proposed THz atomic clock can be very useful in quantum thermometry and frequency metrology. ## I Introduction Atomic clocks are used to define the unit of time with very high precision such that they can lose only one second over several billion years. They also serve as important tools to probe much fundamental physics with applications ranging from probing variation of fundamental physical constants [1], relativistic geodesy [2; 3], gravitational-wave detection [4; 5; 6], dark matter search [7] and even beyond the Standard Model particle physics [8]. Most of the existing atomic clocks are based on either neutral atoms or singly charged ions, and operate both in microwave and optical domains. Singly charged ions are apt for carrying out many precise experiments with the advent of many cooling and trapping techniques. In fact single trapped \({}^{171}\)Yb\({}^{+}\)[9] and Al\({}^{+}\) ions [10] now provide clock frequencies with fractional uncertainties below \(10^{-19}\). Ions are relatively easier to control using electromagnetic radiation for performing high precision measurements. Atomic clocks operating at the microwave and optical frequencies have advantages in their own perspectives. Frequencies of these clocks differ by several orders of magnitude, thus they can be applied in a diverse range of fields. From this point of view, it is desirable to attain atomic clocks operating in between the microwave and optical clock frequencies like terahertz (THz). Recent advancements in science and technology have demonstrated applications of various ingenious modes of THz electromagnetic radiations in sensing, spectroscopy and communication [11] and for the analysis of interstellar matter [12]. The THz spectra have long been studied in the fields of astronomy and analytical science [11]. The implementation of absolute frequency standards in THz domain considering fine structure transition lines of Mg and Ca metastable triplet states was first proposed by Strumia in 1972 [13]. The salient feature of THz-ranged clock transition is that it is highly sensitive to blackbody radiations (BBR) and hence, can be used in quantum thermometers, especially in remote-sensing satellites [14]. Major applications of THz frequency standard lie in new generations of navigation, sensing, and communication systems, especially when the GPS timing service becomes incompetent [15]. In addition, THz clocks are also crucial in frequency calibration of various commercial THz instruments such as detectors, sources and high-resolution THz spectrometers [16]. 
Switching from optical frequency framework towards THz technology to study astronomical phenomena has also become evident because 98% of the photons emitted since the Big Bang and one-half of the total luminosity of our galaxy comprise of THz radiations [17; 18]. Moreover, the implementation of THz clocks can play a vital role in the investigation of the unexplored universe as well as the instrumentation of astronomical objects, especially astronomical interferometers and new-generation space telescopes. Even though the precision of optical clocks is far better than THz frequency metrology, still the clear insights of star formation and decay, the thermal fluctuations in environment due to immense release of green house gases [17] also requires the realization of THz frequency standards. Recently, several transitions lying in THz domain have drawn attention to be considered for atomic clocks. The generation of tunable THz optical clock was demonstrated by Yamamoto et al. [19]. Further, magic wavelengths of THz clock transitions in alkaline-earth atoms including Sr, Ca, and Mg have been identified between metastable triplet states by Zhou et al. [20]. The ac Stark shifts and magic wavelengths of THz clock transitions in barium have been calculated by Yu et al [21]. Two different molecular clocks probing carbonyl sulphide based on sub-THz frequency standard have been realized by Wang et al. [22]. In 2019, Kim et al. analyzed a miniature time-keeping device with high affordability in chip-scale terahertz carbonyl sulphide clock [15] whereas THz-rate Kerr microresonator optical clockwork based on silicon nitride has been performed by Drake et al. [23]. Recently, Leung et al. [24] constructed a molecular clock using vibrational levels of Sr\({}_{2}\) and achieved a systematic uncertainty at the level of \(10^{-14}\). In view of this, here, we propose a THz clock based on the M1 transition occurring between the \(4D_{3/2}\) and \(4D_{5/2}\) states of Zr\({}^{3+}\) ion. To support it, we have estimated major systematic shifts in the proposed clock transition. The outline of the paper is as follows: Sec. II presents the detailed proposal for our THz ion clock, Sec. III demonstrates the method of evaluation of atomic wave functions and matrix elements, Sec. IV presents electric dipole (E1) and magnetic dipole (M1) polarizabilities used for estimating systematic effects, Sec. V discusses the dominant systematic shifts, while the conclusion of the study is given in Sec. VI. Unless we have stated explicitly, physical quantities are given in atomic units (a.u.). ## II Schematic of THz \({}^{90}\)Zr\({}^{3+}\) clock Using various spectroscopic properties reported in our previous work [25], we find the wavelength of the \(4D_{3/2}\)-4\(D_{5/2}\) transition of Zr\({}^{3+}\) is about \(\lambda_{0}=7.9955\)\(\mu m\) corresponding to transition frequency 37.52 THz. Also, the lifetime of the \(4D_{5/2}\) state is reported to be \(\sim 47.38\) s [26]. These two conditions are sufficient enough to consider the \(4D_{3/2}\)-\(4D_{5/2}\) transition in Zr\({}^{3+}\) as a possible clock transition. Among several isotopes of Zr, we find \({}^{90}_{40}\)Zr would be more appropriate to be considered in the experiment. It is because this isotope has more than 51% natural abundance [27] and zero nuclear spin (\(I\)) and hence, cannot introduce additional systematic effects when \({}^{90}\)Zr\({}^{3+}\) interacts with the external magnetic field. 
Moreover, it can be trapped using electron beam ion traps [28; 29] and electron cyclotron resonance accelerators [30] in the laboratory. There are at least two ways one would be able to measure the transition frequency of the \(4D_{3/2}\)-4\(D_{5/2}\) transition in \({}^{90}\)Zr\({}^{3+}\). One can follow the quantum logic principle by trapping this ion simultaneously with another ion like Mg\({}^{+}\) or Ca\({}^{+}\) in the similar line with the \({}^{27}\)Al\({}^{+}\) ion clock to carry out the clock frequency measurement owing to the fact that they have similar charge to mass ratio [31; 10]. The schematic diagram for the other possible set up is illustrated in Fig. 1. As can be seen in this figure, the \(4D_{5/2}\) state has longer lifetime, so the desirable accumulation of the atomic population can be achieved in this state. This can lead to a favourable population inversion condition between the \(4D_{5/2}\) and \(4D_{3/2}\) states. In such case, electrons can be pumped first from the ground state to the excited \(5S_{1/2}\) state using a laser of 261.38 nm, which is far detuned to the clock transition at 7.9955 \(\mu m\). To acquire population inversion at the \(4D_{5/2}\) metastable state via spontaneous electric-dipole emission, it is aspired to again pump electrons from the \(5S_{1/2}\) state to the \(5P_{3/2}\) state using a second-stage laser of 216.44 nm. It should be noted that it would have been desirable to pump electrons directly from the ground to the \(5P_{3/2}\) state, but it is difficult to find a suitable laser to carry out this process. Our estimations suggest that lifetime of the \(5P_{3/2}\) state is about 0.61 ns with 64% decay rate to the \(4D_{5/2}\) state, which would be enough to carry out the measurement of the clock frequency for an atomic clock experiment. A small population (\(\sim 28\%\)) of the \(5S_{1/2}\) state due to the decay of electrons from the \(5P_{3/2}\) state can be managed with the help of the applied pump laser of 216.44 nm. Nonetheless, decay from the \(5S_{1/2}\) state to the \(4D_{5/2}\) state is highly forbidden, so it will not have much impact on the clock frequency measurement. Thus, it is feasible to acquire population inversion between the \(4D_{3/2}\) and \(4D_{5/2}\) states via the M1-decay channel for observing the clock frequency of 37.52 THz. To achieve high stability and accuracy in this proposed THz clock scheme, usage of a feedback loop to control the energy difference between the \(4D_{3/2}\) and \(4D_{5/2}\) states is recommended. This feedback loop would adjust the static magnetic field applied to the ion trap for maintaining a stable clock frequency over time [32]. Figure 1: Schematic of clock frequency measurement set-up using Zr\({}^{3+}\) ion. As shown, the \(4D_{3/2}\)-\(4D_{5/2}\) transition is used for THz clock frequency measurement and transitions \(4D_{3/2}\to 5S\to 5P_{3/2}\) are used for pumping the electrons to the excitation levels. The \(5P_{3/2}\to 4D_{5/2}\) decay channel is used to populate the upper level of the clock transition. ## III Method of evaluation Accurate evaluation of wave functions of the states involved in the clock transition is prerequisite for the determination of systematic shifts to which the clock transition is sensitive to. Therefore, we have implemented relativistic coupled cluster (RCC) theory for the precise computation of wave functions and thus, the matrix elements. 
We have incorporated higher-order correlations due to various physical effects such as core-polarization and pair-correlation effects. The general formulation and potential applications of RCC theory can be found in many previous studies including Refs. [33; 34; 35; 36; 37]. We give a brief outline of our employed RCC method below. We have considered Dirac-Coulomb (DC) Hamiltonian in our RCC method, which in a.u. is given by \[H_{DC} = \sum_{i=1}^{N_{e}}\left[c\vec{\alpha}_{D}\cdot\vec{p}_{i}+( \beta-1)c^{2}+V_{n}(r_{i})\right]+\sum_{i>j}\frac{1}{r_{ij}}, \tag{1}\] where \(N_{e}\) is the number of electrons in the atom, \(c\) is the speed of light, \(\vec{\alpha}_{D}\) and \(\beta\) are the Dirac matrices, \(V_{n}(r)\) is the nuclear potential, and \(r_{ij}\) is the inter-electronic distances between electrons located at \(r_{i}\) and \(r_{j}\). In the (R)CC theory ansatz, wave function of a many-electron system can be expressed in terms of mean-field wave function \(|\Phi_{0}\rangle\) of an atomic state and cluster operator \(T\) as [38] \[|\Psi_{0}\rangle=e^{T}|\Phi_{0}\rangle. \tag{2}\] In above equation, the mean-field wave function can be computed using the Dirac-Fock (DF) method. Following \(V^{N-1}\) potential formalism, we first solve the DF equation for closed-shell configurations (\([4p^{6}]\)) to get \(|\Phi_{0}\rangle\) and then, a valence orbital (\(v\)) is added to obtain the DF wave function of \([4p^{6}]v\) by defining [39] \[|\Phi_{v}\rangle=a_{v}^{\dagger}|\Phi_{0}\rangle \tag{3}\] where \(a_{v}^{\dagger}\) is the creation operator for the valence electron. Now, wave function of an atomic state with closed-shell electronic configuration and a valence orbital can be expressed [40] \[|\Psi_{v}\rangle=e^{T}\left\{1+S_{v}\right\}|\Phi_{v}\rangle, \tag{4}\] where \(T\) is the RCC operator that accounts for the excitations of core electrons to virtual orbitals, and \(S_{v}\) is the RCC operator that excites the valence orbital to a virtual orbital. Amplitudes of the \(T\) and \(S_{v}\) operators are obtained by solving the standard RCC equations. In our work, have considered only the singly and doubly excited-state configurations in our RCC theory (RCCSD method) by expressing [40] \[T=T_{1}+T_{2}\qquad\text{and}\qquad S_{v}=S_{1v}+S_{2v}. \tag{5}\] Here, the excitation operators take into account excitations from both, core and valence orbitals of the DF wave functions of Zr\({}^{3+}\) ion, and they are defined using the second quantized operators as [41] \[T_{1}=\sum_{p,a}\rho_{pa}a_{p}^{\dagger}a_{a}, \ T_{2}=\frac{1}{4}\sum_{pq,ab}\rho_{pqab}a_{p}^{\dagger}a_{q}^{ \dagger}a_{b}a_{a},\] \[S_{1v}=\sum_{m\neq a}\rho_{p}a_{p}^{\dagger}a_{v}, \text{ and }S_{2v}=\frac{1}{2}\sum_{pq,a}\rho_{pqaa}a_{p}^{\dagger}a_{q}^{ \dagger}a_{a}a_{v}, \tag{6}\] where the indices \(p\) and \(q\) range over all possible virtual orbitals and the indices \(a\) and \(b\) range over all occupied core orbitals. The quantities \(\rho\)s depict excitation coefficients. 
Consequently, the matrix elements for the operator \(\hat{O}\) between states \(k\) and \(v\) with the corresponding wave functions \(|\Psi_{v}\rangle\) and \(|\Psi_{k}\rangle\) can be evaluated by [42] \[O_{vk} = \frac{\langle\Psi_{v}|\hat{O}|\Psi_{k}\rangle}{\sqrt{\langle\Psi_ {v}|\Psi_{v}\rangle\langle\Psi_{k}|\Psi_{k}\rangle}} \tag{7}\] \[= \frac{\langle\Phi_{v}|\{S_{v}^{\dagger}+1\}\overline{\hat{O}}\{1 +S_{k}\}|\Phi_{k}\rangle}{\langle\Phi_{v}|\{S_{v}^{\dagger}+1\}\overline{\hat{N }}\{1+S_{k}\}|\Phi_{k}\rangle},\] where \(\overline{\hat{O}}=e^{T^{\dagger}}\hat{O}e^{T}\) and \(\overline{\hat{N}}=e^{T^{\dagger}}e^{T}\). Both \(\overline{\hat{O}}\) and \(\overline{\hat{N}}\) are the non-terminating series. In the above expression, the operator \(\hat{O}\) can be replaced by electric-dipole (\(E1\)), magnetic-dipole (\(M1\)) and electric quadrupole (E2) operators depending upon the matrix elements that need to be evaluated. ## IV Dipole polarizabilities Interactions between the electromagnetic fields with an atomic system cause shifts in the energy levels of the atomic system. First order effect due to electric field vanishes, and the next dominant second-order shift can be described with the knowledge of E1 polarizabilities. In fact the BBR shift of an atomic energy level can be estimated using its static E1 polarizability. Since the first-order magnetic field effects to the atomic energy levels in a clock experiment are cancelled out by carrying out measurements suitably, the second-order effects can be estimated with the knowledge of M1 polarizabilities. Thus, it is evident that accurate calculations of E1 and M1 polarizabilities are essential in order to estimate possible systematics in the clock states of the considered atomic system. Here, we use the dominant E1 and M1 matrix elements from the RCC method and excitation energies are taken from the National Institute of Standards and Technology (NIST) database [43] to determine these quantities. Details of these calculations and the obtained results are discussed below. ### E1 Polarizabilities The total dynamic dipole polarizability of an atomic state \(|J_{v},M_{J}\rangle\) in the presence of linearly polarized laser can be expressed as [44] \[\alpha_{v}^{E1}(\omega)=\alpha_{v0}^{E1}(\omega)+\frac{3M_{J}^{2}-J_{v}(J_{v}+1)}{ J_{v}(2J_{v}-1)}\alpha_{v2}^{E1}(\omega). \tag{8}\] Here, \(\alpha_{v0}^{E1}(\omega)\) and \(\alpha_{v2}^{E1}(\omega)\) represent scalar and tensor part of total dipole polarizability of the state \(v\) with angular momentum \(J_{v}\) and its corresponding magnetic projection \(M_{J}\). Both \(\alpha_{v0}^{E1}(\omega)\) and \(\alpha_{v2}^{E1}(\omega)\) do not depend on \(M_{J}\) and can easily be calculated by using [44] \[\alpha_{v0}^{E1}(\omega) = -\frac{1}{3(2J_{v}+1)}\sum_{k}|\langle J_{v}||\hat{O}^{E1}||J_{k} \rangle|^{2} \tag{9}\] \[\times\left[\frac{1}{\delta E_{vk}+\omega}+\frac{1}{\delta E_{vk} -\omega}\right],\] and \[\alpha_{v2}^{E1}(\omega)=2\sqrt{\frac{5J_{v}(2J_{v}-1)}{6(J_{v}+1) (2J_{v}+3)(2J_{v}+1)}}\] \[\times\sum_{k}(-1)^{J_{k}+J_{v}+1}\left\{\begin{array}{ccc}J_{ v}&2&J_{v}\\ 1&J_{k}&1\end{array}\right\}|\langle J_{v}||\hat{O}^{E1}||J_{k}\rangle|^{2}\] \[\times\left[\frac{1}{\delta E_{vk}+\omega}+\frac{1}{\delta E_{vk} -\omega}\right]. \tag{10}\] Here, \(|\langle J_{v}||\hat{O}^{E1}||J_{k}\rangle|\) are reduced electric-dipole matrix elements with \(J_{k}\) being angular momentum of intermediate state \(k\). The term in curly bracket refers to 6-j symbols. 
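As an illustration of how the valence ("Main") part of the sum in Eq. (9) is assembled from reduced matrix elements and excitation energies, a minimal numerical sketch is given below; the function interface is ours and the inputs are placeholders, not values from the published calculation.

```python
import numpy as np

def scalar_e1_polarizability(J_v, reduced_matrix_elements, excitation_energies, omega=0.0):
    """Sum-over-states scalar dynamic polarizability of Eq. (9), in atomic units.

    reduced_matrix_elements: |<J_v||D||J_k>| for the included intermediate states (a.u.)
    excitation_energies:     E_k - E_v for the same states (a.u.)
    omega:                   photon energy (a.u.); omega = 0 gives the static value
    """
    d2 = np.asarray(reduced_matrix_elements) ** 2
    dE = np.asarray(excitation_energies)
    # Eq. (9) uses delta_E_vk = E_v - E_k, i.e. -dE in this convention.
    terms = d2 * (1.0 / (-dE + omega) + 1.0 / (-dE - omega))
    return -np.sum(terms) / (3.0 * (2 * J_v + 1))
```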
Moreover, the dipole polarizability of any atom with closed core and one electron in outermost shell can also be estimated by evaluating the core, core-valence and valence correlation contributions. i.e., [45] \[\alpha_{v}^{E1}(\omega)=\alpha^{c}(\omega)+\alpha^{vc}(\omega)+\alpha^{val}( \omega), \tag{11}\] where \(\alpha^{c}(\omega)\), \(\alpha^{vc}(\omega)\) and \(\alpha^{val}(\omega)\) are the core, core-valence and valence correlation contributions, respectively. Here, the tensor component of core and valence-core contribution is zero. Further, our valence contribution (\(\alpha^{val}(\omega)\)) to the polarizability is divided into two parts, Main (\(\alpha_{Main}^{val}\)) and Tail (\(\alpha_{Tail}^{val}\)), in which the first few dominant and the other less dominant transitions of Eqs. (9) and (10) are included, respectively. The results for the static dipole polarizabilities (\(\omega=0\)) of the considered \(4D_{3/2}\) and \(4D_{5/2}\) states are enlisted in Table 1, whereas dynamic dipole polarizabilities of the two states in the presence of 216.44 nm pumping laser have been tabulated in Table 2. These results are estimated by using the matrix elements from the RCCSD method. In order to cross-check the results, we have also estimated matrix elements using the random phase approximation that accounts for core-polarization effects to all-orders and separately adding other correlation effects through the Bruckner orbitals, structural radiations, and normalizations of wave functions at the third-order relativistic many-body perturbation theory (denoted as RMBPT3 method). Percentage deviations (\(\delta(\%)\)) in the E1 polarizability results are also mentioned in the above table. It can be seen from Table 1 that the \begin{table} \begin{tabular}{c c c c c c c c} \hline & \(4D_{3/2}\) & \multicolumn{6}{c}{\(4D_{5/2}\)} \\ Transition & d & \(\alpha_{w0}\) & \(\alpha_{w2}\) & Transition & d & \(\alpha_{v0}\) & \(\alpha_{v2}\) \\ \hline \(4D_{3/2}\to 5P_{1/2}\) & 1.465 & 0.9577 & -0.9577 & \(4D_{5/2}\to 5P_{3/2}\) & -1.955 & 1.1201 & -1.1201 \\ \(4D_{3/2}\to 6P_{1/2}\) & -0.257 & 0.0142 & -0.0142 & \(4D_{5/2}\to 6P_{3/2}\) & -0.362 & 0.0188 & -0.0188 \\ \(4D_{3/2}\to 7P_{1/2}\) & -0.121 & 0.0026 & -0.0026 & \(4D_{5/2}\to 7P_{3/2}\) & -0.175 & 0.0036 & -0.0036 \\ \(4D_{3/2}\to 8P_{1/2}\) & -0.073 & 0.0009 & -0.0009 & \(4D_{5/2}\to 8P_{3/2}\) & 0.108 & 0.0013 & -0.0013 \\ \(4D_{3/2}\to 9P_{1/2}\) & -0.050 & 0.0004 & -0.0004 & \(4D_{5/2}\to 9P_{3/2}\) & 0.074 & 0.0006 & -0.0006 \\ \(4D_{3/2}\to 10P_{1/2}\) & 0.038 & 0.0002 & -0.0002 & \(4D_{5/2}\to 10P_{3/2}\) & -0.051 & 0.0003 & -0.0003 \\ \(4D_{3/2}\to 5P_{3/2}\) & -0.642 & 0.1788 & 0.1430 & \(4D_{5/2}\to 4F_{5/2}\) & 0.549 & 0.0466 & 0.0534 \\ \(4D_{3/2}\to 6P_{3/2}\) & -0.120 & 0.0030 & 0.0024 & \(4D_{5/2}\to 5F_{5/2}\) & -0.209 & 0.0053 & 0.0061 \\ \(4D_{3/2}\to 7P_{3/2}\) & -0.058 & 0.0006 & 0.0005 & \(4D_{5/2}\to 6F_{5/2}\) & 0.092 & 0.0009 & 0.0011 \\ \(4D_{3/2}\to 8P_{3/2}\) & 0.036 & 0.0002 & 0.0002 & \(4D_{5/2}\to 4F_{7/2}\) & -2.461 & 0.9357 & -0.3340 \\ \(4D_{3/2}\to 9P_{3/2}\) & 0.025 & 0.0001 & 0.0001 & \(4D_{5/2}\to 5F_{7/2}\) & -0.960 & 0.1123 & -0.0401 \\ \(4D_{3/2}\to 10P_{3/2}\) & -0.022 & 0.0001 & 0.0001 & \(4D_{5/2}\to 6F_{7/2}\) & -0.466 & 0.0237 & -0.0085 \\ \(4D_{3/2}\to 4F_{5/2}\) & -2.027 & 0.9450 & -0.1900 & & & & \\ \(4D_{3/2}\to 5F_{5/2}\) & 0.779 & 0.1105 & -0.0221 & & & & \\ \(4D_{3/2}\to 6F_{5/2}\) & -0.359 & 0.0210 & -0.0042 & & & & \\ \(4D_{3/2}\to 7F_{5/2}\) & -0.170 & 0.0049 & -0.0010 & & & & \\ 
\(4D_{3/2}\to 8P_{5/2}\) & 0.022 & 0.0001 & 0.0000 & & & & \\ \(\alpha_{Main}^{val}\) & & 2.2403 & -1.0848 & \(\alpha_{Main}^{val}\) & 2.2692 & -1.5492 \\ \(\alpha_{Tail}^{val}\) & & 0.1752 & -0.0409 & \(\alpha_{Tail}^{val}\) & 0.2442 & -0.0787 \\ \(\alpha^{c}\) & & 2.9771 & \(\alpha^{c}\) & 2.9771 & & \\ \(\alpha^{vc}\) & -0.2431 & 0.1629 & \(\alpha^{vc}\) & -0.2649 & 0.2649 \\ Total & & 5.1495 & -0.9628 & Total & 5.2256 & -1.3630 \\ \(\delta\) (in \(\%\)) & 1.91 & 5.89 & \(\delta\) (in \(\%\)) & 1.75 & 12.03 \\ \hline \end{tabular} \end{table} Table 1: Contribution of different E1 matrix elements (d) to the static dipole polarizabilities (in a.u.) of the \(4D_{3/2}\) and \(4D_{5/2}\) states of Zr\({}^{3+}\). Percent deviations in the results (\(\delta(\%)\)) are given with respect to the values obtained using the RMBPT3 method. \(4D_{3/2}\to 5P_{1/2,3/2}\) and \(4D_{3/2}\to(4,5)F_{5/2}\) transitions contribute mainly to the valence part of static polarizability of the \(4D_{3/2}\) state. Similarly, the \(4D_{5/2}\to 5P_{3/2}\) and \(4D_{5/2}\to(4,5)F_{7/2}\) transitions seem to be dominant in the main part of the valence contribution of static dipole polarizability of the \(4D_{5/2}\) state. The total static scalar dipole polarizabilities of the \(4D_{3/2}\) and \(4D_{5/2}\) states of the Zr\({}^{3+}\) ion are found to be 5.1495 a.u. and 5.2256 a.u., respectively. The above table also depicts that a maximum of 12% deviation is obtained in tensor part of polarizability, which owes to the fact that the RCCSD method includes higher order correlations compared to the RMBPT3 method. In a similar manner, we have tabulated our dynamic dipole polarizability results for the linearly polarized pumping laser of wavelengths 216.44 nm in Table 2. On the basis of Eq. (8), we have determined total dipole polarizabilities of the ground \(|4D_{3/2},M_{J}=\pm 1/2\rangle\) and excited \(|4D_{5/2},M_{J}=\pm 1/2\rangle\) states of Zr\({}^{3+}\) ion for the 216.44 nm pumping laser. From Table 2, it can be perceived that the \(4D_{3/2}\to 5P_{1/2,3/2}\) and \(4D_{3/2}\to(4,5)F_{5/2}\) transitions again contribute significantly to the main part of the valence polarizability of the \(4D_{3/2}\) state for the pumping laser of 216.44 nm. Further in case of dynamic dipole polarizability of the \(4D_{5/2}\) state, it can be seen that the \(4D_{5/2}\to 5P_{3/2}\) and \(4D_{5/2}\to(4,5)F_{7/2}\) transitions are dominant and contribute mainly to the \(\alpha_{Main}^{val}\). It gives E1 polarizability values as 7.0919(1180) a.u. and 7.2587(1443) a.u. for the \(M_{J}=\pm 1/2\) components of ground and excited states, respectively, with an uncertainty less than 2% (estimated as the differences in the results from the RMBPT3 method). ### M1 Polarizability The interaction of magnetic moments \(\mu_{m}\) within an ion with external magnetic field leads to the induction of magnetic dipoles. This phenomenon of magnetic polarization can be described quantitatively by magnetic dipole polarizability \(\alpha^{M1}\). 
Defining M1 operator \(\hat{O}^{M1}=(\mathbf{L}+2\mathbf{S})\mu_{B}\) for Russel-Saunders coupling, with \(\mathbf{L}\) and \(\mathbf{S}\) being orbital and spin angular momentum operators, we can further calculate the magnetic dipole polarizability for any level \(|J_{v},M_{J}\rangle\) by \[\alpha_{v}^{M1}=-\frac{2}{3(2J_{v}+1)}\sum_{k}\frac{|\langle J_{v}||\hat{O}^{ M1}||J_{k}\rangle|^{2}}{E_{v}-E_{k}}, \tag{12}\] where \(J_{k}\) represents the intermediate states to which all the allowed transitions from \(J_{v}\) are possible. Unlike E1 polarizabilities, evaluation of the \(\alpha^{M1}\) values are highly dominated by the contributions from the transitions involving the fine-structure partners. Thus, we estimate \(\alpha^{M1}\) values of the \(4D_{3/2}\) and \(4D_{5/2}\) states by considering M1 amplitude between these two states and are found to be \(1.3940(92)\times 10^{-27}\) JT\({}^{-2}\) and \(-9.2925(600)\times 10^{-28}\) JT\({}^{-2}\), respectively. In this case, we have seen an \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{\(4D_{3/2}\)} & \multicolumn{3}{c}{\(4D_{5/2}\)} \\ Transition & d & \(\alpha_{w0}(\omega)\) & \(\alpha_{w2}(\omega)\) & Transition & d & \(\alpha_{v0}(\omega)\) & \(\alpha_{v2}(\omega)\) \\ \hline \(4D_{3/2}\to 5P_{1/2}\) & -1.465 & 1.4035 & -1.4035 & \(4D_{5/2}\to 5P_{3/2}\) & -1.955 & 1.6193 & -1.6193 \\ \(4D_{3/2}\to 6P_{1/2}\) & -0.257 & 0.0154 & -0.0154 & \(4D_{5/2}\to 6P_{3/2}\) & -0.362 & 0.0203 & -0.0203 \\ \(4D_{3/2}\to 7P_{1/2}\) & -0.121 & 0.0027 & -0.0027 & \(4D_{5/2}\to 7P_{3/2}\) & -0.175 & 0.0038 & -0.0038 \\ \(4D_{3/2}\to 8P_{1/2}\) & -0.073 & 0.0009 & -0.0009 & \(4D_{5/2}\to 8P_{3/2}\) & 0.108 & 0.0014 & -0.0014 \\ \(4D_{3/2}\to 9P_{1/2}\) & -0.050 & 0.0004 & -0.0004 & \(4D_{5/2}\to 9P_{3/2}\) & 0.074 & 0.0006 & -0.0006 \\ \(4D_{3/2}\to 10P_{1/2}\) & 0.038 & 0.0002 & -0.0002 & \(4D_{5/2}\to 10P_{3/2}\) & -0.051 & 0.0003 & -0.0003 \\ \(4D_{3/2}\to 5P_{3/2}\) & -0.642 & 0.2552 & 0.2041 & \(4D_{5/2}\to 4F_{5/2}\) & 0.549 & 0.0510 & 0.0584 \\ \(4D_{3/2}\to 6P_{3/2}\) & -0.120 & 0.0033 & 0.0026 & \(4D_{5/2}\to 5F_{5/2}\) & -0.209 & 0.0056 & 0.0064 \\ \(4D_{3/2}\to 7P_{3/2}\) & -0.058 & 0.0006 & 0.0005 & \(4D_{5/2}\to 6F_{5/2}\) & 0.092 & 0.0010 & 0.0011 \\ \(4D_{3/2}\to 8P_{3/2}\) & 0.036 & 0.0002 & 0.0002 & \(4D_{5/2}\to 4F_{7/2}\) & -2.461 & 1.0231 & -0.3653 \\ \(4D_{3/2}\to 9P_{3/2}\) & 0.025 & 0.0001 & 0.0001 & \(4D_{5/2}\to 5F_{7/2}\) & -0.960 & 0.1186 & -0.0424 \\ \(4D_{3/2}\to 10P_{3/2}\) & -0.022 & 0.0001 & 0.0001 & \(4D_{5/2}\to 6F_{7/2}\) & -0.466 & 0.0248 & -0.0089 \\ \(4D_{3/2}\to 4F_{5/2}\) & -2.027 & 1.0320 & -0.2064 & & & & \\ \(4D_{3/2}\to 5P_{5/2}\) & 0.779 & 0.1166 & -0.0233 & & & & \\ \(4D_{3/2}\to 6P_{5/2}\) & -0.359 & 0.0219 & -0.0044 & & & & \\ \(4D_{3/2}\to 7P_{5/2}\) & -0.170 & 0.0052 & -0.0010 & & & & \\ \(4D_{3/2}\to 8P_{5/2}\) & 0.022 & 0.0001 & 0.0000 & & & & \\ \(\alpha_{Main}^{val}\) & 2.8584 & -1.4506 & \(\alpha_{Main}^{val}\) & 2.8698 & -1.9964 \\ \(\alpha_{all}^{val}\) & 0.1799 & -0.0419 & \(\alpha_{All}^{val}\) & 0.2519 & -0.0811 \\ \(\alpha^{c}\) & 3.0154 & \(\alpha^{c}\) & 3.0154 & & 3.0154 & \\ \(\alpha^{vc}\) & -0.2726 & 0.1816 & \(\alpha^{vc}\) & -0.3002 & 0.3002 \\ Total & 5.7811 & -1.3109 & Total & 5.8369 & -1.7773 \\ \(\delta\) (in %) & 1.71 & 1.47 & \(\delta\) (in %) & 1.45 & 4.19 \\ \hline \end{tabular} \end{table} Table 2: Contribution of different E1 matrix elements (d) to the dynamic dipole polarizabilities (in a.u.) 
of the \(4D_{3/2}\) and \(4D_{5/2}\) states of Zr\({}^{3+}\) for the pumping laser with wavelength 216.44 nm. Percent deviation in the results (\(\delta\)(%)) are given with respect to the RMBPT3 results. uncertainty of 0.1% and 6% in comparison to the values obtained using the RMBPT3 method. ## V Frequency shifts In order to calculate various systematic shifts in the proposed clock transition, we have used E1 and M1 polarizabilities of the involved states as discussed above. The analysis and discussion on the major systematic shifts on the proposed clock frequency measurement are given below. ### BBR Shifts Thermal fluctuations of the electromagnetic field experienced by an ion due to temperature \(T\) of the surrounding are prevalent and need to be considered. At room temperature, the interactions of the system with both electric and magnetic field components of blackbody radiations lead to shifts in the energy states and are known as BBR Stark and BBR Zeeman shifts, respectively. They are one of the major irreducible contributions to uncertainty of any atomic clock [46; 47]. The generalized formula for energy shift due to blackbody radiation is given by [46] \[\Delta E_{v}=-\frac{(\alpha_{fs}K_{B}T)^{(2L+1)}}{2J_{v}+1}\sum_{k\neq v}| \langle\psi_{v}||\hat{O}||\psi_{k}\rangle|^{2}F_{L}\left(\frac{\omega_{kv}}{K _{B}T}\right), \tag{13}\] where, \(\hat{O}\) are the multipolar electromagnetic transition operators (can either be E1 or M1 operator), \(\alpha_{fs}\) is the fine structure constant, \(L\) is the orbital angular momentum, \(J_{v}\) is the total angular momentum of the state \(v\) and \(K_{B}\) is the Boltzmann constant. Here, \(\omega_{kv}=\omega_{v}-\omega_{k}\) corresponds to the difference in angular frequencies of the two levels. In Eq. 13, replacing \(\frac{\omega_{kv}}{K_{B}T}\) with \(y\), the Farley and Wing's function, \(F_{L}(y)\) can be written as [48] \[F_{L}(y)=\frac{1}{\pi}\frac{L+1}{L(2L+1)!!(2L-1)!!}\times\] \[\int_{0}^{\infty}\left(\frac{1}{y+x}+\frac{1}{y-x}\right)\frac{x ^{(2L+1)}}{e^{x}-1}dx. \tag{14}\] Further, the frequency shifts in the state \(v\) due to E1 and M1 channels can be given in terms of electric and magnetic dipole polarizabilities, respectively. At T=300 K, BBR Stark shift can be expressed in terms of differential static scalar polarizability \(\Delta\alpha_{0}^{E1}=\alpha_{v0}^{E1}-\alpha_{w0}^{E1}\), of the considered clock transition as [49] \[\Delta\nu_{\rm BBR}^{\rm E1}=-\frac{1}{2}(831.9~{}V/m)^{2}\Delta\alpha_{0}^{ E1} \tag{15}\] In Eq. 15, the polarizability \(\alpha\) in a.u. can be converted into SI via \(\alpha/h(Hz(V/m)^{-2})=2.48832\times 10^{-8}\alpha(a.u.)\). On the other hand, BBR Zeeman Shift through allowed M1 transitions from ground state is expressed as [50] \[\Delta\nu_{\rm BBR}^{\rm M1}=-\frac{1}{2h}(2.77\times 10^{-6}T)^{2}\Delta \alpha^{M1}, \tag{16}\] for \(T=300\)K. Here, \(\Delta\alpha^{M1}\) is the differential magnetic polarizability of the considered clock transition and can be calculated using Eq. 12 for our clock THz clock transition. Also, \(\alpha^{M1}\) in terms of Bohr magneton can be converted into SI units by using the relation that \(1\mu_{B}=9.274\times 10^{-24}\)JT\({}^{-1}\). The individual contribution of the dominant transitions in the static dipole polarizabilities of the considered clock states are enlisted in Table 1. The \(\alpha_{w0}^{E1}\) for \(|4D_{3/2},\pm 1/2\rangle\) and \(\alpha_{v0}^{E1}\) for \(|4D_{5/2},\pm 1/2\rangle\) are estimated as 6.1162 a.u. and 6.0351 a.u., respectively. 
Therefore, the differential static scalar electric dipole polarizability (\(\Delta\alpha_{0}^{E1}\)) of 0.0761 a.u. of these states gives a total BBR Stark Shift (\(\Delta\nu_{BBR}^{E1}\)) of \(-6.5524\times 10^{-4}\) Hz at temperature T= 300 K. This leads to the fractional shift of \(-1.7464\times 10^{-17}\) in the clock transition. Further, the magnetic dipole polarizabilities \(\alpha^{M1}\) for \(|4D_{3/2},\pm 1/2\rangle\) and \(|4D_{5/2},\pm 1/2\rangle\) states are estimated to be \(1.3940\times 10^{-27}\) JT\({}^{-2}\) and \(-9.2925\times 10^{-28}\) JT\({}^{-2}\), respectively, using Eq. 12. Substituting the values in Eq. 16, we get the net BBR Zeeman shift of \(1.3443\times 10^{-5}\) Hz, which further gives the fractional frequency shift of \(3.5829\times 10^{-19}\) at 300 K. Since this shift is directly proportional to \(\left(\frac{T(K)}{300K}\right)^{4}\), therefore, BBR shift can largely be suppressed by cooling the clock. ### AC Stark Shifts The interaction of external electric fields with clock states lead to an ac Stark shift within them. This ac Stark shift majorlly depends on dynamic dipole polarizabilities of the considered states in the presence of these external electric fields. The dynamic dipole polarizabilities of these states can be calculated by using Eq. 8. Consequently, the corresponding ac Stark shift for a transition occurring between states \(w\) and \(v\) is given by [51] \[\Delta\nu_{\rm Stark}=-\frac{1}{2\pi}\left(\frac{\mathcal{E}}{2}\right)^{2} \Delta\alpha^{E1}, \tag{17}\] where \(\Delta\alpha^{E1}\) is the differential dynamic polarizability given by \(\Delta\alpha^{E1}=\alpha_{v}^{E1}-\alpha_{w}^{E1}\). We have evaluated total dynamic dipole polarizabilities of both the ground and excited states as 7.0919 a.u. and 7.2587 a.u., respectively. Since the \(4D_{3/2}\)-\({}^{5}S_{1/2}\) transition is a near-resonant transition, hence the detuning frequency and frequency fluctuations at 261.38 nm pumping laser can cause an ac Stark shift in the \(4D_{3/2}\) state. This can be avoided by introducing pulse-light sequence [52]. Moreover, this shift can easily be controlled if the 261.38 nm laser is narrowed by Pound-Drever-Hall technique and is well locked to the 261.38 nm transition [53; 54]. Nonetheless assuming an electric field \(\mathcal{E}\) of 10 V/m [55], we have estimated ac Stark shift due to the 216.44 nm pumping laser to the clock frequency as \(-1.6342\times 10^{-8}\) Hz. This gives a fractional shift to the clock frquency as \(-4.3555\times 10^{-22}\). ### Zeeman Shifts In the presence of external magnetic field \(\mathcal{B}\), atomic energy levels as well as transition frequencies experience Zeeman shift which in fact, arises when atomic magnetic-dipole moment \(\mu_{m}\) interacts with external magnetic field [56]. Linear Zeeman shift can be avoided if average is taken over the transition frequencies with positive and negative \(M_{J}\) states, as described in Refs. [57; 58]. Although first-order Zeeman shift is avoidable, but quadratic Zeeman shift contributes largely to the frequency uncertainty budget and hence, must be considered. Further, the quadratic Zeeman shift can be expressed in terms of differential magnetic dipole polarizability \(\Delta\alpha^{M1}\), as [59] \[\Delta\nu^{(Z2)}=-\frac{1}{2h}\Delta\alpha^{M1}\mathcal{B}^{2}. \tag{18}\] with \(\Delta\alpha^{M1}=\alpha_{v}^{M1}-\alpha_{w}^{M1}\). In Eq. 18, magnetic polarizability for the corresponding states can be evaluated by using Eq. 12. 
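As a rough numerical cross-check of Eqs. (15) and (18), the short script below reproduces the BBR Stark and quadratic Zeeman shifts quoted in this section from the polarizabilities given above; the field values are those assumed in the text.

```python
h = 6.62607015e-34                        # Planck constant (J s)

# BBR Stark shift at 300 K, Eq. (15)
delta_alpha_E1_au = 0.0761                # differential static scalar polarizability (a.u.)
au_to_si = 2.48832e-8                     # conversion to Hz (V/m)^-2
shift_bbr = -0.5 * 831.9**2 * delta_alpha_E1_au * au_to_si
print(f"BBR Stark shift:        {shift_bbr:.3e} Hz")    # about -6.55e-4 Hz

# Quadratic Zeeman shift, Eq. (18), using the M1 polarizabilities of Sec. IV.B
delta_alpha_M1 = -9.2925e-28 - 1.3940e-27 # J T^-2
B = 1.0e-8                                # residual magnetic field (T)
shift_z2 = -delta_alpha_M1 * B**2 / (2.0 * h)
print(f"Quadratic Zeeman shift: {shift_z2:.3e} Hz")     # about 1.75e-10 Hz
```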
The quadratic Zeeman shift is large enough to be considered for analyzing the systematics of the clock system. Therefore, the only considerable Zeeman shift in our study is of second-order, which can further be determined by evaluating magnetic dipole polarizabilities (\(\alpha^{M1}\)) of the involved states using Eq. 12. These values are thus substituted for the determination of second-order Zeeman shift using Eq. 18. The estimated values of \(\alpha^{M1}\) for the considered states as stated in Sec. V.1 lead to \(\Delta\nu^{(Z2)}\) and \(\frac{\Delta\nu^{(Z2)}}{\nu_{0}}\) of \(1.7521\times 10^{-10}\) Hz and \(4.5978\times 10^{-24}\), respectively, for \(\mathcal{B}=10^{-8}\) T [60]. ### Electric Quadrupole Shifts Electric quadrupole (EQ) shift is caused by the interaction of the quadrupole moments of the clock levels and a residual electric field gradient at the trap center [61; 62; 63; 64; 65; 66; 67; 68; 69]. Electric quadrupole shift can be expressed in terms of electric field gradient \(\frac{\partial\mathcal{E}_{z}}{\partial z}\) as [64; 70] \[\Delta\nu_{EQ}=-\frac{1}{2h}\Delta\Theta\frac{\partial\mathcal{E}_{z}}{ \partial z}, \tag{19}\] where, \(\Delta\Theta\) is the differential electric quadrupole moment [71]. We have considered the typical value of electric field gradient \(\frac{\partial\mathcal{E}_{z}}{\partial z}\) as \(10^{6}\) V/m\({}^{2}\) for traps [72]. Here, the quadrupole moment \(\Theta(J_{v})\) of an atom in electronic state \(|J_{v},M_{J}\rangle\) can be expressed in terms of quadrupole matrix element of the electric quadrupole operator \(\hat{O}^{E2}\) using the expression [73] \[\Theta(J_{v})=(-1)^{J_{v}-M_{J}}\left(\begin{array}{ccc}J_{v}&2& J_{v}\\ -M_{J}&0&M_{J}\end{array}\right)\langle J_{v}||\hat{O}^{E2}||J_{v}\rangle. \tag{20}\] Corresponding to \(|4D_{3/2},\pm 1/2\rangle\) and \(|4D_{5/2},\pm 1/2\rangle\) states, the quadrupole moments are estimated to be 0.7278 a.u. and 0.8426 a.u., respectively, using Eq. 20, which can further be converted into SI units by \(1ea_{0}^{2}=4.4866\times 10^{-40}\) C m\({}^{2}\). These values of quadrupole moments would lead to the quadrupole frequency shift of \(-0.0388\) Hz and fractional frequency shift of \(-1.0353\times 10^{-15}\). Even though this quadrupole shift is considerably high, but it can be eliminated by averaging the clock transition frequency over the three mutually orthogonal magnetic-field orientations, independent of the orientation of the electric-field gradient [64; 74]. ### Doppler Shift Doppler shift occurs when cold but moving ions interact with a field inside the microwave cavity that has a spatial phase variation, which basically does not form purely a standing wave [75]. The first-order Doppler shift can be eliminated by using two probe beams in opposite directions for the detection [76], however, second-order Doppler shift due to secular motion is quite considerable and can be expressed in terms of mass \(m\) of ion and speed of light \(c\) in vacuum, as [77] \[\Delta\nu_{\rm D2}=-\left(\frac{3\hbar\Gamma}{4mc^{2}}\right)\nu_{0}. 
\tag{21}\] \begin{table} \begin{tabular}{c c c} \hline Source & \(\Delta\nu\) (Hz) & \(\frac{\Delta\Gamma}{\alpha_{0}}\) \\ \hline Electric Quadrupole (\(\frac{\partial\mathcal{E}_{z}}{\partial z}=10^{6}V/m^{2}\)) & \(-0.03884\) & \(-1.0353\times 10^{-15}\) \\ BBR Stark (T=300 K) & \(-6.5524\times 10^{-4}\) & \(-1.7464\times 10^{-17}\) \\ BBR Zeeman (T=300 K) & \(1.3443\times 10^{-5}\) & \(3.5829\times 10^{-19}\) \\ AC Stark (216.44 nm) & \(-1.6527\times 10^{-8}\) & \(-4.4048\times 10^{-22}\) \\ Quadratic Zeeman (B=\(10^{-8}\) T) & \(1.7521\times 10^{-10}\) & \(4.5978\times 10^{-24}\) \\ Second-order Doppler (Thermal) & \(-4.6007\times 10^{-15}\) & \(-1.2262\times 10^{-28}\) \\ \hline \end{tabular} \end{table} Table 3: Estimated systematic shifts in the \(4D_{3/2}\)–\(4D_{5/2}\) clock transition of the Zr\({}^{3+}\) ion. With the advancement in experimentations, the cooling lasers under optimized working conditions are adopted for cooling the ion trap. The temperature of the ion trap is reduced to a value closer to the Doppler-cooling limit (\(T_{D}\)) further reducing the second-order Doppler shift due to the secular motion of the ion [78]. This Doppler-cooling limit is determined using the formula [79] \[T_{D}=\frac{\hbar\Gamma}{2K_{B}}, \tag{22}\] where \(\Gamma\) is the rate of spontaneous emission of the excited state (\(\Gamma^{-1}\) is the excited state lifetime), which is actually related to the natural linewidth of the atomic transition. Substituting the value of Doppler cooling limit from Eq. 22, Eq. 21 modifies to \[\Delta\nu_{\text{D2}}=-\left(\frac{3K_{B}T_{D}}{2mc^{2}}\right)\nu_{0}. \tag{23}\] Since \(\Gamma\) is the inverse of lifetime of upper state (\(\tau_{v}\)), viz, \(4D_{5/2}\) in the case of Zr\({}^{3+}\) ion. Thus, \(\Gamma=\frac{1}{\tau_{v}}=2.1106\times 10^{-2}\) Hz, which further gives doppler cooling limit of 0.0807 pK.Therefore, substituting the value of \(T_{D}\) in Eq. 23, second-order Doppler shift and fractional frequency shift are found to be \(-4.6007\times 10^{-15}\) Hz and \(-1.2262\times 10^{-28}\), respectively. ## VI Conclusion We have demonstrated that the \(|4D_{3/2},M_{J}=\pm 1/2\rangle\rightarrow|4D_{5/2},M_{J}=\pm 1/2\rangle\) transition of \({}^{90}\)Zr\({}^{3+}\) can be used for a THz atomic clock. In this regard, the clock transition principle has been discussed and major systematics to this transition such as BBR, ac Stark, electric quadrupole, second-order Doppler as well as second-order Zeeman shifts are estimated. We observed that the maximum contribution in the systematics of this transition is given by electric quadrupole effect, which in fact, can be eliminated by averaging the clock transition frequency over three mutually perpendicular directions of electric field for a given magnetic field. Other shifts determined for this transition are found to be suppressed. In the realistic experimental set up, they can be controlled further. Upon a successful development of the proposed THz clock, it will be highly useful in the quantum thermometry. ## VII Acknowledgement J and BA thank Priti at National Institute for Fusion Science, Gifu, Japan for fruitful discussions and critical feedback. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. 
We acknowledge the Vikram-100 HPC facility at Physical Research Laboratory, Ahmedabad, India, which was used to carry out the relativistic coupled-cluster calculations. Yu acknowledges support from the National Key Research and Development Program of China (2021YFA1402104).
2309.13878
On improved estimation of the larger location parameter
This paper investigates the problem of estimating the larger location parameter of two general location families from a decision-theoretic perspective. In this estimation problem, we use the criteria of minimizing the risk function and the Pitman closeness under a general bowl-shaped loss function. Inadmissibility of general location equivariant estimators is established. We prove that a natural estimator (the analogue of the BLEE of unordered location parameters) is inadmissible, under certain conditions on the underlying densities, and propose a dominating estimator. We also derive a class of improved estimators using Kubokawa's IERD approach and observe that the boundary estimator of this class is the Brewster-Zidek type estimator. Additionally, under the generalized Pitman criterion, we show that the natural estimator is inadmissible and obtain improved estimators. The results are implemented for different loss functions, and explicit expressions for the dominating estimators are provided. We explore the applications of these results to the exponential and normal distributions under specified loss functions. A simulation is also conducted to compare the risk performance of the proposed estimators. Finally, we present a real-life data analysis to illustrate the practical applications of the paper's findings.
Naresh Garg, Lakshmi Kanta Patra, Neeraj Misra
2023-09-25T05:05:50Z
http://arxiv.org/abs/2309.13878v1
# On improved estimation of the larger location parameter ###### Abstract This paper investigates the problem of estimating the larger location parameter of two general location families from a decision-theoretic perspective. In this estimation problem, we use the criteria of minimizing the risk function and the Pitman closeness under a general bowl-shaped loss function. Inadmissibility of a general location and equivariant estimators is provided. We prove that a natural estimator (analogue of the BLEE of unordered location parameters) is inadmissible, under certain conditions on underlying densities, and propose a dominating estimator. We also derive a class of improved estimators using the Kubokawa's IERD approach and observe that the boundary estimator of this class is the Brewster-Zidek type estimator. Additionally, under the generalized Pitman criterion, we show that the natural estimator is inadmissible and obtain improved estimators. The results are implemented for different loss functions, and explicit expressions for the dominating estimators are provided. We explore the applications of these results to for exponential and normal distribution under specified loss functions. A simulation is also conducted to compare the risk performance of the proposed estimators. Finally, we present a real-life data analysis to illustrate the practical applications of the paper's findings. **Keywords**: Decision theory, location family, Improved estimator, Brewster-Zidek type estimator, IERD approach, Pitman nearness. ## 1 Introduction The problem of estimating ordered parameters, whether the correct ordering between the parameters apriori, is known, has been extensively discussed in the literature. When the ordering among the parameters is known apriori, numerous studies have focused on estimating the smallest and the largest parameters, with contributions from Blumenthal and Cohen (1968), Kubokawa and Saleh (1994), Kumar et al. (2005), Chang and Shinozaki (2015) Patra and Kumar (2017), Chang et al. (2017) and Garg and Misra (2021). For a detailed review on estimation of restricted parameter we refer to Barlow et al. (1972), Robertson et al. (1988) and van Eeden (2006). However, limited attention has been given to the estimation of the smallest and the largest parameters when the correct ordering between the parameters is unknown. This problem can be viewed as an estimation counterpart to the well-known "Ranking and Selection" problems, where a basic goal is to select the population associated with the largest (or smallest) parameter while lacking knowledge of the correct ordering among parameters (refer to Dudewicz and Koo (1982), for an extensive bibliography on ranking and selection problems). In many real-world scenarios, such as in environmental studies, finance, or risk management, estimating the largest (or smallest) location/scale parameter is essential for assessing extreme events, outliers, or rare occurrences. For example, in environmental studies, estimating the largest (or smallest) location parameter of pollutant concentrations helps in determining critical levels at which adverse effects may occur. In finance, estimating the largest (or smallest) location parameter of stock returns allows investors to understand the potential for extreme losses or gains. In early work in this area, Blumenthal and Cohen (1968) considered estimation of the larger mean of two normal distributions having a common known variance. 
They proposed various estimators for the larger mean and compared their performances under the squared error loss function. Dhariyal et al. (1982) extended the class of estimators proposed by Blumenthal and Cohen (1968) by introducing two new estimators for estimating the larger mean of two normal distributions. Elfessi and Pal (1992) focused on the estimation of the smaller and larger scale parameters of two uniform distributions. They proposed improved estimators that outperformed the usual estimators based on the mean squared error criterion and the Pitman nearness criterion. Misra et al. (1997) considered two exponential distributions with unknown scale parameters, dealt with the estimation of the smaller and the larger scale parameters, and obtained the MLEs. They showed that the MLEs are inadmissible and derived better estimators under the squared error loss function. Most of the studies related to this problem have focused on specific distributions with independent marginals and specific loss functions. Some other contributions on the estimation of the larger and the smaller location/scale parameters can be found in Kumar and Sharma (1993), Misra et al. (1994), Mitra et al. (1994), and Misra et al. (2002). For a general framework, Misra et al. (2003) dealt with estimation of the largest scale parameter of \(k\) (\(\geq 2\)) independent and absolutely continuous scale parameter distributions (general probability scale models). Under the assumption of a monotone likelihood ratio on the probability models and the squared error loss function, they established that a natural estimator is inadmissible and obtained a dominating estimator. They also provided applications of these results to some specific probability models. In this paper, we make an attempt to unify/extend various studies by considering estimation of the larger location parameter of two general probability models under a general loss function.

In most of the above studies, the criterion of minimizing the risk function is used to obtain estimators outperforming usual estimators (such as those based on the component-wise best location/scale equivariant or the maximum likelihood estimators) under the squared error loss function. A popular alternative criterion to compare different estimators is the Pitman nearness (PN) criterion, due to Pitman (1937). It compares two estimators based on the probability of one estimator being closer to the estimand than the other estimator under the absolute error loss function. Rao (1981) has pointed out some advantages of the Pitman nearness (PN) criterion over the mean squared error criterion. Keating (1985) further supported Rao's observations through certain estimation problems, and Keating and Mason (1985) provided some practical examples where the PN criterion is more relevant than minimizing the risk function. Additionally, Peddada (1985) and Rao et al. (1986) extended the PN criterion to the generalized Pitman criterion (GPN) by considering a general loss function instead of the absolute error loss function. A detailed description of the PN criterion and the relevant literature can be found in the monograph by Keating et al. (1993). The PN criterion has been extensively used in the literature for comparing various estimators in different estimation problems. However, there are only a limited number of studies on the use of the PN criterion, following the Stein (1964) approach, to obtain improvements over the usual estimators (Nayak (1990) and Kubokawa (1991)).
Moreover, all these studies are centred around specific probability distributions (mostly, normal and gamma) and the absolute error loss in the PN criterion. In this paper, we consider the problem of estimation of the larger location parameter of two general location models under the GPN criterion. We develop a result that is useful in finding improvements over location equivariant estimators in certain situations.

Throughout, \(\Re\) will denote the real line and \(\Re^{2}=\Re\times\Re\) will denote the two-dimensional Euclidean space. For any two real numbers \(x_{1}\) and \(x_{2}\), \(x_{(1)}=\min\{x_{1},x_{2}\}\) and \(x_{(2)}=\max\{x_{1},x_{2}\}\) denote the smaller and the larger of them, respectively. Let \(X_{1}\) and \(X_{2}\) be two independently distributed random variables with densities \(f(x_{1}-\theta_{1}),\ x_{1}\in\Re,\ \theta_{1}\in\Re,\) and \(f(x_{2}-\theta_{2}),\ x_{2}\in\Re,\ \theta_{2}\in\Re,\) respectively. Our aim is to estimate the larger location parameter \(\theta_{(2)}=\max\{\theta_{1},\theta_{2}\}\) under a non-negative loss function \(L((\theta_{1},\theta_{2}),a)\), \((\theta_{1},\theta_{2})\in\Theta=\Re^{2}\) and \(a\in\mathcal{A}=\Re,\) where \(\Theta\) and \(\mathcal{A}\) denote the parametric space and the action space, respectively. We first invoke the principle of invariance under a suitable group of transformations. For this purpose, we consider the group \(\mathcal{G}\) of transformations, where \[\mathcal{G}=\{g_{c}:c\in\Re\}\cup\{g_{1}^{*},g_{2}^{*}\},\] \[g_{c}(x_{1},x_{2})=(x_{1}+c,x_{2}+c),\ \ g_{1}^{*}(x_{1},x_{2})=(x_{1},x_{2}) \ \ \mbox{and}\ \ g_{2}^{*}(x_{1},x_{2})=(x_{2},x_{1}),\ \ x_{1}\in\Re,\ x_{2}\in\Re,\ c\in\Re.\] It can be easily seen that the family of distributions under consideration is invariant under this group of transformations, and the induced groups of transformations on the parametric space \(\Theta\) and the action space \(\mathcal{A}\) are \[\bar{\mathcal{G}}=\{\bar{g}_{c}\ :\ c\in\Re\}\cup\{\bar{g}_{1}^{*},\bar{g}_{2}^{*}\}\ \ \ \mbox{and}\ \ \ \widetilde{\mathcal{G}}=\{\widetilde{g}_{c}\ :\ c\in\Re\}\cup\{\widetilde{g}_{1}^{*},\widetilde{g}_{2}^{*}\},\] respectively, where for every \((x_{1},x_{2})\in\Theta\) and \(c\in\Re\) \[\overline{g}_{c}(x_{1},x_{2})=(x_{1}+c,x_{2}+c),\ \ \overline{g}_{1}^{*}(x_{1},x_{2})=(x_{1},x_{2}) \ \ \mbox{and}\ \ \overline{g}_{2}^{*}(x_{1},x_{2})=(x_{2},x_{1})\] and for any \(a\in\mathcal{A}\) and \(c\in\Re\) \[\widetilde{g}_{c}(a)=a+c,\ \ \widetilde{g}_{1}^{*}(a)=a\quad\text{and}\quad \widetilde{g}_{2}^{*}(a)=a.\] Now the loss function \(L((\theta_{1},\theta_{2}),a)\) is invariant under the group \(\mathcal{G}\) if and only if, for any \((x_{1},x_{2})\in\Theta\), \(a\in\mathcal{A}\) and \(c\in\Re\), \[L(\overline{g}_{c}(x_{1},x_{2}),\widetilde{g}_{c}(a))=L((x_{1},x_{2}),a),\ \text{that is}\ L((x_{1}+c,x_{2}+c),a+c)=L((x_{1},x_{2}),a), \tag{1.1}\] and \[L(\overline{g}_{2}^{*}(x_{1},x_{2}),\widetilde{g}_{2}^{*}(a))=L((x_{1},x_{2}),a),\ \text{that is}\ L((x_{2},x_{1}),a)=L((x_{1},x_{2}),a). \tag{1.2}\] So, combining conditions (1.1) and (1.2), we have that, for some function \(V:\Re^{2}\times\Re\to[0,\infty)\) and for all \((x_{1},x_{2})\in\Theta\), \(a\in\mathcal{A}\) and \(c\in\Re\), \[L((x_{1},x_{2}),a)=V((x_{(1)},x_{(2)}),a)\ \text{and}\ V((x_{(1)},x_{(2)}),a)=V((x_ {(1)}+c,x_{(2)}+c),a+c).\] This suggests considering loss functions of the form \[L((\theta_{1},\theta_{2}),a)=W(a-\theta_{(2)}),\ \ (\theta_{1},\theta_{2})\in \Theta,\ a\in\Re, \tag{1.3}\] where \(W:\Re\to[0,\infty)\) is a given function. From now on, we denote \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})\).
Further we will make the following assumptions on the function \(W(.)\): **(C1):**: \(W(0)=0\), \(W(t)\) is strictly decreasing on \((-\infty,0)\) and is strictly increasing on \((0,\infty)\), that is, \(W(t)\) is strictly bowl shaped function in \(t\in(-\infty,\infty)\); **(C2):**: \(W^{\prime}(t)\) is increasing, almost everywhere; **(C3):**: Integrals involving \(W(t)\) are finite and differentiation under the integral sign is permissible. An estimator \(\delta(x_{1},x_{2})\) is invariant under the group of transformation \(\mathcal{G}\) if, and only if for any \((x_{1},x_{2})\in\Re\) and \(c\in\Re\) \[\delta(x_{1}+c,x_{2}+c)=\delta(x_{1},x_{2})+c\quad\text{and}\quad\delta(x_{1},x_{2})=\delta(x_{2},x_{1}). \tag{1.4}\] From the second condition of (1.4), we have \(\delta(x_{1},x_{2})=\delta^{*}(x_{(1)},x_{(2)})\), for some function \(\delta^{*}\), and from the first condition of (1.4) we have \(\delta^{*}(x_{(1)}+c,x_{(2)}+c)=\delta^{*}(x_{(1)},x_{(2)})+c\). This suggests that any invariant estimator has the form \(\delta_{\phi}(X_{1},X_{2})\), where \[\delta_{\phi}(X_{(1)},X_{(2)})=X_{(2)}-\phi(X_{(2)}-X_{(1)})=X_{(2)}-\phi(U) \tag{1.5}\] where \(U=X_{(2)}-X_{(1)}\) and \(\phi:[0,\infty)\to\Re\) is real valued function. For the component problem of estimating \(\theta_{i}\) usual estimator that is the best location equivariant estimator (BLEE) of \(\theta_{i}\) with respect the loss function \(L_{i}(\theta_{i},\delta)=W(\delta-\theta_{i})\) is obtained as \[\delta^{i}_{c_{0}}(\mathbf{X})=X_{i}-c_{0},\ \ i=1,2, \tag{1.6}\] where \(c_{0}\) is the unique solution of equation \[\int_{-\infty}^{\infty}W^{\prime}(x-c_{0})f(x)dx=0. \tag{1.7}\] So, we consider a natural estimator of \(\theta_{(2)}\) as \[\delta_{c_{0}}(\mathbf{X})=X_{(2)}-c_{0}. \tag{1.8}\] Our aim is to find estimators which improve upon the natural estimator \(\delta_{c_{0}}\), for estimating \(\theta_{(2)}\) under the loss function \(L((\theta_{1},\theta_{2}),a)\) defined by (1.3). The rest of the paper is organized as follows. Inadmissibility of the usual estimator \(\delta_{c_{0}}\) has been proved in Section 2. We have obtained Stein (1964)-type dominating estimator to demonstrate the inadmissibility. In Section 2.1, we consider a class of natural estimators for estimating \(\theta_{(2)}\) as \(\mathcal{D}=\{\delta_{b}=X_{(2)}-b:\ b\in\Re\}\). We obtain admissible estimators within the class \(\mathcal{D}\). It is seen that under the condition \(\lim_{t\to\infty}f(t)=0\), one of the boundary estimators of this admissible class is \(\delta_{c_{0}}\). Furthermore, we derive a class of improved estimators over a boundary estimator of the class of admissible estimators using the IERD approach proposed by Kubokawa (1994). Additionally, we obtain a Brewster and Zidek (1974)-type improved estimator, which improves upon the boundary estimator of the admissible class of estimators. In Section 3, we have obtained improved estimators for the special loss function: the squared error loss, the linear loss, and the absolute error loss. In section 4, we consider the estimation of \(\theta_{(2)}\) with respect to the Pitman closeness criterion. Under the Pitman closeness, we obtain the Stein (1964)-type estimator, which dominates the usual estimator of \(\theta_{(2)}\). As an application, in section 5, we derive improved estimators under the squared error loss, the linear loss, and the absolute error loss functions for the normal distribution and exponential distribution. 
We observe that the Stein (1964)-type improved estimator, and the usual estimator for the normal distribution with respect to the squared error loss and the absolute error loss are the same due to the symmetric nature of these loss functions. In Section 6, a simulation is carried out to compare the risk performance of the proposed estimators. Section 7 presents a real-life data analysis, showcasing the practical applications of our paper's findings. Lastly, in Section 8, we offer our concluding remarks on the paper's contributions. Inadmissibility of usual estimator In this section, we will prove inadmissibility of the natural estimator \(\delta_{c_{0}}=X_{(2)}-c_{0}\) by deriving dominating estimators under the bowl shaped \(\mathcal{G}\)-invariant loss function (1.3), where \(W(\cdot)\) satisfies assumptions (C1)-(C3). To prove the dominance result we require the following assumptions: * The family \(\{f(x-\eta):\eta\in\Re\}\) of p.d.f.s holds the MLR property, i.e., for \(-\infty<x_{1}<x_{2}<\infty\) and \(-\infty<\eta_{1}<\eta_{2}<\infty\), \(f(x_{1}-\eta_{1})f(x_{2}-\eta_{2})\geq f(x_{1}-\eta_{2})f(x_{2}-\eta_{1})\). * For every fixed \(u>0\) and \(\theta\geq 0\), let \(c\equiv c(\theta,u)\) be the unique minimizer of the following function \[\frac{\int_{-\infty}^{\infty}W(z-c)[f(z-u+\theta)f(z)+f(z-u)f(z+\theta)]dz}{ \int_{-\infty}^{\infty}[f(z-u+\theta)f(z)+f(z-u)f(z+\theta)]dz}.\] That implies, for every fixed \(u>0\) and \(\theta\geq 0\), \(c\equiv c(\theta,u)\) is the unique solution of the equation \[\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-c(\theta,u))[f(z-u+\theta)f(z)+f(z-u )f(z+\theta)]dz=0.\] **Theorem 2.1**: _Let assumptions (A), (B), and (C1)-(C3) hold. For any fixed \(u>0\), let \(c\equiv c(0,u)\) be the unique solution of the equation_ \[\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-c)f(z-u)f(z)dz=0. \tag{2.1}\] _Then the estimator,_ \[\delta_{\phi_{0}}(\mathbf{X})=X_{(2)}-\min\{\phi(U),c(0,U)\} \tag{2.2}\] _improves upon the equivariant estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) under the loss (1.3), provided \(P_{\boldsymbol{\theta}}\left(\phi(U)>c(0,U)\right)>0,\) at least for some \(\boldsymbol{\theta}\in\Theta\)._ _Proof:_ The risk function of any equivariant estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) is \[R(\boldsymbol{\theta},\delta_{\phi}) = E_{\boldsymbol{\theta}}\left[W(X_{(2)}-\phi(U)-\theta_{(2)})\right]\] \[= E_{\boldsymbol{\theta}}^{U}\left[E_{\boldsymbol{\theta}}^{X/U} \left[W\left(X_{(2)}-\phi(U)-\theta_{(2)}\right)|U\right]\right],\ \ \boldsymbol{\theta}\in\Theta.\] For any fixed \(\boldsymbol{\theta}\in\Theta\) and \(u>0\), consider \[R_{\boldsymbol{\theta},u}(c) = E_{\boldsymbol{\theta}}^{X/U}\left[W\left(X_{(2)}-c-\theta_{(2) }\right)|U=u\right]\] \[= \frac{\int_{-\infty}^{\infty}W(y-\theta_{(2)}-c)[f(y-u-\theta_{(1 )})f(y-\theta_{(2)})+f(y-u-\theta_{(2)})f(y-\theta_{(1)})]dy}{\int_{-\infty}^{ \infty}[f(y-u-\theta_{(1)})f(y-\theta_{(2)})+f(y-u-\theta_{(2)})f(y-\theta_{(1 )})]dy}\] \[= \frac{\int_{-\infty}^{\infty}W(z-c)[f(z-u+\theta)f(z)+f(z-u)f(z+ \theta)]dz}{\int_{-\infty}^{\infty}[f(z-u+\theta)f(z)+f(z-u)f(z+\theta)]dz},\ \ - \infty<c<\infty,\] here \(\theta=\theta_{(2)}-\theta_{(1)}\in[0,\infty)\). 
Using the assumption (B), for every fixed \(\theta\geq 0\) and \(u>0\), there exists a unique minimizer of \(R_{\boldsymbol{\theta},u}(c)\) say \(c\equiv c(\theta,u)\) which is the unique solution of the equation \(R^{\prime}_{\boldsymbol{\theta},u}(c)=0\), i.e., \[\frac{\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-c(\theta,u))[f(z-u+\theta)f(z)+ f(z-u)f(z+\theta)]dz}{\int_{-\infty}^{\infty}[f(z-u+\theta)f(z)+f(z-u)f(z+ \theta)]dz}=0.\] For any fixed \(\theta\geq 0\) and \(u>0\), let \(Z_{\theta,u}\) be a random variable having the density \[\Pi_{\theta,u}(z)=\frac{f(z-u+\theta)f(z)+f(z-u)f(z+\theta)}{\int_{-\infty}^{ \infty}f(t-u+\theta)f(t)+f(t-u)f(t+\theta)dt},\ -\infty<z<\infty,\] so that \(E[W^{{}^{\prime}}(Z_{\theta,u}-c(\theta,u))]=0.\) Then, for any fixed \(\theta\geq 0\) and \(u>0\), \[\frac{\Pi_{\theta,u}(z)}{\Pi_{0,u}(z)} = d(\theta,u)\frac{f(z-u+\theta)f(z)+f(z-u)f(z+\theta)}{2f(z-u)f(z)}\] \[= \frac{1}{2}\left[\frac{f(z-u+\theta)}{f(z-u)}+\frac{f(z+\theta)}{ f(z)}\right],\ -\infty<z<\infty,\] where \(d(\theta,u)\) is a positive constant. By the assumption (A), we have \(\frac{\Pi_{\theta,u}(z)}{\Pi_{0,u}(z)}\) decreasing in \(z\), for any fixed \(\theta\geq 0\) and \(u>0\). Since, for any constant \(c\), \(W^{{}^{\prime}}(z-c)\) is an almost everywhere increasing function of \(z\), we conclude that, for any \(u>0\), \[E[W^{{}^{\prime}}(Z_{\theta,u}-c)]\leq E[W^{{}^{\prime}}(Z_{0,u}-c)],\ \forall\ \theta\geq 0,\ c\in\Re. \tag{2.3}\] Taking \(c=c(\theta,u)\) in (2.3), we have, for any \(\theta\geq 0\) and \(u>0\), \[0= E[W^{{}^{\prime}}(Z_{\theta,u}-c(\theta,u))]\leq E[W^{{}^{ \prime}}(Z_{0,u}-c(\theta,u))]\] \[\Longrightarrow 0= E[W^{{}^{\prime}}(Z_{0,u}-c(0,u))]\leq E[W^{{}^{\prime}}(Z_{0,u} -c(\theta,u))]\] Since, for any fixed \(t\), \(W^{{}^{\prime}}(t-c)\) is a decreasing function of \(c\in\Re\), we get \[c(\theta,u)\leq c(0,u),\ \ \forall\ u>0,\ \theta\geq 0.\] Now consider the function \(\phi_{0}(u)=\min\{\phi(u),c(0,u)\},\ u>0\). Then, for any fixed \(\theta\geq 0\) and \(u>0\), we have \(c(\theta,u)\leq\phi_{0}(u)<\phi(u)\), provided \(\phi(u)>c(0,u)\). Using condition (C1), for any fixed \(\theta\geq 0\) and \(u>0\), \(R_{\boldsymbol{\theta},u}(c)\) is increasing in \(c\in[c(0,u),\infty)\). Consequently we get \[E^{X/U}_{\boldsymbol{\theta}}\left[W\left(\delta_{\phi_{0}}-\theta_{(2)} \right)|U=u\right]\leq E^{X/U}_{\boldsymbol{\theta}}\left[W\left(\delta_{\phi} -\theta_{(2)}\right)|U=u\right]\] for all \(\boldsymbol{\theta}\in\Theta\) and \(u>0\) and strict inequity holds for some \(u>0\). Hence we have \(R(\boldsymbol{\theta},\delta_{\phi_{0}})\leq R(\boldsymbol{\theta},\delta_{ \phi})\). This proves the theorem. \(\blacksquare\) **Corollary 2.2**: _Let the assumption (A) and assumptions (C1)-(C3) hold. Then the estimator,_ \[\delta_{\phi_{0}}({\bf X})=X_{(2)}-\min\{c_{0},c(0,U)\} \tag{2.4}\] _improves upon the natural estimator \(\delta_{\alpha_{0}}({\bf X})=X_{(2)}-c_{0}\) under the loss (1.3) provided \(P_{\boldsymbol{\theta}}(c_{0}>c(0,U))>0\), for some \(\boldsymbol{\theta}\in\Theta\)._ **Remark 2.1**: _If \(f(x)\) is decreasing in \(x\) then \(P_{\boldsymbol{\theta}}(c_{0}>c(0,U))>0\) for some \(\boldsymbol{\theta}\in\Theta\)._ ### A Class of improved estimators A natural class of estimators for estimating \(\theta_{(2)}\) is \({\cal D}=\{\delta_{b}=X_{(2)}-b:\ b\in\Re\}\). Firstly, we find admissible estimators within the class of estimators \({\cal D}\). 
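Before doing so, we note that the cutoff quantities introduced above are easy to evaluate numerically. The following minimal sketch, assuming a user-supplied density \(f\), an almost everywhere differentiable bowl-shaped loss \(W\) with derivative \(W'\), and a root lying inside the chosen bracket, computes \(c_{0}\) from equation (1.7), \(c(0,u)\) from equation (2.1), and the resulting Stein-type estimate of Corollary 2.2; the concrete density/loss pair at the end (standard normal density with the Linex loss) is only an illustrative choice.

```python
# Illustrative sketch (not part of the formal development): numerical
# evaluation of c_0 from (1.7), c(0,u) from (2.1), and the Stein-type
# estimate of Corollary 2.2, for user-supplied f and W'.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq


def c0(f, W_prime, lo=-20.0, hi=20.0):
    """Solve  int W'(x - c) f(x) dx = 0  for c (equation (1.7))."""
    g = lambda c: quad(lambda x: W_prime(x - c) * f(x), -np.inf, np.inf)[0]
    return brentq(g, lo, hi)  # assumes the root lies in the bracket [lo, hi]


def c0u(f, W_prime, u, lo=-20.0, hi=20.0):
    """Solve  int W'(z - c) f(z - u) f(z) dz = 0  for c (equation (2.1))."""
    g = lambda c: quad(lambda z: W_prime(z - c) * f(z - u) * f(z), -np.inf, np.inf)[0]
    return brentq(g, lo, hi)


def stein_type_estimate(x1, x2, f, W_prime):
    """Corollary 2.2: X_(2) - min{c_0, c(0, U)} with U = X_(2) - X_(1)."""
    x_max, x_min = max(x1, x2), min(x1, x2)
    u = x_max - x_min
    return x_max - min(c0(f, W_prime), c0u(f, W_prime, u))


if __name__ == "__main__":
    # Illustration: standard normal density with the Linex loss W(t) = e^{at} - at - 1,
    # for which c_0 = a/2 and c(0,u) = u/2 + a/4 are available in closed form.
    a = 1.0
    f = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)
    W_prime = lambda t: a * (np.exp(a * t) - 1.0)
    print(stein_type_estimate(0.3, 1.1, f, W_prime))  # approximately 1.1 - 0.5 = 0.6
```

We now return to finding admissible estimators within the class \(\mathcal{D}\).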
The risk function of an estimator \(\delta_{b}\) is \[R(\boldsymbol{\theta},\delta_{b}) =E_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-b)]\] \[=\iint\limits_{-\infty<x_{1}\leq x_{2}<\infty}W(x_{2}-\theta_{(2 )}-b)\left[f(x_{1}-\theta_{(1)})f(x_{2}-\theta_{(2)})+f(x_{1}-\theta_{(2)})f( x_{2}-\theta_{(1)})\right]dx_{1}\,dx_{2}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{z}W(z-b)\left[f(x+\theta) f(z)+f(x)f(z+\theta)\right]dx\,dz\] \[=E_{\theta}[W(Z-b)],\ \ \theta\geq 0, \tag{2.5}\] where \(Z\) is a r.v. with the density \(g_{\theta}(z)=F(z+\theta)f(z)+F(z)f(z+\theta),\ z\in\Re,\ \theta\geq 0\), and \(F(z)=\int_{-\infty}^{z}f(t)\,dt,\ z\in\Re\). Using the assumption (A), it is easy to verify that, for every \(\theta\geq 0\), \(g_{\theta}(z)/g_{0}(z)\) is decreasing in \(z\). Let \(b_{\theta}\) be the continues function and be the unique solution of the equation \(E_{\theta}[W^{{}^{\prime}}(Z-b)]=\int_{-\infty}^{\infty}W^{{}^{\prime}}(x-b)g _{\theta}(x)dx=0\). Since, for every \(\theta\geq 0\), \(g_{\theta}(x)/g_{0}(x)\) is decreasing in \(x\) and \(W^{{}^{\prime}}(x-b)\) is decreasing in \(b\), and under the assumption \(\lim_{t\to\infty}f(t)=0\), it can easy to see that \[\inf_{\theta\geq 0}b_{\theta}=b_{\infty}=c_{0}\,\leq\,b_{\theta}\,\leq\,b_{0},\ \ \forall\ \theta\geq 0.\] **Theorem 2.3**: _Suppose that the assumption (A) holds and \(\lim_{t\to\infty}f(t)=0\). Then the estimators that are admissible within the class \({\cal D}\) are \(\{X_{(2)}-b:\ b_{\infty}\,\leq\,b_{\theta}\,\leq\,b_{0}\}\)._ _Proof:_ Note that, for any fixed \(\boldsymbol{\theta}\in\Theta\) (or fixed \(\theta\geq 0\)), the risk function \(R(\boldsymbol{\theta},\delta)\), given by (2.5), is uniquely minimized at \(b=b_{\theta}\), it is a strictly decreasing function of \(b\) on \((-\infty,b_{\theta})\) and strictly increasing function of \(b\) on \((b_{\theta},\infty)\). Since, for any \(\theta\geq 0\), \(b_{\theta}\) is a continuous function of \(\theta\in[0,\infty)\), it assumes all values between \(\inf_{\theta\geq 0}b_{\theta}=b_{\infty}=c_{0}\) and \(\sup_{\theta\geq 0}b_{\theta}=b_{0}\), as \(\theta\) varies on \([0,\infty)\). It follows that, each \(b\in[b_{\infty},b_{0}]\) uniquely minimizes the risk function \(R(\boldsymbol{\theta},\delta)\) at some \(\boldsymbol{\theta}\in\Theta\) (or at some \(\theta\geq 0\)). This proves that the estimators \(\{X_{(2)}-b:\ b_{\infty}\,\leq\,b_{\theta}\,\leq\,b_{0}\}\) are admissible among the estimators in the class \({\cal D}\). \(\blacksquare\) Hence the subclass of estimators \({\cal D}_{0}=\{X_{(2)}-b:\ b_{\infty}=c_{0}\,\leq\,b_{\theta}\,\leq\,b_{0}\}\) is admissible within the class \(\mathcal{D}\). Now, one can also consider whether improvements can be made to the estimators within the class \(\mathcal{D}_{0}\), but it may not be possible to obtain improvements over all estimators in \(\mathcal{D}_{0}\). Therefore, in this section, we aim to find improvements specifically for the boundary estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-b_{0}\) within the class \(\mathcal{D}_{0}\), where \(b_{0}\) is the unique solution of equation \[\int_{-\infty}^{\infty}W^{\prime}(x-b_{0})f(x)\,F(x)\,dx=0, \tag{2.6}\] where \(F(x)=\int_{-\infty}^{x}f(y)\,dy,\;x\in\Re\). Now, we use the IERD approach of Kubokawa (1994) to propose a class of estimators dominating over the estimator \(\delta_{b_{0}}\). Further, we will obtain the Brewster-Zidek (1974) type estimator improving over \(\delta_{b_{0}}\). Consider estimation of \(\theta_{(2)}\) under the loss function (1.2). 
Assume that the function \(W(\cdot)\) is absolute continuous and satisfies the assumptions (C1), (C2) and (C3). The following two lemmas will be useful in proving the next result. The lemma stated in the following lemma follows from relationship between the likelihood ratio order and the revised failure rate order in the theory of stochastic orders (see Shaked and Shanthikumar (2007)). The proof of the following lemma, being straightforward, is also omitted. **Lemma 2.4**: _Let \(s_{0}\in\Re\) and let \(M:\Re\to\Re\) be such that \(M(s)\leq 0,\;\forall\;s<s_{0},\) and \(M(s)\geq 0,\;\forall\;s>s_{0}\). Let \(M_{i}:\Re\to[0,\infty),\;i=1,2,\) be non-negative functions such that \(M_{1}(s)M_{2}(s_{0})\geq(\leq)\,M_{1}(s_{0})M_{2}(s),\;\forall\;s<s_{0},\text{ and }M_{1}(s)M_{2}(s_{0})\leq\;(\geq)\,M_{1}(s_{0})M_{2}(s),\;\forall\;s>s_{0}.\) Then,_ \[M_{2}(s_{0})\int\limits_{-\infty}^{\infty}M(s)\,M_{1}(s)ds\leq\;(\geq)\;M_{1} (s_{0})\int\limits_{-\infty}^{\infty}M(s)\,M_{2}(s)ds.\] In the following theorem, we provide a class of estimators that improve upon the natural estimator \(\delta_{b_{0}}\). **Theorem 2.5**: _Suppose that the assumption (A) holds. Additionally, assume that \(W(\cdot)\) is absolutely continuous and satisfies (C1), (C2) and (C3). Let \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) be a location equivariant estimator of \(\theta_{(2)}\) such that_ * \(\phi(t)\) _is increasing in_ \(t\in[0,\infty)\)_,_ * \(\lim_{t\to\infty}\phi(t)=b_{0}\)__ * \(\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-\phi(t))\;[F(z)-F(z-t)]f(z)\,dz\, \leq\,0,\;\forall\;t\in[0,\infty).\)__ _Then, the estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) is an improvement over the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-b_{0}\)._ **Proof:** Let us fix \(\boldsymbol{\theta}\in\Theta\) and let \(\theta=\theta_{(2)}-\theta_{(1)}\), so that \(\theta\geq 0\). Consider the risk difference \(\Delta(\boldsymbol{\theta})\) \[=R(\boldsymbol{\theta},\delta_{b_{0}})-R(\boldsymbol{\theta},\delta_{ \phi})\] \[=E_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-b_{0})-W(X_{(2)}- \theta_{(2)}-\phi(U))]\] \[=E_{\boldsymbol{\theta}}\left[\int_{U}^{\infty}\Big{\{}\frac{d}{ dt}W(X_{(2)}-\theta_{(2)}-\phi(t))\Big{\}}\;dt\right],\] \[=\int_{-\infty}^{\infty}\int_{u=0}^{\infty}\left[\int_{u}^{ \infty}\Big{\{}\frac{d}{dt}W(y-\theta_{(2)}-\phi(t))\Big{\}}\;dt\right][f(y-u- \theta_{(1)})f(y-\theta_{(2)})+f(y-u-\theta_{(2)})f(y-\theta_{(1)})]\,du\,dy,\] After changing the order of integration we have \[\Delta(\boldsymbol{\theta})\] \[=\int_{t=0}^{\infty}\int_{-\infty}^{\infty}\int_{u=0}^{t}\Big{\{} \frac{d}{dt}W(y-\theta_{(2)}-\phi(t))\Big{\}}\;[f(y-u-\theta_{(1)})f(y-\theta _{(2)})+f(y-u-\theta_{(2)})f(y-\theta_{(1)})]dudydt,\] \[=-\int_{t=0}^{\infty}\phi^{\prime}(t)\bigg{[}\int_{-\infty}^{ \infty}\int_{u=0}^{t}\!\Big{\{}W^{\prime}(y-\theta_{(2)}-\phi(t))\Big{\}}\![f( y-u-\theta_{(1)})f(y-\theta_{(2)})\!+\!f(y-u-\theta_{(2)})f(y-\theta_{(1)})]dudy \bigg{]}dt.\] Since \(\phi(t)\) is a increasing function of \(t\), it suffices to show that, for every \(t>0\), \[\int_{-\infty}^{\infty}\int_{u=0}^{t}\Big{\{}W^{\prime}(y-\theta _{(2)}-\phi(t))\Big{\}}\;[f(y-u-\theta_{(1)})f(y-\theta_{(2)})+f(y-u-\theta_{ (2)})f(y-\theta_{(1)})]dudy\leq 0\] \[\Longleftrightarrow \int_{u=0}^{t}\int_{-\infty}^{\infty}\Big{\{}W^{\prime}(z-\phi( t))\Big{\}}\;[f(z-u+\theta)f(z)+f(z-u)f(z+\theta)]dzdu\leq 0. 
\tag{2.7}\] Now, since \(W^{\prime}(t)\) is increasing function of \(t\) and, for every fixed \(\theta\geq 0\) and \(u\in\Re\), \(\frac{f(z-u+\theta)f(z)+f(z-u)f(z+\theta)}{2f(z-u)f(z)}\) is decreasing in \(z\), then, for \(\theta\geq 0\), we have \[\int_{u=0}^{t}\left[\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-\phi (t))\left[f(z-u+\theta)f(z)+f(z-u)f(z+\theta)\right]dz\right]du\] \[\leq\int_{u=0}^{t}\left[\int_{-\infty}^{\infty}W^{{}^{\prime}}(z- \phi(t))\;[f(z-u)f(z)+f(z-u)f(z)]dz\right]du\] \[=\ 2\int_{-\infty}^{\infty}W^{{}^{\prime}}(z-\phi(t))\;[F(z)-F(z-t )]f(z)\,dz\] Now, using hypothesis (iii), we obtain (2.7). This completes proof of the theorem. \(\blacksquare\) In the following we will prove a corollary which will provide Brewster and Zidek (1974) type improved estimators. **Corollary 2.6**: _Suppose that assumptions (A), (C1), (C2) and (C3) hold. Additionally suppose _that, for every fixed \(t\), the equation_ \[k_{1}(c|t)=\int_{-\infty}^{\infty}\;W^{{}^{\prime}}(z-c)\;[F(z)-F(z-t)]f(z)\,dz=0\] _has the unique solution \(c\equiv\phi_{0}(t)\). Then_ \[R(\mathbf{\theta},\delta_{\phi_{0}})\leq R(\mathbf{\theta}, \delta_{0}),\;\;\;\forall\;\;\mathbf{\theta}\in\Theta,\] _where \(\delta_{\phi_{0}}({\bf X})=X_{(2)}-\phi_{0}(U)\)._ **Proof:** It is suffices to show that \(\phi_{0}(t)\) satisfies conditions of Theorem 2.5. Note that a hypothesis of the corollary ensures that \(\lim_{t\to\infty}\phi_{0}(t)=b_{0}\). To show that \(\phi_{0}(t)\) is an increasing function of \(t\), suppose that, there exist numbers \(t_{1}\) and \(t_{2}\) such that \(0<t_{1}<t_{2}\) and \(\phi_{0}(t_{1})\neq\phi_{0}(t_{2}).\) Under the hypotheses of the corollary, we have \(k_{1}(\phi_{0}(t_{1})|t_{1})=0\), \(\phi_{0}(t_{2})\) is the unique solution of \(k_{1}(c|t_{2})=0\) and \(k_{1}(c|t_{2})\) is a decreasing function of \(c\). Let \(s_{0}=\phi_{0}(t_{1}),\;M(s)=W^{{}^{\prime}}(s-s_{0})f(s),\;M_{1}(s)=\int_{0}^ {t_{2}}f(s-u)du\) and \(M_{2}(s)=\int_{0}^{t_{1}}f(s-u)du\). Then, using Lemma 2.4, we get \[\int_{0}^{t_{1}}f(\phi_{0,1}(t_{1})-w)\,du\;\left(\int_{-\infty}^{\infty}\;W^{ {}^{\prime}}(z-\phi_{0}(t_{1}))\,f(z)\;\int_{0}^{t_{2}}f(z-u)\,du\;dz\right)\] \[\geq\;\int_{0}^{t_{2}}f(\phi_{0}(t_{1})-w)du\;\left(\int_{-\infty}^{\infty}\;W ^{{}^{\prime}}(z-\phi_{0}(t_{1}))\,f(z)\;\int_{0}^{t_{1}}f(z-u)\,du\;dz\right)=0.\] This implies that \[k_{1}(\phi_{0}(t_{1})|t_{2})=\int_{-\infty}^{\infty}\int_{0}^{t_{2}}\;W^{{}^{ \prime}}(z-\phi_{0,1}(t_{1}))f(z-u)f(z)\,du\,dz\geq\;0.\] So we have \(k_{1}(\phi_{0}(t_{1})|t_{2})\,>\,0,\) as \(k_{1}(c|t_{2})=0\) has the unique solution \(c\equiv\phi_{0}(t_{2})\) and \(\phi_{0}(t_{1})\neq\phi_{0}(t_{2})\). Since \(k_{1}(c|t_{2})\) is a decreasing function of \(c\), \(k_{1}(\phi_{0}(t_{2})|t_{2})=0\) and \(k_{1}(\phi_{0}(t_{1})|t_{2})\,>\,0,\) it follows that \(\phi_{0}(t_{1})<\,\phi_{0}(t_{2})\). Hence the result follows. \(\blacksquare\) ## 3 Dominance result for special loss functions In this section, we have obtained the improved estimators for three special loss functions namely squared error loss \(L_{1}:W(t)=t^{2},\;t\in\Re,\) linear loss \(L_{2}:W(t)=e^{at}-at-1,,\;t\in\Re,\;a\neq 0\) and absolute error loss \(L_{3}:W(t)=|t|,\;t\in\Re.\) **Theorem 3.1**: _Suppose that the assumption (A) holds. 
Then for estimating \(\theta_{(2)}\) with respect to loss function \(L_{1}\) the estimator_ \[\delta_{ST}({\bf X})=X_{(2)}-\min\{c_{0},c(0,u)\} \tag{3.1}\] _improves upon the estimator \(\delta_{0}({\bf X})\), provided \(P_{\boldsymbol{\theta}}(c_{0}>c(0,U))\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\), where_ \[c(0,u)=\int_{-\infty}^{\infty}zf(z-u)f(z)dz\bigg{/}\int_{-\infty}^{\infty}f(z-u )f(z)dz.\] _and \(c_{0}=\int_{-\infty}^{\infty}xf(x)dx\)._ **Theorem 3.2**: _The estimator_ \[\delta_{ST}({\bf X})=X_{(2)}-\min\{c_{0},c(0,U)\}, \tag{3.2}\] _improves upon the estimator \(\delta_{c_{0}}({\bf X})\) with respect to the loss \(L_{2}\), where \(c_{0}=\frac{1}{a}\ln\int_{\infty}^{\infty}e^{ax}f(x)dx\) and \(c(0,U)=\frac{1}{a}\ln H(U)\) with_ \[H(u)=\int_{-\infty}^{\infty}e^{az}f(z-u)f(z)dz\bigg{/}\int_{-\infty}^{\infty} f(z-u)f(z)dz\] _provided the assumption (A) holds and \(P_{\boldsymbol{\theta}}(c_{0}>\frac{1}{a}\ln H(u))\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\)._ **Theorem 3.3**: _The estimator_ \[\delta_{ST}({\bf X})=X_{(2)}-\min\{c_{0},c(0,U)\}, \tag{3.3}\] _improves upon the estimator \(\delta_{c_{0}}({\bf X})\) with respect to the loss \(L_{3}\), where \(c_{0}\) and \(c(0,u)\) are such that_ \[\int_{-\infty}^{c_{0}}f(z)dz=\frac{1}{2}\ \ \mbox{and}\ \ \int_{-\infty}^{c(0,u)}f(z-u)f(z)dz \bigg{/}\int_{-\infty}^{\infty}f(z-u)f(z)dz=\frac{1}{2},\] _respectively, provided the assumption (A) holds and \(P_{\boldsymbol{\theta}}(c_{0}>c(0,U))\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\)._ Now we will apply Theorem 2.5 to particular loss functions and provide a class of improved estimators over the estimator \(\delta_{b_{0}}({\bf X})=X_{(2)}-b_{0}\), where \(b_{0}\) be as defined by the equation (2.6). **Theorem 3.4**: _Suppose that the assumption (A) holds. Let \(\delta_{\phi}({\bf X})=X_{(2)}-\phi(U)\) be a location equivariant estimator of \(\theta_{(2)}\) such that_ * \(\phi(t)\) _is increasing in_ \(t\)_,_ * \(\lim_{t\to\infty}\phi(t)=2\int_{-\infty}^{\infty}xf(x)F(x)dx=b_{0},\)__ * \(\phi(t)\geq\frac{\int_{-\infty}^{\infty}\int_{0}^{t}zf(z-u)f(z)\,du\,dz}{\int _{-\infty}^{\infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz}\)_._ _Then the estimator \(\delta_{\phi}({\bf X})=X_{(2)}-\phi(U)\) improves upon the estimator \(\delta_{b_{0}}({\bf X})=X_{(2)}-b_{0}\) with respect to the \(L_{1}\)._ **Remark 3.1**: _The boundary estimator of the class estimators given by Theorem 3.4 is the Brewster-Zidek type estimator. So the Brewster-Zidek type estimator is obtained as_ \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\phi_{BZ}(U)\] _with_ \[\phi_{BZ}(t)=\frac{\int_{-\infty}^{\infty}\int_{0}^{t}zf(z-u)f(z)\,du\,dz}{ \int_{-\infty}^{\infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz}.\] **Theorem 3.5**: _Suppose that the assumption (A) holds. Let \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) be a location equivariant estimator of \(\theta_{(2)}\) such that_ * \(\phi(t)\) _is increasing in_ \(t\)_,_ * \(\lim_{t\to\infty}\phi(t)=\frac{1}{a}\ln\left(2\int_{-\infty}^{\infty}e^{ax}f( x)F(x)dx\right)=b_{0},\)__ * \(\phi(t)\leq\frac{1}{a}\ln\left(\frac{\int_{-\infty}^{\infty}\int_{0}^{t}e^{ax} f(z-u)f(z)\,du\,dz}{\int_{-\infty}^{\infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz}\right)\)_._ _Then the estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) improves upon the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-b_{0}\) with respect to the \(L_{2}\)._ **Remark 3.2**: _The boundary estimator of the class estimators given by Theorem 3.5 is the Brewster-Zidek type estimator. 
So the Brewster-Zidek type estimator is obtained as_ \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\phi_{BZ}(U)\] _with_ \[\phi_{BZ}(t)=\frac{1}{a}\ln\left(\frac{\int_{-\infty}^{\infty}\int_{0}^{t}e^{ az}f(z-u)f(z)\,du\,dz}{\int_{-\infty}^{\infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz} \right),\,\,\,t>0.\] **Theorem 3.6**: _Suppose that the assumption (A) holds and \(c_{0}\) be as in Theorem 3.3. Let \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) be a location equivariant estimator of \(\theta_{(2)}\) such that_ * \(\phi(t)\) _is increasing in_ \(t\)_,_ * \(\lim_{t\to\infty}\phi(t)=b_{0}\)_, where_ \(b_{0}\) _is the solution of the equation_ \(2\int_{-\infty}^{b_{0}}f(x)F(x)dx=\frac{1}{2}\)_,_ * \(\phi(t)\) _be such that it satisfies_ \[\int_{-\infty}^{\phi(t)}\int_{0}^{t}f(z-u)f(z)\,du\,dz\geq\frac{1}{2}\int_{- \infty}^{\infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz.\] _Then the estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) improves upon the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-b_{0}\) with respect to the \(L_{3}\)._ **Remark 3.3**: _The boundary estimator of the class estimators given by Theorem 3.6 is the Brewster-Zidek type estimator. So the Brewster-Zidek type estimator is obtained as_ \[\delta_{BZ}(\mathbf{X})=X_{(2)}-C,\] _where \(C\) is the unique solution of the equation_ \[\int_{-\infty}^{C}\int_{0}^{t}f(z-u)f(z)\,du\,dz=\frac{1}{2}\int_{-\infty}^{ \infty}\int_{0}^{t}f(z-u)f(z)\,du\,dz,\ \ \ t>0.\] ## 4 Improved estimator under generalized Pitman closeness criterion In this section we consider the estimation problem under the generalized Pitman nearness (GPN) criterion. A brief discussion on the Pitman nearness criterion is given in Garg and Misra (2023). For completeness in our presentation, we are again discussing it here. The notion of the Pitman nearness criterion was first introduced by Pitman (1937), as defined below. **Definition 4.1**: _Let \(\mathbf{X}\) be a random vector having a probability distribution involving an unknown parameter \(\boldsymbol{\theta}\in\Theta\) (\(\boldsymbol{\theta}\) may be vector valued). Let \(\delta_{1}\) and \(\delta_{2}\) be two estimators of a real-valued estimand \(\tau(\boldsymbol{\theta})\). Then, the Pitman nearness (PN) of \(\delta_{1}\) relative to \(\delta_{2}\) is defined by_ \[PN(\delta_{1},\delta_{2};\boldsymbol{\theta})=P_{\boldsymbol{\theta}}[|\delta _{1}-\tau(\boldsymbol{\theta})|<|\delta_{2}-\tau(\boldsymbol{\theta})|],\ \ \boldsymbol{\theta}\in\Theta,\] _and the estimator \(\delta_{1}\) is said to be nearer to \(\tau(\boldsymbol{\theta})\) than \(\delta_{2}\) if \(PN(\delta_{1},\delta_{2};\boldsymbol{\theta})\geq\frac{1}{2},\ \forall\ \boldsymbol{\theta}\in\Theta\), with strict inequality for some \(\boldsymbol{\theta}\in\Theta\)._ Nayak (1990) and Kubokawa (1991) modified the Pitman (1937) nearness criterion and defined the generalized Pitman nearness (GPN) criterion based on general loss function \(L(\boldsymbol{\theta},\delta)\). **Definition 4.2**: _Let \(\mathbf{X}\) be a random vector having a distribution involving an unknown parameter \(\boldsymbol{\theta}\in\Theta\) and let \(\tau(\boldsymbol{\theta})\) be a real-valued estimand. Let \(\delta_{1}\) and \(\delta_{2}\) be two estimators of the estimand \(\tau(\boldsymbol{\theta})\). Also, let \(L(\boldsymbol{\theta},a)\) be a specified loss function for estimating \(\tau(\boldsymbol{\theta})\). 
Then, the generalized Pitman nearness (GPN) of \(\delta_{1}\) relative to \(\delta_{2}\) is defined by_ \[GPN(\delta_{1},\delta_{2};\boldsymbol{\theta})=P_{\boldsymbol{\theta}}[L( \boldsymbol{\theta},\delta_{1})<L(\boldsymbol{\theta},\delta_{2})]+\frac{1}{2 }P_{\boldsymbol{\theta}}[L(\boldsymbol{\theta},\delta_{1})=L(\boldsymbol{ \theta},\delta_{2})],\ \ \boldsymbol{\theta}\in\Theta.\] _The estimator \(\delta_{1}\) is said to be nearer to \(\tau(\boldsymbol{\theta})\) than \(\delta_{2}\), under the GPN criterion, if \(GPN(\delta_{1},\delta_{2};\boldsymbol{\theta})\geq\frac{1}{2},\ \forall\ \boldsymbol{\theta}\in\Theta\), with strict inequality for some \(\boldsymbol{\theta}\in\Theta\)._ The following result, popularly known as Chebyshev's inequality, will be used in our study (see Marshall and Olkin (2007)). **Proposition 4.1**: _Let \(S\) be random variable and let \(k_{1}(\cdot)\) and \(k_{2}(\cdot)\) be real-valued monotonic functions defined on the distributional support of the r.v. \(S\). If \(k_{1}(\cdot)\) and \(k_{2}(\cdot)\) are monotonic functions of the same (opposite) type, then_ \[E[k_{1}(S)k_{2}(S)]\geq(\leq)E[k_{1}(S)]E[k_{2}(S)],\] _provided the above expectations exist._ The following lemma, taken from Garg and Misra (2023), will be useful in proving the main results of this section (also see Nayak (1990)) and Zhou and Nayak (2012)). **Lemma 4.2** (Garg and Misra (2023)): _Let \(Y\) be a random variable having the Lebesgue probability density function and let \(m_{Y}\) be the median of \(Y\). Let \(W:\Re\rightarrow[0,\infty)\) be a function such that \(W(0)=0\), \(W(t)\) is strictly decreasing on \((-\infty,0)\) and strictly increasing on \((0,\infty)\). Then, for \(-\infty<c_{1}<c_{2}\leq m_{Y}\) or \(-\infty<m_{Y}\leq c_{2}<c_{1}\), \(GPN=P[W(Y-c_{2})<W(Y-c_{1})]+\frac{1}{2}P[W(Y-c_{2})=W(Y-c_{1})]>\frac{1}{2}\)._ **Lemma 4.3**: _Suppose that assumptions (A) and (C1)-(C3) hold. For \(u>0\) and \(\boldsymbol{\theta}\in\Theta\), let \(m(\boldsymbol{\theta},u)\) denote the median of the conditional distribution of \(X_{(2)}-\theta_{(2)}\), given \(U=u\). Then_ \[m(\boldsymbol{\theta},u)\leq m(\boldsymbol{0},u),\ \ \forall\ u>0.\] _Proof:_ The joint distribution of \((Y,U)=(X_{(2)}-\theta_{(2)},X_{(2)}-X_{(1)})\) is \[h_{\theta}(y,u)=f(y-u+\theta)f(y)+f(y-u)f(y+\theta),\ \ -\infty<y<\infty,\ u>0,\] where \(\theta=\theta_{(2)}-\theta_{(1)}\geq 0\). The conditional p.d.f. of \(Y\) given \(U=u\) is \(\Pi_{\theta}(z|u)=\frac{1}{c_{\theta}}f(z-u+\theta)f(z)+f(z-u)f(z+\theta)\), \(z\in\Re\), where \(c_{\theta}=\int_{-\infty}^{\infty}\left[f(t-u+\theta)f(t)+f(t-u)f(t+\theta) \right]dt\). Now, for any fixed \(u>0\) and \(\theta\geq 0\), we have \[\frac{\Pi_{\theta}(z|u)}{\Pi_{0}(z|u)} = \frac{c_{0}}{c_{\theta}}\frac{f(z-u+\theta)f(z)+f(z-u)f(z+\theta) }{2f(z-u)f(z)}\] \[= \frac{c_{0}}{2\,c_{\theta}}\left[\frac{f(z-u+\theta)}{f(z-u)}+ \frac{f(z+\theta)}{f(z)}\right].\] By the assumption (A), for \(\theta>0\), \(\frac{\Pi_{\theta}(z|u)}{\Pi_{0}(z|u)}\) is decreasing in \(z\). For every fixed \(u>0\) and \(\theta\geq 0\), take \(k_{1}(s)=I_{(-\infty,m(\boldsymbol{\theta},u))}(s),\ s\in\Re\), and \(k_{2}(s)=\frac{\Pi_{\theta}(s|u)}{\Pi_{0}(s|u)},\ \in\Re\), where \(I_{A}(\cdot)\) denotes the indicator function of set \(A\subseteq\Re\). Here \(k_{1}(s)\) and \(k_{2}(s)\) are decreasing functions of \(s\). 
Using Proposition 4.1, for any \(u>\) and \(\theta\geq 0\), we get \[\frac{1}{2}= \int_{-\infty}^{\infty}k_{1}(s)k_{2}(s)\Pi_{0}(s|u)ds\geq\left(\int _{-\infty}^{\infty}k_{1}(s)\Pi_{0}(s|u)ds\right)\left(\int_{-\infty}^{\infty}k _{2}(s)\Pi_{0}(s|u)ds\right)\] \[\implies \int_{-\infty}^{m(\boldsymbol{\theta},u)}\Pi_{\theta}(s|u)ds=\int _{-\infty}^{m(\boldsymbol{0},u)}\Pi_{0}(s|u)ds=\frac{1}{2}\geq\int_{-\infty}^ {m(\boldsymbol{\theta},u)}\Pi_{0}(s|u)ds\] \[\implies m(\boldsymbol{0},u)\geq\,m(\boldsymbol{\theta},u),\,\,\, \forall\,\,u>0,\] establishing the assertion. **Theorem 4.4**: _Suppose that assumptions (A) and (C1)-(C3) hold. Let \(\delta_{\phi}=X_{(2)}-\phi(U)\) be an estimator of \(\theta_{(2)}\) such that \(P_{\boldsymbol{\theta}}(\phi(U)>m(\boldsymbol{0},U))\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\). Then the estimator_ \[\delta_{\phi_{0}}(\mathbf{X})=X_{(2)}-\min\{\phi(U),m(\boldsymbol{0},U)\} \tag{4.1}\] _improves over the estimator \(\delta_{\phi}(\mathbf{X})=X_{(2)}-\phi(U)\) in terms of the GPN criterion with a general loss ( 1.3)._ _Proof:_ Let \(\phi_{0}(t)=\min\{\phi(t),m(\boldsymbol{0},t)\},\,\,t\geq 0.\) The GPN of \(\delta_{\phi_{0}}(\mathbf{X})=X_{(2)}-\phi_{0}(U)\) relative to \(\delta_{\phi}(\mathbf{X})\) is given by \[GPN(\delta_{\phi_{0}},\delta_{\phi};\boldsymbol{\theta}) =P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-\phi_{0}(U))<W(X_ {(2)}-\theta_{(2)}-\phi(U))]\] \[\quad+\frac{1}{2}P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}- \phi_{0}(U))=W(X_{(2)}-\theta_{(2)}-\phi(U))]\] \[=\int_{-\infty}^{\infty}P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta _{(2)}-\phi_{0}(u))<W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}(u)\,du\] \[\quad+\frac{1}{2}\,\int_{-\infty}^{\infty}P_{\boldsymbol{\theta} }[W(X_{(2)}-\theta_{(2)}-\phi_{0}(u))=W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f _{U}(u)\,du,\] where \(f_{U}(u)=\int_{-\infty}^{\infty}\left[f(t-u+\theta)f(t)+f(t-u)f(t+\theta) \right]dt,\,\,u>0,\,\,\theta=\theta_{(2)}-\theta_{(1)}\geq 0\), is p.d.f. of r.v. \(U\). Now, define \(A=\{u>0:\phi(u)\leq m(\boldsymbol{0},u)\}\) and \(B=\{u>0:\phi(u)>m(\boldsymbol{0},u)\}\), we get \[GPN(\delta_{\phi_{0}},\delta_{\phi};\boldsymbol{\theta}) =\int_{A}P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-\phi(u))< W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}(u)\,du\] \[\quad+\frac{1}{2}\,\int_{A}P_{\boldsymbol{\theta}}[W(X_{(2)}- \theta_{(2)}-\phi(u))=W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}(u)\,du\] \[\quad+\int_{B}P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-m( \boldsymbol{0},u))<W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}(u)\,du\] \[\quad+\frac{1}{2}\,\int_{B}P_{\boldsymbol{\theta}}[W(X_{(2)}- \theta_{(2)}-m(\boldsymbol{0},u))=W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}( u)\,du\] \[=\frac{1}{2}\,\int_{A}f_{U}(u)\,du\] \[\quad+\int_{B}P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-m( \boldsymbol{0},u))<W(X_{(2)}-\theta_{(2)}-\phi(u))|U=u]\,f_{U}(u)\,du.\] From Lemma 4.3, we have \[m(\boldsymbol{\theta},u)\leq m(\boldsymbol{0},u),\ \ \forall\ u>0.\] Then, for every \(\boldsymbol{\theta}\in\Theta\), we have \(-\infty<m(\boldsymbol{\theta},u)\leq m(\boldsymbol{0},u)<\phi(u),\,\forall\ u\in B\). Since, for \(u\in B\), \(m(\boldsymbol{\theta},u)\) is the median of the conditional distribution of \(X_{(2)}-\theta_{(2)}\) given \(U=u\) and using Lemma 4.2, we have \(P_{\boldsymbol{\theta}}[W(X_{(2)}-\theta_{(2)}-m(\boldsymbol{0},u))<W(X_{(2)} -\theta_{(2)}-\phi(u))|U=u]\geq\frac{1}{2},\ \forall\ u\in B,\ \boldsymbol{\theta}\in\Theta\). 
Hence we get \[GPN(\delta_{\phi_{0}},\delta_{\phi};\boldsymbol{\theta})\,\geq\,\frac{1}{2}, \ \forall\ \boldsymbol{\theta}\in\Theta,\ u>0,\] and strict inequality holds for some \(u>0\). This proves the theorem. \(\blacksquare\)

As an immediate consequence of Lemma 4.2, the Pitman nearest (PN) equivariant estimator of \(\theta_{(2)}\) within the class \(\mathcal{D}\), under the GPN criterion, is obtained as \[\delta_{PN}(\mathbf{X})=X_{(2)}-m_{0},\] where \(m_{0}\) is such that \(\int_{-\infty}^{m_{0}}f(x)\,dx=\frac{1}{2}\).

**Corollary 4.5**: _Suppose that assumptions (A) and (C1)-(C3) hold. Then for estimating \(\theta_{(2)}\) under the GPN criterion, the estimator_ \[\delta_{PN}^{*}(\mathbf{X})=X_{(2)}-\min\{m_{0},m(\boldsymbol{0},U)\} \tag{4.2}\] _is Pitman nearer to \(\theta_{(2)}\) than the estimator \(\delta_{PN}(\mathbf{X})=X_{(2)}-m_{0}\), provided \(P_{\boldsymbol{\theta}}(m(\boldsymbol{0},U)<m_{0})\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\)._

The following corollary provides an improvement over the usual estimator \(\delta_{0}(\mathbf{X})=X_{(2)}-c_{0}\) (as defined by (1.8)).

**Corollary 4.6**: _Suppose that assumptions (A) and (C1)-(C3) hold. Then for estimating \(\theta_{(2)}\) under the GPN criterion, the estimator_ \[\delta_{PN}^{*}(\mathbf{X})=X_{(2)}-\min\{c_{0},m(\boldsymbol{0},U)\} \tag{4.3}\] _is Pitman nearer to \(\theta_{(2)}\) than the estimator \(\delta_{0}(\mathbf{X})=X_{(2)}-c_{0}\), provided \(P_{\boldsymbol{\theta}}(m(\boldsymbol{0},U)<c_{0})\neq 0\), for some \(\boldsymbol{\theta}\in\Theta\)._

## 5 Applications

**Example 5.1**: Let \(X_{1}\sim N(\theta_{1},\sigma^{2})\) and \(X_{2}\sim N(\theta_{2},\sigma^{2})\), where \((\theta_{1},\theta_{2})\in\Re^{2}\) is the vector of unknown means and \(\sigma^{2}>0\) is the known common variance. The pdf of \(X_{i}\) is \(f(x-\theta_{i}),\ x\in\Re,\ i=1,2,\) where \(f(z)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{z^{2}}{2\sigma^{2}}},\ z\in\Re.\) It is easy to see that \(f(\cdot)\) satisfies assumption (A). Consider the estimation of the parameter \(\theta_{(2)}=\max\{\theta_{1},\theta_{2}\}\) under the loss function \[L((\theta_{1},\theta_{2}),a)=W(a-\theta_{(2)}),\ (\theta_{1},\theta_{2})\in\Re^{2}, \ a\in\Re. \tag{5.1}\] For the squared error loss (i.e., \(W(t)=t^{2},\ t\in\Re\)), the usual estimator of \(\theta_{(2)}\) is \(\delta_{c_{0}}=X_{(2)}\). In this case, by Theorem 3.1, the improved estimator \(\delta_{ST}(\mathbf{X})=X_{(2)}-\min\left\{0,\frac{U}{2}\right\}=X_{(2)}\) coincides with the usual estimator, so there is no improvement over \(\delta_{c_{0}}\). Also, using Theorem 3.4, the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-\frac{\sigma}{\sqrt{\pi}}\) is dominated by the estimator \(\delta_{BZ}(\mathbf{X})\), where \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\phi_{BZ}(U)=X_{(2)}-\frac{\sigma\left(\frac{ 1}{\sqrt{2\pi}}-\phi\left(\frac{U}{\sqrt{2}\sigma}\right)\right)}{\sqrt{2} \left(\Phi\left(\frac{U}{\sqrt{2}\sigma}\right)-0.5\right)},\] \(U=X_{(2)}-X_{(1)}\), \(\phi(\cdot)\) is the p.d.f. of the standard normal distribution and \(\Phi(\cdot)\) is the d.f. of the standard normal distribution. For \(W(t)=e^{at}-at-1,\ t\in\Re,\ a\neq 0\) (i.e., the Linex loss), the usual estimator of \(\theta_{(2)}\) is \(\delta_{c_{0}}=X_{(2)}-\frac{a\sigma^{2}}{2}\).
In this case, using Theorem 3.2, the estimator \(\delta_{c_{0}}\) is dominated by the estimator \(\delta_{ST}(\mathbf{X})\), where \[\delta_{ST}(\mathbf{X})=X_{(2)}-\min\left\{\frac{a\sigma^{2}}{2},\frac{U}{2}+ \frac{a\sigma^{2}}{4}\right\}=\max\bigg{\{}X_{(2)}-\frac{a\sigma^{2}}{2},\frac {X_{1}+X_{2}}{2}-\frac{a\sigma^{2}}{4}\bigg{\}}.\] Also, using Theorem 3.5, the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-\frac{1}{a}\left[\ln 2+\frac{a^{2}\sigma^{2}} {2}+\ln\left(\Phi\left(\frac{a\sigma}{\sqrt{2}}\right)\right)\right]\) is dominated by the estimator \(\delta_{BZ}(\mathbf{X})\), where \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\frac{1}{a}\left[\frac{a^{2}\sigma^{2}}{2}+\ln \left(\Phi\left(\frac{a\sigma}{\sqrt{2}}\right)-\Phi\left(\frac{-U+a\sigma^{2} }{\sqrt{2}\sigma}\right)\right)-\ln\left(\frac{1}{2}-\Phi\left(\frac{-U}{ \sqrt{2}\sigma}\right)\right)\right].\] Now we consider \(W(t)=|t|,\ t\in\Re,\) (i.e., the absolute error loss). Under this loss function the usual estimator of \(\theta_{(2)}\) is \(\delta_{c_{0}}=X_{(2)}\). In this case, using Theorem 3.3, the estimator \(\delta_{c_{0}}\) is dominated by the estimator \[\delta_{ST}(\mathbf{X})=X_{(2)}-\min\left\{0,\frac{U}{2}\right\}=X_{(2)}.\] Also, using Theorem 3.6, the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-b_{0}\), where \(b_{0}\) is the solution of the equation \(\int_{-\infty}^{C}\Phi\left(\frac{s}{\sigma}\right)\phi\left(\frac{s}{\sigma} \right)ds=\frac{\sigma}{4}\), is dominated by the estimator \(\delta_{BZ}(\mathbf{X})=X_{(2)}-C,\) where \(C\) is the unique solution of the following equation \[\int_{-\infty}^{C}\left[\Phi\left(\frac{s}{\sigma}\right)-\Phi\left(\frac{s-U }{\sigma}\right)\right]\phi\left(\frac{s}{\sigma}\right)ds=\frac{\sigma}{4}- \frac{\sigma}{2}\Phi\left(\frac{-U}{\sqrt{2}\sigma}\right). \tag{5.2}\] Now, we will illustrate an application of Theorem 4.4 and Corollary 4.5. Consider the estimation of parameter \(\theta_{(2)}\) under the GPN criterion with the general loss function (\(W(\cdot)\) satisfy (C1), (C2) and (C3)) \[L(\mathbf{\theta},a)=W(a-\theta_{(2)}),\;\mathbf{\theta}\in\Re^{2},\;a\in\Re. \tag{5.3}\] Under the GPN criterion, the Pitman nearest (PN) estimator of \(\theta_{(2)}\) is \(X_{(2)}\). In this case, using Theorems 4.4, the improved estimator \(\delta_{\phi^{*}}(\mathbf{X})=X_{(2)}-\min\left\{0,\frac{U}{2}\right\}=X_{(2)}\) is same as PN estimator. Hence, there is no improvement over the estimator \(X_{(2)}\) using our results. **Example 5.2**: Let \(X_{1}\) and \(X_{2}\) be independent exponential random variables with \(X_{i}\) having the p.d.f. \(f(x-\theta_{i}),\;i=1,2\), where \[f(z)=\begin{cases}\frac{1}{\sigma}\,e^{-\frac{z}{\sigma}},&\text{if}\;\;z\geq 0 \\ 0,&\text{if}\;\;z<0\end{cases},\] \(\sigma>0\) is known positive constant and \(\mathbf{\theta}\in\Re^{2}\) is the vector of unknown location parameters. Consider estimation of \(\theta_{(2)}\) under the loss function \[L(\mathbf{\theta},a)=W(a-\theta_{(2)}),\;\mathbf{\theta}\in\Re^{2},\;a\in\Re. \tag{5.4}\] Under the squared error loss (i.e., \(W(t)=t^{2},\;t\in\Re\)), the usual estimator of \(\theta_{(2)}\) is \(X_{(2)}-\sigma\). 
In this case, using Theorem 3.1, the BLEE \(X_{(2)}-\sigma\) is dominated by the estimator \(\delta_{ST}(\mathbf{X})\), where \[\delta_{ST}(\mathbf{X})=X_{(2)}-\min\left\{\sigma,\frac{\sigma}{2}+U\right\}= \max\left\{X_{(2)}-\sigma,X_{(1)}-\frac{\sigma}{2}\right\}\!.\] Also, using Theorem 3.4, the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-\frac{3\sigma}{2}\) is dominated by the estimator \(\delta_{BZ}(\mathbf{X})\), where \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\phi_{BZ}(U)=X_{(2)}-\frac{3\sigma-(2U+3 \sigma)e^{-\frac{U}{\sigma}}}{2(1-e^{-\frac{U}{\sigma}})},\] and \(U=X_{(2)}-X_{(1)}\). Under the Linear loss function \(W(t)=e^{at}-at-1,\;t\in\Re,\;a\neq 0\), the usual estimator of \(\theta_{(2)}\) is \(\delta_{c_{0}}=X_{(2)}+\frac{1}{a}\ln(1-a\sigma)\), whenever \(a\sigma<1\). In this case, using Theorem 3.2, the estimator \(X_{(2)}\) is dominated by the estimator \(\delta_{ST}(\mathbf{X})\), where \[\delta_{ST}(\mathbf{X}) =X_{(2)}-\min\bigg{\{}-\frac{1}{a}\ln(1-a\sigma),U+\frac{\ln(2)}{ a}-\frac{1}{a}\ln(2-a\sigma)\bigg{\}}\] \[=\max\bigg{\{}X_{(2)}+\frac{1}{a}\ln(1-a\sigma),X_{(1)}+\frac{1}{ a}\ln(2-a\sigma)-\frac{\ln(2)}{a}\bigg{\}}.\] Also, using Theorem 3.5 the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}-\frac{1}{a}[\ln 2-\ln(1-a\sigma)-\ln(2-a \sigma)]\), whenever \(a\sigma<1\), is dominated by the estimator \(\delta_{BZ}(\mathbf{X})\), where \[\delta_{BZ}(\mathbf{X})=X_{(2)}-\phi_{BZ}(U)=X_{(2)}-\frac{1}{a}\left[\ln 2- \ln(1-a\sigma)-\ln(2-a\sigma)+\ln\left(1-e^{-U\left(\frac{1}{\sigma}-a\right)} \right)-\ln\left(1-e^{-\frac{U}{\sigma}}\right)\right].\] Under the absolute error loss \(W(t)=|t|,\ t\in\Re\), the usual estimator of \(\theta_{(2)}\) is \(\delta_{c_{0}}=X_{(2)}-\sigma\ln(2)\). In this case, using Theorem 3.3, the estimator \(X_{(2)}-\sigma\ln(2)\) is dominated by the estimator \[\delta_{ST}(\mathbf{X})=X_{(2)}-\min\bigg{\{}\sigma\ln(2),U+\frac{\sigma}{2} \ln(2)\bigg{\}}=\max\bigg{\{}X_{(2)}-\sigma\ln(2),X_{(1)}-\frac{\sigma}{2}\ln( 2)\bigg{\}}.\] Also, using Theorem 3.6, the estimator \(\delta_{b_{0}}(\mathbf{X})=X_{(2)}+\sigma\ln\left(1-\frac{1}{\sqrt{2}}\right)\) is dominated by the estimator \(\delta_{BZ}(\mathbf{X})=X_{(2)}-C\), where \(C>0\) is the unique solution of the following equation \[\int_{0}^{C}\int_{0}^{\min\{U,\mathbf{s}\}}e^{-\frac{2\sigma}{\sigma}}e^{\frac {u}{\sigma}}dy\,ds=\frac{\sigma^{2}}{4}\left(1-e^{-\frac{U}{\sigma}}\right).\] Now, we will illustrate an application of Theorem 4.4 and Corollary 4.5. Consider the estimation of parameter \(\theta_{(2)}\) under the GPN criterion with the general loss function (\(W(\cdot)\) satisfy (C1), (C2) and (C3)) \[L(\boldsymbol{\theta},a)=W(a-\theta_{(2)}),\ \boldsymbol{\theta}\in\Re^{2},\ a \in\Re. \tag{5.5}\] Under the GPN criterion, the PN estimator of \(\theta_{(2)}\) is \(X_{(2)}-\sigma\ln(2)\). In this case, using Theorems 4.4, the estimator \[\delta_{\phi_{0}}(\mathbf{X})=\max\bigg{\{}X_{(2)}-\sigma\ln(2),X_{(1)}-\frac {\sigma}{2}\ln(2)\bigg{\}}\] is the Pitman nearer to \(\theta_{(2)}\) than the estimator \(X_{(2)}-\sigma\ln(2)\), under the GPN criterion. ## 6 Simulation study In Example 4.1, we considered the estimation of the larger mean \(\theta_{(2)}\) of two independent normal distributions with a known common variance (\(\sigma^{2}\)). We derived estimators that improved upon the usual estimator \(\delta_{c_{0}}(\boldsymbol{X})=X_{(2)}\) and the estimator \(\delta_{1}(\boldsymbol{X})=X_{(2)}-b_{0}\) under three different loss functions. 
To evaluate the risk performance of these estimators of \(\theta_{(2)}\) under these loss functions, we conducted Monte Carlo simulations to compare the risk performances of the usual estimator \(X_{(2)}\), the Stein-type estimator \(\delta_{ST}\), the estimator \(\delta_{1}(\boldsymbol{X})\), and the Brewster-Zidek type estimator \(\delta_{BZ}\). We computed the simulated risks based on \(50000\) simulations from relevant distributions, and the resulting risk function of the different estimators were plotted in Figures 1-3. The following observations are evident from Figures 1-3: * The risk of the Stein type estimator \(\delta_{ST}\) has smaller risk than the usual estimator \(\delta_{c_{0}}\). For smaller \(\sigma\), it can be seen that the risk values of both the estimators are almost equal when \(\theta_{(2)}-\theta_{(1)}\) takes values close to zero and approximately more than 3. For higher values of \(\sigma\), we observe that \(\delta_{ST}\) performs significantly better than \(\delta_{c_{0}}\). In this case, the region on improvement is better than the previous (small \(\sigma\)) case. 2. The risk performance of \(\delta_{BZ}\) is better then \(\delta_{1}\) for all \(\theta_{(2)}-\theta_{(1)}\). It is observed that for larger values of \(\sigma\), the improvement interval is bigger compared to the smaller values of \(\sigma\). 3. We also found that there was no clear winner among the various estimators, as the estimator \(\delta_{1}\) and the \(\delta_{BZ}\) performed better than the usual estimator and \(\delta_{ST}\) for small and moderate values of \(\theta_{(2)}-\theta_{(1)}\), whereas the estimator \(\delta_{ST}\) dominated the other two estimators for large values of \(\theta_{(2)}-\theta_{(1)}\). Figure 1: Risk plots of estimators of parameter \(\theta_{(2)}\) against values of \(\theta=\theta_{(2)}-\theta_{(1)}\): when \(W(t)=t^{2},\;t\in\Re\), and so, \(b_{0}=\frac{\sigma}{\sqrt{\pi}}\). Figure 2: Risk plots of estimators of parameter \(\theta_{(2)}\) against values of \(\theta=\theta_{(2)}-\theta_{(1)}\): when \(W(t)=e^{at}-at-1,\;t\in\Re,\;a\neq 0\), and so, \(b_{0}=\frac{1}{a}\left[\ln 2+\frac{a^{2}\sigma^{2}}{2}+\ln\left(\Phi\left(\frac{a \sigma}{\sqrt{2}}\right)\right)\right]\). Figure 3: Risk plots of estimators of parameter \(\theta_{(2)}\) against values of \(\theta=\theta_{(2)}-\theta_{(1)}\): when \(W(t)=|t|,\ t\in\Re\), and so, \(b_{0}\) is the solution of the equation \(\int_{-\infty}^{C}\Phi\left(\frac{s}{\sigma}\right)\phi\left(\frac{s}{\sigma} \right)ds=\frac{\sigma}{4}\). Real-life Data Analysis For real life data analysis, we have considered the "Jute fiber breaking strength data", discussed by Xia et al. (2009) and presented in Table 1. This data represents the breaking strength of jute fibre of two different gauge lengths. Jute is a versatile natural fiber widely employed in various products, including textiles, ropes, sacks, and geotextiles. The breaking strength of jute fibers plays a pivotal role in determining the quality and durability of these products. By estimating the maximum breaking strength with different gauge lengths, manufacturers can ensure that their products conform to the required standards and can withstand a range of stresses and loads. Therefore, the estimation of the maximum breaking strength of jute fibers using two different gauge lengths holds significant importance in the realm of material science and engineering for several compelling reasons. 
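Before presenting the analysis, the following minimal sketch shows how the Example 5.2 point estimates of \(\max\{\theta_{1},\theta_{2}\}\) under the squared error loss can be evaluated from a pair of observed values with a known scale. The observation values and the scale passed in the example call are hypothetical placeholders, not the jute-fibre measurements of Table 1.

```python
# Computational aside: Example 5.2 point estimates of max{theta_1, theta_2}
# under squared error loss, for known scale sigma.  The inputs in the example
# call are hypothetical placeholders, not the jute-fibre data of Table 1.
import numpy as np


def exponential_estimates(x1, x2, sigma):
    """Estimates from Example 5.2 (squared error loss); assumes x1 != x2."""
    x_max, x_min = max(x1, x2), min(x1, x2)
    u = x_max - x_min  # U = X_(2) - X_(1) > 0
    usual = x_max - sigma                            # delta_{c_0}
    stein = max(x_max - sigma, x_min - sigma / 2.0)  # delta_{ST}
    d_b0 = x_max - 1.5 * sigma                       # delta_{b_0}
    bz = x_max - (3.0 * sigma - (2.0 * u + 3.0 * sigma) * np.exp(-u / sigma)) / (
        2.0 * (1.0 - np.exp(-u / sigma))
    )                                                # delta_{BZ}
    return {"delta_c0": usual, "delta_ST": stein, "delta_b0": d_b0, "delta_BZ": bz}


if __name__ == "__main__":
    print(exponential_estimates(x1=40.0, x2=45.0, sigma=10.0))
```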
For data analysis, we initially examined whether these datasets follow two-parameter exponential distributions. We used the one-sample Kolmogorov-Smirnov test to see if the data with gauge lengths 10 mm and 15 mm may be following a two parameter exponential distributions and found p-values of 0.755 and 0.306, respectively, indicating that the data with gauge lengths 10 mm and 15 mm follow a two parameter exponential distributions with a common scale parameter value of 322. Let \(X_{1}\) and \(X_{2}\) be the two independent random variables representing the breaking strength of jute fibre corresponding to gauge length 10 mm and 15 mm, respectively. Therefore \(X_{1}\sim Exp(\theta_{1},\sigma)\) and \(X_{2}\sim Exp(\theta_{2},\sigma)\), where \(\theta_{1}\) and \(\theta_{2}\) are location parameters, and common known scale parameter \(\sigma=322/30=10.73\). Using our finding of this paper, we obtain estimates that are better than the natural estimates of parameter \(\max\{\theta_{1},\theta_{2}\}\). In Example 5.2, we have presented various estimators under the squared error loss, linear loss and absolute error loss functions, which are given in Table 2, Table 3 and Table 4, respectively. From theoretical results (in Example 5.2), we infer that the Stein type estimated value \(\delta_{ST}(\mathbf{x})\) is better than estimated value of \(\delta_{c_{0}}(\mathbf{x})\) and the Brewster-Zidek type estimated value \(\delta_{BZ}(\mathbf{x})\) is better than estimated value of \(\delta_{b_{0}}(\mathbf{x})\) for parameter \(\max\{\theta_{1},\theta_{2}\}\). \begin{table} \begin{tabular}{|c|c|c|} \hline No. & gauge length 10 mm & gauge length 15 mm \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: The breaking strength of jute fibre \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\delta_{c_{0}}({\bf x})\) & \(\delta_{ST}({\bf x})\) & \(\delta_{b_{0}}({\bf x})\) & \(\delta_{BZ}({\bf x})\) \\ \hline \hline 33.2 & 37.3 & 27.835 & 37.94 \\ \hline \end{tabular} \end{table} Table 2: Various estimated values of parameter \(\max\{\theta_{1},\theta_{2}\}\): under the squared error loss \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\delta_{c_{0}}({\bf x})\) & \(\delta_{ST}({\bf x})\) & \(\delta_{b_{0}}({\bf x})\) & \(\delta_{BZ}({\bf x})\) \\ \hline \hline 41.47 & 41.47 & 39.62 & 41.52 \\ \hline \end{tabular} \end{table} Table 3: Various estimated values of parameter \(\max\{\theta_{1},\theta_{2}\}\): under the Linear loss with \(a=-1\) ## 8 Conclusions In the present article, we have considered the estimation, from a decision-theoretic point of view, of the larger location parameter of two general location family of distributions under a general location invariant loss function. We have proposed a Stein-type estimator, which improves upon the usual estimator. Next, we have proposed an estimator, \(\delta_{b_{0}}\), which is similar to the usual estimator. Using the IERD approach of Kubokawa, a class of estimators has been derived that dominates \(\delta_{b_{0}}\). It is seen that the boundary estimator of this class is the Brewster-Zidek type estimator. As an application, the improved estimators are derived for particular loss functions. We have also considered the same estimation problem with respect to the generalized Pitman nearness criterion. We have proposed an estimator that is nearer than the usual estimator. As an application, explicit expressions of all the improved estimators are obtained for the normal and exponential models. 
Further, we have compared the risk performance of the proposed estimators using the Monte Carlo simulation. Finally, we present a real-life application that demonstrates the practical significance of the findings presented in this paper.
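To illustrate the distributional check used in the real-data analysis of Section 7, the following sketch runs the one-sample Kolmogorov-Smirnov test against a two-parameter exponential with SciPy. It is illustrative only: the observations of Table 1 are not reproduced in this extract, so synthetic stand-in data are generated, and the simple minimum/mean fitting of the location and scale is our assumption rather than the exact procedure used in the paper.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data: the actual observations from Table 1 are not
# reproduced here, so 30 values per gauge length are drawn from an arbitrary
# two-parameter exponential just to make the snippet runnable.
rng = np.random.default_rng(1)
gauge_10mm = 30.0 + rng.exponential(322.0, 30)
gauge_15mm = 40.0 + rng.exponential(322.0, 30)

def ks_two_param_exponential(data):
    """One-sample Kolmogorov-Smirnov test against a two-parameter exponential,
    with location and scale estimated from the sample (a simple choice; the
    paper's fitting procedure may differ)."""
    loc = data.min()
    scale = data.mean() - loc
    return stats.kstest(data, "expon", args=(loc, scale))

print(ks_two_param_exponential(gauge_10mm))
print(ks_two_param_exponential(gauge_15mm))
```

With the real Table 1 values in place of the synthetic arrays, the reported p-values can be compared against the 0.755 and 0.306 quoted in the analysis.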
2309.12031
Simple Approximation Algorithms for Minimizing the Total Weighted Completion Time of Precedence-Constrained Jobs
We consider the precedence-constrained scheduling problem to minimize the total weighted completion time. For a single machine several $2$-approximation algorithms are known, which are based on linear programming and network flows. We show that the same ratio is achieved by a simple weighted round-robin rule. Moreover, for preemptive scheduling on identical parallel machines, we give a strongly polynomial $3$-approximation, which computes processing rates by solving a sequence of parametric flow problems. This matches the best known constant performance guarantee, previously attained only by a weakly polynomial LP-based algorithm. Our algorithms are both also applicable in non-clairvoyant scheduling, where processing times are initially unknown. In this setting, our performance guarantees improve upon the best competitive ratio of $8$ known so far.
Sven Jäger, Philipp Warode
2023-09-21T12:52:05Z
http://arxiv.org/abs/2309.12031v1
# Simple Approximation Algorithms ###### Abstract We consider the precedence-constrained scheduling problem to minimize the total weighted completion time. For a single machine several 2-approximation algorithms are known, which are based on linear programming and network flows. We show that the same ratio is achieved by a simple weighted round-robin rule. Moreover, for preemptive scheduling on identical parallel machines, we give a strongly polynomial 3-approximation, which computes processing rates by solving a sequence of parametric flow problems. This matches the best known constant performance guarantee, previously attained only by a weakly polynomial LP-based algorithm. Our algorithms are both also applicable in non-clairvoyant scheduling, where processing times are initially unknown. In this setting, our performance guarantees improve upon the best competitive ratio of 8 known so far. ## 1 Introduction Scheduling jobs with precedence constraints so as to minimize the sum of weighted completion times is a widely studied problem in the field of approximation and online algorithms. We consider this problem on a single machine, as well as on identical parallel machines when preemption is allowed. These problems, denoted in the 3-field notation for scheduling problems [13] as \(1\mid\operatorname{prec}\mid\sum w_{j}C_{j}\) and \(\operatorname{P}\mid\operatorname{prec},\operatorname{pmtn}\mid\sum w_{j}C_{j}\), respectively, are both strongly NP-hard [21]. We present simpler and faster approximation algorithms for both problems, achieving the same performance guarantees as the best previously known algorithms. Pisaruk [26] observed in 1992 that the single-machine scheduling problem is a special case of the so-called submodular ordering problem, for which he gave a 2-approximation algorithm and thus the first 2-approximation for the scheduling problem. Since then, a whole set of further 2-approximation algorithms have been developed for this problem, and under a variant of the Unique Games Conjecture no better guarantee is possible [4]. The algorithm of Hall, Schulz, Shmoys, and Wein [14] is based on a linear programming relaxation with an exponential number of efficiently separable constraints [29] and hence relies on the ellipsoid method. All approximation algorithms developed in the sequel are purely combinatorial and based on network flows. Chudak and Hochbaum [8] consider a special LP relaxation with only two variables in each constraint, which can be solved to optimality by a minimum capacity cut computation [15], and then apply list scheduling to the resulting LP completion times. Chekuri and Motwani [7], Margot, Queyranne, and Wang [23], and Pisaruk [27] all determine a Sidney decomposition [31] and order the jobs arbitrarily within each block of the decomposition. The flow-based methods for computing the Sidney decomposition differ slightly in the three algorithms. While Chekuri and Motwani [7] and Pisaruk [27] solve a separate (parametric) maximum flow problem for each Sidney block, Margot, Queyranne, and Wang manage to compute the entire decomposition by a single application of the algorithm by Gallo, Grigoriadis, and Tarjan [10] for computing all breakpoints of a parametric maximum flow problem. This yields the to date fastest 2-approximation algorithm for the problem, running in time \(O(n^{3})\) for \(n\) jobs. As to simplicity, the algorithm of Pisaruk [27], relying only on Sidney's result and a standard maximum flow computation, may be considered the easiest. 
For identical parallel machines, Hall et al. [14] gave a 3-approximation algorithm, which is still the best known constant performance guarantee for this problem. Their algorithm is based on the solution of a linear programming relaxation and thus runs only in weakly polynomial time. We show that a very simple weighted round-robin rule, which always alternates between all available jobs, also achieves the performance guarantee 2 on a single machine. Here we call a job available if it is unfinished but all its predecessors have been completed. The algorithm passes the weight of each job waiting for an uncompleted predecessor to an arbitrary available predecessor. It then runs each available job at a rate proportional to its collected weight, thus constructing a schedule with infinitesimally small processing intervals. Note that this algorithm is non-clairvoyant, i.e., it needs no a priori knowledge of the processing times. When the processing times are known upfront, it can easily be transformed to a non-preemptive algorithm by scheduling the jobs in the order of completion times in the computed preemptive schedule. The entire algorithm runs in time \(O(n^{2})\). For identical parallel machines we present a 3-competitive non-clairvoyant algorithm, which simplifies an algorithm by Garg, Gupta, Kumar, and Singla [11] and is based on a parametric maximum flow computation. It generalizes our algorithm for a single machine in the sense that, when applied to a single machine, it will return the same result. However, to accommodate the greater generality, it is more complicated than the single-machine algorithm, resulting in a longer running time. Similarly as above, the resulting schedule can be transformed into a schedule with only a finite number of preemptions. This gives the first strongly polynomial 3-approximation algorithm for this problem. Our contribution is thus twofold: First, we obtain simpler and faster approximation algorithms for clairvoyant precedence-constrained scheduling, see Table 1. Second, we get improved competitive ratios for non-clairvoyant scheduling with precedence constraints. Our non-clairvoyant algorithms for a single machine and for identical parallel machines generalize respective algorithms by Lassota, Lindermayr, Megow, and Schloter [20] for out-forest precedence constraints, which were shown to be 4-competitive and 6-competitive, respectively. For general precedence constraints, the algorithm by Garg et al. [11] was improved by Jager [16] to give 8-competitiveness, which has previously been the best known bound for this problem. In the absence of precedence constraints, 2-competitiveness was proved by Kim and Chwa [18] and Beaumont, Bonichon, Eyraud-Dubois, and Marchal [5], and no non-clairvoyant algorithm can achieve a better competitive ratio [25]. Table 2 provides an overview of the old and new performance guarantees for non-clairvoyant scheduling. Not only the presented algorithms are simple, but also their analysis. The performance guarantee is derived in each case by means of an induction over the number of jobs, inspired by the analysis of Beaumont et al. [5]. For a single machine, it is fully self-contained, using only elementary calculations and not relying on any theoretical foundations like linear programming theory or network flows, so it could be taught in an introductory algorithms course. For identical parallel machines, only network flow theory is needed. 
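As a concrete illustration of the single-machine rate rule just described, the following sketch (our code, with illustrative names, not the authors' implementation) lets every available job collect its own weight plus the weight of every not-yet-assigned job reachable from it, and then normalizes; it is written for the start of the schedule, when no job is finished yet, and finished jobs would simply be dropped from `weights` and `succ`.

```python
def round_robin_rates(available, weights, succ):
    """Processing rates for the weighted round-robin rule sketched above.
    Each available job collects its own weight plus the weights of the
    not-yet-assigned jobs reachable from it; rates are proportional to the
    collected weights."""
    assigned = set()
    collected = {}
    for i in available:
        stack, total = [i], 0.0
        while stack:                       # DFS over yet-unassigned successors
            j = stack.pop()
            if j in assigned:
                continue
            assigned.add(j)
            total += weights[j]
            stack.extend(succ.get(j, []))
        collected[i] = total
    w_sum = sum(collected.values())
    return {i: collected[i] / w_sum for i in available}

# four jobs with weights 1, 2, 1, 1 and a single precedence constraint 1 -> 2
print(round_robin_rates([1, 3, 4], {1: 1, 2: 2, 3: 1, 4: 1}, {1: [2]}))
# {1: 0.6, 3: 0.2, 4: 0.2}: job 1 also carries the weight of its successor 2
```

On this small instance the initial rates are 3/5, 1/5, 1/5, which matches the first phase of the schedule in Example 1 later in the paper.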
This is in contrast to previous analyses for non-clairvoyant scheduling with precedence constraints [11, 20], which were based on dual fitting for an appropriate LP relaxation. Further related results.For some special classes of precedence graphs the single-machine problem can be solved in polynomial time. A classical result states that this is the case for series-parallel graphs [19, 21]. This was generalized by Ambuhl and Mastrolilli [2] to 2-dimensional precedence orders, by proving that the scheduling problem is a special case of the vertex cover problem. A survey on the complexity of scheduling problems with special precedence constraints is given by Prot and Bellinguez-Morineau [28]. A generalization of the result of Ambuhl and Mastrolilli yields approximation factors below 2 for several further precedence graph structures: Ambuhl, Mastrolilli, Mutsanas, and Svensson [3] showed that if the precedence order is \(k\)-dimensional (and the \(k\) realizing linear orders are given), then there is a \((2-2/k)\)-approximation algorithm. This implies a \(4/3\)-approximation algorithm for convex bipartite orders and for semiorders. Moreover, by generalizing to fractional dimensions, they proved for any \(\Delta\geq 1\) that if no job has more than \(\Delta\) predecessors or no job has more than \(\Delta\) successors, then there is a \(2/(1+1/\Delta)\)-approximation. Sitters [32] recently \begin{table} \begin{tabular}{c c c} \hline \hline & old & new \\ \hline 1 \(\mid\) pmtn \(\mid\sum w_{j}C_{j}\) & 2 [18] & \\ 1 \(\mid\) out-forest, pmtn \(\mid\sum w_{j}C_{j}\) & 4 [20] & 2 \\ 1 \(\mid\) prec, pmtn \(\mid\sum w_{j}C_{j}\) & 8 [16] & 2 \\ \hline P \(\mid\) pmtn \(\mid\sum w_{j}C_{j}\) & 2 [5] & \\ P \(\mid\) out-forest, pmtn \(\mid\sum w_{j}C_{j}\) & 6 [20] & 3 \\ P \(\mid\) prec, pmtn \(\mid\sum w_{j}C_{j}\) & 8 [16] & 3 \\ \hline \hline \end{tabular} \end{table} Table 2: Old and new competitiveness results for non-clairvoyant scheduling \begin{table} \begin{tabular}{c c c} \hline \hline & old & new \\ \hline 2-approximation for & \(O(n^{3})\)[23] & \(O(n^{2})\) \\ 1 \(\mid\) prec \(\mid\sum w_{j}C_{j}\) & (parametric flows) & (weighted round-robin) \\ \hline 3-approximation for & weakly polynomial [14] & \(O(n^{4})\) \\ P \(\mid\) prec, pmtn \(\mid\sum w_{j}C_{j}\) & (LP-based) & (parametric flows) \\ \hline \hline \end{tabular} \end{table} Table 1: Running times and main ingredients of old and new approximation algorithms for precedence-constrained scheduling gave a polynomial-time approximation scheme for interval orders. For integral processing times, preemptive scheduling with precedence constraints is closely related to non-preemptive scheduling of precedence-constrained jobs with unit processing times. On the one hand, given an instance with arbitrary processing times, every job can be replaced by a chain of \(p_{j}\) unit jobs such that the entire weight lies on the last job, see e.g. [6]. In the resulting unit processing time instance, preemptions are not useful. Hence, any approximate solution for non-preemptive scheduling of the new instance corresponds to a preemptive schedule for the original instance with the same performance guarantee. This reduction is, however, only pseudopolynomial. On the other hand, Hall et al.'s algorithm computes a schedule where preemptions occur only at integer times. Hence, when applied to jobs with unit processing times, it will not introduce any preemptions. 
Therefore, it is a \(3\)-approximation algorithm also for \(\mathrm{P}\mid\mathrm{prec},p_{j}=1\mid\sum w_{j}C_{j}\). For this problem a better \((1+\sqrt{2})\)-approximation was recently developed by Li [22], which implies a pseudopolynomial approximation with this guarantee for preemptive scheduling of general jobs. For non-preemptive scheduling of precedence-constrained jobs with general processing times on identical parallel machines the currently best known performance guarantee is \(2+2\ln 2+\varepsilon\approx 3.386+\varepsilon\) for any \(\varepsilon>0\), achieved by another algorithm of Li [22]. Makespan minimization is a special case of our problem because this objective function can be modeled by adding one dummy job that has to wait for every other job. For this case Graham's list scheduling algorithm has performance guarantee \(2\)[12], and the \((2-\varepsilon)\)-hardness under the Unique Games Conjecture variant still applies [35]. The algorithms presented in this paper only work when all jobs are released at the beginning and can only be blocked due to unfinished predecessors. In contrast, the \(3\)-approximation algorithm of Hall et al. [14] for identical parallel machines can still be applied when each job has an individual release date, whereas for a single machine no known approximation algorithm with performance guarantee exactly \(2\) can handle release dates. However, a \((2+\varepsilon)\)-approximation was provided by Sitters and Yang [33]. For non-clairvoyant scheduling with release dates the algorithm of Jager [16] admits the currently best known performance guarantee of \(8\). Another generalization of the problem studied in this paper concerns the objective function: Schulz and Verschae [30] gave a universal \(2\)-approximation algorithm for \(1\mid\mathrm{prec}\mid\sum w_{j}f(C_{j})\) for all concave functions \(f\). Moreover, they showed that for any given concave functions \(f_{j}\) for all jobs \(j\), the problem \(1\mid\mathrm{prec}\mid\sum f_{j}(C_{j})\) admits a \((2+\varepsilon)\)-approximation. In our non-clairvoyant algorithms, we always assume that the weights and the precedence graph are known precisely. When the precedence constraints form an out-forest, the knowledge of the entire graph is not necessary, but it would be enough to know only the total weight of jobs depending on each available job. Lassota et al. [20] study the case when these aggregated weights are also not known exactly but only a prediction on their value is at hand. Using time sharing, they derive an algorithm whose performance guarantee depends on the maximum distortion factor between the predicted weights and the real weights, but is capped at the width of the precedence order. Structure of the paper.Some basics and notation are set up in Section 2. In Sections 3 and 4 the non-clairvoyant algorithms for a single and for identical parallel machines, respectively, are presented. It is discussed in Section 5 how these algorithms can be converted to approximation algorithms for the problems \(1\mid\mathrm{prec}\mid\sum w_{j}C_{j}\) and \(\mathrm{P}\mid\mathrm{prec},\mathrm{pmtn}\mid\sum w_{j}C_{j}\). ## 2 Preliminaries and Notation Assume that we are given a set of jobs \(N=\{1,\ldots,n\}\) with processing times \(p_{j}\geq 0\), weights \(w_{j}\geq 0\), and precedence constraints \(A\subset N\times N\) such that \((N,A)\) is a directed acyclic graph. A job \(k\) is _available_ if it is not yet completed but all jobs \(k\) with \((k,j)\in A\) are. 
A schedule S assigns to each available job a processing rate \(R^{\mathrm{S}}_{j}(t)\in[0,1]\) at any time \(t\geq 0\) so that the sum of all processing rates never exceeds \(1\). The processing time of a job \(j\) elapsed before time \(t\) is \(Y^{\mathrm{S}}_{j}(t)=\int_{0}^{t}R^{\mathrm{S}}_{j}(s)\,\mathrm{d}s\). A job \(j\) is completed at the time \(C^{\mathrm{S}}_{j}=\min\{t\geq 0\mid Y_{j}(t)\geq p_{j}\}\), at which its elapsed time reaches its required processing time. The goal is to minimize the total weighted completion time \(\sum_{j=1}^{n}w_{j}C^{\mathrm{S}}_{j}\) of all jobs. We omit the schedule S in these notations if it is clear from context or when the schedule is in the process of being constructed. The performance of an algorithm is assessed by comparing the achieved objective value to the objective value of an optimal schedule with full information in the worst case. For \(j\in N\) let \(S(j)\) be the set containing \(j\) and all its successors in the precedence graph, i.e., all nodes reachable from node \(j\). Moreover, for \(t\geq 0\) let \(U_{t}\coloneqq\{j\in N\mid C_{j}>t\}\) be the set of unfinished jobs, and let \(F_{t}\subseteq U_{t}\) be the set of available jobs at time \(t\). For a subset \(J\subseteq N\) of jobs we will write \(w(J)\coloneqq\sum_{j\in J}w_{j}\). In non-clairvoyant scheduling the weights and precedence constraints of all jobs are known at the beginning, but the processing times are unknown and are only revealed at the moment when the jobs are completed. ## 3 Non-clairvoyant Scheduling on a Single Machine In non-clairvoyant scheduling one specifies a tentative schedule, which can be updated whenever a job is completed. The tentative schedules occurring in our non-clairvoyant algorithm always process every job at a constant rate. These rates are computed by Algorithm 1, which iterates over all available jobs and assigns to every job all of its yet unassigned successors. Then, each available job is processed at a rate proportional to the total weight of its assigned jobs. _Example 1_.: Consider four jobs with the following weights and processing times \begin{tabular}{c|c c c c} \(j\) & 1 & 2 & 3 & 4 \\ \hline \(w_{j}\) & 1 & 2 & 1 & 1 \\ \(p_{j}\) & 6 & 4 & 3 & 5 \\ \end{tabular} and assume that there is a single precedence constraint from job 1 to job 2. The schedule for this instance resulting from Algorithm 1 is depicted in Figure 1, and the resulting completion times are \(C_{1}=10\), \(C_{2}=17\), \(C_{3}=14\), and \(C_{4}=18\). **Theorem 3.1**.: _Using Algorithm 1 at time \(0\) and at the first \(n-1\) job completion times to determine subsequent processing rates is a \(2\)-competitive strategy for total weighted completion time minimization._ Proof.: We prove the claim by induction on the number \(n\) of jobs. For a single job, the algorithm obviously computes the optimal schedule. Now consider an instance \(I\) with \(n>1\) jobs, and assume w.l.o.g. that \(w(N)=1\) and that job 1 is the first job completed in the schedule computed by the algorithm. We denote the the schedule resulting from Algorithm 1 applied to \(I\) by \(\operatorname{ALG}(I)\), and we write \(Y_{j}(t)\coloneqq Y_{j}^{\operatorname{ALG}(I)}(t)\) for \(t\geq 0\). Within \([0,C_{1}^{\operatorname{ALG}(I)})\) every job \(i\in F_{0}\) is processed at a constant rate equal to the total weight of jobs in the tree \(T(i)\). 
Therefore, \(C_{1}^{\operatorname{ALG}(I)}=p_{1}/w(T(1))\), and \(Y_{i}(C_{1}^{\operatorname{ALG}(I)})=w(T(i))C_{1}^{\operatorname{ALG}(I)}\) for all \(i\in F_{0}\). We define the instance \(I^{\prime}\) that consists of the jobs \(j=2,\ldots,n\) with processing times \(p^{\prime}_{j}\coloneqq p_{j}-Y_{j}(C_{1}^{\operatorname{ALG}(I)})\). Algorithm 1 applied to \(I\) behaves after time \(C_{1}^{\operatorname{ALG}(I)}\) exactly as if it were applied to \(I^{\prime}\). Hence, \(C_{j}^{\operatorname{ALG}(I)}=C_{1}^{\operatorname{ALG}(I)}+C_{j}^{ \operatorname{ALG}(I^{\prime})}\) for all \(j\in\{2,\ldots,n\}\). Therefore, \[\sum_{j=1}^{n}w_{j}C_{j}^{\operatorname{ALG}(I)}=\sum_{j=1}^{n}w_{j}C_{1}^{ \operatorname{ALG}(I)}+\sum_{j=2}^{n}w_{j}C_{j}^{\operatorname{ALG}(I^{ \prime})}=C_{1}^{\operatorname{ALG}(I)}+\sum_{j=2}^{n}w_{j}C_{j}^{ \operatorname{ALG}(I^{\prime})}.\] Now consider a fixed non-preemptive optimal schedule \(\operatorname{OPT}(I)\) for \(I\). By shortening jobs in \(\operatorname{OPT}(I)\), removing job 1, and contracting any occurring idle times, we can transform this to a feasible schedule \(\operatorname{S}^{\prime}\) for \(I^{\prime}\). Since we contract idle times, the starting time (and, thus, also completion time) of each job \(j\) is lowered by the total time by which all preceding jobs have been shortened. By construction of \(I^{\prime}\), only the processing time of jobs \(k\in F_{0}\) is shortened by exactly \(Y_{k}(C_{1}^{\operatorname{ALG}(I)})\). Figure 1: Schedule for the instance in Example 1 Hence, \(C_{j}^{\mathrm{OPT}(I)}-C_{j}^{\mathrm{S}^{\prime}}=\sum_{k\in F_{0}:C_{k}^{ \mathrm{OPT}(I)}\leq C_{j}^{\mathrm{OPT}(I)}}Y_{k}(C_{1}^{\mathrm{ALG}(I)})\) for \(j=2,\ldots,n\). Overall, we obtain \[\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}(I)} =w_{1}C_{1}^{\mathrm{OPT}(I)}+\sum_{j=2}^{n}w_{j}(C_{j}^{S^{\prime }}+C_{j}^{\mathrm{OPT}(I)}-C_{j}^{S^{\prime}})\] \[\geq\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+\sum_{j=1}^{n} w_{j}\sum_{\begin{subarray}{c}k\in F_{0}\\ C_{k}^{\mathrm{OPT}(I)}\leq C_{j}^{\mathrm{OPT}(I)}\end{subarray}}Y_{k}(C_{1 }^{\mathrm{ALG}(I)})\] \[=\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+\sum_{j=1}^{n}w_{ j}\sum_{\begin{subarray}{c}k\in F_{0}\\ C_{k}^{\mathrm{OPT}(I)}\leq C_{j}^{\mathrm{OPT}(I)}\end{subarray}}w\big{(}T( k)\big{)}\,C_{1}^{\mathrm{ALG}(I)}\] \[=\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+C_{1}^{\mathrm{ ALG}(I)}\sum_{i\in F_{0}}\sum_{j\in T(i)}w_{j}\sum_{\begin{subarray}{c}k\in F_{0} \\ C_{k}^{\mathrm{OPT}(I)}\leq C_{j}^{\mathrm{OPT}(I)}\end{subarray}}w\big{(}T( k)\big{)}\] \[\geq\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+C_{1}^{ \mathrm{ALG}(I)}\sum_{i\in F_{0}}w\big{(}T(i)\big{)}\sum_{\begin{subarray}{c}k \in F_{0}\\ C_{k}^{\mathrm{OPT}(I)}\leq C_{i}^{\mathrm{OPT}(I)}\end{subarray}}w\big{(}T( k)\big{)}\] \[\geq\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+C_{1}^{ \mathrm{ALG}(I)}\cdot\frac{1}{2}\bigg{(}\sum_{j=1}^{n}w_{j}\bigg{)}^{2}\] \[=\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+C_{1}^{\mathrm{ ALG}(I)}\cdot\frac{1}{2}\geq\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{OPT}(I^{\prime})}+ \frac{C_{1}^{\mathrm{ALG}(I)}}{2},\] where the second inequality holds because \(C_{j}^{\mathrm{OPT}(I)}\geq C_{i}^{\mathrm{OPT}(I)}\) for all \(i\in F_{0}\) and \(j\in T(i)\). 
Thus, we have by induction hypothesis: \[\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{ALG}(I)} =C_{1}^{\mathrm{ALG}(I)}+\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{ALG}(I ^{\prime})}\] \[\leq 2\bigg{(}\frac{C_{1}^{\mathrm{ALG}(I)}}{2}+\sum_{j=2}^{n}w_{j} C_{j}^{\mathrm{OPT}(I^{\prime})}\bigg{)}\leq 2\sum_{j=1}^{n}w_{j}C_{j}^{ \mathrm{OPT}(I)}.\qed\] ## 4 Non-clairvoyant Scheduling on Identical Parallel Machines The difficulty on multiple identical machines is that the total processing power of \(m\) cannot be arbitrarily divided among jobs because no job can be processed by more than one machine at the same time. If we imagine that unfinished unavailable jobs pass their weight to their available predecessors, which they can use to "buy" processor rate, then on a single machine, they can pass their weight to an arbitrary predecessor because the donated weight will in any case increase the processing rate of this predecessor. This is implemented in Algorithm 1 by simply passing the weight to the available predecessor considered first. Another simple case is when the precedence graph is an out-forest. In this case, every unfinished unavailable job passes its weight to its unique available predecessor, and the rate assignment to the available jobs follows the so-called WDEQ algorithm applied to the collected weights, i.e., as long as there is a job that should receive a rate larger than 1, this job receives rate 1 and the algorithm recurses on the remaining jobs and machines. This amounts exactly to Algorithm 5 from Lassota et al. [20]. In the case that we have multiple machines and arbitrary precedence constraints, the situation is more complicated. For a given unavailable job \(j\), some predecessors may have already collected enough weight to receive an entire processor, while others would still benefit from receiving more weight. So \(j\) better passes its weight to the latter in order to reduce its waiting time. It may also be beneficial to split the weight among multiple predecessors. Inspired by an algorithm by Garg et al. [11] for the more general setting with online release dates, we model this as a parametric maximum flow problem. For this we adopt a slightly different view than above. We assume that all unfinished jobs can buy "virtual" processing rate. If a job is unavailable, the bought virtual rate has to be sent to it via an available predecessor, and the total rate sent through each available job is at most 1. This corresponds to a resource allocation problem in a capacitated network. If \(\tilde{R}_{j}\) is the virtual rate sent to node \(j\), the goal is to balance the ratios \(\tilde{R}_{j}/w_{j}\). Since at the end of the day, we are not interested in the exact virtual rates of the unavailable jobs, but only in the total amount sent through each available job, it suffices to minimize the maximum ratio \(\tilde{R}_{j}/w_{j}\) over all unfinished jobs \(j\). Exactly this problem was studied by Gallo, Grigoriadis, and Tarjan [10, Section 4.1], who show that this Minimax flow sharing problem can be solved by computing a parametric maximum flow in an extended network. In the following we describe a variant of their construction for our specific case. Let \(t\geq 0\). 
We define a directed graph \(D_{t}=(\mathcal{V}_{t},\mathcal{A}_{t})\) as follows: The nodes are \(\mathcal{V}_{t}\coloneqq U_{t}\cup\{\mathrm{A},\mathrm{B},\mathrm{Z}\}\), and the set of arcs is \[\mathcal{A}_{t}\coloneqq\big{\{}(j,k)\in A\ \big{|}\ j,k\in U_{t}\big{\}} \cup\big{\{}(\mathrm{A},\mathrm{B})\big{\}}\cup\big{\{}(\mathrm{B},j)\ \big{|}\ j\in F_{t}\big{\}}\cup\big{\{}(j,\mathrm{Z})\ \big{|}\ j\in U_{t}\big{\}}.\] We define arc capacities \(u^{t}_{a}(\pi)\), \(a\in\mathcal{A}_{t}\), depending on a parameter \(\pi>0\), as follows: We set \(u^{t}_{(\mathrm{A},\mathrm{B})}(\pi)\coloneqq m\), \(u^{t}_{(\mathrm{B},j)}(\pi)\coloneqq 1\) for \(j\in F_{t}\), \(u^{t}_{(j,k)}(\pi)\coloneqq\infty\) for \(j,k\in U_{t}\) with \((j,k)\in A\), and \(u^{t}_{(j,\mathrm{Z})}(\pi)\coloneqq w_{j}/\pi\) for \(j\in U_{t}\). This notation allows us to formulate the rate distribution in Algorithm 2. ``` if\(|F_{t}|\leq m\), then set \(R_{j}(t)\gets 1\) for all \(j\in F_{t}\); else compute \(\pi_{t}\leftarrow\max\big{\{}\pi>0\ \big{|}\ (\{\mathrm{A}\},\mathcal{V}_{t} \setminus\{\mathrm{A}\})\) is a minimum-capacity A-Z-cut w.r.t. \(u^{t}(\pi)\big{\}}\), and let \(x^{t}\) be a maximum A-Z-flow for arc capacities \(u^{t}\coloneqq u^{t}(\pi_{t})\); set \(R_{j}(t)\gets x^{t}_{(\mathrm{B},j)}\) for all \(j\in F_{t}\). ``` **Algorithm 2**Processing rates at time \(t\) on identical parallel machines _Example 2_.: Assume that there are \(m=3\) machines and the following unfinished jobs at time \(t=0\): \[\begin{array}{c|cccccc}j&1&2&3&4&5&6\\ \hline p_{j}&9&12&12&9&3\\ w_{j}&1&1&1&6&5&1\end{array}\] Assume further that there are the precedence constraints \(\mathcal{A}_{t}=\{(1,5),(2,5),(2,6),(3,6),(4,6)\}\). Then the available jobs are \(F_{t}=\{1,2,3,4\}\), so that \(|F_{t}|>m\). Algorithm 2 considers the directed graph shown in Figure 2, where the available jobs are colored orange, and the unavailable jobs are colored purple. The black labels next to the nodes indicate the job weights. Using a parametric flow computation, the algorithm computes \(\pi_{t}=\frac{9}{2}\), resulting in the arc capacities given in Figure 2 in red, as well as the maximum flow \(x^{t}\) indicated in blue. The rates of the available jobs are set to the incoming flow values and are represented in orange in the figure. The two minimum-capacity cuts are shown in green. Note that some of the flow values and rates are not unique. For example, it would also be possible for more flow from node B to node 5 to take the route via node 2 instead of 1, resulting in a shift of processing rate from job 1 to job 2. When jobs are scheduled with the rates according to Figure 2, job 1 is completed first at time \(C_{1}=9\). In all subsequent iterations we always have \(|F_{t}|\leq m\). Therefore, after the completion of job 1, all jobs are processed with rate 1 leading to the schedule depicted in Figure 3 with the completion times \(C_{2}=C_{4}=12\), \(C_{3}=18\) and \(C_{5}=C_{6}=21\). The parametric minimum cut computation in Algorithm 2 can be carried out with the strongly polynomial algorithm by Gallo, Grigoriadis, and Tarjan [10]. This algorithm can be applied to instances with capacities that depend linearly on a parameter \(\lambda\) and computes the minimum cut Figure 3: Schedule for the instance in Example 2. Figure 2: Network \(D_{t}\) used by Algorithm 2 for the instance from Example 2 with capacities, flows, minimum capacity cuts, and processing rates resulting from the parametric maximum flow computation. capacity function. 
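The rate computation of Algorithm 2 can be prototyped with an off-the-shelf maximum-flow routine. The sketch below is our illustration, not the paper's implementation: it locates \(\pi_t\) by bisection on the maximum-flow value instead of using the parametric algorithm of Gallo, Grigoriadis, and Tarjan, relies on networkx for the flow computation, and assumes positive job weights; the function and variable names are ours.

```python
import networkx as nx

def parallel_rates(m, unfinished, available, prec, weights, iters=60):
    """Sketch of the rate computation for m identical machines described above.
    pi_t is located by bisection on the maximum-flow value; this assumes
    every unfinished job has positive weight."""
    if len(available) <= m:
        return None, {j: 1.0 for j in available}

    def max_flow(pi):
        G = nx.DiGraph()
        G.add_edge("A", "B", capacity=float(m))
        for i in available:
            G.add_edge("B", i, capacity=1.0)
        for (j, k) in prec:
            if j in unfinished and k in unfinished:
                G.add_edge(j, k)                    # no capacity attribute = infinite
        for j in unfinished:
            G.add_edge(j, "Z", capacity=weights[j] / pi)
        return nx.maximum_flow(G, "A", "Z")

    lo, hi = 1e-9, sum(weights[j] for j in unfinished) / m
    for _ in range(iters):                          # largest pi with flow value m
        mid = 0.5 * (lo + hi)
        value, _ = max_flow(mid)
        if value >= m - 1e-9:
            lo = mid
        else:
            hi = mid
    pi_t, (_, flow) = lo, max_flow(lo)
    return pi_t, {i: flow["B"][i] for i in available}

# instance of Example 2 above: 3 machines, 6 jobs, 5 precedence constraints
weights = {1: 1, 2: 1, 3: 1, 4: 6, 5: 5, 6: 1}
prec = [(1, 5), (2, 5), (2, 6), (3, 6), (4, 6)]
pi_t, rates = parallel_rates(3, set(weights), [1, 2, 3, 4], prec, weights)
print(pi_t)      # approximately 9/2, as in Example 2
print(rates)     # one valid rate assignment (the rates are not unique)
```

For the instance of Example 2 above this recovers \(\pi_t\approx 9/2\); the individual rates it reports are only one of several valid assignments, since, as noted in the example, they are not unique.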
Setting \(\lambda\coloneqq 1/\pi\), the Minimax solution1 can be obtained as the largest breakpoint \(\lambda_{\max}\) of this function. Footnote 1: Note that in [10] the algorithms solving the Minimax and the similar Maximin sharing problems are interchanged. The resulting \(\pi_{t}=1/\lambda_{\max}\) can be interpreted as the price of processing rate in the following market: A total amount of \(m\) units of a single divisible good are sold to \(|U_{t}|\) buyers \(j\in U_{t}\) with budgets \(w_{j}\). The paths from B to the nodes \(j\in U_{t}\) represent different possible routes on which the good can be delivered from the supplier to the respective buyers, where the amount of good sent on any arc \((\mathrm{B},i)\), \(i\in F_{t}\), must not exceed \(1\). The price \(\pi_{t}\) is the largest possible price for the good so that all units of the good will be sold, and any corresponding maximum flow corresponds to a possible allocation to the buyers. We can alternatively interpret the price of the good as the price for transporting one unit along the arc \((\mathrm{A},\mathrm{B})\). Jain and Vazirani [17] considered the problem where prices for all arcs of a digraph have to be determined, which they solve by similar flow-based methods. **Theorem 4.1**.: _Using Algorithm 2 at time \(0\) and at the first \(n-1\) job completion times to determine subsequent processing rates is a \(3\)-competitive strategy for total weighted completion time minimization._ In order to prove this theorem consider an arbitrary instance \(I\) and denote by \(\mathrm{ALG}(I)\) the schedule output by Algorithm 2 for \(I\). We call a job \(j\in U_{t}\)_active_ at time \(t\) if one of its predecessors or \(j\) itself is processed at a rate smaller than \(1\), otherwise it is _inactive_. The set of active jobs at time \(t\) is denoted by \(A_{t}\). Let \(\mu_{j}^{I}\) be the total active time of job \(j\) in \(\mathrm{ALG}(I)\), and let \(\lambda_{j}^{I}\) be the inactive time before its completion, i.e., \(C_{j}^{\mathrm{ALG}(I)}=\mu_{j}^{I}+\lambda_{j}^{I}\). Clearly, \(\lambda_{j}^{I}\) is bounded by the maximum total processing time of a precedence chain ending in \(j\), which is a lower bound on \(C_{j}^{\mathrm{OPT}(I)}\). Let \(I_{1}\) be the instance with the same job set but with a single machine that is \(m\) times faster. We interpret this as still admitting a total processing rate of \(m\), which, however, can now be divided arbitrarily among jobs. Then \[\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}(I_{1})}\leq\sum_{j=1}^{n}w_{j}C_{j}^{ \mathrm{OPT}(I)} \tag{1}\] because the we relaxed the restriction that no job is processed at a rate larger than \(1\). We now bound the total weighted active time. **Lemma 4.2**.: \(\sum_{j=1}^{n}w_{j}\mu_{j}^{I}\leq 2\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}(I_{ 1})}\)_._ Proof.: We prove the lemma by induction on the number of jobs. For a single job, \(\mu_{1}^{I}=0\), and the claim is clear. So consider an instance \(I\) with \(n>1\) jobs, and let w.l.o.g. job \(1\) be completed first in \(\mathrm{ALG}(I)\). As in Section 3, we write \(Y_{j}(t)\coloneqq Y_{j}^{\mathrm{ALG}(I)}(t)\) and \(R_{j}(t)\coloneqq R_{j}^{\mathrm{ALG}(I)}(t)\) for \(t\geq 0\). We also consider the instances \(I^{\prime}\) and \(I_{1}^{\prime}\) with jobs \(2,\ldots,n\) and processing times \(p_{j}^{\prime}\coloneqq p_{j}-Y_{j}(C_{1}^{\mathrm{ALG}(I)})\) on \(m\) parallel machines and on a single fast machine, respectively. 
Then for every \(j\in\{2,\ldots,n\}\) we have \[\mu_{j}^{I}=\begin{cases}C_{1}^{\mathrm{ALG}(I)}+\mu_{j}^{I^{\prime}}&\text{ if }j\in A_{0};\\ \mu_{j}^{I^{\prime}}&\text{ else.}\end{cases}\] Therefore, \[\sum_{j=1}^{n}w_{j}\mu_{j}^{I}=\sum_{j\in A_{0}}w_{j}C_{1}^{\mathrm{ALG}(I)}+\sum_ {j=2}^{n}w_{j}\mu_{j}^{I^{\prime}}\leq w(A_{0})C_{1}^{\mathrm{ALG}(I)}+2\sum_{j =2}^{n}w_{j}C_{j}^{\mathrm{OPT}(I_{1}^{\prime})},\] where the last inequality holds by induction. It remains to show that the right-hand side is bounded by \(2\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}(I_{1})}\). If \(|F_{0}|\leq m\), then \(A_{0}=\emptyset\) and the claim is satisfied because increasing the processing times and adding a job cannot improve the optimal objective value. So assume from now on that \(|F_{0}|>m\). Then the algorithm computes the value \(\pi_{0}\) as well as flows and capacities \(x^{0}\leq u^{0}\). Since (\(\{\mathrm{A}\}\), \(\mathcal{V}_{t}\setminus\{\mathrm{A}\}\)) is a minimum-capacity cut, it is fully saturated by the maximum flow \(x^{0}\), meaning that \(x^{0}\) has flow value \(m\). This implies that \[\sum_{i\in F_{0}}R_{i}(0)=\sum_{i\in F_{0}}x_{(\mathrm{B},i)}^{0}=m, \tag{2}\] i.e., the algorithm utilizes the total available processor capacity. Apart from the cut \((\{\mathrm{A}\},\mathcal{V}_{t}\setminus\{\mathrm{A}\})\), there is another minimum-capacity \(\mathrm{A}\)-\(\mathrm{Z}\)-cut \((\mathcal{S},\mathcal{V}_{t}\setminus\mathcal{S})\), which is crossed only by arcs from \(\{\mathrm{B}\}\times F_{0}\) or \(U_{0}\times\{\mathrm{Z}\}\) because \(\pi_{0}\) is a breakpoint of the minimum cut capacity function. For every \(j\in N\) we have \[x_{(j,\mathrm{Z})}^{0}\leq u_{(j,\mathrm{Z})}^{0}=\frac{w_{j}}{\pi_{0}}. \tag{3}\] This inequality is tight for all active jobs \(j\in A_{0}\) because either \(j\) is available and processed with \(R_{j}(0)<1\) or there is an available predecessor \(i\in F_{0}\) with \(R_{i}(0)<1\). Since \(x_{(\mathrm{B},i)}^{0}=R_{i}(0)<1=u_{(\mathrm{B},i)}^{0}\), the arc \((\mathrm{B},i)\) does not cross the cut \((\mathcal{S},\mathcal{V}_{t}\setminus\mathcal{S})\). Therefore, \(j\in\mathcal{S}\), whence \((j,\mathrm{Z})\) crosses the cut, implying that \(x_{(j,\mathrm{Z})}^{0}=u_{(j,\mathrm{Z})}^{0}\). Let \(\tilde{w}_{i}\coloneqq\pi_{0}R_{i}(0)\) for all \(i\in F_{0}\). Then \[Y_{k}(C_{1}^{\mathrm{ALG}(I)})=R_{k}(0)\,C_{1}^{\mathrm{ALG}(I)}=\frac{\tilde {w}_{k}}{\pi_{0}}\,C_{1}^{\mathrm{ALG}(I)} \tag{4}\] for all \(k\in F_{0}\). Moreover, \[w(A_{0})=\sum_{j\in A_{0}}\pi_{0}\,x_{(j,\mathrm{Z})}^{0}\leq\sum_{i\in F_{0} }\pi_{0}\,x_{(\mathrm{B},i)}^{0}=\sum_{i\in F_{0}}\pi_{0}\,R_{i}(0)=\tilde{w} (F_{0}) \tag{5}\] because all flow to some node \(j\in A_{0}\) has to pass an arc from \(\{\mathrm{B}\}\times F_{0}\). Now consider a fixed optimal schedule \(\mathrm{OPT}(I_{1})\) for \(I_{1}\). By shortening the jobs, removing job 1, and contracting occurring idle times, we obtain a feasible schedule \(\mathrm{S}^{\prime}\) for \(I_{1}^{\prime}\). 
Then \[\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}(I_{1})} =w_{1}C_{1}^{\mathrm{OPT}(I_{1})}+\sum_{j=2}^{n}w_{j}(C_{j}^{ \mathrm{S}^{\prime}}+C_{j}^{\mathrm{OPT}(I_{1})}-C_{j}^{\mathrm{S}^{\prime}})\] \[\geq\sum_{j=2}^{n}w_{j}C_{j}^{\mathrm{S}^{\prime}}+\sum_{j=1}^{n} w_{j}\sum_{\begin{subarray}{c}k\in F_{0}\\ C_{k}^{\mathrm{OPT}(I_{1})}\leq C_{j}^{\mathrm{OPT}(I_{1})}\end{subarray}}\frac{Y _{k}(C_{1}^{\mathrm{ALG}(I)})}{m}\] \[\stackrel{{(\ref{eq:C_1})}}{{\geq}}\sum_{j=2}^{n}w_{j }C_{j}^{\mathrm{S}^{\prime}}+\pi_{0}\sum_{j=1}^{n}x_{(j,\mathrm{Z})}^{0}\sum_{ \begin{subarray}{c}k\in F_{0}\\ C_{k}^{\mathrm{OPT}(I_{1})}\leq C_{j}^{\mathrm{OPT}(I_{1})}\end{subarray}}\frac{Y _{k}(C_{1}^{\mathrm{ALG}(I)})}{m}. \tag{6}\] The flow \(x^{0}\) can be decomposed into path flows \(x^{0}_{P}\) for A-Z-paths \(P\) in \(D_{0}\) such that \(\sum_{P\ni a}x^{0}_{P}=x^{0}_{a}\) for all \(a\in\mathcal{A}_{0}\). Using the path decomposition, we can express the flow on the arcs \((\mathrm{B},i)\), \(i\in F_{0}\), and \((j,\mathrm{Z})\), \(j\in U_{0}\), as \[x^{0}_{(\mathrm{B},i)}=\sum_{P\ni(\mathrm{B},i)}x^{0}_{P}=\sum_{j=1}^{n}\sum_{ P\ni(\mathrm{B},i),(j,\mathrm{Z})}x^{0}_{P}\qquad\text{and}\qquad x^{0}_{(j, \mathrm{Z})}=\sum_{P\ni(j,\mathrm{Z})}x^{0}_{P}=\sum_{i\in F_{0}}\sum_{P\ni( \mathrm{B},i),(j,\mathrm{Z})}x^{0}_{P}.\] We use this in order to bound the second sum in (6) and obtain \[\sum_{j=1}^{n}w_{j}C^{\mathrm{OPT}(I_{1})}_{j} \geq\sum_{j=2}^{n}w_{j}C^{\mathrm{S}^{\prime}}_{j}+\pi_{0}\sum_{ j=1}^{n}\sum_{i\in F_{0}}\sum_{P\ni(\mathrm{B},i),(j,\mathrm{Z})}x^{0}_{P} \sum_{\begin{subarray}{c}k\in F_{0}\\ C^{\mathrm{OPT}(I_{1})}_{k}\leq C^{\mathrm{OPT}(I_{1})}_{j}\end{subarray}} \frac{Y_{k}(C^{\mathrm{ALG}(I)}_{1})}{m}\] \[\geq\sum_{j=2}^{n}w_{j}C^{\mathrm{S}^{\prime}}_{j}+\pi_{0}\sum_{ i\in F_{0}}\sum_{j=1}^{n}\sum_{P\ni(\mathrm{B},i),(j,\mathrm{Z})}x^{0}_{P}\sum_{ \begin{subarray}{c}k\in F_{0}\\ C^{\mathrm{OPT}(I_{1})}_{k}\leq C^{\mathrm{OPT}(I_{1})}_{i}\end{subarray}} \frac{Y_{k}(C^{\mathrm{ALG}(I)}_{1})}{m}.\] In the inequality we used that every path \(P\) in \(D_{0}\) containing \(i\in F_{0}\) and \(j\in N\) corresponds to a precedence chain from job \(i\) to job \(j\). Therefore, if such a path exists, \(C^{\mathrm{OPT}(I_{1})}_{i}\leq C^{\mathrm{OPT}(I_{1})}_{j}\), and thus, the set of jobs \(k\in F_{0}\) with \(C^{\mathrm{OPT}(I_{1})}_{k}\leq C^{\mathrm{OPT}(I_{1})}_{i}\) is included in the set of jobs \(k\in F_{0}\) completed before time \(C^{\mathrm{OPT}(I_{1})}_{j}\). Finally, we compute \[\sum_{j=1}^{n}w_{j}C^{\mathrm{OPT}(I_{1})}_{j} \geq\sum_{j=2}^{n}w_{j}C^{\mathrm{S}^{\prime}}_{j}+\pi_{0}\sum_{ i\in F_{0}}x^{0}_{(\mathrm{B},i)}\sum_{\begin{subarray}{c}k\in F_{0}\\ C^{\mathrm{OPT}(I_{1})}_{k}\leq C^{\mathrm{OPT}(I_{1})}_{i}\end{subarray}}\frac {Y_{k}(C^{\mathrm{ALG}(I)}_{1})}{m}\] \[\stackrel{{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq: ## 5 Clairvoyant Approximation Algorithms When processing times are known in advance, the non-clairvoyant algorithms described above can be simulated in order to compute a _virtual_ preemptive schedule. On a single machine this virtual schedule can be transformed to a schedule without preemptions, while on identical parallel machines the number of preemptions can be reduced to \(O(n^{2})\). Both transformations do not increase the completion time of any job. Non-preemptive scheduling on a single machine.On a single machine, after computing the virtual preemptive schedule, we can simply perform list scheduling in the order of the virtual completion times. 
Since the preemptive schedule is not actually executed, we also have to compute its completion times (instead of observing them in the non-clairvoyant setting). This is specified in Algorithm 3.

```
Initialize \(t\gets 0\), \(U\gets N\), \(W\leftarrow\sum_{j\in N}w_{j}\), and \(Y_{j}\gets 0\) for all \(j\in N\).
Perform depth-first search and store total weights \(W(j)\) of subtrees rooted at each node \(j\in N\).  \(\triangleright\) \(O(n^{2})\)
while \(U\neq\emptyset\) do  \(\triangleright\) \(n\) iterations
  let \(F\) be the jobs from \(U\) without predecessor in \(U\);
  for all \(i\in F\) do  \(\triangleright\) \(O(n)\) iterations
    let \(R_{i}\leftarrow\frac{W(i)}{W}\);
    let \(\tau_{i}\leftarrow\frac{p_{i}-Y_{i}}{R_{i}}\);
  let \(j\leftarrow\arg\min_{i\in F}\tau_{i}\);
  set \(Y_{i}\gets Y_{i}+R_{i}\cdot\tau_{j}\) for all \(i\in F\);
  update \(t\gets t+\tau_{j}\), \(U\gets U\setminus\{j\}\), and \(W\gets W-w_{j}\);
  set \(C^{\prime}_{j}\gets t\).
Perform list scheduling in order \(C^{\prime}_{j}\).  \(\triangleright\) \(O(n)\)
```
**Algorithm 3** Clairvoyant scheduling with precedence constraints on a single machine

_Example 3_.: The schedule resulting from Algorithm 3 for the instance from Example 1 is illustrated in Figure 4. Its total weighted completion time is \(\sum_{j=1}^{4}w_{j}C_{j}^{\text{ALG}}=6+9+2\cdot 13+18=59\). An optimal schedule processes the jobs in the order \(3,1,2,4\) and has objective value \(\sum_{j=1}^{4}w_{j}C_{j}^{\text{OPT}}=3+9+2\cdot 13+18=56\). The example demonstrates that the schedule resulting from our approximation algorithm is not consistent with a Sidney decomposition, in contrast to (almost) all previously known \(2\)-approximation algorithms [9].

**Corollary 5.1**.: _Algorithm 3 is a \(2\)-approximation algorithm that runs in time \(O(n^{2})\)._

Figure 4: Schedule obtained by list scheduling in order of the completion times from the schedule in Example 1

Proof.: In this proof we omit the instance from our notations. We assume w.l.o.g. that the jobs are scheduled in the order \(1,\ldots,n\) in the last step of the algorithm, so that \(C_{1}^{\mathrm{ALG}}<\cdots<C_{n}^{\mathrm{ALG}}\). Then for every job \(j\in N\) we have \(C_{j}^{\mathrm{ALG}}=\sum_{k\leq j}p_{k}\leq C_{j}^{\prime}\) because all jobs \(k\) with \(k\leq j\) have been processed to completion in the virtual schedule by time \(C_{j}^{\prime}\). Hence, \[\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{ALG}}\leq\sum_{j=1}^{n}w_{j}C_{j}^{\prime}\stackrel{{\ref{eq:1}}}{{\leq}}2\sum_{j=1}^{n}w_{j}C_{j}^{\mathrm{OPT}}.\] Since in every iteration one job is removed from \(U\), the while-loop has at most \(n\) iterations. In each iteration a linear number of elementary operations is performed to compute the rates and to find the job finished next, so the total number of steps in the loop is in \(O(n^{2})\). List scheduling can also be done in linear time, whence the total number of operations is in \(O(n^{2})\). Note that this bound on the number of elementary operations does not immediately imply strongly polynomial running time because it also has to be ensured that the encoding length of the numbers occurring in the computation remains polynomially bounded. This will be shown in Appendix A.

Finitely many preemptions on identical parallel machines. On identical parallel machines, we apply McNaughton's [24] wrap-around rule to each piece of the virtual schedule between two consecutive completion times. This is formalized in Algorithm 4.
_Example 4_.: The schedule resulting from Algorithm 4 for the instance from Example 2 is illustrated in Figure 5. The virtual completion times \(C_{j}^{\prime}\) computed in the first part are exactly the completion times from Example 2. In the second part of the algorithm, McNaughton's wrap around rule considers the jobs in order of their virtual completion times, i.e., in the order \((1,2,4,3,5,6)\). **Corollary 5.2**.: _Algorithm 4 is a \(3\)-approximation ratio introducing at most \(O(n^{2})\) preemptions. It runs in time \(O(n^{4})\)._ Proof.: The approximation factor follows from Theorem 4.1 because \(C_{j}^{\mathrm{ALG}}\leq C_{j}^{\prime}\) for all \(j\in N\). Between any two consecutive virtual completion times, we preempt every unfinished job at most twice: once if it is wrapped around and once at the end of the interval. Hence, the number of preemptions can be bounded by \(2n^{2}\). At the beginning and for the first \(n-1\) virtual job completions, we apply the algorithm of Gallo, Grigoriadis, and Tarjan [10] to a network with \(O(n)\) nodes and \(O(n^{2})\) arcs. This can be done in time \(O(n|A|\log(n^{2}/|A|))\subseteq O(n^{3})\). This yields a total running time in \(O(n^{4})\). Clearly this dominates all other steps of the algorithm. Figure 5: Schedule obtained by Algorithm 4 for the instance from Example 2 ``` ``` \(\triangleright\)Compute virtual schedule Initialize \(t\gets 0\), \(U\gets N\), and \(Y_{j}(t)\gets 0\) for all \(j\in N\). while\(U\neq\emptyset\)do\(\triangleright\)\(n\) iterations let \(F_{t}\) be the jobs from \(U\) without predecessor in \(U\); apply Algorithm 2 to the graph \(D_{t}\) and the parametric capacities \(u^{t}\) defined \(\triangleright\)\(O(n^{3})\) in Section 4 to obtain \(R_{i}(t)\) for all \(i\in F_{t}\); set \(\tau_{i}\leftarrow\frac{p_{i}-Y_{i}(t)}{R_{i}(t)}\) for all \(i\in F_{t}\); \(\triangleright\)\(O(n)\) let \(j\leftarrow\arg\min_{i\in F_{t}}\tau_{i}\); \(\triangleright\)\(O(n)\) set \(Y_{i}(t+\tau_{j})\gets Y_{i}(t)+R_{t}(t)\tau_{j}\) for all \(i\in F_{t}\) and \(Y_{k}(t+\tau_{j})\gets Y_{k}(t)\) for \(k\in N\setminus F_{t}\); \(\triangleright\)\(O(n)\) update \(t\gets t+\tau_{j}\) and \(U\gets U\setminus\{j\}\); set \(C^{\prime}_{j}\gets t\). \(\triangleright\)Compute actual schedule (McNaughton's wrap around rule) \(\triangleleft\) Order the jobs so that \(C^{\prime}_{1}\leq\cdots\leq C^{\prime}_{n}\). \(\triangleright\) Reset \(t\gets 0\). for\(j=1,\ldots,n\)do\(\triangleright\)\(n\) iterations set \(i\gets 1\), \(u\gets t\), \(k\gets j\), and \(\Delta_{k}\gets Y_{k}(C^{\prime}_{j})-Y_{k}(t)\); while\(k\leq n\)do\(\triangleright\)\(\leq 2n\) iterations if\(u+\Delta_{k}\leq C^{\prime}_{j}\)then process job \(k\) on machine \(i\) from time \(u\) to time \(u+\Delta_{k}\); replace \(u\gets u+\Delta_{k}\); increment \(k\gets k+1\); if\(k\leq n\)then set \(\Delta_{k}\gets Y_{k}(C^{\prime}_{j})-Y_{k}(t)\); else process job \(k\) on machine \(i\) from time \(u\) to time \(C^{\prime}_{j}\); set \(\Delta_{k}\gets u+\Delta_{k}-C^{\prime}_{j}\); increment \(i\gets i+1\); set \(u\gets t\); set \(t\gets C^{\prime}_{j}\). ``` **Algorithm 4**Clairvoyant scheduling with precedence constraints on identical parallel machines In Appendix A a bound on the encoding length of the appearing numbers is proved, implying that the algorithm runs in strongly polynomial time. ## 6 Conclusion For single machine scheduling with precedence constraints, under UGC no approximation ratio better than 2 can be achieved within polynomial time. 
On the other hand, for non-clairvoyant scheduling, even without precedence constraints, no performance guarantee below 2 is possible, no matter how much computation time is allowed. Our algorithm shows that, when allowing infinitesimal preemptions, these two bounds can be reached simultaneously, i.e., there is a 2-competitive efficient non-clairvoyant algorithm. In other words, assuming UGC, the problem is so hard that knowing the processing times is of no use when polynomial running time is required. This is in contrast to many scheduling problems without precedence constraints, which also face a lower bound of 2 for any non-clairvoyant algorithm but can be solved in polynomial time or admit a PTAS in the clairvoyant setting [34, 1]. Also for preemptive scheduling on identical parallel machines, the presented algorithm has the best known performance guarantee of any approximation algorithm and of any non-clairvoyant algorithm, although no matching (conditional) lower bounds exist in this case. So it remains open to determine the exact approximability as well as the best possible competitive ratio of a non-clairvoyant algorithm.
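To make the single-machine procedure of Section 5 concrete, here is a compact, self-contained sketch of Algorithm 3 (our illustrative code, not the authors' implementation): it simulates the virtual weighted round-robin schedule, recomputing the collected weights directly in each round rather than via the initial depth-first search, and then list-schedules in order of virtual completion times. Jobs are 0-indexed and weights are assumed positive.

```python
def schedule_single_machine(p, w, prec):
    """Sketch of Algorithm 3: simulate the virtual weighted round-robin
    schedule, then list-schedule non-preemptively in order of the virtual
    completion times.  prec is a set of (predecessor, successor) pairs."""
    n = len(p)
    succ = {j: [k for (i, k) in prec if i == j] for j in range(n)}
    Y, t, U = [0.0] * n, 0.0, set(range(n))
    virt = [0.0] * n
    while U:
        F = [j for j in U if all(i not in U for (i, k) in prec if k == j)]
        # assign every unfinished job to the first available job reaching it
        assigned, W = set(), {}
        for i in F:
            stack, tot = [i], 0.0
            while stack:
                j = stack.pop()
                if j in assigned or j not in U:
                    continue
                assigned.add(j)
                tot += w[j]
                stack.extend(succ[j])
            W[i] = tot
        total = sum(W.values())
        rates = {i: W[i] / total for i in F}
        # advance the virtual schedule to the next completion
        tau = {i: (p[i] - Y[i]) / rates[i] for i in F}
        jstar = min(tau, key=tau.get)
        for i in F:
            Y[i] += rates[i] * tau[jstar]
        t += tau[jstar]
        virt[jstar] = t
        U.remove(jstar)
    # non-preemptive list scheduling in order of virtual completion times
    order = sorted(range(n), key=lambda j: virt[j])
    C, clock = [0.0] * n, 0.0
    for j in order:
        clock += p[j]
        C[j] = clock
    return C

# instance of Examples 1 and 3: p = (6, 4, 3, 5), w = (1, 2, 1, 1), job 1 -> job 2
print(schedule_single_machine([6, 4, 3, 5], [1, 2, 1, 1], {(0, 1)}))
# [6.0, 13.0, 9.0, 18.0], giving the objective value 59 reported in Example 3
```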
2303.18221
Fluctuation without dissipation: Microcanonical Langevin Monte Carlo
Stochastic sampling algorithms such as Langevin Monte Carlo are inspired by physical systems in a heat bath. Their equilibrium distribution is the canonical ensemble given by a prescribed target distribution, so they must balance fluctuation and dissipation as dictated by the fluctuation-dissipation theorem. In contrast to the common belief, we show that the fluctuation-dissipation theorem is not required because only the configuration space distribution, and not the full phase space distribution, needs to be canonical. We propose a continuous-time Microcanonical Langevin Monte Carlo (MCLMC) as a dissipation-free system of stochastic differential equations (SDE). We derive the corresponding Fokker-Planck equation and show that the stationary distribution is the microcanonical ensemble with the desired canonical distribution on configuration space. We prove that MCLMC is ergodic for any nonzero amount of stochasticity, and for smooth, convex potentials, the expectation values converge exponentially fast. Furthermore, the deterministic drift and the stochastic diffusion separately preserve the stationary distribution. This uncommon property is attractive for practical implementations as it implies that the drift-diffusion discretization schemes are bias-free, so the only source of bias is the discretization of the deterministic dynamics. We applied MCLMC on a lattice $\phi^4$ model, where Hamiltonian Monte Carlo (HMC) is currently the state-of-the-art integrator. For the same accuracy, MCLMC converges 12 times faster than HMC on an $8\times8$ lattice. On a $64\times64$ lattice, it is already 32 times faster. The trend is expected to persist to larger lattices, which are of particular interest, for example, in lattice quantum chromodynamics.
Jakob Robnik, Uroš Seljak
2023-03-31T17:24:33Z
http://arxiv.org/abs/2303.18221v2
# Microcanonical Langevin Monte Carlo ###### Abstract We propose a method for sampling from an arbitrary distribution \(\exp[-S(\mathbf{x})]\) with an available gradient \(\nabla S(\mathbf{x})\), formulated as an energy-preserving stochastic differential equation (SDE). We derive the Fokker-Planck equation and show that both the deterministic drift and the stochastic diffusion separately preserve the stationary distribution. This implies that the drift-diffusion discretization schemes are bias-free, in contrast to the standard Langevin dynamics. We apply the method to the \(\phi^{4}\) lattice field theory, showing the results agree with the standard sampling methods but with significantly higher efficiency compared to the current state-of-the-art samplers. + Footnote †: preprint: APS/123-QED The dynamics of a particle with location \(\mathbf{x}(t)\), momentum \(\mathbf{\Pi}(t)\), and the Hamiltonian function \(H(\mathbf{x},\,\mathbf{\Pi})=\frac{1}{2}|\mathbf{\Pi}|^{2}+S(\mathbf{x})\) is described by the Hamiltonian equations, which is a deterministic system of ordinary differential equations (ODE) for the phase space variables \(\mathbf{z}=(\mathbf{x},\mathbf{\Pi})\). Langevin dynamics additionally models microscopic collisions with a heat bath by introducing damping and the diffusion process giving rise to a set of Stochastic Differential Equations (SDE). Damping and diffusion are tied together by the fluctuation-dissipation theorem, ensuring that the probability distribution \(\rho_{t}(\mathbf{z})\) converges to the canonical ensemble \(\propto\exp[-H(\mathbf{z})]\) on the phase space. The marginal configuration space distribution is then also canonical \(\rho(\mathbf{x})\propto\exp[-S(\mathbf{x})]\). The time evolution of the density is governed by the Liouville equation for Hamiltonian dynamics and by the Fokker-Planck equation for Langevin dynamics, both of which are deterministic partial differential equations (PDE). Hamiltonian dynamics, supplemented by occasional Gaussian momentum resampling converges to the canonical distribution on the phase space, so both Langevin and Hamiltonian dynamics can be used to sample from an arbitrary distribution \(\rho(\mathbf{x})\propto\exp[-S(\mathbf{x})]\), provided that we can compute the gradient \(\nabla S(\mathbf{x})\), as we need to simulate the dynamics. The resulting algorithms are called (underdamped) Langevin Monte Carlo (LMC) and Hamiltonian (also called Hybrid) Monte Carlo (HMC). Both LMC and HMC have been applied in the context of high dimensional Monte Carlo Markov Chain (MCMC) sampling, such as Bayesian posteriors, field theory, statistical physics, etc. In high-dimensional settings, these gradient-based methods are vastly more efficient than gradient-free MCMC, such as Metropolis-Hastings. Metropolis-Hastings adjustment is however used for acceptance or rejection of HMC trajectory and related procedures exist for LMC as well. An interesting question is what is the complete framework of possible ODE/SDE whose equilibrium solution corresponds to the target density \(\rho(\mathbf{x})\propto\exp[-S(\mathbf{x})]\). It has been argued [1] that the complete framework is given by a general form of the drift term \(\mathbf{B}(\mathbf{z})=[D(\mathbf{z})+Q(\mathbf{z})]\nabla H(\mathbf{z})+\Gamma(\mathbf{z})\), where \(H(\mathbf{z})\) is the Hamiltonian, \(D(\mathbf{z})\) is positive definite diffusion matrix and \(Q(\mathbf{z})\) is skew-symmetric matrix. 
\(\Gamma(\mathbf{z})\) is specified by derivatives of \(D(\mathbf{z})\) and \(Q(\mathbf{z})\). This framework implicitly assumes that the equilibrium distribution is canonical on the phase space, \(\rho(\mathbf{z})\propto\exp[-H(\mathbf{z})]\), after which the momentum can be integrated out for Hamiltonians with separable kinetic and potential energies. In general, however, we only require the marginal \(\mathbf{x}\) distribution to be canonical, \(\rho(\mathbf{x})\propto\exp[-S(\mathbf{x})]\), giving rise to the possibility of additional formulations for which the stationary distribution matches the target distribution but without the momentum distribution being canonical. One such example is the Microcanonical Hamiltonian Monte Carlo [2; 3] (MCHMC), where the energy is conserved throughout the process, and a suitable choice of the kinetic energy and momentum bounces enforces the correct marginal configuration space distribution. The purpose of this paper is to explore MCHMC in the continuous-time limit, with and without diffusion. We first derive the Liouville equation for continuous deterministic dynamics directly from the ODE, and show that its stationary solution is the target distribution. Second, we extend from ODE to an SDE, giving rise to continuous Microcanonical Langevin (MCLMC) dynamics. In contrast to the standard Langevin dynamics, the energy conservation leads to dynamics that does not have a velocity damping term associated with the stochastic term, such that the noise is energy conserving. We derive the associated Fokker-Planck equation and show its stationary solution is the same as for the Liouville equation. Fi nally, we discretize the SDE and apply it to the simplest non-trivial lattice field theory \(\phi^{4}\) model, showing that this approach correctly recovers the numerical solution with an efficiency that far exceeds that of the standard integrators. The time evolution of a particle in the Microcanonical Hamiltonian Monte Carlo is given by the ODE [2; 3] \[\dot{\mathbf{x}} =\mathbf{u} \tag{1}\] \[\dot{\mathbf{u}} =P(\mathbf{u})\mathbf{f}(\mathbf{x}),\] where \(\mathbf{x}\) is the position of a particle in the configuration space and \(\mathbf{u}\) is its velocity. We have introduced the projector \(P(\mathbf{u})=I-\mathbf{u}\mathbf{u}^{T}\) and we introduced the force \(\mathbf{f}(\mathbf{x})=-\nabla S(\mathbf{x})/(d-1)\). The force in [2; 3] was defined with a factor of \(d\) rather than \(d-1\), which required weights. These weights are eliminated with the new formulation here. The dynamics preserves the norm of \(\mathbf{u}\) if we start with \(\mathbf{u}\cdot\mathbf{u}=1\), \[\frac{d}{dt}(\mathbf{u}\cdot\mathbf{u})=2\mathbf{u}\cdot\dot{\mathbf{u}}=\mathbf{u}\cdot P(\mathbf{u} )\mathbf{f}=(1-\mathbf{u}\cdot\mathbf{u})(\mathbf{u}\cdot\mathbf{f})=0,\] so the particle is confined to the \(2d-1\) dimensional manifold \(\mathcal{M}=\mathbb{R}^{d}\times S^{d-1}\), i.e. the velocity is defined on a sphere of unit radius. We will denote the points on \(\mathcal{M}\) by \(\mathbf{z}\). Equivalently, the dynamics of Equation (1) can be described by the flow on the manifold, which is a 1-parametrical family of maps from the manifold onto itself \(\varphi_{t}:\mathcal{M}\to\mathcal{M}\), such that \(\varphi_{t}(\mathbf{z})\) is the solution of Equation (1) with the initial condition \(\mathbf{z}\). 
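The norm-preservation computation above can also be checked numerically. The snippet below is purely illustrative (a toy quadratic potential and a naive Euler discretization, neither of which is used in the paper): it confirms that the projected force keeps \(\mathbf{u}\) on the unit sphere up to discretization error.

```python
import numpy as np

# Toy check of d/dt(u.u) = 0 for the dynamics (1), with S(x) = |x|^2 / 2.
rng = np.random.default_rng(0)
d, eps = 10, 1e-3
x = rng.standard_normal(d)
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for _ in range(1000):
    f = -x / (d - 1)                      # f = -grad S / (d - 1)
    x = x + eps * u                       # dx/dt = u
    u = u + eps * (f - np.dot(u, f) * u)  # du/dt = P(u) f
print(np.linalg.norm(u) - 1.0)            # tiny: only the Euler discretization error remains
```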
The flow induces the drift vector field \(\mathbf{B}\), which maps scalar observables on the manifold \(\mathcal{O}(\mathbf{z})\) to their time derivatives under the flow, \[\mathbf{B}(\mathcal{O})=\frac{d}{dt}\big{(}\mathcal{O}\circ\varphi_{t}\big{)}|_{t=0}.\] We will be interested in the evolution of the probability density distribution of the particle under the flow. In differential geometry, the density is described by a volume form, which is a differential \((2d-1)\)-form, \[\widehat{\mathbf{\rho}}(\mathbf{z})=\rho(\mathbf{z})\,dz^{1}\wedge dz^{2}\wedge...\,dz^{2d-1}.\] Suppose we are given the volume form \(\widehat{\rho}_{t}\) at some time \(t\). Formally, we can translate it in time by applying the push-forward map \(\varphi_{s*}\), \(\widehat{\rho}_{t+s}=\varphi_{s*}\widehat{\rho}_{t}\). The infinitesimal form of the above equation gives us the differential equation for the density: \[\frac{d}{dt}\widehat{\rho}_{t} =\frac{d}{ds}\big{(}\varphi_{s*}\widehat{\rho}_{t}\big{)}|_{s=0} =\frac{d}{ds}\big{(}\varphi_{-s}^{*}\widehat{\rho}_{t}\big{)}|_{s=0} \tag{2}\] \[\equiv-\mathcal{L}_{\mathbf{B}}\widehat{\rho}_{t}=-\big{(}\mathrm{div}_{\widehat{\rho}_{t}}\mathbf{B}\big{)}\widehat{\rho}_{t},\] which is also known as the Liouville equation. Here, \(\varphi_{-s}^{*}=\varphi_{s*}\) is the pull-back map, \(\mathcal{L}_{\mathbf{B}}\) is the Lie derivative along the drift vector field \(\mathbf{B}\) and div is the divergence. This is the continuity equation for the probability in the language of differential geometry. The Liouville equation in coordinates is \[\dot{\rho}(\mathbf{z})=-\nabla\cdot\big{(}\rho\mathbf{B}\big{)}\equiv\sum_{i=1}^{2d-1}\frac{\partial}{\partial z^{i}}\big{(}\rho(\mathbf{z})B^{i}(\mathbf{z})\big{)} \tag{3}\] We will work in the Euclidean coordinates \(\{x^{i}\}_{i=1}^{d}\) on the configuration space and spherical coordinates \(\{\vartheta^{\mu}\}_{\mu=1}^{d-1}\) for the velocities on the sphere, such that the manifold is parametrized by \(\mathbf{z}=(\mathbf{x},\mathbf{\vartheta})\). We will adopt the Einstein summation convention and use the Latin letters (\(i\), \(j\),...) to indicate the sum over the Euclidean coordinates and the Greek letters (\(\mu\), \(\nu\),...) for the sum over the spherical coordinates. The spherical coordinates are defined by the inverse transformation, \[u_{1}= \cos\vartheta^{1},\] \[u_{2}= \sin\vartheta^{1}\cos\vartheta^{2}\] \[\vdots\] \[u_{d-1}= \sin\vartheta^{1}\cdots\sin\vartheta^{d-2}\cos\vartheta^{d-1}\] \[u_{d}= \sin\vartheta^{1}\cdots\sin\vartheta^{d-2}\sin\vartheta^{d-1},\] which automatically ensures \(\mathbf{u}\cdot\mathbf{u}=1\). The metric on the sphere in the spherical coordinates is \[g_{\mu\nu}=\frac{\partial u_{k}}{\partial\vartheta^{\mu}}\frac{\partial u_{k}}{\partial\vartheta^{\nu}}=\mathrm{Diag}[1,\,\sin^{2}(\vartheta^{1}),\,\sin^{2}(\vartheta^{1})\sin^{2}(\vartheta^{2}),\,...]_{\mu\nu},\] and the volume element is \(\sqrt{g}=\mathrm{det}g_{\mu\nu}^{1/2}=\Pi_{k=1}^{d-2}(\sin\vartheta^{k})^{d-1-k}\). The drift vector field is \[\mathbf{B}(\mathbf{z})=u_{i}(\mathbf{\vartheta})\frac{\partial}{\partial x^{i}}+g^{\mu\nu}(\mathbf{\vartheta})\frac{\partial u_{i}}{\partial\vartheta^{\nu}}(\mathbf{\vartheta})P_{ij}(\mathbf{\vartheta})f_{j}(\mathbf{x})\frac{\partial}{\partial\vartheta^{\mu}}.\] **Theorem 1**.: _The stationary solution of the Liouville equation is_ \[\rho_{\infty}\propto e^{-S(\mathbf{x})}\sqrt{g(\mathbf{\vartheta})}.
\tag{4}\] Proof.: Inserting \(\rho_{\infty}\) in the Liouville equation (3) gives \[\dot{\rho}_{\infty}=-\mathbf{u}\cdot\partial_{\mathbf{x}}\rho_{\infty}-\frac{1}{\sqrt{ g}}\partial_{\mu}\big{(}\sqrt{g}B^{\mu}\big{)}\rho_{\infty}.\] The first term is \(\mathbf{u}\cdot\partial_{\mathbf{x}}\rho_{\infty}=(d-1)\mathbf{u}\cdot\mathbf{f}\rho_{\infty}\). The second term transforms as a scalar under the transformations of the spherical coordinates. We can use this to simplify the calculation: at each fixed \(\mathbf{x}\) we will pick differently oriented spherical coordinates, such that \(\vartheta^{1}=0\) always corresponds to the direction of \(\mathbf{f}(\mathbf{x})\) and \(f_{i}=\delta_{1i}|\mathbf{f}|\). We then compute \[B_{\mu} =(\partial_{\mu}u_{i})P_{ij}f_{j}=(\partial_{\mu}u_{i})\big{(} \delta_{1i}-\cos\vartheta^{1}u_{i})|\mathbf{f}|\] \[=\big{(}\partial_{\mu}u_{1}-\cos\vartheta^{1}(\partial_{\mu}u_{i}) u_{i}\big{)}|\mathbf{f}|\] \[=\big{(}-\sin\vartheta^{1}\delta_{1\mu}-\cos\vartheta^{1}\partial_{ \mu}\big{(}\frac{1}{2}\mathbf{u}\cdot\mathbf{u}\big{)}\big{)}|\mathbf{f}|=-\sin\vartheta^{1} \delta_{1\mu},\] so \[\frac{1}{\sqrt{g}}\partial_{\mu}\big{(}\sqrt{g}B^{\mu}\big{)}=\frac{-| \boldsymbol{f}|}{\sin^{d-2}\vartheta^{1}}\frac{\partial}{\partial\vartheta^{1}} \sin^{d-1}\vartheta^{1}\] \[=-(d-1)|\boldsymbol{f}|\cos\vartheta^{1}=-(d-1)\boldsymbol{u} \cdot\boldsymbol{f}.\] The last expression transforms as a scalar with respect to transformations on the sphere and is therefore valid in all coordinate systems, in particular, in the original one. Combining the two terms gives \(\dot{\rho}_{\infty}=0\), completing the proof. \(\blacksquare\) In [3] it was proposed that adding a random perturbation to the momentum direction after each step of the discretized deterministic dynamics boosts ergodicity, but the continuous-time version was not explored. Here, we consider a continuous-time analog and show this leads to Microcanonical Langevin SDE for the particle evolution and to Fokker-Planck equation for the probability density evolution. We promote the deterministic ODE of Equation (1) to the following Microcanonical Langevin SDE: \[d\boldsymbol{x} =\boldsymbol{u}dt \tag{5}\] \[d\boldsymbol{u} =P(\boldsymbol{u})\boldsymbol{f}(\boldsymbol{x})dt+\eta P( \boldsymbol{u})d\boldsymbol{W}.\] Here, \(\boldsymbol{W}\) is the Wiener process, i.e. a vector of random noise variables drawn from a Gaussian distribution with zero mean and unit variance, and \(\eta\) is a free parameter. With the addition of the diffusion term, the Liouville equation (3) is now promoted to the Fokker-Planck equation [4], \[\dot{\rho}=-\nabla\cdot(\rho\boldsymbol{B})+\frac{\eta^{2}}{2}\widehat{\nabla }^{2}\rho, \tag{6}\] where \(\widehat{\nabla}^{2}=\nabla^{\mu}\nabla_{\mu}\) is the Laplace-Beltrami operator on the sphere and \(\nabla_{\mu}\) is the covariant derivative on the sphere. In coordinates, the Laplacian can be computed as \(\frac{1}{\sqrt{g}}\partial_{\mu}\big{(}\sqrt{g}\partial^{\mu}\rho\big{)}\). **Theorem 2**.: _The distribution \(\rho_{\infty}\) of Equation (4) is a stationary solution of the Fokker-Planck equation (6) for any value of \(\eta\)._ Proof.: Upon inserting \(\rho_{\infty}\) into the right-hand-side of the Fokker-Planck equation, the first term vanishes by Theorem 1. 
The second term also vanishes, \[\widehat{\nabla}^{2}\rho_{\infty}\propto\nabla^{\mu}\nabla_{\mu}\sqrt{g}e^{-S(\boldsymbol{x})}=e^{-S(\boldsymbol{x})}\nabla^{\mu}\nabla_{\mu}\sqrt{g}=0,\] since the covariant derivative of the metric determinant is zero. \(\blacksquare\) Consider for a moment Equation (5) with only the diffusion term on the right-hand-side. This SDE describes the Brownian motion on the sphere and the identity flow on the \(\boldsymbol{x}\)-space [4]. Realizations from the Brownian motion on the sphere can be generated exactly [5]. Let us denote by \(\psi_{s}^{\eta}\) the corresponding density flow map, such that \(\psi_{s}^{\eta}[\rho_{t}]=\rho_{t+s}\). The flow of the full SDE (5) can then be approximated at discrete times \(\{n\epsilon\}_{n=0}^{\infty}\) by the Euler-Maruyama scheme [6]: \[\rho_{(n+1)\epsilon}=\psi_{\epsilon}^{\eta}[\varphi_{\epsilon*}\,\rho_{n\epsilon}]. \tag{7}\] For a generic SDE, this approximation leads to bias in the stationary distribution. This is however not the case in MCLMC: **Theorem 3**.: _The distribution \(\rho_{\infty}\) of Equation (4) is preserved by the Euler-Maruyama scheme (7) for any value of \(\eta\)._ Proof.: The deterministic push-forward map preserves \(\rho_{\infty}\) by Theorem 1. The Fokker-Planck equation for the stochastic-only term is \(\dot{\rho}=\frac{\eta^{2}}{2}\widehat{\nabla}^{2}\rho\), which preserves \(\rho_{\infty}\) by Theorem 2. \(\blacksquare\) In the standard Langevin equation the fluctuation term is accompanied by a dissipation term, and the strength of both is controlled by the damping coefficient. The deterministic and stochastic parts do not preserve the stationary distribution separately, and so the discretization schemes lead to bias in the stationary distribution for a finite \(\epsilon\). In contrast, for MCLMC an exact deterministic ODE integrator would remain exact with the SDE. **Application**: to show the promise of MCLMC as a general-purpose MCMC tool we will apply it to the scalar \(\phi^{4}\) field theory in two Euclidean dimensions. This is one of the simplest non-trivial lattice field theory examples. The scalar field in a continuum is a scalar function \(\phi(x,y)\) on the plane with the area \(V\). The probability density on the field configuration space is proportional to \(e^{-S[\phi]}\), where the action is \[S[\phi(x,y)]=\int\big{(}-\phi\partial^{2}\phi+m^{2}\phi^{2}+\lambda\phi^{4}\big{)}dxdy.\] The squared mass \(m^{2}<0\) and the quartic coupling \(\lambda>0\) are the parameters of the theory. The system is interesting as it exhibits spontaneous symmetry breaking, and belongs to the same universality class as the Ising model. The action is symmetric to the global field flip symmetry \(\phi\to-\phi\). However, at small \(\lambda\), the typical set of field configurations splits in two symmetric components, each with non-zero order parameter \(\langle\bar{\phi}\rangle\), where \(\bar{\phi}=\frac{1}{V}\int\phi(x,y)dxdy\) is the spatially averaged field. The mixing between the two components is highly unlikely, and so even a small perturbation can cause the system to acquire a non-zero order parameter. One such perturbation is a small external field \(h\), which amounts to the additional term \(-h\int\phi(x,y)dxdy\) in the action.
The susceptibility of the order parameter to an external field is defined as \[\chi=V\frac{\partial\bar{\phi}}{\partial h}|_{h=0}=\lim_{h\to 0^{+}}V\langle\big{(}\bar{\phi}-\langle\bar{\phi}\rangle\big{)}^{2}\rangle,\] which diverges at the critical point, where the second order phase transition occurs. The \(\phi^{4}\) theory does not admit analytic solutions due to the quartic interaction term. A standard approach is to discretize the field on a lattice and make the lattice spacing as fine as possible [7]. The field is then specified by a vector of field values on a lattice \(\phi_{ij}\) for \(i,j=1,2,\ldots,L\). The dimensionality of the configuration space is \(d=L^{2}\). We will impose periodic boundary conditions, such that \(\phi_{i,L+1}=\phi_{i1}\) and \(\phi_{L+1,j}=\phi_{1j}\). The \(h=0\) lattice action is [8] \[S_{\rm lat}[\phi]=\sum_{i,j=1}^{L}2\phi_{ij}\big{(}2\phi_{ij}-\phi_{i+1,j}-\phi_{i,j+1}\big{)}+m^{2}\phi_{ij}^{2}+\lambda\phi_{ij}^{4}.\] As is common in the literature [9; 10; 11], we will fix \(m^{2}=-4\) (which removes the diagonal terms \(\phi_{ij}^{2}\) in the action) and study the susceptibility as a function of \(\lambda\). The susceptibility estimator is [11] \(\chi=L^{2}\langle\big{(}\bar{\phi}-\langle\bar{\phi}\rangle\big{)}^{2}\rangle\), where \(\bar{\phi}=\frac{1}{L^{2}}\sum_{ij}\phi_{ij}\) and the expectation \(\langle\cdot\rangle\) is over the samples. A measure of sampling efficiency is the number of action gradient calls needed to obtain an independent sample. Often we wish to achieve some accuracy of the expected second moments [3]. We define the squared bias \(b_{2}^{2}\) as the relative error of the expected second moments in the Fourier basis, \(b_{2}^{2}=\frac{1}{L^{2}}\sum_{k,l=1}^{L}\big{(}\frac{\langle|\widetilde{\phi}_{kl}|^{2}\rangle_{\rm sample}-\langle|\widetilde{\phi}_{kl}|^{2}\rangle_{\rm truth}}{\langle|\widetilde{\phi}_{kl}|^{2}\rangle_{\rm truth}}\big{)}^{2}\), where \(\widetilde{\phi}\) is the scalar field in the Fourier basis, \(\widetilde{\phi}_{kl}=\frac{1}{\sqrt{L^{2}}}\sum_{n,m=1}^{L}\phi_{nm}e^{-2\pi i(kn+lm)/L}\). In analogy with Gaussian statistics, we define the effective sample size (ESS) to be \(2/b_{2}^{2}\). We report the ESS per action gradient evaluation at the instant when \(b_{2}=0.1\), which corresponds to 200 effective samples; its inverse gives the number of gradient evaluations needed to achieve one independent sample. In the discrete scheme (7) it is not necessary to have the Brownian motion on the sphere as a stochastic update in order to have \(\rho_{\infty}\) as a stationary distribution. In fact, any discrete stochastic process on the sphere which has the uniform distribution as its stationary distribution and acts as an identity on the \(\mathbf{x}\)-space will do. In the practical algorithm, we therefore avoid generating the complicated Brownian motion on the sphere and instead use the generative process \(\mathbf{u}_{(n+1)\epsilon}=(\mathbf{u}_{n\epsilon}+\nu\mathbf{r})/|\mathbf{u}_{n\epsilon}+\nu\mathbf{r}|\), where \(\mathbf{r}\) is a random draw from the standard normal distribution and \(\nu\) is a parameter with the same role as \(\eta\). We tune the parameter \(\eta\) by estimating the effective sample size (ESS) [3]. We approximate the deterministic flow \(\varphi_{t}\) with the Minimal Norm integrator [3; 12] and tune the step size by targeting an average single-step squared energy error per dimension of 0.02 [3].
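Concretely, one discrete update of the sampler just described might be sketched as follows (Python/NumPy; all function and variable names are ours and purely illustrative). The deterministic part is replaced here by a crude Euler step with renormalization of \(\mathbf{u}\) rather than the Minimal Norm integrator used in the paper, the stochastic part is the normalized-Gaussian velocity refresh quoted above, and the \(\phi^{4}\) gradient and susceptibility estimator follow the lattice definitions given earlier:

```python
import numpy as np

def grad_action(phi, m2=-4.0, lam=1.0):
    """dS_lat/dphi_ij = (8 + 2*m2)*phi - 2*(sum of 4 nearest neighbours) + 4*lam*phi**3,
    with periodic boundaries implemented via np.roll."""
    nbrs = (np.roll(phi, -1, 0) + np.roll(phi, 1, 0) +
            np.roll(phi, -1, 1) + np.roll(phi, 1, 1))
    return (8.0 + 2.0 * m2) * phi - 2.0 * nbrs + 4.0 * lam * phi**3

def mclmc_step(phi, u, eps, nu, m2, lam, rng):
    """One update of the splitting scheme (7): a crude Euler-type deterministic
    step of the ODE (1), followed by the partial velocity refresh from the text."""
    d = phi.size
    f = -grad_action(phi, m2, lam).ravel() / (d - 1)
    u = u + eps * (f - u * np.dot(u, f))     # du/dt = P(u) f
    u /= np.linalg.norm(u)                   # stay on the unit sphere
    phi = phi + eps * u.reshape(phi.shape)   # dphi/dt = u
    r = rng.normal(size=d)
    u = (u + nu * r) / np.linalg.norm(u + nu * r)
    return phi, u

# Toy chain on a small lattice (parameters are illustrative, not the tuned values).
rng = np.random.default_rng(0)
L, m2, lam = 8, -4.0, 4.25
phi = rng.normal(size=(L, L))
u = rng.normal(size=L * L)
u /= np.linalg.norm(u)
phibar = []
for _ in range(5000):
    phi, u = mclmc_step(phi, u, eps=0.05, nu=0.05, m2=m2, lam=lam, rng=rng)
    phibar.append(phi.mean())
print("chi estimate:", L**2 * np.var(phibar))   # chi = L^2 <(phibar - <phibar>)^2>
```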
The tuning of the step size and \(\eta\) is done at each \(\lambda\) level separately and is included in the sampling cost. It amounts to around 10% of the sampling time. Figure 1: Top: susceptibility in the vicinity of the phase transition. We follow [11] and rescale the susceptibility and the quartic coupling by the Ising model critical exponents \(\nu=1\), \(\gamma=7/4\) and the critical coupling \(\lambda_{C}=4.25\) from [8]. The rescaling removes most of the lattice size dependence [13]. MCLMC agrees with a long NUTS chain. 2nd panel: effective sample size (ESS) per action gradient evaluation for MCLMC. Higher is better. MCLMC tuning cost is included. 3rd panel: same for NUTS. Bottom: same for HMC. The dotted lines are the corresponding results if the tuning cost of 500 warm-up samples is taken into account. We compare the results to standard HMC [14] and to a self-tuned HMC variant NUTS [15], both implemented in NumPyro [16]. For HMC, we find that the optimal number of gradient calls between momentum resamplings to be 20, 30, 40 and 50 for lattice sizes \(L=8\), 16, 32 and 64. The step size is determined with the dual averaging algorithm, which targets the average acceptance rate of 0.8 (NumPyro default), which adds considerably to the overall cost (Figure 1). The results for grid sizes from \(L=\) 8, 16, 32 and 64 are shown in Figure 1. The results for all samplers are computed with an annealing scheme: starting at high \(\lambda\) and using the final state of the sampler as an initial condition at the next lowest \(\lambda\) level. The initial condition at the highest \(\lambda\) level is a random draw from the standard normal distribution on each lattice site. There is a near perfect agreement between a very long NUTS run (denoted as truth) and MCLMC in terms of susceptibility, where we observe a second order phase transition around the rescaled \(\bar{\lambda}\sim 1\). Above the phase transition, ESS for MCLMC and HMC is relatively constant with \(\bar{\lambda}\). ESS for NUTS and HMC scales with \(L\) as \(d^{-1/4}=L^{-1/2}\), as expected from adjusted HMC [17]. At the phase transition, NUTS and HMC suffer from the critical slowing down, resulting in lower ESS. In contrast, ESS for MCLMC is almost independent of \(\bar{\lambda}\) and \(L\). Overall, MCLMC outperforms HMC and NUTS by 10-100 at \(L=64\) if HMC and NUTS tuning is not included, and by at least 40 if tuning is included (MCLMC auto-tuning is cheap and included in the cost, and we use the recommended 500 warm up samples for tuning of NUTS and HMC). We thus expect that for \(d=10^{8}\), typical of state-of-the-art lattice quantum chromodynamics calculations, the advantage of MCLMC over HMC and NUTS will be 2-3 orders of magnitude due to \(d^{1/4}\) scaling. MCLMC also significantly outperforms recently proposed Normalizing Flow (NF) based samplers [9; 11]. NFs scale poorly with dimensionality, and the training time increases by about one order of magnitude for each doubling of \(L\), e.g. of order 10 hours for \(L=32\) to reach 90% acceptance, and 60 hours to reach 60% acceptance at \(L=64\)[11]. In contrast, the wall-clock time of MCLMC at \(L=64\) on a GPU is a fraction of a second, while even at \(L=8096\) (completely out of reach of current NF based samplers) it is only 15 seconds. In summary, we introduced an energy conserving stochastic Langevin process in the continuous time limit that has no damping, and derived the corresponding Fokker-Planck equation. 
Its equilibrium solution is not canonical in the total energy, but it equals the desired target distribution given by the action, showing that the framework of [1] is not a complete recipe of all SDEs whose equilibrium solution is the target density. MCLMC is also of practical significance: we have shown that it vastly outperforms HMC on a lattice \(\phi^{4}\) model. HMC is currently the state-of-the-art integrator for lattice quantum chromodynamics, a field where the computational demands are particularly intensive. Numerical results presented here suggest that MCLMC could offer significant improvements over HMC in the setting of high dimensional lattice models. **Acknowledgments**: This material is based upon work supported in part by the Heising-Simons Foundation grant 2021-3282 and by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Contract No. DE-AC02-05CH11231 at Lawrence Berkeley National Laboratory to enable research for Data-intensive Machine Learning and Analysis.
2310.18319
Highly Conductive RuO$_2$ Thin Films from Novel Facile Aqueous Chemical Solution Deposition
Ruthenium dioxide (RuO$_2$) thin films were synthesized by Chemical Solution Deposition (CSD) on silicon substrates using only water and acetic acid as solvents. The microstructure, phase-purity, electrical and optical properties as well as the thermal stability of the thin films have been characterized. The microstructure of the thin films strongly depends on the annealing temperature: A smooth thin film was achieved at an annealing temperature of 600$^\circ$C. Higher annealing temperatures (800$^\circ$C) led to radial grain growth and an inhomogeneous thin film. A very low resistivity of 0.89 $\mu\Omega$m was measured for a 220 nm-thick thin film prepared at 600$^\circ$C. The resistivity of the thin films increases with temperature, which indicates metallic behavior. Phase-purity of the thin films was confirmed with X-ray Diffraction (XRD) measurements, X-ray Photoelectron Spectroscopy (XPS) and Raman spectroscopy. Transmission and reflectivity measurements indicate that RuO$_2$ efficiently blocks the UV-VIS and IR wavelengths. The optical constants determined via spectroscopic ellipsometry show high absorption in the near-IR region as well as a lower one in the UV-VIS region. The thermal stability was investigated by post-annealing, confirming that the thin films are stable up to 750$^\circ$C in synthetic air.
Martina Angermann, Georg Jakopic, Christine Prietl, Thomas Griesser, Klaus Reichmann, Marco Deluca
2023-09-20T10:50:26Z
http://arxiv.org/abs/2310.18319v1
## Highly Conductive RuO\({}_{2}\) Thin Films from Novel Facile Aqueous Chemical Solution Deposition

## Abstract

Ruthenium dioxide (RuO\({}_{2}\)) thin films were synthesized by Chemical Solution Deposition (CSD) on silicon substrates using only water and acetic acid as solvents. The microstructure, phase-purity, electrical and optical properties as well as the thermal stability of the thin films have been characterized. The microstructure of the thin films strongly depends on the annealing temperature: A smooth thin film was achieved at an annealing temperature of 600\({}^{\circ}\)C. Higher annealing temperatures (800\({}^{\circ}\)C) led to radial grain growth and an inhomogeneous thin film. A very low resistivity of 0.89 \(\upmu\Omega\)m was measured for a 220 nm-thick thin film prepared at 600\({}^{\circ}\)C. The resistivity of the thin films increases with temperature, which indicates metallic behavior. Phase-purity of the thin films was confirmed with X-ray Diffraction (XRD) measurements, X-ray Photoelectron Spectroscopy (XPS) and Raman spectroscopy. Transmission and reflectivity measurements indicate that RuO\({}_{2}\) efficiently blocks the UV-VIS and IR wavelengths. The optical constants determined via spectroscopic ellipsometry show high absorption in the near-IR region as well as a lower one in the UV-VIS region. The thermal stability was investigated by post-annealing, confirming that the thin films are stable up to 750\({}^{\circ}\)C in synthetic air.

**Keywords**: ruthenium dioxide, chemical solution deposition, thin film, conductive metal oxides;

**Acknowledgements**: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 951774. The authors want to thank J. S. Mateo (XRD measurements of the powders), K. Bakken, T. Gindel, A. Kobald and H. Kobald of the Materials Center Leoben Forschung GmbH for their collaboration and fruitful discussions.
## 2 Experimental

For RuO\({}_{2}\) thin film deposition, a 0.4 M solution was prepared by dissolving ruthenium(III)-nitrosylnitrate powder (Alfa Aesar, USA) in a 1:2 (V:V) water and acetic acid (Roth, Germany) mixture and stirring it overnight. The solution was deposited on plasma-cleaned silicon substrates (Si/600 nm <100> SiO\({}_{2}\), Siegert Wafer, Germany), which were subsequently spin-coated at 5000 rpm (with a 2500 rpm/s rate) for 30 s. The thin films were dried at 160 \({}^{\circ}\)C for 5 min on a hotplate prior to heating to 350 \({}^{\circ}\)C (1\({}^{\circ}\)C/s, 2 min) and crystallized at higher temperatures (600/700/800 \({}^{\circ}\)C, 10\({}^{\circ}\)C/s, 10 min) in a rapid thermal annealer (MILA-5050, ULVAC GmbH, Germany) under a constant gas flow of 0.8 l/min of N\({}_{2}\) and 0.2 l/min of O\({}_{2}\), corresponding to synthetic air. The deposition cycle was repeated 10 times to yield a thickness of ~200 nm. Additional post-annealing in the rapid thermal annealer was done for some of the samples. For the optical characterization, samples on fused silica substrates (MicroChemicals, Germany) were also prepared with the same procedure. RuO\({}_{2}\) powder was prepared by annealing the dried solution at 600/700/800/900 \({}^{\circ}\)C for 2 h in a muffle oven and crushing the powder in an agate mortar. The thermal behavior of the dried gel and powder was characterized with a TGA-DSC-MS (STA449F1A, coupled to a QMS 403c mass spectrometer, Netzsch, Germany) using a heating rate of 5\({}^{\circ}\)C/min. Raman measurements were performed with a WITec alpha300R spectrometer (WITec GmbH, Ulm, Germany) with 1800 gr/mm and an EC Epiplan-Neofluar DIC objective (Zeiss, Germany) using 10 mW of a 532 nm laser.
Powder XRD was performed with the D2 Phaser (Bruker, Germany) using a Co-K\({}_{\alpha}\) source with 0.06\({}^{\circ}\) per step between 15\({}^{\circ}\) and 90\({}^{\circ}\). For the thin films Cu-K\({}_{\alpha}\) source grazing incidence XRD (GI-XRD) was done with the D8-Discover Series II (Bruker, Germany) with 0.03\({}^{\circ}\) per step between 10\({}^{\circ}\) and 80\({}^{\circ}\). X-ray photoelectron spectroscopy (XPS) was conducted with a Thermo Fisher Scientific Inc. Nexsa G2 photoelectron spectrometer system equipped with a low-power Al-K\({}_{\alpha}\) X-ray source yielding a 30-400 \(\upmu\)m adjustable X-ray spot size. Scanning Electron Microscope (SEM) images were recorded with the Auriga 40 (Zeiss, Germany). The transmission and reflectance of RuO\({}_{2}\) thin films in the UV to near infrared range were measured with a Lambda 900 spectrometer (Perkin Elmer, Great Britain). Infrared spectra were measured up to 16 \(\upmu\)m using a FTIR Bruker Tensor 27 Instrument. Spectroscopic ellipsometry was used to determine the optical constants of the thin films (instrument J.A.Woolam VASE with the proprietary software). The spectral range extended from 300 nm to 1700 nm with a step size of 5 nm, and 65\({}^{\circ}\), 70\({}^{\circ}\), and 75\({}^{\circ}\) were used as the angles of incidence for the measurement. A homogeneous layer model was applied to evaluate the measurements, the dielectric function of which consisted of a DC offset, 5 Gaussian-broadened oscillators, and a pole point in the far-infrared. With this model, the measured values could be reproduced very well. Moreover, the sheet resistance was measured with the 4-point-probe setup of the aixACCT TF Analyzer 3000 (aixACCT Systems GmbH, Aachen, Germany) using tungsten needles. The thickness needed for the calculation of resistivity was determined from SEM images of cleaved samples. ## 3 Results and Discussion ### Thermal Analysis For the DSC-TGA measurements the solution was dried at 200\({}^{\circ}\)C or calcined at 600\({}^{\circ}\)C in a muffle furnace for 2 h, both in air. For the powder prepared at 200\({}^{\circ}\)C there is an exothermic peak visible around 280\({}^{\circ}\)C, which is likely due to the pyrolysis of the material (see Fig. 1). Also, a drastic mass loss of 45% is recorded between 170\({}^{\circ}\)C-320\({}^{\circ}\)C, which could be due to gas evolution (e.g. CO\({}_{2}\), NO\({}_{\rm x}\),...). Such enormous gas evolution is typical for pyrolysis reactions. Theoretically, the mass loss of the conversion of ruthenium-nitrosylnitrate to RuO\({}_{2}\) should be 58%, which is relatively similar to the measured value. Measurement of the powder prepared at 600\({}^{\circ}\)C also exhibited an exothermic peak at 750\({}^{\circ}\)C (see Fig. 2), which was accompanied by a mass gain of ~18%. This could be linked to the oxidation of Ru-metal, which was present solely in the powder (see Fig. 3 in chapter 3.2). The kinetics of the conversion reaction in the thin films seem to be different to the prepared powder, since there was no Ru-metal visible in the XRD of the thin films. This indicates that the metal could be oxidized in air atmosphere to RuO\({}_{2}\), which is further confirmed by the fact that during the measurement in nitrogen atmosphere no mass change was observed. The fact that Ru metal was only recorded for the powders and not the thin films might be due to the different microstructure of both materials. 
Since each heat-treated thin film layer was only around 20 nm thick, oxygen could probably penetrate the whole layer here. However, in the case of the powders, RuO\({}_{2}\) might have first formed on the surface of the relatively large powder particles, and then acted as a diffusion barrier [21-22], which might have resulted in Ru metal being present in the core of the particles. The information gained from the TGA/DSC measurements was used to design the temperature program for thin film preparation. Slow heating with a heating rate of 1\({}^{\circ}\)C/s was applied to the thin films up to the pyrolysis temperature of 350\({}^{\circ}\)C in order to avoid rapid gas evolution, which could lead to the formation of pores.

Figure 1: DSC-TGA measurements of the dried solution (exo \(\downarrow\))

Figure 2: DSC-TGA measurements of the calcined (600\({}^{\circ}\)C) powder (exo \(\downarrow\))

Additional TGA measurements coupled with mass spectrometry were done to identify the generated gases, and typical pyrolysis gases were detected (e.g. CO\({}_{2}\), NO\({}_{x}\),...), with a high share of CO\({}_{2}\) due to the high amount of acetic acid in the solution (see supplementary information).

### Phase Analysis

The phase purity and crystal structure of the thin films and powders were analyzed using XRD and Raman spectroscopy. Fig. 3 shows the XRD patterns of the powders prepared at different annealing temperatures, namely 600\({}^{\circ}\)C, 700\({}^{\circ}\)C, 800\({}^{\circ}\)C and 900\({}^{\circ}\)C. At annealing temperatures of 700\({}^{\circ}\)C or higher, peaks related to the planes (110), (101), (200), (111), (211), (110), (002), (221), (112), (301), (202) appear and the XRD pattern matches well the reference spectra of RuO\({}_{2}\) with tetragonal rutile structure (COD Card 2101852 [23]), which confirms phase purity in the produced films. For powders annealed at the lower temperature of 600\({}^{\circ}\)C, additional peaks appear for 2\(\theta\) angles between 40\({}^{\circ}\) and 60\({}^{\circ}\), which can be linked to Ru metal in the powder. The XRD spectra also indicate that the crystallinity of the samples improves with increasing annealing temperature. GI-XRD was done on the thin films prepared with different annealing temperatures (see Fig. 4). The spectra show that phase-pure and crystalline rutile RuO\({}_{2}\) thin films can be achieved even at relatively low annealing temperatures (600\({}^{\circ}\)C), which is confirmed by the sharpness of the peaks and the absence of Ru-metal peaks. The XRD pattern also indicates that there is a high (110) orientation, since the peak at \(\sim\)28\({}^{\circ}\) is dominating.

Figure 3: Normalized XRD patterns of the powders prepared at different annealing temperatures (600\({}^{\circ}\)C, 700\({}^{\circ}\)C, 800\({}^{\circ}\)C, 900\({}^{\circ}\)C) and rutile RuO\({}_{2}\) reference spectra (COD Card 2101852 [23]). The pattern has been converted to fit the Cu-K\({}_{\alpha}\) reference, since a Co-K\({}_{\alpha}\) source has been used for measuring. Plotted with an offset for better visualization

The Raman spectra of RuO\({}_{2}\) powders prepared at different temperatures are displayed in Fig. 5. The peaks can be assigned to the three major modes, E\({}_{g}\), A\({}_{1g}\) and B\({}_{2g}\), which are located at 528, 646 and 716 cm\({}^{-1}\) (cf. single crystal [24], [25]), respectively. The Raman spectra of powders calcined at 600\({}^{\circ}\)C were similar to the other ones, which shows that the Ru metal did not interfere with the measurement (as expected). The sharp peaks again indicate the high crystallinity of the samples that is present even at lower annealing temperatures.
The weak first harmonics of the E\({}_{g}\) and A\({}_{1g}\) modes can also be seen in the Raman spectra at 1016 cm\({}^{-1}\) and 1236 cm\({}^{-1}\) [26], respectively. There is a significant red shift of the peak positions of the three first-order Raman peaks. The shift increases with the calcination temperature, which might be due to increased strain states in the powder induced by the higher temperatures. Also, for the thin films (cf. Fig. 6), a significant red shift of the first-order Raman modes was detected. This shift is likely to be attributed to a structural change in the RuO\({}_{2}\) lattice (similar to the powder case) and should not be related to any thermal mismatch to the substrate, since the volume thermal expansion coefficient of silicon (13.2\(\cdot\)10\({}^{-6}\) \({}^{\circ}\)C\({}^{-1}\)) is quite similar to that of RuO\({}_{2}\) (22.7\(\cdot\)10\({}^{-6}\) \({}^{\circ}\)C\({}^{-1}\)) [24].

Figure 4: Normalized GI-XRD patterns of thin films deposited on silicon substrates annealed at 600\({}^{\circ}\)C, 700\({}^{\circ}\)C, 800\({}^{\circ}\)C. Rutile RuO\({}_{2}\) XRD patterns are given as reference (COD Card 2101852 [23], black curve). Plotted with an offset for better visualization

Figure 5: Normalized Raman spectra of the powders prepared at different temperatures (600\({}^{\circ}\)C, 700\({}^{\circ}\)C, 800\({}^{\circ}\)C, 900\({}^{\circ}\)C). The dashed lines indicate the 3 major Raman modes, \(E_{g}\), \(A_{1g}\) and \(B_{2g}\). Plotted with an offset for better visualization

Additionally, an XPS measurement of the thin film annealed at 600\({}^{\circ}\)C has been done to analyze the surface states of the thin film (see Fig. 7). The peaks of the convoluted fit of the measured data were assigned to specific photoelectrons (see Table 1). From the convoluted fit it can be concluded that the surface is phase-pure, and that the values are in good agreement with literature (doublet separation of 4.2 eV [27]). There are so-called satellite peaks visible in the XPS, which are often mis-assigned in the literature to higher-order oxides (RuO\({}_{x}\)); however, as discussed by Morgan [27], these peaks are a result of spin-orbit coupling of non-s levels from the photoemission process, leading to this so-called satellite structure. A minor surface pollution with carbon was also detected (\(<\)6%), which is common in XPS investigations. No additional contaminants from the wafer (Si) or the precursor (Ru-nitrosyl-nitrate) were visible in the XPS, which shows that the CSD process successfully creates a phase-pure thin film.

Figure 6: Normalized Raman spectra of the thin films prepared at different temperatures (600\({}^{\circ}\)C, 800\({}^{\circ}\)C). The dashed lines indicate the 3 major Raman modes, \(E_{g}\), \(A_{1g}\) and \(B_{2g}\). Plotted with an offset for better visualization

Figure 7: XPS spectra of a thin film prepared at 600\({}^{\circ}\)C (black) and the convoluted assigned peaks of the fit

In summary, the XRD, Raman and XPS measurements of the thin films suggest that the material is phase-pure and highly crystalline.

### Microstructure

Scanning electron microscope (SEM) images of cross-sections of the thin films have been taken for different annealing temperatures (see Fig. 8). The microstructure looks very different, which indicates that the annealing temperature has a huge impact on the grain growth of RuO\({}_{2}\).
The film heated to 600\({}^{\circ}\)C has a dense microstructure with columnar grains, the film heated to 700\({}^{\circ}\)C looks similar, but more grain boundaries are visible, and the thin film heated up to 800\({}^{\circ}\)C displayed large round grains, which probably grew in the radial direction at the expense of neighboring grains (i.e. Ostwald ripening). Consequently, the film heated to 800\({}^{\circ}\)C has a very rough surface. The microstructure also impacted the resistivities of the thin films: the dense and smooth film annealed at 600\({}^{\circ}\)C showed the lowest resistivity (see Section 3.4). The total thickness could be obtained from the cross-section SEM images and was 220, 170 and 200 nm for the thin films prepared at 600, 700 and 800\({}^{\circ}\)C, respectively. However, due to the roughness of the thin film annealed at 800\({}^{\circ}\)C, the estimation of the thickness is prone to larger errors. The decrease in film thickness on increasing the annealing temperature from 600\({}^{\circ}\)C to 700\({}^{\circ}\)C could be due to a higher density resulting from the higher ion mobility at increased temperature. A closer look at the microstructure of the different thin films also shows that there are smaller grains accumulated near the interface to the substrate. Hence, the first deposited layer might serve as a seed layer, which promotes the growth of columnar grains from heterogeneous nucleation. Additional SEM images have been taken to check the influence of the heating rate and the amount of acetic acid in the solution (see supplementary information), suggesting that the heating rate does not impact the microstructure significantly, and that a higher amount of acetic acid in the solution leads to a smoother microstructure with well-aligned columnar grains.

\begin{table} \begin{tabular}{l l l} \hline Compound & Binding & Orbital \\ & Energy [eV] & \\ \hline RuO\({}_{2}\) & 280.7 & 3d\({}_{5/2}\) \\ & 284.9 & 3d\({}_{3/2}\) \\ & 282.6 & 3d\({}_{5/2}\) satellite \\ & 286.6 & 3d\({}_{3/2}\) satellite \\ \hline Carbon & 284.3 & 1s \\ \hline \end{tabular} \end{table} Table 1: Overview of the compounds, binding energies and photoelectron orbitals derived from the XPS measurements

Figure 8: SEM images of cross-sections of the thin films annealed at 600\({}^{\circ}\)C (a), 700\({}^{\circ}\)C (b) and 800\({}^{\circ}\)C (c) on top of the silicon substrate

### Electrical Properties

The resistivity of the thin films is calculated using the thickness (t) of the films, as shown in the short sketch below: \[\rho=R\frac{\pi}{\ln(2)}tf_{1}f_{2}\,, \tag{1}\] where R is the measured resistance, and f\({}_{1}\) and f\({}_{2}\) are geometric correction factors for a non-negligible finite thickness compared to the probe spacing, and for finite sample dimensions relative to the probe spacing, respectively. Not only the microstructure, but also the resistivity of the thin films is highly influenced by the annealing temperature (see Table 2). Increasing the annealing temperature from 600\({}^{\circ}\)C to 800\({}^{\circ}\)C led to a 2.5 times higher specific resistivity. This is in accordance with the change in microstructure as already discussed in the previous section. The best resistivity was reached for 600\({}^{\circ}\)C and was 0.89 \(\upmu\Omega\)m, which is much lower than previous deposition attempts using CSD but employing toxic 2-methoxyethanol as solvent [18]. The resistivity was measured three times by repositioning the needles, and its low error values indicate that the low resistivity of the thin films is not just a local phenomenon.
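The conversion from the measured four-point-probe resistance to the specific resistivity in Eq. (1) is straightforward to script; the following short sketch is purely illustrative (our own function name, correction factors set to 1, and example numbers that are not the measured data):

```python
import numpy as np

def resistivity(R, t, f1=1.0, f2=1.0):
    """Specific resistivity from Eq. (1): rho = R * pi/ln(2) * t * f1 * f2.
    R: measured 4-point-probe resistance in ohm, t: film thickness in m,
    f1, f2: geometric correction factors (set to 1 here for simplicity)."""
    return R * np.pi / np.log(2) * t * f1 * f2

# Illustrative example: a 220 nm film with a hypothetical reading of 0.9 ohm.
rho = resistivity(R=0.9, t=220e-9)
print(f"rho = {rho * 1e6:.2f} micro-ohm-m")   # ~0.90 micro-ohm-m
```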
In Fig. 9 the specific resistivity and sheet resistance over thickness of the thin films are depicted for two annealing temperatures (600\({}^{\circ}\)C and 800\({}^{\circ}\)C). The superior quality of the films annealed at 600\({}^{\circ}\)C is evident. Further, it can be clearly seen that the sheet resistance decreases with increasing thin film thickness in a similar manner for the two annealing temperatures. From eq. (1) it is evident that the specific resistivity tends to saturate with increasing thickness, if the measured resistance (R) is not decreasing significantly. Considering the results in Fig. 9, it can thus be concluded that the quality of the thin films remains good even after repeated depositions (>10 cycles).

\begin{table} \begin{tabular}{l l} \hline Annealing Temperature [\({}^{\circ}\)C] & Specific Resistivity [\(\upmu\Omega\)m] \\ \hline 600 & 0.89 \(\pm\) 0.06 \\ 700 & 1.03 \(\pm\) 0.20 \\ 800 & 2.26 \(\pm\) 0.25 \\ Literature & 2.7 [18] \\ \hline \end{tabular} \end{table} Table 2: Specific resistivity of the RuO\({}_{2}\) thin films annealed at different temperatures. All thin films showed linear ohmic behavior during the measurements. Each thin film consisted of 10 layers, and the total thickness was estimated via SEM images (600\({}^{\circ}\)C: 220 nm, 700\({}^{\circ}\)C: 170 nm, 800\({}^{\circ}\)C: 200 nm) and used to calculate the resistivity. The thickness of the sample from literature was 150 nm [18]

Figure 9: Sheet resistance (a) and resistivity (b) of thin films with increasing thickness. The same sample was measured between the repeated CSD steps, hence with increasing thickness

The temperature stability of a thin film prepared at 600\({}^{\circ}\)C was tested by measuring the resistivity while heating the sample to the desired temperature (see Fig. 10). The RuO\({}_{2}\) thin film resistivity rises with temperature, which indicates metallic behavior. The linear fit (R\({}^{2}\)=0.92) was used to calculate the temperature coefficient of resistivity, with the following formula: \[RTC=\frac{\Delta\rho}{\rho_{\mathrm{o}}\Delta T}\,, \tag{2}\] where \(\rho\) is the resistivity, \(\rho_{\mathrm{o}}\) is the initial resistivity value and T is the temperature. The calculated RTC value is 5.8\(\cdot\)10\({}^{-3}\) K\({}^{-1}\). This value is relatively high compared to literature values of RuO\({}_{2}\) thin films (\(\sim\)3\(\cdot\)10\({}^{-3}\) K\({}^{-1}\) [28]) and of other metallic thin films (200 nm thick Au, Cu and Al thin films: 3.36, 3.86 and 3.86\(\cdot\)10\({}^{-3}\) K\({}^{-1}\), respectively [29]); hence, the RuO\({}_{2}\) thin films might be interesting as a thermistor material, for applications like temperature compensation circuits. After the heating cycle, the material was still highly conductive, since the resistivity reverted to the initial value. Hence, the metal oxide is highly stable and can also be used for applications where stability against heat is necessary.

### High-Temperature Stability

The stability of RuO\({}_{2}\) at high temperatures (>500\({}^{\circ}\)C) could be interesting for applications such as solid oxide fuel cells (SOFC), (chemical) sensors, micro-electro-mechanical systems (MEMS) and for use in harsh environments (e.g. geothermal applications, aerospace power electronics). According to literature, the temperature stability is limited by the formation of gaseous RuO\({}_{x}\) in the presence of oxygen at temperatures above 800\({}^{\circ}\)C [30-32].
Hence, we investigated the resistivity of the thin films after post-annealing them at different temperatures: post-annealing at 850\({}^{\circ}\)C in synthetic air for 1 h led to an increase in resistivity from 0.92 \(\upmu\Omega\)m to 1.37 \(\upmu\Omega\)m. The microstructure of the sample was investigated before and after the post-annealing and can be seen in Fig. 11. After annealing, the microstructure is much rougher, has additional pores and is thinned down in some areas. Moreover, an interface layer is visible in the SEM images, possibly due to interdiffusion. Repeating the post-annealing in an oxygen atmosphere led to a discontinuous thin film with resistivities too high to be measured by the 4-point-probe method. These changes are likely caused by the oxidation of the RuO\({}_{2}\) with the formation of RuO\({}_{x}\) gases. In comparison, post-annealing in synthetic air at 750\({}^{\circ}\)C led to an unchanged resistivity, which shows that the thin film is stable in air even at such elevated temperatures.

Figure 10: Change in resistivity with increasing temperature of a RuO\({}_{2}\) thin film prepared at 600\({}^{\circ}\)C. For every measurement the temperature was increased. The red star indicates the last measurement done after the heating cycle

Figure 11: SEM images of the cross-sections of the thin film annealed at 600\({}^{\circ}\)C (a) and additionally post-annealed for 1 h at 850\({}^{\circ}\)C in synthetic air (b) on top of the silicon substrate. (c) SEM image of the top view of the post-annealed thin film

### Optical Properties

UV-VIS reflectivity and transmissivity spectra of RuO\({}_{2}\) thin films deposited on fused silica substrates with two different thicknesses - 22 nm (1 layer) and 220 nm (10 layers) - are displayed in Fig. 12 and Fig. 13, respectively. It can be seen that especially a 'thicker' layer of RuO\({}_{2}\) absorbs visible light well, since the reflectivity and transmissivity values are low in this wavelength range (A = 1 - R - T). This is in accordance with the observation that the thin films turned darker with an increasing number of layers. The metallic character of the thin films detected in the electrical measurements is also indicated by the optical properties: there is high visible-light absorption due to available energy states and surface electrons, which is typical for metals.

Figure 12: Reflectivity measurement in the NIR-VIS-UV range of RuO\({}_{2}\) thin films of two different thicknesses and the fused silica substrate. The step at 860 nm is due to a measurement artefact (monochromator switching of the instrument)

FTIR transmission spectra show that RuO\({}_{2}\) blocks wavelengths between 2 and 16 \(\upmu\)m effectively, and this effect can be tuned well by decreasing the thickness of the thin film. Using a 23 nm thin film leads to transmission spectra that 'mimicked' the pattern of the specific substrate used. Moreover, the transmission values are 'cut in half' by the ultrathin RuO\({}_{2}\) film, which makes it a suitable material to fine-tune transmission in the infrared region. The resistivity of a 22 nm thin film, which is 3.4 \(\upmu\Omega\)m, would also make the thin film suitable for applications that require high electrical conductivity.

Figure 14: Transmission measurement in the IR range of RuO\({}_{2}\) thin films of two different thicknesses and the fused silica substrate

Figure 13: Transmission measurement in the NIR-VIS-UV range of RuO\({}_{2}\) thin films of two different thicknesses and the fused silica substrate.
The step at 860 nm is due to a measurement artefact (monochromator switching of the instrument). In the UV-VIS-NIR region, the optical properties of the thin films were investigated in more detail using ellipsometry and the dielectric function \(\varepsilon=\varepsilon_{1}+i\cdot\varepsilon_{2}\) was determined in the range of 300 nm - 1700 nm or 0.73 eV - 4.13 eV, respectively. To model the latter for the evaluation of the ellipsometric spectra, we used a thin-film model that includes a Drude term [33]: \[\varepsilon(E)=-\frac{AB}{E^{2}+iBE}, \tag{3}\] where \(A\) is the amplitude and \(B\) is the broadening. Moreover, to describe absorption in the NIR we used a Gaussian-broadened oscillator [34, 35]: \[\varepsilon_{2}(E)=A\,e^{-\left(\frac{E-E_{0}}{\sigma}\right)^{2}}\ \text{with}\ \sigma=\frac{B}{2\sqrt{\ln 2}}, \tag{4}\] where \(E_{0}\) is the center energy. To treat absorption in the VIS-UV region, we used a Cody-Lorentz oscillator [36] (without Urbach absorption, see below): \[\varepsilon_{2}(E)=\frac{(E-E_{g})^{2}}{(E-E_{g})^{2}+E_{p}^{2}}\cdot\frac{AE_{0}BE}{(E^{2}-E_{0}^{2})^{2}+B^{2}E^{2}}, \tag{5}\] where \(E_{0}\) is the central energy, \(E_{g}\) is the energy gap and \(E_{p}\) defines the energy where the absorption changes from Cody-like to Lorentz-like behavior. This model was originally developed for the description of amorphous semiconductors, but it describes the given polycrystalline layers very well. Furthermore, pole locations and a DC-\(\varepsilon_{1}\)-offset were used to describe the real part of the dielectric function. Pole locations are given by the following equation: \[\varepsilon(E)=\frac{A}{E_{0}^{2}-E^{2}}, \tag{6}\] where \(E_{0}\) is the pole position outside the measured spectral range. As can be seen in formula (5) above, the Cody-Lorentz model assumes, in the region of the onset of absorption above \(E_{g}\), a course of \(\varepsilon_{2}(E)\sim(E-E_{g})^{2}\) and in principle also includes an exponential Urbach absorption term, for which the measurement on the investigated thin films was not sensitive enough. From the analytical expressions of the imaginary part \(\varepsilon_{2}\) the corresponding real part \(\varepsilon_{1}\) is calculated via the likewise analytical solution of the integral expression: \[\varepsilon_{1}(E)=1+\frac{2}{\pi}\,\wp\int_{0}^{\infty}\frac{\xi\,\varepsilon_{2}(\xi)}{\xi^{2}-E^{2}}\,d\xi\;, \tag{7}\] where \(\wp\) denotes the principal value of the integral. For the fit, based on a Levenberg-Marquardt algorithm, the parameters given above were thus available to fit the measured data. The layer thicknesses were fixed to the values determined from SEM measurements. The measured data could be fitted very well with this model, i.e. with correspondingly low error sums of squares. Table 3 gives the parameters obtained. Fig. 16 displays the real and imaginary parts of the thin films' dielectric function as a function of the spectral energy \(E=\hbar\omega=hc/\lambda\) (c = speed of light in vacuum), obtained by using the functions given above. Figure 15: Transmission measurement in the IR range of RuO\({}_{2}\) thin films of two different thicknesses and the silicon substrate.
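Before listing the fitted parameters in Table 3, we note that the constitutive terms above are straightforward to evaluate numerically. The Python sketch below illustrates Eqs. (3)-(6) with placeholder parameters (not the fitted values of Table 3); only the analytically available parts of ε are computed, the oscillators' real parts being left to the Kramers-Kronig relation of Eq. (7).

```python
import numpy as np

E = np.linspace(0.73, 4.13, 500)      # photon energy grid [eV]

def drude(E, A, B):
    """Drude term, Eq. (3): free-carrier contribution (complex-valued)."""
    return -A * B / (E**2 + 1j * B * E)

def gaussian_eps2(E, A, E0, B):
    """Imaginary part of a Gaussian-broadened oscillator, Eq. (4)."""
    sigma = B / (2.0 * np.sqrt(np.log(2.0)))
    return A * np.exp(-(((E - E0) / sigma) ** 2))

def cody_lorentz_eps2(E, A, E0, B, Eg, Ep):
    """Imaginary part of a Cody-Lorentz oscillator, Eq. (5); zero below Eg."""
    cody = (E - Eg) ** 2 / ((E - Eg) ** 2 + Ep**2)
    lorentz = A * E0 * B * E / ((E**2 - E0**2) ** 2 + B**2 * E**2)
    return np.where(E > Eg, cody * lorentz, 0.0)

def pole_eps1(E, A, E0):
    """Pole term, Eq. (6): real-valued contribution from outside the range."""
    return A / (E0**2 - E**2)

# Placeholder parameters (illustrative only, not the fitted values of Table 3).
dr = drude(E, A=8.0, B=2.7)
eps2 = (dr.imag
        + gaussian_eps2(E, A=5.0, E0=0.9, B=1.2)
        + cody_lorentz_eps2(E, A=25.0, E0=2.7, B=3.8, Eg=0.5, Ep=1.0))
# Real part shown only for the analytic terms; the oscillators' real parts
# would be added via the Kramers-Kronig relation of Eq. (7).
eps1 = 1.0 + dr.real + pole_eps1(E, A=30.0, E0=6.6)
print(eps1[:3], eps2[:3])
```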
\begin{table} \begin{tabular}{l l l l} **Pole \#1** & & & \\ \hline Thickness & Position [eV] & Amplitude & \\ \hline 22 nm & 6.6166 & 30.786 & \\ 220 nm & 4.7808 & 11.568 & \\ \hline **Pole \#2** & & & \\ \hline Thickness & Position [eV] & Amplitude & \\ \hline 22 nm & 0.38696 & 1.4226 & \\ 220 nm & 0.36202 & 5.0103 & \\ \hline **DC-\(\varepsilon_{1}\)-Offset** & & & \\ \hline 22 nm & 1.0728 & & \\ 220 nm & 2.0362 & & \\ \hline **Drude** & & & \\ \hline Thickness & Amplitude & Broadening & \\ \hline 22 nm & 6.1141 & 2.6591 & \\ 220 nm & 8.9504 & 2.75 & \\ **Gaussian Oscillator** & & & \\ \hline Thickness & Amplitude & Center Energy [eV] & Broadening \\ \hline 22 nm & 729.28 & 0.005418 & 1.3081 \\ 220 nm & 1288.1 & 0.005418 & 1.1235 \\ **Cody-Lorentz-Oscillator** & & & \\ \hline Thickness & Amplitude & Center Energy [eV] & Broadening \\ \hline 22 nm & 14.899 & 2.7159 & 3.8408 \\ 220 nm & 26.76 & 2.7197 & 3.8512 \\ \hline \end{tabular} \end{table} Table 3: Model parameters obtained from the ellipsometric measurements of 22 nm and 220 nm RuO\({}_{2}\) thin films on fused silica for the constitutive elements of the dielectric function. The dielectric function shows a metal-like behavior, where due to the free charge carrier absorption the imaginary part increases towards lower energies and the real part becomes negative. This behavior is quantitatively weaker than for typical metals, in agreement with the higher electrical resistivity exhibited by the measured thin films compared to metals. Compared to the 220 nm layer, the 22 nm layer shows an average of almost 50% lower \(\varepsilon_{2}\) over the entire spectral range. An interpretation of this phenomenon (e.g. possible increased charge carrier scattering due to microstructural differences), as well as of the significantly stronger drop of \(\varepsilon_{1}\) of the 220 nm layer into the negative at low energies, must be reserved for future investigations. The imaginary part of \(\varepsilon(E)\) shows a relative minimum (as it does for a number of metals) in the range of around 2 eV. Here, comparisons with band structure calculations should offer the possibility to decide whether this can be attributed to a corresponding electronic density of states distribution \(Z(E)\). We also found that the thin-film dielectric functions are almost the same on different substrates (silicon and fused silica); hence, similar optical layer properties can be obtained on varying substrates (see Fig. S4, supplementary section). From the real and imaginary parts of the dielectric function, the optical constants, refractive index \(n\) and absorption constant \(k\), follow as: \[n^{2}=\tfrac{1}{2}\Big{[}\sqrt{\varepsilon_{1}^{2}+\varepsilon_{2}^{2}}+\varepsilon_{1}\Big{]}\quad k^{2}=\tfrac{1}{2}\Big{[}\sqrt{\varepsilon_{1}^{2}+\varepsilon_{2}^{2}}-\varepsilon_{1}\Big{]} \tag{8}\] Fig. 17 shows the obtained values for the 22 nm and 220 nm thin films on fused silica. Figure 16: Dielectric constants of a 22 nm and 220 nm thick RuO\({}_{2}\) thin film on fused silica calculated from the optical constants derived from ellipsometry measurements. ## 4 Conclusion RuO\({}_{2}\) thin films have been successfully prepared with a novel environmentally friendly chemical solution deposition process using simply water and acetic acid as solvents. The influence of the annealing temperature on the microstructure and resistivity was investigated, revealing that dense and smooth thin films with a very low resistivity of 0.89 \(\upmu\Omega\)m can be obtained with an annealing temperature of 600\({}^{\circ}\)C.
XRD and Raman measurements confirmed that the thin films are phase pure. The electrical characterization showed that the thin films improve in conductivity as the thickness is increased and exhibit a metal-like increase in resistivity with increasing temperature. Optical measurements revealed that the thin films are non-transparent, due to their metallic character, but that it is possible to fine-tune this behavior by adjusting the thickness. The thermal stability was investigated by post-annealing the samples, and the thin films were stable up to 750\({}^{\circ}\)C in synthetic air. However, a higher temperature of 850\({}^{\circ}\)C led to the formation of RuO\({}_{\mathrm{x}}\) gases with consequent degradation of the film's microstructure. In conclusion, the high conductivity and thermal, chemical and electrical stability of these simple-to-obtain RuO\({}_{2}\) thin films may render them useful as electrodes or buffer layers for a multitude of applications such as ferroelectric and magnetoresistive devices, SOFC, (chemical) sensors, MEMS, geothermal applications, aerospace power electronics and semiconductor devices (e.g. interconnects, memristors, gate contacts). Moreover, the tunable transparency behavior of the thin films makes the material interesting as an optical filter for e.g. smart windows and other optoelectronic devices. Figure 17: Optical constants of a 22 nm and 220 nm thick RuO\({}_{2}\) thin film on fused silica derived from ellipsometry measurements.
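As a practical footnote to Fig. 17, the conversion of Eq. (8) from the fitted dielectric function to n and k is a one-line numerical operation; a minimal Python sketch follows, with placeholder values standing in for the fitted \(\varepsilon_{1}\), \(\varepsilon_{2}\) arrays.

```python
import numpy as np

# Placeholder dielectric-function values; in practice these come from the fit.
eps1 = np.array([-2.0, -1.0, 0.5, 2.0])
eps2 = np.array([6.0, 5.0, 4.0, 3.5])

mag = np.sqrt(eps1**2 + eps2**2)
n = np.sqrt(0.5 * (mag + eps1))   # refractive index, Eq. (8)
k = np.sqrt(0.5 * (mag - eps1))   # absorption constant, Eq. (8)

print(n, k)
```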
2309.04653
Relative representations for cognitive graphs
Although the latent spaces learned by distinct neural networks are not generally directly comparable, recent work in machine learning has shown that it is possible to use the similarities and differences among latent space vectors to derive "relative representations" with comparable representational power to their "absolute" counterparts, and which are nearly identical across models trained on similar data distributions. Apart from their intrinsic interest in revealing the underlying structure of learned latent spaces, relative representations are useful to compare representations across networks as a generic proxy for convergence, and for zero-shot model stitching. In this work we examine an extension of relative representations to discrete state-space models, using Clone-Structured Cognitive Graphs (CSCGs) for 2D spatial localization and navigation as a test case. Our work shows that the probability vectors computed during message passing can be used to define relative representations on CSCGs, enabling effective communication across agents trained using different random initializations and training sequences, and on only partially similar spaces. We introduce a technique for zero-shot model stitching that can be applied post hoc, without the need for using relative representations during training. This exploratory work is intended as a proof-of-concept for the application of relative representations to the study of cognitive maps in neuroscience and AI.
Alex B. Kiefer, Christopher L. Buckley
2023-09-09T00:58:02Z
http://arxiv.org/abs/2309.04653v1
# Relative representations for cognitive graphs ###### Abstract Although the latent spaces learned by distinct neural networks are not generally directly comparable, even when model architecture and training data are held fixed, recent work in machine learning [13] has shown that it is possible to use the similarities and differences among latent space vectors to derive "relative representations" with comparable representational power to their "absolute" counterparts, and which are nearly identical across models trained on similar data distributions. Apart from their intrinsic interest in revealing the underlying structure of learned latent spaces, relative representations are useful to compare representations across networks as a generic proxy for convergence, and for zero-shot model stitching [13]. In this work we examine an extension of relative representations to discrete state-space models, using Clone-Structured Cognitive Graphs (CSCGs) [16] for 2D spatial localization and navigation as a test case in which such representations may be of some practical use. Our work shows that the probability vectors computed during message passing can be used to define relative representations on CSCGs, enabling effective communication across agents trained using different random initializations and training sequences, and on only partially similar spaces. In the process, we introduce a technique for zero-shot model stitching that can be applied _post hoc_, without the need for using relative representations during training. This exploratory work is intended as a proof-of-concept for the application of relative representations to the study of cognitive maps in neuroscience and AI. Keywords:Clone-structured cognitive graphs Relative representations Representational similarity ## 1 Introduction In this short paper we explore the application of relative representations [13] to discrete (graph-structured) models of cognition in the hippocampal-entorhinal system -- specifically, Clone-Structured Cognitive Graphs (CSCGs) [16]. In the first two sections we introduce relative representations and their extension to discrete latent state spaces via continuous messages passed on graphs. We then introduce CSCGs and their use in SLAM (Simultaneous Localization And Mapping). Finally, we report preliminary experimental results using relative representations on CSCGs showing that (a) relative representations can indeed be applied successfully to model the latent space structure of discrete, graph-like representations such as CSCGs, and more generally POMDPs such as those employed in discrete active inference modeling [1; 8; 19]; (b) comparison of agents across partially disparate environments reveals important shared latent space structure; and (c) it is possible to use the messages or beliefs (probabilities over states) of one agent to reconstruct the corresponding belief distributions of another via relative representations, without requiring the use of relative representations during training. These examples illustrate an extension of existing representational analysis techniques developed within neuroscience [10], which we hope will prove applicable to the study of cognitive maps in biological agents. 
## 2 Relative representations Relative representation [13] is a technique recently introduced in machine learning that allows one to map the intrinsically distinct continuous latent space representations of different models to a common shared representation identical (or nearly so) across the source models, so that latent spaces can be directly compared, even when derived from models with different architectures. The technique is conceptually simple: given anchor points \(\mathcal{A}=[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}]\) sampled from a data or observation space and some similarity function \(sim\) (e.g. cosine similarity)1, the relative representation \(\mathbf{r}_{i}^{M}\) of datapoint \(\mathbf{x}_{i}\) with respect to model \(M\) can be defined in terms of \(M\)'s latent-space embeddings \(\mathbf{e}_{i}^{M}=f_{enc_{M}}(\mathbf{x}_{i})\) as: Footnote 1: The selection of both suitable anchor points and similarity metrics is discussed at length in [13]. We explain our choices for these hyperparameters in section 5.2 below. \[\mathbf{r}_{i}^{M}=[sim(\mathbf{e}_{i}^{M},\mathbf{e}_{a_{1}}^{M}),sim( \mathbf{e}_{i}^{M},\mathbf{e}_{a_{2}}^{M}),...,sim(\mathbf{e}_{i}^{M},\mathbf{ e}_{a_{N}}^{M})] \tag{1}\] where \(\mathbf{e}_{a_{i}}^{M}\) is the latent representation of anchor \(i\) in \(M\). Crucially, the anchor points \(\mathcal{A}\) must be matched across models in order for their relative representations to be compatible. "Matching" is in the simplest case simply identity, but there are cases in which it is feasible to use pairs of anchors related by a map \(g(x)\to y\) (see below). In [13] it is shown that the convergence of a model \(M_{target}\) during training is well predicted by the average cosine similarity between its relative representations of datapoints and those of an independently validated reference model \(M_{ref}\). This is to be expected, given that there is an optimal way of partitioning the data for a given downstream task, and that distinct models trained on the same objective approximate this optimal solution more or less closely, subject to variable factors like random initialization and hyperparameter selection. While relative representations were recently introduced in machine learning, they take their inspiration in part from prior work on representational similarity analysis (RSA) in neuroscience [10; 4]. Indeed, there is a formal equivalence between relative representations and the Representational Dissimilarity Matrices (RDMs) proposed as a common format for representing disparate types of neuroscientific data (including brain imaging modalities as well as simulated neuronal activities in computational models) in [10]. Specifically, if a similarity rather than dissimilarity metric is employed5, then each row (or, equivalently, column) of the RDM used to characterize a representational space is, simply, a relative representation of the corresponding datapoint. Footnote 5: See [10] fn.2. Arguably the main contribution of [13] is to exhibit the usefulness of this technique in machine learning, where relative representations may be employed as a novel type of latent space in model architectures. Given a large enough sample of anchor points, relative representations bear sufficient information to play functional roles similar to those of the "absolute" representations they model, rather than simply functioning as an analytical tool (e.g. to characterize the structure of latent spaces and facilitate abstract comparisons among systems). 
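To make Eq. (1) concrete, the sketch below computes relative representations in a few lines of Python and illustrates why they are comparable across models: cosine similarities are unchanged by angle-preserving transformations of the latent space. Random vectors and a random rotation stand in here for real encoders and anchor embeddings.

```python
import numpy as np

def relative_representation(embeddings, anchor_embeddings):
    """Eq. (1): cosine similarity of each embedding against every anchor."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    A = anchor_embeddings / np.linalg.norm(anchor_embeddings, axis=1, keepdims=True)
    return E @ A.T                     # shape: (num_points, num_anchors)

rng = np.random.default_rng(0)
d, n_anchors = 64, 300

# Two "models": the same latent structure up to a random rotation
# (a placeholder for independently trained encoders of the same data).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
emb_A = rng.normal(size=(10, d))
anchors_A = rng.normal(size=(n_anchors, d))
emb_B, anchors_B = emb_A @ Q, anchors_A @ Q    # rotated copies

r_A = relative_representation(emb_A, anchors_A)
r_B = relative_representation(emb_B, anchors_B)
print(np.allclose(r_A, r_B))   # True: invariant to the rotation
```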
The most obvious practical use of relative representations is in enabling "latent space communication": Moschella et al [13] show that the projection of embeddings from distinct models onto the same relative representation enables "zero-shot model stitching", in which for example the encoder from one trained model can be spliced to the decoder from another (with the relative representation being the initial layer supplied as input to the decoder). A limitation of this procedure is that it depends on using a relative representation layer during training, precluding its use for establishing communication between "frozen" pretrained models. Below, we make use of a parameter-free technique that allows one to map from the relative representation space back to the "absolute" representations of the input models with some degree of success. ## 3 Extending relative representations to discrete state-space models Despite the remarkable achievements of continuous state-space models in deep learning systems, discrete state spaces continue to be relevant, both in machine learning applications, where discrete "world models" are responsible for state-of-the-art results in model-based reinforcement learning [6], and in neuroscience, where there is ample evidence for discretized, graph-like representations, for example in the hippocampal-entorhinal system [25, 18, 16] and in models of decision-making processes that leverage POMDPs (Partially Observable Markov Decision Processes) [19]. While typical vector similarity metrics such as cosine distance behave in a somewhat degenerate way when applied to many types of discrete representations (e.g., the cosine similarity between two one-hot vectors in the same space is 1 if the vectors are identical and 0 otherwise), they can still be usefully applied in this case (see section 5 below). More generally, the posterior belief distributions inferred over discrete state spaces during simulations in agent-based models may provide suitable anchor points for constructing relative representations. Concretely, such posterior distributions are often derived using message-passing algorithms, such as belief propagation [14] or variational message passing [27]. We pursue such a strategy for deriving relative representations of a special kind of hidden Markov model (the Clone-Structured Hidden Markov Model or (if supplemented with actions) Cognitive Graph [16]), in which it is simple to compute forward messages which at each discrete time-step give the probability of the hidden states \(z\) conditioned on a sequence of observations \(o\) (i.e. \(P(z_{t}|o_{1:t})\)). The CSCG/CHMM is particularly interesting both because of its fidelity as a model of hippocampal-entorhinal representations in the brain and because, as in the case of neural networks, distinct agents may learn superficially distinct CSCGs that nonetheless form nearly isomorphic cognitive maps, as shown below. ## 4 SLAM using Clone-Structured Cognitive Graphs An important strand of research in contemporary machine learning and computational neuroscience has focused on understanding the role of the hippocampus and entorhinal cortex in spatial navigation [20; 23; 25; 16], a perspective that may be applicable to navigation in more abstract spaces as well [18; 21]. This field of research has given rise to models like the Tolman-Eichenbaum machine [25] and Clone-Structured Cognitive Graph [5; 16]. 
We focus on the latter model in the present study, as it is easy to implement on toy test problems and yields a suitable representation for our purposes (an explicit discrete latent space through which messages can be propagated). The core of the CSCG is a special kind of "clone-structured" Hidden Markov Model (CHMM) [17], in which each of \(N\) possible discrete observations are mapped deterministically to only a single "column" of hidden states by the likelihood function, i.e. \(p(o|z)=\begin{cases}1&\text{if }z\in C(o)\\ 0&\text{if }z\notin C(o)\end{cases}\), where \(C(o)\) is the set of "clones" of observation \(o\). The clone structure encodes the inductive bias that the same observation may occur within a potentially large but effectively finite number of contexts (i.e. within many distinct sequences of observations), where each "clone" functions as a latent representation of \(o\) in a distinct context. This allows the model to efficiently encode higher-order sequences [3] by learning transition dynamics ("lateral" connections) among the clones. CSCGs supplement this architecture with a set of actions which condition transition dynamics, creating in effect a restricted form of POMDP. The most obvious use of CSCG models (mirroring the function of the hippocampal-entorhinal system) is to allow agents capable of moving in a space to perform SLAM (Simultaneous Localization And Mapping) with no prior knowledge of the space's topology. Starting with a random transition matrix, CSCGs trained on random walks in 2D "rooms", in which each cell corresponds to an observation, are shown in [16] to be capable of learning action-conditioned transition dynamics among hidden states that exhibit a sparsity structure precisely recapitulating the spatial layout of the room (see Fig. 1).6 Given a sequence of observations, an agent can then infer states that correspond to its location in the room, with increasing certainty and accuracy as sequence length increases. Crucially, location is not an input to this model but the agent's representation of location is entirely "emergent" from the unsupervised learning of higher-order sequences of observations. Building on the codebase provided in [16], we examined the certainty of agents' inferred beliefs about spatial location during the course of a random walk (see Figure 2.). Though less than fully confident, such agents are able to reliably infer room location from observation sequences alone after a handful of steps. Conditioning inference as well on the equivalent of "proprioceptive" information (i.e., about which actions resulted in the relevant sequence of observations) dramatically increases the certainty of the agents' beliefs. We explored both of these regimes of (un)certainty in our experiments. ## 5 Experiments: Communication across cognitive maps We investigate the extent to which common structure underlying the "cognitive maps" learned by distinct CSCG agents can be exploited to enable communication across them. As in the case of neural networks trained on similar data, CSCG agents trained on the same room but with distinct random initializations and observation sequences learn distinct representations that are nonetheless isomorphic at one level of abstraction (i.e. when comparing the structural relationships among their elements, which relative representations make explicit -- cf. Appendix 0.B, Fig. 5). We also explore whether partial mappings can be obtained across agents trained on somewhat dissimilar rooms. 
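Before turning to the individual mapping experiments, the forward messages \(P(z_{t}|o_{1:t})\) that everything below operates on can be sketched as a standard filtering recursion with a clone-structured, deterministic emission. The following is a toy Python illustration with a random transition matrix, not the trained, action-conditioned CSCGs of [16].

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, clones_per_obs = 4, 3
n_states = n_obs * clones_per_obs

def likelihood(o):
    """Clone-structured emission: observation o maps to its column of clones."""
    lik = np.zeros(n_states)
    lik[o * clones_per_obs:(o + 1) * clones_per_obs] = 1.0
    return lik

# Toy row-stochastic transition matrix (a trained CSCG would supply
# action-conditioned transition matrices instead).
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

def forward_messages(observations):
    """P(z_t | o_{1:t}) for each step t via normalized forward filtering."""
    belief = np.full(n_states, 1.0 / n_states)   # uninformative prior
    messages = []
    for o in observations:
        belief = likelihood(o) * (belief @ T)    # predict, then condition
        belief /= belief.sum()
        messages.append(belief.copy())
    return np.array(messages)

msgs = forward_messages([0, 2, 2, 1, 3])
print(msgs.max(axis=1))   # per-step confidence, cf. Fig. 2
```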
We used two metrics to evaluate the quality of cross-agent belief mappings: (1) recoverability of the maximum _a posteriori_ belief of one agent at a given timestep, given those of another agent following an analogous trajectory; (2) cosine similarity between a given message and its "reconstruction" via such a mapping. The main results of these preliminary experiments are reported in Table 1. Figure 1: Example of two cognitive graphs (B) learned by CSCG agents via distinct random walks on the same room (A). Following the convention in [16], colors indicate distinct discrete observations (in the room) or latent "clones" corresponding to those observations (in the graphs). Code for training and producing plots is provided in the supplementary materials for [16]. Note that the two graphs are obviously isomorphic upon inspection (the left graph is visually rotated about 50 degrees clockwise relative to the right one, and the node labels differ). ### Mapping via permutation We first confirmed that CSCG agents trained on distinct random walks of the same room (and with distinct random transition matrix initializations) learn functionally identical cognitive maps if trained to convergence using the procedure specified in [16]. Visualizations of the learned graphs clearly demonstrate topological isomorphism (see references as well as Figure 1B), but in addition we found that the forward messages for a given sequence of observations are identical across agents up to a permutation (i.e., which "clones" are used to represent which observation contexts depends on the symmetry breaking induced by different random walks and initializations). It is thus possible to "translate" across such cognitive maps in a simple way. First, we obtain message sequences \(\mathbf{M}\) and \(\mathbf{M}^{\prime}\) from the first and second CSCGs conditioned on the same observation sequence, and extract messages \(\mathbf{m}\) and \(\mathbf{m}^{\prime}\) corresponding to some particular observation \(o_{t}\). We then construct a mapping \(sort\_index_{\mathbf{m}_{o_{t}}}(z)\to sort\_index_{\mathbf{m}^{\prime}_{o_{t}}}(z^{\prime})\) from the sort order of entries \(z\) in \(\mathbf{m}\) to that of entries \(z^{\prime}\) in \(\mathbf{m}^{\prime}\). Using this mapping, we can predict the maximum _a posteriori_ beliefs in \(\mathbf{M}^{\prime}\) nearly perfectly given those in \(\mathbf{M}\) under ideal conditions (see the "Permutation (identical)" condition in Table 1).7 Footnote 7: This procedure does not work if the chosen message represents a state of high uncertainty, e.g. at the first step of a random walk with no informative initial state. Figure 2: Maximum probability assigned to any hidden state of a CSCG over time (during a random walk). The left panel shows confidence derived from messages inferred from observations alone, and the right panel shows the case of messages inferred from both actions and observations. ### Mapping via relative representations Though it is thus relatively simple to obtain a mapping across cognitive graphs in the ideal case of CSCGs trained to convergence on identical environments, we confirm that relative representations can be used in this setting to obtain comparable results.
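For concreteness, the sort-order mapping of the previous subsection can be written in a few lines before moving to the relative-representation formulation. The sketch below is our own illustration: Dirichlet-distributed placeholder messages stand in for the forward messages of trained agents, and an exact permutation relates the two "models".

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 48

# Placeholder: model B's messages are a fixed permutation of model A's
# (the situation observed for converged CSCGs trained on the same room).
perm = rng.permutation(n_states)
m_A = rng.dirichlet(np.ones(n_states))
m_B = m_A[perm]

# Build the mapping from the sort order of one reference message pair.
order_A, order_B = np.argsort(m_A), np.argsort(m_B)
mapping = np.empty(n_states, dtype=int)
mapping[order_A] = order_B            # state index in A -> state index in B

# Apply the mapping to translate another of A's messages into B's indexing.
new_m_A = rng.dirichlet(np.ones(n_states))
predicted_m_B = np.empty(n_states)
predicted_m_B[mapping] = new_m_A

true_m_B = new_m_A[perm]
print(np.allclose(predicted_m_B, true_m_B))   # True under this idealized permutation
```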
A message \(\mathbf{m}^{\prime}\) from the second sequence (associated with model B) can be reconstructed from message \(\mathbf{m}\) in the first (model A's) by linearly combining model B's embeddings \(\mathbf{E}_{\mathcal{A}}^{B}\) of the anchor points, via a softmax (\(\sigma\)) function (with temperature \(T\)) of the relative representation \(\mathbf{r}_{\mathbf{m}}^{A}\) of \(\mathbf{m}\) derived from model A's anchor embeddings:8 Footnote 8: In practice, a softmax with a low temperature worked best for reconstruction. \[\hat{\mathbf{m}^{\prime}}=\big{(}\mathbf{E}_{\mathcal{A}}^{B}\big{)}\sigma \Big{[}\frac{\mathbf{r}_{\mathbf{m}}^{A}}{T}\Big{]} \tag{2}\] Intuitively, the softmax term scales the contribution of each vector in the set of anchor embeddings to the reconstruction \(\hat{\mathbf{m}^{\prime}}\) in proportion to its relative similarity to the input embedding, so that the reconstruction is a weighted superposition (convex combination) of the anchor points. The reconstruction of a sequence \(\mathbf{M}^{\prime}\) of \(m\)\(d^{\prime}\)-dimensional messages from an analogous "source" sequence \(\mathbf{M}\) of \(d\)-dimensional messages, with the "batch" relative representation operation9\(\mathbf{R}_{\mathbf{M}}^{A}\in\mathbb{R}^{m\times|\mathcal{A}|}\) written out explicitly in terms of the matrix product between \(\mathbf{M}\in\mathbb{R}^{m\times d}\) and anchor embeddings \(\mathbf{E}_{\mathcal{A}}^{A}\in\mathbb{R}^{|\mathcal{A}|\times d}\), is then precisely analogous to the self-attention operation in transformers: Footnote 9: If \(\mathbf{M}=\mathcal{A}\), this term is a representational similarity matrix in the sense of [10]. \[\hat{\mathbf{M}^{\prime}}=\sigma\Big{[}\frac{\mathbf{M}\big{[}\mathbf{E}_{ \mathcal{A}}^{A}\big{]}^{T}}{T}\Big{]}\mathbf{E}_{\mathcal{A}}^{B} \tag{3}\] Here, the source messages \(\mathbf{M}\) play the role of the queries \(\mathbf{Q}\), model A's anchor embeddings \(\mathbf{E}_{\mathcal{A}}^{A}\) act as keys \(\mathbf{K}\), and model B's anchor embeddings act as values \(\mathbf{V}\) in the attention equation which computes output \(\mathbf{Z}=\sigma\big{[}\mathbf{Q}\mathbf{K}^{T}\big{]}\mathbf{V}\).10 Footnote 10: In the present setting, one might even draw a parallel between the linear projection of transformer inputs to the key, query and value matrices and the linear projection of observations and prior beliefs onto messages via likelihood and transition tensors. Since self-attention may be understood though the lens of its connection to associative memory models [15, 12], this correspondence goes some way toward theoretically justifying our choice of reconstruction method. In particular, following [12], reconstruction via relative representations can be understood as implementing a form of heteroassociative memory in which model A and B's anchor embeddings are, respectively, the memory and projection matrices. Though empirical performance against a wider range of alternative methods of latent space alignment remains to be assessed, we note a formal connection to regression-based approaches such as [22], in which a representation \(\mathcal{Y}\) of the data is expressed as a mixture of "guesses" (linear projections of local embeddings) from \(k\) experts, weighted according to the fidelity of each expert's representation of the input data \(\mathcal{X}\). 
This can be expressed as a system of linear equations \(\mathcal{Y}=UL\) in which \(\mathcal{Y}\), \(U\) and \(L\) play roles analogous to those of \(\hat{\mathbf{M}}\), \(\sigma\big{[}\mathbf{R}_{\mathbf{M}}^{A}\big{]}\) and \(\mathbf{E}_{\mathcal{A}}^{B}\) above, with the "responsibility" terms (weights) introducing nonlinearity, as the softmax does in our approach (see Appendix C for further details). Not surprisingly, the results of our procedure improve with the number of anchors used (see Appendix A, Figure 4). In our experiments, we used \(N=5000\) anchors. We obtained more accurate mappings using this technique when the anchor points were sampled from the trajectory being reconstructed, which raises the probability of an exact match in the anchor set; for generality, all reported results instead sample anchor points (uniformly, without replacement) from distinct random walks. While it would be possible in the present setting to use similarity metrics tailored to probability distributions to create relative representations, we found empirically that replacing cosine similarity with the negative Jensen-Shannon distance slightly adversely affected performance. ### Mapping across dissimilar models \begin{table} \begin{tabular}{l c c} & Max belief recovery & Reconstruction accuracy \\ Condition & \% accurate (\(\pm\)SD) & mean cosine similarity (\(\pm\)SD) \\ \hline Baseline: AR\({}^{\dagger}\) (identical) & 0.01(\(\pm\)0.01) & 0.07(\(\pm\)0.07) \\ Permutation (identical) & 84.09(\(\pm\)28.9) & 0.69(\(\pm\)0.01) \\ Permutation (shifted) & 3.41(\(\pm\)1.48) & 0.69(\(\pm\)0.01) \\ Permutation (landmark) & 20.70(\(\pm\)19.14) & 0.89(\(\pm\)0.003) \\ \hline RR\({}^{\ddagger}\) (identical) & 89.44(\(\pm\)1.84) & 0.99(\(\pm\)0.003) \\ RR (isomorphic) & 41.0(\(\pm\)3.17) & 0.67(\(\pm\)0.02) \\ RR (expansion: large \(\rightarrow\) small) & 97.42(\(\pm\)3.24) & 0.98(\(\pm\)0.02) \\ RR (expansion: small \(\rightarrow\) large) & 47.47(\(\pm\)2.74) & 0.59(\(\pm\)0.02) \\ RR (shifted) & 34.81(\(\pm\)3.81) & 0.63(\(\pm\)0.03) \\ RR (landmark) & 34.13(\(\pm\)6.47) & 0.52(\(\pm\)0.06) \\ \end{tabular} \end{table} Table 1: Mapping across distinct CSCG models* \({}^{\dagger}\)Absolute Representations; \({}^{\ddagger}\)Relative Representations. *For each condition, mean results and standard deviation over 100 trials (each run on a distinct random graph) are reported, for the more challenging case of messages conditioned only on observations. For all but the (expansion) conditions, the results of mapping in either direction were closely comparable and we report the mean. As shown in [13], relative representations can reveal common structure across superficially quite different models -- for example those trained on sentences in distinct natural languages -- via the use of "parallel" anchor points, in which the anchors chosen for each model are related by some mapping (e.g. being translations of the same text). In the context of CSCGs, anchors (forward messages) are defined relative to an observation sequence. To sample parallel anchors across agents, we therefore require partially dissimilar rooms in which similar but distinct observation sequences can be generated. We used four experimental manipulations to generate pairs of partially dissimilar rooms (see Figure 3), which we now outline along with a brief discussion of our results on each. #### Isomorphism Any randomly generated grid or "room" of a given fixed size will (if CSCG training converges) yield a cognitive map with the same topology.
It should thus be possible to generate parallel sequences of (action, observation) pairs -- and thus parallel anchor points for defining relative representations -- across two such random rooms, even if each contains a distinct set of possible observations or a different number of clones, either of which would preclude the use of a simple permutation-based mapping. The relationships among observations will differ across such rooms, however, which matters under conditions of uncertainty, since every clone of a given observation will be partially activated when that observation is received, leading to different conditional belief distributions. Figure 3: Schematic illustration of experimental conditions. **A** and **B** indicate distinct rooms on which parallel models were trained, except for the "IDENTICAL" condition, where multiple models are trained on a single room. Numbers within nodes illustrate stochastic association of particular hidden state indices with positions in the learned graphs. Graph sizes depicted here do not reflect those used in the experiments. This effect should be mitigated or eliminated entirely when beliefs are more or less certain, in which case "lateral" connections (transition dynamics) select just one among the possible clones corresponding to each observation. Indeed, we found that it is possible to obtain near-perfect reconstruction accuracy across models trained on random rooms with distinct observation sets, provided that messages are conditioned on both actions and observations; whereas we only obtained a \(<50\%\) success rate in this scenario when conditioning on observations alone. #### Expansion In this set of experiments, we generated "expanded" versions of smaller rooms and corresponding "stretched" trajectories (paired observation and action sequences) using Kronecker products, so that each location in the smaller room is expanded into a \(2\times 2\) block in the larger room, and each step in the smaller room corresponds to two steps in the larger one. We can then define parallel anchors across agents trained on such a pair of rooms, by taking (a) all messages in the smaller room, and (b) every other message in the larger one. In this condition, the large \(\rightarrow\) small mapping can be performed much more accurately than the opposite one, since each anchor point in the smaller ("down-sampled") room corresponds to four potential locations in the larger. Superior results in the (large \(\rightarrow\) small) condition versus our experiments on identical rooms may be explained by the fact that the "small" room contains fewer candidate locations than the room used in the "Identical" condition. #### Shifting In a third set of experiments, we generated rooms by taking overlapping vertical slices of a wider room, such that identical sequences were observed while traversing the rooms, but within different wider contexts. In this case only the messages corresponding to overlapping locations were used as anchor points, but tests were performed on random walks across the entire room. Under conditions of certainty, mapping across these two rooms can be solved near-perfectly by using all messages as candidate anchor points, since the rooms are isomorphic. Without access to ground-truth actions, it was possible to recover the beliefs of one agent given the other's only \(\sim 35\%\) of the time, even if anchors were sampled from all locations.
We hypothesize that this problem is more challenging than the "Isomorphic" condition because similar patterns of observations (and thus similar messages) correspond to distinct locations across the two rooms, which should have the effect of biasing reconstructions toward the wrong locations. #### Landmarks Finally, partially following the experiments in [16] on largely featureless rooms with unique observations corresponding to unique locations (e.g. corners and walls), we define pairs of rooms with the same (unique) observations assigned to elements of the perimeter, filled by otherwise randomly generated observations that differed across rooms. Using only the common "landmark" locations as anchors, it was still possible to use relative representations to recover an agent's location from messages in a parallel trajectory in the other room with some success. #### Summary The results reported in Table 1 were obtained under conditions of significant uncertainty, in which messages were conditioned only on observations, without knowledge of the action that produced those observations. In this challenging setting, relative representations still enabled recovery (well above chance in all experimental conditions, and in some cases quite accurate) of one agent's maximum _a posteriori_ belief about its location from those of the other agent, averaged across messages in a test sequence.11 Footnote 11: It is worth noting that this is essentially a one-of-N classification task, with effective values of N around 48 in most cases. This is because (following [16]) most experiments were performed on \(6\times 8\) rooms, and there is one "active" clone corresponding to each location in a converged CSCG. In all settings, it was possible to obtain highly accurate mappings (\(>99\%\) correct in most cases) by conditioning messages on actions as well as observations. This yields belief vectors sharply peaked at the hidden state corresponding to an agent's location on the map. In this regime, the reconstruction procedure acts essentially as a lookup table, as a given message \(\mathbf{m}\) resembles a one-hot vector and this sparsity structure is reflected in the relative representation (which is \(\sim 0\) everywhere except for dimensions corresponding to anchor points nearly identical to \(\mathbf{m}\)). The softmax weighting then simply "selects" the corresponding anchor in model B's anchor set.12 Conditioning messages on probabilistic knowledge of actions (perhaps the most realistic scenario) can be expected to greatly improve accuracy relative to the observation-only condition, and is an interesting subject for a follow-up study. Footnote 12: There is a variation on this in which multiple matches exist in the anchor set, but the result is the same as we then combine \(n\) identical anchor points. ## 6 Discussion The "messages" used to define relative representations in the present work can be interpreted as probability distributions, but they can also be interpreted more agnostically as, simply, neuronal activity vectors. Recent work in systems neuroscience [2] has shown that it is possible to recover common abstract latent spaces from real neuronal activity profiles. As noted above, relative representations were anticipated in neuroscience by RSA, which in effect treats the neuronal responses, or computational model states, associated with certain fixed stimuli as anchor points.
This technique complements others such as the analysis of attractor dynamics [26] as a tool to investigate properties of latent spaces in brains, and has been shown to be capable of revealing common latent representational structure across not only individuals, but linguistic communities [28] and even species [11, 7]. Consistent with the aims of [13] and [10], this paradigm might ultimately provide fascinating future directions for brain imaging studies of navigational systems in the hippocampal-entorhinal system and elsewhere. Relative representations generalize this paradigm to "parallel anchors", and also demonstrate the utility of high-dimensional representational similarity vectors as latent representations in their own right, which can, as demonstrated above, be used to establish zero-shot communication between distinct models. While the conditions we constructed in our toy experiments are artificial, they have analogues in more realistic scenarios. It is plausible that animals navigating structurally homeomorphic but superficially distinct environments, for example, should learn similar cognitive maps at some level of abstraction. Something analogous to the "expansion" setting may occur across two organisms that explore the same space but (for example due to different sizes or speeds of traversal, and thus sample rates) coarse-grain it differently. The idea of landmark-based navigation is central to the SLAM paradigm generally, and the stability of landmarks across otherwise different spaces may provide a model for the ability to navigate despite changes to the same environment over time. Finally, while experiments on partially overlapping rooms seem somewhat contrived if applied naively to spatial navigation scenarios, they may be quite relevant to models of SLAM in abstract spaces [18], such as during language acquisition, where different speakers of the same language may be exposed to partially disjoint sets of stimuli, corresponding to different dialects (or in the limit, idiolects). Crucially, the common reference frame provided by these techniques might allow for the analysis of _shared_ representations, which (when derived from well-functioning systems) should embody an ideal structure that individual cognitive systems in some sense aim to approximate, allowing for comparison of individual brain-bound models against a shared, abstract ground truth. Such an abstracted "ideal" latent space could be used to measure error or misrepresentation [9], or to assess progress in developmental contexts. ## 7 Conclusion In this work we have considered a toy example of the application of relative representations to graph-structured cognitive maps. The results reported here are intended mainly to illustrate concrete directions for the exploration of the latent structure of cognitive maps using relative representations, and as a proof-of-principle that the technique can be applied to the case of inferred posterior distributions over discrete latent spaces. We have also introduced a technique for reconstructing "absolute" representations from their relative counterparts without learning.
In addition to further investigating hyperparameter settings (such as choice of similarity function) to optimize performance in practical applications, future work might explore the application of relative representations to more complex models with discrete latent states, such as the discrete "world models" used in cutting-edge model-based reinforcement learning [6], or to enable belief sharing and cooperation in multi-agent active inference scenarios. Given the connection to neural self-attention described above, which has also been noted in the context of the Tolman-Eichenbaum Machine [24], it would also be intriguing to explore models in which such a translation process occurs within agents themselves, as a means of transferring knowledge across local cognitive structures. ## Acknowledgements Alex Kiefer is supported by VERSES Research. CLB is supported by BBSRC grant number BB/P022197/1 and by Joint Research with the National Institutes of Natural Sciences (NINS), Japan, program No. 0111200. ## Code Availability The CSCG implementation is based almost entirely on the codebase provided in [16]. Code for reproducing our experiments and analysis can be found at: [https://github.com/exilefaker/cscg-rr](https://github.com/exilefaker/cscg-rr)
2309.08069
Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing
The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we argue that methodologies that are currently dominant fall short of addressing the complex questions and effects addressed in theoretical media studies. In this survey paper, we review social science approaches and draw a comparison with typical task formulations, methods, and evaluation metrics used in the analysis of media bias in NLP. We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation. These include model transparency, considering document-external information, and cross-document reasoning rather than single-label assignment.
Gisela Vallejo, Timothy Baldwin, Lea Frermann
2023-09-14T23:57:55Z
http://arxiv.org/abs/2309.08069v2
# Connecting the Dots in News Analysis: A Cross-Discipplinary Survey of Media Bias and Framing ###### Abstract The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we argue that methodologies that are currently dominant fall short of addressing the complex questions and effects addressed in theoretical media studies. In this survey paper, we review social science approaches and draw a comparison with typical task formulations, methods, and evaluation metrics used in the analysis of media bias in NLP. We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation. These include model transparency, considering document-external information, and cross-document reasoning rather than single-label assignment. ## 1 Introduction The depiction of complex issues in the media strongly impacts public opinion, politics, and policies [1, 13]. Because a handful of global corporations own an increasing proportion of news outlets, the reach and impact of biased reporting are amplified [1]. Although perfect neutrality is neither realistic nor desirable, media bias turns into an issue when it becomes systematic. If the public is unaware of the presence of bias, this can lead to dangerous consequences, including intolerance and ideological segregation [1]. For decades, news analysis has been an active field of research in the social sciences, and more recently, computational methods for framing and political bias classification have gained considerable momentum. The increasing pace of news reporting suggests a need to scale the process of media bias detection, and there is evidence that exposing media bias promotes healthy public debate, helps journalists to increase thoroughness and objectivity, and promotes critical and conscious news consumption [14]. In the context of this paper, we see the role of NLP as helping to understand, characterise and expose bias at scale. Figure 1 illustrates the concepts of 'framing' and'media bias' adopted in this paper, using the passing of the Respect for Marriage Act as an example. _Framing_ refers to the deliberate presentation or emphasis of selected facts with the goal of eliciting a desired interpretation or reaction in the reader [1]. The left-leaning article in Figure 1 leads with an uplifting picture of a wedding and emphasizes bill support, evoking a positive framing by emphasizing new opportunities for same-sex couples; while the right-leaning article focuses on concerns and debates in both image and text, framing the issue in the a more negative light. _Political bias_ refers to partisan slanted news stories, or the "tendency to deviate from an accurate, neutral, Figure 1: Two articles about the same event written from different political ideologies. Example taken from AllSides.com. balanced, and impartial representation of'reality' of events and social world" (McQuail and Deuze, 2020), which can be a result of a selected framing. In Figure 1, each document was flagged as far-left and far-right ideological leaning, respectively, as a result of the different attitudes and selected points of emphasis chosen in the reporting. 
Political bias is typically deliberate (Williams, 1975) while framing may be inadvertent as a result of focusing on selective information due to external pressures such as space limitations. In this paper, we survey work on framing and media bias prediction in NLP and relate it to typical research questions and hypotheses in the social sciences. We tease out disconnects across disciplines, and make concrete suggestions on how social science approaches can improve NLP methodology, and how NLP methods can more effectively aid both social science scholars in their analyses as well as underpin tools and applications to raise awareness of media bias among the general public. ## 2 Background ### Framing and Media Bias We focus on the widely-studied phenomena of _framing_ and _political bias_, as they support the detection of partisan-biased documents, and both framing and media bias are strategies to promote a particular view about a specific topic. A variety of definitions of _framing_ exists in social science and communication studies. Prevalent definitions include _equivalence framing:_ presenting the same logical information in different forms (Cacciatore et al., 2016) and _emphasis framing:_ highlighting particular aspects of an issue to promote a particular interpretation (Entman, 2007). Additionally, framing has been conceptualised as a process (de Vreese, 2005; Entman, 2007; Chong and Druckman, 2007), a communication tool (Scheufele, 1999), and/or a political strategy (Roy and Goldwasser, 2020). In order to identify and classify frames automatically, it can be helpful to understand the 'generative process' of frames. Frames can be conceptualised into different typologies, e.g. de Vreese (2005) proposes _issue-specific:_ only pertinent to a single matter, and _issue-generic:_ identifiable across several issues. While Scheufele (1999) differentiates between _media frames:_ embedded in the political discourse, and _audience frames:_ the reader's interpretation of an issue. And Gross (2008) defines _episodic framing_ as portraying an issue with an individual example compared to _thematic framing_, which takes a more broader context to describe the same issue. In this manuscript we cover both issue-specific and issue-generic frames and attach to de Vreese (2005)'s definition of a frame as "an emphasis in salience of different aspects of a topic". Political bias refers to an explicit association of a news article or media outlet with a specific political leaning. Although framing and political bias are different phenomena, NLP researchers have attempted to address them jointly, either by investigating political framing (Roy and Goldwasser, 2020) or by identifying correlations between framing and partisan slanted articles (Ziems and Yang, 2021). NLP studies have attempted automatic media bias identification under several names, including: hyper-partisan news detection (Kiesel et al., 2019), media bias detection (Spinde et al., 2021; Lei et al., 2022), identification of biased terms (Spinde et al., 2021), and political ideology detection (Iyyer et al., 2014; Kulkarni et al., 2018). Their common goal is to detect and classify the bias of a data sample towards a particular political ideology. Many of these approaches naturally relate to investigate _how the story is told_ (framing). ### Why is this Survey Relevant? Hamborg et al. 
(2019) present a thorough overview of traditional and computational approaches to media bias, including detailed definitions of bias types and their emergence in the context of news production. We complement the survey by providing a more in-depth review of research methodologies in NLP, more recent computational approaches, and a unified focus on the phenomenon of framing and its manifestation as media bias. A very recent survey by Ali and Hassan (2022) reviews computational approaches to modelling framing providing a detailed systematic analysis of NLP methods. For an exhaustive list of NLP previous work we refer the reader to Ali and Hassan (2022). In contrast, our survey takes a closer look at the overall NLP pipeline: data, methodology, and evaluation; draw connections to social science methodology; and pinpoint the gaps between the two disciplines. In order to obtain a comprehensive body of literature which bridge the domains,1 we departed from influential cross-disciplinary papers: (1) a review of media bias and framing across disciplines, but with no focus on state-of-the-art NLP (Hamborg et al., 2019); and (2) one of the first and most influential NLP framing data sets, with a strong theoretical grounding (Card et al., 2015). We identified other relevant work by following both papers' citation graphs (both backwards and forwards). ## 3 Three Disconnects To illustrate the disconnects between the social sciences and NLP, we use the case study of Hernandez (2018)'s study of the framing of domestic violence, in which the author formulates two research questions: 1. Framing functions: Are femicides recognized as a problem of domestic violence? What are the causes of femicides? And what are the solutions proposed? 2. Frame narratives: What are the main narratives of the SCMP2? And what are the sources used to report them? Footnote 2: South China Morning Post The first research question considers the _local_ aspects within each news article. Specifically, it looks at the causes and solutions presented, grounded in Entman (1993)'s conceptualisation of framing in terms of a problem, its cause, and its solution. The second research question relates these local aspects to a _global_ view by contrasting narratives that present domestic violence as isolated incidents with those that treat it as a societal problem. They connect the news reports to _extrinsic_ variables like the sources used or the cultural context of the story e.g. whether the article refers the role of women in the Chinese family or understands domestic violence through the lens of the Confucian philosophy. Their study considers full articles over an extended period, capturing the _temporal development_ of framing of the issue. In contrast, current NLP approaches to frame prediction: (a) typically take a single-class prediction approach --with a few exceptions (Akyurek et al., 2020; Mendelsohn et al., 2021) -- per unit of analysis (sentence or article), rather than treating frames as more complex structures which could for instance distinguish aspects such as cause vs. solution; and (b) treat units of analysis as independent without explicitly drawing connections across articles, or across time, or to document-external context. We thus highlight three important aspects of framing that we could identify while reviewing social science literature. 
These aspects emerge in theoretical media studies, but cannot be modeled through (single-label) classification, and consequently not attainable by most current NLP approaches: Framing is local and globalIt is local, because a single document can contain several frames, and it is global because to understand the general framing of an article it is often necessary to (a) aggregate local frames and (b) link them to document-external information such as cited (or omitted) sources, or the outlets' political leaning. Framing is dynamicFrames change over time, across outlets, or across countries or communities. Understanding the _development_ of framing can shed light on the impacts of a sustained exposure to biased reporting on readers' opinions, and enables the study of trends. Framing as a comparative taskMedia bias and framing often become most apparent when directly contrasting articles from different perspectives, places or times (cf., Figure 1). We propose to address bias and frame classification as a comparative task rather than labeling documents in isolation. This can help _inducing_ frames from data by analyzing axes of largest variation; and can naturally support tools and applications to raise readers' bias awareness by exposing them to contrasting perspectives on the same issue. The remainder of this article reviews current practice in NLP, points out disconnects to the social science principles introduced above, and suggests steps towards bridging the gap between the disciplines. ## 4 A Critical Review of Current Practices in NLP and Social Science In this section we review both sides of the field, NLP and social sciences, especially communication studies. We look at three main aspects, which we consider to be the most relevant criteria when conducting research: datasets, methods, and evaluation and metrics. Here, the reader can find the similarities and differences across both disciplines. ### Datasets Benchmark datasets dominate modern-day NLP research, and news analysis is no exception. In this section, we review NLP datasets relating to framing and political bias analysis in the news domain. In Table 1, we list relevant datasets, along with the type of labels they provide, the size of the collection, the associated tasks, and sample granularity, whether words, sentences or documents. For the media bias detection task at the _sentence level_, Lim et al. (2020) used crowdsourcing to annotate sentences on 46 English-language news articles about 4 different events with four levels of bias (not-biased, slightly biased, biased, or very biased). Spinde et al. (2021) released BABE ("Bias Annotations By Experts"), a collection of sentences labelled by experts according to binary categories: biased and non-biased, at the sentence and word levels. Fan et al. (2019) contributed the BASIL ("Bias Annotation Spans on the Informational Level") data set which includes word and sentence (span) level annotations of political leaning, as well as sentiment (stance) towards the entities in the article. At the _document level_, the Bitterlemons corpus Lin et al. (2006), comprises weekly issues about the Palestine-Israel conflict. Each issue contains articles from Palestinian and Israeli perspectives written by the portal's editors and guest authors. Despite being intended for document classification, this dataset can be employed to explore framing and political bias, given the documents' nature of strong bias towards one side of the conflict. 
Additionally, the web portal AllSides3 categorises articles into three political ideologies: right, centre, and left (they also offer a finer-grained five-point scale annotation: left, lean left, centre, lean right, right) with the aim to provide all political perspectives on a given story (cf., Figure 1). Experts manually assigned categories at the article level. Several research groups have contributed datasets scraped from AllSides (Chen et al., 2018; Baly et al., 2020; Liu et al., 2022; Lee et al., 2022).

Footnote 3: [https://www.allsides.com/about](https://www.allsides.com/about)

In the field of framing at the _sentence (headline) level_, Liu et al. (2019) released the Gun Violence Frame Corpus (GVFC). It includes headlines about gun violence in news articles from 2016 and 2018 in the U.S., labelled with frames like politics, economics, and mental health. Tourni et al. (2021) released a multi-modal version of the GVFC collection, including the main image associated with each article, and annotations about relevance and framing at the image level. At the _document level_, there is what is probably the most extensive data collection for investigating framing: the Media Frames Corpus (MFC, Card et al., 2015). It includes articles from \(13\) U.S. newspapers on three policy issues: immigration, same-sex marriage, and smoking. This dataset is intended to enable the analysis of policy issue framing, providing annotations at document and span levels with frames like morality, economic, and cultural. Ziems and Yang (2021) contribute a police violence news articles collection (PVFC) that can be categorised in both domains, media bias and framing. They provide annotations for political leaning: conservative, liberal or none, and also entity-centric frames, including the victim's age, race, and gender. They also include the code to extract those entity frames automatically using regular expressions.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Categories & Size & Granularity & Task \\ \hline Bitterlemons Lin et al. (2006) & Israel vs. Palestine & 594 & Documents & Classification \\ Flipper Chen et al. (2018) & Left, Centre, Right & 6.447 & Documents & Classification \\ BASIL Fan et al. (2019) & Libe., Cons., Centre; & 1.2k / 448 & Spans/Words & Classification \\ & Pos, Neu, Neg & 300 & Documents & Classification \\ AllSides Baly et al. (2020) & Left, Centre, Right & 34k & Documents & Classification \\ BiasedSents Lim et al. (2020) & not., slightly., very., biased & 966 & Sentences & Classification \\ BABE Spinde et al. (2021) & Biased, Non-biased & 3.7k & Sentences & Classification \\ BIGNEWSALIGN Liu et al. (2022) & Left, Centre, Right & 1M & Documents & Classification \\ NeuS Lee et al. (2022) & Left, Centre, Right & 10.6k & Documents & Cross-Doc \\ \hline MFC Card et al. (2015) & 15 Frames & 61.5k/ & Sentences/ & \\ GVFC Liu et al. (2019) & 9 Frames & 11.9k & Documents & Classification \\ Multimodal GVFC Tourni et al. (2021) & 9 Frames & 1.3k & Headlines & Classification \\ PVFC Ziems and Yang (2021) & Entity frames \& Cons., Libe., none & 82k & Documents & Entity frame \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark contributions in political bias (top) and framing (bottom), mostly in American English. Categories for BASIL denote liberal, conservative, and centre for partisan labels, and polarity classes represent positive, neutral and negative.
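To illustrate the kind of regular-expression entity-frame extraction mentioned for PVFC above, here is a toy sketch. It is hypothetical and not the released PVFC code: the patterns, attribute names, and example sentence are invented for illustration only.

```python
# Toy regular-expression extractor for simple entity-frame attributes
# (age, gender). Hypothetical patterns for illustration; not the PVFC code.
import re

AGE_PATTERN = re.compile(r"\b(\d{1,3})[- ]year[- ]old\b", re.IGNORECASE)
GENDER_TERMS = {"man": "male", "woman": "female", "boy": "male", "girl": "female"}

def extract_entity_frames(text: str) -> dict:
    """Return a dictionary of crude entity-frame attributes found in `text`."""
    frames = {}
    age_match = AGE_PATTERN.search(text)
    if age_match:
        frames["age"] = int(age_match.group(1))
    for term, label in GENDER_TERMS.items():
        if re.search(rf"\b{term}\b", text, re.IGNORECASE):
            frames["gender"] = label
            break
    return frames

print(extract_entity_frames("Police shot a 23-year-old man on Tuesday."))
# -> {'age': 23, 'gender': 'male'}
```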
It is pertinent to note that this survey is primarily U.S.- and English-centred, in large part because currently-available datasets and work predominantly focus on U.S. news sources. Diversifying research to other countries, cultures, and languages is an important step for future work. In Section 3, we propose three main aspects to investigate framing and media bias: (1) conducting studies at a local and global level; (2) considering the dynamics of framing; and (3) addressing the problem as a comparative task. We suggest that despite being intended for document classification, benchmarks like AllSides, MFC, and Bitterlemons can be redeployed to explore framing and political bias in a different fashion. Instead of assigning frames or political ideologies to documents, they could be used to examine framing and political bias by extracting the most common expressions for each frame or ideology, and investigating commonalities, which can be helpful to social scientists for local and global analyses. Indicators that have been explored by Roy and Goldwasser (2020) are point-wise mutual information (Church and Hanks, 1990) over bigrams and trigrams, but this approach does not generalise well. The MFC contains sentence-level annotations for exploring local framing; however, to the best of our knowledge no study has attempted to aggregate those labels to a global level. Datasets providing sentence-level (BABE) and headline (GVFC) annotation can also be considered as covering the local dimension. However, the headline-level annotation generalises from the headline to the entire document, which ignores the subtle signals in the local dimension. With respect to aspect (2), dynamics occur on many levels, some of which are captured by current data sets: the MFC, BASIL, GVFC and BABE provide article timestamps, supporting diachronic modeling of bias and framing. While some studies exist in this domain (Kwak et al., 2020; Card et al., 2022), the majority of NLP framing considers articles in isolation. Other dynamics, e.g., across countries, communities or media types (e.g., news vs. blogs) are of central interest in communication studies but less achievable with existing data sets. Modelling those dynamics is under-explored. For addressing aspect (3), we propose that researchers explore cross-document differences from various outlets, and their particular angle on a specific issue. Several of the datasets obtained from AllSides include alignment at the event level and hence enable comparison across documents on the left-centre-right spectrum at a finer granularity. A cross-ideology analysis at the event level facilitates the detection of local differences among the three ideologies and allows global aggregation at the document level.

### Methods

Researchers in NLP have attempted to tackle media bias as political ideology detection or framing categorisation using different task formulations. The first and most common strategy is _single-label classification_, i.e. assigning a single label to each data point. At the _word level_, Recasens et al. (2013) learn linguistic features from word-removal edit logs in Wikipedia. Spinde et al. (2021) compared the Euclidean distance of word embeddings to identify biased words in articles from Huffington Post (left wing) and Breitbart News (right wing). And Liu et al. (2021) experimented with identifying and replacing bias-inducing words with neutral ones using salience scores over word embeddings. At the _sentence level_, Iyyer et al.
(2014) used RNNs to identify political ideology in sentences in congressional debate transcripts and articles from the Ideological Book corpus. Using the BASIL corpus, Hartmann et al. (2019) correlated sentence and document distributions using a Gaussian mixture model (Reynolds, 2009) to identify biased sentences; Chen et al. (2020) classified biased spans by calculating their probability distributions on news articles; and Guo and Zhu (2022) applied contrastive learning and created sentence graphs to categorise biased sentences. Other researchers translated keywords from GVFC into several languages, and fine-tuned mBERT to classify frames in news headlines in languages other than English (Akyurek et al., 2020; Aksenov et al., 2021). At the _document level_, there has been substantial work on assigned frames to documents in the MFC corpus. The task has been approached with RNNs (Naderi and Hirst, 2017), attention and discourse information (Ji and Smith, 2017), and pre-trained models (Khanehzar et al., 2019). Baly et al. (2020) combined adversarial adaptation and adapted triple loss with features like Twitter and Wikipedia information about the readers and the outlet to classify the political ideology of news articles. Scholars have performed similar tasks on languages other than English, e.g. by translating English keywords in MFC to Russian to investigate the U.S. framing in Russian media over 13 years (Field et al., 2018). The second formulation is _multi-label classification_. Researchers have primarily used topic modelling (Tsur et al., 2015; Menini et al., 2017) or clustering (Ajjour et al., 2019) to determine the frames present in a document, in an unsupervised setting. Soft membership for topics or clusters allows documents to be assigned to various clusters or topics. Most of this work has been done over political speeches rather than news articles. In a supervised manner, Mendelsohn et al. (2021) employ RoBERTa to classify multiple framing typologies on immigration-related tweets. Similarly, Akyurek et al. (2020) address multi-label framing over headlines using different configurations of BERT. However, all of this work has been done over headlines or documents with a maximum length of 280 characters, and no work has been done at the level of full news articles. Other related task formulations include _entity framing_. At the document level, Ziems and Yang (2021) use regular expressions to identify entity characteristics (gender, race, age, etc.); and Frermann et al. (2023) explore the co-occurrence of narrative roles (entity pictured as villain, hero, or victim) with frames on manually-annotated climate change data. Finally, NLP researchers have also investigated _bias mitigation_. At the headline level, Chen et al. (2018) used LSTMs to flip the leaning of a headline, for example, from a right-leaning title to a left-leaning one, in an attempt to alleviate bias. However, flipping the ideology does not entail the reduction of bias. At the document level, Lee et al. (2022) aggregate all perspectives in one document using multi-document summarisation. We argue that including all biases does not necessarily reduce the impact of ideology bias. Aggregating the most relevant aspects and presenting them comparatively, as depicted in Figure 1, is more effective and has greater utility for social scientists. In the social sciences, approaches tend to be manual, with fewer data samples. One common approach is to _reason across many documents from a high-level perspective_. 
For example, Chyi and McCombs (2004) design and evaluate a two-dimensional framework (spatial and temporal) to investigate framing changes over time in 170 news articles in American English about a U.S. school shooting event. They manually annotated articles with the signals indicating both of the frame typologies, quantified those annotations and draw conclusions about the temporal and spatial framing behaviour in the inspected articles. Muschert and Carr (2006) assessed the previously-proposed framework based on 290 news documents, and confirmed that the present temporal dimension frame still holds when using data from more than one school shooting. Hernandez (2018) analysed the framing of 124 news stories from the South China Morning Post (SCMP) about femicides by manually coding the articles and quantifying those observations. The author explored whether those cases were portrayed as isolated cases or part of a systematic social problem, by manually analysing signals like narratives, sources, and the role of the entities. In addition, communication science studies often _correlate features of news reports with extra-textual information to formulate or validate their hypotheses_ (see also Hamborg et al., 2019). For example, McCarthy et al. (2008) investigate whether the media is ideologically biased in reporting about demonstration events. They track media coverage of protests during the transition period from communism in Belarus by considering features like the size of the protest, sponsors' status, and number of arrests, and examine their correlation with the event's media coverage. Similarly, Gentzkow and Shapiro (2010) investigate media bias by calculating citations of different media outlets by think tanks, and correlating those statistics with the number of times that members of the U.S. Congress mentioned the same groups. Here, we see a stark disconnect between largely _local_ frame modelling in NLP but a strong dominance on _dynamic_ and _global_ questions raised in communication studies. Social science research provides a lens through which to consider NLP methodology, and its insistence on considering each sample in isolation. We argue that learning from signals like the use of metaphoric or technical (legal) language, the correlation with informative features like sources integrated in the report, and the role of the audience and journalist's cultural background in the story all contribute to news framing and bias analysis. ### Metrics and Evaluation We consider two levels of validation: validating data annotations, and validating model predictions. The former -- validating the quality of labelled data -- applies to both the social sciences and NLP. In a typical social science study, the distribution of manual labels is the main factor for accepting or rejecting hypotheses or drawing larger conclusions. As such, measures for data quality such as inter-coder reliability (ICR) are routinely reported and a core requisite of the study. This validation ensures that the codebook was correctly conceptualised, and coding often includes discussions and several iterations on trial data or pilot studies Hernandez (2018), leading to relatively high ICR scores from carefully trained annotators, often with domain knowledge. Social science studies are largely analytical (examining labelled data, qualitatively based on manual analysis, and quantitatively based on statistical tests). 
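To make the annotation-validation step described above concrete, the following minimal sketch computes inter-coder reliability with Cohen's kappa; the two annotation lists are toy data, and Cohen's kappa is only one of several ICR measures used in practice.

```python
# Minimal inter-coder reliability check on toy frame annotations.
from sklearn.metrics import cohen_kappa_score

coder_a = ["economic", "morality", "economic", "security", "morality"]
coder_b = ["economic", "morality", "cultural", "security", "morality"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
# Low agreement would typically send the codebook back for another
# iteration of discussion and trial coding before the full study.
```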
NLP studies on framing are empirical, and evaluation (regrettably) often comes down to numeric comparison of a newly proposed method against previous work, by comparing the predictions of systems against the ground truth frame labels. This does not provide fundamental insights into how well a model can capture framing or political bias at a higher and more abstract level, or whether it is better able to lead to fresh insights into the data. In other words, current approaches fall short of providing inferences from explicit information, i.e. assessing the objectivity of a story as well as measuring the level of factuality by identifying whether a story adopts a recounting or metaphoric style. These strategies are graded in nature (rather than binary) and metrics like accuracy are deficient. In order to address the above-mentioned issues, we propose that automatic framing and bias analysis evaluation tackle three main points: (1) _model performance_, (2) _error analysis_, and (3) _measuring model certainty_. Even though overall model performance in terms of accuracy or F1 does not provide a complete picture of its utility Spinde et al. (2021), we still need to consider point (1) to gauge the overall capabilities of a model. However, we can go deeper and also investigate the performance at the outlet level or look into the most challenging frames for the model to predict. This leads to point (2), error analysis, following previous work Vilar et al. (2006); Kummerfeld and Klein (2013), we propose three key components: (a) error categorisation. (b) Scrutinising the potential causes of these errors. (c) Going beyond identification, extending to suggesting feasible strategies for improvement based on the nature and origins of the errors. Finally, we see the role of NLP as developing meaningful tools and methods that can support social science scholars to enhance and scale the investigation of framing and political bias. Therefore, a user should be able to access model confidence scores to assess the reliability of model predictions, as per point (3). ## 5 Discussion Having reviewed approaches in the social sciences and NLP, and enumerated disconnects, we ask: _What are practices in the social sciences that NLP can adopt?_ NLP task formulations tend to focus on assigning a single label (e.g., a frame) to a unit of analysis, typically a document or sentence. Social science studies annotate news excerpts at the local dimension and combine that information with external signals to arrive at higher-level conclusions. Recalling our introductory example on the framing of domestic violence in the SCMP Hernandez (2018) considers the broader impact, incorporating other victims included in the news story (local signals); the role of culture in the article: whether women are portrayed from their role in the Chinese family or the story mentions Confucianism concepts; the type of report: brief or news story; the sources, whether the article is based on a police report (external factors). She combines these signals and aggregates them to the document-level (global perspective) to draw higher-level conclusions on the dominant narrative framing of domestic violence as isolated instances or a societal phenomenon across the entire collection of articles. 
With regard to NLP, we argue that the standard practice of assigning a single frame label to news documents is overly simplistic, given that a typical news story comprises viewpoints, arguments, or aspects, which may individually have different connotations or framing. We acknowledge that causes of simplifying annotation relate to factors that affect scalability and automation like the costs and the difficulty of achieving inter-annotator agreement. In these cases researchers are overcoming these challenges by means of few-shot pre-training models He et al. (2023); Bansal and Sharma (2023). In the context of political debates, Ajiour et al. (2019) suggest breaking down debates into arguments and identifying a frame for each idea. Similar strategies in a media framing context could mitigate the simplifying assumption of one frame per article. Khanehzar et al. (2021) also argue that the single, primary frame annotation in the MFC is oversimplified and propose a model for multi-view representations of news stories. To address this gap, we suggest a two-step process: (1) split a news document into self-contained discourse units (such as arguments or events); and (2) assign a frame label to each unit, and/or one or more global frame-label(s) by aggregating across units. As reviewed in Sections 4.1 and 4.2, NLP methods operate mostly on the sentence level (which cannot capture longer arguments) or the document level. Analyzing frames on an argument- or event level reflects the typical level of analysis in the media studies. Rather than assigning the single most likely frame, researchers might want to take the full distribution over labels into account. Although research questions are the starting point in both disciplines, these are distinct. In the social sciences, tasks address more complex issues, e.g. correlating the coverage of protests with characteristics like event size and number of arrests during a political transition period (McCarthy et al., 2008), compared with identifying factuality in an article (Baly et al., 2018), detecting whether a sentence is biased or not (Lim et al., 2020; Lei et al., 2022; Spinde et al., 2021), or categorising full news documents based on their framing about a specific issue (Naderi and Hirst, 2017; Ji and Smith, 2017; Khanehzar et al., 2019). Social scientists often consider external knowledge to draw conclusions. In contrast, most NLP work operates on the individual article level, disregarding external information as well as other articles in the collection. A few exceptions exist, including Baly et al. (2020) who incorporate readership demographics from Twitter and publisher information from Wikipedia; and Kulkarni et al. (2018) who incorporate article link structure into their models. Still, they consider each news item in isolation. We encourage the NLP community to ground frame predictions in external signals - be it document-extraneous (such as readership and sources) and/or cross-document (by explicitly contrasting the framing across articles from different times, locations, or outlets). We envision as a result more expressive frame conceptualizations; outputs and analyses that are more aligned with typical questions in the media sciences; and a stepping stone towards tools that can highlight contrastive framing of issues in the news to a general readership. 
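As a rough sketch of the two-step process proposed above, a document can be split into units, each unit labelled, and the local labels aggregated into a document-level frame distribution that can then be contrasted across outlets. This is only one possible realisation; `predict_frame` is a placeholder for any unit-level frame classifier, and the sentence split stands in for a proper discourse segmentation.

```python
# Sketch: local (per-unit) frame prediction aggregated to a global
# (per-document / per-outlet) frame distribution.
from collections import Counter

FRAMES = ["economic", "morality", "security", "health"]

def predict_frame(unit: str) -> str:
    # Placeholder: in practice, a fine-tuned unit-level classifier.
    return FRAMES[hash(unit) % len(FRAMES)]

def document_frame_distribution(article: str) -> dict:
    units = [u.strip() for u in article.split(".") if u.strip()]  # crude unit split
    counts = Counter(predict_frame(u) for u in units)
    total = sum(counts.values()) or 1
    return {frame: counts.get(frame, 0) / total for frame in FRAMES}

def contrast_outlets(articles_by_outlet: dict) -> dict:
    # Aggregate per outlet so framing can be compared side by side.
    return {outlet: document_frame_distribution(" ".join(articles))
            for outlet, articles in articles_by_outlet.items()}
```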
More broadly, we advocate for a more cross-disciplinary perspective in NLP research, involving domain experts in all steps of the process: from the formulation of research questions, to model design with consideration of transparency and robustness, and evaluation. While prior work has highlighted the importance of expert annotation (Spinde et al., 2021), we argue that in order to develop useful assistive tools for scholars or applications for the general public, a dialogue with domain experts over the whole process is essential. Cross-disciplinary projects would guide NLP researchers to go back to the basics of framing analysis and political bias prediction as in the social sciences, and to adopt best practices in steps like annotation.

_What could NLP contribute to the social sciences?_ NLP can support and scale up social science analyses with powerful tools like pre-trained and generative models, accompanied by domain expertise on how to employ these tools safely and responsibly. For example, Bhatia et al. (2021) supply a web platform for semi-automatic data annotation and document classifier training in order to support communication-science researchers without the resources and skills to use automatic tools. The system generates LDA topics (Blei et al., 2003) in the first step and allows researchers to tag the topics and annotate documents, which are used as training data for document-level frame prediction. NLP has a strong culture of sharing code and annotated data sets to encourage collaboration and reproducibility. This practice is less common in the humanities. Sharing this data more explicitly through cross-disciplinary dialogue could provide critical assessment and feedback from domain experts and encourage innovation on how to combine large (and potentially noisier) data with the small-scale (but high-quality) annotations, to address increasingly complex questions on the emergence and effects of media biases and framing.

## 6 Conclusion

This survey takes a critical look at recent work in NLP on framing and media bias, and points out disconnects and synergies in datasets, methodologies, and validation techniques relative to research practices in the social sciences. Despite the opportunities for NLP to support and scale social science scholarship on media bias, a current oversimplification in the conceptualisation, modelling, and evaluation of framing and media bias hinders fertile collaboration. We have teased out three disconnects and proposed directions for future work, including: (1) analysing news articles from a local and global perspective, incorporating external non-textual features; (2) taking into account the dynamics of framing and bias across documents, cultures or over time; and (3) tackling the issue of media bias as a comparative task, defining frames on the basis of systematic differences between articles whose origins differ on pre-defined characteristics. This would allow for a more complex characterisation of bias than the currently dominant approach of single-label classification.

## Limitations

This survey focuses on framing in news articles. This constrains the scope of our analysis to media rather than framing in a broader context. Additionally, we are aware that regardless of the approach taken for sampling the previous work included in this manuscript, there will always be bias present. With the aim of mitigating this bias, we point the reader to complement our work with previous surveys in this field, i.e., Hamborg et al. (2019) and Ali and Hassan (2022).
## Ethics Statement Identifying framing and ideology bias in news articles is highly influenced by social and structural bias. Datasets and technologies intending to tackle these phenomena comprise the social bias of annotators and researchers developing them in an environment lacking diversity, in addition to the potential for dual use of models and benchmarks to promote polarisation and misinformation. However, we see this paper as an opportunity to identify new directions to diversify NLP methodologies and develop new datasets that help to push the field further, and address authentic analytical goals in the social sciences.
2309.17149
Homology and Euler characteristic of generalized anchored configuration spaces of graphs
In this paper we consider the generalized anchored configuration spaces on $n$ labeled points on a graph. These are the spaces of all configurations of $n$ points on a fixed graph $G$, subject to the condition that at least $q$ vertices in some pre-determined set $K$ of vertices of $G$ are included in each configuration. We give a non-alternating formula for the Euler characteristic of such spaces for arbitrary connected graphs, which are not trees. Furthermore, we completely determine the homology groups of the generalized anchored configuration spaces of $n$ points on a circle graph.
Dmitry N. Kozlov
2023-09-29T11:26:48Z
http://arxiv.org/abs/2309.17149v2
# Homology of anchored configuration spaces of \(n\) points on a circle

###### Abstract.

In this paper we completely determine the homology groups of the spaces of configurations of \(n\) points on a circle, subject to the condition that \(k\) arbitrary pre-determined points are included in each configuration. Furthermore, we give a formula for the Euler characteristic of such spaces for arbitrary graphs.

## 1. Anchored configuration spaces

The study of the anchored configuration spaces was initiated in [10] and continued in [12]. These spaces are motivated by certain considerations in logistics and differ from classical configuration spaces in a crucial way. The formal definition is as follows.

**Definition 1.1**.: _Let \(X\) be a non-empty topological space, let \(S\) be a set of \(k\) distinct points in \(X\), \(k\geq 0\), and let \(n\) be an arbitrary positive integer. An_ **anchored configuration space**_, denoted \(\Sigma(X,S,n)\), is defined as the subspace of the direct product \(X^{n}\), consisting of all tuples \((x_{1},\ldots,x_{n})\), such that \(S\subseteq\{x_{1},\ldots,x_{n}\}\)._

So far, these spaces have been studied in the situation when \(X\) is a geometric realization of a graph. The case when this graph is a tree has been considered in [10], where the homotopy type of \(\Sigma(X,S,n)\) has been completely determined. As the next step, we let \(X\) be a circle. In that case, changing the positions of the points in \(S\) will produce homeomorphic anchored configuration spaces. Since all we need to record is the cardinality of \(S\), we let \(\Omega(k,n)\) denote \(\Sigma(X,S,n)\), where \(X\cong S^{1}\) and \(|S|=k\). The spaces \(\Omega(2,n)\) were the focus of investigations in [10] and [12]. More specifically, the homology of these spaces was calculated in [10] using discrete Morse theory. This work was continued in [12], where the cup product structure was completely described, and a connection to the topological complexity was established. In this paper we consider the spaces \(\Omega(k,n)\) for an arbitrary \(k\), and calculate their homology groups in all dimensions. Rather than using discrete Morse theory, our method is to consider classical long exact sequences for the corresponding combinatorially given chain complexes. For the standard concepts of Algebraic Topology we refer to [13, 14, 15]. Our study lies within the field of Applied Topology, see [1, 10, 10] for more information.

## 2. Chain complex framework

Let us fix positive integers \(k\) and \(n\), such that \(n\geqslant k\geqslant 2\). Let \(C_{k}\) be a cycle graph with \(k\) vertices and \(k\) edges. Let \(E\) denote its set of edges, and let \(V\) denote its set of vertices. We can choose the index set to be \(\mathbb{Z}_{k}\), and write \(E=\{e_{1},\ldots,e_{k}\}\) and \(V=\{v_{1},\ldots,v_{k}\}\), in such a way that the adjacency map \(\partial:E\to 2^{V}\) is given by \(\partial(e_{i})=\{v_{i},v_{i+1}\}\).1

Footnote 1: Of course, \(k+1=1\) in \(\mathbb{Z}_{k}\).

The next definition describes the combinatorial objects which are the key building blocks for the combinatorial chain complexes which we consider in this paper.
**Definition 2.1**.: _Given \(n\), a_ **vertex-edge \(n\)-tuple** _is an \(n\)-tuple \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\), such that \(\sigma_{i}\in V\cup E\), for all \(i\)._

_For a vertex-edge \(n\)-tuple \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\) we define two subsets of \(\mathbb{Z}_{k}\), which we call_ **vertex** _and_ **edge support sets**_, and which we denote \(\operatorname{supp}_{V}(\sigma)\) and \(\operatorname{supp}_{E}(\sigma)\), as follows:_ \[\operatorname{supp}_{V}(\sigma):=\{i\in\mathbb{Z}_{k}\,|\,v_{i}\in\{\sigma_{1},\ldots,\sigma_{n}\}\},\] _and_ \[\operatorname{supp}_{E}(\sigma):=\{j\in\mathbb{Z}_{k}\,|\,e_{j}\in\{\sigma_{1},\ldots,\sigma_{n}\}\}.\] _Finally, the_ **dimension** _of \(\sigma\) is defined to be \(\dim\sigma:=|\{i\,|\,\sigma_{i}\in E\}|\). So, in particular, we have \(0\leqslant\dim\sigma\leqslant n\)._

Clearly, for any vertex-edge \(n\)-tuple \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\), the set \(\{\sigma_{1},\ldots,\sigma_{n}\}\) is a disjoint union of the sets \(\{v_{i}\,|\,i\in\operatorname{supp}_{V}(\sigma)\}\) and \(\{e_{j}\,|\,j\in\operatorname{supp}_{E}(\sigma)\}\). The direct product \(\underbrace{C_{k}\times\cdots\times C_{k}}_{n}\) has a natural structure of a cubical complex, whose geometric realization is an \(n\)-torus. Its cells are indexed by the vertex-edge \(n\)-tuples, whose dimensions, as described in Definition 2.1, coincide with the geometric dimension of the corresponding cells. Therefore, the chain complex whose chain groups are generated by the vertex-edge \(n\)-tuples, with appropriately defined boundary operators, will calculate the homology of an \(n\)-torus. We shall now consider the chain complexes whose chain groups are generated by the vertex-edge \(n\)-tuples satisfying additional conditions on the vertex support set \(\operatorname{supp}_{V}(\sigma)\).

**Definition 2.2**.: _Assume we are given an arbitrary subset \(P\subseteq\mathbb{Z}_{k}\), and a nonnegative integer \(q\), such that \(q\leqslant|P|\). We define a chain complex \(\mathcal{C}^{P,q}=(C_{*}^{P,q},\partial_{*})\), where \(C_{*}^{P,q}\) are free abelian groups, as follows._

1. _For each_ \(d\)_, the free abelian group_ \(C_{d}^{P,q}\) _is generated by the vertex-edge_ \(n\)_-tuples_ \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\)_, with_ \(\dim\sigma=d\)_, satisfying the following two conditions:_
   * \(\operatorname{supp}_{V}(\sigma)\subseteq P\)_;_
   * \(|\operatorname{supp}_{V}(\sigma)|\geqslant q\)_._
2. _The boundary operator takes the vertex-edge_ \(n\)_-tuple_ \(\sigma\)_, and replaces, with an appropriate sign, any of the edges_ \(\sigma_{i}\in E\) _by any of its boundary vertices, subject to the condition that the index of that vertex lies in_ \(P\)_. Formally we have_ (2.1) \[\partial\sigma=\sum_{i:\sigma_{i}\in E}\;\sum_{\bar{\sigma}_{i}\in\partial\sigma_{i}\cap V_{P}}(-1)^{\rho(\sigma,i)}(\sigma_{1},\ldots,\sigma_{i-1},\bar{\sigma}_{i},\sigma_{i+1},\ldots,\sigma_{n}),\] _where_ \(V_{P}\coloneqq\{v_{j}\,|\,j\in P\}\)_, and_ \(\rho(\sigma,i)\coloneqq|\{j\,|\,1\leqslant j\leqslant i-1,\,\sigma_{j}\in E\}|\)_._

Note the special case when \(|P|=q\), when the chain groups \(C_{d}^{P,q}\) are generated by all \(\sigma\), satisfying \(\dim\sigma=d\) and \(\operatorname{supp}_{V}(\sigma)=P\). For convenience, we introduce additional notation for the complement set \(H:=\mathbb{Z}_{k}\setminus P\), and \(h:=|H|=k-|P|\).

**Remark 2.3**.: _Obviously, \(C_{i}^{P,q}=0\), for \(i<0\).
Furthermore, if a vertex-edge \(n\)-tuple \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\) satisfies \(|\operatorname{supp}_{V}(\sigma)|\geqslant q\), then \(\dim\sigma\leqslant n-q\), so \(C_{i}^{P,q}=0\) also for all \(i>n-q\)._

In what follows, we shall compute the homology groups of the chain complexes \(\mathcal{C}^{P,q}\). The case which interests us most is when \(P=\mathbb{Z}_{k}\) and \(q=|P|\), since it gives us the homology of the anchored configuration spaces \(\Omega(k,n)\). We stress this observation for later reference.

**Fact 2.4**.: _The chain complex \(\mathcal{C}^{\mathbb{Z}_{k},k}\) is isomorphic to the cubical chain complex of the anchored configuration space \(\Omega(k,n)\)._

Our calculation will proceed by induction, and we shall compute the homology groups for all values of \(P\) and \(q\).

## 3. Calculation of the homology groups of \(\mathcal{C}^{P,q}\)

### The case \(q=0\)

Let us start with the case \(q=0\). When \(q=0\) the condition \(|\operatorname{supp}_{V}(\sigma)|\geqslant q\) is void, which radically simplifies the situation. The homology is then given by the following proposition.

**Proposition 3.1**.:

1. _The chain complex_ \(\mathcal{C}^{\mathbb{Z}_{k},0}\) _calculates the homology of an_ \(n\)_-torus. In fact, it is a chain complex of the cubical complex obtained as an_ \(n\)_-fold direct product of the_ \(k\)_-cycle._
2. _When_ \(P\) _is a proper subset of_ \(\mathbb{Z}_{k}\)_, we have_ \(H_{n}(\mathcal{C}^{P,0})\approx\mathbb{Z}^{h^{n}}\)_, and all other homology groups are trivial._

**Proof.** Statement \((1)\) is trivial and simply formalizes our earlier observation, so we proceed to proving statement \((2)\). Let \(\widetilde{C}_{k}\) be the graph which is in a sense dual to \(C_{k}\). It is also a cycle graph with \(k\) vertices and \(k\) edges, but with a different indexing. Let \(\widetilde{E}\) denote its set of edges, and let \(\widetilde{V}\) denote its set of vertices. Both again are indexed by \(\mathbb{Z}_{k}\), \(\widetilde{E}=\{\tilde{e}_{1},\ldots,\tilde{e}_{k}\}\) and \(\widetilde{V}=\{\tilde{v}_{1},\ldots,\tilde{v}_{k}\}\), but now in such a way, that the boundary map \(\partial:\widetilde{E}\to 2^{\widetilde{V}}\) is given by \(\partial(\tilde{e}_{i})=\{\tilde{v}_{i-1},\tilde{v}_{i}\}\). So, compared to \(C_{k}\), the relative indexing is shifted by \(1\). Let \(G\) denote the subgraph of the cycle graph \(\widetilde{C}_{k}\), obtained by deleting all edges indexed by \(H\). Consider the cubical complex \(G^{n}=\underbrace{G\times\cdots\times G}_{n}\), and consider the cochain complex of \(G^{n}\), let us call it \(\tilde{C}^{*}\). It is easy to see that \(\mathcal{C}^{P,0}\) is isomorphic to this cochain complex, with the isomorphism \(\varphi\) given by \(\varphi(v_{i}):=\tilde{e}_{i}\), and \(\varphi(e_{i}):=\tilde{v}_{i}\), for all \(i\in\mathbb{Z}_{k}\). In particular, we have \(H_{i}(\mathcal{C}^{P,0})\approx H^{n-i}(\tilde{C}^{*})\), for all \(i\). On the other hand, we assumed that \(h\geqslant 1\), so topologically, the graph \(G\) consists of \(h\) disjoint intervals. In particular, the direct product \(G^{n}\) is homotopy equivalent to the discrete space with \(h^{n}\) points.
Therefore, we have \[H^{i}(\tilde{C}^{*})=\left\{\begin{array}{ll}\mathbb{Z}^{h^{n}},&\text{ if }i=0;\\ 0,&\text{ otherwise,}\end{array}\right.\] and it follows that \[H_{i}(\mathcal{C}^{P,0})=\left\{\begin{array}{ll}\mathbb{Z}^{h^{n}},&\text{ if }i=n;\\ 0,&\text{ otherwise.}\end{array}\right.\]

### Structure of the relative chain complexes

Assume now \(q\geq 1\), and consider the chain complex \(\mathcal{C}^{P,q-1}\). The condition as to which vertex-edge \(n\)-tuples are allowed to be taken as generators of the chain groups is weaker for \(\mathcal{C}^{P,q-1}\), than it is for \(\mathcal{C}^{P,q}\), so the latter is its chain subcomplex. The following lemma states that their quotient can be decomposed as a direct sum of chain complexes of the same type.

**Lemma 3.2**.: _For any \(P\subseteq\mathbb{Z}_{k}\), and any \(q\geq 1\), we have the following chain complex isomorphism:_ \[\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q}\approx\bigoplus_{S}\mathcal{C}^{S,q-1}, \tag{3.1}\] _where the sum is taken over all subsets of \(P\) of cardinality \(q-1\)._

**Proof.** For each \(d\), the relative chain group \(C_{d}(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q})=C_{d}^{P,q-1}/C_{d}^{P,q}\) is generated by the cosets of \(C_{d}^{P,q}\), whose representatives are the vertex-edge \(n\)-tuples \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\), satisfying \(|\operatorname{supp}_{V}(\sigma)|=q-1\). Call such a coset \(\bar{\sigma}\). The relative boundary operator in \(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q}\) is then given by the following formula, cf. (2.1), \[\partial\bar{\sigma}=\sum_{i:\sigma_{i}\in E}\;\sum_{\bar{\sigma}_{i}\in\partial\sigma_{i}\cap\bar{V}}(-1)^{\rho(\sigma,i)}(\sigma_{1},\ldots,\sigma_{i-1},\bar{\sigma}_{i},\sigma_{i+1},\ldots,\sigma_{n}), \tag{3.2}\] where \(\bar{V}=\{v_{j}\,|\,j\in\operatorname{supp}_{V}(\sigma)\}\), and \(\rho(\sigma,i)\) is the same as in (2.1). In other words, when taking the boundary, we are allowed to replace an edge with any of its boundary vertices, subject to the condition, that this does not change the vertex support set. Since the boundary operator preserves the vertex support set, the chain complex \(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q}\) decomposes as a direct sum, with direct summands indexed by all possible choices of \(\operatorname{supp}_{V}(\sigma)\), which is the same as to say all possible choices of \((q-1)\)-subsets of \(P\). This proves (3.1).

### The case \(P\neq\mathbb{Z}_{k}\)

When \(P\) is a proper subset of \(\mathbb{Z}_{k}\), it turns out that all the homology of the chain complex \(\mathcal{C}^{P,q}\) is concentrated in its top dimension.

**Theorem 3.3**.: _Assume \(P\) is a proper subset of \(\mathbb{Z}_{k}\). Then, the homology of \(\mathcal{C}^{P,q}\) is concentrated in dimension \(n-q\), in other words, \(H_{i}(\mathcal{C}^{P,q})=0\), for \(i\neq n-q\)._

**Proof.** The proof proceeds by induction on \(q\). For the base case \(q=0\), this has been proved in Proposition 3.1(2). Assume now \(q\geq 1\). Since the chain complex \(\mathcal{C}^{P,q}\) is a subcomplex of \(\mathcal{C}^{P,q-1}\), we have the following long exact sequence: \[\ldots\to H_{*}(\mathcal{C}^{P,q})\to H_{*}(\mathcal{C}^{P,q-1})\to H_{*}(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q})\xrightarrow{\partial}H_{*-1}(\mathcal{C}^{P,q})\to\ldots \tag{3.3}\] Note, that by induction assumption, the homology of the complex \(\mathcal{C}^{P,q-1}\) is concentrated in dimension \(n-(q-1)=n-q+1\). Furthermore, due to dimensional reasons, see Remark 2.3, the homology of \(\mathcal{C}^{P,q}\) must be \(0\) in dimension \(n-q+1\) and above.
By Lemma 3.2 we have \(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q}\approx\oplus_{S}\mathcal{C}^{S,q-1}\), where the sum is taken over all subsets of \(P\) of cardinality \(q-1\). Since each \(S\) is a proper subset of \(\mathbb{Z}_{k}\), by induction assumption, the homology of \(\mathcal{C}^{S,q-1}\) is also concentrated in dimension \(n-q+1\). It follows that the only nontrivial part of the long exact sequence (3.3) is \[0\to H_{n-q+1}(\mathcal{C}^{P,q-1})\to H_{n-q+1}(\mathcal{C}^{P,q-1}/\mathcal{C}^{P,q})\to H_{n-q}(\mathcal{C}^{P,q})\to 0,\] so it follows that \(H_{i}(\mathcal{C}^{P,q})=0\), for \(i\neq n-q\).

### The case \(P=\mathbb{Z}_{k}\)

We are now ready to deal with the main case.

**Theorem 3.4**.: _The homology of the chain complex \(\mathcal{C}^{\mathbb{Z}_{k},q}\) is given by the following formula:_ \[H_{i}(\mathcal{C}^{\mathbb{Z}_{k},q})\cong\left\{\begin{array}{ll}\mathbb{Z}^{\binom{n}{i}},&\text{ if }0\leqslant i\leqslant n-q-1,\\ 0,&\text{ if }i<0\text{ or }i>n-q.\end{array}\right. \tag{3.4}\]

**Proof.** Once again, we proceed by induction on \(q\). When \(q=0\), we simply have the homology of the \(n\)-torus, see Proposition 3.1(1). Assume now \(q\geqslant 1\). Consider again the long exact sequence: \[\ldots\to H_{*}(\mathcal{C}^{\mathbb{Z}_{k},q})\to H_{*}(\mathcal{C}^{\mathbb{Z}_{k},q-1})\to H_{*}(\mathcal{C}^{\mathbb{Z}_{k},q-1}/\mathcal{C}^{\mathbb{Z}_{k},q})\xrightarrow{\partial}H_{*-1}(\mathcal{C}^{\mathbb{Z}_{k},q})\to\ldots \tag{3.5}\] Lemma 3.2 together with Theorem 3.3 imply that \(H_{i}(\mathcal{C}^{\mathbb{Z}_{k},q-1}/\mathcal{C}^{\mathbb{Z}_{k},q})=0\), for all \(i\neq n-q+1\). Furthermore, for dimensional reasons, we have \(\mathcal{C}^{\mathbb{Z}_{k},q}_{i}=0\), whenever \(i<0\), or \(i>n-q\), see Remark 2.3, so we know that we must have \(H_{i}(\mathcal{C}^{\mathbb{Z}_{k},q})=0\), unless \(0\leqslant i\leqslant n-q\). It follows that the long exact sequence (3.5) falls into several short pieces \(H_{i}(\mathcal{C}^{\mathbb{Z}_{k},q})\approx H_{i}(\mathcal{C}^{\mathbb{Z}_{k},q-1})\), for \(0\leqslant i\leqslant n-q-1\), and one longer piece \[0\to H_{n-q+1}(\mathcal{C}^{\mathbb{Z}_{k},q-1})\to H_{n-q+1}(\mathcal{C}^{\mathbb{Z}_{k},q-1}/\mathcal{C}^{\mathbb{Z}_{k},q})\to H_{n-q}(\mathcal{C}^{\mathbb{Z}_{k},q})\to H_{n-q}(\mathcal{C}^{\mathbb{Z}_{k},q-1})\to 0. \tag{3.6}\] This implies the statement of the theorem. Note, that for dimensional reasons, see Remark 2.3, the top-dimensional homology group \(H_{n-q}(\mathcal{C}^{\mathbb{Z}_{k},q})\) must be free. The Betti number \(\beta_{n-q}(\mathcal{C}^{\mathbb{Z}_{k},q})\) can then be computed using the Euler-Poincaré formula.

## 4. The Euler characteristic

In this section we return to the general situation which we mentioned in the beginning of this paper. Let the topological space \(X\) be a connected graph, which we call \(G\), and let \(V\) and \(E\) denote its sets of vertices and edges, respectively. Let \(S\) be an arbitrary subset of \(V\). In this section we give a theorem which provides a complete non-recursive formula for the Euler characteristic of the spaces \(\Sigma(G,S,n)\). Before we proceed with its formulation and its proof, let us recall the following concepts. First, for arbitrary positive integers \(a\geqslant b\) the _Stirling numbers of the second kind_, denoted \(\genfrac{\{}{\}}{0pt}{}{a}{b}\), count the number of ways to partition a set of \(a\) labelled objects into \(b\) nonempty unlabelled subsets.
Clearly, then \(b!\genfrac{\{}{\}}{0pt}{}{a}{b}\) is the number of ways to partition a set of \(a\) labelled objects into \(b\) nonempty labelled subsets. Second, if we have a set \(U\), a subset \(S\subseteq U\) and an element \(x\in U\), we let \(S\,\mathrm{XOR}\,x\) denote the subset of \(U\) obtained from \(S\) by the _exclusive or_ operation with respect to \(x\). Formally, we set \[S\,\mathrm{XOR}\,x\coloneqq\left\{\begin{array}{ll}S\setminus x,&\text{ if }x\in S;\\ S\cup x,&\text{ if }x\notin S.\end{array}\right.\]

**Theorem 4.1**.: _Let \(G\) be an arbitrary connected graph, whose set of vertices is \(V\), and whose set of edges is \(E\), and let \(S\) be an arbitrary non-empty subset of \(V\). Set \(k:=|V|\), \(c:=|E|\), and \(s:=|S|\). Finally, let \(n\) be a natural number, such that \(n\geq s\)._

1. _Assume \(G\) is not a tree. Then, the Euler characteristic of \(\Sigma(G,S,n)\) is given by the formula_ \[\frac{(-1)^{n-s}}{s!}\chi(\Sigma(G,S,n))=\sum_{j=0}^{n-s}q^{j}\binom{n}{j}\genfrac{\{}{\}}{0pt}{}{n-j}{s}=\genfrac{\{}{\}}{0pt}{}{n}{s}+q\binom{n}{1}\genfrac{\{}{\}}{0pt}{}{n-1}{s}+q^{2}\binom{n}{2}\genfrac{\{}{\}}{0pt}{}{n-2}{s}+\cdots+q^{n-s}\binom{n}{n-s}\genfrac{\{}{\}}{0pt}{}{s}{s}, \tag{4.1}\] _where \(q=c-k\)._
2. _If \(G\) is a tree, we have the formula_ \[\frac{(-1)^{n-s}}{(s-1)!}\chi(\Sigma(G,S,n))=\sum_{j=1}^{n-s+1}(-1)^{n-s-j+1}\binom{n}{j}\genfrac{\{}{\}}{0pt}{}{n-j}{s-1}=n\genfrac{\{}{\}}{0pt}{}{n-1}{s-1}-\binom{n}{2}\genfrac{\{}{\}}{0pt}{}{n-2}{s-1}+\cdots+(-1)^{s+1}\binom{n}{n-s+1}\genfrac{\{}{\}}{0pt}{}{s-1}{s-1}. \tag{4.2}\]

**Proof.** Let us now introduce some further notation. Let \(S=\{v_{1},\ldots,v_{s}\}\) and \(V\setminus S=\{w_{1},\ldots,w_{k-s}\}\). Consider the following collection of sets: \[A_{i}:=\varphi(v_{i})\cup\varphi(\mu(v_{i})),\text{ for }1\leqslant i\leqslant s,\] \[B_{j}:=\varphi(w_{j})\cup\varphi(\mu(w_{j})),\text{ for }1\leqslant j\leqslant k-s,\] \[U:=\bigcup_{e\in E\setminus\mu(V)}\varphi(e).\] Set \(P_{\sigma}:=(A_{1},\ldots,A_{s},B_{1},\ldots,B_{k-s},U)\). Clearly, the tuple \(P_{\sigma}\) is an ordered set partition of \(\{1,\ldots,n\}\), in which we allow empty sets. We shall now group all the cells \(\sigma\in\Sigma(G,S,n)\) according to their tuple \(P_{\sigma}\), and calculate the contribution to the Euler characteristic separately in each group. Consider first an arbitrary tuple \(P_{\sigma}\), such that \(B_{r}\neq\emptyset\), for some \(1\leqslant r\leqslant k-s\). Let \(M\) be the set of all cells with this tuple, and let \(l\) be the minimal element of \(B_{r}\).
We can then define an involution \(v:M\to M\), by moving the element \(l\) from \(\varphi(w_{r})\) to \(\varphi(\mu(w_{r}))\), and vice versa. Formally, we set \[v(\varphi)(x):=\left\{\begin{array}{ll}\varphi(x)\,\mathrm{XOR}\,l,&\text{ if }x=w_{r},\text{ or }x=\mu(w_{r});\\ \varphi(x),&\text{ otherwise.}\end{array}\right.\] Since there are no conditions on \(\varphi(w_{r})\), the involution \(v\) is well-defined. It produces a perfect matching on the set \(M\). The difference of dimensions of any two matched cells is \(1\), so their contributions to the Euler characteristic of \(\Sigma(G,S,n)\) have opposite signs. It follows that the contribution of each matched pair is \(0\), and hence also the total contribution of all the cells in \(M\) is \(0\). This means, that when computing the Euler characteristic of \(\Sigma(G,S,n)\) we can limit ourselves to considering the tuples \(P_{\sigma}\), for which \(B_{1}=\cdots=B_{k-s}=\emptyset\). Note, that, for each \(1\leqslant i\leqslant s\), since \(\varphi(v_{i})\neq\emptyset\), we also have \(A_{i}\neq\emptyset\). For \(1\leqslant i\leqslant s\), we set \(l_{i}\) to be the minimum of \(A_{i}\). Since \(A_{i}\neq\emptyset\), the element \(l_{i}\) is well-defined. We now partition the set \(M\) into the sets \(M_{1},M_{2},\ldots,M_{s+1}\) as follows. For each cell \(\sigma\in M\) we define \(h(\sigma)\) to be the index \(j\), uniquely determined by the following condition: \[\varphi(v_{j})\neq l_{j},\text{ and }\varphi(v_{i})=l_{i},\text{ for all }i<j.\] Here, if \(\varphi(v_{i})=l_{i}\), for all \(1\leqslant i\leqslant s\), we set \(h(\sigma)=s+1\). Next, fix an index \(1\leqslant g\leqslant s\), and calculate the contribution of the cells in \(M_{g}\). In the same way as earlier in the proof, we can define an involution \(v:M_{g}\to M_{g}\). This time it is shifting \(l_{g}\) from \(\varphi(v_{g})\) to \(\varphi(\mu(v_{g}))\) and back. Formally, \[v(\varphi)(x):=\left\{\begin{array}{ll}\varphi(x)\,\mathrm{XOR}\,l_{g},&\text{ if }x=v_{g},\text{ or }x=\mu(v_{g});\\ \varphi(x),&\text{ otherwise.}\end{array}\right.\] Since \(\varphi(v_{g})\neq l_{g}\), the involution \(v\) is well-defined. As before, it matches cells with dimension difference \(1\), so the contribution of each matched pair of cells, and hence of the total set \(M_{g}\) to the Euler characteristic of \(\Sigma(G,S,n)\), is \(0\). The only interesting contribution occurs in \(M_{s+1}\). Note, that all cells in \(M_{s+1}\) have dimension \(n-s\). Indeed, if \(\sigma\in M_{s+1}\), the condition \(\varphi(v_{i})=l_{i}\), for all \(1\leqslant i\leqslant s\), uniquely determines \(\varphi(\mu(v_{i}))\) as well, and we see that \(\sum_{v\in V}|\varphi(v)|=s\), hence \(\sum_{e\in E}|\varphi(e)|=n-s\). To calculate \(|M_{s+1}|\) note that \(|E\setminus\mu(V)|=|E|-|V|=c-k=q\). We need to count the number of ordered set partitions of \(U\) into \(q\) possibly empty parts. To produce such a partition, we can simply choose for each of the elements from \(U\) which of the \(q\) parts it will belong to. Therefore we conclude that \(|M_{s+1}|=q^{|U|}\). The contribution of \(M\) to the Euler characteristic of \(\Sigma(G,S,n)\) is therefore \((-1)^{n-s}q^{|U|}\). To finish our calculations, we simply need to sum over all tuples \(P_{\sigma}\). To choose such a tuple, we first fix the cardinality of \(U\), set \(j\coloneqq|U|\). We have \(0\leqslant j\leqslant n-s\). After that we choose \(U\) itself, there are \(\binom{n}{j}\) possibilities.
All the sets \(B_{i}\) are empty, so we need to choose \(A_{1},\ldots,A_{s}\). They must all be non-empty, and they need to form an ordered partition of \([n]\setminus U\). By definition of the Stirling numbers of the second kind, there are \(s!\genfrac{\{}{\}}{0pt}{}{n-j}{s}\) ways to do that. Now, each such tuple gives the contribution \((-1)^{n-s}q^{j}\) to the Euler characteristic, so we obtain the desired formula (4.1).

Let us now show (b). The argument is similar to the one in the proof of (a), but it requires certain modifications. Here, the graph \(G\) coincides with its spanning tree, and we can match the vertices of \(G\) with its edges, in the same way as above, with one vertex left over. Call this vertex \(v\). Let again \(S=\{v_{1},\ldots,v_{s}\}\) and \(V\setminus S=\{w_{1},\ldots,w_{k-s}\}\). Since \(S\) is non-empty and the choice of \(v\) is arbitrary, we can assume that \(v=v_{s}\). As before, we have a bijection \(\mu:V\setminus v\to E\), such that each vertex is mapped to one of its adjacent edges. We now set \[A_{i}\coloneqq\varphi(v_{i})\cup\varphi(\mu(v_{i})),\text{ for }1\leqslant i\leqslant s-1,\] \[B_{j}\coloneqq\varphi(w_{j})\cup\varphi(\mu(w_{j})),\text{ for }1\leqslant j\leqslant k-s,\] \[U\coloneqq\varphi(v).\] Again, fixing the values of \(A_{i}\)'s, \(B_{j}\)'s and \(U\), gives us a set of cells \(M\), and we calculate the individual contributions of these sets to the Euler characteristic. Repeating the argument from (a) we immediately conclude that we can assume that \(B_{1}=\cdots=B_{k-s}=\emptyset\), otherwise that contribution is \(0\). Furthermore, we know that in this case the total contribution of the set \(M\) is equal to \((-1)^{n-s+1-|U|}\). The cardinality of \(U\) can be anything between \(1\) and \(n-s+1\). If \(j=|U|\) is fixed, there are \(\binom{n}{j}\) ways to choose \(U\). After that there are \((s-1)!\genfrac{\{}{\}}{0pt}{}{n-j}{s-1}\) ways to choose the ordered partition \((A_{1},\ldots,A_{s-1})\). Summing over \(j\) we get the formula \(\chi(\Sigma(G,S,n))=\sum_{j=1}^{n-s+1}(-1)^{n-s-j+1}\binom{n}{j}(s-1)!\genfrac{\{}{\}}{0pt}{}{n-j}{s-1}\).

In particular, when \(G\) is a cycle, we have \(|V|=|E|\). This means that \(q=0\), so in (4.1) all the terms except for the first one vanish, and we have the following formula.

**Corollary 4.2**.: _We have \(\chi(\Omega(k,n))=(-1)^{n-k}k!\genfrac{\{}{\}}{0pt}{}{n}{k}\)._

Together with Theorem 3.4 this finishes the calculation of the Betti numbers of \(\Omega(k,n)\).

## Declarations

**Ethical Approval:** not applicable.

**Competing interests:** not applicable.

**Authors' contributions:** not applicable.

**Funding:** DFG Project Nr. 509120879

**Availability of data and materials:** not applicable.
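As an illustrative numerical sanity check of Corollary 4.2 and Theorem 3.4, the short sketch below recomputes the Euler characteristic of \(\Omega(k,n)\) directly from cell counts. It assumes the cell count of the cubical structure from Section 2 in the case \(P=\mathbb{Z}_{k}\), \(q=k\): a \(d\)-dimensional cell chooses \(d\) edge coordinates (\(\binom{n}{d}\) position choices, \(k\) edges each) and distributes the remaining \(n-d\) coordinates onto the \(k\) vertices surjectively (\(k!\,S(n-d,k)\) ways). That count is an assumption spelled out here, not a formula stated verbatim in the text.

```python
# Sanity check of chi(Omega(k, n)) and of the top Betti number beta_{n-k},
# using the cell counts described in the lead-in above.
from math import comb, factorial
from sympy.functions.combinatorial.numbers import stirling

def euler_characteristic(k: int, n: int) -> int:
    chi = 0
    for d in range(0, n - k + 1):
        cells = comb(n, d) * k**d * factorial(k) * stirling(n - d, k, kind=2)
        chi += (-1)**d * cells
    return chi

def top_betti(k: int, n: int) -> int:
    # Euler-Poincare: chi = sum_i (-1)^i beta_i, with beta_i = C(n, i)
    # for 0 <= i <= n - k - 1 by Theorem 3.4 (q = k).
    lower = sum((-1)**i * comb(n, i) for i in range(n - k))
    return (-1)**(n - k) * (euler_characteristic(k, n) - lower)

for k, n in [(2, 3), (3, 5)]:
    assert euler_characteristic(k, n) == (-1)**(n - k) * factorial(k) * stirling(n, k, kind=2)
    print(k, n, euler_characteristic(k, n), top_betti(k, n))
```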
2302.14732
Constrained Bayesian Optimization for Automatic Underwater Vehicle Hull Design
Automatic underwater vehicle hull Design optimization is a complex engineering process for generating a UUV hull with optimized properties on a given requirement. First, it involves the integration of involved computationally complex engineering simulation tools. Second, it needs integration of a sample efficient optimization framework with the integrated toolchain. To this end, we integrated the CAD tool called FreeCAD with CFD tool openFoam for automatic design evaluation. For optimization, we chose Bayesian optimization (BO), which is a well-known technique developed for optimizing time-consuming expensive engineering simulations and has proven to be very sample efficient in a variety of problems, including hyper-parameter tuning and experimental design. During the optimization process, we can handle infeasible design as constraints integrated into the optimization process. By integrating domain-specific toolchain with AI-based optimization, we executed the automatic design optimization of underwater vehicle hull design. For empirical evaluation, we took two different use cases of real-world underwater vehicle design to validate the execution of our tool.
Harsh Vardhan, Peter Volgyesi, Will Hedgecock, Janos Sztipanovits
2023-02-28T16:36:26Z
http://arxiv.org/abs/2302.14732v2
# Constrained Bayesian Optimization for Automatic Underwater Vehicle Hull Design

###### Abstract.

Automatic underwater vehicle hull Design optimization is a complex engineering process for generating a UUV hull with optimized properties on a given requirement. First, it involves the integration of involved computationally complex engineering simulation tools. Second, it needs integration of a sample efficient optimization framework with the integrated toolchain. To this end, we integrated the CAD tool called FreeCAD with CFD tool openFoam for automatic design evaluation. For optimization, we chose Bayesian optimization (BO), which is a well-known technique developed for optimizing time-consuming expensive engineering simulations and has proven to be very sample efficient in a variety of problems, including hyper-parameter tuning and experimental design. During the optimization process, we can handle infeasible design as constraints integrated into the optimization process. By integrating domain-specific toolchain with AI-based optimization, we executed the automatic design optimization of underwater vehicle hull design. For empirical evaluation, we took two different use cases of real-world underwater vehicle design to validate the execution of our tool.

Key words and phrases:_Bayesian Optimization, underwater vehicles, inequality constraints, computational fluid dynamics, CAD + Footnote †: journal: _DESITON 2023, May 9-12, 2023, San Antonio TX, United States_

## Problem Formulation and Approach

In this section, we formulate the UUV hull design problem as a constrained optimization problem. The hull shape of an underwater vehicle is denoted by \(\Omega\) and is defined by a multivariate parameter vector \(x\), that is, \(\Omega=\Omega(x)\). If \(f\) is the objective function that maps a 3D shape \(\Omega\) to a complex property of coupled two-way solid-fluid dynamics, that is, drag force \((F_{d})\) (\(f:\Omega\mapsto F_{d}\)), then the optimization problem can be formulated as: \[\Omega^{*}=\underset{x\in DS}{\operatorname{argmin}}f(x) \tag{1}\] Here, \(DS\) is our design search space. The UUV hull contains electronics, sensors, and other mechanical and electrical components. Packing them into the hull imposes a non-linear constraint on the optimization process. The hull design problem can then be formulated as a constrained optimization problem defined as follows: \[\Omega^{*}=\underset{x\in DS}{\operatorname{argmin}}f(x) \tag{2}\] \[s.t.\ g(x)\leq 0 \tag{3}\] Here, constraint function \(g(x)\) ensures that all selected components can be packed inside the designed UUV hull. To solve this optimization problem, we utilize a constrained Bayesian Optimization framework as formulated by (Hari and Sohn, 2015).

### Constrained Bayesian Optimization

Bayesian Optimization relies on a probabilistic model of the system of interest during optimization, and the fidelity of the model is the most decisive factor in the optimization process. We use the Gaussian process (Hari and Sohn, 2015) defined below to model system behavior (\(f\)): \[f\sim\mathcal{GP}(\mu(.),\kappa(.,.)) \tag{4}\] Here \(\mu(.)\) is the mean function and \(\kappa(.,.)\) is the covariance kernel.
For any given pair of input points \(x,x^{\prime}\in\mathbb{R}^{d}\), these are defined as: \[\mu(x)=\mathbb{E}[f(x)] \tag{5}\] \[\kappa(x,x^{\prime})=\mathbb{E}[(f(x)-\mu(x))(f(x^{\prime})-\mu(x^{\prime}))] \tag{6}\] In the Bayesian sequential design optimization process, a crucial step at each iteration is to select the most promising candidate for evaluation in the next iteration. In the BO setting, this is done by defining an acquisition function. The design of an acquisition function is a critical component in the performance efficiency of the BO. Let \(x^{*}\) be the best-evaluated sample so far. To select a candidate point \(x^{+}\) in the next iteration, an improvement is defined according to Mockus et al. (Mockus et al., 2017) as follows: \[I(x^{+})=\max\{0,f(x^{*})-f(x^{+})\} \tag{7}\] The expected improvement in such a case is defined as an EI acquisition function, which has a closed-form solution for estimating it from a new candidate point, as given by Mockus et al. (Mockus et al., 2017) and Jones et al. (Jones et al., 2017): \[EI(x^{+})=(f(x^{*})-\mu^{+})\,\Phi\!\left(\frac{f(x^{*})-\mu^{+}}{\sigma^{+}}\right)+\sigma^{+}\,\phi\!\left(\frac{f(x^{*})-\mu^{+}}{\sigma^{+}}\right) \tag{8}\] Here, \(\Phi\) is the standard normal cumulative distribution function and \(\phi\) is the standard normal probability density function, while \(\mu^{+}\) and \(\sigma^{+}\) are the posterior mean and standard deviation at \(x^{+}\). Using this EI function, the most promising candidate sample is selected by choosing the \(x^{+}\) that has the maximum EI value. \[x^{*}=\underset{x^{+}\in DS}{\operatorname{argmax}}EI(x^{+}) \tag{9}\] The newly selected sample \(x^{*}\) is evaluated and is included in the evaluated data set, called \(X\). Accordingly, the posterior probability distribution is estimated by the conditioning rules for Gaussian random variables, as below: \[\mu^{*}=\mu(x^{*})+\kappa(x^{*},X)\kappa(X,X)^{-1}(f(X)-\mu(X)) \tag{10}\] \[(\sigma^{*})^{2}=\kappa(x^{*},x^{*})-\kappa(x^{*},X)\kappa(X,X)^{-1}\kappa(X,x^{*}) \tag{11}\] Constrained BO, which is an extension to standard BO meant to model infeasibility during the inequality-constrained optimization routine, is formulated and proposed by (Hari and Sohn, 2015). We use this formulation for our experimentation, and it models both function and constraint as Gaussian processes. Let \(g\) be the constraint function that is unknown _a priori_; the first step in this setting is to model \(f\) and \(g\) as Gaussian processes: \[f\sim\mathcal{GP}(\mu_{1}(x),\kappa_{1}(x)) \tag{12}\] \[g\sim\mathcal{GP}(\mu_{2}(x),\kappa_{2}(x)) \tag{13}\] \[\mu_{1}(x)=\mathbb{E}[f(x)] \tag{14}\] \[\kappa_{1}(x,x^{\prime})=\mathbb{E}[\{f(x)-\mu_{1}(x)\}\{f(x^{\prime})-\mu_{1}(x^{\prime})\}] \tag{15}\] \[\mu_{2}(x)=\mathbb{E}[g(x)] \tag{16}\] \[\kappa_{2}(x,x^{\prime})=\mathbb{E}[\{g(x)-\mu_{2}(x)\}\{g(x^{\prime})-\mu_{2}(x^{\prime})\}] \tag{17}\] The improvement function in this case is modified as: \[I_{C}(x^{+})=\Delta(x^{+})\max\{0,f(x^{*})-f(x^{+})\} \tag{18}\] \[\Delta(x^{+})\in\{0,1\} \tag{19}\] \(\Delta(x^{+})\) is a feasibility indicator function that is 1 if \(g(x^{+})\leq 0\), and 0 otherwise. It causes \(\Delta(x^{+})\) to be a Bernoulli random variable whose probability of getting a feasible design is: \[PF(x^{+})\coloneqq Pr(g(x^{+})\leq 0)=\int_{-\infty}^{0}P(g(x^{+})|x^{+},X)dg(x^{+}) \tag{21}\] Due to the Gaussian behavior of \(g(.)\), \(g(x^{+})\) is a univariate Gaussian random variable, so this probability can be computed in closed form. The modified expected improvement to include the effect of infeasibility gives a joint acquisition function: \[E_{C}(x^{+})=\mathbb{E}[I_{C}(x^{+})|x^{+}] \tag{22}\] \[=\mathbb{E}[\Delta(x^{+})I(x^{+})|x^{+}] \tag{23}\] \[=PF(x^{+})EI(x^{+}) \tag{24}\]
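As an illustration of the joint acquisition function \(E_{C}\) in (24), the sketch below evaluates it at a single candidate point. This is a minimal sketch of ours, not the authors' implementation: `mu_f`, `sigma_f` and `mu_g`, `sigma_g` denote the GP posterior mean and standard deviation of the objective and of the constraint at the candidate, and `f_best` is the best drag value evaluated so far (minimisation).

```python
# Constrained EI at one candidate: probability of feasibility times EI.
from scipy.stats import norm

def expected_improvement(mu_f, sigma_f, f_best):
    if sigma_f <= 0:
        return 0.0
    z = (f_best - mu_f) / sigma_f
    return (f_best - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)

def probability_of_feasibility(mu_g, sigma_g):
    # Pr[g(x) <= 0] under a Gaussian posterior on the constraint value.
    if sigma_g <= 0:
        return float(mu_g <= 0)
    return norm.cdf((0.0 - mu_g) / sigma_g)

def constrained_ei(mu_f, sigma_f, f_best, mu_g, sigma_g):
    return probability_of_feasibility(mu_g, sigma_g) * expected_improvement(mu_f, sigma_f, f_best)

# Candidate predicted to have lower drag than the incumbent, with roughly an
# 84% chance of satisfying the packing constraint.
print(constrained_ei(mu_f=9.0, sigma_f=1.0, f_best=10.0, mu_g=-0.5, sigma_g=0.5))
```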
The modified expected improvement to include the effect of infeasibility gives a joint acquisition function: \[E_{C}(x^{+}) = \mathbb{E}[I_{C}(x^{+})|x^{+}] \tag{22}\] \[= \mathbb{E}[\Delta(x^{+})I(x^{+})|x^{+}] \tag{23}\] \[= PF(x^{+})EI(x^{+}) \tag{24}\] This joint acquisition function can be further optimized using standard optimization algorithms. Since our acquisition function has the property of being smooth and continuous, we used a two-step optimization to find \(x^{*}\). The first step is Monte Carlo optimization and the second step is limited memory BFGS (Hari and Sohn, 2015) (see Figure 1). ### Integrated Toolchain For design automation, exploration, and optimization, we integrated the necessary simulation tools for completely automated execution. To this end, we integrated the CAD design tool FreeCAD (Hari and Sohn, 2015) with the CFD simulation tool OpenFOAM (Gueruer et al., 2017). FreeCAD is used to design a parametric CAD model and generate the 3D CAD geometry from a given set of parameters along with its stereolithography (STL) file without any manual intervention. OpenFOAM uses this STL file to conduct fluid physics simulations via finite volume discretizations. Volume meshing is done using a castellated 3D parametric volumetric mesh which is further split and refined in the vicinity of the body surfaces. Other CFD simulation requirements, like solver settings and initial and boundary conditions, can also be set up from a Python environment. In the meshed volume, the RANS equations with the k-\(\omega\) SST turbulence model are solved, and the output of interest is fetched from OpenFOAM and transferred back into the Python environment. Accordingly, it is possible to both control and run the entire design optimization pipeline from a single Python environment. This integration of tools and capability to control the parameters and environmental conditions gives us the flexibility to run an optimization framework with design tools in the loop without human intervention (refer to Figure 2). ### Parametric CAD Model and Baseline Packing Geometry For automatic design optimization, it was imperative to design a parametric CAD model with the flexibility and adaptability to be able to create a 3D model from a given set of parameters without manual intervention. The parametric CAD design should maintain the experimenter's assumptions in order to generate a _valid_ CAD design based on the given parameters. To ensure this, we use a stringent design methodology for completely constrained designs (Kang et al., 2017). For the parametric CAD seed design, we used a Myring hull (Mirsh and Sire, 2018) as our baseline architecture. It is the most commonly used axisymmetrical hull shape, due to a number of advantages such as streamlined flow behavior and satisfactory geometry for both hydrodynamic and hydrostatic pressure. A Myring hull has three different body sections: nose, tail, and cylindrical center. The nose and tail equations for a Myring hull are given by: \[r_{1}(x)=\frac{1}{2}D\left[1-\left(\frac{x-a}{a}\right)^{2}\right]^{\frac{1}{n}} \tag{25}\] \[z=(x-a-b) \tag{26}\] \[r_{2}(x)=\frac{1}{2}D-\left[\frac{3D}{2c^{2}}-\frac{\tan\theta}{c}\right]z^{2}+\left[\frac{D}{c^{3}}-\frac{\tan\theta}{c^{2}}\right]z^{3}. \tag{27}\] Here, \(r_{1}\) and \(r_{2}\) define the radius of the nose and tail of the hull at a distance, \(x\), measured from the tip of the nose.
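These profile equations translate directly into code, with \(a\), \(b\), and \(c\) the nose, body, and tail lengths and \(D\) the body diameter (as described next). The sketch below is an illustrative helper of our own, not part of FreeCAD, and takes the tail angle \(\theta\) in radians:

```python
import numpy as np

def myring_radius(x, a, b, c, D, n, theta):
    """Myring hull radius r(x): nose on [0, a], cylinder on [a, a+b], tail on [a+b, a+b+c]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    r = np.full_like(x, 0.5 * D)                                  # cylindrical centre section
    nose = x <= a
    r[nose] = 0.5 * D * (1.0 - ((x[nose] - a) / a) ** 2) ** (1.0 / n)
    tail = x >= a + b
    z = x[tail] - a - b
    r[tail] = (0.5 * D
               - (3.0 * D / (2.0 * c ** 2) - np.tan(theta) / c) * z ** 2
               + (D / c ** 3 - np.tan(theta) / c ** 2) * z ** 3)
    return r

# Illustrative dimensions (arbitrary units) and shaping parameters
xs = np.linspace(0.0, 500 + 2500 + 500, 200)
profile = myring_radius(xs, a=500, b=2500, c=500, D=1000, n=1.5, theta=np.radians(25))
```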
Other body parameters are: \(a\), \(b\), \(c\), \(D\), \(n\), and \(\theta\), corresponding to nose length, body length, tail length, cylindrical body diameter, nose shaping parameter, and tail shaping parameter, respectively (see Figure 3). The baseline design used for internal component packing and placement comes from another automated tool (Kang et al., 2017). Component selection and packing are not within the scope of this paper; however, three-dimensional packing of components in an arbitrary shape is an NP-complete problem. Based on the capabilities of our external component selection and packing tool, we utilized a simple design with conical end caps (nose and tail) and a cylindrical body to determine the exact required hull dimensions. These conical-shaped parametric designs are optimized to minimize internal hull volume while ensuring that packed components have no interferences. These baseline designs, however, are not optimal from the perspective of producing minimal-drag designs. Once components are packed and the parameters of a baseline design are found, this fixed geometry will act as a minimal constraint in the optimization of the hull design. (see Figures 4 and 5). Figure 4. Selected components in the UUV in a specific packing configuration Figure 3. Myring hull: Geometry and parameters ### Infeasible Design Heuristics Since a parametric Myring hull can assume a wide range of shapes (Hernandez et al., 2016), the generated hull shape needs to be tested for interference with the baseline packed design. Any Myring hull parameters that cause interference with the baseline design are deemed to be infeasible. Since the computational cost of CAD assembly and running an interference test is much less than CFD simulation, the in-feasibility test on an optimized design is conducted during the CAD modeling and assembly stage (refer to Figure 2). Running full CFD analysis on an infeasible design is a waste of computational time and resources, and it delays the optimization process. To address this situation, we implemented a heuristic that works as follows (for a minimization problem): for an infeasible design that is detected during CAD assembly, return the maximum drag value to the optimizer for all evaluated samples up to that point, instead of running a full CFD analysis. The opposite can be done for a maximization problem. However, for starting the experiment we need at least one drag value of in-feasible design. Accordingly, we run the first infeasible design and store its drag value. ## 3. Design experimentation and results In this section, we present two different experiments carried out using our optimization pipeline. In both cases, selected components are the same and consequently, the baseline packing geometry is identical. The operating conditions (i.e., the velocity of operation, initial and boundary conditions) and environmental conditions (e.g., turbulence intensity) are kept constant based on mission requirements. The baseline packing geometry is as shown in Figure 6. The design space (\(DS\)) of the search process is selected as shown in Table 1. ### Experiment 1 In this experiment, we only optimize the nose and tail shapes, defined by parameters \(n\) and \(\theta\). The range of the design space for optimization of parameters \(n\) and \(\theta\) is given in Table 1. Due to it being a computationally costly process, we run 50 iterations of optimization using our optimization pipeline. 
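In these runs, candidates that fail the CAD-stage interference check are not sent to CFD; they are short-circuited by the infeasible-design heuristic described above. A minimal sketch of such an evaluation wrapper is shown below, where `cad_is_feasible` and `run_cfd_drag` are hypothetical stand-ins for the FreeCAD interference test and the OpenFOAM drag evaluation:

```python
def evaluate_design(params, cad_is_feasible, run_cfd_drag, drag_history):
    """Objective wrapper implementing the infeasible-design heuristic (minimization)."""
    if not cad_is_feasible(params) and drag_history:
        # Infeasible design: skip CFD and return the worst drag observed so far.
        drag = max(drag_history)
    else:
        # Feasible design, or the very first infeasible design when no history exists:
        # run the full CFD analysis and store its drag value.
        drag = run_cfd_drag(params)
        drag_history.append(drag)
    return drag
```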
The best design found (shown in Figure 7) has a drag value of approximately 69 Newtons. ### Experiment 2 In this experiment, we also optimized the nose and tail length (parameters \(a\) and \(c\)) in addition to their shapes (parameters \(n\) and \(\theta\)). The design space for optimization of all four variables is given in Table 1. Again, we run 50 sequential optimization steps using our optimization pipeline. \begin{table} \begin{tabular}{c c c} \hline \hline **Symbol** & **Minimum** & **Maximum** \\ \hline \(a\) & \(a_{B}\) & \(a_{B}+2500\) mm \\ \(c\) & \(c_{B}\) & \(c_{B}+2500\) mm \\ \(n\) & 0.1 & 5.0 \\ \(\theta\) & 0\({}^{\circ}\) & 50\({}^{\circ}\) \\ \(l=a+b+c\) & & \\ \hline \hline \end{tabular} \end{table} Table 1. Range of design parameters for optimization Figure 5. Selected components in baseline packed geometry Figure 6. Baseline 3D hull design with \(a_{baseline}=555\) cm, \(b_{baseline}=2664\) cm, \(c_{baseline}=512\) cm, \(D_{baseline}=1026\) cm Figure 2. Optimization pipeline using integrated CAD and CFD tools with Bayesian Optimization The best design is shown in Figure 9 and had a drag value of approximately 36 Newtons. This is roughly a 50% reduction in drag due to the streamlined nose and tail shapes and would save a large amount of energy during real-world operation of the vehicle. ### Analysis of results In both experiments, the allocated budget was 50 evaluations, since a single evaluation takes tens of minutes. However, BO converges to an optimal or near-optimal design in far fewer iterations. In Experiment 1 (refer to the right-hand plot in Figure 8), a near-optimal design was already found within 12 iterations and no significant further improvement is observed. In Experiment 2 (refer to the right-hand plot in Figure 10), the optimal design was found within only 10 iterations and no further improvement was observed. This sample efficiency is due to the dynamic probabilistic modeling of the design space from the labeled samples and the state-of-the-art acquisition function, at the price of a correspondingly costly candidate-selection calculation. However, with the current multi-core implementation of BO, it takes only milliseconds to seconds to find a new sample to evaluate, so it is prudent to use BO in use cases where sample labeling and evaluation time cannot be reduced below seconds. ## 4. Related Work One of the first well-known studies on optimizing UUV hull design for low drag was conducted by Gertler (Gertler, 1950) in 1950. Later in 1976, Myring (Myring, 1978) studied viscous-inviscid flow interaction methods to predict drag and concluded that there is low variability in body drag force when the nose or tail varies from slender to stout within a specific range, but it increases dramatically outside that range. To design shapes for better performance, bio-inspired hull shapes for UUVs are becoming popular (Gertler, 1950). To this end, Dong et al. (Dong et al., 2018) designed a gliding robotic fish with the streamlined shape of a whale shark. Stewart et al. (Stewart et al., 2018) designed a hybrid UAV-UUV system inspired by seabirds. A four-fin bio-inspired UUV was studied by Geder et al. (Geder et al., 2018). They also showed that fish can achieve both high maneuverability and excellent gliding performance by equipping themselves with controllable fins and tails. More recently, with extraordinary developments in computing capability and the maturity of mesh-based analysis tools, computational fluid dynamics (CFD) simulations are now widely applied to analyze UUV hydrodynamic performance.
Most research is based on either the Reynolds-averaged Navier-Stokes (RANS) formulation or the large eddy simulation (LES). Since RANS treats viscous effects much better than potential flow theory and needs fewer computational resources than LES, it is more frequently used. With the advent of computer-aided design, traditional CFD can be leveraged inside an optimization loop to seek an optimal UUV design for given flow conditions; see, e.g., Alam et al. (Alam et al., 2018). Many different optimization algorithms have been considered; for example, adjoint methods (Alam et al., 2018) and genetic algorithms (Alam et al., 2018). (Alam et al., 2018) integrated a machine learning model with numerical simulation to obtain optimized designs faster than traditional methods. Schweyher et al. (Schweyher et al., 2018) used an evolutionary strategy (a genetic algorithm) to obtain a minimum-drag body. The application of Bayesian Optimization to find a minimum-drag shape was studied for small arbitrary 2D shapes by Eismann et al. (Eismann et al., 2018) and for an axisymmetric body of rotation by Vardhan et al. (Vardhan et al., 2019). Neither work considered constraint modeling during the design process. A deep neural network-based approach was used to study the effects of UUV shape on drag force by (Vardhan et al., 2019). Figure 7. Optimal UUV hull shape with fixed nose and tail length. Optimal design parameters: \(n=1.0\); \(\theta=50.0\) Figure 8. Optimization process vs number of evaluations/iterations: L2 distance between successive selected samples (left), drag value of the best-selected sample in Newtons (right) Figure 9. Optimal UUV hull shape with nose and tail length as free parameters. Optimal design parameters: \(a=2643.86\); \(c=1348.72\); \(n=1.144\); \(\theta=22.03\) Figure 10. Optimization process vs number of evaluations/iterations: L2 distance between successive selected samples (left), drag value of the best-selected sample in Newtons (right) ## 5. Conclusion and Future Works In this work, we developed an end-to-end design automation toolchain for constrained optimization problems in underwater vehicle hull design. We integrated the state-of-the-art AI-based optimization algorithm known as Bayesian optimization, along with the capability to handle constraints during the optimization process. Since this integrated tool is generic, the most interesting future work is the extension and integration of other optimization algorithms with the current evaluation toolchain, and a comparison of the performance of AI-based Bayesian optimization against the optimization methods currently used in the domain of CFD-based optimization. To this end, it would be interesting to compare BO with other simulation-based optimization methods such as genetic algorithms, Nelder-Mead, and particle swarm optimization, along with the most widely used gradient-based optimization method, i.e., adjoint-assisted optimization. A detailed comparative study can give us a better picture of the standing of AI-based optimizers in comparison to other existing optimization methods. ###### Acknowledgements. This work is supported by DARPA through contract number FA8750-20-C-0537. Any opinions, findings, conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the views of the sponsor.
2309.06044
Unusual isospectral factorizations of shape invariant Hamiltonians with Scarf II potential
In this paper, we search the factorizations of the shape invariant Hamiltonians with Scarf II potential. We find two classes; one of them is the standard real factorization which leads us to a real hierarchy of potentials and their energy levels; the other one is complex and it leads us naturally to a hierarchy of complex Hamiltonians. We will show some properties of these complex Hamiltonians: they are not parity-time (or PT) symmetric, but their spectrum is real and isospectral to the Scarf II real Hamiltonian hierarchy. The algebras for real and complex shift operators (also called potential algebras) are computed; they consist of $su(1,1)$ for each of them and the total potential algebra including both hierarchies is the direct sum $su(1,1)\oplus su(1,1)$.
Yiğit Can Acar, Lorena Acevedo, Şengül Kuru
2023-09-12T08:28:34Z
http://arxiv.org/abs/2309.06044v2
# Unusual isospectral factorizations of shape invariant Hamiltonians with Scarf II potential ###### Abstract In this paper, we search the factorizations of the shape invariant Hamiltonians with Scarf II potential. We find two classes; one of them is the standard real factorization which leads us to a real hierarchy of potentials and their energy levels; the other one is complex and it leads us naturally to a hierarchy of complex Hamiltonians. We will show some properties of these complex Hamiltonians: they are not parity-time (or PT) symmetric, but their spectrum is real and isospectral to the Scarf II real Hamiltonian hierarchy. The algebras for real and complex shift operators (also called potential algebras) are computed; they consist of \(su(1,1)\) for each of them and the total potential algebra including both hierarchies is the direct sum \(su(1,1)\oplus su(1,1)\). ## 1 Introduction: Scarf II potential Scarf II potential (or Gendenshtein potential) [1, 2] was proposed to describe the atomic and molecular interactions (diatomic molecule potential) in quantum mechanics [3]. It is one of the shape invariant (SI) potentials given for instance in [1] with the following expression \[V(x)=\frac{B^{2}-(A+\frac{\gamma}{2})^{2}+\frac{\gamma^{2}}{4}+2B(A+\frac{ \gamma}{2})\mathrm{sinh}\gamma x}{(\mathrm{cosh}\gamma x)^{2}} \tag{1.1}\] where \(A\), \(B\) and \(\gamma\), in principle, are assumed to be real parameters. In the literature, Scarf II potential has recently attracted much attention, in general not by itself, but for its complexification as a particular simple model to display analytically some properties of complex potentials (see for instance [4, 5, 6, 7, 8]). Complex potentials have interesting properties from the point of view of non-Hermitian Hamiltonians. One of them is the existence of real or complex spectrum depending on the existence of parity-time (PT) symmetry [9, 10, 11] and whether this symmetry is spontaneously broken or not [1, 12, 13, 14, 15]. In [4], new non-PT-symmetric Hamiltonians were obtained including the Hamiltonian with Scarf II potential by using group theoretical methods. Levai and Znojil [5] studied the PT-symmetric Scarf II potential in order to see the relation of PT symmetry and supersymmetry (SUSY). Quesne and Bagchi revisited the PT-symmetric Scarf II potential in [6] and they computed the bound-state wavefunctions together with their energy levels. They found also new completely solvable rationally extended partners of complexified Scarf II potential. Ahmed et al. in [7] studied in detail the crossing of spectrum levels of the complex PT-symmetric Scarf II potential. The broken and unbroken PT and SUSY potentials in optical systems related with Scarf II potential were investigated in [8]. In [14], the relation between SUSY quantum mechanics and PT-symmetry is considered in full detail and in this context a large class of PT or not PT-symmetric complex potentials is studied. This work, is focused just on the real Scarf II potential, not on its complexifications. We will see that this real Hamiltonian has not only a real hierarchy of potentials obtained by factorization. There is also a second complex hierarchy with a broken supersymmetry. We remark that the existence of simultaneous real and the complex hierarchies is not due to any complexification, simply, for the real Scarf II potential there exist these two compatible factorization hierarchies. We show that each of these hierarchies, real and complex, is linked to the Lie algebra \(su(1,1)\). 
The corresponding factorization is associated with the parameter \(\beta\) or \(\alpha\) of the Scarf II potential, respectively; while the real factorization changes \(\beta\) in real units, the complex one changes the parameter \(\alpha\) in imaginary (complex) units. Besides, we show that the real shift operators and the complex shift operators commute and therefore the total shift (or potential) algebra for this problem is \(su(1,1)\oplus su(1,1)\). This double real-complex factorization of the Scarf II potential shown here is unique in the list of shape invariant potentials, as given for instance in [1]. The rest of the SI potentials leading to double factorizations (for instance, trigonometric or hyperbolic Poschl-Teller potentials) have two simultaneous real factorizations. Therefore, Scarf II is a very special case in the class of SI potentials and deserves to be examined in closer detail. The organization of this work is as follows. In section 2 we describe how such real and complex factorizations appear, as well as the spectrum of the potentials in each hierarchy. The following section supplies a discussion of the Lie algebras involved, arriving at \(su(1,1)\oplus su(1,1)\) as the algebra of the whole real-complex hierarchy. ## 2 Factorizations Let us take the Scarf II Hamiltonian choosing \(\gamma=2\) in (1.1), for the sake of simplicity, in the following form \[H_{\alpha,\beta}(x)=-\frac{d^{2}}{dx^{2}}+V_{\alpha,\beta}(x),\qquad V_{\alpha,\beta}(x)=\frac{\alpha^{2}-\beta^{2}+1+2\alpha\beta\sinh 2x}{(\cosh 2x)^{2}} \tag{2.2}\] Our aim is to factorize this Hamiltonian in terms of real first order differential operators \(A^{\pm}\) plus an extra constant as follows: \[H_{\alpha,\beta}(x)=A^{+}A^{-}+\mu \tag{2.3}\] The constant \(\mu\) is called the factorization energy and usually corresponds to the ground state energy of the system. Therefore, we will start by computing a real factorization and see how an additional complex one will automatically appear. In the next sections we will characterize each factorization as well as their relations. ### Real factorizations Suppose that both \(\alpha\) and \(\beta\) are real. The first order differential operators \(A^{\pm}\) are sometimes called shift operators, because they relate (or intertwine) Hamiltonians with shifted parameters of the potential. The Hamiltonian can be written as: \[H_{\alpha,\beta}=A^{+}_{\alpha,\beta}A^{-}_{\alpha,\beta}+\mu_{\beta} \tag{2.4}\] where \[A^{\pm}_{\alpha,\beta}=\mp\frac{d}{dx}+\frac{\alpha}{\cosh\!2x}+(\beta-1)\text{tanh}2x,\qquad\mu_{\beta}=-(\beta-1)^{2} \tag{2.5}\] It can be easily seen from (2.5) that the operators \(A^{\pm}_{\alpha,\beta}\) are adjoint to each other: \((A^{+}_{\alpha,\beta})^{\dagger}=A^{-}_{\alpha,\beta}\). Since the potential \(V_{\alpha,\beta}(x)\) has the reflection symmetry \(\alpha\to\tilde{\alpha}=-\alpha;\ \beta\to\tilde{\beta}=-\beta\), we have two possible sign choices for the parameters, giving rise to two different but related factorizations. By applying this symmetry to the factorization (2.4) and the operators (2.5), we get another set of first order differential operators \(\tilde{A}^{\pm}_{\alpha,\beta}\) that coincide with \(A^{\mp}_{\alpha,\beta+2}\), together with a new constant \(\tilde{\mu}_{\beta}=\mu_{\beta+2}\).
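The factorization (2.4)-(2.5) can be checked directly: writing the superpotential \(W(x)=\alpha/\cosh 2x+(\beta-1)\tanh 2x\), one has \(A^{+}_{\alpha,\beta}A^{-}_{\alpha,\beta}=-d^{2}/dx^{2}+W^{2}-W^{\prime}\), so \(W^{2}-W^{\prime}+\mu_{\beta}\) must reproduce \(V_{\alpha,\beta}(x)\). A short numerical sanity check of this identity, for illustrative parameter values (the script is our own and not part of the derivation), is:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x', real=True)
alpha, beta = 3.0, 5.3                                     # illustrative parameter values

W = alpha / sp.cosh(2 * x) + (beta - 1) * sp.tanh(2 * x)   # superpotential of A(+/-) = -/+ d/dx + W
mu_beta = -(beta - 1) ** 2                                 # factorization energy mu_beta

pot_from_factorization = W ** 2 - sp.diff(W, x) + mu_beta  # potential part of A+ A- + mu_beta
V = (alpha ** 2 - beta ** 2 + 1 + 2 * alpha * beta * sp.sinh(2 * x)) / sp.cosh(2 * x) ** 2

residual = sp.lambdify(x, pot_from_factorization - V, 'numpy')
xs = np.linspace(-3.0, 3.0, 101)
print(np.max(np.abs(residual(xs))))                        # numerically zero (~1e-13)
```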
So, the Hamiltonian \(H_{\alpha,\beta}\) can also be written as: \[H_{\alpha,\beta}=\tilde{A}^{+}_{\alpha,\beta}\tilde{A}^{-}_{\alpha,\beta}+ \tilde{\mu}_{\beta}=A^{-}_{\alpha,\beta+2}A^{+}_{\alpha,\beta+2}+\mu_{\beta+2} \tag{2.6}\] where \[\tilde{A}^{\pm}_{\alpha,\beta}=\mp\frac{d}{dx}-\frac{\alpha}{\cosh\!2x}+(- \beta-1)\text{tanh}2x,\qquad\tilde{\mu}_{\beta}=-(-\beta-1)^{2} \tag{2.7}\] From (2.4) and (2.6), the following factorizations and intertwining relations are found: \[H_{\alpha,\beta}=A^{+}_{\alpha,\beta}A^{-}_{\alpha,\beta}+\mu_{\beta}=A^{-}_ {\alpha,\beta+2}A^{+}_{\alpha,\beta+2}+\mu_{\beta+2} \tag{2.8}\] \[A^{-}_{\alpha,\beta}H_{\alpha,\beta}=H_{\alpha,\beta-2}A^{-}_{\alpha,\beta}, \qquad A^{+}_{\alpha,\beta}H_{\alpha,\beta-2}=H_{\alpha,\beta}A^{+}_{\alpha,\beta} \tag{2.9}\] Thus, we have obtained a real hierarchy of Scarf II Hamiltonians \(H_{\alpha,\beta+2m}\,,\ m\in\mathbb{Z}\). The ground state for \(H_{\alpha,\beta}\) is given by \[A^{-}_{\alpha,\beta}\psi^{0}_{\alpha,\beta}=0,\qquad\psi^{0}_{\alpha,\beta}( x)=N_{0}\,e^{-\alpha\arctan(\tanh x)}(\cosh 2x)^{(-\beta+1)/2}\,,\quad\beta>1 \tag{2.10}\] where \(N_{0}\) is normalization constant depending on the values of \(\alpha\), \(\beta\) and the upper index of \(\psi^{0}_{\alpha,\beta}\) denotes the ground state. Having in mind that the Gudermannian \(\text{gd}\,x\) is defined by [16] \[\text{gd}\,x=2\arctan\Big{(}\tanh\frac{1}{2}x\Big{)},\qquad\text{gd}^{ \prime}x=\text{sech}x\] then, the ground state wavefunction can be re-expressed as \[\psi^{0}_{\alpha,\beta}(x)=N_{0}\,e^{-\frac{\alpha}{2}\,\text{gd}\,2x}(\cosh 2 x)^{(-\beta+1)/2}\,,\quad\beta>1\] The parameter \(\beta\) must satisfy \(\beta>1\), in order \(\psi^{0}_{\alpha,\beta}\) be square integrable. The factorization energy \(\mu_{\beta}\) corresponds to the ground state energy: \(E^{0}_{\alpha,\beta}=\mu_{\beta}=-(\beta-1)^{2}\). According to the intertwining (2.9), the action of the real shift operators on the \(n\)th-excited state \(\psi^{n}_{\alpha,\beta}\) of \(H_{\alpha,\beta}\) is given by: \[A^{+}_{\alpha,\beta+2}\psi^{n}_{\alpha,\beta}\propto\psi^{n+1}_{\alpha,\beta+ 2},\qquad A^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\propto\psi^{n-1}_{ \alpha,\beta-2} \tag{2.11}\] where \(n\) denotes the excitation level of the state and \(n=0,1,2,\dots\). Then, any excited state can be obtained by the iterative action of the kind of operators \(A^{+}_{\alpha,\beta}\) on the ground state eigenfunctions: \[A^{+}_{\alpha,\beta}\dots A^{+}_{\alpha,\beta-2n+2}\psi^{0}_{\alpha,\beta-2n} \propto\psi^{n}_{\alpha,\beta},\qquad E^{n}_{\alpha,\beta}=E^{0}_{\alpha, \beta-2n} \tag{2.12}\] under the condition \(\beta-2n-1>0\). The \(n\)th excited state solution in terms of Jacobi polynomials for the real case can be found, for instance in [1, 17]. It has the form: \[\psi^{n}_{\alpha,\beta}(x)=N_{n}\,i^{n}(1+y^{2})^{-s/2}e^{-\lambda\tan^{-1}y}P ^{(i\lambda-s-1/2,-i\lambda-s-1/2)}_{n}(iy),\qquad y=\sinh 2x \tag{2.13}\] where \(N_{n}\) is normalization constant, \(P_{n}^{(\mu,\nu)}\) is for a Jacobi polynomial and \(s=\frac{\beta-1}{2}\), \(\lambda=\frac{\alpha}{2}\) depend on the potential parameters [18]. The corresponding energies and the conditions for bound states on the \(\beta\)-parameter are given by \[E_{\alpha,\beta}^{n}=-(\beta-1-2n)^{2},\quad n=0,1,\ldots,n_{\max},\quad n_{ max}=[\frac{\beta-1}{2}]\;\;\mbox{and}\quad\beta>1,\;\beta-2n_{\max}-1>0 \tag{2.14}\] From Fig. 
1 (left) it can be appreciated that the parameter \(\beta\) determines the depth of the potential and so the number of the bound states. However, the effect of \(\alpha\), Fig. 1 (right), seems to have no significative influence on the depth (in agreement with (2.14)). In Fig. 2, we have shown the spectrum of real Scarf II potential for some fixed values of the parameters. Remark that due to the symmetry \(\alpha\to\tilde{\alpha}=-\alpha\), \(\beta\to\tilde{\beta}=-\beta\), if \(\beta>1\) then \(\tilde{\beta}<-1\). Thus, we can define the "left" ground states for \(H_{\tilde{\alpha},\tilde{\beta}}\) by \[A_{\tilde{\alpha},\tilde{\beta}+2}^{+}\tilde{\psi}_{\tilde{\alpha},\tilde{ \beta}}^{0}=0,\qquad\tilde{\psi}_{\tilde{\alpha},\tilde{\beta}}^{0}(x)=N_{0}\, e^{\frac{\tilde{\alpha}}{2}\,\mathrm{g}\,\mathrm{d}\,2x}(\cosh 2x)^{(\tilde{\beta}+1)/2} \tag{2.15}\] Square integrability of the ground state \(\tilde{\psi}_{\tilde{\alpha},\tilde{\beta}}^{0}\), makes necessary that \(\tilde{\beta}<-1\). The factorization energy corresponding to the ground state energy is \(\tilde{E}_{\tilde{\alpha},\tilde{\beta}}^{0}=\mu_{\tilde{\beta}+2}=-(\tilde{ \beta}+1)^{2}\). The excited states can be obtained by the action of \(A^{-}_{\tilde{\alpha},\tilde{\beta}}\) on the ground state eigenfunctions: \[A^{-}_{\tilde{\alpha},\tilde{\beta}+2}\cdots A^{-}_{\tilde{\alpha},\tilde{\beta}+2 n}\tilde{\psi}^{0}_{\tilde{\alpha},\tilde{\beta}+2n}\propto\tilde{\psi}^{n}_{ \tilde{\alpha},\tilde{\beta}}\,,\qquad\tilde{\beta}+2n<-1 \tag{2.16}\] \[\tilde{E}^{n}_{\tilde{\alpha},\tilde{\beta}}=\tilde{E}^{0}_{\tilde{\alpha}, \tilde{\beta}+2n}=-(\tilde{\beta}+2n+1)^{2}\] Therefore, we have a complete symmetry between the hierarchy \(\alpha,\beta>1\) and \(\tilde{\alpha}=-\alpha,\tilde{\beta}=-\beta\): \[E^{0}_{\alpha,\beta}=\tilde{E}^{0}_{\tilde{\alpha},\tilde{\beta}},\quad E^{n} _{\alpha,\beta}=\tilde{E}^{n}_{\tilde{\alpha},\tilde{\beta}} \tag{2.17}\] The symmetry is implemented to the eigenstates: \[\tilde{\psi}^{n}_{\tilde{\alpha},\tilde{\beta}}=\psi^{n}_{\alpha,\beta}\qquad \beta-2n>1,\quad\tilde{\beta}+2n<-1 \tag{2.18}\] In summary, bound states will be present for \(\beta>1\) or for \(\tilde{\beta}<-1\). The spectrum and eigenfunctions for one of these cases are obtained from the other one by a symmetry transformation. ### Complex factorizations The Hamiltonian (2.2) at the same time admits a complex factorization. As in the real case, here also there are two types of factorization due to a reflection symmetry of the potential involving both parameters: \[\begin{array}{c}a)\qquad\quad\alpha\to i\beta,\qquad\quad\beta\to-i\alpha\\ b)\qquad\quad\alpha\to-i\beta,\quad\quad\beta\to i\alpha\end{array} \tag{2.19}\] From the first factorization (a), we get the first order diferential operators \(C^{\pm}_{\alpha,\beta}\) and the Hamiltonian rewriten as: \[C^{\pm}_{\alpha,\beta}=\mp\frac{d}{dx}+\frac{i\,\beta}{\mbox{cosh2}x}-i(\alpha -i)\mbox{tanh2}x,\qquad\mu_{\alpha}=(\alpha-i)^{2} \tag{2.20}\] \[H_{\alpha,\beta}=C^{+}_{\alpha,\beta}C^{-}_{\alpha,\beta}+\mu_{\alpha} \tag{2.21}\] where \(\mu_{\alpha}\) has a complex value. 
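The complex factorization (2.20)-(2.21) can be verified in the same way as the real one: with \(W_{c}(x)=i\beta/\cosh 2x-i(\alpha-i)\tanh 2x\) one has \(C^{+}_{\alpha,\beta}C^{-}_{\alpha,\beta}=-d^{2}/dx^{2}+W_{c}^{2}-W_{c}^{\prime}\), and \(W_{c}^{2}-W_{c}^{\prime}+\mu_{\alpha}\) must reproduce the real potential \(V_{\alpha,\beta}(x)\), the imaginary parts cancelling identically. A brief numerical check of this, again for illustrative parameter values and written by us purely as a sanity test, is:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x', real=True)
alpha, beta = 3.0, 5.3                                            # illustrative parameter values

Wc = sp.I * beta / sp.cosh(2 * x) - sp.I * (alpha - sp.I) * sp.tanh(2 * x)  # superpotential of C(+/-)
mu_alpha = (alpha - sp.I) ** 2                                    # complex factorization energy mu_alpha

pot = Wc ** 2 - sp.diff(Wc, x) + mu_alpha                         # potential part of C+ C- + mu_alpha
V = (alpha ** 2 - beta ** 2 + 1 + 2 * alpha * beta * sp.sinh(2 * x)) / sp.cosh(2 * x) ** 2

residual = sp.lambdify(x, pot - V, 'numpy')
xs = np.linspace(-3.0, 3.0, 101)
print(np.max(np.abs(residual(xs))))                               # numerically zero: imaginary parts cancel
```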
By means of the second symmetry we get the second complex factorization \[\tilde{C}^{\pm}_{\alpha,\beta}=\mp\frac{d}{dx}-\frac{i\,\beta}{\mbox{cosh2}x} +i(\alpha+i)\mbox{tanh2}x,\qquad\tilde{\mu}_{\alpha}=(\alpha+i)^{2} \tag{2.22}\] \[H_{\alpha,\beta}=\tilde{C}^{+}_{\alpha,\beta}\tilde{C}^{-}_{\alpha,\beta}+ \tilde{\mu}_{\alpha}=C^{-}_{\alpha+2i,\beta}C^{+}_{\alpha+2i,\beta}+\mu_{ \alpha+2i} \tag{2.23}\] It is easy to conclude from (2.20) that the operators \(C^{\pm}_{\alpha,\beta}\) are not adjoint each other: \((C^{+}_{\alpha,\beta})^{\dagger}\neq C^{-}_{\alpha,\beta}\) but, \[(C^{+}_{\alpha,\beta})^{\dagger}=(C^{-}_{\alpha,\beta})^{*} \tag{2.24}\] A relation that will be used to prove orthogonality relations later on. Now, comparing (2.21) and (2.23), we are able to express the factorizations and intertwining relations as \[H_{\alpha,\beta}=C^{+}_{\alpha,\beta}C^{-}_{\alpha,\beta}+\mu_{\alpha}=C^{-}_{ \alpha+2i,\beta}C^{+}_{\alpha+2i,\beta}+\mu_{\alpha+2i} \tag{2.25}\] \[C^{-}_{\alpha,\beta}H_{\alpha,\beta}=H_{\alpha-2i,\beta}C^{-}_{\alpha,\beta}, \qquad C^{+}_{\alpha,\beta}H_{\alpha-2i,\beta}=H_{\alpha,\beta}C^{+}_{\alpha,\beta} \tag{2.26}\] In this way, we arrive at the hierarchy of complex Hamiltonians \(H_{\alpha+2ik,\beta}\), \(k\in\mathbb{Z}\). We try to find a ground state \(\phi^{0}_{\alpha,\beta}\) in the usual way by means of shift operators \(C^{\pm}_{\alpha,\beta}\). So, we have the following equations to solve: \[C^{-}_{\alpha,\beta}\phi^{0}_{\alpha,\beta}=0\,,\qquad C^{+}_{\alpha+2i,\beta} \phi^{0}_{\alpha,\beta}=0 \tag{2.27}\] However, the solutions obtained from (2.27) are not square integrable. Thus, we conclude that there is no ground state energy solution annihilated by complex shift operators \(C^{\pm}_{\alpha,\beta}\) and the supersymmetry is spontaneously broken [1, 15]. Nevertheless, in order to find the eigenfunctions and eigenvalues of a Hamiltonian in the complex hierarchy, for example \(H_{\alpha-2i,\beta}\), we can use the solutions of the real Hamiltonian \(H_{\alpha,\beta}\). Applying the shift (intertwining) operators \(C^{\pm}_{\alpha,\beta}\) on these real solutions (\(\psi^{n}_{\alpha,\beta}\)), we get the bound state solutions of the complex Hamiltonian: \[C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\propto\psi^{n}_{\alpha-2i,\beta}, \qquad C^{+}_{\alpha,\beta}\psi^{n}_{\alpha-2i,\beta}\propto\psi^{n}_{\alpha,\beta} \tag{2.28}\] where \(n=0,1,2,\ldots,n_{max}\) and the maximum value of \(n\) depends on the value of \(\beta\) as in the real case: \(n_{max}=[\frac{\beta-1}{2}]\), \(\beta>1\). These complex solutions \(\psi^{n}_{\alpha-2i,\beta}\) have the same real eigenvalue as the initial real solutions \(\psi^{n}_{\alpha,\beta}\): \[E^{n}_{\alpha,\beta}=E^{n}_{\alpha-2i,\beta} \tag{2.29}\] We have checked that indeed these complex solutions are square integrable, which can be proved from the expression (2.20) of \(C^{\pm}_{\alpha,\beta}\). Some of them are represented in Fig. 4. The same happens for the rest of complex Hamiltonians \(H_{\alpha+2ki,\beta}\), for \(k=0,\pm 1,\pm 2,\ldots\). They are isospectral to the real \(H_{\alpha,\beta}\), although their solutions are complex. 
For example, the \(n\)th excited state solution \(\psi^{n}_{\alpha-2ki,\beta}\), with \(k\in\mathbb{Z}^{+}\), is found by acting with \(C^{-}_{\alpha,\beta}\) as usual \[C^{-}_{\alpha-(2k-2)i,\beta}\ldots C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\propto\psi^{n}_{\alpha-2ki,\beta}\,,\qquad E^{n}_{\alpha,\beta}=E^{n}_{\alpha-2ki,\beta} \tag{2.30}\] The complex solutions \(\psi^{n}_{\alpha-2ki,\beta}\), \(n=0,1\ldots n_{\max}\) of the complex Hamiltonians \(H_{\alpha-2ki,\beta}\) are not orthogonal in the usual sense, but they are with respect to another type of product. Let us check, for example, that two eigenfunctions \(\psi^{n_{j}}_{\alpha-2i,\beta}\), \(n_{j}=n_{1},n_{2}\) with \(n_{1}\neq n_{2}\) are orthogonal in the following sense. Define the product by (see [19, 20]) \[\langle\psi^{n_{1}}_{\alpha-2i,\beta},\psi^{n_{2}}_{\alpha-2i,\beta}\rangle:=\int_{-\infty}^{\infty}\psi^{n_{1}}_{\alpha-2i,\beta}\,\psi^{n_{2}}_{\alpha-2i,\beta}\,dx\propto\int_{-\infty}^{\infty}C^{-}_{\alpha,\beta}\psi^{n_{1}}_{\alpha,\beta}\,C^{-}_{\alpha,\beta}\psi^{n_{2}}_{\alpha,\beta}\,dx \tag{2.31}\] Next, in this product, from the definition of \(C^{\pm}_{\alpha,\beta}\) and the property (2.24), the last integral is \[\langle\psi^{n_{1}}_{\alpha-2i,\beta},\psi^{n_{2}}_{\alpha-2i,\beta}\rangle\propto\int_{-\infty}^{\infty}C^{+}_{\alpha,\beta}C^{-}_{\alpha,\beta}\psi^{n_{1}}_{\alpha,\beta}\,\psi^{n_{2}}_{\alpha,\beta}\,dx=\int_{-\infty}^{\infty}\psi^{n_{1}}_{\alpha,\beta}\,C^{+}_{\alpha,\beta}C^{-}_{\alpha,\beta}\psi^{n_{2}}_{\alpha,\beta}\,dx \tag{2.32}\] Thus, taking into account (2.21), we get \[\langle\psi^{n_{1}}_{\alpha-2i,\beta},\psi^{n_{2}}_{\alpha-2i,\beta}\rangle\propto(E^{n_{1}}_{\alpha,\beta}-\mu_{\alpha})\int_{-\infty}^{\infty}\psi^{n_{1}}_{\alpha,\beta}\,\psi^{n_{2}}_{\alpha,\beta}\,dx=(E^{n_{2}}_{\alpha,\beta}-\mu_{\alpha})\int_{-\infty}^{\infty}\psi^{n_{1}}_{\alpha,\beta}\,\psi^{n_{2}}_{\alpha,\beta}\,dx \tag{2.33}\] From the formula (2.33), we conclude that this equality is satisfied only if \[\langle\psi^{n_{1}}_{\alpha-2i,\beta},\psi^{n_{2}}_{\alpha-2i,\beta}\rangle=0 \tag{2.34}\] We have checked numerically that, indeed, these complex eigenfunctions are orthogonal in this sense. As a consequence, the "norm" of \(\psi^{n}_{\alpha-2i,\beta}:=C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\) is given by \[\langle C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\,,C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\rangle=(E^{n}_{\alpha,\beta}-\mu_{\alpha})\langle\psi^{n}_{\alpha,\beta}\,,\psi^{n}_{\alpha,\beta}\rangle\neq 0 \tag{2.35}\] Figure 3: Plot of the real part (left) and the imaginary part (right) of the complex Scarf II potential for different values of the parameters: \(\beta=5.3,\alpha_{1}=3,\alpha_{2}=3+2i,\alpha_{3}=3+4i\). These are the complex potentials of the complex hierarchy corresponding to \(\beta=5.3\). ## 3 Algebras of the operators ### Algebra of real factorizations The real shift operators \(A^{\pm}_{\alpha,\beta}\) together with an additional "diagonal" operator \(A^{0}\), close the \(su(1,1)\) Lie algebra [21, 22].
In order to see this property, we rewrite (2.8) in the form: \[A^{-}_{\alpha,\beta+2}A^{+}_{\alpha,\beta+2}-A^{+}_{\alpha,\beta}A^{-}_{ \alpha,\beta}=-\mu_{\beta+2}+\mu_{\beta}=4\beta \tag{3.36}\] Next, let us define the natural operators (without indices) \(A^{\pm}\) and \(A^{0}\) acting on the eigenfunctions \(\psi^{n}_{\alpha,\beta}\) of any Hamiltonian \(H_{\alpha,\beta}\) in the form \[A^{-}\psi^{n}_{\alpha,\beta}:=\frac{1}{2}A^{-}_{\alpha,\beta}\psi^{n}_{\alpha, \beta}\propto\psi^{n-1}_{\alpha,\beta-2},\quad A^{+}\psi^{n}_{\alpha,\beta}:= \frac{1}{2}A^{+}_{\alpha,\beta+2}\psi^{n}_{\alpha,\beta}\propto\psi^{n+1}_{ \alpha,\beta+2},\quad A^{0}\psi^{n}_{\alpha,\beta}:=\frac{1}{2}\beta\psi^{n}_{ \alpha,\beta} \tag{3.37}\] They satisfy the \(su(1,1)\) algebra, where in the following expressions it is assumed that they are acting on an eigenfunction \(\psi^{n}_{\alpha,\beta}\): \[[A^{-},A^{+}]=2A^{0},\qquad[A^{0},A^{\pm}]=\pm A^{\pm} \tag{3.38}\] In this case, the Casimir operator is \[\mathcal{C}_{\text{su}(1,1)}=A^{+}A^{-}-A^{0}(A^{0}-1)=A^{-}A^{+}-A^{0}(A^{0}+1) \tag{3.39}\] If we act \(\mathcal{C}_{\text{su}(1,1)}\) on the fundamental state (2.10), \(\psi^{0}_{\alpha,\beta}(x)\), we get \[\mathcal{C}_{\text{su}(1,1)}=-\frac{\beta}{2}(\frac{\beta}{2}-1):=-\nu(\nu-1), \qquad\nu=\beta/2,\quad\beta>1 \tag{3.40}\] This is a lowest bounded \(\nu\)-representation of \(su(1,1)\) and the support space is spanned by the eigenfunctions \(\psi^{n}_{\alpha,\beta+2n}(x)\), \(n=0,1,2,\dots\) Notice that according to (2.4), and the definition (3.37), \[H\,\psi^{n}_{\alpha,\beta}:=H_{\alpha,\beta}\,\psi^{n}_{\alpha,\beta}=(4A^{+}A^ {-}-(\beta-1)^{2})\psi^{n}_{\alpha,\beta} \tag{3.41}\] Comparing with the Casimir operator (3.39) and taking into account the definition of \(A^{0}\) in (3.37) we have \[\mathcal{C}_{\text{su}(1,1)}=\frac{1}{4}(H+1),\quad\beta>1 \tag{3.42}\] Therefore, the value of the Hamiltonian on the eigenfunctions \(\psi^{n}_{\alpha,\beta+2n}(x)\) is related to the value of the Casimir by (3.42) and it is given by (3.40). In the case \(\tilde{\beta}<-1\), if we apply the Casimir (3.39) on the fundamental state (2.15), \(\tilde{\psi}^{0}_{\tilde{\alpha},\tilde{\beta}}(x)\) we find \[\mathcal{C}_{\text{su}(1,1)}=-\frac{\tilde{\beta}}{2}(\frac{\tilde{\beta}}{2} +1)=-\frac{\beta}{2}(\frac{\beta}{2}-1):=-\nu(\nu-1),\qquad\nu=-\tilde{\beta}/ 2,\quad\tilde{\beta}<-1 \tag{3.43}\] This is an upper bounded \(su(1,1)\) representation with the same Casimir eigenvalue as the previous lowest representation. In conclusion, positive values of \(\beta>1\) give rise to a lowest bounded representation while the opposite values \(\tilde{\beta}=-\beta<-1\) lead to upper bounded \(su(1,1)\) representations with the same Casimir and Hamiltonian eigenvalues. ### Algebra of complex factorizations For the complex case, the complex shift operators also close an algebra together with an additional diagonal operator. 
In order to see this property, we have to define again the operators \(C^{\pm}\), \(C^{0}\) having in mind (2.29): \[C^{-}\psi^{n}_{\alpha,\beta}:=\frac{1}{2}C^{-}_{\alpha,\beta}\psi^{n}_{\alpha,\beta}\propto\psi^{n}_{\alpha-2i,\beta},\quad C^{+}\psi^{n}_{\alpha,\beta}:= \frac{1}{2}C^{+}_{\alpha,\beta+2}\psi^{n}_{\alpha,\beta}\propto\psi^{n}_{ \alpha+2i,\beta+2},\quad C^{0}\psi^{n}_{\alpha,\beta}:=\frac{1}{2}\alpha\psi^ {n}_{\alpha,\beta} \tag{3.44}\] They satisfy the following algebra (which will also correspond to \(su(1,1)\)): \[[C^{-},C^{+}]=2iC^{0},\qquad[C^{0},C^{\pm}]=\pm\,iC^{\pm} \tag{3.45}\] These are not the standard commutation relations of \(su(1,1)\)[21, 22, 23]. However, in fact, they can be identified with the Lie algebra \(su(1,1)\) in another basis. In the case of real factorizations the commutators (3.38) correspond to the case where the diagonal operator \(A^{0}\) represents a compact generator (of trigonometric rotations) while in the case of complex factorizations the commutators (3.45) apply when the diagonal operator \(C^{0}\) represents a noncompact operator (generating hyperbolic transformations). The Casimir operator of the algebra (3.45) is \[\mathcal{C}_{\text{su}(1,1)}=C^{+}C^{-}+C^{0}(C^{0}-i) \tag{3.46}\] then, from (2.25) we get \[\mathcal{C}_{\text{su}(1,1)}=\frac{1}{4}(H+1) \tag{3.47}\] which is the same value as in the real \(su(1,1)\) algebra (3.42). ### Complete algebra of Scarf II factorizations At the same time, it can be easily checked that the real shift operators \(A^{\pm},A^{0}\) and the complex shift operators \(C^{\pm},C^{0}\) commute with each other, \[[C^{i},A^{j}]=0\,,\qquad i,j=\pm,\ 0\] when they act on any eigenfunction \(\psi^{n}_{\alpha+2ki,\beta+2m}\) of the hierarchy, and therefore the total shift (or potential) algebra for the double hierarchy is \(su(1,1)\oplus su(1,1)\). As mentioned above, the factorization operators (\(A^{\pm}\) and \(C^{\pm}\)) do not correspond to the same basis of the \(su(1,1)\) algebra, this is the reason why one pair (\(A^{\pm}\)) produces real and the other (\(C^{\pm}\)) complex factorizations. An important point is that their corresponding representations have the same Casimir eigenvalue, what coincides with the more familiar hyperbolic Poschl-Teller potentials whose Lie algebra is also \(su(1,1)\oplus su(1,1)\), but in that case the factorization operators correspond to the same \(su(1,1)\)-basis, which lead to two real factorizations [24, 25]. We have also seen clearly the action of real \(A^{\pm}\) and complex \(C^{\pm}\) operators on the eigenfunctions of the Hamiltonians for the whole hierarchy in Fig. 5. The eigenfunctions of horizontal Hamiltonians with the same energy are connected by \(A^{\pm}\) and the vertical Hamiltonians connected by \(C^{\pm}\) are isospectral to real Hamiltonians (\(H_{\alpha,\beta}\)). The general Hamiltonians and the potentials in this hierarchy are given by \[H_{\alpha+2ki,\beta+2m}(x)=-\frac{d^{2}}{dx^{2}}+V_{\alpha+2ki,\beta+2m}(x) \tag{3.48}\] \[V_{\alpha+2ki,\beta+2m}(x)=\frac{(\alpha+2ki)^{2}-(\beta+2m)^{2}+1+2(\alpha+2 ki)(\beta+2m)\mbox{sinh}2x}{(\mbox{cosh}2x)^{2}} \tag{3.49}\] Figure 5: Schematic drawing of hierarchies of the real and the complex Scarf II potential Hamiltonians. where \(m,k\in\mathbb{Z}\). 
These potentials can be written in a explicit complex form as: \[V_{\alpha+2ki,\beta+2m}(x)=\frac{\alpha^{2}-(2k)^{2}+(\beta+2m)^{2}+1+2\alpha( \beta+2m){\rm sinh}2x}{({\rm cosh}2x)^{2}}+i\left(\frac{4\alpha k+4k(\beta+2m){ \rm sinh}2x}{({\rm cosh}2x)^{2}}\right) \tag{3.50}\] From here, we see that only if \(k=0\), the potentials are real. This property is also displayed in Fig. 3, where each vertical or horizontal lines represent other (complex) one-parameter hierarchies. We notice that real and complex part of (3.50) have the form of real Scarf II potentials. We can also see directly from (3.50) that this is not a PT-invariant potential (unless for the case where the coefficient \(2\alpha(\beta+2m)\) be zero). If \(\alpha=0\), this potential corresponds to complexified Scarf II potential given in the literature [12]. So, the potential given by (3.50) is a general form of Scarf II potential which includes both real and complexified form. We note that for the general case of the potential given in (1.1), the change of the parameters \(\alpha\) and \(\beta\) instead of being \(\Delta\alpha=\pm 2i\) and \(\Delta\beta=\pm 2\), will be \(\Delta\alpha=\pm\gamma i\) and \(\Delta\beta=\pm\gamma\). In this case, this potential have the same type of factorizations properties. ## 4 Conclusions In this work, we have described a kind of special isospectral factorizations of real Hamiltonians corresponding to Scarf II potential which arises as a companion of the standard real factorization. This second factorizations are complex: the parameters of their potentials in this hierarchy vary in imaginary values. We have remarked that in this respect Scarf II potential is unique among the shape invariant factorizable potentials (for example, given in table 4 of [1]). The obtained complex Hamiltonian hierarchies have a real spectrum; the supersymmetry is spontaneously broken and the potentials are not PT-symmetric. The eigenfunctions and spectrum of all the complex potentials are derived from the real potential hierarchy. These relations are illustrated in Fig. 5. This type of complex potentials are different from the wide family of complexified Scarf II potential, although it might be included in some of them. We point out that the total shift (or potential) algebra for this problem is the direct sum \(su(1,1)\oplus su(1,1)\). This is due to the fact that the complex \(C^{\pm}\) and real \(A^{\pm}\) shift operators commute. This leads to a global two-dimensional lattice of parameters \((\alpha+2ki,\beta+2m)\), where \(m,k\in\mathbb{Z}\), where each point represent a Hamiltonian and a horizontal (or vertical) line represents a Hamiltonian hierarchy. Any two Hamiltonians in this global lattice can be linked by a product of complex and real shift operators. ## Acknowledgements This work was supported by MCIN, Spain with funding from the European Union NexGenerationEU (PRTRC17.I1) and the Consejeria de Educacion de la Junta de Castilla y Leon, Spain, Project QCAYLE and the MCIN Project PID2020-113406GB-I00 of Spain. L. Acevedo acknowledges financial support from Doctorate ProgramFunds No. UVa2023 and Banco Santander. Y. C. Acar acknowledges TUBITAK 2210/A National MSc Scholarship Program. We thank Dr. Javier Negro for helpful comments and suggestions. Data Availability Statement: No Data associated in the manuscript.
2309.09411
Distributionally Time-Varying Online Stochastic Optimization under Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk Statistical Learning
In this work, we consider a sequence of stochastic optimization problems following a time-varying distribution via the lens of online optimization. Assuming that the loss function satisfies the Polyak-Łojasiewicz condition, we apply online stochastic gradient descent and establish its dynamic regret bound that is composed of cumulative distribution drifts and cumulative gradient biases caused by stochasticity. The distribution metric we adopt here is Wasserstein distance, which is well-defined without the absolute continuity assumption or with a time-varying support set. We also establish a regret bound of online stochastic proximal gradient descent when the objective function is regularized. Moreover, we show that the above framework can be applied to the Conditional Value-at-Risk (CVaR) learning problem. Particularly, we improve an existing proof on the discovery of the PL condition of the CVaR problem, resulting in a regret bound of online stochastic gradient descent.
Yuen-Man Pun, Farhad Farokhi, Iman Shames
2023-09-18T00:47:08Z
http://arxiv.org/abs/2309.09411v1
Distributionally Time-Varying Online Stochastic Optimization under Polyak-Lojasiewicz Condition with Application in Conditional Value-at-Risk Statistical Learning+ ###### Abstract In this work, we consider a sequence of stochastic optimization problems following a time-varying distribution via the lens of online optimization. Assuming that the loss function satisfies the Polyak-Lojasiewicz condition, we apply online stochastic gradient descent and establish its dynamic regret bound that is composed of cumulative distribution drifts and cumulative gradient biases caused by stochasticity. The distribution metric we adopt here is Wasserstein distance, which is well-defined without the absolute continuity assumption or with a time-varying support set. We also establish a regret bound of online stochastic proximal gradient descent when the objective function is regularized. Moreover, we show that the above framework can be applied to the Conditional Value-at-Risk (CVaR) learning problem. Particularly, we improve an existing proof on the discovery of the PL condition of the CVaR problem, resulting in a regret bound of online stochastic gradient descent. ## 1 Introduction In a stochastic optimization problem, one aims to make a decision by minimizing the expectation of a loss function following an unknown distribution, which can be approximated via sampling. As many problems in real world involve uncertain parameters, stochastic programming has been extensively applied to almost all areas of science and engineering [50], such as telecommunication [25], finance [54, 62], and marketing [47], just to name a few. Most works in stochastic programming study scenarios when the underlying distribution is stationary, which, nevertheless, may not apply to problems in dynamic environments. Examples include problems in finance and sociology where the expansion of economy and the evolution of demographics can significantly modify the underlying distributions. Another example is a source localization problem of a substance leakage or mitigating its effect, where the distribution of the substance changes in the space due to movements of the source, diffusion, or changes in the environment. A naive approach to solving the problem is to find a solution of a worst-case scenario of a set of distributions that contains the whole trajectory of the underlying distribution over time and use tools from the distributionally robust optimization (DRO) to solve it. DRO, which has been proved to be of extreme importance in machine learning [1, 51, 37, 24, 44], focuses on finding the solution of a worst-case scenario of a set of distributions (often known as ambiguity set) constructed near the empirical distribution and assumed to contain the true distribution [14, 22, 23, 9]; also see [48, 8, 18, 27, 29, 31, 46, 59] for different constructions of ambiguity sets. However, the solution in DRO is known to be very conservative, especially when the ambiguity set is large. As the underlying distribution may drift significantly over time and making the ambiguity set large, this approach may not be desirable by applying one solution to all possible distributions in the ambiguity set. Another approach is to view it as a sequence of stochastic optimization problems following a time-varying distribution over different time steps. This fits into an online optimization framework [45, 19, 2, 28], in which a decision maker makes a series of decision based on the observations at previous rounds. 
Recently, there have been works that interplay between online optimization and stochastic programming; see, for example, [49, 35, 17, 58, 57, 11, 30]. However, as far as we concerned, these works mostly consider sequences of convex loss functions, which may not be applicable to applications with nonconvex losses. Moreover, most works quantify the distribution change using the distance between optimal solutions at consecutive time steps, which is less intuitive as it involves the behavior of the loss function. Motivated by the above discussion, we consider a sequence of expectation loss minimization problems that satisfy the Polyak-Lojasiewicz (PL) condition. This class of functions, albeit not necessarily convex, satisfies certain quadratic growth condition, which is shown to be exhibited in a number of optimization problems [34, 42, 26]. We apply the online stochastic gradient descent to solve the problem and adopt the _dynamic regret_ to measure its performance, which evaluates the cumulative differences between the generated loss and the optimal loss at every time step [63, 10, 41, 20]. We establish a regret bound that makes explicit the dependence of the dynamic regret of online stochastic gradient descent on the cumulative distribution drifts and the gradient bias caused by the stochasticity. While a vast majority of works in online optimization literature bounds the dynamic regret in terms of the cumulative distances between optimal solutions at successive time steps, it is more natural to consider the cumulative distances between underlying distribution at successive time steps in the time-varying distribution setting. The distribution metric we adopt here is Wasserstein distance, which do away with the absolute continuity assumption on distributions at successive time steps, as needed for Kullback-Leibler (KL) divergence [16]. In addition, it is well-defined even when the support set is time-varying. Based on the above development, we further study a sequence of expectation loss minimization problems with a possibly nonsmooth regularizer that satisfies proximal Polyak-Lojasiewicz (proximal PL) condition. We apply the online stochastic proximal gradient descent and show a regret bound that is composed of the cumulative distribution drifts and the gradient bias caused by the stochasticity. Many applications benefit from the above framework. In particular, we apply it to the Conditional Value-at-Risk (CVaR) statistical learning problem, where the underlying distribution is time-varying. The CVaR problem focuses on making the best worst-case decision by minimizing the expected loss of the \(\alpha\cdot 100\%\) worst cases, for \(\alpha\in(0,1]\), which leads to a risk-averse solution. Such a solution is of particular interest in areas such as medicine, traffic and finance, when a poor solution can lead to a severe consequence. Based on the recent advances in the discovery of PL condition in the CVaR problem [33], we establish a regret bound of online stochastic gradient descent in a CVaR problem with a time-varying underlying distribution, which, as far as we know, has barely been investigated in the literature. Specifically, we show that the assumption imposed in [33] for establishing PL condition of a CVaR problem is impossible to achieve at its global optimum. Instead, we find a new non-empty subset that satisfies the PL condition while containing its global optimum. 
As long as the iterate lies within the subset at every time step, a regret bound of online stochastic gradient descent then follows from the said framework, which expands the repertoire of online robust optimization problems. ### Related Works Over the last two decades, online convex optimization has gained considerable interests in the machine learning community, for its simplicity and efficiency in dealing with large-scale data in real time. While the theory in online optimization is getting more understood, this provides a new tool in studying stochastic optimization with time-varying distribution using techniques from online optimization. For example, [11] studies the dynamic regret bound of online projected stochastic gradient descent when applied to a sequence of convex losses with a bounded convex feasible set. Assuming a prior knowledge on the temporal variations \(\tilde{\Delta}(T)\) of the underlying distribution, the work establishes a regret bound \(\mathcal{O}(\sqrt{T\tilde{\Delta}(T)})\), where \(T\) is the interested time of horizon. Another example is the recent work [17], which considers the error bounds of online proximal gradient descent when applied to a sequence of strongly convex loss functions, both in expectation and with high probability. The error bounds are shown to be composed of optimization error, gradient noise and time drifts. Beyond convexity, researchers have also explored the convergence online algorithms in solving sequences of loss functions satisfying PL condition. An earlier work [61] shows a regret bound of online multiple gradient descent with full-gradient information when solving a sequence of loss functions satisfying PL condition (or known as semi-strong convexity in the work), in which a regret bound in terms of cumulative path variations of optimal solutions is established. Recently, the work [35] studies the online gradient and proximal gradient methods when the loss functions satisfy PL condition and proximal PL condition, respectively. Assuming that the gradient is contaminated by a sub-Weibull noise, the paper shows regret bounds in expectation and with high probability iteration-wise that depend on the variability of the problem and the statistics of the sub-Weibull gradient error. A vast majority of works in online dynamic optimization capture the distribution drift via the distance between the optimal solutions of a particular loss function at consecutive time steps, which is less intuitive compared with other distribution metrics such as KL divergence and Wasserstein distance. An exception that we have noticed is the work [49], which shows a dynamic regret bound of online stochastic gradient descent that is composed of the cumulative Wasserstein distance between distributions at consecutive time steps when applied to a sequence of strongly convex loss functions. Yet, to the best of our knowledge, assumptions that are weaker than the strong convexity under this setting have not been studied in the literature. ### Notations The notation in the paper is mostly standard. We use \(\|\cdot\|_{1}\) and \(\|\cdot\|\) to denote the \(\ell_{1}\)-norm and Euclidean norm, respectively. We also use \(\operatorname{proj}_{X}(\cdot)\) to denote the mapping of projection over a set \(X\) and use \(\operatorname{sgn}(\cdot)\) to denote a sign function. Moreover, we use the operator \((\cdot)_{+}\) to denote the operation \((\cdot)_{+}=\max\{\cdot,0\}\). 
## 2 Online Stochastic Optimization under PL Condition ### Problem Formulation Given a loss function \(\mathcal{L}\colon\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{w}}\to\mathbb{R}\), we are interested in solving a sequence of minimization problems \[\min_{\mathbf{x}\in\mathbb{R}^{n_{x}}}\left[\mathcal{F}_{t}(\mathbf{x})\coloneqq \mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})\right] \tag{1}\] for \(t=1,\ldots,T\) and \(T\) being the horizon length. Here, \(\mathbf{x}\in\mathbb{R}^{n_{x}}\) is a decision variable and \(\mathbf{w}\in\mathbb{R}^{n_{w}}\) is a random parameter following an unknown distribution \(\mathbb{P}_{t}\) with probability measure \(\mathcal{P}^{t}\) on a probability space \(\Omega_{t}\subseteq\mathbb{R}^{n_{w}}\) at time \(t\) for \(t=1,\ldots,T\). Suppose that data are revealed in an online manner. Specifically, at each time step \(t\), after determining a decision variable \(\mathbf{x}_{t}\in\mathbb{R}^{n_{x}}\), a loss \(\mathcal{F}_{t}(\mathbf{x}_{t})\) is revealed. We then collect \(m\) samples \(\{\mathbf{w}_{i}^{t}\}_{i=1}^{m}\), which are drawn independently from the underlying distribution \(\mathbb{P}_{t}\), and use them to determine the decision variable \(\mathbf{x}_{t+1}\in\mathbb{R}^{n_{x}}\) at the next time step. Our goal is to minimize the cumulative loss induced by the decisions \(\mathbf{x}_{t}\) for \(t=1,\ldots,T\). This form of online stochastic optimization problem has broad applications in online learning, adaptive signal processing and online resource allocation, where decisions have to be made in real time and the underlying distribution is unknown and time-varying. **Assumption 1** (Lipschitzness and Differentiability of the Loss).: _Let \(\mathbf{x}\in\mathbb{R}^{n_{x}}\). For \(t=1,\ldots,T\), assume that the following holds:_ * \(\mathcal{L}(\mathbf{x},\cdot)\) _is measurable for every_ \(\mathbf{x}\in\mathbb{R}^{n_{x}}\)_;_ * \(\mathcal{F}_{t}(\mathbf{x})=\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})\) _is well-defined and finite valued;_ * _There exists a positive valued random variable_ \(C(\mathbf{w})\) _such that_ \(\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[C(\mathbf{w})]<\infty\)_, and for all_ \(\mathbf{x}_{1},\ \mathbf{x}_{2}\in\mathbb{R}^{n_{x}}\) _in a neighborhood of_ \(\mathbf{x}\) _and almost every_ \(\mathbf{w}\in\Omega_{t}\)_, the following inequality holds:_ \[|\mathcal{L}(\mathbf{x}_{1},\mathbf{w})-\mathcal{L}(\mathbf{x}_{2},\mathbf{w})|\leq C(\mathbf{w})\|\mathbf{x}_{1}-\mathbf{x}_{2}\|;\] * _For almost every_ \(\mathbf{w}\in\Omega_{t}\) _the function_ \(\mathcal{L}(\cdot,\mathbf{w})\) _is differentiable at_ \(\mathbf{x}\)_._ **Lemma 1** (Differentiability [50, Theorem 7.44]).: _Let \(\mathbf{x}\in\mathbb{R}^{n_{x}}\). Under Assumption 1, \(\mathcal{F}_{t}(\mathbf{x})\) is Lipschitz continuous in a neighborhood of \(\mathbf{x}\). Moreover, \(\mathcal{F}_{t}(\mathbf{x})\) is differentiable at \(\mathbf{x}\) and_ \[\nabla\mathcal{F}_{t}(\mathbf{x})=\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})].\] Assume that \(\mathcal{F}_{t}(\mathbf{x})=\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})\) is continuously differentiable and \(\mathcal{L}(\mathbf{x},\mathbf{w})\) is differentiable with respect to (wrt) \(\mathbf{x}\), \(\mathcal{P}^{t}\)-almost everywhere, for \(t=1,\ldots,T\).
Here, \(\mathcal{L}(\mathbf{x},\mathbf{w})\) is not necessarily differentiable everywhere, so a large class of loss functions can be included under this framework, for example, \(\mathcal{L}(\mathbf{x},\mathbf{w})=\mathbf{1}_{C(\mathbf{x})}(\mathbf{w})\) with some convex set \(C(\mathbf{x})\). For \(t=1,\ldots,T-1\), we update the estimate at time \(t+1\) via one-step stochastic gradient descent with step size \(\gamma_{t}>0\): \[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\gamma_{t}\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}), \tag{2}\] where \(\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})\approx\nabla\mathcal{F}_{t}(\mathbf{x})\) is some gradient approximation with \(\mathbb{E}[\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})]=\nabla\mathcal{F}_{t}(\mathbf{x})\). Different gradient approximations can be made in different contexts, usually by taking the average over a set of sampled gradients. However, in our setting, it is possible that given any \(\mathbf{w}\in\Omega_{t}\), \(\mathcal{L}(\mathbf{x},\mathbf{w})\) is non-differentiable at some \(\mathbf{x}\), for \(t=1,\ldots,T\). Hence, to make our statements precise, we introduce the following assumptions and definitions. **Assumption 2** (Bounded Support Set).: _Every underlying distribution \(\mathbb{P}_{t}\) has a bounded support set \(\Omega_{t}\), for \(t=1,\ldots,T\)._ Under Assumption 2, we define the Clarke subdifferential of \(\mathcal{L}\) wrt \(\mathbf{x}\) [40]: \[\partial_{C,\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})=\left\{\mathbf{s}\in\mathbb{R}^{n_{x}}\colon\mathbf{s}^{T}\mathbf{d}\leq\limsup_{\mathbf{x}^{\prime}\to\mathbf{x},t\searrow 0}\frac{\mathcal{L}(\mathbf{x}^{\prime}+t\mathbf{d},\mathbf{w})-\mathcal{L}(\mathbf{x}^{\prime},\mathbf{w})}{t}\ \text{ for all }\mathbf{d}\in\mathbb{R}^{n_{x}}\right\}.\] This set is a non-empty compact convex set [15, Definition (1.1)]. Given \(\mathbf{w}\in\Omega_{t}\), for \(t=1,\dots,T\), the Clarke subdifferential is a singleton with \(\partial_{C,\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})=\{\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})\}\) when \(\mathcal{L}\) is differentiable at \(\mathbf{x}\). Having a set of samples \(\{\mathbf{w}^{t}_{i}\}_{i=1}^{m}\) collected, a natural gradient approximation is \[\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m})=\frac{1}{m}\sum_{i=1}^{m}\mathbf{g}(\mathbf{x}_{t},\mathbf{w}^{t}_{i}) \tag{3}\] for some \(\mathbf{g}(\mathbf{x}_{t},\mathbf{w}^{t}_{i})\in\partial_{C,\mathbf{x}}\mathcal{L}(\mathbf{x}_{t},\mathbf{w}^{t}_{i})\). Nevertheless, there can be other possible candidates for gradient approximation, which we will see in Section 4. We assume that any gradient approximation candidate satisfies the following assumption.
**Assumption 3** (Moments of Gradient Approximation).: _For \(t=1,\dots,T\), the mean and variance of the gradient approximation \(\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x};\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m})\) satisfy_ \[\mathbb{E}_{\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m}}[\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x};\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m})]=\nabla\mathcal{F}_{t}(\mathbf{x})\] _and_ \[\mathbb{E}_{\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m}}[\|\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x};\mathbf{w}^{t}_{1},\dots,\mathbf{w}^{t}_{m})-\nabla\mathcal{F}_{t}(\mathbf{x})\|^{2}]\leq\sigma_{t}^{2}\] _for some \(\sigma_{t}>0\)._ To evaluate the performance of online SGD, we use the notion of regret: \[\text{Regret}(T)=\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}^{\tau}_{1},\dots,\mathbf{w}^{\tau}_{m}\}_{\tau=1}^{t-1}}[\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}^{*}_{t}], \tag{4}\] where \(\mathcal{F}^{*}_{t}=\min_{\mathbf{x}}\mathcal{F}_{t}(\mathbf{x})\). Moreover, we denote \(\mathbf{x}^{*}_{t}\in\arg\min_{\mathbf{x}}\mathcal{F}_{t}(\mathbf{x})\). The notion of regret is a standard performance metric in the online optimization literature [38], which measures the cumulative losses deviating from the cumulative optimal losses over all time steps. While the vast majority of existing works derive regret bounds via the dynamics of an optimal solution \(\mathbf{x}^{*}_{t}\) between successive time steps [45, 38, 7], our goal, instead, is to bound the regret in terms of the cumulative distribution drifts and the cumulative gradient error caused by stochasticity. Such a bound is more intuitive since it captures the impact of the distribution drifts on the regret. Another goal is to derive conditions that guarantee a sublinear regret bound (i.e., \(\text{Regret}(T)=o(T)\)); in other words, conditions under which the loss \(\mathcal{F}_{t}(\mathbf{x}_{t})\) gets asymptotically close to the optimal loss \(\mathcal{F}^{*}_{t}\) in the sense that \(\frac{1}{T}\text{Regret}(T)\to 0\). To characterize the distribution drifts, we employ the Wasserstein distance, which is defined below. **Definition 1** (Wasserstein Distance).: _Let \(\mathcal{M}(\mathbb{R}^{n_{w}})\) be the set of all probability distributions \(\mathbb{Q}\) on \(\mathbb{R}^{n_{w}}\) such that \(\mathbb{E}_{\xi\sim\mathbb{Q}}\{\|\xi\|\}<\infty\). For all \(\mathbb{P},\mathbb{Q}\in\mathcal{M}(\mathbb{R}^{n_{w}})\), the type-1 Wasserstein distance is defined as_ \[\mathfrak{M}(\mathbb{P},\mathbb{Q})\coloneqq\inf_{\Pi\in\mathcal{J}(\mathbb{P},\mathbb{Q})}\left\{\int_{\mathbb{R}^{n_{w}}\times\mathbb{R}^{n_{w}}}\|\xi_{1}-\xi_{2}\|\Pi(d\xi_{1},d\xi_{2})\right\}\] _where \(\mathcal{J}(\mathbb{P},\mathbb{Q})\) is the set of joint distributions on \(\xi_{1}\) and \(\xi_{2}\) with marginals \(\mathbb{P}\) and \(\mathbb{Q}\), respectively._ The Wasserstein distance, which arises from optimal transport, has gained a lot of attention in statistics and machine learning in the last decade; see, e.g., [36, 39]. Contrary to the Kullback-Leibler divergence, the Wasserstein distance is well-defined even when the support sets of the two distributions differ. This provides more flexibility in applications, since the support set may vary with time as well. In this work, we use the type-1 Wasserstein distance, which is also known as the Kantorovich metric, to perform the analysis. The distribution drifts can then be characterized via the Wasserstein distance in the following assumption.
**Assumption 4** (Bounded Distribution Drifts).: _For \(t=1,\dots,T-1\), the probability distributions at successive time steps vary slowly in the sense that_ \[\mathfrak{M}(\mathbb{P}_{t+1},\mathbb{P}_{t})\leq\eta_{t}\] _for some \(\eta_{t}>0\)._

### Performance Analysis

To analyze the performance of stochastic online gradient descent, we need a number of assumptions on the loss function, which are given in Assumptions 5-7. **Assumption 5** (Smoothness).: _Under Assumption 1, for \(t=1,\dots,T\), \(\mathcal{F}_{t}(\mathbf{x})=\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})\) is \(\beta\)-smooth; i.e., for any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n_{x}}\), it holds that_ \[\|\nabla\mathcal{F}_{t}(\mathbf{y})-\nabla\mathcal{F}_{t}(\mathbf{x})\|\leq\beta\|\mathbf{y}-\mathbf{x}\|.\] The smoothness property guarantees a quadratic upper approximation of the loss function at each point in the domain [6, Lemma 5.7]. This property, aka the descent lemma, is a key element in proving the descent of many gradient methods. **Lemma 2** (Descent Lemma).: _Under Assumptions 1 and 5, for every \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n_{x}}\) and \(\mathbf{z}\in[\mathbf{x},\mathbf{y}]\coloneqq\{(1-\gamma)\mathbf{x}+\gamma\mathbf{y}\colon\gamma\in[0,1]\}\), we have_ \[\mathcal{F}_{t}(\mathbf{y})\leq\mathcal{F}_{t}(\mathbf{x})+\langle\nabla\mathcal{F}_{t}(\mathbf{z}),\mathbf{y}-\mathbf{x}\rangle+\frac{\beta}{2}\|\mathbf{y}-\mathbf{x}\|^{2}.\] Moreover, we assume that \(\mathcal{F}_{t}\) satisfies the Polyak-Lojasiewicz condition. **Assumption 6** (Polyak-Lojasiewicz Condition).: _Under Assumption 1, for \(t=1,\dots,T\), \(\mathcal{F}_{t}(\mathbf{x})\) satisfies the Polyak-Lojasiewicz (PL) condition on a set \(\mathcal{X}\) with constant \(\mu\); i.e., for all \(\mathbf{x}\in\mathcal{X}\),_ \[\frac{1}{2}\|\nabla\mathcal{F}_{t}(\mathbf{x})\|^{2}\geq\mu(\mathcal{F}_{t}(\mathbf{x})-\mathcal{F}_{t}^{*}).\] The PL condition is known to be a simple condition that guarantees a global linear convergence rate for gradient descent in offline optimization [34]. Since it does not require convexity on the whole domain, it is gaining popularity especially in machine learning, where loss functions are generally non-convex; see, for example, [42, 43]. Although it is not clear how to check the PL condition of \(\mathcal{F}_{t}\) without knowing the true underlying distribution, there are scenarios in which the condition reduces to the PL condition of \(\mathcal{L}(\mathbf{x},\mathbf{w})\) at the mean of the parameter. For example, consider \(\mathcal{L}(\mathbf{x},\mathbf{w})=\frac{1}{2}\|g(\mathbf{x})-h(\mathbf{w})\|^{2}\) for some function \(g\colon\mathbb{R}^{n_{x}}\to\mathbb{R}^{p}\) and some affine function \(h\colon\mathbb{R}^{n_{w}}\to\mathbb{R}^{p}\). Then, since \(\nabla_{\mathbf{x}}\mathbb{E}_{\mathbf{w}}[\mathcal{L}(\mathbf{x},\mathbf{w})]=\nabla g(\mathbf{x})^{T}\left(g(\mathbf{x})-h(\mathbb{E}[\mathbf{w}])\right)=\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbb{E}[\mathbf{w}])\) and \(\mathbb{E}_{\mathbf{w}}[\mathcal{L}(\mathbf{x},\mathbf{w})]\) differs from \(\mathcal{L}(\mathbf{x},\mathbb{E}[\mathbf{w}])\) only by a constant independent of \(\mathbf{x}\), the PL condition of \(\mathbb{E}_{\mathbf{w}}\mathcal{L}(\mathbf{x},\mathbf{w})\) follows if it holds for \(\mathcal{L}(\mathbf{x},\mathbb{E}[\mathbf{w}])\). The PL condition, combined with the smoothness property in Assumption 5, yields two-sided bounds on the gap between the loss and its optimal value in terms of the distance of a point from the optimal set.
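As a concrete illustration of Assumption 6, the following sketch numerically checks the PL inequality for a rank-deficient least-squares loss \(\mathcal{F}(\mathbf{x})=\frac{1}{2}\|A\mathbf{x}-\mathbf{b}\|^{2}\), which satisfies the PL condition with \(\mu\) equal to the smallest nonzero eigenvalue of \(A^{T}A\) but is not strongly convex. The matrix \(A\), the vector \(\mathbf{b}\) and the sampled test points are illustrative choices and not part of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient least squares: F(x) = 0.5 * ||A x - b||^2 is PL but not strongly convex.
A = rng.standard_normal((20, 10)) @ np.diag([1.0] * 6 + [0.0] * 4)  # rank 6
b = rng.standard_normal(20)

def F(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

def grad_F(x):
    return A.T @ (A @ x - b)

# Optimal value via a least-squares solution.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
F_star = F(x_star)

# PL constant: smallest nonzero eigenvalue of A^T A.
eigvals = np.linalg.eigvalsh(A.T @ A)
mu = np.min(eigvals[eigvals > 1e-8])

# Check (1/2) * ||grad F(x)||^2 >= mu * (F(x) - F*) at random points.
violations = 0
for _ in range(1000):
    x = rng.standard_normal(10) * 10.0
    lhs = 0.5 * np.linalg.norm(grad_F(x)) ** 2
    rhs = mu * (F(x) - F_star)
    violations += lhs < rhs - 1e-9
print("PL violations out of 1000 points:", violations)  # expected: 0
```

A run of this sketch should report zero violations, matching the fact that the PL inequality holds globally for this loss.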
**Lemma 3** (Bounds on Losses).: _Under Assumptions 1, 5 and 6, for \(t=1,\dots,T\), the following holds for all \(\mathbf{x}\in\mathbb{R}^{n_{x}}\):_ \[\frac{\mu}{2}\|\mathbf{x}-\mathrm{proj}_{\mathcal{X}_{t}^{*}}(\mathbf{x})\|^{2}\leq\mathcal{F}_{t}(\mathbf{x})-\mathcal{F}_{t}^{*}\leq\frac{\beta}{2}\|\mathbf{x}-\mathrm{proj}_{\mathcal{X}_{t}^{*}}(\mathbf{x})\|^{2}, \tag{5}\] _where \(\mathcal{X}_{t}^{*}\) denotes the set of minimizers of \(\mathcal{F}_{t}\)._ Proof.: The first inequality follows from [34, Theorem 2 and Appendix A]. The second inequality is a direct consequence of (i) the descent lemma after taking expectation over \(\mathbf{w}\sim\mathbb{P}_{t}\), and (ii) putting \(\mathbf{y}=\mathbf{x}\), \(\mathbf{x}=\mathrm{proj}_{\mathcal{X}_{t}^{*}}(\mathbf{x})\) and \(\mathbf{z}=\mathrm{proj}_{\mathcal{X}_{t}^{*}}(\mathbf{x})\). The next assumption that we impose on \(\mathcal{L}\) is Lipschitzness wrt the second argument. **Assumption 7** (Lipschitzness wrt the Second Argument).: _Let \(\mathbf{x}\in\mathbb{R}^{n_{x}}\). \(\mathcal{L}\) is Lipschitz continuous wrt the second argument \(\mathbf{w}\in\mathbb{R}^{n_{w}}\); i.e., there exists a constant \(K_{w}(\mathbf{x})\) depending on \(\mathbf{x}\) such that_ \[|\mathcal{L}(\mathbf{x},\mathbf{w})-\mathcal{L}(\mathbf{x},\mathbf{w}^{\prime})|\leq K_{w}(\mathbf{x})\cdot\|\mathbf{w}-\mathbf{w}^{\prime}\|. \tag{6}\] _Moreover, we assume that the universal constant \(K\coloneqq\max_{\mathbf{x}}K_{w}(\mathbf{x})\) is finite, so that (6) holds uniformly with constant \(K\)._ Similarly, we can bound the difference between two successive loss function values at the same point. **Lemma 4** (Difference between Successive Loss Functions).: _Under Assumptions 4 and 7, we have_ \[|\mathcal{F}_{t+1}(\mathbf{x})-\mathcal{F}_{t}(\mathbf{x})|\leq K\eta_{t}.\] Proof.: The result directly follows from \[|\mathcal{F}_{t+1}(\mathbf{x})-\mathcal{F}_{t}(\mathbf{x})|=|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}\mathcal{L}(\mathbf{x},\mathbf{w})-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})|\leq K\cdot\mathfrak{M}(\mathbb{P}_{t},\mathbb{P}_{t+1})\leq K\eta_{t}.\] The last assumption on \(\mathcal{L}\) is concerned with the boundedness of the drift of the expectation of \(\nabla\mathcal{L}\) at consecutive time steps. **Assumption 8** (Shift of Partial Derivative).: _There exists an increasing function \(J\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that_ \[\|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]\|\leq J(\eta_{t}) \tag{7}\] _for all \(\mathbf{x}\in\mathbb{R}^{n_{x}}\), where \(\eta_{t}\) is the bound on the distribution drift as defined in Assumption 4._ **Remark 1**.: _Assumption 8 requires that the shift of the expectation of every partial derivative between successive time steps is bounded in terms of the Wasserstein distance between the two distributions. This can be satisfied when every partial derivative \(\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w})}{\partial x_{i}}\) (for \(i=1,\ldots,n_{x}\)) is Lipschitz continuous in \(\mathbf{w}\). Specifically, denote by \(\mathcal{C}(\mathbf{x})\) the set of \(\mathbf{w}\) at which \(\mathcal{L}(\cdot,\mathbf{w})\) is differentiable at \(\mathbf{x}\), and assume that \(\mathcal{P}^{t}(\mathcal{C}(\mathbf{x}))=1\) for all \(t\).
For any \(\mathbf{w},\mathbf{w}^{\prime}\in\mathcal{C}(\mathbf{x})\), there exists a constant \(L_{w}(\mathbf{x})\) depending on \(\mathbf{x}\) such that_ \[\left|\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w})}{\partial x_{i}}-\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w}^{\prime})}{\partial x_{i}}\right|\leq L_{w}(\mathbf{x})\cdot\|\mathbf{w}-\mathbf{w}^{\prime}\|\quad\mathrm{for}\ i=1,\ldots,n_{x}. \tag{8}\] _Assume that \(L\coloneqq\max_{\mathbf{x}}L_{w}(\mathbf{x})<\infty\). Then, using Kantorovich-Rubinstein duality [53], we have_ \[\|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]\| \leq\sum_{i=1}^{n_{x}}\left|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}\left[\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w})}{\partial x_{i}}\right]-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\left[\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w})}{\partial x_{i}}\right]\right|\] \[=\sum_{i=1}^{n_{x}}\inf_{\Pi\in\mathcal{J}(\mathbb{P}_{t+1},\mathbb{P}_{t})}\left|\int_{\mathcal{C}(\mathbf{x})}\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w})}{\partial x_{i}}-\frac{\partial\mathcal{L}(\mathbf{x},\mathbf{w}^{\prime})}{\partial x_{i}}d\Pi\right|\] \[\leq\sum_{i=1}^{n_{x}}\inf_{\Pi\in\mathcal{J}(\mathbb{P}_{t+1},\mathbb{P}_{t})}\int_{\mathcal{C}(\mathbf{x})}L\|\mathbf{w}-\mathbf{w}^{\prime}\|d\Pi\] \[\leq n_{x}L\eta_{t}.\] _However, we impose Assumption 8 instead of the Lipschitzness assumption (8) to gain some flexibility in the class of loss functions \(\mathcal{L}\). Under our setting, \(\mathcal{L}(\mathbf{x},\mathbf{w})\) can be non-differentiable at some point \(\mathbf{x}\). In this case, (8) may not hold but (7) may still hold. We will see an example in Section 4._ Define the distance between two sets \(\mathcal{X}\) and \(\mathcal{Y}\) by \[\mathrm{dist}(\mathcal{X},\mathcal{Y})=\inf_{\mathbf{x}\in\mathcal{X},\mathbf{y}\in\mathcal{Y}}\|\mathbf{x}-\mathbf{y}\|.\] We are now ready to characterize the path variations between minimizers at successive time steps. **Lemma 5** (Difference of Successive Optimal Values).: _Under Assumptions 1, 4, 6-8, the difference between the optimal loss values at successive time steps is upper bounded by_ \[\mathcal{F}_{t}^{*}-\mathcal{F}_{t+1}^{*}\leq K\eta_{t}+\frac{1}{2\mu}J(\eta_{t})^{2}\quad\mathrm{for}\ t=1,\ldots,T-1.\] Proof.: Applying Assumption 6 and using the fact that \(\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*})=\mathbf{0}\), we have \[\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{F}_{t+1}^{*}\leq\frac{1}{2\mu}\|\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})\|^{2}\leq\frac{1}{2\mu}\|\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*})\|^{2}. \tag{9}\] Moreover, applying Assumption 8, for all \(\mathbf{x}\in\mathbb{R}^{n_{x}}\), it holds that \[\|\nabla\mathcal{F}_{t+1}(\mathbf{x})-\nabla\mathcal{F}_{t}(\mathbf{x})\|^{2} =\|\nabla\left(\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}[\mathcal{L}(\mathbf{x},\mathbf{w})]\right)-\nabla\left(\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\mathcal{L}(\mathbf{x},\mathbf{w})]\right)\|^{2}\] \[=\|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\nabla\mathcal{L}(\mathbf{x},\mathbf{w})]\|^{2}\] \[\leq J(\eta_{t})^{2}.
\tag{10}\] Hence, using the triangle inequality and the result in Lemma 4, we have \[\mathcal{F}_{t}^{*}-\mathcal{F}_{t+1}^{*}=\mathcal{F}_{t}(\mathbf{x}_{t}^{*})-\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})+\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{F}_{t+1}(\mathbf{x}_{t+1}^{*})\leq K\eta_{t}+\frac{1}{2\mu}J(\eta_{t})^{2},\] as desired. **Remark 2**.: _From the proof of Lemma 5, we can also derive the distance between optimal sets at successive time steps. Specifically, let \(\mathcal{X}_{t}^{*}\) be the set of minimizers of (1) at time \(t\) for \(t=1,\ldots,T\). Using the result of Lemma 3 and the optimality of \(\mathbf{x}_{t}^{*}\), we have_ \[\mathrm{dist}(\mathcal{X}_{t}^{*},\mathcal{X}_{t+1}^{*})^{2}=\inf_{\mathbf{x}_{t}^{*}\in\mathcal{X}_{t}^{*},\mathbf{x}_{t+1}^{*}\in\mathcal{X}_{t+1}^{*}}\|\mathbf{x}_{t}^{*}-\mathbf{x}_{t+1}^{*}\|^{2}\leq\|\mathbf{x}_{t}^{*}-\mathrm{proj}_{\mathcal{X}_{t+1}^{*}}(\mathbf{x}_{t}^{*})\|^{2}\leq\frac{2}{\mu}(\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{F}_{t+1}^{*})\leq\frac{J(\eta_{t})^{2}}{\mu^{2}}.\] Armed with the above results, we are now ready to establish a regret bound of stochastic online gradient descent in distributionally time-varying online stochastic optimization. **Theorem 1** (Regret Bound).: _Suppose that Assumptions 1-8 hold and the step size satisfies \(\gamma_{t}\equiv\gamma\in(0,\min(1/\beta,1/(2\mu)))\). Let \(\zeta=-\frac{\gamma^{2}\beta}{2}+\gamma\). Then the regret can be upper bounded by_ \[\mathrm{Regret}(T)\leq\frac{1}{2\mu\zeta}(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+\frac{K}{\mu\zeta}\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{4\mu^{2}\zeta}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\frac{\gamma\beta}{2\mu}\sum_{t=1}^{T-1}\sigma_{t}^{2}. \tag{11}\] Proof.: Using Lemma 2, we have \[\mathcal{F}_{t}(\mathbf{x}_{t+1})-\mathcal{F}_{t}(\mathbf{x}_{t}) \leq\left\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t}\right\rangle+\frac{\beta}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\] \[=\left\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),-\gamma\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})\right\rangle+\frac{\gamma^{2}\beta}{2}\left\|\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})\right\|^{2}.\] Taking expectation of the above inequality wrt \(\{\mathbf{w}_{i}^{t}\}_{i=1}^{m}\) given \(\mathbf{x}_{t}\) yields \[\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}[\mathcal{F}_{t}(\mathbf{x}_{t+1})-\mathcal{F}_{t}(\mathbf{x}_{t})|\mathbf{x}_{t}]\] \[\leq-\gamma\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})\|^{2}+\frac{\gamma^{2}\beta}{2}\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}\left[\left\|\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})\right\|^{2}\right]\] \[=-\gamma\left(1-\frac{\gamma\beta}{2}\right)\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})\|^{2}+\frac{\gamma^{2}\beta}{2}\left(\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}\left[\left\|\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})\right\|^{2}\right]-\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})\|^{2}\right)\] \[=-\gamma\left(1-\frac{\gamma\beta}{2}\right)\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})\|^{2}+\frac{\gamma^{2}\beta}{2}\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}\left[\left\|\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t})-\nabla
\mathcal{F}_{t}(\mathbf{x}_{t})\right\|^{2}\right]\] \[\leq-\gamma\left(1-\frac{\gamma\beta}{2}\right)\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})\|^{2}+\frac{\gamma^{2}\beta}{2}\sigma_{t}^{2}. \tag{12}\] Now, writing \(\zeta=-\frac{\gamma^{2}\beta}{2}+\gamma\), under Assumption 6, and upon applying Lemmas 4 and 5, we obtain \[\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}[\mathcal{F}_{t+1}(\mathbf{x}_{t+1})-\mathcal{F}_{t+1}^{*}|\mathbf{x}_{t}]\] \[\leq\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}[(\mathcal{F}_{t+1}(\mathbf{x}_{t+1})-\mathcal{F}_{t}(\mathbf{x}_{t+1}))+(\mathcal{F}_{t}(\mathbf{x}_{t+1})-\mathcal{F}_{t}(\mathbf{x}_{t}))+(\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}_{t}^{*})+(\mathcal{F}_{t}^{*}-\mathcal{F}_{t+1}^{*})|\mathbf{x}_{t}]\] \[\leq 2K\eta_{t}+\frac{J(\eta_{t})^{2}}{2\mu}+(1-2\mu\zeta)\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}[\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}_{t}^{*}|\mathbf{x}_{t}]+\frac{\gamma^{2}\beta}{2}\sigma_{t}^{2}.\] Since \(\gamma\in\left(0,\min\left(\frac{1}{2\mu},\frac{1}{\beta}\right)\right)\), we see that \(0<2\mu\zeta<1\). Using the above result, we can establish a regret bound: \[\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}_{1}^{\tau},\ldots,\mathbf{w}_{m}^{\tau}\}_{\tau=1}^{t-1}}[\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}_{t}^{*}]\] \[=(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+\sum_{t=1}^{T-1}\mathbb{E}[\mathcal{F}_{t+1}(\mathbf{x}_{t+1})-\mathcal{F}_{t+1}^{*}]\] \[\leq(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+2K\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{2\mu}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+(1-2\mu\zeta)\sum_{t=1}^{T-1}\mathbb{E}[\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}_{t}^{*}]+\frac{\gamma^{2}\beta}{2}\sum_{t=1}^{T-1}\sigma_{t}^{2}.\] Rearranging the terms, and since \(\gamma\leq 1/\beta\) implies \(\zeta\geq\gamma/2\), \[\mathrm{Regret}(T) =\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}_{1}^{\tau},\ldots,\mathbf{w}_{m}^{\tau}\}_{\tau=1}^{t-1}}[\mathcal{F}_{t}(\mathbf{x}_{t})-\mathcal{F}_{t}^{*}]\] \[\leq\frac{1}{2\mu\zeta}(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+\frac{K}{\mu\zeta}\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{4\mu^{2}\zeta}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\frac{\gamma^{2}\beta}{4\mu\zeta}\sum_{t=1}^{T-1}\sigma_{t}^{2} \tag{13}\] \[\leq\frac{1}{2\mu\zeta}(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+\frac{K}{\mu\zeta}\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{4\mu^{2}\zeta}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\frac{\gamma\beta}{2\mu}\sum_{t=1}^{T-1}\sigma_{t}^{2}. \tag{14}\] In particular, writing \(\Theta=\min(1/\beta,1/(2\mu))\) and taking \(\gamma=\Theta/\sqrt{T}\), we see that \[\frac{1}{\zeta}=\frac{1}{-\frac{\Theta^{2}\beta}{2T}+\frac{\Theta}{\sqrt{T}}}\leq\frac{1}{-\frac{\Theta}{2T}+\frac{\Theta}{\sqrt{T}}}=\frac{\sqrt{T}}{-\frac{\Theta}{2\sqrt{T}}+\Theta}\leq\frac{\sqrt{T}}{-\frac{\Theta}{2}+\Theta}=\frac{2\sqrt{T}}{\Theta}.\] Therefore, putting this back into (14) yields the bound (15) given in Remark 4 below. As can be seen, the online stochastic gradient descent method can achieve sublinear regret when the cumulative distribution drift \(\sum_{t}\eta_{t}\), the cumulative squared drift of the expected gradients \(\sum_{t}J(\eta_{t})^{2}\) and the cumulative variance of the gradient approximation \(\sum_{t}\sigma_{t}^{2}\) grow sublinearly.
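To make the discussion concrete, the following sketch runs the online update (2) with the minibatch gradient approximation (3) on a toy sequence of losses \(\mathcal{L}(\mathbf{x},\mathbf{w})=\frac{1}{2}\|A\mathbf{x}-\mathbf{w}\|^{2}\) whose mean drifts slowly, using the step size \(\gamma=\min(1/\beta,1/(2\mu))/\sqrt{T}\) discussed in Remark 4 below. The drift schedule, the problem dimensions and the noise level are illustrative assumptions rather than quantities prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, T = 10, 8, 2000                       # dimension, minibatch size, horizon
A = rng.standard_normal((d, d))

# Smoothness and PL constants of F_t(x) = E_w[0.5 * ||A x - w||^2].
eig = np.linalg.eigvalsh(A.T @ A)
beta, mu = eig.max(), eig[eig > 1e-8].min()
gamma = min(1.0 / beta, 1.0 / (2.0 * mu)) / np.sqrt(T)   # step size from the analysis

def mean_t(t):
    # Slowly drifting mean of the distribution P_t (illustrative drift schedule).
    return np.sin(0.001 * t) * np.ones(d)

x = np.zeros(d)
regret = 0.0
for t in range(1, T + 1):
    m_t = mean_t(t)
    # Instantaneous suboptimality F_t(x_t) - F_t^* (the noise constant cancels).
    x_star = np.linalg.solve(A, m_t)
    regret += 0.5 * np.linalg.norm(A @ x - m_t) ** 2 \
              - 0.5 * np.linalg.norm(A @ x_star - m_t) ** 2
    # Collect m samples w ~ P_t and take one stochastic gradient step, cf. (2)-(3).
    W = m_t + 0.1 * rng.standard_normal((m, d))
    grad_hat = np.mean([A.T @ (A @ x - w) for w in W], axis=0)
    x = x - gamma * grad_hat

print("average regret Regret(T)/T:", regret / T)
```

Replacing the fixed \(\gamma\) with \(\gamma_{t}=\min(1/\beta,1/(2\mu))/\sqrt{t}\) inside the loop corresponds to the horizon-free step size discussed in Remark 5 below.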
In particular, if \(J(\eta_{t})\leq c_{0}\sqrt{\eta_{t}}\) for some \(c_{0}>0\) and all \(t\), the above condition for sublinear regret reduces to the sublinear growth of the cumulative distribution drift \(\sum_{t}\eta_{t}\) and the cumulative variance of the gradient approximation \(\sum_{t}\sigma_{t}^{2}\). Furthermore, if the variance of the gradient approximation is constant for all \(t\) (i.e., its cumulative sum grows linearly), online stochastic gradient descent is still able to achieve sublinear regret by picking a suitable step size, as long as the cumulative distribution drift grows sufficiently slowly (such that \(\sum_{t}\eta_{t}\) and \(\sum_{t}J(\eta_{t})^{2}\) grow more slowly than \(\sqrt{T}\)). **Remark 3**.: _The condition on the step size \(\gamma\in(0,\min(1/\beta,1/(2\mu)))\) is used to ensure the contraction of the iterate (i.e., \(0<2\mu\zeta<1\)) and the simplification of the regret bound in (14). A necessary and sufficient condition on the step size \(\gamma\) of online stochastic gradient descent is \(\gamma\in(0,\min(2/(\mu\beta),\frac{\mu-\sqrt{\mu^{2}-\mu\beta}}{\mu\beta}))\), which would still yield the regret bound (13)._ **Remark 4**.: _As can be seen from the right-hand side of (11), the gradient error term \(\sum_{t}\sigma_{t}^{2}\) is coupled with the step size \(\gamma\). Hence, one can have control over the gradient error term using a suitable step size rule. In particular, if \(\sigma_{t}^{2}\leq\sigma^{2}\) for some scalar \(\sigma>0\) and for all \(t\), setting the step size of the online SGD as \(\gamma=\min(1/\beta,1/(2\mu))/\sqrt{T}\), the regret can be upper bounded by_ \[\mathrm{Regret}(T)\leq M_{1}\sqrt{T}+M_{2}\sqrt{T}\sum_{t=1}^{T-1}\eta_{t}+M_{3}\sqrt{T}\sum_{t=1}^{T-1}J(\eta_{t})^{2}, \tag{15}\] _where_ \[M_{1}=\frac{1}{\mu\Theta}(\mathcal{F}_{1}(\mathbf{x}_{1})-\mathcal{F}_{1}^{*})+\frac{\sigma^{2}}{2\mu},\quad M_{2}=\frac{2K}{\mu\Theta},\quad M_{3}=\frac{1}{2\mu^{2}\Theta},\quad\Theta=\min(1/\beta,1/(2\mu)). \tag{16}\] _This fact is particularly useful when the variance of the gradient error does not diminish over time._ **Remark 5**.: _For simplicity, we keep the step size \(\gamma_{t}\) constant throughout all time steps \(t\). In particular, if the variance of the measurement noise is constant at all time steps, Theorem 1 states that one may need to set the time horizon \(T\) in advance in order to select the suitable step size \(\gamma=\min(1/\beta,1/(2\mu))/\sqrt{T}\) of online stochastic gradient descent. However, in fact, the proof still follows if the step size is chosen to be \(\gamma_{t}=\min(1/\beta,1/(2\mu))/\sqrt{t}\) for \(t=1,\ldots,T\). Specifically, a similar regret bound can be achieved by considering \(\zeta_{t}=-\frac{\gamma_{t}^{2}\beta}{2}+\gamma_{t}\) and using the fact that_ \[\frac{1}{\zeta_{t}}=\frac{1}{-\frac{\gamma_{t}^{2}\beta}{2}+\gamma_{t}}=\frac{2}{-\gamma_{t}^{2}\beta+2\gamma_{t}}=\frac{2}{\gamma_{t}}\cdot\frac{1}{2-\gamma_{t}\beta}\leq\frac{2}{\gamma_{t}}\cdot\frac{1}{2-\frac{1}{\beta}\cdot\beta}=\frac{2}{\gamma_{t}}\leq\frac{2}{\gamma_{T}}\] _for all \(t\). In Section 5, we will see that the latter step size yields a better empirical performance of online stochastic gradient descent. Moreover, such a step size does not rely on knowledge of the time horizon, which may be more useful in practice._

## 3 Online Stochastic Optimization under Proximal PL Condition

In the previous section, we considered the minimization of a smooth data-fidelity loss function whose underlying data distribution is time-varying.
Yet, its regularized version is also of interest, since one may want to impose some structure on the decision vector. In this section, we show that a similar regret bound can be developed for online stochastic proximal gradient descent applied to a sequence of loss functions satisfying the proximal PL condition.

### Problem Formulation

Let \(\mathcal{F}_{t}(\mathbf{x})\coloneqq\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\mathcal{L}(\mathbf{x},\mathbf{w})\). In this section, we consider a sequence of optimization problems \[\min_{\mathbf{x}\in\mathbb{R}^{n_{x}}}[\mathcal{G}_{t}(\mathbf{x})\coloneqq\mathcal{F}_{t}(\mathbf{x})+\mathcal{R}(\mathbf{x})] \tag{17}\] for \(t=1,\ldots,T\), for some potentially non-smooth convex regularizer \(\mathcal{R}\colon\mathbb{R}^{n_{x}}\to\mathbb{R}\). The regularizer \(\mathcal{R}\) can be used to impose structure on the decision vector; for example, \(\mathcal{R}(\mathbf{x})=\|\mathbf{x}\|_{1}\) promotes sparsity of the decision vector. Under Assumptions 1 and 2, for \(t=1,\ldots,T-1\), we employ the one-step stochastic proximal gradient descent update \[\mathbf{x}_{t+1} =\mathrm{prox}_{\gamma_{t}\mathcal{R}}(\mathbf{x}_{t}-\gamma_{t}\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}))\] \[=\arg\min_{\mathbf{y}}\left\{\left\langle\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}),\mathbf{y}-\mathbf{x}_{t}\right\rangle+\frac{1}{2\gamma_{t}}\|\mathbf{y}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{y})-\mathcal{R}(\mathbf{x}_{t})\right\}, \tag{18}\] where \(\widehat{\nabla}\mathcal{F}_{t}\) is defined in (3). We again use the notion of regret to evaluate its performance, namely, \[\mathrm{Regret}(T)=\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}_{1}^{\tau},\ldots,\mathbf{w}_{m}^{\tau}\}_{\tau=1}^{t-1}}[\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*}],\] where \(\mathcal{G}_{t}^{*}=\min_{\mathbf{x}}\mathcal{G}_{t}(\mathbf{x})\) is the minimum loss. We also denote \(\mathbf{x}_{t}^{*}\in\arg\min_{\mathbf{x}}\mathcal{G}_{t}(\mathbf{x})\). Suppose that Assumptions 1-5 and 7-8 hold. Moreover, we assume that the proximal Polyak-Lojasiewicz condition holds for \(\mathcal{G}_{t}\) for all \(t=1,\ldots,T\). **Assumption 9** (Proximal Polyak-Lojasiewicz Condition).: _Under Assumption 1, for \(t=1,\ldots,T\), \(\mathcal{G}_{t}\) satisfies the proximal Polyak-Lojasiewicz (proximal PL) condition on a set \(\mathcal{X}\) with constant \(\mu\); i.e., for all \(\mathbf{x}\in\mathcal{X}\),_ \[\frac{1}{2}\mathcal{D}_{\mathcal{R}}^{t}(\mathbf{x},\beta)\geq\mu(\mathcal{G}_{t}(\mathbf{x})-\mathcal{G}_{t}^{*}),\] _where_ \[\mathcal{D}_{\mathcal{R}}^{t}(\mathbf{x},\delta)\coloneqq-2\delta\min_{\mathbf{y}}\left\{\langle\nabla\mathcal{F}_{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\rangle+\frac{\delta}{2}\|\mathbf{y}-\mathbf{x}\|^{2}+\mathcal{R}(\mathbf{y})-\mathcal{R}(\mathbf{x})\right\}.\] The proximal PL condition is a generalization of the PL condition to non-smooth optimization. It is known that problems such as the support vector machine and \(\ell_{1}\)-regularized least squares satisfy the proximal PL condition; see more examples in [34, Section 4.1 and Appendix G]. Similar to the PL condition, the quadratic growth property is also implied for functions satisfying the proximal PL condition. **Lemma 6** (Quadratic Growth).: _Let \(\mathcal{G}_{t}=\mathcal{F}_{t}+\mathcal{R}\) satisfy the proximal PL condition.
Then, under Assumption 5, there exists a constant \(\xi>0\) such that for every \(\mathbf{x}\in\mathbb{R}^{n_{x}}\), the following holds_ \[\frac{\xi}{2}\|\mathbf{x}-\mathrm{proj}_{\mathcal{X}_{t}^{*}}(\mathbf{x})\|^{2}\leq\mathcal{G}_{t}(\mathbf{x})-\mathcal{G}_{t}^{*}. \tag{19}\] This is a direct consequence of the equivalence of the proximal PL condition, the proximal error bound condition and quadratic growth [34, Appendix G], [21, Corollary 3.6]. Having the proximal PL condition, we can also bound the distance between two successive optimal sets and the difference between two successive loss function values at the same point. **Lemma 7** (Difference between Successive Loss Functions).: _Under Assumptions 4 and 7, we have_ \[|\mathcal{G}_{t+1}(\mathbf{x})-\mathcal{G}_{t}(\mathbf{x})|\leq K\eta_{t}.\] The lemma follows directly from Lemma 4. Before establishing a regret bound for online stochastic proximal gradient descent, we first bound the difference between successive optimal values. **Lemma 8** (Difference of Successive Optimal Values).: _For \(t=1,\ldots,T-1\), under Assumptions 1, 4, 7-9, we have_ \[\mathcal{G}_{t}^{*}-\mathcal{G}_{t+1}^{*}\leq K\eta_{t}+\frac{J(\eta_{t})^{2}}{2\mu}.\] Proof.: Note that \[\mathcal{G}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{G}_{t+1}^{*} \leq\frac{1}{2\mu}\mathcal{D}_{\mathcal{R}}^{t+1}(\mathbf{x}_{t}^{*},\beta)\] \[=-\frac{\beta}{\mu}\cdot\min_{\mathbf{y}}\left\{\langle\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*}),\mathbf{y}-\mathbf{x}_{t}^{*}\rangle+\frac{\beta}{2}\|\mathbf{y}-\mathbf{x}_{t}^{*}\|^{2}+\mathcal{R}(\mathbf{y})-\mathcal{R}(\mathbf{x}_{t}^{*})\right\}\] \[\leq-\frac{\beta}{\mu}\Bigg{(}\min_{\mathbf{y}}\left\{\langle\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*}),\mathbf{y}-\mathbf{x}_{t}^{*}\rangle+\frac{\beta}{2}\|\mathbf{y}-\mathbf{x}_{t}^{*}\|^{2}\right\}\] \[\quad+\min_{\mathbf{y}}\left\{\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*}),\mathbf{y}-\mathbf{x}_{t}^{*}\rangle+\mathcal{R}(\mathbf{y})-\mathcal{R}(\mathbf{x}_{t}^{*})\right\}\Bigg{)}\] \[\leq-\frac{\beta}{\mu}\cdot\min_{\mathbf{y}}\left\{\langle\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*}),\mathbf{y}-\mathbf{x}_{t}^{*}\rangle+\frac{\beta}{2}\|\mathbf{y}-\mathbf{x}_{t}^{*}\|^{2}\right\}. \tag{20}\] The last inequality follows from the optimality of \(\mathbf{x}_{t}^{*}\). Also, the minimization problem in the last inequality admits the closed-form minimizer \(\mathbf{y}=\mathbf{x}_{t}^{*}+\frac{1}{\beta}(\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*})-\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*}))\).
Plugging this back into (20) and using the argument in (10), for all \(\mathbf{x}\in\mathbb{R}^{n_{x}}\), we obtain \[\mathcal{G}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{G}_{t+1}^{*} \leq\frac{1}{2\mu}\|\nabla\mathcal{F}_{t+1}(\mathbf{x}_{t}^{*})-\nabla\mathcal{F}_{t}(\mathbf{x}_{t}^{*})\|^{2}\] \[=\frac{1}{2\mu}\|\nabla\left(\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}[\mathcal{L}(\mathbf{x},\mathbf{w})]\right)-\nabla\left(\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}[\mathcal{L}(\mathbf{x},\mathbf{w})]\right)\|^{2}\] \[=\frac{1}{2\mu}\|\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t+1}}\left[\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})\right]-\mathbb{E}_{\mathbf{w}\sim\mathbb{P}_{t}}\left[\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},\mathbf{w})\right]\|^{2}\] \[\leq\frac{J(\eta_{t})^{2}}{2\mu}.\] Hence, using the triangle inequality and the result in Lemma 7, we have \[\mathcal{G}_{t}^{*}-\mathcal{G}_{t+1}^{*}=\mathcal{G}_{t}(\mathbf{x}_{t}^{*})-\mathcal{G}_{t+1}(\mathbf{x}_{t}^{*})+\mathcal{G}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{G}_{t+1}(\mathbf{x}_{t+1}^{*})\leq K\eta_{t}+\frac{J(\eta_{t})^{2}}{2\mu},\] as desired. **Remark 6**.: _From the proof of Lemma 8, we can also derive the distance between optimal sets at successive time steps. Specifically, let \(\mathcal{X}_{t}^{*}\) be the set of minimizers of (17) at time \(t\) for \(t=1,\ldots,T\). Using the result of Lemma 6 and the optimality of \(\mathbf{x}_{t}^{*}\), we have_ \[\mathrm{dist}(\mathcal{X}_{t}^{*},\mathcal{X}_{t+1}^{*})^{2}=\inf_{\mathbf{x}_{t}^{*}\in\mathcal{X}_{t}^{*},\mathbf{x}_{t+1}^{*}\in\mathcal{X}_{t+1}^{*}}\|\mathbf{x}_{t}^{*}-\mathbf{x}_{t+1}^{*}\|^{2}\leq\|\mathbf{x}_{t}^{*}-\mathrm{proj}_{\mathcal{X}_{t+1}^{*}}(\mathbf{x}_{t}^{*})\|^{2}\leq\frac{2}{\xi}(\mathcal{G}_{t+1}(\mathbf{x}_{t}^{*})-\mathcal{G}_{t+1}^{*})\leq\frac{J(\eta_{t})^{2}}{\xi\mu}.\] Having the above setup, we can establish a regret bound of online stochastic proximal gradient descent similar to Theorem 1. **Theorem 2**.: _Suppose that Assumptions 1-5, 7-9 hold. For any step size \(\gamma_{t}\equiv\gamma\in(0,1/\beta)\), the regret can be upper bounded by_ \[\mathrm{Regret}(T)\leq\frac{1}{2\mu\gamma}(\mathcal{G}_{1}(\mathbf{x}_{1})-\mathcal{G}_{1}^{*})+\frac{K}{\mu\gamma}\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{4\mu^{2}\gamma}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\frac{1}{4\mu}\sum_{t=1}^{T-1}\sigma_{t}^{2}. \tag{21}\] Proof.: Applying Assumption 5 and using the result in Lemma 2, we can write \[\mathcal{G}_{t}(\mathbf{x}_{t+1})-\mathcal{G}_{t}(\mathbf{x}_{t}) =\mathcal{F}_{t}(\mathbf{x}_{t+1})-\mathcal{F}_{t}(\mathbf{x}_{t})+\mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t})\] \[\leq\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t}\rangle+\frac{\beta}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t}).
\tag{22}\] Since the update \(\mathbf{x}_{t+1}\) is determined by the sampled data \(\{\mathbf{w}_{i}^{t}\}_{i=1}^{m}\) and the previous update \(\mathbf{x}_{t}\), taking expectation of (22) yields \[\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}}\left[\mathcal{G}_{t}(\mathbf{x}_{t+1})-\mathcal{G}_{t}(\mathbf{x}_{t})|\mathbf{x}_{t}\right]\] \[\leq\underbrace{\left[\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\rangle+\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1}^{\prime})-\mathcal{R}(\mathbf{x}_{t})\right]}_{\text{(I)}}\] \[+\underbrace{\mathbb{E}_{\mathbf{w}_{1}^{t},\ldots,\mathbf{w}_{m}^{t}}\Bigg{[}\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\rangle+\frac{\beta}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}-\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t+1}^{\prime})\Bigg{]}}_{\text{(II)}}, \tag{23}\] where \[\mathbf{x}_{t+1}^{\prime}=\arg\min_{\mathbf{z}}\left\{\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{z}-\mathbf{x}_{t}\rangle+\frac{1}{2\gamma}\|\mathbf{z}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{z})-\mathcal{R}(\mathbf{x}_{t})\right\}.\] Under Assumption 9, we can bound (I) as \[\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t}),\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\rangle+\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1}^{\prime})-\mathcal{R}(\mathbf{x}_{t})=-\gamma\mathcal{D}_{\mathcal{R}}^{t}\left(\mathbf{x}_{t},\frac{1}{\gamma}\right)\leq-2\mu\gamma(\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*}). \tag{24}\] Next, (II) can be written as \[\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\left\langle\nabla\mathcal{F}_{t}(\mathbf{x}_{t})-\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\right\rangle\Bigg{]}\] \[+\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\left\langle\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\right\rangle+\frac{\beta}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t+1}^{\prime})-\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}\Bigg{]}. \tag{25}\] Recalling that the updating rule is given by \[\mathbf{x}_{t+1}=\arg\min_{\mathbf{z}}\left\{\left\langle\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}),\mathbf{z}-\mathbf{x}_{t}\right\rangle+\frac{1}{2\gamma}\|\mathbf{z}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{z})-\mathcal{R}(\mathbf{x}_{t})\eqqcolon H(\mathbf{z})\right\}. \tag{26}\] Given \(\{\mathbf{w}_{i}^{t}\}_{i=1}^{m}\) and the assumption that \(\mathcal{R}\) is convex, \(H\) is \(1/\gamma\)-strongly convex.
Therefore, by the optimality of \(\mathbf{x}_{t+1}\), we have \[H(\mathbf{x}_{t+1}^{\prime})\geq H(\mathbf{x}_{t+1})+\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^ {\prime}-\mathbf{x}_{t+1}\|^{2}.\] That is, \[\left\langle\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1 }^{t},\dots,\mathbf{w}_{m}^{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\right\rangle+ \frac{1}{2\gamma}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\] \[+\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t+1}\|^{2}+ \mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t+1}^{\prime})-\frac{1}{2\gamma} \|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}\leq 0. \tag{27}\] Therefore, \[\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\left\langle \widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t }),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\right\rangle+\frac{\beta}{2}\|\mathbf{x}_{t+ 1}-\mathbf{x}_{t}\|^{2}+\mathcal{R}(\mathbf{x}_{t+1})-\mathcal{R}(\mathbf{x}_{t+1}^{\prime })-\frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t}\|^{2}\Bigg{]}\] \[\leq\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\frac {1}{2}\left(\beta-\frac{1}{\gamma}\right)\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}-\frac {1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t+1}\|^{2}\Bigg{]}. \tag{28}\] Putting (28) back to (25) and using Young's inequality [4, Proposition 2.7], (II) can be bounded by \[\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\left\langle \nabla\mathcal{F}_{t}(\mathbf{x}_{t})-\widehat{\nabla}\mathcal{F}_{t}(\mathbf{x}_{t}; \mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}),\mathbf{x}_{t+1}-\mathbf{x}_{t+1}^{\prime}\right\rangle +\frac{1}{2}\left(\beta-\frac{1}{\gamma}\right)\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}- \frac{1}{2\gamma}\|\mathbf{x}_{t+1}^{\prime}-\mathbf{x}_{t+1}\|^{2}\Bigg{]}\] \[\leq\frac{\gamma}{2}\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^ {t}}\Bigg{[}\left\|\nabla\mathcal{F}_{t}(\mathbf{x}_{t})-\widehat{\nabla}\mathcal{ F}_{t}(\mathbf{x}_{t};\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t})\right\|^{2}\Bigg{]}+ \mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\Bigg{[}\frac{1}{2}\left( \beta-\frac{1}{\gamma}\right)\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\Bigg{]}\] \[\leq\frac{\gamma}{2}\sigma_{t}^{2}+\mathbb{E}_{\mathbf{w}_{1}^{t}, \dots,\mathbf{w}_{m}^{t}}\Bigg{[}\frac{1}{2}\left(\beta-\frac{1}{\gamma}\right)\| \mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\Bigg{]}. 
\tag{29}\] Since \(\gamma\leq 1/\beta\), putting (24) and (29) into (23), we have \[\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}\left[\mathcal{G}_{t}(\mathbf{x}_{t+1})-\mathcal{G}_{t}(\mathbf{x}_{t})|\mathbf{x}_{t}\right]\leq-2\mu\gamma\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}}[\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*}|\mathbf{x}_{t}]+\frac{\gamma}{2}\sigma_{t}^{2}.\] Therefore, given \(\mathbf{x}_{t}\in\mathbb{R}^{n_{x}}\), \[\mathbb{E}_{\mathbf{w}_{1}^{t},\dots,\mathbf{w}_{m}^{t}\sim\mathbb{P}_{t}}[\mathcal{G}_{t+1}(\mathbf{x}_{t+1})-\mathcal{G}_{t+1}^{*}|\mathbf{x}_{t}]\] \[=\mathbb{E}[(\mathcal{G}_{t+1}(\mathbf{x}_{t+1})-\mathcal{G}_{t}(\mathbf{x}_{t+1}))+(\mathcal{G}_{t}(\mathbf{x}_{t+1})-\mathcal{G}_{t}(\mathbf{x}_{t}))+(\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*})+(\mathcal{G}_{t}^{*}-\mathcal{G}_{t+1}^{*})]\] \[\leq 2K\eta_{t}+\frac{J(\eta_{t})^{2}}{2\mu}+(1-2\mu\gamma)(\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*})+\frac{\gamma}{2}\sigma_{t}^{2}.\] Summing the terms up, \[\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}_{1}^{\tau},\dots,\mathbf{w}_{m}^{\tau}\}_{\tau=1}^{t-1}}[\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*}]\] \[=(\mathcal{G}_{1}(\mathbf{x}_{1})-\mathcal{G}_{1}^{*})+\sum_{t=1}^{T-1}\mathbb{E}[\mathcal{G}_{t+1}(\mathbf{x}_{t+1})-\mathcal{G}_{t+1}^{*}]\] \[\leq(\mathcal{G}_{1}(\mathbf{x}_{1})-\mathcal{G}_{1}^{*})+2K\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{2\mu}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\sum_{t=1}^{T-1}\left(1-2\mu\gamma\right)\mathbb{E}[\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}_{t}^{*}]+\frac{\gamma}{2}\sum_{t=1}^{T-1}\sigma_{t}^{2}.\] Rearranging the terms, the regret is upper bounded by \[\text{Regret}(T) =\sum_{t=1}^{T}\mathbb{E}_{\{\mathbf{w}_{1}^{\tau},\dots,\mathbf{w}_{m}^{\tau}\}_{\tau=1}^{t-1}}\left[\mathcal{G}_{t}(\mathbf{x}_{t})-\mathcal{G}^{*}_{t}\right]\] \[\leq\frac{1}{2\mu\gamma}(\mathcal{G}_{1}(\mathbf{x}_{1})-\mathcal{G}^{*}_{1})+\frac{K}{\mu\gamma}\sum_{t=1}^{T-1}\eta_{t}+\frac{1}{4\mu^{2}\gamma}\sum_{t=1}^{T-1}J(\eta_{t})^{2}+\frac{1}{4\mu}\sum_{t=1}^{T-1}\sigma_{t}^{2}.\] Theorem 2 shows that the online stochastic proximal gradient descent method can achieve sublinear regret when the cumulative distribution drift \(\sum_{t}\eta_{t}\), the cumulative squared drift of the expected gradients \(\sum_{t}J(\eta_{t})^{2}\) and the cumulative variance of the gradient approximation \(\sum_{t}\sigma_{t}^{2}\) grow sublinearly. However, if the variance of the gradient approximation is constant for all \(t\), the bound no longer guarantees sublinear regret. This is due to the technical challenge caused by the nonsmoothness of the regularizer. Yet, in Section 5, we will see numerical examples in which a sublinear regret of online stochastic proximal gradient descent is observed even though the cumulative variance of the gradient approximation grows linearly, given a suitable step size. **Remark 7**.: _Unlike in Theorem 1, the gradient error term \(\sum_{t}\sigma_{t}^{2}\) on the right-hand side of the regret bound (21) is not coupled with the step size, implying that we cannot control this term using a suitable step size rule. In other words, if the gradient error does not diminish, Theorem 2 cannot guarantee a sublinear regret bound for online stochastic proximal gradient descent. However, sublinear regret can still be observed empirically using a suitable step size rule; see Section 5.
This suggests that it is possible to achieve a tighter regret bound for online stochastic proximal gradient descent under additional assumptions on the regularizer. We leave this as future work._

## 4 Application to CVaR Statistical Learning

Without assuming convexity, this framework can be applied to a broader class of loss functions. In this section, we show how the time-varying CVaR learning problem benefits from the above setup. In the following, the notation differs slightly from that of the previous sections and will be defined in due course.

### CVaR Formulation and Preliminaries

Consider a known parametric family of functions \(\mathcal{F}\coloneqq\{\phi\colon\mathbb{R}^{d}\to\mathbb{R}\,|\,\phi(\cdot)\equiv f(\cdot,\mathbf{\theta}),\ \mathbf{\theta}\in\mathbb{R}^{n}\}\), called a hypothesis class. At each time \(t=1,\dots,T\), we collect samples \((\mathbf{x},y)\in\mathbb{R}^{d}\times\mathbb{R}\) from an unknown distribution \(\mathbb{P}_{t}\) on the example space \(\Omega_{t}\) and would like to find \(\mathbf{\theta}^{*}_{t}\in\mathbb{R}^{n}\) that best describes the relation between the input \(\mathbf{x}\) and the output \(y\). Specifically, we use a loss function \(\ell\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) to measure the discrepancy between the prediction \(f(\mathbf{x},\mathbf{\theta})\) of an admissible predictor and the output \(y\) for each sample \((\mathbf{x},y)\), and minimize the expected loss \[\inf_{\mathbf{\theta}\in\mathbb{R}^{n}}\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\{\ell(f(\mathbf{x},\mathbf{\theta}),y)\} \tag{30}\] at each time step \(t\). A fundamental issue with this formulation is that it is risk-neutral. In some applications, for example medical decision making and portfolio management, one of the objectives is to avoid worst-case scenarios, and therefore a robust risk measure is of more interest. In view of this, one of the most popular risk measures in theory and practice is CVaR, which is defined as \[\text{CVaR}^{\alpha}(Z)\coloneqq\inf_{h\in\mathbb{R}}\left\{h+\frac{1}{\alpha}\mathbb{E}\{(Z-h)_{+}\}\right\}\] at confidence level \(\alpha\in(0,1]\) for an integrable random loss \(Z\). Putting \(Z=\ell(f(\mathbf{x},\mathbf{\theta}),y)\), we can reformulate problem (30) using the CVaR measure over the variables \((\mathbf{\theta},h)\) as \[\inf_{(\mathbf{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}}\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\left\{h+\frac{1}{\alpha}(\ell(f(\mathbf{x},\mathbf{\theta}),y)-h)_{+}\right\}.\] Intuitively, \(\text{CVaR}^{\alpha}(Z)\) is the mean of the worst \(\alpha\cdot 100\%\) of the values of \(Z\). To see this, we define the Value-at-Risk (VaR) of \(Z\) at level \(\alpha\in(0,1]\), which is given by \[\text{VaR}^{\alpha}(Z)\coloneqq\inf\{z\in\mathbb{R}\colon\mathcal{P}^{t}(\{Z\leq z\})\geq 1-\alpha\};\] in other words, the VaR can be understood as the left-side \((1-\alpha)\)-quantile of the distribution of \(Z\) [13]. The results in [50, Theorem 6.2] show that the CVaR of \(Z\) at level \(\alpha\in(0,1]\) is equivalent to an expectation conditioned on \(Z\) being no smaller than its VaR; i.e., \[\text{CVaR}^{\alpha}(Z)=\mathbb{E}(Z|Z\geq\text{VaR}^{\alpha}(Z)). \tag{31}\] Since \(\mathcal{P}^{t}(Z>\text{VaR}^{\alpha}(Z))=\alpha\), one can deduce that \(\mathcal{P}^{t}(Z>\text{CVaR}^{\alpha}(Z))<\alpha\).

### CVaR with Time-Varying Distribution

Let \(\alpha\in(0,1]\).
Denote \(\ell_{\alpha}\colon\mathbb{R}^{n}\times\mathbb{R}\times\Omega\to\mathbb{R}\) by \[\ell_{\alpha}(\boldsymbol{\theta},h;\boldsymbol{x},y)\coloneqq h+\frac{1}{\alpha}(\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)-h)_{+}. \tag{32}\] Then, for \(t=1,\ldots,T\), our goal is to solve \[\min_{(\boldsymbol{\theta},h)}L^{t}_{\alpha}(\boldsymbol{\theta},h) \tag{33}\] where \(L^{t}_{\alpha}\colon\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}\) is given by \[L^{t}_{\alpha}(\boldsymbol{\theta},h)\coloneqq\mathbb{E}_{(\boldsymbol{x},y)\sim\mathbb{P}_{t}}[\ell_{\alpha}(\boldsymbol{\theta},h;\boldsymbol{x},y)]=\mathbb{E}_{(\boldsymbol{x},y)\sim\mathbb{P}_{t}}\left\{h+\frac{1}{\alpha}(\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)-h)_{+}\right\}. \tag{34}\] For the sake of notational simplicity, we assume that every distribution \(\mathbb{P}_{t}\) shares the same support set \(\Omega_{t}\equiv\Omega\) for \(t=1,\ldots,T\). **Assumption 10**.: _The following statements hold:_ 1. _For each_ \(\boldsymbol{\theta}\in\mathbb{R}^{n}\)_,_ \(\ell(f(\boldsymbol{x},\cdot),y)\) _is_ \(C_{\boldsymbol{\theta}}(\boldsymbol{x},y)\)_-Lipschitz on a neighborhood of_ \(\boldsymbol{\theta}\) _for_ \(\mathcal{P}^{t}\)_-almost all_ \((\boldsymbol{x},y)\)_, where_ \(\mathbb{E}_{\mathcal{P}^{t}}\{C_{\boldsymbol{\theta}}(\boldsymbol{x},y)\}<\infty\)_._ 2. \(\ell(f(\boldsymbol{x},\cdot),y)\) _is differentiable at_ \(\boldsymbol{\theta}\) _for_ \(\mathcal{P}^{t}\)_-almost all_ \((\boldsymbol{x},y)\)_, and_ \(\mathcal{P}^{t}(\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)=h)\equiv 0\) _for all_ \((\boldsymbol{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}\)_._ Under Assumption 10, differentiation may be interchanged with expectation for \(L^{t}_{\alpha}\) [50, Section 7.2.4]. Moreover, the function \(L^{t}_{\alpha}\) is differentiable [5, Lemma 1] and its gradient for every \((\boldsymbol{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}\) is given by \[\nabla L^{t}_{\alpha}(\boldsymbol{\theta},h)=\begin{bmatrix}\frac{1}{\alpha}\mathbb{E}_{(\boldsymbol{x},y)\sim\mathbb{P}_{t}}\{\mathbf{1}_{\mathcal{A}(\boldsymbol{\theta},h)}(\boldsymbol{x},y)\nabla_{\boldsymbol{\theta}}\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)\}\\ -\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\boldsymbol{\theta},h))+1\end{bmatrix}, \tag{35}\] where the event-valued multifunction \(\mathcal{A}\colon\mathbb{R}^{n}\times\mathbb{R}\rightrightarrows\Omega\) is defined as \[\mathcal{A}(\boldsymbol{\theta},h)\coloneqq\{(\boldsymbol{x},y)\in\Omega\,|\,\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)-h>0\} \tag{36}\] for \((\boldsymbol{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}\). We can therefore employ online stochastic gradient descent to solve this sequence of optimization problems, where every gradient is well-defined almost surely.
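For concreteness, a minimal sketch of the surrogate loss (32) and of a minibatch estimate of the gradient representation (35) is given below, instantiated with a linear predictor and the squared loss; the predictor, the base loss and the synthetic data are illustrative placeholders and not part of the formal setup.

```python
import numpy as np

def cvar_loss(theta, h, X, y, alpha):
    """Empirical average of the CVaR surrogate l_alpha(theta, h; x, y) over a sample,
    with squared loss and a linear predictor f(x, theta) = x @ theta (illustrative choices)."""
    base = 0.5 * (X @ theta - y) ** 2
    return h + np.mean(np.maximum(base - h, 0.0)) / alpha

def cvar_grad(theta, h, X, y, alpha):
    """Minibatch estimate of (35): the theta-block averages the base-loss gradients over
    samples in the event A(theta, h) = {l(f(x, theta), y) > h} and scales by 1/alpha;
    the h-component is 1 - (empirical probability of A) / alpha."""
    residual = X @ theta - y
    base = 0.5 * residual ** 2
    active = base > h                                   # indicator of A(theta, h)
    grad_theta = (X * (active * residual)[:, None]).mean(axis=0) / alpha
    grad_h = 1.0 - active.mean() / alpha
    return grad_theta, grad_h

# Usage with synthetic data (placeholders).
rng = np.random.default_rng(2)
X = rng.standard_normal((64, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(64)
theta, h, alpha = np.zeros(5), 0.0, 0.1
g_theta, g_h = cvar_grad(theta, h, X, y, alpha)
print(cvar_loss(theta, h, X, y, alpha), g_h)
```

The \(h\)-component of the estimate is simply one minus the empirical frequency of the event \(\mathcal{A}(\boldsymbol{\theta},h)\) scaled by \(1/\alpha\), mirroring the second row of (35).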
Specifically, at each time step \(t\), we run the one-step gradient descent \[(\boldsymbol{\theta}_{t+1},h_{t+1})=(\boldsymbol{\theta}_{t},h_{t})-\gamma\widehat{\nabla}L^{t}_{\alpha}(\boldsymbol{\theta}_{t},h_{t};\boldsymbol{x}^{t}_{1},\ldots,\boldsymbol{x}^{t}_{m},y^{t}_{1},\ldots,y^{t}_{m}) \tag{37}\] for \(t=1,\ldots,T-1\), where the gradient approximation is given by \[\widehat{\nabla}L^{t}_{\alpha}(\boldsymbol{\theta},h;\boldsymbol{x}^{t}_{1},\ldots,\boldsymbol{x}^{t}_{m},y^{t}_{1},\ldots,y^{t}_{m})=\begin{bmatrix}\frac{1}{\alpha}\cdot\frac{1}{m}\sum_{i=1}^{m}\{\mathbf{1}_{\mathcal{A}(\boldsymbol{\theta},h)}(\boldsymbol{x}^{t}_{i},y^{t}_{i})\nabla_{\boldsymbol{\theta}}\ell(f(\boldsymbol{x}^{t}_{i},\boldsymbol{\theta}),y^{t}_{i})\}\\ -\frac{1}{\alpha}\cdot\frac{1}{m}\sum_{i=1}^{m}\mathbf{1}_{\mathcal{A}(\boldsymbol{\theta},h)}(\boldsymbol{x}^{t}_{i},y^{t}_{i})+1\end{bmatrix}.\] It can be seen that \(\mathbb{E}[\widehat{\nabla}L^{t}_{\alpha}(\boldsymbol{\theta},h;\boldsymbol{x}^{t}_{1},\ldots,\boldsymbol{x}^{t}_{m},y^{t}_{1},\ldots,y^{t}_{m})]=\nabla L^{t}_{\alpha}(\boldsymbol{\theta},h)\). The recent results in [32, Lemma 1] show that if the loss satisfies the set-restricted PL inequality relative to the multifunction \(\mathcal{A}\) (which will be defined in (39)), then the objective function \(L^{t}_{\alpha}\) satisfies the ordinary PL inequality for \(t=1,\ldots,T\). While the PL condition in [32] was proved over the subset \(\Delta^{\prime}\coloneqq\{(\boldsymbol{\theta},h)\colon\mathcal{P}^{t}(\mathcal{A}(\boldsymbol{\theta},h))>\alpha+2\alpha\mu(h^{*}_{t}-h)_{+}\}\), our discussion in Section 4.1 shows that \(\mathcal{P}^{t}(\mathcal{A}(\boldsymbol{\theta}^{*}_{t},h^{*}_{t}))<\alpha\), implying that an optimum \((\boldsymbol{\theta}^{*}_{t},h^{*}_{t})\) does not lie in the subset \(\Delta^{\prime}\). Hence, in the next lemma, we propose a new subset \(\Delta_{t}\) on which \(L^{t}_{\alpha}\) satisfies the PL condition, for \(t=1,\ldots,T\), which is much more useful in studying the convergence around an optimum point. **Lemma 9** (\(L^{t}_{\alpha}\) is Polyak-Lojasiewicz).: _Fix an \(\alpha\in(0,1]\). Suppose that for \(t=1,\ldots,T\), the following holds:_ 1. \(\arg\min_{(\boldsymbol{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}}L^{t}_{\alpha}(\boldsymbol{\theta},h)\neq\emptyset\) _and denote_ \((\boldsymbol{\theta}^{*}_{t},h^{*}_{t})\in\arg\min_{(\boldsymbol{\theta},h)}L^{t}_{\alpha}(\boldsymbol{\theta},h)\)_;_ 2.
_Let_ \[\Delta_{t}\coloneqq\{(\boldsymbol{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}\colon\lambda\alpha\leq\mathcal{P}^{t}(\mathcal{A}(\boldsymbol{\theta},h))\leq\alpha+2\alpha\mu(h^{*}_{t}-h)\}. \tag{38}\] _The loss_ \(\ell(f(\boldsymbol{x},\cdot),y)\) _satisfies the_ \(\mathcal{A}\)_-restricted PL inequality with parameter_ \(\mu>0\)_, relative to_ \(\Omega\) _and on_ \(\Delta_{t}\)_; i.e.,_ \[\frac{1}{2}\|\mathbb{E}\{\nabla_{\boldsymbol{\theta}}\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)|\mathcal{A}(\boldsymbol{\theta},h)\}\|_{2}^{2}\geq\mu\mathbb{E}\{\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)-\ell^{*}(\boldsymbol{\theta},h)|\mathcal{A}(\boldsymbol{\theta},h)\}\] (39) _for all_ \((\boldsymbol{\theta},h)\in\Delta_{t}\)_, where_ \(\ell^{*}(\bullet,\cdot)=\inf_{\tilde{\boldsymbol{\theta}}\in\mathbb{R}^{n}}\mathbb{E}\{\ell(f(\boldsymbol{x},\tilde{\boldsymbol{\theta}}),y)|\mathcal{A}(\bullet,\cdot)\}\)_._ _Suppose that there exists \(0<\lambda<1\) such that for all \(t=1,\ldots,T\), it holds that_ \[\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta}_{t}^{*},h_{t}^{*}))\geq\lambda\alpha.\] _Then, the CVaR objective \(L_{\alpha}^{t}\) obeys_ \[\kappa(L_{\alpha}^{t}(\mathbf{\theta},h)-L_{\alpha}^{t}(\mathbf{\theta}_{t}^{*},h_{t}^{*}))\leq\frac{1}{2}\|\nabla L_{\alpha}^{t}(\mathbf{\theta},h)\|_{2}^{2}\] _everywhere on \(\Delta_{t}\), where \(\kappa=\lambda\mu\)._ Proof.: Recall the definition of \(\ell_{\alpha}\) in (32). Adapting the proof in [32, Lemma 1], we have, for every \((\mathbf{x},y)\in\Omega\), \[\ell_{\alpha}(\mathbf{\theta},h;\mathbf{x},y)-\ell_{\alpha}(\mathbf{\theta}_{t}^{*},h_{t}^{*};\mathbf{x},y)\] \[=h-h_{t}^{*}+\frac{1}{\alpha}(\ell(f(\mathbf{x},\mathbf{\theta}),y)-h)_{+}-\frac{1}{\alpha}(\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y)-h_{t}^{*})_{+}\] \[\leq h-h_{t}^{*}+\frac{1}{\alpha}(\ell(f(\mathbf{x},\mathbf{\theta}),y)-h)_{+}-\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)(\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y)-h_{t}^{*})\] \[=h-h_{t}^{*}+\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)(\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y)+h_{t}^{*}-h)\] \[=(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)-1\right)+\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)(\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y)).\] Taking expectation on both sides, it follows that \[L_{\alpha}^{t}(\mathbf{\theta},h)-L_{\alpha}^{t}(\mathbf{\theta}_{t}^{*},h_{t}^{*})\] \[\leq(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\left\{\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)(\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y))\right\}\] \[=(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\left\{\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y)\,\middle|\,\mathcal{A}(\mathbf{\theta},h)\right\}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))\] \[\leq(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\left(\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\left\{\ell(f(\mathbf{x},\mathbf{\theta}),y)\,\middle|\,\mathcal{A}(\mathbf{\theta},h)\right\}-\ell^{*}(\mathbf{\theta},h)\right)\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))\] \[=(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t}}\left\{\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell^{*}(\mathbf{\theta},h)\,\middle|\,\mathcal{A}(\mathbf{\theta},h)\right\}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h)),\] where the second inequality uses the definition of \(\ell^{*}\) as an infimum.
},h)\right\}-\mathbb{E}\{\ell(f(\mathbf{x},\mathbf{\theta}_{t}^{*}),y))|\mathcal{A}( \mathbf{\theta},h)\right\}\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))\] \[\leq(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{ A}(\mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\left(\mathbb{E}_{(\mathbf{x},y)\sim_{ \mathcal{P}_{t}}}\left\{(\ell(f(\mathbf{x},\mathbf{\theta}),y)|\mathcal{A}(\mathbf{\theta },h)\right\}-\ell_{t}^{*}(\mathbf{\theta},h)\right)\mathcal{P}^{t}(\mathcal{A}( \mathbf{\theta},h))\] \[=(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}( \mathbf{\theta},h))-1\right)+\frac{1}{\alpha}\left(\mathbb{E}_{(\mathbf{x},y)\sim_{ \mathcal{P}_{t}}}\left\{(\ell(f(\mathbf{x},\mathbf{\theta}),y)-\ell_{t}^{*}(\mathbf{ \theta},h)|\mathcal{A}(\mathbf{\theta},h))\right\}\mathcal{P}^{t}(\mathcal{A}( \mathbf{\theta},h)).\] Therefore, from the set-restricted PL inequality (39), we get \[L_{\alpha}^{t}(\mathbf{\theta},h)-L_{\alpha}^{t}(\mathbf{\theta}_{t}^{*},h_{t}^{*})\leq \left(h_{t}^{*}-h\right)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{ \theta},h))-1\right)+\frac{1}{2\mu\alpha}\|\mathbb{E}\{\nabla_{\mathbf{\theta}} \ell(f(\mathbf{x},\mathbf{\theta}),y)|\mathcal{A}(\mathbf{\theta},h)\}\|^{2}\mathcal{P}^{t }(\mathcal{A}(\mathbf{\theta},h)).\] Now, recall the gradient of \(L_{\alpha}^{t}\) given in (35). Using the fact that \(\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta}_{t}^{*},h_{t}^{*}))<\alpha\) and the definition of \(\Delta\), we have \[\lambda\mu(h_{t}^{*}-h)\left(\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}(\mathbf{ \theta},h))-1\right)\leq\left(1-\frac{1}{\alpha}\mathcal{P}^{t}(\mathcal{A}( \mathbf{\theta},h))\right)^{2}.\] The lemma then follows from simple computation. Although set-restricted PL inequality is a new notion in the literature, it is shown that if the loss \(\ell(f(\mathbf{x},\mathbf{\theta}),y)\) is smooth and strongly convex for \(\mathcal{P}^{t}\)-almost all \((\mathbf{x},y)\), then for all events \(\mathcal{B}\) on the support set, every pair of \((\mathbf{\theta},\mathcal{B})\) satisfies the set-restricted PL inequality [32, Proposition 1]. Moreover, the next lemma shows some nice properties of \(\ell_{\alpha}\). **Lemma 10** (Properties of \(\ell_{\alpha}\)).: _Fix \(\alpha\in(0,1]\). Suppose that_ 1. _Assumption_ 10 _holds; and_ 2. \(\ell(f(\mathbf{x},\mathbf{\theta}),y)\) _is_ \(K\)_-Lipschitz continuous wrt_ \((\mathbf{x},y)\)_._ _Then, the following statements hold:_ 1. _Given any_ \((\mathbf{\theta},h)\)_,_ \(\ell_{\alpha}(\mathbf{\theta}_{1},h_{1};\mathbf{x},y)\) _is differentiable at_ \((\mathbf{\theta},h)\) _for almost every_ \((\mathbf{x},y)\in\Omega\)_;_ 2. \(\ell_{\alpha}(\mathbf{\theta}_{1},h_{1};\mathbf{x},y)\) _is locally Lipschitz wrt_ \((\mathbf{\theta},h)\) _ 3. \(\ell_{\alpha}(\mathbf{\theta}_{1},h_{1};\mathbf{x},y)\) _is_ \(K\)_-Lipschitz wrt_ \((\mathbf{x},y)\) _on_ \(\Omega\)_._ Proof.: Let us prove the statements one by one. 1. Differentiability of \(\ell_{\alpha}(\mathbf{\theta}_{1},h_{1};\mathbf{x},y)\) at \((\mathbf{\theta},h)\) for almost every \((\mathbf{x},y)\in\Omega\) follows directly from Assumption 10. 2. Suppose that \(\ell(f(x,\mathbf{\theta}_{1}),y)-h_{1}=\epsilon_{1}\) and \(\ell(f(\mathbf{x},\mathbf{\theta}_{2}),y)-h_{2}=\epsilon_{2}\). The statement follows directly when \(\epsilon_{1},\epsilon_{2}\geq 0\) or \(\epsilon_{1},\epsilon_{2}<0\). Now, consider \(\epsilon_{1}\geq 0\) and \(\epsilon_{2}<0\). 
Then, \[|\ell_{\alpha}(\mathbf{\theta}_{1},h_{1};\mathbf{x},y)-\ell_{\alpha}( \mathbf{\theta}_{2},h_{2};\mathbf{x},y)| =\frac{1}{\alpha}|\ell(f(\mathbf{x},\mathbf{\theta}_{1}),y)-h_{1}|\] \[=\frac{1}{\alpha}\epsilon_{1}\] \[\leq\frac{1}{\alpha}(\epsilon_{1}-\epsilon_{2})\] \[=\frac{1}{\alpha}(\ell(f(\mathbf{x},\mathbf{\theta}_{1}),y)-\ell(f(\mathbf{ x},\mathbf{\theta}_{2}),y))+\frac{1}{\alpha}(h_{2}-h_{1}).\] The local Lipschitzness of \(\ell_{\alpha}\) wrt \((\mathbf{\theta},h)\) then follows from Assumption 10. 3. Following the trick in the above argument, suppose that \(\ell(f(\mathbf{x}_{1},\mathbf{\theta}),y_{1})-h=\epsilon_{1}\) and \(\ell(f(\mathbf{x}_{2},\mathbf{\theta}),y_{2})-h=\epsilon_{2}\). It remains to consider the case that \(\epsilon_{1}>0\) and \(\epsilon_{2}<0\). Then, \[|\ell_{\alpha}(\mathbf{\theta},h;\mathbf{x}_{1},y_{1})-\ell_{\alpha}(\bm {\theta},h;\mathbf{x}_{2},y_{2})| =\frac{1}{\alpha}|\ell(f(\mathbf{x}_{1},\mathbf{\theta}),y_{1})-h|\] \[=\frac{1}{\alpha}\epsilon_{1}\] \[\leq\frac{1}{\alpha}(\epsilon_{1}-\epsilon_{2})\] \[=\frac{1}{\alpha}(\ell(f(\mathbf{x}_{1},\mathbf{\theta}),y_{1})-\ell(f( \mathbf{x}_{2},\mathbf{\theta}),y_{2})),\] which leads to the Lipschitzness result given assumption (ii). Having the above lemmas, we are ready to apply our framework to the CVaR problem. **Corollary 1**.: _Fix \(\alpha\in(0,1]\). Under the setting of Lemma 9, suppose that_ 1. _assumptions (i) and (ii) in Lemma_ 10 _hold;_ 2. _every underlying distribution has a bounded support set;_ 3. _the probability density function of every distribution is differentiable;_ 4. _the Wasserstein distance of any two successive distributions is bounded; i.e.,_ \[\mathfrak{M}(\mathbb{P}_{t+1},\mathbb{P}_{t})\leq\eta_{t},\quad\text{\rm for }t=1,\dots,T-1;\] 5. _the variance of the gradient approximation is upper bounded by_ \[\mathbb{E}[\|\hat{\nabla}L^{t}_{\alpha}(\mathbf{\theta},h;\mathbf{x}_{1}^{t},\dots, \mathbf{x}_{m}^{t},y_{1}^{t},\dots,y_{m}^{t})-\nabla L^{t}_{\alpha}(\mathbf{\theta},h )\|^{2}]\leq\sigma_{t}^{2}\] _for some_ \(\sigma_{t}>0\) _and for_ \(t=1,\dots,T\)_; and_ 6. \(L^{t}_{\alpha}\) _is_ \(\beta\)_-smooth on_ \(\Delta_{t}\) _for_ \(t=1,\dots,T\)_._ _Suppose that the step size \(\gamma_{t}\equiv\gamma\in(0,1/(2\kappa))\) for \(t=1,\dots,T\). If the iterates \((\mathbf{\theta}_{t},h_{t})\in\Delta_{t}\) over all \(t=1,\dots,T\), writing \(\zeta=-\frac{\gamma^{2}\beta}{2}+\gamma\), a regret bound for stochastic online gradient descent satisfies_ \[\operatorname{Regret}(T)\leq\frac{1}{2\kappa\zeta}(L^{1}_{\alpha}(\mathbf{\theta}_ {1},h_{1})-(L^{1}_{\alpha})^{*})+\frac{1}{\kappa\zeta}\left(K+\frac{C}{4\kappa }\right)\sum_{t=1}^{T}\eta_{t}+\frac{\gamma\beta}{2\kappa}\sum_{t=1}^{T-1} \sigma_{t}^{2},\] _where \((L^{1}_{\alpha})^{*}=\min_{(\mathbf{\theta},h)}L^{1}_{\alpha}(\mathbf{\theta},h)\) and \(C>0\) is some constant that depends on the CVaR parameter \(\alpha\), the loss function \(\ell(f(\cdot,\cdot),\cdot)\), and the probability density functions of the underlying distributions \(\{\mathbb{P}_{t}\}_{t=1}^{T}\)._ Proof.: Let us verify that problem (33) for \(t=1,\ldots,T\) satisfies the assumptions in Theorem 1. Using the results in Lemmas 9 and 10, it remains to show that \[\|\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathbb{P}_{t+1}}[\nabla\ell_{\alpha}(\mathbf{ \theta},h;\mathbf{x},y)]-\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathbb{P}_{t}}[\nabla\ell_ {\alpha}(\mathbf{\theta},h;\mathbf{x},y)]\|\leq C\sqrt{\eta_{t}}\] for some \(C>0\). 
Recall that \[\nabla_{(\mathbf{\theta},h)}\ell_{\alpha}(\mathbf{\theta},h;\mathbf{x},y)=\begin{bmatrix} \frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)\nabla_{\bm {\theta}}\ell(f(\mathbf{x},\mathbf{\theta}),y)\\ -\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)+1\end{bmatrix}.\] Given \((\mathbf{\theta},h)\in\mathbb{R}^{n}\times\mathbb{R}\), we see that \(\nabla_{\mathbf{\theta}}\ell(f(\mathbf{x},\mathbf{\theta}),y)\) is bounded on the support set \(\Omega\), due to the assumptions (v) and (vi). Assume that \(\|\nabla_{\mathbf{\theta}}\ell(f(\mathbf{x},\mathbf{\theta}),y)\|\leq M\) for some \(M>0\). Then, \[\left\|\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t+1}}\left[\frac{1} {\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)\nabla_{\mathbf{\theta}} \ell(f(\mathbf{x},\mathbf{\theta}),y)\right]-\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t} }\left[\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)\nabla _{\mathbf{\theta}}\ell(f(\mathbf{x},\mathbf{\theta}),y)\right]\right\|\] \[\leq \frac{M}{\alpha}|\mathcal{P}^{t+1}(\mathcal{A}(\mathbf{\theta},h))- \mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))|.\] Also, \[\left|\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}_{t+1}}\left[-\frac{1}{\alpha} \mathbf{1}_{\mathcal{A}(\mathbf{\theta},h)}(\mathbf{x},y)+1\right]-\mathbb{E}_{(\mathbf{x },y)\sim\mathbb{P}_{t}}\left[-\frac{1}{\alpha}\mathbf{1}_{\mathcal{A}(\mathbf{ \theta},h)}(\mathbf{x},y)+1\right]\right|\leq\frac{1}{\alpha}|\mathcal{P}^{t+1}( \mathcal{A}(\mathbf{\theta},h))-\mathcal{P}^{t}(\mathcal{A}(\mathbf{\theta},h))|.\] It remains to bound \(|\mathcal{P}^{t+1}(\mathcal{A}(\mathbf{\theta},h))-\mathcal{P}^{t}(\mathcal{A}( \mathbf{\theta},h))|\). Now, let us invoke a theorem from [12]. **Lemma 11** (c.f. [12, Theorem 2.1]).: _Let \(p_{t}\) and \(p_{t+1}\) be the probability density function of the distributions \(\mathbb{P}_{t}\) and \(\mathbb{P}_{t+1}\). Then,_ \[\|p_{t}-p_{t+1}\|_{1}^{2}\leq c(\|p_{t}\|_{1}+\|Dp_{t}\|_{1}+\|p_{t+1}\|_{1}+ \|Dp_{t+1}\|_{1})\cdot\mathfrak{M}(\mathbb{P}_{t},\mathbb{P}_{t+1})\] _for some constant \(c>0\), where \(D\) is the differential operator and \(\|\cdot\|_{1}\) is the \(\ell_{1}\)-norm wrt the Lebesgue measure._ Let \(\mathcal{E}\) be the event space. The theorem implies that the total variation distance \(\sup_{\mathbf{A}\in\mathcal{E}}|\mathcal{P}^{t+1}(\mathbf{A})-\mathcal{P}^{t}(\mathbf{A})|\) is upper bounded in terms of \(\mathfrak{M}(\mathbb{P}_{t},\mathbb{P}_{t+1})\), since \[\sup_{\mathbf{A}\in\mathcal{E}}|\mathcal{P}^{t+1}(\mathbf{A})-\mathcal{P}^ {t}(\mathbf{A})| =\left|\int_{\mathbf{A}}p_{t}(\mathbf{x},y)d(\mathbf{x},y)-\int_{\mathbf{A}}p_{t+ 1}(\mathbf{x},y)d(\mathbf{x},y)\right|\] \[\leq\int_{\mathbf{A}}|p_{t}(\mathbf{x},y)-p_{t+1}(\mathbf{x},y)|d(\mathbf{x},y)\] \[\leq\int_{\Omega}|p_{t}(\mathbf{x},y)-p_{t+1}(\mathbf{x},y)|d(\mathbf{x},y)= \|p_{t}-p_{t+1}\|_{1}.\] Consequently, applying Theorem 1 yields the desired result. Corollary 1 shows that, under assumptions (i)-(vi) in Corollary 1, the regret of online stochastic gradient descent grows sublinearly when both the cumulative distribution drifts and the cumulative gradient noise variances grow sublinearly. In particular, the assumption on the smoothness of \(L_{\alpha}\) is shown to be satisfied if the gradient on \((\mathbf{x},y)\) is not zero on the boundary of the event set [3, Section 2]; for details on the assumption see [55, Theorem 2.1]. 
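For illustration, the following sketch (ours, not part of [32] or of the analysis above) evaluates the mini-batch gradient estimate in (37) for the squared loss \(\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)=(y-\boldsymbol{\theta}^{\mathrm{T}}\boldsymbol{x})^{2}\), assuming the event set is the loss tail \(\mathcal{A}(\boldsymbol{\theta},h)=\{(\boldsymbol{x},y)\colon\ell(f(\boldsymbol{x},\boldsymbol{\theta}),y)\geq h\}\); all names are illustrative.

```python
import numpy as np

def cvar_grad_estimate(theta, h, X, y, alpha):
    """Mini-batch estimate of the CVaR gradient in (37), for the squared loss
    ell = (y - theta^T x)^2.  X: (m, n) sample matrix, y: (m,) targets."""
    resid = y - X @ theta                     # (m,)
    loss = resid ** 2                         # per-sample loss
    tail = (loss >= h).astype(float)          # indicator of A(theta, h)
    grad_ell = -2.0 * resid[:, None] * X      # d ell / d theta, shape (m, n)
    g_theta = (tail[:, None] * grad_ell).mean(axis=0) / alpha
    g_h = 1.0 - tail.mean() / alpha
    return g_theta, g_h

# one step of (37):  theta -= gamma * g_theta;  h -= gamma * g_h
```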
Although the conditions are described as general in [3, 56, 60], the conditions on the smoothness could be hard to verify. A number of works suggest smooth approximation of the CVaR problem; see, e.g., [32, 52]. Similar analysis could be applied but a cumulative approximation error term would be involved in the regret bound. **Remark 8**.: _When a regularizer is added to the CVaR formulation, it is not clear whether set-restricted proximal PL inequality (an analogy to proximal PL inequality) of \(\ell+R\) would lead to the proximal PL condition of the regularized CVaR objective \(L_{\alpha}^{t}+R\), for some regularizer \(R\). The main technical difficulty lies in comparing the minimum values involved in the proximal PL inequality and the set-restricted proximal inequality when a regularizer exists. One may need to explore whether set-restricted proximal PL inequality is still a suitable tool to understand the proximal PL condition of the regularized CVaR learning problem. We will leave this as a future work._ Numerical Simulations In this section, we present some numerical results to illustrate the theoretical findings of our proposed framework. Specifically, in the following, at every time step \(t\) (for \(t=1,\ldots,T\)), we generate the set of data \(\{(\mathbf{u}^{i,t},d_{i}^{t})\}_{i=1}^{m}\), where \[d_{i}^{t}=\tilde{\mathbf{\theta}}_{t}^{T}\mathbf{u}^{i,t}+\nu_{i}^{t}.\] Here, \(\mathbf{u}^{i,t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a random vector with dimension \(n=5\), where every entry follows an independent and identically distributed (iid) Gaussian distribution with zero mean; \(\nu_{i}^{t}\sim\mathcal{N}(0,0.5)\) is some mean-zero measurement noise with variance \(0.5\); and \(T=500\) is the horizon length of interest. For \(t=1,\ldots,T-1\), \(\tilde{\mathbf{\theta}}_{t}\in\mathbb{R}^{n}\) is deterministic, unknown and time-varying, which we initialize at \(\tilde{\mathbf{\theta}}_{1}=\mathbf{e}\) and update by \[\tilde{\mathbf{\theta}}_{t+1}=\mathrm{proj}_{C}(\tilde{\mathbf{\theta}}_{t}+\mathbf{z}^{ t}) \tag{40}\] with \(\mathbf{e}\in\mathbb{R}^{n}\) being the all-one vector, \(\mathbf{z}^{t}\sim\mathcal{N}(\mathbf{0},10^{-4}\cdot t^{-1}\mathbf{I})\) and some convex set \(C\subseteq\mathbb{R}^{n}\) in the numerical simulations. We assess the performance of online stochastic gradient descent (resp. online stochastic proximal gradient descent) when the objective function is unconstrained (resp. constrained or regularized) via _relative regret_, which is given by [11, Section IV] \[\text{Relative regret}(t)=\frac{1}{t}\cdot\frac{\text{Regret}(t)}{\text{Regret }(1)}.\] The relative regret shown in the figures are averaged over 100 Monte Carlo runs. ### Adaptive Filtering In this example, we are interested in solving the adaptive filtering problem, which can be posed as an online stochastic optimization problem with time-varying distributions [11]: \[\inf_{\mathbf{\theta}\in\mathbb{R}^{n}}\mathbb{E}_{(\mathbf{u},d)\sim\mathbb{P}_{t}}[( d-\mathbf{\theta}^{T}\mathbf{u})^{2}]+R(\mathbf{\theta}) \tag{41}\] for \(t=1,\ldots,T\). 
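For illustration, the minimal sketch below (ours; the parameter values follow the description at the beginning of this section and all names are illustrative) simulates the drifting data stream, runs plain online stochastic gradient descent for the unconstrained case \(R=0\), and accumulates the instantaneous excess risk that enters the regret.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 5, 5, 500
gamma = 0.01 / np.sqrt(T)                  # constant step size

theta_true = np.ones(n)                    # \tilde{theta}_1 = e
theta = np.zeros(n)                        # iterate, theta_1 = 0
excess = []

for t in range(1, T + 1):
    U = rng.standard_normal((m, n))                        # u^{i,t} ~ N(0, I)
    d = U @ theta_true + rng.normal(0.0, np.sqrt(0.5), m)  # noisy outputs
    # with u ~ N(0, I), the excess risk of the current iterate equals
    # ||theta - theta_true||^2 (the noise variance cancels)
    excess.append(np.sum((theta - theta_true) ** 2))
    grad = -2.0 * U.T @ (d - U @ theta) / m                # stochastic gradient
    theta -= gamma * grad
    # parameter drift (40) with C = R^n, z^t ~ N(0, 1e-4 / t * I)
    theta_true = theta_true + rng.normal(0.0, np.sqrt(1e-4 / t), n)

print("cumulative excess risk:", sum(excess))   # enters Regret(T)
```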
We consider three optimization problems corresponding to different regularizers and different feasible sets \(C\) defined in (40): (i) an unconstrained optimization problem, where \(C=\mathbb{R}^{n}\) and \(R=0\); (ii) a constrained optimization problem, where \(C=[-5,5]^{n}\) and \(R(\cdot)=\mathbf{1}_{C}(\cdot)\) with \(\mathbf{1}_{C}\) as the indicator function wrt \(C\); and (iii) a regularized optimization problem, where \(C=\mathbb{R}^{n}\) and \(R(\cdot)=\|\cdot\|_{1}\). We apply online stochastic gradient descent to problem (i) and online stochastic proximal gradient descent to problems (ii) and (iii), all with initialization \(\mathbf{\theta}_{1}=\mathbf{0}\). We test the performance of both methods using two different step sizes: (a) a constant step size \(\gamma_{t}=0.01/\sqrt{T}\), and (b) a decaying step size \(\gamma_{t}=0.01/\sqrt{t}\) for \(t=1,\ldots,T-1\). The number of samples drawn at each time step is \(m=5\). When applying online stochastic proximal gradient descent, we use the fact that, for any \(\mathbf{x}\in\mathbb{R}^{n}\), the proximal step for \(R(\cdot)=\mathbf{1}_{C}(\cdot)\) is the projection onto \(C\), given by \[(\mathrm{prox}_{\gamma_{t}R}(\mathbf{x}))_{i}=\begin{cases}5,&x_{i}>5\\ x_{i},&x_{i}\in[-5,5]\\ -5,&x_{i}<-5\end{cases}\] while that for \(R(\cdot)=\|\cdot\|_{1}\) is the soft-thresholding operator \[(\mathrm{prox}_{\gamma_{t}R}(\mathbf{x}))_{i}=\mathrm{sgn}(x_{i})\max\{|x_{i}|-\gamma_{t},0\}\] for \(i=1,\ldots,n\). We need to find an optimal point of each problem to compute the relative regret at each time step. For problems (i) and (ii), it is known that an optimal point at time \(t\) is given by \(\mathbf{\theta}_{t}^{*}=\tilde{\mathbf{\theta}}_{t}\) [11]. For problem (iii), we use the true vector \(\tilde{\mathbf{\theta}}_{t}\) as the initial point and perform proximal gradient descent updates with the constant step size \(0.01\) until either the difference of the objective values of successive iterates is less than \(10^{-6}\) or the number of iterations reaches 1000. We then declare the result an optimal point \(\mathbf{\theta}_{t}^{*}\). Figure 0(a) shows the relative regret of online stochastic gradient descent and online stochastic proximal gradient descent with a constant step size \(\gamma_{t}=0.01/\sqrt{T}\) for all \(t\) when the adaptive filtering problem is unconstrained and constrained/regularized, respectively. As can be seen, the relative regret of online stochastic gradient descent applied to the unconstrained problem decreases as \(t\) increases, implying a sublinear regret of online stochastic gradient descent. This verifies our findings in Theorem 1. Despite the fact that the cumulative variance of the measurement noise grows linearly, a sublinear regret of online stochastic gradient descent can be achieved given a suitable step size rule. Similar results can be observed for online stochastic proximal gradient descent. Specifically, although Theorem 2 cannot guarantee a sublinear regret bound of online stochastic proximal gradient descent as discussed in Remark 7, we see that a sublinear regret bound can be achieved in numerical simulations when the adaptive filtering problem is either constrained or regularized. Figure 0(b) shows the relative regret of online stochastic gradient descent and online stochastic proximal gradient descent with a decaying step size \(\gamma_{t}=0.01/\sqrt{t}\) for \(t=1,\dots,T\) when applied to the adaptive filtering problem with different regularizers. As can be seen, the online stochastic gradient descent (resp.
online stochastic proximal gradient descent) achieves sublinear regret when the problem is unconstrained (resp. constrained or regularized). This verifies our discussion in Remark 5 that the step size can be set to be decreasing instead of constant. Moreover, as the step size is larger at the beginning, the learning rate is faster than that using constant step size, resulting in a lower relative regret of both online stochastic gradient descent and online stochastic proximal gradient descent given different regularizers. Besides, using either step size, we see that the relative regret of online stochastic proximal gradient when applied to the regularized problem decreases at the slowest speed. This partly explains the technical difficulty in improving the regret bound proved in Theorem 2 that the structure of the regularizer could seriously affect the performance of the online algorithms. ### CVaR Learning In this example, we consider the online CVaR learning problem with time-varying distribution: \[\inf_{\mathbf{\theta},h}\mathbb{E}_{(\mathbf{u},d)\sim\mathbb{P}_{t}}\left[h+\frac{1} {\alpha}((d-\mathbf{\theta}^{T}\mathbf{u})^{2}-h)_{+}\right]+R(\mathbf{\theta}) \tag{42}\] with \(\alpha=0.95\). Using the same setting as in the previous example, we consider all unconstrained, constrained and regularized optimization problems of (42). To better estimate the underlying probability distribution, we draw \(m=20\) samples drawn at each time step. We apply online stochastic gradient descent for problem (i) and apply online stochastic proximal gradient descent for problems (ii) and (iii), all with initialization \((\mathbf{\theta}_{1},h_{1})=\mathbf{0}\). We test the performance of both methods with the following two step sizes: (a) a constant step size \(\gamma_{t}=0.01/\sqrt{T}\), and (b) a decaying step size \(\gamma_{t}=0.01/\sqrt{t}\) for \(t=1,\dots,T-1\). An optimal point for computing a relative regret is found as follows: At each time step, we approximate the distribution using a new sample set with 100 samples. Then, for all unconstrained, constrained and regularized versions of problem (42), we initialize the iterate at the origin and perform the gradient descent (or proximal gradient descent) updates using the constant step size 0.01 until either the difference of the objective values of successive iterates is less than 0.01 or the number of iterations reaches 1000. We then declare it as an optimal point \((\mathbf{\theta}_{t}^{*},h_{t}^{*})\). Figure 1(a) shows the relative regret of online stochastic gradient descent and online stochastic proximal gradient descent with a constant step size \(\gamma_{t}=0.01/\sqrt{T}\) for all \(t\). It can be seen that both online stochastic gradient descent and online stochastic proximal gradient descent enjoy sublinear regret regardless of the regularizers. This matches our result in Corollary 1 that online stochastic gradient descent achieves sublinear regret when applied to unconstrained online CVaR problem. Although it is not known whether a regularized CVaR learning problem possesses proximal PL condition, we see that online stochastic proximal gradient descent achieves sublinear regret when applied to constrained or regularized version of (42). 
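For reference, the regularized update combines the gradient estimate (37) with the soft-thresholding map described above; a minimal sketch (ours) of one online stochastic proximal gradient step for the \(\ell_{1}\)-regularized version of (42), assuming the squared loss, the tail event \(\mathcal{A}(\boldsymbol{\theta},h)=\{\ell\geq h\}\), and that the regularizer acts on \(\boldsymbol{\theta}\) only; names are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal map of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cvar_prox_step(theta, h, U, d, alpha, gamma):
    """One online stochastic proximal gradient update for (42) with
    R = ||.||_1, squared loss and tail event A = {loss >= h}."""
    resid = d - U @ theta
    tail = (resid ** 2 >= h).astype(float)
    g_theta = (tail[:, None] * (-2.0 * resid[:, None] * U)).mean(axis=0) / alpha
    g_h = 1.0 - tail.mean() / alpha
    theta_new = soft_threshold(theta - gamma * g_theta, gamma)  # prox on theta
    h_new = h - gamma * g_h                                     # h unregularized
    return theta_new, h_new
```

Note that only the \(\boldsymbol{\theta}\) block passes through the proximal map, since the auxiliary variable \(h\) is not regularized in (42).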
In particular, we see that the relative regret of online stochastic gradient descent when applied to the unconstrained problem and that of online stochastic proximal gradient descent when applied to the constrained problem decrease, while the relative regret of online stochastic proximal gradient descent when applied to the regularized problem decreases at the slowest speed. This is because the \(\ell_{1}\) regularizer destroys the smoothness of the problem, resulting in a slower convergence of the algorithm. On the other hand, the online stochastic proximal gradient descent performs better than the online stochastic gradient descent when the problem is constrained, because more knowledge on the underlying distribution is available compared with the unconstrained problem.
Figure 1: Relative regret of online stochastic gradient descent and online stochastic proximal gradient descent when the adaptive filtering problem is unconstrained, constrained or regularized.
Figure 1(b) shows the relative regrets of online stochastic gradient descent and online stochastic proximal gradient descent when applied to the unconstrained problem and constrained/regularized problem, respectively. Similar to Figure 1(a), all the curves are decreasing, implying sublinear regrets of both methods when applied to the corresponding problems. Also, we see that the relative regret of online stochastic proximal gradient descent when applied to the constrained problem is the lowest, whereas that when applied to the regularized problem is the highest. Compared with Figure 1(a), we see that both methods perform better using a decaying step size instead of a constant step size, because of the faster learning rate at the beginning.
## 6 Conclusion
In this paper, we considered an online stochastic optimization problem with a time-varying distribution, when the loss function satisfies the PL condition. We established a regret bound of online stochastic gradient descent, which is composed of the cumulative gradient biases caused by stochasticity and the cumulative Wasserstein distances between distributions at consecutive time steps. A similar regret bound of online stochastic proximal gradient descent was also shown when the objective function is regularized. We applied this framework to the CVaR learning problem by improving an existing proof of its PL condition and established its regret bound. Our numerical results support our theoretical findings and demonstrate the power of the framework. An interesting future direction is to apply the proposed framework to other data-driven modeling optimization problems with time-varying distributions. Particularly, it is intriguing to see under what conditions the CVaR problem possesses the proximal PL condition when it is regularized.
2308.16618
Enhancing Frequency Control through Rate of Change of Voltage Feedback
This letter proposes a simple and inexpensive technique to improve the frequency control of distributed energy resources. The proposed control consists in modifying the conventional estimated bus frequency signal with an additional feedback signal that utilizes the rate of change of the voltage magnitude measured at the same bus. The case study showcases the benefits of the proposed control and compares its performance with standard frequency control schemes through time-domain simulations.
Federico Milano, Bibi Alhanjari, Georgios Tzounas
2023-08-31T10:24:05Z
http://arxiv.org/abs/2308.16618v1
# Enhancing Frequency Control through Rate of Change of Voltage Feedback ###### Abstract This letter proposes a simple and inexpensive technique to improve the frequency control of distributed energy resources. The proposed control consists in modifying the conventional estimated bus frequency signal with an additional feedback signal that utilizes the rate of change of the voltage magnitude measured at the same bus. The case study showcases the benefits of the proposed control and compares its performance with standard frequency control schemes through time-domain simulations. Frequency control, geometric observability, complex frequency, low-inertia systems. ## I Introduction The increasing proliferation of converter-based generation is known to reduce the available mechanical inertia in the electric power grid. As a result, frequency variations triggered from imbalances between power supply and demand become more prominent and faster, and threaten the system's stability and performance. In response to this challenge, system operators and researchers have been actively seeking for techniques that enhance the effectiveness of frequency control services provided by Distributed Energy Resources (DERs). Literature in the field has explored several optimal control and coordination strategies for different converter-based energy technologies, including wind, solar PV, storage, and demand response systems [1, 2]. For example, recent works have proposed analytical methods for the design of synthetic inertia and droop coefficients of DERs to meet a desired dynamic performance, as well as optimal DER placement and power sharing strategies, e.g. see [3, 4]. In previous work, the authors of this paper have studied the analytical links between power, frequency and voltage variations in transmission and distribution networks to establish alternative control signals that can improve the stability and primary response of low-inertia systems [5, 6]. In this vein, this letter proposes to modify the bus frequency signal conventionally utilized in DER primary control loops to include as additional feedback signal the rate of change of the voltage magnitude measured at the same bus. The rationale of the proposed control scheme is based on a recent work by the first author that defines a novel quantity, namely, the _complex frequency_, where the rate of change of the voltage magnitude constitutes the real part and the conventional frequency is the imaginary part [7]. ## II Rationale and Proposed Control Scheme The definition of the complex frequency relies on a general property of complex quantities, as follows. Let us consider the Park vector of the bus voltage, say \(\bar{v}\), as time-dependent complex value that utilizes the \(dq\)-axis components of the Park reference frame rotating at constant angular speed \(\omega_{o}\), i.e: \[\bar{v}(t)=v_{d}(t)+j\,v_{q}(t)\,, \tag{1}\] where \(j\) is the imaginary unit. This voltage is substantially a dynamic phasor and can be written in polar coordinates as: \[\bar{v}=v\,e^{j\,\theta}\,, \tag{2}\] where we have dropped the dependency on time for economy in notation. Defining \(u=\ln(v)\), \(v\neq 0\), (2) becomes: \[\bar{v}=e^{u+j\,\theta}\,. \tag{3}\] If \(\bar{v}\) is a function of time, then the derivative of (3) leads to: \[\dot{\bar{v}}=(\dot{u}+j\,\dot{\theta})\,e^{u+j\,\theta}=(\dot{u}+j\,\dot{ \theta})\,\bar{v}\,. 
\tag{4}\] Equating (2) and (1) and taking into account the rotation of the Park reference frame, one has: \[\omega=\dot{\theta}=\frac{v_{d}\dot{v}_{q}-v_{q}\dot{v}_{d}}{v^{2}}+\omega_{o}\,, \tag{5}\] \[\rho=\dot{u}=\frac{\dot{v}}{v}=\frac{v_{d}\dot{v}_{d}+v_{q}\dot{v}_{q}}{v^{2}}\,, \tag{6}\] where \(\omega\) is the conventional instantaneous frequency of \(\bar{v}\) and \(\rho\) can be defined as an _instantaneous bandwidth_ [8] or, using a geometric analogy, a _radial frequency_ [9]. The time derivative of the Park vector in (1) can be written as: \[\dot{\bar{v}}=(\rho+j\,\omega)\,\bar{v}=\bar{\eta}\,\bar{v}\,, \tag{7}\] where \(\bar{\eta}\) is the _complex frequency_ as defined in [7]. For the discussion of the control proposed in this letter, it is relevant to rewrite \(\omega\) and \(\rho\) in (5) and (6) assuming now that the Park transform is obtained using the frequency of the Center of Inertia (COI), say \(\omega_{\rm COI}\), rather than the synchronous reference frame \(\omega_{o}\). This leads to: \[\omega=\frac{v_{d}^{\prime}\dot{v}_{q}^{\prime}-v_{q}^{\prime}\dot{v}_{d}^{\prime}}{v^{2}}+\omega_{\rm COI}\,, \tag{8}\] \[\rho=\frac{v_{d}^{\prime}\dot{v}_{d}^{\prime}+v_{q}^{\prime}\dot{v}_{q}^{\prime}}{v^{2}}\,, \tag{9}\] where \(v_{d}^{\prime}+jv_{q}^{\prime}\) is the Park vector obtained for the \(dq\)-axis reference frame rotating at \(\omega_{\rm COI}\). While \(v_{d}^{\prime}\neq v_{d}\) and \(v_{q}^{\prime}\neq v_{q}\), the values of \(\omega\) and \(\rho\) on the left-hand sides of (8) and (9) are in effect equal to the values in (5) and (6), respectively, as \(\omega\) and \(\rho\) are geometric invariants, that is, their values are independent of the coordinates utilized to measure the components of the voltage. The invariance of \(\rho\) is straightforward to show, as the identity \(v=|v_{d}+jv_{q}|=|v_{d}^{\prime}+jv_{q}^{\prime}|\) must hold independently of the reference speed chosen to obtain the Park transform. Demonstrating the invariance of \(\omega\) is more involved, and the interested reader can find a proof in [10]. The invariance of \(\omega\) and \(\rho\) leads to the following remarks:
* Equation (8) shows that the instantaneous frequency \(\omega\) can be decomposed into two terms. The first captures exclusively local bus dynamics and is null in steady state. The second, i.e., \(\omega_{\rm COI}\), is slow and follows the system-wide dynamic of the frequency.
* The radial frequency \(\rho\) depends only on local dynamics and, hence, is always null in steady state.
It is immediate to observe that, if the voltage at the bus where the frequency is measured is regulated through an automatic control, then the local variations of \(\rho\) can be relatively small w.r.t. the variations of \(\omega\). An obvious limit case is that of an ideal voltage controller, for which \(\rho=0\) as the voltage is always kept perfectly constant. This also justifies some models of the bus voltage signal proposed in the literature [11]. However, since ideal voltage controllers do not exist in practice, we can always assume that the voltage magnitude and hence also \(\rho\) do exhibit a transient behavior following a disturbance. In this letter, we show the benefits for the frequency control of power systems of including \(\rho\) in the conventional frequency control input signal. Such control is meaningful when the dynamics of \(\omega\) and \(\rho\) evolve in similar time scales.
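As an aside, (5) and (6) are simple to evaluate from sampled Park components; the sketch below (ours) uses raw finite differences for the derivatives, whereas a practical implementation would obtain them from a PLL or a washout filter, and the nominal speed \(\omega_{o}\) used here is illustrative.

```python
import numpy as np

def omega_rho_from_park(vd, vq, dt, omega_o=2 * np.pi * 50):
    """Discrete-time estimates of (5)-(6) from sampled Park components.
    vd, vq: arrays of v_d(t), v_q(t); dt: sampling period [s];
    omega_o: nominal angular speed of the Park frame (illustrative)."""
    vd_dot = np.gradient(vd, dt)
    vq_dot = np.gradient(vq, dt)
    v2 = vd ** 2 + vq ** 2
    omega = (vd * vq_dot - vq * vd_dot) / v2 + omega_o   # eq. (5)
    rho = (vd * vd_dot + vq * vq_dot) / v2               # eq. (6)
    return omega, rho
```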
This is particularly relevant for converter-interfaced devices, whose frequency controllers can be designed to be as fast as or faster than their voltage regulation. We exploit this feature to design the proposed DER control. On the other hand, the proposed control is not suitable for conventional synchronous machines, as their voltage regulators are much faster than the dynamics that can be tracked by turbine governors. A consequence of assuming similar time scales of frequency and voltage controllers is that the local terms of \(\omega\) and \(\rho\) in (8) and (9), while having different expressions, have similar harmonic content and thus show a similar "trend". This statement is further illustrated in the case study presented in Section III, but we also justify it qualitatively below with an analytic example. Let us assume that: \[v^{\prime}_{d}=V-k\,e^{-\alpha t}\cos(\beta t)\,, \tag{10}\] \[v^{\prime}_{q}=k\,e^{-\alpha t}\sin(\beta t)\,,\] where \(V\), \(k\), \(\alpha\) and \(\beta\) are constant. The expressions in (10) resemble a typical frequency transient in power systems if \(k\ll V\) and \(\alpha,\beta\ll\omega_{\rm COI}\), where \(\alpha\) and \(\beta\) represent the damping and the angular frequency, respectively, of the dominant mode of the system. Leveraging these inequalities, one arrives at the following approximate expressions from (8) and (9): \[\omega-\omega_{\rm COI}\approx\frac{k\,e^{-\alpha t}}{V}\left[\beta\cos(\beta t)-\alpha\sin(\beta t)\right], \tag{11}\] \[\rho\approx\frac{k\,e^{-\alpha t}}{V}\left[\beta\sin(\beta t)+\alpha\cos(\beta t)\right].\] The latter expressions show that \(\rho\) and \(\omega-\omega_{\rm COI}\) have the same order of magnitude and show the same oscillatory behavior. We use the observations and the empirical result above as follows. Conventional frequency controllers of non-synchronous resources utilize an estimation of the frequency of the voltage at their point of connection with the grid, e.g., using a Phase-Locked Loop (PLL), and compare it with a reference, typically \(\omega_{o}\). The resulting frequency error signal is thus a mix of the local frequency oscillations and the system-wide frequency deviation due to the power imbalance in the grid. This means that the local and system-wide variations are weighted in the same way by the controller. This appears inevitable since, in order to estimate the local term of the instantaneous frequency in (8), one would need to be able to measure \(\omega_{\rm COI}\) first, which is not available to local controllers. However, extrapolating the result of the example above and based on the experience gained from a large number of simulations, we have observed that we can decouple, even if in an approximated way, these two effects and "weight" them differently in the frequency control. The main advantage of the rate of change of voltage (\(\rho\)) as a compensating signal for the frequency control is that it is a purely _local_ signal, as opposed to the frequency, which is both local and system-wide as it intrinsically contains information on the frequency of the center of inertia (\(\omega_{\rm COI}\)). Controllers that utilize the rate of change of frequency (RoCoF) are effective as long as the variations of the RoCoF are not biased by the variations of \(\omega_{\rm COI}\).
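As a side check on the analytic example, the following quick numerical experiment (ours, with illustrative values of \(V\), \(k\), \(\alpha\) and \(\beta\)) confirms that the exact local terms obtained from (8) and (9) under (10) closely match the approximations (11) when \(k\ll V\).

```python
import numpy as np

V, k, alpha, beta = 1.0, 0.02, 0.5, 2 * np.pi   # illustrative, k << V
t = np.linspace(0.0, 5.0, 5001)

vd = V - k * np.exp(-alpha * t) * np.cos(beta * t)     # eq. (10)
vq = k * np.exp(-alpha * t) * np.sin(beta * t)
vd_dot, vq_dot = np.gradient(vd, t), np.gradient(vq, t)
v2 = vd ** 2 + vq ** 2

omega_local = (vd * vq_dot - vq * vd_dot) / v2          # local term of (8)
rho = (vd * vd_dot + vq * vq_dot) / v2                  # eq. (9)

env = k * np.exp(-alpha * t) / V
omega_approx = env * (beta * np.cos(beta * t) - alpha * np.sin(beta * t))
rho_approx = env * (beta * np.sin(beta * t) + alpha * np.cos(beta * t))

# both errors are small compared with the k*beta/V envelope of the signals
print(np.max(np.abs(omega_local - omega_approx)), np.max(np.abs(rho - rho_approx)))
```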
In conventional systems or systems where converters emulate the behavior and time scales of synchronous machines, e.g., [12], the variations of \(\omega_{\rm COI}\) are slower than local frequency oscillations, and that is why controllers based on the RoCoF can be effective. However, in systems with very low inertia, \(\omega_{\rm COI}\) can partially overlap with local fluctuations and thus lead to a less effective control based on the RoCoF. We note, moreover, that controllers based on the RoCoF generally require an additional control channel in parallel with the conventional frequency droop control. On the other hand, \(\rho\) can be included in any existing DER frequency controller. In summary, we propose to use \(\rho\) to build the following modified input signal to primary frequency controllers: \[\tilde{\omega}=\omega-K\,\rho\,, \tag{12}\] where \(K\) is the parameter to be adjusted and that allows tuning the impact of local frequency oscillations on the power output of the converter-interfaced device. It is relevant to observe that the estimation of \(\rho\) is simple and readily available as, from (6), one simply needs to measure \(v\) and estimate \(\hat{v}\). ## III Case Study In this section, we illustrate the effect of the proposed control with the WSCC 9-bus system and the New England 39-bus system. All simulation results are obtained with the software tool Dome [13]. In the simulations, estimations of \(\omega\) and \(\rho\) are obtained using a synchronous-reference frame PLL and voltage measurements at the point of connection of the converter-interfaced generator (CIG). ### _WSCC 9-bus System_ The original WSCC 9-bus system is modified to emulate a low-inertia system by reducing the inertia constants of the synchronous machines (SMs), namely, 4 s for SM 1 and 2 and 3 s for SM 3. Then, a CIG is connected through a transformer to bus 7. The block diagram of the CIG control is shown in Fig. 1. The inner loop regulates the \(dq\)-axis currents and includes limiters. The outer loop consists of a voltage controller and a frequency controller with droop and washout channels. In the scheme, \(p^{\rm ref}\) and \(q^{\rm ref}\) are the operating set points for a given period of the DER as defined by the market and/or transmission system operators; whereas \(v^{\rm ref}_{\rm VO}\) is an auxiliary signal coming from power oscillator damper (POD) connected to the DER, which is equivalent ot the power system stabilizers of conventional synchronous generators. The considered DER utilizes a grid-following converter. It is important to note that employing grid-forming converters would not change the effectiveness of the proposed compensating signal based on \(\rho\) as long as the voltage and frequency controllers have similar time scales. At the initial operating point the CIG generates 100 MW, which are accommodated in the system by reducing by the same amount the power produced by SM 2. To study the effectiveness of the proposed control in improving the frequency response of the system, we first compare the observability that the signals \(\omega\) and \(\tilde{\omega}\) provide to the dynamic modes of the system Fig. 1: Control diagram of the primary controllers of DERs. The conventional frequency controller utilizes the signal \(\omega\), whereas the proposed control utilizes as input signal \(\tilde{\omega}\) as defined in (12). directly linked to primary frequency regulation. 
Recall that a dynamic mode represents a primary frequency control mode if it meets the following properties [14]: (i) it is global, i.e. all buses are coherent to the mode. (ii) the associated mode shapes for all generator speeds are in phase; and (iii) its natural frequency lies in the range 0.02-0.1 Hz. We carry out a small-signal analysis of the system assuming that the CIG frequency control loop is switched off. The results indicate that the system is stable at the examined equilibrium and that the primary frequency control mode is represented by the complex pair of eigenvalues \(-0.55\pm j1.12\) with natural frequency 0.09 Hz. The corresponding normalized mode shapes of the rotor speeds of the three SMs are depicted in the left panel of Fig. 2. The geometric observability \(go\)[15] of the frequency control mode by \(\rho\), \(\omega\), and \(\tilde{\omega}\) for \(K=1\), are summarized in Table I. The right panel in Fig. 2 tracks the ratio \(go(\tilde{\omega})/go(\omega)\) between the observability of \(\tilde{\omega}\) and \(\omega\), as a function of \(K\). This analysis confirms that \(\rho\) is effective to improve the dynamic behavior of the system. Note, however, that the specific value of the gain depends on the estimation of \(\rho\), which in the simulations below is obtained through a PLL that estimates \(\hat{v}\), and the parameters of the frequency controller of the CIG. Figures 3 and 4 show the transient behavior of the system following the loss of 50% of the load consumption at bus 5, for three scenarios: (i) system without CIG; (ii) system with CIG and frequency control using the conventional signal \(\omega\); and (iii) system with CIG and frequency control using the proposed signal \(\tilde{\omega}\) defined in (12). The results shown in Figs. 3 and 4 were obtained for \(K=1.2\). The proposed control achieves lower deviations of the COI frequency and of the power generated by the CIG, without deteriorating the voltage control performance or modifying the reactive power output. The compensating signal has the sought effect of reducing the local oscillation of the active power of the CIG, which results in an overall improvement of the system frequency dynamic response. ### _New England 39-bus System_ This section illustrates the dynamic performance of the proposed control for a larger system with multiple CIGs. The original data is modified to accommodate 70% of CIG-based generation and the inerties of conventional SMs are reduced to emulate a low-inertia system. The setup of the grid is same as in [6]. The contingency considered is a fault at bus 12 cleared after \(0.2\) s. Figure 5 shows that, also in this case, the compensating signal effectively reduces local oscillations and improves the system frequency dynamic response. It is relevant to note that, in this scenario, the signal \(\tilde{\omega}\) is obtained using \(K=-0.03\). Comparing this value with that utilized for the WSCC 9-bus system (\(K=1.2\)), we note that the gain \(K\) is highly system-dependent and can be positive or negative. On the other hand, \(K\) does not need to be adjusted when the operating point changes and, according to our study, does not appear to interfere or couple with other system dynamics. ## IV Conclusions This letter builds on top of a recently proposed definition of _complex frequency_ that accounts for angle and magnitude voltage rate of changes in a unified geometric-based framework. 
The invariant properties of the components of the complex frequency are exploited in this work to separate local and system-wide dynamics of the frequency. This separation allows defining a frequency control that can compensate local oscillations and, as a consequence, improve the overall transient response of the grid. The effectiveness of this approach is confirmed by the eigensensitivity analysis and the evaluation of the geometric observability of the proposed compensated control signal, and by time-domain simulations. The proposed control is suited for CIGs, the active power controllers of which are generally faster than those of conventional power plants and are thus more prone to be affected by local oscillations of the frequency. Finally, the proposed control is also simple and inexpensive to implement as it requires only local measurements of the voltage at the point of connection of the devices that regulate the frequency. Fig. 4: Transient behavior of the power injected by the CIG for various control setups of the WSCC test system. Fig. 5: New England 39-bus system: Transient behavior of the frequency for various setups of the frequency control of the CIGs. Fig. 3: Transient behavior of the frequency of the COI and of the voltage at bus 7 for various control setups of the WSCC test system. Fig. 2: WSCC test system: Left: Frequency control mode shapes of SM speeds. Right: geometric observability ratio for the conventional and proposed frequency control modes as a function of \(K\).
2309.10140
Neural Feature Learning in Function Space
We present a novel framework for learning system design with neural feature extractors. First, we introduce the feature geometry, which unifies statistical dependence and feature representations in a function space equipped with inner products. This connection defines function-space concepts on statistical dependence, such as norms, orthogonal projection, and spectral decomposition, exhibiting clear operational meanings. In particular, we associate each learning setting with a dependence component and formulate learning tasks as finding corresponding feature approximations. We propose a nesting technique, which provides systematic algorithm designs for learning the optimal features from data samples with off-the-shelf network architectures and optimizers. We further demonstrate multivariate learning applications, including conditional inference and multimodal learning, where we present the optimal features and reveal their connections to classical approaches.
Xiangxiang Xu, Lizhong Zheng
2023-09-18T20:39:12Z
http://arxiv.org/abs/2309.10140v3
# A Geometric Framework for Neural Feature Learning+ ###### Abstract We present a novel framework for learning system design based on neural feature extractors by exploiting geometric structures in feature spaces. First, we introduce the feature geometry, which unifies statistical dependence and features in the same functional space with geometric structures. By applying the feature geometry, we formulate each learning problem as solving the optimal feature approximation of the dependence component specified by the learning setting. We propose a nesting technique for designing learning algorithms to learn the optimal features from data samples, which can be applied to off-the-shelf network architectures and optimizers. To demonstrate the application of the nesting technique, we further discuss multivariate learning problems, including conditioned inference and multimodal learning, where we present the optimal features and reveal their connections to classical approaches. F 1 Feature geometry, neural feature learning, multivariate dependence decomposition, nesting technique ## 1 Introduction Learning useful feature representations from data observations is a fundamental task in machine learning. Early developments of such algorithms focused on learning optimal linear features, e.g., linear regression, PCA (Principal Component Analysis) (Pearson, 1901), CCA (Canonical Correlation Analysis) (Hotelling, 1936), and LDA (Linear Discriminant Analysis) (Fisher, 1936). The resulting algorithms admit straightforward implementations, with well-established connections between learned features and statistical behaviors of data samples. However, in learning applications, data can have complex structures which linear features fail to capture. To address such problems, practitioners employ more complicated feature designs and build inference models based on these features, e.g., kernel methods (Cortes and Vapnik, 1995; Hofmann et al., 2008) and deep neural networks (LeCun et al., 2015). An illustration of such feature-based learning systems is shown in Figure 1, which consists of two parts: 1. A _learning_ module which generates a collection of features from the data. Data can take different forms, for example, input-output pairs1 or some tuples. The features can be either specified implicitly, e.g., by a kernel function in kernel methods, or explicitly parameterized as feature extractors, e.g., artificial neural networks. The features are learned via a training process, e.g., optimizing a training objective defined on the features. 2. An _assembling_ module which uses learned features to build an inference model or a collection of inference models. The inference models are used to provide information about data. For example, when the data take the form of input-output pairs, a typical inference model provides prediction or estimation of output variables based on the input variables. The assembling module determines the relation between features and resulting models, which can also be specified implicitly, e.g., in kernel methods. Learning systems are commonly designed with a predetermined assembling module, which allows the whole system to be learned in an end-to-end manner. One representative example of such designs is deep neural networks. 
On one hand, this end-to-end characteristic makes it possible to exploit large-scale and often over-parametrized neural feature extractors (LeCun et al., 2015; Krizhevsky et al., 2017; He et al., 2016; Vaswani et al., 2017), which can effectively capture hidden structures in data. On the other hand, the choices of assembling modules and learning objectives are often empirically designed, with design heuristics varying a lot across different tasks. Such heuristic choices make the learned feature extractors hard to interpret and adapt, often viewed as black boxes. In addition, the empirical designs are typically inefficient, especially for multivariate learning problems, e.g., multimodal learning (Ngiam et al., 2011), where there are many potentially useful assembling modules and learning objectives to consider. To address this issue, recent development of learning systems has adopted statistical and information-theoretical tools in designing training objectives. The design goal is to guarantee that learned features are informative, i.e., carry useful information for the inference tasks. To this end, a common practice is to incorporate information measures in learning objectives, such as mutual information (Tishby et al., 2000; Tishby and Zaslavsky, 2015) and rate distortion function (Chan et al., 2022). However, information measures might not effectively characterize the usefulness of features for learning tasks, due to the essentially different operational meanings. For example, the mutual information characterizes the optimal rate of a communication system (Shannon, 1948), which does not reflect the feature processing nature in learning systems. In practice, such information measures are often designed as regularization terms in the training objective (Belghazi et al., 2018), and the learned features generally depend on such case-by-case design choices. In this work, we aim to establish a framework for learning features that capture the statistical nature of data, and can be assembled to build different inference models without retraining. In particular, the features are learned to approximate and represent the statistical dependence of data, instead of solving specific inference tasks. To this end, we introduce a geometry on functional spaces, coined _feature geometry_, which we apply to connect statistical dependence with features. This connection allows us to represent statistical dependence by corresponding operations in feature spaces. Specifically, the features are learned by approximating the statistical dependence, and the approximation error measures the amount of information captured by features. The resulting features capture the statistical dependence and thus are useful for general inference tasks. Our main contributions of this work are as follows. * We establish a framework for designing learning systems where the learned features can be assembled to build different inference models. In particular, we introduce the feature Figure 1: Schematic diagram of a general feature-based learning system geometry, which converts feature learning problems to geometric operations in functional space. The resulting optimal features capture the statistical dependence of data and can be adapted to different inference tasks. * We propose a nesting technique for learning the features associated with the statistical dependence component of interest. 
The nesting technique provides a systematic approach to construct training objectives and learning algorithms, where we can employ off-the-shelf network architectures and optimizers. * We present the applications of this unified framework in multivariate learning problems, including learning for conditioned inference and multimodal learning with missing modalities. We demonstrate the optimal features and show their relations to classical learning problems, such as maximum likelihood estimation and maximum entropy principle (Jaynes, 1957a,b). The rest of the paper is organized as follows. In Section 2, we introduce the feature geometry, including operations on feature spaces and functional representations of statistical dependence. In Section 3, we present the learning system design in a bivariate learning setting, where we demonstrate the feature learning algorithm and the design of assembling modules. In Section 4, we introduce the nesting technique, as a systematic approach to learning algorithm design in multivariate learning problems. We then demonstrate the applications of the nesting technique, where we study a conditioned inference problem in Section 5, and a multimodal learning problem in Section 6. Then, we present our experimental verification for the proposed learning algorithms in Section 7, where we compare the learned features with the theoretical solutions. Finally, we summarize related works in Section 8 and provide some concluding remarks in Section 9. ## 2 Notations and Preliminaries For a random variable \(Z\), we use \(\mathcal{Z}\) to denote the corresponding alphabet, and use \(z\) to denote a specific value in \(\mathcal{Z}\). We use \(P_{Z}\) to denote the probability distribution of \(Z\). For the ease of presentation, we demonstrate our development with discrete random variables with finite alphabets. The corresponding results can be extended to continuous variables under certain regularity conditions, see, e.g., Weidmann (2012), for detailed discussions. ### Feature Geometry #### 2.1.1 Vector Space Given an inner product space with inner product \(\langle\cdot,\cdot\rangle\) and its induced norm \(\|\cdot\|\), we can define the projection and orthogonal complement as follows. **Definition 1**: _Give a subspace \(\mathcal{W}\) of \(\mathcal{V}\), we denote the projection of a vector \(v\in\mathcal{V}\) onto \(\mathcal{W}\) by_ \[\Pi\left(v;\mathcal{W}\right)\triangleq\operatorname*{arg\,min}_{w\in\mathcal{ W}}\|v-w\|^{2}\,. \tag{1}\] _In addition, we use \(\mathcal{V}\boxminus\mathcal{W}\) to denote the orthogonal complement of \(\mathcal{W}\) in \(\mathcal{V}\), viz., \(\mathcal{V}\boxminus\mathcal{W}\triangleq\{v\in\mathcal{V}\colon\langle v,w \rangle=0\text{ for all }w\in\mathcal{W}\}\)._ We use "\(\boxplus\)" to denote the direct sum of orthogonal subspaces, i.e., \(\mathcal{V}=\mathcal{V}_{1}\boxplus\mathcal{V}_{2}\) indicates that \(\mathcal{V}=\mathcal{V}_{1}+\mathcal{V}_{2}\) and \(\mathcal{V}_{1}\perp\mathcal{V}_{2}\). Then we have the following facts. **Fact 1**: _If \(\mathcal{V}=\mathcal{V}_{1}\boxplus\mathcal{V}_{2}\), then \(\mathcal{V}_{2}=\mathcal{V}\boxminus\mathcal{V}_{1}\). 
In addition, if \(\mathcal{W}\) is a subspace of \(\mathcal{V}\), then \(\mathcal{V}=\mathcal{W}\boxplus(\mathcal{V}\boxminus\mathcal{W})\)._ **Fact 2** (Orthogonality Principle): _Given \(v\in\mathcal{V}\) and a subspace \(\mathcal{W}\) of \(\mathcal{V}\), then \(w=\Pi\left(v;\mathcal{W}\right)\) if and only if \(w\in\mathcal{W}\) and \(v-w\in\mathcal{V}\boxminus\mathcal{W}\). In addition, we have \(v=\Pi\left(v;\mathcal{W}\right)+\Pi\left(v;\mathcal{V}\boxminus\mathcal{W}\right)\)._ #### 2.1.2 Feature Space Given an alphabet \(\mathcal{Z}\), we use \(\mathcal{P}^{\mathcal{Z}}\) to denote the collection of probability distributions supported on \(\mathcal{Z}\), and use \(\operatorname{relint}(\mathcal{P}^{\mathcal{Z}})\) to denote the relative interior of \(\mathcal{P}^{\mathcal{Z}}\), i.e., the collection of distributions in \(\mathcal{P}^{\mathcal{Z}}\) that have positive probability masses. We use \(\mathcal{F}_{\mathcal{Z}}\triangleq\{\mathcal{Z}\to\mathbb{R}\}\) to denote the collection of features (functions) on given \(\mathcal{Z}\). Specifically, \(\mathcal{F}_{\mathcal{Z}}\) represents the collection of constant features. We define the inner product on \(\mathcal{F}_{\mathcal{Z}}\) as \(\langle f_{1},f_{2}\rangle\triangleq\mathbb{E}_{\mathbb{R}}\left[f_{1}(Z)f_{2 }(Z)\right]\), where \(R\in\operatorname{relint}(\mathcal{P}^{\mathcal{Z}})\) is referred to as the **metric distribution**. This defines the geometry on \(\mathcal{F}_{\mathcal{Z}}\). Specifically, we have the induced norm \(\|f\|\triangleq\sqrt{\langle f,f\rangle}\), and the projection of \(f\in\mathcal{F}_{\mathcal{Z}}\) onto a subspace \(\mathcal{G}_{\mathcal{Z}}\) of \(\mathcal{F}_{\mathcal{Z}}\), i.e., \(\Pi\left(f;\mathcal{G}_{\mathcal{Z}}\right)\), is then defined according to Definition 1. For each \(k\geq 1\), we use \(\mathcal{F}_{\mathcal{Z}}^{k}\triangleq(\mathcal{F}_{\mathcal{Z}})^{k}=\{ \mathcal{Z}\to\mathbb{R}^{k}\}\) to denote the collection of \(k\)-dimensional features. For \(f\in\mathcal{F}_{\mathcal{Z}}^{k}\), we use \(\operatorname{span}\{f\}\) to denote the subspace spanned by all dimensions, i.e., \(\operatorname{span}\{f\}=\operatorname{span}\{f_{1},\ldots,f_{k}\}\), and use \(\Pi\left(f;\mathcal{G}_{\mathcal{Z}}\right)\) to denote the corresponding projection on each dimension, i.e., \(\Pi\left(f;\mathcal{G}_{\mathcal{Z}}\right)=[\Pi\left(f_{1};\mathcal{G}_{ \mathcal{Z}}\right),\ldots,\Pi\left(f_{k};\mathcal{G}_{\mathcal{Z}}\right)]^{ \mathrm{T}}\in\mathcal{G}_{\mathcal{Z}}^{k}\triangleq(\mathcal{G}_{\mathcal{ Z}})^{k}\). For \(f_{1}\in\mathcal{F}_{\mathcal{Z}}^{k_{1}},f_{2}\in\mathcal{F}_{\mathcal{Z}}^{k_{2}}\), we denote \(\Lambda_{f_{1},f_{2}}\triangleq\mathbb{E}_{\mathbb{R}}\left[f_{1}(Z)f_{2}^{ \mathrm{T}}(Z)\right]\). Specifically, we define \(\Lambda_{f}\triangleq\Lambda_{f,f}=\mathbb{E}_{\mathbb{R}}\left[f(Z)f^{ \mathrm{T}}(Z)\right]\) for feature \(f\in\mathcal{F}_{\mathcal{Z}}^{k}\). With \([k]\) denoting the set \(\{i\in\mathbb{Z}\colon 1\leq i\leq k\}\) with ascending order, we define \(f_{[k]}\in\mathcal{F}_{\mathcal{Z}}^{k}\) as \(f_{[k]}\colon z\mapsto(f_{1}(z),\ldots,f_{l}(z))^{\mathrm{T}}\) for given \(f_{1},\ldots,f_{k}\in\mathcal{F}_{\mathcal{Z}}\). #### 2.1.3 Joint Functions Given alphabets \(\mathcal{X},\mathcal{Y}\) and a metric distribution \(R_{X,Y}\in\operatorname{relint}(\mathcal{P}^{\mathcal{X}\times\mathcal{Y}})\), \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) is composed of all joint functions of \(x\) and \(y\). 
In particular, for given \(f\in\mathcal{F}_{\mathcal{X}}\), \(g\in\mathcal{F}_{\mathcal{Y}}\), we use \(f\otimes g\) to denote their product \(((x,y)\mapsto f(x)\cdot g(y))\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\), and refer to such functions as _product functions_. For each product function \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\), we can represent \(\gamma\) as \[\gamma=\sigma\cdot(f\otimes g), \tag{2}\] where \(\sigma=\|\gamma\|\geq 0\), and \(f\in\mathcal{F}_{\mathcal{X}},g\in\mathcal{F}_{\mathcal{Y}}\) satisfy \(\|f\|=\|g\|=1\). We refer to (2) as the **standard form** of product functions. In addition, for given \(f=(f_{1},\ldots,f_{k})^{\mathrm{T}}\in\mathcal{F}_{\mathcal{X}}^{k}\) and \(g=(g_{1},\ldots,g_{k})^{\mathrm{T}}\in\mathcal{F}_{\mathcal{Y}}^{k}\), we denote \(f\otimes g\triangleq\sum_{i=1}^{k}f_{i}\otimes g_{i}\). For two subspaces \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\), respectively, we denote the tensor product of \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) as \(\mathcal{G}_{\mathcal{X}}\otimes\mathcal{G}_{\mathcal{Y}}\triangleq\operatorname {span}\{f\otimes g\colon f\in\mathcal{G}_{\mathcal{X}},g\in\mathcal{G}_{ \mathcal{Y}}\}\). Note that by extending each \(f=(x\mapsto f(x))\in\mathcal{F}_{\mathcal{X}}\) to \(((x,y)\mapsto f(x))\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\), \(\mathcal{F}_{\mathcal{X}}\) becomes a subspace of \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\), with the metric distribution being the marginal distribution \(R_{X}\) of \(R_{X,Y}\). We then denote the orthogonal complement of \(\mathcal{F}_{\mathcal{X}}\) in \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) as \[\mathcal{F}_{\mathcal{Y}\|\mathcal{X}}\triangleq\mathcal{F}_{\mathcal{X}\times \mathcal{Y}}\boxplus\mathcal{F}_{\mathcal{X}}. \tag{3}\] Specifically, we use \(\mathcal{F}_{\mathcal{X}|\varnothing}\triangleq\mathcal{F}_{\mathcal{X}}\boxplus \mathcal{F}_{\mathcal{X}}\) to represent the collection of zero-mean functions on \(\mathcal{X}\). We establish a correspondence between the distribution space \(\mathcal{P}^{\mathcal{Z}}\) and the feature space \(\mathcal{F}_{\mathcal{Z}}\) by the density ratio function. **Definition 2**: _Given a metric distribution \(R\in\operatorname{relint}(\mathcal{P}^{\mathcal{Z}})\), for each \(P\in\mathcal{P}^{\mathcal{Z}}\), we define the (centered) density ratio function \(\tilde{\ell}_{P;R}\in\mathcal{F}_{\mathcal{Z}}\) as_ \[\tilde{\ell}_{P;R}(z)\triangleq\frac{P(z)-R(z)}{R(z)},\quad\text{for all }z\in \mathcal{Z}.\] It is easy to verify that \(\tilde{\ell}_{P;R}\) has mean zero, i.e., \(\tilde{\ell}_{P;R}\in\mathcal{F}_{\mathcal{Z}|\varnothing}\). We will simply refer to \(\tilde{\ell}_{P;R}\) as the density ratio or likelihood ratio and use \(\tilde{\ell}_{P}\) to denote \(\tilde{\ell}_{P;R}\) when there is no ambiguity about the metric distribution \(R\). In particular, given random variables \(X\) and \(Y\) with the joint distribution \(P_{X,Y}\), we denote the density ratio \(\hat{\ell}_{P_{X,Y};P_{X}P_{Y}}\) by \(\mathfrak{i}_{X;Y}\), i.e., \[\mathfrak{i}_{X;Y}(x,y)=\frac{P_{X,Y}(x,y)-P_{X}(x)P_{Y}(y)}{P_{X}(x)P_{Y}(y)}, \quad\text{for all }x\in\mathcal{X},y\in\mathcal{Y}. \tag{4}\] We refer to \(\mathfrak{i}_{X;Y}\) as the **canonical dependence kernel** (CDK) function, which connects the \((X;Y)\) dependnece with \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\). 
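For finite alphabets, (4) can be evaluated directly from a joint pmf; the sketch below (ours, assuming strictly positive marginals and using illustrative names) returns \(\mathfrak{i}_{X;Y}\) as an \(|\mathcal{X}|\times|\mathcal{Y}|\) array.

```python
import numpy as np

def cdk_matrix(P_xy):
    """Canonical dependence kernel (4) as a matrix, for a joint pmf P_xy
    with strictly positive marginals."""
    P_x = P_xy.sum(axis=1, keepdims=True)    # marginal P_X
    P_y = P_xy.sum(axis=0, keepdims=True)    # marginal P_Y
    return (P_xy - P_x * P_y) / (P_x * P_y)

P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(cdk_matrix(P))    # [[ 0.6 -0.6] [-0.6  0.6]]
```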
With the feature geometry, we can associate geometric operations with corresponding operations on features, which we summarize as follows. **Property 1**: _Consider the feature geometry on \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) defined by the metric distribution \(R_{X,Y}=P_{X}P_{Y}\). Then, we have \(\left\langle f_{1}\otimes g_{1},f_{2}\otimes g_{2}\right\rangle=\mathbb{E}_{P _{X}}\left[f_{1}(X)f_{2}(X)\right]\cdot\mathbb{E}_{P_{Y}}\left[g_{1}(Y)g_{2}(Y )\right]\) for given \(f_{1},f_{2}\in\mathcal{F}_{\mathcal{X}},g_{1},g_{2}\in\mathcal{F}_{\mathcal{ Y}}\). In addition, For any \(k\geq 1\) and \(f\in\mathcal{F}_{\mathcal{X}}^{k},g\in\mathcal{F}_{\mathcal{Y}}^{k}\), we have_ \[\Pi\left(f;\mathcal{F}_{\varnothing}\right)=\mathbb{E}_{P_{X}} \left[f(X)\right],\quad\Pi\left(g;\mathcal{F}_{\varnothing}\right)=\mathbb{E}_ {P_{Y}}\left[g(Y)\right], \tag{5}\] \[\left\langle\mathfrak{i}_{X;Y},f\otimes g\right\rangle=\mathbb{E} _{P_{X,Y}}\left[f^{\mathrm{T}}(X)g(Y)\right]-\left(\mathbb{E}_{P_{X}}\left[f( X)\right]\right)^{\mathrm{T}}\mathbb{E}_{P_{Y}}\left[g(Y)\right],\] (6) \[\left\|f\otimes g\right\|^{2}=\mathrm{tr}(\Lambda_{f}\cdot\Lambda _{g}), \tag{7}\] _where \(\Lambda_{f}=\mathbb{E}_{P_{X}}\left[f(X)f^{\mathrm{T}}(X)\right],\Lambda_{g}= \mathbb{E}_{P_{Y}}\left[g(Y)g^{\mathrm{T}}(Y)\right]\)._ #### 2.1.4 Feature Geometry on Data Samples In practice, the variables of interest typically have unknown and complicated probability distributions, with only data samples available for learning. We can similarly define the feature geometry on data samples by exploiting the corresponding empirical distributions. To begin, given a dataset consisting of \(n\) samples of \(Z\), denoted as \(\{z_{i}\}_{i=1}^{n}\), we denote the corresponding empirical distribution \(\hat{P}_{Z}\in\mathcal{P}^{\mathbb{Z}}\) as \[\hat{P}_{Z}(z)\triangleq\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{\{z_{i}=z\}}, \quad\text{for all }z\in\mathcal{Z}, \tag{8}\] where \(\mathbb{1}_{\{\cdot\}}\) denotes the indicator function. Then, for any function \(f\) of \(Z\), we have \(\mathbb{E}_{\hat{P}_{Z}}\left[f(Z)\right]=\sum_{z\in\mathbb{Z}}\hat{P}_{Z}(z) \cdot f(z)=\frac{1}{n}\sum_{i=1}^{n}f(z_{i})\), which is the empirical average of \(f\) over the dataset. Specifically, given \(n\) sample pairs \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\) of \((X,Y)\), the corresponding joint empirical distribution \(\hat{P}_{X,Y}\in\mathcal{P}^{\mathcal{X}\times\mathcal{Y}}\) is given by \[\hat{P}_{X,Y}(x,y)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{\{x_{i}=x,y_{i}=y\}}, \quad\text{for all }x\in\mathcal{X},y\in\mathcal{Y}. \tag{9}\] It is easily verified that the marginal distributions of \(\hat{P}_{X,Y}\) are the empirical distributions \(\hat{P}_{X}\) of \(\{x_{i}\}_{i=1}^{n}\) and \(\hat{P}_{Y}\) of \(\{y_{i}\}_{i=1}^{n}\). Therefore, we can express the CDK function associated with the empirical distribution \(P_{X,Y}\) as [cf. (4)] \[\hat{\mathfrak{i}}_{X;Y}(x,y)=\frac{\hat{P}_{X,Y}(x,y)-\hat{P}_{X}(x)\hat{P}_{Y }(y)}{\hat{P}_{X}(x)\hat{P}_{Y}(y)},\quad\text{for all }x\in\mathcal{X},y\in\mathcal{Y}. \tag{10}\] As a result, given the dataset \(\mathcal{D}\), we can define the associated feature geometry on \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) by using the metric distribution \(R_{X,Y}=\hat{P}_{X}\hat{P}_{Y}\). 
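For finite alphabets, these quantities are directly computable from data. The following minimal sketch (our own illustration in Python/NumPy; function and variable names are hypothetical, and every symbol is assumed to appear at least once in the sample so that the empirical marginals have positive masses) builds the empirical distributions (8)–(9) and the empirical CDK function (10) as a matrix.

```python
import numpy as np

def empirical_cdk(x, y, card_x, card_y):
    """Empirical CDK matrix with entries i_hat(x, y), following (8)-(10).

    x, y: integer-encoded paired samples of length n, with values in
    {0, ..., card_x - 1} and {0, ..., card_y - 1}, respectively.
    """
    n = len(x)
    p_xy = np.zeros((card_x, card_y))
    np.add.at(p_xy, (x, y), 1.0 / n)         # joint empirical distribution, cf. (9)
    p_x = p_xy.sum(axis=1, keepdims=True)    # empirical marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)    # empirical marginal of Y
    r_xy = p_x * p_y                         # metric distribution R = P̂_X P̂_Y
    return (p_xy - r_xy) / r_xy              # empirical CDK, cf. (10)
```

With this matrix in hand, the geometric quantities in Property 1 reduce to weighted sums over the observed alphabet.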
From Property 1, we can evaluate the induced geometric quantities over data samples in \(\mathcal{D}\), by replacing the distributions by the corresponding empirical distributions, and applying the empirical CDK function \(\hat{\mathfrak{i}}_{X;Y}\) in (6). Such characteristic allows us to discuss theoretical properties and algorithmic implementations in a unified notation. In our later developments, we will state both theoretical results and algorithms designs using the same notation of distribution, say, \(P_{X,Y}\). This \(P_{X,Y}\) corresponds to the underlying distribution in theoretical analyses, and represents the corresponding empirical distribution in algorithm implementations. **Remark 3**: _Note that for finite \(\mathcal{Z}\), \(\mathcal{F}_{\mathcal{Z}}\) is a space with dimension \(|\mathcal{Z}|\). Therefore, we can choose a basis of \(\mathcal{F}_{\mathcal{Z}}\) and represent each feature as corresponding coefficient vectors in Euclidean space \(\mathbb{R}^{|\mathcal{Z}|}\). Similarly, we can represent operators on functional spaces as matrices. Such conventions have been introduced and adopted in previous works, e.g., (Huang et al., 2019; Xu et al., 2022), which we summarize in Appendix Appendix A for completeness and comparisons._ ### Modal Decomposition We then investigate the relation between joint functions in \(\mathcal{F}_{\mathcal{X}\times\mathfrak{Y}}\) and the features in \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathfrak{Y}}\). For convenience, we will assume all metric distributions used in the section take the product form, i.e., \(R_{X,Y}=R_{X}R_{Y}\). #### 2.2.1 Modal Decomposition We define the operator \(\zeta\) on \(\mathcal{F}_{\mathcal{X}\times\mathfrak{Y}}\) as the optimal rank-1 approximation, i.e., \[\zeta(\gamma)\triangleq\operatorname*{arg\,min}_{\begin{subarray}{c}\gamma^{ \prime}:\,\gamma^{\prime}=f\otimes g\\ f\in\mathcal{F}_{\mathcal{X}},g\in\mathcal{F}_{\mathcal{Y}}\end{subarray}}\| \gamma-\gamma^{\prime}\|,\quad\text{for all }\gamma\in\mathcal{F}_{\mathcal{X}\times \mathfrak{Y}}. \tag{11}\] In addition, for all \(k\geq 1\), we define the operator \(\zeta_{k}\) as \(\zeta_{1}\triangleq\zeta\), and \(\zeta_{k}(\gamma)\triangleq\zeta\left(\gamma-\sum_{i=1}^{k-1}\zeta_{i}( \gamma)\right)\), which we refer to as the \(k\)-th mode of \(\gamma\). Then, we use \(\zeta_{\leq k}(\gamma)\triangleq\sum_{i=1}^{k}\zeta_{i}(\gamma)\) and \(r_{k}(\gamma)\triangleq\gamma-\zeta_{\leq k}(\gamma)\) to denote the superposition of the top \(k\) modes and the corresponding remainder, respectively. **Remark 4**: _If the minimization problem (11) has multiple solutions, the corresponding \(\zeta(\gamma)\) (and \(\zeta_{k}(\gamma)\)) might not be unique. In this case, \(\zeta_{1}(\gamma),\ldots\zeta_{k}(\gamma)\) represent one of such solutions obtained from the sequential rank-1 approximations._ For each \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathfrak{Y}}\), we define the rank of \(\gamma\) as \(\operatorname{rank}(\gamma)\triangleq\inf\{k\geq 0\colon\|r_{k}(\gamma)\|=0\}\). Let \(K\triangleq\operatorname{rank}(\gamma)\), and suppose \(\zeta_{i}(\gamma)=\sigma_{i}(f_{i}^{*}\otimes g_{i}^{*})\) is the standard form of \(\zeta_{i}(\gamma)\) for \(i\in[K]\). Then, \[\gamma(x,y)=\sum_{i=1}^{K}\sigma_{i}\cdot f_{i}^{*}(x)\cdot g_{i}^{*}(y),\quad \text{for all }x\in\mathcal{X},\,y\in\mathcal{Y}, \tag{12}\] where \(\|f_{i}^{*}\|=\|g_{i}^{*}\|=1\) and \(\sigma_{i}=\|\,\zeta_{i}(\gamma)\|\). 
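For finite alphabets, the decomposition (12) can also be computed numerically: under the product metric distribution \(R_{X,Y}=R_{X}R_{Y}\) assumed above, the map \(\gamma\mapsto\big[\sqrt{R_{X}(x)}\,\gamma(x,y)\,\sqrt{R_{Y}(y)}\big]_{x,y}\) is an isometry into the space of matrices with the Frobenius norm, so the modes \(\zeta_{i}(\gamma)\) correspond to singular triplets of this weighted matrix. The sketch below is our own illustration of this correspondence, with hypothetical variable names.

```python
import numpy as np

def modal_decomposition(gamma, r_x, r_y):
    """Modal decomposition (12) of gamma under the metric distribution R_X R_Y.

    gamma: |X| x |Y| array of values gamma(x, y); r_x, r_y: metric marginals.
    Returns (sigma, f_star, g_star) such that
    gamma(x, y) = sum_i sigma[i] * f_star[x, i] * g_star[y, i].
    """
    m = np.sqrt(r_x)[:, None] * gamma * np.sqrt(r_y)[None, :]
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    f_star = u / np.sqrt(r_x)[:, None]      # unit norm: E_{R_X}[f_i^2] = 1
    g_star = vt.T / np.sqrt(r_y)[:, None]   # unit norm: E_{R_Y}[g_i^2] = 1
    return s, f_star, g_star
```

In particular, applying this to the empirical CDK matrix from the previous sketch, with `r_x`, `r_y` set to the empirical marginals, recovers the strengths \(\sigma_{i}\) and mode functions \(f_{i}^{*},g_{i}^{*}\) of the empirical dependence.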
We refer to (12) as the modal decomposition of \(\gamma\), which is a special case of Schmidt decomposition (Schmidt, 1907; Ekert and Knight, 1995), or singular value decomposition (SVD) in functional space. We list several useful characterizations as follows.

**Fact 3**: _Let \(K\triangleq\mathrm{rank}(\gamma)\), then \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{K}>0\). In addition, for all \(i,j=1,\ldots,K\), we have\({}^{2}\) \(\langle f_{i}^{*},f_{j}^{*}\rangle=\langle g_{i}^{*},g_{j}^{*}\rangle=\delta_{ij}\), and_
\[\langle\zeta_{i}(\gamma),\zeta_{j}(\gamma)\rangle=0,\qquad\text{if }i<j,\]
\[\langle\zeta_{i}(\gamma),r_{j}(\gamma)\rangle=0,\qquad\text{if }i\leq j.\]

Footnote 2: We adopt the Kronecker delta notation
\[\delta_{ij}=\begin{cases}0&\text{if }i\neq j,\\ 1&\text{if }i=j.\end{cases}\]

**Fact 4**: _For all \(i\in[K]\), we have \((f_{i}^{*},g_{i}^{*})=\operatorname*{arg\,max}_{f_{i},g_{i}}\left\langle\gamma,f_{i}\otimes g_{i}\right\rangle\), where the maximization is taken over all \(f_{i}\in\mathcal{F}_{\mathcal{X}}\) and \(g_{i}\in\mathcal{F}_{\mathcal{Y}}\) with \(\|f_{i}\|=\|g_{i}\|=1\) and \(\langle f_{i},f_{j}^{*}\rangle=\langle g_{i},g_{j}^{*}\rangle=0\) for \(j\in[i-1]\)._

**Fact 5** (Eckart-Young-Mirsky theorem, Eckart and Young 1936): _For all \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) and \(k\geq 1\), we have_
\[\zeta_{\leq k}(\gamma)=\operatorname*{arg\,min}_{\gamma^{\prime}:\ \mathrm{rank}(\gamma^{\prime})\leq k}\|\gamma-\gamma^{\prime}\|=\operatorname*{arg\,min}_{\begin{subarray}{c}\gamma^{\prime}:\ \gamma^{\prime}=f\otimes g,\\ f\in\mathcal{F}_{\mathcal{X}}^{k},\ g\in\mathcal{F}_{\mathcal{Y}}^{k}\end{subarray}}\|\gamma-\gamma^{\prime}\|.\]

Therefore, we refer to \(\zeta_{\leq k}(\gamma)\) as the rank-\(k\) approximation of \(\gamma\), and the remainder \(r_{k}(\gamma)\) represents the approximation error.

#### 2.2.2 Constrained Modal Decomposition

We can extend the modal decomposition by restricting the rank-\(1\) function to some subspaces. Given a subspace \(\mathcal{G}_{\mathcal{X}}\) of \(\mathcal{F}_{\mathcal{X}}\) and a subspace \(\mathcal{G}_{\mathcal{Y}}\) of \(\mathcal{F}_{\mathcal{Y}}\), we define
\[\zeta(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\triangleq\operatorname*{arg\,min}_{\begin{subarray}{c}\gamma^{\prime}:\ \gamma^{\prime}=f\otimes g\\ f\in\mathcal{G}_{\mathcal{X}},g\in\mathcal{G}_{\mathcal{Y}}\end{subarray}}\|\gamma-\gamma^{\prime}\|, \tag{13}\]
\[\zeta_{k}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\triangleq\zeta\left(\gamma-\sum_{i=1}^{k-1}\zeta_{i}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\,\middle|\,\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}}\right),\quad\text{for all }k\geq 1, \tag{14}\]
where \(\zeta_{1}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\triangleq\zeta(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\). Similarly, we denote \(\zeta_{\leq k}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\triangleq\sum_{i=1}^{k}\zeta_{i}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\). Then, all the properties of modal decomposition can be readily extended to the constrained case. In particular, we can extend Fact 5 as follows. A proof is provided in Appendix C.1.
**Proposition 5**: _Suppose \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) are subspace of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathcal{Y}}\), respectively. Then, for all \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) and \(k\geq 1\), we have \(\zeta_{k}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})=\zeta_{k}( \Pi\left(\gamma;\mathcal{G}_{\mathcal{X}}\otimes\mathcal{G}_{\mathcal{Y}} \right))\), and_ \[\zeta_{\leq k}(\gamma|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})=\zeta_ {\leq k}(\Pi\left(\gamma;\mathcal{G}_{\mathcal{X}}\otimes\mathcal{G}_{ \mathcal{Y}}\right))=\operatorname*{arg\,min}_{\begin{subarray}{c}\gamma^{ \prime}:\ \gamma^{\prime}=f\otimes g_{i},\\ f\in\mathcal{G}_{\mathcal{X}}^{k},g\in\mathcal{G}_{\mathcal{Y}}^{k}\end{subarray}}\| \gamma-\gamma^{\prime}\|, \tag{15}\] _where we have defined \(\mathcal{G}_{\mathcal{X}}^{k}\triangleq\left(\mathcal{G}_{\mathcal{X}}\right)^{k}\) and \(\mathcal{G}_{\mathcal{Y}}^{k}\triangleq\left(\mathcal{G}_{\mathcal{Y}}\right)^{k}\)._ Therefore, we can implement projection operators in functional space, by computing the corresponding constrained low-rank approximation (or modal decomposition). ### Statistical Dependence and Induced Features Given \((X,Y)\sim P_{X,Y}\), we consider the space \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) with the metric distribution \(R_{X,Y}=P_{X}P_{Y}\). Then, we can characterize the statistical dependence between \(X\) and \(Y\) by the CDK function \(\mathrm{i}_{X;Y}\), as defined in (4). Suppose \(\mathrm{rank}(\mathrm{i}_{X;Y})=K\) and let the modal decomposition be [cf. (12)] \[\mathrm{i}_{X;Y}=\sum_{i=1}^{K}\zeta_{i}(\mathrm{i}_{X;Y})=\sum_{i=1}^{K} \sigma_{i}\cdot(f_{i}^{*}\otimes g_{i}^{*}), \tag{16}\] where for each \(i\in[K]\), \(\zeta_{i}(\mathrm{i}_{X;Y})=\sigma_{i}\cdot(f_{i}^{*}\otimes g_{i}^{*})\) is the standard form of \(i\)-th rank-one dependence mode, with strength characterized by \(\sigma_{i}=\|\,\zeta_{i}(\mathrm{i}_{X;Y})\|\). Note that since different modes are orthogonal (cf. Fact 3), we have \(\left\|\mathrm{i}_{X;Y}\right\|^{2}=\sum_{i=1}^{K}\sigma_{i}^{2}\). From \(\sigma_{1}\geq\cdots\geq\sigma_{K}\), these modes are ordered by their contributions to the joint dependence. In particular, the features \(f_{i}^{*}\)'s, \(g_{i}^{*}\)'s are the maximally correlated features in \(\mathcal{F}_{\mathcal{X}}\), \(\mathcal{F}_{\mathcal{Y}}\), known as Hirschfeld-Gebelein-Renyi (HGR) maximal correlation functions (Hirschfeld, 1935; Gebelein, 1941; Renyi, 1959). To see this, let us denote the covariance for given \(f\in\mathcal{F}_{\mathcal{X}},g\in\mathcal{F}_{\mathcal{Y}}\) as \[\mathrm{cov}(f,g)\triangleq\mathbb{E}_{P_{X,Y}}\left[f(X)g(Y)\right]-\mathbb{ E}_{P_{X}P_{Y}}\left[f(X)g(Y)\right]. \tag{17}\] From Fact 4 and the fact \(\mathrm{cov}(f,g)=\langle\mathrm{i}_{X;Y},f\otimes g\rangle\), we obtain the following corollary. **Corollary 6** (HGR Maximal Correlation Functions): _For each \(i=1,\ldots,K\), we have \(\sigma_{i}=\mathrm{cov}(f_{i}^{*},g_{i}^{*})=\mathbb{E}_{P_{X,Y}}\left[f_{i}^ {*}(X)g_{i}^{*}(Y)\right]\) and \((f_{i}^{*},g_{i}^{*})=\underset{f_{i},g_{i}}{\arg\max}\ \mathrm{cov}(f_{i},g_{i})\), where the maximization is taken over all \(f_{i}\in\mathcal{F}_{\mathcal{X}}\) and \(g_{i}\in\mathcal{F}_{\mathcal{Y}}\) with \(\|f_{i}\|=\|g_{i}\|=1\) and \(\langle f_{i},f_{j}^{*}\rangle=\langle g_{i},g_{j}^{*}\rangle=0\) for all \(j\in[i-1]\)._ We can also consider the constrained modal decomposition of \(\mathrm{i}_{X;Y}\). 
Specifically, given subspaces \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathcal{Y}}\), respectively, let us define \[\zeta_{i}(\mathrm{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}} )=\hat{\sigma}_{i}\cdot(\hat{f}_{i}^{*}\otimes\hat{g}_{i}^{*}),\quad i\geq 1, \tag{18}\] Then, we can interpret \(\hat{\sigma}_{i},\hat{f}_{i}^{*},\hat{g}_{i}^{*}\) as the solution to a constrained maximal correlation problem, formalized as the following extension of Corollary 6. A proof is provided in Appendix C.2. **Proposition 7**: _Given subspaces \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathcal{Y}}\), respectively, we have \(\hat{\sigma}_{i}=\mathrm{cov}(\hat{f}_{i}^{*},\hat{g}_{i}^{*})=\mathbb{E}_{P_{ X,Y}}\big{[}\hat{f}_{i}^{*}(X)\hat{g}_{i}^{*}(Y)\big{]}\), \((\hat{f}_{i}^{*},\hat{g}_{i}^{*})=\underset{f_{i},g_{i}}{\arg\max}\ \mathrm{cov}(f_{i},g_{i})\), where \(\mathrm{cov}\) denotes the covariance [cf. (17)], and where the maximization is taken over all \(f_{i}\in\mathcal{G}_{\mathcal{X}}\) and \(g_{i}\in\mathcal{G}_{\mathcal{Y}}\) that satisfy_ \[\|f_{i}\|=\|g_{i}\|=1\quad\text{and}\quad\langle f_{i},\hat{f}_{j}^{*}\rangle= \langle g_{i},\hat{g}_{j}^{*}\rangle=0\text{ for }j\in[i-1]. \tag{19}\] In particular, we can interpret CCA (Canonical Correlation Analysis) as the modal decomposition constrained to linear functions. **Example 1** (Canonical Correlation Analysis): _Suppose \(\mathcal{X}\) and \(\mathcal{Y}\) are vector spaces, and \(\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}}\) are the space of all linear functions defined on \(\mathcal{X}\), \(\mathcal{Y}\), respectively. Then, Proposition 7 gives solutions to CCA (Canonical Correlation Analysis) (Hotelling, 1936), where \(\hat{\sigma}_{i}\)'s are canonical correlations._ Weak Dependence and Local AnalysisIn the particular case where the statistical dependence between \(X\) and \(Y\) is weak, we can establish further connections between feature geometry and conventional information measures. Such analyses have been extensively studied in Huang et al. (2019), referred to as local analysis, formalized as follows. **Definition 8** (\(\epsilon\)-Dependence): _Given \((X,Y)\sim P_{X,Y}\), \(X\) and \(Y\) are \(\epsilon\)-dependent if \(\left\|\mathrm{i}_{X;Y}\right\|=O(\epsilon)\)._ Then, we can relate the length in feature geometry to the mutual information. **Lemma 9** (Huang et al. 2019, Lemma 16): _If \(X\) and \(Y\) are \(\epsilon\)-dependent, then we have \(I(X;Y)=\frac{1}{2}\cdot\left\|\mathrm{i}_{X;Y}\right\|^{2}+o(\epsilon^{2})\)._ Therefore, from (16) we can obtain a decomposition of mutual information: \(I(X;Y)=\frac{1}{2}\cdot\left\|\mathrm{i}_{X;Y}\right\|^{2}+o(\epsilon^{2})= \frac{1}{2}\sum_{i=1}^{K}\sigma_{i}^{2}+o(\epsilon^{2})\). ## 3 Dependence Approximation and Feature Learning In this section, we demonstrate the learning system design in a bivariate learning setting. In particular, we consider optimal feature representations of the statistical dependence, and present learning such features from data and assembling them to build inference models. To begin, let \(X\) and \(Y\) denote the random variables of interest, with the distribution \(P_{X,Y}\). We then characterize the statistical dependence between \(X\) and \(Y\) as the CDK function [cf. (4)] \(\mathrm{i}_{X;Y}\in\mathcal{T}_{\mathcal{X}\times\mathcal{Y}}\). 
In our development, we consider the feature geometry on \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) with respect to the metric distribution \(R_{X,Y}=P_{X}P_{Y}\). We also assume \(\mathrm{i}_{X;Y}\) has the modal decomposition (16). ### Low Rank Approximation of Statistical Dependence In learning applications, the joint distribution \(P_{X,Y}\) is typically unknown with enormous complexity, making direct computation or estimation of \(\mathrm{i}_{X;Y}\) infeasible. We now introduce learning \(\mathrm{i}_{X;Y}\) from data samples, by considering its rank-\(k\) approximation \(\zeta_{\leq k}(\mathrm{i}_{X;Y})\) for given \(k\geq 1\). Specifically, we approximate \(\mathrm{i}_{X;Y}\) by the rank-\(k\) joint function \(f\otimes g=\sum_{i=1}^{k}f_{i}\otimes g_{i}\), where \(f\in\mathcal{F}_{\mathcal{X}}^{k}\) and \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\) are \(k\)-dimensional features. By this formulation, we convert the computation of \(\zeta_{\leq k}(\mathrm{i}_{X;Y})\) to an optimization problem, where the objective is the approximation error \(\left\|\mathrm{i}_{X:Y}-f\otimes g\right\|\), and where the optimization variables are \(k\)-dimensional features \(f\) and \(g\). However, we cannot directly compute the error \(\left\|\mathrm{i}_{X;Y}-f\otimes g\right\|\) for given \(f\) and \(g\), due to the unknown \(\mathrm{i}_{X;Y}\). To address this issue, we introduce the H-score, proposed in (Xu and Huang, 2020; Xu et al., 2022). **Definition 10**: _Given \(k\geq 1\) and \(f\in\mathcal{F}_{\mathcal{X}}^{k}\), \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\), the H-score \(\mathscr{H}(f,g)\) is defined as_ \[\mathscr{H}(f,g) \triangleq\frac{1}{2}\left(\left\|\mathrm{i}_{X;Y}\right\|^{2}- \left\|\mathrm{i}_{X;Y}-f\otimes g\right\|^{2}\right) \tag{20}\] \[=\mathbb{E}\left[f^{\mathrm{T}}(X)g(Y)\right]-\left(\mathbb{E} \left[f(X)\right]\right)^{\mathrm{T}}\mathbb{E}\left[g(Y)\right]-\frac{1}{2} \cdot\mathrm{tr}\left(\Lambda_{f}\Lambda_{g}\right), \tag{21}\] _where \(\Lambda_{f}=\mathbb{E}\left[f(X)f^{\mathrm{T}}(X)\right]\), \(\Lambda_{g}=\mathbb{E}\left[g(Y)g^{\mathrm{T}}(Y)\right]\)._ The H-score measures the goodness of the approximation, with a larger H-score value indicating a smaller approximation error. In particular, for \(k\)-dimensional feature inputs, the maximum value of H-score gives the total energy of top-\(k\) dependence modes, achieved by the optimal rank-\(k\) approximation. Formally, we have the following property from Fact 5. **Property 2**: _Given \(k\geq 1\), let \(\sigma_{i}=\|\zeta_{i}(\mathrm{i}_{X;Y})\|\) for \(i\in[k]\). Then, for all \(f\in\mathcal{F}_{\mathfrak{X}}^{k}\) and \(g\in\mathcal{F}_{\mathfrak{Y}}^{k}\),_ \[\mathscr{H}(f,g)\leq\frac{1}{2}\|\zeta_{\leq k}(\mathrm{i}_{X;Y})\|^{2}=\frac{ 1}{2}\sum_{i=1}^{k}\sigma_{i}^{2}, \tag{22}\] _where the inequality holds with equality if and only if \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y})\)._ In practice, for given features \(f\) and \(g\), we can efficiently compute the H-score \(\mathscr{H}(f,g)\) from data samples, by evaluating corresponding empirical averages in (21). Since \(\mathscr{H}(f,g)\) is differentiable with respect to \(f\) and \(g\), we can use it as the training objective for learning the low-rank approximation of \(\mathrm{i}_{X;Y}\), where we use neural networks to parameterize \(f\) and \(g\) and optimize their parameters by gradient descent. Suppose the networks have sufficient expressive power, then the optimal solution gives the desired low-rank approximation \(\zeta_{\leq k}(\mathrm{i}_{X;Y})\). 
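In practice, the expression (21) translates directly into a training loss. The sketch below is our own PyTorch illustration (the feature networks and variable names are assumptions, not part of the original text): it estimates the H-score from a mini-batch of paired features, and can be maximized by gradient ascent, e.g., by minimizing its negative.

```python
import torch

def h_score(f, g):
    """H-score (21) estimated from a mini-batch of paired features.

    f: (n, k) tensor with rows f(x_i); g: (n, k) tensor with rows g(y_i),
    where (x_i, y_i) are paired samples drawn from P_{X,Y}.
    """
    n = f.shape[0]
    corr = (f * g).sum(dim=1).mean() - f.mean(dim=0) @ g.mean(dim=0)  # E[f^T g] - E[f]^T E[g]
    lam_f = f.T @ f / n                                               # Λ_f = E[f f^T]
    lam_g = g.T @ g / n                                               # Λ_g = E[g g^T]
    return corr - 0.5 * torch.trace(lam_f @ lam_g)

# a typical training step (f_net, g_net are the two feature networks):
# loss = -h_score(f_net(x_batch), g_net(y_batch)); loss.backward(); optimizer.step()
```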
It is worth noting that in this particular bivariate setting, the roles of \(X\) and \(Y\) (and the learned features \(f\) and \(g\)) are symmetric. Moreover, we design the learned features to directly approximate the statistical dependence between \(X\) and \(Y\), instead of solving a specific inference task, e.g., predicting \(Y\) based on \(X\), or vice versa. Nevertheless, we can readily solve these inference tasks by simply assembling the learned features, as we will demonstrate next. ### Feature Assembling and Inference Models We then discuss the assembling of the features to obtain different inference models. Suppose we obtain \(f\in\mathcal{F}_{\mathfrak{X}}^{k},g\in\mathcal{F}_{\mathfrak{Y}}^{k}\) from maximizing the H-score \(\mathscr{H}(f,g)\). We first consider the case where \(k\geq\mathrm{rank}(\mathrm{i}_{X;Y})\) and we have learned \(f\otimes g=\mathrm{i}_{X;Y}\) (cf. Property 2). Then, we have the following proposition. A proof is provided in Appendix C.3. **Proposition 11**: _Suppose \(f\otimes g=\mathrm{i}_{X;Y}\). Then, we have \(\|\mathrm{i}_{X;Y}\|^{2}=\mathrm{tr}(\Lambda_{f}\cdot\Lambda_{g})\) and_ \[P_{Y|X}(y|x)=P_{Y}(y)\left(1+f^{\mathrm{T}}(x)g(y)\right). \tag{23}\] _In addition, for any \(d\)-dimensional function \(\psi\in\mathcal{F}_{\mathfrak{Y}}^{d}\), we have_ \[\mathbb{E}\left[\psi(Y)|X=x\right]=\mathbb{E}\left[\psi(Y)\right]+\Lambda_{ \psi,g}\cdot f(x). \tag{24}\] Therefore, we can compute the strength of \((X;Y)\) dependence, i.e., \(\|\mathrm{i}_{X;Y}\|\) from the features \(f\) and \(g\). In addition, the posterior distribution (23) and conditional expectation (24) are useful for supervised learning tasks. Specifically, we consider the case where \(X\) and \(Y\) are the input variable and target variable, respectively. Then, \(Y\) represents the categorical label in classification tasks or the target to estimate in regression tasks. In classification tasks, we can compute the posterior distribution \(P_{Y|X}\) of the label \(Y\) from (23). The corresponding corresponding MAP (maximum a posteriori) estimation is \[\hat{y}_{\mathrm{MAP}}(x)=\operatorname*{arg\,max}_{y\in\mathfrak{Y}}P_{Y|X}(y |x)=\operatorname*{arg\,max}_{y\in\mathfrak{Y}}P_{Y}(y)\left(1+f^{\mathrm{T}}( x)g(y)\right), \tag{25}\] where \(P_{Y}\) can be obtained from training set. This prediction is also referred to as the maximal correlation regression (MCR) (Xu and Huang, 2020). If the target variable \(Y\) is continuous, it is often of interest to estimate \(Y\), or more generally, some function \(\psi\) of \(Y\). Then, the MMSE (minimum mean square error) estimation of \(\psi(Y)\) based on \(X=x\) is the conditional expectation \(\mathbb{E}\left[\psi(Y)|X=x\right]\). From (24), we can efficiently compute the conditional expectation, where \(\mathbb{E}\left[\psi(Y)\right]\) and \(\Lambda_{\psi,g}=\mathbb{E}\left[\psi(Y)g^{\mathrm{T}}(Y)\right]\) can be evaluated from the training dataset by taking the corresponding empirical averages. Therefore, we obtain the model for estimating \(\psi(Y)\) for any given \(\psi\), by simply assembling the learned features without retraining. In practice, it can happen that feature dimension \(k<\text{rank}(\text{i}_{X;Y})\), due to a potentially large \(\text{rank}(\text{i}_{X;Y})\). In such case, the best approximation of \(\text{i}_{X;Y}\) would be the rank-\(k\) approximation \(\zeta_{\leq k}(\text{i}_{X;Y})\), and we can establish a similar result as follows. A proof is provided in Appendix C.4. 
**Proposition 12**: _Suppose \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y})\) for \(k\geq 1\). Then, for all \(d\)-dimensional functions \(\psi\in\mathrm{span}^{d}\{g_{0}^{*},g_{1}^{*},\ldots,g_{k}^{*}\}\), we have \(\mathbb{E}\left[\psi(Y)|X=x\right]=\mathbb{E}\left[\psi(Y)\right]+\Lambda_{\psi,g}\cdot f(x)\), where we have defined the constant function \(g_{0}^{*}(y)\equiv 1\), and where for each \(i\in[k]\), \(g_{i}^{*}\) is obtained from the standard form of \(\zeta_{i}(\mathrm{i}_{X;Y})\): \(\sigma_{i}(f_{i}^{*}\otimes g_{i}^{*})=\zeta_{i}(\mathrm{i}_{X;Y})\)._

### Constrained Dependence Approximation

We can readily extend the above analysis to the constrained low-rank approximation problem. Specifically, we consider the constrained rank-\(k\) approximation [cf. (15)] \(\zeta_{\leq k}(\mathrm{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\) for \(k\geq 1\), where \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) are subspaces of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathcal{Y}}\), respectively. Analogous to Property 2, when we restrict \(f\in\mathcal{G}_{\mathcal{X}}^{k}\) and \(g\in\mathcal{G}_{\mathcal{Y}}^{k}\), the H-score \(\mathscr{H}(f,g)\) is maximized if and only if \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}})\).

As an application, we can model the restricted expressive power of feature extractors as the constraints and characterize its effects. To begin, we consider the maximization of the H-score \(\mathscr{H}(f,g)\), where the features \(f\) and \(g\) are \(k\)-dimensional outputs of neural networks. In particular, we assume the last layers of the networks are linear layers, which is a common network architecture design in practice. The overall network architecture is shown in Figure 2, where we express \(f\) as the composition of a feature extractor \(\phi\in\mathcal{F}_{\mathcal{X}}^{d_{\mathcal{X}}}\) and the last linear layer with weight matrix \(W_{\mathrm{x}}\in\mathbb{R}^{k\times d_{\mathcal{X}}}\). Similarly, we represent \(g\) as the composition of \(\psi\in\mathcal{F}_{\mathcal{Y}}^{d_{\mathcal{Y}}}\) and the linear layer with weight \(W_{\mathrm{y}}\in\mathbb{R}^{k\times d_{\mathcal{Y}}}\). Suppose we have trained the weights \(W_{\mathrm{x}},W_{\mathrm{y}}\) and the parameters in \(\phi,\psi\) to maximize the H-score \(\mathscr{H}(f,g)\), and the weights \(W_{\mathrm{x}}\) and \(W_{\mathrm{y}}\) have converged to their optimal values with respect to \(\phi\) and \(\psi\). Note that for any given \(\phi,\psi\), \(f=W_{\mathrm{x}}\phi\) takes values from the set \(\{W_{\mathrm{x}}\phi\colon W_{\mathrm{x}}\in\mathbb{R}^{k\times d_{\mathcal{X}}}\}=\mathrm{span}^{k}\{\phi\}\), and, similarly, \(g=W_{\mathrm{y}}\psi\) takes values from \(\mathrm{span}^{k}\{\psi\}\). Therefore, the optimal \((f,g)\) corresponds to the solution of a constrained low-rank approximation problem, and we have \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y}|\,\mathrm{span}\{\phi\},\mathrm{span}\{\psi\})\). In addition, from Proposition 5 and the orthogonality principle, we can express the approximation error as
\[\left\|\mathrm{i}_{X;Y}-f\otimes g\right\|^{2}=\left\|\mathrm{i}_{X;Y}-\mathrm{i}^{\prime}_{X;Y}+\mathrm{i}^{\prime}_{X;Y}-\zeta_{\leq k}(\mathrm{i}^{\prime}_{X;Y})\right\|^{2}=\left\|\mathrm{i}_{X;Y}-\mathrm{i}^{\prime}_{X;Y}\right\|^{2}+\left\|r_{k}(\mathrm{i}^{\prime}_{X;Y})\right\|^{2}, \tag{26}\]
where \(\mathrm{i}^{\prime}_{X;Y}\triangleq\Pi\left(\mathrm{i}_{X;Y};\mathrm{span}\{\phi\}\otimes\mathrm{span}\{\psi\}\right)\).
Note that the overall approximation error in (26) contains two terms, where the first term characterizes the effects of insufficient expressive power of \(\phi,\psi\), and the second term characterizes the impacts of feature dimension \(k\). Figure 2: Features \(f\), \(g\) as the output of linear layers. The linear layers are represented as triangle modules, with inputs \(\phi,\psi\), and weights \(W_{\text{x}}\), \(W_{\text{y}}\), respectively. ### Relationship to Classification DNNs We conclude this section by discussing a relation between the dependence approximation framework and deep neural networks, studied in Xu et al. (2022). We consider a classification task where \(X\) and \(Y\) denote the input data and the target label to predict, respectively. Then, we can interpret the log-likelihood function of DNN as an approximation of the H-score, and thus DNN also learns strongest modes of \((X;Y)\) dependence. To begin, let \(\{(x_{i},y_{i})\}_{i=1}^{n}\) denote the training data, with empirical distribution \(P_{X,Y}\) as defined in (9). We depict the architecture of typical classification DNN in Figure 3, where we abstract all layers before classification layer as a \(k\)-dimensional feature extractor \(f\in\mathcal{F}_{\mathcal{X}}^{k}\). The feature \(f\) is then processed by a classification layer with weight matrix \(G^{\text{T}}\) and the bias vector \(\underline{b}\), and activated by the softmax function3. Without loss of generality, we assume \(\mathcal{Y}=\{1,\ldots,|\mathcal{Y}|\}\), then we can represent \(G\) and \(\underline{b}\) as Footnote 3: The softmax function is defined such that, for all \(k>1\) and each \(\underline{v}=(v_{1},\ldots,v_{k})^{\text{T}}\in\mathbb{R}^{k}\), we have \(\text{softmax}(\underline{v})\in\mathbb{R}^{k}\), with each \(i\)-th entry being \([\text{softmax}(\underline{v})]_{i}\triangleq\frac{\exp(v_{i})}{\sum_{j=1}^ {k}\exp(v_{j})}\), \(i\in[k]\). \[G\triangleq[g(1),\ldots,g(|\mathcal{Y}|)]\in\mathbb{R}^{k\times|\mathcal{Y}|}, \quad\underline{b}\triangleq[b(1),\ldots,b(|\mathcal{Y}|)]^{\text{T}}\in \mathbb{R}^{|\mathcal{Y}|}, \tag{27}\] where \(g(y)\in\mathbb{R}^{k}\) and \(b(y)\in\mathbb{R}\) denote the weight and bias associated with each class \(Y=y\), respectively. Then, the softmax output of \((G^{\text{T}}f(x)+\underline{b})\) gives a parameterized posterior \[\tilde{P}_{Y|X}^{(f,g,b)}(y|x)\triangleq\frac{\exp(f(x)\cdot g(y)+b(y))}{\sum _{y^{\prime}\in\mathcal{Y}}\exp(f(x)\cdot g(y^{\prime})+b(y^{\prime}))}. \tag{28}\] The network parameters are trained to maximize the resulting log-likelihood function4 Footnote 4: Throughout our development, all logarithms are base \(e\), i.e., natural. \[\mathcal{L}(f,g,b)\triangleq\frac{1}{n}\sum_{i=1}^{n}\log\tilde{P}_{Y|X}^{(f,g,b)}(y_{i}|x_{i})=\mathbb{E}_{(\hat{X},\hat{Y})\sim P_{X,Y}}\left[\log\tilde{ P}_{Y|X}^{(f,g,b)}(\hat{Y}|\hat{X})\right]. \tag{29}\] We further define \(\mathcal{L}(f,g)\triangleq\max_{b\in\mathcal{Y}_{\mathcal{Y}}}\mathcal{L}(f,g,b)\), by setting the bias \(b\) to its optimal value with respect to given \(f\) and \(g\). It can be verified that \(\mathcal{L}(f,g)\) depends only on the centered versions of \(f\) and \(g\), formalized as follows. A proof is provided in Appendix C.5. Figure 3: A classification DNN for predicting label \(Y\) based on the input \(X\). All layers before classification layer are represented as feature extractor \(f\). 
The weight and bias associated with each class \(Y=y\) are denoted by \(g(y)\) and \(b(y)\), respectively, which gives the weight matrix \(G^{\mathrm{T}}\) and bias vector \(\underline{b}\) with \(G=[g(1),\ldots,g(|\mathcal{Y}|)]\), \(\underline{b}=[b(1),\ldots,b(|\mathcal{Y}|)]^{\mathrm{T}}\). Then, the softmax module outputs a posterior probability, parameterized by \(f\), \(g\) and \(b\).

**Property 3**: _For all \(k\geq 1\) and \(f\in\mathcal{F}_{\mathcal{X}}^{k}\), \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\), we have \(\mathcal{L}(f,g)=\mathcal{L}(\tilde{f},\tilde{g})\), where we have defined \(\tilde{f}\in\mathcal{F}_{\mathcal{X}}^{k},\tilde{g}\in\mathcal{F}_{\mathcal{Y}}^{k}\) as \(\tilde{f}\triangleq\Pi\left(f;\mathcal{F}_{\mathcal{X}|\varnothing}\right),\tilde{g}\triangleq\Pi\left(g;\mathcal{F}_{\mathcal{Y}|\varnothing}\right)\), i.e., \(\tilde{f}(x)=f(x)-\mathbb{E}\left[f(X)\right]\), \(\tilde{g}(y)=g(y)-\mathbb{E}\left[g(Y)\right]\), for all \(x\in\mathcal{X},y\in\mathcal{Y}\)._

Therefore, it is without loss of generality to restrict our discussions to zero-mean \(f\) and \(g\). Specifically, we can verify that for the trivial choice of feature \(f=0\), the resulting likelihood function is \(\mathcal{L}(0,g)=\mathcal{L}(0,0)=-H(Y)\), achieved when the posterior distribution satisfies \(\tilde{P}_{Y|X}^{(0,g,b)}=P_{Y}\), where \(H(\cdot)\) denotes the Shannon entropy. In general, we have the following characterization of \(\mathcal{L}(f,g)\), which extends (Xu et al., 2022, Theorem 4). A proof is provided in Appendix C.6.

**Proposition 13**: _Suppose \(X\) and \(Y\) are \(\epsilon\)-dependent. For all \(k\geq 1\), and \(f\in\mathcal{F}_{\mathcal{X}|\varnothing}^{k}\), \(g\in\mathcal{F}_{\mathcal{Y}|\varnothing}^{k}\), if \(\mathcal{L}(f,g)\geq\mathcal{L}(0,0)=-H(Y)\), then we have \(\|f\otimes g\|=O(\epsilon)\), and_
\[\mathcal{L}(f,g)=\mathcal{L}(0,0)+\frac{1}{2}\cdot\left(\|\mathrm{i}_{X;Y}\|^{2}-\left\|\mathrm{i}_{X;Y}-f\otimes g\right\|^{2}\right)+o(\epsilon^{2}), \tag{30}\]
_which is maximized if and only if \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y})+o(\epsilon)\)._

From Proposition 13, the H-score \(\mathscr{H}(f,g)\) coincides with the likelihood function \(\mathcal{L}(f,g)\), up to the constant \(\mathcal{L}(0,0)\), in the local regime. For a fully expressive feature extractor \(f\) of dimension \(k\), the optimal feature \(f\) and weight matrix \(G^{\mathrm{T}}\) approximate the rank-\(k\) approximation of the \((X;Y)\) dependence. In this sense, the weight matrix \(G^{\mathrm{T}}\) in a classification DNN essentially characterizes a feature of the label \(Y\), with a role symmetric to the feature extractor \(f\). However, unlike the H-score implementation, the classification DNN is restricted to categorical \(Y\) to make the softmax function (28) computable.

## 4 Nesting Technique for Dependence Decomposition

In multivariate learning applications, it is often difficult to summarize the statistical dependence as some bivariate dependence. Instead, the statistical dependence of interest is typically only a component decomposed from the original dependence. In this section, we introduce a nesting technique, designed for operating such dependence decompositions in feature spaces. For ease of presentation, we adopt the bivariate setting introduced previously and consider the geometry on \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) with metric distribution \(P_{X}P_{Y}\). We will discuss the multivariate extensions in later sections.
### Nesting Configuration and Nested H-score

The nesting technique is a systematic approach to learn features representing projected dependence components or their modal decomposition. In particular, for a given dependence component of interest, we can construct a corresponding training objective for learning the dependence component. The resulting training objective is an aggregation of different H-scores, where the inputs to these H-scores are features forming a nested structure. We refer to such functions as the _nested H-scores_. To specify a nested H-score, we introduce its configuration, referred to as the _nesting configuration_, defined as follows.

**Definition 14**: _Given \(\mathcal{X},\mathcal{Y}\) and \(k\geq l\geq 1\), we define an \(l\)-level nesting configuration for \(k\)-dimensional features as the tuple \(\left\{(d_{1},\ldots,d_{l});\,\big{(}\mathcal{G}_{\mathcal{X}}^{(1)},\ldots,\mathcal{G}_{\mathcal{X}}^{(l)}\big{)};\,\mathcal{G}_{\mathcal{Y}}\right\}\), where_

* \((d_{1},\ldots,d_{l})\) _is a sequence with_ \(d_{i}>0\) _and_ \(\sum_{i=1}^{l}d_{i}=k\)_;_
* \(\big{(}\mathcal{G}_{\mathcal{X}}^{(1)},\ldots,\mathcal{G}_{\mathcal{X}}^{(l)}\big{)}\) _is an increasing sequence of_ \(l\) _subspaces of_ \(\mathcal{F}_{\mathcal{X}}\)_:_ \(\mathcal{G}_{\mathcal{X}}^{(1)}\subset\cdots\subset\mathcal{G}_{\mathcal{X}}^{(l)}\)_;_
* \(\mathcal{G}_{\mathcal{Y}}\) _is a subspace of_ \(\mathcal{F}_{\mathcal{Y}}\)_._

**Nested H-score.** Given a nesting configuration \(\mathcal{C}=\left\{(d_{1},\ldots,d_{l});\,\left(\mathcal{G}_{\mathcal{X}}^{(1)},\ldots,\mathcal{G}_{\mathcal{X}}^{(l)}\right);\,\mathcal{G}_{\mathcal{Y}}\right\}\) for \(k\)-dimensional features, the associated nested H-score is a function of the \(k\)-dimensional feature pair \(f\) and \(g\), which we denote by \(\mathscr{H}(f,g;\mathcal{C})\), specified as follows. To begin, let us define \(k_{i}\triangleq\sum_{j=1}^{i}d_{j}\) for each \(0\leq i\leq l\), representing the total dimension up to the \(i\)-th level. Then, we define the domain of \(\mathscr{H}(f,g;\mathcal{C})\), denoted by \(\mathrm{dom}(\mathcal{C})\), as
\[\mathrm{dom}(\mathcal{C})\triangleq\left\{(f,g)\colon f\in\mathcal{F}_{\mathcal{X}}^{k},g\in\mathcal{G}_{\mathcal{Y}}^{k},f_{j}\in\mathcal{G}_{\mathcal{X}}^{(i)},\text{ for all }k_{i-1}<j\leq k_{i}\right\}. \tag{31}\]
Then, for \((f,g)\in\mathrm{dom}(\mathcal{C})\) and each \(i\in[l]\), we obtain the H-score \(\mathscr{H}(f_{[k_{i}]},g_{[k_{i}]})\) by taking the first \(k_{i}\) dimensions of \(f,g\). We define the nested H-score \(\mathscr{H}(f,g;\mathcal{C})\) by taking the sum of these \(l\) H-scores,
\[\mathscr{H}(f,g;\mathcal{C})\triangleq\sum_{i=1}^{l}\mathscr{H}(f_{[k_{i}]},g_{[k_{i}]}),\qquad(f,g)\in\mathrm{dom}(\mathcal{C}). \tag{32}\]

**Remark 15**: _Note that we obtain the nested H-score (32) by applying the sum function to aggregate different H-scores. However, this choice of aggregation function is not unique. In fact, for an \(l\)-level nesting configuration, we can apply any differentiable function \(\mathbb{R}^{l}\rightarrow\mathbb{R}\) that is strictly increasing in each argument, and use the aggregated result as a definition of nested H-score.
For ease of presentation, we adopt the form (32) throughout our development, but also provide general discussions in Appendix B for completeness._

**Remark 16**: _By symmetry, we can also define the configuration \(\left\{(d_{1},\ldots,d_{l});\,\mathcal{G}_{\mathcal{X}};\,\left(\mathcal{G}_{\mathcal{Y}}^{(1)},\ldots,\mathcal{G}_{\mathcal{Y}}^{(l)}\right)\right\}\) and the associated nested H-score, for subspaces \(\mathcal{G}_{\mathcal{X}}\) of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}^{(1)}\subset\cdots\subset\mathcal{G}_{\mathcal{Y}}^{(l)}\) of \(\mathcal{F}_{\mathcal{Y}}\)._

From (32), the nested H-score aggregates different H-scores with nested input features. The nested structure of the features is specified by the increasing sequence of dimension indices \([k_{1}]\subset\cdots\subset[k_{l}]=[k]\), determined by the sequence \((d_{1},\ldots,d_{l})\). The domain of the features is specified by the subspaces in the configuration. When \(\mathcal{G}_{\mathcal{X}}^{(i)}=\mathcal{G}_{\mathcal{X}}\) for all \(i\in[l]\), we can simply write the configuration as \(\mathcal{C}=\left\{(d_{1},\ldots,d_{l});\,\mathcal{G}_{\mathcal{X}};\,\mathcal{G}_{\mathcal{Y}}\right\}\) without ambiguity. In particular, we can represent the original H-score for \(k\)-dimensional input features as a nested H-score configured by \(\left\{k;\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\right\}\).

**Refinements of Nesting Configuration.** Given a nesting configuration for \(k\)-dimensional features \(\mathcal{C}=\left\{(d_{1},\ldots,d_{l});\,\left(\mathcal{G}_{\mathcal{X}}^{(1)},\ldots,\mathcal{G}_{\mathcal{X}}^{(l)}\right);\,\mathcal{G}_{\mathcal{Y}}\right\}\), the sequence \((d_{1},\ldots,d_{l})\) defines a partition that separates the \(k\) dimensions into \(l\) different groups. By refining such a partition, we can construct new configurations with higher levels, which we refer to as refined configurations. In particular, we denote the finest refinement of \(\mathcal{C}\) by \(\mathcal{C}^{\star}\), defined as
\[\mathcal{C}^{\star}\triangleq\left\{(1)^{k};\,\left(\left(\mathcal{G}_{\mathcal{X}}^{(1)}\right)^{d_{1}},\ldots,\left(\mathcal{G}_{\mathcal{X}}^{(l)}\right)^{d_{l}}\right);\,\mathcal{G}_{\mathcal{Y}}\right\}, \tag{33}\]
where we have used \((1)^{k}\) to denote the all-one sequence of length \(k\), and where \(\left(\left(\mathcal{G}_{\mathcal{X}}^{(1)}\right)^{d_{1}},\ldots,\left(\mathcal{G}_{\mathcal{X}}^{(l)}\right)^{d_{l}}\right)\) represents the length-\(k\) sequence starting with \(d_{1}\) terms of \(\mathcal{G}_{\mathcal{X}}^{(1)}\), followed by \(d_{2}\) terms of \(\mathcal{G}_{\mathcal{X}}^{(2)}\), up to \(d_{l}\) terms of \(\mathcal{G}_{\mathcal{X}}^{(l)}\). From (31), such refinements do not change the domain, and we have \(\mathrm{dom}(\mathcal{C}^{\star})=\mathrm{dom}(\mathcal{C})\). The corresponding nested H-score is
\[\mathscr{H}\left(f,g;\mathcal{C}^{\star}\right)=\sum_{i=1}^{k}\mathscr{H}(f_{[i]},g_{[i]}),\quad(f,g)\in\mathrm{dom}(\mathcal{C}). \tag{34}\]

### Nesting Technique for Modal Decomposition

We then demonstrate the application of the nesting technique in learning the modal decomposition. Given \(k\)-dimensional features \(f,g\), we consider the nesting configuration \(\{(1)^{k};\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\}\), which can also be obtained from the original H-score by the refinement (33): \(\{(1)^{k};\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\}=\{k;\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\}^{\star}\).
The corresponding nested H-score is the sum of \(k\) H-scores:
\[\mathscr{H}(f,g;\{(1)^{k};\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\})=\sum_{i=1}^{k}\mathscr{H}(f_{[i]},g_{[i]}). \tag{35}\]
Note that from Property 2, for each \(i\in[k]\), the H-score \(\mathscr{H}(f_{[i]},g_{[i]})\) is maximized if and only if \(f_{[i]}\otimes g_{[i]}=\zeta_{\leq i}(\mathfrak{i}_{X;Y})\). Therefore, all \(k\) terms of H-scores are maximized simultaneously, if and only if we have \(f_{[i]}\otimes g_{[i]}=\zeta_{\leq i}(\mathfrak{i}_{X;Y})\) for all \(i\in[k]\). By definition, this is also equivalent to
\[f_{i}\otimes g_{i}=\zeta_{i}(\mathfrak{i}_{X;Y}),\quad i\in[k]. \tag{36}\]
Hence, the nested H-score \(\mathscr{H}(f,g;\{(1)^{k};\,\mathcal{F}_{\mathcal{X}};\,\mathcal{F}_{\mathcal{Y}}\})\) is maximized if and only if we have (36), which gives the top \(k\) modes of the \((X;Y)\) dependence. In practice, we can compute the nested H-score by using a nested architecture as shown in Figure 4, where we have used the "\(\oplus\)" symbol to indicate the concatenation of two vectors, i.e., \(v_{1}\oplus v_{2}\triangleq\left[\begin{smallmatrix}v_{1}\\ v_{2}\end{smallmatrix}\right]\) for two column vectors \(v_{1},v_{2}\). By maximizing the nested H-score, we can use (36) to retrieve each \(i\)-th dependence mode from the corresponding feature pair \((f_{i},g_{i})\), for \(i\in[k]\).

Figure 4: Nesting technique for modal decomposition: the nested H-score is computed with a nested architecture, where "\(\oplus\)" denotes the concatenation operation of two features.

Compared with the features learned in Section 3, the nesting technique provides several new applications. First, from Fact 3, the learned features \(f\) and \(g\) have orthogonal dimensions, i.e., different dimensions are uncorrelated. In addition, from (36), we can compute the energy contained in each \(i\)-th dependence mode, via \(\|\zeta_{i}(\mathfrak{i}_{X;Y})\|^{2}=\|f_{i}\otimes g_{i}\|^{2}=\mathbb{E}\left[f_{i}^{2}(X)\right]\cdot\mathbb{E}\left[g_{i}^{2}(Y)\right]\), for \(i\in[k]\). This provides a spectrum of the \((X;Y)\) dependence and characterizes the usefulness or contribution of each dimension. Similarly, we can retrieve the top \(k\) maximal correlation functions \(f_{i}^{*},g_{i}^{*}\) and coefficients \(\sigma_{i}\), by using the relations [cf. (16) and Corollary 6]
\[f_{i}^{*}=\frac{f_{i}}{\sqrt{\mathbb{E}\left[f_{i}^{2}(X)\right]}},\quad g_{i}^{*}=\frac{g_{i}}{\sqrt{\mathbb{E}\left[g_{i}^{2}(Y)\right]}},\quad\sigma_{i}=\sqrt{\mathbb{E}\left[f_{i}^{2}(X)\right]\cdot\mathbb{E}\left[g_{i}^{2}(Y)\right]},\qquad i\in[k]. \tag{37}\]

**Nested Optimality.** From (36), for any \(d\leq k\), we can represent the optimal rank-\(d\) approximation of \(\mathrm{i}_{X;Y}\) as \(\zeta_{\leq d}(\mathrm{i}_{X;Y})=f_{[d]}\otimes g_{[d]}\), which corresponds to the top \(d\) dimensions of the learned features. We refer to this property as nested optimality: the learned features give a collection of optimal solutions for different dimensions, with a nested structure. This nested optimality provides a convenient way for feature selection, where the optimal selection of \(d\) feature pairs is simply taking the top \(d\) dimensions of the learned features. In practice, we can choose \(d\) based on the dependence spectrum, such that the selected features capture a sufficient amount of dependence information, and then take \(f_{[d]},g_{[d]}\) for further processing.

We can readily extend the discussion to constrained modal decomposition problems.
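Before turning to the constrained case, we note that the objective (35) requires only minor changes on top of the plain H-score: each summand reuses a prefix of the feature dimensions. A minimal sketch is given below (our own illustration, reusing the `h_score` function from the earlier sketch; names are hypothetical).

```python
def nested_h_score(f, g):
    """Nested H-score (35): sum of H-scores over nested prefixes of the features.

    f, g: (n, k) tensors of paired features; the i-th summand uses only the
    first i dimensions, so maximizing the sum enforces (36) mode by mode.
    """
    k = f.shape[1]
    return sum(h_score(f[:, : i + 1], g[:, : i + 1]) for i in range(k))
```

Since the features themselves are computed once per batch, the additional cost over a single H-score evaluation comes only from the \(k\) prefix evaluations.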
Let \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) be subspaces of \(\mathcal{F}_{\mathcal{X}}\) and \(\mathcal{F}_{\mathcal{Y}}\), respectively. Then, the nested H-score \(\mathscr{H}(f,g;\{(1)^{k};\,\mathcal{G}_{\mathcal{X}};\,\mathcal{G}_{\mathcal{Y}}\})\), defined for \(k\)-dimensional features \(f\in\mathcal{G}_{\mathcal{X}}^{k}\), \(g\in\mathcal{G}_{\mathcal{Y}}^{k}\), is maximized if and only if
\[f_{i}\otimes g_{i}=\zeta_{i}(\mathrm{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{G}_{\mathcal{Y}}),\quad\text{for all }i\in[k]. \tag{38}\]
From (38), we can establish a similar nested optimality in the constrained case. In particular, when \(\mathcal{G}_{\mathcal{X}}\) and \(\mathcal{G}_{\mathcal{Y}}\) correspond to the collections of features that can be expressed by neural feature extractors, the result also characterizes the effects of the restricted expressive power of neural networks (cf. Section 3.3). Specifically, from (38), when we use feature extractors with restricted expressive power, we can still guarantee that the learned features have uncorrelated dimensions.

### Nesting Technique for Projection

With the nesting technique, we can also operate projections of statistical dependence in feature spaces. Such operations are the basis of multivariate dependence decomposition. To begin, let \(\mathcal{G}_{\mathcal{X}}\) denote a subspace of \(\mathcal{F}_{\mathcal{X}}\). Then, from \(\mathcal{F}_{\mathcal{X}}=\mathcal{G}_{\mathcal{X}}\boxplus(\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}})\), we obtain an orthogonal decomposition of the functional space
\[\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}=\mathcal{F}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}=(\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}})\boxplus((\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}})\otimes\mathcal{F}_{\mathcal{Y}}). \tag{39}\]
Therefore, by projecting the statistical dependence \(\mathrm{i}_{X;Y}\) to these functional spaces, we obtain its orthogonal decomposition [cf. Fact 2]
\[\mathrm{i}_{X;Y}=\Pi\left(\mathrm{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right)+\Pi\left(\mathrm{i}_{X;Y};(\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}})\otimes\mathcal{F}_{\mathcal{Y}}\right). \tag{40}\]
In particular, the first term \(\Pi\left(\mathrm{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right)\) characterizes the dependence component aligned with the subspace \(\mathcal{G}_{\mathcal{X}}\), and the second term represents the component orthogonal to \(\mathcal{G}_{\mathcal{X}}\). For convenience, we denote these two dependence components by \(\pi(\mathrm{i}_{X;Y})\) and \(\pi_{\perp}(\mathrm{i}_{X;Y})\), respectively, and demonstrate the geometry of the decomposition in Figure 5.

In general, the information carried by the decomposed dependence components depends on the choice of the subspace \(\mathcal{G}_{\mathcal{X}}\), which varies across learning settings. In spite of such differences, we can learn the decomposition with the same procedure, which we demonstrate as follows. To begin, we consider the feature representations of the dependence components.
For example, by applying the rank-\(k\) approximation on the orthogonal component \(\pi_{\perp}(\mathrm{i}_{X;Y})\), we obtain
\[\zeta_{\leq k}(\pi_{\perp}(\mathrm{i}_{X;Y}))=\zeta_{\leq k}(\Pi\left(\mathrm{i}_{X;Y};(\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}})\otimes\mathcal{F}_{\mathcal{Y}}\right))=\zeta_{\leq k}(\mathrm{i}_{X;Y}|\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}}),\]
which can be represented as a pair of \(k\)-dimensional features. To learn such feature representations, we introduce the two-level nesting configuration
\[\mathcal{C}_{\pi}\triangleq\{(\bar{k},k);\ (\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{X}});\ \mathcal{F}_{\mathcal{Y}}\} \tag{41}\]
for some feature dimensions \(\bar{k},k\geq 1\). The corresponding nested H-score is
\[\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\pi}\right)=\mathscr{H}(\bar{f},\bar{g})+\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right), \tag{42}\]
defined on the domain [cf. (31)]
\[\mathrm{dom}(\mathcal{C}_{\pi})=\left\{\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right):\bar{f}\in\mathcal{G}_{\mathcal{X}}^{\bar{k}},\bar{g}\in\mathcal{F}_{\mathcal{Y}}^{\bar{k}},f\in\mathcal{F}_{\mathcal{X}}^{k},g\in\mathcal{F}_{\mathcal{Y}}^{k}\right\}, \tag{43}\]
where for convenience, we explicitly express the first-level features as \(\bar{f},\bar{g}\), both of dimension \(\bar{k}\). We can use a network structure to compute the nested H-score (42), as shown in Figure 6.

To see the roles of the two H-score terms in (42), note that if we maximize only the first term \(\mathscr{H}(\bar{f},\bar{g})\) of the nested H-score over the domain (43), we will obtain the solution to a constrained dependence approximation problem (cf. Section 3.3): \(\bar{f}\otimes\bar{g}=\zeta_{\leq\bar{k}}(\mathfrak{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}})\). Specifically, if \(\bar{k}\) is sufficiently large, we would get \(\bar{f}\otimes\bar{g}=\Pi\left(\mathfrak{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right)=\pi(\mathfrak{i}_{X;Y})\), which gives the aligned component. With such \(\bar{f}\) and \(\bar{g}\), we can express the second H-score term as
\[\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right)=\frac{1}{2}\cdot\left[\|\mathfrak{i}_{X;Y}\|^{2}-\|\mathfrak{i}_{X;Y}-\bar{f}\otimes\bar{g}-f\otimes g\|^{2}\right]=\frac{1}{2}\cdot\left[\|\mathfrak{i}_{X;Y}\|^{2}-\|\pi_{\perp}(\mathfrak{i}_{X;Y})-f\otimes g\|^{2}\right].\]
Therefore, if we maximize the second H-score over only \(f\) and \(g\), we would get the orthogonal dependence component: \(f\otimes g=\zeta_{\leq k}(\pi_{\perp}(\mathfrak{i}_{X;Y}))\). This gives a two-phase training strategy for computing the decomposition (40). In contrast, the nested H-score (42) provides a single training objective to obtain both dependence components simultaneously. We formalize the result as the following theorem, of which a proof is provided in Appendix C.7.
**Theorem 17**: _Given \(\bar{k}\geq\mathrm{rank}(\Pi\left(\mathfrak{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right))\), \(\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\pi}\right)\) is maximized if and only if_
\[\bar{f}\otimes\bar{g}=\Pi\left(\mathfrak{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right), \tag{44a}\]
\[f\otimes g=\zeta_{\leq k}(\mathfrak{i}_{X;Y}|\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}}). \tag{44b}\]

We can further consider the modal decomposition of the dependence components, to obtain features with nested optimality. To learn such features, it suffices to consider the refined configuration \(\mathcal{C}_{\pi}^{\star}=\left\{(1)^{\bar{k}+k};\;\left(\mathcal{G}_{\mathcal{X}}^{\bar{k}},\mathcal{F}_{\mathcal{X}}^{k}\right);\;\mathcal{F}_{\mathcal{Y}}\right\}\), which we formalize as follows. A proof is provided in Appendix C.8.

**Theorem 18**: _Given \(\bar{k}\geq\mathrm{rank}(\Pi\left(\mathfrak{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\right))\), \(\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\pi}^{\star}\right)\) is maximized if and only if_
\[\bar{f}_{i}\otimes\bar{g}_{i}=\zeta_{i}(\mathfrak{i}_{X;Y}|\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}})\quad\text{for all }i\in[\bar{k}], \tag{45a}\]
\[f_{i}\otimes g_{i}=\zeta_{i}(\mathfrak{i}_{X;Y}|\mathcal{F}_{\mathcal{X}}\boxminus\mathcal{G}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}})\quad\text{for all }i\in[k]. \tag{45b}\]

### Learning With Orthogonality Constraints

We conclude this section by discussing a simple application of the nesting technique, where the goal is to learn optimal features uncorrelated with given features. Such uncorrelatedness conditions correspond to orthogonality constraints in feature geometry. Specifically, given a \(\bar{k}\)-dimensional feature \(\phi\in\mathcal{F}_{\mathcal{X}}^{\bar{k}}\), we consider the problem of learning a \(k\)-dimensional feature \(f\) from \(X\) for inferring \(Y\), with the constraint that \(\mathrm{span}\{f\}\perp\mathrm{span}\{\phi\}\), i.e., \(\mathbb{E}\left[f_{i}(X)\phi_{j}(X)\right]=\langle f_{i},\phi_{j}\rangle=0\) for all \(i\in[k],j\in[\bar{k}]\). We therefore consider the constrained low-rank approximation problem
\[\operatorname*{minimize}_{\begin{subarray}{c}f\in\mathcal{F}_{\mathcal{X}}^{k},\,g\in\mathcal{F}_{\mathcal{Y}}^{k}\colon\\ \mathrm{span}\{f\}\perp\mathrm{span}\{\phi\}\end{subarray}}\ \left\|\mathfrak{i}_{X;Y}-f\otimes g\right\|. \tag{46}\]
We can demonstrate that the solution to (46) corresponds to learning the decomposition (40), with the choice \(\mathcal{G}_{\mathcal{X}}=\mathrm{span}\{\phi\}\). To see this, we write [cf. (40)]
\[\mathfrak{i}_{X;Y}=\pi_{\phi}(\mathfrak{i}_{X;Y})+\left(\mathfrak{i}_{X;Y}-\pi_{\phi}(\mathfrak{i}_{X;Y})\right), \tag{47}\]
where we have denoted the aligned component \(\pi_{\phi}(\mathrm{i}_{X;Y})\triangleq\Pi\left(\mathrm{i}_{X;Y};\mathrm{span}\{\phi\}\otimes\mathcal{F}_{\mathcal{Y}}\right)\). In addition, note that the orthogonality constraint of (46) is \(f\in(\mathcal{F}_{\mathcal{X}}\boxminus\mathrm{span}\{\phi\})^{k}\), \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\).
Therefore, it follows from Proposition 5 that the solution to (46) is
\[f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y}|\mathcal{F}_{\mathcal{X}}\boxminus\mathrm{span}\{\phi\},\mathcal{F}_{\mathcal{Y}})=\zeta_{\leq k}(\Pi\left(\mathrm{i}_{X;Y};(\mathcal{F}_{\mathcal{X}}\boxminus\mathrm{span}\{\phi\})\otimes\mathcal{F}_{\mathcal{Y}}\right))=\zeta_{\leq k}\left(\mathrm{i}_{X;Y}-\pi_{\phi}(\mathrm{i}_{X;Y})\right), \tag{48}\]
where to obtain the last equality we have used the orthogonal decomposition (47), as well as the fact that \((\mathcal{F}_{\mathcal{X}}\boxminus\mathrm{span}\{\phi\})\otimes\mathcal{F}_{\mathcal{Y}}=\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\boxminus(\mathrm{span}\{\phi\}\otimes\mathcal{F}_{\mathcal{Y}})\).

**Remark 19**: _From the decomposition (47), we can characterize the amount of dependence information captured by the feature \(\phi\) as the energy \(\|\pi_{\phi}(\mathrm{i}_{X;Y})\|^{2}\). This quantity (with a \(1/2\) scaling factor) is also referred to as the single-sided H-score (Xu and Huang, 2020; Xu et al., 2022) of \(\phi\), due to the connection: \(\max_{\bar{g}\in\mathcal{F}_{\mathcal{Y}}^{\bar{k}}}\mathscr{H}(\phi,\bar{g})=\frac{1}{2}\|\pi_{\phi}(\mathrm{i}_{X;Y})\|^{2}\)._

To learn the features (48), we can apply the nesting technique and maximize the nested H-score configured by \(\mathcal{C}_{\pi}\) with \(\mathcal{G}_{\mathcal{X}}=\mathrm{span}\{\phi\}\). Specifically, from Theorem 17, \(\bar{f}=\phi\) is already in the optimal solution set. Therefore, we can fix \(\bar{f}\) to \(\bar{f}=\phi\), and optimize
\[\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\pi}\right)\bigg|_{\bar{f}=\phi}=\mathscr{H}(\phi,\bar{g})+\mathscr{H}\left(\begin{bmatrix}\phi\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right) \tag{49}\]
over \(\bar{g}\in\mathcal{F}_{\mathcal{Y}}^{\bar{k}},f\in\mathcal{F}_{\mathcal{X}}^{k}\), and \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\). We can compute the objective (49) by the nested network structure as shown in Figure 7.

Figure 7: Nesting technique for learning features orthogonal to the feature \(\phi\), where the \(\phi\) block is frozen during learning. The feature \(\phi\) can be given either in its analytical expression, or as a pretrained neural network.

It is also worth noting that, from Proposition 13, we can interpret the solution to (46) as the features extracted by classification DNNs subject to the same orthogonality constraints. However, compared with the H-score optimization, putting such equality constraints in DNN training typically requires non-trivial implementation.

## 5 Learning With Side Information

In this section, we study a multivariate learning problem involving external knowledge and demonstrate learning algorithm design based on the nesting technique. Specifically, we consider the problem of learning features from \(X\) to infer \(Y\), and assume some external knowledge \(S\) is available for the inference. We refer to \(S\) as the side information, which corresponds to an extra data source for facilitating the inference. In particular, we consider the setting where we cannot obtain the \((X,S)\) joint pair during the information processing, e.g., when \(X\) and \(S\) are collected and processed by different agents in a distributed system. Otherwise, we can apply the bivariate dependence learning framework, by treating the \((X,S)\) pair as a new variable and directly learning features for predicting \(Y\).
We depict this learning setting in Figure 8, where the inference is based on the both features extracted from \(X\) and the side information \(S\). Our goal is to design an efficient feature extractor which carries only the information not captured by \(S\). In addition, we need to design a fusion mechanism for the inference module to combine such features with the side information \(S\), and provide inference results conditioned on the side information. Let \(P_{X,S,Y}\) denote the joint distribution of \(X\), \(S\), \(Y\). Throughout our development in this section, we consider the feature geometry on \(\mathcal{F}_{\mathcal{X}\times\mathcal{S}\times\mathcal{Y}}\) with the metric distribution \(R_{X,S,Y}\triangleq P_{X}P_{S,Y}\). ### Dependence Decomposition and Feature Learning To begin, we represent the joint dependence in functional space as the CDK function \(\mathfrak{i}_{X;S,Y}\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}\times \mathcal{Y}}\). Since the side information \(S\) is provided for the inference, we focus on the dependence between \(X\) and target \(Y\) not captured by the side information. To this end, we separate the \((X;S)\) dependence from the joint dependence, by considering the orthogonal decomposition of functional space [cf. Fact 1 and (3)]: \[\mathcal{F}_{\mathcal{X}\times\mathcal{S}\times\mathcal{Y}}=\mathcal{F}_{ \mathcal{X}\times\mathcal{S}}\boxplus\mathcal{F}_{\mathcal{Y}|(\mathcal{X} \times\mathcal{S})}. \tag{50}\] This induces an orthogonal decomposition of the joint dependence \[\mathfrak{i}_{X;S,Y}=\pi_{\mathsf{M}}(\mathfrak{i}_{X;S,Y})+\pi_{\mathsf{C}}( \mathfrak{i}_{X;S,Y}), \tag{51}\] where we have defined \(\pi_{\mathsf{M}}(\gamma)\triangleq\Pi\left(\gamma;\mathcal{F}_{\mathcal{X} \times\mathcal{S}}\right)\) and \(\pi_{\mathsf{C}}(\gamma)\triangleq\Pi\left(\gamma;\mathcal{F}_{\mathcal{Y}|( \mathcal{X}\times\mathcal{S})}\right)\) for all \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}\times\mathcal{Y}}\). We characterize the decomposed components as follows, a proof of which is provided in Appendix C.9. **Proposition 20**: _We have \(\pi_{\mathsf{M}}(\mathfrak{i}_{X;S,Y})=\mathfrak{i}_{X;S}=\tilde{\ell}_{P_{X,S,Y}^{\mathsf{M}}}\), where \(P_{X,S,Y}^{\mathsf{M}}\triangleq P_{X|S}P_{S}P_{Y|S}\)._ From Proposition 20, we have \(\pi_{\mathsf{M}}(\mathfrak{i}_{X;S,Y})=\mathfrak{i}_{X;S}=\tilde{\ell}_{P_{X,S,Y}^{\mathsf{M}}}\), where \(P_{X,S,Y}^{\mathsf{M}}\triangleq P_{X|S}P_{S}P_{Y|S}\). More generally, the space \(\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\) characterizes CDK functions associated with such Markov distributions, which we formalize as follows. A proof of which is provided in Appendix C.10. **Proposition 21**: _Given \(Q_{X,S,Y}\) with \(Q_{X}=P_{X},Q_{S,Y}=P_{S,Y}\), let \(\mathfrak{i}_{X;S,Y}^{(Q)}\) denote the corresponding CDK function. Then, \(\mathfrak{i}_{X;S,Y}^{(Q)}\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\) if and only if \(Q_{X,S,Y}=Q_{X|S}Q_{S}Q_{Y|S}\)._ Figure 8: Learning Setting With Side Information \(S\) Hence, we refer to the dependence component \(\pi_{\mathsf{M}}\big{(}\mathrm{i}_{X;S,Y}\big{)}=\mathrm{i}_{X;S}\) as the Markov component. Then, we have \(\pi_{\mathsf{C}}\big{(}\mathrm{i}_{X;S,Y}\big{)}=\mathrm{i}_{X;S,Y}-\mathrm{i} _{X;S}\), which characterizes the joint dependence not captured by \(S\). 
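Proposition 20 can be checked numerically for small discrete alphabets: under the metric distribution \(R_{X,S,Y}=P_{X}P_{S,Y}\), projecting a function of \((x,s,y)\) onto \(\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\) amounts to averaging out \(y\) with weights \(P_{Y|S}(y|s)\). The following sketch is an illustrative check only, not part of the learning algorithm; all array and helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint pmf P_{X,S,Y} over small alphabets.
nx, ns, ny = 4, 3, 5
P = rng.random((nx, ns, ny))
P /= P.sum()

P_x = P.sum(axis=(1, 2))                    # P_X
P_sy = P.sum(axis=0)                        # P_{S,Y}
P_xs = P.sum(axis=2)                        # P_{X,S}
P_s = P_sy.sum(axis=1)                      # P_S
R = P_x[:, None, None] * P_sy[None, :, :]   # metric R_{X,S,Y} = P_X P_{S,Y}

# CDK functions i_{X;S,Y} and i_{X;S}.
i_xsy = P / R - 1.0
i_xs = P_xs / (P_x[:, None] * P_s[None, :]) - 1.0

# Projection onto F_{X x S}: average out y with weights P_{Y|S}(y|s), since
# under R we have R_{Y|X,S}(y|x,s) = P_{S,Y}(s,y) / P_S(s) = P_{Y|S}(y|s).
P_y_given_s = P_sy / P_s[:, None]
proj = np.einsum('xsy,sy->xs', i_xsy, P_y_given_s)
assert np.allclose(proj, i_xs)              # Proposition 20

# The residual i_{X;S,Y} - i_{X;S} is orthogonal to i_{X;S} under R,
# so the squared norms satisfy a Pythagorean relation.
resid = i_xsy - i_xs[:, :, None]
sqnorm = lambda gamma: float(np.sum(R * gamma ** 2))
assert np.isclose(sqnorm(resid), sqnorm(i_xsy) - sqnorm(i_xs[:, :, None]))
```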
We refer to it as the Conditional dependence component, and also denote it by \(\mathrm{i}_{X;Y|S}\), i.e., \[\mathrm{i}_{X;Y|S}(x,s,y)\triangleq\mathrm{i}_{X;S,Y}(x,s,y)-\mathrm{i}_{X;S}( x,s)=\left[\frac{P_{X,S,Y}-P_{X,S,Y}^{\mathsf{M}}}{R_{X,S,Y}}\right](x,s,y). \tag{52}\] Therefore, the conditional dependence component \(\mathrm{i}_{X;Y|S}\) vanishes if and only if \(X\) and \(Y\) are conditionally independent given \(S\). In general, from the Pythagorean relation, we can write \[\left\|\mathrm{i}_{X;Y|S}\right\|^{2}=\left\|\mathrm{i}_{X;S,Y}\right\|^{2}- \left\|\mathrm{i}_{X;S}\right\|^{2}, \tag{53}\] analogous to the expression of conditional mutual information \(I(X;Y|S)=I(X;S,Y)-I(X;S)\). Indeed, we can establish an explicit connection in the local regime where \(X\) and \((S,Y)\) are \(\epsilon\)-dependent, i.e., \(\left\|\mathrm{i}_{X;S,Y}\right\|=O(\epsilon)\). Then, from Lemma 9 we obtain \(\left\|\mathrm{i}_{X;S,Y}\right\|^{2}=2\cdot I(X;S,Y)+o(\epsilon^{2})\), and similarly, \(\left\|\mathrm{i}_{X;S}\right\|^{2}=2\cdot I(X;S)+o(\epsilon^{2})\). Therefore, (53) becomes \[\left\|\mathrm{i}_{X;Y|S}\right\|^{2}=\left\|\mathrm{i}_{X;S,Y}\right\|^{2}- \left\|\mathrm{i}_{X;S}\right\|^{2}=2\cdot I(X;Y|S)+o(\epsilon^{2}).\] We then discuss learning these two dependence components by applying the nesting technique. To begin, note that since \[\mathrm{i}_{X;S} =\pi_{\mathsf{M}}(\mathrm{i}_{X;S,Y})=\Pi\left(\mathrm{i}_{X;S,Y} ;\mathcal{F}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{S}}\right), \tag{54}\] \[\mathrm{i}_{X;Y|S} =\pi_{\mathsf{C}}(\mathrm{i}_{X;S,Y})=\Pi\left(\mathrm{i}_{X;S,Y} ;\mathcal{F}_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}|\mathcal{S}}\right), \tag{55}\] we recognize the decomposition \(\mathrm{i}_{X;S,Y}=\mathrm{i}_{X;S}+\mathrm{i}_{X;Y|S}\) as a special case of (40). Therefore, similar to our discussions in Section 4.3, we consider the nesting configuration \(\mathcal{C}_{\mathsf{MC}}\) and its refinement \(\mathcal{C}_{\mathsf{MC}}^{*}\), where [cf. (41)] \[\mathcal{C}_{\mathsf{MC}}\triangleq\left\{(\bar{k},k);\,\mathcal{F}_{ \mathcal{X}};\,(\mathcal{F}_{\mathcal{S}},\mathcal{F}_{\mathcal{S}\times \mathcal{Y}})\right\}. \tag{56}\] The corresponding nested H-scores are defined on \[\mathrm{dom}(\mathcal{C}_{\mathsf{MC}})=\mathrm{dom}(\mathcal{C}_{\mathsf{MC }}^{\star})=\left\{\left(\left[\bar{f}\atop f\right],\left[\bar{g}\atop g \right]\right):\bar{f}\in\mathcal{F}_{\mathcal{X}}^{\bar{k}},\bar{g}\in \mathcal{F}_{\mathcal{S}}^{\bar{k}},f\in\mathcal{F}_{\mathcal{X}}^{k},g\in \mathcal{F}_{\mathcal{Y}\times\mathcal{S}}^{k}\right\}. \tag{57}\] In particular, we can compute the nested H-score configured by \(\mathcal{C}_{\mathsf{MC}}\) from a nested network structure as shown in Figure 9. Then, we can obtain both dependence components by optimizing the nested H-scores. Formally, we have the following corollary of Theorem 17 and Theorem 18. **Corollary 22**: _Given \(\bar{k}\geq\mathrm{rank}(\mathrm{i}_{X;S})\), \(\mathscr{H}\left(\left[\bar{f}\atop f\right],\left[\bar{g}\atop g\right]; \mathcal{C}_{\mathsf{MC}}\right)\) is maximized if and only if_ \[\bar{f}\otimes\bar{g}=\mathrm{i}_{X;S}, \tag{58a}\] \[f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y|S}). \tag{58b}\] _In addition, \(\mathscr{H}\left(\left[\bar{f}\atop f\right],\left[\bar{g}\atop g\right]; \mathcal{C}_{\mathsf{MC}}^{*}\right)\) is maximized if and only if_ \[\bar{f}_{i}\otimes\bar{g}_{i}=\zeta_{i}(\mathrm{i}_{X;S}),\quad i \in[\bar{k}]. 
\tag{59a}\] \[f_{i}\otimes g_{i}=\zeta_{i}(\mathrm{i}_{X;Y|S}),\quad i\in[k]. \tag{59b}\] ### Feature Assembling and Inference Models We then assemble the features for inference tasks, particularly the inference conditioned on \(S\). We first consider the case where we have learned both dependence components \(\mathrm{i}_{X;S}\) and \(\mathrm{i}_{X;Y|S}\), for which we have the following characterization (cf. Proposition 11). A proof is provided in Appendix C.11. **Proposition 23**: _Suppose features \(\bar{f}\in\mathcal{F}_{X}^{\bar{k}},\bar{g}\in\mathcal{F}_{\bar{\mathcal{S}}}^ {\bar{k}}\) and \(f\in\mathcal{F}_{X}^{k},g\in\mathcal{F}_{\bar{\mathcal{S}}\times y}^{k}\) satisfy \(\bar{f}\otimes\bar{g}=\mathrm{i}_{X;S},f\otimes g=\mathrm{i}_{X;Y|S}\). Then, we have \(\|\mathrm{i}_{X;S}\|^{2}=\mathrm{tr}(\Lambda_{\bar{f}}\cdot\Lambda_{\bar{g}}), \|\mathrm{i}_{X;Y|S}\|^{2}=\mathrm{tr}(\Lambda_{f}\cdot\Lambda_{g})\), and_ \[P_{Y|X,S}(y|x,s)=P_{Y|S}(y|s)\cdot\left(1+\frac{f^{\mathrm{T}}(x)g(s,y)}{1+f^{ \mathrm{T}}(x)\bar{g}(s)}\right). \tag{60}\] _In addition, for any function \(\psi\in\mathcal{F}_{\bar{\mathcal{S}}}^{d}\),_ \[\mathbb{E}\left[\psi(Y)|X=x,S=s\right]=\mathbb{E}\left[\psi(Y)|S=s\right]+ \frac{1}{1+f^{\mathrm{T}}(x)\bar{g}(s)}\cdot\Lambda_{\psi,g}^{(s)}\cdot f(x), \tag{61}\] _where we have defined \(\Lambda_{\psi,g}^{(s)}\triangleq\mathbb{E}\left[\psi(Y)g^{\mathrm{T}}(s,Y)|S=s\right]\) for each \(s\in\mathcal{S}\)._ Therefore, we can compute the strength of both the Markov component \(\mathrm{i}_{X;S}\) and the conditional component \(\mathrm{i}_{X;Y|S}\) from the features. Similarly, we can further compute the spectrum of the dependence components, by learning the modal decomposition according to (59). From Proposition 23, we can obtain inference models conditioned on the side information \(S\). In particular, for classification task, we can use (60) to compute the posterior probability, with the resulting MAP estimation conditioned on \(S=s\) [cf. (25)]: \[\hat{y}_{\mathrm{MAP}}(x;s)=\operatorname*{arg\,max}_{y\in y}P_{Y|X,S}(y|x,s)= \operatorname*{arg\,max}_{y\in y}P_{Y|S}(y|s)\cdot\left(1+\frac{f^{\mathrm{T}} (x)g(s,y)}{1+f^{\mathrm{T}}(x)\bar{g}(s)}\right). \tag{62}\] Specifically, \(P_{Y|S}\) can be obtained by a separate discriminative model that predicts \(Y\) from side information \(S\). In addition, when \(Y\) is continuous, we can obtain the MMSE estimator of \(\psi(Y)\) conditioned on \(S=s\) from (61), where we can learn \(\mathbb{E}\left[\psi(Y)|S=s\right]\) and \(\Lambda_{\psi,g}^{(s)}=\mathbb{E}\left[\psi(Y)g^{\mathrm{T}}(s,Y)|S=s\right]\) separately from \((S,Y)\) pairs. As we construct both models by assembling learned features, the model outputs depend on input data \(X\) only through the features \(\bar{f}\) and \(f\) of \(X\), as desired. Moreover, we can conduct a conditional independence test from the learned features. In particular, suppose we have learned features \(f\in\mathcal{F}_{X}^{k},g\in\mathcal{F}_{\bar{\mathcal{S}}\times y}^{k}\) with \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y|S})\) for some \(k\geq 1\). Then we obtain \(\mathrm{tr}(\Lambda_{f}\cdot\Lambda_{g})=\|\zeta_{\leq k}(\mathrm{i}_{X;Y|S}) \|^{2}\geq 0\), where the equality holds if and only if \(\mathrm{i}_{X;Y|S}=0\), i.e., \(X\) and \(Y\) are conditionally independent given \(S\). Figure 9: Nesting technique for learning with the side information \(S\) ### Theoretical Properties and Interpretations We conclude this section by demonstrating theoretical properties of the learned features. 
In particular, we focus on the conditional component \(\mathfrak{i}_{X;Y|S}\) and associated features, as the Markov component \(\mathfrak{i}_{X;S}\) shares the same properties as discussed in the bivariate case. To begin, let \(K\triangleq\mathrm{rank}(\mathfrak{i}_{X;Y|S})\), and let the modal decomposition of \(\mathfrak{i}_{X;Y|S}\) be \[\zeta_{i}(\mathfrak{i}_{X;Y|S})=\sigma_{i}\cdot(f_{i}^{*}\otimes g_{i}^{*}),i \in[K], \tag{63}\] where we have represented each mode in the standard form. Then, we can interpret the \(\sigma_{i},f_{i}^{*},g_{i}^{*}\) as the solution to a constrained maximal correlation problem. To see this, note that from \(\mathfrak{i}_{X;Y|S}=\Pi\left(\mathfrak{i}_{X;S,Y};\mathcal{F}_{\mathfrak{Y}| \mathcal{X},\delta}\right)=\Pi\left(\mathfrak{i}_{X;S,Y};\mathcal{F}_{ \mathfrak{X}}\otimes\mathcal{F}_{\mathfrak{Y}|\mathcal{S}}\right)\), we can obtain \(\sigma_{i}(f_{i}^{*}\otimes g_{i}^{*})=\zeta_{i}(\mathfrak{i}_{X;Y|S})= \zeta_{i}(\mathfrak{i}_{X;S,Y}|\mathcal{F}_{\mathfrak{X}},\mathcal{F}_{ \mathfrak{Y}|\mathcal{S}})\). Therefore, \(f_{i}^{*},g_{i}^{*}\) are the constrained maximal correlation function of \(X\) and \((S,Y)\) as defined in Proposition 7, with the subspaces \(\mathcal{G}_{\mathfrak{X}}=\mathcal{F}_{\mathfrak{X}},\mathcal{G}_{8\times \mathfrak{Y}}=\mathcal{F}_{\mathfrak{Y}|\mathcal{S}}\). #### 5.3.1 Posterior Distribution and Conditional Dependence In a local analysis regime, we can simplify the posterior distribution \(P_{Y|X,S}\) as follows. A proof is provided in Appendix C.12. **Proposition 24**: _If \(X\) and \((S,Y)\) are \(\epsilon\)-dependent, we have_ \[P_{Y|X,S}(y|x,s)=P_{Y|S}(y|s)\left(1+\sum_{i=1}^{K}\sigma_{i}f_{i}^{*}(x)g_{i}^ {*}(s,y)\right)+o(\epsilon), \tag{64}\] _where \(\sigma_{i},f_{i}^{*},g_{i}^{*}\) are as defined in (63)._ From (64), the dominant term of \(P_{Y|X,S}(y|x,s)\) depends on \(x\) only through \(f_{i}^{*}(x)\), \(i=1,\ldots,K\). Therefore, the feature \(f_{[K]}^{*}(X)=(f_{1}^{*}(X),\ldots,f_{K}^{*}(X))^{\mathrm{T}}\) captures the conditional dependence between \(X\) and \(Y\) given \(S\), up to higher-order terms of \(\epsilon\). #### 5.3.2 Relationship to Multitask Classification DNNs We can also establish a connection between the side information problem and deep neural networks for multitask learning. Specifically, we consider a multitask classification task where \(X\) and \(Y\) denote the input data and target label to predict, respectively, and \(S\) denotes the index for tasks. When conditioned on different values of \(S\), the dependence between data and label are generally different. We then demonstrate that a multitask DNN also learns the optimal approximation of the conditional dependence component \(\mathfrak{i}_{X;Y|S}\). Specifically, we consider a classical multitask classification DNN design (Caruana, 1993; Ruder, 2017), as shown in Figure 10. In this figure, feature \(f\in\mathcal{F}_{\mathfrak{X}}^{k}\) of \(X\) is shared among all tasks. 
For each task \(s\in\mathcal{S}\), the corresponding classification head with weight matrix \(G_{s}^{\mathrm{T}}\in\mathbb{R}^{|\mathfrak{Y}|\times k}\) and bias \(\underline{b}_{s}\in\mathbb{R}^{[\mathfrak{Y}]}\) are applied to compute the corresponding posterior probability \[\tilde{P}_{Y|X,S}^{(f,g,b)}(y|x,s)\triangleq\frac{\exp\left(f(x)\cdot g(s,y)+b (s,y)\right)}{\sum_{y^{\prime}\in\mathfrak{Y}}\exp\left(f(x)\cdot g(s,y^{ \prime})+b(s,y^{\prime})\right)}, \tag{65}\] where \(g\in\mathcal{F}_{\mathcal{S}\times\mathfrak{Y}}\) and \(b\in\mathcal{F}_{\mathfrak{Y}}\) are related to \(G_{s}\) and \(\underline{b}_{s}\) via [cf. (27)] \[G_{s}(i,y)=g_{i}(s,y)\text{ for all }i\in[k],y\in\mathcal{Y},\qquad\underline{b}_{s}=[ b(s,1),\ldots,b(s,|\mathfrak{Y}|)]^{\mathrm{T}}. \tag{66}\] Given data samples \(\{(x_{i},s_{i},y_{i})\}_{i=1}^{n}\) with the empirical distribution \(P_{X,S,Y}\), we write the corresponding likelihood function as \[\mathcal{L}_{S}(f,g,b)\triangleq\frac{1}{n}\sum_{i=1}^{n}\log\tilde{P}_{Y|X,S}^{ (f,g,b)}(y_{i}|x_{i},s_{i})=\mathbb{E}_{(\hat{X},\hat{Y},\hat{S})\sim P_{X,Y,S}} \left[\log\tilde{P}_{Y|X,\hat{S}}^{(f,g,b)}(\hat{Y}|\hat{X},\hat{S})\right]. \tag{67}\] Note that we can relate the posterior probability \(\tilde{P}_{Y|X,S}^{(f,g,b)}\) to the posterior \(\tilde{P}_{Y|X}^{(f,g,b)}\) in ordinary classification DNN, as defined in (28). To see this, note that for all \(s\in\mathcal{S}\), we have \(\tilde{P}_{Y|X,S=s}^{(f,g,b)}=\tilde{P}_{Y|X}^{(f,g^{(s)},b^{(s)})}\), where we have defined \(g^{(s)}\in\mathcal{F}_{y}^{k}\) and \(b^{(s)}\in\mathcal{F}_{y}\) for each \(s\in\mathcal{S}\), as \(g^{(s)}(y)\triangleq g(s,y),b^{(s)}(y)\triangleq b(s,y)\). Then, we rewrite (67) as \(\mathcal{L}_{S}(f,g,b)=\sum_{s\in\mathcal{S}}P_{S}(s)\mathcal{L}_{S}^{(s)}(f,g^{(s)},b^{(s)})\), where \(\mathcal{L}_{S}^{(s)}(f,g,b)\triangleq\mathbb{E}_{(\hat{X},\hat{Y})\sim P_{X,Y |S=s}}\left[\log\tilde{P}_{Y|X}^{(f,g,b)}(\hat{Y}|\hat{X})\right]\) is the likelihood value conditioned on \(S=s\). We further assume the all biases are trained to their optimal values with respect to \(f\) and \(g\), and obtain \[\mathcal{L}_{S}(f,g)\triangleq\sum_{s\in\mathcal{S}}P_{S}(s)\mathcal{L}_{S}^{ (s)}(f,g^{(s)})=\max_{b\in\mathcal{F}_{\mathcal{S}\times\mathcal{Y}}}\mathcal{ L}_{S}(f,g), \tag{68}\] where we have denoted \(\mathcal{L}_{S}^{(s)}(f,g)\triangleq\max_{b\in\mathcal{F}_{y}}\mathcal{L}_{S}^{(s)}(f,g,b)\) for each \(s\in\mathcal{S}\). Then, from Property 3 we can verify that \(\mathcal{L}_{S}(f,g)\) depends only on centered features, formalized as follows. **Property 4**: _We have \(\mathcal{L}_{S}(f,g)=\mathcal{L}_{S}(\tilde{f},\tilde{g})\), where we have defined \(\tilde{f}\triangleq\Pi\left(f;\mathcal{F}_{\mathcal{X}|\varnothing}\right)\), and \(\tilde{g}\triangleq\Pi\left(g;\mathcal{F}_{y|\mathcal{S}}\right)\), i.e., \(\tilde{f}(x)=f(x)-\mathbb{E}\left[f(X)\right]\) and \(\tilde{g}(s,y)=g(s,y)-\mathbb{E}\left[g(s,Y)|S=s\right]\)._ Therefore, we can focus on centered features \(f\in\mathcal{F}_{\mathcal{X}|\varnothing}^{k}\) and \(g\in\mathcal{F}_{y|\mathcal{S}}^{k}\), i.e., \(\mathbb{E}\left[f(X)\right]=0\) and \(\mathbb{E}\left[g(s,Y)|S=s\right]=0\) for all \(s\in\mathcal{S}\). We also restrict to features \(f,g\) that perform better than the Figure 10: A multihead network for extracting feature \(f\) shared among different tasks in \(\mathcal{S}=\{1,\dots,|\mathcal{S}|\}\). 
Each task \(s\in\mathcal{S}\) corresponds to a separate classification head with weight matrix \(G_{s}^{\mathrm{T}}\) and bias vector \(\underline{b}_{s}\) for generating the associated posterior \(\tilde{P}_{Y|X,S=s}^{(f,g,b)}\). trivial choice of \(f=0\), by assuming that \[\mathcal{L}_{S}^{(s)}(f,g^{(s)})\geq\mathcal{L}_{S}^{(s)}(0,g^{(s)})=\mathcal{L}_{ S}^{(s)}(0,0)=-H(Y|S=s),\quad\text{for all $s\in\mathcal{S}$}. \tag{69}\] Then, we have the following characterization, which extends Proposition 13 to the multitask setting. A proof is provided in Appendix C.13. **Theorem 25**: _Suppose \(X\) and \((S,Y)\) are \(\epsilon\)-dependent. For \(f\in\mathcal{F}_{\mathfrak{X}|\varnothing}^{k}\) and \(g\in\mathcal{F}_{\mathfrak{y}|\mathcal{S}}^{k}\) with (69),_ \[\mathcal{L}_{S}(f,g)=\mathcal{L}_{S}(0,0)+\frac{1}{2}\cdot\left(\left\|\mathrm{ i}_{X;Y|S}\right\|^{2}-\left\|\mathrm{i}_{X;Y|S}-f\otimes g\right\|^{2} \right)+o(\epsilon^{2}), \tag{70}\] _which is maximized if and only if \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y|S})+o(\epsilon)\)._ Therefore, the multitask classification network essentially learns features approximating the conditional dependence \(\mathrm{i}_{X;Y|S}\). Different from the nested H-score implementation, the multitask network implements the conditioning by directly applying a separate classification head for each task \(S=s\). As a consequence, this design requires \(|\mathcal{S}|\) many different heads, and is not applicable when the side information \(S\) is continuous or has complicated structures. ## 6 Multimodal Learning With Missing Modalities In this section, we demonstrate another multivariate learning application, where we need to conduct inference based on different data sources. In particular, we focus on the setting where the goal is to infer \(Y\) from two different data sources, denoted by \(X_{1}\) and \(X_{2}\). We refer to such problems as the multimodal learning[5] problems, and are particularly interested in the cases where we have missing modalities: either \(X_{1}\) or \(X_{2}\) can be missing during the inference. Our goal is to design a learning system to solve all the three problems: (i) inferring \(Y\) based on \(X_{1}\), (ii) inferring \(Y\) based on \(X_{2}\), and (iii) inferring \(Y\) based on \((X_{1},X_{2})\). Throughout our discussions in this section, we use \(P_{X_{1},X_{2},Y}\) to denote the joint distribution of \(X_{1}\in\mathcal{X}_{1},X_{2}\in\mathcal{X}_{2},Y\in\mathcal{Y}\). For convenience, we also denote \(X\triangleq(X_{1},X_{2})\in\mathcal{X}\triangleq\mathcal{X}_{1}\times \mathcal{X}_{2}\). We consider the feature geometry on \(\mathcal{F}_{\mathfrak{X}\times\mathfrak{y}}=\mathcal{F}_{\mathfrak{X}_{1} \times\mathfrak{X}_{2}\times\mathfrak{y}}\), with the metric distribution \(R_{X_{1},X_{2},Y}=P_{X_{1},X_{2}}P_{Y}\), or equivalently, \(R_{X,Y}=P_{X}P_{Y}\). ### Dependence Decomposition To begin, we decompose the joint dependence \(\mathrm{i}_{X_{1},X_{2};Y}\in\mathcal{F}_{\mathfrak{X}_{1}\times\mathfrak{X}_ {2}\times\mathfrak{y}}\) as \[\mathrm{i}_{X_{1},X_{2};Y}=\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})+\pi_{ \mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y}), \tag{71}\] where we have defined \(\pi_{\mathsf{B}}(\gamma)\triangleq\Pi\left(\gamma;\mathcal{F}_{\mathfrak{X}_{ 1}\times\mathfrak{y}}+\mathcal{F}_{\mathfrak{X}_{2}\times\mathfrak{y}}\right),\pi_{\mathsf{I}}(\gamma)\triangleq\gamma-\pi_{\mathsf{B}}(\gamma)\). 
We refer to \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) as the Bivariate dependence component, and refer to \(\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y})\) as the Interaction component. The bivariate dependence component \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) is uniquely determined by all pairwise dependencies among \(X_{1},X_{2},Y\). Formally, let \(\mathcal{Q}_{\mathsf{B}}\) denote the collection of distributions with the same pairwise marginal distributions as \(P_{X_{1},X_{2},Y}\), i.e., \[\mathcal{Q}_{\mathsf{B}}\triangleq\{Q_{X_{1},X_{2},Y}\in\mathcal{P}^{\mathfrak{ X}_{1}\times\mathfrak{X}_{2}\times\mathfrak{y}}\colon Q_{X_{1},X_{2}}=P_{X_{1},X_{2}},Q_{X_{1},Y}=P_{X_{1},Y},Q_{X_{2},Y}=P_{X_{2},Y}\}. \tag{72}\] Then we have the following result. A proof is provided in Appendix C.14. **Proposition 26**: _For all \(Q_{X_{1},X_{2},Y}\in\mathcal{Q}_{\mathsf{B}}\), we have \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)})=\pi_{\mathsf{B}}(\mathrm{i}_ {X_{1},X_{2};Y})\), where \(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)}\) denotes the CDK function associated with \(Q_{X_{1},X_{2},Y}\)._ We show the relation between different dependence components in Figure 11, where we have further decomposed the bivariate dependence component \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) as \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})=\pi_{\mathsf{B}_{1}}(\mathrm{i}_{ X_{1},X_{2};Y})+\pi_{\mathsf{B}_{2}}(\mathrm{i}_{X_{1},X_{2};Y})\) for some \(\pi_{\mathsf{B}_{i}}(\mathrm{i}_{X_{1},X_{2};Y})\in\mathcal{F}_{\mathcal{X}_{ i}\times\mathsf{y}},i=1,2\). For comparison, we have also demonstrated \(\pi_{\mathsf{M}_{1}}(\mathrm{i}_{X_{1},X_{2};Y})=\mathrm{i}_{X_{1};Y}\) and \(\pi_{\mathsf{C}_{1}}(\mathrm{i}_{X_{1},X_{2};Y})=\mathrm{i}_{X_{1},X_{2};Y}- \mathrm{i}_{X_{1};Y}\), obtained from the decomposition introduced in Section 5.1. Note that since the interaction component \(\pi_{\mathsf{I}}(\mathrm{i}_{X_{1};X_{2};Y})\) does not capture any bivariate dependence, we can also obtain \(\pi_{\mathsf{M}_{1}}(\mathrm{i}_{X_{1},X_{2};Y})\) directly from the bivariate component \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) via a projection: \(\pi_{\mathsf{M}_{1}}(\mathrm{i}_{X_{1},X_{2};Y})=\pi_{\mathsf{M}_{1}}(\pi_{ \mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}))\). ### Feature Learning from Complete Data We consider learning the features representations for the two dependence components. Here, we assume the data are complete \((X_{1},X_{2},Y)\) triplets with the empirical distribution \(P_{X_{1},X_{2},Y}\). We will discuss the learning with incomplete data later. Again, we apply the nesting technique to design the training objective. Note that since \(X=(X_{1},X_{2})\), with \(\mathcal{G}_{\mathcal{X}}=\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{ \mathcal{X}_{2}}\), we can express the two components as [cf. (40)] \[\pi_{\mathsf{B}}(\mathrm{i}_{X;Y}) =\Pi\left(\mathrm{i}_{X;Y};\mathcal{G}_{\mathcal{X}}\otimes \mathcal{F}_{\mathcal{y}}\right), \tag{73}\] \[\pi_{\mathsf{I}}(\mathrm{i}_{X;Y}) =\Pi\left(\mathrm{i}_{X;Y};\left(\mathcal{G}_{\mathcal{X}}\boxminus \mathcal{G}_{\mathcal{X}}\right)\otimes\mathcal{F}_{\mathcal{y}}\right). \tag{74}\] Therefore, we consider the nesting configuration \(\mathcal{C}_{\mathsf{BI}}\) and its refinement \(\mathcal{C}_{\mathsf{BI}}^{*}\), as [cf. (41)] \[\mathcal{C}_{\mathsf{BI}}\triangleq\left\{(\bar{k},k);\,(\mathcal{F}_{ \mathcal{X}_{1}}+\mathcal{F}_{\mathcal{X}_{2}},\mathcal{F}_{\mathcal{X}}); \,\mathcal{F}_{\mathcal{y}}\right\}. 
\tag{75}\] The corresponding nested H-scores are defined on \[\mathrm{dom}(\mathcal{C}_{\mathsf{BI}})=\mathrm{dom}(\mathcal{C}_{\mathsf{BI} }^{*})=\left\{\left(\begin{bmatrix}\bar{f}\\ \bar{f}\end{bmatrix},\begin{bmatrix}\bar{g}\\ \bar{g}\end{bmatrix}\right):\bar{f}\in\mathcal{F}_{\mathcal{X}_{1}}^{\bar{k} }+\mathcal{F}_{\mathcal{X}_{2}}^{\bar{k}},\bar{g}\in\mathcal{F}_{\mathcal{y}} ^{\bar{k}},f\in\mathcal{F}_{\mathcal{X}}^{k},g\in\mathcal{F}_{\mathcal{y}}^{k }\right\}. \tag{76}\] Figure 11: Decompose the joint dependence \(\mathrm{i}_{X_{1},X_{2};Y}\) into bivariate dependence component \(\pi_{\mathsf{B}}\) and interaction dependence component \(\pi_{\mathsf{I}}\). The plane denotes the sum of \(\mathcal{F}_{\mathcal{X}_{1}\times\mathsf{y}}\) and \(\mathcal{F}_{\mathcal{X}_{2}\times\mathsf{y}}\). Specifically, we can compute the nested H-score configured by \(\mathcal{C}_{\text{BI}}\) using a nested network structure as shown in Figure 12. Then we can obtain both dependence components by maximizing the corresponding nested H-scores, formalized as follows (cf. Theorem 17 and Theorem 18). **Corollary 27**: _Given \(\tilde{k}\geq\operatorname{rank}(\pi_{\text{\sf B}}(\mathrm{i}_{X_{1},X_{2};Y}))\), the nested H-score \(\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\text{BI}}\right)\) is maximized if and only if_ \[\bar{f}\otimes\bar{g}=\pi_{\text{\sf B}}(\mathrm{i}_{X_{1},X_{2} ;Y}), \tag{77a}\] \[f\otimes g=\zeta_{\leq k}\left(\pi_{\text{\sf I}}(\mathrm{i}_{X_ {1},X_{2};Y})\right). \tag{77b}\] _In addition, \(\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\text{BI}}^{*}\right)\) is maximized if and only if_ \[\bar{f}_{i}\otimes\bar{g}_{i}=\zeta_{i}\left(\pi_{\text{\sf B}}( \mathrm{i}_{X_{1},X_{2};Y})\right),\quad i\in[\bar{k}], \tag{78a}\] \[f_{i}\otimes g_{i}=\zeta_{i}\left(\pi_{\text{\sf I}}(\mathrm{i}_ {X_{1},X_{2};Y})\right),\quad i\in[k]. \tag{78b}\] ### Feature Assembling and Inference Models We then illustrate how to assemble the learned features for the inference tasks and deal with incomplete data. For convenience, we define the conditional expectation operators \(\tau_{i},i=1,2\), such that for \(f\in\mathcal{F}_{\mathcal{X}_{1}\times\mathcal{X}_{2}}^{k}\) with \(k\geq 1\), we have \[[\tau_{i}(f)](x_{i})\triangleq\mathbb{E}\left[f(X_{1},X_{2})|X_{i}=x_{i} \right],\qquad\text{for all }x_{i}\in\mathcal{X}_{i}. \tag{79}\] Note that we can also interpret \(\tau_{i}\) as to the projection onto \(\mathcal{F}_{\mathcal{X}_{i}}\), i.e., \(\tau_{i}(f)=\Pi\left(f;\mathcal{F}_{\mathcal{X}_{i}}\right)\). Then, we have the following result. A proof is provided in Appendix C.15. **Proposition 28**: _Suppose we have \(\bar{f}\otimes\bar{g}=\pi_{\text{\sf B}}(\mathrm{i}_{X_{1},X_{2};Y}),f\otimes g =\pi_{\text{\sf I}}(\mathrm{i}_{X_{1},X_{2};Y})\) for features \(\bar{f}=\bar{f}^{(1)}+\bar{f}^{(2)}\) with \(\bar{f}^{(i)}\in\mathcal{F}_{\mathcal{X}_{i}}^{\tilde{k}},i=1,2\), \(\bar{g}\in\mathcal{F}_{\mathcal{Y}}^{\tilde{k}}\), \(f\in\mathcal{F}_{\mathcal{X}}^{\tilde{k}}\), and \(g\in\mathcal{F}_{\mathcal{Y}}^{k}\). 
Then, we have_ \[P_{Y|X_{1},X_{2}}(y|x_{1},x_{2})=P_{Y}(y)\left[1+\bar{f}^{\text{\sc T}}(x_{1}, x_{2})\bar{g}(y)+f^{\text{\sc T}}(x_{1},x_{2})g(y)\right], \tag{80a}\] Figure 12: Nesting Technique for Learning Features from Multimodal Data \[P_{Y|X_{1}}(y|x_{1}) =P_{Y}(y)\left[1+\left(\bar{f}^{(1)}(x_{1})+[\tau_{1}(\bar{f}^{(2)}) ](x_{1})\right)^{\mathrm{T}}\bar{g}(y)\right], \tag{80b}\] \[P_{Y|X_{2}}(y|x_{2}) =P_{Y}(y)\left[1+\left(\bar{f}^{(2)}(x_{2})+[\tau_{2}(\bar{f}^{(1) })](x_{2})\right)^{\mathrm{T}}\bar{g}(y)\right]. \tag{80c}\] _In addition, for all \(\psi\in\mathcal{G}_{y}^{d}\), we have_ \[\mathbb{E}\left[\psi(Y)|X_{1}=x_{1},X_{2}=x_{2}\right]=\mathbb{E }\left[\psi(Y)\right]+\Lambda_{\psi,\bar{g}}\bar{f}(x_{1},x_{2})+\Lambda_{\psi,g}f(x_{1},x_{2}), \tag{81a}\] \[\mathbb{E}\left[\psi(Y)|X_{1}=x_{1}\right]=\mathbb{E}\left[\psi(Y )\right]+\Lambda_{\psi,\bar{g}}\left(\bar{f}^{(1)}(x_{1})+[\tau_{1}(\bar{f}^{ (2)})](x_{1})\right),\] (81b) \[\mathbb{E}\left[\psi(Y)|X_{2}=x_{2}\right]=\mathbb{E}\left[\psi(Y )\right]+\Lambda_{\psi,\bar{g}}\left(\bar{f}^{(2)}(x_{2})+[\tau_{2}(\bar{f}^{ (1)})](x_{2})\right). \tag{81c}\] From Proposition 28, we can obtain inference models for all three different types of input data, by simply assembling the learned features in different ways. The resulting inference models also reveal the different roles of two dependence components. For example, the features associated with the interaction dependence component, i.e., \(f\) and \(g\), are useful only in the case where we have both \(X_{1}\) and \(X_{2}\). In practice, we can use (80) and (81) for classification and estimation tasks, respectively. To apply (81), we can compute \(\Lambda_{\psi,\bar{g}}\) and \(\Lambda_{\psi,g}\) from the corresponding empirical averages over the training dataset, and learn features \(\tau_{1}(\bar{f}^{(2)})\) and \(\tau_{2}(\bar{f}^{(1)})\) from \((X_{1},X_{2})\) pairs. For example, we can use Proposition 11 to implement the conditional expectation operators \(\tau_{1}\) and \(\tau_{2}\) [cf. (24)]. ### Theoretical Properties and Interpretations We then introduce several theoretical properties of the dependence decomposition and induced feature representations, including their connections to the principle of maximum entropy (Jaynes, 1957a,b) and the optimal transformation of variables (Breiman and Friedman, 1985). #### 6.4.1 Dependence Decomposition We can relate the bivariate-interaction decomposition (71) to decomposition operations in both the probability distribution space and the data space. Decomposition in Distribution SpaceWe assume that for all \((x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}\), \[[\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)\geq-1,\quad[ \pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)\geq-1, \tag{82}\] and define the associated distributions \[P_{X_{1},X_{2},Y}^{\mathsf{B}}(x_{1},x_{2},y) \triangleq P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)(1+[\pi_{\mathsf{ B}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)), \tag{83a}\] \[P_{X_{1},X_{2},Y}^{\mathsf{I}}(x_{1},x_{2},y) \triangleq P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)(1+[\pi_{\mathsf{ I}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)). \tag{83b}\] Then, we have the following characterization, a proof of which is provided in Appendix C.16. 
**Proposition 29**: _Under assumption (82), we have \(P_{X_{1},X_{2},Y}^{\mathsf{B}},P_{X_{1},X_{2},Y}^{\mathsf{I}}\in\mathcal{P}^{ \mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}}\), with marginal distributions \(P_{X_{1},X_{2}}^{\mathsf{B}}=P_{X_{1},X_{2}}^{\mathsf{I}}=P_{X_{1},X_{2}}\) and \(P_{X_{i},Y}^{\mathsf{B}}=P_{X_{i},Y},P_{X_{i},Y}^{\mathsf{I}}=P_{X_{i}}P_{Y}\) for \(i=1,2\)._ From Proposition 29, \(P_{X_{1},X_{2},Y}^{\mathsf{I}}\) has marginal distributions \(P_{X_{i},Y}^{\mathsf{I}}=P_{X_{i}}P_{Y}\), \(i=1,2\), and does not capture \((X_{1};Y)\) or \((X_{2};Y)\) dependence. On the other hand, \(P_{X_{1},X_{2},Y}^{\mathsf{B}}\) has the same pairwise marginal distributions as \(P_{X_{1},X_{2},Y}\), i.e., \(P^{\mathsf{B}}_{X_{1},X_{2},Y}\in\mathcal{Q}_{\mathsf{B}}\) with \(\mathcal{Q}_{\mathsf{B}}\) as defined in (72). We can show that \(P^{\mathsf{B}}_{X_{1},X_{2},Y}\) also achieves the maximum entropy in \(\mathcal{Q}_{\mathsf{B}}\) in the local analysis regime. Formally, let \[P^{\mathsf{ent}}_{X_{1},X_{2},Y}\triangleq\operatorname*{arg\, max}_{Q_{X_{1},X_{2},Y}\in\mathcal{Q}_{\mathsf{B}}}H(Q_{X_{1},X_{2},Y}) \tag{84}\] denote the entropy maximizing distribution on \(\mathcal{Q}_{\mathsf{B}}\), where \(H(Q_{X_{1},X_{2},Y})\) denotes the entropy of \((X_{1},X_{2},Y)\sim Q_{X_{1},X_{2},Y}\). Then we have the following result. A proof is provided in Appendix C.17. **Proposition 30**: _Suppose \(X=(X_{1},X_{2})\) and \(Y\) are \(\epsilon\)-dependent, and let \(\mathfrak{i}^{(\mathsf{ent})}_{X_{1},X_{2};Y}\) denote the CDK function associated with \(P^{\mathsf{ent}}_{X_{1},X_{2},Y}\). Then, we have \(\left\|\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})-\mathfrak{i}^{(\mathsf{ ent})}_{X_{1},X_{2};Y}\right\|=o(\epsilon)\), or equivalently,_ \[P^{\mathsf{ent}}_{X_{1},X_{2},Y}(x_{1},x_{2},y)=P^{\mathsf{B}}_{X_{1},X_{2},Y} (x_{1},x_{2},y)+o(\epsilon),\quad\text{for all $x_{1},x_{2},y$.} \tag{85}\] Decomposition in Data SpaceFor each triplet \((x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}\), we consider the decomposition \[(x_{1},x_{2},y)\mapsto(x_{1},x_{2}),(x_{1},y),(x_{2},y). \tag{86}\] Suppose the dataset6\(\mathscr{D}\triangleq\left\{\left(x_{1}^{(i)},x_{2}^{(i)},y^{(i)}\right) \right\}_{i\in[n]}\) has the empirical distribution \(P_{X_{1},X_{2},Y}\), where each tuple \(\left(x_{1}^{(i)},x_{2}^{(i)},y^{(i)}\right)\in\mathcal{X}_{1}\times\mathcal{ X}_{2}\times\mathcal{Y}\). Then, by applying this decomposition on \(\mathscr{D}\) and grouping the decomposed pairs, we obtain three separate datasets Footnote 6: Though the dataset is modeled as a multiset without ordering, we introduce the index \(i\) for the convenience of presentation, which corresponds to a specific realization for traversing the dataset. \[\left\{\left(x_{1}^{(i)},x_{2}^{(i)}\right)\right\}_{i\in[n]}, \left\{\left(x_{1}^{(i)},y^{(i)}\right)\right\}_{i\in[n]},\left\{\left(x_{2}^ {(i)},y^{(i)}\right)\right\}_{i\in[n]}, \tag{87}\] which have empirical distributions \(P_{X_{1},X_{2}}\), \(P_{X_{1},Y}\), and \(P_{X_{2},Y}\), respectively. Therefore, we can interpret the decomposition (86) as extracting the bivariate dependence component from the joint dependence: the new pairwise datasets retain all pairwise dependence, but do not capture any interaction among \(X_{1},X_{2},Y\). Indeed, it is easy to see that, for any dataset with empirical distribution from \(\mathcal{Q}_{\mathsf{B}}\), the decomposition (86) leads to the same pairwise datasets. 
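The data-space decomposition (86) is straightforward to carry out in practice. The sketch below (toy data; helper names are ours) splits a dataset of triples into the three pairwise datasets of (87) and forms their empirical distributions; any joint distribution in \(\mathcal{Q}_{\mathsf{B}}\) of (72) would yield exactly the same pairwise datasets.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Toy dataset of (x1, x2, y) triples over small finite alphabets.
n = 1000
triples = [(int(rng.integers(3)), int(rng.integers(3)), int(rng.integers(2)))
           for _ in range(n)]

# Decomposition (86): split each triple into its three pairs,
# yielding the pairwise datasets of (87).
d_x1x2 = [(x1, x2) for (x1, x2, y) in triples]
d_x1y  = [(x1, y)  for (x1, x2, y) in triples]
d_x2y  = [(x2, y)  for (x1, x2, y) in triples]

# Empirical pairwise distributions P_{X1,X2}, P_{X1,Y}, P_{X2,Y}.
def empirical(pairs):
    counts = Counter(pairs)
    return {pair: c / len(pairs) for pair, c in counts.items()}

P_x1x2, P_x1y, P_x2y = map(empirical, (d_x1x2, d_x1y, d_x2y))

# The pairwise datasets retain all bivariate dependence but carry no
# information about the interaction component.
print(len(P_x1x2), len(P_x1y), len(P_x2y))
```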
Reversely, we can reconstruct \(P^{\mathsf{B}}_{X_{1},X_{2},Y}\) from the pairwise datasets (87). We will discuss the details of such reconstruction algorithm later. #### 6.4.2 Feature Representations Let \(\bar{K}\triangleq\operatorname{rank}(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{ 2};Y}))\) and \(K\triangleq\operatorname{rank}(\pi_{\mathsf{I}}(\mathfrak{i}_{X_{1},X_{2};Y}))\). Then, we can represent the dependence modes of the bivariate component \(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})\) and \(\pi_{\mathsf{I}}(\mathfrak{i}_{X_{1},X_{2};Y})\) in their standard forms, as \[\zeta_{i}(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})) =\bar{\sigma}_{i}\left(\bar{f}_{i}^{*}\otimes\bar{g}_{i}^{*} \right),\quad i\in[\bar{K}], \tag{88a}\] \[\zeta_{i}(\pi_{\mathsf{I}}(\mathfrak{i}_{X_{1},X_{2};Y})) =\sigma_{i}\left(f_{i}^{*}\otimes g_{i}^{*}\right),\quad i\in[K]. \tag{88b}\] By applying Proposition 7, we can interpret these features as solutions to corresponding constrained maximal correlation problems. For example, since \(\zeta_{i}(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y}))=\zeta_{i}(\mathfrak{i} _{X_{1},X_{2};Y}|\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{\mathcal{X}_{2}}, \mathcal{F}_{\mathcal{Y}})\), \((\bar{f}_{i}^{*},\bar{g}_{i}^{*})\) is the \(i\)-th constrained maximal correlation function pair of \(X=(X_{1},X_{2})\) and \(Y\) restricted to subspaces \(\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{\mathcal{X}_{2}}\) and \(\mathcal{F}_{\mathcal{Y}}\). The top mode \(\bar{\sigma}_{1},\bar{f}_{1}^{*},\bar{g}_{1}^{*}\) in (88a) also characterizes the optimal solution to a classical regression formulation. Specifically, given input variables \(X_{1},X_{2}\) and the output variable \(Y\), Breiman and Friedman (1985) formulated the regression problem \[\underset{\begin{subarray}{c}\phi^{(1)}\in\mathcal{F}_{X_{1}| \varnothing},\phi^{(2)}\in\mathcal{F}_{X_{2}|\varnothing}\\ \psi\in\mathcal{F}_{Y|\varnothing}:\,\|\psi\|=1\end{subarray}}{\text{minimize}} \mathbb{E}\left[\left(\psi(Y)-\phi^{(1)}(X_{1})-\phi^{(2)}(X_{2})\right)^{2}\right] \tag{89}\] where the minimization is over zero-mean functions \(\phi^{(1)},\phi^{(2)},\) and \(\psi,\) and referred to the solutions as the optimal transformations. Then, we have the following characterization. A proof is provided in Appendix C.18. **Proposition 31**: _The minimum value of optimization problem (89) is \(1-\bar{\sigma}_{1}^{2}\), which can be achieved by \(\phi^{(1)}+\phi^{(2)}=\bar{\sigma}_{1}\cdot\bar{f}_{1}^{*}\) and \(\psi=\bar{g}_{1}^{*}\)._ Therefore, the optimal transformations depend on, and thus characterize only, the top mode of bivariate dependence component \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\). ### Learning With Missing Modalities We conclude this section by briefly discussing feature learning based on incomplete samples. #### 6.5.1 Learning from Pairwise Samples A special case of the incomplete samples is the pairwise datasets (87) obtained from the decomposition (86). Specifically, suppose we obtain (87) from \(\mathscr{D}\triangleq\left\{\left(x_{1}^{(i)},x_{2}^{(i)},y^{(i)}\right) \right\}_{i\in[n]}\), and let \(P_{X_{1},X_{2},Y}\) denote the empirical distribution of \(\mathscr{D}\). Since the bivariate dependence is retained in the decomposition (86), we can learn \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) from the pairwise datasets (87). In particular, when we set \(k=0\) in \(\mathcal{C}_{\mathsf{BI}}\) [cf. 
(75)], we have \(\mathscr{H}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix};\mathcal{C}_{\mathsf{BI}}\right)=2\cdot\mathscr{H}(\bar{f}, \bar{g})\), and \[\mathscr{H}(\bar{f},\bar{g}) =\mathscr{H}\left(\bar{f}^{(1)}+\bar{f}^{(2)},\bar{g}\right)\] \[=\mathbb{E}\left[\left(\bar{f}^{(1)}(X_{1})+\bar{f}^{(2)}(X_{2}) \right)^{\mathrm{T}}\bar{g}(Y)\right]-\left(\mathbb{E}\left[\bar{f}^{(1)}(X_{ 1})+\bar{f}^{(2)}(X_{2})\right]\right)^{\mathrm{T}}\mathbb{E}\left[\bar{g}(Y) \right]-\frac{1}{2}\operatorname{tr}\left(\Lambda_{\bar{f}}\Lambda_{\bar{g}}\right)\] \[=\mathscr{H}(\bar{f}^{(1)},\bar{g})+\mathscr{H}(\bar{f}^{(2)}, \bar{g})-\operatorname{tr}\left(\Lambda_{\bar{f}^{(1)},\bar{f}^{(2)}}\cdot \Lambda_{\bar{g}}\right). \tag{90}\] Therefore, we can evaluate (90) from the pairwise datasets (87), since each \(\mathscr{H}(\bar{f}^{(i)},\bar{g})\) depends only on \(P_{X_{i},Y}\) for \(i=1,2\), and \(\Lambda_{\bar{f}^{(1)},\bar{f}^{(2)}}\) depends only on \(P_{X_{1},X_{2}}\). Then, from Corollary 27, we can obtain \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\) and the same set of features. #### 6.5.2 General Heterogeneous Training Data We then consider general forms of heterogeneous training data, as shown in Table 1. In particular, suppose there are \(n\triangleq n_{0}+n_{1}+n_{2}\) training samples, and we group them into separate datasets: \(\mathcal{D}_{0}\) contains \(n_{0}\) complete observations of \((X_{1},X_{2},Y)\), and, for \(i=1,2\), each \(\mathcal{D}_{i}\) has \(n_{i}\) sample pairs of \((X_{i},Y)\). Our goal is to learn features from the heterogeneous data and obtain similar inference models as we introduced in Section 6.3. In this case, we need to consider the empirical distributions for each dataset, and also take the sample size into account. To begin, we use a metric distribution of the product form \(R_{X_{1},X_{2}}R_{Y}\), where \(R_{X_{1},X_{2}}\) and \(R_{Y}\) correspond to some empirical distributions of training data. For example, we can set \(R_{X_{1},X_{2}}=\hat{P}^{(0)}_{X_{1},X_{2}}\) and \(R_{Y}=\eta_{0}\hat{P}^{(0)}_{Y}+\eta_{1}\hat{P}^{(1)}_{Y}+\eta_{2}\hat{P}^{(2)}_ {Y}\) with \(\eta_{i}\triangleq n_{i}/n\) for \(i=0,1,2\), which correspond to the empirical distributions of all \((X_{1},X_{2})\) sample pairs and all \(Y\) samples, respectively. Then, for any given \(Q_{X_{1},X_{2},Y}\in\mathbb{P}^{\mathfrak{X}_{1}\times\mathfrak{X}_{2}\times Y}\), we characterize the difference between \(Q_{X_{1},X_{2},Y}\) and the empirical distributions induced by the data, as the weighted sum \[L(Q_{X_{1},X_{2},Y})\triangleq\eta_{0}\cdot\|\tilde{\ell}_{ \hat{P}^{(0)}_{X_{1},X_{2},Y}}-\tilde{\ell}_{Q_{X_{1},X_{2},Y}}\|^{2}+\eta_{1} \cdot\|\tilde{\ell}_{\hat{P}^{(1)}_{X_{1},Y}}-\tilde{\ell}_{Q_{X_{1},Y}}\|^{2 }+\eta_{2}\cdot\|\tilde{\ell}_{\hat{P}^{(2)}_{X_{2},Y}}-\tilde{\ell}_{Q_{X_{2},Y}}\|^{2}. \tag{91}\] We also use \(P^{(\text{est})}_{X_{1},X_{2},Y}\) to denote the optimal distribution that minimizes (91). We can again apply the nesting technique to learn the feature representations associated with \(P^{(\text{est})}_{X_{1},X_{2},Y}\). 
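When all alphabets are finite and the metric distribution has the product form used above, the squared distances in (91) reduce to \(\chi^{2}\)-type discrepancies weighted by the metric, since under metric \(R\) the embedding \(\tilde{\ell}_{Q}\) is the centered density ratio \((Q-R)/R\). The sketch below (toy alphabets; helper names are ours, and we assume the pairwise terms are measured under the corresponding marginals of the metric) evaluates \(L(Q)\) for a candidate distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_pmf(shape):
    p = rng.random(shape)
    return p / p.sum()

# Empirical distributions of the three datasets in Table 1 (toy alphabets).
nx1, nx2, ny = 3, 4, 2
P0 = rand_pmf((nx1, nx2, ny))     # complete triples, \hat{P}^{(0)}_{X1,X2,Y}
P1 = rand_pmf((nx1, ny))          # \hat{P}^{(1)}_{X1,Y}
P2 = rand_pmf((nx2, ny))          # \hat{P}^{(2)}_{X2,Y}
n0, n1, n2 = 600, 300, 100
eta = np.array([n0, n1, n2]) / (n0 + n1 + n2)

# Metric distribution: R_{X1,X2} from the complete data, R_Y from all Y samples.
R_x1x2 = P0.sum(axis=2)
R_y = eta[0] * P0.sum(axis=(0, 1)) + eta[1] * P1.sum(axis=0) + eta[2] * P2.sum(axis=0)
R = R_x1x2[:, :, None] * R_y[None, None, :]

def chi2(P, Q, metric):
    """|| l_P - l_Q ||^2 = sum (P - Q)^2 / metric, with l_P = (P - metric)/metric."""
    return float(np.sum((P - Q) ** 2 / metric))

def L(Q):
    """Weighted discrepancy (91); pairwise terms use the marginals of the metric."""
    R1 = R_x1x2.sum(axis=1)[:, None] * R_y[None, :]   # R_{X1} R_Y
    R2 = R_x1x2.sum(axis=0)[:, None] * R_y[None, :]   # R_{X2} R_Y
    return (eta[0] * chi2(P0, Q, R)
            + eta[1] * chi2(P1, Q.sum(axis=1), R1)
            + eta[2] * chi2(P2, Q.sum(axis=0), R2))

# Evaluate the objective at a candidate distribution, e.g., the complete-data
# empirical distribution itself.
print(L(P0))
```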
To begin, we use \(\mathscr{H}(f,g;Q_{X,Y})\) to denote the H-score computed over the joint distribution \(Q_{X,Y}\), defined as \[\mathscr{H}(f,g;Q_{X,Y}) \triangleq\frac{1}{2}\left(\left\|\tilde{\ell}_{Q_{X,Y}}\right\|^ {2}-\left\|\tilde{\ell}_{Q_{X,Y}}-f\otimes g\right\|^{2}\right)\] \[=\mathbb{E}_{Q_{X,Y}}\left[f^{\mathrm{T}}(X)g(Y)\right]-\left( \mathbb{E}_{R_{X}}\left[f(X)\right]\right)^{\mathrm{T}}\mathbb{E}_{R_{Y}} \left[g(Y)\right]-\frac{1}{2}\cdot\operatorname{tr}\left(\Lambda_{f}\Lambda_{ g}\right),\] with \(\Lambda_{f}=\mathbb{E}_{R_{X}}\left[f(X)f^{\mathrm{T}}(X)\right]\) and \(\Lambda_{g}=\mathbb{E}_{R_{Y}}\left[g(Y)g^{\mathrm{T}}(Y)\right]\). Then, we define the H-score associated with the heterogeneous datasets shown in Table 1, as \[\mathscr{H}_{\mathrm{m}}(f,g)\triangleq\eta_{0}\cdot\mathscr{H} \big{(}f,g;\hat{P}^{(0)}_{X_{1},X_{2},Y}\big{)}+\eta_{1}\cdot\mathscr{H} \left(\tau_{1}(f),g;\hat{P}^{(1)}_{X_{1},Y}\right)+\eta_{2}\cdot\mathscr{H} \left(\tau_{2}(f),g;\hat{P}^{(2)}_{X_{2},Y}\right), \tag{92}\] where we have defined conditional expectation operators \(\tau_{i},i=1,2\) as in (79), with respect to the distribution \(R_{X_{1},X_{2}}\). By applying the same nesting configuration \(\mathcal{C}_{\text{BI}}\), we can obtain the corresponding nested H-score \[\mathscr{H}_{\mathrm{m}}\left(\begin{bmatrix}\bar{f}\\ \bar{f}\end{bmatrix},\begin{bmatrix}\bar{g}\\ \bar{g}\end{bmatrix};\mathcal{C}_{\text{BI}}\right)=\mathscr{H}_{\mathrm{m}}( \bar{f},\bar{g})+\mathscr{H}_{\mathrm{m}}\left(\begin{bmatrix}\bar{f}\\ \bar{f}\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right). \tag{93}\] Then, we have the following theorem, which extends Corollary 27 to the heterogeneous datasets. A proof is provide in Appendix C.19. **Theorem 32**: _Given \(\bar{k}\geq\operatorname{rank}\left(\pi_{\mathsf{B}}\Big{(}\tilde{\ell}_{P^{( \text{est})}_{X_{1},X_{2},Y}}\Big{)}\right)\), the nested H-score \(\mathscr{H}_{\mathrm{m}}\left(\begin{bmatrix}\bar{f}\\ \bar{f}\end{bmatrix},\begin{bmatrix}\bar{g}\\ \bar{g}\end{bmatrix};\mathcal{C}_{\text{BI}}\right)\) as defined in (93) is maximized if and only if_ \[\bar{f}\otimes\bar{g}=\pi_{\mathsf{B}}\Big{(}\tilde{\ell}_{P^{( \text{est})}_{X_{1},X_{2},Y}}\Big{)},\qquad f\otimes g=\zeta_{\leq k}\left(\pi_ {\mathsf{I}}\Big{(}\tilde{\ell}_{P^{(\text{est})}_{X_{1},X_{2},Y}}\Big{)} \right). \tag{94}\] \begin{table} \begin{tabular}{c c c} \hline \hline Datasets & Empirical Distribution & Remark \\ \hline \(\mathscr{D}_{0}=\left\{(x^{(i)}_{1},x^{(i)}_{2},y^{(i)})\right\}_{i=1}^{n_{0}}\) & \(\hat{P}^{(0)}_{X_{1},X_{2},Y}\) & Complete Observation \\ \(\mathscr{D}_{1}=\left\{(x^{(i)}_{1},y^{(i)})\right\}_{i=0}^{n_{0}+n_{1}}\) & \(\hat{P}^{(1)}_{X_{1},Y}\) & \(X_{2}\) missing \\ \(\mathscr{D}_{2}=\left\{(x^{(i)}_{2},y^{(i)})\right\}_{i=n_{0}+n_{1}+1}^{n_{0}+n_ {1}+n_{2}}\) & \(\hat{P}^{(2)}_{X_{2},Y}\) & \(X_{1}\) missing \\ \hline \hline \end{tabular} \end{table} Table 1: Heterogeneous Training Data With Missing Modalities We can also use the refined configuration \(\mathcal{C}^{\star}_{\mathsf{BI}}\) to obtain modal decomposition of the dependence components. The inference models can be built by assembling learned features, as we have discussed in Section 6.3. Furthermore, we can show that the estimation \(P^{(\mathrm{est})}_{X_{1},X_{2},Y}\) coincides with the maximum likelihood estimation (MLE) in a local analysis regime. 
Formally, let \(\mathbb{P}\left\{\mathscr{D}_{0},\mathscr{D}_{1},\mathscr{D}_{2};Q_{X_{1},X_{ 2},Y}\right\}\) denote the probability of observing datasets \(\mathscr{D}_{0},\mathscr{D}_{1},\mathscr{D}_{2}\), when all data samples are independently generated by \(Q_{X_{1},X_{2},Y}\). Then, we can write the MLE solution as \[P^{(\mathrm{ML})}_{X_{1},X_{2},Y}\triangleq\operatorname*{arg\,max}_{Q_{X_{1}, X_{2},Y}}\,\mathbb{P}\left\{\mathscr{D}_{0},\mathscr{D}_{1},\mathscr{D}_{2};Q_{X_{ 1},X_{2},Y}\right\}, \tag{95}\] for which we have the following characterization. A proof is provided in Appendix C.20. **Theorem 33**: _If \(L(R_{X_{1},X_{2},Y})=O(\epsilon^{2})\), then we have_ \[P^{(\mathrm{ML})}_{X_{1},X_{2},Y}(x_{1},x_{2},y)=P^{(\mathrm{est})}_{X_{1},X_{ 2},Y}(x_{1},x_{2},y)+o(\epsilon),\quad\text{for all $x_{1},x_{2},y$}. \tag{96}\] ## 7 Experimental Verification To verify the learning algorithms as well as established theoretical properties, we design a series of experiments with various types of data. We generate data with given probability distributions to allow the comparison with corresponding theoretical results. The source codes for all experiments are available at [https://github.com/XiangxiangXu/NFE](https://github.com/XiangxiangXu/NFE), and we defer the implementation details to Appendix D. ### Learning Maximal Correlation Functions We first consider learning dependence modes, i.e., maximal correlation functions from sample pairs of \((X,Y)\), by maximizing nested H-score (35). We verify the effectiveness by experiments on both discrete and continuous data, and also discuss one application in analyzing sequential data. #### 7.1.1 Discrete Data The simplest case for dependence learning is when \(X\) and \(Y\) are both discrete with small alphabet sizes \(|\mathscr{X}|\) and \(|\mathscr{Y}|\). In this case, we can design neural extractors with idea expressive powers. Suppose \(\mathscr{X}=\{1,\ldots,|\mathscr{X}|\}\), then we can express \(f\in\mathcal{F}^{k}_{\mathscr{X}}\) on \(\mathscr{X}\) by first mapping each \(i\in\mathscr{X}\) to \(i\)-th standard basis vector in \(\mathbb{R}^{|\mathscr{X}|}\), also referred to as "one-hot encoding" in practice, and then applying a linear function \(\mathbb{R}^{|\mathscr{X}|}\to\mathbb{R}^{k}\) to the mapped result, which we implement by a linear layer. Then, any \(f\in\mathcal{F}^{k}_{\mathscr{X}}\) can be expressed in this way by setting corresponding weights in the linear layer. Similarly, we can express \(g\in\mathcal{F}^{k}_{y}\) using another linear layer. In the experiment, we set \(|\mathscr{X}|=8\), \(|\mathscr{Y}|=6\), and randomly generate a \(P_{X,Y}\in\mathcal{P}^{\mathscr{X}\times\mathscr{Y}}\). We generate \(N=30\,000\) training samples from \(P_{X,Y}\), and learn \(k=3\) dimensional features \(f,g\) by maximizing \(\mathscr{H}(f,g;\{(1)^{k}\); \(\mathcal{F}_{\mathscr{X}}\); \(\mathcal{F}_{\mathscr{Y}}\})\). Then, we normalize each \(f_{i},g_{i}\) to obtain corresponding estimations of \(f^{*}_{i}\), \(g^{*}_{i}\), and \(\sigma_{i}\) by applying (37). We show the estimated features and singular values in Figure 13, which are consistent with the corresponding theoretical values computed from \(P_{X,Y}\). #### 7.1.2 Continuous Data We proceed to consider a continuous dataset with degenerate dependence modes, i.e., the singular values \(\sigma_{i}\)'s are not all distinct. 
In particular, we consider \(X,Y\) taking values from \(\mathcal{X}=\mathcal{Y}=[-1,1]\), where the joint probability density function \(p_{X,Y}\) takes a raised cosine form: \[p_{X,Y}(x,y)=\frac{1}{4}\cdot\left[1+\cos(\pi(x-y))\right],\quad(x,y)\in[-1,1]^{ 2}. \tag{97}\] Then, it can be verified that the corresponding marginal distributions of \(X,Y\) are uniform distributions \(p_{X}=p_{Y}=\mathrm{Unif}([-1,1])\). In addition, the resulting CDK function is \[\mathrm{i}_{X;Y}(x,y)=\frac{p_{X,Y}(x,y)-p_{X}(x)p_{Y}(y)}{p_{X}(x)p_{Y}(y)}= \cos(\pi(x-y)). \tag{98}\] Note that we have \(\cos(\pi(x-y))=\cos(\pi x+\theta_{0})\cdot\cos(\pi y+\theta_{0})+\sin(\pi x+ \theta_{0})\cdot\sin(\pi y+\theta_{0})\), for any \(\theta_{0}\in[-\pi,\pi)\). Therefore, we have \(\mathrm{rank}(\mathrm{i}_{X;Y})=2\), and the associated dependence modes are given by \(\sigma_{1}=\sigma_{2}=1/2\) and the maximal correlation functions \[f_{1}^{*}(x) =\sqrt{2}\cdot\cos(\pi x+\theta_{0}), f_{2}^{*}(x) =\sqrt{2}\cdot\sin(\pi x+\theta_{0}), \tag{99a}\] \[g_{1}^{*}(y) =\sqrt{2}\cdot\cos(\pi y+\theta_{0}), g_{2}^{*}(y) =\sqrt{2}\cdot\sin(\pi y+\theta_{0}), \tag{99b}\] Figure 13: Top three dependence modes learned from a discrete dataset, in consistent with theoretical results. for any \(\theta_{0}\in[-\pi,\pi)\). During this experiment, we first generate \(N=50\,000\) sample pairs of \(X,Y\) for training, with histogram showing in Figure 13(a). Then, we learn \(k=2\) dimensional features \(f_{1},f_{2}\) of \(\mathcal{X}\) and \(g_{1},g_{2}\) of \(\mathcal{Y}\) by maximizing the nested H-score (35), where \(f\) and \(g\) are parameterized neural feature extractors detailed in Appendix D.1.2. Figure 13(b) shows the learned functions after normalization (37). The learned results well match the theoretical results (99): (i) The learned \(f_{1}^{*}\) and \(f_{2}^{*}\) are sinusoids differ in phase by \(\pi/2\), and (ii) \(g_{i}^{*}\) coincides with \(f_{i}^{*}\), for each \(i=1,2\). It is also worth mentioning that due to the degeneracy \(\sigma_{1}=\sigma_{2}\), the initial phase \(\theta_{0}\) in learned sinusoids (99) can be different during each run of the training algorithm. Figure 14: Dependence modes learned from continuous data, in consistent with theoretical results. Figure 15: MMSE estimators \(\mathbb{E}\left[\psi_{i}(Y)|X=x\right]\) obtained from learning dependence modes, in comparison with theoretical results. Based on the learned dependence modes, we then demonstrate estimating functions of \(Y\) based on observed \(X=x\). Here, we consider the functions \(\psi_{1}(y)=y,\psi_{2}(y)=y^{2},\psi_{3}(y)=e^{y}\). From Proposition 11, we can compute the learned MMSE estimator \(\mathbb{E}\left[\psi_{i}(Y)|X=x\right]\) for each \(i\), by estimating \(\mathbb{E}\left[\psi_{i}(Y)\right]\) and \(\Lambda_{\psi_{i},g}\) from the training set and then applying (24). For comparison, we compute the theoretical values \[\mathbb{E}\left[\psi_{i}(Y)|X=x\right]=\int_{-1}^{1}p_{Y|X}(y|x)\psi_{i}(y)\,dy,\] with \(p_{Y|X}(y|x)=\frac{1}{2}\cdot\left[1+\cos(\pi(y-x))\right]\), which gives \[\mathbb{E}\left[Y|X=x\right]=\frac{1}{\pi}\cdot\sin(\pi x),\qquad \mathbb{E}\left[Y^{2}\big{|}X=x\right]=\frac{1}{3}-\frac{2}{\pi^{2}}\cos(\pi x), \tag{100a}\] \[\mathbb{E}\left[e^{Y}\big{|}X=x\right]=\frac{e^{2}-1}{2e(1+\pi^{2} )}\cdot(\pi\sin(\pi x)-\cos(\pi x)+\pi^{2}+1). \tag{100b}\] Figure 7.1.2 shows the learned estimators, which are consistent with the theoretically optimal estimators given by (100). 
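The assembly used here is simple enough to spell out: with learned \(f,g\) such that \(f\otimes g\approx\mathrm{i}_{X;Y}\), the estimator in (24) [cf. also the analogous expression (81a)] only needs the sample averages \(\mathbb{E}\left[\psi(Y)\right]\) and \(\Lambda_{\psi,g}=\mathbb{E}\left[\psi(Y)g^{\mathrm{T}}(Y)\right]\). Below is a minimal numpy sketch (helper names are ours); for concreteness it instantiates \(f,g\) with the theoretical modes (99) scaled by \(\sigma_{i}=1/2\), in place of the trained networks.

```python
import numpy as np

def fit_mmse_readout(psi, g, y_samples):
    """Estimate E[psi(Y)] and Lambda_{psi,g} = E[psi(Y) g(Y)^T] from Y samples."""
    Psi = np.stack([psi(y) for y in y_samples])            # (n, d)
    G = np.stack([g(y) for y in y_samples])                # (n, k)
    return Psi.mean(axis=0), Psi.T @ G / len(y_samples)    # E[psi(Y)], Lambda_{psi,g}

def mmse_estimate(x, f, psi_mean, Lam):
    """E[psi(Y) | X = x] ~= E[psi(Y)] + Lambda_{psi,g} f(x)   [cf. (24), (81a)]."""
    return psi_mean + Lam @ f(x)

# Instantiation with the theoretical modes (99), scaled so that f (x) g = i_{X;Y}.
f = lambda x: np.array([np.sqrt(2) * np.cos(np.pi * x) / 2,
                        np.sqrt(2) * np.sin(np.pi * x) / 2])   # sigma_i * f_i^*
g = lambda y: np.array([np.sqrt(2) * np.cos(np.pi * y),
                        np.sqrt(2) * np.sin(np.pi * y)])       # g_i^*
psi = lambda y: np.array([y, y ** 2, np.exp(y)])               # psi_1, psi_2, psi_3

rng = np.random.default_rng(3)
y_samples = rng.uniform(-1.0, 1.0, size=20000)                 # P_Y = Unif([-1, 1])

psi_mean, Lam = fit_mmse_readout(psi, g, y_samples)
print(mmse_estimate(0.3, f, psi_mean, Lam))   # compare with (100) evaluated at x = 0.3
```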
#### 7.1.3 Sequential Data We proceed with an example of learning dependence modes among sequence pairs. For simplicity, we consider binary sequences \(\underline{X}\) and \(\underline{Y}\), of lengths \(l\) and \(m\), respectively. Suppose we have the Markov relation \(\underline{X}-U-V-\underline{Y}\) for some unobserved binary factors \(U,V\in\mathcal{U}=\mathcal{V}=\{0,1\}\). In addition, we assume7\(\underline{X}=(X_{1},\ldots,X_{l})^{\mathrm{T}},\underline{Y}=(Y_{1},\ldots,Y_{m})^{ \mathrm{T}}\) satisfy Footnote 7: For convenience, we adopt the vector notation to represent sequences. \[\underline{X}|U=i\sim\mathrm{BMS}(l,q_{i}),\quad\underline{Y}|V=i\sim\mathrm{ BMS}(m,q_{i}),\quad\text{for }i=0,1, \tag{101}\] where \(\mathrm{BMS}(l,q)\) denotes the distribution of a binary first-order Markov sequence of length \(l\) and state flipping probability \(q\). The corresponding state transition diagram is shown in Figure 15(a). Therefore, if \(\underline{Z}\sim\mathrm{BMS}(l,q)\), then \(Z_{1}\sim\mathrm{Unif}(\{0,1\})\) and \((Z_{1},\ldots,Z_{l})\) forms a first order Markov chain over binary states \(\{0,1\}\), and flipping probability \(\mathbb{P}\left\{Z_{i+1}\neq Z_{i}|Z_{i}=z\right\}=q\) for both \(z=0,1\). Formally, \(\underline{Z}\sim\mathrm{BMS}(l,q)\) if and only if \[P_{\underline{Z}}(\underline{z})=\frac{1}{2}\cdot\prod_{i=1}^{l-1}\left[(1-q) ^{\delta_{z_{i}z_{i+1}}}\cdot q^{1-\delta_{z_{i}z_{i+1}}}\right]\quad\text{for all }(z_{1},\ldots,z_{l})\in\{0,1\}^{l}.\] As a consequence, the resulting alphabets are \(\mathcal{X}=\{0,1\}^{l},\mathcal{Y}=\{0,1\}^{m}\), with sizes \(|\mathcal{X}|=2^{l},|\mathcal{Y}|=2^{m}\). In our experiment, we set \(l=40,m=30,q_{0}=0.1,q_{1}=0.9\), and use the following joint distribution \(P_{U,V}\): \[\begin{array}{c c c}\hline\text{Prob.}&U=0&U=1\\ \hline V=0&0.1&0.2\\ V=1&0.4&0.3\\ \hline\end{array}\] We generate \(N=50\,000\) training sample pairs of \(\underline{X},\underline{Y}\), with instances shown in Figure 15(b). We also generate \(N^{\prime}=10\,000\) sample pairs in the same manner, as the testing dataset. Then, we learn \(k=1\) dimensional features \(f\) and \(g\) by maximizing \(\mathscr{H}(f,g)\) over the training set. We plot the extracted features in Figure 17. In particular, each point represents an \((f(\underline{x}),g(\underline{y}))\) pair evaluated on an instance from testing set, with corresponding values of binary factors \((U,V)\) shown for comparison. For ease of demonstration, here we plot only \(1,000\) sample pairs randomly chosen from the testing set. As we can see in the figure, the learned features are clustered according to the underlying factors. This essentially reconstructs the hidden factors \(U,V\). For example, one can apply a standard clustering algorithm on the features, e.g., \(k\)-means (Hastie et al., 2009), then count the proportion of each cluster, to obtain an estimation of \(P_{U,V}\) up to permutation of symbols. For a closer inspection, we can compare the learned features with the theoretical results, formalized as follows. A proof is provided in Appendix C.21. **Proposition 34**: _Suppose \(\underline{X},\underline{Y}\) satisfy the Markov relation \(\underline{X}-U-V-\underline{Y}\) with \(U,V\in\{0,1\}\) and the conditional distributions (101). 
Then, we have \(\mathrm{rank}(\mathrm{i}_{X;Y})\leq 1\), and the corresponding maximal correlation functions \(f_{1}^{*},g_{1}^{*}\) satisfy_ \[f_{1}^{*}(\underline{x}) =c\cdot\left[\tanh\left(2w\cdot\varphi(\underline{x})+b_{U}\right) -\tanh(b_{U})\right], \tag{102a}\] \[g_{1}^{*}(\underline{y}) =c^{\prime}\cdot\left[\tanh\left(2w\cdot\varphi(\underline{y})+b _{V}\right)-\tanh(b_{V})\right], \tag{102b}\] Figure 16: Sequential Data: Model and Generated \(\underline{X},\underline{Y}\) Samples Pairs Figure 17: Features Learned from Sequential Data \(\underline{X}\) and \(\underline{Y}\) _for some \(c,c^{\prime}\in\mathbb{R}\), where \(w\triangleq\frac{1}{2}\log\frac{q_{1}}{q_{0}},b_{U}\triangleq\frac{1}{2}\log\frac{ P_{U}(1)}{P_{U}(0)},b_{V}\triangleq\frac{1}{2}\log\frac{P_{V}(1)}{P_{V}(0)}\), and where we have defined \(\varphi\colon\{0,1\}^{*}\to\mathbb{R}\), such that for each \(\underline{z}=(z_{1},\ldots,z_{l})^{\mathrm{T}}\in\{0,1\}^{l}\), we have \(\varphi(\underline{z})\triangleq\frac{l-1}{2}-\sum_{i=1}^{l-1}\delta_{z_{i}z_{ i+1}}\)._ Then, we compute the correlation coefficients between \(f(\underline{X})\) and \(f_{1}^{*}(\underline{X})\), and between \(g(\underline{Y})\), \(g_{1}^{*}(\underline{Y})\), respectively, using sample pairs in the testing set. The absolute values of both correlation coefficients are greater than \(0.99\), demonstrating the effectiveness of the learning algorithm. ### Learning With Orthogonality Constraints We verify the feature learning with orthogonality constraints presented in Section 4.4 on the same dataset used for Section 7.1.2. Here, we consider the settings \(\bar{k}=k=1\), i.e., we learn one-dimensional feature \(f(x)\) uncorrelated to given one dimensional feature \(\phi\in\mathcal{F}_{\mathcal{X}}\). Note that without the orthogonality constraint [cf. (99)], the optimal feature will be sinusoids with any initial phase, i.e., \(f_{1}^{*}(x)=\sqrt{2}\cdot\cos(\pi x+\theta_{0})\) for any \(\theta_{0}\in[-\pi,\pi)\). Here, we consider the following two choices of \(\phi\), \((x\mapsto x)\) and \((x\mapsto x^{2})\), which are even and odd functions, respectively. Since the underlying \(p_{X}\) is uniform on \([-1,1]\), we can verify the optimal features under the two constraints are \(f_{1}^{*}(x)=\sqrt{2}\cos(\pi x)\) for \(\phi(x)=x\), and \(f_{1}^{*}(x)=\sqrt{2}\sin(\pi x)\) for \(\phi(x)=x^{2}\), respectively. By maximizing the nested H-score restricted to \(\bar{f}=\phi\) [cf. (49)], we can learn the optimal feature \(f_{1}^{*}\), as shown in Figure 18. The learned features are in consistent with the theoretical ones, validating the effectiveness of the learning algorithm. ### Learning With Side Information We design an experiment to verify the connection between our learning algorithm and the multitask classification DNN, as demonstrated in Theorem 25. In particular, we consider the discrete \(X,Y,S\) with \(|\mathcal{X}|=8,|\mathcal{S}|=|\mathcal{Y}|=3\), and randomly choose a joint distribution \(P_{X,S,Y}\) on \(\mathcal{X}\times\mathcal{S}\times\mathcal{Y}\). Then, we generate \(N=50\,000\) training samples of \((X,S,Y)\) triples. In our implementation, we set \(\bar{k}=|\mathcal{S}|-1=2,k=1\) and maximize the nested H-score configured by \(\mathcal{C}_{\mathsf{MC}}\) [cf. (56)] on the training set. Figure 18: Learning feature uncorrelated to given \(\phi\) under different settings of \(\phi\). The learned results are compared with theoretical results. 
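The objective maximized in this experiment can be written down directly: using the expanded H-score expression [cf. (92)] with batch estimates of the expectations, and following the same two-term pattern as (49) and (93), the nested H-score configured by \(\mathcal{C}_{\mathsf{MC}}\) sums an H-score between \(\bar{f}(X)\) and \(\bar{g}(S)\) and an H-score between the stacked features \([\bar{f};f](X)\) and \([\bar{g};g](S,Y)\). A minimal PyTorch sketch follows (function and argument names are ours; the feature extractors are assumed to be arbitrary `torch.nn.Module`s with matching output dimensions).

```python
import torch

def h_score(fx: torch.Tensor, gy: torch.Tensor) -> torch.Tensor:
    """Batch estimate of H(f, g) = E[f^T g] - E[f]^T E[g] - 1/2 tr(Lambda_f Lambda_g)
    [cf. the expanded form in (92)], with Lambda_f = E[f f^T], Lambda_g = E[g g^T].
    fx, gy: (batch, k) feature matrices for the two variables."""
    n = fx.shape[0]
    cross = (fx * gy).sum(dim=1).mean()
    mean_term = fx.mean(dim=0) @ gy.mean(dim=0)
    lam_f = fx.t() @ fx / n
    lam_g = gy.t() @ gy / n
    return cross - mean_term - 0.5 * torch.trace(lam_f @ lam_g)

def nested_h_score_mc(x, s, sy, f_bar, f, g_bar, g):
    """Nested H-score configured by C_MC [cf. (56) and Figure 9]: the first level
    pairs f_bar(X) with g_bar(S); the second pairs [f_bar; f](X) with [g_bar; g](S, Y)."""
    fb, fk = f_bar(x), f(x)        # (batch, k_bar), (batch, k)
    gb, gk = g_bar(s), g(sy)       # (batch, k_bar), (batch, k)
    return h_score(fb, gb) + h_score(torch.cat([fb, fk], dim=1),
                                     torch.cat([gb, gk], dim=1))

# Training sketch: maximize the nested H-score, i.e., minimize its negative.
# x, s, sy are minibatches of X, S, and the concatenated (S, Y) inputs.
# loss = -nested_h_score_mc(x, s, sy, f_bar, f, g_bar, g); loss.backward()
```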
For comparison, we also train the multihead network shown in Figure 10 by maximizing the log-likelihood function (67) to learn the corresponding feature \(f\) and weight matrices \(G_{s}\) for all \(s\in\mathcal{S}\). Then, we convert the weights to \(g\in\mathcal{F}_{\mathcal{S}\times\mathcal{Y}}\) via the correspondence [cf. (66)] \(g(s,y)=G_{s}(1,y)\). The features learned by our algorithm (labeled as "SideInfo") and by the multihead neural network are shown in Figure 19, where the results are consistent.

Figure 19: Experimental verification of the connection between learning with side information and training a multihead neural network.

### 7.4 Multimodal Learning With Missing Modalities

To verify the multimodal learning algorithms presented in Section 6, we consider multimodal classification problems in two different settings. Suppose \(X_{1},X_{2}\) are multimodal data variables, and \(Y\in\mathcal{Y}=\{-1,1\}\) denotes the binary label to predict. In the first setting, we consider the training set with complete \((X_{1},X_{2},Y)\) samples. In the second setting, only the pairwise observations of \((X_{1},X_{2})\), \((X_{1},Y)\), and \((X_{2},Y)\) are available, presented in three different datasets. In both settings, we set \(\mathcal{X}_{1}=\mathcal{X}_{2}=[-1,1]\) with

\[P_{X_{1},X_{2}}(x_{1},x_{2})=\frac{1}{4}\cdot[1+\cos(2\pi(x_{1}-x_{2}))]\,. \tag{103}\]

We consider predicting \(Y\) based on the learned features, where some modality in \(X_{1}\), \(X_{2}\) might be missing during the prediction.

#### 7.4.1 Learning from Complete Observations

We consider the \(X_{1},X_{2},Y\) dependence specified by (103) and the conditional distribution

\[P_{Y|X_{1},X_{2}}(y|x_{1},x_{2})=\frac{1}{2}+\frac{y}{4}\cdot[\cos(\pi x_{1})+\cos(\pi x_{2})+\cos(\pi(x_{1}+x_{2}))] \tag{104}\]

for \(x_{1},x_{2}\in[-1,1]\) and \(y=\pm 1\). It can be verified that \(P_{Y}\) satisfies \(P_{Y}(1)=P_{Y}(-1)=\frac{1}{2}\). The corresponding CDK function and dependence components [cf. (71)] are given by

\[\mathrm{i}_{X_{1},X_{2};Y}(x_{1},x_{2},y) =\frac{y}{2}\cdot[\cos(\pi x_{1})+\cos(\pi x_{2})+\cos(\pi(x_{1}+x_{2}))] \tag{105a}\]
\[[\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y) =\frac{y}{2}\cdot[\cos(\pi x_{1})+\cos(\pi x_{2})]\,, \tag{105b}\]
\[[\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y) =\frac{y}{2}\cdot\cos(\pi(x_{1}+x_{2})). \tag{105c}\]

Therefore, we have \(\mathrm{rank}(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}))=\mathrm{rank}(\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y}))=1\), and the functions obtained from modal decompositions [cf. (88)] are \(\bar{g}_{1}^{*}(y)=g_{1}^{*}(y)=y\) and

\[\bar{f}_{1}^{*}(x_{1},x_{2})=\cos(\pi x_{1})+\cos(\pi x_{2}),\qquad f_{1}^{*}(x_{1},x_{2})=\sqrt{2}\cdot\cos(\pi(x_{1}+x_{2})). \tag{106}\]

In the experiment, we first generate \(N=50\,000\) triples of \((X_{1},X_{2},Y)\) for training; a minimal sketch of sampling \((X_{1},X_{2})\) from (103) is given below. The histogram of the \((X_{1},X_{2})\) pairs is shown in Figure 20(a). Then, we set \(\bar{k}=k=1\), and learn the features \(\bar{f},f,\bar{g},g\) by maximizing the nested H-score configured by \(\mathcal{C}_{\mathsf{BI}}\) [cf. (75)]. We then normalize \(\bar{f}\), \(f\) to obtain the estimated \(\bar{f}_{1}^{*}\) and \(f_{1}^{*}\), and compute the posterior probability \(P_{Y|X_{1},X_{2}}(1|x_{1},x_{2})\) based on (80a). The results of the learned features \(\bar{f}_{1}^{*}\), \(f_{1}^{*}\) and the posterior \(P_{Y|X_{1},X_{2}}(1|x_{1},x_{2})\) are shown in Figure 20(b), which are consistent with the theoretical results.
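As a concrete illustration of the data-generation step, the sketch below draws \((X_{1},X_{2})\) pairs from (103) by rejection sampling. The uniform proposal and the seed are our own choices; the labels \(Y\) would then be drawn from the conditional distribution of the corresponding setting, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x1_x2(n):
    """Draw n pairs (X1, X2) from the density (103) on [-1, 1]^2.

    Rejection sampling against the uniform proposal on the square: the density
    is bounded by 1/2, so accepting with probability density / 0.5 is valid.
    """
    out = np.empty((0, 2))
    while out.shape[0] < n:
        cand = rng.uniform(-1.0, 1.0, size=(2 * n, 2))
        dens = 0.25 * (1.0 + np.cos(2.0 * np.pi * (cand[:, 0] - cand[:, 1])))
        accept = rng.uniform(size=2 * n) < dens / 0.5
        out = np.vstack([out, cand[accept]])
    return out[:n]

# Training inputs; a 2-D histogram of these pairs corresponds to Figure 20(a).
x12_train = sample_x1_x2(50_000)
```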
We then consider the prediction problem with a missing modality, i.e., predicting the label \(Y\) based on the unimodal data \(X_{1}\) or \(X_{2}\). In particular, based on the learned \(\bar{f}=\bar{f}^{(1)}+\bar{f}^{(2)}\), we train two separate networks to operate as \(\tau_{1}\) and \(\tau_{2}\), then apply (80b) and (80c) to estimate the posteriors \(P_{Y|X_{1}}\) and \(P_{Y|X_{2}}\). Then, for each \(i=1,2\), the MAP prediction of \(Y\) based on the observed \(X_{i}=x_{i}\) can be obtained by comparing \(P_{Y|X_{i}}(1|x_{i})\) with the threshold \(1/2\), via

\[\operatorname*{arg\,max}_{y\in\mathcal{Y}}P_{Y|X_{i}}(y|x_{i})=\begin{cases}1&\text{if }P_{Y|X_{i}}(1|x_{i})>1/2,\\ -1&\text{if }P_{Y|X_{i}}(1|x_{i})\leq 1/2.\end{cases}\]

We plot the estimated results in Figure 21, in comparison with the threshold \(1/2\) and the theoretical values

\[P_{Y|X_{i}}(y|x)=\frac{1}{2}+\frac{y}{4}\cdot\cos(\pi x),\quad\text{for }i=1,2. \tag{107}\]

From the figure, the estimated posteriors \(P_{Y|X_{1}},P_{Y|X_{2}}\) have trends consistent with the ground-truth posteriors, and the induced \(Y\) predictions are well aligned.

Figure 20: Features and posterior probability learned from multimodal data \(X_{1},X_{2},Y\), in comparison with theoretical results.

Figure 21: Label prediction from unimodal data using learned features

#### 7.4.2 Learning from Pairwise Observations

We proceed to consider multimodal learning with only pairwise observations. Specifically, we consider the joint distribution of \((X_{1},X_{2},Y)\) specified by (103) and

\[P_{Y|X_{1},X_{2}}(y|x_{1},x_{2})=\frac{1}{2}+\frac{y}{4}\cdot\left[\cos(\pi x_{1})+\cos(\pi x_{2})\right] \tag{108}\]

for \(x_{1},x_{2}\in[-1,1]\) and \(y=\pm 1\). It can be verified that \(P_{Y}(1)=P_{Y}(-1)=\frac{1}{2}\), and the associated CDK function satisfies

\[\mathrm{i}_{X_{1},X_{2};Y}(x_{1},x_{2},y)=\frac{y}{2}\cdot\left[\cos(\pi x_{1})+\cos(\pi x_{2})\right] \tag{109}\]

and \(\mathrm{i}_{X_{1},X_{2};Y}=\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\). Therefore, the interaction dependence component \(\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y})=0\), and the joint dependence can be learned from all pairwise samples, as discussed in Section 6.5.1.

We then construct an experiment to verify learning the joint dependence from all pairwise observations. Specifically, we generate \(N=50\,000\) triples of \((X_{1},X_{2},Y)\) from (103) and (108). Then, we adopt the decomposition (86) on each triple, to obtain three pairwise datasets with samples of \((X_{1},X_{2})\), \((X_{1},Y)\), \((X_{2},Y)\), where each dataset has \(N\) sample pairs. We use these three pairwise datasets for training to learn one-dimensional \(\bar{f}\in\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{\mathcal{X}_{2}}\) and \(\bar{g}\in\mathcal{F}_{\mathcal{Y}}\) that maximize \(\mathscr{H}(\bar{f},\bar{g})\). Here, we compute \(\mathscr{H}(\bar{f},\bar{g})\) based on minibatches from the three pairwise datasets, according to (90); a small empirical sketch of the H-score estimate is given at the end of this subsection. Based on the learned \(\bar{f},\bar{g}\), we then compute the normalized \(\bar{f}_{1}^{*}\) and the posterior distribution \(P_{Y|X_{1},X_{2}}(1|x_{1},x_{2})\), as shown in Figure 22, where the learned results match the theoretical values. Similar to the previous setting, we consider the unimodal prediction problem, and show the estimated results in Figure 23. It is worth noting that from (105b) and (109), the joint distributions \(P_{X_{1},X_{2},Y}\) in the two settings contain the same bivariate dependence component. Therefore, the theoretical results for \(\bar{f}_{1}^{*}\) and \(P_{Y|X_{1}},P_{Y|X_{2}}\) are the same.

Figure 22: Features and posterior probability learned from pairwise datasets of \((X_{1},X_{2}),(X_{1},Y)\), and \((X_{2},Y)\), in comparison with theoretical results.

Figure 23: Label prediction from unimodal data, with features learned from pairwise datasets.
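For reference, the following sketch shows one way to estimate an H-score from a minibatch of paired feature evaluations. It is only a sketch: it expands \(\mathscr{H}(f,g)=\frac{1}{2}\left(\|\tilde{\ell}\|^{2}-\|\tilde{\ell}-f\otimes g\|^{2}\right)\) (the form used in Appendix C, in the proof of Theorem 32) with sample averages, under the assumption that the metric distribution is the product of the empirical marginals, and it does not reproduce the exact combination over the three pairwise datasets prescribed by (90).

```python
import numpy as np

def empirical_h_score(F, G):
    """Minibatch estimate of the H-score for paired features F, G of shape (n, k).

    Expanding H(f, g) = <l, f (x) g> - 0.5 * ||f (x) g||^2 with sample averages and
    the product of empirical marginals as the metric distribution (an assumption)
    gives the Soft-HGR-style expression
        E[f(X)^T g(Y)] - E[f(X)]^T E[g(Y)] - 0.5 * tr(E[f f^T] E[g g^T]).
    """
    n = F.shape[0]
    cross = np.mean(np.sum(F * G, axis=1))        # E[f(X)^T g(Y)] over paired samples
    mean_term = F.mean(axis=0) @ G.mean(axis=0)   # E[f(X)]^T E[g(Y)]
    second_f = F.T @ F / n                        # second-moment matrix of f(X)
    second_g = G.T @ G / n                        # second-moment matrix of g(Y)
    return cross - mean_term - 0.5 * np.trace(second_f @ second_g)
```

In the pairwise-observation setting, this estimate would be formed on minibatches from each of the \((X_{1},X_{2})\), \((X_{1},Y)\), and \((X_{2},Y)\) datasets and combined as prescribed by (90).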
## 8 Related Works

**HGR Maximal Correlation and Learning Algorithms.** The Hirschfeld-Gebelein-Renyi (HGR) maximal correlation (Hirschfeld, 1935; Gebelein, 1941; Renyi, 1959) provides an important connection between statistical dependence and functional space. The same concept has been studied with various formulations, and often in different terminologies, such as correspondence analysis (Greenacre, 2017), functional canonical variates (Buja, 1990), and principal inertial components (du Pin Calmon et al., 2017). See, e.g., (Huang et al., 2019, Section II), for a detailed discussion. The first practical algorithm for learning such functions is the alternating conditional expectations (ACE) algorithm (Breiman and Friedman, 1985; Buja, 1990), which learns the top dependence mode and is mostly used for processing low-dimensional data. To learn maximal correlation functions by neural feature extractors, several different training objectives have been proposed (Wang et al., 2019; Hsu et al., 2021). Specifically, the H-score was derived from the low-rank approximation, referred to as the Soft-HGR objective (Wang et al., 2019). It was also introduced as the local approximation of the log-loss function (Xu et al., 2022). The application of the H-score to classification tasks was exploited in Xu and Huang (2020). In an independent line of work (HaoChen et al., 2021), a special form of the H-score was proposed for self-supervised learning tasks, referred to as the _spectral contrastive loss_ therein.

**Informative Features and Local Analysis.** Huang et al. (2019) provided an in-depth characterization of informative features by applying information-theoretic tools, with a focus on bivariate learning problems and the local analysis regime. In particular, it was shown in Huang et al. (2019) that for bivariate settings, there are a series of statistical and learning formulations that lead to the same optimal features, characterized as the HGR maximal correlation functions. For multivariate problems, the features introduced in (88a) were also studied by Xu and Huang (2021) in characterizing distributed hypothesis testing problems.

**Decomposition of Probability Distributions.** In addition to the modal decomposition, the decomposition of probability distributions has also been studied, particularly in the context of information geometry (Amari and Nagaoka, 2000). For example, Amari (2001) established a decomposition in distribution space and also investigated the maximum entropy formulation (84); cf. (Amari, 2001, Theorem 7).

## 9 Conclusions and Discussions

We have presented a framework for designing learning systems with neural feature extractors, which allows us to learn informative features and assemble them to build different inference models. Based on the feature geometry, we convert learning problems to corresponding geometric operations in feature spaces. We then introduce the nesting technique for implementing such geometric operations in feature spaces, which provides a systematic design of both feature learning and feature assembling modules. We demonstrate the applications by considering conditioned inference and multimodal learning problems.
The established framework provides abundant opportunities for further exploration. For ease of presentation, our discussion has focused on basic and typically simplified settings, which can be extended to more general analyses and applications. For example, we can extend the analysis to decompose the dependence of random processes and develop the corresponding feature learning algorithms. Specifically, we can obtain refined dependence components by iteratively applying the decomposition (40). Our characterizations of learning factors, such as the expressive power of feature extractors (cf. Section 3.3) and the sample size of training data (cf. Section 6.5.2), can also be extended to more in-depth discussions. In addition, we can apply the feature geometry to analyze existing algorithm designs, e.g., evaluating kernel choices in kernel methods (Xu and Zheng, 2023).

## Acknowledgments and Disclosure of Funding

This work was supported in part by the National Science Foundation (NSF) under Award CNS-2002908 and the Office of Naval Research (ONR) under grant N00014-19-1-2621.

## Appendix A Feature Space: Canonical Basis and Vector Notations

For completeness, we briefly summarize the information vector and corresponding matrix conventions introduced in Huang et al. (2019) and related works. To begin, we assume the random variable \(Z\) takes finitely many possible values, i.e., \(|\mathcal{Z}|<\infty\); then the resulting feature space \(\mathcal{F}_{\mathcal{Z}}\) is a finite-dimensional vector space. It is sometimes more convenient to represent features using vector and matrix notations. Specifically, each \(f\in\mathcal{F}_{\mathcal{Z}}\) can be equivalently expressed as one vector in \(\mathbb{R}^{|\mathcal{Z}|}\) as follows. Suppose \(R_{Z}\) is the metric distribution; then we can construct an orthonormal basis \(\left\{\mathsf{b}_{Z}^{(z)}\colon z\in\mathcal{Z}\right\}\) of \(\mathcal{F}_{\mathcal{Z}}\), where

\[\mathsf{b}_{Z}^{(z)}(z^{\prime})\triangleq\frac{\delta_{zz^{\prime}}}{\sqrt{R_{Z}(z)}}\quad\text{for all }z,z^{\prime}\in\mathcal{Z}. \tag{110}\]

We refer to this basis as the canonical basis of \(\mathcal{F}_{\mathcal{Z}}\). For all \(f\in\mathcal{F}_{\mathcal{Z}}\), we can represent \(f\) as a linear combination of these basis functions, i.e.,

\[f=\sum_{z^{\prime}\in\mathcal{Z}}\xi(z^{\prime})\cdot\mathsf{b}_{Z}^{(z^{\prime})},\]

where the coefficient \(\xi(z)\) for each \(z\in\mathcal{Z}\) is given by

\[\xi(z)=\left\langle f,\mathsf{b}_{Z}^{(z)}\right\rangle=f(z)\sqrt{R_{Z}(z)}. \tag{111}\]

In particular, when \(f\) is the density ratio \(\tilde{\ell}_{P_{Z}}\) for some distribution \(P_{Z}\) on \(\mathcal{Z}\), the corresponding coefficient \(\xi(z)\) for each \(z\in\mathcal{Z}\) is

\[\xi(z)=\left\langle\tilde{\ell}_{P_{Z}},\mathsf{b}_{Z}^{(z)}\right\rangle=\frac{P_{Z}(z)-R_{Z}(z)}{\sqrt{R_{Z}(z)}}. \tag{112}\]

This establishes a one-to-one correspondence between \(\tilde{\ell}_{P_{Z}}\) (or \(P_{Z}\)) and the vector \(\underline{\xi}\triangleq[\xi(z),z\in\mathcal{Z}]^{\mathrm{T}}\in\mathbb{R}^{|\mathcal{Z}|}\), which is referred to as the information vector associated with \(\tilde{\ell}_{P_{Z}}\) (or \(P_{Z}\)).
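As a small numerical illustration of (110)-(112), the following sketch computes the information vector of a density ratio and verifies the basis expansion; the distributions used are arbitrary example values, not quantities from the paper.

```python
import numpy as np

# Example metric distribution R_Z and another distribution P_Z on a 3-symbol alphabet.
R_Z = np.array([0.2, 0.3, 0.5])
P_Z = np.array([0.25, 0.25, 0.5])

# Density-ratio feature: ell(z) = (P_Z(z) - R_Z(z)) / R_Z(z).
ell = (P_Z - R_Z) / R_Z

# Canonical-basis coefficients (111)-(112): xi(z) = f(z) * sqrt(R_Z(z)) = (P_Z(z) - R_Z(z)) / sqrt(R_Z(z)).
xi = ell * np.sqrt(R_Z)

# Reconstruct f from the expansion f = sum_z xi(z) * b_Z^{(z)}, where
# b_Z^{(z)}(z') = delta_{z z'} / sqrt(R_Z(z)) as in (110).
basis = np.diag(1.0 / np.sqrt(R_Z))   # column z is the basis function b_Z^{(z)}
f_reconstructed = basis @ xi
assert np.allclose(f_reconstructed, ell)
```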
Similarly, for \(X\) and \(Y\) with \(|\mathcal{X}|<\infty,|\mathcal{Y}|<\infty\), we can represent the CDK function \(\mathrm{i}_{X;Y}\in\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\) as an \(|\mathcal{X}|\times|\mathcal{Y}|\) matrix \(\tilde{B}_{X;Y}\):

\[\tilde{B}_{X;Y}(x;y)\triangleq\frac{P_{X,Y}(x,y)-P_{X}(x)P_{Y}(y)}{\sqrt{P_{X}(x)}\sqrt{P_{Y}(y)}}, \tag{113}\]

which is referred to as the canonical dependence matrix (CDM) of \(X\) and \(Y\). With the metric distribution \(P_{X}P_{Y}\) on \(\mathcal{F}_{\mathcal{X}\times\mathcal{Y}}\), each \(\tilde{B}_{X;Y}(x;y)\) is the coefficient associated with the basis function \(\mathsf{b}_{X,Y}^{(x,y)}\) [cf. (110)]. In addition, we have \(\mathrm{rank}(\tilde{B}_{X;Y})=\mathrm{rank}(\mathrm{i}_{X;Y})\). Suppose \(\sigma_{i}(f_{i}^{*}\otimes g_{i}^{*})=\zeta_{i}(\mathrm{i}_{X;Y})\) for each \(1\leq i\leq\mathrm{rank}(\mathrm{i}_{X;Y})\); then \(\sigma_{i}\) is the \(i\)-th singular value of \(\tilde{B}_{X;Y}\), and the corresponding \(i\)-th left and right singular vector pair are given by \(\underline{\xi}_{i}^{X}\in\mathbb{R}^{|\mathcal{X}|},\underline{\xi}_{i}^{Y}\in\mathbb{R}^{|\mathcal{Y}|}\), with [cf. (111)]

\[\xi_{i}^{X}(x)=\sqrt{P_{X}(x)}\cdot f_{i}^{*}(x),\quad\xi_{i}^{Y}(y)=\sqrt{P_{Y}(y)}\cdot g_{i}^{*}(y). \tag{114}\]

Therefore, for small-scale discrete data, we can use the connection (114) to obtain the modal decomposition by solving the SVD of the corresponding CDM.

## Appendix B Characterization of Common Optimal Solutions

We consider optimization problems defined on a common domain \(\mathcal{D}\). Specifically, given \(k\) functions \(h_{i}\colon\mathcal{D}\to\mathbb{R}\), \(i=1,\ldots,k\), let us consider the optimization problems

\[\operatorname*{maximize}_{u\in\mathcal{D}}\ h_{i}(u),\quad i=1,\ldots,k. \tag{115}\]

For each \(i=1,\ldots,k\), we denote the optimal solution set and the optimal value of the \(i\)-th problem by

\[\mathcal{D}^{*}_{i}\triangleq\operatorname*{arg\,max}_{u\in\mathcal{D}}h_{i}(u),\quad t^{*}_{i}\triangleq\max_{u\in\mathcal{D}}h_{i}(u), \tag{116}\]

respectively, and suppose \(\mathcal{D}^{*}_{i}\neq\varnothing,i=1,\ldots,k\). Then, the set \(\mathcal{D}^{*}\triangleq\cap_{i=1}^{k}\mathcal{D}^{*}_{i}\) represents the collection of common optimal solutions for all \(k\) optimization problems (115). When such common solutions exist, i.e., \(\mathcal{D}^{*}\) is nonempty, we can obtain \(\mathcal{D}^{*}\) by a single optimization program, using an objective that aggregates the original \(k\) objectives. We formalize the result as follows.

**Proposition 35**: _If \(\mathcal{D}^{*}\neq\varnothing\), we have \(\mathcal{D}^{*}=\operatorname*{arg\,max}_{u\in\mathcal{D}}\,\Gamma(h_{1}(u),h_{2}(u),\ldots,h_{k}(u))\) for every \(\Gamma\colon\mathbb{R}^{k}\to\mathbb{R}\) that is strictly increasing in each argument._

**Proof** Let \(\mathcal{D}^{**}\triangleq\operatorname*{arg\,max}_{u\in\mathcal{D}}\,h(u)\) with \(h(u)\triangleq\Gamma(h_{1}(u),h_{2}(u),\ldots,h_{k}(u))\). Then the proposition is equivalent to \(\mathcal{D}^{*}=\mathcal{D}^{**}\). We then establish \(\mathcal{D}^{*}\subset\mathcal{D}^{**}\) and \(\mathcal{D}^{**}\subset\mathcal{D}^{*}\), respectively.

To prove \(\mathcal{D}^{*}\subset\mathcal{D}^{**}\), take any \(u^{*}\in\mathcal{D}^{*}\). Then, for all \(u\in\mathcal{D}\), we have \(h_{i}(u)\leq h_{i}(u^{*})\), \(i=1,\ldots,k\), which implies that

\[h(u)=\Gamma(h_{1}(u),\ldots,h_{k}(u))\leq\Gamma(h_{1}(u^{*}),\ldots,h_{k}(u^{*}))=h(u^{*}).\]

Therefore, we have \(u^{*}\in\mathcal{D}^{**}\).
Since \(u^{*}\) can be arbitrarily chosen from \(\mathcal{D}^{*}\), we have \(\mathcal{D}^{*}\subset\mathcal{D}^{**}\).

We then establish \(\mathcal{D}^{**}\subset\mathcal{D}^{*}\), which is equivalent to

\[(\mathcal{D}\setminus\mathcal{D}^{*})\subset(\mathcal{D}\setminus\mathcal{D}^{**}). \tag{117}\]

Note that (117) is trivially true if \(\mathcal{D}\setminus\mathcal{D}^{*}=\varnothing\). Otherwise, take any \(u^{\prime}\in\mathcal{D}\setminus\mathcal{D}^{*}\). Then, for any \(u^{*}\in\mathcal{D}^{*}\), we have \(h_{i}(u^{\prime})\leq h_{i}(u^{*})\) for all \(i\in[k]\), and the strict inequality holds for at least one \(i\in[k]\). This implies that

\[h(u^{\prime})=\Gamma(h_{1}(u^{\prime}),\ldots,h_{k}(u^{\prime}))<\Gamma(h_{1}(u^{*}),\ldots,h_{k}(u^{*}))=h(u^{*}).\]

Hence, \(u^{\prime}\notin\mathcal{D}^{**}\), and thus \(u^{\prime}\in(\mathcal{D}\setminus\mathcal{D}^{**})\), which establishes (117).

To apply Proposition 35, the first step is to test the existence of common optimal solutions. A naive test is to solve all optimization problems (115) and then check the definition, which can be difficult in practice. Instead, we can consider a related multilevel optimization problem as follows. Let \(\mathcal{D}_{0}\triangleq\mathcal{D}\), and for each \(i=1,\ldots,k\), we define \(\mathcal{D}_{i}\) and \(t_{i}\in\mathbb{R}\) as

\[\mathcal{D}_{i}\triangleq\operatorname*{arg\,max}_{u\in\mathcal{D}_{i-1}}\,h_{i}(u),\quad t_{i}\triangleq\max_{u\in\mathcal{D}_{i-1}}\,h_{i}(u). \tag{118}\]

Note that for each \(i\), the level-\(i\) optimization problem gives the elements in \(\mathcal{D}_{i-1}\) that maximize \(h_{i}\), collected in \(\mathcal{D}_{i}\), and \(t_{i}\) denotes the corresponding optimal value. Therefore, \(\mathcal{D}_{k}\) can be obtained by sequentially solving the \(k\) optimization problems defined in (118). Then, the following result provides an approach to test the existence of common optimal solutions.

**Proposition 36**: _The following three statements are equivalent:_

1. _The optimization problems (115) have common optimal solutions, i.e.,_ \(\mathcal{D}^{*}\neq\varnothing\)_;_
2. \(t_{i}=t_{i}^{*}\) _for all_ \(i=1,\ldots,k\)_;_
3. \(\mathcal{D}_{i-1}\cap\mathcal{D}_{i}^{*}\neq\varnothing\) _for all_ \(i=1,\ldots,k\)_._

_In addition, if one of these statements holds, then we have \(\mathcal{D}_{k}=\mathcal{D}^{*}\)._

**Proof** We establish the equivalence of the statements 1 to 3, by showing that "1" \(\implies\) "2", "2" \(\implies\) "3", and "3" \(\implies\) "1".

**"1" \(\implies\) "2"** Suppose "1" holds, and take any \(u^{*}\in\mathcal{D}^{*}=\cap_{i=1}^{k}\mathcal{D}_{i}^{*}\). We then establish "2" by induction. First, note that \(u^{*}\in\mathcal{D}_{0}\). For the induction step, we can show that for each \(i=1,\ldots,k\), if \(u^{*}\in\mathcal{D}_{i-1}\), then \(u^{*}\in\mathcal{D}_{i}\) and \(t_{i}^{*}=t_{i}\). Indeed, we have

\[t_{i}^{*}\geq t_{i}=\max_{u\in\mathcal{D}_{i-1}}h_{i}(u)\geq h_{i}(u^{*})=t_{i}^{*},\]

where the first inequality follows from the fact that \(\mathcal{D}=\mathcal{D}_{0}\supset\mathcal{D}_{1}\supset\cdots\supset\mathcal{D}_{k}\), where the second inequality follows from the inductive assumption \(u^{*}\in\mathcal{D}_{i-1}\), and where the last equality follows from that \(u^{*}\in\mathcal{D}^{*}\subset\mathcal{D}_{i}^{*}\).

**"2" \(\implies\) "3"** For each \(i=2,\ldots,k\) (the case \(i=1\) is immediate since \(\mathcal{D}_{0}\cap\mathcal{D}_{1}^{*}=\mathcal{D}_{1}^{*}\neq\varnothing\)), \(t_{i}=t_{i}^{*}\) implies that there exists some \(u_{i}\in\mathcal{D}_{i-1}\), such that \(h_{i}(u_{i})=t_{i}=t_{i}^{*}=\max_{u\in\mathcal{D}}h_{i}(u)\), and thus \(u_{i}\in\mathcal{D}_{i}^{*}\).
Therefore, \(u_{i}\in\mathcal{D}_{i-1}\cap\mathcal{D}_{i}^{*}\), which establishes "3". **"3" \(\implies\) "1"** For each \(i=2,\ldots,k\), from \(\mathcal{D}_{i-1}\cap\mathcal{D}_{i}^{*}\neq\varnothing\) and the definitions (116) and (118), we have \(\mathcal{D}_{i}=\mathcal{D}_{i-1}\cap\mathcal{D}_{i}^{*}\). It can also be verified that \(\mathcal{D}_{i}=\mathcal{D}_{i-1}\cap\mathcal{D}_{i}^{*}\) holds for \(i=1\). Therefore, we obtain \[\mathcal{D}_{k}=\mathcal{D}_{k-1}\cap\mathcal{D}_{k}^{*}=\mathcal{D}_{k-2} \cap\mathcal{D}_{k-1}^{*}\cap\mathcal{D}_{k}^{*}=\cdots=\mathcal{D}_{0}\cap \left(\bigcap_{i=1}^{k}\mathcal{D}_{i}^{*}\right)=\mathcal{D}\cap\mathcal{D}^ {*}=\mathcal{D}^{*}. \tag{119}\] This implies that \(\mathcal{D}^{*}=\mathcal{D}_{k-1}\cap\mathcal{D}_{k}^{*}\neq\varnothing\). Finally, from (119) we know that statement 3 implies \(\mathcal{D}_{k}=\mathcal{D}^{*}\). Since all three statements are equivalent, each statement implies \(\mathcal{D}_{k}=\mathcal{D}^{*}\). ## Appendix C Proofs ### Proof of Proposition 5 Let \(\hat{\gamma}\triangleq\Pi\left(\gamma;\mathcal{G}_{\mathfrak{X}}\otimes \mathcal{G}_{\mathfrak{Y}}\right)\). We first consider the second equality of (15), i.e., \[\zeta_{\leq k}(\hat{\gamma})=\operatorname*{arg\,min}_{\begin{subarray}{c} \gamma^{\prime}:\,\gamma^{\prime}=f\otimes g_{\mathfrak{Y}}\\ f\in\mathcal{G}_{\mathfrak{X}}^{k},\,g\in\mathcal{G}_{\mathfrak{Y}}^{k} \end{subarray}}\|\gamma-\gamma^{\prime}\| \tag{120}\] For each \(k\leq\operatorname{rank}(\hat{\gamma})\), consider \(\gamma^{\prime}=f\otimes g\) with \(f\in\mathcal{G}_{\mathfrak{X}}^{k},g\in\mathcal{G}_{\mathfrak{Y}}^{k}\). Then, we have \[\|\gamma-\gamma^{\prime}\|^{2}=\|\gamma-\hat{\gamma}+\hat{\gamma}-\gamma^{ \prime}\|^{2}=\|\gamma-\hat{\gamma}\|^{2}+\|\hat{\gamma}-\gamma^{\prime}\|^{2 }\geq\|\gamma-\hat{\gamma}\|^{2}+\|r_{k}(\hat{\gamma})\|^{2}, \tag{121}\] where to obtain the second equality we have used the orthogonality principle with fact that \((\gamma-\hat{\gamma})\perp\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{ \mathfrak{Y}}\) and \((\hat{\gamma}-\gamma^{\prime})\in\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{ \mathfrak{Y}}\). Note that the inequality in (121) holds with equality if and only if \(\gamma^{\prime}=\zeta_{\leq k}(\hat{\gamma})\), which establishes (120). We then establish \(\zeta_{k}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{Y}})=\zeta_{k}( \hat{\gamma})\) by induction. To begin, set \(k=1\) in (120), and the right hand side becomes \(\zeta(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{Y}})\), which implies \(\zeta(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{Y}})=\zeta( \hat{\gamma})\), i.e., \(\zeta_{1}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{Y}})= \zeta_{1}(\hat{\gamma})\). As the inductive hypothesis, suppose we have \[\zeta_{i}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{Y}})=\zeta_ {i}(\hat{\gamma}),\quad i=1,\ldots,m. 
\tag{122}\] From (14), we have \[\zeta_{m+1}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{ \mathfrak{y}}) =\zeta\left(\gamma-\sum_{i=1}^{m}\zeta_{i}(\gamma|\mathcal{G}_{ \mathfrak{X}},\mathcal{G}_{\mathfrak{y}})\Bigg{|}\mathcal{G}_{\mathfrak{X}}, \mathcal{G}_{\mathfrak{y}}\right)\] \[=\zeta\left(\gamma-\sum_{i=1}^{m}\zeta_{i}(\hat{\gamma})\Bigg{|} \mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{y}}\right) \tag{123}\] \[=\zeta\left(\Pi\left(\gamma-\sum_{i=1}^{m}\zeta_{i}(\hat{\gamma}) ;\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{\mathfrak{y}}\right)\right)\] (124) \[=\zeta\left(\hat{\gamma}-\sum_{i=1}^{m}\Pi\left(\zeta_{i}(\hat{ \gamma});\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{\mathfrak{y}}\right)\right)\] (125) \[=\zeta\left(\hat{\gamma}-\sum_{i=1}^{m}\zeta_{i}(\hat{\gamma}) \right)=\zeta_{m+1}(\hat{\gamma}), \tag{126}\] where (123)-(124) follow from the inductive assumption (122), where (125) follows from the linearity of projection operator. To obtain the first equality of (125), we have again applied the assumption (122): for \(i=1,\ldots,m\), \(\zeta_{i}(\hat{\gamma})=\zeta_{i}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G }_{\mathfrak{y}})\in\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{\mathfrak{y}}\). Finally, \(\zeta_{\leq k}(\gamma|\mathcal{G}_{\mathfrak{X}},\mathcal{G}_{\mathfrak{y}})= \zeta_{\leq k}(\hat{\gamma})\) can be readily obtained by definition. ### Proof of Proposition 7 It is easy to verify that \(\operatorname{cov}(\hat{f}_{i}^{*},\hat{g}_{i}^{*})=\left\langle\mathrm{i}_{ X;Y},\hat{f}_{i}^{*}\otimes\hat{g}_{i}^{*}\right\rangle=\|\,\zeta_{i}( \mathfrak{i}^{\prime}_{X;Y})\|=\hat{\sigma}_{i}^{*}\), where \(\mathfrak{i}^{\prime}_{X;Y}=\Pi\left(\mathrm{i}_{X;Y};\mathcal{G}_{\mathfrak{ X}}\otimes\mathcal{G}_{\mathfrak{y}}\right)\). From Fact 4, we have \((\hat{f}_{i}^{*},\hat{g}_{i}^{*})=\operatorname*{arg\,max}_{f_{i},g_{i}}\left \langle\mathfrak{i}^{\prime}_{X;Y},f_{i}\otimes g_{i}\right\rangle\) where the maximization is taken over all \(f_{i}\in\mathcal{F}_{\mathfrak{X}}\) and \(g_{i}\in\mathcal{F}_{\mathfrak{y}}\) that satisfy (19). Therefore, for each \(i\) and \(f_{i}\in\mathcal{G}_{\mathfrak{X}},g_{i}\in\mathcal{G}_{\mathfrak{y}}\) that satisfy (19), we have \[\operatorname{cov}(f_{i},g_{i})=\left\langle\mathrm{i}_{X;Y},f_{i}\otimes g_{ i}\right\rangle=\left\langle\mathfrak{i}^{\prime}_{X;Y},f_{i}\otimes g_{i} \right\rangle\leq\hat{\sigma}_{i}^{*}=\operatorname{cov}(\hat{f}_{i}^{*},\hat {g}_{i}^{*}),\] where the second equality follows from that \(f_{i}\otimes g_{i}\in\mathcal{G}_{\mathfrak{X}}\otimes\mathcal{G}_{\mathfrak{y}}\). Hence, we obtain \((\hat{f}_{i}^{*},\hat{g}_{i}^{*})=\operatorname*{arg\,max}_{f_{i},g_{i}} \operatorname{cov}(f_{i},g_{i})\) where the maximization is taken over \(f_{i}\in\mathcal{G}_{\mathfrak{X}},g_{i}\in\mathcal{G}_{\mathfrak{y}}\) with the constraint (19). ### Proof of Proposition 11 The result of \(\|\mathrm{i}_{X;Y}\|\) directly follows from Property 1. In addition, \[f^{\mathrm{T}}(x)g(Y)=\mathrm{i}_{X;Y}(x,y)=\frac{P_{X,Y}(x,y)-P_{X}(x)P_{Y}(y) }{P_{X}(x)P_{Y}(y)},\quad\text{for all }(x,y)\in\mathfrak{X}\times\mathfrak{Y},\] which implies \(P_{Y|X}(y|x)=P_{Y}(y)\left(1+f^{\mathrm{T}}(x)g(y)\right)\), i.e., (23). 
Therefore, for all \(\psi\in\mathcal{F}_{\mathcal{Y}}^{d}\), we have

\[\mathbb{E}\left[\psi(Y)|X=x\right]=\sum_{y\in\mathcal{Y}}P_{Y|X}(y|x)\psi(y)=\sum_{y\in\mathcal{Y}}P_{Y}(y)(1+f^{\mathrm{T}}(x)g(y))\psi(y)\]
\[=\sum_{y\in\mathcal{Y}}P_{Y}(y)\psi(y)+\sum_{y\in\mathcal{Y}}P_{Y}(y)(\psi(y)g^{\mathrm{T}}(y))f^{\mathrm{T}}(x)\]
\[=\mathbb{E}\left[\psi(Y)\right]+\Lambda_{\psi,g}\cdot f(x),\]

which gives (24).

### Proof of Proposition 12

We have

\[\mathbb{E}\left[\psi(Y)|X=x\right] =\sum_{y\in\mathcal{Y}}P_{Y|X}(y|x)\psi(y)\]
\[=\sum_{y\in\mathcal{Y}}P_{Y}(y)(1+\mathrm{i}_{X;Y}(x,y))\psi(y)\]
\[=\sum_{y\in\mathcal{Y}}P_{Y}(y)[1+\zeta_{\leq k}(\mathrm{i}_{X;Y})](x,y)\psi(y)+\sum_{y\in\mathcal{Y}}P_{Y}(y)\sum_{i>k}\sigma_{i}^{*}f_{i}^{*}(x)g_{i}^{*}(y)\psi(y)\]
\[=\sum_{y\in\mathcal{Y}}P_{Y}(y)(1+f^{\mathrm{T}}(x)g(y))\psi(y)\]
\[=\sum_{y\in\mathcal{Y}}P_{Y}(y)\psi(y)+\sum_{y\in\mathcal{Y}}P_{Y}(y)(\psi(y)g^{\mathrm{T}}(y))f^{\mathrm{T}}(x)\]
\[=\mathbb{E}\left[\psi(Y)\right]+\Lambda_{\psi,g}\cdot f(x),\]

where the fourth equality follows from the fact that

\[\sum_{y\in\mathcal{Y}}P_{Y}(y)\sum_{i>k}\sigma_{i}^{*}f_{i}^{*}(x)g_{i}^{*}(y)\psi(y)=\sum_{i>k}\sigma_{i}^{*}f_{i}^{*}(x)\mathbb{E}\left[g_{i}^{*}(Y)\psi(Y)\right]=0,\]

due to \(\psi_{j}\in\{1,g_{1}^{*},\ldots,g_{k}^{*}\}\) for each \(j\in[d]\).

### Proof of Property 3

The property is equivalent to \(\mathcal{L}(f,g)=\mathcal{L}(f+\underline{u},g+\underline{v})\) for all \(\underline{u},\underline{v}\in\mathbb{R}^{k}\). Hence, it suffices to prove that

\[\mathcal{L}(f+\underline{u},g+\underline{v})\leq\mathcal{L}(f,g),\qquad\text{for all }\underline{u},\underline{v}\in\mathbb{R}^{k}. \tag{127}\]

Note that for all \(b\in\mathcal{F}_{\mathcal{Y}}\), since

\[\left(f(x)+\underline{u}\right)^{\mathrm{T}}(g(y)+\underline{v})+b(y)=f(x)\cdot g(y)+\underline{v}^{\mathrm{T}}f(x)+\underline{u}^{\mathrm{T}}g(y)+b(y)+\underline{u}^{\mathrm{T}}\underline{v}, \tag{128}\]

we have [cf. (28)] \(\tilde{P}_{Y|X}^{(f+\underline{u},g+\underline{v},b)}=\tilde{P}_{Y|X}^{(f,g,b+\underline{u}^{\mathrm{T}}g)}\), which implies that \(\mathcal{L}(f+\underline{u},g+\underline{v},b)=\mathcal{L}(f,g,b+\underline{u}^{\mathrm{T}}g)\). Therefore, we obtain

\[\mathcal{L}(f+\underline{u},g+\underline{v})=\max_{b\in\mathcal{F}_{\mathcal{Y}}}\mathcal{L}(f+\underline{u},g+\underline{v},b)\leq\max_{b\in\mathcal{F}_{\mathcal{Y}}}\mathcal{L}(f,g,b+\underline{u}^{\mathrm{T}}g)\leq\mathcal{L}(f,g). \tag{129}\]

### Proof of Proposition 13

We first prove a useful lemma.

**Lemma 37**: _Suppose \(p>0,q>0\), \(p+q=1\), then we have_

\[\log\left(p\cdot\exp\left(\frac{u}{p}\right)+q\cdot\exp\left(-\frac{u}{q}\right)\right)\geq\min\left\{u^{2},u_{0}|u|\right\},\quad\text{for all }u\in\mathbb{R},\]

_where \(u_{0}\triangleq\frac{\ln 2}{3}\cdot\min\{p,q\}\)._

**Proof** [Proof of Lemma 37] Let \(p_{\min}\triangleq\min\{p,q\}\), and define \(h(u)\triangleq\log\left(p\cdot\exp\left(\frac{u}{p}\right)+q\cdot\exp\left(-\frac{u}{q}\right)\right)\).
Then, we have \[h^{\prime}(u)=\frac{\exp\left(p^{-1}u\right)-\exp\left(-q^{-1}u\right)}{p \cdot\exp\left(p^{-1}u\right)+q\cdot\exp\left(-q^{-1}u\right)}\] \[h^{\prime\prime}(u) =\left[p\cdot\exp\left(p^{-1}u\right)+q\cdot\exp\left(-q^{-1}u \right)\right]^{-2}\] \[\quad\cdot\left[\left(\frac{1}{p}\exp\left(\frac{u}{p}\right)+ \frac{1}{q}\exp\left(-\frac{u}{q}\right)\right)\cdot\left(p\exp\left(\frac{u}{ p}\right)+q\exp\left(-\frac{u}{q}\right)\right)-\left[\exp\left(\frac{u}{p} \right)-\exp\left(-\frac{u}{q}\right)\right]^{2}\right]\] \[\geq\frac{\left[\exp\left(p^{-1}u\right)+\exp\left(-q^{-1}u \right)\right]^{2}-\left[\exp\left(p^{-1}u\right)-\exp\left(-q^{-1}u\right) \right]^{2}}{\left[p\cdot\exp\left(p^{-1}u\right)+q\cdot\exp\left(-q^{-1}u \right)\right]^{2}}\] \[=4\cdot\frac{\exp\left(\left(p^{-1}-q^{-1}\right)\cdot u\right)} {\left[p\cdot\exp\left(p^{-1}u\right)+q\cdot\exp\left(-q^{-1}u\right)\right]^ {2}}.\] Moreover, for all \(|u|\leq u_{0}\), we have \[\exp\left(\left(p^{-1}-q^{-1}\right)\cdot u\right)\geq\exp\left(-|p^{-1}-q^{- 1}|\cdot u_{0}\right)\geq\exp\left(-\frac{u_{0}}{p_{\min}}\right)\] \[p\cdot\exp\left(p^{-1}u\right)+q\cdot\exp\left(-q^{-1}u\right)\leq\exp\left( \frac{u_{0}}{p_{\min}}\right).\] As a result, for all \(|u|\leq u_{0}\), we have \(h^{\prime\prime}(u)\geq 4\exp\left(-\frac{3u_{0}}{p_{\min}}\right)=2\). Therefore, \(h^{\prime}(u_{0})\geq h^{\prime}(0)+2\cdot(u_{0}-0)=2u_{0}>u_{0}\), and similarly, \(-h^{\prime}(-u_{0})=h^{\prime}(0)-h^{\prime}(-u_{0})\geq 2u_{0}\), i.e., \(h^{\prime}(-u_{0})\leq-2u_{0}\leq-u_{0}\). Moreover, for all \(|u|\leq u_{0}\), \(h(u)\geq h(0)+h^{\prime}(0)\cdot u+\frac{1}{2}u^{2}\cdot 2=u^{2}\). Therefore, for all \(u>u_{0}\), \(h^{\prime}(u)\geq h^{\prime}(u_{0})>u_{0}\), which implies that \(h(u)\geq h(u_{0})+u_{0}(u-u_{0})\geq u_{0}^{2}+u_{0}u-u_{0}^{2}=u_{0}u\). Similarly, we have \(h(u)\geq-u_{0}u\) for all \(u<-u_{0}\). Proceeding to the proof of Proposition 13, we consider zero-mean \(k\)-dimensional \(f\), \(g\). Without loss of generality, we assume \(b\in\mathcal{F}_{\mathbb{y}}\) satisfies \(\mathbb{E}\left[b(Y)\right]=-H(Y)\). Then, let \(a\in\mathcal{F}_{\mathbb{y}|\varnothing}\) be \(a(y)\triangleq b(y)-\log P_{Y}(y)\), and define \(\gamma\in\mathcal{F}_{\mathbb{X}\times\mathbb{y}}\) as \(\gamma(x,y)\triangleq f(x)\cdot g(y)+a(y)\). Note that since \[\exp(f(x)\cdot g(y)+b(y))=P_{Y}(y)\exp(f(x)\cdot g(y)+a(y))=P_{Y}(y)\exp(\gamma (x,y)),\] we have \[\tilde{P}_{Y|X}^{(f,g,b)}(y|x)=\frac{\exp(f(x)\cdot g(y)+b(y))}{\sum_{y^{ \prime}\in\mathbb{y}}\exp(f(x)\cdot g(y^{\prime})+b(y^{\prime}))}=\frac{P_{Y} (y)\exp(\gamma(x,y))}{\sum_{y^{\prime}\in\mathbb{y}}P_{Y}(y^{\prime})\exp(\gamma (x,y^{\prime}))}.\] Therefore, \[\mathcal{L} (f,g,b)\] \[=\mathbb{E}_{(\hat{X},\hat{Y})\sim P_{X}P_{Y}}\left[(\mathrm{i}_{X;Y }(\hat{X},\hat{Y})+1)\cdot\log\hat{P}_{Y|X}^{(f,g,b)}(\hat{Y}|\hat{X})\right]\] \[=\mathbb{E}_{(\hat{X},\hat{Y})\sim P_{X}P_{Y}}\left[(\mathrm{i}_{ X;Y}(\hat{X},\hat{Y})+1)\cdot\left(\log P_{Y}(\hat{Y})+\gamma(\hat{X},\hat{Y})- \log\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{\prime})\exp(\gamma(\hat{X},y^{ \prime}))\right)\right]\] \[=-H(Y)+\langle\mathrm{i}_{X;Y},\gamma\rangle-\mathbb{E}_{\hat{X} \sim P_{X}}\left[\log\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{\prime})\exp( \gamma(\hat{X},y^{\prime}))\right]. 
\tag{130}\] As a result, for all \((f,g,b)\) that satisfies \(\mathcal{L}(f,g,b)\geq-H(Y)\), we have \[\langle\mathrm{i}_{X;Y},\gamma\rangle\geq\mathbb{E}_{\hat{X}\sim P_{X}}\left[ \log\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{\prime})\exp(\gamma(\hat{X},y^{ \prime}))\right]. \tag{131}\] In addition, note that for all \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\), we have \[\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{\prime})\exp(\gamma(x,y^{ \prime})) =P_{Y}(y)\exp(\gamma(x,y))+(1-P_{Y}(y))\sum_{y^{\prime}\in\mathcal{ Y}\setminus\{y\}}\frac{P_{Y}(y^{\prime})}{1-P_{Y}(y)}\exp(\gamma(x,y^{\prime}))\] \[\geq P_{Y}(y)\exp(\gamma(x,y))+(1-P_{Y}(y))\exp\left(-\frac{P_{Y}( y)}{1-P_{Y}(y)}\cdot\gamma(x,y)\right).\] where the inequality follows from Jensen's inequality and \(\sum_{y\in\mathcal{Y}}P_{Y}(y)\gamma(x,y)=0\). Let us define \[q_{X}\triangleq\min_{x\in\mathcal{X}}P_{X}(x)>0,\quad q_{Y}\triangleq\min_{y \in\mathcal{Y}}P_{Y}(y)>0.\] Then, from Lemma 37 we have \[\log\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{\prime})\exp(\gamma(x,y^{\prime})) \geq\log\left[P_{Y}(y)\exp(\gamma(x,y))+(1-P_{Y}(y))\exp\left(- \frac{P_{Y}(y)}{1-P_{Y}(y)}\cdot\gamma(x,y)\right)\right]\] \[\geq\min\left\{(P_{Y}(y)\gamma(x,y))^{2},\frac{\ln 2}{3}\cdot q _{Y}|P_{Y}(y)\gamma(x,y)|\right\}\] \[\geq\frac{\ln 2\cdot q_{Y}^{2}}{3}\cdot\min\left\{(\gamma(x,y))^{2},|\gamma(x,y)|\right\},\] which implies that \[\mathbb{E}_{\hat{X}\sim P_{X}}\left[\log\sum_{y^{\prime}\in \mathcal{Y}}P_{Y}(y^{\prime})\exp(\gamma(\hat{X},y^{\prime}))\right] \geq P_{X}(x)\cdot\log\sum_{y^{\prime}\in\mathcal{Y}}P_{Y}(y^{ \prime})\exp(\gamma(x,y^{\prime}))\] \[\geq\frac{\ln 2\cdot q_{X}q_{Y}^{2}}{3}\cdot\min\left\{(\gamma(x,y)) ^{2},|\gamma(x,y)|\right\}. \tag{132}\] On the other hand, since \(\|\mathrm{i}_{X;Y}\|=O(\epsilon)\), there exists a constant \(C>0\) such that \(\|\mathrm{i}_{X;Y}\|\leq C\epsilon\). Therefore, \[\langle\mathrm{i}_{X;Y},\gamma\rangle\leq\|\mathrm{i}_{X;Y}\|\cdot\|\gamma\| \leq\gamma_{\max}\cdot\epsilon, \tag{133}\] where \(\gamma_{\max}\triangleq\max_{x\in\mathfrak{X},y\in\mathfrak{Y}}|\gamma(x,y)|\). Hence, by combining (131), (132) and (133), we obtain \[\frac{\ln 2\cdot q_{X}q_{Y}^{2}}{3}\cdot\min\left\{\gamma_{\max}^{2},\gamma_{ \max}\right\}\leq C\cdot\gamma_{\max}\cdot\epsilon, \tag{134}\] where we have taken \((x,y)=\arg\max_{x^{\prime},y^{\prime}}|\gamma(x^{\prime},y^{\prime})|\) in (132). From (134), if \(\epsilon<\frac{\ln 2}{3}\cdot\frac{q_{X}q_{Y}^{2}}{3C}\) we have \(\gamma_{\max}<\epsilon\). This implies that \[|\gamma(x,y)|<\epsilon,\quad\text{for all }x\in\mathfrak{X},y\in\mathfrak{Y}, \tag{135}\] and we obtain \(\|\gamma\|<\epsilon\). In addition, note that since \(\|\gamma\|^{2}=\|f\otimes g+a\|^{2}=\|f\otimes g\|^{2}+\|a\|^{2}\), we obtain \(\|f\otimes g\|=O(\epsilon)\). 
From (135), we have \[\sum_{y^{\prime}\in\mathfrak{Y}}P_{Y}(y^{\prime})\exp(\gamma(x,y^ {\prime})) =\sum_{y^{\prime}\in\mathfrak{Y}}P_{Y}(y^{\prime})\left(1+\gamma( x,y^{\prime})+\frac{(\gamma(x,y^{\prime}))^{2}}{2}+o(\epsilon^{2})\right)\] \[=1+\frac{1}{2}\sum_{y^{\prime}\in\mathfrak{Y}}P_{Y}(y^{\prime})( \gamma(x,y^{\prime}))^{2}+o(\epsilon^{2})\] \[=1+\frac{1}{2}\cdot\mathbb{E}_{\hat{Y}\sim P_{Y}}\left[\gamma(x, \hat{Y})\right]^{2}+o(\epsilon^{2}).\] Therefore, \[\mathbb{E}_{\hat{X}\sim P_{X}}\left[\log\sum_{y^{\prime}\in\mathfrak{Y}}P_{Y}( y^{\prime})\exp(\gamma(\hat{X},y^{\prime}))\right]=\frac{1}{2}\cdot\mathbb{E}_{( \hat{X},\hat{Y})\sim P_{X}P_{Y}}\left[\gamma(\hat{X},\hat{Y})\right]^{2}+o( \epsilon^{2})=\frac{1}{2}\cdot\|\gamma\|^{2}+o(\epsilon^{2}),\] and the likelihood (130) becomes \[\mathcal{L}(f,g,b) =-H(Y)+\langle\mathrm{i}_{X;Y},\gamma\rangle-\mathbb{E}_{\hat{X} \sim P_{X}}\left[\log\sum_{y^{\prime}\in\mathfrak{Y}}P_{Y}(y^{\prime})\exp( \gamma(\hat{X},y^{\prime}))\right]\] \[=-H(Y)+\langle\mathrm{i}_{X;Y},\gamma\rangle-\frac{1}{2}\cdot\| \gamma\|^{2}+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot\left(\|\mathrm{i}_{X;Y}\|^{2}-\|\mathrm{i}_{X;Y }-\gamma\|^{2}\right)-H(Y)+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot\left(\|\mathrm{i}_{X;Y}\|^{2}-\|\mathrm{i}_{X;Y }-f\otimes g-a\|^{2}\right)-H(Y)+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot\left(\|\mathrm{i}_{X;Y}\|^{2}-\|\mathrm{i}_{X;Y }-f\otimes g\|^{2}-\|a\|^{2}\right)-H(Y)+o(\epsilon^{2}), \tag{136}\] where the last equality follows from the fact that \[\|\mathrm{i}_{X;Y}-f\otimes g-a\|^{2}=\|\mathrm{i}_{X;Y}-f\otimes g\|^{2}+\|a \|^{2},\] due to the orthogonality \((\mathrm{i}_{X;Y}-f\otimes g)\perp\mathcal{F}_{\mathfrak{Y}}\ni a\). Finally, from (136), for given \(f,g\), \(\mathcal{L}(f,g,b)\) is maximized when \(\|a\|=0\). Therefore, we have \[\mathcal{L}(f,g)=\max_{b\in\mathcal{F}_{\mathfrak{Y}}}\mathcal{L}(f,g,b)=\frac{ 1}{2}\cdot\left(\|\mathrm{i}_{X;Y}\|^{2}-\|\mathrm{i}_{X;Y}-f\otimes g\|^{2} \right)-H(Y)+o(\epsilon^{2}),\] which gives (30). ### Proof of Theorem 17 Throughout this proof, we consider defined on the domain, and let We also define, as (137) Then, it suffices to establish that (138) To see this, note that the common solution (138) coincides with the optimal solution (44) to be established. From Proposition 35, since, we have (139) Furthermore, from the definition of H-score (cf. Definition 10), we have which implies that It remains only to establish (138). From Proposition 36, it suffices to establish that and (cf. (138)) (140) where we have defined To this end, let us define and as as, and. Then, we have (141) \[=\left\|\bar{\gamma}+\gamma-\bar{f}\otimes\bar{g}-(\bar{h}+h) \otimes g\right\|^{2} \tag{142}\] \[=\left\|\bar{\gamma}-\bar{f}\otimes\bar{g}-\bar{h}\otimes g \right\|^{2}+\left\|\gamma-h\otimes g\right\|^{2}\] (143) \[\geq\left\|\gamma-h\otimes g\right\|^{2}\] (144) \[\geq\left\|r_{k}(\gamma)\right\|^{2} \tag{145}\] where (143) follows from the orthogonality principle, since \(\left(\bar{\gamma}-\bar{f}\otimes\bar{g}-\bar{h}\otimes g\right)\in\mathcal{G }_{\mathcal{X}}\otimes\mathcal{F}_{\mathcal{Y}}\) and \(\left(\gamma-h\otimes g\right)\perp\mathcal{G}_{\mathcal{X}}\otimes\mathcal{F }_{\mathcal{Y}}\). Moreover, it is easily verified that the lower bound (145) is tight: all inequalities hold with equality for \((\bar{f},\bar{g},f,g)\) satisfying (44). 
Therefore, we have \[t_{2}^{*}=\min_{(\bar{f},\bar{g},f,g)\in\mathcal{D}}L_{2}(\bar{f},\bar{g},f,g) =\left\|r_{k}(\gamma)\right\|^{2} \tag{146}\] On the other hand, since \(\bar{k}\geq\text{rank}(\bar{\gamma})\), from Proposition 5, we have \(\mathcal{D}_{1}^{*}=\{(\bar{f},\bar{g},f,g)\colon\bar{f}\otimes\bar{g}=\bar{ \gamma}\}\). Hence, for all \((\bar{f},\bar{g},f,g)\in\mathcal{D}_{1}^{*}\), we have \[L_{2}(\bar{f},\bar{g},f,g)=\left\|i_{X;Y}-\bar{f}\otimes\bar{g}-f\otimes g \right\|^{2}=\left\|i_{X;Y}-\bar{\gamma}-f\otimes g\right\|^{2}=\left\|\gamma -f\otimes g\right\|^{2}\geq\left\|r_{k}(\gamma)\right\|^{2},\] where the inequality holds with equality if and only if \(f\otimes g=\zeta_{\leq k}(\gamma)\). As a result, \(\mathcal{D}_{2}=\arg\min_{(\bar{f},\bar{g},f,g)\in\mathcal{D}_{1}^{*}}L_{2}( \bar{f},\bar{g},f,g)\) is given by (140), and we have \[t_{2}=\min_{(\bar{f},\bar{g},f,g)\in\mathcal{D}_{1}^{*}}L_{2}(\bar{f},\bar{g}, f,g)=\left\|r_{k}(\gamma)\right\|^{2}=t_{2}^{*}.\] ### Proof of Theorem 18 To begin, we have \[\mathscr{H}\left(\left[\bar{f}\right],\left[\bar{g}\right]; \mathcal{C}_{\pi}^{\star}\right) =\sum_{i=1}^{\bar{k}}\mathscr{H}(\bar{f}_{[i]},\bar{g}_{[i]})+ \sum_{i=1}^{k}\mathscr{H}\left(\left[\bar{f}_{[i]}\right],\left[\bar{g}_{[i]} \right]\right)\] \[=\sum_{i=1}^{\bar{k}-1}\mathscr{H}(\bar{f}_{[i]},\bar{g}_{[i]})+ \sum_{i=1}^{k}\left(k^{-1}\cdot\mathscr{H}(\bar{f},\bar{g})+\mathscr{H}\left( \left[\bar{f}_{[i]}\right],\left[\bar{g}_{[i]}\right]\right)\right).\] Note that for \(\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right)\in\text{dom}(\mathcal{C}_{\pi}^{\star})\) and each \(i\in[\bar{k}]\), \(\mathscr{H}(\bar{f}_{[i]},\bar{g}_{[i]})\) is maximized if and only if \[\bar{f}_{[i]}\otimes\bar{g}_{[i]}=\zeta_{\leq i}(\mathfrak{i}_{X;Y}|\mathcal{ S}_{\mathcal{X}},\mathcal{F}_{\mathcal{Y}}). \tag{147}\] In addition, from Theorem 17, for each \(i\in[k]\), the term is maximized if and only if \[\bar{f}\otimes\bar{g}=\Pi\left(\mathfrak{i}_{X;Y};\mathcal{G}_{\mathcal{X}} \otimes\mathcal{F}_{\mathcal{Y}}\right),\qquad f_{[i]}\otimes g_{[i]}=\zeta_{ \leq i}(\Pi\left(\mathfrak{i}_{X;Y};\left(\mathcal{F}_{\mathcal{X}}\boxplus \mathcal{G}_{\mathcal{X}}\right)\otimes\mathcal{F}_{\mathcal{Y}}\right)). \tag{148}\] Note that (45) is the common solution of (147) and (148). Hence, the proof is completed by applying Proposition 35. ### Proof of Proposition 20 The relation \(\mathrm{i}_{X;S}=\tilde{\ell}_{P_{X,S,Y}^{\mathsf{M}}}\) can be directly verified from definition. To establish \(\pi_{\mathsf{M}}(\mathrm{i}_{X;S,Y})=\Pi\left(\mathrm{i}_{X;S,Y};\mathcal{F}_{ \mathcal{X}\times\mathcal{S}}\right)=\mathrm{i}_{X;S}\), from Fact 2, it suffices to show that \((\mathrm{i}_{X;S,Y}-\mathrm{i}_{X;S})\perp\mathcal{F}_{\mathcal{X}\times \mathcal{S}}\). 
To this end, note that \[\mathrm{i}_{X;S,Y}(x,s,y)-\mathrm{i}_{X;S}(x,s) =\frac{P_{X,S,Y}(x,s,y)-P_{X,S,Y}^{\mathsf{M}}(x,s,y)}{R_{X,S,Y}(x,s,y)}\] \[=\frac{P_{X,S,Y}(x,s,y)-P_{X|S}(x|s)P_{S}(s)P_{Y|S}(y|s)}{R_{X,S,Y} (x,s,y)}.\] Therefore, for all \(f\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\), we have \[\left\langle\mathrm{i}_{X;S,Y}-\mathrm{i}_{X;S},f\right\rangle =\sum_{x\in\mathcal{X},s\in\mathcal{S},y\in y}P_{X,S,Y}(x,s,y)f(x,s)-\sum_{x\in\mathcal{X},s\in\mathcal{S},y\in y}P_{X|S}(x|s)P_{S}(s)P_{Y|S}(y |s)\cdot f(x,s)\] \[=\mathbb{E}_{P_{X,S}}\left[f(X,S)\right]-\mathbb{E}_{P_{X,S}} \left[f(X,S)\right]\] \[=0.\] ### Proof of Proposition 21 Let \(\gamma=\mathrm{i}_{X;S,Y}^{(Q)}\), then the statement is equivalent to \[\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\quad\iff\quad Q_{X,S,Y}=Q_ {X|S}Q_{S}Q_{Y|S}.\] "\(\Longrightarrow\)"If \(\gamma\in\mathcal{F}_{\mathcal{X}\times\mathcal{S}}\), then we have \[Q_{X,S,Y}(x,s,y) =Q_{X}(x)Q_{S,Y}(s,y)\left(1+\gamma(x,s)\right)\] \[=P_{X}(x)P_{S,Y}(s,y)\left(1+\gamma(x,s)\right)\] \[=P_{X}(x)P_{S}(s)\left(1+\gamma(x,s)\right)\cdot P_{Y|S}(y|s), \quad\text{for all }x,s,y. \tag{149}\] Therefore, \[Q_{X,S}(x,s)=\sum_{y\in y}Q_{X,S,Y}(x,s,y)=P_{X}(x)P_{S}(s)\left(1+\gamma(x,s) \right). \tag{150}\] From (149) and (150), we obtain \[Q_{X,S,Y}(x,s,y)=Q_{X,S}(x,s)P_{Y|S}(y|s)=Q_{X,S}(x,s)Q_{Y|S}(y|s)=Q_{X|S}(x|s) Q_{S}(s)Q_{Y|S}(y|s). \tag{151}\] "\(\Longleftarrow\)"It suffices to note that \[\mathrm{i}_{X;S,Y}^{(Q)}(x,s,y)=\frac{Q_{X,S,Y}(x,s,y)}{Q_{X}(x)Q_{S,Y}(s,y)} -1=\frac{Q_{X,S}(x,s)Q_{Y|S}(y|s)}{Q_{X}(x)Q_{S}(s)Q_{Y|S}(y|s)}-1=\frac{Q_{X,S }(x,s)}{Q_{X}(x)Q_{S}(s)}-1.\] ### Proof of Proposition 23 The results on \(\|\mathrm{i}_{X;S}\|\) and \(\|\mathrm{i}_{X;Y|S}\|\) directly follows from Property 1. To establish (60), note that from Proposition 20, we have \[P_{X,S,Y}(x,s,y)-P_{X|S}(x|s)P_{S}(s)P_{Y|S}(y|s)=P_{X}(x)P_{S,Y}(s,y)\cdot \mathrm{i}_{X;Y|S}(x,s,y),\] which implies that \[P_{Y|X,S}(y|x,s) =P_{Y|S}(y|s)\cdot\left(1+\frac{P_{X}(x)P_{S}(s)}{P_{X,S}(x,s)} \cdot\mathrm{i}_{X;Y|S}(x,s,y)\right)\] \[=P_{Y|S}(y|s)\cdot\left(1+\frac{1}{1+\mathrm{i}_{X;S}(x,s)}\cdot \mathrm{i}_{X;Y|S}(x,s,y)\right) \tag{152}\] \[=P_{Y|S}(y|s)\cdot\left(1+\frac{f^{\mathrm{T}}(x)g(s,y)}{1+\bar{f }^{\mathrm{T}}(x)\bar{g}(s)}\right), \tag{153}\] which further implies (61). ### Proof of Proposition 24 When \(X\) and \((S,Y)\) are \(\epsilon\)-dependent, we have \(\|\mathrm{i}_{X;S,Y}\|=O(\epsilon)\), and thus \[\mathrm{i}_{X;S}(x,s)=O(\epsilon),\quad\mathrm{i}_{X;Y|S}(x,s,y)=O(\epsilon).\] Therefore, we have \[\frac{1}{1+\mathrm{i}_{X;S}(x,s)}\cdot\mathrm{i}_{X;Y|S}(x,s,y)=(1-\mathrm{i} _{X;S}(x,s))\cdot\mathrm{i}_{X;Y|S}(x,s,y)+o(\epsilon)=\mathrm{i}_{X;Y|S}(x,s,y)+o(\epsilon).\] Then, it follows from (152) that \[P_{Y|X,S}(y|x,s)=P_{Y|S}(y|s)\cdot\left(1+\mathrm{i}_{X;Y|S}(x,s,y)\right)+o( \epsilon).\] Finally, the proof is completed by using the decomposition (63). ### Proof of Theorem 25 For each \(s,x,y\), let us define \(R^{(s)}_{X,Y}(x,y)\triangleq P_{X|S=s}(x)P_{Y|S=s}(y)\) and \[\mathrm{i}^{(s)}_{X;Y}(x,y)\triangleq\frac{P_{X,Y|S=s}(x,y)-P_{X|S=s}(x)P_{Y| S=s}(y)}{P_{X|S=s}(x)P_{Y|S=s}(y)}. \tag{154}\] In addition, we define \(\underline{\mu}_{s}\in\mathbb{R}^{k}\) as \(\underline{\mu}_{s}\triangleq\mathbb{E}\left[f(X)|S=s\right]\) for each \(s\in\mathcal{S}\). Also, let \(\beta\in\mathcal{F}_{\mathcal{S}\times\mathcal{Y}}\) be defined as \(\beta(s,y)\triangleq\underline{\mu}_{s}^{\mathrm{T}}g(s,y)\). 
Then, for each \(s\in\mathcal{S}\), from (69) and Proposition 13 we have \[|f^{\mathrm{T}}(x)g(s,y)-\beta(s,y)|=|(f(x)-\underline{\mu}_{s})^{\mathrm{T}}g (s,y)|=O(\epsilon),\quad\text{for all }x,s,y, \tag{155}\] which implies that \[\big{|}f^{\mathrm{T}}(x)g(s,y)\big{|}=O(\epsilon),\quad\text{for all }x,s,y. \tag{156}\] In addition, from Property 3, we have \[\mathcal{L}_{S}^{(s)} (f,g^{(s)})\] \[=\mathcal{L}_{S}^{(s)}(f-\underline{\mu}_{s},g^{(s)})\] \[=\frac{1}{2}\cdot\left(\left\|\dot{\mathrm{i}}_{X;Y}^{(s)}\right\| _{\dot{\mathrm{R}}_{X,Y}^{(s)}}^{2}-\left\|\dot{\mathrm{i}}_{X;Y}^{(s)}-(f- \underline{\mu}_{s})\otimes g^{(s)}\right\|_{\dot{\mathrm{R}}_{X,Y}^{(s)}}^{2 }\right)-H(Y|S=s)+o(\epsilon^{2})\] \[=\left\langle\mathrm{i}_{X;Y}^{(s)},f\otimes g^{(s)}-\underline{ \mu}_{s}^{\mathrm{T}}g^{(s)}\right\rangle_{\dot{\mathrm{R}}_{X,Y}^{(s)}}-\frac {1}{2}\cdot\left\|f\otimes g^{(s)}-\underline{\mu}_{s}^{\mathrm{T}}g^{(s)} \right\|_{\dot{\mathrm{R}}_{X,Y}^{(s)}}^{2}-H(Y|S=s)+o(\epsilon^{2}), \tag{157}\] where \(\langle\,\cdot\,,\,\cdot\,\rangle_{R}\) and \(\|\cdot\|_{R}\) denote the inner product and corresponding induced norm on the functional space, with respect to the metric distribution \(R\). For the first two terms in (157), we compute their expectations over \(P_{S}\). For the first term, \[\sum_{s\in\SS}P_{S}(s)\big{\langle}\mathrm{i}_{X;Y}^{(s)},f \otimes g^{(s)}-\underline{\mu}_{s}^{\mathrm{T}}g^{(s)}\big{\rangle}_{\dot{ \mathrm{R}}_{X,Y}^{(s)}}\] \[\qquad=\sum_{x\in\mathcal{X},s\in\SS,y\in\SS}P_{S}(s)R_{X,Y}^{(s) }(x,y)\mathrm{i}_{X;Y}^{(s)}(x,y)\left(f^{\mathrm{T}}(x)g^{(s)}(y)-\underline {\mu}_{s}^{\mathrm{T}}g^{(s)}(y)\right)\] \[\qquad=\sum_{x\in\mathcal{X},s\in\SS,y\in\SS}P_{X}(x)P_{S,Y}(s,y) \mathrm{i}_{X;Y|S}(x,s,y)\cdot\left(f^{\mathrm{T}}(x)g(s,y)-\beta(s,y)\right)\] \[\qquad=\left\langle\mathrm{i}_{X;Y|S},f\otimes g\right\rangle- \left\langle\mathrm{i}_{X;Y|S},\beta\right\rangle\] \[\qquad=\left\langle\mathrm{i}_{X;Y|S},f\otimes g\right\rangle \tag{158}\] where to obtain the second equality we have used the facts that \[\mathrm{i}_{X;Y}^{(s)}(x,y) =\frac{P_{X,Y|S=s}(x,y)-P_{X|S=s}(x)P_{Y|S=s}(y)}{P_{X|S=s}(x)P_{Y |S=s}(y)}\] \[=\frac{P_{X,S,Y}(x,s,y)-P_{X,S,Y}^{\mathsf{M}}(x,s,y)}{P_{X}(x)P_{ S,Y}(s,y)}\cdot\frac{P_{X}(x)}{P_{X|S=s}(x)}\] \[=\mathrm{i}_{X;Y|S}(x,s,y)\cdot\frac{1}{1+\mathrm{i}_{X;S}(x,s)} \tag{159}\] and \[P_{S}(s)R_{X,Y}^{(s)}(x,y)=P_{X,S,Y}^{\mathsf{M}}(x,s,y)=P_{X}(x)P_{S,Y}(s,y) \cdot(1+\mathrm{i}_{X;S}(x,s)), \tag{160}\] and where to obtain (158) we have used the fact that \(\mathrm{i}_{X;Y|S}\perp\mathcal{F}_{\SS\times\SS}\ni\beta\). For the second term of (157), we have \[\sum_{s\in\SS}P_{S}(s)\big{\|}f\otimes g^{(s)}-\underline{\mu}_{ s}^{\mathrm{T}}g^{(s)}\big{\|}_{\dot{\mathrm{R}}_{X,Y}^{(s)}}^{2}\] \[\qquad=\sum_{x\in\mathcal{X},s\in\SS,y\in\SS}P_{X,S,Y}^{\mathsf{M }}(x,s,y)\cdot\left[(f(x)-\underline{\mu}_{s})^{\mathrm{T}}g(s,y)\right]^{2} \tag{161}\] \[\qquad=\sum_{x\in\mathcal{X},s\in\SS,y\in\SS}P_{X}(x)P_{S,Y}(s,y) \cdot\left[(f(x)-\underline{\mu}_{s})^{\mathrm{T}}g(s,y)\right]^{2}+o( \epsilon^{2})\] (162) \[\qquad=\sum_{x\in\mathcal{X},s\in\SS,y\in\SS}P_{X}(x)P_{S,Y}(s,y) \cdot\left[f^{\mathrm{T}}(x)g(s,y)-\beta(s,y)\right]^{2}+o(\epsilon^{2}) \tag{163}\] \[=\left\|f\otimes g-\beta\right\|^{2}+o(\epsilon^{2}) \tag{164}\] \[=\left\|f\otimes g\right\|^{2}+\left\|\beta\right\|^{2}+o(\epsilon^ {2}), \tag{165}\] where to obtain the second equality we have used (155), (160), and the fact that \(\left\|\mathrm{i}_{X;S}\right\|=O(\epsilon)\). 
Furthermore, we can show that \(\left\|\beta\right\|=o(\epsilon)\). To see this, note that \[\beta(s,y)=\underline{\mu}_{s}^{\mathrm{T}}g(s,y) =\sum_{x\in\mathfrak{X}}P_{X|S}(x|s)f^{\mathrm{T}}(x)g(s,y)\] \[=\sum_{x\in\mathfrak{X}}P_{X}(x)\cdot[1+\mathrm{i}_{X;S}(x,s)] \cdot f^{\mathrm{T}}(x)g(s,y)\] \[=\sum_{x\in\mathfrak{X}}P_{X}(x)\cdot\mathrm{i}_{X;S}(x,s)\cdot f ^{\mathrm{T}}(x)g(s,y).\] Then, from \(\left\|\mathrm{i}_{X;S}\right\|=O(\epsilon)\) and (156), we obtain \(\left|\beta(s,y)\right|=O(\epsilon^{2})\) and \(\left\|\beta\right\|=O(\epsilon^{2})=o(\epsilon)\). Therefore, we can refine (165) as \[\sum_{s\in\SS}P_{S}(s)\left\|f\otimes g^{(s)}-\underline{\mu}_{s}^{\mathrm{T} }g^{(s)}\right\|_{R^{(s)}_{X,Y}}^{2}=\left\|f\otimes g\right\|^{2}+o(\epsilon ^{2}). \tag{166}\] Combining (157), (158), and (166), we have \[\mathcal{L}_{S}(f,g)=\sum_{s\in\SS}P_{S}(s)\cdot\mathcal{L}_{S}^ {(s)}(f,g^{(s)}) =\left\langle\mathrm{i}_{X;Y|S},f\otimes g\right\rangle-\frac{1}{ 2}\cdot\left\|f\otimes g\right\|^{2}-H(Y|S)+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot\left(\left\|\mathrm{i}_{X;Y|S}\right\|^{2}- \left\|\mathrm{i}_{X;Y|S}-f\otimes g\right\|^{2}\right)-H(Y|S)+o(\epsilon^{2}),\] which is maximized if and only if \(f\otimes g=\zeta_{\leq k}(\mathrm{i}_{X;Y|S})+o(\epsilon)\). ### Proof of Proposition 26 Given any \(h\in\mathcal{F}_{\mathfrak{X}_{1}\times\mathfrak{Y}}+\mathcal{F}_{\mathfrak{X }_{2}\times\mathfrak{Y}}\), we can represent \(h=h^{(1)}+h^{(2)}\) with \(h^{(i)}\in\mathcal{F}_{\mathfrak{X}_{i}\times\mathfrak{Y}},i=1,2\), i.e., \(h(x_{1},x_{2},y)=h^{(1)}(x_{1},y)+h^{(2)}(x_{2},y)\). Then, for any \(Q_{X_{1},X_{2},Y}\in\mathcal{Q}_{\mathsf{B}}\), we have \[\langle\mathrm{i}_{X_{1},X_{2};Y}^{(Q)},h^{(i)}\rangle =\sum_{x_{1}\in\mathfrak{X}_{1},x_{2}\in\mathfrak{X}_{2},y\in \mathfrak{Y}}P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)\mathrm{i}_{X_{1},X_{2};Y}^{ (Q)}(x_{1},x_{2},y)\cdot h^{(i)}(x_{i},y)\] \[=\sum_{x_{1}\in\mathfrak{X}_{1},x_{2}\in\mathfrak{X}_{2},y\in \mathfrak{Y}}\left[Q_{X_{1},X_{2},Y}(x_{1},x_{2},y)-P_{X_{1},X_{2}}(x_{1},x_{2 })P_{Y}(y)\right]\cdot h^{(i)}(x_{i},y)\] \[=\sum_{x_{i}\in\mathfrak{X}_{i},y\in\mathfrak{Y}}\left[P_{X_{i},Y }(x_{i},y)-P_{X_{i}}(x_{i})P_{Y}(y)\right]\cdot h^{(i)}(x_{i},y)\] \[=\langle\mathrm{i}_{X_{i};Y},h^{(i)}\rangle\qquad\qquad\text{ for }i=1,2.\] As a result, \[\langle\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)}),h\rangle =\langle\mathrm{i}_{X_{1},X_{2};Y}^{(Q)}-\pi_{\mathsf{I}}( \mathrm{i}_{X_{1},X_{2};Y}^{(Q)}),h\rangle\] \[=\langle\mathrm{i}_{X_{1},X_{2};Y}^{(Q)},h\rangle\] \[=\langle\mathrm{i}_{X_{1},X_{2};Y}^{(Q)},h^{(1)}\rangle+\langle \mathrm{i}_{X_{1},X_{2};Y}^{(Q)},h^{(2)}\rangle\equiv\langle\mathrm{i}_{X_{1}; Y},h^{(1)}\rangle+\langle\mathrm{i}_{X_{2};Y},h^{(2)}\rangle,\] where the second equality follows from the fact that \(\langle\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)}),h\rangle=0\). Hence, we obtain \(\langle\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)})-\pi_{\mathsf{B}}( \tilde{\ell}_{P_{X_{1},X_{2},Y}}),h\rangle=0\) for all \(h\in\mathcal{F}_{\mathfrak{X}_{1}\times\mathfrak{Y}}+\mathcal{F}_{\mathfrak{X }_{2}\times\mathfrak{Y}}\), which implies that \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}^{(Q)})-\pi_{\mathsf{B}}(\tilde{ \ell}_{P_{X_{1},X_{2},Y}})=0\). 
### Proof of Proposition 28 From the definition, we have \[P_{X_{1},X_{2},Y}(x_{1},x_{2},y) =P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)\left(1+[\pi_{\mathsf{B}}( \mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)+[\pi_{\mathsf{I}}(\mathrm{i}_{X_{1 },X_{2};Y})](x_{1},x_{2},y)\right)\] \[=P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)\left[1+\bar{f}^{\,\mathrm{T} }(x_{1},x_{2})\bar{g}(y)+f^{\,\mathrm{T}}(x_{1},x_{2})g(y)\right],\] which implies (80a). To obtain (80b), note that from (167), we have \[P_{X_{1},Y}(x_{1},y) =\sum_{x_{2}\in\mathcal{X}_{2}}P_{X_{1},X_{2},Y}^{\mathsf{B}}(x_{ 1},x_{2},y)\] \[=P_{X_{1}}(x_{1})P_{Y}(y)\left[1+\mathbb{E}\left[\bar{f}^{\, \mathrm{T}}(x_{1},X_{2})\bar{g}(y)|X_{1}=x_{1}\right]\right]\] \[=P_{X_{1}}(x_{1})P_{Y}(y)\left[1+\left(\bar{f}^{(1)}(x_{1})+[\tau_ {1}(\bar{f}^{(2)})](x_{1})\right)^{\mathrm{T}}\bar{g}(y)\right],\] where to obtain the last equality we have used the fact that \(\tau_{1}(\bar{f})=\bar{f}^{(1)}+\tau_{1}(\bar{f}^{(2)})\). Similarly, we can obtain (80b). Finally, (81) can be readily obtained from (80). ### Proof of Proposition 29 Note that since \(\mathrm{i}_{X_{1},X_{2};Y}\in\mathcal{F}_{(\mathcal{X}_{1}\times\mathcal{X}_{2 }\times\mathcal{Y})|\varnothing}\), we have \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\in\mathcal{F}_{(\mathcal{X}_{1} \times\mathcal{X}_{2}\times\mathcal{Y})|\varnothing}\), which implies that \[\sum_{(x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}} P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)\cdot[\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{ 2};Y})](x_{1},x_{2},y)=\langle\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}),1 \rangle=0.\] Therefore, it follows from the definition of \(P_{X_{1},X_{2},Y}^{\mathsf{B}}\) that \(\sum_{(x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}} P_{X_{1},X_{2},Y}^{\mathsf{B}}(x_{1},x_{2},y)=1\). Similarly, we have \(\sum_{(x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}} P_{X_{1},X_{2},Y}^{\mathsf{I}}(x_{1},x_{2},y)=1\). Moreover, from (82), we have \(P_{X_{1},X_{2},Y}^{\mathsf{B}}(x_{1},x_{2},y)\geq 0,P_{X_{1},X_{2},Y}^{\mathsf{I}} (x_{1},x_{2},y)\geq 0\) for all \((x_{1},x_{2},y)\). Therefore, we obtain \(P_{X_{1},X_{2},Y}^{\mathsf{B}},P_{X_{1},X_{2},Y}^{\mathsf{I}}\in\mathcal{Y}^ {\mathbb{X}_{1}\times\mathcal{X}_{2}\times\mathcal{Y}}\). Since \(\mathrm{i}_{X_{1},X_{2};Y}\perp\mathcal{F}_{\mathcal{X}_{1}\times\mathcal{X}_{2 }}\), we have \(\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})\perp\mathcal{F}_{\mathcal{X}_{1} \times\mathcal{X}_{2}}\). Therefore, for any \((\hat{x}_{1},\hat{x}_{2})\in\mathcal{X}_{1}\times\mathcal{X}_{2}\), let \(f(x_{1},x_{2})=\delta_{x_{1}\hat{x}_{1}}\delta_{x_{2}\hat{x}_{2}}\), then we have \[\sum_{y\in\mathcal{Y}}[\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y} )](\hat{x}_{1},\hat{x}_{2},y) =\sum_{(x_{1},x_{2},y)\in\mathcal{X}_{1}\times\mathcal{X}_{2} \times\mathcal{Y}}[\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y )\cdot\delta_{x_{1}\hat{x}_{1}}\delta_{x_{2}\hat{x}_{2}}\] \[=\langle\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}),f\rangle=0.\] This implies that \(P_{X_{1},X_{2}}^{\mathsf{B}}=P_{X_{1},X_{2}}\). Similarly, we have \(P_{X_{1},X_{2}}^{\mathsf{I}}=P_{X_{1},X_{2}}\). 
Finally, note that since \[P_{X_{1},X_{2},Y}(x_{1},x_{2},y)=P_{X_{1},X_{2}}(x_{1},x_{2})P_{Y}(y)\left[1+[ \pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)+[\pi_{\mathsf{I}}( \mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2},y)\right],\] we have \[P_{X_{1},Y}(x_{1},y)-P_{X_{1},Y}^{\mathsf{B}}(x_{1},y) =P_{X_{1},Y}^{\mathsf{I}}(x_{1},y)-P_{X_{1}}(x_{1})P_{Y}(y)\] \[=\sum_{x_{2}^{\prime}\in\mathcal{X}_{2}}P_{X_{1},X_{2}}(x_{1},x_{2 }^{\prime})P_{Y}(y)[\pi_{\mathsf{I}}(\mathrm{i}_{X_{1},X_{2};Y})](x_{1},x_{2}^{ \prime},y)\] \[=0.\] To obtain the last equality, note that for any \(\hat{x}_{1},\hat{y}\in\mathscr{X}_{1}\times\mathscr{Y}\), let us define \(\gamma\in\mathscr{F}_{\mathscr{X}_{1}\times\mathscr{Y}}\) as \(\gamma(x_{1},y)=\delta_{x_{1}\hat{x}_{1}}\delta_{y\hat{y}}\). Then, from \(\pi_{\mathfrak{l}}(\mathrm{i}_{X_{1},X_{2};Y})\perp\mathscr{F}_{\mathscr{X}_{1 }\times\mathscr{Y}}\), we have \[0=\langle\pi_{\mathfrak{l}}(\mathrm{i}_{X_{1},X_{2};Y}),\gamma \rangle=\sum_{x_{2}^{\prime}\in\mathscr{X}_{2}}P_{X_{1},X_{2}}(\hat{x}_{1},x_{ 2}^{\prime})P_{Y}(\hat{y})[\pi_{\mathfrak{l}}(\mathrm{i}_{X_{1},X_{2};Y})]( \hat{x}_{1},x_{2}^{\prime},\hat{y}). \tag{167}\] Similarly, we can show that \[P_{X_{2},Y}(x_{2},y)-P_{X_{2},Y}^{\mathsf{B}}(x_{2},y)=P_{X_{2},Y }^{\mathsf{I}}(x_{2},y)-P_{X_{2}}(x_{2})P_{Y}(y)=0.\] ### Proof of Proposition 30 Note that since \[H(Q_{X_{1},X_{2},Y})=H(P_{X_{1},X_{2}})+H(P_{Y})-I_{Q}(X_{1},X_{ 2};Y),\] we have \[P_{X_{1},X_{2},Y}^{\mathsf{ent}}=\operatorname*{arg\,min}_{Q_{ X_{1},X_{2},Y}\in\mathscr{O}_{\mathsf{B}}}I_{Q}(X_{1},X_{2};Y), \tag{168}\] where \(I_{Q}(X_{1},X_{2};Y)\) denotes the mutual information between \((X_{1},X_{2})\) and \(Y\) with respect to \(Q_{X_{1},X_{2},Y}\). Specifically, when we take \(P_{X_{1},X_{2},Y}\) as the \(Q_{X_{1},X_{2},Y}\), we have \(I_{P}(X_{1},X_{2};Y)=\frac{1}{2}\cdot\|\mathrm{i}_{X_{1},X_{2};Y}\|^{2}+o( \epsilon^{2}).\) Therefore, to solve (168), it suffices to consider \(Q_{X_{1},X_{2},Y}\) with \(I_{Q}(X_{1},X_{2};Y)<I_{P}(X_{1},X_{2};Y)=O(\epsilon^{2})\). Since \(Q_{X_{1},X_{2}}Q_{Y}=P_{X_{1},X_{2}}P_{Y}\in\operatorname{relint}(\mathscr{P }^{\mathscr{X}_{1}\times\mathscr{X}_{2}\times\mathscr{Y}})\), we have \(\|\tilde{\ell}_{Q_{X_{1},X_{2},Y}}\|^{2}=O(\epsilon^{2})\) [cf. (Sason and Verdu, 2016, Eq. (338))]. Therefore, \[I_{Q}(X_{1},X_{2};Y) =\frac{1}{2}\cdot\|\tilde{\ell}_{Q_{X_{1},X_{2},Y}}\|^{2}+o( \epsilon^{2})\] \[=\frac{1}{2}\cdot\|\pi_{\mathsf{B}}(\tilde{\ell}_{Q_{X_{1},X_{2},Y}})\|^{2}+\frac{1}{2}\cdot\|\pi_{\mathfrak{l}}(\tilde{\ell}_{Q_{X_{1},X_{2},Y}})\|^{2}+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot\|\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y}) \|^{2}+\frac{1}{2}\cdot\|\pi_{\mathfrak{l}}(\tilde{\ell}_{Q_{X_{1},X_{2},Y}}) \|^{2}+o(\epsilon^{2})\] \[\geq\frac{1}{2}\cdot\|\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y} )\|^{2}+o(\epsilon^{2})\] where the inequality holds with equality when \(\big{\|}\pi_{\mathfrak{l}}(\tilde{\ell}_{Q_{X_{1},X_{2},Y}})\big{\|}=o(\epsilon)\). Hence, we obtain \(\mathrm{i}_{X_{1},X_{2};Y}^{(\mathsf{ent})}=\tilde{\ell}_{P_{X_{1},X_{2},Y} ^{\mathsf{ent}}}=\pi_{\mathsf{B}}(\mathrm{i}_{X_{1},X_{2};Y})+o(\epsilon)\), which gives (85). ### Proof of Proposition 31 For all \(\phi\triangleq\phi^{(1)}+\phi^{(2)}\in\mathscr{F}_{\mathscr{X}_{1}}+\mathscr{ F}_{\mathscr{X}_{2}}\) and \(\psi\in\mathscr{F}_{\mathscr{Y}}\) with \(\|\psi\|=1\), we have \(\mathbb{E}_{P_{X_{1},X_{2}}P_{Y}}\left[\psi(Y)\phi(X_{1},X_{2})\right]=0\). 
Hence, \[\mathbb{E}\left[\psi(Y)\phi(X_{1},X_{2})\right]=\mathbb{E}_{P_{X _{1},X_{2}}P_{Y}}\left[(1+\mathrm{i}_{X_{1},X_{2};Y}(X_{1},X_{2},Y))\cdot \psi(Y)\phi(X_{1},X_{2})\right]\] \[=\langle\mathfrak{i}_{X_{1},X_{2};Y},\phi\otimes\psi\rangle\] \[=\langle\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y}),\phi\otimes \psi\rangle+\langle\pi_{\mathsf{I}}(\mathfrak{i}_{X_{1},X_{2};Y}),\phi\otimes\psi\rangle\] \[=\langle\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y}),\phi\otimes \psi\rangle,\] where the first equality follows from the definition of \(\mathfrak{i}_{X_{1},X_{2};Y}\), and where the last equality follow from the fact that \(\pi_{\mathsf{I}}(\mathfrak{i}_{X_{1},X_{2};Y})\perp(\mathcal{F}_{\mathcal{X}_ {1}\times\mathfrak{Y}}+\mathcal{F}_{\mathcal{X}_{2}\times\mathfrak{Y}})\ni \phi\otimes\psi\). Therefore, we can rewrite the objective (89) as \[\mathbb{E}\left[\left(\psi(Y)-\phi^{(1)}(X_{1})-\phi^{(2)}(X_{2 })\right)^{2}\right] =\mathbb{E}\left[\left(\psi(Y)-\phi(X_{1},X_{2})\right)^{2}\right]\] \[=\|\psi\|^{2}+\|\phi\|^{2}-2\cdot\mathbb{E}\left[\psi(Y)\phi(X_{ 1},X_{2})\right]\] \[=1+\|\phi\otimes\psi\|^{2}-2\langle\pi_{\mathsf{B}}(\mathfrak{i} _{X_{1},X_{2};Y}),\phi\otimes\psi\rangle\] \[=1+\|\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})-\phi\otimes \psi\|^{2}-\|\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})\|^{2}.\] Finally, the proof is completed by noting that \(\|\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})\|^{2}=\sum_{i=1}^{K}\bar{\sigma }_{i}^{2}\), and we have \[\|\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y})-\phi\otimes\psi\|^{2}\geq\|r _{1}(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y}))\|^{2}=\sum_{i=2}^{K}\bar{ \sigma}_{i}^{2},\] where the inequality becomes equality if and only if \(\phi\otimes\psi=\zeta(\pi_{\mathsf{B}}(\mathfrak{i}_{X_{1},X_{2};Y}))=\bar{ \sigma}_{1}(\bar{f}_{1}^{*}\otimes\bar{g}_{1}^{*})\). ### Proof of Theorem 32 We first introduce a useful lemma. **Lemma 38**: _For all \(k\geq 1,f\in\mathcal{F}_{\mathcal{X}}^{k},g\in\mathcal{F}_{\mathcal{Y}}^{k}\), let \(\bar{h}\triangleq\Pi\left(f;\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{ \mathcal{X}_{2}}\right)\), \(h=f-\bar{h}\). Then, we have_ \[\mathscr{H}_{\mathrm{m}}(f,g)=\frac{1}{2}\cdot\left[L(R_{X_{1},X_{2},Y})-\eta_ {0}\cdot\big{\|}\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}} )-h\otimes g\big{\|}^{2}-L_{\mathsf{B}}(\bar{h}\otimes g)\right], \tag{169}\] _where we have defined_ \[L_{\mathsf{B}}(\gamma)\triangleq\eta_{0}\cdot\big{\|}\pi_{\mathsf{B}}(\tilde{ \ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}})-\pi_{\mathsf{B}}(\gamma)\big{\|}^{2}+ \eta_{1}\cdot\big{\|}\tilde{\ell}_{\hat{P}_{X_{1},Y}^{(1)}}-\pi_{\mathsf{M}_{ 1}}(\gamma)\big{\|}^{2}+\eta_{2}\cdot\big{\|}\tilde{\ell}_{\hat{P}_{X_{2},Y}^{ (2)}}-\pi_{\mathsf{M}_{2}}(\gamma)\big{\|}^{2}. 
\tag{170}\] **Proof** By definition, we have \[\mathscr{H}(f,g;\hat{P}_{X_{1},X_{2},Y}^{(0)}) =\frac{1}{2}\left(\|\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}} \big{\|}^{2}-\big{\|}\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}}-f\otimes g \big{\|}^{2}\right)\] \[=\frac{1}{2}\left(\|\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}} \big{\|}^{2}-\big{\|}(\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0 )}})-h\otimes g)+(\pi_{\mathsf{B}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)} })-\bar{h}\otimes g)\big{\|}^{2}\right)\] \[=\frac{1}{2}\left(\|\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}} \big{\|}^{2}-\big{\|}\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0 )}})-h\otimes g\big{\|}^{2}-\big{\|}\pi_{\mathsf{B}}(\tilde{\ell}_{\hat{P}_{X_{1 },X_{2},Y}^{(0)}})-\bar{h}\otimes g\big{\|}^{2}\right), \tag{171}\] where to obtain the last equality we have used the orthogonality between \(\left(\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}})-h\otimes g \right)\perp(\mathcal{F}_{\mathcal{X}_{1}\times\mathfrak{Y}}+\mathcal{F}_{ \mathcal{X}_{2}\times\mathfrak{Y}})\) and \(\left(\pi_{\mathsf{B}}(\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}})-\bar{h} \otimes g\right)\in(\mathcal{F}_{\mathcal{X}_{1}\times\mathfrak{Y}}+\mathcal{F}_ {\mathcal{X}_{2}\times\mathfrak{Y}})\). In addition, for each \(i=1,2\), from \(\tau_{i}(f)=\tau_{i}(\bar{h})\) we have \[\mathscr{H}(\tau_{i}(f),g;\hat{P}^{(i)}_{X_{i},Y})=\mathscr{H}(\tau_ {i}(\bar{h}),g;\hat{P}^{(i)}_{X_{i},Y}) =\frac{1}{2}\left(\left\|\tilde{\ell}_{\hat{P}^{(i)}_{X_{i},Y}} \right\|^{2}-\left\|\tilde{\ell}_{\hat{P}^{(i)}_{X_{i},Y}}-\tau_{\bar{i}}(\bar {h})\otimes g\right\|^{2}\right),\] \[=\frac{1}{2}\left(\left\|\tilde{\ell}_{\hat{P}^{(i)}_{X_{i},Y}} \right\|^{2}-\left\|\tilde{\ell}_{\hat{P}^{(i)}_{X_{i},Y}}-\pi_{\mathsf{M}_{i} }(\bar{h}\otimes g)\right\|^{2}\right), \tag{172}\] where the last equality follows from that \(\pi_{\mathsf{M}_{i}}(\bar{h}\otimes g)=\tau_{i}(\bar{h})\otimes g\). From (171) and (172), we can rewrite (92) as \[\mathscr{H}_{\text{m}}(f,g) =\eta_{0}\cdot\mathscr{H}(f,g;\hat{P}^{(0)}_{X_{1},X_{2},Y})+ \eta_{1}\cdot\mathscr{H}\left(\tau_{1}(f),g;\hat{P}^{(1)}_{X_{1},Y}\right)+ \eta_{2}\cdot\mathscr{H}\left(\tau_{2}(f),g;\hat{P}^{(2)}_{X_{2},Y}\right)\] \[=\frac{\eta_{0}}{2}\cdot\left(\left\|\tilde{\ell}_{\hat{P}^{(0)} _{X_{1},X_{2},Y}}\right\|^{2}-\left\|\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}^{ (0)}_{X_{1},X_{2},Y}})-h\otimes g\right\|^{2}-\left\|\pi_{\mathsf{B}}(\tilde{ \ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}})-\bar{h}\otimes g\right\|^{2}\right)\] \[\qquad+\sum_{i=1}^{2}\frac{\eta_{i}}{2}\cdot\left(\left\|\tilde{ \ell}_{\hat{P}^{(i)}_{X_{i},Y}}\right\|^{2}-\left\|\tilde{\ell}_{\hat{P}^{(i) }_{X_{1},Y}}-\pi_{\mathsf{M}_{i}}(\bar{h}\otimes g)\right\|^{2}\right)\] \[=\frac{1}{2}\cdot\left[L(R_{X_{1},X_{2},Y})-\eta_{0}\cdot\left\| \pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}})-h\otimes g \right\|^{2}\right]\] \[\qquad-\frac{1}{2}\left[\eta_{0}\cdot\left\|\pi_{\mathsf{B}}( \tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}})-\bar{h}\otimes g\right\|^{2}+ \sum_{i=1}^{2}\eta_{i}\cdot\left\|\tilde{\ell}_{\hat{P}^{(i)}_{X_{1},Y}}-\pi_ {\mathsf{M}_{i}}(\bar{h}\otimes g)\right\|^{2}\right]\] \[=\frac{1}{2}\cdot\left[L(R_{X_{1},X_{2},Y})-\eta_{0}\cdot\left\| \pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}})-h\otimes g \right\|^{2}-L_{\mathsf{B}}(\bar{h}\otimes g)\right],\] where the last equality follows from the fact that \(\bar{h}\otimes g=\pi_{\mathsf{B}}(\bar{h}\otimes g)\). 
Let use define \(\bar{h}\triangleq\Pi\left(f;\mathcal{F}_{\mathcal{X}_{1}}+\mathcal{F}_{ \mathcal{X}_{2}}\right)\) and \(h\triangleq f-\bar{h}\). Then, from Lemma 38, we have \[\mathscr{H}_{\text{m}}(\bar{f},\bar{g})=\frac{1}{2}\cdot\left[L(R_{X_{1},X_{2},Y})-\eta_{0}\cdot\left\|\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X _{2},Y}})\right\|^{2}-L_{\mathsf{B}}(\bar{f}\otimes\bar{g})\right], \tag{173}\] and, similarly, \[\mathscr{H}_{\text{m}}\left(\begin{bmatrix}\bar{f}\\ f\end{bmatrix},\begin{bmatrix}\bar{g}\\ g\end{bmatrix}\right)=\frac{1}{2}\cdot\left[L(R_{X_{1},X_{2},Y})-\eta_{0} \cdot\left\|\pi_{\mathsf{I}}(\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}})-h \otimes g\right\|^{2}-L_{\mathsf{B}}(\bar{f}\otimes\bar{g}+\bar{h}\otimes g) \right]. \tag{174}\] Therefore, (173) is minimized if and only if \[\bar{f}\otimes\bar{g}=\pi_{\mathsf{B}}\Big{(}\tilde{\ell}_{\hat{P}^{(\text{out })}_{X_{1},X_{2},Y}}\Big{)}, \tag{175}\] and (174) is minimized if and only if \[\bar{f}\otimes\bar{g}+\bar{h}\otimes g=\pi_{\mathsf{B}}\Big{(}\tilde{\ell}_{ \hat{P}^{(\text{out})}_{X_{1},X_{2},Y}}\Big{)},\qquad h\otimes g=\zeta_{\leq k }\left(\pi_{\mathsf{I}}\Big{(}\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}} \Big{)}\right). \tag{176}\] Hence, the common solution of (175) and (176) is \[\bar{f}\otimes\bar{g}=\pi_{\mathsf{B}}\Big{(}\tilde{\ell}_{\hat{P}^{(\text{out })}_{X_{1},X_{2},Y}}\Big{)},\qquad\bar{h}\otimes g=0,\qquad h\otimes g=\zeta_ {\leq k}\left(\pi_{\mathsf{I}}\Big{(}\tilde{\ell}_{\hat{P}^{(0)}_{X_{1},X_{2},Y}} \Big{)}\right), \tag{177}\] which is equivalent to (94). Finally, from Proposition 35, this is also the solution that maximizes the nested H-score (93). ### Proof of Theorem 33 We first introduce two useful facts. **Fact 6** (Cover and Thomas 2006, Theorem 11.1.2): _Given \(m\) samples \(Z_{1},\ldots,Z_{m}\) i.i.d. generated from \(Q_{Z}\in\mathcal{P}^{\mathbb{Z}}\), th probability of observing \(\{Z_{i}\}_{i=1}^{m}=\{z_{i}\}_{i=1}^{m}\), denoted by \(\mathbb{P}\left\{\{z_{i}\}_{i=1}^{m};Q_{Z}\right\}\), satisfies \(-\log\mathbb{P}\left\{\{z_{i}\}_{i=1}^{m};Q_{Z}\right\}=m\left[H(\hat{P}_{Z})+ D(\hat{P}_{Z}\|Q_{Z})\right]\), where \(\hat{P}_{Z}\) is the empirical distribution of \(\{z_{i}\}_{i=1}^{m}\)._ **Fact 7** (Huang et al. 2019, Lemma 10): _Given a reference distribution \(R\in\mathrm{relint}(\mathcal{P}^{\mathbb{Z}})\). 
Then, for \(P,Q\in\mathcal{P}^{\mathbb{Z}}\) with \(\|\tilde{\ell}_{P;R}\|=O(\epsilon)\), \(\|\tilde{\ell}_{Q;R}\|=O(\epsilon)\), we have_ \[D(P\|Q)=\frac{1}{2}\cdot\|\tilde{\ell}_{P;R}-\tilde{\ell}_{Q;R}\|^{2}+o( \epsilon^{2}).\] Note that since the data from three different datasets are generated independently, we have \[\mathbb{P}\left\{\mathcal{D}_{0},\mathcal{D}_{1},\mathcal{D}_{2 };Q_{X_{1},X_{2},Y}\right\} =\mathbb{P}\left\{\mathcal{D}_{0};Q_{X_{1},X_{2},Y}\right\}\cdot \mathbb{P}\left\{\mathcal{D}_{1};Q_{X_{1},X_{2},Y}\right\}\cdot\mathbb{P} \left\{\mathcal{D}_{2};Q_{X_{1},X_{2},Y}\right\}\] \[=\mathbb{P}\left\{\mathcal{D}_{0};Q_{X_{1},X_{2},Y}\right\}\cdot \mathbb{P}\left\{\mathcal{D}_{1};Q_{X_{1},Y}\right\}\cdot\mathbb{P}\left\{ \mathcal{D}_{2};Q_{X_{2},Y}\right\}.\] Therefore, from Fact 6, \[-\log\mathbb{P}\left\{\mathcal{D}_{0},\mathcal{D}_{1},\mathcal{D}_ {2};Q_{X_{1},X_{2},Y}\right\}\] \[\qquad=-\log\mathbb{P}\left\{\mathcal{D}_{0};Q_{X_{1},X_{2},Y} \right\}-\log\mathbb{P}\left\{\mathcal{D}_{1};Q_{X_{1},Y}\right\}-\log\mathbb{ P}\left\{\mathcal{D}_{2};Q_{X_{2},Y}\right\}\] \[\qquad\qquad+n_{0}\cdot H(\hat{P}_{X_{1},X_{2},Y}^{(0)})+n_{1} \cdot H(\hat{P}_{X_{1},Y}^{(1)})+n_{2}\cdot H(\hat{P}_{X_{2},Y}^{(2)})\] \[\qquad=n\cdot L^{(\mathrm{ML})}(Q_{X_{1},X_{2},Y})+n_{0}\cdot H( \hat{P}_{X_{1},X_{2},Y}^{(0)})+n_{1}\cdot H(\hat{P}_{X_{1},Y}^{(1)})+n_{2} \cdot H(\hat{P}_{X_{2},Y}^{(2)})\] where the second equality follows from (6), and where we have defined \[L^{(\mathrm{ML})}(Q_{X_{1},X_{2},Y})\triangleq\eta_{0}\cdot D\big{(}\hat{P}_{ X_{1},X_{2},Y}^{(0)}\big{\|}Q_{X_{1},X_{2},Y}\big{)}+\eta_{1}\cdot D\big{(}\hat{P}_{ X_{1},Y}^{(1)}\big{\|}Q_{X_{1},Y}\big{)}+\eta_{2}\cdot D\big{(}\hat{P}_{X_{2},Y}^{(2)} \big{\|}Q_{X_{2},Y}\big{)}.\] Hence, we can rewrite the maximum likelihood solution \(P_{X_{1},X_{2},Y}^{(\mathrm{ML})}\) as \[P_{X_{1},X_{2},Y}^{(\mathrm{ML})}=\operatorname*{arg\,max}_{Q_{X_{1},X_{2},Y}} \ \mathbb{P}\left\{\mathcal{D}_{0},\mathcal{D}_{1},\mathcal{D}_{2};Q_{X_{1},X_{2},Y}\right\}=\operatorname*{arg\,min}_{Q_{X_{1},X_{2},Y}}L^{(\mathrm{ML})}(Q_{ X_{1},X_{2},Y}).\] From \(L(R_{X_{1},X_{2},Y})=O(\epsilon^{2})\), we obtain \(\big{\|}\tilde{\ell}_{\hat{P}_{X_{1},X_{2},Y}^{(0)}}\big{\|}=O(\epsilon),\big{\|} \tilde{\ell}_{\hat{P}_{X_{1},Y}^{(1)}}\big{\|}=O(\epsilon),\big{\|}\tilde{ \ell}_{\hat{P}_{X_{2},Y}^{(2)}}\big{\|}=O(\epsilon)\). By definition, we have \(L(P_{X_{1},X_{2},Y}^{(\mathrm{est})})<L(R_{X_{1},X_{2},Y})=O(\epsilon^{2})\), which implies that \(\big{\|}\tilde{\ell}_{P_{X_{1},X_{2},Y}^{(\mathrm{est})}}\big{\|}=O(\epsilon)\). We first consider \(Q_{X_{1},X_{2},Y}\) with \[\big{\|}\tilde{\ell}_{Q_{X_{1},X_{2},Y}}-\tilde{\ell}_{P_{X_{1},X_{2},Y}^{( \mathrm{est})}}\big{\|}\leq\epsilon. 
\tag{178}\] Then, we have \[\big{\|}\tilde{\ell}_{Q_{X_{1},X_{2},Y}}\big{\|}\leq\big{\|}\tilde{\ell}_{Q_{X_{1 },X_{2},Y}}-\tilde{\ell}_{P_{X_{1},X_{2},Y}^{(\mathrm{est})}}\big{\|}+\big{\|} \tilde{\ell}_{P_{X_{1},X_{2},Y}^{(\mathrm{est})}}\big{\|}=O(\epsilon), \tag{179}\] and it follows from Fact 7 that \[L^{(\mathrm{ML})}(Q_{X_{1},X_{2},Y}) =\eta_{0}\cdot D\big{(}\hat{P}^{(0)}_{X_{1},X_{2},Y}\big{\|}Q_{X_{1 },X_{2},Y}\big{)}+\eta_{1}\cdot D\big{(}\hat{P}^{(1)}_{X_{1},Y}\big{\|}Q_{X_{1 },Y}\big{)}+\eta_{2}\cdot D\big{(}\hat{P}^{(2)}_{X_{2},Y}\big{\|}Q_{X_{2},Y} \big{)}\] \[=\frac{1}{2}\Big{(}\eta_{0}\cdot\big{\|}\tilde{\ell}_{\hat{P}^{(0 )}_{X_{1},X_{2},Y}}-\tilde{\ell}_{Q_{X_{1},X_{2},Y}}\big{\|}^{2}+\eta_{1}\cdot \big{\|}\tilde{\ell}_{\hat{P}^{(1)}_{X_{1},Y}}-\tilde{\ell}_{Q_{X_{1},Y}}\big{\|} ^{2}\] \[\qquad\quad+\eta_{2}\cdot\big{\|}\tilde{\ell}_{\hat{P}^{(2)}_{X_ {2},Y}}-\tilde{\ell}_{Q_{X_{2},Y}}\big{\|}^{2}\Big{)}+o(\epsilon^{2})\] \[=\frac{1}{2}\cdot L(Q_{X_{1},X_{2},Y})+o(\epsilon^{2}). \tag{180}\] Therefore, for \(Q_{X_{1},X_{2},Y}\) that satisfies (178), the minimum value of \(L^{(\mathrm{ML})}(Q_{X_{1},X_{2},Y})\) is achieved by \(Q_{X_{1},X_{2},Y}=P^{(\mathrm{est})}_{X_{1},X_{2},Y}+o(\epsilon)\). Now we consider the case of \(Q_{X_{1},X_{2},Y}\) with \(\big{\|}\tilde{\ell}_{Q_{X_{1},X_{2},Y}}-\tilde{\ell}_{P^{(\mathrm{est})}_{X_ {1},X_{2},Y}}\big{\|}>\epsilon\). Let \(\epsilon^{\prime}=\epsilon/\big{\|}\tilde{\ell}_{Q_{X_{1},X_{2},Y}}-\tilde{ \ell}_{P^{(\mathrm{est})}_{X_{1},X_{2},Y}}\big{\|}<1\) and define \[\bar{Q}_{X_{1},X_{2},Y}\triangleq\epsilon^{\prime}\cdot Q_{X_{1},X_{2},Y}+(1- \epsilon^{\prime})\cdot P^{(\mathrm{est})}_{X_{1},X_{2},Y}. \tag{181}\] Then, we have \(\tilde{\ell}_{\bar{Q}_{X_{1},X_{2},Y}}=\epsilon^{\prime}\cdot\tilde{\ell}_{Q_ {X_{1},X_{2},Y}}+(1-\epsilon^{\prime})\cdot\tilde{\ell}_{P^{(\mathrm{est})}_{ X_{1},X_{2},Y}}\), which implies \[\big{\|}\tilde{\ell}_{Q_{X_{1},X_{2},Y}}-\tilde{\ell}_{P^{(\mathrm{est})}_{X_ {1},X_{2},Y}}\big{\|}=\epsilon^{\prime}\cdot\big{\|}\tilde{\ell}_{Q_{X_{1},X_{ 2},Y}}-\tilde{\ell}_{P^{(\mathrm{est})}_{X_{1},X_{2},Y}}\big{\|}=\epsilon. \tag{182}\] As a result, we can apply the same analysis on \(\bar{Q}_{X_{1},X_{2},Y}\) and obtain [cf. (180)] \[L^{(\mathrm{ML})}(\bar{Q}_{X_{1},X_{2},Y})=\frac{1}{2}\cdot L(\bar{Q}_{X_{1}, X_{2},Y})+o(\epsilon^{2}). \tag{183}\] Hence, we obtain from (182) that \[L^{(\mathrm{ML})}(\bar{Q}_{X_{1},X_{2},Y})>L^{(\mathrm{ML})}(P^{(\mathrm{est}) }_{X_{1},X_{2},Y})=\frac{1}{2}\cdot L(P^{(\mathrm{est})}_{X_{1},X_{2},Y})+o( \epsilon^{2}) \tag{184}\] for \(\epsilon\) sufficiently small. In addition, since \(L^{(\mathrm{ML})}\) is convex, it follows from Jensen's inequality that (cf. (181)) \[L^{(\mathrm{ML})}(\bar{Q}_{X_{1},X_{2},Y})\leq\epsilon^{\prime}\cdot L^{( \mathrm{ML})}(Q_{X_{1},X_{2},Y})+(1-\epsilon^{\prime})\cdot L^{(\mathrm{ML})} (P^{(\mathrm{est})}_{X_{1},X_{2},Y}). \tag{185}\] As a result, from (184) and (185) we have \(L^{(\mathrm{ML})}(Q_{X_{1},X_{2},Y})>L^{(\mathrm{ML})}(P^{(\mathrm{est})}_{X_ {1},X_{2},Y})\). Combining both cases of \(Q_{X_{1},X_{2},Y}\), we obtain (96). 
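The local quadratic approximation of KL divergence in Fact 7, which drives the estimates above, can also be checked numerically. The following NumPy sketch assumes the convention \(\tilde{\ell}_{P;R}(z)=(P(z)-R(z))/R(z)\) together with the \(R\)-weighted norm, so that \(\frac{1}{2}\|\tilde{\ell}_{P;R}-\tilde{\ell}_{Q;R}\|^{2}=\frac{1}{2}\sum_{z}(P(z)-Q(z))^{2}/R(z)\); this reading of the notation, and the script itself, are illustrative assumptions rather than part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Reference distribution R on a small alphabet.
r = rng.uniform(size=6)
r /= r.sum()

# Two small multiplicative perturbations P = R(1 + eps*a), Q = R(1 + eps*b),
# with E_R[a] = E_R[b] = 0 so that P and Q still sum to one.
eps = 1e-2
a = rng.normal(size=6); a -= np.sum(r * a)
b = rng.normal(size=6); b -= np.sum(r * b)
p = r * (1 + eps * a)
q = r * (1 + eps * b)

# Fact 7 (under the assumed convention): D(P||Q) ~= 0.5 * sum_z (P(z)-Q(z))^2 / R(z).
approx = 0.5 * np.sum((p - q) ** 2 / r)
print(kl(p, q), approx)  # the two values agree up to o(eps^2)
```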
### Proof of Proposition 34 It suffices to establish that there exist \(\alpha\colon\mathcal{U}\to\mathbb{R}\), \(\beta\colon\mathcal{V}\to\mathbb{R}\), such that \[\mathrm{i}_{\underline{X};U}(\underline{x},u) =\alpha(u)\cdot\left[\tanh\left(2w\cdot\varphi(\underline{x})+b_{U} \right)-\tanh(b_{U})\right], \tag{186a}\] \[\mathrm{i}_{\underline{Y};V}(\underline{y},v) =\beta(v)\cdot\left[\tanh\left(2w\cdot\varphi(\underline{y})+b_{V} \right)-\tanh(b_{V})\right]. \tag{186b}\] To see this, note that from the Markov relation \(\underline{X}-U-V-\underline{Y}\), we have \[P_{\underline{X},\underline{Y}}(\underline{x},\underline{y})=\sum_{u\in \mathcal{U},v\in\mathcal{V}}P_{\underline{X}|U}(\underline{x}|u)\cdot P_{ \underline{Y}|V}(\underline{y}|v)\cdot P_{UV}(u,v)\] \[=P_{\underline{X}}(\underline{x})P_{\underline{Y}}(\underline{y})\sum_ {u\in\mathbb{U},v\in\mathcal{V}}P_{U,V}(u,v)\cdot(1+\mathrm{i}_{\underline{X};U}( \underline{x},u))\cdot(1+\mathrm{i}_{\underline{Y};V}(\underline{y},v))\] \[=P_{\underline{X}}(\underline{x})P_{\underline{Y}}(\underline{y}) \left(1+\sum_{u\in\mathbb{U},v\in\mathcal{V}}P_{U,V}(u,v)\cdot\mathrm{i}_{ \underline{X};U}(\underline{x},u)\cdot\mathrm{i}_{\underline{Y};V}(\underline{ y},v)\right),\] where to obtain the last equality we have used the fact that \[\sum_{u\in\mathbb{U}}P_{U}(u)\cdot\mathrm{i}_{\underline{X};U}(\underline{x}, u)=\sum_{v\in\mathcal{V}}P_{V}(v)\cdot\mathrm{i}_{\underline{Y};V}(\underline{y},v)=0.\] Therefore, we obtain \[\mathrm{i}_{\underline{X};\underline{Y}}(\underline{x},\underline{ y}) =\frac{P_{\underline{X},\underline{Y}}(\underline{x},\underline{ y})}{P_{\underline{X}}(\underline{x})P_{\underline{Y}}(\underline{y})}-1\] \[=\sum_{u\in\mathbb{U},v\in\mathcal{V}}P_{U,V}(u,v)\cdot\mathrm{i} _{\underline{X};U}(\underline{x},u)\cdot\mathrm{i}_{\underline{Y};V}( \underline{y},v)\] \[=\mathbb{E}\left[\alpha(U)\beta(V)\right]\cdot\left[\tanh\left(2 w\cdot\varphi(\underline{x})+b_{U}\right)-\tanh(b_{U})\right]\cdot\left[\tanh \left(2w\cdot\varphi(\underline{y})+b_{V}\right)-\tanh(b_{V})\right],\] which gives (102). It remains only to establish (186). For symmetry, we consider only (186a). 
To begin, by definition, we have \[P_{\underline{X}|U=u}(\underline{x})=\frac{1}{2}\cdot\prod_{i=1}^{l-1}\left[q _{u}^{(1-\delta_{x_{i}x_{i+1}})}(1-q_{u})^{\delta_{x_{i}x_{i+1}}}\right],\] which implies \[\log P_{\underline{X}|U=u}(\underline{x})=\log\frac{1}{2}+\log q_{u}\cdot\sum_ {i=1}^{l-1}(1-\delta_{x_{i}x_{i+1}})+\log(1-q_{u})\cdot\sum_{i=1}^{l-1}\delta _{x_{i}x_{i+1}}.\] Therefore, we obtain \[\frac{1}{2}\log\frac{P_{\underline{X}|U=1}(\underline{x})}{P_{ \underline{X}|U=0}(\underline{x})} =\frac{1}{2}\left[\log P_{\underline{X}|U=1}(\underline{x})-\log P _{\underline{X}|U=0}(\underline{x})\right]\] \[=\frac{1}{2}\left[\log\frac{q_{1}}{q_{0}}\cdot\sum_{i=1}^{l-1}(1- \delta_{x_{i}x_{i+1}})+\log\frac{1-q_{1}}{1-q_{0}}\cdot\sum_{i=1}^{l-1}\delta _{x_{i}x_{i+1}}\right]\] \[=\log\frac{q_{1}}{q_{0}}\cdot\left(\frac{l-1}{2}-\sum_{i=1}^{l-1} \delta_{x_{i}x_{i+1}}\right)\] \[=2w\cdot\varphi(\underline{x}).\] As a consequence, \[\frac{P_{\underline{X}|U=1}(\underline{x})-P_{\underline{X}|U=0}( \underline{x})}{P_{\underline{X}}(\underline{x})} =\frac{P_{\underline{X}|U=1}(\underline{x})-P_{\underline{X}|U=0} (\underline{x})}{P_{U}(1)\cdot P_{\underline{X}|U=1}(\underline{x})+P_{U}(0) \cdot P_{\underline{X}|U=0}(\underline{x})}\] \[=\frac{1}{P_{U}(0)}\cdot\frac{P_{\underline{X}|U=1}(\underline{x} )}{\frac{P_{\underline{X}|U=0}(\underline{x})}{P_{\underline{X}|U=0}( \underline{x})}-1}\] \[=\frac{1}{2P_{U}(0)P_{U}(1)}\cdot\left[\frac{\frac{P_{U}(1)}{P_{U}(0)} \cdot\frac{P_{\underline{X}|U=1}(\underline{x})}{P_{\underline{X}|U=0}( \underline{x})}-1}{\frac{P_{U}(1)}{P_{U}(0)}\cdot\frac{P_{\underline{X}|U=1}( \underline{x})}{P_{\underline{X}|U=0}(\underline{x})}+1}-\frac{\frac{P_{U}(1)} {P_{U}(0)}-1}{\frac{P_{U}(1)}{P_{U}(0)}+1}\right]\] \[=\frac{1}{2P_{U}(0)P_{U}(1)}\cdot\left[\tanh\left(2w\cdot\varphi( \underline{x})+b_{U}\right)-\tanh(b_{U})\right],\] where to obtain the last equality we have used the fact that \(\frac{t-1}{t+1}=\tanh\left(\frac{1}{2}\log t\right)\). Hence, with \(u^{\prime}=1-u\), we have \[\mathrm{i}_{\underline{X};U}(\underline{x},u) =\frac{P_{\underline{X},U}(\underline{x},u)-P_{\underline{X}}( \underline{x})P_{U}(u)}{P_{\underline{X}}(\underline{x})P_{U}(u)}\] \[=\frac{P_{\underline{X}|U=u}(\underline{x})-P_{\underline{X}}( \underline{x})}{P_{\underline{X}}(\underline{x})}\] \[=\frac{P_{\underline{X}|U=u}(\underline{x})-P_{U}(u)P_{ \underline{X}|U=u}(\underline{x})-P_{U}(u^{\prime})P_{\underline{X}|U=u^{ \prime}}(\underline{x})}{P_{\underline{X}}(\underline{x})}\] \[=P_{U}(u^{\prime})\cdot\frac{P_{\underline{X}|U=u}(\underline{x}) -P_{\underline{X}|U=u^{\prime}}(\underline{x})}{P_{\underline{X}}(\underline{ x})}\] \[=P_{U}(u^{\prime})\cdot(-1)^{u+1}\cdot\frac{P_{\underline{X}|U=1 }(\underline{x})-P_{\underline{X}|U=0}(\underline{x})}{P_{\underline{X}}( \underline{x})}\] \[=\frac{(-1)^{u+1}}{2P_{U}(u)}\cdot\left[\tanh\left(2w\cdot \varphi(\underline{x})+b_{U}\right)-\tanh(b_{U})\right],\] which gives (186a) as desired, with \(\alpha(u)=\frac{(-1)^{u+1}}{2P_{U}(u)}\). ## Appendix D Implementation Details of Experiments We implement our experiments in Python 3 (Van Rossum and Drake, 2009), where we use the PyTorch(Paszke et al., 2019) library for neural network training and use the Matplotlib(Hunter, 2007) library for plotting. We also make use of NumPy(Harris et al., 2020) and SciPy(Virtanen et al., 2020) for the computation. 
In the experiments, we apply Adam (Kingma and Ba, 2015) as the optimizer with the default parameters: a learning rate of \(10^{-3}\), \(\beta_{1}=0.9,\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). For each MLP (multilayer perceptron) used in the experiments, we set the activation function to be the softplus function \(x\mapsto\log(1+e^{x})\), which are applied to all layers except the output layer. It is worth mentioning that our choices of network architectures, optimizers and hyperparameters are not tuned, and thus might not be optimal. It is possible to further optimize such choices to improve the performance or convergence. ### Learning Maximal Correlation Functions We first introduce the implementation details for Section 7.1, where the goal is to learn maximal correlation functions for different data. The corresponding learning objective is the nested H-score (35), which are maximized during the training. #### d.1.1 Implementation of Section 7.1.1 We set \(|\mathcal{X}|=8,|\mathcal{Y}|=6\), and feature dimension \(k=3\). To generate the discrete distributions \(P_{X,Y}\), we draw \((|\mathcal{X}|\cdot|\mathcal{Y}|)\) i.i.d. numbers from \(\text{Unif}[0,1]\) and divide each number by their sum. We then use the resulting \(|\mathcal{X}|\times|\mathcal{Y}|\) table as the values for the probability mass function \(P_{X,Y}\). To ensure reproducible results, we set the random seed of NumPy to \(20\,230\,606\) in the generating process. Then, we generate \(N=30\,000\) training sample pairs of \((X,Y)\) from \(P_{X,Y}\), then apply one-hot encoding such that the inputs are represented as \(|\mathcal{X}|\) and \(|\mathcal{Y}|\) dimensional vectors. Then, we use two one-layer linear networks as the feature extractors \(f\) and \(g\). We train the networks with a minibatch size of 128 for 100 epochs. Then, we obtain the estimated \(f_{i}^{*}\), \(g_{i}^{*}\), and \(\sigma_{i}\) by applying (37) and compare them with corresponding theoretical values, which we compute from the SVD of corresponding CDM matrix [cf. (113)], with the results shown in Figure 13. Note that since \(f_{i}^{*}\otimes g_{i}^{*}=(-f_{i}^{*})\otimes(-g_{i}^{*})\), both \((f_{i}^{*},g_{i}^{*})\) and \((-f_{i}^{*},-g_{i}^{*})\) are the optimal feature pairs. For the sake of presentation, we applied a sign modification before the plotting. #### d.1.2 Implementation of Section 7.1.2 In this experiment, we first generate \(N=50\,000\) samples of \((X,Y)\in\mathbb{R}^{2}\) for training, to learn \(k=2\) dimensional features \(f\) and \(g\). We use two MLPs of the same architecture as the feature extractors for \(f\) and \(g\). Specifically, each MLP is with three layers, where the dimensions for all intermediate features, from input to output, are: \(\text{input}=1\) - 32 - 32 - 2 = output. We then train the networks with a minibatch size of 256 for 100 epochs and use the learned features for estimation tasks, as demonstrated in Section 7.1.2. #### d.1.3 Implementation of Section 7.1.3 In this experiment, we set \(k=1\). To extract \(f\) and \(g\) from input sequences \(\underline{X}\) and \(\underline{Y}\), we use one-dimensional convolutional neural networks as the feature extractors, which are used in sentence classifications (Kim, 2014; Zhang and Wallace, 2017). In particular, \(f\) and \(g\) are of the same architecture, composed of an embedding (linear) layer, a 1 dimensional convolutional layer, an average pooling layer, and a fully connected (linear) layer. 
We use the feature extractor \(f\) as an example to illustrate the processing of sequential data. First, we represent the \(\underline{x}\) sequence as a one-hot encoded list, i.e., each \(x_{i}\in\{(1,0)^{\text{T}},(0,1)^{\text{T}}\}\). Then, the embedding layer maps each \(x_{i}\) to a 4-dimensional vector. The one-dimensional convolutional layer then processes the length-\(l\) list of embedded 4-dimensional vectors, by 32 convolutional kernels of size 4. We then activate the convolution results by the ReLU function \(x\mapsto\max\{x,0\}\). The output from each convolutional kernel is further averaged by the average pooling layer, leading to a 32-dimensional feature, with each dimension corresponding to a convolutional kernel. Finally, we feed the 32-dimensional feature to the fully connected layer and generate the \(k=1\) dimensional output. Then, we train the feature extractors \(f\) and \(g\) with a minibatch size of 128 for 100 epochs. The learned features are shown in Section 7.1.3. ### Learning With Orthogonality Constraints In this experiment, we use the same dataset generated in Appendix D.1.2. We set \(\bar{k}=k=1\), i.e., we learn a one-dimensional feature \(f\) from \(X\) orthogonal to a given one-dimensional \(\bar{f}\). To this end, we use three MLPs of the same architecture as the feature extractors for \(\bar{g},f,g\), with dimensions input = 1 - 32 - 32 - 1 = output. We then train the networks with a minibatch size of 256 for 100 epochs to maximize the nested H-score restricted to \(\bar{f}=\phi\) [cf. (49)], for \(\phi(x)=x\) and \(\phi(x)=x^{2}\), respectively. ### Learning With Side Information We set \(|\mathcal{X}|=8,|\mathcal{S}|=|\mathcal{Y}|=3\), and generate \(P_{X,S,Y}\) in a manner similar to Appendix D.1.1, with the same random seed. Then, we generate \(N=50\,000\) training samples of \((X,S,Y)\) triples. In our implementation, we set \(\bar{k}=|\mathcal{S}|-1=2\) and \(k=1\), and set feature extractors \(\bar{f}\in\mathcal{F}_{\mathcal{X}}^{2},\bar{g}\in\mathcal{F}_{\mathcal{S}}^{2}\), \(f\in\mathcal{F}_{\mathcal{X}}\), \(g\in\mathcal{F}_{\mathcal{S}\times\mathcal{Y}}\) as corresponding one-layer linear networks with one-hot encoded inputs. In particular, we convert each \((s,y)\) to a unique one-hot vector in \(\mathbb{R}^{|\mathcal{S}|\cdot|\mathcal{Y}|}\) as the input to the network \(g\). Then, we train these feature extractors on the training set with a minibatch size of 256 for 100 epochs, to maximize the nested H-score configured by \(\mathcal{C}_{\text{MC}}\) [cf. (56)]. For comparison, we train a multihead network shown in Figure 10 on the same dataset, with the same minibatch size and epochs. The feature \(f\) is again implemented by a one-layer linear network. In particular, we maximize the log-likelihood function (67) to learn the corresponding feature and weights. Then, we convert the weights to \(g\in\mathcal{F}_{\mathcal{S}\times\mathcal{Y}}\) via the correspondence [cf. (66)] \(g(s,y)=G_{s}(1,y)\). The comparison between the features learned from the two approaches is shown in Figure 19. For the sake of presentation, we have normalized the features before plotting, such that \(f\) and each \(g(s,\cdot)\) are zero-mean, and unit variance with respect to \(\text{Unif}(\mathcal{X})\) and \(\text{Unif}(\mathcal{Y})\), respectively. ### Multimodal Learning With Missing Modalities #### d.4.1 Implementation of Section 7.4.1 We first generate \(N=50\,000\) triples of \((X_{1},X_{2},Y)\) for training. In implementing the algorithm, we set \(\bar{k}=k=1\). 
To represent \(\bar{f}\in\mathcal{F}_{\mathcal{X}}\), we set each \(\bar{f}^{(i)}\), \(i\in\{1,2\}\), as an MLP with dimensions input = 1 - 32 - 32 - 32 - 1 = output. To represent \(f\), we use an MLP with dimensions input = 2 - 32 - 32 - 1 = output, with the input set to the concatenated pair \((X_{1},X_{2})\).
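All MLPs in Appendix D follow the same recipe: linear layers with the softplus activation on every layer except the output layer, with the dimensions listed above (e.g., input = 2 - 32 - 32 - 1 for \(f\) here, and 1 - 32 - 32 - 32 - 1 for each \(\bar{f}^{(i)}\)). The following PyTorch sketch is a minimal illustration of this construction; the helper name and the default initialization are ours, not from the experiments.

```python
import torch
import torch.nn as nn

def make_mlp(dims):
    """Linear layers with softplus after every layer except the output layer,
    e.g. dims = [2, 32, 32, 1] for the feature extractor f above."""
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:  # the output layer stays linear
            layers.append(nn.Softplus())
    return nn.Sequential(*layers)

f_net = make_mlp([2, 32, 32, 1])                               # f(x1, x2) in Appendix D.4.1
fbar_nets = [make_mlp([1, 32, 32, 32, 1]) for _ in range(2)]   # the two components of f-bar
g_net = make_mlp([1, 32, 32, 2])                               # e.g. the 2-dimensional extractors of Appendix D.1.2

batch = torch.randn(256, 2)                                    # a minibatch of (X1, X2) pairs
print(f_net(batch).shape)                                      # torch.Size([256, 1])
```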
2310.00230
SLM: Bridge the thin gap between speech and text foundation models
We present a joint Speech and Language Model (SLM), a multitask, multilingual, and dual-modal model that takes advantage of pretrained foundational speech and language models. SLM freezes the pretrained foundation models to maximally preserve their capabilities, and only trains a simple adapter with just 1\% (156M) of the foundation models' parameters. This adaptation not only leads SLM to achieve strong performance on conventional tasks such as speech recognition (ASR) and speech translation (AST), but also introduces the novel capability of zero-shot instruction-following for more diverse tasks: given a speech input and a text instruction, SLM is able to perform unseen generation tasks including contextual biasing ASR using real-time context, dialog generation, speech continuation, and question answering, etc. Our approach demonstrates that the representational gap between pretrained speech and language models might be narrower than one would expect, and can be bridged by a simple adaptation mechanism. As a result, SLM is not only efficient to train, but also inherits strong capabilities already acquired in foundation models of different modalities.
Mingqiu Wang, Wei Han, Izhak Shafran, Zelin Wu, Chung-Cheng Chiu, Yuan Cao, Yongqiang Wang, Nanxin Chen, Yu Zhang, Hagen Soltau, Paul Rubenstein, Lukas Zilka, Dian Yu, Zhong Meng, Golan Pundak, Nikhil Siddhartha, Johan Schalkwyk, Yonghui Wu
2023-09-30T02:27:45Z
http://arxiv.org/abs/2310.00230v1
# SLM: Bridge the Thin Gap Between Speech and Text Foundation Models ###### Abstract We present a joint Speech and Language Model (SLM), a multitask, multilingual, and dual-modal model that takes advantage of pretrained foundational speech and language models. SLM freezes the pretrained foundation models to maximally preserve their capabilities, and only trains a simple adapter with just 1% (156M) of the foundation models' parameters. This adaptation not only leads SLM to achieve strong performance on conventional tasks such as automatic speech recognition (ASR) and automatic speech translation (AST), but also unlocks the novel capability of zero-shot instruction-following for more diverse tasks. Given a speech input and a text instruction, SLM is able to perform unseen generation tasks including contextual biasing ASR using real-time context, dialog generation, speech continuation, and question answering. Our approach demonstrates that the representational gap between pretrained speech and language models is narrower than one would expect, and can be bridged by a simple adaptation mechanism. As a result, SLM is not only efficient to train, but also inherits strong capabilities already present in foundation models of different modalities. Mingqiu Wang, Wei Han, Izhak Shafran, Zelin Wu, Chung-Cheng Chiu, Yuan Cao, Yongqiang Wang, Nanxin Chen, Yu Zhang, Hagen Soltau, Paul K. Rubenstein, Lukas Zilka, Dian Yu, Zhong Meng, Golan Pundak, Nikhil Siddhartha, Johan Schalkwyk, Yonghui Wu Google Deepmind Footnote †: Corresponding author: [email protected] ## 1 Introduction Recent advances in foundation models of text and speech have offered new opportunities to build strong speech-language models without large amounts of paired speech-text data. Text foundation models have demonstrated impressive capabilities and performance on a wide range of language tasks [1, 2], and audio foundation models have recently advanced the state-of-the-art in speech recognition and understanding tasks [3, 4]. Developing effective approaches that unify foundation models of both modalities is a natural way of building strong speech understanding models without requiring a large amount of paired speech-text data. In previous work, a joint Speech Language Model (SLM) [5] was introduced using an adapter-based approach [6] to unify pretrained speech and text models for an end-to-end English dialog understanding task, namely, MultiWoz [7]. In this work, we refine the proposed SLM using multilingual speech and language foundation models to unlock new multitask and 0-shot capabilities. In contrast to the previous version of SLM, in this work the two foundation models are kept frozen to safeguard their inherent capabilities, and an adapter is trained to bridge the two modalities. The adapter takes the output of the speech encoder, applies a uniform subsampling approach to reduce the sequence length, and learns to map the audio representation into the textual representation space that can be interpreted by the frozen LLM. The key contributions of this work are: * A lightweight and efficient approach to glue frozen speech and text foundation models with a simple adapter, maximally preserving the native capabilities in the pretrained foundation models. * A robust and generalizable model that achieves strong performance on a variety of speech tasks including ASR, AST and speech biasing. 
* The proposed system demonstrates novel cross-modality zero-shot instruction-following capabilities, with speech as inputs and text as instructions. We describe our approach and model in Section 3, the training data and tasks in Section 4, and the experiment setup in Section 5; we illustrate several zero-shot capabilities in Section 6.3 and report quantitative results on ASR, AST and biasing tasks in Section 6. Figure 1: SLM consists of a frozen pretrained speech model, a frozen pretrained LLM and an adapter to bridge from speech to textual embeddings. Therefore, SLM extends LLM's instruction-following capabilities beyond text to speech inputs and successfully performs multiple 0-shot tasks. ## 2 Related Work To place this work in the context of the existing literature, we review a few representative models in this realm. In SpeechGPT [8], the speech input is converted into Hubert [9] units which are then treated similarly to the text tokens as input to the LLM. The entire model is then fine-tuned on different speech tasks. The capabilities of the model are illustrated using anecdotal examples, but the quantitative effectiveness of their method is unclear since no metrics on benchmark tasks are reported. The approaches of AudioPaLM and AudioLM [10, 11] focus on pretraining a multimodal (audio and text) foundation model using an extended vocabulary with audio and text tokens to allow audio generation. In contrast, our work focuses on utilizing two frozen pretrained foundation models. Pengi [12] feeds audio into a frozen language model by using the output of a speech encoder, with the encodings being treated as the prefix to the standard text prompt. The focus of Pengi is on acoustic classification of sounds, emotions and music. ImageBind [13] attempts to learn a joint embedding across six modalities including images, text, audio, depth and thermal data. The model expects relevant image data in the input and cannot readily learn cross-modal capabilities, for example, from audio-text paired data. Listen, Think, and Understand (LTU) [14], which was trained on a large audio QA dataset, leverages the LLM's reasoning capabilities to improve acoustic scene understanding. Instead of acoustic scene understanding, our work focuses on spoken language tasks such as ASR, AST and other zero-shot language tasks. In AudioToken [15], an adapter is used to concatenate acoustic embeddings to text embeddings. Similar to our work, they trained only the adapter while keeping the acoustic encoder and the text-to-image diffusion model frozen. However, the focus of their work is image generation, while this work aims to improve spoken language tasks. ## 3 Model: The adapter sandwich SLM glues a pretrained speech encoder with a pretrained LLM using a simple adapter that is sandwiched between the two frozen models, as illustrated in Figure 2. We use SLM to refer to the combination of a pretrained LLM, a pretrained speech encoder, and the adapter. SLM supports two input modalities with speech and text inputs. The speech input \(S_{1:U}\) of length \(U\) is fed into the speech encoder, which generates speech embeddings \(S_{1:U}^{D}\) with dimension \(D\). The speech encoder is taken from a pretrained encoder-decoder ASR model whose decoder is discarded. The embeddings \(S_{1:U}^{D}\) are down-sampled to \(S_{1:U^{\prime}}^{D}\) by about a factor of 4x, as described further in Section 3.1. This reduction allows longer speech inputs. 
The down-sampled speech embedding sequence \(S_{1:U^{\prime}}^{D}\) is then fed into an adapter, which in our case is simply a few transformer layers, as few as 2 layers in our experiments. The text input \(X_{1:T}\) of length \(T\) is embedded by the embedding layer of the LLM. The text embedding sequence \(X_{1:T}^{E}\) is then concatenated along the time dimension with the output sequence from the adapter \(S_{1:U^{\prime}}^{E}\) to get \(X_{1:T}^{E}||S_{1:U^{\prime}}^{E}\) (i.e., _"{text instruction} {audio}"_). Note that the adapter output dimension \(E\) is the same as the text embedding dimension. The concatenated embedding sequence \(X_{1:T}^{E}||S_{1:U^{\prime}}^{E}\) is fed into the rest of the LLM transformer stack. We use the next-token-prediction loss as the training objective for the adapter, using a mixture of tasks (see Section 4). The target sequence can be either speech transcripts, translation sentences, or any open-ended generation targets. Intuitively, the SLM adapter is trained to implicitly map the reduced speech embedding \(S_{1:U^{\prime}}^{D}\) into the same representation space as the text embeddings, so that the adapted speech embedding \(S_{1:U^{\prime}}^{E}\) can be "understood" by the _frozen_ LLM. By supporting both speech and text inputs in the manner described above, we hypothesize that the text input serves as an effective prompt for speech inputs, allowing the model to follow instructions. In Section 6.3 we demonstrate several example tasks, lending credibility to this hypothesis. Figure 2: **Model architecture of SLM.** The encodings from the output of the speech encoder are downsampled and adapted to the input textual embedding representation of the frozen LLM. ### Speech sequence length reduction We reduce the speech encoding sequence to a similar length as the corresponding text word-piece sequences. This is important for improving both training and inference efficiency and allows the model to handle long speech inputs. A uniform reduction is applied, where the output sequences are reduced at a fixed rate. In our experiments, we discard 3/4 of the frames randomly and thus reduce the speech encoding sequence to only 1/4 of its original length. ## 4 Training data mixtures SLM was trained using a mixture of supervised learning tasks where the inputs include speech signals and text instructions, and the output is a text string that depends on the task, such as ASR transcripts or speech translation sentences. 1. Speech Recognition: The fixed instruction for this task is _"Recognize this speech in \(\{\)lang\(\}\)"_, where we replace \(\{\)lang\(\}\) with the actual language name for the input speech. We used the multilingual YouTube corpus [3] for this task, which contains 75 languages harvested from YouTube and amounts to 90k hours. 2. Speech Translation: The model takes a speech input and generates its corresponding translation for a specified target language. The instruction for this task is _"Translate this speech from \(\{\)src_lang\(\}\) to \(\{\)tgt_lang\(\}\)"_, where we replace \(\{\)src_lang\(\}\) with the actual language name for the input speech, and \(\{\)tgt_lang\(\}\) with the target language name to be translated into. We use the CoVoST2 corpus [16] for this task, which is a speech-to-text translation dataset covering translations from 21 languages into English and from English into 15 languages, totaling about 2.9k hours of audio. 3. Speech Instruction Tuning: The model takes a speech input and a text input as instruction, and predicts an appropriate answer following this instruction. 
Different from previous tasks using a fixed instruction, this task has varied instructions for different data samples such as dialog generation, named entity recognition and question answering. The task is to train the model to adeptly follow diverse instructions, avoiding over-fitting to any fixed instructions above, which is critical to the success of downstream 0-shot instruction following tasks. We used Alpaca dataset [17], in which data samples contain \(\{\)_instruction, input, output\(\}\)_, where _instruction_ describes the task the model should perform, _input_ is an input context for the task, and _output_ is the answer to the instruction. The speech input was generated using a TTS system described in [18, 19]. ## 5 Experiments ### Pretrained Foundation Models We used the Universal Speech Model (USM) [3] as our speech foundation model. USM encoder of 2B parameters was first pretrained with BEST-RQ [20] and subsequently finetuned with a LAS decoder of 128M parameters on the 75-language YouTube corpus as introduced in [3]. We used T5 family [21] as the text foundation model, which has the encoder-decoder architecture. Specifically, we adopted mT0-MT XXL checkpoint with 13B model size that was readily available [22]. This model was trained on mC4 corpus [23] which covers 101 languages with multilingual instruction tuning capability. ### Training The adapter is a transformer stack with \(L\) layers, where \(L\) is a hyper-parameter. By default we use two layers of transformer as the adapter (\(L=2\)) unless stated otherwise. We adopt the same transformer implementation as used in mT0-MT XXL. The transformer layer size is also the same as in mT0-MT XXL, where the number of heads is 64, the head dimension is 64, the embedding dimension is 4096, and the projection layer dimension is 10240. This total parameter count of the adapter is 156M. The adapter parameters are learned from scratch. We used a data mixture of combining all tasks described in the Section 4, where all tasks have the unified input-output format: * _Text prompt input_, instruction about the task to perform, * _Speech input_, content as audio, and * _Text response output_, target responses. The mixing ratio is proportional to the number of data samples in each task. We used 250k multilingual sentence piece vocabulary. The training objective is the cross-entropy loss for next token prediction in a sequence, which is the standard language modeling objective. To preserve the capabilities of existing speech and text models, we froze both of the speech and text foundation models during training and only trained the adapter, as mentioned before. ### Evaluation We first evaluated our model on conventional speech recognition and translation tasks. For 0-shot instruction following, we demonstrate quantitative results on a contextual biasing ASR, where we instructed the model with real-time context. Furthermore, we also show empirical studies on 0-shot open-ended question answering tasks. 1. _Speech Recognition_: We evaluated on the test set of SpeechStew ASR [24], VoxPopuli ASR [25], and FLEURS ASR [26]. The performance was computed in terms of word error rate (WER) using the JiWER implementation [27]. 2. _Speech Translation_: We evaluated on CoVoST2 AST task [16] and report BLEU scores on X-to-En test sets using the SacreBLEU and corpusBLEU implementations [28, 29]. 3. _Speech Recognition with Contextual Biasing_: This task evaluates the model's ability on recognizing speech using runtime context (i.e., named entities). 
We provide the model with real-time retrieved entities in the text prompts. We report WERs on the multi-context TTS corpora in [30], where W_PREFIX and WO_PREFIX evaluate the in-domain performance: each utterance is assigned a correct bias entity + distractor entities; ANTI evaluates the out-of-domain performance: each utterance is associated with distractor entities only. The original corpora contain variants scaling from 0 to 3K bias entities assigned to each utterance. For simplicity, we combined the entities across test-set variants and constructed a single retrieval database with 4.55K bias entities in total, and scored each utterance against it. * ANTI: The transcript truths simulate voice assistant traffic; examples include "what's the weather", "turns the lights to 100%". * W_PREFIX: The transcript truths contain prefixed patterns such as "open $APPS", "call $CONTACTS", "play $SONGS". * WO_PREFIX: The transcript truths are entities chosen from $APPS, $CONTACTS, $SONGS. ## 6 Results ### Speech Recognition We present the results of ASR evaluation in Table 1 on an English corpus using SpeechStew [24], as well as on multilingual corpora using Voxpopuli [25] and FLEURS [26]. The instructions during evaluation are similar to the ones in training (e.g., _"Recognize this speech in \(\{\)lang\(\}\)"_). Note that all ASR evaluations are performed on out-of-domain tasks, since we didn't include any training data from SpeechStew, Voxpopuli, or FLEURS in the training mixture. We compare the performance of our SLM model to USM baselines [3] trained on the same YouTube dataset as used in our training mixture. ### Speech Translation We report speech translation performance on the CoVoST2 corpus [16] in Table 2, where the performance is averaged over 21 pairs of X-to-En translation. Here again, the instructions during evaluation and training are the same (e.g., _"Translate this speech from \(\{\)src_lang\(\}\) to \(\{\)tgt_lang\(\}\)"_). In this case, the responses from the model are scored without any normalization. ### Zero-shot Instruction Following #### 6.3.1 Speech Recognition with Contextual Biasing We evaluated the contextual biasing ASR as a general speech recognition task using the same instruction as that used in other ASR tasks, as shown in the 2nd column in Table 3. Typically, a specific list of phrases is given for each speech utterance. We provided this list in the prompt and instructed the model to pick the most relevant phrase, i.e., _"Recognize this speech in language English using potential mention - \(\{\)biasing entity\(\}\)"_. We used an off-the-shelf speech retriever [5] to retrieve the top-1 entity mention from the speech, and replace \(\{\)biasing entity\(\}\) with it in the prompt above. 
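The instructions described above are plain text templates filled in at training or inference time; for contextual biasing, the template is additionally filled with the top-1 entity returned by the retriever. A small illustrative sketch (the function names and example entity are ours, not from the paper):

```python
def asr_prompt(lang):
    return f"Recognize this speech in {lang}"

def ast_prompt(src_lang, tgt_lang):
    return f"Translate this speech from {src_lang} to {tgt_lang}"

def biasing_prompt(retrieved_entity):
    # retrieved_entity is the top-1 mention returned by the off-the-shelf speech retriever.
    return f"Recognize this speech in language English using potential mention - {retrieved_entity}"

print(asr_prompt("English"))
print(ast_prompt("French", "English"))
print(biasing_prompt("hay day"))
```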
\begin{table} \begin{tabular}{l c c c} \hline Eval set & USM-LAS & SLM & SLM-FT \\ \hline \hline \multicolumn{4}{c}{English} \\ Common Voice & \(12.6\) & \(10.8\) & \(7.5\) \\ AMI (ihm) & \(16.6\) & \(18.4\) & \(15.4\) \\ AMI (sdm) & \(36.3\) & \(40.7\) & \(36.9\) \\ Librispeech (clean) & \(3.2\) & \(4.8\) & \(2.6\) \\ Librispeech (other) & \(5.5\) & \(7.4\) & \(5.0\) \\ Switchboard & \(10.6\) & \(12.7\) & \(10.3\) \\ Tedlium & \(2.9\) & \(3.4\) & \(2.9\) \\ Wall Street Journal & \(4.8\) & \(4.4\) & \(3.0\) \\ \hline \multicolumn{4}{c}{Multilingual} \\ Voxpopuli & \(13.1\) & \(14.0\) & \(13.0\) \\ FLEURS & \(13.3\) & \(13.8\) & \(12.4\) \\ \hline \end{tabular} \end{table} Table 1: **Speech recognition results.** For most languages, we computed Word Error Rate (WER %) after removing capitalization and punctuation and applying text normalization. For Chinese, Japanese, Thai, Lao, and Burmese, character error rate (CER %) is computed similarly to Whisper [4]. Voxpopuli WER is an average of 14 languages. FLEURS WER is an average of 54 languages in Fleurs which are also present in the YouTube corpus. For the SpeechStew dataset, Whisper normalization was applied to references and predictions. We also report the performance of a fine-tuned SLM (SLM-FT) after training the SLM text encoder on the YouTube corpus. \begin{table} \begin{tabular}{l c} \hline Model & BLEU\(\uparrow\) (X-to-En) \\ \hline \hline Whisper [4] & \(29.1\) \\ mSLAM-CTC [31] & \(25.2\) \\ MAESTRO [32] & \(25.2\) \\ USM-M [3] & \(30.7\) \\ Mu\({}^{2}\)SLAM [33] & \(27.1\) \\ AudioPaLM-2 [10] & \(37.8\) \\ \hline SLM & \(33.0\) \\ SLM-FT & \(37.4\) \\ \hline \end{tabular} \end{table} Table 2: **Speech translation results on CoVoST2 test set.** In the 0-shot instruction prompt experiment (C-ASR), we observe that the SLM model gives about a 46.2% relative (\(32.7\to 17.6\)) WER reduction.
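The WER and BLEU figures reported in Tables 1 and 2 are computed with the JiWER and SacreBLEU/corpusBLEU implementations cited in the evaluation section; a minimal usage sketch on toy strings (not the actual test data) is shown below.

```python
import jiwer
import sacrebleu

references = ["can you open hay day please"]
hypotheses = ["can you open hi day please"]

wer = jiwer.wer(references, hypotheses)                 # word error rate, as in Table 1
bleu = sacrebleu.corpus_bleu(hypotheses, [references])  # corpus-level BLEU, as in Table 2

print(f"WER = {100 * wer:.1f}%  BLEU = {bleu.score:.1f}")
```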
We also experiment different adaptation approaches, such as low-rank adaptation [35] or using a more sophisticated Flamingo-style approach [36] to inject the speech information into the LLM transformer stacks via cross-attention, which will be included in future work. ### Impact of pretrained LLMs We compared different LLM checkpoints in T5 family: mT5 [23], mT5-LM-adapted [37], mT0-MT [22] (used in this work), T5 [21], T5-flan [38], and observed that the pretrained LLM plays a crucial role in both training efficiency and model quality after adaptation. We found that those LLMs pretrained with LM objective [39] (such as mT5-LM-adapted, mT0-MT, T5-flan) require significantly less time to train adaptation compared to LLMs solely pretrained with masked language model objective [40] (such as mT5 and T5). For example, with the same computational resources, adapting T5 or mT5 takes a few days to converge, while T5-Flan and mT0-mt takes a few hours. The intrinsic capabilities of pretrained LLMs determine instructions following quality of the trained SLM. For LLMs without zero-shot instruction capabilities (T5 and mT5), the adapted SLM is not able to perform 0-shot instruction following either. When the LLMs have poor performances on certain downstream tasks (such as QA task), the adapted SLM also exhibits poor accuracy on those tasks. This again confirms that the thin adapter layer itself only provides the transformation from speech modality to text modality, but does not store world-knowledge as in LLMs. In this work, we only compared different encoder-decoder variants of LLM. However, the proposed adaptation approach also applies to decoder-only LLMs. In future work, we will present a more comprehensive comparison between both encoder-decoder and decoder-only LLMs. ### Train adapter only v.s finetune LLM In previous sections, we presented a general SLM for a wide range of speech tasks without the need of altering the weights of the original speech model or LLMs. In this section, we explore further finetuning SLM on any downstream corpus with LLMs unfrozen. This can be used to tailor to a specific downstream task to achieve optimal quality. By finetuning SLM adapter using the in-domain contextual biasing training set, the WER decreases from \(8.6\%\) to \(1.7\%\) compared to the 0-shot case. By allowing LLM encoder unfreezing, the WER further decreases to \(1.0\%\), see details in the Table 3. By finetuning SLM on CoVoST2 dataset with the LLM encoder unfrozen, we observed BLEU score increases from \begin{table} \begin{tabular}{l c c c c} \hline \hline Prompt type & ASR & C-ASR & \multicolumn{2}{c}{C-ASR-FT} \\ & & & (Adapter) & (T5-Enc) \\ \hline \hline ANTI & \(10.3\) & \(10.4\) & \(11.8\) & \(11.2\) \\ W\_PREFIX & \(14.8\) & \(8.6\) & \(1.7\) & \(1.0\) \\ WO\_PREFIX & \(32.7\) & \(17.6\) & \(7.8\) & \(5.1\) \\ \hline \hline \end{tabular} \end{table} Table 3: **ASR contextual (C) biasing WERs.** ASR corresponds to using the same prompt as training time _“Recognize this speech in language English”_; C-ASR corresponds to 0-shot instruction prompt; C-ASR-FT corresponds to variants where adapter / T5-Enc model weights are further fine-tuned on task specific training corpora, which consists of 12K synthetic TTS examples with equal coverage on SAPPS, SCONTACTS and SSONGS, the carrier phrase patterns of the examples closely match the W\_PREFIX test-set but are combined with non-overlapping named entities for generalization. 
\(33.0\) to \(37.4\), which is on-par with the current SOTA CoVoST2 AST performance from AudioPaLM [10]. ### End-to-end speech-to-X v.s. cascaded ASR+LLM A question that often comes up is whether the end-to-end model has an edge over a cascade pipeline where the speech is fed to an ASR system and the transcripts are send to LLMs. To answer this, we ran ablation studies on AST task, where we applied the same speech and text foundation model as in SLM: USM LAS model for ASR, and mT0-MT for text-to-text translation. Specifically, we prompted the mT0-MT model using a similar instruction _"Translate this from {src_lang} to {tgt_lang}"_ as we used in SLM training mixture. We observed that the cascaded pipeline has significantly worse performance than the end-to-end SLM (i.e., CoVoST2 Fr-to-En BLEU degraded from \(38\) to \(32\)). This is presumably due to ASR errors. One potential approach to improve the cascaded system is to further finetune LLMs on ASR transcripts, which will effectively improve the robustness to ASR errors, but will apparently requires further finetuning and alters the original capabilities of LLMs. ## 8 Conclusions We present SLM, a multitask, multilingual, and dual-modal speech-language model. SLM comprises a frozen pretrained speech encoder, a frozen pretrained LLM, and a light-weight adapter that maps the output of the speech encoder to the input of the LLM. Apart from the speech input, additional text input can be used as the prompts to specify the tasks that SLM needs to perform. In this work, we showcase the adaptation of output encodings from speech foundation model USM [3] to input textual embeddings of large language model mT0-MT [22]. Nevertheless, SLM can be easily applied as a plugin for any speech encoder and LLM pair. In future work, we will present a more comprehensive comparison across different speech encoders and both encoder-decoder and decoder-only LLMs. We will also compare different adaptation approaches, for example, residual [6] adaptation or LoRA [35]. ## 9 Acknowledgements We are grateful for help, discussion, and support from Nobuyuki Morioka, Heiga Zen, Yifan Ding, Ankur Bapna, \begin{table} \begin{tabular}{l l} \hline \hline **Instruction** & **Prediction** \\ \hline \hline **Audio Content**: I don’t feel well since last night, I had headache and I took a pill of Ibuprofen. I went to bed early. \\ Recognize this speech: & I don’t feel well since last night, I had headache and \\ & I took a pill of Ibuprofen. I went to bed early. \\ \hline What symptoms are mentioned? & Headache. \\ What medicines are mentioned? & Ibuprofen. \\ What happened to this person? & This person is sick. \\ Translate this speech to Chinese: & \\ \hline **Audio Content**: can you open hay day please \\ & \\ & \\ Recognize this speech: & Can you open hi day please? \\ \hline Recognize this speech with potential mentions - & \\ “Hey Ya!”, “Happy Day”, “Bad Day”, ”hay day” & Can you open hay day please? \\ \hline **Audio Content**: Can I reserve a double-room for 4 nights, for 2 adults and a kid? Also we’d like to add breakfast. \\ & \\ How would you answer this? & Sure, when will you be arriving? \\ How would you answer this if you don’t & Sorry we don’t have double-room available for 4 nights. Do you \\ & \\ have such a room? & consider a single bed room? \\ \hline & Est-ce que je peux réserver une chambre double pour 4 nuits, pour \\ & 2 adults et un enfant? \\ & \\ & Nous aimerions également àjouter le petit-déjéuener. 
\\ \hline \hline **Audio Content** (From NQ dataset): Give the formula for the following substance carbonic acid \\ & \\ How do you answer this? & H2CO3 [FOOTNOTE:]Footnote : H2CO3 (equivalently OC(OH)2)] \\ \hline **Audio Content** (From NQ dataset): The resting stage of the cell cycle is \\ How do you answer this? & The phase where the cell does not divide. / [Groundtruth: \\ & \\ & \\ \hline \hline \end{tabular} \end{table} Table 4: **Zero-shot instruction following examples.** Given the same speech inputs, SLM can respond differently according to instructions. Particularly, we show predicted answers on audio Natural Question corpus. SLM is able to follow the instruction to answer the spoken question. Gang Li, Laurent El Shafey, James Qin, Jeffrey Zhao, Zhehuai Chen, and Yong Cheng.
2309.12757
Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where
While image data starts to enjoy the simple-but-effective self-supervised learning scheme built upon masking and self-reconstruction objective thanks to the introduction of tokenization procedure and vision transformer backbone, convolutional neural networks as another important and widely-adopted architecture for image data, though having contrastive-learning techniques to drive the self-supervised learning, still face the difficulty of leveraging such straightforward and general masking operation to benefit their learning process significantly. In this work, we aim to alleviate the burden of including masking operation into the contrastive-learning framework for convolutional neural networks as an extra augmentation method. In addition to the additive but unwanted edges (between masked and unmasked regions) as well as other adverse effects caused by the masking operations for ConvNets, which have been discussed by prior works, we particularly identify the potential problem where for one view in a contrastive sample-pair the randomly-sampled masking regions could be overly concentrated on important/salient objects thus resulting in misleading contrastiveness to the other view. To this end, we propose to explicitly take the saliency constraint into consideration in which the masked regions are more evenly distributed among the foreground and background for realizing the masking-based augmentation. Moreover, we introduce hard negative samples by masking larger regions of salient patches in an input image. Extensive experiments conducted on various datasets, contrastive learning mechanisms, and downstream tasks well verify the efficacy as well as the superior performance of our proposed method with respect to several state-of-the-art baselines.
Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu
2023-09-22T09:58:38Z
http://arxiv.org/abs/2309.12757v2
# Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where ###### Abstract While image data starts to enjoy the simple-but-effective self-supervised learning scheme built upon masking and self-reconstruction objective thanks to the introduction of tokenization procedure and vision transformer backbone, convolutional neural networks as another important and widely-adopted architecture for image data, though having contrastive-learning techniques to drive the self-supervised learning, still face the difficulty of leveraging such straightforward and general masking operation to benefit their learning process significantly. In this work, we aim to alleviate the burden of including masking operation into the contrastive-learning framework for convolutional neural networks as an extra augmentation method. In addition to the additive but unwanted edges (between masked and unmasked regions) as well as other adverse effects caused by the masking operations for ConvNets, which have been discussed by prior works, we particularly identify the potential problem where for one view in a contrastive sample-pair the randomly-sampled masking regions could be overly concentrated on important/salient objects thus resulting in misleading contrastiveness to the other view. To this end, we propose to explicitly take the saliency constraint into consideration in which the masked regions are more evenly distributed among the foreground and background for realizing the masking-based augmentation. Moreover, we introduce hard negative samples by masking larger regions of salient patches in an input image. Extensive experiments conducted on various datasets, contrastive learning mechanisms, and downstream tasks well verify the efficacy as well as the superior performance of our proposed method with respect to several state-of-the-art baselines. ## 1 Introduction The recent renaissance of deep learning techniques has brought a magic leap to various fields, such as computer vision, natural language processing, and robotics. Learning from a large-scale labeled/supervised dataset, which is one of the key factors leading to the success of deep learning, however, has now turned out to be a significant limitation on its extensions to more fields. In addition to the expensive cost of time and human resources to collect training datasets for different tasks and their corresponding labels, the supervised learning scenario typically would suffer from the issue of overfitting on the training dataset, thus leading to worse generalizability of the learnt models. These problems bring challenges for the application of deep learning techniques but also give rise to the research topic of self-supervised learning, wherein it aims to learn to extract informative feature representations from an unlabelled dataset via leveraging the underlying structure of data and building the supervisory signals from the data itself. The discovered representations are typically more general and can be further utilized or fine-tuned to various downstream tasks. Without loss of generality, self-supervised learning has firstly made great success in natural language processing, where the autoregressive modeling (i.e. predicting the next word given the previous words) and masked modeling (i.e. masking operation to randomly mask a portion of words in a text, coupled with a self-reconstruction objective to predict those masked words) bring up the powerful language models such as GPT [30] and BERT [7]. 
Nevertheless, the direct adaptation of such techniques (especially masking and self-reconstruction) to image data [28] only contributes to slight improvement (at least not as significant as what happens in the field of natural language processing), in which such a predicament was later relieved with the help of vision transformers [8] (e.g. the influential work from masked autoencoder (MAE) [17] and the related ones such as SimMIM [37], BEiT [1], and iBOT [41]). In contrast to vision transformers which enable the application of masking op eration and its coupled self-reconstruction objective on the self-supervised learning for vision data, another dominant architecture over the last decade for computer vision field, i.e. convolutional neural networks, has difficulty incorporating the random masking operation (on image patches), because the resultant edges between masked and unmasked regions could cause problems for learning convolution kernels, and the nature of performing convolutions on regular grids also hinder it from adopting positional embeddings or masked tokens as the typical transformer models [17]. In turn, the most popular self-supervised learning scenarios nowadays for convolution neural networks come from contrastive learning - given one sample image, two different views of it are respectively created by two different augmentations. The contrastive objective which attracts the views from the same image (known as positive pair/views) while repelling the ones from distinct images (respectively, negative pair/views) drives the learning of feature extractor (i.e. encoder) to capture the crucial features invariant to augmentations. Hence, how to design good positive and negative views with augmentations [12, 29] plays an important role in the success of contrastive learning, in which the design choices for augmentations typically are highly dependent on the characteristics of the image data domain (i.e. more domain-specific). From such a point of view, including the less domain-specific augmentations would definitely be able to benefit the versatility and the flexibility of the corresponding contrastive learning algorithms, thus the fundamentals of masking (as being one of the most straightforward and general operations) consequently come into our sight: _Are we able to include masking as an extra augmentation method into contrastive self-supervised learning framework with convolutional neural networks as its backbone?_ We are not the first to ask such a question. Two prior works (i.e. **MSCN**[20] and **ADIOS**[34]) proposed to tackle the issues "how to mask" and "learning where to mask" respectively. Nevertheless, on one hand, the improvements provided by these prior works are relatively insignificant, thus showing this topic is still under-explored; on the other hand, we highlight the potential issue that: if the masking is performed in a completely random manner, there exists a chance where all the masked patches fall on either the foreground or the background objects, in which the contrastive objective upon positive pair under such case could be detrimental for the overall model learning (e.g. attracting two views where one is completely background while the other still owns most of the foreground). To address this potential issue, we propose to particularly include **saliency** as a prior before performing masking. That is, we suggest that the masked patches should be evenly distributed to an image's foreground objects and background, regardless of the masking ratio. 
To this end, we introduce _random masking with saliency constraint_ as an augmentation method for the contrastive self-supervised learning framework, in which the feature extractor is built upon convolutional neural networks. Basically, we split the entire input image into the foreground objects and the background, followed by performing random masking on them separately, and three different masking strategies are provided to handle the parasitic edges stemming from masking. Moreover, we also introduce hard negative samples by masking more salient patches of the original input image, where these hard negative samples are experimentally shown to bring an extra boost to our proposed method. Lastly, we further discover that masking only one branch (to be specific, masking only the query branch when processing the positive pairs) of the contrastive learning framework (usually also known as _siamese network_) provides better performance than masking both branches due to the effects in terms of sample variance that masking brings, which also well corroborates the statement claimed in [35]. Our main contributions in this work are summarized as follows, * We propose a saliency masking augmentation method for the contrastive self-supervised learning framework with convolutional neural networks as backbones, where the saliency information is utilized to guide the random masking applied on the foreground and background regions individually. * Three masking strategies are proposed to tackle parasitic edges between masked and unmasked regions, in which hard negative samples can also be created by masking more salient patches to achieve further improvement. * From the perspective of manipulating the difference in terms of variance between two branches of the siamese network, we propose to apply masking augmentation solely on the query branch when processing positive pairs to benefit the model training. ## 2 Related work **Self-Supervised Learning** (SSL) aims to learn a feature encoder for extracting representations from unlabeled data via the help of pretext tasks (where the objective functions for these tasks are typically built upon the data properties), in which the resultant encoder can be further fine-tuned with labeled data to support different downstream tasks. Early SSL works rely on designing handcrafted pretext tasks, such as predicting rotation angles [11, 14], solving jigsaw puzzles [26], or colorization [40]. Recently, the introduction of contrastive objectives to SSL algorithms [3, 4, 16, 18, 39] have brought a significant leap of performance, even providing superiority to some standard supervised learning baselines. Contrastive SSL tries to maximize the agreement between representations of positive samples (i.e. augmented views from the same source image). While some contrastive SSL approaches (such as SimCLR [4] and MoCov2 [18]) further leverage negative samples (i.e., augmented views from different images) to prevent the model collapse (i.e. learning trivial features) by utilizing large batch size and memory bank, some other works (e.g. SimSiam [6] and BYOL [16]) instead prove that the negative samples might not be necessary for contrastive SSL (e.g. the stop-gradient technique serve the same purpose to prevent model collapse). Though there exist other categories of SSL methods (e.g. clustering ones such as DeepCluster [2] and SWAV [3]), the contrastive ones still take the lead in the stream of SSL. **Masking in SSL** (e.g. 
masking out a portion of input data sample followed by learning to recover the missing content) is firstly proved by the success of masked language modeling (e.g. BERT [7]) and later adapted to the vision data, thanks to the introduction of vision transformer backbones where the input images are firstly divided into patches then tokenized. For instance, MAE [17] as a seminal work proposes an autoencoder architecture where the transformer-based encoder turns the unmasked image patches into feature representations, which are further decoded back to the original image; while SimMIM [37] encodes the entire image, including the masked patches, and predicts the missing region with a lightweight one-layer head. Compared to the SSL methods for vision data which are based on the coupled masking operation and self-reconstruction loss but highly constrained to the transformer backbone, contrastive SSL becomes more friendly for adopting another important computer vision backbone, convolutional neural networks (also abbreviated as ConvNets), in which recently there comes some research works to investigate the plausibility of including masking operation as an augmentation method into contrastive SSL for ConvNets (also denoted as "siamese networks with ConvNets" in this paper). MSCN [20] firstly discusses the issues of adopting masking operation in siamese networks with ConvNets (including the parasitic edges on the masked input, introduction of superficial solutions, distortion upon the balance between local and global features, and having fewer training signals) then proposes several designs to tackles these issues, such as adopting high-pass filter to alleviate the impact of parasitic edges or applying focal masks to balance short-range and long-range features. Basically, MSCN focuses more on the perspective of "how to mask". In comparison, our work not only provides more "how to mask" strategies (in addition to the one using a high-pass filter, two more based on strong blurring and filling mean value are proposed) but also explicitly includes saliency constraint (from the perspective of "where to mask") as well as extensions on learning mechanism (i.e. using masking to produce hard negative samples and manipulate variance across siamese branches); ADIOS [34] mainly tackles the issue of "where to mask", where instead of using random masking, it particularly adopts an occlusion module (UNet-based, acting as a segmentation model) which learns adversarially along with the feature encoder to determine the regions to be masked, hence the produced masks are semantically meaningful. As jointly training the feature encoder and occlusion module results in heavy computational cost for ADIOS, our proposed method strikes a better balance between having (partially) semantic masks (as being guided by saliency to separate foreground and background) and the computation efforts (as our saliency is estimated by a pretrained and frozen localization network). The visualization to highlight the difference among our proposed method, MSCN [20], and ADIOS [34] is provided in Figure 2. ## 3 Method As motivated previously, we would like to include the masking operation as an extra augmentation method into contrastive self-supervised learning with ConvNets as backbone, where the saliency information is particularly leveraged to guide the masking. 
Our full model is shown in Figure 1 where in the following we will sequentially describe the saliency computation, our various saliency-guided masking strategies, the ways to construct positive and hard negative samples, as well as the learning scheme. ### Saliency computation The idea of saliency was firstly introduced to predict the eye-catching regions over an image, in which here we generalize such idea to localize the main objects in an input image (which are corresponding to "foreground" without loss of generality) while the rest is then treated as "background". To this end, in this work we adopt the Selective Convolutional Descriptor Aggregation (**SCDA**[36]) method to build our _localization network_\(f_{\xi}\) for producing saliency map \(M\) of a given image \(X\), i.e. \(M=f_{\xi}(X)\). The main reason to choose SCDA for our use stems from its simplicity of only requiring a pre-trained CNN model (typically for the task of classification) and demanding no further supervision to localize the main objects. With denoting the feature tensor obtained by the aforementioned pre-trained CNN model prior to its global average pooling layer as \(S\in\mathbb{R}^{U\times V\times D}\), the aggregation (by summation) of \(S\) along the channel dimension results in the activation map \(A\in\mathbb{R}^{U\times V}\). In addition to use the mean value \(\bar{a}\) of \(A\) as the threshold on all the \(U\times V\) elements in \(A\) to locate the positions of foreground objects (same as the original SCDA), we add another condition based on the standard deviation \(\sigma\) of \(A\) to have more flexible localization results which better fit our need to later guide the masking: \[M(u,v)=\begin{cases}1&\text{if }A(u,v)\geq\bar{a}-0.6\cdot\sigma\\ 0&\text{otherwise}\end{cases} \tag{1}\] Take an input image of size \(224\times 224\) and use an ImageNet-pretrained CNN model based on ResNet-50 as an example, the resultant saliency map \(M\) is of size \(7\times 7\) and every its element is corresponding to a \(32\times 32\) patch in the original image, in which such image patches are the basic units for our performing masking operation later in constrative SSL. ### Saliency-guided masking strategies As naively masking out image patches in an original image will produce many parasitic edges (i.e. the boundaries between masked and unmasked patches/regions), the ConvNet-based feature encoder could be largely misled to focus on learning these unwanted edge features (since the convolutional kernels are typically good at capturing edges) thus causing problematic model training. In this work we propose to adopt three masking strategies for amending the aforementioned issue stemmed from parasitic edges: * **High-pass filtering.** Such masking strategy is actually proposed by MSCN [20], where the high-pass filter is applied on the input image prior to the masking operation, in which the edges caused by masking in the filtered map is less visible. It is worth noting that, as the input for the feature encoder under such masking strategy is the high-pass-filtered images, in the downstream tasks the input data should follow the same form (i.e. needed to be firstly high-pass-filtered as well) to achieve better performance, in which this requirement would become a limitation for practical applications (i.e. users for the downstream tasks have to know how the encoder was trained in SSL stage). 
* **Strong blurring.** The masking is performed firstly on the original input image, where the masked regions are not filled with the zero value but their own appearance processed by strong Gaussian blurring (i.e. each region to be masked is gone through a low-pass filter), leading to less obvious parasitic edges. Noting that within the patches/regions undergone such masking strategy, the image details are also lost while only the significant contours of objects are preserved. The Gaussian blurring kernel is of size \(31\times 31\) with variance set to \(10\) in our experiments unless otherwise stated. * **Mean filling.** The masking is also performed on the original input image at first, then the masked regions are filled by the mean pixel value of that input image, such that the boundaries between the masked and unmasked regions becomes much less significant. As previously discussed, if the masking operation used in these three masking strategies is completely random (i.e. the regions/patches to be masked are sampled randomly), there could exist the potential case where all the masked patches fall on either the foreground or the background objects thus leading to improper contrastiveness (e.g. forming a positive pair where one is fully background while the other still contains most of the foreground). To this end, we propose to use the saliency information (i.e. the saliency map \(M\) obtained by our localization network \(f_{\xi}\) stemmed from SCDA technique) to guide the masking operation used in all our three masking strategies. Basically, the saliency map \(M\) helps us to separate the foreground objects and the background, in which we perform random masking independently for foreground and background (i.e. distribute the masked patches more evenly to both foreground and back Figure 1: An overview for our proposed method of including saliency-guided masking augmentation into contrastive self-supervised learning, where the backbone of the feature extractor is ConvNets. Firstly, our localization network \(f_{\xi}\) produce the saliency map which is built upon SCDA [36] (cf. Section 3.1), in which such saliency map helps to separate the foreground objects and background in an image. Given an input image, after conducting standard augmentations along the query and key branches (following the common practice of siamese network) to produce two views, our proposed saliency-guided masking strategies are adopted to produce positive and hard negative samples (please refer to Section 3.2 for more details, where in this figure we take the high-pass filtering strategy as an example). The constructed positive and (hard) negative samples are gone through the feature encoder to compute the contrastive objective function \(\mathcal{L}_{nce}\) (please refer to our Section 3.3). Noting that here we base on the SSL framework of MoCov2 [5] to illustrate the computation flow, hence there exists an momentum encoder \(f_{\epsilon}\) in addition to our main learning target, the feature encoder \(f_{\theta}\). ground). In the following, we detail how we utilize saliency to create positive and (hard) negative samples for driving the contrastive self-supervised learning objective. Assume an input image \(X\) is composed of \(N\) patches and \(\gamma\) denotes the ratio of \(N\) patches that are identified as foreground by SCDA (where \(N=U\times V\), i.e. the total number of elements in saliency map \(M\) and there are \(\gamma\cdot N\) patches be identified as foreground). 
Given a masking ratio \(\alpha\), as we would like to have the masking performed separately for both foreground and background, for a positive sample the ratio of the number of masked patches between foreground and background is \(\gamma:(1-\gamma)\), i.e. there are \(\alpha\cdot\gamma\cdot N\) patches randomly chosen to be masked in the foreground (respectively \(\alpha\cdot(1-\gamma)\cdot N\) randomly-masked patches in the background). Note that in our experiments \(\alpha\) is drawn from a uniform distribution \(\mathcal{U}(0.05,0.25)\) unless otherwise stated. When it comes to creating _hard negative samples_, the masking is only applied to the foreground patches (i.e. the main objects are mostly masked to remove the salient/important information of the input image). We achieve this by drawing \(\beta\sim\mathcal{U}(0.4,\,0.7)\) in our experiments and randomly masking \(\beta\cdot\gamma\cdot N\) foreground patches. Since the saliency-guided masking operation described above is applied along the spatial dimension, we also refer to it as **spatial masking** (in analogy to the terminology defined in MSCN [20], but note that ours particularly has the guidance of saliency), and such saliency-guided spatial masking is adopted for all our three masking strategies. Moreover, as the masking strategy of high-pass filtering is inspired by MSCN [20], we also include/extend two other masking operations used in MSCN in our high-pass filtering strategy: **channel-wise masking**, where our saliency-guided spatial masking operation is applied individually to each of the RGB channels, and **focal masking**, where random cropping is performed (noting that such focal masking does not involve any saliency guidance). Specifically, for focal masking, the region outside of \(200\times 200\) (respectively the region inside \(130\times 130\)) is cropped and replaced by Gaussian noise to produce positive samples (respectively hard negative samples) in our experiments. Furthermore, to be even more aligned with the original masking operation in MSCN [20], in our high-pass filtering strategy we add random Gaussian noise to the masked sample regardless of which masking operation (i.e. spatial, channel-wise, or focal masking) is adopted (noting that for the strong blurring and mean filling masking strategies we do not apply such a step). In Figure 3 we provide examples of positive and hard negative samples from all our three masking strategies. ### Learning scheme Given the saliency-guided masking strategies introduced above, here we summarize the overall learning scheme for including masking augmentation into the contrastive self-supervised learning framework, where the feature extractor is based on a ConvNet backbone. Following the common scenario of contrastive SSL, two views of an input image \(X\) are first produced by two different standard augmentations, where one view is denoted as the _key view_ and the other as the _query view_. Afterwards, we can apply the saliency-guided masking operation (parameterized by \(\gamma\) and \(\alpha\)) to the query view, where the resultant masked query view \(X_{q}\) together with the key view \(X_{k^{+}}\) forms the positive pair; or, we can apply the saliency-guided masking operation (now parameterized by \(\gamma\) and \(\beta\)) to the key view, which serves as the hard negative sample \(X_{k^{-}}\) for the original key view \(X_{k^{+}}\). 
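To make the construction above concrete, here is a minimal PyTorch sketch of the SCDA-style saliency map of Eq. (1) together with the saliency-guided sampling of masked patches for positive views and hard negatives. The backbone features, data layout, and rounding choices are illustrative assumptions rather than the authors' code.

```python
# Sketch of SCDA-style saliency (Eq. 1) and saliency-guided patch masking (assumptions noted inline).
import torch

def saliency_map(features: torch.Tensor) -> torch.Tensor:
    """features: (U, V, D) activations from a frozen pretrained CNN (before global average pooling)."""
    A = features.sum(dim=-1)                      # aggregate channels -> (U, V)
    thresh = A.mean() - 0.6 * A.std()             # Eq. (1): mean minus 0.6 * std
    return (A >= thresh).float()                  # 1 = foreground patch, 0 = background patch

def sample_mask(M: torch.Tensor, alpha: float = None, beta: float = None) -> torch.Tensor:
    """Return a binary patch mask (1 = masked). Use `alpha` for positive views, `beta` for hard negatives."""
    fg = M.flatten().bool()
    N = fg.numel()
    gamma = fg.float().mean().item()              # fraction of foreground patches
    mask = torch.zeros(N, dtype=torch.bool)
    fg_idx = fg.nonzero(as_tuple=True)[0]
    bg_idx = (~fg).nonzero(as_tuple=True)[0]
    if alpha is not None:
        # Positive view: distribute alpha*N masked patches evenly over foreground and background.
        n_fg = int(round(alpha * gamma * N))
        n_bg = int(round(alpha * (1.0 - gamma) * N))
        mask[fg_idx[torch.randperm(fg_idx.numel())[:n_fg]]] = True
        mask[bg_idx[torch.randperm(bg_idx.numel())[:n_bg]]] = True
    else:
        # Hard negative: mask only foreground patches, beta*gamma*N of them.
        n_fg = int(round(beta * gamma * N))
        mask[fg_idx[torch.randperm(fg_idx.numel())[:n_fg]]] = True
    return mask.view_as(M)

# Example with the ratios quoted in the text (7x7 patch grid for a 224x224 input, 2048-d ResNet-50 features).
M = saliency_map(torch.randn(7, 7, 2048))
alpha = float(torch.empty(1).uniform_(0.05, 0.25))    # positive-view masking ratio
beta = float(torch.empty(1).uniform_(0.4, 0.7))       # hard-negative masking ratio
pos_mask = sample_mask(M, alpha=alpha)
neg_mask = sample_mask(M, beta=beta)
```

The masked positions returned here would then be filled according to one of the three strategies (high-pass filtering, strong blurring, or mean filling) described earlier in this section.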
Noting that the masked query view \(X_{q}\) together with the views of any other image different from \(X\) (denoted as \(X_{\neg}\)) naturally forms the negative pairs. With denoting the feature representation of \(X_{q}\) extracted by the feature encoder as \(z_{q}\) (analogously \(z_{k^{+}}\) for \(X_{k^{+}}\), \(z_{k^{-}}\) for \(X_{k^{-}}\), and \(z_{\neg}\), for \(X_{\neg}\)), our contrastive objective \(\mathcal{L}_{nce}\) is built to pull closer the features of positive pair while pushing away the feature of negative pair: \[\mathcal{L}_{nce}=-\log\frac{\exp(z_{q}^{\top}z_{k^{+}}/\tau)}{\sum_{X_{\neg}} \exp(z_{q}^{\top}z_{\neg}/\tau)+\exp(\rho z_{q}^{\top}z_{k^{-}}/\tau)} \tag{2}\] where \(\tau\) is a temperature parameter and \(\rho\) is the penalty ratio for hard negative samples, in which our \(\mathcal{L}_{nce}\) is stemmed from InfoNCE loss [27] and analogous to the one in [12]. It is worth noting that while constructing the positive pairs, the key view \(X_{k^{+}}\) does not undergo any saliency-guided masking operation. Such design is motivated from 1) empirical findings that the masked views show higher variance than the views produced by standard augmentations (in which we provide the corresponding study in Section 4.3), and 2) the statement made in [35] that having higher variance in the query branch than the key branch yields better results in the siamese network, where the benefit of such design to model training is demonstrated in our experiments provided in Table 6. ## 4 Experimental Results We compare our proposed method with two state-of-the-art baselines including masking operations into the ConvNet-based SSL, i.e. MSCN [20] and ADIOS [34], as illustrated in Figure 2. Following ADIOS [34], we adopt the ImageNet-100 dataset [33] as the basis to conduct our experiments (i.e., being used to perform contrastive self-supervised learning for training the feature encoder), while we choose MoCov2 [5] and SimCLR [4] to be our experimental bed of contrastive SSL frameworks. Basically, ImageNet-100 dataset contains 100 ImageNet classes, and is composed of 1300 training images and 50 validation images for each class. We use ResNet-50 as our ConvNet-backbone for the feature encoder for both contrastive SSL frameworks. For MoCov2, we set the batch size to 128 and the learning rate to 0.015, and use SGD [32] as the optimizer; While for SimCLR, we set the batch size to 256 and the learning rate to 0.3, and use LARS [38] as the optimizer. The contrastive self-supervised pretraining is run on a 4-GPU machine for 200 epochs, including 10 epochs of warm-up and using a cosine learning rate scheduler. ### ImageNet-100 Classification Once the feature encoders are pretrained via adopting our proposed method (with three different masking strategies) and baselines (i.e. MSCN and ADIOS) in the contrastive SSL frameworks (i.e. MoCov2 and SimCLR), we now evaluate their performances on various downstream tasks. Firstly, we experiment on the downstream task of classification based on ImageNet-100 dataset, where a linear classifier is supervisedly trained while the feature encoder is kept fixed/frozen. Please note again that, as described in Section 3.2, for the feature encoder pretrained by using the high-pass filtering masking strategy, its input in the downstream task should still be firstly gone through the high-pass filter (in which such requirement is also applied to MSCN). 
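Before turning to the results, a small sketch of the objective in Eq. (2) of Section 3.3 may be useful: one positive key, a pool of ordinary negatives, and a hard negative whose logit is scaled by the penalty ratio \(\rho\). Shapes, normalization, and the reduction of the MoCo queue to a single tensor of negative keys are simplifying assumptions.

```python
# Sketch of the contrastive objective in Eq. (2); shapes and normalization are illustrative assumptions.
import torch
import torch.nn.functional as F

def loss_nce(z_q, z_k_pos, z_k_hard, z_negs, tau: float = 0.2, rho: float = 1.0):
    """z_q, z_k_pos, z_k_hard: (B, D) features; z_negs: (K, D) negatives (e.g. a MoCo queue)."""
    z_q = F.normalize(z_q, dim=-1)
    z_k_pos = F.normalize(z_k_pos, dim=-1)
    z_k_hard = F.normalize(z_k_hard, dim=-1)
    z_negs = F.normalize(z_negs, dim=-1)

    pos = (z_q * z_k_pos).sum(dim=-1) / tau                 # (B,) positive logit
    hard = rho * (z_q * z_k_hard).sum(dim=-1) / tau         # (B,) penalized hard-negative logit
    negs = z_q @ z_negs.t() / tau                           # (B, K) ordinary negative logits

    # -log( exp(pos) / (sum_j exp(neg_j) + exp(hard)) )
    denom = torch.logsumexp(torch.cat([negs, hard.unsqueeze(1)], dim=1), dim=1)
    return (denom - pos).mean()
```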
The evaluation results on ImageNet-100 classification are summarized in Table 1, where our proposed method (regardless of adopting any saliency-guided masking strategies in both SSL frameworks) consistently achieves superior performance comparing to MoCov2 baseline (i.e. no masking augmentation is applied), MoCov2+MSCN, and MoCov2+ADIOS, where the similar trend is also observable while using SimCLR as the contrastive SSL framework, thus verifying the contribution and the efficacy of our proposed saliency-guided masking methods. It is worth noting that, although our high-pass filtering masking strategies shares quite some common designs as MSCN, our explicit introduction of saliency guidance contributes to the resultant improvement of our proposed method, e.g. MoCov2+OURS (High-pass filtering) versus MoCov2+MSCN and SimCLR+OURS (High-pass filtering) versus SimCLR+MSCN in Table 1. ### Transfer Learning As one of the important goals of SSL is to obtain the feature encoder with better generalizability such that the encoder can be easily adapted to various tasks or datasets with little amount of labeled data, we thus further conduct experiments on different downstream tasks or datasets for better assessing the generality of the features learned by various methods. Here we take MoCov2 as the main experimental bed of SSL framework, and we adopt high-pass filtering masking strategy to present our proposed method for making comparison with the baselines (while the results of adopting strong blurring and mean filling masking strategies in our proposed method are provided in the Appendix). **Image classification on different datasets.** Here we experiment on the classification downstream task based on \begin{table} \begin{tabular}{l|c} Method & Linear Evaluation \\ \hline MoCov2 [5] & 68.22 \\ + MSCN [20] & 70.28 \\ + ADIOS [34] & 62.76 \\ + OURS (High-pass filtering) & **73.8** \\ + OURS (Strong blurring) & 72.50 \\ + OURS (Mean filling) & 70.84 \\ \hline SimCLR [4] & 69.77 \\ + MSCN [20] & 77.18 \\ + ADIOS [34] & 71.12 \\ + OURS (High-pass filtering) & **77.9** \\ + OURS (Strong blurring) & 77.78 \\ + OURS (Mean filling) & 77.36 \\ \end{tabular} \end{table} Table 1: Linear evaluation results on ImageNet-100 classification task, where MoCov2 or SimCLR are used as the contrastive SSL framework for pretraining the feature encoder. The best results are marked in bold. two widely used benchmarks, i.e. Caltech-101 [10] and Flowers-102 [25], which are different from the one used for feature encoder pretraining (i.e. ImageNet-100). Again, we keep the pretrained feature encoder fixed and only train the linear classifier when learning the downstream task. The experimental results are reported in Table 3, where our proposed method outperforms both MSCN and ADIOS baselines on both Caltech-101 and Flowers-102 datasets. **Object detection and instance segmentation.** Now we turn to different downstream tasks on object detection and instance segmentation, where the former is conducted on VOC07+12 [9] and COCO [24] datasets while the latter is conducted on the COCO dataset. 
With keeping the feature encoder fixed and only supervisedly training the detection or segmentation heads (where for VOC07+12 dataset we adopt the Faster-RCNN [31] model with a C4 backbone which finetuned for 24k iterations, while for COCO dataset we adopt Mask R-CNN [19] model with C4 backbone which is finetuned for 180K iterations, following the same experimental setting as in the original MoCov2 paper), the experimental results are summarized in Table 2. From the results we can again observe the consistent out-performance with respect to both MSCN and ADIOS baselines. The transfer learning results provided in Table 3 and Table 2 support that our method can effectively learn general-purpose features which can be transferred across different downstream tasks or datasets, providing a promising finding for future research in the field of self-supervised learning. MoCov2. Such results are aligned with [35], which suggests that maintaining a lower variance in the key branch than in the query branch during pretraining can be beneficial, and not the other way around. Moreover, we hypothesize that masking can also influence variance such that manipulating it through masking can lead to better results. To further support our claim, we conduct an experiment comparing the variance of standard data augmentation with standard data augmentation combined with our saliency masking for all of our settings, where the results are presented in Table 7. We can observe that standard data augmentation combined with our saliency masking leads to a higher variance than standard data augmentation for all our settings. **Impact of Hard Negative Samples.** We conduct a study to verify our designs of creating hard negative samples by masking large portion of salient patches (40%-70%) of the key view. Table 8 compares the results of being with or without our proposed hard negative samples, showing the benefit brought by our designs. By masking more salient patches, the remaining part of image has more background than foreground. We deem this view as negative one, though it comes from the same sample image as the positive view. The model might be confused if it is biased to the background when the similarity between the hard negative view and the query view is too high. In turn, the model with our hard negative samples will focus on learning more from the foreground thus boosting the performance. **Impact of Localization Networks Pretrained on Different Datasets.** As we adopt ImageNet-pretrained classfication model as the basis for SCDA to build our localization network (for producing saliency maps), there could exist potential concern if we take any advantage later in the SSL pretraining stage than other baselines. To resolve such potential concern, here we conduct a study to use another model, the ResNet-50 backbone from a Faster R-CNN detection model pretrained on the COCO dataset, as the basis for SCDA. With using our high-pass filtering masking strategy guided by the saliency maps respectively from different localization networks in MoCov2 SSL framework to training the feature encoders, the experimental results upon various downstream tasks and datasets (following the same settings as previous experiments) are summarized in Table 5. We are able to observe that both localization networks result in similar performances, thus verifying that our method is not sensitive to the selection of localization network once it is able to provide reasonable localization capability. 
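All linear-evaluation numbers reported above follow the same protocol: the SSL-pretrained encoder is frozen and only a linear classifier is trained on top. A minimal sketch of that protocol is given below; the weight-loading path is hypothetical, and apart from the base learning rate of 30.0 quoted for the MoCov2 linear probe, the hyperparameters are placeholders.

```python
# Minimal linear-evaluation sketch: frozen pretrained encoder + trainable linear classifier.
import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet50()
# Load the SSL-pretrained weights here (path is hypothetical).
# encoder.load_state_dict(torch.load("ssl_pretrained_resnet50.pth"), strict=False)
encoder.fc = nn.Identity()              # expose 2048-d features
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

classifier = nn.Linear(2048, 100)       # e.g. 100 classes for ImageNet-100
optimizer = torch.optim.SGD(classifier.parameters(), lr=30.0, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():
        feats = encoder(images)         # frozen backbone
    logits = classifier(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```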
## 5 Conclusion We propose a salient masking augmentation method for contrastive self-supervised learning with a ConvNet as its backbone. Compared to randomly masking patches of the input image, our salient masking provides more semantically meaningful masks while its efficacy is well verified in our ablation study. Besides masked positive samples, we further introduce a simple way to create hard negative samples according to three different masking strategies, which \begin{table} \begin{tabular}{c|c|c} Setting & Mask branch & Top1 \\ \hline Baseline MoCov2 & ✗ & 56.00 \\ \hline \multirow{3}{*}{High-pass filtering} & key & 52.25 \\ & both & 56.29 \\ & query & **58.19** \\ \hline \multirow{3}{*}{Strong blurring} & key & 51.06 \\ & both & 56.83 \\ & query & **58.28** \\ \hline \multirow{3}{*}{Mean filling} & key & 47.53 \\ & both & 56.86 \\ \cline{1-1} & query & **58.34** \\ \end{tabular} \end{table} Table 6: Comparison of masking on different branches of SSL framework. \begin{table} \begin{tabular}{c|c} Augmentation & Variance (1e-3) \\ \hline Standard & 6.196 \\ \hline + High-pass filtering masking & 10.952 \\ High-pass filtering w/o masking & 8.67 \\ + Strong blurring masking & 7.744 \\ + Mean filling masking & 7.776 \\ \end{tabular} \end{table} Table 7: Variance of the representations b/w standard augmentation and our saliency masking (refer to [35] for variance calculation, based on TinyImageNet validation set). \begin{table} \begin{tabular}{c|c|c|c c c} Setting & Positive mask & Negative mask & Top1 \\ \hline High-pass & ✓ & ✗ & 55.25 \\ filtering & ✓ & ✓ & **56.26** \\ \hline Strong & ✓ & ✗ & 55.52 \\ blurring & ✓ & ✓ & **56.83** \\ \hline Mean & ✓ & ✗ & 55.18 \\ filling & ✓ & ✓ & **56.21** \\ \end{tabular} \end{table} Table 8: Efficacy of our proposed hard negative samples. \begin{table} \begin{tabular}{c|c c|c c c} Pretrained dataset & ImageNet-100 & Caltech-101 & Flowers-102 & \multicolumn{2}{c}{VOC07+12 det} \\ & & Top1 & & AP\({}_{all}\) & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline ImageNet-1K & 73.80 & 84.91 & 90.95 & 50.89 & 77.66 & 55.44 \\ COCO & 73.78 & 85.68 & 90.83 & 50.22 & 77.41 & 54.28 \\ \end{tabular} \end{table} Table 5: We compare the results of using localization networks pretrained on the ImageNet-1K and COCO dataset, where the localization network is to produce the saliency maps for guiding our masking operations. Similar performances produced by using these two localization networks demonstrate that our proposed method is insensitive to the selection of localization network for performing saliency computation, once it provides feasible localization results. further improve the capability of training the feature encoder. The extensive experimental results demonstrate the effectiveness and superiority of our proposed method. A Appendix In this appendix, we firstly provide the summarization of our contributions with respect to our two main baselines MSCN [20] and ADIOS [34] (cf. Section A.1 and Section A.2 respectively) as well as our contribution in terms of saliency masking (cf. Section A.3). 
Furthermore, we show the efficacy of our three different masking strategies (i.e., high-pass filtering, strong blurring, and mean filling) with more experiments as well as discuss the computational cost: In Section A.4, we provide detailed experimental setups and conduct various downstream tasks (i.e., classification, object detection, and semantic segmentation) in different contrastive SSL frameworks (i.e., MoCov2 [5] and SimCLR [4]); While in Section A.5, we compare the computational cost of three different masking strategies with MSCN [20] and ADIOS [34]. ### Emphasis upon our contribution compared to baseline MSCN [20] Here we would like to emphasize again that our contributions stand out from the ones of MSCN [20] as it includes: 1) _Saliency masking with various masking strategies_ (their benefits are shown in Table 4 and 1 of the main manuscript), in which MSCN does not adopt saliency-guided masking but applies random masking, and its masking strategy based on high-pass filtering constrains the setting of downstream tasks (since the input for the downstream tasks needs to be firstly high-pass-filtered as well, i.e. having the prior knowledge upon how the pre-training of feature extractor is done, c.f. lines 360-372 in our main manuscript). Our proposed strong blurring and mean-filling masking strategies are novel and practical as they do not have such constraints, thus being more flexible; 2) Based on the explicit analysis of _variance manipulation_, our proposed method applies masking solely on the query branch of the siamese framework and is shown to consistently improve the performance for all masking strategies (c.f. Table 6 of the main manuscript); 3) Generating the _hard negative samples_ easily by masking only the foreground patches with the help of saliency (cf. Table 8 of the main manuscript for the improvement based from such design). ### Emphasis upon our contribution compared to baseline ADIOS [34] We would like to emphasize that our contributions stand out from the ones of ADIOS [34] as it includes: 1) _Efficiency in obtaining (partially) semantic masks_. While both our proposed method and ADIOS employ a localization network to address the "where to mask" issue, our approach achieves a more favorable trade-off between obtaining (partially) semantic masks and computational effort. Notably, the localization network we utilize remains frozen during feature extractor training, whereas ADIOS requires joint training of the localization network (UNet-based segmentation model) alongside the feature extractor; 2) _Variance manipulation in single branch_. ADIOS masks a _single view_ while both views (i.e. masked and unmasked) will go through both query and key branches (as indicated in their source code). In comparison, our design shows that incorporating variance manipulation through masking only the query branch has a positive impact on the Siamese network. Our method differs from ADIOS in terms of both operation and motivation (i.e. variance manipulation). Further details of our investigation and discussion can be found in lines 744-807, and while corresponding ablation studies can be found in Tables 6 and 7. 
### Our contribution in saliency masking As described in lines 101-113 in our main manuscript, and we would like to clarify again here: most existing studies of adopting masking operations (together with self-reconstruction objective) to realize self-supervised learning are based on the _transformer backbone_ thanks to the tokenized input (where the masking is simply to block out some tokens), and the prior works (e.g. SemMAE [22], MST [23], BEiT [1], iBOT [41], MAE [17], and SimMIM [37]) are designed for transformers as well. In contrast, we aim to apply masking for _convolutional neural networks_, which is actually nontrivial due to the unwanted edges caused by masking (and that is exactly why MSCN [20] needs to introduce the high-pass filtering at first). Moreover, even there exists some transformer-based prior works adopting the saliency operations as well, the ways of their applying saliency masking are also different from ours: For instance, SemMAE [22] requires a two-stage training process to determine where to apply the mask, while our approach achieves the same goal with a single feature extractor and end-to-end training; MST [23] also aims to avoid masking important objects, while our method of explicitly distributing masked patches across foreground and background empirically leads to better performance. ### More Experimental Results In this section, we provide the results for all the downstream tasks with three different masking strategies (i.e., high-pass filtering, strong blurring, and mean filling) and baselines (i.e., MSCN [20] and ADIOS [34]) based on two contrastive SSL frameworks (i.e., MoCov2 [5] and SimCLR [4]). In the pretraining stage, we train the feature encoder (under MoCov2 and SimCLR frameworks) with using ResNet-50 as the backbone on the ImageNet-100 [33] dataset for 200 epochs. We conduct experiments on three datasets (i.e., ImageNet-100, Caltech-101 [10], and Flower-102 [25]) for downstream classification tasks and supervisedly train a linear classifier while the feature encoder is kept fixed/frozen for 100 epochs. We conduct experi ments on VOC07+12 [9] and COCO [24] datasets for downstream detection tasks, where the COCO dataset is also used for the downstream instance segmentation task. For the VOC07+12 dataset, we adopt the Faster R-CNN [31] model with C4 backbone which is finetuned for 24k iterations; while for the COCO dataset, we adopt the Mask R-CNN [19] model with C4 backbone which is finetuned for 180k iterations (using 1\(\times\) learning rate schedule). **MoCov2 Results.** First of all, please note that most of the results based on MoCov2 framework have been provided in our main paper, here we particularly include them again for the purpose of having better and more complete overview. For MoCov2, we set the batch size to 128 and the base learning rate to 0.015 and use SGD [32] as the optimizer during pretraining. When training the linear classifier, we set the base learning rate to 30.0 and adopt a learning rate schedule that decreases the learning rate by 0.1 at epochs 60 and 80. All MoCov2's downstream classification results are reported in the upper half of Table 9, while all the downstream detection and instance segmentation results are reported in the upper half of Table 10. Our method outperforms the fundamental contrastive SSL framework (i.e., MoCov2, which has no masking involved) and two baselines (i.e., MSCN and ADIOS) in all the downstream tasks. 
**SimCLR Results.** For SimCLR, we set the batch size to 256, the base learning rate to 0.3, and use LARS [38] as the optimizer during pretraining. When training the linear classifier, we set the batch size to 256, the base learning rate to 1.0, and adopt a cosine learning rate schedule. All the SimCLR's downstream classification results are reported in the lower half of Table 9, while all the downstream detection and instance segmentation results are reported in the lower half of Table 10. Our method outperforms the fundamental contrastive SSL framework (i.e., SimCLR, which has no masking involved) and two baselines (i.e., MSCN and ADIOS) in all the classification tasks; but ADIOS slightly outperforms our method in the detection and instance segmentation tasks, where we attribute this to two reasons. Firstly, according to ablation studies conducted in [35], manipulating variance across branches in symmetric encoders (i.e. SimCLR) does not improve as much as that in asymmetric encoders (i.e., MoCov2), limiting improvement in our three masking strategies. Secondly, more detailed semantically meaningful masks of ADIOS are learnt in its pretraining stage, which yield better performance for the downstream detection and instance segmentation tasks (as both detection and instance segmentation can be seen as more detailed recognition tasks than classification). However, noting that an occlusion module needs to be trained jointly with the main SSL objective to learn such masks for ADIOS (thus being believed to require more computational efforts). In contrast, our saliency masking utilizes a pre-trained localization network before masking (where the resultant masks are less detailed than the ones in ADIOS but no additional joint learning is required) and still contributes to the comparable results with ADIOS. **Supervised Baseline Results.** To establish a solid foundation, we create a supervised baseline. In this baseline, we train an image classification model using ResNet-50 as the feature extractor. Our training setup involves using a batch size of 256, a base learning rate of 0.1, and a learning rate decay of 10 every 30 epochs, and employing SGD as our optimizer when training on the ImageNet-100 dataset. For downstream classification tasks involving Caltech-101 and Flowers-102, we follow our SSL approaches. In these cases, we kept the ResNet-50 feature which is trained on ImageNet-100 fixed, and train a linear classifier with hyperparameters similar to our SSL approaches. All classification tasks undergo 100 epochs of training. Regarding downstream detection and instance segmentation tasks, we utilize settings similar to those used in our SSL methods. The top row of Table 9 presents the results for the supervised baseline classification, while the top row of Table 10 showcases the results for downstream detection and instance segmentation. Despite achieving the highest accuracy in ImageNet-100 classification, the supervised baseline exhibits the poorest transferability. Both transfer classification tasks achieve only 20% accuracy, a result we attribute to the distribution differences between the ImageNet-100 and Caltech-101/Flowers-102 datasets. **Monocular Depth Estimation Downstream Task Results** In addition to the commonly addressed downstream tasks of classification, detection, and instance segmentation in most SSL previous works, we have extended the evaluation of our approach to include monocular depth estimation. 
To achieve this, we adopt Monodepth2 [15] as our reference and substitute its feature encoder with our pretrained ResNet-50, which remains frozen during training. We maintain identical hyperparameters to those used in Monodepth2 and conduct our evaluation on the KITTI 2015 dataset [13]. The results are presented in Table 11. Notably, whereas Monodepth2 trains all model components, we exclusively train the depth decoder and the pose network while keeping the feature encoder fixed. Our mean filling masking strategy produces results on par with the original Monodepth2, and all our settings outperform baseline methods (MoCov2, MoCov2+MSCN, MoCov2+ADIOS). Furthermore, our approach's learned features demonstrates the capacity to generalize to tasks beyond the scope of traditional classification, detection, and instance segmentation. ### Computational Cost We compare the computational cost of ADIOS [34], MSCN [20], and our three masking strategies (i.e., high-pass filtering, strong blurring, and mean filling) using MoCov2 as the SSL framework. Training time (in minutes) per epoch in ImageNet-100 for each method is measured. Serving as the base SSL framework of all methods, MoCov2 takes 5.5 minutes to train one epoch. In order to alleviate the parasitic edges caused by masking operation in ConvNets, MSCN [20] adopts a high-pass filter and applies random masking (including channel-wise and focal masking) on input images, which in results takes 7 minutes per epoch. Instead of randomly masking, ADIOS [34] proposes an UNet-based occlusion module to adversarially learn along with the feature encoder to determine the regions to be masked, which is called masking slot. The memory and computational cost will increase linearly as the number of masking slots increases. 10 minutes are needed to train one epoch with 6 masking slots in ADIOS. In order to determine where and how to mask in an easier way, our three masking strategies consist of saliency computation and different image processing. In saliency computation, two forward passes through the localization network are needed to produce saliency maps for positive and (hard) negative samples. Compared to MSCN, although it takes 2 minutes longer per epoch due to the saliency constraint in our high-pass filtering strategy, we achieve better performance on various downstream tasks. 
While in mean filling and strong blurring strategies, mean value and strong blurred patches are filled in the masked regions to make those edges caused by masking less visible, in total each \begin{table} \begin{tabular}{l|c c c|c c c|c c c} \multicolumn{2}{c|}{Method} & \multicolumn{2}{c|}{VOC07+12 detection} & \multicolumn{2}{c|}{COCO detection} & \multicolumn{2}{c}{COCO instance segmentation} \\ & \(AP_{all}\) & \(AP_{50}\) & \(AP_{75}\) & \(AP_{all}^{bb}\) & \(AP_{50}^{bb}\) & \(AP_{75}^{bb}\) & \(AP_{all}^{mk}\) & \(AP_{50}^{mk}\) & \(AP_{75}^{mk}\) \\ \hline Supervised & 44.30 & 73.47 & 46.50 & 37.84 & 57.09 & 40.67 & 33.14 & 53.95 & 35.31 \\ \hline \hline MoCov2 & 50.27 & 76.68 & 54.76 & 38.52 & 57.62 & 41.67 & 33.75 & 54.70 & 35.86 \\ \hline + MSCN & 50.27 & 76.99 & 54.70 & 38.80 & 58.09 & **42.20** & 33.89 & 54.78 & **36.36** \\ + ADIOS & 45.85 & 73.44 & 48.45 & 38.12 & 57.38 & 41.29 & 33.38 & 54.25 & 35.63 \\ \hline \hline + OURS (High-pass filtering) & 50.89 & 77.66 & 55.44 & 39.16 & 58.62 & 42.45 & 34.22 & 55.28 & 36.30 \\ + OURS (Strong blurring) & **50.76** & **77.29** & 54.75 & 38.90 & **58.13** & 42.11 & **33.93** & 54.77 & 36.53 \\ + OURS (Mean filling) & 50.59 & 76.97 & **55.30** & **38.93** & 58.08 & 42.17 & 33.92 & **54.86** & 36.27 \\ \hline \hline SimCLR & 40.34 & 69.86 & 40.96 & 36.30 & 55.55 & 38.80 & 31.99 & 52.28 & 33.80 \\ + MSCN & 43.50 & 73.18 & 45.04 & 37.88 & 57.44 & 40.68 & 33.36 & 54.15 & 35.57 \\ + ADIOS & 43.83 & **73.42** & **45.01** & 38.76 & 58.35 & 41.96 & 33.94 & 54.96 & 36.23 \\ \hline \hline + OURS (High-pass filtering) & **43.76** & 73.43 & 44.90 & **38.45** & **57.79** & **41.58** & **33.90** & **54.70** & **35.93** \\ + OURS (Strong blurring) & 43.20 & 73.15 & 44.27 & 37.44 & 56.80 & 39.96 & 32.92 & 53.73 & 35.00 \\ + OURS (Mean filling) & 43.20 & 72.54 & 44.79 & 37.27 & 56.46 & 40.10 & 32.68 & 53.35 & 34.54 \\ \end{tabular} \end{table} Table 10: Transfer learning results on VOC07+12 and COCO detection tasks, and COCO instance segmentation task. Performances in terms of \(AP_{all}\), \(AP_{50}\) and \(AP_{75}\) metrics are reported, and the best and second-best results on each task of different contrastive SSL frameworks (i.e., MoCov2, SimCLR) are marked in orange and blue respectively. \begin{table} \begin{tabular}{l|c c c} Method & ImageNet-100 & Caltech-101 & Flowers-102 \\ \hline Supervised & 82.72 & 21.99 & 20.29 \\ \hline \hline MoCov2 & 68.22 & 81.87 & 88.39 \\ \hline + MSCN [20] & 70.28 & **84.13** & 90.10 \\ + ADIOS [34] & 62.76 & 79.83 & 88.39 \\ \hline \hline + OURS (High-pass filtering) & 73.80 & 84.91 & 90.95 \\ + OURS (Strong blurring) & **72.50** & 83.95 & 90.59 \\ + OURS (Mean filling) & 70.84 & 82.68 & **90.83** \\ \hline \hline SimCLR & 69.77 & 78.20 & 85.21 \\ \hline + MSCN [20] & 77.18 & **86.99** & **91.08** \\ + ADIOS [34] & 71.12 & 81.96 & 87.53 \\ \hline + OURS (High-pass filtering) & 77.90 & 87.04 & 90.71 \\ + OURS (Strong blurring) & **77.78** & 83.41 & 91.93 \\ + OURS (Mean filling) & 77.36 & 83.55 & 90.83 \\ \end{tabular} \end{table} Table 9: Linear evaluation results on ImageNet-100, Caltech-101 and Flowers-102. The best and second-best results on each dataset with different constrastive SSL frameworks (i.e., MoCov2, SimCLR) are marked in orange and blue respectively. epoch takes 7.5 and 10.5 minutes respectively for their training. The strong blurring strategy spends more time than other strategies, in which the bottleneck is attributed to the GPU I/O. 
Since our saliency masking procedure is done on the GPU, for our strong blurring strategy we need to move both the standard-augmented images and the strongly blurred images onto the GPU. The data transfer time will therefore be twice that of our other two strategies (i.e., the high-pass filtering strategy only moves images onto the GPU after high-pass filtering, while the mean filling strategy only moves images onto the GPU after standard data augmentation). We will keep improving the overall GPU I/O procedure for our proposed strategies. Furthermore, we test the accuracy of our high-pass filtering method against MSCN on ImageNet-100 with matching pre-training times. MSCN achieves its highest accuracy of 70.28% after 197 epochs, while within roughly the same wall-clock time our high-pass filtering masking strategy has completed only 131 epochs yet already reaches 71.66% accuracy, about 1.4% higher than MSCN. To sum up, our high-pass filtering strategy strikes a better balance between efficiency and efficacy than MSCN and ADIOS.
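For completeness, the following is a rough sketch of how the three masked-region fill strategies compared above (high-pass filtering, strong blurring, mean filling) could be applied at the pixel level. The patch size and the particular high-pass filter are assumptions for illustration; MSCN's original filter and the authors' exact implementation may differ.

```python
# Sketch of the three masked-region fill strategies; patch size and the high-pass filter are assumptions.
import torch
import torchvision.transforms.functional as TF

def apply_fill(img: torch.Tensor, patch_mask: torch.Tensor, strategy: str, patch: int = 32) -> torch.Tensor:
    """img: (C, H, W); patch_mask: (H // patch, W // patch) with 1 = masked."""
    mask = patch_mask.repeat_interleave(patch, 0).repeat_interleave(patch, 1)   # pixel-level mask
    mask = mask.unsqueeze(0).to(img.dtype)
    # 31x31 kernel with variance 10 as stated in the paper, i.e. sigma = sqrt(10).
    sigma = 10 ** 0.5

    if strategy == "high_pass":
        # One simple high-pass choice (image minus its low-pass version), then zero-fill the masked patches.
        high = img - TF.gaussian_blur(img, kernel_size=[31, 31], sigma=sigma)
        return high * (1.0 - mask)
    if strategy == "strong_blur":
        # Replace masked patches with a strongly blurred version of themselves.
        blurred = TF.gaussian_blur(img, kernel_size=[31, 31], sigma=sigma)
        return img * (1.0 - mask) + blurred * mask
    if strategy == "mean_fill":
        # Fill masked patches with the per-channel mean of the image.
        mean = img.mean(dim=(1, 2), keepdim=True)
        return img * (1.0 - mask) + mean * mask
    raise ValueError(strategy)
```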
2309.14332
Hall conductivity pump
The Thouless charge pump represents a transfer of electric charge through a gapped one-dimensional system between its zero-dimensional boundaries under a periodic change of a parameter. The value of the passed charge during a single cycle is known to be a topological invariant. We construct an analogous topological invariant that measures a pump of Hall conductance inside a three-dimensional material between its two-dimensional boundaries.
Lev Spodyneiko
2023-09-25T17:57:11Z
http://arxiv.org/abs/2309.14332v1
# Hall conductivity pump ###### Abstract The Thouless charge pump represents a transfer of electric charge through a gapped one-dimensional system between its zero-dimensional boundaries under a periodic change of a parameter. The value of the charge passed during a single cycle is known to be a topological invariant. We construct an analogous topological invariant that measures a pump of Hall conductance inside a three-dimensional material between its two-dimensional boundaries. ## I Introduction The primary objective of the present paper is to build on previous work [1; 2] and construct an invariant that measures the pumping of Hall conductance in three and higher dimensions. We take the opportunity to review the physical idea of the construction. This work is focused on topological invariants of gapped systems. The simplest example is a zero space dimensional (0d) system with a finite Hilbert space which conserves \(U(1)\) charge. If the ground state is separated by a gap from the rest of the spectrum, the expectation value of the charge operator is an integer. The latter cannot change under continuous deformations of the Hamiltonian as long as the gap is preserved and the charge conservation is respected. In this sense, it is a topological invariant, and this is what is meant by the term topological in the rest of the paper. There is no straightforward generalization of this invariant to a one-dimensional (1d) system since the charge of the ground state typically scales extensively with the size of the system. Nevertheless, there is a more sophisticated generalization. One-dimensional systems always have zero-dimensional systems at the boundary which can have a non-trivial charge. However, one can always change the charge of the boundary by stacking it with a decoupled 0d system, and thus it is inherently ambiguous. This is not very interesting, and in order to get something non-trivial one can employ intuition from bulk-boundary correspondence. In the present setting, this would mean that the conservation of charge on the boundary is anomalous because the charge can flow into the bulk. An example is the Thouless charge pump [3; 4], in which the charge is transmitted from one boundary to the other through a gapped bulk by a periodic adiabatic change of a parameter. The value of the pumped charge is a topological invariant of a family of systems in the following sense. Consider a family of gapped 1d Hamiltonians parametrized by a periodic parameter. One can compute how much charge passes through a fixed section as the parameter is periodically varied. The result will not change if we add a small perturbation to the family as long as each member of the family stays gapped. The charge passing through a fixed section during a single cycle is a topological invariant that protects the family from deformation to a trivial family, i.e. a family that does not depend on the parameter. This way one can construct a topological invariant of a family from a topological invariant living in a lower dimension. Heuristically, one can imagine that a charged mode, localized on one boundary, delocalizes and moves through the bulk onto another boundary [5]. A natural question is whether a similar thing can happen for other topological invariants. The answer is positive, and an example is a mode with a non-trivial Chern class of the Berry connection [6] (which is actually a topological invariant of a 2-parameter family of 0d systems) that can move through the bulk from one edge to another.
This leads to an invariant of a 3-parameter family for 1d system. Both of these examples are interesting but they start from invariants of 0d systems. It is interesting to find a generalization where a higher dimensional topological invariant is "pumped" through the bulk. One can pump a non-trivial family through the bulk. Namely, one can construct an invariant [2] that measures how many 1d Thouless pumps have been pumped from one 1d edge of 2d system to another. This is intriguing but somewhat unsatisfactory since the pumped quantity is an invariant of a family itself. In the present paper, we will construct a topological invariant that measures the pumping of a 2d Hall conductance inside of 3d material under periodic change of a parameter. More precisely, consider a family of gapped 3d systems parametrized by a periodic parameter with a boundary consisting of two planes. The invariant measures by how much the quantum Hall conductance of one boundary changes during an adiabatic change of a parameter. This characterization of the invariant contains an important caveat which is a manifestation of the bulk-boundary correspondence for topological invariants of families. The easiest way to explain it is to use an example of a Thouless charge pump. There are two ways to compute the pumped charge. One is to use the static perturbation theory and find a charge passing through a section of an infinite 1d system. The system is assumed to be gapped and to have a unique ground state. The parameter is assumed to be varied adiabatically and thus we can use the adiabatic theorem. On the other hand, one can consider a finite-size system with two boundaries and compute the change of charge on one of the boundaries. In this case, one can try to apply the adiabatic theorem but it leads to a problem. The theorem states that the system should stay in the ground state. This would mean that the charge of each of the boundaries would return to the initial value after a complete cycle resulting in a zero passed charge. The seeming contradiction between the two computations lies in the applicability of the adiabatic theorem. The latter assumes that the instantaneous ground state does not intersect other states for all values of the parameters. This leads to the bulk boundary correspondence of invariants of the families: whenever the invariant computed in the bulk is non-trivial one cannot introduce a gapped boundary condition for all members of the family which continuously depend on the parameters. A seemingly different way of how a topological invariant can be generalized to higher dimensions is via a charge of solitonic objects. For example, in 1d a soliton can carry a non-trivial charge which is protected under continuous deformations. The existence of charged defects can be used to characterize the topologically protected properties of a state. However, this approach can be related to the previous one. Instead of considering slow adiabatic variation in time, one can consider slow variations in space. As one moves along the soliton, the parameters of the Hamiltonian change. Far away from a soliton the Hamiltonian should return to itself, thus one can think about the change of the parameter as one moves perpendicular to the soliton as a loop in the parameter space. The Thouless charge pump of this cycle is related to the charge of the soliton [2]. Similarly, one can think about domain wall defects in 3d system with non-trivial 2d Hall conductivity. 
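For concreteness, a textbook lattice example (a standard illustration rather than a model analyzed here) realizing the simplest of these pumps is the Rice-Mele chain, \[H(\lambda)=\sum_{j}\Big[\big(t+(-1)^{j}\delta(\lambda)\big)\big(c^{\dagger}_{j}c_{j+1}+\mathrm{h.c.}\big)+(-1)^{j}\Delta(\lambda)\,c^{\dagger}_{j}c_{j}\Big],\] which at half filling and fixed \(t>0\) is gapped whenever \((\delta,\Delta)\neq(0,0)\). A loop \(\lambda\mapsto(\delta(\lambda),\Delta(\lambda))\) winding once around the origin pumps exactly one unit of charge through any section per cycle, and the corresponding static texture in which \((\delta,\Delta)\) winds once as a function of position binds one unit of charge, illustrating the soliton-charge relation just described.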
A one-parameter family of systems represents a loop in the space of gapped theories \(\mathfrak{M}_{D}\) in \(D\)-dimensions. Whenever a topological invariant is non-trivial the loop cannot be contracted to a point and represents a non-trivial element in the fundamental group \(\pi_{1}(\mathfrak{M}_{D})\). Similarly, invariants of \(n\)-parameter families distinguish non-trivial elements of \(\pi_{n}(\mathfrak{M}_{D})\). Typically, one is only interested in different gapped phases that correspond to different connected components \(\pi_{0}(\mathfrak{M}_{D})\) of \(\mathfrak{M}_{D}\). One might wonder why one should be interested in more sophisticated details of the topology of \(\mathfrak{M}_{D}\) besides its zeroth homotopy group \(\pi_{0}(\mathfrak{M}_{D})\). First, as we discussed above, non-trivial families are related to solitons and other defects. Secondly, it is interesting that each member of the family can be in a trivial phase (i.e. it can be continuously deformed into a trivially gapped system) while the family as a whole is non-trivial. Lastly, in the case of the invertible topological orders, it is expected [7; 8] that \(\mathfrak{M}_{D}\) forms a loop spectrum \(\pi_{n}(\mathfrak{M}_{D})=\pi_{n+1}(\mathfrak{M}_{D+1})\). The construction of invariants of families can be used to test and prove these conjectures. The above discussions were very intuitive and gave a simple physical picture. However, it was heuristic at best and contained numerous implicit assumptions. Even in the case of the charge, separating the boundary from the bulk and tracking the mode is a very sophisticated procedure and is plagued with ambiguities. For the Hall conductance pump, the matter is further complicated by pumping a subsystem of macroscopic size. We can avoid the technical issues by using the descent equation coined by Kitaev [9] and further developed in [1; 2]. Instead of dealing with the pumps directly, it defines a form on the parameter space which can be computed by a localized static response. The integral of this form over the parameter space can be shown to be a topological invariant of the family and can be interpreted as a pump: a non-trivial flow of 2d system under a periodic adiabatic pump inside of 3d system. The descend equation can be applied further to construct invariants of \(n\)-parameter families in \(D+n-2\) dimensions. Since Hall conductivity is not an integral of a localized density the formalism of the descent equation is not directly applicable to it. Fortunately, one can use Streda formula in order to write Hall conductivity in the form appropriate for the descent equation. Even though the pumped quantity is two-dimensional in nature the resulting invariant is given by a formula that only has contributions localized around a point. In the present paper, we will use Hamiltonian formalism and by a family of systems, we will mean a family of gapped Hamiltonians. This allows expression for invariants in terms of standard perturbation theory. The resulting invariant only depends on the ground-state wave function with other details of the Hamiltonian being irrelevant. In the purely wave-functions language these results were derived in [10]. Also, similar ideas in the setting of quantum field theory were studied in [11; 12]. The paper is structured as follows. After explaining our setting in Section II, we review the descend construction on the example of a charge pump in Section III. We derive the higher-dimensional invariants for the Hall effect in Section IV. 
Field theory interpretation as well as applications are discussed in Section V. We summarize the bookkeeping notations in Appendix A and derive the Streda formula for lattice systems in Appendix B. We thank Anton Kapustin and Nikita Sopenko for the discussions. The work was supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant (651446) from the Simons Foundation. ## II Lattice systems In this paper, we consider lattice systems defined on a subset \(\Lambda\subset\mathbb{R}^{d}\). The total Hilbert space is a tensor product of onsite Hilbert spaces. The latter is assumed to be finite-dimensional. The closure of the lattice \(\Lambda\) is assumed to be discrete. We will restrict to models which are invariant under \(U(1)\) symmetry generated by charge densities \(Q_{p}\) with \(p\in\Lambda\). They are hermitian operators with integer eigenvalues such that \[[Q_{p},Q_{q}] =0, \tag{1}\] \[\sum_{q\in\Lambda}[Q_{q},H_{p}] =0, \tag{2}\] where \(H_{p}\) is Hamiltonian density. Operators \(Q_{p}\) and \(H_{p}\) are assumed to act trivially on a site \(q\) if \(|q-p|>R_{\rm int}\) for all \(p,q\) and some constant \(R_{\rm int}\). We will only consider gapped Hamiltonians with a unique ground state. The time-derivative of the charge density reads \[\frac{dQ_{p}}{dt}=i[H,Q_{p}]=-\sum_{q\in\Lambda}J_{qp}, \tag{3}\] where we have defined the current as \[J_{qp}=i[Q_{p},H_{q}]-i[Q_{q},H_{p}]. \tag{4}\] It represents the current flowing from \(p\) to \(q\) and is thus chosen to be anti-symmetric. Suppose, \(f(p)\) is 1 on a finite subset \(A\subset\Lambda\) and zero otherwise. Then, the charge in this region is \[Q(f)=\sum_{p\in\Lambda}f(p)Q_{p}. \tag{5}\] It evolves in time as \[\frac{dQ(f)}{dt}=-J(\delta f), \tag{6}\] where we have defined \[J(\delta f)=\frac{1}{2}\sum_{p,q\in\Lambda}(f(q)-f(p))J_{pq}, \tag{7}\] which physically corresponds to the current flowing through the boundary of \(A\). We defined it for a function \(f\) with compact support, but it can be generalized to include more general functions such as \(f(p)=\theta(a-x^{i}(p))\), where \(x^{i}(p)\) is the \(i\)th coordinate of the site \(p\) in \(\mathbb{R}^{d}\). In this case, \(J(\delta f)\) will represent the total charge flowing through the hyperplane \(x^{i}=a\). In a ground state, the charge density is constant \[\sum_{p\in\Lambda}\langle J_{pq}\rangle=0. \tag{8}\] This equation can be solved \[\langle J_{pq}\rangle=\sum_{r\in\Lambda}M_{pqr}, \tag{9}\] where \(M_{pqr}\) is magnetization density [13] which is totally antisymmetric in \(p,q,r\). Solution of eq. (8) requires specifying boundary conditions and magnetization non-locally depends on them. However, the variation of magnetization with respect to change of parameters is local [13; 14; 15] and is given by \[\frac{dM_{pqr}}{d\lambda}=\oint_{z=0}\frac{dz}{2\pi i}\mathrm{Tr}\left(G\frac{ dH_{p}}{d\lambda}GJ_{qr}+G\frac{dH_{q}}{d\lambda}GJ_{rp}+G\frac{dH_{r}}{d \lambda}GJ_{pq}\right), \tag{10}\] where we introduced the resolvent \(G=\frac{1}{z-H}\) to shorten the formulas. The integral is around ground state energy \(z=E_{0}\) which we choose to be 0. Later, we will need a derivative of magnetization with respect to the chemical potential \(\mu\). It is related [16] to derivative of magnetization with respect to a deformation \(dH_{p}=d\lambda_{Q}\,Q_{p}\) as \(\frac{dM}{d\mu}=-\frac{dM}{d\lambda_{Q}}\) and is given by \[\frac{dM_{pqr}}{d\mu}=-\oint_{z=E_{0}}\frac{dz}{2\pi i}\mathrm{Tr}\left(GQ_{p} GJ_{qr}+GQ_{q}GJ_{rp}+GQ_{r}GJ_{pq}\right). 
\tag{11}\] ## III Descendants from Thouless pump. In this section, we explain the basic idea behind the construction of higher dimensional topological invariants. We focus on the example of a first descendant of 0d electric charge of the ground state. The result is a Thouless charge pump. This example is simple and physically transparent while highlighting the main ideas of the construction. Suppose for a moment that the number of cites in \(\Lambda\) is finite. Then one has a finite-dimensional quantum mechanical system. Since the Hamiltonian is gapped and has a unique ground state the expectation value of the total charge in the ground state \(\langle Q\rangle\) is an integer. It remains unchanged under continuous deformations of Hamiltonian as long as it stays gapped and commutes with the charge operator. So, it is a topological invariant of 0d systems. If one considers a one-dimensional lattice the charge of the ground state will typically diverge as volume increases and it is ill-defined in infinite volume. However, one can expect the charge passing through any section under a deformation of the Hamiltonian to be finite. Consider a function \(f(x)=\theta(x-a)-\theta(x-a-L)\), i.e. \(f(x)\) is \(1\) if \(a<x<a+L\) and zero otherwise. We assume that \(a,a+L\) are not points of 1d lattice \(\Lambda\subset\mathbb{R}\). Consider an infinitesimal deformation of a system given by \(H_{p}\to H_{p}+dH_{p}\). The change of the ground state expectation value of the total charge in the region \(a<x<a+L\) is \[\sum_{a<p<a+L}d\langle Q_{p}\rangle=\sum_{p\in\Lambda}f(p)d\langle Q_{p}\rangle =-\sum_{p,q\in\Lambda}f(p)\left(\langle dH_{q}P\frac{1}{H}PQ_{p}\rangle+\langle Q _{p}P\frac{1}{H}PdH_{q}\rangle\right), \tag{12}\] where \(P\) is the projector onto the excited states and we used the quantum mechanical perturbation theory. We normalized the ground state energy to be zero. Each summand in the last formula is non-zero when \(p,q\) are deep inside the region \(a<x<a+L\). However, physically we would expect the flow of the charge to occur only at the boundaries around \(a\) and \(a+L\). This calls for the following rearrangement of the summation \[\begin{split}\sum_{p,q\in\Lambda}f(p)\langle dH_{q}P\frac{1}{H}PQ _{p}\rangle&=\sum_{p,q\in\Lambda}f(p)\left(\langle dH_{q}P\frac{1 }{H}PQ_{p}\rangle-\langle dH_{p}P\frac{1}{H}PQ_{q}\rangle\right)\\ &=\frac{1}{2}\sum_{p,q\in\Lambda}(f(p)-f(q))\left(\langle dH_{q}P \frac{1}{H}PQ_{p}\rangle-\langle dH_{p}P\frac{1}{H}PQ_{q}\rangle\right).\end{split} \tag{13}\] In the first equality, we used the fact that the total charge commutes with the Hamiltonian \(H\) and therefore with the projection \(P\). The projection acting on the ground state gives zero. In the second equality, we used the antisymmetry in \(p,q\) of the expression in the brackets. Doing the same to the other term, we find \[\sum_{p\in\Lambda}f(p)d\langle Q_{p}\rangle=\frac{1}{2}\sum_{p,q\in\Lambda}(f (q)-f(p))Q_{pq}^{(1)}=Q^{(1)}(\delta f), \tag{14}\] where we have defined \[Q_{pq}^{(1)} =\langle dH_{q}P\frac{1}{H}PQ_{p}\rangle-\langle dH_{p}P\frac{1} {H}PQ_{q}\rangle+\langle Q_{p}P\frac{1}{H}PdH_{q}\rangle-\langle Q_{q}P\frac {1}{H}PdH_{p}\rangle, \tag{15}\] \[Q^{(1)}(\delta f) =\frac{1}{2}\sum_{p,q\in\Lambda}(f(q)-f(p))Q_{pq}^{(1)}. \tag{16}\] The function \(Q_{pq}^{(1)}\) rapidly decays [17] to zero as \(|p-q|\to\infty\). 
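It is worth isolating the elementary identity behind the second equality in (13): for any kernel \(B_{pq}\) antisymmetric under \(p\leftrightarrow q\), \[\sum_{p,q\in\Lambda}f(p)B_{pq}=\frac{1}{2}\sum_{p,q\in\Lambda}\big(f(p)-f(q)\big)B_{pq},\] since renaming the summation variables gives \(\sum_{p,q}f(q)B_{pq}=\sum_{p,q}f(p)B_{qp}=-\sum_{p,q}f(p)B_{pq}\). In particular, only pairs \(p,q\) on which \(f\) takes different values contribute.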
After these manipulations, one can write \[d\sum_{a<p<a+L}\langle Q_{p}\rangle=Q^{(1)}(\delta f)\approx Q^{(1)}(\delta f ^{L})-Q^{(1)}(\delta f^{R}), \tag{17}\] where \(f^{L}(x)=\theta(x-a)\) and \(f^{R}(x)=\theta(x-a-L)\). The error is of order \(O(L^{-\infty})\). A couple of important points. First, even though the charge \[Q^{(0)}(g)=\sum_{p}g(p)\langle Q_{p}\rangle \tag{18}\] is ill-defined for \(g=f^{L,R}\), the amount of the charge \(Q^{(1)}(\delta f_{L,R})\) flowing through each edge of the region under deformation \(dH\) is well-defined. Second, the total change of the charge \(Q^{(0)}(f)\) as a parameter of Hamiltonian is periodically varied is zero. However, suppose we only focus on one boundary and compute the integral \(\oint_{S_{1}}Q^{(1)}(f^{L})\) along the cycle in the parameter space. Here the integral is over a periodic parameter of the Hamiltonian \(\lambda\sim\lambda+1\). In that case, the result may be non-trivial and is actually a topological invariant in the sense that we will explain momentarily. This is a Thouless charge pump. It measures how much charge has flown under the adiabatic periodic change of a parameter. The charge passing during one cycle cannot change under continuous variations of the Hamiltonian as long as it stays gapped. The main idea behind descend construction is to extend this procedure by replacing the charge with other topological invariants. For example, one can start from the Thouless pump \(\oint Q^{(1)}(f^{L})\) instead of the charge \(\langle Q\rangle\). Above we used the charge density \(Q^{(0)}_{p}=\langle Q_{p}\rangle\) extensively, an analog for the pump would be \(\frac{1}{2}\sum_{q\in\Lambda}\oint Q^{(1)}_{pq}(f^{L}(q)-f^{L}(p))\). The last expression looks cumbersome due to parameter space integration as well as a complicated contraction with the function \(f\). In order to simplify the computation, it is easier to work with densities valued in differential forms on the parameter space and do the integration in the parameter space and contraction with functions at the very last step. The essential step in the construction of the Thouless pump is to find the solution \(Q^{(1)}_{qp}\) of the equation \[dQ^{(0)}_{p}=d\langle Q_{p}\rangle=\sum_{q\in\Lambda}Q^{(1)}_{qp}. \tag{19}\] The function \(Q^{(1)}_{qp}\) must be chosen antisymmetric in \(p,q\) in order to have zero contribution from the region where \(f\) is constant. This equation can be generalized to1 \[dQ^{(n)}_{p_{0},\ldots,p_{n}}=\sum_{q\in\Lambda}Q^{(n+1)}_{q,p_{0},\ldots,p_{n}}, \tag{20}\] where \(Q^{(n+1)}_{q,p_{0},\ldots,p_{n}}\) is \((n+1)\)-form on the parameter space which is antisymmetric in the indices \(q,p_{0},\ldots,p_{n}\). The exterior derivative \(d=\sum_{i}d\lambda_{i}\dfrac{\partial}{\partial\lambda_{i}}\) acts on the parameters \(\lambda_{i}\) of the Hamiltonian. As one goes to higher dimensions it is getting harder to keep track of the different indices. Fortunately, a convenient formal language reviewed in Appendix A drastically simplifies the formulas as well as makes the interpretation clearer. In the following, we will use these notations. The descendant equation (20) takes the form \[dQ^{(n)}=\partial Q^{(n+1)}. \tag{21}\] The solution to this equation can be found in [2]. By evaluating \(n\)-chain \(Q^{(n)}\) on \(n\)-cochain \(\alpha^{(n)}\), we get an \(n\)-form \(\langle Q^{(n)},\alpha^{(n)}\rangle\) on the parameter space. 
If the cochain \(\alpha^{(n)}\) is closed, \(\delta\alpha^{(n)}=0\), then the resulting form will be closed with respect to \(d\) \[d\langle Q^{(n)},\alpha^{(n)}\rangle=\langle\partial Q^{(n+1)},\alpha^{(n)}\rangle=\langle Q^{(n+1)},\delta\alpha^{(n)}\rangle=0. \tag{22}\] The result of integrating the \(n\)-form \(\langle Q^{(n)},\alpha^{(n)}\rangle\) over a manifold without boundary in the parameter space will not depend on continuous deformations of this manifold: \[\int_{M}\langle Q^{(n)},\alpha^{(n)}\rangle-\int_{M^{\prime}}\langle Q^{(n)},\alpha^{(n)}\rangle=\int_{\partial Y}\langle Q^{(n)},\alpha^{(n)}\rangle=\int_{Y}d\langle Q^{(n)},\alpha^{(n)}\rangle=0, \tag{23}\] where \(M^{\prime}\) is the result of a continuous deformation of \(M\) and \(Y\) is the \((n+1)\)-dimensional manifold which \(M\) sweeps as it gets continuously deformed into \(M^{\prime}\). The boundary of \(Y\) consists of \(M\) and \(M^{\prime}\). For closed \(\alpha\) and \(M\) without a boundary, we will call \(\int_{M}\langle Q^{(n)},\alpha^{(n)}\rangle\) a topological invariant of a family of gapped Hamiltonians parameterized by \(M\). As one can see, it does not change under continuous deformations of \(M\). If \(M\) can be continuously contracted into a point, the resulting invariant will be zero. On the other hand, if \(\alpha^{(n)}\) is exact, \(\alpha^{(n)}=\delta\gamma^{(n-1)}\), we find \[\langle Q^{(n)},\alpha^{(n)}\rangle=\langle Q^{(n)},\delta\gamma^{(n-1)}\rangle=\langle\partial Q^{(n)},\gamma^{(n-1)}\rangle=d\langle Q^{(n-1)},\gamma^{(n-1)}\rangle. \tag{24}\] In this case, the integration over a manifold without boundary gives zero \[\int_{M}\langle Q^{(n)},\alpha^{(n)}\rangle=\int_{M}d\langle Q^{(n-1)},\gamma^{(n-1)}\rangle=\int_{\partial M}\langle Q^{(n-1)},\gamma^{(n-1)}\rangle=0. \tag{25}\] Thus, the topological invariant \(\int_{M}\langle Q^{(n)},\alpha^{(n)}\rangle\) is trivial if \(\alpha^{(n)}\) is exact. From the above, we see that if one evaluates \(Q^{(n)}\) on a non-trivial element of \(n\)-cochain cohomology and integrates the result over a manifold without boundary in the parameter space, one gets a topological invariant of the family of gapped Hamiltonians. This naturally leads us to the question of what the cochain cohomology of the lattice \(\Lambda\subset\mathbb{R}^{D}\) is. For lattices that are coarsely equivalent to \(\mathbb{Z}^{D}\) (roughly, lattices which can be deformed into \(\mathbb{Z}^{D}\) by a finite distance shift of every site such that only a finite number of sites accumulate at every point), one only has a non-trivial element in \(D\)-cochain cohomology. This means that one will construct a \(D\)-form \(\langle Q^{(D)},\alpha^{(D)}\rangle\) for a \(D\)-dimensional family of Hamiltonians. In the same way, one can construct [1] higher Berry curvatures \(\Omega^{(n)}\), which physically correspond to Thouless pumps of the Chern class of the Berry curvature. In the next section, we will construct descendants \(H^{(n)}\) of the Hall conductance. ## IV Descendants of Hall effect The electric Hall conductance in two-dimensional space at zero temperature is given by [13] \[\sigma^{H}(f,g)=-i\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\left(GJ(\delta f)G^{2}J(\delta g)\right), \tag{26}\] where \(f=\theta(-x^{1})\) and \(g=\theta(-x^{2})\). It is a topological invariant and we would like to construct descendants of it.
However, after striping it off from the contraction with functions \(f,g\) we find \[-i\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\left(GJ_{pq}G^{2}J_{rs}\right), \tag{27}\] which is not totally antisymmetric in \(p,q,r,s\). Moreover, it has 4 indices and in 2d we expect it to have only 3 since only 2-cochains cohomology is non-trivial in this dimension. The way to cure both of these problems is to use the Streda formula \[\sigma^{H}(f,g)=H^{(0)}(\delta f\cup\delta g), \tag{28}\] where 2-chain \(H^{(0)}\) is minus derivative of magnetization with respect to the chemical potential given by \[H^{(0)}_{pqr}=-\frac{dM_{pqr}}{d\mu}=\oint_{z=E_{0}}\frac{dz}{2\pi i }\text{Tr}\left(GQ_{p}GJ_{qr}+GQ_{q}GJ_{rp}+GQ_{r}GJ_{pq}\right). \tag{29}\] The equation (28) is the lattice analog of the Streda formula [18; 19] in the continuum which reads \[\sigma^{H}=\left(\frac{\partial n}{\partial B}\right)_{\mu}= \left(\frac{\partial M_{z}}{\partial\mu}\right)_{B}, \tag{30}\] where \(n,B,M_{z},\mu\) are density, magnetic field, magnetization, and chemical potential respectively. We prove (28) in the appendix B. The extra minus sign in (29) compared to (30) comes from the fact that the continuum limit of equation (9) is \[J_{i}(\mathbf{x})=-\epsilon_{ij}\partial_{j}M(\mathbf{x}), \tag{31}\] and the lattice definition of magnetization (9) is related to the continuum one as \(M=-M_{z}\). Now, the Hall conductance is in an appropriate form for the construction of descendants. The solution to the descendant equation \[dH^{(n)}=\partial H^{(n+1)} \tag{32}\] is \[H^{(n)}_{p_{0},\ldots,p_{n}}=\frac{1}{2}\sum_{\sigma\in S_{n+2}} (-1)^{\text{sgn}\,\sigma}\oint\frac{dz}{2\pi i}\sum_{j=0}^{n}(-1)^{n-j}A^{(j)}_ {p_{\sigma(0)},\ldots,p_{\sigma(n+2)}}, \tag{33}\] where \[A^{(j)}_{p_{0},\ldots,p_{n+2}}=\text{Tr}\Bigg{(}GdH_{p_{0}}GdH_{ p_{1}}\ldots GdH_{p_{j-1}}GQ_{p_{j}}GdH_{p_{j+1}}\ldots GdH_{p_{n}}GJ_{p_{n+1},p_{ n+2}}\Bigg{)}. \tag{34}\] The first sum is over permutations of \(n+2\) elements. We omitted the wedge product of forms (on the parameter space) from the last formula. Using results of [17], one can show that \(H^{(0)}_{pqr}\) decays to zero fast when either of \(|p-q|,|q-r|,|r-p|\) is large. We expect that similar results hold for other \(H^{(n)}\) and they are chains in the sense of Appendix A. In the following, we will assume this. The \(n\)-chain \(H^{(n)}\) has to be contracted with the only non-trivial element of the cohomology and integrated over \((D-2)\)-dimensional manifold without boundary \(M\) in the parameter space leading to a topological invariant of a family \[\int_{M}H^{(D-2)}(\alpha^{(D)}). \tag{35}\] A simple non-trivial example of such a co-chain is \[\alpha^{(D)}=\delta f_{1}\cup\delta f_{2}\cup\cdots\cup\delta f_{D}, \tag{36}\] where \(f_{i}=\theta(x^{i}(p)-a_{i})\) with \(x^{i}(p)\) being the \(i\)th coordinate of site \(p\). Due to the fast decay of correlation functions in the definition of \(H^{(D-2)}\) the resulting invariant will be localized around the point given by coordinates \((a_{1},\ldots,a_{D})\). Note, that different choices of the point \((a_{1},\ldots,a_{D})\) can be related to each other by the addition of exact one-form to \(\delta f_{1}\cup\cdots\cup\delta f_{D}\), and thus the invariant does not depend on the choice of it. Suppose the topological invariant of a family is non-zero. It forbids boundary conditions that continuously depend on parameters for the whole family and preserve the gap. 
Indeed, if the gap was preserved then the invariant would be the same on both sides of the boundary. However, it is zero on one side and non-zero on the other. For a general gapped system the invariants (35) are not expected to be integer quantized. For invertible phases, i.e. the ones that can be deformed to a trivially decoupled phase after stacking it with an appropriate system, one can show that (35) are quantized as \(\frac{1}{2\pi}\) times integer along the lines of [1; 2] or more rigorous [20]. ## V Discussion and applications In this section, we will discuss some applications and properties of the invariants. Most of the discussion can be done within the framework of lattice systems but we will focus on field theory interpretation of the invariants. Whenever a gapped system admits a long-distance effective description in terms of field theory it lacks local low-energy degrees of freedom while the high-energy ones can be integrated out. The effective field theory still can capture interesting effects via its dependence on the background and by considering more sophisticated backgrounds one gains more insights into the underlying system. It is common to consider curved manifold \(X\) with non-trivial metrics instead of flat space-time as well as turn on \(U(1)\) background gauge field \(A\) whenever one has local on-sight \(U(1)\) symmetry. When the underlying system depends on parameters, one can make them space-time depend. If the parameter variations are smooth they would correspond to background fields \(\phi:X\to M\) in the effective description. Here \(M\) is the space of possible values for parameters and \(\phi\) is a map that specifies their local values. Topologically protected quantities manifest themself in the form of topological terms in the effective action. The characteristic feature of the latter is that they are independent of the background metrics. In the following, we will discuss topological terms corresponding to the invariants of families in trivial phases. When non-trivial topological order is present one should include appropriate topological quantum field theories as well as their coupling to the background. We first review the effective action for the Thouless charge pump [2]. The 1d Thouless charge pump can be described by the following term in effective action \[S_{\rm top}(X,\phi,A)=\int_{X}A\wedge\phi^{*}\tau^{(1)}=\int_{X}\epsilon^{ \mu\nu}A_{\mu}\partial_{\nu}\phi^{i}\tau^{(1)}_{i}(\phi)d^{2}x. \tag{37}\] Here \(\epsilon\) is a totally antisymmetric tensor, \(x^{0},x^{1}\) are time and space coordinates, and \(\tau^{(1)}\) is 1-form on the parameter space \(M\). In the classic example of adiabatic pumping via flux insertion into a cylinder, \(M=S^{1}\) parameterizes the flux mod \(2\pi\) and \(\tau^{(1)}\) is properly normalized volume form on the circle. The relation between the Thouless pump and action (37) can be seen by computing the correction it introduces to the \(U(1)\) current \[j^{\mu}_{\rm top}(x^{0},x^{1})=\epsilon^{\mu\nu}\tau^{(1)}_{i}(\phi(x^{0},x^{1 }))\partial_{\nu}\phi^{i}(x^{0},x^{1}). \tag{38}\] If we let the parameter depend only on time, and compute the charge flowing through a section \(x^{1}=a\) during a single cycle, we find \[\Delta Q(a)=\int_{0}^{T}j^{1}_{\rm top}(x^{0},a)dx^{0}=-\int_{0}^{T}\tau^{(1) }_{i}(\phi(x^{0}))\partial_{0}\phi^{i}(x^{0})dx^{0}=-\int_{\phi(0)}^{\phi(T)} \tau^{(1)}_{i}(\phi)d\phi^{i}=-\oint_{S_{1}}\tau^{(1)},\] where we assumed that the parameters return to the initial values after a full cycle. 
The latter integral is the Thouless charge pump invariant. Instead of varying the parameter in the time one can vary it in space. Namely, consider \(\phi^{i}(x^{1})\) such that \(\phi^{i}(-\infty)\) is the same point in \(M\) as \(\phi^{i}(\infty)\). If \(\phi^{i}(x^{1})\) sweeps a non-contractable circle in the parameter space \(M\), this configuration can be interpreted as a soliton. The charge of this soliton can be found from (38) \[Q_{\rm s}=\int j^{0}_{\rm top}(x^{1})dx^{1}=\int\tau^{(1)}_{i}(\phi(x^{1})) \partial_{1}\phi^{i}(x^{1})dx^{1}=\oint_{S_{1}}\tau^{(1)}_{i}. \tag{39}\] In higher dimensions, the Thouless pump topological term is \[S_{\text{top}}(X,\phi,A)=\int_{X}A\wedge\phi^{*}\tau^{(D)}, \tag{40}\] where now \(X\) is \((D+1)\)-dimensional space-time and \(\tau^{(D)}\) is a \(D\)-form on the parameter space. In the same way as above, one can find the correction to the current and show that this term represents the higher descendant of the Thouless charge pump. Also, it gives charge to skyrmion in the same way it did to soliton. The relation between soliton/skyrmion charges and descendants of the Thouless pump can be seen directly on the lattice [2] without appeal to the effective description. We conclude the review of the Thouless pump by indicating the following relation with the \(\theta\)-terms. In 0d, there is only one topological action which is linear in vector potential \[S=q\int_{X}A. \tag{41}\] Gauge invariance forces the charge \(q\) to be quantized and it is a topological invariant as we discussed at the beginning of Section III. Are there any topological terms in 1d that a linear in \(A\)? There is a theta-term \[S=\frac{\theta}{2\pi}\int_{X}F. \tag{42}\] However, the coefficient \(\theta\) is no longer forced to be quantized. It is a continuous parameter and one can deform it to zero without closing the gap. So it does not lead to a topological invariant in the usual sense. However, we can consider a space-time dependent theta-term \[S=\frac{1}{2\pi}\int\theta F=\frac{1}{2\pi}\int\theta dA=\frac{1}{2\pi}\int A \wedge d\theta, \tag{43}\] which has the same form as (37). The theta term is periodic \(\theta\sim\theta+2\pi\) and thus naturally corresponds to a non-trivial loop in the parameter space. Contrary to the theta-term action (42) the coefficient in front of (43) is quantized and represents a topological invariant of a family. Let us now turn to the descendants of Hall conductivity. The relevant term in the effective action is \[\begin{split} S_{\text{top}}(X,\phi,A)&=\frac{1}{ 2}\int_{X}A\wedge dA\wedge\phi^{*}h^{(D-2)}\\ &=\frac{1}{2}\int_{X}\epsilon^{\mu_{1},\dots,\mu_{D+1}}A_{\mu_{1} }\partial_{\mu_{2}}A_{\mu_{3}}\partial_{\mu_{3}}\phi^{i_{3}}\dots\partial_{ \mu_{D+1}}\phi^{i_{D+1}}h_{i_{3},\dots,i_{D+1}},\end{split} \tag{44}\] where \(h^{(D-2)}\) is the \((D-2)\)-form on the parameter space. For clarity, we will focus on the simplest case \(D=3\). \[S_{\text{top}}(X,\phi,A)=\frac{1}{2}\int_{X}A\wedge dA\wedge\phi^{*}h^{(1)}= \frac{1}{2}\int_{X}\epsilon^{\mu\nu\rho\sigma}A_{\mu}\partial_{\nu}A_{\rho} \partial_{\sigma}\phi^{i}h^{(1)}_{i}. \tag{45}\] The correction to the current reads \[j^{\mu}_{\text{top}}=\epsilon^{\mu\nu\rho\sigma}\partial_{\nu}A_{\rho} \partial_{\sigma}\phi^{i}h^{(1)}_{i}. \tag{46}\] Heuristically, it can be related to the adiabatic pump of Hall conductivity in the following way. Consider a parameter that depends only on time and \(A\) corresponding to a constant magnetic field \(B_{1}\) along \(x^{1}\) axis. 
The 2d charge denisty flowing through a section \(x^{1}=a\) is \[\Delta n(a,B_{1})=\int_{0}^{T}dx^{0}\,j^{1}=\int_{0}^{T}dx^{0}\,B_{1}\partial_ {0}\phi^{i}h^{(1)}_{i}=B_{1}\oint h^{(1)}. \tag{47}\] Now, imagine that we have two boundaries at \(x^{1}=a-L\) and \(x^{1}=a+L\) then we have two 2d systems with \(x^{1}<a\) and \(x^{1}>a\) respectively. Formula (47) means that particle density of, say, the right subsystem will change by \(\Delta n(a,B_{1})\). Using Streda formula (30) we find that the Hall conductivities of it change by \(\oint h\) as expected. Consider spatially dependent parameter \(\phi(x^{3})\), which depends only on the third coordinate and represents a non-trivial loop in the parameter space as \(x^{3}\) changes from \(-\infty\) to \(\infty\). This represents a domain wall. Choose \(A\) to represent constant electric field \(E_{2}\) along \(x^{2}\). Correction to the current (46) will give \[\int dx^{3}j^{2}=E_{3}\oint h^{(1)}. \tag{48}\] This corresponds to the increase in 2d Hall conductance of the whole system by \(\oint h^{(1)}\) due to the presence of the domain wall. Descendants of the Berry curvature and the \(U(1)\) charge give \(D+2\) and \(D\) forms on the parameter space respectively. The degree of these forms is greater or equal then the dimension of space. The descendant of the Hall conductance on the other hand gives a \((D-2)\)-form and it can be used to detect gapless modes on defects. Let us illustrate this in the simplest case \(D=3\). Suppose we have a line defect, such that the Hamiltonian has the density \(H_{p}(\lambda(\phi))\) far away from the defect where \(\phi\) is the angular coordinate around the defect. Far away from the defect, the system looks like a soliton from the previous paragraph. If the relevant integral \(\oint_{S_{1}}H^{(1)}\) over \(\lambda(\phi)\) for \(\phi\) from \(0\) to \(2\pi\) is non-zero this soliton will have a nontrivial Hall conductance. The defect itself can be thought of as a boundary of this soliton. The non-zero Hall conductance of the soliton is known to be related to the level of Kac-Moody algebra at its boundary. Thus, we see that \(2\pi\oint_{S_{1}}H^{(1)}\) gives the level of the Kac-Moody algebra of the defect. Let us conclude with a couple of applications and specific examples. One can construct a translationally invariant free fermion system that has a non-trivial Hall conductivity pump \(H^{(1)}\). The 1-form in this case can be related to the integral of the second Chern class of Bloch-Berry connection over 3d Brillouin zone. The computations of the 1-form as well as an example of a system with a non-trivial second Chern class are analogous to the ones in [1; 2] (see also [21]). As an application of the invariant, one can consider a translationally invariant 3d system. There are 3 natural loops in the parameter space corresponding to shift by periods of the lattice in three directions. One can integrate one-form \(H^{(1)}\) over these 3 loops and it will result in 3 components of invariants \(\frac{1}{2\pi}\mathbf{G}\) of 3d quantum Hall state, where \(\mathbf{G}\) is a vector of the reciprocal lattice. It cannot change unless either gap closes or translational symmetry is lost. Interestingly, dislocation characterized by Burgers vector \(\mathbf{B}\) will host massless modes with level \(\mathbf{B}\cdot\mathbf{G}\) Kac-Moody algebra [22]. The descend equation in the presence of translational symmetry requires some care and we leave it to future work. 
There is an interesting question what happens in the case when there is a rotational symmetry on top of translational symmetry. In this case, dislocation can be split into two disclinations and it would be interesting to understand what happens to the massless modes. ## Appendix A Notation A large number of indices forces us to introduce an auxiliary formal definition in order to simplify the notation. See [13] for a more thorough discussion. We will call an antisymmetric function \(A_{p_{0},\ldots,p_{n}}\) of \(n+1\) lattice sites, which decays faster than any power of distance away from the diagonal \(p_{0}=p_{1}=\ldots,=p_{n}\), an \(n\)-chain. From an \(n\)-chain one can construct an \((n-1)\)-chain \[(\partial A)_{p_{1},\ldots,p_{n}}=\sum_{p_{0}\in\Lambda}A_{p_{0},p_{1},\ldots, p_{n}}, \tag{10}\] where fast decay guarantees that the sum is well-defined. The boundary operator \(\partial\) satisfies \(\partial^{2}=0\). For an antisymmetric function \(f(p_{0},\ldots,p_{n})\) (which do not necessarily decay away from the diagonal), we can define the contraction \[\langle A,f\rangle=\sum_{p_{0},\ldots,p_{n}\in\Lambda}A_{p_{0},\ldots,p_{n}}f(p_ {0},\ldots,p_{n}). \tag{10}\] In order for the sum to be well-defined one has to impose some conditions on \(f\). In this paper, we will for simplicity consider only functions \(f\) which are constant whenever all \(|p_{i}-p_{j}|\) is greater than some fixed distance for every \(i\neq j\). We will call such function \(n\)-cochain. Occasionally, we will write \(A(f)\) instead of \(\langle A,f\rangle\). One can define the co-boundary operator \(\delta\) as \[\delta f(p_{0},\ldots,p_{n+1})=\sum_{j=0}^{n}(-1)^{j}f(p_{0},\ldots\hat{p}_{j},\ldots p_{n+1}), \tag{11}\] where hat indicates the omitted variable. It is related to \(\partial\) as \[\langle A,\delta f\rangle=\langle\partial A,f\rangle. \tag{12}\] A cup product of \(n\)-cochain with \(m\)-cochain which gives a \((n+m)\)-cochain defined as \[f\cup g(p_{0},\ldots,p_{n+m})=\frac{1}{(n+m+1)!}\sum_{\sigma\in S_{n+m+1}}(-1 )^{\text{sgn}\,\sigma}f(p_{\sigma(0)},\ldots p_{\sigma(n)})g(p_{\sigma(n)}, \ldots p_{\sigma(n+m+1)}), \tag{13}\] where the sum is over the permutation group. It satisfies \[f\cup g =(-1)^{|f||g|}g\cup f, \tag{14}\] \[\delta(f\cup g) =f\cup\delta g+(-1)^{|f|}f\cup\delta g, \tag{15}\] where \(|f|\) is the degree of the cochain. We define a cap product that gives an \(n\)-chain from \((n+m)\)-chain and \(m\)-cochain \[A\cap f(p_{0},\ldots,p_{n}) =(-1)^{mn}\frac{n!}{(n+m+1)!} \tag{16}\] \[\times\sum_{p_{n+1},\ldots,p_{n+m}}A(p_{0},\ldots,p_{n+m})\sum_{i =0}^{n}f(p_{i},p_{n+1},p_{n+2},\ldots,p_{n+m}).\] It satisfies \[(A\cap f)(g) =A(f\cup g), \tag{17}\] \[\partial(A\cap f) =A(f),\qquad\text{when }|f|=|A|,\] (18) \[\partial(A\cap f) =(-1)^{|f|}\Big{(}\partial A\cap f-A\cap\delta f\Big{)}. \tag{19}\] Note that the cup product is defined even if \(f\) is not a cochain, i.e. if it is not a constant when all its arguments are separated. ## Appendix B Proof of Streda formula In this appendix, we show that the Hall conductivity \[\sigma^{H}(f,g)=-i\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\left(GJ(\delta f)G^{2 }J(\delta g)\right). \tag{10}\] coincides with a minus derivative of the charge magnetization with respect to the chemical potential \[H^{(0)}(\delta f\cup\delta g), \tag{11}\] where 2-chain \(H^{(0)}\) is given by \[H^{(0)}=\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\left(GQ_{p} GJ_{qr}+GQ_{q}GJ_{rp}+GQ_{r}GJ_{pq}\right). 
\tag{12}\] The idea behind the proof is to show that 1-chain \[\sigma^{H}(\eta)_{pq}=-i\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr} \left(GJ(\eta)G^{2}J_{pq}\right) \tag{13}\] and 1-chain \[(H^{(0)}\cap\eta)_{pq} \tag{14}\] differ by an exact 1-chain if \(\delta\eta=0\). The cap product of 2-chain and 1-cochain is given by \[(H^{(0)}\cap\eta)_{pq}=-\frac{1}{6}\sum_{r}H^{(0)}_{pqr}(\eta(p,r)+\eta(q,r)). \tag{15}\] Expanding this expression we find \[\begin{split}(H^{(0)}\cap\eta)_{pq}=-\frac{1}{6}\oint_{z=E_{0}} \frac{dz}{2\pi i}\text{Tr}\Big{(}\sum_{r}GJ_{pq}GQ_{r}\eta(q,r)+\sum_{r}GJ_{pq} GQ_{r}\eta(p,r)\\ +2G(J\cap\eta)_{q}GQ_{p}+\sum_{r}GJ_{qr}GQ_{p}\eta(p,r)-2G(J\cap \eta)_{p}GQ_{q}+\sum_{r}GJ_{rp}GQ_{q}\eta(q,r)\Big{)},\end{split} \tag{16}\] where the cap product of 1-chain and 1-cochain is \[(J\cap\eta)_{p}=\frac{1}{2}\sum_{r}J_{pr}\eta(p,r) \tag{17}\] Two terms in this sum can also be rewritten in this form \[\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}GJ_{rp}GQ_{q}\eta(q,r) \Big{)}=\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}-2G(J\cap\eta)_{p}GQ_{q} \Big{)}\] (B9) where we have used cocycle condition \(\delta\eta=\eta(p,q)+\eta(q,r)+\eta(r,p)=0\) and ultralocallity of the charge \([Q_{p},Q_{q}]=0\). Similarly \[\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}GJ_{qr}GQ _{p}\eta(p,r)\Big{)}=\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}GJ _{qr}GQ_{p}(-\eta(r,q)-\eta(q,p))\Big{)}\] \[=\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}GJ_{qr} GQ_{p}(-\eta(r,q))\Big{)}=\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}2G(J \cap\eta)_{q}GQ_{p}\Big{)}\] (B10) The remaining two terms \[\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}GJ_{pq}GQ_{r}\eta(q,r) +GJ_{pq}GQ_{r}\eta(p,r)\Big{)}\] (B11) can be rewritten in the same form after the addition of an exact term \[\partial A =\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}-GJ_{pq} GQ_{r}(\eta(q,r)+\eta(p,r))-GJ_{qr}GQ_{p}(\eta(r,p)+\eta(q,p))\] \[-GJ_{rp}GQ_{q}(\eta(p,q)+\eta(r,q))\Big{)}=\sum_{r}\oint_{z=E_{0}} \frac{dz}{2\pi i}\text{Tr}\Big{(}-GJ_{pq}GQ_{r}\eta(q,r)-GJ_{pq}GQ_{r}\eta(p,r)\] \[-GJ_{qr}GQ_{p}\eta(r,p)-GJ_{rp}GQ_{q}\eta(r,q)\Big{)}=\oint_{z=E_ {0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}-\sum_{r}GJ_{pq}GQ_{r}\eta(q,r)\] \[-\sum_{r}GJ_{pq}GQ_{r}\eta(p,r)+2G(J\cap\eta)_{q}GQ_{p}-2G(J\cap \eta)_{p}GQ_{q}\Big{)}\] where 2-chain A is defined in the first equality. Summing all this together we find that up to an exact term \[(H^{(0)}\cap\eta)_{pq}=\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}G(J \cap\eta)_{p}GQ_{q}-G(J\cap\eta)_{q}GQ_{p}\Big{)}+\text{exact}.\] (B12) We see that the difference of the Hall conductance and \(H^{(0)}\) is \[\sigma^{H}(\eta)_{pq}-(H^{(0)}\cap\eta)_{pq}\] \[=-i\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\left(GJ(\eta)G^{2}J_ {pq}\right)-\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}G(J\cap\eta)_{p}GQ _{q}-G(J\cap\eta)_{q}GQ_{p}\Big{)}\] \[=-i\sum_{r}\oint_{z=E_{0}}\frac{dz}{2\pi i}\text{Tr}\Big{(}G(J \cap\eta)_{r}G^{2}J_{pq}+G(J\cap\eta)_{p}G^{2}J_{qr}+G(J\cap\eta)_{q}G^{2}J_{ rp}\Big{)}\] is an exact 1-chain.
2302.00035
On Auslander's depth formula
We show that if Auslander's depth formula holds for non-zero Tor-independent modules over Cohen-Macaulay local rings of dimension 1, then it holds for such modules over any Cohen-Macaulay local ring. More generally, we show that the depth formula for non-zero Tor-independent modules which have finite Cohen-Macaulay dimension over depth 1 local rings implies the depth formula for such modules over any positive depth local ring.
Shashi Ranjan Sinha, Amit Tripathi
2023-01-31T19:07:54Z
http://arxiv.org/abs/2302.00035v1
# On Auslander's depth formula ###### Abstract We show that if Auslander's depth formula holds for non-zero Tor-independent modules over Cohen-Macaulay local rings of dimension 1, then it holds for such modules over any Cohen-Macaulay local ring. More generally, we show that the depth formula for non-zero Tor-independent modules which have finite Cohen-Macaulay dimension over depth 1 local rings implies the depth formula for such modules over any positive depth local ring. Key words and phrases: Auslander, depth formula, tensor product. 2010 Mathematics Subject Classification: 13D07, 13C14, 13C15. All rings are assumed to be commutative Noetherian local and all modules are assumed to be finitely generated. We say that a pair \((M,N)\) of \(R\)-modules satisfies Auslander's depth formula (or simply _the depth formula_) if \[depth\,M+depth\,N=depth\,R+depth\,M\otimes_{R}N.\] We say that \(M\) and \(N\) are Tor-independent over \(R\) if \(Tor_{i}^{R}(M,N)=0\) for all \(i\geqslant 1\). Auslander [2] proved the depth formula under the assumption that the \(R\)-modules \(M\) and \(N\) are Tor-independent and the projective dimension of \(M\) is finite. This formula was shown to hold for Tor-independent modules over complete intersection local rings by Huneke and Wiegand [12]. Their result was generalized to Tor-independent modules over arbitrary local rings, under the additional assumption that one of the modules has finite complete intersection dimension, by Araya and Yoshino [1], and independently by Iyengar [13]. Gheibi, Jorgensen, and Takahashi [11] have proved the depth formula for Tor-independent modules when one of the modules has a finite quasi-projective dimension. More generally, one can ask whether the pair \((M,N)\) satisfies the depth formula whenever \(M\) and \(N\) are \(R\)-modules satisfying a Tor vanishing condition (absolute, relative or Tate Tor, see Avramov and Martsinkovsky [6]) such that one or both of the modules have a finite homological dimension. In this direction, various versions have been established - see Bergh and Jorgensen [7], Celikbas, Liang, and Sadeghi [8], and Christensen and Jorgensen [9], for instance. In this note, we restrict to the "classical" version of the depth formula where the Tor vanishing condition is for absolute Tor. In this form, it is unknown whether the depth formula is true, even for Gorenstein local rings. The most general result seems to be [9], which proves the depth formula over \(AB\)-rings. In [1, Example 2.9] Araya and Yoshino construct \(R\)-modules \(M\) and \(N\), where \(R\) is a formal power series ring in 5 variables over a field \(k\), such that \(pd_{R}(N)=1\) and \(Tor_{i}(M,N)=0\) for \(i\geqslant 2\), but \(depth\,M+depth\,N\neq depth\,R+depth\,M\otimes_{R}N\). This shows that the Tor-independence condition cannot be relaxed. We show the following. **Theorem 1**.: _If the depth formula holds for non-zero Tor-independent modules over Cohen-Macaulay local rings of dimension 1, then it holds for such modules over any Cohen-Macaulay local ring._ This result follows as a corollary to the more general result stated below. **Theorem 2**.: _If the depth formula holds for non-zero Tor-independent modules with finite Cohen-Macaulay dimension over local rings with depth \(1\), then it holds for such modules over any local ring with positive depth._ _Remark 1_.: The proof of Theorem 1 will work if we assume the underlying ring to be a Gorenstein ring.
The proof of Theorem 2 will work if we replace "Cohen-Macaulay dimension" with other homological dimensions like Gorenstein dimension \(G\text{-}\dim_{R}(M)\), upper Gorenstein dimension \(G^{*}\text{-}\dim_{R}(M)\), lower complete intersection dimension \(CI_{*}\text{-}\dim_{R}(M)\), or any other homological dimension which satisfies some standard properties (see Theorem 3). We have demonstrated the main idea using the Cohen-Macaulay dimension as it is the most general homological dimension (see Theorem 4) in the list given above. ### Acknowledgements The authors thank Jishnu Biswas and Suresh Nayak for helpful comments and suggestions. The first author was partially supported by a CSIR senior research fellowship through the grant no. 09/1001(0097)/2021-EMR-I. The second author was partially supported by a Science and Engineering Research Board (SERB) grant MTR/2020/000164. ## 1. Preliminaries Throughout, \(R\) denotes a local ring \((R,m,k)\) and all \(R\)-modules are finitely generated. For any \(R\)-module \(W\) and an element \(x\in m\), we define \(\overline{W}_{x}:=W\otimes_{R}R/xR\). When \(x\) is clear from the context, we will write simply \(\overline{W}\). We will assume that \(M\) and \(N\) are \(R\)-modules with the following minimal free resolutions \[\cdots\xrightarrow{f_{3}}P_{2}\xrightarrow{f_{2}}P_{1}\xrightarrow{f_{1}}P_{0}\xrightarrow{f_{0}}M\to 0 \tag{1}\] \[\cdots\xrightarrow{g_{3}}Q_{2}\xrightarrow{g_{2}}Q_{1}\xrightarrow{g_{1}}Q_{0}\xrightarrow{g_{0}}N\to 0 \tag{2}\] Let \(\Omega_{i}(M)\) be the \(i\)'th syzygy of \(M\). In particular, \(\Omega_{1}(M)=Ker(P_{0}\to M)\). Let \(\Omega_{i}(N)\) be the \(i\)'th syzygy of \(N\). For any \(R\)-module \(M\), there are several possible homological dimensions such as its projective dimension \(pd_{R}(M)\), Gorenstein dimension \(G\text{-}\dim_{R}(M)\) [3], complete intersection dimension \(CI\text{-}\dim_{R}(M)\) [5], lower complete intersection dimension \(CI_{*}\text{-}\dim_{R}(M)\) [10], upper Gorenstein dimension \(G^{*}\text{-}\dim_{R}(M)\) [15] or Cohen-Macaulay dimension \(CM\text{-}\dim_{R}(M)\) [10]. These homological dimensions share some common properties which we summarize below. **Theorem 3**.: _[_4_, Theorem 8.7]_ _Let \(M\neq 0\) be an \(R\)-module. Let \(H\text{-}\dim_{R}(M)\) be a homological dimension of \(M\) where \(H\text{-}\dim\) can be \(G\text{-}\dim\), \(CM\text{-}\dim\), \(CI\text{-}\dim\), \(CI_{*}\text{-}\dim\), or \(G^{*}\text{-}\dim\)._ (a) _If_ \(H\text{-}\dim_{R}(M)<\infty\)_, then_ \(depth\,M+H\text{-}\dim_{R}(M)=depth\,R\)_. In particular,_ \(H\text{-}\dim_{R}(M)\leq depth\,R\)_._ (b) \(H\text{-}\dim_{R}(\Omega_{n}(M))=\max\{H\text{-}\dim_{R}(M)-n,0\}\)_._ (c) _If_ \(x\in m\) _is an element regular on_ \(R\) _and_ \(M\)_, then_ (i) \(H\text{-}\dim_{R}(\overline{M})=H\text{-}\dim_{R}(M)+1\)_;_ (ii) _if_ \(H\text{-}\dim_{R}(M)<\infty\)_, then_ \(H\text{-}\dim_{\overline{R}}(\overline{M})=H\text{-}\dim_{R}(M)\)_._ Proof.: The assertion for \(H=G\) is from [3] (and [14] for a corrected proof of (a)). The proof for \(H=CI,G^{*}\), and \(CM/CI_{*}\) can be found in [5], [15], and [10], respectively.
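As a quick illustration of the depth formula and of Theorem 3 (a) (a standard example included for orientation, not taken from the results below): let \(x,y\in m\) be an \(R\)-regular sequence and set \(M=R/xR\), \(N=R/yR\). Tensoring \(0\to R\xrightarrow{x}R\to M\to 0\) with \(N\) gives \(Tor_{1}^{R}(M,N)=\{n\in N\,:\,xn=0\}=0\), because regular sequences over a Noetherian local ring are permutable and hence \(x\) is a nonzerodivisor on \(N\); the higher Tor modules vanish since \(pd_{R}(M)=1\). With \(M\otimes_{R}N=R/(x,y)\) we then have \[depth\,M+depth\,N=(depth\,R-1)+(depth\,R-1)=depth\,R+depth\,R/(x,y)=depth\,R+depth\,M\otimes_{R}N,\] and, in line with Theorem 3 (a), \(depth\,M=depth\,R-1\) corresponds to \(H\text{-}\dim_{R}(M)=1\) for any of the dimensions above that is finite for \(M\).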
_Remark 2_.: If \(H\text{-}\text{dim}_{R}(M)\), \(H\text{-}\text{dim}_{R}(N)\) and \(H\text{-}\text{dim}_{R}(M\otimes_{R}N)\) are all finite, then by Theorem 3 (a), we can also restate the depth formula as \[H\text{-}\text{dim}_{R}(M)+H\text{-}\text{dim}_{R}(N)=H\text{-}\text{dim}_{R}(M\otimes_{R}N).\] These homological dimensions satisfy the following inequalities. **Theorem 4**.: _[_4_, Theorem 8.8]_ _With the notation above, we have_ 1. \(CM\text{-}\text{dim}_{R}(M)\leqslant G\text{-}\text{dim}_{R}(M)\leqslant G^{*}\text{-}\text{dim}_{R}(M)\leqslant CI\text{-}\text{dim}_{R}(M)\)_._ 2. \(CM\text{-}\text{dim}_{R}(M)\leqslant G\text{-}\text{dim}_{R}(M)\leqslant CI_{*}\text{-}\text{dim}_{R}(M)\leqslant CI\text{-}\text{dim}_{R}(M)\)_._ _where finiteness at any point implies that all the inequalities to its left are equalities._ Proof.: These inequalities follow from the definition of the respective dimensions, except for the lower bound for \(G^{*}\text{-}\text{dim}_{R}(M)\) which is from [15], and the upper bound for \(CI\text{-}\text{dim}_{R}(M)\) which is from [5]. ### Cohen-Macaulay dimension For the sake of completeness, we discuss the definition of Cohen-Macaulay dimension and refer the reader to Gerko [10] for details. Recall that the _grade_ of an \(R\)-module \(M\), denoted by \(grade(M)\), is defined to be the smallest integer \(i\) such that \(Ext_{R}^{i}(M,R)\neq 0\). It is easy to see that \(grade(M)\leqslant G\text{-}\text{dim}_{R}(M)\). If \(grade(M)=G\text{-}\text{dim}_{R}(M)\), then \(M\) is said to be \(G\text{-}\text{\it perfect}\). We say an ideal \(I\) is \(G\text{-}\text{\it perfect}\) over \(R\) if \(R/I\) is a \(G\text{-}\text{\it perfect}\) \(R\)-module. **Definition 1**.: A \(G\text{-}\text{quasi-deformation}\) of a ring \(R\) is a diagram of local homomorphisms \(R\to R^{\prime}\leftarrow Q\), where \(R\to R^{\prime}\) is a flat extension and \(Q\to R^{\prime}\) is a \(G\text{-deformation}\), that is, a surjective homomorphism whose kernel is a \(G\text{-}\text{\it perfect}\) ideal. **Definition 2**.: The Cohen-Macaulay dimension of an \(R\)-module \(M\) is defined as \[CM\text{-}\text{dim}_{R}(M)=\text{inf}\{G\text{-}\text{dim}_{Q}(M\otimes_{R}R^{\prime})-G\text{-}\text{dim}_{Q}(R^{\prime}):R\to R^{\prime}\leftarrow Q\text{ is a }G\text{-quasi-deformation}\}.\] Cohen-Macaulay local rings are characterized by the class of modules with finite Cohen-Macaulay dimension. This is a formal consequence of the more specific assertion below. **Theorem 5**.: _[_10_, Theorem 3.9]_ _For any local ring \(R\), the following statements are equivalent:_ 1. \(R\) _is Cohen-Macaulay._ 2. \(CM\text{-}\text{dim}_{R}(M)<\infty\) _for all_ \(R\)_-modules_ \(M\)_._ 3. \(CM\text{-}\text{dim}_{R}(k)<\infty\)_._ ## 2. Proof of the main result Consider the following statements: **P:**: Let \(R\) be a local ring of depth \(1\). If \((M,N)\) is any pair of non-zero Tor-independent \(R\)-modules with finite CM dimension, then \(depth\,M+depth\,N=depth\,R+depth\,M\otimes_{R}N\), i.e. the pair \((M,N)\) satisfies the depth formula. **Q:**: Let \(R\) be a local ring with \(depth\geqslant 1\). If \((M,N)\) is any pair of non-zero Tor-independent \(R\)-modules with finite CM dimension, then \(depth\,M+depth\,N=depth\,R+depth\,M\otimes_{R}N\), i.e. the pair \((M,N)\) satisfies the depth formula. To prove Theorem 2 it suffices to show that \(\textbf{P}\implies\textbf{Q}\).
For simplicity, we will divide **P** into 3 simpler statements about local rings of depth \(1\): * **P0:**: There exists no pair \((M,N)\) of Tor-independent \(R\)-modules with finite CM dimensions such that \(depth\,M=depth\,N=0\). * **P1:** If \((M,N)\) is a pair of Tor-independent \(R\)-modules with finite CM dimension \(0\), then \(depth\,M\otimes_{R}N=1\). * **P2:** If \((M,N)\) is a pair of Tor-independent \(R\)-modules where \(CM\)-\(dim(M)=0\), and \(CM\)-\(dim(N)=1\), then \(depth\,M\otimes_{R}N=0\). A simple argument (see Remark 3) shows that \(\textbf{P0}\implies\textbf{P2}\). Thus, we are only assuming **P0** and **P1**. The proof of \(\textbf{P}\implies\textbf{Q}\) is inductive and as an intermediate step, we will prove the following two special cases. * \(\textbf{P1}\implies\textbf{Q1}\) where **Q1** is the statement: Let \(R\) be a local ring with positive depth and \((M,N)\) be a pair of Tor-independent \(R\)-modules with \(CM\) dimensions \(0\). Then \[depth\,R=depth\,M\otimes_{R}N.\] * **P1 + P2** where **Q2** is the statement: Let \(R\) be a local ring with positive depth and \((M,N)\) be a pair of non-zero Tor-independent \(R\)-modules with \(CM\)-\(dim(M)=0\), and \(CM\)-\(dim(N)<\infty\). Then \[depth\,N=depth\,M\otimes_{R}N.\] **Lemma 6**.: _Let \(M\) be an \(R\)-module with positive depth. Let \(N\) be any \(R\)-module, such that \(Tor^{R}_{i}(M,N)=0\). Then \(depth\,M\otimes N>0\) if and only if \(Tor^{R}_{1}(\overline{M},N)=0\) for some \(x\) which is an \(M\)-regular element._ Proof.: If \(depth\,M\otimes N>0\), we can find \(x\in m\) regular on \(M\) and \(M\otimes_{R}N\). Tensoring the sequence \(0\to M\xrightarrow{x}M\to\overline{M}\to 0\) with \(N\) gives \[0\to Tor^{R}_{1}(\overline{M},N)\to M\otimes N\xrightarrow{x}M\otimes N.\] This proves one side. Conversely, if \(Tor^{R}_{1}(\overline{M},N)=0\) then the above sequence shows that \(x\) is \(M\otimes_{R}N\)-regular. _Remark 3_.: We show that \(\textbf{P0}\implies\textbf{P2}\). Let \((M,N)\) be a pair of Tor-independent \(R\)-modules over a depth \(1\) local ring \(R\), such that \(CM\)-\(dim(M)=0\) and \(CM\)-\(dim(N)=1\). If \(depth\,M\otimes N>0\), then by Lemma 6, there is some \(x\) which is \(M\)-regular such that \(Tor^{R}_{1}(\overline{M},N)=0\). Thus \((\overline{M},N)\) are Tor-independent \(R\)-modules of depth \(0\) and have finite CM dimension (by Theorem 3), contradicting **P0**. **Lemma 7**.: _Let \((R,m,k)\) be a local ring with positive depth, and let \(M\) and \(N\) be non-zero \(R\)-modules._ 1. _If there exists an element_ \(x\in m\) _which is regular on_ \(R\) _and_ \(N\)_, then we have isomorphisms_ \[Tor^{R}_{p}(\overline{M},\overline{N})\cong Tor^{R}_{p}(\overline{M},N),\ \ p\geqslant 1.\] 2. _If there exists an element_ \(x\in m\) _which is regular on_ \(R\)_, then there exists a long exact sequence_ \[\cdots\to Tor^{R}_{p+1}(N,\overline{M})\to Tor^{\overline{R}}_{p+1 }(\overline{N},\overline{M})\to Tor^{\overline{R}}_{p-1}(Tor^{R}_{1}(N, \overline{R}),\overline{M})\] \[\to Tor^{R}_{p}(N,\overline{M})\to Tor^{\overline{R}}_{p}(\overline{N },\overline{M})\to\cdots.\] Proof.: There is a spectral sequence [16, Theorem 5.6.6] \[Tor^{\overline{R}}_{p}(Tor^{R}_{q}(N,\overline{R}),\overline{M})\implies Tor^{R}_{p+q}(N, \overline{M}).\] 1. It follows by our assumptions that \(\mathsf{Tor}_{q}^{R}(N,\overline{R})=0\) for \(q\geqslant 1\). Therefore the spectral sequence collapses at \(r=2\) to a single row and we get the claimed isomorphisms. 2. 
It follows by our assumptions that \(\mathsf{Tor}_{q}^{R}(N,\overline{R})=0\) for \(q\geqslant 2\). Therefore the spectral sequence collapses at \(r=2\) to the rows \(q=0,1\) and we get the claimed long exact sequence. **Corollary 8**.: _Let \((R,m,k)\) be a local ring with positive depth and let \(M\) and \(N\) be non-zero Tor-independent \(R\)-modules._ 1. _If there exists an element_ \(x\in m\) _which is regular on_ \(R,M\) _and_ \(N,\) _then_ \[\mathsf{Tor}_{p}^{\overline{R}}(\overline{M},\overline{N}) =0,\ p\geqslant 2,\] \[\mathsf{Tor}_{1}^{\overline{R}}(\overline{M},\overline{N}) \cong\mathsf{Tor}_{1}^{R}(\overline{M},N).\] 2. _If there exists an element_ \(x\in m\) _which is regular on_ \(R\) _and_ \(M,\) _then there is an exact sequence_ \[0\to\mathsf{Tor}_{2}^{\overline{R}}(\overline{N},\overline{M}) \to\mathsf{Tor}_{1}^{R}(N,\overline{R})\otimes\overline{M}\to\mathsf{Tor}_{1}^ {R}(N,\overline{M})\to Tor_{1}^{\overline{R}}(\overline{N},\overline{M})\to 0\] _and isomorphisms_ \[\mathsf{Tor}_{p}^{\overline{R}}(\overline{N},\overline{M}) \cong\mathsf{Tor}_{p-2}^{\overline{R}}(\mathsf{Tor}_{1}^{R}(N,\overline{R}), \overline{M}),\ \ p\geqslant 3.\] Proof.: Both claims follow from the vanishing of \(\mathsf{Tor}_{p}^{R}(\overline{M},N)\) for \(p\geqslant 2\). **Lemma 9**.: _Let \(M\) and \(N\) be Tor-independent \(R\)-modules and let \(x\in m\) be an element regular on \(R,M,N\) and \(M\otimes_{R}N\). Then \(\overline{M}\) and \(\overline{N}\) are Tor-independent over \(\overline{R}\)._ Proof.: By Lemma 6, and tensoring \(0\to M\xrightarrow{x}M\to\overline{M}\to 0\) with \(N\), we get that \(\overline{M}\) and \(N\) are Tor-independent over \(R\). Applying Corollary 8 completes the proof. **Lemma 10**.: _If there exists an element \(x\in m\) which is regular on \(R,M,N\) and \(M\otimes_{R}N\). Then the following are equivalent_ 1. \(M\) _and_ \(N\) _are Tor-independent_ \(R\)_-modules with_ \(\mathsf{depth}M=e\)_,_ \(\mathsf{depth}N=f\) _and_ \(\mathsf{depth}M\otimes_{R}N=g\)_._ 2. \(\overline{M}\) _and_ \(\overline{N}\) _are Tor-independent_ \(\overline{R}\)_-modules with_ \(\mathsf{depth}\overline{M}=e-1\)_,_ \(\mathsf{depth}\overline{N}=f-1\)_, and_ \(\mathsf{depth}\overline{M}\otimes_{\overline{R}}\overline{N}=g-1\)_._ Proof.: \(1\implies 2\) is by Lemma 9. For the converse implication, the claim about depth is clear. By Lemma 7, \(\mathsf{Tor}_{p}^{R}(\overline{M},N)=0\) for all \(p\geqslant 1\). Thus by Nakayama lemma, \(M\) and \(N\) are Tor-independent. For notational convenience, we have used the subscript \(d\) (for instance \(M_{d}\) or \(N_{d}\)) to denote modules over a ring \(R_{d}\) of depth \(d\), in the proof of Lemma 11 and the proof of \(\mathbf{P1}\implies\mathbf{Q1}\). **Lemma 11**.: _Let \(R_{d}\) be a local ring with depth \(d\geqslant 1\). Let \((M_{d},N_{d})\) be a pair of Tor-independent \(R_{d}\)-modules with \(\mathsf{CM}\)-\(\mathsf{dim}(M_{d})=CM\)-\(\mathsf{dim}(N_{d})=0\). Then_ \[\mathbf{P1}\implies\mathsf{depth}M_{d}\otimes_{R_{d}}N_{d}>0.\] Proof.: By \(\mathbf{P1}\), we may assume \(d\geqslant 2\). Let \(\Omega_{i}(M_{d})\) be the i'th syzygy of \(M_{d}\) defined by a minimal free resolution \[\cdots\to P_{i,d}\to\cdots\to P_{1,d}\to P_{0,d}\to M_{d}\to 0.\] It is clear that \(depth\,\Omega_{i}(M_{d})\otimes_{R_{d}}N_{d}\geqslant i\) for \(i\in\{1,2,\cdots,d\}\). 
We work with the pair \((\Omega_{d-1}(M_{d}),N_{d})\) of Tor-independent \(R_{d}\)-modules with (see Theorem 3) \[CM\text{-}\dim(\Omega_{d-1}(M_{d}))=CM\text{-}\dim(N_{d})=0\] and \(depth\,\Omega_{d-1}(M_{d})\otimes_{R_{d}}N_{d}\geqslant d-1\geqslant 1\). Let \(x_{d}\in m\) be a nonzero divisor on the finite collection of \(R_{d}\) modules \(\{R_{d},M_{d},N_{d},\Omega_{i}(M_{d}),\{\Omega_{i}(M_{d})\otimes_{R_{d}}N_{d}\} _{i\in\{1,2,\cdots,d-1\}}\}\). For any \(R\)-module \(T_{d}\), define \(T_{d-1}:=T_{d}/x_{d}T_{d}\). Then by Theorem 3, the \(R_{d-1}\)-modules \(M_{d-1},N_{d-1}\), and \(\{\Omega_{i}(M_{d-1})\}_{i\in\{1,2,\cdots,d-1\}}\) have Cohen-Macaulay dimension zero. By Lemma 10, \[depth\,\Omega_{d-1}(M_{d-1})\otimes_{R_{d-1}}N_{d-1}\geqslant d-2.\] Continuing this way, by repeatedly applying Lemma 10 on successive pairs \((\Omega_{d-1}(M_{i}),N_{i})\) for \(i\in\{d-1,d-2,\cdots,1\}\), we conclude that the pair \((\Omega_{d-1}(M_{1}),N_{1})\) of \(R_{1}\)-modules is Tor-independent with Cohen-Macaulay dimension zero, where \(depth\,R_{1}=1\). By **P1**, \(depth\,\Omega_{d-1}(M_{1})\otimes_{R_{1}}N_{1}=1\). Going upwards, by applying the converse implication of Lemma 10 on the pair \((\Omega_{d-1}(M_{i}),N_{i})\), we conclude that \(depth\,\Omega_{d-1}(M_{d})\otimes_{R_{d}}N_{d}=d\). The exact sequence \[0\to\Omega_{d-1}(M_{d})\otimes_{R_{d}}N_{d}\to P_{d-1,d}\otimes_{R_{d}}N_{d} \to\cdots\to P_{0,d}\otimes_{R_{d}}N_{d}\to M_{d}\otimes_{R_{d}}N_{d}\to 0,\] shows that \(depth\,M_{d}\otimes_{R_{d}}N_{d}>0\). Proof of **P1** \(\Longrightarrow\)**Q1**. Without loss of generality, we may assume that \(d=depth\,R_{d}\geqslant 2\). We apply Lemma 11 to conclude that \(depth\,M_{d}\otimes_{R_{d}}N_{d}\) is positive. Thus by Lemma 10, \(M_{d-1},N_{d-1}\) are Tor-independent \(R_{d-1}\)-modules with zero Cohen-Macaulay dimension. Applying Lemma 11 on the pair of \(R_{d-1}\)-modules \((M_{d-1},N_{d-1})\), we get that \(depth\,M_{d-1}\otimes_{R_{d-1}}N_{d-1}\) is positive. Continuing in this way, we conclude that \((M_{1},N_{1})\) is a pair of Tor-independent \(R_{1}\)-modules with zero Cohen-Macaulay dimension and with \(depth\,M_{1}\otimes_{R_{1}}N_{1}=1\), where \(depth\,R_{1}=1\). Applying the converse implication of Lemma 10, we work our way upwards, to conclude that \(depth\,M_{d}\otimes_{R_{d}}N_{d}=d\). Proof of **P1** + **P2** \(\Longrightarrow\)**Q2**. By assumption \(CM\text{-}\dim(M)=0\). We work by decreasing induction on the depth of \(N\). Since \(CM\text{-}\dim(N)<\infty\) therefore, by Theorem 3, \(depth\,N\leqslant depth\,R\). If \(depth\,N=depth\,R\), then this holds by **Q1** proved above, as then \(M\) and \(N\) will both be modules with zero Cohen-Macaulay dimension. Assume that \(depth\,N<depth\,R\). Let \(\Omega_{1}(N)\) be the first syzygy of \(N\) and \(0\to\Omega_{1}(N)\to Q_{0}\to N\to 0\) be the associated sequence where \(Q_{0}\) is a free \(R\)-module. Tensoring with \(M\), we get \[0\to M\otimes\Omega_{1}(N)\to M\otimes Q_{0}\to M\otimes N\to 0.\] Since \(M\) and \(\Omega_{1}(N)\) are Tor-independent and \(depth\,\Omega_{1}(N)=depth\,N+1\), by inductive hypothesis, the pair \((M,\Omega_{1}(N))\) satisfies the depth formula. There are \(2\) possibilities 1. \(depth\,M\otimes N<depth\,M\otimes Q_{0}=depth\,R=d\). Then we must have \(depth\,M\otimes N=depth\,M\otimes\Omega_{1}(N)-1\) and we are done. 2. \(depth\,M\otimes N\geqslantdepth\,M\otimes Q_{0}=d\). Then, we must have \(depth\,M\otimes\Omega_{1}(N)\geqslant d\). 
Therefore, by the inductive hypothesis, \[d> depth\,N=depth\,\Omega_{1}(N)-1\] \[= depth\,M\otimes\Omega_{1}(N)+depth\,R-depth\,M-1\geqslant d-1.\] If \(d\geqslant 2\), by repeatedly applying Lemma 10, we may assume \(depth\,M=depth\,M\otimes N=depth\,R=1\) and \(depth\,N=0\). This contradicts the assumption **P2**. **Lemma 12**.: _Let \(M\) and \(N\) be \(R\)-modules such that \(\operatorname{Tor}_{1}^{R}(M,N)=0\). Then one of the following two holds_ 1. \(0ptM\leqslant depthM\otimes N\)_._ 2. \(0ptM\otimes\Omega_{1}(N)=depthM\otimes N+1\)_._ Proof.: Assume that \(depthM>depthM\otimes N\). Tensoring the long exact sequence (2) with \(M\), and the Tor vanishing gives the short exact sequence \(0\to M\otimes\Omega_{1}(N)\to M\otimes Q_{0}\to M\otimes N\to 0\). By depth lemma, \(depthM\otimes\Omega_{1}(N)=depthM\otimes N+1\) which completes the proof. **Theorem 13**.: _Let \(R\) be a local ring of positive depth. Let \(M\) and \(N\) be Tor-independent \(R\)-modules such that \(CM\)-\(dim_{R}M\) and \(CM\)-\(dim_{R}N\) are finite. Assume that the statement \(P\) holds. Then_ \[depthR+depthM\otimes N\geqslant depthM+depthN.\] Proof.: We work by decreasing induction on \(depthN\). If either \(M\) or \(N\) has zero Cohen-Macaulay dimension then the equality holds by \(Q2\), which is already proved. Thus, without loss of generality, we may assume that \(max\{depthM,depthN\}<depthR\). Consider the following sequence \[0\to\Omega_{1}(N)\to Q_{0}\to N\to 0.\] By depth lemma \(depth\Omega_{1}(N)=depthN+1\). Further, \(M\) and \(\Omega_{1}(N)\) are non-zero Tor-independent \(R\)-modules. Inductively, the pair \((M,\Omega_{1}(N))\) satisfies the above inequality. If \(depthM>depthM\otimes N\), then by Lemma 12, \(depthM\otimes\Omega_{1}(N)=depthM\otimes N+1\) and thus the pair \((M,N)\) also satisfies the inequality. If \(depthM\leqslant depthM\otimes N\), then using the fact that \(depthN<depthR\), we get a strict inequality \[depthM\otimes N+depthR>depthM+depthN.\] Thus in either case, the inequality holds. **Corollary 14**.: _Let \(R\) be a local ring of positive depth. Let \(M\) and \(N\) be Tor-independent \(R\)-modules such that \(depthR>max\{depthM,depthN\}\). Then there exists an element \(x\) which is regular on \(R\) and \(\overline{R}\)-modules \(X\) and \(Y\) which are Tor-independent, such that \(depthM=depthX\) and \(depthN=depthY\)._ Proof.: We choose an element \(x\) which is regular on \(R,\Omega_{1}(M)\) and \(\Omega_{1}(N)\). Define \(X=\overline{\Omega_{1}(M)}\) and \(Y=\overline{\Omega_{1}(N)}\). Since \(depth\Omega_{1}(M)\otimes\Omega_{1}(N)>0\), we can apply Lemma 10 on the Tor-independent pair \((\Omega_{1}(M),\Omega_{1}(N))\), to conclude that \((X,Y)\) are Tor-independent with \(depthX=depthM\) and \(depthY=depthN\). **Lemma 15**.: _Assume that \(P0\) holds. Let \(R\) be a local ring of positive depth. Let \(M,N\) be depth \(0\)\(R\)-modules with finite Cohen-Macaulay dimensions. Then \(M,N\) are not Tor-independent._ Proof.: By repeated application of Corollary 14 and Theorem 3, we reduce to the case \(depthR=1\) where it contradicts \(P0\). **Lemma 16**.: _Assume that \(P\) holds. Let \(R\) be a local ring of positive depth. Let \(M\) and \(N\) be Tor-independent \(R\)-modules with finite Cohen-Macaulay dimensions. Then_ \[depthM+depthN-depthR\geqslant 0.\] Proof.: We induct on \(0ptR\). The claimed inequality holds by \(P\) when \(0ptR=1\). Since \(P\implies Q2\), the inequality follows when \(0ptR=\max\{0ptM,0ptN\}\). 
Thus, without loss of generality, we may assume that \(0ptR>0ptM\geqslant 0ptN\). By Lemma 15, we may further assume that \(0ptM>0\). Thus \(M\) and \(\Omega_{1}(N)\) are Tor-independent \(R\)-modules with positive depth. The containment \(M\otimes\Omega_{1}(N)\subset M\otimes\Omega_{0}\) shows that \(0ptM\otimes\Omega_{1}(N)\) is positive. Let \(x\) be an element which is regular on \(R,M,\Omega_{1}(N)\) and \(M\otimes\Omega_{1}(N)\). By Lemma 9, \(\overline{M}\) and \(\overline{\Omega_{1}(N)}\) are Tor-independent \(\overline{R}\)-modules. Going modulo \(x\), we get \[0pt\overline{M}+0pt\overline{\Omega_{1}(N)}-0pt\overline{R}= 0pt\overline{M}+0ptN-0ptR+1\] \[= 0ptM+0ptN-0ptR<0.\] contradicting the inductive hypothesis. Proof of Theorem 2.: Let \(d\geqslant 2\) be the smallest integer for which there is a counterexample on a depth \(d\) ring \(R\). By \(Q2\), we can assume that \(0ptR>\max\{0ptM,0ptN\}\). Suppose that the inequality in Theorem 13 is strict, then by Lemma 16 we have \[0ptM\otimes N>0ptM+0ptN-0ptR\geqslant 0.\] By Lemma 10, we can reduce the depth and get a counterexample in lower depth which contradicts the minimality of \(d\). Proof of Theorem 1.: The dimension \(0\) case is trivial. For higher dimensions, we use Theorem 5 to show that the Cohen-Macaulay dimensions of the modules involved are finite and then apply Theorem 2.
2309.03896
Fate of Quadratic Band Crossing under quasiperiodic modulation
We study the fate of two-dimensional quadratic band crossing topological phases under a one-dimensional quasiperiodic modulation. By employing numerically exact methods, we fully characterize the phase diagram of the model in terms of spectral, localization and topological properties. Unlike in the presence of regular disorder, the quadratic band crossing is stable towards the application of the quasiperiodic potential and most of the topological phase transitions occur through a gap closing and reopening mechanism, as in the homogeneous case. With a sufficiently strong quasiperiodic potential, the quadratic band crossing point splits into Dirac cones which enables transitions into gapped phases with Chern numbers $C=\pm1$, absent in the homogeneous limit. This is in sharp contrast with the disordered case, where gapless $C=\pm1$ phases can arise by perturbing the band crossing with any amount of disorder. In the quasiperiodic case, we find that the $C=\pm1$ phases can only become gapless for a very strong potential. Only in this regime, the subsequent quasiperiodic-induced topological transitions into the trivial phase mirror the well-known ``levitation and annihilation'' mechanism in the disordered case.
Raul Liquito, Miguel Gonçalves, Eduardo V. Castro
2023-09-07T17:57:01Z
http://arxiv.org/abs/2309.03896v2
# Fate of Quadratic Band Crossing under quasiperiodic modulation ###### Abstract We study the fate of two-dimensional quadratic band crossing topological phases under a one-dimensional quasiperiodic modulation. By employing numerically exact methods, we fully characterize the phase diagram of the model in terms of spectral, localization and topological properties. Unlike in the presence of regular disorder, the quadratic band crossing is stable towards the application of the quasiperiodic potential and most of the topological phase transitions occur through a gap closing and reopening mechanism, as in the homogeneous case. With a sufficiently strong quasiperiodic potential, the quadratic band crossing point splits into Dirac cones which enables transitions into gapped phases with Chern numbers \(C=\pm 1\), absent in the homogeneous limit. This is in sharp contrast with the disordered case, where gapless \(C=\pm 1\) phases can arise by perturbing the band crossing with any amount of disorder. In the quasiperiodic case, we find that the \(C=\pm 1\) phases can only become gapless for a very strong potential. Only in this regime, the subsequent quasiperiodic-induced topological transitions into the trivial phase mirror the well-known "levitation and annihilation" mechanism in the disordered case. ## I Introduction The unique characteristics of topological insulators compared to conventional band insulators have rendered them a central topic of current research [1; 2; 3; 4]. The groundbreaking discovery of the quantum Hall effect [5] and its subsequent theoretical explanation through the lens of topology [6; 7], paved the way for the proposal of the quantum anomalous Hall effect [8]. Remarkably, this effect allows for the appearance of a topological phase in the absence of a uniform magnetic field, and was experimentally realized in various platforms [9; 10; 11; 12]. These so called Chern insulators [4] arise from opening non-trivial gaps in systems with band crossing points carrying a finite quantized Berry phase, which can be accomplished by breaking time-reversal symmetry. The simplest and more extensively studied case of a band crossing point is that of a Dirac point, that hosts a low-energy linear dispersion described by a Dirac Hamiltonian, as it is the case for nodal superconductors and graphene [13]. Dirac points contain Berry phases of \(\pm\pi\), but there are other possibilities for band crossings and associated Berry phases, as is the case of quadratic band crossing points (QBCPs). Two-dimensional systems with QBCPs are also very interesting because on the one hand they are associated with a finite Berry phase of \(\pm 2\pi\) and on the other, a finite density of states at QBCP, contrary to Dirac points, renders these systems unstable to interactions [14]. In QBCP systems, interactions can induce nematic phases with two Dirac cones, each carrying half of the QBCP's Berry phase; or gap openings that may give rise to topological insulating phases [15; 16; 17; 18]. Even though topological band theory is usually studied in momentum-space for systems that have translational invariance, topological insulators are known to be robust to the effects of uncorrelated disorder [19]. In fact, disorder is a key ingredient for the observation of quantized Hall conductivity for quantum Hall systems since it localizes every state except those responsible for the quantized Hall current, that live at very narrow energy windows [20; 21; 22]. 
In this way, varying the Fermi level in the gap filled with localized states cannot change the Hall conductivity, giving rise to a filling-independent plateau. Sufficiently large disorder, however, generically suppresses the topological properties by inducing a topological phase transition into a trivial phase, where the topological extended states meet and become suppressed through the "levitation and annihilation" mechanism [22; 23; 24; 25]. However, numerous instances of disorder-induced topological phases, known as topological Anderson insulators, have also been found [26; 27; 28; 29; 30; 31]. A different class of systems that break translational invariance, where more exotic localization properties can occur, are quasiperiodic systems. Contrary to disordered systems, extended, localized and critical multifractal phases can arise even in one dimensional (1D) [32; 33; 34; 35; 36; 37; 38; 39; 40]. In higher dimensions, these systems have received considerable attention, mostly on the interplay between moire physics and localization [41; 42; 43; 44; 45; 46; 47; 48; 49]. Systems with quasiperiodic modulations can be realized in widely different platforms, including optical lattices [50; 51; 52; 53; 54; 55; 56; 57; 58; 59], photonic [48; 60; 61; 62; 63; 64; 65] and phononic [66; 67; 68; 69; 70; 71] metamaterials, and moire materials [72; 73; 74]. The impact of quasiperiodic modulations on parent topological systems has also been previously studied [75; 76; 77; 78], and was found to give rise to interesting topological phases with richer localization properties than in the disordered cases. The fate of a QBC in the presence of disorder was numerically studied in Ref. [79], in the non-interacting limit, where it was found to be unstable to the formation of topological insulating phases with Chern numbers \(C=\pm 1\), not present in the model's clean limit. The interplay between disorder and interactions was also studied in Refs. [80; 81] by renormalization-group methods, where the interaction-induced topological insulating phases were found to be suppressed at strong disorder. The influence of quasiperiodic modulations on QBCP has, however, remained poorly explored so far, with the notable exception of Ref. [82]. In this reference, a quasiperiodic potential was applied to a QBCP system, that was found to be stable up to a flatband regime at which the wave function becomes delocalized in momentum-space, closely related to the incommensurate magic-angle regime found in moire systems [46; 47; 83]. The study of the topological phases that can be induced by applying quasiperiodic modulations to a QBCP system has however remained unexplored up to now. In this work we study the topological and localization properties of a QBCP model with an applied 1D quasiperiodic modulation. The main results are shown in Fig. 1. The QBCP is robust for a small quasiperiodic potential, with the phase diagram remaining essentially unchanged with respect to the homogeneous limit, as seen in Fig. 1(a). For a large quasiperiodic potential, phases with Chern numbers \(C=\pm 1\), not present in the homogeneous limit, arise. The topological transitions into these phases were found to occur through the gap closing and reopening mechanism, as shown in in Fig. 1(b), being therefore of different nature from the ones found in the disordered case, in Ref. [79]. 
Interestingly, in the majority of the studied topological transitions, the gap closes at the same momenta as in the homogeneous case: at the center and corners of the first Brillouin zone (FBZ), defined in the translationally invariant direction. In Fig. 1(c), the localization length at the Fermi energy, obtained by the transfer matrix method, is seen to diverge at the gap closing phase transitions. Away from the Fermi energy, we found large clusters of bulk extended states both in topological and in trivial phases. This is in contrast with the disordered case, where all eigenstates are localized within the topological phases except for topologically non-trivial states arising at special energies. Only for an even larger potential, the \(C=\pm 1\) phases can become gapless and exhibit similar physics to the disordered case. In fact, further increasing the potential induces a transition into a trivial phase that mirrors typical topological transitions in the presence of disorder: close to the transition, topologically non-trivial extended states arise only at very narrow energy windows, merging at the transition, after which all states become trivial and localized, as in the "levitation and annihilation" mechanism [22; 23; 24; 25]. The paper is organized as follows: In Sec. II, we introduce the tight-binding model used to describe the electronic properties of the quasiperiodiced QBC system and the methods to analyze its properties. The topological, spectral, and localization properties are discussed in Sec. III. A thorough discussion of the obtained results is given in Sec. IV. We also include three appendices: in Appendix A we provide the real space TB Hamiltonian Figure 1: (a) Chern number as a function of the model parameter \(B\) and quasiperiodic potential \(W\) for a system size of \(L=21\) and \(20\) different twists realizations. The greyed out areas are gapless. (b) Energy gap as a function of \(B\) and \(W\) for \(L_{y}=1597\). No twist realizations were considered. The grey lines represent the lines at which a topological phase transition occurs. (c) Localization length (\(\xi\)) at the Fermi level (\(E=0\)) for the \(k_{x}\) values where the gap closes. A 1D system size of \(N=30000\) was used. The grid discretization is \(300\times 300\). and the \(k_{x}\)-dependent model; in Appendix B we present the Golden Ratio method used to compute the extrema of the energy spectrum; finally IPR and fractal dimension results are presented in Appendix C. ## II Model and Methods We consider a QBCP model on a square lattice with two orbitals per unit cell [79]. This model is defined by the following clean limit \(\mathbf{k}\)-space Hamiltonian \[\mathcal{H}_{0}=\sum_{\mathbf{k}}\boldsymbol{\Psi}_{\mathbf{k}}^{\dagger} \mathcal{H}_{\mathbf{k}}\boldsymbol{\Psi}_{\mathbf{k}}, \tag{1}\] where \(\boldsymbol{\Psi}_{\mathbf{k}}^{\dagger}=\left(\begin{array}{cc}c_{\mathbf{k },\mathbf{A}}^{\dagger}&c_{\mathbf{k},B}^{\dagger}\end{array}\right)\), with \(c_{\mathbf{k},\alpha}^{\dagger}\) the creation operator of a state with crystal momentum \(\mathbf{k}\) in the \(\alpha\) orbital and \[\mathcal{H}_{\mathbf{k}}=\mathbf{h}(\mathbf{k})\cdot\boldsymbol{\sigma}, \tag{2}\] with \(\boldsymbol{\sigma}\) the vector of Pauli matrices and the vector \(\mathbf{h}(\mathbf{k})\) given by \[h_{x} =2t_{x}\sin\left(k_{x}\right)\sin\left(k_{y}\right)\] \[h_{y} =0\] \[h_{z} =2t_{z}\left(\cos\left(k_{x}\right)-\cos\left(k_{y}\right)\right). 
\tag{3}\] From this point on, we set \(t_{x}=t_{y}=t=1\), so that every physical observable with energy units come in units of \(t\). QBCPs with symmetric Berry curvatures occur at \(\Gamma=(0,0)\) and \(M=(\pm\pi,\pm\pi)\) points of the FBZ. For \(h_{y}=0\), the system is gapless, having a finite DOS at the Fermi level (\(E=0\)). The introduction of a non-zero \(h_{y}\) perturbation opens a gap that can give rise to Chern insulating phases. This can be accomplished by introducing the following \(h_{y}\) term, \[h_{y}=1+\frac{B+1}{2}\left(\cos\left(k_{x}\right)+\cos\left(k_{y}\right)\right). \tag{4}\] For this model, a single QBCP occurs at \(M\) for \(B=0\), and for \(B=-2\) at \(\Gamma\). These are the only values of \(B\) for which the system becomes gapless, where topological phase transitions between phases with \(C=2\leftrightarrow C=0\) (\(B=-2\)) and \(C=0\leftrightarrow C=-2\) (\(B=0\)) take place (see Fig. 1(a), for \(W=0\)). In this work, we will study the fate of the model's phase diagram upon the addition of a 1D quasiperiodic potential. We consider the Aubry-Andre unidirectional potential given by the real-space Hamiltonian, \[\mathcal{H}_{W}=\frac{W}{2}\sum_{\mathbf{R},\alpha}\cos\left(2\pi\beta n \right)c_{\mathbf{R},\alpha}^{\dagger}c_{\mathbf{R},\alpha}, \tag{5}\] where \(\mathbf{R}=ma\hat{\mathbf{e}}_{x}+na\hat{\mathbf{e}}_{y}\) is a lattice vector and \(\beta\) is the potential frequency. In the thermodynamic limit we choose \(\beta\) to be an irrational number in order to break translational invariance; we choose the golden ratio \(\beta=\phi_{\text{GR}}=(1+\sqrt{5})/2\). We carried out numerical simulations for finite systems with \(L_{x}=L_{y}=L\) (with \(L\) the number of unit cells in each direction) and periodic/twisted boundary conditions. In order to avoid boundary defects we have chosen system sizes \(L\to L_{n}=F_{n}\), where \(F_{n}\) is the \(n\)-th order Fibonacci number, and approximated \(\beta\) with rational approximants of the golden ratio \(\beta\to\beta_{n}=F_{n+1}/F_{n}\). This choice ensures that the system's unit cell is of size \(L\), which guarantees that the system always remains incommensurate as \(L\) is increased. The potential in Eq. (5) keeps the system translational invariant along the \(x\) direction, and we take the Fourier transform along this direction to obtain \(\mathcal{H}_{k_{x}}\), an Hamiltonian diagonal in Bloch momentum \(k_{x}\) (see Appendix A). In what follows, we carry out an extensive study of spectral, topological and localization properties of the model. We compute the Chern number phase diagram of the system with an implementation of the coupling matrix method of Ref. [84] as implemented in Ref. [85]. Spectral properties were studied using exact diagonalization (ED) and the kernel polynomial method [86]. The transfer matrix method (TMM), introduced in [87; 88], can be used to compute the localization length and therefore to study bulk localization properties. At topological phase transitions, the states are extended at the Fermi level, while in topological phases the system is either gapped or populated with localized states around the Fermi level. For the two last cases, the localization length at Fermi level is finite, while in the former, it diverges in the thermodynamic limit. The TMM method was then also used to cross-check the topological phase diagram obtained through Chern number calculations. 
Throughout this work averages over twisted boundary conditions were realized such that the phase twist follow a random uniform distribution in the interval \(\theta_{i}\in[0,2\pi[\). To apply phase twists, the boundaries are periodically closed (as for periodic boundary conditions), but with an additional phase twist, so that: \[\psi_{\alpha}(\boldsymbol{R}+L\mathbf{a}_{i})=e^{i\theta_{i}}\psi_{\alpha}( \boldsymbol{R}), \tag{6}\] where \(\psi_{\alpha}(\boldsymbol{R})=c_{\mathbf{R},\alpha}^{\dagger}\ket{0}\). As the system approaches thermodynamic limit, any dependence on phase twists should vanish. ## III Results ### Topological Properties We start by characterizing the topological phase diagram, shown in Fig. 1(a). The phase diagram was ob tained for a system with \(21\times 21\) unit cells averaged over 20 random phase twists with twisted boundary conditions. The clean limit presents topological transitions from \(C=\pm 2\) to \(C=0\), with the gap closing at a QBCP (\(B=\{-2,0\}\)). These transitions are robust for small \(W\), still occurring at \(B=\{-2,0\}\). The observed behavior contrasts with the case of uncorrelated disorder, studied in Ref. [79], where it is shown that for infinitesimal disorder strength \(W\), \(C=\pm 1\) phases appear. For the present case of an incommensurate potential, \(C=\pm 1\) phases only appear at large enough potential strength, indicating a rather different phenomenology. Interestingly, along the lines of fixed \(B=-2,0\), when the QBCP occurs, we see the appearance of very small \(C=\pm 1\) bubbles, indicated by the arrows in Fig. 1(a). These bubbles are also seen in Fig. 1(b) for the gap and 1(c) for the localization length, which were obtained with very different system sizes, confirming they are not merely a result of finite-size effects. Increasing the quasiperiodic potential along the lines of fixed \(B=\{-2,0\}\), it will be shown below that for \(W\in[3.0,4.5]\) the QBCP splits into two Dirac cones, and eventually at \(W\approx 4.5\) a topological phase transition into \(C=\pm 1\) phase occurs. For a large enough potential strength, all topological phases are suppressed and a trivial phase is eventually reached. Unlike in the disordered case, there are both gapped and gapless \(C=\pm 1\) phases separated by the thin black lines in Fig. 1(a), as we will detail below. These phases are induced by quasi-periodicity, and are different from the topological Anderson insulators that exist for uncorrelated disorder due to their distinct localization properties, as we will show. ### Gapped/Gapless Regions In order to study the spectral gap of the system, we computed the closest eigenvalue to \(E=0\), which we call \(\epsilon_{0^{+}}\), using the \(k_{x}\)-dependent model presented in Appendix A. We then estimate the gap of the system as \(2\epsilon_{0^{+}}\), which is well justified due to the particle hole symmetry of the model, that ensures the spectrum is symmetric around \(E=0\). In Fig. 1(b) we present the gap of the system. These results were obtained using Lanczos decomposition alongside with the shift invert method for \(L_{y}=1597\) at the value of \(k_{x}\) where the gap closes. Generally we do not know the value of \(k_{x}\) at which the gap closes [89]. For this reason a golden ratio search algorithm (see Appendix. B) was implemented to compute the value of \(k_{x}\) that maximizes the valence band energy, that corresponds to the gap closing point. We can see in Fig. 
1(b) that the gap closes and reopens in most of the topological phase transitions. We also see the appearance of gapless regions for strong potential, seen as wider blackish regions at high \(W\). In Fig. 1(a) the greyed out regions were computed using the data in Fig. 1(b). Since the spectrum of a finite system is always discrete, we need to properly define a threshold gap value \(\delta\) below which we consider the system to be gapless. This threshold was fixed as the minimum value of the gap at well defined gap closing and reopening topological transitions. For the system size considered this value is \(\delta\approx 0.015\). ### Localization Properties To complete the characterization of the phase diagram, we now turn to the study of the localization properties. We start by discussing the localization length (\(\xi\)) at the Fermi level (\(E=0\)), obtained through the TMM, and shown in Fig. 1(c). From the gap results presented in Fig. 1(b), we know that for most of the topological phase transitions the gap closes and reopens. Since the gap closing point should contain extended states at a topological phase transition, the localization length \(\xi\) should diverge at these points, while being finite at the gapped regions. We can therefore capture the topological transitions by computing \(\xi\) as a function of \(B\) and \(W\), as shown in Fig. 1(c). Using the \(k_{x}\)-dependent model, we can reach closer to thermodynamic limit reaching system sizes of the order of \(L\approx 10^{4}\) with a relative error of approximately \(\epsilon\approx 1\%\), where \(\epsilon\) is the relative error of the average localization length after \(N\) TMM iterations. In order to compute the localization length at the Fermi level, TMM was implemented for the \(k_{x}\) at which the gap closes (calculated using the golden ratio search algorithm discussed in Appendix B ). We have noticed that the valence band presents two local maxima, one around the \(\Gamma\)-pion and the other around the \(M\)-point, however the gap only closes around one of them. The TMM results in Fig. 1(c) are a sum of the localization lengths for the two local maxima, a quantity that diverges when either one of the localization length does. For any other value of \(k_{x}\), TMM will always give a finite but small \(\xi\) since the spectra of \(\mathcal{H}_{k_{x}}\) is gapped. With this approach, we compute the localization length \(\xi\) at \(E=0\) for a set of points \((B,W)\), thus obtaining in Fig. 1(c) the contour of the original topological phase diagram shown in Fig. 1(a). For high enough quasiperiodic potential strength, when \(B\lesssim-3\) or \(B\gtrsim 2\), this approach fails since the gap closes and does not reopens. The method of golden ratio search also fails in gapless regions since the gap closes at a continuous region of \(k_{x}\). To better understand the localization properties, we have performed an energy resolved study including energies away from the Fermi level. Since localized bulk states do not contribute to the system's topological properties, analysing the energy position of the extended bulk states with increasing quasiperiodic potential should give us some intuition on how they affect the topological phases. For regular Anderson disorder in 1D and 2D, any amount of disorder fully localizes the spectrum for systems belonging to the orthogonal symmetry class A [90]. As previously discussed, the same cannot be said about incommensurate structures for which finite fractions of extended states can exist. 
To study the localization properties of the bulk states as a function of the energy \(E\) we computed the \(k_{x}\) and \(E\) resolved localization length, \(\xi(k_{x},E)\) for different values of the quasiperiodic potential. In Fig. 2 the \(k_{x}\)-dependent results for the localization length can be seen for \(B=-3\). For the first topological phase transition from \(C=2\) to \(C=1\) (upper panel) there are a set of extended states around \(k_{x}=0\), where the gap closes. We can clearly see that the higher the quasiperiodic potential \(W\) the larger is the fraction of localized states. On the lower panel of Fig. 2, we see that right before the second transition from \(C=1\) to \(C=0\), most of the bulk states are localized except for a few extended states that can be observed if we use a fine enough grid discretization, as seen in the inset. After the transition, for \(W>9\) all the states are Anderson localized, and the system is in a trivial Anderson insulating phase. Figure 2 shows the localization length \(\xi(k_{x},E)\) for \(B=-2\). For small \(W\) we can see that the QBCP is robust, as expected from previous results. Around \(W=3.5\), we see that the QBCP splits into two Dirac points around the \(\Gamma\) point. It is still clear that under increasing quasiperiodic potential \(W\) bulk states become increasingly more localized. In the lower panel of Fig. 2, we show the localization length results for a topological phase transition from \(C=1\) into a trivial phase. We see that the gap closes at the transition, after which all the states fully localize. Overall the same behavior can be observed for other values of \(B\). On the the other side of the diagram, \(B>-1\), the behavior is similar, with a shift of \(\Delta k_{x}=\pi\). ## IV Discussion In this work we unveiled the full phase diagram of a 2D system with a QBCP under an applied 1D quasiperiodic potential. In the following, we discuss our main results and compare with previous findings for the disordered QBCP system. In the absence of quasiperiodicity, topological phase transitions occur when the gap closes at a QBCP. When a small quasiperiodic potential is applied, even though translational invariance is broken along the \(y\)-direction, we observe that the topological transitions are accompanied by a gap closing with a quadratic dispersion along \(k_{x}\), as is obvious from Figs. 2 and 3. Moreover, these transitions occur through a gap closing and reopening mechanism as in the clean case, which strongly indicates that the QBCP is stable to the addition of quasiperiodic modulations. This is in sharp contrast with the disordered case, for which the QBCP is unstable to the formation of gapless phases with Chern numbers \(C=\pm 1\), not present in the clean limit [79]. Interestingly, in the quasiperiodic case, topological transitions into \(C=\pm 1\) phases were also observed for larger quasiperiodic potentials. However, again in contrast to the disordered case, these transitions also occur via a gap closing and re Figure 2: Localization length as a function of energy \(E\) and momentum \(k_{x}\) at \(B=-3\) for the entire range of the valence band. The grid discretization is \(300\times 300\). The insets follow the same procedure for a smaller region (\(E\in[-1,0]\) and \(k_{x}\in[-\pi,-\pi+0.2]\)) with a grid discretization of \(600\times 600\). opening mechanism. 
For the topological transitions from \(C=\pm 2\) to \(C=\pm 1\), one of the QBCP is split in two gapped Dirac cones (see, for instance, the upper right panel in Fig. 3), and the gap closes only at one of them (otherwise the Chern variation would be larger than one) [91]. The Dirac cone at a gap closing point is clearly seen in Fig. 2 (upper, middle panel). Once again in sharp contrast with the disordered case, only for very large potential strengths (\(W\gtrsim 9\)) the \(C=\pm 1\) phases becomes gapless (with localized states at the Fermi level). The localization properties were also found to be very distinct from the disordered case at small \(W\). In the presence of disorder all the eigenstates are always localized except for extended states that live on very narrow energy windows. This contrasts with the quasiperiodic case where considerably large clusters of bulk states remain extended. Furthermore, at large potential strengths, an analogous of the "levitation and annihilation" mechanism observed in disordered systems can be observed close to the transitions from the \(C=\pm 1\) phases into the trivial phase. In fact, in this large potential limit, the \(C=\pm 1\)can become gapless, with extended states only existing at very narrow energy windows [92]. After the topological transition into the trivial phase, all states become localized, as in the disordered case. The topological and localization properties therefore show a mixture of features present in the homogeneous and disordered cases, as it was also previously observed for quasiperiodic Chern insulators in Ref. [78]. The small potential results are also in agreement with Ref. [82], where a different QBCP system was found to be robust to the application of a (in this case fully two-dimensional) quasiperiodic potential. Our findings can be verified in ultracold atoms and trapped ions experiments, where quasiperiodic potentials are routinely implemented [50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. By keeping translational invariance along one direction, our study of a 1D quasiperiodic potential was a suitable starting point to address the interplay between quasiperiodicity and quadratic band crossings. However, an equally interesting question lies on the fate of the QBCP upon the application of a fully two-dimensional quasiperiodic potential, that we leave for future exploration. The authors acknowledge partial support from Fundacao para a Ciencia e Tecnologia (FCT-Portugal) through Grant No. UIDB/04650/2020. MG acknowledges partial support from Fundacao para a Ciencia e Tecnologia (FCT-Portugal) through Grant No. UID/CTM/04540/2019. MG acknowledges further support from FCT-Portugal through the Grant SFRH/BD/145152/2019.
2309.12743
Cosmic-ray propagation in extragalactic space and secondary messengers
These notes summarize the lectures about "Cosmic-ray propagation in extragalactic space and secondary messengers", focusing in particular on the interactions of cosmic-ray particles with the background photons in the Universe, including nuclear species heavier than hydrogen, and on the analytical computation of the expected cosmic-ray fluxes at Earth. The lectures were held at the Course 208 of the International School of Physics "Enrico Fermi" on "Foundations of Cosmic-Ray Astrophysics", in Varenna (Como, Italy) from June 23rd to June 29th, 2022. These notes are complementary to the content of the lectures held by Pasquale Dario Serpico at the same school.
Denise Boncioli
2023-09-22T09:44:55Z
http://arxiv.org/abs/2309.12743v1
# Cosmic-ray propagation in extragalactic space and secondary messengers ###### Abstract These notes summarize the lectures about "Cosmic-ray propagation in extragalactic space and secondary messengers", focusing in particular on the interactions of cosmic-ray particles with the background photons in the Universe, including nuclear species heavier than hydrogen, and on the analytical computation of the expected cosmic-ray fluxes at Earth. The lectures were held at the Course 208 of the International School of Physics "Enrico Fermi" on "Foundations of Cosmic-Ray Astrophysics", in Varenna (Como, Italy) from June 23rd to June 29th, 2022. These notes are complementary to the content of the lectures held by Pasquale Dario Serpico at the same school. ## 1 Introduction This series of lectures focuses on the foundations of cosmic-ray astrophysics. The history of cosmic rays starts at the beginning of the past century, thanks to some experiments realized by Domenico Pacini [1] and Victor F. Hess [2]. They revealed that a spontaneous "radiation" was coming from the outer space, and not from the Earth crust, as it was believed to justify the spontaneous discharge of charged electroscopes. Forthcoming experiments established that this "radiation" is instead of particle nature: _cosmic rays_ are ionized nuclei, of which 90% are protons. Thanks to cosmic rays, many fundamental discoveries in particle physics were possible until the 1950s, when high-energy particles started to be produced in human-made particle accelerators. In recent decades, cosmic rays have again been considered as the frontier to explore interactions at the highest energies, far beyond those available at terrestrial accelerators. These lectures deal with interactions of cosmic rays in astrophysical environments, whose knowledge is needed in order to unveil the astrophysical origin of cosmic rays. Beyond this aspect, cosmic rays constitute in general a unique field of research to improve our understanding of the physics governing interactions at extremely high energies. In addition, cosmic rays at the highest energies could provide a unique test of physics beyond the Standard Model, due to possible connections with dark matter as well as to probes of fundamental symmetries [4]. Cosmic rays are measured within a vast energy range; their energy density at Earth is recognized to follow a power law spectrum as \(E^{-\gamma}\) with \(\gamma\sim 3\) (see Fig. 1). The present lectures focus on the extremely high energies, namely above \(10^{17}\) eV, where the cosmic Figure 1: Global view of the cosmic-ray energy spectrum, with the measurements of different experiments. From [3]. rays are called Ultra-High-Energy Cosmic Rays (UHECRs). They are currently investigated thanks to giant observatories such as the Pierre Auger Observatory [5] and the Telescope Array [6], which can measure the energy deposited in the atmosphere by the cascade of particles originated after the first interaction of the cosmic-ray particle in the atmosphere, as well as the particles reaching the surface of the Earth. Thanks to hybrid techniques, several fundamental quantities of the primary cosmic ray hitting the atmosphere can be deduced, such as its energy, arrival direction and chemical composition. UHECR particles have energy roughly more than eight orders of magnitude larger than the energy of a proton at rest, and the particles are therefore ultrarelativistic. 
Although theoretical studies and experimental efforts have been developed in the last decades, several issues concerning UHECRs are still unsolved. The astrophysical origin of cosmic rays, as well as their chemical composition and the mechanisms that bring them to such extreme energies in their sites of production, are uncertain, and constitute a major and exciting field of study in astroparticle physics. Due to the extremely high energy of UHECRs, and taking into account the confinement power of the magnetic fields in the Galaxy, they are not expected to be produced in the Galaxy, as also significantly confirmed by large-scale anisotropy studies [7]. In order to estimate what type of extragalactic sources can host mechanisms to accelerate particles to such high energies, the acceleration mechanisms of particles in Supernova Remnants (SNRs), as predicted in the standard paradigm of the Galactic cosmic rays [8], can be exploited. In these environments, the maximum energy reachable from particles due to the presence of shocks is connected to the age, and therefore to the dimension, of the SNR, as well as to the intensity of the magnetic fields [9, 10]. This has been generalized in [11], and the maximum energy can be thus defined in terms of the confinement power of the astrophysical source, meaning that the particles can reside in the acceleration region as soon as the gyroradius is smaller than the region itself. This argument, known as "Hillas condition", permits to classify the candidate sources in terms of their comoving size and magnetic field in the acceleration region, as shown in Fig. 2 (left), where the maximum energy is expressed as \(E_{\rm max}=Ze\beta_{\rm sh}cBR\), being \(\beta_{\rm sh}\) the velocity of the shock in units of speed of light, \(c\), \(B\) the intensity of the magnetic field in the source, \(R\) the size of the accelerating region and \(Ze\) the charge of the particle. Several source types reported in Fig. 2 (left) are capable of accelerating cosmic rays to ultra-high energies. A complementary condition to be taken into account is that the considered source class must produce the energy budget in cosmic rays to account for the observed energy flux at Earth. From the fit of the UHECR spectrum and composition measured at Earth, an estimate of the UHECR energy production rate per unit volume (also called luminosity density or emissivity, being the product of the luminosity and the number density) can be given, as done in [12], where \(5\times 10^{44}\)\(\rm erg\,Mpc^{3}\,yr^{-1}\) is found. Fig. 2 (right) shows what source classes satisfy this requirement, taking into account the combination of luminosity and number density. The vertical line in the plot highlights the minimum effective number density that can be estimated from studies involving arrival directions. The full diagonal line refers to the case in which the luminosity in cosmic rays is supposed to coincide with the gamma-ray luminosity. The plots in Fig. 2 summarize generic indications about source classes that can be considered as responsible for the UHECR flux at Earth. More detailed indications come, for instance, from studies that compare UHECR arrival directions with the position of sources from catalogs, as in [14] where the starburst galaxies are found to be correlated with the highest energy CRs at more than 4.0 \(\sigma\). 
Studies that interpret the UHECR energy spectrum and composition in terms of astrophysical scenarios can in turn constrain the spectral characteristics of UHECRs at the escape from their sources [12]. Some slight departures from a single power law in the CR spectrum can be recognized; for instance, a change of slope is visible at \(\sim 3\times 10^{15}\) eV, called _knee_. This could be interpreted as a signature of the end of the acceleration power of CR sources at work in the Galaxy, as due to processes in SNRs. Another change of slope, measured at \(\sim 5\times 10^{18}\) eV, called _ankle_, could be connected to the intersection of the spectra of Galactic and extragalactic CRs, as well as to different contributions from populations of extragalactic sources, or to effects of the energy losses of CRs during their travel through the extragalactic space (see for instance Ref. [15] and references therein). The origin of the suppression of the CR spectrum at the highest energies is also undetermined, as it will be also mentioned in the following. For a comprehensive and recent review of the open questions about cosmic rays, see Ref. [13]. The lecture notes are organized as follows. In Sec. 2 I will introduce the physics of interactions that take place in the travel of the UHECR particles in the space they traverse before being detected. Sec. 3 is dedicated to the analytical computation of the cosmic-ray flux at Earth, for the case of protons and heavier nuclei, respectively. Figure 2: Left: Source classes as a function of their comoving size and magnetic field, and corresponding maximum energy reachable for hydrogen and iron nuclei for different values of the shock velocity. Right: Characteristic source luminosity versus source number density for steady sources, and effective luminosity versus effective number density for transient sources. Both figures are reproduced with permission from [13]. In this section, several references to the _SimProp_ Monte Carlo code for the UHECR extragalactic propagation [16], of which I am one of the authors, can be found, as well as to some applications of the code, that is currently under revision for improving its performances and for refining some physics inputs [17]. In Sec. 4 the production of secondary particles that are expected to be generated by cosmic rays interacting in the extragalactic background photon field is described. Several appendices complement the material reported in the main text. ## 2 Interactions of UHECRs In this section the calculation of the energy losses of cosmic-ray protons as due to interactions with the cosmic microwave background (CMB) is worked out analytically; the same calculation can be performed for the case of a generic photon field, taking into account its dependence on the redshift and energy. The case of nuclei heavier than hydrogen will be also shown. As a first step, the available energy for the interactions is discussed; due to the relativistic energies of the involved particles, in order to describe a process such as \(a+b\to c+d\), where \(a,\,b,\,c,\,d\) are the particles in the initial and final states, their energy-momentum four-vectors can be defined as \[p_{\mu}=(E_{i},\,c\vec{p}_{i}); \tag{1}\] another quantity to be considered is the \(s\) of the process, namely the scalar product of the cumulative four-vectors of the initial and final states, that is a Lorentz-invariant quantity. 
As a general approach applied to any of the processes described in the following, the value of \(s_{\rm th}\), namely the \(s\) at the threshold for the reaction, will be computed in a convenient reference frame, such as \[s_{\rm th}=(E_{a}+E_{b})^{2}-c^{2}(\vec{p}_{a}+\vec{p}_{b})^{2}=(m_{c}^{2}+m_{ d}^{2})c^{4}, \tag{2}\] corresponding to the laboratory frame. This quantity will be calculated corresponding to the interactions relevant in the propagation of the particles through the extragalactic space. The necessary ingredients for computing the rates of the processes are the spectra of the photons encountered during the extragalactic propagation; the ones relevant for UHECRs are mainly the CMB and the ultraviolet-optical-infrared background light (also called Extragalactic Background light, EBL). The cross sections of the photo-nuclear processes, that will be discussed in the following, have to be included in the computation as well. The typical time of an interaction process is approximately proportional to \(1/(c\sigma n_{\rm ph})\), where \(\sigma\) represents the cross section of the process, while \(n_{\rm ph}\) is the density of the photons encountered by the cosmic ray in the extragalactic space. If the distribution of these quantities in terms of energies is considered, the interaction rate can be computed as: \[\frac{dN_{\rm int}}{dt}=c\int(1-\beta\cos\theta)n_{\rm ph}(\epsilon,\cos\theta )\sigma(\epsilon^{\prime})d\cos\theta d\epsilon\,, \tag{3}\] where \(n_{\rm ph}\) is the energy spectrum of the photon field (as a function of the energy of the photon in the laboratory frame and of the angle between the momenta of the particle and the photon) and \(\sigma\) is the cross section of the considered process, as a function of the energy of the photon in the particle frame (here and in the following the primed terms refer to the quantities computed in the proton/nucleus rest frame). These quantities are reported respectively in Fig. 3 (calculated in the local Universe, for the CMB and some models of the EBL) and Fig. 4. From the energy of the photon in the proton rest frame, the transformation \(d\epsilon^{\prime}=-\Gamma\epsilon d\cos\theta\) is derived and the integral reads then as two integrals over the energy of the photons (whose distribution is assumed isotropic in the laboratory frame), in the laboratory and the proton rest frame: \[\frac{dN_{\rm int}}{dt}=\frac{c}{2\Gamma^{2}}\int_{\epsilon^{\prime}_{\rm th}} ^{\infty}\sigma(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{\prime}/2 \Gamma}^{\infty}\frac{n_{\rm ph}(\epsilon)}{\epsilon^{2}}d\epsilon d\epsilon^ {\prime}\,. \tag{4}\] A complete derivation of the interaction rate is reported in App. A. Figure 3: Spectral energy density of the CMB and EBL calculated in the local Universe, according to the models reported in [18, 19, 20], and used in [17]. In order to compute the energy loss length, the inelasticity of the processes taken into account has to be evaluated, meaning the mean fraction of energy of a nucleus lost in a single interaction, \(f(\epsilon^{\prime})=\langle\frac{E_{\rm in}-E_{\rm out}}{E_{\rm in}}\rangle\). The interaction length in Eq. 
4 can be used to compute the energy loss rate, by introducing the inelasticity, as: \[\frac{1}{E}\frac{dE}{dt}=-\frac{c}{2\Gamma^{2}}\int_{\epsilon^{\prime}_{\rm th }}^{\infty}f(\epsilon^{\prime})\sigma(\epsilon^{\prime})\epsilon^{\prime}\int _{\epsilon^{\prime}/2\Gamma}^{\infty}\frac{n_{\rm ph}(\epsilon)}{\epsilon^{2}} d\epsilon d\epsilon^{\prime}\,, \tag{5}\] for a generic process. This rate can be thus converted into a length as: \[l_{\rm loss}=-c\left(\frac{1}{E}\frac{dE}{dt}\right)^{-1}=-E\frac{dx}{dE} \tag{6}\] and used to follow the trajectory of the particle as \[\frac{dE}{dx}=-\frac{E}{l_{\rm loss}} \tag{7}\] being \(x\) the distance covered by the particle. Let us calculate the interaction rate in the case of UHECR protons interacting with the CMB. Already at the time of the discovery of the CMB, it was supposed that the photo-pion production \(p+\gamma_{\rm bkg}\to p(n)+\pi^{0}(\pi^{+})\) of protons off CMB photons could cause Figure 4: Total cross section for the photo-meson production process as a function of the energy of the photon in the proton rest frame; the cross section is taken from SOPHIA [21] and used in [17]. energy losses inducing a suppression of the UHECR flux [22, 23]. The threshold for this process can be calculated as: \[s_{\rm th}=m_{\rm p}^{2}c^{4}+2E_{\rm th}\epsilon(1-\beta\cos\theta)=(m_{\rm p}+ m_{\pi})^{2}c^{4} \tag{8}\] where \(m_{\rm p},m_{\pi}\) are the proton and the pion masses, respectively, \(\beta\) is the speed of the proton in the laboratory frame (being the particles ultrarelativistic, this can be taken as \(\beta\sim 1\) and will be omitted in the following), \(\theta\) is the angle between the photon and the proton momenta, \(\epsilon\) is the energy of the photon in the laboratory frame and \(E_{\rm th}\) is the minimum energy required for the proton in order to induce a pion production, which reads: \[E_{\rm th,p}^{\pi}=\frac{m_{\pi}^{2}c^{4}+2m_{\pi}m_{\rm p}c^{4}}{2\epsilon(1- \cos\theta)}\approx 7\times 10^{19}\,{\rm eV} \tag{9}\] if head-on collisions are taken into account with a photon of \(\epsilon\approx 7\times 10^{-4}\,{\rm eV}\) (average CMB photon energy). Another process that can cause energy losses of CR protons is the electron-positron pair production [24], \(p+\gamma_{\rm bkg}\to p+e^{+}+e^{-}\), for which the energy threshold can be calculated similarly to the case of the pion production: \[E_{\rm th,p}^{e^{+}e^{-}}=\frac{4m_{\rm e}^{2}c^{4}+8m_{\rm e}m_{\rm p}c^{4}}{ 2\epsilon(1-\cos\theta)}\approx 6\times 10^{17}\,{\rm eV}\,. \tag{10}\] In order to evaluate what photon fields can play a role in these processes, one can compute the energy of the photon in the proton rest frame: \(\epsilon^{\prime}=\epsilon\Gamma(1-\cos\theta)\approx\epsilon\Gamma\). Therefore the needed threshold Lorentz factor to trigger a photo-pion production in the EBL (mean infrared energy \(10^{-1}\div 10^{-2}\) eV) will be lower than what is found for the CMB photons, therefore permitting lower energy protons to induce the production of pions. Although the pion production by protons off EBL is less efficient in terms of energy losses if compared to the pair production in the same energy range, this is relevant for the production of neutrinos (as discussed in Sec. 4). 
If the energy spectrum of CMB photons (black body) \[n_{\rm ph}=\frac{dN_{\rm ph}}{dVd\epsilon}=\frac{1}{\pi^{2}(\hbar c)^{3}} \frac{\epsilon^{2}}{\exp(\epsilon/k_{\rm B}T)-1} \tag{11}\] is considered (where isotropy is also assumed, so that \(n_{\rm ph}(\epsilon,\cos\theta)\approx n_{\rm ph}(\epsilon)\)), the calculation of the integral over the photon density in Eq. 4 can be worked out analytically [25], with the transformation \(y=\exp(\epsilon/k_{\rm B}T)-1\). The interaction rate becomes then \[\frac{dN_{\rm int}}{dt}=\frac{ck_{\rm B}T}{2\pi^{2}(\hbar c)^{3}\Gamma^{2}} \int_{\epsilon_{\rm th}^{\prime}}^{\infty}\sigma(\epsilon^{\prime})\epsilon^ {\prime}\left\{-\ln\left[1-\exp\left(-\frac{\epsilon^{\prime}}{2\Gamma k_{\rm B }T}\right)\right]\right\}d\epsilon^{\prime}\,. \tag{12}\] The inelasticity at the threshold for the production can be computed taking into account the masses of the particles to be generated, so that \(f_{\pi}(\epsilon^{\prime}\approx 145\,\mathrm{MeV})=\frac{m_{\pi^{0}}}{m_{p}+m_{ \pi^{0}}}\approx 0.125\) and \(f_{e^{+}e^{-}}(\epsilon^{\prime}\approx 1\,\mathrm{MeV})=\frac{2m_{e}}{m_{p}+2m_{e}} \approx 10^{-3}\) corresponding to photo-pion and pair production respectively. In order to compute the same quantities at energies higher than the threshold, the particle distributions in the final states are needed. In the energy range of interest for CR interactions, the inelasticity is usually approximated as 20% for the photo-pion production and \(10^{-5}\) for the pair production [26]. Fig. 5 shows the energy loss lengths corresponding to different processes for protons in the CMB, as a function of the energy, and computed at the present time (or redshift \(z=0\)), with the inelasticity taken into account for the case of isotropic production of the pions in the center of mass frame (other options are described in [27]). The horizontal line shows the adiabatic energy losses due to the expansion of the Universe, corresponding to \(z=0\), that are in general given by: \[\frac{1}{E}\frac{dE}{dt}=-H(z(t))\,, \tag{13}\] where \(H(z)=H_{0}\sqrt{(1+z)^{3}\Omega_{m}+\Omega_{\Lambda}}\) (see also App. B for more details). At intermediate Figure 5: Total energy loss length for protons (solid line) as a function of the Lorentz factor, calculated at redshift \(z=0\), with the contributions from photo-pion production (red-dashed line refers to the total photo-pion energy loss length while the red-dotted line refers only to the photo-pion energy loss length off CMB) and from electron-positron pair production (orange dashed line) off CMB photons [17]. energies the effect of the energy losses due to the pair production is dominant; the typical length traversed by UHECRs undergoing these processes is of the order of Gpc. At the highest energies photo-pion processes can take place, and the typical length is of the order of 10 Mpc. Taking into account the fraction of proton energy lost in each interaction, protons with initial energy of \(10^{21}\) eV will be under threshold for photo-pion production after traveling \(50\div 100\) Mpc. This means that if UHECRs with EeV energies are detected at Earth, these should be produced within a sphere of the order of \(\sim 100\) Mpc, as predicted by [22, 23]. The energy loss lengths in Fig. 5 are computed corresponding to the present time. 
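For completeness, a minimal sketch of how Eq. 12 can be evaluated numerically at \(z=0\) is given below; the cross section \(\sigma(\epsilon^{\prime})\) is left as a user-supplied function (e.g. a photo-meson parameterization such as the SOPHIA-based one of Fig. 4), and no specific numerical results are claimed here.

```python
# Minimal sketch of Eq. 12 (assumed units: energies in eV, cross section in cm^2,
# returned rate in 1/s): interaction rate on a blackbody photon field at z = 0
# for a user-supplied cross section sigma(eps_prime).
import numpy as np
from scipy.integrate import quad

C_CM_S = 2.998e10        # speed of light [cm/s]
HBARC  = 1.973e-5        # hbar * c [eV cm]
KB     = 8.617e-5        # Boltzmann constant [eV/K]

def interaction_rate_blackbody(gamma, sigma, T=2.725, eps_prime_th=145e6):
    """dN_int/dt for a particle of Lorentz factor gamma (Eq. 12)."""
    kT = KB * T
    prefactor = C_CM_S * kT / (2 * np.pi**2 * HBARC**3 * gamma**2)

    def integrand(eps_prime):
        # -ln[1 - exp(-eps'/(2 Gamma kB T))], written with log1p for stability
        weight = -np.log1p(-np.exp(-eps_prime / (2 * gamma * kT)))
        return sigma(eps_prime) * eps_prime * weight

    # Finite upper limit for numerical convenience; the weight factor strongly
    # suppresses the integrand far above 2 Gamma kB T.
    value, _ = quad(integrand, eps_prime_th, 200 * eps_prime_th, limit=200)
    return prefactor * value
```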
Due to the dependence of the density of the CMB photons on the redshift and the dependence of the temperature of the CMB photons, the energy loss length varies as: \[l_{\rm loss}(E,z)=\frac{l_{\rm loss}((1+z)E,z=0)}{(1+z)^{3}}\,, \tag{14}\] while if the EBL is used instead of the CMB, the expression above has a more complicate dependence on \(z\). The complete computation of Eq. 14 is reported in App. C. Current measurements of cosmic rays at the highest energies are found to be inconsistent with a pure-proton composition, using current hadronic interaction models for taking into account the development of the cascade of particles generated in the atmosphere after the first interaction of the primary cosmic ray [28, 29]. If cosmic rays reaching the top of the atmosphere are heavier than protons, their possible interactions must be taken into account for the propagation through extragalactic photons in order to possibly infer the UHECR mass composition and spectral parameters at their sources. In addition to the electron-positron pair production and the photo-pion production, the photo-disintegration process plays an important role in the modification of the nuclear species of the cosmic rays as escaped from their sources, on their way to the Earth. Unlike the pion production, the disintegration of nuclei can be triggered correspondingly to energies of the photon in the nucleus rest frame of tens of MeV, as shown for instance in Fig. 6 for the case of iron-56. At these energies it is possible to neglect the binding energy of the nucleons in the nucleus, therefore the energy loss lengths can be computed as separated contributions from the modification of the Lorentz factor (due to energy losses from adiabatic expansion, pair production and pion production) and the change in the atomic mass number, due to the photo-disintegration, where the Lorentz factor is conserved [30]: \[\frac{1}{E}\frac{dE}{dt}=\frac{1}{\Gamma}\frac{d\Gamma}{dt}+\frac{1}{A}\frac{ dA}{dt}\,. \tag{15}\] The photo-disintegration process comprises two main regimes [31, 32], as shown in Fig. 6: * a resonance at about 10 MeV (energy of the photon in the nucleus rest frame, slightly dependent on the nucleus), called Giant Dipole Resonance (GDR), corresponding to the behavior of protons and neutrons in the nucleus as penetrating fluids; the de-excitation of this resonance produces the ejection of one or two nucleons; * 150 MeV, called Quasi-Deuteron regime, where the photon wavelength in the nucleus rest frame is comparable to the nuclear dimensions and the photon is likely to interact with a nucleon pair, with the final ejection of that pair and possibly other nucleons. A cascade of isotopes lighter than the injected primary can be therefore generated due to photo-disintegration in astrophysical environments, as reported in Fig. 7 for the iron-56 as primary nucleus. Examples of energy loss lengths are shown in Fig. 8, corresponding to nitrogen-14 (top) and iron-56 (bottom) nuclei. Similarly to the case of protons, the adiabatic energy losses are dominant at low energies, and the corresponding energy loss length can be written as in Eq. 13. At intermediate energies, the pair production(1) is overcome by the photo-disintegration on the EBL, while at the highest energies the dominant process is the photo-disintegration on the CMB. The pion production is shifted towards higher energies with respect to the case of protons. 
This is due to the fact that in this process the particle involved is the nucleon in the nucleus, therefore the threshold is \(E_{\rm th}\approx A\Gamma_{\rm th}^{p}m_{p}c^{2}\), where \(\Gamma_{\rm th}^{p}\) can be derived from Eq. 9. Therefore, the pion production Figure 6: Total inelastic photo-nuclear cross section for iron-56 as a function of photon energy in the rest frame of the nucleus. Reproduced with permission from [33]. will be more efficient in the case of protons with respect to heavier nuclear species, and this will have consequences for the production of secondary messengers (see Sec. 4). ## 3 Computation of UHECR fluxes at Earth The propagation of UHECRs in the intergalactic or Galactic medium can be followed with a system of differential equations describing the evolution of the particles with respect to the time, taking into account all interactions that can modify their number or energy. These are known as diffusion-loss equation, for which an extensive treatment can be found in [25, 30, 35] for our cases of interest. Here the propagation in the intergalactic magnetic fields (treated for protons for instance in [36]) is neglected, and a general form for the transport equation can be given by: \[\frac{dn_{i}(E,t)}{dt}=-\frac{d}{dE}\left[\frac{dE(t)}{dt}n_{i}(E,t)\right]- \frac{n_{i}(E,t)}{\tau_{i}(E,t)}+Q_{i}(E,t) \tag{16}\] where \(n_{i}(E,t)\) represents the number of particles of species \(i\) per volume and energy, with energy \(E\) at the time \(t\); the variation of \(n_{i}\) with time is due to energy losses (first term at the right side of the equation), particle losses due to decays with lifetime \(\tau_{i}(E,t)\) (second term) and to an injection rate represented by the third term. In the following, Figure 7: Nuclear chart as a function of the number of protons and neutrons, showing the isotopes that can be created in the cascade after the first interaction of an isotope of iron-56 with a background photon field. Reproduced with permission from [34]. the flux will be also used, which can be defined from the quantity \(n\) as: \[J(E,z=0)=\frac{c}{4\pi}n(E,z=0). \tag{17}\] In the next subsections the computation will be specified to the case of UHECR protons and nuclei, using, if needed, the following notation: \[-\frac{1}{E}\frac{dE}{dt}=\beta_{0}\left(E\right), \tag{18}\] from which the quantity \[b_{0}(E)=-\frac{dE}{dt}=E\beta_{0}(E) \tag{19}\] can be also defined. Figure 8: Total energy loss lengths for nitrogen-14 (top) and iron-56 (bottom), calculated at \(z=0\), and the contributions from the different processes. Reproduced with permission from [37]. ### The case of UHECR protons In order to compute the spectrum for the case of protons, one should consider that the second term at the right hand side of Eq. 16 is zero. Therefore, one can evolve the particles from the time of observation to the cosmological epoch of generation using the adiabatic energy losses \(EH(z)\) and \(b\) as: \[-\frac{dE}{dt}=EH(z)+(1+z)^{2}b_{0}[(1+z)E]\,; \tag{20}\] the dependence of \(b\) and \(\beta\) from the redshift is explained in App. C. Let us first compute the flux from a single source; in order to do so, the evolution of the energy as a function of time/redshift has to be evaluated, and the energy intervals at the epoch of production and detection have to be computed. 
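Before deriving this connection formally, a minimal numerical preview of the procedure is sketched below; the \(z=0\) fractional loss rate \(\beta_{0}(E)\) is a placeholder argument and must be supplied from the interaction rates discussed in Sec. 2.

```python
# Minimal numerical preview of the procedure derived in the following (Eqs. 21-27):
# integrate the proton losses backwards in redshift to obtain the energy at
# generation E_g for a given energy E at Earth. beta0(E) = -(1/E) dE/dt at z = 0
# is a placeholder argument, not a provided parameterization.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 2.2e-18                    # Hubble constant [1/s] (~69 km/s/Mpc)
OMEGA_M, OMEGA_L = 0.3, 0.7

def H(z):
    return H0 * np.sqrt((1 + z)**3 * OMEGA_M + OMEGA_L)

def energy_at_generation(E_earth, z_gen, beta0):
    """E_g(z_gen) for a proton observed with energy E_earth at z = 0."""
    def dE_dz(z, E):
        adiabatic = E[0] / (1 + z)                                  # expansion term
        interactions = (1 + z)**2 * beta0((1 + z) * E[0]) * E[0] / H(z)
        return [adiabatic + interactions]
    sol = solve_ivp(dE_dz, (0.0, z_gen), [E_earth], rtol=1e-6)
    return sol.y[0, -1]
```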
In order to connect the energy intervals, the following energy loss equation has to be solved: \[\frac{dE_{\rm g}}{dt}=-E_{\rm g}\beta(E_{\rm g},z(t)), \tag{21}\] where the subscript \(g\) refers to the energy of the particle at the generation. The \(\beta\)-function is given by the contribution from interactions and expansion of the Universe \[\beta(E_{\rm g},z(t)) \to \beta(E_{\rm g},z(t))+H(z(t)), \tag{22}\] where \(H(z(t))\) is the Hubble parameter at the time \(t\). Given the initial condition \(E(t=t_{0})=E\), where \(t=t_{0}\) is the present time, the solution of Eq. 21 for a generic time \(t<t_{0}\) is \[E_{\rm g}(t)=E+\int_{t}^{t_{0}}dt^{\prime}\,E_{\rm g}(t^{\prime})H(z(t^{\prime }))+\int_{t}^{t_{0}}dt^{\prime}\,E_{\rm g}(t^{\prime})\beta(E_{\rm g}(t^{ \prime}),z(t^{\prime})). \tag{23}\] This solution can be written using the redshift parameter such that \(z(t=t_{0})=0\). The energy loss equation for protons can be written as a function of the redshift making use of the transformation (see App. B for more details): \[\frac{dt}{dz}=-\frac{1}{H_{0}(1+z)\sqrt{(1+z)^{3}\Omega_{m}+\Omega_{\Lambda}}} \tag{24}\] as: \[E_{\rm g}(z)=E+\int_{0}^{z}dz^{\prime}\,\frac{E_{\rm g}(z^{\prime})}{1+z^{ \prime}}+\int_{0}^{z}dz^{\prime}\,\frac{E_{\rm g}(z^{\prime})\beta(E_{\rm g}(z ^{\prime}),z^{\prime})}{H(z^{\prime})(1+z^{\prime})}. \tag{25}\] The relations C.14 and C.17 can be used to show that \[\beta(E,z)=(1+z)^{3}\beta_{0}((1+z)E)=\frac{(1+z)^{2}}{E}b_{0}((1+z)E), \tag{26}\] and therefore \[E_{\rm g}(z)=E+\int_{0}^{z}dz^{\prime}\,\frac{E_{\rm g}(z^{\prime})}{1+z^{\prime} }+\int_{0}^{z}dz^{\prime}\,\frac{1+z^{\prime}}{H(z^{\prime})}b_{0}((1+z^{\prime })E_{\rm g}(z^{\prime})). \tag{27}\] If the previous equation is differentiated with respect to \(E\), the expansion factor of the energy interval can be computed as \[\begin{split} y(z)&=1+\int_{0}^{z}dz^{\prime}\, \frac{y(z^{\prime})}{1+z^{\prime}}+\int_{0}^{z}dz^{\prime}\,\frac{1+z^{\prime }}{H(z^{\prime})}\frac{db_{0}((1+z^{\prime})E_{\rm g}(z^{\prime}))}{dE}\\ &=1+\int_{0}^{z}dz^{\prime}\,\frac{y(z^{\prime})}{1+z^{\prime}}+ \int_{0}^{z}dz^{\prime}\,\frac{(1+z^{\prime})^{2}}{H(z^{\prime})}\frac{db_{0} ((1+z^{\prime})E_{\rm g}(z^{\prime}))}{d((1+z^{\prime})E_{\rm g}(z^{\prime})) }y(z^{\prime}),\end{split} \tag{28}\] whose corresponding differential equation is \[\frac{1}{y}\frac{dy}{dz}=\frac{1}{1+z}+\frac{(1+z)^{2}}{H(z)}\frac{db((1+z)E_{ \rm g}(z))}{d((1+z)E_{\rm g}(z))}. \tag{29}\] The solution of Eq. 29 is the connection between the energy intervals: \[y(z)=(1+z)\exp\left[\frac{1}{H_{0}}\int_{0}^{z}dz^{\prime}\,\frac{(1+z^{\prime })^{2}}{\sqrt{(1+z^{\prime})^{3}\Omega_{m}+\Omega_{\Lambda}}}\,\frac{db_{0}((1 +z^{\prime})E_{\rm g}(z^{\prime}))}{d((1+z^{\prime})E_{\rm g}(z^{\prime}))} \right]\,, \tag{30}\] which therefore enters in the computation of the expected number of particles at Earth. A complete derivation of the reported calculation can be found in [25]. Fig. 9 shows the energy at the source as a function of the energy at Earth, corresponding to different redshifts (left) and the energy at the source as a function of the redshift, corresponding to different energies at Earth (right). The flux from a single source at cosmological distance \(z\) will be: \[J(E,z)=\frac{1}{(4\pi)^{2}}\frac{Q(E_{\rm g}(E,z),z)}{(1+z_{\rm g})\chi^{2}} \frac{dE_{\rm g}}{dE} \tag{31}\] where the connection between the energy intervals appears; \(\chi\) is the comoving radial coordinate (as defined in App. 
B) and \(Q\) is the generation rate per unit energy, that can be expressed as \[Q(E_{\rm g})=Q_{0}\left(\frac{E_{\rm g}}{E_{0}}\right)^{-\gamma}f_{\rm cut}(E _{\rm g})\,. \tag{32}\] being \(\gamma\) the spectral index and \(f_{\rm cut}(E_{\rm g})\) a function that describes the cut-off of the flux at the source, which might depend on the acceleration process and/or the interactions suffered by the particles in the source environment; the normalization factor \(Q_{0}\) will be explained later. The calculation can be extended for considering a distribution of sources as: \[J(E)=\frac{1}{(4\pi)^{2}}\int dV\frac{\widetilde{Q}(E_{\rm g}(E,z),z)}{(1+z) \chi^{2}}\frac{dE_{\rm g}}{dE}; \tag{33}\] here the generation rate per comoving volume is used, being defined as \(\widetilde{Q}=n_{0}Q\) where \(n_{0}\) is the number of sources per unit volume. The previous expression can be written as an integral over the redshift as: \[J(E)=\frac{c}{4\pi}\int dz\left|\frac{dt}{dz}\right|\widetilde{Q}(E_{\rm g}(E, z),z)\frac{dE_{\rm g}}{dE} \tag{34}\] where the following transformation has been used: \[\frac{1}{4\pi}\frac{dV}{dz}=(1+z)^{3}cd_{\rm A}^{2}\left|\frac{dt}{dz}\right|, \tag{35}\] being \(d_{\rm A}\) the angular-diameter distance, as defined in App. B. Therefore the expected flux of UHECR protons at Earth can be computed as in Eq. 34, corresponding to an expanding Universe homogeneously filled by sources of accelerated primary UHE protons with some choice for the spectrum at the source reported in Eq. 32. An example is reported in Fig. 10, where the contribution of single sources at different distances and the cumulative spectrum (multiplied by \(E^{3}\)) are depicted, corresponding to \(\gamma=2.6\) and \(E_{\rm cut}=10^{21}\) eV, being defined as \(f_{\rm cut}(E_{\rm g})=\exp(-E_{\rm g}/E_{\rm cut,g})\). The closest sources show no deviations from the initial spectrum, while corresponding to increasing Figure 9: Left: the energy at generation as a function of the energy at Earth, for different values of the redshift at generation [17]. Right: the energy at generation as a function of the redshift, for different values of the energy at Earth [17]. distances a bump is visible, as expected due to the rapid pile-up of the protons below the photo-pion threshold [38]. The abrupt suppression of the individual spectra, which is also reflected in the diffuse spectrum at the highest energies, is the effect of the energy losses due to photo-pion processes, as predicted in [22, 23], commonly known as "GZK" suppression from the initials of the authors. The bump is then expected to be smoother in the diffuse flux, because individual peaks are located at different energies. Below the bump, a dip is visible at larger distances, as expected due to the pair production energy losses [39]. In the diffuse flux the protons in the dip should be collected from a large volume, thus one could expect this feature to be less dependent on the distribution of sources. The measured change of the slope at \(5\times 10^{18}\) eV, the ankle, could therefore, in the context of pure proton composition of UHECRs, be interpreted as a signature of the propagation of the protons through the CMB. Attributing the suppression of the spectrum to the GZK effect is however not entirely justified. In fact, at the highest energies the visible Universe in terms of cosmic rays is strongly dependent on the local distribution of sources. As an example, in Fig. 
11 (left panel) the change in the suppression at the highest energies as due to the redshift of the closest source is shown: the farther is the closest source, the lower is the energy of the suppression. A similar effect can be obtained if the maximum energy at the acceleration is varied (see Fig. 11, right panel), indicating that the shape of the suppression is degenerate in terms of these variations, which would contribute to the depletion of the flux as well as the "pure" GZK effect. Figure 10: Expected flux of cosmic-ray protons at Earth, multiplied by \(E^{3}\), corresponding to protons injected with a power law with \(\gamma=2.6\) and maximum acceleration energy at the source \(E_{\rm cut,g}=10^{21}\) eV (indicated as \(E_{\rm max}\) in the top of the figure), from a uniform distribution of identical sources (upper line). Expected fluxes from single sources are also shown, with redshift respectively (with decreasing energy cutoff): 0.005 (\(\sim\)20 Mpc), 0.01 (\(\sim\) 40 Mpc), 0.03 (\(\sim\)125 Mpc), 0.1 (\(\sim\)420 Mpc), 0.2 (\(\sim\)820 Mpc), 0.3 (\(\sim\)1200 Mpc), 0.5 (\(\sim\)1890 Mpc), 0.7 (\(\sim\)2500 Mpc), 0.9 (\(\sim\)3000 Mpc). From [32]. ### The case of UHECR nuclei In the following, the chain of equations describing the nuclear cascade in the extragalactic space is discussed, using the Lorentz factor instead of the energy, as done in [30, 35]. The primary nucleus loses energy due to interactions with background photons and can photo-disintegrate; the generated secondary nucleus \(A\) is then considered as produced again homogeneously in the space with a rate \(Q_{A}(\Gamma,z)\) depending on the solution of the transport equation for \(A_{0}\) corresponding to the current \((\Gamma,z)\). To simplify the treatment, nuclei are assumed to suffer only photo-disintegrations with the emission of one nucleon. Under this simple hypothesis, the injection rate of the secondary nuclei is: \[Q_{A}(\Gamma,z)=\frac{n_{A_{0}}(\Gamma,z)}{\tau_{A_{0}}(\Gamma,z)} \tag{36}\] where \(n_{A_{0}}(\Gamma,z)\) is the equilibrium distribution of the parent nucleus and \(\tau_{A_{0}}(\Gamma,z)\) is the photo-disintegration life-time of \(A_{0}\). The resulting equation chain is then: \[\frac{\partial n_{A_{0}}(\Gamma,t)}{\partial t}-\frac{\partial}{ \partial\Gamma}[n_{A_{0}}(\Gamma,t)b_{A_{0}}(\Gamma,t)]+\frac{n_{A_{0}}( \Gamma,t)}{\tau_{A_{0}}(\Gamma,t)} = Q_{A_{0}}(\Gamma,t) \tag{37}\] \[\frac{\partial n_{A_{0}-1}(\Gamma,t)}{\partial t}-\frac{\partial }{\partial\Gamma}[n_{A_{0}-1}(\Gamma,t)b_{A_{0}-1}(\Gamma,t)]+\frac{n_{A_{0} -1}(\Gamma,t)}{\tau_{A_{0}-1}(\Gamma,t)} = \frac{n_{A_{0}}(\Gamma,t)}{\tau_{A_{0}}(\Gamma,t)}\] \[\vdots\] \[\frac{\partial n_{A}(\Gamma,t)}{\partial t}-\frac{\partial}{ \partial\Gamma}[n_{A}(\Gamma,t)b_{A}(\Gamma,t)]+\frac{n_{A}(\Gamma,t)}{\tau( \Gamma,t)} = \frac{n_{A+1}(\Gamma,t)}{\tau_{A+1}(\Gamma,t)}\ ;\] Figure 11: Expected flux of cosmic-ray protons at Earth, multiplied by \(E^{3}\), corresponding to protons injected with a power law with \(\gamma=2.6\) and maximum acceleration energy at the source \(E_{\rm cut,g}=10^{21}\) eV (indicated as \(E_{\rm max}\) in the top of the figure), from a uniform distribution of identical sources. Left: \(E_{\rm cut,g}=10^{21}\) eV at injection, uniform distribution of sources starting from different \(z_{\rm min}\). Right: \(E_{\rm cut,g}\) (indicated as \(E_{\rm max}\) in the top of the figure) variable, uniform distribution of sources starting from \(z_{\rm min}=0\). From [32]. here the notation of Eq. 
16 has been used, with \(\Gamma\) instead of \(E\), as motivated by the conservation of the Lorentz factor in the photo-disintegration processes. Here the solution for the secondary nuclei is derived, from which the solutions for primary nuclei and secondary nucleons can be easily obtained. The characteristic equation for the transport reads \[\frac{d\Gamma(t)}{dt}=-b_{A}(\Gamma,t). \tag{39}\] With \(\Gamma(t)\) taken on the characteristics, the term \(b_{A}(\Gamma,t)\frac{\partial n_{A}(\Gamma,t)}{\partial\Gamma}\) disappears and Eq. 38 takes the form \[\frac{\partial n_{A}(\Gamma,t)}{\partial t}+n_{A}(\Gamma,t)\left[-\frac{ \partial b^{A}_{pair}(\Gamma,t)}{\partial\Gamma}-\frac{\partial b^{A}_{ad}( \Gamma,t)}{\partial\Gamma}+\tau_{A}^{-1}(\Gamma,t)\right]=Q_{A}(\Gamma,t); \tag{40}\] with this choice the time \(t\) becomes the only variable. The resulting solution is then \[n_{A}(\Gamma,t)=\int_{t_{g}}^{t}dt^{{}^{\prime}}Q_{A}(\Gamma,t^{{}^{\prime}}) \exp\left[-\int_{t^{\prime}}^{t}dt^{{}^{\prime\prime}}\left(-P_{1}(\Gamma,t^ {{}^{\prime\prime}})-P_{2}(\Gamma,t^{{}^{\prime\prime}})+\tau_{A}^{-1}(\Gamma,t^{{}^{\prime\prime}})\right)\right] \tag{41}\] where the notation used is \[P_{1}(\Gamma,z) = \frac{\partial b^{A}_{ad}(z)}{\partial\Gamma}=H(z) \tag{42}\] \[P_{2}(\gamma,z) = \frac{\partial b^{A}_{pair}(\Gamma,z)}{\partial\Gamma}=\frac{Z^{ 2}}{A}(1+z)^{3}\left(\frac{\partial b^{p}_{0}(\Gamma^{{}^{\prime}})}{\partial \Gamma^{{}^{\prime}}}\right)_{\Gamma^{{}^{\prime}}=(1+z)\Gamma} \tag{43}\] that are written in terms of \(z\) instead of time \(t\), as can be done using the relation 24. The solution can then be written at \(z=0\) as \[n_{A}(\Gamma,z=0)=\int_{0}^{z_{\max}}dz^{{}^{\prime}}\frac{Q_{A} (\Gamma^{{}^{\prime}}(\Gamma,z^{{}^{\prime}}))}{(1+z^{{}^{\prime}})H(z^{{}^{ \prime}})}\times \tag{44}\] \[\exp\left[\int_{0}^{z^{{}^{\prime}}}dz^{{}^{\prime\prime}}\frac{ P_{1}(\Gamma,z^{{}^{\prime\prime}})}{(1+z^{{}^{\prime\prime}})H(z^{{}^{ \prime\prime}})}\right]\exp\left[\int_{0}^{z^{{}^{\prime}}}dz^{{}^{\prime \prime}}\frac{P_{2}(\Gamma,z^{{}^{\prime\prime}})}{(1+z^{{}^{\prime\prime}}) H(z^{{}^{\prime\prime}})}\right]\exp\left[-\int_{t^{{}^{\prime}}}^{t_{0}}\frac{dt^{{}^{ \prime\prime}}}{\tau_{A}(\Gamma,t^{{}^{\prime\prime}})}\right]. \tag{45}\] The last integration is kept over time to show that it represents the suppression factor for the survival time of the nucleus \(A\), that can be read also as \[\eta(\Gamma^{{}^{\prime}},z^{{}^{\prime}})=\int_{t^{{}^{\prime}}}^{t_{0}} \frac{dt^{{}^{\prime\prime}}}{\tau_{A}(\Gamma,t^{{}^{\prime\prime}})}=\int_{z }^{z^{{}^{\prime}}}dz^{{}^{\prime\prime}}\frac{\tau_{A}^{-1}(\Gamma^{{}^{\prime \prime}},z^{{}^{\prime\prime}})}{(1+z^{{}^{\prime\prime}})H(z^{{}^{\prime\prime }})}\ ; \tag{46}\] the product of the first two exponents of Eq. 45 gives the ratio of energy intervals calculated in the previous section. Finally, \[n_{A}(\Gamma,z)=\int_{z}^{\infty}dz^{{}^{\prime}}\frac{Q_{A}(\Gamma^{{}^{\prime} }(\Gamma,z^{{}^{\prime}}))}{(1+z^{{}^{\prime}})H(z^{{}^{\prime}})}\frac{d \Gamma^{{}^{\prime}}}{d\Gamma}e^{-\eta(\Gamma^{{}^{\prime}},z^{{}^{\prime}})}, \tag{47}\] gives the density of a species \(A\) at a given \((\Gamma,z)\); to compute the density at Earth, the lower bound of the integral must be set to \(z=0\). The density of primary nuclei can be computed taking into account that the injection term in Eq. 
47 is proportional to \(\Gamma^{-\gamma_{g}}\); instead in the case of secondary nucleons the transport equation is obviously devoid of the photo-disintegration term: \[\frac{\partial n_{n}(\Gamma,t)}{\partial t}-\frac{\partial}{\partial\Gamma}[n _{n}(\Gamma,t)b_{n}(\Gamma,t)]=\frac{n_{A+1}(\Gamma,t)}{\tau_{A+1}(\Gamma,t)}\, \tag{48}\] where the injection term is the same as in the case of secondary nuclei, because of the hypothesis of conservation of the Lorentz factor in the photo-disintegration process. The solution \(n_{n}(\Gamma,t)\) is then similar to Eq. 47, without the suppression factor for the survival time of the nucleus \(A\). A complete derivation of these quantities is given in [30]. The number of particles per volume and energy \(n\) for each species can be finally converted to the flux as done in Eq. 17. The solution of a set of coupled kinetic equations described in this section can be implemented semi-analytically, both for the case of the CMB and EBL. However, Monte Carlo approaches are also extensively used for the computation of the UHECR propagation, as for instance done in [16, 40]. Typically this approach consists in the evaluation of the probability of a particle to survive an interaction; for instance, in the case of a photo-disintegration, taking into account Eq. 4 for the interaction rate, the probability reads: \[P_{A}(\Gamma,z)=\exp\left(-\int\frac{1}{\tau_{A}(\Gamma(z^{\prime}),z^{\prime} )}\left|\frac{dt}{dz^{\prime}}\right|dz^{\prime}\right)\,; \tag{49}\] this is usually evaluated in steps in redshift, and the energy losses are computed separately. With the Monte Carlo method it is therefore also possible to compute the interaction point of the particle, that is in turn the generation point of the secondary particle (which is also fundamental in order to compute the fluxes of secondary particles such as neutrinos or gamma rays). It is interesting to note here that, similarly to the pion production for protons, also the photo-disintegration entails the disappearance of nuclei of a certain species, because of the creation of lighter fragments, and the excitation of the GDR for the interactions with CMB photons happens at similar energies as for the threshold of the pion production of protons on CMB. In addition, the energy loss lengths for the photo-disintegration processes are of similar order of magnitude as the one for the pion production from protons. For these reasons, the visible Universe in terms of cosmic rays at the highest energies is similar for protons and heavy nuclei (as one can see in Fig. 12, where simulations of _SimProp_ 2.4 [16] are used to show the energy at Earth as a function of the distance of the source that produced a cosmic-ray particle of the species indicated in the legend). This implies that the interpretation of the suppression of the spectrum, experimentally observed at the highest energies, is also degenerate in terms of the chemical composition of the cosmic rays, in addition to the other possible motivations due to the pion production effect (if protons), the distribution of the sources and the maximum energy at the acceleration, as shown in Sec. **3**'1. Understanding the origin of the suppression of the spectrum, as well as of its other features, requires considering other CR observables such as the ones connected to the chemical composition, as proposed for instance in [12], and constitutes one of the main open issues in cosmic-ray astrophysics. 
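As a rough illustration of the Monte Carlo step corresponding to Eq. 49, the sketch below draws the redshift of the next photo-disintegration of a nucleus injected at \(z_{\rm src}\); the interaction time \(\tau_{A}(\Gamma,z)\) and the evolution of the Lorentz factor along the path are placeholder arguments, to be provided by the interaction-rate and energy-loss computations described above.

```python
# Rough illustration of the Monte Carlo step of Eq. 49: draw the redshift at
# which a nucleus injected at z_src undergoes its next photo-disintegration.
# tau_A(Gamma, z) [s] and gamma_of_z(z) are placeholder arguments.
import numpy as np

H0 = 2.2e-18
OMEGA_M, OMEGA_L = 0.3, 0.7

def dt_dz(z):
    """|dt/dz| in seconds per unit redshift (Eq. 24)."""
    return 1.0 / ((1 + z) * H0 * np.sqrt((1 + z)**3 * OMEGA_M + OMEGA_L))

def next_interaction_redshift(z_src, gamma_of_z, tau_A, dz=1e-4, rng=None):
    """Redshift of the next interaction, or None if the nucleus reaches z = 0."""
    rng = rng or np.random.default_rng()
    u = rng.random()                   # uniform deviate compared to P_A
    log_P = 0.0                        # log of the survival probability
    z = z_src
    while z > 0:
        step = min(dz, z)
        log_P -= dt_dz(z) * step / tau_A(gamma_of_z(z), z)
        if np.exp(log_P) < u:
            return z                   # the interaction takes place here
        z -= step
    return None                        # survived the whole path to Earth
```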
In [12], a uniform distribution of identical sources emitting cosmic-ray nuclei with a power law of energy up to some maximum rigidity is assumed, following \[\widetilde{Q}_{A}(E)=\widetilde{Q}_{0A}\cdot\left(\frac{E}{E_{0}}\right)^{- \gamma}\cdot\begin{cases}1,&E\leq Z_{A}\cdot R_{\rm cut};\\ \exp\left(1-\frac{E}{Z_{A}\cdot R_{\rm cut}}\right),&E>Z_{A}\cdot R_{\rm cut },\end{cases} \tag{50}\] and the propagation in the extragalactic space is taken into account in order to produce the expected energy spectrum and mass composition at Earth. In Eq. 50, \(\widetilde{Q}_{0A}\) accounts for the percentage of a certain nuclear species at the source (evaluated at a fixed energy \(E_{0}\)). In Ref. [12], the expected fluxes are then fitted to the data of the Pierre Auger Figure 12: Energy at Earth as a function of the distance at which the cosmic ray is created, for different nuclear species, computed from a _SimProp_ 2.4 [16] simulation. From [41]. Observatory, so that the corresponding spectral details and the nuclear species at the source are determined. The best-fit results for energies above the ankle predict an intermediate mass composition (dominated by the carbon-nitrogen-oxygen group), and a hard injection spectrum (\(\sim E^{-1}\)) and low rigidity (\(\sim 10^{18.7}\) V, corresponding to a maximum energy of \(\sim 10^{18.7}\) eV for protons and \(\sim 10^{20.15}\) eV for silicon nuclei, the heaviest nuclear species found in this study) at the escape from the sources. The results are reported in the upper right panel of Fig. 13 for the energy spectrum (all-particle and mass groups at Earth) and in the lower panels of Fig. 13 for the mass composition observables (the mean value - lower left panel - and the width - lower right panel - of the distributions of the position in the atmosphere at which the shower reaches its maximum number of particles(2)). These results can be compared to the case reported in the upper left panel of Fig. 13, where the spectrum of pure protons with \(E^{-2.4}\) up to a maximum rigidity of \(\sim 10^{19.7}\) V (with the same shape of the cutoff function at the source as in Eq. 50) is reported (for the proton case, a cosmological evolution of the sources as \(m=3.5\) has been used, while for the mixed composition \(m=0\); the cosmological evolution is included in the injection spectrum in Eq. 47 as \((1+z)^{m}\)). In both the pure-proton and mixed-composition case, the suppression at the highest energies cannot be due to the sole effect of the extragalactic propagation; in fact, as suggested by [15], the low value of the maximum rigidity found at the sources could be also responsible of the measured suppression. Regarding the ankle feature, while in the case of the mixed-composition it cannot be reproduced with a single population of sources, and an additional component at lower energies should be taken into account [15], the case of pure protons seems to reproduce the spectrum features of the suppression and of the ankle with the chosen parameters, but cannot account for the behavior of the mass composition observables and is reported here only for illustration purposes. The study of CR interactions in the extragalactic space, as treated in these lectures, is fundamental to contribute to the understanding of the characteristics of the main observables of cosmic rays. 
In addition, the same interactions here discussed could be Figure 13: Upper panels: Simulated energy spectrum of UHECRs (multiplied by \(E^{3}\) at the top of the Earth atmosphere, obtained with a pure-proton composition with \(E^{-2.4}\), \(R_{\rm max}=19.7\) V and \(m=3.5\) at the sources (left panel) and with a mixed mass composition (dominated by CNO) with \(E^{-1}\), \(R_{\rm max}=18.7\) V and \(m=0\) (the parameters are expressed at the sources, following Eq. 50). Partial spectra in the left panel are grouped as: \(A=1\) (red), \(2\leq A\leq 4\) (grey), \(5\leq A\leq 22\) (green), \(23\leq A\leq 28\) (cyan), while the total (all-particle) spectrum is shown in brown. Lower panels: Average and standard deviation of the \(X_{\rm max}\) distribution as expected from the astrophysical scenario corresponding to the propagated flux reported in the upper right plot (brown line) together with pure hydrogen (red), helium (grey), nitrogen (green) and iron (blue) lines corresponding to the predictions of the hadronic interaction model EPOS-LHC [45]. The data points are from [46]. The scenario reported in the upper right and lower panels is re-adapted from the best fit found in [12]. The simulations used in these plots are obtained with _SimProp_ 2.4 [16]. taken into account in the modeling of the source environment; together with the diffuse processes in the source site, these interactions could be responsible of the reprocessing of the cosmic rays after the acceleration process and influence the shape of the spectra at the escape in the region of the ankle [47]. Models that take into account both interactions in astrophysical candidate sources of cosmic rays and in extragalactic space could therefore enhance our capability of explaining the features of the UHECR energy spectrum at Earth, taking into account the UHECR mass composition. ## 4 Cosmogenic neutrinos and photons The interactions suffered by cosmic ray particles during their passage through CMB and EBL generate secondary particles that in turn decay. The photo-pion processes involve the excitation of the Delta resonance, whose de-excitation can produce charged or neutral pions, that in turn decay and produce neutrinos or gamma-rays, called _cosmogenic_. Let us first discuss the case of the production of charged pions as: \[p+\gamma_{\rm bkg}\rightarrow\Delta^{+}\rightarrow \,\pi^{+}+n \tag{53}\] \[\pi^{+}\rightarrow\mu^{+}+\nu_{\mu}\] \[\mu^{+}\to e^{+}+\nu_{\rm e}+\bar{\nu}_{\mu}\] Therefore three neutrinos (with flavor composition of \(\nu_{\rm e}:\nu_{\mu}:\nu_{\tau}=1:2:0\)) for each pion are produced. From considerations about the inelasticity, as reported in Sec. 2, the pions carry about 20% of the energy of the primary proton. The expected flux of cosmogenic neutrinos at Earth depends on the characteristics of the spectrum of protons emitted from the sources, as described in [50]. From Eq. 53 one can see that the expected fluxes of electron/muon neutrinos and muon anti-neutrinos (reported in Fig. 14, right panel) are expected to be of equal intensity (they are produced for each \(p\gamma\) interaction), and peaked at the same energy (they carry on average 5% of the energy of the initial proton). A contribution of electron anti-neutrinos can arise from the decay of neutrons, and its flux is expected to be peaked at lower energies. Anti-electron neutrinos which can be produced from the decay chain of negative pions (possibly produced in multi-pion productions) can also contribute to the high-energy neutrino peak (see Fig. 
14, left panel). The expected neutrino flux is also connected to the photon fields with which the protons can interact. In order to trigger a photo-pion production off a CMB photon (average energy \(\epsilon\approx 7\cdot 10^{-4}\) eV), a more energetic proton with respect to the case of the EBL field is needed, being the energy of the photon in the nucleus rest frame \(\epsilon^{\prime}\approx\Gamma\epsilon\). As a consequence, the high energy peak of the neutrino flux is expected to be originated from interactions off CMB, while the low energy peak from interactions off EBL. Neutrinos can travel for long distances unimpeded; this is the reason why they can be accumulated for a large portion of the Universe, as can be seen in the different colors of the lines in Fig. 15 (right panel: here the case of pure-proton composition for the cosmic rays that generated the neutrinos is taken into account). Here the cases of no cosmological evolution of the distribution of UHECR sources is reported (red line) together with the case of Star Forming Rate (SFR) evolution (green line) [51] and the case of evolution of the high-luminosity Active Galactic Nuclei (AGN) as suggested in [52]. The effect of the cosmological evolution, to be considered in Eq. 34 or in Eq. 47 (depending on the mass of the CR) as a term like \((1+z)^{m}\) entering in the distribution of sources, is expected to be more important while increasing the redshift (if \(m>0\)). Due to the interactions of UHECR particles, the effect of the cosmological evolution of sources is less dominant than for neutrinos, as can be seen in the left panel of Fig. 15, where the propagated spectra of protons are different only below \(10^{18.2}\) eV. Combining the information from UHECRs and neutrinos can be, therefore, very relevant to the aim of constraining the cosmological distribution of the sources, which remains undetermined if only UHECRs are taken into account. Examples of combined studies involving both the UHECR spectrum (with pure proton composition) and the expected cosmogenic neutrinos can be found for instance in [53, 54]. The expected flux of cosmogenic neutrinos is strongly related to the characteristics of the flux of UHECRs at the escape from the sources, as well as from the details of the cosmological evolution of UHECR sources. The maximum energy of UHECRs determines the cutoff of the neutrino flux, while the shape of the neutrino flux is mainly dependent on the spectral index of the UHECR spectrum. The chemical composition of UHECRs is also affecting the expected neutrino flux; due to the fact that the photo-pion production is a process involving the nucleons in the nucleus, it is convenient to compute the value of the threshold Lorentz factor, which for the photo-pion production reads \(\Gamma_{\rm th}\approx 7\times 10^{10}\). Therefore the energy threshold for a photo-pion process for a generic nucleus is \(E_{\rm th}=A\Gamma_{\rm th}^{p}m_{p}c^{2}\) Figure 14: Left panel: fluxes of electron (blue solid line) and anti-electron (red dashed line) neutrinos generated in propagation of protons through CMB. Right panel: fluxes of muon (blue solid line) and anti-muon (red dashed line) neutrinos. The histograms are obtained from _SimProp_ simulations, while the lines are taken from [48]. Reproduced with permission from [49]. nuclei heavier than hydrogen would then require \(A\) times the energy of a proton in order to excite the resonance responsible for this process. 
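The following short sketch makes this scaling explicit, using the threshold Lorentz factor quoted above; the numbers are only indicative.

```python
# Quick arithmetic check of the scaling discussed above: the photo-pion threshold
# off the CMB scales with the mass number, E_th ~ A * Gamma_th^p * m_p c^2,
# with Gamma_th^p ~ 7e10 as quoted in the text.
m_p_c2 = 938.272e6      # proton rest energy [eV]
gamma_th = 7e10         # threshold Lorentz factor for photo-pion production off CMB

for name, A in [("H-1", 1), ("He-4", 4), ("N-14", 14), ("Fe-56", 56)]:
    print(f"{name:5s}: E_th ~ {A * gamma_th * m_p_c2:.1e} eV")
# H-1 gives ~7e19 eV while Fe-56 gives ~4e21 eV, i.e. a heavy composition pushes
# the neutrino-producing interactions to much higher total energies.
```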
For this reason, scenarios involving heavier nuclear composition of UHECRs predict a smaller neutrino flux [55, 56]. It is important to stress here that the fits of the UHECR energy spectrum and composition in terms of astrophysical scenarios usually involve only the energy range above the ankle; if the entire UHE range is taken into account, as done for instance in [15], a proton component, with a soft energy spectrum at the sources, would be required in order to reproduce the energy region below the ankle, together with a heavier contribution, probably coming from the final spectrum of the Galactic cosmic rays. This proton component would be connected to a more efficient neutrino production. In particular, if the light CR component below the ankle is connected to the interactions happening in the source environment, as for instance reported in [47], neutrinos produced in the same reactions in the sources are expected. These could be produced both in interactions of UHECRs with the photon fields in the source, which can be treated as the processes described for the extragalactic propagation where the extragalactic photon field is substituted by the photon field typical of the source, as well as by interactions of UHECRs with the matter in the source environment, as for instance treated in [59] for the case of the nuclei of starburst galaxies. Details of the production of neutrinos in these interactions can be found in the lecture notes by P. Serpico at this School [60]. Neutral pions can be produced in the de-excitation of \(\Delta^{+}\), giving origin to cosmogenic Figure 15: Left panel: cosmic-ray fluxes expected at Earth (in this case, the propagation through both CMB and EBL is computed) in scenarios with pure proton composition, with various models for the cosmological evolution of sources (solid red line: no evolution; dashed green line: SFR evolution [51]; dot-dashed blue line: AGN evolution [52]). For comparison also the experimental data from the Telescope Array [57] and the Pierre Auger Observatory [58] are shown (magenta and olive green dots, respectively). Right panel: fluxes of neutrinos in the same scenarios. Reproduced with permission from [49]. photons: \[p+\gamma_{\rm bkg}\rightarrow\Delta^{+}\rightarrow \pi^{0}+p \tag{54}\] \[\pi^{0}\rightarrow\gamma+\gamma\] Similarly to cosmic rays, cosmogenic photons can interact with photon backgrounds (interaction lengths for these processes are shown in Fig. 16), giving rise to electromagnetic cascades, where in turn high-energy photons can be absorbed due to pair production, and electrons undergo inverse Compton and produce synchrotron radiation, transferring energy to the range below \(10^{14}\) eV. It is interesting to notice that the energy densities in UHECRs, high-energy neutrinos and gamma-rays are similar (as reported in Fig. 17). This would suggest that the origin of these different messengers is strongly connected and support a multi-messenger view, which might strongly improve our understanding of the characteristics of UHECRs and other messengers. ## Appendix A Interaction rate The expression for the interaction length (Eq. 4) can be derived from fundamental quantities of the theory of interactions. We report here a procedure similar to what done in [26, 61]. The relativistic rate of interactions per unit volume can be defined as the Figure 16: Interaction length of gamma rays for pair production in several background fields. Courtesy of A. di Matteo. 
number of collisions per unit volume and time, between particles 1 and 2 with masses \(m_{1}\) and \(m_{2}\), with densities \(n_{1}\) and \(n_{2}\) in the reference frame \(K\), as \[\dot{n}=\frac{dN}{dVdt}\,. \tag{10}\] Due to relativistic length contraction, computing the densities in the frame \(K\) with respect to the one where the particles are at rest (indicated with the superscript "0") will give \[n_{i}=\Gamma_{i}n_{i}^{0} \tag{11}\] with \(i=1,2\); the Lorentz factor \(\Gamma_{i}\) is in this simplified case considered to be the same for all particles of the same type. If we apply then the transformation to the frame where particles of species 2 are at rest, we find \[\dot{n}=c\beta_{r}\sigma n_{2}^{0}n_{1}^{\prime} \tag{12}\] where \(\sigma\) is the cross section of the process and \(c\beta_{r}\) is the relative speed of particles of type 1 in the rest frame of particles of type 2, and can be expressed as: \[\beta_{r}=\left(\frac{(p_{1}^{\mu}p_{2}^{\mu})^{2}-m_{1}^{2}m_{2}^{2}}{(p_{1} ^{\mu}p_{2}^{\mu})^{2}}\right)^{1/2}\,; \tag{13}\] Figure 17: Cosmogenic photon (blue) and neutrino (orange) fluxes for UHECR models that fit the Pierre Auger Observatory data including spectrum and composition. Reproduced with permission from [13]. \(\beta_{r}=1\) in case at least one of the particles is a photon. The cross section is a function of the relative Lorentz factor of a particle in the rest frame of the other one, which reads: \[\Gamma_{r}=\frac{p_{1}^{\mu}p_{2}^{\mu}}{m_{1}^{2}m_{2}^{2}}=\Gamma_{1}\Gamma_{2 }(1-\vec{\beta}_{1}\cdot\vec{\beta}_{2})\,.\] (A.5) Let us consider the density of particles of species 1 in the rest frame of particles of species 2, \(n_{1}^{\prime}=\Gamma_{r}n_{1}^{0}=\Gamma_{r}n_{1}/\Gamma_{1}\), and \(n_{2}^{0}=n_{2}/\Gamma_{2}\), that therefore implies: \[\dot{n}=c\beta_{r}\sigma(\Gamma_{r})(1-\vec{\beta}_{1}\cdot\vec{\beta}_{2})n_ {1}n_{2}\] (A.6) which is valid if the particles are monoenergetic; in a realistic case, in which particles have a distribution of energy and velocity (also in space) the reaction rate can be written in a more general way as: \[\dot{n}=\frac{c}{1+\delta}\int\int\beta_{r}\sigma(\Gamma_{r})(1-\vec{\beta}_{ 1}\cdot\vec{\beta}_{2})dn_{1}dn_{2}\,,\] (A.7) with \(\delta=0(1)\) for interactions of different types of particles (self interacting particle distributions). It is important to notice that the reaction rate in Eq. A.7 has been derived in the proper frame of particles of species 2, but due to the fact that \(\dot{n}\) is an invariant quantity, it also gives the reaction rate in the frame \(K(^{3})\). Let us now consider the case of a particle traversing a photon field; in the case where one of the particles is a photon we have \(\delta=0\), \(\beta_{r}=\beta_{2}=1\), and the quantities expressed in \(\Gamma_{r}\) are then expressed in terms of \(\epsilon_{r}=\Gamma_{1}\epsilon(1-\beta_{1}\cos\theta_{12})\). In addition, it is also useful to consider now the differential spectral density which reads: \[n(\vec{p})=\frac{dn}{d^{3}\vec{p}}=\frac{dn}{p^{2}dpd\Omega}\,,\] (A.8) \(dn=n(\vec{p})p^{2}dpd\Omega\) and \(d\Omega=d\cos\theta\,d\phi\). Then the generic interaction rate is: \[\dot{n}=\frac{c}{1+\delta}\int d\Omega_{1}\int dp_{1}\,p_{1}^{2}\,n_{1}(\vec{p }_{1})\int d\Omega_{2}\int dp_{2}\,p_{2}^{2}\,n_{2}(\vec{p}_{2})\,\beta_{r}\, \sigma(\Gamma_{r})\,(1-\beta_{1}\beta_{2}\cos\psi)\,,\] (A.9) where \(\psi\) is the angle between the directions of the interacting particles or photons. 
For our case study, we want to consider the rate of interactions of particles in a photon field. For a single particle we have \(n_{1}(\vec{p}_{1})=n_{1}\delta(p-p_{1})\delta(\cos\theta_{1}-1)\delta(\phi_{1 })/(4\pi p_{1}^{2})\) and therefore we can write: \[\dot{N}(p_{1})=\frac{\dot{n}}{n_{1}}=c\int_{0}^{2\pi}d\phi\int_{-1}^{1}d\cos \theta(1-\beta_{1}\cos\theta)\int_{0}^{\infty}d\epsilon\,n_{\rm ph}(\epsilon, \Omega)\sigma(\epsilon_{r})\] (A.10) where \(n_{\rm ph}=dN/dVd\epsilon d\Omega\) is the photon distribution function and \(\epsilon_{r}\) is the invariant energy of the event, that coincides with the photon energy in the frame where the particle is at rest. Taking into account the Lorentz transformation in the reference frame of the particle 1, the photon energy \(\epsilon^{\prime}=\epsilon_{r}\) is then given by \[\epsilon^{\prime}=\Gamma_{1}\epsilon(1-\beta_{1}\cos\theta), \tag{11}\] and the corresponding differential is \[d\epsilon^{\prime}=-\Gamma_{1}\epsilon\beta_{1}\,d\cos\theta, \tag{12}\] where \(\Gamma_{a}\) is the Lorentz factor of the particle 1. Performing the integration over the azimuth angle, we obtain the expression of the interaction rate as \[\dot{N}(p_{1})=\frac{dN_{\rm int}}{dt}=\frac{c}{2\Gamma_{1}^{2}}\int_{ \epsilon_{\rm th}^{\prime}}^{\infty}\sigma(\epsilon^{\prime})\epsilon^{ \prime}\int_{\epsilon^{\prime}/2\Gamma_{1}}^{\infty}\frac{n_{\rm ph}(\epsilon) }{\epsilon^{2}}\,d\epsilon\,d\epsilon^{\prime}. \tag{13}\] where \(\epsilon_{\rm th}^{\prime}\) is the energy threshold for the reaction in the reference frame of the particle. ## Appendix B ### Cosmology Being the CR sources located at cosmological distances, the Robertson-Walker metric describing a homogeneous and isotropic expanding Universe has to be used [61, 62], and can be written as a function of the comoving radial coordinate \(\chi=r/R(t)\), being \(r\) the space coordinate and \(R(t)\) the dimensionless scale factor: \[ds^{2}=-c^{2}dt^{2}+R^{2}(t)\left(\frac{d\chi^{2}}{1-k\chi^{2}}+\chi^{2}d \Omega^{2}\right)\,. \tag{14}\] Objects with fixed \(\chi\) in the universe are separated by a distance that is determined by the variation of the scale factor \(R(t)\). The \(k\) parameter can be chosen to be \(>0\) (\(\Omega>1\)), \(<0\) (\(\Omega<1\)) or \(=0\) (\(\Omega=1\)) for spaces of constant positive, negative or zero spatial curvature, respectively. In the following, we will always refer to \(\Omega=\Omega_{m}+\Omega_{\Lambda}\), then \(k=0\) (\(\Omega\) is here representing the density). From the definition of the metric, one can compute the time required for the propagation of the photon from its emission (starred frame) to the detection in an expanding space-time as: \[c\int_{t_{*}}^{t}\frac{dt^{\prime}}{R(t^{\prime})}=-\int_{r}^{0}\frac{d\chi}{ \sqrt{1-k\chi^{2}}}\,; \tag{15}\] if we consider another event, detected at a later time, and we also take into account he fact that \(R(t)\) does not change if the events are separated by a small interval of time, we can write the previous integral as: \[\frac{\Delta t}{R(t)}=\frac{\Delta t_{*}}{R(t_{*})},\,\,\,\Delta t=\frac{R}{R _{*}}\Delta t_{*}=(1+z)\Delta t_{*}\,. 
\tag{16}\] Taking into account the definition of the redshift \(z\) from the relativistic Doppler effect, which reads \[z=\frac{\lambda-\lambda_{*}}{\lambda_{*}}=\frac{\nu_{*}}{\nu}-1\] (B.4) we can therefore write the scale factor (and the other relevant quantities) as a function of \(z\): \[\frac{R}{R_{*}}=1+z,\;\;\frac{\nu_{*}}{\nu}=\frac{\epsilon_{*}}{\epsilon}= \frac{\Delta t}{\Delta t_{*}}=1+z\,.\] (B.5) The Hubble relation, which can be written as \(\vec{v}(\vec{r})=H(t)\vec{r}\), can be also shown in terms of the scale factor as: \[H(t)=\frac{1}{\vec{r}}\frac{d\vec{r}}{dt}=\frac{1}{R(t)}\frac{dR(t)}{dt}\,,\] (B.6) where we have used the definition \(\vec{r}=R(t)\chi\hat{r}\). The Hubble constant at time \(t_{*}\) is \(H(t_{*})=\dot{R}(t_{*})/R(t_{*})\). The current estimate of the Hubble constant at the present epoch, together with the corresponding time and distance are: \[H_{0}\approx 69\frac{\rm km}{\rm s}\frac{1}{\rm Mpc},\;\;t_{\rm H}=\frac{1}{H_{ 0}}\approx 14.2\times 10^{9}\,\rm yr,\;\;D_{\rm H}=\frac{c}{H_{0}}\approx 4350 \,\rm Mpc.\] (B.7) while the estimates of the other cosmological parameters are: \[\Omega_{m}\approx 0.3,\;\;\Omega_{\Lambda}\approx 0.7\,.\] (B.8) Writing the scale factor as a function of the redshift \(R(z)=R_{0}/(1+z)\), therefore \(R^{-1}(dR/dz)=-1/(1+z)\) and \(H=\dot{R}/R=R^{-1}(dR/dz)(dz/dt)=-(dz/dt)/(1+z)\) and we obtain the conversion factor between the time and the redshift as: \[\frac{dt}{dz}=-\frac{1}{(1+z)H(z)}=-\frac{1}{H_{0}(1+z)\sqrt{(1+z)^{3}\Omega_ {m}+\Omega_{\Lambda}}}\,;\] (B.9) in the last expression we have used the redshift dependence of \(H(z)\) as can be derived by the Friedman equation and considering also the density contributions [62]. We report here the definition of distances can be useful in the computation of the CR fluxes at Earth [61]. Let us define the _proper distance_ as the one between two objects that would be measured at the same time \(t\); at the present time, this is given by the comoving coordinate (the distance between the two objects stays constant over time, if they only move with the Hubble flow and do not have peculiar motion), therefore we define the _comoving distance_ as: \[d_{c}(z)=\chi=ct=c\int_{0}^{\chi/c}dt=c\int_{0}^{z}\left|\frac{dt}{dz^{\prime} }\right|(1+z^{\prime})\,dz^{\prime}=\frac{c}{H_{0}}\int_{0}^{z}\frac{1}{\sqrt{ (1+z^{\prime})^{3}\Omega_{m}+\Omega_{\Lambda}}}\,dz^{\prime}\] (B.10) and the _light-travel distance_ as \[d_{T}(z)=c\int_{0}^{z}\left|\frac{dt}{dz^{\prime}}\right|dz^{\prime}=\frac{c}{H_ {0}}\int_{0}^{z}\frac{1}{1+z^{\prime}}\,\frac{1}{\sqrt{(1+z^{\prime})^{3} \Omega_{m}+\Omega_{\Lambda}}}dz^{\prime}\,.\] (B.11) Let us now consider the photon flux measured from a source with proper distance \(d_{c}\) and radiating energy per time as \({\cal L}=d{\cal E}/dt\); this is related to the energy flux from a source with isotropic luminosity \({\cal L}_{*}=d{\cal E}_{*}/dt_{*}\) at _luminosity distance_\(d_{L}\) as \[\Phi=\frac{d{\cal E}}{dA\,dt}=\frac{d{\cal E}/dt}{4\pi d_{c}^{2}}=\frac{d{ \cal E}_{*}/dt_{*}}{4\pi d_{L}^{2}}\,.\] (B.12) The fluence is defined as the integral of \(\Phi\) over a time interval \[{\cal F}=\int_{t_{1}}^{t_{2}}dt\Phi(t)\] (B.13) and is connected to the apparent isotropic energy as: \[{\cal E}=\frac{4\pi d_{L}^{2}{\cal F}}{1+z}\,.\] (B.14) Considering the expansion of the universe, \(d{\cal E}_{*}=\epsilon_{*}dN=\epsilon(1+z)dN=(1+z)d{\cal E}\) and \(dt_{*}=dt/(1+z)\). 
From this we have then \(d{\cal E}/dt=(1+z)^{-2}d{\cal E}_{*}/dt_{*}\), meaning that the photons of the source are redshifted by a factor \((1+z)\) and the time dilation increases the time interval between two photon emissions and that of their observations again by \((1+z)\). The luminosity distance can be linked to the comoving one as \[d_{L}(z)=(1+z)d_{c}=\frac{c}{H_{0}}(1+z)\int_{0}^{z}\frac{1}{\sqrt{(1+z^{ \prime})^{3}\Omega_{m}+\Omega_{\Lambda}}}\,dz^{\prime}\,,\] (B.15) for a flat universe. We can also consider an object at redshift \(z\) with transverse proper dimension \(D\), with measured angle \(\theta\), given in terms of \(D\) and the distance \(R_{*}\chi\) referred to the emission time \(t_{*}\). The angular diameter distance is then \(d_{A}=D/\theta=R_{*}\chi\). If a source at cosmological distance emits photons with isotropic luminosity \({\cal L}_{*}=d{\cal E}_{*}/dt_{*}\) and considering the definition of luminosity distance, we have \[\Phi=\frac{d{\cal E}}{dAdt}=\frac{{\cal L}_{*}}{4\pi d_{L}^{2}}=\frac{dA}{4 \pi d_{L}^{2}}\left|\frac{d{\cal E}_{*}}{d{\cal E}}\right|\left|\frac{dt_{*}} {dt}\right|\frac{d{\cal E}}{dAdt}=\frac{(1+z)^{2}dA}{4\pi d_{L}^{2}}\Phi\] (B.16) and therefore \(dA=R^{2}\chi^{2}d\Omega=4\pi d_{L}^{2}/(1+z)^{2}\). At the time of the emission we have \(dA_{*}=R_{*}^{2}\chi^{2}d\Omega_{*}\) so that \(dA_{*}/dA=1/(1+z)^{2}\), and therefore we can write the relation between the luminosity distance and the angular diameter distance as \[d_{L}=\left(\frac{R}{R_{*}}\right)^{2}R_{*}\chi=(1+z)^{2}d_{A}\,.\] (B.17) As a summary, we also report the relations with the other distances: \[d_{\rm c}(z)=(1+z)d_{A}(z)=\frac{d_{L}(z)}{1+z}\,;\] (B.18) the cosmological distances are also shown in Fig. 18. ## Appendix C Dependence of relevant quantities on the redshift In order to compute the interaction length of a process at redshift different from zero, how the target photon field appeared in the past has to be known. In Fig. 19 the evolution of the EBL intensity as a function of the redsfft is reported, as modeled in [20]. The treatment of the evolution of the CMB as a function of the redshift is instead reported in the following. If we assume that the photon field has been injected in the extragalactic space in the past, the cosmological evolution of its spectral energy density is given by \[n_{\rm ph}(\epsilon,z)=(1+z)^{2}n_{\rm ph}\left(\frac{\epsilon}{1+z}\right),\] (C.1) where \(n_{\rm ph}(\epsilon/(1+z))\) is the spectral energy density at the present time. The factor \((1+z)^{2}\) comes from the fact that the volume element evolves as \((1+z)^{3}\), while the energy is redshifted by a factor \((1+z)^{-1}\). Eq. C.1 is valid if there is no feedback from Figure 18: Cosmological distances as a function of the redshift, from [17]. astrophysical sources to the photon field (i.e. the evolution of the field is only driven by the expansion of the Universe). We derive here the evolution of two quantities: \[n_{\rm ph}=\int d\epsilon\,n_{\rm ph}(\epsilon)\,\ \ \ \ \rho_{\rm ph}=\int d \epsilon\,\epsilon\,n_{\rm ph}(\epsilon),\] (C.2) defined as the number density and the energy density of the photon field, respectively. 
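As a quick sanity check of these definitions, the sketch below integrates the blackbody spectrum of Eq. 11 numerically for the CMB at \(z=0\), recovering the standard values of about 411 photons cm\(^{-3}\) and about 0.26 eV cm\(^{-3}\).

```python
# Sanity check of the definitions above for the CMB at z = 0 (T = 2.725 K),
# integrating the blackbody spectrum of Eq. 11 numerically.
import numpy as np
from scipy.integrate import quad

HBARC = 1.973e-5                 # hbar * c [eV cm]
kT = 8.617e-5 * 2.725            # CMB temperature in energy units [eV]

def n_ph(eps):                   # Eq. 11, in 1 / (eV cm^3)
    return eps**2 / (np.pi**2 * HBARC**3 * np.expm1(eps / kT))

n_tot   = quad(lambda e: n_ph(e), 1e-7, 100 * kT)[0]          # photons / cm^3
rho_tot = quad(lambda e: e * n_ph(e), 1e-7, 100 * kT)[0]      # eV / cm^3
print(f"n_ph ~ {n_tot:.0f} cm^-3, rho_ph ~ {rho_tot:.2f} eV cm^-3")
# Expected output: roughly 411 cm^-3 and 0.26 eV cm^-3.
```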
The cosmological evolution of the number density \(n_{\rm ph}(z)\) is given by \[n_{\rm ph}(z)=\int d\epsilon\,n_{\rm ph}(\epsilon,z) =(1+z)^{2}\int d\epsilon\,n_{\rm ph}\left(\frac{\epsilon}{1+z}\right)\] (C.3) \[=(1+z)^{3}\int d\epsilon\,n_{\rm ph}(\epsilon)\] (C.4) \[=(1+z)^{3}n_{\rm ph}.\] (C.5) Similarly, one can show that \[\rho_{\rm ph}(z)=(1+z)^{4}\rho_{\rm ph}.\] (C.6) Starting from Eq. 4, we can define the interaction rate of an UHECR with Lorentz factor Figure 19: The spectral energy distribution of the EBL as modeled in [20] as a function of the photon energy, for different values of redshift, and used in [17]. \(\Gamma\) at redshift \(z\) as \[\tau^{-1}(\Gamma,z) =\frac{c}{2\Gamma^{2}}\int_{\epsilon^{\prime}_{\rm th}}^{\infty} \sigma(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{\prime}/2\Gamma}^{ \infty}\frac{n_{\rm ph}(\epsilon,z)}{\epsilon^{2}}\,d\epsilon\,d\epsilon^{\prime}\] (C.7) \[=\frac{c(1+z)^{2}}{2\Gamma^{2}}\int_{\epsilon^{\prime}_{\rm th}}^ {\infty}\sigma(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{\prime}/2 \Gamma}^{\infty}\frac{n_{\rm ph}(\epsilon/(1+z))}{\epsilon^{2}}\,d\epsilon\,d \epsilon^{\prime}.\] (C.8) We can change the integration variable \(\epsilon\) with \(\omega(1+z)\); then the interaction rate becomes \[\tau^{-1}(\Gamma,z) =\frac{c(1+z)}{2\Gamma^{2}}\int_{\epsilon^{\prime}_{\rm th}}^{ \infty}\sigma(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{\prime}/2(1+ z)\Gamma}^{\infty}\frac{n_{\rm ph}(\omega)}{\omega^{2}}\,d\omega\,d\epsilon^{\prime}\] (C.9) \[=\frac{c(1+z)^{3}}{2((1+z)\Gamma)^{2}}\int_{\epsilon^{\prime}_{ \rm th}}^{\infty}\sigma(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{ \prime}/2(1+z)\Gamma}^{\infty}\frac{n_{\rm ph}(\omega)}{\omega^{2}}\,d\omega\, d\epsilon^{\prime}\] (C.10) \[=(1+z)^{3}\tau^{-1}((1+z)\Gamma,z=0).\] (C.11) The interaction length is given by \(l=\tau c\) for UHECRs, then the interaction length at redshift \(z\) can be written as \[l(E,z)=\frac{l((1+z)E,z=0)}{(1+z)^{3}},\] (C.12) where we have used the particle energy instead of the particle Lorentz factor. Here we also derive the dependence on the redshift of other quantities used in the main text. As done in [25, 38], we define here \[-\frac{1}{E}\frac{dE}{dt}=\beta_{0}\left(E\right),\] (C.13) from which we can also define the quantity \[b_{0}(E)=-\frac{dE}{dt}=E\beta_{0}(E),\] (C.14) where the subscript \(0\) refers to the fact that these quantities have been defined at redshift \(z=0\). We can include the cosmological evolution of the background photon fields by replacing the photon spectral energy density with \(n(\epsilon,t)\). Thus the quantities in Eq. C.13 and C.14 become \[-\frac{1}{E}\frac{dE}{dt}=\beta\left(E,t\right)\] (C.15) and \[b\left(E,t\right)=E\beta\left(E,t\right).\] (C.16) Taking into account what has been derived in Eq. 5, then the relations for \(\beta\) and \(b\) read \[\beta(E,z)=(1+z)^{3}\beta_{0}((1+z)E), \tag{75}\] \[b(E,z)=(1+z)^{2}b_{0}((1+z)E). \tag{76}\] The author acknowledges A. di Matteo and C. Evoli for providing some of the figures shown in these notes. The author acknowledges also S. Rossoni and C. Trimarelli as co-authors of the lecture notes regarding "Ultra-High-Energy Cosmic Rays: Propagation and Detection" [63] for the lectures given at the first training school of COST Action CA18108 on "Quantum Gravity Phenomenology in the Multi-Messenger Approach" held in Corfu, Greece (September 27th to October 5th 2021), on which some parts of the present notes are based.
2309.16525
Intrinsic Emission of PSR B1937+21 at 327 MHz
At 327 MHz, the observed emission of PSR B1937+21 is greatly affected by scattering in the interstellar medium, on a timescale of order the pulse period. We use the bright impulsive giant pulses emitted by the pulsar to measure the impulse response of the interstellar medium and then recover the intrinsic emission of the pulsar by deconvolution -- revealing fine structure on timescales not normally observable. We find that the intrinsic widths of the main pulse and interpulse in the pulse profile are similar to those measured at higher frequencies. We detect 60,270 giant pulses which typically appear as narrow, ~100 ns bursts consisting of one to few nanoshots with widths $\lesssim \! 10$ ns. However, about 10% of the giant pulses exhibit multiple bursts which seem to be causally related to each other. We also report the first detection of giant micropulses in PSR B1937+21, primarily associated with the regular main pulse emission. These are distinct from giant pulses not only in the phases at which they occur, but also in their larger widths, of order a microsecond, and steeper energy distribution. These measurements place useful observational constraints on emission mechanisms for giant pulses as well as the regular radio emission of millisecond pulsars.
Nikhil Mahajan, Marten H. van Kerkwijk
2023-09-28T15:35:08Z
http://arxiv.org/abs/2309.16525v2
# Intrinsic Emission of PSR B1937+21 at 327 MHz

###### Abstract

At 327 MHz, the observed emission of PSR B1937+21 is greatly affected by scattering in the interstellar medium, on a timescale of order the pulse period. We use the bright impulsive giant pulses emitted by the pulsar to measure the impulse response of the interstellar medium and then recover the intrinsic emission of the pulsar by deconvolution - revealing fine structure on timescales not normally observable. We find that the intrinsic widths of the main pulse and interpulse in the pulse profile are similar to those measured at higher frequencies. We detect 60,270 giant pulses which typically appear as narrow, \(\sim 100\) ns bursts consisting of one to few nanoshots with widths \(\lesssim 10\) ns. However, about \(10\%\) of the giant pulses exhibit multiple bursts which seem to be causally related to each other. We also report the first detection of giant micropulses in PSR B1937+21, primarily associated with the regular main pulse emission. These are distinct from giant pulses not only in the phases at which they occur, but also in their larger widths, of order a microsecond, and steeper energy distribution. These measurements place useful observational constraints on emission mechanisms for giant pulses as well as the regular radio emission of millisecond pulsars.

Pulsars (1306), Radio bursts (1339), Deconvolution (1910)

## 1 Introduction

The nature of pulsar radio emission remains an open question (Melrose et al., 2021; Philippov & Kramer, 2022): we still lack sufficient understanding of pulsar magnetospheres as well as the physical mechanisms that generate both regular pulse emission and bright transients such as giant pulses. Since some of the proposed mechanisms have strong frequency dependence, it helps to gather observational constraints on the intrinsic emission over as large a range in radio frequency as possible. Radio signals from pulsars are distorted by the effects of propagation through the interstellar medium (ISM) such as dispersion, birefringence, and scintillation due to multi-path scattering (Rickett, 1990). As a result, the observed radio signal from a pulsar differs significantly from the true intrinsic emission. While dispersion and birefringence are easily mitigated by inverse filtering (Hankins & Rickett, 1975), there is no general technique for inverting the effects of multi-path scattering. Since the scattering timescale scales roughly as \(\nu^{-4}\), pulsar emission is significantly more scattered at low radio frequencies. Scattering-induced broadening can wash away finer structure in pulse profiles, especially in millisecond pulsars where the scattering timescales can be of order the pulse period. For instance, at frequencies below \(120\,\mathrm{MHz}\), the pulse profile of PSR B1937+21 (a \(1.55\,\mathrm{ms}\) pulsar) is almost entirely washed out by scattering, and the main pulse or interpulse components are no longer distinguishable (Kondratiev et al., 2016). One can describe the scattering seen in the observed signal \(y(t)\) from a pulsar as the result of a convolution of the intrinsic emission \(x(t)\) with the impulse response function of the ISM \(h(t)\), i.e., one observes \(y(t)=(h*x)(t)+\epsilon\) where \(*\) denotes convolution and \(\epsilon\) is a noise term. In principle, if either \(h\) or \(x\) are known, the other can be recovered by deconvolution.
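The forward model \(y(t)=(h*x)(t)+\epsilon\) is easy to state explicitly. The toy sketch below (not the paper's pipeline; all values are illustrative) builds a synthetic intrinsic voltage stream containing a few impulses, scatters it with a causal, exponentially decaying impulse response, and adds receiver noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intrinsic emission: a few narrow impulses in a complex voltage stream
n = 4096
x = np.zeros(n, dtype=complex)
x[[1000, 1003, 2500]] = [3.0, 2.0 + 1.0j, 4.0j]

# Toy impulse response of the ISM: a causal, exponentially decaying scattering tail
tail = np.exp(-np.arange(256) / 40.0)
h = tail * (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)

# Observed voltages: scattered signal plus receiver noise
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = np.convolve(x, h, mode="full")[:n] + noise
```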
Previous attempts to infer the impulse response from observed data fall into two categories: using the observed scintillation properties to conduct interstellar holography (Walker et al., 2008; Baker et al., 2022), and using the cyclic nature of a pulsar signal to conduct cyclic spectroscopy (Demorest, 2011; Walker et al., 2013). Interstellar holography techniques assume that the impulse response is very sparse in some basis, and are ineffective otherwise (Oslowski & Walker, 2022). Cyclic spectroscopy, on the other hand, assumes that the pulsar signal is a cyclostationary signal. Walker et al. (2013) use cyclic spectroscopy to infer the impulse response and the intrinsic pulse profile of PSR B1937+21 at 430 MHz. However, many pulsars including PSR B1937+21 exhibit phenomena such as giant pulses, nulling, or mode changing which violate the cyclostationarity requirement. Thus far, neither of these techniques has been used to coherently recover the actual intrinsic emission of the pulsar, i.e., in voltages. In our previous work, Mahajan and van Kerkwijk (2023, hereafter MK23), we used the fact that PSR B1937+21 emits bright impulsive giant pulses which can be used as independent, but noisy, measurements of the impulse response function (IRF). Using this technique, we were able to successfully model the time-varying IRF of the ISM along the line-of-sight. In this paper, we use this time-varying IRF to recover the intrinsic emission of PSR B1937+21 at \(327\,\mathrm{MHz}\). PSR B1937+21 is of particular interest, as it is a bright and fast millisecond pulsar, which is very well-studied across the electromagnetic spectrum. In radio, it has a very stable regular pulse emission (Jenet et al., 2001), but also, as previously mentioned, emits giant pulses (Cognard et al., 1996), highly-energetic narrow bursts with timescales as short as a few nanoseconds (Soglasnov et al., 2004). Giant pulses have also been observed in a handful of other pulsars, which seem to share the property of having a high magnetic field strength at the light cylinder, \(B_{\mathrm{LC}}\), such as the Crab Pulsar (Hankins et al., 2003), PSR B0540-69 (Johnston and Romani, 2003), and PSR B1957+20 (Knight et al., 2006). As we will show later, PSR B1937+21 also appears to emit bright, but less energetic, bursts called "giant micropulses". Giant micropulses were first observed in the Vela pulsar (Johnston et al., 2001) and PSR B1706-44 (Johnston and Romani, 2002). In the next section, we describe the observational and signal processing techniques used to recover the intrinsic emission of the pulsar. We discuss the recovered intrinsic pulse profile in Section 3. We describe our search for transient pulses in the intrinsic emission signal in Section 4, and the properties of the detected giant micropulses and giant pulses in Sections 5 and 6, respectively. Finally, we discuss our findings in Section 7 and our conclusions in Section 8.

## 2 Observations and Data Reduction

We observed PSR B1937+21 for \(6540\,\mathrm{s}\) on MJD 58245 and \(1590\,\mathrm{s}\) on MJD 58298 using the \(327\,\mathrm{MHz}\) Gregorian receiver on the Arecibo Telescope. As described in MK23, we recorded dual-polarization raw baseband (voltage) data using the Puerto Rico Ultimate Pulsar Processing Instrument (PUPPI) backend. For our analysis, we use 19 contiguous bands of width \(3.125\,\mathrm{MHz}\) each for a total bandwidth of \(59.375\,\mathrm{MHz}\) centered on \(327\,\mathrm{MHz}\) (from \(297.3125\) to \(356.6875\,\mathrm{MHz}\)).
We use the Baseband (van Kerkwijk et al., 2021) and Pulsarbat (Mahajan and Lin, 2023) software packages to read and process the raw baseband data. We pre-process the data by flattening the passband to correct for the effects of a polyphase filter bank which is used in the instrument backend. This also has the added effect of attenuating bright narrowband radio-frequency interference (RFI). At the beginning of each observation, a correlated signal (generated by a pulsed noise diode) is injected into both polarization cables, which we used to measure the relative phase between the two polarizations. This relative phase is largely dominated by a cable delay, characterized by a linear phase gradient as a function of frequency, as we also found from the pulsar signal itself in MK23. We correct for this time shift by shifting the signals relative to each other by the measured delay, and compensate for the remaining, non-time-shift relative phases (all below \(0.1\,\mathrm{rad}\)) by rotating the signals by a constant phase in each of the 19 bands. We then dedisperse the data coherently, using dispersion measures of \(71.0201\,\mathrm{pc\,cm^{-3}}\) on MJD \(58245\), and \(71.0169\,\mathrm{pc\,cm^{-3}}\) on MJD \(58298\) (taken from MK23). We also correct for Faraday rotation using a constant rotation measure (RM) of \(9.35\,\mathrm{rad\,m^{-2}}\) for both observations by applying the corresponding transfer function to the baseband signal, which corrects for the time delays between the left- and right-circularly polarized signals emitted by the pulsar. The RM was determined by fitting a Faraday rotation curve to the frequency-dependent polarization angle in the folded pulse profile, and is consistent with previous RM measurements (Yan et al., 2011; Dai et al., 2015; Wahl et al., 2022) given the uncertainty introduced by ionospheric contributions. In these pre-processed, dedispersed data, we conduct a giant pulse search to model an IRF - which, with our correction of Faraday rotation in the baseband signal, can now safely be assumed to be independent of polarization (Section 2.1). Next, we use the IRF to recover the intrinsic emission signal via a deconvolution technique (Section 2.2). Finally, we calibrate the polarization (Section 2.3) and normalize the data to estimate fluxes (Section 2.4). Note that all these steps are conducted on baseband data, and the output is thus also a calibrated baseband signal.

### Impulse Response Function

Using the processed baseband data, we follow the techniques described in MK23 to search for giant pulses and model the time-varying impulse response of the ISM, \(H(\nu,t)\). Since our goal in this work is to recover the intrinsic signal via deconvolution, we are more sensitive to noise. Hence, while in MK23, where the goal was to try to understand the structure of the wavefield, it made sense to include also fainter but more noisy contributions to \(H(\nu,t)\), here we should exclude those. So, in this work, we use a higher stopping criterion of \(\gamma=6\) (compared to \(\gamma=5\) used in MK23). Furthermore, we mask out regions in the \(\tau-\dot{\tau}\) space which are dominated by noise. The combined effect of these changes is that the modeled IRF is less noisy.
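The rotation-measure fit mentioned in the pre-processing above, in which the RM follows from the frequency dependence of the polarization angle, amounts to a least-squares fit of PA \(=\) PA\(_{0}\) + RM\(\,\lambda^{2}\). The snippet below is a simplified sketch under assumed inputs (it ignores the \(n\pi\) ambiguity of position angles, which a real fit must handle):

```python
import numpy as np

c = 299792458.0  # speed of light in m/s

def fit_rm(freqs_hz, pa_rad):
    """Least-squares fit of PA = PA0 + RM * lambda^2 (simplified sketch)."""
    lam2 = (c / np.asarray(freqs_hz)) ** 2
    A = np.vstack([lam2, np.ones_like(lam2)]).T
    rm, pa0 = np.linalg.lstsq(A, np.asarray(pa_rad), rcond=None)[0]
    return rm, pa0

# Synthetic check with RM = 9.35 rad/m^2 over the 297-357 MHz band
freqs = np.linspace(297.3125e6, 356.6875e6, 19)
pa = 0.3 + 9.35 * (c / freqs) ** 2 + 0.01 * np.random.default_rng(1).standard_normal(19)
print(fit_rm(freqs, pa))  # ~ (9.35, 0.3)
```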
Another small change we made was to the criteria for giant pulse detection: we slightly relaxed the maximum difference in arrival time between the various streams, to \(240\,\mathrm{ns}\) (compared to \(200\,\mathrm{ns}\) in MK23), as we noticed that we were unnecessarily ignoring pulses which, while having more temporal structure over the full band, are still usable as approximate impulses in individual frequency bands. With this relaxed constraint, we detect 14,414 giant pulses (compared to 13,025 in MK23). The wavefields are solved independently for each frequency band, so the modeled IRFs have arbitrary phases relative to each other, i.e. the dot product between an observed giant pulse and the modeled IRF has significantly different phases in each frequency band. The phase differences between adjacent bands, however, are largely consistent between giant pulses, as expected given that giant pulses should still be impulsive for neighbouring bands. In these phase differences, we see small variations over time, which can be modeled well by a slowly time-varying curve1. By correcting for these phase differences, we effectively get a wideband IRF. Footnote 1: The changes have the right amplitude and frequency dependence to be due to small changes in ionospheric DM. We did not explore fitting for this explicitly. ### Deconvolution With an IRF, we can now recover the intrinsic signal via deconvolution. Following MK23, we use a regularized inverse filter, \[G(\nu)=\frac{H^{*}(\nu)}{|H|^{2}+\mu} \tag{1}\] where \(H(\nu)\) is the impulse response in the frequency domain, and \(\mu>0\) is a regularization factor. If \(y(t)\) is the observed signal, then \(\hat{x}(t)=(g*y)(t)\) approximates the intrinsic signal, \(x(t)\). Here, \(g(\tau)\) is the time-domain representation of \(G(\nu)\). When the impulse response is perfectly known and \(\mu\) is the inverse of the signal-to-noise ratio, \(G(\nu)\) is the Wiener filter, and \(\hat{x}(t)\) is the best least-squares approximation of \(x(t)\). In our case, however, the modeled impulse response is noisy, and we do not know the signal-to-noise statistics of the intrinsic signal. Thus, we use a constant \(\mu\) determined by trial-and-error depending on the use case (see Sections 3 and 4). ### Polarization Calibration In this work, we use the PSR/IEEE convention for Stokes parameters (van Straten et al., 2010). We calibrate the polarization by modelling how the Stokes parameters change with parallactic angle due to feed rotation (Johnston, 2002; van Straten, 2004). As PSR B1937+21 has a high degree of linear polarization, and our observations span a large range in parallactic angle (\(156^{\circ}\) on MJD 58245 and \(80^{\circ}\) on MJD 58298), we can use the observed pulsar emission to self-calibrate. Specifically, we use that the recorded signal (or voltages) can be described as \(\mathbf{v_{\mathrm{out}}}=\mathbf{Jv_{\mathrm{in}}}\) where \(\mathbf{J}\) is a \(2\times 2\) complex-valued Jones matrix which describes how the input signal, \(\mathbf{v_{\mathrm{in}}}\), is transformed. The Jones matrix usually includes three components: the responses of the feed and receiver backend, and a rotation matrix for the rotation of the feed relative to the sky. Since we did not observe a reliable calibrator source with known polarization state, we have to make assumptions about the average polarization state of PSR B1937+21's emission in order to sufficiently constrain the feed response. 
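The descattering step of Section 2.2, built around the regularized inverse filter of Eq. (1), can be sketched in a few lines. The version below is a minimal frequency-domain illustration (it uses circular convolution via the FFT and assumes a pre-computed IRF), not the actual pipeline:

```python
import numpy as np

def descatter(y, h, mu=1.0):
    """Estimate the intrinsic voltage signal with the regularized inverse filter
    G = H* / (|H|^2 + mu) applied in the frequency domain.

    y : observed baseband voltages (1-D complex array)
    h : impulse response; here renormalized so that mean(|H|^2) = 1
    mu: regularization factor (e.g. 0.01 for profiles, 1 for burst searches)
    """
    n = len(y)
    H = np.fft.fft(h, n)
    H /= np.sqrt(np.mean(np.abs(H) ** 2))   # enforce unit mean |H|^2
    G = np.conj(H) / (np.abs(H) ** 2 + mu)
    return np.fft.ifft(np.fft.fft(y) * G)
```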
After correcting for the relative phases and gains using noise diode observations, we found that the polarization angle of the pulsar's emission is well-described by a Faraday rotation curve and the average circular polarization is close to zero. Other observations of the PSR B1937+21 which measure polarization also show a very small circular polarization fraction (Wang et al., 2023; Dai et al., 2015; Alam et al., 2021; Stairs et al., 1999). Thus, we decided to solve for the feed response by assuming that both Stokes' \(U\) and \(V\) are zero. Consequentially, the polarization angles presented in this paper have an arbitrary (but constant) rotation relative to the sky, and the circular polarization values are systematically inaccurate by a few percent in the polarized fraction. Despite these inaccuracies, conducting a polarization calibration in this manner is still a significant improvement over not doing it. ### Flux Density Estimation Since we did not observe a known unpolarized flux calibrator, we cannot conduct a robust absolute flux calibration of our data. Thus, we use flux measurements from the literature and assume that the mean flux density of the pulsar is \(400\,\mathrm{mJy}\) at \(327\,\mathrm{MHz}\)(Foster et al., 1991). Though the pulsar is known to have a spectral index of around \(-2.6\), we assume a constant flux density across all frequency bands (thus, assuming a zero spectral index), both for simplicity and to ensure the spectra of giant pulses would display well. A benefit of normalizing the data in both observations relative to the mean flux density is that flux densities of various emission phenomena can be fairly compared between the observations. We note that even if we had observed a flux calibrator, it would have been difficult to use the fluxes, since interstellar scintillation causes the overall magnification of the pulsar's emission to vary significantly over timescales of order a month (Ramachandran et al., 2006). As described in MK23, we cannot measure this overall magnification, and instead assume that the total integrated intensity of the IRF is \(1\). ## 3 Intrinsic Pulse Profile To construct the intrinsic pulse profile, we use a regularization factor of \(\mu=0.01\) during deconvolution1. We then fold the resulting signal into a full-Stokes pulse profile using pulsar timing models provided by the NanoGrav collaboration (Alam et al., 2021). We have not attempted to calibrate the absolute pulse phase to match that of other studies. The intrinsic pulse profile for the data from MJD 58245 is shown in Figure 1. The intrinsic profile on MJD 58298 agrees with this profile (as shown in MK23). Footnote 1: We normalize the IRF to have a total energy of \(1\), i.e. the mean value of \(|H(\nu)|^{2}\) is \(1\). Thus, the numerical value of \(\mu\) is relative to this total energy. There are two primary pulse components, the main pulse (MP) which peaks at a pulse phase of \(\phi=-0.297\) and the interpulse (IP) which peaks at \(\phi=+0.217\). For both MP and IP, there are small narrow bumps on the trailing shoulder which are caused by giant pulse emission. The linear polarization fractions for the MP and IP are \(\sim 81\%\) and \(\sim 30\%\), respectively. We show Stokes \(V\) for reference, but this may deviate from the true circular polarization profile of PSR B1937+21 given the assumption of zero total circular polarization used in our polarization calibration (Section 2.3). 
The MP arrives \(756.5\,\upmu\)s after the IP (\(174.8^{\circ}\) in pulse phase) in agreement with measurements made by Foster et al. (1991). The pulse widths at half maximum, \(W_{50}\), are \(51.2\,\upmu\)s (\(11.8^{\circ}\)) and \(54.5\,\upmu\)s (\(12.6^{\circ}\)) for the MP and IP, respectively. These widths are consistent with those seen in the intrinsic profile at \(430\,\mathrm{MHz}\) recovered by Walker et al. (2013), and also with widths measured from observed profiles at higher frequency (and thus less affected by scattering) by Kramer et al. (1999). The leading edge of the IP exhibits a \(\sim 80^{\circ}\) jump in polarization angle, which has also been observed at \(610\,\mathrm{MHz}\) by Stairs et al. (1999). Around both components we see faint, wide bumps. We expect that some portion of these bumps could be an artifact of our noisy deconvolution method: since our modeled IRF does not capture the true IRF completely, a small portion of the observed signal is not correctly "deconvolved" but is instead scattered further, convolved with something like the cross-correlation of the modeled IRF and the un-modeled component of the true IRF. Any such descattering mismatch, however, should affect the MP and IP in the same way, while in our results the low-level bumps around the MP and IP do not look alike (even when adjusting for the different peak intensities of the components). Thus, we believe that at least some part of these bumps is real. This is supported by the results in Walker et al. (2013) who recover the intrinsic pulse profile at 430 MHz using cyclic spectroscopy and also find wider low-level features around both the MP and the IP.

Figure 1: The intrinsic pulse profile, found by descattering the baseband data from MJD 58245, folded using \(1000\) bins in pulse phase, with polarization angle, PA, shown wherever \(|L|/\sigma_{I}>7\). The pulse profile obtained without descattering is shown as a dotted line for comparison. Note that the polarization calibration assumed that Stokes U and V were zero averaged over the pulse profile (see Sect. 2.3 for details). Hence, the absolute polarization angle is not calibrated, and Stokes V is not an accurate measurement of the pulsar’s circular polarization profile (though it should not be too far off, since other measurements find that the true average circular polarization is close to zero).

## 4 Search for Bright Bursts

We already know, a priori, that PSR B1937+21 emits bright narrow bursts in the form of giant pulses. In fact, we use them to model the IRF. However, the giant pulses found previously have a selection bias due to the requirement that they be very impulsive and bright. Thus, we conduct a new search for bright bursts in the intrinsic emission signal to conduct a proper analysis of the transient bursts exhibited by the pulsar. We use a regularization factor of \(\mu=1\) for the deconvolution step for this search. The stronger regularization prevents small values of \(|H(\nu)|^{2}\) from amplifying noise in the recovered signal, but also implies a slight reduction in the fraction of energy that will be recovered for an intrinsically narrow burst. For every pulse period in our data, we extract baseband signal snippets aligned by pulse phase. We have 4,198,508 and 1,020,697 pulsar rotations on MJD 58245 and 58298, respectively.
For each snippet, we take the signals from the 19 contiguous frequency bands and "stitch them together" in the frequency domain to form a \(59.375\,\mathrm{MHz}\) baseband signal for each polarization, with a time resolution of \(\approx\!16.84\,\mathrm{ns}\). We then take the squared modulus to compute intensities, subtracting the underlying regular pulse emission to ensure the mean intensity in the absence of a burst is zero. The regular pulse emission is well-described as amplitude-modulated noise and thus follows a \(\chi^{2}\) distribution with \(2\) degrees of freedom, with a scale parameter which is a function of pulse phase. We can estimate the scale parameter in an outlier-robust manner using the median pulse profile, and multiplying by the mean-to-median ratio of the distribution which is invariant to the scale parameter. We then add the polarizations together to get the total intensity (Stokes \(I\)) and apply a running uniform filter with a width of 3 samples (\(\sim\!50\,\mathrm{ns}\)). This uniform filter is used to avoid a bias against impulses which occur between two samples (and thus have as low as half the actual flux density in a given discrete sample). Bright bursts are detected when the flux density for a sample of this signal is above a threshold chosen such that the expected number of false detections across our data is \(0.5\), based on the measured noise statistics. This threshold is pulse-phase dependent as the underlying regular emission contributes to the variance of the noise. In practice, we expect even fewer false detections since we employ a minimum fluence cutoff as described later. In Figure 2, we show the histograms for the peak flux density, \(S_{\mathrm{peak}}\), in \(25\,\upmu\)s regions centered where MP and IP giant pulses are emitted, constructed using all pulsar rotations for each observation. We also show the distribution expected if the data contained only noise, which is modeled as a \(\chi^{2}\) distribution with the scale parameter appropriate for the given pulse phase. Evidently, there are a significant number of bright giant pulses, which cause the observed distribution to deviate at higher peak flux densities. The observed pulsar emission is brighter during the observation on MJD 58298, likely due to scintillation-induced magnification. Since we normalize the data to match flux densities, this means the noise appears to be at a lower flux level in MJD 58298. The fact that the MP and IP distributions of the two observations match in the high-flux tails (for both MP and IP) shows that the giant pulse flux distributions do not vary relative to the averaged pulsar flux. When detecting bursts, we often find clumps of samples above the flux density threshold, sometimes with small gaps in between them. In order to close the gaps and ensure we do not miss portions of a burst, we include all samples within \(200\,\mathrm{ns}\) of a detection as part of the detected burst. We measure the fluence, \(E\), of a burst by integrating the flux density; for its pulse phase, we use the centroid of the detected burst. In this work, we only consider bursts with \(S_{\mathrm{peak}}\geq 300\,\mathrm{Jy}\) (measured in a window of size \(50\,\mathrm{ns}\)) and a fluence of \(E\geq 35\,\mathrm{Jy\,\upmu s}\). A typical fluence measurement has an error of around \(3{-}4\,\mathrm{Jy\,\upmu s}\), implying that, at the cut-off, we measure the fluence at \(\sim\!10\sigma\).
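A minimal sketch of the false-alarm-based thresholding described above is given below. It approximates the summed, smoothed intensity as a scaled \(\chi^{2}\) variable (treating the 3-sample running mean as an average of independent samples), so it illustrates the idea of deriving a phase-dependent threshold from an expected number of false detections rather than reproducing the pipeline's exact statistics:

```python
import numpy as np
from scipy.stats import chi2

def detection_threshold(baseline_per_phase, n_trials, n_false=0.5,
                        npol=2, nsmooth=3):
    """Phase-dependent flux-density threshold such that ~n_false noise samples
    are expected above it over n_trials searched samples.

    baseline_per_phase : mean single-polarization intensity vs pulse phase
                         (e.g. estimated from the median profile as in the text)
    """
    dof = 2 * npol * nsmooth                 # approximate degrees of freedom
    p_per_sample = n_false / n_trials        # per-sample false-alarm probability
    q = chi2.isf(p_per_sample, dof) / dof    # threshold in units of the mean
    # the summed (over npol), smoothed intensity has mean npol * baseline
    return npol * np.asarray(baseline_per_phase) * q

# e.g. a flat 3 Jy per-polarization baseline and ~5e9 samples searched:
print(detection_threshold(np.full(1000, 3.0), 5e9)[0])
```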
For comparison, the MP component in the regular pulse emission has a peak flux density of \(\sim\!6\,\mathrm{Jy}\) and an average fluence of \(\sim\!350\,\mathrm{Jy\,\upmu s}\). Thus, our weakest detected bursts are at least 50 times as bright as the MP, while having at least \(1/10\)th the fluence. The pulse phase - fluence distribution of all detected bursts is shown in Figure 3. We find that there are two distinct populations of bursts in both the MP and IP, which we will refer to as giant pulses (GP) and giant micropulses (GMP). GPs, on the trailing side (later in pulse phase) for both MP and IP, are characterized by the narrow pulse phase range in which they occur and a fluence distribution which extends to large values. GMPs, on the leading side, are characterized by a wider pulse phase range, lower fluences and are coincident with the pulse components in the regular pulse emission. For the purposes of measuring statistics, we divide detected bursts into GPs and GMPs based on their pulse phase, with the partition line denoted by a red dashed line in Figure 3 at \(\phi=-0.275\) in the MP region, and \(\phi=+0.25\) in the IP region. We summarize our statistics in Table 1, and present reverse cumulative fluence distributions in Figure 4.

\begin{table} \begin{tabular}{l r r r r r} \hline \hline & \multicolumn{2}{c}{MJD 58245} & \multicolumn{2}{c}{MJD 58298} & \\ Component & \(N\) & \(r\) & \(N\) & \(r\) & \(N_{\mathrm{tot}}\) \\ \hline MP GP & 30913 & 4.73 & 7922 & 4.98 & 60270 \\ IP GP & 17078 & 2.61 & 4357 & 2.74 & \\ MP GMP & 847 & 0.13 & 235 & 0.15 & 1104 \\ IP GMP & 17 & \(\cdots\) & 5 & \(\cdots\) & \\ \hline \end{tabular} Note. – For each observation, we list the total number of detected pulses \(N\) as well as the rate \(r\) (in \(\mathrm{s}^{-1}\), using the exposure times of 6540 and \(1590\,\mathrm{s}\)). We do not list occurrence rates for the IP GMPs given their low number of detections. The last column lists the total number \(N_{\mathrm{tot}}\) of GP and GMP, i.e., combining MP and IP, as well as both observations. \end{table} Table 1: Detection statistics for bright bursts emitted by PSR B1937+21 for both observations.

Figure 3: The pulse phase – fluence distribution for bursts with \(E>35\,\mathrm{Jy\,\upmu s}\) (with the colormap logarithmic in normalized counts). For both the MP (left) and IP (right) pulse phase regions, we see two distinct populations of bursts (separated by red dashed lines): giant micropulses (GMP) and giant pulses (GP). The GMPs occur rarely, in the same phase window as the regular emission, while the GPs occur at much higher rates and fluences, in a narrow pulse phase window trailing the regular emission.

Figure 2: Normalized histograms of the peak flux densities, \(S_{\rm peak}\), detected for every individual pulse rotation, over \(25\,\upmu\)s snippets around the MP and IP giant pulse regions (with flux densities measured using a sliding \(50\,\mathrm{ns}\) window). The vertical dotted line at \(300\,\mathrm{Jy}\) is the \(S_{\rm peak}\) cutoff used when detecting a burst. Shown in grey are the expected histograms if the data consisted of only noise, which is modeled as an appropriately scaled \(\chi^{2}\) distribution.

## 5 Giant Micropulses

From our search for bright bursts in the intrinsic emission, we have found a total of 1104 giant micropulses, mostly in the MP region of pulse phase.
The highest fluences we observe for GMPs are \(160\,\mathrm{Jy\,\upmu s}\) and \(60\,\mathrm{Jy\,\upmu s}\) in the MP and IP regions, respectively, which can be compared to the typical fluence for the regular emission components of \(\sim 350\) and \(\sim 200\,\mathrm{Jy\,\upmu s}\), respectively. The fluence distribution for MP GMPs seems to follow a power-law, such that \(P(E>E_{0})\propto{E_{0}}^{\alpha}\) with \(\alpha\) being approximately \(-3.6\) and \(-4.0\) on MJDs 58245 and 58298, respectively. In Figure 5, we show the total intensity signals of all MP GMPs with fluence above \(85\,\mathrm{Jy\,\upmu s}\) from both observations. It is clear these are not statistical flukes. The bursts seem to occur in clumps, of roughly \(\sim\!1\,\upmu\)s width, but with many exhibiting narrower sub-clumps or even single-sample peaks, indicating structure on timescales of \(10-20\,\mathrm{ns}\). The brightest MP GMP we observe has a peak flux density of \(4\,\mathrm{kJy}\). For the MP GMPs, the median pulse phase of occurrence is \(\phi=-0.300\) and the interquartile range (IQR; the pulse phase range of the middle \(50\%\) of the MP GMPs) is \(15.4\,\upmu\)s (\(3.6^{\circ}\)). Thus, MP GMPs are coincident with the regular MP emission, but narrower in pulse phase range3. Footnote 3: If we assume normal distributions, the equivalent IQR for the regular pulse emission is \(\mathrm{IQR}=\mathrm{FWHM}/1.75\approx 29.3\,\upmu\)s. Previous single-pulse studies of PSR B1937+21 at \(430\,\mathrm{MHz}\) and \(1410\,\mathrm{MHz}\) did not find these GMPs and reported stable single-pulse behavior (Jenet et al., 2001; Jenet and Gil, 2004), with pulse-to-pulse fluctuations consistent with being caused by interstellar scintillation. These previous non-detections cannot be due to lack of sensitivity, since the GMPs we find are quite bright. Instead, it seems likely that scattering-induced broadening caused them to become undetectable, as their contribution to the total pulse brightness is only modest. Additionally, commonly-used metrics to measure single-pulse stability such as the coefficient of variation4 are insensitive to rare, low-fluence bursts such as the GMPs we detect. Footnote 4: In pulsar literature, this is often referred to as the “modulation index”.

## 6 Giant Pulses

In our burst search, we find over 60,000 giant pulses (GPs). As is evident in Figure 2, this is a conservative count, primarily due to our choice of using a cutoff of \(S_{\mathrm{peak}}\geq 300\,\mathrm{Jy}\). Based on the distribution of peak flux density, we expect that there are \(>125,000\) GPs above the noise floor in our data and that the lower bound for the fluence of a GP, if there is one, is \(\lesssim 10\,\mathrm{Jy\,\upmu s}\), the fluence of a burst with \(S_{\mathrm{peak}}\sim\!200\,\mathrm{Jy}\) across \(50\,\mathrm{ns}\). The median pulse phases at which GPs occur are \(\phi=-0.268\) and \(\phi=+0.258\) for MP GPs and IP GPs, respectively. The MP GPs trail the IP GPs by \(738\,\upmu\)s (\(170.5^{\circ}\)). The IQR for both MP GPs and IP GPs is \(2.3\,\upmu\)s (\(0.54^{\circ}\)). Relative to the peaks of the pulse components in the regular emission, the MP GPs trail by \(45.8\,\upmu\)s (\(10.6^{\circ}\)) and the IP GPs trail by \(64.4\,\upmu\)s (\(14.8^{\circ}\)). The widths of the GP components as well as the separation between the GP components and the regular emission peaks are consistent with those found at higher frequencies by Kinkhabwala and Thorsett (2000).
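The paper quotes power-law indices for the reverse cumulative fluence distributions of the GMPs above (and of the GP tails below) without spelling out the fitting procedure. One standard way to estimate such a tail index is the maximum-likelihood (Hill-type) estimator sketched here, shown purely as an assumed illustration:

```python
import numpy as np

def ccdf_power_law_index(fluences, e_min):
    """Maximum-likelihood (Hill-type) estimate of alpha in P(E > E0) ∝ E0^alpha
    for the tail above e_min (alpha is negative for a decreasing distribution)."""
    e = np.asarray(fluences, dtype=float)
    e = e[e >= e_min]
    # For a pdf p(E) ∝ E^(alpha-1) truncated at e_min, the MLE of the CCDF index is:
    return -len(e) / np.sum(np.log(e / e_min))

# Synthetic check: draw from P(E > E0) = (E0/e_min)^(-1.6) via inverse-transform sampling
rng = np.random.default_rng(2)
samples = 100.0 * rng.uniform(size=10000) ** (-1.0 / 1.6)
print(ccdf_power_law_index(samples, 100.0))  # ~ -1.6
```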
In Figure 4, we see that the tail of the fluence distribution seems to follow a power-law, as we found above for the GMPs. For GPs with fluence above \(1\,\mathrm{kJy\,\upmu s}\), the power-law indices are roughly \(-1.6\) and \(-2.0\) for MJDs 58245 and 58298, respectively, and are consistent between the MP GPs and IP GPs. At lower fluences, we see a turnover resulting in a less steep distribution. This turnover is expected as otherwise the GP emission would produce a significant peak in the average pulse profile, which we do not observe. A low-fluence turnover of this kind has been seen in previous GP studies of PSR B1937+21 (Soglasnov et al., 2004; McKee et al., 2019) and the Crab (Popov and Stappers, 2007; Karuppusamy et al., 2010; Lin and van Kerkwijk, 2023).

Figure 4: Reverse cumulative fluence distributions for detected bursts, for both observations (with IP GMPs omitted because of their low numbers). For reference, power laws with indices of \(-1.6\) and \(-3.6\) are shown. One sees that the MP and IP GP distributions are similar, with a turnover at a few \(100\,\mathrm{Jy\,\upmu s}\). The distribution of the MP GMPs is far steeper, providing further evidence that GMPs are a different type of burst.

Figure 5: The total intensity signals of all “giant micropulses” (GMP) in the main pulse region that have fluence above \(85\,\mathrm{Jy\,\upmu s}\). The signals are offset from each other by \(1\,\mathrm{kJy}\), for which a vertical black bar is provided as a reference. At the top, the main pulse component from the averaged intrinsic pulse profile is shown for comparison (multiplied with a factor 100). The peak flux density of GMPs greatly exceeds that of the average pulse profile but their fluence is modest: only \(\sim\!25\%\) of the average profile even for the energetic GMPs shown here.

Figure 6: A selection of bright giant pulses. For each giant pulse, the top panels show the time-domain signal, and the bottom panels show the frequency spectrum. Plotted are total intensity (black), linear polarization (red), and circular polarization (green). Also provided is the UTC timestamp corresponding to \(t=0\), the total fluence of the giant pulse, the degree of polarization and the pulse phase. This is part of a figure set of 40 figures. The remaining figures are attached to the end of this paper as Figures 10 – 48.

We find that MP GPs occur \(\sim\!1.8\) times more often than IP GPs, and thus the most energetic GPs tend to come from the MP region. However, accounting for the lower occurrence rate, we see no evidence that IP GPs are intrinsically less energetic than MP GPs. In fact, the highest fluence we measure is for an IP GP with \(29.6\,\mathrm{kJy\,\upmu s}\). The brightest GP detected has a peak flux density of \(934\,\mathrm{kJy}\) across \(\sim\!15\,\mathrm{ns}\), thus having an implied brightness temperature of \(T_{b}=10^{40.5}\,\mathrm{K}\), using a distance to PSR B1937+21 of \(3\,\mathrm{kpc}\) (Ding et al., 2023). On the lower end, the weakest GPs we detect have \(S_{\mathrm{peak}}\geq 300\,\mathrm{Jy}\) across \(50\,\mathrm{ns}\), resulting in \(T_{b}=10^{36}\,\mathrm{K}\). In Figure 6 and the associated figure set, we show the full-Stokes time-domain signals as well as the frequency spectra for the most energetic GPs we observe. The first pulse shown (top left panels) is a bright narrow GP of fluence \(6.5\,\mathrm{kJy\,\upmu s}\) which seems to be localized (in time) to a single sample, and (consequently) has a relatively flat spectrum across our entire bandwidth.
If we assume this GP is a Gaussian pulse, it would need to have a FWHM of \(\lesssim\!10\,\mathrm{ns}\) to explain our observation: a wider pulse would spill over into neighbouring samples much more. Generally, the GPs we detect seem to consist of clumps or bursts made up of one to a few of these "nanoshots", as can be seen in the other panels of Figure 6, as well as in the extended figure set. For GPs with multiple nanoshots, we see the corresponding periodic modulations in the spectra, reflecting how the nanoshots interfere with each other. The median width of a GP is \(\sim\!100\,\mathrm{ns}\) (measured by the maximum extent across detections for a GP). Most GPs exhibit just one clump of nanoshots, but some seem to exhibit multiple clumps. We analyze these multi-burst GPs further in Section 6.1 below, before turning to the polarization statistics of GPs in Section 6.2.

### Multi-burst Giant Pulses

In Figure 7, we show the varied morphology of GPs we find. While the typical GP looks like the examples in the first two panels, a significant number of them show distinct multiple bursts. In order to measure the multi-burst statistics of GPs, we must consistently define what constitutes a burst. As described in Section 4, we define a "detection" as a sample of a total intensity signal (with \(50\,\mathrm{ns}\) uniform filtering applied) which exceeds a flux density threshold. We define "gaps" as parts of the signal which are more than 5 samples away from a detection. Thus, two adjacent bursts must necessarily have a gap of \(\geq 185\,\mathrm{ns}\) between them. This minimum gap size was chosen to be about double the typical width of a GP of \(100\,\mathrm{ns}\) as described earlier. Once all the gaps have been identified, we can count the number of distinct bursts in a GP. Table 2 summarizes the statistics of the multi-burst GPs we find. If bursts were independent of each other and their occurrence in the same pulse rotation was due to coincidence, we would expect to see significantly fewer multi-burst GPs than we actually observe. For example, on MJD 58245, using binomial statistics, we would expect to see \(\sim\!115\) MP GPs with \(\geq 2\) bursts and \(<1\) MP GPs with \(\geq 3\) bursts if the bursts occurred independently of each other. In addition to this, we find that the bursts within multi-burst GPs occur much closer to each other in time than would be expected from the full pulse phase range at which GPs are observed. For GPs where we detect two bursts, we find that the median time difference between the centroids of the bursts is \(0.42\,\upmu\)s, much smaller than the expected \(\sim\!2\,\upmu\)s if the bursts were randomly sampled from the overall GP pulse phase distribution. Therefore, the multiple bursts of these GPs seem to be causally related to each other, and occur in much tighter groupings compared to the pulse phase range. We find that the pairwise time differences or fluence ratios between bursts have no preferred value or apparent pattern. From Table 2, we can see that for a GP with \(n\geq 1\) bursts, the probability of observing the \((n+1)\)-th burst is roughly consistent for all \(n\) and across different GP components. However, this probability is significantly higher than the probability of observing the first burst (i.e., the occurrence rate of GPs).
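The bookkeeping and the independence argument above are easy to reproduce approximately. The sketch below groups above-threshold samples into bursts using the minimum-gap rule, and then computes, under a Poisson approximation, the number of rotations expected to contain two or more bursts if bursts were independent; the numbers come out close to the \(\sim\!115\) and \(<1\) quoted above (the grouping function is a simplified stand-in for the pipeline, not the pipeline itself):

```python
import numpy as np
from scipy.stats import poisson

def group_into_bursts(detection_idx, max_gap=10):
    """Split sorted above-threshold sample indices into bursts: detections more
    than max_gap samples apart (~185 ns at 16.84 ns sampling) start a new burst."""
    idx = np.sort(np.asarray(detection_idx))
    if idx.size == 0:
        return []
    splits = np.where(np.diff(idx) > max_gap)[0] + 1
    return np.split(idx, splits)

print(len(group_into_bursts([100, 102, 103, 110, 400, 402])))  # -> 2 bursts

# Expected multi-burst counts if bursts occurred independently (MJD 58245, MP GPs):
n_rot = 4198508
lam = 30913 / n_rot                    # per-rotation occurrence rate of MP GP bursts
print(n_rot * poisson.sf(1, lam))      # rotations with >= 2 bursts: ~ 114
print(n_rot * poisson.sf(2, lam))      # rotations with >= 3 bursts: ~ 0.3
```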
We can roughly reproduce the observed multi-burst statistics if we set the probability of observing the first burst to be the MP and IP GP occurrence rates, \(P_{\mathrm{mp}}=0.0075\) and \(P_{\mathrm{ip}}=0.0042\), respectively, and the probability of observing each subsequent burst to be roughly \(P^{\prime}\sim\!0.075\) for both components. Thus, we see that while the occurrence rate is different between MP GPs and IP GPs, it does not appear to affect the probability of observing subsequent bursts. The higher and consistent probability of observing a subsequent burst for both components further supports the notion that the bursts in a multi-burst GP are causally related. A similar causal relationship between multiple bursts of a giant pulse has been recently reported in observations of the Crab (Lin & van Kerkwijk, 2023).

\begin{table} \begin{tabular}{l r r r r} \hline \hline No. of & \multicolumn{2}{c}{MJD 58245} & \multicolumn{2}{c}{MJD 58298} \\ bursts & MP & IP & MP & IP \\ \hline \(\geq 0\) & 4198508 & 4198507 & 1020697 & 1020697 \\ \(\geq 1\) & 30913 & 17078 & 7922 & 4357 \\ \(\geq 2\) & 2112 & 1531 & 689 & 464 \\ \(\geq 3\) & 159 & 141 & 54 & 53 \\ \(\geq 4\) & 20 & 17 & 7 & 8 \\ \(\geq 5\) & 2 & 2 & 2 & 3 \\ \(\geq 6\) & – & – & – & 1 \\ \hline \end{tabular} Note. – Two bursts are considered separate if they are more than \(185\,\mathrm{ns}\) apart. \end{table} Table 2: Multi-burst statistics of giant pulses.

Figure 7: A hand-picked selection of GPs, chosen to demonstrate morphological variety, showing progressively increasing number of “bursts”. The sample spacing is \(16.84\,\mathrm{ns}\), and for all plots, \(t=0\) is chosen to be the mean of the earliest and latest detections in time.

## 7 Discussion

### Giant Pulses and Giant Micropulses

We observe two types of bright narrow bursts which we call giant pulses and giant micropulses. In the pulsar literature, GPs were somewhat arbitrarily defined as bright pulses with more than ten times the fluence of the average pulse (Cognard et al., 1996). When bright bursts were observed in the Vela pulsar (Johnston et al., 2001), they were called "giant micropulses" as they did not meet this fluence cutoff. Now, with more sensitive instruments and improved techniques, we can measure low-fluence bursts that are clearly part of the same population as the more energetic GPs, just arising in the fainter part of the distribution, and thus, relevant to our understanding of the GP emission mechanism. Here, we describe the key differences we see between these two populations in our data. GPs occur over a very narrow phase range and trail the regular pulse components. Their fluence distributions are not as steep, with power-law indices \(\alpha\gtrsim-2\), and extend to extremely high fluences. GPs typically seem to consist of one or maybe a few bursts, each having one to few nanoshots, which have widths of order \(10\,\mathrm{ns}\). GMPs, on the other hand, occur over a pulse phase range roughly coincident with the regular pulse components. Their fluence distribution is much steeper, with power-law indices \(\alpha\sim-4\), and, therefore, does not extend to very high fluences. GMPs seem to be broader, with widths of order \(1\,\mathrm{\upmu s}\), although with structure at the tens of ns level.
These differences suggest that, at least in PSR B1937+21, GPs and GMPs are emitted from different parts of the magnetosphere and thus likely have different underlying emission mechanisms. GMPs, given the overlap in phase with the regular pulse emission, may be transient bursts from the polar gap, whereas GPs are generally thought to originate at or beyond the light cylinder, given that they usually occur at the same phases as high-energy emission (Cusumano et al., 2003). PSR B1937+21 appears to be the first pulsar where both these phenomena are observed, allowing them to be compared and distinguished. It is possible that previous detections of, say, GPs in other pulsars are, in fact, detections of GMPs in the sense described here. For instance, the GPs we reported in the mode-changing PSR B1957+20 at 327 MHz seem to generally have very steep fluence distributions, with \(\alpha\sim-4.5\), except for the GPs in the main pulse during the "high mode", for which we found \(\alpha\sim-1.6\) for the more energetic bursts (Mahajan et al., 2018). A possible interpretation may be that the former are actually GMPs, and only bursts from the energetic component which emit during the high mode are GPs like the ones observed here in PSR B1937+21. It would be interesting to apply the techniques introduced here to descatter the emission of PSR B1957+20, and see if the emission phases at which bursts are emitted give further clues.

### Comparison with the Crab Pulsar and Implications for the Giant Pulse Emission Mechanism

The properties we measure in this work should provide useful observational constraints for the radio emission mechanism of GPs, especially by comparing also with other GP emitters. For the comparison, the Crab pulsar is perhaps the most useful, as it is very well studied. It has a similarly strong magnetic field at the light cylinder, \(B_{\mathrm{LC}}\sim\!10^{6}\,\mathrm{G}\), but a much longer spin period, \(\sim\!33\,\mathrm{ms}\), thus potentially allowing one to tease apart properties that depend mostly on field strength and others that depend also on the spatial scale.
We also find that the probability of generating a subsequent burst is roughly invariant to the number of bursts in a GP, as again seems to be the case in the Crab Pulsar (Lin & van Kerkwijk, 2023), suggesting an emission process where a transient event can preferentially induce another transient event with some consistent probability. Each individual GP burst seems to consists of a couple of very closely spaced nanoshots, with the average degree of polarization suggesting an effective number of nanoshots of about three. This is similar to what is seen for the Crab Pulsar (Hankins & Eilek, 2007), although there the typical number of nanoshots seems to be somewhat higher, about five (Lin et al., 2023). Furthermore, in the Crab pulsar, the nanoshots in a given microburst are spread over about \(1\,\upmu\)s, while for PSR B1937+21 the bursts typically seem to last less long, about \(100\,\)ns. It may be meaningful that the ratio between the two is roughly the ratio of the spin periods. Finally, in the Crab Pulsar, the nebular scattering screen resolves the nanoshots, which is only possible if the plasma emitting them travels at highly relativistic speeds, with \(\gamma\simeq 10^{4}\)(Bij et al., 2021; Lin et al., 2023), implying it emits nanoshots over \(\sim 10^{3}\) light cylinder radii along the line of sight. For PSR B1937+21, if the Lorentz factor were the same, the duration would imply a similar extent, relative to the light cylinder radius. One of the most promising recent models for GPs is that they are emitted by merging plasmoids which form in the current sheet in the striped wind beyond the light cylinder (Philippov et al., 2019; see also Lyubarsky, 2019). This model predicts GPs with roughly the brightness temperature that we observe, and also seems to match the morphology at least qualitatively: clumps of one to few nanoshots. The simulations also seem to make testable predictions, e.g., for how the GP properties should change with frequency. Since PSR B1937+21, with its fast spin period and therefore compact magnetosphere, is easier to simulate than the Crab Pulsar, with its much larger light cylinder, it may be worthwhile to try to target simulations specifically to PSR B1937+21. ### Better deconvolution technique While our deconvolution technique works well in recovering the intrinsic emission of the pulsar, it should be possible to improve on it. One weakness of our method is that it is only least-squares optimal for well-known IRFs. In the case of noisy measures of the IRF, which is what we have, total least-squares methods should be better suited to recover the intrinsic signal (Huffel and Lemmerling, 2002; Mastronardi et al., 2000). A problem is that these techniques usually require working with extremely large matrices, the size of which is determined by the length of the signals involved. This is computationally intractable for wideband Nyquist-sampled baseband signals. We believe, however, that it should be possible to use these methods effectively for recovering short signals, of order a few hundred samples, such as around a giant pulse. Furthermore, we detect significantly more giant pulses in the intrinsic signal than we do in the original giant pulse search. While many of the additional detections are faint, they could be included in the solution for the wavefield, presumably leading to a less noisy model. 
## 8 Conclusions

We successfully recover the intrinsic emission, in the voltage domain, of PSR B1937+21 at \(327\,\)MHz, where in the observed signal propagation effects have significantly smeared the signal. From this intrinsic signal, we are able to measure the intrinsic pulse profile of the pulsar. We also successfully find transient bursts, detecting over 60,000 giant pulses from both the main pulse and interpulse emission components. We also discovered giant micropulses in PSR B1937+21, with over 1000 detected in our data. It would be useful to conduct a search for these giant micropulses at higher radio frequencies to determine whether previous non-detections were a result of selection bias (for example, due to less sensitive instruments) or whether these bursts do not occur at higher frequencies. The observation of these transient bursts places significant observational constraints on the physical emission mechanisms of these components. Further studies of other millisecond pulsars, especially giant pulse emitters like PSR B1957+20, can bring us closer to a coherent theory of radio emission in millisecond pulsars.

We thank Rebecca Lin for helpful comments and insight regarding giant pulses in the Crab Pulsar. This research has made use of NASA's Astrophysics Data System Bibliographic Services. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium (Loken et al., 2010; Ponce et al., 2019). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. MHvK is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via discovery and accelerator grants, and by a Killam Fellowship.

Arecibo (327 MHz Gregorian). The data used in this publication was obtained as part of observing project P3229.

Software: Pulsarbat (Mahajan and Lin, 2023), Baseband (van Kerkwijk et al., 2021), Numpy (Harris et al., 2020), Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Scipy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Dask (Dask Development Team, 2016), TEMPO2 (Hobbs et al., 2006; Edwards et al., 2006)
2301.13519
Restricted distance-type Gaussian estimators based on density power divergence and their applications in hypothesis testing
Zhang (2019) presented a general estimation approach based on the Gaussian distribution for general parametric models where the likelihood of the data is difficult to obtain or unknown, but the mean and variance-covariance matrix are known. Castilla and Zografos (2021) extended the method to density power divergence-based estimators, which are more robust than the likelihood-based Gaussian estimator against data contamination. In this paper we introduce the restricted minimum density power divergence Gaussian estimator (MDPDGE) and study its main asymptotic properties. Also, we examine its robustness through its influence function analysis. Restricted estimators are required in many practical situations, especially in testing composite null hypotheses, and we provide here estimators constrained to the inherent restrictions of the underlying distribution. Further, we derive robust Rao-type test statistics based on the MDPDGE for testing simple null hypotheses and we deduce explicit expressions for some important distributions. Finally, we empirically evaluate the efficiency and robustness of the method through a simulation study.
Ángel Felipe, María Jaenada, Pedro Miranda, Leandro Pardo
2023-01-31T10:13:27Z
http://arxiv.org/abs/2301.13519v2
Restricted distance-type Gaussian estimators based on density power divergence and their applications in hypothesis testing

###### Abstract

Zhang (2019) presented a general estimation approach based on the Gaussian distribution for general parametric models where the likelihood of the data is difficult to obtain or unknown, but the mean and variance-covariance matrix are known. Castilla and Zografos (2021) extended the method to density power divergence-based estimators, which are more robust than the likelihood-based Gaussian estimator against data contamination. In this paper we introduce the restricted minimum density power divergence Gaussian estimator (MDPDGE) and study its main asymptotic properties. Also, we examine its robustness through its influence function analysis. Restricted estimators are required in many practical situations, especially in testing composite null hypotheses, and we provide here estimators constrained to the inherent restrictions of the underlying distribution. Further, we derive robust Rao-type test statistics based on the MDPDGE for testing simple null hypotheses and we deduce explicit expressions for some important distributions. Finally, we empirically evaluate the efficiency and robustness of the method through a simulation study.

**AMS 2001 Subject Classification:** 62F35, 62J12.

**Keywords and phrases:** Gaussian estimator, Minimum density power divergence Gaussian estimator, Robustness, Influence function, Restricted Minimum density power divergence Gaussian estimator, Rao-type tests, Elliptical family of distributions.

## 1 Introduction

Let \(\mathbf{Y}_{1},...,\mathbf{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\mathbf{Y}\) with probability density function \(f_{\mathbf{\theta}}(\mathbf{y})\), \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}\). We denote \[E_{\mathbf{\theta}}\left[\mathbf{Y}\right]=\mathbf{\mu}\left(\mathbf{\theta}\right)\text{ and }Cov_{\mathbf{\theta}}\left[\mathbf{Y}\right]=\mathbf{\Sigma}\left(\mathbf{\theta}\right). \tag{1}\] The log-likelihood function of the assumed model is given by \[l\left(\mathbf{\theta}\right)=\sum\limits_{i=1}^{n}\log f_{\mathbf{\theta}}(\mathbf{y}_{i})\] for \(\mathbf{y}_{1},...,\mathbf{y}_{n}\) observations of the \(m\)-dimensional random vectors \(\mathbf{Y}_{1},...,\mathbf{Y}_{n}\). Then, the maximum likelihood estimator (MLE) is computed as \[\widehat{\mathbf{\theta}}_{\mathrm{MLE}}=\arg\max_{\mathbf{\theta}\in\Theta}l\left(\mathbf{\theta}\right). \tag{2}\] In many real life situations the underlying density function, \(f_{\mathbf{\theta}}(\cdot)\), is unknown or its computation is quite difficult; in contrast, the mean vector and variance-covariance matrix of the underlying distribution of the data, namely \(\mathbf{\mu}(\mathbf{\theta})\) and \(\mathbf{\Sigma}\left(\mathbf{\theta}\right)\), are known.
In this case, Zhang (2019) proposed a general procedure based on the Gaussian distribution for estimating the model parameter vector \(\mathbf{\theta}\). Zhang (2019) assumed that the \(m\)-dimensional random vector \(\mathbf{Y}\) came from a multidimensional normal distribution with mean vector \(\mathbf{\mu}\left(\mathbf{\theta}\right)\) and variance-covariance matrix \(\mathbf{\Sigma}\left(\mathbf{\theta}\right)\). From a statistical point of view this procedure can be justified on the basis of the maximum-entropy principle (see Kapur (1989)), as the multidimensional normal distribution has maximum uncertainty in terms of Shannon entropy and is also consistent with the given information, the mean vector and variance-covariance matrix. Then, an estimator of the model parameter \(\mathbf{\theta}\) based on the Gaussian distribution can be obtained by maximizing the log-likelihood function as defined in (2), but using for \(f_{\mathbf{\theta}}(\cdot)\) the probability density function of a normal distribution with known mean \(\mathbf{\mu}(\mathbf{\theta})\) and variance-covariance matrix \(\mathbf{\Sigma}\left(\mathbf{\theta}\right)\), corresponding to the true mean and variance-covariance matrix of the underlying distribution. That is, the Gaussian-based likelihood function of \(\mathbf{\theta}\) is given by \[l_{G}\left(\mathbf{\theta}\right)=-\frac{nm}{2}\log 2\pi-\frac{n}{2}\log\left|\mathbf{\Sigma}\left(\mathbf{\theta}\right)\right|-\frac{1}{2}\sum\limits_{i=1}^{n}\left(\mathbf{y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right)\right)^{T}\mathbf{\Sigma}\left(\mathbf{\theta}\right)^{-1}\left(\mathbf{y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right)\right) \tag{3}\] for any \(\mathbf{y}_{1},...,\mathbf{y}_{n}\) independent observations of the population \(\mathbf{Y}\), and the Gaussian MLE of \(\mathbf{\theta}\) is defined by \[\widehat{\mathbf{\theta}}_{G}=\arg\max_{\mathbf{\theta}\in\Theta}l_{G}\left(\mathbf{\theta}\right).\] The Gaussian estimator is an MLE and so inherits all the good properties of likelihood estimators. It works well in terms of asymptotic efficiency, but it has important robustness problems. That is, in the absence of contamination in the data, the MLE consistently estimates the true value of the model parameter, but, on the other hand, it may be quite heavily affected by outlying observations in the data. For this reason, Castilla and Zografos (2022) extended the concept of the Gaussian estimator and defined a robust version of the estimator based on the density power divergence (DPD) introduced in Basu et al. (1998). The DPD robustly quantifies the statistical difference between two distributions and it has been widely used for developing robust inferential methods in many different statistical models. Given a set of observations, the robust minimum DPD estimator (MDPDE) is computed as the minimizer of the DPD between the assumed model distribution and the empirical distribution of the data. The MDPDE enjoys good asymptotic properties and produces robust estimators under general statistical models, as discussed later.
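As a concrete illustration, the Gaussian-based log-likelihood of Eq. (3) is straightforward to evaluate once \(\mathbf{\mu}(\mathbf{\theta})\) and \(\mathbf{\Sigma}(\mathbf{\theta})\) are available; a minimal NumPy sketch (the function and argument names are ours, not from the paper):

```python
import numpy as np

def gaussian_loglik(y, mu, sigma):
    """Gaussian-based log-likelihood l_G(theta) of Eq. (3).

    y     : (n, m) array of observations
    mu    : (m,) model mean vector mu(theta)
    sigma : (m, m) model covariance matrix Sigma(theta)
    """
    y = np.atleast_2d(y)
    n, m = y.shape
    diff = y - mu
    _, logdet = np.linalg.slogdet(sigma)
    quad = np.einsum("ij,jk,ik->", diff, np.linalg.inv(sigma), diff)
    return -0.5 * n * m * np.log(2 * np.pi) - 0.5 * n * logdet - 0.5 * quad
```

The Gaussian MLE would then be obtained by maximizing this function over \(\mathbf{\theta}\) numerically, for example with a general-purpose optimizer applied to its negative.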
The minimum density power divergence Gaussian estimator (MDPDGE) of the parameter \(\mathbf{\theta}\), is defined for \(\tau\geq 0\) as \[\widehat{\mathbf{\theta}}_{G}^{\tau}=\arg\max_{\mathbf{\theta}\in\Theta\subset\mathbb{R }^{d}}H_{n}^{\tau}\left(\mathbf{\theta}\right) \tag{4}\] where \[H_{n}^{\tau}\left(\mathbf{\theta}\right)= \frac{\tau+1}{\tau\left(2\pi\right)^{m\tau/2}\left|\mathbf{\Sigma} \left(\mathbf{\theta}\right)\right|^{\tau/2}}\frac{1}{n}\left[\sum\limits_{i=1}^{n }\exp\left\{-\frac{\tau}{2}\left(\mathbf{y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right) \right)^{T}\mathbf{\Sigma}\left(\mathbf{\theta}\right)^{-1}\left(\mathbf{y}_{i}-\mathbf{\mu} \left(\mathbf{\theta}\right)\right)\right\}\right.\] \[\left.-\frac{\tau}{\left(1+\tau\right)^{\left(m/2\right)+1}} \right]-\frac{1}{\tau}\] \[= a\left|\mathbf{\Sigma}\left(\mathbf{\theta}\right)\right|^{-\frac{\tau}{ 2}}\frac{1}{n}\left[\sum\limits_{i=1}^{n}\exp\left\{-\frac{\tau}{2}\left(\bm {y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right)\right)^{T}\mathbf{\Sigma}\left(\mathbf{\theta }\right)^{-1}\left(\mathbf{y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right)\right)\right\} -b\right]-\frac{1}{\tau}, \tag{5}\] and \[a=\frac{\tau+1}{\tau\left(2\pi\right)^{m\tau/2}}\text{ \ and }b=\frac{\tau}{ \left(1+\tau\right)^{\left(m/2\right)+1}}. \tag{6}\] The MDPDGE family is indexed by a tuning parameter \(\tau\) controlling the trade-off between robustness and efficiency; the greater value of \(\tau\), the more robust the resulting estimator is but less efficiency. It has been shown in the literature that values of the tuning parameter above \(1\) do not provide sufficiently efficient estimators and so, the tuning parameter would be chosen in the \(\left[0,1\right]\) interval. Further, at \(\tau=0\) the MDPDGE reduces to the Gaussian estimator of Zhang (2019), \[\widehat{\mathbf{\theta}}_{G}=\arg\max_{\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d} }H_{n}^{0}\left(\mathbf{\theta}\right)\] with \[H_{n}^{0}\left(\mathbf{\theta}\right)=\lim_{\tau\to 0}H_{n}^{\tau}\left(\mathbf{ \theta}\right)=-\frac{n}{2}\log|\mathbf{\Sigma}\left(\mathbf{\theta}\right)|-\frac{1}{ 2}\sum\limits_{i=1}^{n}\left(\mathbf{y}_{i}-\mathbf{\mu}\left(\mathbf{\theta}\right) \right)^{T}\mathbf{\Sigma}\left(\mathbf{\theta}\right)^{-1}\left(\mathbf{y}_{i}-\mathbf{\mu} \left(\mathbf{\theta}\right)\right). \tag{7}\] Note the above objective function does not perfectly match with the likelihood function of the model stated in (2) as it lacks the first term of the likelihood. However, this term does not depend on the parameter \(\mathbf{\theta}\) and thus both loss functions will lead to the same maximizer. Indeed, the loss in Equation (7) corresponds to the Kullback-Leiber divergence between the assumed normal distribution and the empirical distribution of the data, which justifies the MLE from the information theory. The Kullback-Leiber divergence is the limiting divergence on the DPD family at \(\tau=0\) and so the MDPDGE is a generalization of the classical Gaussian estimator with a tuning parameter controlling the compromise between efficiency and robustness. 
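A minimal numerical sketch of the MDPDGE objective (5) in the same univariate exponential setting follows (again not from the paper). One reading assumption is made explicit: the constant \(b\) is applied per observation, i.e. the objective is evaluated as \(a\,|\Sigma(\theta)|^{-\tau/2}\big(\tfrac{1}{n}\sum_{i}e^{-\frac{\tau}{2}q_{i}}-b\big)-\tfrac{1}{\tau}\), which is the reading of (5) consistent with the density power divergence construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def H_tau(theta, y, tau):
    """Objective H_n^tau(theta) of Equation (5) for a univariate model (m = 1)
    with mu(theta) = theta and sigma^2(theta) = theta**2; requires tau > 0."""
    m = 1
    mu, sigma2 = theta, theta ** 2
    a = (tau + 1) / (tau * (2 * np.pi) ** (m * tau / 2))
    b = tau / (1 + tau) ** (m / 2 + 1)
    quad = (y - mu) ** 2 / sigma2                  # Mahalanobis-type distances
    return a * sigma2 ** (-tau / 2) * (np.mean(np.exp(-tau / 2 * quad)) - b) - 1 / tau

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=200)

for tau in (0.2, 0.5, 1.0):
    res = minimize_scalar(lambda t: -H_tau(t, y, tau), bounds=(1e-3, 20.0), method="bounded")
    print(f"MDPDGE with tau={tau}: {res.x:.3f}")
```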
Further, the MDPDGE is consistent and the asymptotically normal, that is, given \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) independent and identically distributed vectors from the \(m\)-dimensional random vector \(\boldsymbol{Y}\), the MDPDGE, \(\widehat{\boldsymbol{\theta}}_{G}^{\tau}\), defined in (4) satisfies \[\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_{G}^{\tau}-\boldsymbol{\theta} \right)\underset{n\longrightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}} \mathcal{N}(\boldsymbol{0}_{d},\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol {\theta})^{-1}\boldsymbol{K}_{\boldsymbol{\tau}}(\boldsymbol{\theta}) \boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta})^{-1}) \tag{8}\] being \[\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta})=\left(J_{\tau}^{ij} \left(\boldsymbol{\theta}\right)\right)_{i,j=1,..,d}\text{ and }\boldsymbol{K}_{\boldsymbol{\tau}}(\boldsymbol{\theta})=\left(K_{\tau}^{ij} \left(\boldsymbol{\theta}\right)\right)_{i,j=1,..,d}.\] and the elements \(J_{\tau}^{ij}\left(\boldsymbol{\theta}\right)\) and \(K_{\tau}^{ij}\left(\boldsymbol{\theta}\right)\) of the matrices \(J_{\tau}\left(\boldsymbol{\theta}\right)\) and \(K_{\tau}\left(\boldsymbol{\theta}\right)\) are given by \[J_{\tau}^{ij}\left(\boldsymbol{\theta}\right) = \left(\frac{1}{\left(2\pi\right)^{m/2}\left|\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)\right|^{1/2}}\right)^{\tau}\frac{1}{\left(1+ \tau\right)^{\left(m/2\right)+2}}\] \[\left[\left(\tau+1\right)trace\left(\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\mu}\left(\boldsymbol {\theta}\right)}{\partial\boldsymbol{\theta}}\left(\frac{\partial\boldsymbol{ \mu}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)^{T}\right)\right.\] \[\left.+\Delta_{\tau}^{i}\Delta_{\tau}^{j}+\frac{1}{2}trace \left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\theta_{j}} \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\theta_{i}}\right)\right]\] and \[K_{\tau}^{ij}\left(\boldsymbol{\theta}\right) = \left(\frac{1}{\left(2\pi\right)^{m/2}\left|\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)\right|^{1/2}}\right)^{2\tau}\frac{1}{\left(1+ 2\tau\right)^{\left(m/2\right)+2}}\left[\Delta_{2\tau}^{i}\Delta_{2\tau}^{j}\] \[+\left(1+2\tau\right)trace\left(\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\mu}\left(\boldsymbol {\theta}\right)}{\partial\boldsymbol{\theta}}\left(\frac{\partial\boldsymbol{\mu }\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)^{T}\right)\] \[+\frac{1}{2}trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{ \theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta} \right)}{\partial\theta_{j}}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^ {-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{ \partial\theta_{i}}\right)\right]\] \[-\left(\frac{1}{\left(2\pi\right)^{m/2}\left|\boldsymbol{ \Sigma}\left(\boldsymbol{\theta}\right)\right|^{1/2}}\right)^{2\tau}\frac{1}{ \left(1+\tau\right)^{m+2}}\Delta_{\tau}^{i}\Delta_{\tau}^{j},\] with \(\Delta_{\tau}^{i}=\frac{\tau}{2}trace\left(\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)}{\partial\theta_{i}}\right).\) The above asymptotic distribution follows from Theorem 2 in Basu et al. 
(2018b), after explicitly computing the matrices \(\boldsymbol{J}_{\tau}\left(\boldsymbol{\theta}\right)\) and \(\boldsymbol{K}_{\tau}\left(\boldsymbol{\theta}\right)\) defined there for general statistical models. Additionally, in some situations we may have additional knowledge about the true parameter space. For example, the parameter of the exponential or Poisson models must always be positive. Then, the parameter space \(\Theta\) should be restricted to satisfy such constraints. Here, we will consider restricted parameter spaces of the form \[\Theta_{0}=\left\{\boldsymbol{\theta}\in\Theta\ /\ \boldsymbol{g}(\boldsymbol{\theta})=\boldsymbol{0}_{r}\right\}, \tag{11}\] where \(\boldsymbol{g}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{r}\) is a vector-valued function such that the \(d\times r\) matrix \[\mathbf{G}\left(\boldsymbol{\theta}\right)=\frac{\partial\boldsymbol{g}^{T}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} \tag{12}\] exists and is continuous in \(\boldsymbol{\theta}\) with \(\text{rank}(\mathbf{G}\left(\boldsymbol{\theta}\right))=r\), and \(\mathbf{0}_{r}\) denotes the null vector of dimension \(r\). The notation \(\Theta_{0}\) anticipates the use of the present restricted estimator for defining test statistics under composite null hypotheses. The most popular estimator of \(\boldsymbol{\theta}\) satisfying the constraints in (11) is the restricted MLE (RMLE), naturally defined as the maximizer of the log-likelihood function of the model subject to the parameter space restrictions \(\boldsymbol{g}(\boldsymbol{\theta})=\mathbf{0}_{r}\) (see Silvey, 1975). Unfortunately, the RMLE has the same robustness problems as the MLE, and so robust alternatives should be adopted in the presence of contamination in the data. Several robust restricted estimators have been considered in the statistical literature to overcome the robustness drawback of the RMLE. For example, Pardo et al. (2002) introduced the restricted minimum Phi-divergence estimator and studied its properties. Basu et al (2018b) presented the restricted minimum density power divergence estimators (RMDPDE) and studied some of their applications in testing hypotheses. In Ghosh (2015) the theoretical robustness properties of the RMDPDE were studied. Jaenada et al (2022a, 2022b) considered the restricted Renyi pseudodistance estimator and from it defined Rao-type tests. More recently, Martin (2021) studied the RMDPDE under normal distributions and developed independence tests under the normal assumption, and later Martin (2023) used the RMDPDE in the context of independent but not identically distributed variables under heteroscedastic linear regression models. In this paper, we introduce and study the restricted minimum density power divergence Gaussian estimator (RMDPDGE). The rest of the paper is organized as follows: In Section 2 we introduce the RMDPDGE and we obtain its asymptotic distribution. Section 3 presents the influence function of the RMDPDGE and theoretically proves the robustness of the proposed estimators. Some statistical applications for testing are presented in Section 4, where explicit expressions of the Rao-type test statistics based on the RMDPDGE under exponential and Poisson models are given. Section 5 empirically demonstrates the robustness of the method through a simulation study, and the advantages and disadvantages of the Gaussian assumption are discussed there. Section 6 presents some conclusions.
The proofs of the main results stated in the paper are included in an Appendix.

## 2 Restricted minimum density power divergence Gaussian estimators

In this section we present the RMDPDGE under general non-linear equality constraints and we study its asymptotic distribution, showing the consistency of the estimator.

**Definition 1**: _Let \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}\). The RMDPDGE, \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau},\) is defined by_ \[\widetilde{\boldsymbol{\theta}}_{G}^{\tau}=\arg\max_{\boldsymbol{\theta}\in\Theta_{0}}H_{n}^{\tau}\left(\boldsymbol{\theta}\right),\] _where \(H_{n}^{\tau}\left(\boldsymbol{\theta}\right)\) is as given in Equation (5) and \(\Theta_{0}=\left\{\boldsymbol{\theta}\in\Theta\ /\ \boldsymbol{g}(\boldsymbol{\theta})=\mathbf{0}_{r}\right\}\) is the restricted parameter space defined in (11)._

Before presenting the asymptotic distribution of the RMDPDGE we present some previous results whose proofs are included in the Appendix.

**Proposition 2**: _Let \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Then,_ \[\sqrt{n}\left(\frac{1}{\tau+1}\frac{\partial H_{n}^{\tau}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)\underset{n\longrightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\mathcal{N}(\mathbf{0}_{d},\boldsymbol{K}_{\boldsymbol{\tau}}(\boldsymbol{\theta})),\] _where \(\boldsymbol{K}_{\boldsymbol{\tau}}(\boldsymbol{\theta})\) was defined in (10)._

**Proof.** See Appendix A.

**Proposition 3**: _Let \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Then,_ \[\frac{\partial^{2}H_{n}^{\tau}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{T}}\underset{n\longrightarrow\infty}{\overset{\mathcal{P}}{\longrightarrow}}-\left(\tau+1\right)\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta}),\] _where \(\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta})\) was defined in (9)._

**Proof.** See Appendix B.

A small numerical sketch of the constrained estimation in Definition 1 is given below. Next, we present the asymptotic distribution of \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}\).
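The following Python sketch illustrates Definition 1 on a toy problem (not from the paper): a two-dimensional model with \(\boldsymbol{\mu}(\boldsymbol{\theta})=\boldsymbol{\theta}\), \(\boldsymbol{\Sigma}(\boldsymbol{\theta})=\boldsymbol{I}_{2}\), and the single constraint \(g(\boldsymbol{\theta})=\theta_{1}-\theta_{2}=0\); the optimizer and the data-generating choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def H_tau(theta, Y, tau):
    """H_n^tau for a toy 2-dimensional model: mu(theta) = theta, Sigma(theta) = I_2,
    so |Sigma| = 1 and the quadratic form reduces to a squared Euclidean distance."""
    m = Y.shape[1]
    a = (tau + 1) / (tau * (2 * np.pi) ** (m * tau / 2))
    b = tau / (1 + tau) ** (m / 2 + 1)
    quad = np.sum((Y - theta) ** 2, axis=1)
    return a * (np.mean(np.exp(-tau / 2 * quad)) - b) - 1 / tau

rng = np.random.default_rng(2)
Y = rng.normal(loc=[1.0, 1.2], scale=1.0, size=(300, 2))

tau = 0.3
constraint = {"type": "eq", "fun": lambda th: th[0] - th[1]}   # g(theta) = theta_1 - theta_2 = 0
res = minimize(lambda th: -H_tau(th, Y, tau), x0=np.array([0.5, 0.5]),
               constraints=[constraint], method="SLSQP")
print("RMDPDGE under the constraint theta_1 = theta_2:", res.x)
```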
**Theorem 4**: _Let \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) be independent and identically distributed observations from a \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Suppose the true distribution of \(\boldsymbol{Y}\) belongs to the model and we consider \(\boldsymbol{\theta}\in\boldsymbol{\Theta}_{0}\). Then the RMDPDGE \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}\) of \(\boldsymbol{\theta}\) obtained under the constraints \(\boldsymbol{g}(\boldsymbol{\theta})=\mathbf{0}_{r}\) has the asymptotic distribution,_ \[n^{1/2}(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}-\boldsymbol{\theta}) \underset{n\longrightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}} \mathcal{N}(\mathbf{0}_{d},\boldsymbol{M}_{\boldsymbol{\tau}}(\boldsymbol{ \theta})) \tag{13}\] _where_ \[\boldsymbol{M_{\tau}(\theta)}=\boldsymbol{P}_{\tau}^{*}(\boldsymbol{\theta}) \boldsymbol{K_{\tau}}\left(\boldsymbol{\theta}\right)\boldsymbol{P}_{\tau}^{*}( \boldsymbol{\theta})^{T},\] \[\boldsymbol{P}_{\tau}^{*}(\boldsymbol{\theta})=\boldsymbol{J_{\tau}}( \boldsymbol{\theta})^{-1}-\boldsymbol{Q_{\tau}}(\boldsymbol{\theta})\boldsymbol {G}(\boldsymbol{\theta})^{T}\boldsymbol{J_{\tau}}(\boldsymbol{\theta})^{-1}, \tag{14}\] \[\boldsymbol{Q_{\tau}}(\boldsymbol{\theta})=\boldsymbol{J}_{\tau}^{-1}( \boldsymbol{\theta})\boldsymbol{G}(\boldsymbol{\theta})\left[\boldsymbol{G}( \boldsymbol{\theta})^{T}\boldsymbol{J_{\tau}}(\boldsymbol{\theta})^{-1} \boldsymbol{G}(\boldsymbol{\theta})\right]^{-1}, \tag{15}\] _and \(\boldsymbol{J_{\tau}}(\boldsymbol{\theta})\) and \(\boldsymbol{K_{\tau}}\left(\boldsymbol{\theta}\right)\) were defined in (9) and (10), respectively._ **Proof.** The estimating equations for the RMDPDGE are given by \[\left\{\begin{array}{c}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{ \tau}(\boldsymbol{\theta})+\boldsymbol{G}(\boldsymbol{\theta})\boldsymbol{ \lambda}_{n}=\boldsymbol{0}_{d},\\ \boldsymbol{g}(\widetilde{\boldsymbol{\theta}}_{G}^{\tau})=\boldsymbol{0}_{ r},\end{array}\right. \tag{16}\] where \(\boldsymbol{\lambda}_{n}\) is a vector of Lagrangian multipliers. Now we consider \(\boldsymbol{\theta}_{n}=\boldsymbol{\theta}+\boldsymbol{mn}^{-1/2}\), where \(||\boldsymbol{m}||<k\), for \(0<k<\infty\). 
We have, \[\frac{\partial}{\partial\boldsymbol{\theta}}\left.H_{n}^{\tau}(\boldsymbol{ \theta})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{n}}=\frac{\partial} {\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta})+\frac{\partial ^{2}}{\partial\boldsymbol{\theta}^{T}\partial\boldsymbol{\theta}}\left.H_{n}^ {\tau}(\boldsymbol{\theta})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_ {*}}(\boldsymbol{\theta}_{n}^{*}-\boldsymbol{\theta})\] and \[n^{1/2}\left.\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}( \boldsymbol{\theta})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{n}}=n^ {1/2}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta})+\frac{\partial^{2}}{\partial\boldsymbol{\theta}^{T}\partial \boldsymbol{\theta}}\left.H_{n}^{\tau}(\boldsymbol{\theta})\right|_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{*}}n^{1/2}(\boldsymbol{\theta}_{n}- \boldsymbol{\theta}) \tag{17}\] where \(\boldsymbol{\theta}^{*}\) belongs to the segment joining \(\boldsymbol{\theta}\) and \(\boldsymbol{\theta}_{n}\) Since \[\lim_{n\to\infty}\frac{\partial^{2}}{\partial\boldsymbol{\theta}^{T}\partial \boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta})=-\left(\tau+1\right) \boldsymbol{J_{\tau}}(\boldsymbol{\theta})\] we obtain \[n^{1/2}\left.\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}( \boldsymbol{\theta})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{n}}=n^ {1/2}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta})-\left(\tau+1\right)n^{1/2}\boldsymbol{J_{\tau}}(\boldsymbol{\theta}) (\boldsymbol{\theta}_{n}-\boldsymbol{\theta})+o_{p}(1). \tag{18}\] Taking into account that \(\boldsymbol{G}(\boldsymbol{\theta})\) is continuous in \(\boldsymbol{\theta}\) \[n^{1/2}\boldsymbol{g}(\boldsymbol{\theta}_{n})=\boldsymbol{G}(\boldsymbol{ \theta})^{T}n^{1/2}(\boldsymbol{\theta}_{n}-\boldsymbol{\theta})+o_{p}(1). \tag{19}\] The RMDPDGE \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}\) must satisfy the conditions in (16), and in view of (18) we have \[n^{1/2}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta})-\left(\tau+1\right)\boldsymbol{J_{\tau}}(\boldsymbol{\theta})n^{1/2}( \widetilde{\boldsymbol{\theta}}_{G}^{\tau}-\boldsymbol{\theta})+\boldsymbol{G} (\boldsymbol{\theta})n^{1/2}\boldsymbol{\lambda}_{n}+o_{p}(1)=\boldsymbol{0} _{p}. \tag{20}\] From (19) it follows that \[\boldsymbol{G}^{T}(\boldsymbol{\theta})n^{1/2}(\widetilde{\boldsymbol{\theta}} _{G}^{\tau}-\boldsymbol{\theta})+o_{p}(1)=\boldsymbol{0}_{r}. 
\tag{21}\] Now we can express equations (20) and (21) in the matrix form as \[\left(\begin{array}{cc}\left(\tau+1\right)\boldsymbol{J_{\tau}}(\boldsymbol{ \theta})&-\boldsymbol{G}(\boldsymbol{\theta})\\ -\boldsymbol{G}^{T}(\boldsymbol{\theta})&\boldsymbol{0}\end{array}\right) \left(\begin{array}{c}n^{1/2}(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}- \boldsymbol{\theta})\\ n^{1/2}\boldsymbol{\lambda}_{n}\end{array}\right)=\left(\begin{array}{c}n^{1/ 2}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta})\\ \boldsymbol{0}\end{array}\right)+o_{p}(1).\] Therefore \[\left(\begin{array}{c}n^{1/2}(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}- \boldsymbol{\theta})\\ n^{1/2}\boldsymbol{\lambda}_{n}\end{array}\right)=\left(\begin{array}{cc} \left(\tau+1\right)\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta})&- \boldsymbol{G}(\boldsymbol{\theta})\\ -\boldsymbol{G}^{T}(\boldsymbol{\theta})&\boldsymbol{0}\end{array}\right)^{-1 }\left(\begin{array}{c}n^{1/2}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n }^{\tau}(\boldsymbol{\theta})\\ \boldsymbol{0}_{r}\end{array}\right)+o_{p}(1).\] But \[\left(\begin{array}{cc}\left(\tau+1\right)\boldsymbol{J}_{\boldsymbol{\tau}} (\boldsymbol{\theta})&-\boldsymbol{G}(\boldsymbol{\theta})\\ -\boldsymbol{G}^{T}(\boldsymbol{\theta})&\boldsymbol{0}\end{array}\right)^{-1 }=\left(\begin{array}{cc}\boldsymbol{L}_{\tau}^{*}(\boldsymbol{\theta})& \boldsymbol{Q}_{\tau}(\boldsymbol{\theta})\\ \boldsymbol{Q}_{\tau}(\boldsymbol{\theta}_{0})^{T}&\boldsymbol{R}_{\tau}( \boldsymbol{\theta})\end{array}\right),\] where \[\boldsymbol{L}_{\tau}^{*}(\boldsymbol{\theta}) = \frac{1}{\tau+1}\left(\boldsymbol{J}_{\boldsymbol{\tau}}( \boldsymbol{\theta})^{-1}-\boldsymbol{Q}_{\boldsymbol{\tau}}(\boldsymbol{ \theta})\boldsymbol{G}(\boldsymbol{\theta})^{T}\boldsymbol{J}_{\boldsymbol{ \tau}}(\boldsymbol{\theta})^{-1}\right)\] \[= \frac{1}{\tau+1}\boldsymbol{P}_{\tau}^{*}(\boldsymbol{\theta})\] \[\boldsymbol{Q}_{\tau}(\boldsymbol{\theta}) = \boldsymbol{J}_{\boldsymbol{\tau}}^{-1}(\boldsymbol{\theta}) \boldsymbol{G}(\boldsymbol{\theta})\left[\boldsymbol{G}(\boldsymbol{\theta})^{ T}\boldsymbol{J}_{\boldsymbol{\tau}}(\boldsymbol{\theta})^{-1}\boldsymbol{G}( \boldsymbol{\theta})\right]^{-1}\] \[\boldsymbol{R}_{\tau}(\boldsymbol{\theta}) = \boldsymbol{G}(\boldsymbol{\theta})^{T}\boldsymbol{J}_{\boldsymbol{ \tau}}(\boldsymbol{\theta})^{-1}\boldsymbol{G}(\boldsymbol{\theta})\] and \(\boldsymbol{P}_{\tau}^{*}(\boldsymbol{\theta}_{0})\) and \(\boldsymbol{Q}_{\tau}(\boldsymbol{\theta}_{0})\) are as given in (14) and (15) respectively. Then, \[n^{1/2}(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}-\boldsymbol{\theta})=\left( \tau+1\right)^{-1}\boldsymbol{P}_{\tau}^{*}(\boldsymbol{\theta})n^{1/2}\frac {\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta})+o_{p }(1), \tag{22}\] and we know by Proposition 2 that \[n^{1/2}\left(\tau+1\right)^{-1}\frac{\partial}{\partial\boldsymbol{\theta}}H_ {n}^{\tau}(\boldsymbol{\theta})\underset{n\longrightarrow\infty}{\overset{ \mathcal{L}}{\longrightarrow}}\mathcal{N}\left(\boldsymbol{0},\boldsymbol{K}_{ \boldsymbol{\tau}}\left(\boldsymbol{\theta}\right)\right). \tag{23}\] Now by (22) and (23) we have the desired result presented in (13). **Remark 5**: _Notice that the result in (8) is a special case of the previous theorem when there is no restriction on the parametric space, in the sense that \(\boldsymbol{G}\), defined in (12), is the null matrix. 
In this case the matrix \(\boldsymbol{P}_{\boldsymbol{\tau}}^{*}(\boldsymbol{\theta})\) given in (14) becomes \(\boldsymbol{P}_{\boldsymbol{\tau}}^{*}(\boldsymbol{\theta})=\boldsymbol{J}_{ \boldsymbol{\tau}}(\boldsymbol{\theta})^{-1}.\) Therefore, the asymptotic variance-covariance matrix of the unrestricted estimator, i.e., the MDPDGE, may be reconstructed from the previous theorem._ The MDPDGE is an optimum of a differentiable function \(H_{n}^{\tau},\) so it must annul it first derivatives. Using that * \(\frac{\partial\left|\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right) \right|^{-\tau/2}}{\partial\boldsymbol{\theta}}=-\frac{\tau}{2}\left| \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{-\tau/2}trace \left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{ \theta}}\right)\) * \(\frac{\partial}{\partial\boldsymbol{\theta}}\left(\boldsymbol{y}_{i}-\boldsymbol {\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\right)=-2\left(\frac{\partial\boldsymbol{\mu}( \boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)^{T}\boldsymbol{ \Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i}-\boldsymbol {\mu}\left(\boldsymbol{\theta}\right)\right)-\) and taking derivatives in (5), for a certain fixed \(\tau\), we have that \[\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta}) = \frac{1}{n}\sum\limits_{i=1}^{n}\left\{-a\;\frac{\tau}{2}\left| \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{-\tau/2}trace \left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{ \theta}}\right)\right.\] \[\left.\exp\left(-\frac{\tau}{2}\left(\boldsymbol{y}_{i}- \boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\boldsymbol{ \Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i}- \boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right)+ba\;\frac{ \tau}{2}\left|\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{- \tau/2}\right.\] \[\left.trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta} \right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta} \right)}{\partial\boldsymbol{\theta}}\right)+a\;\frac{\tau}{2}\left| \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{-\tau/2}\exp \left(-\frac{\tau}{2}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\right)^{T}\boldsymbol{\Sigma}\left(\boldsymbol{ \theta}\right)^{-1}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\right)\right)\right.\] \[\left.\left[2\left(\frac{\partial\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)^{T} \boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i }-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right.\right.\] \[\left.+\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\right)^{T}\left(\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left( \boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)^{-1}\right)\left(\boldsymbol{y}_{i}- \boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right]\right\}\] \[:= 
\frac{1}{n}\sum\limits_{i=1}^{n}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta}),\] with \[\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta})=a\,\frac{\tau}{2}\left|\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{-\tau/2}\left\{\left[-trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)+2\left(\frac{\partial\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)^{T}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)+\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\right)\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right]\exp\left(-\frac{\tau}{2}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{y}_{i}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right)+b\,trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)\right\}. \tag{24}\] Therefore the estimating equations of the MDPDGE for a fixed parameter \(\tau\) are given by \[\sum\limits_{i=1}^{n}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta})=\boldsymbol{0}_{d}. \tag{25}\] The previous estimating equations characterize the MDPDGE as an M-estimator, and so its asymptotic distribution could also have been derived from the general theory of M-estimators. In particular, the MDPDGE, \(\widehat{\boldsymbol{\theta}}_{G}^{\tau}\), satisfies for any \(\tau\geq 0\) \[\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_{G}^{\tau}-\boldsymbol{\theta}\right)\underset{n\longrightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\mathcal{N}\left(\boldsymbol{0}_{d},\boldsymbol{S}^{-1}\boldsymbol{M}\boldsymbol{S}^{-1}\right) \tag{26}\] with \[\boldsymbol{S}=-E\left[\frac{\partial^{2}H_{n}^{\tau}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{T}}\right]\;\text{and}\;\boldsymbol{M}=Cov\left[\sqrt{n}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta})\right]. \tag{27}\] Based on Propositions 2 and 3 we can express the previous matrices as \[\boldsymbol{S}=\left(\tau+1\right)\boldsymbol{J}_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}\right)\;\text{and}\;\;\boldsymbol{M}=\left(\tau+1\right)^{2}\boldsymbol{K}_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}\right)\] and we get back the expressions stated in (8). The asymptotic convergence in (26) offers an alternative proof of the asymptotic distribution of the MDPDGE stated in Castilla and Zografos (2019), in terms of the transformed matrices \(\mathbf{S}\) and \(\mathbf{M}\) in Equation (27).

## 3 Influence function for the RMDPDGE

To analyze the robustness of an estimator, Hampel et al. (1986) introduced the concept of the Influence Function (IF).
Since then, the IF has been widely used in the statistical literature to measure robustness in different statistical contexts. Intuitively, the IF describes the effect of an infinitesimal contamination of the model on the estimation. Robust estimators should be less affected by contamination, and thus the IFs associated with locally robust (B-robust) estimators should be bounded. The IF of an estimator, \(\widetilde{\mathbf{\theta}}_{G}^{\tau}\), is defined in terms of its statistical functional \(\widetilde{T}_{\tau}\) satisfying \(\widetilde{T}_{\tau}(g)=\widetilde{\mathbf{\theta}}_{G}^{\tau}\), where \(g\) is the true density function underlying the data. Given the density function \(g\), we define its contaminated version at the perturbation point \(\mathbf{y}_{0}\) as \[g_{\varepsilon}=(1-\varepsilon)g+\varepsilon\Delta_{\mathbf{y}_{0}}, \tag{28}\] where \(\varepsilon\) is the fraction of contamination and \(\Delta_{\mathbf{y}_{0}}\) denotes the degenerate distribution at \(\mathbf{y}_{0}\). Then, the IF of \(\widetilde{\theta}_{G}^{\tau}\) is defined as the derivative of the functional at \(\varepsilon=0,\) \[IF(\mathbf{y}_{0},\widetilde{T}_{\tau})=\left.\frac{\partial\widetilde{T}_{\tau}(g_{\varepsilon})}{\partial\varepsilon}\right|_{\varepsilon=0}.\] The above derivative quantifies the rate of change of the sample estimator as the fraction of contamination changes, i.e. how much the estimate is affected by contamination. Let us now obtain the IF of the RMDPDGE. We consider the contaminated model \[g_{\varepsilon}(\mathbf{y})=(1-\varepsilon)f_{\mathbf{\theta}}(\mathbf{y})+\varepsilon\Delta_{\mathbf{y}_{0}},\] with \(\Delta_{\mathbf{y}_{0}}\) the distribution degenerated at the mass point \(\mathbf{y}_{0}\), and \(f_{\mathbf{\theta}}\) the assumed probability density function of a normal population. The RMDPDGE for the contaminated model is then given by \(\widetilde{\mathbf{\theta}}_{G,\varepsilon}^{\tau}=\widetilde{T}_{\tau}(g_{\varepsilon})\). By definition, \(\widetilde{\mathbf{\theta}}_{G,\varepsilon}^{\tau}\) is the maximizer of the objective function \(H_{n}^{\tau}(\mathbf{\theta})\) in (5), subject to the constraints \(\mathbf{g}(\widetilde{\mathbf{\theta}}_{G,\varepsilon}^{\tau})=\mathbf{0}_{r}\). Using the characterization of the MDPDGE as an M-estimator, we have that the influence function of the MDPDGE is given by \[IF(\mathbf{y},\widetilde{T}_{\tau},\mathbf{\theta})=\mathbf{J}_{\tau}(\mathbf{\theta})^{-1}\mathbf{\Psi}_{\mathbf{\tau}}(\mathbf{y};\mathbf{\theta}), \tag{29}\] where \(\mathbf{J}_{\tau}(\mathbf{\theta})\) was defined in (9) and \(\mathbf{\Psi}_{\mathbf{\tau}}(\mathbf{y};\mathbf{\theta})\) in (24). The influence function of the RMDPDGE will be obtained with the additional condition \(\mathbf{g}(\widetilde{\mathbf{\theta}}_{G,\varepsilon}^{\tau})=\mathbf{0}_{r}\). Differentiating this last equation gives, at \(\varepsilon=0\), \[\mathbf{G}\left(\mathbf{\theta}\right)^{T}IF(\mathbf{y},\widetilde{T}_{\tau},\mathbf{\theta})=\mathbf{0}. \tag{30}\]
Based on (29) and (30) we have \[\left(\begin{array}{c}\boldsymbol{J}_{\tau}(\boldsymbol{\theta})\\ \mathbf{G}\left(\boldsymbol{\theta}\right)^{T}\end{array}\right)IF(\boldsymbol{y},\widetilde{T}_{\tau},\boldsymbol{\theta})=\left(\begin{array}{c}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y};\boldsymbol{\theta})\\ \boldsymbol{0}\end{array}\right).\] Therefore, \[\left(\boldsymbol{J}_{\tau}(\boldsymbol{\theta})^{T},\mathbf{G}\left(\boldsymbol{\theta}\right)\right)\left(\begin{array}{c}\boldsymbol{J}_{\tau}(\boldsymbol{\theta})\\ \mathbf{G}\left(\boldsymbol{\theta}\right)^{T}\end{array}\right)IF(\boldsymbol{y},\widetilde{T}_{\tau},\boldsymbol{\theta})=\boldsymbol{J}_{\tau}(\boldsymbol{\theta})^{T}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y};\boldsymbol{\theta})\] and the influence function of the RMDPDGE, \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}\), is given by \[IF(\boldsymbol{y},\widetilde{T}_{\tau},\boldsymbol{\theta})=\left(\boldsymbol{J}_{\tau}(\boldsymbol{\theta})^{T}\boldsymbol{J}_{\tau}(\boldsymbol{\theta})+\mathbf{G}\left(\boldsymbol{\theta}\right)\mathbf{G}\left(\boldsymbol{\theta}\right)^{T}\right)^{-1}\boldsymbol{J}_{\tau}(\boldsymbol{\theta})^{T}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y};\boldsymbol{\theta}). \tag{31}\] We can observe that the influence function of \(\widetilde{\boldsymbol{\theta}}_{G}^{\tau}\), obtained in (31), will be bounded if the influence function of the MDPDGE, \(\widehat{\boldsymbol{\theta}}_{G}^{\tau}\), given in (29) is bounded. In general it is not easy to see whether it is bounded or not, but in particular situations it is not difficult. On the other hand, if there are no restrictions, then \(\mathbf{G}\left(\boldsymbol{\theta}\right)=\mathbf{0}\) and (31) coincides with (29). In Section 4.1 we shall present the expressions of \(J_{\tau}(\theta)\) and \(\Psi_{\tau}(y,\theta)\) for the exponential and Poisson models. Based on those results we present in Figure 1 the influence function of the MDPDGE, \(\widehat{\boldsymbol{\theta}}_{G}^{\tau}\), for \(\theta=4\) and \(\tau=0\), \(0.2\) and \(0.8\) for the exponential model. We can see that for \(\tau=0\) the influence function is not bounded, while for \(\tau=0.2\) and \(0.8\) it is bounded. This fact points out the robustness of the MDPDGE, \(\widehat{\boldsymbol{\theta}}_{G}^{\tau}\), for \(\tau>0\).

Figure 1. Influence function for \(\tau=0\) (red), \(0.2\) (black) and \(0.8\) (green) (exponential model).

## 4 Rao-type tests based on RMDPDGE

Recently many robust test statistics based on minimum distance estimators have been introduced in the statistical literature for testing under different statistical models. Among them, density power divergence and Renyi's pseudodistance based test statistics have shown very competitive performance with respect to classical tests in many different problems. Distance-based test statistics are essentially of two types: Wald-type tests and Rao-type tests. Some applications of these tests are the following: Basu et al. (2013, 2016, 2017, 2018a, 2018b, 2021, 2022a, 2022b), Castilla et al (2020, 2022a, 2022b), Ghosh et al (2016, 2021), Jaenada et al (2022a, 2022b), Martin (2021), Menendez et al (1995) and all references therein. In this section we introduce the Rao-type tests based on the RMDPDGE and we study their asymptotic properties, proving the consistency of the tests. We analyze here simple null hypotheses of the form \[H_{0}:\boldsymbol{\theta}=\boldsymbol{\theta}_{0}\text{ versus }H_{1}:\boldsymbol{\theta}\neq\boldsymbol{\theta}_{0}.
\tag{32}\] **Definition 6**: _Let \(\boldsymbol{Y}_{1},...,\boldsymbol{Y}_{n}\) be independent and identically distributed observations from a \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d},\) and consider the testing problem defined in (32). The Rao-type test statistic based on RMDPDGE, is defined by_ \[R_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0}\right)=\frac{1}{n} \boldsymbol{U}_{n}^{\boldsymbol{\tau}}(\boldsymbol{\theta}_{0})^{T}\boldsymbol {K}_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0}\right)^{-1}\boldsymbol{U }_{n}^{\boldsymbol{\tau}}(\boldsymbol{\theta}_{0}), \tag{33}\] _where_ \[\boldsymbol{U}_{n}^{\boldsymbol{\tau}}(\boldsymbol{\theta})=\left(\frac{1}{ \tau+1}\sum\limits_{i=1}^{n}\Psi_{\boldsymbol{\tau}}^{1}(\boldsymbol{y}_{i}; \boldsymbol{\theta}),...,\frac{1}{\tau+1}\sum\nolimits_{i=1}^{n}\Psi_{ \boldsymbol{\tau}}^{d}(\boldsymbol{y}_{i};\boldsymbol{\theta})\right)^{T},\] _is the score function defining the estimating equations of the MDPDGE and \(\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta })=\left(\Psi_{\boldsymbol{\tau}}^{1}(\boldsymbol{y}_{i};\boldsymbol{\theta }),....,\Psi_{\boldsymbol{\tau}}^{d}(\boldsymbol{y}_{i};\boldsymbol{\theta}) \right).\)_ Before deriving the asymptotic distribution of the Rao-type test based on the MDPDGE, it is interesting to note that we rewrite \[\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}\frac{1}{\tau+1}\boldsymbol{\Psi}_{ \boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta})=\sqrt{n}\frac{1}{ \tau+1}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{ \theta}),\] and hence by Proposition 2 we can stablish the asymptotic distribution of the score function \(\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\theta})\) \[\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}\frac{1}{\tau+1}\boldsymbol{\Psi}_{ \boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\theta})\underset{n\to \infty}{\overset{L}{\rightarrow}}\mathcal{N}\left(\boldsymbol{0}_{p}, \boldsymbol{K}_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}\right)\right).\] The next result establishes the asymptotic behaviour of the proposed Rao-type test statistic. 
**Theorem 7**: _Let \(\mathbf{Y}_{1},...,\)\(\mathbf{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\mathbf{Y}\) with \(E_{\mathbf{\theta}}\left[\mathbf{Y}\right]=\mathbf{\mu}\left(\mathbf{\theta}\right)\) and \(Cov_{\mathbf{\theta}}\left[\mathbf{Y}\right]=\mathbf{\Sigma}\left(\mathbf{\theta}\right)\), \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Under the null hypothesis given in (32) it holds_ \[R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\underset{n\rightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\chi_{d}^{2}.\]

**Proof.** As remarked before, the score function is asymptotically normal, \[\frac{1}{\sqrt{n}}\mathbf{U}_{n}^{\mathbf{\tau}}(\mathbf{\theta})=\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}\frac{1}{\tau+1}\mathbf{\Psi}_{\mathbf{\tau}}(\mathbf{y}_{i};\mathbf{\theta})=\sqrt{n}\frac{1}{\tau+1}\frac{\partial}{\partial\mathbf{\theta}}H_{n}^{\tau}(\mathbf{\theta})\underset{n\longrightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\mathcal{N}\left(\mathbf{0},\mathbf{K}_{\mathbf{\tau}}\left(\mathbf{\theta}\right)\right).\] Then, since \(R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\) is the quadratic form in this asymptotically normal vector standardized by \(\mathbf{K}_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)^{-1}\), the result follows.

**Remark 8**: _Based on Theorem 7, for large enough sample sizes, one can use the \(100(1-\alpha)\) percentile, \(\chi_{d,\alpha}^{2},\) of the chi-square distribution with \(d\) degrees of freedom, satisfying_ \[\Pr\left(\chi_{d}^{2}>\chi_{d,\alpha}^{2}\right)=\alpha,\] _to define the rejection region of the test with null hypothesis in (32),_ \[RC=\{R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)>\chi_{d,\alpha}^{2}\}.\]

For illustrative purposes, we present here the application of the proposed method to elliptical distributions.

**Example 9**: _(Elliptical distributions). The \(m\)-dimensional random vector \(\mathbf{Y}\) follows an elliptical distribution if its characteristic function has the form_ \[\varphi_{\mathbf{Y}}\left(\mathbf{t}\right)=\exp\left(i\mathbf{t}^{T}\mathbf{\mu}\right)\psi\left(\frac{1}{2}\mathbf{t}^{T}\mathbf{\Sigma}\mathbf{t}\right)\] _where \(\mathbf{\mu}\) is an \(m\)-dimensional vector, \(\mathbf{\Sigma}\) is a positive definite matrix and \(\psi(t)\) denotes the so-called characteristic generator function. The function \(\psi\) may depend on the dimension of the random vector \(\mathbf{Y}\). In general, it does not hold that \(\mathbf{Y}\) has a joint density function, \(f_{\mathbf{Y}}(\mathbf{y}),\) but if this density exists, it is given by_ \[f_{\mathbf{Y}}(\mathbf{y})=c_{m}\left|\mathbf{\Sigma}\right|^{-\frac{1}{2}}g_{m}\left(\frac{1}{2}\left(\mathbf{y}-\mathbf{\mu}\right)^{T}\mathbf{\Sigma}^{-1}\left(\mathbf{y}-\mathbf{\mu}\right)\right)\] _for some density generator function \(g_{m}\) which could depend on the dimension of the random vector. The elliptical distribution family is in the following denoted by \(E_{m}\left(\mathbf{\mu,\Sigma,}g_{m}\right).\) Moreover, if the density exists, the constant \(c_{m}\) is given explicitly by_ \[c_{m}=\left(2\pi\right)^{-\frac{m}{2}}\Gamma\left(\frac{m}{2}\right)\left(\int x^{\frac{m}{2}-1}g_{m}\left(x\right)dx\right)^{-1}.\] _For more details about the elliptical family \(E_{m}\left(\mathbf{\mu,\Sigma,}g_{m}\right)\) see Fang et al (1987), Gupta and Varga (1993), Cambanis et al (1981), Fang and Zhang (1990) and references therein._
_In Fang et al (1987), for instance, it can be seen that the mean vector and variance-covariance matrix can be obtained as_ \[E\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\text{ and }Cov\left[\boldsymbol{Y}\right]=c_{\boldsymbol{Y}}\boldsymbol{\Sigma},\] _where \(c_{\boldsymbol{Y}}=-2\psi^{\prime}(0).\)_

_For the elliptical model, the parameter to be estimated is \(\boldsymbol{\theta}=\left(\boldsymbol{\mu}^{T},\boldsymbol{\Sigma}\right)\), whose dimension is \(s=m+\frac{m(m+1)}{2}.\) In the following we denote \(\boldsymbol{\mu(\theta)}\) instead of \(\boldsymbol{\mu}\) and \(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\) instead of \(\boldsymbol{\Sigma}\), so as to be consistent with the paper notation._

_Let us consider the testing problem_ \[H_{0}:\left(\boldsymbol{\mu(\theta)},\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right)=\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right)\text{ versus }H_{1}:\left(\boldsymbol{\mu(\theta)},\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right)\neq\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right) \tag{34}\] _where \(\boldsymbol{\mu}_{0}\) and \(\boldsymbol{\Sigma}_{0}\) are known. The Rao-type test statistic based on the MDPDGE for testing (34) under the elliptical model is given by_ \[R_{\boldsymbol{\tau}}\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right)=\frac{1}{n}\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})^{T}\boldsymbol{K}_{\boldsymbol{\tau}}\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right)^{-1}\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})\] _where_ \[\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})=\sum_{i=1}^{n}\frac{1}{\tau+1}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})\] _with \(\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{y}_{i};\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})\) and \(\boldsymbol{K}_{\boldsymbol{\tau}}\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right)\) as defined in (24) and (10), respectively, but replacing \(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\) by \(c_{\boldsymbol{Y}}\boldsymbol{\Sigma}\) and \(\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\) by \(\boldsymbol{\mu}\). Then, the null hypothesis in (34) should be rejected if_ \[R_{\boldsymbol{\tau}}\left(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\right)>\chi_{m+\frac{m(m+1)}{2},\alpha}^{2},\] _where \(\chi_{m+\frac{m(m+1)}{2},\alpha}^{2}\) is the upper \(1-\alpha\) quantile of a chi-square distribution with \(m+\frac{m(m+1)}{2}\) degrees of freedom._

We finally prove the consistency of the Rao-type test based on RMDPDGE.
To simplify the statement of the next result, we first define the vector \(\boldsymbol{Y}_{\tau}(\boldsymbol{\theta})=\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{Y};\boldsymbol{\theta})\), that is, \[\boldsymbol{Y}_{\tau}(\boldsymbol{\theta})=a\,\frac{\tau}{2}\left|\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\right|^{-\tau/2}\left\{\left[-trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)+2\left(\frac{\partial\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)^{T}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{Y}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)+\left(\boldsymbol{Y}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\right)\left(\boldsymbol{Y}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right]\exp\left(-\frac{\tau}{2}\left(\boldsymbol{Y}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)^{T}\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\left(\boldsymbol{Y}-\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\right)\right)+b\,trace\left(\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)^{-1}\frac{\partial\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)\right\}, \tag{35}\] where \(a\) and \(b\) were defined in (6). We can observe that \(\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta})\) is the sample mean of a random sample of size \(n\) from the \(m\)-dimensional population \(\boldsymbol{Y}_{\tau}(\boldsymbol{\theta})\).

**Theorem 10**: _Let \(\boldsymbol{Y}_{1},...,\)\(\boldsymbol{Y}_{n}\) be independent and identically distributed observations from an \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left(\boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma}\left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Let \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\) with \(\boldsymbol{\theta}\neq\boldsymbol{\theta}_{0},\) with \(\boldsymbol{\theta}_{0}\) defined in (32), and let us assume that \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}_{\tau}(\boldsymbol{\theta}_{0})\right]\neq\boldsymbol{0}_{d}\). Then,_ \[\lim_{n\rightarrow\infty}P_{\boldsymbol{\theta}}\left(R_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0}\right)>\chi_{d,\alpha}^{2}\right)=1.\]

**Proof.** From the previous results, it holds that \[\frac{1}{n}\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\theta}_{0})=\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{\tau+1}\boldsymbol{\Psi}_{\boldsymbol{\tau}}(\boldsymbol{Y}_{i};\boldsymbol{\theta}_{0})=\frac{1}{\tau+1}\frac{\partial}{\partial\boldsymbol{\theta}}H_{n}^{\tau}(\boldsymbol{\theta}_{0})\underset{n\rightarrow\infty}{\overset{\mathcal{P}}{\longrightarrow}}\frac{1}{\tau+1}E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}_{\tau}(\boldsymbol{\theta}_{0})\right],\] where \(\boldsymbol{Y}_{\tau}(\boldsymbol{\theta}_{0})\) is as defined in (35).
Therefore, \[P_{\theta}\left(R_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0} \right)>\chi_{d,\alpha}^{2}\right)=P_{\theta}\left(\tfrac{1}{n}R_{\boldsymbol {\tau}}\left(\boldsymbol{\theta}_{0}\right)>\tfrac{1}{n}\chi_{d,\alpha}^{2}\right)\] \[\underset{n\rightarrow\infty}{\longrightarrow}\text{I}\left( \frac{1}{\left(\tau+1\right)^{2}}E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}_ {\tau}(\boldsymbol{\theta}_{0})\right]\boldsymbol{K}_{\tau}^{-1}\left( \boldsymbol{\theta}\right)E_{\boldsymbol{\theta}}^{T}\left[\boldsymbol{Y}_{ \tau}(\boldsymbol{\theta}_{0})\right]>0\right)=1,\] where \(\text{I}(\cdot)\) is the indicator function. A natural question that arises here is how the asymptotic power of different test statistics considered for testing the hypothesis in (15) could be compared. Lehmann (1959) stated that contiguous alternative hypotheses are of great interest in practical use for such purposes, as their associated power functions do not converge to 1. In this regard, we next derive the asymptotic distribution of \(R_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0}\right)\) under local Pitman-type alternative hypotheses of the form \[H_{1,n}:\boldsymbol{\theta}=\boldsymbol{\theta}_{n}:=\boldsymbol{\theta}_{0}+n ^{-1/2}\boldsymbol{l},\] where \(\boldsymbol{l}\) is a \(d\)-dimensional normal vector and \(\boldsymbol{\theta}_{0}\) is the closest element to the null hypothesis. The next result determines the asymptotic power of the Rao-type test based on RMDPDGE under contiguous alternative hypothesis. **Theorem 11**: _Let \(\boldsymbol{Y}_{1},...,\boldsymbol{Y}_{n}\) be independent and identically distributed observations from a \(m\)-dimensional random vector \(\boldsymbol{Y}\) with \(E_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\mu}\left( \boldsymbol{\theta}\right)\) and \(Cov_{\boldsymbol{\theta}}\left[\boldsymbol{Y}\right]=\boldsymbol{\Sigma} \left(\boldsymbol{\theta}\right)\), \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{d}.\) Under the contiguous alternative hypothesis of the form_ \[H_{1,n}:\boldsymbol{\theta}_{n}=\boldsymbol{\theta}_{0}+n^{-1/2}\boldsymbol{l},\] _the asymptotic distribution of the Rao-type test based on RMDPDGE, \(R_{\boldsymbol{\tau}}\left(\boldsymbol{\theta}_{0}\right),\) is a non-central chi-square distribution with \(d\) degrees of freedom and non-centrality parameter given by_ \[\delta_{\tau}(\boldsymbol{\theta}_{0},\boldsymbol{l})=\boldsymbol{l}^{T} \boldsymbol{J}_{\tau}\left(\boldsymbol{\theta}_{0}\right)\boldsymbol{K}_{\tau }^{-1}\left(\boldsymbol{\theta}_{0}\right)\boldsymbol{J}_{\tau}\left( \boldsymbol{\theta}_{0}\right)\boldsymbol{l}.\] **Proof.** Consider the Taylor series expansion \[\frac{1}{\sqrt{n}}\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\theta}_{n})=\frac{1}{ \sqrt{n}}\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\theta}_{0})+\frac{1}{n}\left. \frac{\partial\boldsymbol{U}_{n}^{\tau}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}^{T}}\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{n}^{* }}\boldsymbol{l},\] where \(\mathbf{\theta}_{n}^{\ast}\) belongs to the line segment joining \(\mathbf{\theta}_{0}\) and \(\mathbf{\theta}_{0}+\frac{1}{\sqrt{n}}\mathbf{l}\). 
Now, by Proposition 3, \[\frac{1}{n}\frac{\partial\mathbf{U}_{n}^{\tau}(\mathbf{\theta})}{\partial\mathbf{\theta}^{T}}=\frac{1}{\tau+1}\frac{\partial^{2}H_{n}^{\tau}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}\partial\mathbf{\theta}^{T}}\underset{n\longrightarrow\infty}{\overset{\mathcal{P}}{\longrightarrow}}-\mathbf{J}_{\mathbf{\tau}}(\mathbf{\theta}).\] Therefore, \[\frac{1}{\sqrt{n}}\mathbf{U}_{n}^{\tau}(\mathbf{\theta})\bigg{|}_{\mathbf{\theta}=\mathbf{\theta}_{0}+n^{-1/2}\mathbf{l}}\underset{n\rightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\mathcal{N}\left(-\mathbf{J}_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\mathbf{l},\ \mathbf{K}_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\right),\] and \[R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\underset{n\rightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\chi_{d}^{2}\left(\delta_{\tau}(\mathbf{\theta}_{0},\mathbf{l})\right),\] with \(\delta_{\tau}(\mathbf{\theta}_{0},\mathbf{l})\) given by \[\delta_{\tau}(\mathbf{\theta}_{0},\mathbf{l})=\mathbf{l}^{T}\mathbf{J}_{\tau}\left(\mathbf{\theta}_{0}\right)\mathbf{K}_{\tau}^{-1}\left(\mathbf{\theta}_{0}\right)\mathbf{J}_{\tau}\left(\mathbf{\theta}_{0}\right)\mathbf{l}.\]

**Remark 12**: _The previous result can be used for defining an approximation to the power function under any alternative hypothesis, \(\mathbf{\theta}\in\Theta\setminus\Theta_{0},\) given as_ \[\mathbf{\theta}=\mathbf{\theta}-\mathbf{\theta}_{0}+\mathbf{\theta}_{0}=\sqrt{n}\frac{1}{\sqrt{n}}\left(\mathbf{\theta}-\mathbf{\theta}_{0}\right)+\mathbf{\theta}_{0}=\mathbf{\theta}_{0}+n^{-1/2}\mathbf{l}\] _with \(\mathbf{l}=\sqrt{n}\left(\mathbf{\theta}-\mathbf{\theta}_{0}\right).\)_

**Remark 13**: _The family of Rao-type tests, \(R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right),\) presented in this section for the simple null hypothesis can be extended to composite null hypotheses. If we are interested in testing \(H_{0}:\mathbf{\theta}\in\mathbf{\Theta}_{0}=\left\{\mathbf{\theta}\in\Theta\ /\ \mathbf{g}(\mathbf{\theta})=\mathbf{0}_{r}\right\}\) we can consider the family of Rao-type tests given by_ \[R_{\mathbf{\tau}}\left(\widetilde{\mathbf{\theta}}_{G}^{\tau}\right)=\frac{1}{n}\mathbf{U}_{n}^{\tau}(\widetilde{\mathbf{\theta}}_{G}^{\tau})^{T}\mathbf{Q}_{\mathbf{\tau}}(\widetilde{\mathbf{\theta}}_{G}^{\tau})\left[\mathbf{Q}_{\mathbf{\tau}}(\widetilde{\mathbf{\theta}}_{G}^{\tau})\mathbf{K}_{\mathbf{\tau}}\left(\widetilde{\mathbf{\theta}}_{G}^{\tau}\right)\mathbf{Q}_{\mathbf{\tau}}(\widetilde{\mathbf{\theta}}_{G}^{\tau})\right]^{-1}\mathbf{Q}_{\mathbf{\tau}}(\widetilde{\mathbf{\theta}}_{G}^{\tau})^{T}\mathbf{U}_{n}^{\tau}(\widetilde{\mathbf{\theta}}_{G}^{\tau}).
\tag{36}\] _However, the extension of the presented results for the family of robust test statistics defined in (36) is not trivial, and it will be established in future research._ _In partircular, the simple null hypothesis in (32) can be written as a composite null hypothesis with \(\mathbf{g(\theta)=\theta-\theta}_{0}.\) In this case, \(\mathbf{G(\theta)}\) reduces to the identity matrix of dimension \(p,\) the restricted estimator \(\widetilde{\mathbf{\theta}}_{G}^{\tau}\) coincides with \(\mathbf{\theta}_{0}\) and the Rao-type test statistic in (36) \(R_{\mathbf{\tau}}\left(\widetilde{\mathbf{\theta}}_{G}^{\tau}\right),\) coincides with the proposed \(R_{\mathbf{\tau}}\left(\mathbf{\theta}_{0}\right)\) given in (33).For multidimensional normal populations Martin (2021) developed Rao-type test statistics based on the RMDPDE._ ### Rao-type tests based on MDPDGE for univariate distributions Let \(Y_{1},....,Y_{n}\) a random sample from the population \(Y,\) with \[E\left[Y\right]=\mu\left(\theta\right)\text{ and }Var\left[Y\right]=\sigma^{2}\left( \theta\right).\] Based on (25) the estimating equation is given by \[\sum\limits_{i=1}^{n}\Psi_{\tau}\left(y_{i},\theta\right)=0\] with \[\Psi_{\tau}\left(y_{i},\theta\right) = \frac{\left(\tau+1\right)\left(\sigma^{2}\left(\theta\right) \right)^{-\tau/2}}{2\left(2\pi\right)^{\tau/2}}\left\{\left[-\frac{\partial \log\sigma^{2}\left(\theta\right)}{\partial\theta}+\frac{\partial\log\sigma^{2 }\left(\theta\right)}{\partial\theta}\right.\] \[\left.\left.\left(\ref{eq:25}\right)+2\frac{\partial\mu\left( \theta\right)}{\partial\theta}\left(y_{i}-\mu\left(\theta\right)\right)\frac{ 1}{\sigma^{2}\left(\theta\right)}\right]\] \[\exp\left(-\frac{\tau}{2\sigma^{2}\left(\theta\right)}\left(y_{i }-\mu\left(\theta\right)\right)^{2}\right)+\frac{\tau}{\left(1+\tau\right)^{3 /2}}\frac{\partial\log\sigma^{2}\left(\theta\right)}{\partial\theta}\right\}.\] Moreover the expressions of \(J_{\tau}\left(\theta\right)\) and \(K_{\tau}\left(\theta\right)\) are, respectively, given by \[J_{\tau}\left(\theta\right) = \frac{1}{\left(2\pi\sigma\left(\theta\right)^{2}\right)^{\frac{ \tau}{2}}}\frac{1}{\left(1+\tau\right)^{5/2}}\left[\left(\tau+1\right)\sigma^ {-2}\left(\theta\right)\left(\frac{\partial\mu\left(\theta\right)}{\partial \theta}\right)^{2}+\frac{\tau^{2}}{4}\left(\frac{\partial\log\sigma^{2}\left( \theta\right)}{\partial\theta}\right)^{2}\right.\] \[\left.+\frac{1}{2}\left(\frac{\partial\log\sigma^{2}\left(\theta \right)}{\partial\theta}\right)^{2}\right]\] and \[K_{\tau}\left(\theta\right) = \left(\frac{1}{\left(2\pi\right)^{1/2}\sigma\left(\theta\right) }\right)^{2\tau}\left\{\frac{1}{\left(1+2\tau\right)^{5/2}}\left[\tau^{2} \left(\frac{\partial\log\sigma^{2}\left(\theta\right)}{\partial\theta} \right)^{2}\right.\] \[\left.+\left(1+2\tau\right)\sigma^{-2}\left(\theta\right)\left( \frac{\partial\mu\left(\theta\right)}{\partial\theta}\right)^{2}+\frac{1}{2} \left(\frac{\partial\log\sigma^{2}\left(\theta\right)}{\partial\theta} \right)^{2}\right]\] \[-\frac{\tau^{2}}{4\left(1+\tau\right)^{3}}\left(\frac{\partial \log\sigma^{2}\left(\theta\right)}{\partial\theta}\right)^{2}\right\}.\] Therefore, if we are interesting in testing \[H_{0}:\theta=\theta_{0}\text{ versus }H_{1}:\theta\neq\theta_{0},\] the Rao-type tests based on RMDPDGE is given by, \[R_{\boldsymbol{\tau}}\left(\theta_{0}\right)=\frac{1}{n}U_{n}^{\boldsymbol{ \tau}}(\theta_{0})^{2}K_{\boldsymbol{\tau}}\left(\theta_{0}\right)^{-1},\] where 
\[U_{n}^{\boldsymbol{\tau}}(\theta_{0})=\frac{1}{\tau+1}\sum\limits_{i=1}^{n}\Psi_{ \tau}\left(y_{i},\theta\right)\] and \(\Psi_{\tau}\left(y_{i},\theta\right)\) and \(K_{\tau}\left(\theta\right)\) are given as in (37) and (38). The null hypothesis is rejected if \[R_{\boldsymbol{\tau}}\left(\theta_{0}\right)>\chi_{1,\alpha}^{2},\] where \(\chi_{1,\alpha}^{2}\) is the upper \(1-\alpha\) quantile of a chi-square distribution with \(1\) degree of freedom. We finally derive explicit expressions of the Rao-type test statistics under Poisson and exponential models. #### 4.1.1 Poisson Model We us assume that the random variable \(Y\) is Poisson with parameter \(\theta.\) In this case it is well known that \(E\left[Y\right]=Var\left[Y\right]=\theta,\) and so the RMDPDGE, for \(\tau>0,\) is given by, \[\widehat{\theta}_{G}^{\tau}=\arg\max_{\theta}\left\{\frac{\tau+1}{\tau\left(2 \pi\theta\right)^{\frac{\tau}{2}}}\left(\frac{1}{n}\sum\limits_{i=1}^{n}\exp \left(-\frac{\tau}{2\theta}\left(y_{i}-\theta\right)^{2}\right)-\frac{\tau}{ \left(1+\tau\right)^{3/2}}\right)-\frac{1}{\tau}\right\}.\] At \(\tau=0,\) the RMDPDGE reduces to the restricted MLE, \[\widehat{\theta}_{G}=\arg\max_{\theta}\left\{-\frac{1}{2}\log 2\pi-\frac{1}{2} \log\theta-\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{2\theta}\left(y_{i}- \theta\right)^{2}\right\}.\] On the other hand the score function \(\Psi_{\tau}\left(\cdot\right)\) is \[\Psi_{\tau}\left(y_{i},\theta\right)=\frac{\tau+1}{2\left(2\pi\theta\right)^{ \frac{\tau}{2}}\theta^{2}}\left\{\left(-2\theta^{2}+y_{i}^{2}\right)\exp\left( -\frac{\tau}{2\theta}\left(y_{i}-\theta\right)^{2}\right)+\frac{\tau\theta}{ \left(1+\tau\right)^{\frac{3}{2}}}\right\}\] and naturally at \(\tau=0,\) we obtain the score function of the MLE presented in Zhang (2019) \[\Psi_{0}\left(y_{i},\theta\right)=\frac{1}{2\theta^{2}}\left(-2\theta^{2}+y_ {i}^{2}\right).\] On the other hand, the matrix \(K_{\tau}\left(\theta\right)\) under the Poisson model has the explicit expression \[K_{\tau}\left(\theta\right)=\left(\frac{1}{2\pi}\right)^{\tau}\frac{1}{2 \theta^{2+\tau}}\left\{\frac{1}{\left(1+2\tau\right)^{5/2}}\left(\left(2\tau^ {2}+2\theta+4\theta\tau+1\right)-\frac{\tau^{2}}{2\left(1+\tau\right)^{3}} \right)\right\}\] and hence the Rao-type tests based on RMDPDGE, \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right),\) for testing simple null hypothesis is given, for \(\tau>0,\) by \[R_{\boldsymbol{\tau}}\left(\theta_{0}\right) = \frac{1}{n}\frac{1}{\left(2\left(2\pi\theta\right)^{\frac{\tau}{ 2}}\theta^{2}\right)^{2}}\left(\sum\limits_{i=1}^{n}\left(\left(-2\theta_{0}^ {2}+y_{i}^{2}\right)\exp\left(-\frac{\tau}{2\theta_{0}}\left(y_{i}-\theta_{0} \right)^{2}\right)+\frac{\tau\theta}{\left(1+\tau\right)^{\frac{3}{2}}}\right) \right)^{2}\] \[\times\left(2\pi\right)^{\tau}\left(2\theta^{2+\tau}\right)\left( 1+2\tau\right)^{5/2}\left\{\left(\left(2\tau^{2}+2\theta+4\theta\tau+1 \right)-\frac{\tau^{2}}{2\left(1+\tau\right)^{3}}\right)\right\}^{-1}.\] Again for \(\tau=0\) we get the expression of the classical Rao test based on the MLE, \[R_{0}\left(\theta_{0}\right)=\frac{1}{4n}\left(\sum\limits_{i=1}^{n}\left(\frac{ -2\theta_{0}^{2}+y_{i}^{2}}{\theta_{0}^{2}}\right)\right)^{2}\frac{2\theta_{0}^ {2}}{2\theta_{0}+1}.\] In practical use, the null hypothesis is rejected if \[R_{\boldsymbol{\tau}}\left(\theta_{0}\right)>\chi_{1,\alpha}^{2}.\] #### 4.1.2 Exponential model Let assume now that the random variable \(Y\) comes from an exponential distribution with probability density function, 
\[f_{\theta}(x)=\frac{1}{\theta}\exp\left(-\frac{x}{\theta}\right),\ x>0. \tag{39}\]

In this case, the true mean and variance are given by \(E\left[Y\right]=\theta\) and \(Var\left[Y\right]=\theta^{2}\). The RMDPDGE under the exponential model, for \(\tau>0\), is given by

\[\widehat{\theta}_{G}^{\tau}=\arg\max_{\theta}\left\{\frac{\tau+1}{\tau}\left(\frac{1}{\theta\sqrt{2\pi}}\right)^{\tau}\left(\frac{1}{n}\sum\limits_{i=1}^{n}\exp\left(-\frac{\tau}{2}\left(\frac{y_{i}-\theta}{\theta}\right)^{2}\right)-\frac{\tau}{\left(1+\tau\right)^{3/2}}\right)-\frac{1}{\tau}\right\},\]

and for \(\tau\to 0\), we have

\[\widehat{\theta}_{G}=\arg\max_{\theta}\left\{-\frac{1}{2}\log 2\pi-\log\theta-\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{2}\left(\frac{y_{i}-\theta}{\theta}\right)^{2}\right\}.\]

On the other hand, the score function is

\[\Psi_{\tau}\left(y_{i},\theta\right)=\frac{\left(\tau+1\right)}{\theta^{\tau+3}\left(\sqrt{2\pi}\right)^{\tau}}\left\{\left(y_{i}^{2}-y_{i}\theta-\theta^{2}\right)\exp\left(-\frac{\tau}{2}\left(\frac{y_{i}-\theta}{\theta}\right)^{2}\right)+\frac{\tau\theta^{2}}{\left(1+\tau\right)^{\frac{3}{2}}}\right\},\]

and for \(\tau=0\), we recover the score function of the Gaussian MLE,

\[\Psi_{0}\left(y_{i},\theta\right)=\frac{1}{\theta^{3}}\left(y_{i}^{2}-y_{i}\theta-\theta^{2}\right).\]

The matrix \(K_{\tau}\left(\theta\right)\) has the expression

\[K_{\tau}\left(\theta\right)=\frac{1}{\left(2\pi\right)^{\tau}\theta^{2\left(\tau+1\right)}}\left\{\frac{1}{\left(1+2\tau\right)^{5/2}}\left(4\tau^{2}+2\tau+3\right)-\frac{\tau^{2}}{\left(1+\tau\right)^{3}}\right\},\]

and at \(\tau=0\),

\[K_{0}\left(\theta\right)=\frac{3}{\theta^{2}}.\]

Correspondingly, the Rao-type test based on the RMDPDGE for testing

\[H_{0}:\theta=\theta_{0}\text{ versus }H_{1}:\theta\neq\theta_{0}\]

is given, for \(\tau>0\), by

\[R_{\boldsymbol{\tau}}\left(\theta_{0}\right)=\frac{1}{n}\frac{1}{\theta_{0}^{2\tau+6}\left(2\pi\right)^{\tau}}\left(\sum_{i=1}^{n}\left\{\left(y_{i}^{2}-y_{i}\theta_{0}-\theta_{0}^{2}\right)\exp\left(-\frac{\tau}{2}\left(\frac{y_{i}-\theta_{0}}{\theta_{0}}\right)^{2}\right)+\frac{\tau\theta_{0}^{2}}{\left(1+\tau\right)^{\frac{3}{2}}}\right\}\right)^{2}\times\left(2\pi\right)^{\tau}\theta_{0}^{2\left(\tau+1\right)}\left\{\frac{4\tau^{2}+2\tau+3}{\left(1+2\tau\right)^{5/2}}-\frac{\tau^{2}}{\left(1+\tau\right)^{3}}\right\}^{-1}. \tag{40}\]

## 5 Simulation study

We analyze here the performance of the Rao-type tests based on the MDPDGE, \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right),\) in terms of robustness and efficiency. We compare the proposed general method, which only assumes a Gaussian working distribution, with Rao-type test statistics based on the true parametric distribution underlying the data. We consider the exponential model with density function \(f_{\theta_{0}}(x)\) given in (39). For the exponential model, the Rao-type test statistic based on the MDPDGE is, for \(\tau>0\), as given in (40) and, for \(\tau=0\), as given in (3). To evaluate the robustness of the tests, we generate samples from an exponential mixture,

\[f_{\theta_{0}}^{\varepsilon}(x)=(1-\varepsilon)f_{\theta_{0}}(x)+\varepsilon f_{2\theta_{0}}(x),\]

where \(\theta_{0}\) denotes the parameter of the exponential distribution and \(\varepsilon\) is the contamination proportion. The uncontaminated model is thus obtained by setting \(\varepsilon=0\). For comparison purposes, we have also considered the robust Rao-type tests based on the restricted MDPDE, introduced and studied in Basu et al. (2022b).
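For concreteness, the following minimal sketch assembles \(R_{\boldsymbol{\tau}}(\theta_{0})\) for the exponential model directly from the closed-form expressions of \(\Psi_{\tau}\) and \(K_{\tau}\) above, via \(R_{\boldsymbol{\tau}}(\theta_{0})=\frac{1}{n}U_{n}^{\boldsymbol{\tau}}(\theta_{0})^{2}K_{\boldsymbol{\tau}}(\theta_{0})^{-1}\). It is an illustration only; the use of NumPy/SciPy and the helper name are our own choices and not part of the original work.

```python
import numpy as np
from scipy.stats import chi2

def rao_mdpdge_exponential(y, theta0, tau):
    """Gaussian-based (MDPDGE) Rao-type statistic R_tau(theta0) for the
    exponential model, built from Psi_tau and K_tau as given in the text."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    z = (y - theta0) / theta0
    # Score contributions Psi_tau(y_i, theta0) under the exponential model
    psi = ((tau + 1) / (theta0**(tau + 3) * np.sqrt(2 * np.pi)**tau)) * (
        (y**2 - y * theta0 - theta0**2) * np.exp(-0.5 * tau * z**2)
        + tau * theta0**2 / (1 + tau)**1.5
    )
    u_n = psi.sum() / (tau + 1)
    # Model-based variance term K_tau(theta0)
    k_tau = (1.0 / ((2 * np.pi)**tau * theta0**(2 * (tau + 1)))) * (
        (4 * tau**2 + 2 * tau + 3) / (1 + 2 * tau)**2.5 - tau**2 / (1 + tau)**3
    )
    return u_n**2 / (n * k_tau)

# Example: test H0: theta = 2 at the 5% level on a clean exponential sample
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=40)
stat = rao_mdpdge_exponential(sample, theta0=2.0, tau=0.3)
print(stat, stat > chi2.ppf(0.95, df=1))
```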
The efficiency loss caused by the Gaussian assumption should be reflected in a poorer performance of the Rao-type tests based on the restricted MDPDGE with respect to their analogues based on the restricted MDPDE. For the exponential model, the family of Rao-type test statistics based on the restricted MDPDE is given, for \(\beta>0\), by

\[S_{n}^{\beta}(\theta_{0})=\left(\frac{4\beta^{2}+1}{(2\beta+1)^{3}}-\frac{\beta^{2}}{(\beta+1)^{4}}\right)^{-1}\frac{1}{n}\left(\frac{1}{\theta_{0}}\sum_{i=1}^{n}\left(y_{i}-\theta_{0}\right)\exp\left(-\frac{\beta y_{i}}{\theta_{0}}\right)+\frac{n\beta}{(\beta+1)^{2}}\right)^{2}.\]

For \(\beta=0\), the above test reduces to the classical Rao test given by

\[S_{n}\left(\theta_{0}\right)=S_{\beta=0,n}\left(\theta_{0}\right)=\left(\sqrt{n}\frac{\bar{X}_{n}-\theta_{0}}{\theta_{0}}\right)^{2}.\]

We consider the testing problem

\[H_{0}:\theta=2\text{ versus }H_{1}:\theta\neq 2,\]

and we empirically examine the level and power of both Rao-type test statistics (the usual test based on the parametric model and the Gaussian-based test) by setting the true value of the parameter to \(\theta=2\) for the level and \(\theta=1\) for the power. Different sample sizes were considered, namely \(n=10\), \(20\), \(30\), \(40\), \(50\), \(60\), \(70\), \(80\), \(90\), \(100\) and \(200\), but the simulation results were quite similar and so, for brevity, we only report here the results for \(n=20\) and \(n=40\). The empirical level of the test is computed as

\[\widehat{\alpha}_{n}\left(\varepsilon\right)=\frac{\text{Number of times}\left\{R_{\boldsymbol{\tau}}\left(\theta_{0}\right)\text{ (or }S_{n}^{\beta}\left(\theta_{0}\right)\text{)}>\chi_{1,0.05}^{2}=3.84146\right\}}{\text{Number of simulated samples}}.\]

We set contamination proportions \(\varepsilon=0\%,5\%,10\%\) and \(20\%\) and perform the Monte Carlo study over \(R=10000\) replications. The tuning parameters \(\tau\) and \(\beta\) are fixed on a grid of values, namely \(\left\{0,0.1,\ldots,0.7\right\}\). Simulation results are presented in Tables 1 and 2 for \(n=20\) and \(n=40\), respectively. The empirical powers are denoted by \(\widehat{\pi}_{n}\left(\varepsilon\right)\), and we have considered \(\varepsilon=0\%,\)\(10\%,\)\(15\%\) and \(20\%.\) The robustness advantage, in terms of level, of both Rao-type tests considered, \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right)\) and \(S_{n}^{\beta}(\theta_{0})\), with positive values of the tuning parameter over the test statistics with \(\tau=0\) and \(\beta=0\) is clearly shown, as their simulated levels are closer to the nominal level in the presence of contamination.
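A sketch of the Monte Carlo scheme used to estimate the reported empirical levels and powers under the contaminated exponential mixture might look as follows. It is illustrative only and reuses the `rao_mdpdge_exponential` helper sketched earlier; the function and argument names are our own.

```python
import numpy as np

def contaminated_exponential(n, theta0, eps, rng):
    """Draw n observations from (1 - eps) * Exp(theta0) + eps * Exp(2 * theta0)."""
    scales = np.where(rng.random(n) < eps, 2 * theta0, theta0)
    return rng.exponential(scale=scales)

def empirical_rejection_rate(test_stat, theta0_null, theta_true, eps,
                             n=20, reps=10_000, crit=3.84146, seed=1):
    """Fraction of replications in which the statistic exceeds chi2_{1,0.05}.
    With theta_true = theta0_null this estimates the level; otherwise the power."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        y = contaminated_exponential(n, theta_true, eps, rng)
        if test_stat(y, theta0_null) > crit:
            rejections += 1
    return rejections / reps

# Example usage with the MDPDGE-based statistic from the earlier sketch (tau = 0.3):
# level = empirical_rejection_rate(lambda y, t0: rao_mdpdge_exponential(y, t0, 0.3),
#                                  theta0_null=2.0, theta_true=2.0, eps=0.10)
```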
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|l|l|} \hline
\(\tau\) & \(\widehat{\alpha}_{20}\left(0\right)\) & \(\widehat{\alpha}_{20}\left(0.05\right)\) & \(\widehat{\alpha}_{20}\left(0.10\right)\) & \(\widehat{\alpha}_{20}\left(0.20\right)\) & \(\widehat{\pi}_{20}\left(0\right)\) & \(\widehat{\pi}_{20}\left(0.10\right)\) & \(\widehat{\pi}_{20}\left(0.15\right)\) & \(\widehat{\pi}_{20}\left(0.20\right)\) \\ \hline
0.0 & 0.2601 & 0.3093 & 0.3453 & 0.4661 & 0.9278 & 0.6791 & 0.6887 & 0.5088 \\
0.1 & 0.1895 & 0.1748 & 0.1561 & 0.1989 & 0.9544 & 0.7213 & 0.7301 & 0.0595 \\
0.2 & 0.2120 & 0.1776 & 0.1417 & 0.1174 & 0.9747 & 0.8398 & 0.8430 & 0.5095 \\
0.3 & 0.2532 & 0.2113 & 0.1660 & 0.1275 & 0.9826 & 0.8963 & 0.8961 & 0.7301 \\
0.4 & 0.2963 & 0.2447 & 0.1986 & 0.1471 & 0.9863 & 0.9228 & 0.9257 & 0.7893 \\
0.5 & 0.3243 & 0.2773 & 0.2307 & 0.1695 & 0.9875 & 0.9363 & 0.9386 & 0.8254 \\
0.6 & 0.3512 & 0.3055 & 0.2599 & 0.1899 & 0.9885 & 0.9441 & 0.9437 & 0.8434 \\
0.7 & 0.3751 & 0.3258 & 0.2762 & 0.2060 & 0.9884 & 0.9466 & 0.9469 & 0.8541 \\ \hline \hline
\(\beta\) & & & & & & & & \\ \hline
0.0 & 0.0453 & 0.0682 & 0.1048 & 0.1909 & 0.7200 & 0.4365 & 0.4384 & 0.2323 \\
0.1 & 0.0476 & 0.0602 & 0.0780 & 0.1417 & 0.7799 & 0.5223 & 0.5267 & 0.3029 \\
0.2 & 0.0498 & 0.0552 & 0.0667 & 0.1103 & 0.7922 & 0.5751 & 0.5780 & 0.3558 \\
0.3 & 0.0494 & 0.0517 & 0.0584 & 0.0897 & 0.7882 & 0.5997 & 0.6024 & 0.3878 \\
0.4 & 0.0489 & 0.0505 & 0.0535 & 0.0773 & 0.7779 & 0.6067 & 0.6058 & 0.4106 \\
0.5 & 0.0494 & 0.0498 & 0.0504 & 0.0692 & 0.7634 & 0.6048 & 0.6037 & 0.4221 \\
0.6 & 0.0491 & 0.0504 & 0.0497 & 0.0647 & 0.7492 & 0.6008 & 0.5986 & 0.4265 \\
0.7 & 0.0502 & 0.0495 & 0.0494 & 0.0613 & 0.7348 & 0.5932 & 0.5919 & 0.4259 \\ \hline
\end{tabular}
\end{table}
Table 1: Simulated sizes and powers for different contamination proportions and different tuning parameters \(\tau,\beta=0,0.1,\ldots,0.7\) for the Rao-type tests \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right)\) and \(S_{20}^{\beta}(\theta_{0})\) for \(n=20\).

Regarding the power of the tests, in uncontaminated scenarios the values for positive tuning parameters are at least as good as those corresponding to \(\tau=0\) and \(\beta=0\), and for contaminated data the powers corresponding to \(\tau>0\) and \(\beta>0\) are higher. The loss of efficiency caused by the Gaussian assumption can be measured by the discrepancy in the estimated levels and powers between the family of Rao-type tests based on the restricted MDPDGE and that based on the MDPDE. As expected, the empirical levels of the test statistics based on the MDPDGE are considerably higher than the corresponding levels of the tests based on the MDPDE. However, the test statistic based on the parametric model, \(S_{n}^{\beta}(\theta_{0})\), is quite conservative, and so its powers are lower than those of the proposed tests, \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right).\) Based on the presented results, the proposed Rao-type tests, \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right),\) perform reasonably well and offer an appealing alternative for situations where the probability density function of the true model is unknown or very complicated to work with.
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|l|l|} \hline
\(\tau\) & \(\widehat{\alpha}_{40}\left(0\right)\) & \(\widehat{\alpha}_{40}\left(0.05\right)\) & \(\widehat{\alpha}_{40}\left(0.10\right)\) & \(\widehat{\alpha}_{40}\left(0.20\right)\) & \(\widehat{\pi}_{40}\left(0\right)\) & \(\widehat{\pi}_{40}\left(0.10\right)\) & \(\widehat{\pi}_{40}\left(0.15\right)\) & \(\widehat{\pi}_{40}\left(0.20\right)\) \\ \hline
0.0 & 0.3014 & 0.3588 & 0.4407 & 0.5919 & 0.9948 & 0.8064 & 0.7591 & 0.5957 \\
0.1 & 0.2393 & 0.1934 & 0.1757 & 0.2032 & 0.9991 & 0.9540 & 0.9229 & 0.7712 \\
0.2 & 0.4257 & 0.2559 & 0.1970 & 0.1317 & 0.9995 & 0.9916 & 0.9846 & 0.9204 \\
0.3 & 0.4257 & 0.3485 & 0.2782 & 0.1753 & 0.9997 & 0.9997 & 0.9953 & 0.9694 \\
0.4 & 0.5021 & 0.4294 & 0.3572 & 0.2388 & 0.9999 & 0.9989 & 0.9978 & 0.9851 \\
0.5 & 0.5642 & 0.4920 & 0.4253 & 0.2993 & 0.9999 & 0.9992 & 0.9986 & 0.9908 \\
0.6 & 0.6084 & 0.5415 & 0.4742 & 0.3491 & 1.0000 & 0.9992 & 0.9994 & 0.9935 \\
0.7 & 0.6416 & 0.5755 & 0.5081 & 0.3831 & 1.0000 & 0.9994 & 0.9994 & 0.9948 \\ \hline \hline
\(\beta\) & & & & & & & & \\ \hline
0.0 & 0.0467 & 0.0758 & 0.1309 & 0.2728 & 0.9838 & 0.8093 & 0.7483 & 0.4905 \\
0.1 & 0.0469 & 0.0623 & 0.0959 & 0.1987 & 0.9870 & 0.8770 & 0.8317 & 0.6072 \\
0.2 & 0.0464 & 0.0554 & 0.0800 & 0.1526 & 0.9862 & 0.9010 & 0.8687 & 0.6778 \\
0.3 & 0.0481 & 0.0529 & 0.0704 & 0.1220 & 0.9846 & 0.9084 & 0.8804 & 0.7169 \\
0.4 & 0.0483 & 0.0518 & 0.0649 & 0.1036 & 0.9808 & 0.9059 & 0.8809 & 0.7316 \\
0.5 & 0.0500 & 0.0519 & 0.0618 & 0.0929 & 0.9756 & 0.9008 & 0.8742 & 0.7338 \\
0.6 & 0.0500 & 0.0501 & 0.0577 & 0.0858 & 0.9689 & 0.8914 & 0.8662 & 0.7317 \\
0.7 & 0.0504 & 0.0519 & 0.0562 & 0.0801 & 0.9634 & 0.8813 & 0.8562 & 0.7258 \\ \hline
\end{tabular}
\end{table}
Table 2: Simulated sizes and powers for different contamination proportions and different tuning parameters \(\tau,\beta=0,0.1,\ldots,0.7\) for the Rao-type tests \(R_{\boldsymbol{\tau}}\left(\theta_{0}\right)\) and \(S_{40}^{\beta}(\theta_{0})\) for \(n=40\).

## 6 Conclusions

In this paper, we have considered inferential techniques for situations where we do not know the parametric form of the density and the only available information is the mean vector and the variance-covariance matrix, expressed in terms of a parameter vector \(\boldsymbol{\theta}.\) To deal with this problem, Zhang (2019) proposed a procedure based on the Gaussian distribution. However, the estimator proposed therein lacks robustness, and thus the procedure was extended to robust estimators based on the DPD. We have focused on the case in which additional constraints must be imposed on the estimated parameters, thus leading to the RMDPDGE. We have derived the asymptotic distribution of the proposed estimator and studied its robustness properties in terms of the corresponding IF. Further, we have developed robust Rao-type test statistics under the null hypothesis, which require the restricted version of the estimator. Finally, a simulation study has been carried out to examine the performance of the proposed test statistics. From the results, we empirically showed that the Rao-type tests considered have a good performance in terms of efficiency and enjoy more robustness than the Zhang (2019) approach based on Gaussian estimators.

## Acknowledgements

This research is supported by the Spanish Grant PID2021-124933NB-I00. The authors are members of the Interdisciplinary Mathematics Institute (IMI).
2309.09363
A Distributed Strategy to Maximize Coverage in a Heterogeneous Sensor Network in the Presence of Obstacles
In this paper, an efficient deployment strategy is proposed for a network of mobile and static sensors with nonidentical sensing and communication radii. The multiplicatively weighted Voronoi (MW-Voronoi) diagram is used to partition the field and assign the underlying coverage task to each mobile sensor. A gradient-based method is applied to find the best candidate point based on the detected coverage holes and the coverage priority considering the relative distance of the mobile sensor from the static ones and the obstacles in the field. The sensors move to a new position if such a relocation increases their local coverage. The efficiency of the proposed strategy in different scenarios is demonstrated by simulations.
Hesam Mosalli, Amir G. Aghdam
2023-09-17T20:02:58Z
http://arxiv.org/abs/2309.09363v1
A Distributed Strategy to Maximize Coverage in a Heterogeneous Sensor Network in the Presence of Obstacles ###### Abstract In this paper, an efficient deployment strategy is proposed for a network of mobile and static sensors with non-identical sensing and communication radii. The multiplicatively weighted Voronoi (MW-Voronoi) diagram is used to partition the field and assign the underlying coverage task to each mobile sensor. A gradient-based method is applied to find the best candidate point based on the detected coverage holes and the coverage priority considering the relative distance of the mobile sensor from the static ones and the obstacles in the field. The sensors move to a new position if such a relocation increases their local coverage. The efficiency of the proposed strategy in different scenarios is demonstrated by simulations. ## I Introduction Wireless Sensor Networks (WSNs) have emerged as a promising technology for various applications such as environment monitoring, healthcare, and surveillance [1, 2, 3, 4]. However, a critical challenge in WSNs is ensuring that the sensors adequately cover the area of interest to guarantee reliable data collection. The problem is particularly important in scenarios involving heterogeneous nodes and fixed obstacles in the field, e.g., when the sensing field is a harsh outdoor environment [5]. Various deployment strategies have been introduced in the literature to address this challenge. Distributed deployment strategies in a WSN aim to move every sensor in a field to increase the covered area with minimal information exchange with other sensors. They often use a Voronoi-based approach to partition the sensing field into regions and assign a node (sensor) to each (for the mathematical description of the Voronoi diagram, see [6]). Virtual force-based algorithms are proposed in [7, 8, 9] to move the mobile sensors to enhance the covered area in a WSN. These algorithms use a combination of attraction and repulsion forces to relocate sensors. Gradient-based approaches are another class of coverage maximization strategies in mobile WSNs. These approaches use the gradient of the sensing field to determine each sensor's optimal moving direction and size. These algorithms may also consider environmental obstacles and constraints, such as energy consumption or the communication ranges of the sensors making such methods suitable for generalizing the problem statement and applying the necessary modifications in the strategy. A gradient descent algorithm is proposed in [10] for a class of utility functions encoding the optimal coverage and sensing policies. Moreover, [11] utilizes a distributed nonlinear optimization approach iteratively to increase the local coverage of each sensor as much as possible. Many real-world WSNs are heterogeneous, i.e., sensors have different characteristics. Heterogeneity poses additional challenges for maximizing the covered area in a mobile WSN. In some studies, the multiplicatively weighted Voronoi (MW-Voronoi) diagram [12] is employed to partition the field according to the sensors' sensing radii to maximize coverage [13]. There are limited studies considering a network of both mobile and static sensors, and most of them formulate the static sensors as sink nodes in the network to maximize coverage in two steps: allocation of the static sensors and path planning of the mobile sensors [14]. On the other hand, the presence of obstacles in the field can negatively impact the functionality of the WSN. 
An efficient obstacle detection scheme is proposed in [15], which models the obstacles as coverage holes. The centric MW-Voronoi configuration introduced in [16] guarantees the convergence of the mobile sensors in a heterogeneous network to the optimal locations in the presence of obstacles and limited communication ranges. In this paper, a gradient-based distributed strategy is introduced to solve the weighted coverage optimization problem. Unlike the previous studies, the proposed approach can address four problems simultaneously, i.e., network heterogeneity, the prioritized sensing field, the existence of obstacles in the field, and the presence of both mobile and static sensors. In an iterative procedure, each sensor first uses the information received from its neighbors to construct its Voronoi region. Then, it determines its optimal location by solving the local coverage maximization problem. This paper is organized as follows. The problem formulation is presented in Section II, along with some preliminary concepts. In Section III, coverage maximization is formulated as a nonlinear optimization problem, and a distributed algorithm is provided to solve it. An alternative approach, namely the modified max-area strategy, is provided in Section IV. The performance of the strategy is demonstrated by simulations in Section V. Finally, Section VI concludes the paper by summarizing the results. ## II Preliminaries and problem statement Consider a 2D sensing field \(\mathcal{F}\), which is to be covered by a set of \(n\) sensors \(\mathcal{S}=\{S_{i}(x_{s_{i}},R_{s_{i}},R_{c_{i}})|i\in\mathbb{N}_{n}\}\), \(\mathbb{N}_{n}:=\{1,2,\ldots,n\}\), where \(x_{s_{i}}\), \(R_{s_{i}}\), and \(R_{c_{i}}\) are, respectively, the position, sensing radius, and communication radius of sensor \(S_{i}\). Each sensor \(S_{i}\) is assumed to have a disk-shaped sensing range with the radius \(R_{s_{i}}\), capable of broadcasting its location and sensing radius to other sensors in its communication radius. Assume also that sensors \(S_{1},S_{2},\ldots,S_{m}\) are mobile and the remaining \(n-m\) are static, i.e., their positions are fixed. Also, the sensing and communication radii of different sensors are not necessarily the same. A priority function \(\varphi(q):\mathcal{F}\rightarrow\mathbb{R}^{+}\), where \(\mathbb{R}^{+}\) is the set of all non-negative real numbers, describes the relative importance of point \(q\) inside the 2D field to be covered. A point with a higher value of the priority function is more important to cover compared to a point with a lower value. The region of interest (ROI) may include fixed obstacles with arbitrary shapes that block the sensing range of the sensors. It is assumed that (i) the effect of obstacles on wireless communication between sensor nodes is negligible and can be compensated via multi-path signal propagation [5], and (ii) each sensor is capable of detecting the exact shape of any obstacle within its communication range. To analyze the effect of obstacles on the coverage performance of the sensor network, the _visible region_ of a sensor is defined below. **Definition 1**.: _Consider a sensing field with some obstacles. 
The visible region of a sensor located at point \(x\) is denoted by \(\Phi(x)\) and includes all the points in \(\mathcal{F}\) from which there exists an unobstructed line of sight to \(x\)._ **Definition 2**.: _Since it is assumed that obstacles block the sensor's line of sight, the sensing range of the sensor \(S\left(x,R_{s},R_{c}\right)\) is defined as:_ \[D(x)=\left\{q\in\Phi(x)|d(q,x)\leq R_{s}\right\}, \tag{1}\] _where \(d(q,x)\) is the Euclidean distance between points \(q\) and \(x\)._ It is desired to find a set of locations for the sensors resulting in the maximum coverage over the ROI. To achieve this goal, a distributed strategy is proposed under which each mobile sensor uses the information it obtains from its neighbors to iteratively find a new point from which its local coverage increases. The local coverage optimization problem is solved using a Voronoi-based approach by partitioning the ROI into preferably distinct regions, each assigned to one of the \(m\) mobile sensors. In Voronoi-based approaches, the fundamental assumption is to partition the entire area of a sensing field into distinct regions such that the sensor located inside each region is the nearest sensor to all points within that region [6]. While the Voronoi diagram provides the standard partitioning in a homogeneous network consisting of sensors with identical sensing radii, the MW-Voronoi diagram presents the desired partitioning for the cases when the sensors have different sensing radii [12]. Moreover, since the sensors have a limited communication range in general, the notion of _Connectivity-Aware Multiplicatively Weighted Voronoi_ (CAMW-Voronoi) diagram is used in this paper which is slightly different from the LCMW-Voronoi diagram introduced in [17]. Let the weighted distance between a point \(q\in\mathcal{F}\) and a weighted node \(S(x,w)\) be defined as: \[d_{w}(q,S)=\frac{d(q,S)}{w}. \tag{2}\] **Definition 3**.: _Consider the network of sensors \(\mathcal{S}\) introduced earlier. Let the set of all neighbors of a mobile sensor \(S_{i}\), denoted by \(\mathcal{N}_{i}\), be the set of all mobile sensors whose communication ranges reach \(S_{i}\), i.e., it can receive information from them. Then, the connectivity-aware multiplicatively weighted Voronoi region associated with \(S_{i}\) is defined as:_ \[\Pi_{i}=\{q\in\mathcal{F}|d_{w}(q,S_{i})<d_{w}(q,S_{j}),\forall j \in\mathcal{N}_{i},\] \[d(q,S_{i})<\min\{R_{c_{i}},r_{min}\}\}, \tag{3}\] _where \(r_{min}=\min_{j\in\mathcal{N}_{i}}\left\{\left[d(S_{i},S_{j})-R_{s_{j}}\right] _{+}|i\notin\mathcal{N}_{j}\right\}\), \(([a]_{+}=\max\{a,0\})\) and the corresponding weight of each sensor is equal to its sensing radius._ Note that based on the definition of the weighted distance, if a sensor cannot cover an arbitrary point inside its CAMW-Voronoi region, none of its neighbors can cover it. This is a critical point making the corresponding regions important in developing a distributed deployment strategy for coverage optimization. Unlike the MW-Voronoi diagram, the CAMW-Voronoi diagram is not necessarily a complete partitioning of \(\mathcal{F}\), as the regions are not always mutually distinct due to the limitation on the communication ranges of the sensors. However, it is worth mentioning that the effect of such shortcomings can be minimized in a good network configuration where the communication radii are sufficiently large. 
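To make Definition 3 concrete, the following minimal sketch checks whether a point belongs to the CAMW-Voronoi region of a given sensor. It is illustrative only; the container layout for the sensor data (position, sensing radius, communication radius, neighbor set) is our own assumption, not part of the paper.

```python
import math

def weighted_distance(q, pos, weight):
    """Multiplicatively weighted distance d_w(q, S) = d(q, S) / w, as in Eq. (2)."""
    return math.dist(q, pos) / weight

def in_camw_region(q, i, sensors):
    """Membership test for the CAMW-Voronoi region of sensor i (Definition 3).
    `sensors` maps an index to (position, R_s, R_c, neighbors), where `neighbors`
    is the set N_i of mobile sensors whose communication ranges reach sensor i;
    the sensing radius plays the role of the weight."""
    pos_i, rs_i, rc_i, nbrs_i = sensors[i]
    # r_min: closest approach to the sensing disk of a one-way neighbor (i not in N_j)
    r_min = math.inf
    for j in nbrs_i:
        pos_j, rs_j, _rc_j, nbrs_j = sensors[j]
        if i not in nbrs_j:
            r_min = min(r_min, max(math.dist(pos_i, pos_j) - rs_j, 0.0))
    if math.dist(q, pos_i) >= min(rc_i, r_min):
        return False
    # q must be weighted-closer to sensor i than to every neighbor in N_i
    return all(
        weighted_distance(q, pos_i, rs_i)
        < weighted_distance(q, sensors[j][0], sensors[j][1])
        for j in nbrs_i
    )
```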
_Problem Definition:_ Given the specifications of the environment including the obstacles and the priority function, and the network configurations, it is desired to find a set of locations for the mobile sensors that achieves the maximum weighted coverage over the ROI. Here, the overall weighted coverage is defined as the surface integral of the priority function over the field \(\mathcal{F}\). The overall weighted coverage maximization problem is formulated as follows. \[\max_{\{x_{i}\}_{i=1}^{i}}\quad\int\limits_{\mathcal{F}\cap\left(\bigcup_{i=1 }^{i}D(x_{i})\right)}\varphi(q)dq. \tag{4}\] Here, the overall coverage is computed based on the areas covered by all sensors but only mobile sensors can contribute to modifying it. In Voronoi-based approaches, each mobile sensor is assigned the task of maximizing the local coverage w.r.t. the Voronoi region associated with itself. ## III Distributed maximum weighted coverage problem Since it is not straightforward to find the globally optimal solution in a distributed strategy, it is useful to reformulate it as multiple local problems. The problem of distributed maximum weighted coverage in a homogeneous network with no static sensors, no obstacles, and no constraint on the communication range of the sensors has been solved in [11]. In a similar way, an iterative approach is proposed in which each sensor is able to find the best position to move to maximize its own local weighted coverage. Under such conditions, the overall weighted coverage of the sensor network \(\mathcal{S}\) in each step could be computed as the sum of the local weighted coverage of all sensors over their associated Voronoi regions. In a similar way, the overall weighted coverage (4) can be rewritten as: \[\max_{\{x_{i},\Pi_{i}\}_{i=1}^{m}} \sum_{i=1}^{m}\ \int\limits_{\Pi_{i}^{\prime}\cap D(x_{i})}\varphi(q)dq\] (5) subject to : \[x_{i}\in\Pi_{i}\cap\Phi(x_{s_{i}}),\quad\forall\ i\in\mathbb{N}_{m},\] where \(\Pi_{i}^{\prime}\) is part of \(\Pi_{i}\) that is not covered by any static sensor. It is to be noted that the above reformulation of overall weighted coverage (4) as the sum of local weighted coverage of mobile sensors is valid if regions \(\Pi_{1},\Pi_{2},\ldots,\Pi_{m}\) are mutually distinct (and so are the local covered areas). However, this is not the case in a CAMW-Voronoi diagram due to the possible overlap between the regions. **Theorem 1**.: _Suppose that \(\{\Pi_{1},\Pi_{2},\ldots,\Pi_{m}\}\) is the CAMW-Voronoi diagram generated by \(\mathcal{S}\). Then, the local covered areas, defined as \(\Pi_{i}\cap D(x_{i})\), are mutually distinct for any two sensors if \(R_{ci}>2R_{s_{i}}\) for all \(i\in\mathbb{N}_{m}\)._ Proof.: The proposition will be proved for two sensors \(S_{1}\) and \(S_{2}\) and can then be extended to the entire network \(\mathcal{S}\). Without loss of generality, assume that \(R_{c_{1}}\leq R_{c_{2}}\). There are three possible cases: 1. If \(1\in\mathcal{N}_{2}\) and \(2\in\mathcal{N}_{1}\), then the covered areas of the two sensors are separated based on the definition of the CAMW-Voronoi diagram. 2. Suppose that \(1\notin\mathcal{N}_{2}\) and \(2\notin\mathcal{N}_{1}\) meaning that \(R_{c_{1}}\leq R_{c_{2}}<d(S_{1},S_{2})\). If the inequality \(R_{c_{i}}>2R_{s_{i}}\) holds for both sensors, it can be deduced that \(R_{s_{1}}+R_{s_{2}}<d(S_{1},S_{2})\). So, the sensing disks of the sensors do not overlap, making the covered areas distinct. 3. 
Lastly, suppose that \(1\in\mathcal{N}_{2}\) and \(2\notin\mathcal{N}_{1}\) (note that by assumption, we can not have \(1\notin\mathcal{N}_{2}\) and \(2\in\mathcal{N}_{1}\)). Based on the definition of \(r_{min}\), the extent of region \(\Pi_{1}\) and consequently the covered area of \(S_{1}\) are limited by the sensing disk of \(S_{2}\). As a result, under the stated relation between the sensing and communication radii, the covered areas of all sensors are mutually distinct. Theorem 1 provides a basis to ensure the objective function in (4) is separable, and the overall weighted coverage maximization problem can be iteratively solved in a distributed manner as formulated in (6). Furthermore, the provided condition is almost always true in operational cases due to the inherent difference between the communication and sensing equipment and methods in WSNs [18]. In what follows, the distributed strategy for maximizing the local weighted coverage is described for one sensor. Consider a single sensor \(S\) with a sensing radius \(R_{s}\) located at \(x_{s}\) inside its CAMW-Voronoi region \(\Pi\). The goal is to find the optimal point \(x\) inside \(\Pi\) for which both of the following criteria hold. * The sensor can move from its current position to \(x\) on an unobstructed line. * The maximum local weighted coverage of \(S\) over \(\Pi^{\prime}\) is achieved. Also, instead of presenting the objective in the form of a maximization problem, it is preferable to formulate it in a standard minimization problem as: \[\min_{x} F(x)=-\int\limits_{\Pi^{\prime}\cap D(x)}\varphi(q)dq\] (6) subject to : \[x\in\Pi\cap\Phi(x_{s}).\] The geometric configuration of the problem is illustrated by a simple example in Fig. 1. It is desired to find the optimal point for \(S\) inside \(\Pi\) such that the local weighted coverage over the hatched area is maximized, considering that the sensing disk and moving routes are blocked by the obstacle (depicted in black). Also, note that the area covered by the static sensor does not need to be considered in the coverage maximization problem. The problem described by (6) is generally a nonlinear optimization problem due to the constraints imposed by the heterogeneity of the network, the presence of static sensors, and the existence of obstacles in the field. As a result, the gradient-based approach introduced in [11] is modified to adapt it to the challenges introduced by such constraints. ## IV Modified Max-Area Strategy In the modified max-area (MMA) strategy introduced in this section, the mobile sensors find their optimal locations in the field iteratively, as described in the following algorithm: 1. Each sensor transmits information containing its location and sensing radius to the sensors within its communication range and receives the same information from its neighbors. 2. Every sensor constructs its own CAMW-Voronoi region based on the information it receives from its neighbors in the previous step and the shape of the obstacles it detects. 3. Each sensor computes its local weighted coverage over its region. Fig. 1: Geometric illustration of the local coverage maximization problem 4. If a sensor detects a local coverage hole, it finds a candidate point using the gradient-based method described below. 5. The sensor computes its local weighted coverage in the candidate position w.r.t. its current Voronoi region and moves accordingly if the new local coverage is more than the current value by a prescribed threshold \(\epsilon\). 6. 
If at least one of the sensors moves in the previous step, the procedure repeats from step (i); otherwise, it ends. The above procedure is repeated iteratively until no sensor moves, i.e., the network reaches a steady state regarding the weighted coverage. The choice of \(\epsilon\) in step (v) involves a trade-off between precision and convergence time. The smaller \(\epsilon\) is, the closer are to their optimal positions, but the longer it takes for the algorithm to reach the termination condition, when the sensors stop. To solve the optimization problem defined in (6), in each iteration \(k\), first, the best direction to move, denoted by \(p_{k}\), which is proportional to the gradient of the function \(F(x)\), is found. Therefore, the primary step is to compute the vector \(\nabla_{x}F(x)\). This can be performed similarly to the Max-Area strategy [11]. As a more general case, suppose that the objective function to minimize is the surface integral of a given function \(f\) over the region \(\mu(x)\), which is the loci of every point \(q\in\mathcal{F}\) satisfying \(M\) inequalities \(h_{j}(x,q)\leq 0,(j\in\mathbb{N}_{M})\). The boundary of \(\mu(x)\), denoted by \(\partial\mu(x)\), consists of \(M\) segments (curves or lines), where the \(j\)-th segment is \(\partial_{j}\mu(x)=\{q\in\mu(x)|h_{j}(x,q)=0\}\). In this case, the gradient of the objective function \[F(x)=\int_{\mu(x)}f(x,q)dq \tag{7}\] w.r.t. \(x\) can be derived using the results of [19] as follows: \[\nabla_{x}F(x)= \int\limits_{\mu(x)}\nabla_{x}f(x,q)dq\] \[-\sum\limits_{j=1}^{M}\int\limits_{\partial_{j}\mu(x)}\frac{f(x, q)}{\|\nabla_{q}h_{j}(x,q)\|}\nabla_{x}h_{j}(x,q)dL. \tag{8}\] In the local weighted coverage optimization problem, the function \(f\) in the above formulation is equal to the priority function \(\varphi(q)\), and is independent of the position of sensor \(x\), making the first term in (8) equal to zero. Since the second term is a line integral over the boundaries of \(\mu(x)\) (i.e., \(\Pi^{\prime}\cap D\left(x\right)\) in the MMA strategy), the boundary segments are first identified. The first two types of boundary segments are the boundaries of \(\Pi\) and the sensing ranges of every static sensor, lying inside the sensing range of the sensor. The third group involves parts of the obstacle edges that are facing the sensor, inside the sensing range of the sensor, and not covered by any static sensor. Without loss of generality, suppose that these three groups of boundaries include the first \(p\) segments, characterized by functions \(h_{1},h_{2},\ldots,h_{p}\). Since these segments do not change by the movement of the sensor, thus, \[\nabla_{x}h_{j}(x,q)=0,\forall j\in\mathbb{N}_{p}. \tag{9}\] Another segment of the region boundary is the portion of the perimeter of the sensing disk that is inside both the visible region of the sensor and region \(\Pi\) but is not covered by any static sensor. Let this segment be described by the equality \(h_{p+1}(x,q)=0\). It is shown in [11] that the gradient of the objective function originating from this set can be simply computed by a discrete summation as: \[\nabla_{x}^{D}F(x)=\frac{2\pi R_{s}}{N_{p+1}}\sum\limits_{\begin{subarray}{c} \tilde{b}=1\\ \tilde{q}_{k}^{p}\in\Pi^{\prime}\end{subarray}}^{N_{p+1}}\begin{bmatrix} \cos\theta_{k}\\ \sin\theta_{k}\end{bmatrix}\varphi(q_{k}^{D}), \tag{10}\] where, \(\theta_{k}=2(k-1)\pi/N_{p+1}\) (\(k\in\mathbb{N}_{N_{p+1}}\)) and \(q_{k}^{D}=x+R_{s}\left[cos\theta_{k},sin\theta_{k}\right]^{T}\). 
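A minimal sketch of how the discrete summation in (10) might be evaluated is given below. It is an illustration only: the priority function `phi` and the membership test `in_region` (for \(\Pi^{\prime}\)) are placeholders supplied by the caller, and `n_points` plays the role of \(N_{p+1}\).

```python
import math

def grad_sensing_disk_boundary(x, r_s, phi, in_region, n_points=360):
    """Discrete approximation of the boundary term in Eq. (10): the gradient
    contribution of the part of the sensing-disk perimeter lying in Pi'."""
    gx, gy = 0.0, 0.0
    for k in range(1, n_points + 1):
        theta = 2.0 * (k - 1) * math.pi / n_points
        q = (x[0] + r_s * math.cos(theta), x[1] + r_s * math.sin(theta))
        if in_region(q):                    # keep only perimeter points inside Pi'
            w = phi(q)
            gx += w * math.cos(theta)
            gy += w * math.sin(theta)
    scale = 2.0 * math.pi * r_s / n_points  # arc-length element of the perimeter
    return (scale * gx, scale * gy)
```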
The number \(N_{p+1}\) is sufficiently large to guarantee the desired precision in computing the integral in a discrete form. The last group of segments is generated when the sensing disk is blocked by obstacles. These boundary partitions are line segments in a radial direction in the sensing disk starting from the blocking vertices of the obstacle denoted by \(v_{o}=\left[v_{o,1},v_{o,2}\right]^{T}\) to the point on the perimeter of the sensing disk. This is illustrated in Fig. 1 for a simple configuration. For now, assume that there is only one such line segment and that no part of it is covered by any static sensor. Define: \[h_{p+2}\left(x,q\right)=\left(q_{2}-v_{o,2}\right)\left(v_{o,1}-x_{1}\right)- \left(q_{1}-v_{o,1}\right)\left(v_{o,2}-x_{2}\right)\] which yields: \[\nabla_{x}h_{p+2}\left(x,q\right)=\begin{bmatrix}-\left(q_{2}-v_{o,2}\right) \\ q_{1}-v_{o,1}\end{bmatrix}=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\left(q-v_{o}\right), \tag{11}\] \[\nabla_{q}h_{p+2}\left(x,q\right)=\begin{bmatrix}-\left(v_{o,2}-x_{2}\right) \\ v_{o,1}-x_{1}\end{bmatrix}, \tag{12}\] resulting in \[\left\|\nabla_{q}h_{p+2}\left(x,q\right)\right\|=\left\|v_{o}-x\right\|. \tag{13}\] For simplicity, let \(\left\|v_{o}-x\right\|=d(x,v_{o})\) be called \(R_{v_{o}}\). By substituting these values in (8), one obtains: \[\nabla_{x}^{v_{o}}F(x)=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\frac{1}{R_{v_{o}}}\int\limits_{\partial_{p+2}\mu(x)}\left(q-v _{o}\right)\varphi(q)dL. \tag{14}\] It is to be noted that since both vectors \(\left(q-v_{o}\right)\) and \(\left(v_{o}-x\right)\) are in the same direction, \(\forall q\in\partial_{p+2}\mu(x)\), there exists \(l\in\left[0,R_{s}-R_{v_{o}}\right]\) such that \(q=v_{o}+l\left(v_{o}-x\right)/R_{v_{o}}\). On the other hand, since \(\left(v_{o}-x\right)/R_{v_{o}}\) is a unit vector normal to the perimeter of the sensing disk (denoted by \(n(v_{o})\)), the integral in (14) is formulated as: \[\nabla_{x}^{v_{o}}F(x)=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\frac{1}{R_{v_{o}}}n(v_{o})\int_{0}^{R_{s}-R_{v_{o}}}\varphi(q)dl. \tag{15}\] Using an approach similar to that in (10), the above integral can be computed as a discrete summation over the \(N_{p+2}\) points on \(\partial_{p+2}\mu(x)\). Thus, \[\nabla_{x}^{v_{o}}F(x)=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\frac{\left(R_{s}-R_{v_{o}}\right)^{2}}{N_{p+2}R_{v_{o}}}n(v_{o} )\sum\limits_{\begin{subarray}{c}k=1\\ q_{k}\in\Pi^{\prime}\end{subarray}}^{N_{p+2}}k\varphi(q_{k}), \tag{16}\] where \(q_{k}=v_{o}+k\frac{R_{x}-R_{x}}{N_{p+2}}n(v_{o})\), \(k\in\mathbb{N}_{N_{p+2}}\). In general, there may be more than one boundary segment generated, depending on the relative position of the sensor and the obstacles (characterized by vertices \(v_{o}^{p+2},v_{o}^{p+3},\ldots,v_{o}^{M}\)). Consequently, the gradient of the objective function w.r.t. \(x\) can be expressed as: \[\nabla_{x}F(x)=\nabla_{x}^{D}F(x)+\sum_{k=p+2}^{M}\nabla_{x}^{v_{o}^{k}}F(x). \tag{17}\] With the above formulation, the high computational complexity of the line integral along the boundaries is replaced by a set of simple numerical summations affordable for sensors with limited computational capability. After finding the gradient of the local weighted coverage and the decent direction \(p_{k}\), the next step is to find the optimal moving step size such that the new candidate position provides the optimal value for \(F(x)\). This is a line search problem that can be formulated as follows: \[\alpha_{k}=\arg\min_{\alpha}F(x_{k}+\alpha p_{k}). 
\tag{18}\] However, this may result in a point that is outside the region \(\Pi\) or a point that the sensor cannot move to through a direct route due to the existence of some obstacles on its way. In such cases, the candidate point must be projected to the region \(\Pi\cap\Phi(x_{s})\). Due to the similarity of this problem to the one studied in [11], the scaled gradient projection algorithm is used here. However, the non-convexity of the region caused by the heterogeneity of the network, the existence of static sensors, and the presence of obstacles, makes the problem more complex. **Theorem 2**.: _Consider the sensor network \(\mathcal{S}\) described in Section II, and let \(R_{c_{i}}\geq 2R_{s_{i}},\forall i\in\mathbb{N}_{n}\). Then, the overall weighted coverage under the MMA strategy increases in each round until it reaches the steady state._ Proof.: Although the CAMW-Voronoi diagram does not necessarily partition the field (due to the possible overlap between the regions), Theorem 1 guarantees the distinctiveness of local covered areas. The proof follows now similarly to that of Theorem 2 in [11]. It is worth mentioning that aside from the task of constructing the Voronoi region (which is common among all Voronoi-based approaches), the main computational complexity of the distributed MMA deployment strategy concerns deriving the local weighted coverage in each iteration. By using Delaunay triangulation of a region with \(N\) points, the order of this complexity is \(\mathcal{O}(N\log N)\)[20]. On the other hand, the number of points \(N\), as a design parameter, determines the accuracy of integration, and hence, introduces a trade-off between the computation time and the precision of weighted coverage estimation. ## V Simulation Results In this section, the performance of the MMA strategy is investigated in different scenarios. Due to the complexity and nonlinearity of the maximum weighted coverage problem, there is no tractable method to determine the globally optimal sensor configuration. Hence, the performance of the MMA strategy and two other methods is evaluated and compared using Monte Carlo simulations [8, 21]. Unlike the MMA strategy, the other methods are only applicable when the sensing field has a uniform priority function and no obstacles. Therefore, such a comparison is done only in Example 1. In all of the following examples, the communication radius of each sensor is assumed to be four times its sensing radius, which satisfies the condition of Theorem 1. **Example 1.** The goal of this example is to compare the performance of the MMA strategy with some other Voronoi-based approaches. In order for the scenario to provide credible information, the network and environment specifications have been chosen so that they mostly fit the constraints of other methods. For any test set, the sensors have been randomly deployed in a square field of \(30\times 30\)m. One-third of the sensors are static and the remaining are mobile. The sensing radius is set to 2m for all static sensors and it is a random value between 2m and 4m for the mobile ones. The sensing radius of the static sensors is set to be less than or equal to that of the mobile sensors to reflect a real-world scenario in which sensors with insufficient power become permanently immobile at their last positions to minimize energy depletion. Also, the field is assumed to have no obstacles, and a uniform priority function (i.e., \(\varphi(q)\equiv 1\)). 
Additionally, \(\epsilon\) is set to \(0.1\)m\({}^{2}\), meaning that the network reaches the steady state when no mobile sensor is able to improve its local coverage by at least \(0.1\)m\({}^{2}\). For Monte Carlo simulation, sensor networks of \(12\), \(21\), \(30\), and \(39\) sensors have each been tested for \(20\) initial random configurations under the MMA, PCVF [7], and Minimax [8] methods for comparison. One of the test results is shown in Fig. 2, demonstrating that under the other two strategies, the network cannot avoid the unnecessary overlapping between the sensing ranges of mobile and static sensors since they do not take the static sensors into consideration initially for constructing regions. To assess the performance of the three methods, the average coverage factor, defined as the ratio of the weighted coverage of the sensor network to the weighted area of the field versus the number of sensors in initial and final configurations are shown in Fig. 3. This figure shows that the final coverage factor is always higher when applying the MMA method. On the other hand, Fig. 4 shows that the higher final coverage factor comes at the cost of a higher number of required iterations to reach the termination point. Also, based on Fig. 5, the average moving distance for the sensors is also higher under the MMA strategy, meaning that each sensor travels a longer distance to find the appropriate location. Unlike the coverage factor which monotonously increases by increasing the number of sensors, the stopping round and average moving distance under the MMA strategy for networks with \(21\) sensors are higher than those with smaller and larger networks. If the number of sensors is neither too low nor too high, the sensors move around constantly, search ing for a configuration that results in the least overlapping and the most coverage. In networks with a small number of sensors, movements may not be necessary as they may not affect coverage due to the low sensor density. In networks with a large number of sensors, also, extensive movements may not be necessary due to the potentially high overlap in sensors' covered areas. Energy consumption is another important factor that needs to be considered in evaluating the performance of sensor movement strategies because in real-world applications, sensors have a limited power supply. Let the energy that a sensor consumes to travel \(1\)m (without stopping) be \(8.268\)J [22]. Also, suppose that the amount of energy required to stop a sensor and then overcome the static friction following a complete stop is equal to that required to travel \(1\)m and \(4\)m, respectively [8]. The energy consumption for communication and sensing is assumed to be negligible compared to that for movement. Fig. 6 gives the average energy consumed by each mobile sensor for different numbers of sensors. Unlike the Minimax and PCV strategies that act solely based on the relative positions of sensors in a non-prioritized field, the MMA method takes the area coverage of each sensor over its designated region into account. As a result, Fig. 4: Number of iterations before meeting the termination condition for different numbers of sensors under different strategies Fig. 5: Average moving distance for different numbers of sensors under different strategies Fig. 3: Final coverage factor for different numbers of sensors under different strategies Fig. 6: Average energy consumption for different numbers of sensors under different strategies Fig. 
2: The initial and final configurations of the WSN under different strategies its improved final coverage factor comes at the cost of higher energy consumption and longer convergence time. **Example 2.** In this example, the objective is to measure the performance of the MMA strategy in the presence of both static sensors and obstacles which is the main difference between this method and its predecessors. Consider 24 mobile sensors with random sensing radii between 1m and 2m, and 6 static sensors with sensing radii equal to 1m, randomly deployed in a square field of \(15\times 15\)m with some obstacles as shown in Fig. 7. Similar to Example 1, the uniform priority function is considered. Experiments are repeated \(20\) times for different random initial configurations with and without obstacles. The initial and final configurations of the sensors for one of these tests are shown in Fig. 7. While the results demonstrate that the coverage increases in the final configuration, as expected, they also show the negative impact of obstacles in the coverage performance, as evident from the average final coverage factor from 86% to 69%. **Example 3.** In the last example, it is desired to demonstrate the performance of the MMA strategy in the presence of obstacles and a non-uniform priority function. The ROI is a square field of \(15\times 15\)m with an obstacle as shown in Fig. 8. The following priority function is used: \[\varphi(q)=e^{-\alpha\left[(q_{1}-10)^{2}+(q_{2}-10)^{2}\right]}, \tag{19}\] where \(\alpha=0.02\). The above function has a peak value at the point \(q=[10,10]^{T}\), and exponentially decays as moving farther from it. In Fig. 8, the darker spots indicate more important points to cover, according to the priority function. The sensor network includes 6 mobile sensors with sensing radii of \(1.3\), \(1.5\), \(1.7\), \(1.8\), \(1.9\), and \(2\) meters. The parameter \(\epsilon\) is set to \(0.05\)m\({}^{2}\) to enable the network to follow the priority function in areas where it has very low values. The sensors are initially deployed in the lower left corner of the field, meaning that their direct route toward the focal point of the priority function is blocked by the obstacle. Fig. 8 depicts the trajectories of the sensors and their final position, showing that they eventually cover the most important points by going around the obstacle. Furthermore, it shows that the sensors have taken the nearest routes by passing alongside the edge of the obstacle which can reduce the excessive traveling distance and energy consumption. It is to be noted that multiple focal points can be modeled by a priority function equal to the sum of exponentials similar to (19). A priority function with a greater \(\alpha\) represents a faster decay. For a sensor network in such environments, it may be hard to detect the priority function and follow its gradient. Thus, the sensitivity of the network must be strengthened in such cases by decreasing the parameter \(\epsilon\). ## VI Conclusion An iterative deployment strategy is proposed to maximize the weighted coverage of a network of mobile and static sensors with nonidentical sensing and communication radii over a field with obstacles. The objective is to develop a distributed approach, tasking every sensor to maximize the local coverage over the corresponding Voronoi region based on its interpretation of the environment and the information it obtains from its neighbors. 
The proposed method exploits a gradient-based approach to compute the optimal moving direction for every mobile sensor, considering the coverage priority of the points in the field, the static sensors' covered areas, and the obstacles. The coverage efficacy of the method compared to the alternative approaches is demonstrated by simulations. As future research directions, one can consider more practical settings, e.g., using more realistic sensing models (instead of a perfect disk) and increasing reliability through sweeping coverage or k-coverage schemes; such adjustments would make the method applicable to scenarios with additional practical constraints and open up the possibility of extending the findings to other WSN applications such as distributed target tracking. Furthermore, machine learning-based techniques and evolutionary algorithms can enhance the deployment strategy toward the globally optimal solution.

Fig. 7: Performance of the MMA strategy for a network with mobile and static sensors in a field with obstacles

Fig. 8: Performance of MMA strategy in a prioritized field with obstacles
2309.12025
Robust Approximation Algorithms for Non-monotone $k$-Submodular Maximization under a Knapsack Constraint
The problem of non-monotone $k$-submodular maximization under a knapsack constraint ($\kSMK$) over a ground set of size $n$ arises in many machine learning applications, such as data summarization and information propagation. However, existing algorithms for the problem still face the questions of how to handle the non-monotone case and how to quickly return a good solution on large-scale data. This paper introduces two deterministic approximation algorithms for the problem that competitively improve the query complexity of existing algorithms. Our first algorithm, $\LAA$, returns an approximation ratio of $1/19$ within $O(nk)$ query complexity. The second one, $\RLA$, improves the approximation ratio to $1/5-\epsilon$ in $O(nk)$ queries, where $\epsilon$ is an input parameter. Our algorithms are the first to provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective; they therefore require fewer queries than state-of-the-art ones by a factor of $\Omega(\log n)$. Besides the theoretical analysis, we have evaluated our proposed algorithms with several experiments on two instances of the problem: Influence Maximization and Sensor Placement. The results confirm that our algorithms match the solution quality of cutting-edge techniques while significantly reducing the number of queries.
Dung T. K. Ha, Canh V. Pham, Tan D. Tran, Huan X. Hoang
2023-09-21T12:42:52Z
http://arxiv.org/abs/2309.12025v1
Robust Approximation Algorithms for Non-monotone \(k\)-Submodular Maximization under a Knapsack Constraint ###### Abstract The problem of non-monotone \(k\)-submodular maximization under a knapsack constraint (kSMK) over the ground set size \(n\) has been raised in many applications in machine learning, such as data summarization, information propagation, etc. However, existing algorithms for the problem are facing questioning of how to overcome the non-monotone case and how to fast return a good solution in case of the big size of data. This paper introduces two deterministic approximation algorithms for the problem that competitively improve the query complexity of existing algorithms. Our first algorithm, LAA, returns an approximation ratio of \(1/19\) within \(O(nk)\) query complexity. The second one, RLA, improves the approximation ratio to \(1/5-\epsilon\) in \(O(nk)\) queries, where \(\epsilon\) is an input parameter. Our algorithms are the first ones that provide constant approximation ratios within only \(O(nk)\) query complexity for the non-monotone objective. They, therefore, need fewer the number of queries than state-of-the-art ones by a factor of \(\Omega(\log n)\). Besides the theoretical analysis, we have evaluated our proposed ones with several experiments in some instances: Influence Maximization and Sensor Placement for the problem. The results confirm that our algorithms ensure theoretical quality as the cutting-edge techniques and significantly reduce the number of queries. Approximation algorithm, \(k\)-submodular maximization, knapsack constraint, non-monotone. ## I Introduction \(k\)-submodular is a generalized version of submodular in polyhedra [1] in which some properties of submodularity have deep theoretical extensions to \(k\)-submodularity that challenge researchers to study [2, 3, 4], etc. Maximizing a \(k\)-submodular function subject to some constraints has recently become crucial in combinatorial optimization and machine learning such as influence maximization via social networks [5, 6, 7, 8], sensor placement [5, 6, 7], feature selection [2] and information coverage maximization [7], etc. Given a finite ground set \(V\) with \(|V|=n\), and an integer number \(k\), let \([k]=\{1,2,\ldots,k\}\), and \((k+1)^{V}=\{(V_{1},V_{2},\ldots,V_{k})|V_{i}\subseteq V,\forall i\in[k],V_{i} \cap V_{j}=\emptyset,\forall i\neq j\}\) be a family of \(k\) disjoint sets, called the \(k\)**-set**. We have the following definition of the \(k\)-submodular function: **Definition 1** (\(k\)-submodularity [3]).: A function \(f:(k+1)^{V}\mapsto\mathbb{R}_{+}\) is \(k\)**-submodular** iff for any \(\textbf{x}=(X_{1},X_{2},\ldots,X_{k})\) and \(\textbf{y}=(Y_{1},Y_{2},\ldots,Y_{k})\in(k+1)^{V}\), we have: \[f(\textbf{x})+f(\textbf{y})\geq f(\textbf{x}\sqcap\textbf{y})+f(\textbf{x} \sqcup\textbf{y}) \tag{1}\] where \[\textbf{x}\sqcap\textbf{y}=(X_{1}\cap Y_{1},\ldots,X_{k}\cap Y_{k})\] and \[\textbf{x}\sqcup\textbf{y}=(Z_{1},\ldots,Z_{k}),\text{ where }Z_{i}=X_{i} \cup Y_{i}\setminus(\bigcup_{j\neq i}X_{j}\cup Y_{j})\] In this paper, we consider the problem of \(k\)-Submodular Maximization under a Knapsack constraint (kSMK) which is defined as follows: **Definition 2** (The \(k\)-Submodular Maximization under a Knapsack constraint (kSMK) problem).: Under the knapsack constraint, each element \(e\) is assigned a positive cost \(c(e)\). 
Given a limited budget \(B>0\), the problem kSMK asks to find a \(k\)-set \(\textbf{x}=(X_{1},X_{2},\ldots,X_{k})\) with total cost \(c(\textbf{x})=\sum_{e\in X_{i},i\in[k]}c(e)\leq B\) so that \(f(\textbf{x})\) is maximized. The problem kSMK is a general model applied to a lot of essential instances such as \(k\)-topic influence maximization, \(k\)-type sensor placement, \(k\)-topic information coverage maximization [9, 10, 11], etc., with the knapsacks that encode users' constraints including budget, time or size. For example, \(k\)**-topic influence maximization under knapsack constraint **(kSMK)**[3, 6, 10], the problem asks for maximizing the expected number of users, who are influenced by at least one of \(k\) distinct topics with a limited budget \(B>0\). The mathematical nature is \(k\)-submodular maximization under a diffusion model, which Kempe _et al.[12]_ first proposed with a single type of influence. The challenge when providing a solution for kSMK is it has many candidate approximate solutions with different sizes. We have to select the best nearly optimal one within polynomial time. Therefore, beyond obtaining a nearly optimal solution to kSMK in the aforementioned applications, designing such a solution must also minimize the query complexity, especially for big data, since the tremendous amount of input data makes the search space for a solution crazily soar. Unfortunately, \(k\)-submodularity requires an algorithm to evaluate the objective function whenever observing an incoming element. Therefore, it is necessary to design efficient algorithms in reasonable computational time. We refer to the _query complexity_ as a measure of computational time since it dominates the time running of an algorithm. Previous works [11, 13, 14] proposed efficient algorithms for kSMK in which even some algorithms can provide solutions in linear query complexity of \(O(kn)\). However, these works are just available for the monotone case. Meanwhile, some works [15, 16, 17] showed that the \(k\)-submodular objective function might be non-monotone in practical applications. Therefore, solving the non-monotone kSMK problem within linear query complexity is critical. Overall, this paper aims to tackle both challenges above for non-monotone \(k\)-submodular maximization and constrained by a knapsack. ### _Our contribution_ In this work, we design novel approximate algorithms that respond to some requirements about providing considerable solution quality and reducing query complexity. In particular, our work is the first one that provides a constant approximation ratio within only \(O(kn)\) query complexity for non-monotone kSMK. The main version, RLA returns an approximation ratio of \(1/5-\epsilon\) which is equivalent to the state-of-the-art one proposed in [18]. In general, our contributions are as per the following: * We first propose the LAA algorithm (Algorithm 1), a \(1/19\)-approximation one that scans a single pass over the ground set within \(O(kn)\) query complexity. It's the first simple but vital algorithm of our work since it limits the range of the optimal value. Besides, it provides a data division strategy to reduce query complexity to \(O(nk)\). * We next propose RLA algorithm (Algorithm 2) that achieves an approximation ratio \(1/5-\epsilon\), and requires \(O(kn/\epsilon)\) query complexity where \(\epsilon>0\) is an accuracy parameter. 
To the best of our knowledge, this matches the currently best approximation ratio achieved by a deterministic algorithm for the studied problem [18]. * To illustrate the theoretical contributions, we conduct comprehensive experiments on two applications of kSMK: \(k\)-topic Influence Maximization and \(k\)-type Sensor Placement. Experimental results show that our algorithms require fewer queries than the state-of-the-art algorithms (listed in Table I) while returning results of comparable quality. Table I compares our algorithms with state-of-the-art algorithms for non-monotone kSMK on three aspects: approximation ratio, query complexity, and whether the algorithm is deterministic. It shows that our algorithms combine a low number of queries with deterministic approximation ratios that are equivalent to or even better than the others. _Organization_ The rest of the paper is organized as follows: We provide a literature review and discussion in Section II. The notation and properties of \(k\)-submodular functions are presented in Section III. Section IV presents our algorithms and theoretical analysis. The extensive experiments are shown in Section V. Finally, we conclude this work in Section VI. ## II Related work In this section, we review related works and discuss existing algorithms. The study of \(k\)-submodular functions originates from the study of submodularity in polyhedra: Lovász [1] found it to be a similar but deeper theory than submodularity when working with intersections of matroids. Subsequently, more works focused on \(k\)-submodularity with general \(k\geq 2\). Early work studied unconstrained \(k\)-submodular maximization [3, 19, 20]. Motivated by the practical value of constrained problems, several authors then focused on \(k\)-submodular maximization under various constraints [5, 21, 22]. Several works addressed the monotone case: Ohsaka _et al._[5] studied monotone \(k\)-submodular maximization with two kinds of size constraint, the overall size constraint and the singular size constraint, and the authors of [10] proposed a multi-objective evolutionary method providing an approximation ratio of \(1/2\) for the monotone \(k\)-submodular maximization problem with the overall size constraint. However, the latter algorithm has a high query complexity of \(O(kn\log^{2}B)\) in expectation. The authors of [23] further proposed an online algorithm with the same approximation ratio of \(1/2\) that runs in polynomial time with a regret bound. However, these contributions only work for the monotone case and for size constraints; hence, they are hard to adapt to non-monotone kSMK. Moreover, these algorithms require exponential running time [5] or high query complexity [10]. Recently, Nguyen _et al._[8] first applied streaming to the problem of \(k\)-submodular maximization with an overall size constraint. Streaming is an attractive approach since it requires only a small amount of memory to store data and scans the ground set \(V\) once or a few times. They devised two streaming algorithms within \(O(nk\log(k))\) query complexity. Their first one is deterministic and returns an approximation ratio of \(1/3-\epsilon\), while the second one is randomized and returns an approximation ratio of \(k/(3k-1)-\epsilon\). 
Later on, Ene and Nguyen [24] developed a single-pass streaming algorithm based on an integer programming formulation for \(k\)-submodular maximization with a singular size constraint, with an approximation ratio of \(0.5/(1+B(2^{1/B}-1))\) within \(O(nk)\) queries, where \(B=\min_{i\in[k]}B_{i}\). Unlike cardinality or matroid constraints, the knapsack constraint requires maximizing \(f(\cdot)\) subject to a given budget that the total cost of a solution cannot exceed; hence, there can be multiple maximal-cost solutions of different sizes. The authors of [14] proposed a multi-linear extension method with an approximation ratio of \(1/2-2\epsilon\) in expectation for kSMK. This work provides the best approximation ratio in expectation. However, the algorithm is impractical due to the high query complexity of a continuous extension [25]. Besides, Wang _et al._[13] proposed a \((1/2-1/(2e))\)-approximation algorithm for kSMK inspired by the Greedy algorithm in [26]. This algorithm, however, requires an expensive query complexity of \(O(n^{4}k^{3})\), and therefore it is difficult to apply even to medium-sized instances, even if the objective function \(f\) can be computed in \(O(1)\) time. The authors of [11] first proposed a deterministic \((1/4-\epsilon)\)-approximation algorithm within \(O(kn/\epsilon)\) queries. Nonetheless, the aforementioned works do not apply to the non-monotone case. To address non-monotone \(k\)-submodularity, Pham _et al._[18] recently proposed two single-pass streaming algorithms for \(k\)-submodular maximization under a budget constraint, a generalization of the knapsack constraint, within \(O(nk\log(n)/\epsilon)\) queries. These algorithms return ratios of \(1/5-\epsilon\) and \(k/(5k-2)-\epsilon\) (in expectation) for the non-monotone case. Our best algorithm, RLA, gives an equivalent approximation ratio (\(1/5-\epsilon\)) yet reduces the query complexity to \(O(kn/\epsilon)\). On the whole, our algorithms are deterministic, have linear query complexity, and handle the non-monotone case. ## III Preliminaries **Notations.** Given a ground set \(V=\{e_{1},e_{2},\ldots,e_{n}\}\) and an integer \(k\), we define \([k]=\{1,2,\ldots,k\}\) and let \((k+1)^{V}=\{(V_{1},V_{2},\ldots,V_{k})|V_{i}\subseteq V\ \forall i\in[k],V_{i}\cap V_{j}= \emptyset\ \forall i\neq j\}\) be a family of \(k\) disjoint subsets of \(V\), called \(k\)-set. For \(\textbf{x}=(X_{1},X_{2},\ldots,X_{k})\in(k+1)^{V}\), we define \(supp_{i}(\textbf{x})=X_{i}\), \(supp(\textbf{x})=\cup_{i\in[k]}X_{i}\), \(X_{i}\) as the \(i\)**-th set of x**, and the empty \(k\)-set \(\textbf{0}=(\emptyset,\ldots,\emptyset)\). If \(e\in X_{i}\), we set \(\textbf{x}(e)=i\) and call \(i\) the **position** of \(e\) in **x**; otherwise \(\textbf{x}(e)=0\). Adding an element \(e\notin supp(\textbf{x})\) into \(X_{i}\) is represented by \(\textbf{x}\sqcup(e,i)\). We also write \(\textbf{x}=\{(e_{1},i_{1}),(e_{2},i_{2}),\ldots,(e_{t},i_{t})\}\) for \(e_{j}\in supp(\textbf{x}),i_{j}=\textbf{x}(e_{j}),\forall 1\leq j\leq t\). When \(X_{i}=\{e\}\) and \(X_{j}=\emptyset,\forall j\neq i\), **x** is denoted by \((e,i)\). 
For \(\textbf{x}=(X_{1},X_{2},\ldots,X_{k}),\textbf{y}=(Y_{1},Y_{2},\ldots,Y_{k})\in( k+1)^{V}\), we denote \(\textbf{x}\sqsubseteq\textbf{y}\) iff \(X_{i}\subseteq Y_{i}\ \forall i\in[k]\). **The objective function.** The function \(f:(k+1)^{V}\mapsto\mathbb{R}_{+}\) is \(k\)**-submodular** iff for any \(\textbf{x}=(X_{1},X_{2},\ldots,X_{k})\) and \(\textbf{y}=(Y_{1},Y_{2},\ldots,Y_{k})\in(k+1)^{V}\), we have: \[f(\textbf{x})+f(\textbf{y})\geq f(\textbf{x}\sqcap\textbf{y})+f(\textbf{x} \sqcup\textbf{y}) \tag{2}\] where \[\textbf{x}\sqcap\textbf{y}=(X_{1}\cap Y_{1},\ldots,X_{k}\cap Y_{k})\] and \[\textbf{x}\sqcup\textbf{y}=(Z_{1},\ldots,Z_{k}),\ \text{where}\ Z_{i}=X_{i} \cup Y_{i}\setminus(\bigcup_{j\neq i}X_{j}\cup Y_{j})\] For any \(\textbf{x}\in(k+1)^{V}\), \(e\notin supp(\textbf{x})\) and \(i\in[k]\), the _marginal gain_ when adding an element \(e\) to the \(i\)-th set \(X_{i}\) of **x** is: \[\Delta_{(e,i)}f(\textbf{x})= f(X_{1},\ldots,X_{i-1},X_{i}\cup\{e\},X_{i+1},\ldots,X_{k})\] \[-f(X_{1},\ldots,X_{k})\] In this work, we consider \(f\) to be _non-monotone_, i.e., the marginal gain when adding a tuple \((e,i)\) to a set **x**, \(\Delta_{(e,i)}f(\textbf{x})\), may be negative. We also assume that \(f\) is normalized, i.e., \(f(\textbf{0})=0\), and that there exists an _oracle query_ which, when queried with the \(k\)-set **x**, returns the value \(f(\textbf{x})\). We also recap some properties of \(k\)-submodular functions that will be used in designing our algorithms. From [3], a function \(f:(k+1)^{V}\mapsto\mathbb{R}_{+}\) is \(k\)-submodular iff it is pairwise monotone and orthant submodular. The \(k\)-submodularity of \(f\) implies the _orthant submodularity_, i.e., \[\Delta_{(e,i)}f(\textbf{x})\geq\Delta_{(e,i)}f(\textbf{y}) \tag{3}\] for any \(\textbf{x},\textbf{y}\in(k+1)^{V}\) with \(\textbf{x}\sqsubseteq\textbf{y}\), \(e\notin supp(\textbf{y})\) and \(i\in[k]\), and the _pairwise monotonicity_, i.e., \[\Delta_{(e,i)}f(\textbf{x})+\Delta_{(e,j)}f(\textbf{x})\geq 0 \tag{4}\] for any \(\textbf{x}\in(k+1)^{V}\) with \(e\notin supp(\textbf{x})\) and \(i,j\in[k]\) with \(i\neq j\). \begin{table} \begin{tabular}{l l l l} \hline **Reference** & **Approximation ratio** & **Query complexity** & **Is deterministic?** \\ \hline **LAA (Alg. 1, this paper)** & \(1/19\) & \(O(kn)\) & Yes \\ **RLA (Alg. 2, this paper)** & \(1/5-\epsilon\) & \(O(kn/\epsilon)\) & Yes \\ Deterministic Streaming[18] & \(1/5-\epsilon\) & \(O(kn\log(n)/\epsilon)\) & Yes \\ Random Streaming[18] & \(k/(5k-2)-\epsilon\) & \(O(kn\log(n)/\epsilon)\) & No \\ \hline \end{tabular} \end{table} TABLE I: Algorithm comparison for non-monotone kSMK; note that Deterministic Streaming and Random Streaming in [18] are the special cases when \(\beta=1\). **The problem definition.** Assume that each element \(e\) is assigned a positive cost \(c(e)\) and that the total cost of a \(k\)-set **x** is \(c(\textbf{x})=\sum_{e\in supp(\textbf{x})}c(e)\). Given a limited budget \(B>0\), we assume that every item \(e\in V\) satisfies \(c(e)\leq B\); otherwise, we can simply discard it. The \(k\)-Submodular Maximization under Knapsack constraint (kSMK) problem is to determine: \[\arg\max_{\textbf{x}\in(k+1)^{V}:c(\textbf{x})\leq B}f(\textbf{x}). \tag{5}\] That is, the problem asks for a solution **x** whose total cost is at most \(B\) and for which \(f(\textbf{x})\) is maximized. 
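To make the notation concrete, the following Python sketch shows one possible in-memory representation of a \(k\)-set and of the oracle for \(f\), instantiated here with a toy (monotone) \(k\)-type coverage function; the data structures, function names, and numbers are illustrative assumptions for this example only and are not part of the algorithms analysed in this paper.

```python
# Hypothetical toy instance of kSMK: a k-type coverage objective.
# A k-set is represented as a dict {element: position}, positions in 1..k;
# an element absent from the dict is unassigned (position 0).

def make_coverage_objective(neighbours):
    """neighbours[i][e] is the set covered when element e is assigned type i."""
    def f(x):
        covered = set()
        for e, i in x.items():
            covered |= neighbours[i][e]
        return len(covered)
    return f

def marginal_gain(f, x, e, i):
    """Delta_{(e,i)} f(x): gain of adding element e at position i."""
    assert e not in x
    y = dict(x)
    y[e] = i
    return f(y) - f(x)

def cost(x, c):
    """Total knapsack cost of the k-set x."""
    return sum(c[e] for e in x)

# Tiny usage example (all numbers are made up for illustration).
neighbours = {
    1: {"a": {1, 2}, "b": {2, 3}},   # coverage when assigned topic 1
    2: {"a": {4},    "b": {3, 5}},   # coverage when assigned topic 2
}
c = {"a": 2, "b": 3}                  # element costs
B = 4                                 # budget
f = make_coverage_objective(neighbours)
x = {"a": 1}                          # k-set with element a at position 1
print(marginal_gain(f, x, "b", 2), cost(x, c) <= B)   # -> 2 True
```

Representing a \(k\)-set as a map from elements to positions automatically enforces that the sets \(X_{1},\ldots,X_{k}\) are pairwise disjoint.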
In this work, we only consider \(k\geq 2\) because if \(k=1\), a \(k\)-submodular function is simply a submodular function. ## IV The algorithms In this section, we introduce two deterministic algorithms for kSMK. The first algorithm, named **Linear Approximation Algorithm** (\(\mathsf{LAA}\)), has an approximation ratio of \(1/19\) and takes \(O(nk)\) query complexity. Although this approximation ratio is small, it is the **first one** that gives a constant approximation ratio within only \(O(kn)\) queries for the non-monotone case. The approximation ratio is improved by our second algorithm, named **Robust Linear Approximation** (\(\mathsf{RLA}\)), from \(1/19\) to \(1/5-\epsilon\) by reusing the first algorithm's solution to obtain a suitable range bounding the optimal value opt. Additionally, it scans the ground set \(O(1/\epsilon)\) times and integrates a decreasing-threshold strategy to obtain a near-optimal solution. ### _Linear Approximation Algorithm_ Our \(\mathsf{LAA}\) algorithm adapts the idea of the recent work [11]: (1) the ground set \(V\) is divided into two subsets, where elements with costs greater than \(B/2\) form the **first subset** and the remaining elements form the **second**, and (2) near-optimal solutions are sought for the two subsets and then combined. In particular, the algorithm first receives an instance \((V,f,k,B)\) of kSMK and initializes a candidate solution **x** as the empty \(k\)-set and a tuple \((e_{m},i_{m})\) as \((\emptyset,1)\). The tuple \((e_{m},i_{m})\) keeps track of the best singleton solution seen, which serves as the solution for the first subset, while the candidate solution **x** approximates the optimal solution in the second. For each incoming element \(e\), the algorithm finds the best position \(i_{e}\), i.e., the index \(i\in[k]\) for which \(f((e,i))\) is highest. If the cost of \(e\) is greater than \(B/2\), the element only contributes to \((e_{m},i_{m})\) (lines 4-5). Otherwise, the algorithm also adds the tuple \((e,i_{e})\) into **x** provided that the condition \(\Delta_{(e,i_{e})}f(\textbf{x})\geq c(e)f(\textbf{x})/B\) holds. After the main loop completes, the algorithm selects a \(k\)-set \(\textbf{x}^{\prime}\) consisting of the last \(j\) tuples added into **x** with the maximum total cost not exceeding \(B\) (line 11). Finally, the algorithm returns the solution **s** as the better of \((e_{m},i_{m})\) and \(\textbf{x}^{\prime}\). The details of the algorithm are fully presented in Algorithm 1. ``` Input:\(V\), \(f\), \(k\), \(B>0\). Output: A solution \(\textbf{s}\) 1:\(\textbf{x}\leftarrow\textbf{0}\); \((e_{m},i_{m})\leftarrow(\emptyset,1)\); \(\textbf{x}^{\prime}\leftarrow\textbf{0}\); 2:foreach\(e\in V\)do 3:\(i_{e}\leftarrow\arg\max_{i\in[k]}f((e,i))\) 4:\((e_{m},i_{m})\leftarrow\arg\max_{(e^{\prime},i^{\prime})\in\{(e_{m},i_{m}),(e,i_{e})\}}f((e^{\prime},i^{\prime}))\) 5:if\(c(e)\leq B/2\)then 6:if\(\Delta_{(e,i_{e})}f(\textbf{x})\geq c(e)f(\textbf{x})/B\)then 7:\(\textbf{x}\leftarrow\textbf{x}\sqcup(e,i_{e})\) 8:end 9:end 10:end 11:\(\textbf{x}^{\prime}\leftarrow\arg\max_{\textbf{x}_{i}:j\leq t_{x},c(\textbf{ x}_{j})\leq B}c(\textbf{x}_{j})\), where \(t_{x}=|supp(\textbf{x})|\) and \(\textbf{x}_{j}=\{(e_{t_{x}-j+1},i_{t_{x}-j+1}),(e_{t_{x}-j+2},i_{t_{x}-j+2}), \ldots,(e_{t_{x}},i_{t_{x}})\}\) is the last \(j\) tuples added into **x**. 
12:\(\textbf{s}\leftarrow\arg\max_{\textbf{s}^{\prime}\in\{(e_{m},i_{m}),\textbf{x}^{\prime} \}}f(\textbf{s}^{\prime})\) 13:return\(\textbf{s}\) ``` **Algorithm 1**A Linear Approximation Algorithm (\(\mathsf{LAA}\)) To deal with the non-monotonicity of the objective function, we need a non-trivial analysis to obtain an approximation guarantee. Differing from the monotone case in [11], we use the property of pairwise monotonicity as a critical component of our theoretical analysis. In the following, we analyze the theoretical guarantee of Algorithm 1. We first define the following notation: * \(V_{1}=\{e\in V:c(e)>B/2\},V_{2}=\{e\in V:c(e)\leq B/2\}\). * \(\textbf{o}\) is an optimal solution of the problem over \(V\) and the optimal value \(\mathsf{opt}=f(\textbf{o})\). * \(\textbf{o}_{1}^{\prime}=\{(e,\textbf{o}(e)):e\in V_{1}\},\textbf{o}_{2}^{ \prime}=\{(e,\textbf{o}(e)):e\in V_{2}\}\). * \(\textbf{o}_{1}\) is an optimal solution of the problem over \(V_{1}\). * \(\textbf{o}_{2}\) is an optimal solution of the problem over \(V_{2}\). * \((e_{j},i_{j})\) is the \(j\)-th tuple added in the main loop of Algorithm 1. * \(\textbf{x}=\{(e_{1},i_{1}),\ldots,(e_{t},i_{t})\}\) is the \(k\)-set **x** after the main loop ends, \(t=|supp(\textbf{x})|\). * \(\textbf{x}^{j}=\{(e_{1},i_{1}),\ldots,(e_{j},i_{j})\}\): the \(k\)-set **x** (in the main loop) after adding \(j\) elements, \(1\leq j\leq t\), \(\textbf{x}^{0}=\textbf{0}\), \(\textbf{x}^{t}=\textbf{x}\). * \(\textbf{x}_{j}=\{(e_{t-j+1},i_{t-j+1}),(e_{t-j+2},i_{t-j+2}),\ldots,(e_{t},i_{ t})\}\) is the set of the last \(j\) elements added into **x**. * \(\textbf{o}_{2}^{j}=(\textbf{o}_{2}\sqcup\textbf{x}^{j})\sqcup\textbf{x}^{j}\). * \(\textbf{o}_{2}^{j-1/2}=(\textbf{o}_{2}\sqcup\textbf{x}^{j})\sqcup\textbf{x}^{j-1}\). * \(\textbf{x}^{j-1/2}\): If \(e_{j}\in supp(\textbf{o}_{2})\), then \(\textbf{x}^{j-1/2}=\textbf{x}^{j-1}\sqcup(e_{j},\textbf{o}_{2}(e_{j}))\). If \(e_{j}\notin supp(\textbf{o}_{2})\), \(\textbf{x}^{j-1/2}=\textbf{x}^{j-1}\). * \(\textbf{u}^{t}=\{(u_{1},i_{1}),(u_{2},i_{2}),\ldots,(u_{r},i_{r})\}\) is the set of elements that are in \(\textbf{o}_{2}^{t}\) but not in \(\textbf{x}^{t}\), \(r=|supp(\textbf{u}^{t})|\). * \(\textbf{u}_{l}^{t}=\textbf{x}^{t}\sqcup\{(u_{1},i_{1}),(u_{2},i_{2}),\ldots,(u_ {l},i_{l})\},1\leq l\leq r\), and \(\textbf{u}_{0}^{t}=\textbf{x}^{t}\). Suppose that \(\textbf{x}^{\prime}\) consists of the last \(T\) tuples of **x**, i.e., \(\textbf{x}^{\prime}=\textbf{x}_{T}\). Denoting \(Q=t-T\), we have \(\textbf{x}=\textbf{x}^{Q}\sqcup\textbf{x}^{\prime}\). The following lemmas connect the candidate solution **x** with \(\textbf{o}_{2}\). **Lemma 1**.: \(f(\textbf{o}_{2})-f(\textbf{o}_{2}^{j})\leq 2f(\textbf{x}^{j})\) _for all \(0\leq j\leq t\)._ Proof.: See the Appendix, section VI-A **Lemma 2**.: \(f(\mathbf{x}^{\prime})\geq f(\mathbf{x}^{t})/3\)_._ Proof.: See the Appendix, section VI-A **Lemma 3**.: \(f(\mathbf{o}_{2}^{t})\leq 4f(\mathbf{x}^{t})\)_._ Proof.: See the Appendix, section VI-A From the above lemmas, we obtain the following: **Lemma 4**.: \(f(\mathbf{x}^{\prime})\geq f(\mathbf{o}_{2})/18\)_._ Proof.: See the Appendix, section VI-A **Theorem 1**.: _Algorithm 1 is a single-pass streaming algorithm that returns an approximation ratio of \(1/19\) and takes \(nk\) queries._ Proof.: See the Appendix, section VI-A ### _A Robust Linear Approximation Algorithm_ We next introduce the RLA algorithm, which improves the approximation ratio to \(1/5-\epsilon\) and takes \(O(kn/\epsilon)\) query complexity. 
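Before describing RLA in detail, the following hedged Python sketch illustrates the two ingredients of Algorithm 1 (LAA) that RLA builds on: the single-pass filtering condition \(\Delta_{(e,i_{e})}f(\textbf{x})\geq c(e)f(\textbf{x})/B\) and the selection of a maximum-cost feasible suffix \(\textbf{x}^{\prime}\). The representation of \(k\)-sets as lists of (element, position) pairs and the helper conventions are assumptions made for illustration only, not the authors' implementation.

```python
# A minimal sketch of Algorithm 1 (LAA), assuming the oracle f accepts a k-set
# given as a list of (element, position) pairs with f([]) = 0, and that c maps
# every element to a positive cost not exceeding B.

def laa(V, f, k, c, B):
    x = []                                   # candidate k-set, in insertion order
    best_single = None                       # (e_m, i_m): best singleton seen so far
    for e in V:
        i_e = max(range(1, k + 1), key=lambda i: f([(e, i)]))  # best position for e
        if best_single is None or f([(e, i_e)]) > f([best_single]):
            best_single = (e, i_e)
        if c[e] <= B / 2:                    # second subset: cheap elements only
            gain = f(x + [(e, i_e)]) - f(x)  # marginal gain Delta_{(e,i_e)} f(x)
            if gain >= c[e] * f(x) / B:      # filtering condition (line 6)
                x.append((e, i_e))
    # x': longest (hence maximum-cost) suffix of x that fits the budget (line 11)
    suffix, total = [], 0
    for e, i in reversed(x):
        if total + c[e] > B:
            break
        suffix.insert(0, (e, i))
        total += c[e]
    candidates = [suffix] + ([[best_single]] if best_single is not None else [])
    return max(candidates, key=f)            # better of x' and (e_m, i_m) (line 12)
```

Since costs are positive, the maximum-cost feasible suffix coincides with the longest suffix that fits the budget, which is what the reversed loop above computes.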
RLA keeps the key idea of LAA: it reuses LAA's solution to bound the range of opt and adopts a greedy threshold to improve the approximation ratio by scanning the ground set \(O(1/\epsilon)\) times. The details of the algorithm are fully presented in Algorithm 2. ``` Input:\(V\), \(f\), \(k\), \(B>0\), \(\epsilon>0\). Output: A solution \(\mathbf{s}\) 1:\(\mathbf{s}_{b}\leftarrow\) result of Algorithm 1; \(\Gamma\leftarrow f(\mathbf{s}_{b})\) 2:\(A\leftarrow\{(1+\epsilon)^{i}:i\in\mathbb{N},\Gamma\leq(1+\epsilon)^{i}\leq 19\Gamma\}\) 3:for\(e\in V\)do 4:foreach\(v\in A\)do 5:\(i_{v}\leftarrow\arg\max_{i\in[k]}\Delta_{(e,i)}f(\mathbf{s}_{v})\) 6:\(\tau_{v}=2v/(5B)\) 7:if\(c(\mathbf{s}_{v})+c(e)\leq B\)and \(\Delta_{(e,i_{v})}f(\mathbf{s}_{v})/c(e)\geq\tau_{v}\)then 8:\(\mathbf{s}_{v}\leftarrow\mathbf{s}_{v}\sqcup(e,i_{v})\) 9:end 10:end 11:end 12:\(\mathbf{s}_{final}\leftarrow\arg\max_{\mathbf{s}^{\prime}\in\{\mathbf{s}_{b}\}\cup\{\mathbf{s}_{v}:v\in A\}}f(\mathbf{s}^{\prime})\) return\(\mathbf{s}_{final}\) ``` **Algorithm 2**Robust Linear Approximation (RLA) Algorithm Specifically, RLA takes an instance \((V,f,k,B)\) of kSMK and an accuracy parameter \(\epsilon>0\) as inputs. RLA first calls LAA as a subroutine and uses LAA's solution, \(\mathbf{s}_{b}\), to obtain a range bounding the optimal value (line 1). From Theorem 1, we have \(\Gamma\leq\text{opt}\leq 19\Gamma\). The major part of the algorithm consists of two loops: the outer loop scans each element \(e\) in the ground set \(V\), and the inner loop considers the candidate solution \(\mathbf{s}_{v}\) for each value \(v\) in the set \(A\). On the basis of Theorem 1, we construct the set \(A\) to bound the number of candidate solutions \(\mathbf{s}_{v}\). We define \((e,i_{v})\) as the tuple that gives the largest marginal gain when added into \(\mathbf{s}_{v}\). When an element \(e\) arrives, the algorithm does the following: (1) it chooses the position \(i_{v}\) with maximal marginal gain with respect to \(\mathbf{s}_{v}\) and \(e\) (line 5); (2) it uses the threshold \(\tau_{v}=2v/(5B)\) to add the element \(e\) into \(\mathbf{s}_{v}\) if its density gain, defined as the ratio of the marginal gain of that element to its cost, is at least \(\tau_{v}\) and the budget constraint is not violated (line 7). We keep the notation \(\mathbf{o}\) for an optimal solution of the problem over \(V\) and \(\text{opt}=f(\mathbf{o})\) for the optimal value. We add the following notation for Algorithm 2: * \(\mathbf{s}_{v}=\{(e_{1},i_{1}),(e_{2},i_{2}),\ldots,(e_{q},i_{q})\}\) is the candidate solution with respect to a value \(v\in A\) after the outer loop ends. * \(\mathbf{s}_{v}^{j}=\{(e_{1},i_{1}),(e_{2},i_{2}),\ldots,(e_{j},i_{j})\},1\leq j \leq q\), and \(\mathbf{s}_{v}^{0}=\mathbf{0}\). * \(\mathbf{s}_{v}^{<e}\) is \(\mathbf{s}_{v}\) right before \(e\) is processed. * \(\mathbf{u}=\{(u_{1},i_{1}),(u_{2},i_{2}),\ldots,(u_{r},i_{r})\}\) is the set of elements that belong to \(\mathbf{o}\) but not to \(\mathbf{s}_{v}\), \(r=|supp(\mathbf{u})|\). * \(\mathbf{u}_{l}=\mathbf{s}_{v}\sqcup\{(u_{1},i_{1}),(u_{2},i_{2}),\ldots,(u_{l},i_{l}) \},\forall 1\leq l\leq r\), and \(\mathbf{u}_{0}=\mathbf{s}_{v}\). * \(\mathbf{o}^{j}=(\mathbf{o}\sqcup\mathbf{s}^{j})\sqcup\mathbf{s}^{j}\). * \(\mathbf{o}^{j-1/2}=(\mathbf{o}\sqcup\mathbf{s}^{j})\sqcup\mathbf{s}^{j-1}\). 
* \(\mathbf{s}^{j-1/2}\): If \(e_{j}\in supp(\mathbf{o})\), then \(\mathbf{s}^{j-1/2}=\mathbf{s}^{j-1}\sqcup(e_{j},\mathbf{o}(e_{j}))\). If \(e_{j}\notin supp(\mathbf{o})\), \(\mathbf{s}^{j-1/2}=\mathbf{s}^{j-1}\). **Lemma 5**.: _For any \(v\in A\), if there is no element \(o\in supp(\mathbf{o})\setminus supp(\mathbf{s}_{v})\) so that \(\Delta_{(o,\mathbf{o}(o))}f(\mathbf{s}_{v}^{<o})\geq\tau_{v}\) and \(c(\mathbf{s}_{v}^{<o})+c(o)>B\), we have: \(f(\mathbf{o})\leq 3f(\mathbf{s}_{v})+c(\mathbf{o})\tau_{v}\)._ Proof.: See the Appendix, section VI-A **Theorem 2**.: _For \(0<\epsilon<1/5\), Algorithm 2 returns an approximation ratio of \(1/5-\epsilon\) within \(O(nk/\epsilon)\) queries._ Proof.: See the Appendix, section VI-A ## V Experiments In this section, we compare the performance of our algorithms with that of state-of-the-art algorithms for the kSMK problem listed below: * **Deterministic Streaming (DS)1**: A streaming algorithm in [18] which returns an approximation ratio of \(1/5-\epsilon\), requires one pass and \(O(kn\log(n)/\epsilon)\) queries. * **Random Streaming (RS)**: Another streaming algorithm in [18] which returns an approximation ratio of \(k/(5k-2)-\epsilon\) in expectation, requires one pass and \(O(kn\log(n)/\epsilon)\) queries. Footnote 1: The kSMK problem is a special case of \(k\)-submodular maximization under the budget constraint in [18] with \(\beta=1\). Although the Greedy algorithm proposed in [13] gives the best approximation ratio, it only applies to the monotone case. Besides, the authors of [11] showed that the running time of Greedy was so long that they had to impose a time limit to cut off the experiment. Therefore, we exclude Greedy from our experiments. We conduct experiments on two specific applications, \(k\)**-topic Influence Maximization under knapsack constraint (kIMK)** and \(k\)**-type Sensor Placement under Knapsack constraint (kSPK)**, and report three important measurements: the value of the objective function, the number of queries, and the running time. We further show the trade-off between the solution quality and the number of queries of the algorithms under various settings of the budget \(B\). We also use the datasets mentioned in [8] to illustrate the performance of the compared algorithms (Table II). To present the performance of the algorithms via the above three measurements, we show numbered and captioned figures, in which the terms Fig, K, and M stand for Figure, thousands, and millions, respectively. All implementations run on a Linux machine with \(2\times\) Intel Xeon Silver \(4216\) processors @\(2.10\)GHz, \(16\) threads, and \(256\)GB DIMM ECC DDR4 @\(2666\)MHz. ### \(k\)_-topic Influence Maximization under Knapsack constraint (kIMK)_ We briefly recall the Linear Threshold (LT) information diffusion model [8, 12] and define the \(k\)-topic Influence Maximization under Knapsack constraint (kIMK) problem based on this model as follows: LT model. A social network is modeled by a directed graph \(G=(V,E)\), where \(V,E\) represent the sets of users and links, respectively. Each edge \((u,v)\in E\) is assigned weights \(\{w^{i}(u,v)\}_{i\in[k]}\), where each \(w^{i}(u,v)\) represents how strongly \(u\) influences \(v\) on the \(i\)-th topic. Each node \(u\in V\) has an _influence threshold_ for topic \(i\), denoted by \(\theta^{i}(u)\), which is chosen uniformly at random in \([0,1]\). 
Given a seed set \(\textbf{s}=(S_{1},S_{2},\ldots,S_{k})\in(k+1)^{V}\), the information propagation for topic \(i\) happens in discrete steps \(t=0,1,\ldots\) as follows. At step \(t=0\), all nodes in \(S_{i}\) become active for topic \(i\). At step \(t\geq 1\), a node \(u\) becomes active if \(\sum_{v\text{ activated}}w^{i}(v,u)\geq\theta^{i}(u)\). The information diffusion process on topic \(i\) ends at step \(t\) if there is no newly activated node, and the diffusion process of a topic is independent of the others. Denote by \(\sigma(\textbf{s})\) the number of nodes that become active in at least one of the \(k\) topics after the diffusion process of a seed \(k\)-set **s**, i.e., \[\sigma(\textbf{s})=\mathbb{E}[|\cup_{i\in[k]}\sigma_{i}(S_{i})|] \tag{6}\] where \(\sigma_{i}(S_{i})\) is a random variable representing the set of active users for topic \(i\) with the seed \(S_{i}\). The kIMK problem. The problem is formally defined as follows: **Definition 3** (kIMK problem).: Assume that each user \(e\) has a cost \(c(e)>0\) for every topic \(i\), which reflects how difficult it is to initially influence that individual about the topic. Given a budget \(B>0\), the problem asks to find a seed set **s** with \(c(\textbf{s})=\sum_{e\in S_{i},i\in[k]}c(e)\leq B\) so that \(\sigma(\textbf{s})\) is maximized. ### \(k\)_-type Sensor Placement under Knapsack constraint_ We further study the performance of the algorithms on the _\(k\)-type Sensor Placement under Knapsack constraint_ (kSPK) problem, which is formally defined as follows: **Definition 4** (kSPK problem).: Given \(k\) kinds of sensors for different measures and a set \(V\) of \(n\) locations, each of which can be assigned only one sensor, assume that each sensor \(e\) has a cost \(c(e)>0\) for every type \(i\). Given a budget \(B>0\), the problem aims to place these sensors so as to maximize the information gained while keeping the total cost at most \(B\). Denote by \(R^{i}_{e}\) a random variable representing the observation collected from an \(i\)-type sensor; the information gained by a \(k\)-set **s** is \[f(\textbf{s})=H(\cup_{e\in supp(\textbf{s})}\{R^{i}_{e}\}) \tag{7}\] where \(H\) is an _entropy function_. ### _Results and discussion_ #### V-C1 Experiment settings **For kIMK.** We use the Facebook dataset and set up the model as in the recent work [8]. Since the computation of \(\sigma(\cdot)\) is #P-hard [29], we adapt the sampling method in [8, 30] to obtain an estimate \(\hat{\sigma}(\cdot)\) with a \((\lambda,\delta)\)-approximation guarantee, that is: \[\Pr[(1+\lambda)\sigma(\textbf{s})\geq\hat{\sigma}(\textbf{s})\geq(1-\lambda) \sigma(\textbf{s})]\geq 1-\delta \tag{8}\] That is, \(\hat{\sigma}(\cdot)\) is a \(\lambda\)-estimate of \(\sigma(\cdot)\) with probability at least \(1-\delta\). Following [8, 18], we set the parameters \(\lambda=0.8\), \(\delta=0.2\), \(k=3\) and \(\epsilon=0.1\) to show the trade-off between solution quality and the number of queries. We set \(B\) in \(\{0.5K,1K,1.5K,2K\}\) to reflect that the expense of influencing \(k\) topics via social networks is not small, and set the cost of each element between 1 and 10 according to the Normalized Linear model [18]. **For kSPK.** We use the Intel Lab dataset [28] to illustrate the kSPK problem. The data were preprocessed to remove missing fields. Moreover, we set \(k=3\) and \(\epsilon=0.1\) as in the kIMK experiment, with element costs ranging from 1 to 10 for the Intel Lab dataset, whereas the values of \(B\) are fixed at several points from 10 to 50. 
This setting was due to the small number of sensors and the similarity among the algorithms. \begin{table} \begin{tabular}{l c c c c} \hline **Database** & **\#Nodes** & **\#Edges** & **Types** & **Instances** \\ \hline Facebook [27] & 4039 & 88234 & directed & kIMK \\ Intel Lab sensors[28] & 56 & - & - & kSPK \\ \hline \end{tabular} \end{table} TABLE II: The datasets #### V-C2 Experiment results To provide a comprehensive comparison, we ran the above algorithms several times and collected the objective values, the number of queries, and the running time at each \(B\) milestone. For each milestone, average values are reported. Figures 1 and 2 illustrate the results. **Regarding kIMK.** First, Figure 1(a) shows the performance of the algorithms via the values of the objective function \(\sigma(\cdot)\). RLA is equivalent to DS, followed by RS, while LAA's line is the lowest. In Figure 1(a), the gaps between the RLA-DS group, RS, and LAA become larger when \(B\geq 1.5K\). Second, Figures 1(b) and 1(c) display the number of queries and the time needed to run these algorithms. LAA shows a clear advantage over the others in terms of query complexity: its number of queries is several to dozens of times lower than that of the remaining algorithms. Besides, the number of queries of RLA is equivalent to that of DS and lower than that of RS. Significantly, these curves are deterministic and grow linearly over the \(B\) milestones. Overall, the number of queries of RS is the highest, followed by the RLA-DS group and LAA, respectively. The experiment indicates that our algorithms require fewer queries than the others. As the query complexity directly influences the running time, the time plot in Figure 1(c) looks quite similar to the query plot in Figure 1(b), with LAA's line typically the lowest. The running time of LAA is several to dozens of times shorter than that of the others. RLA runs considerably faster than RS and is on par with DS. The above figures show the trade-off between the solution quality and the query complexity of our proposed algorithms. LAA targets a near-optimal value by dividing the ground set into two subsets according to element costs and reduces the query complexity via the filtering condition of Algorithm 1; hence, its query complexity is significantly low, although its solution quality is not high. RLA enhances LAA by using LAA's output as an input and applying the decreasing constant threshold \(\tau_{v}\); as a result, the objective value of RLA is better than that of LAA, while the number of queries is higher but still deterministic. Moreover, as the ground set and the value of \(B\) grow, the solution quality improves while the running time and query complexity remain linear. This is extremely important when working with big data. **Regarding kSPK.** As can be seen in Figure 2(a), the differences between the objective values of the tested algorithms are not large. LAA and RLA seem to overlap, while DS and RS fluctuate slightly. When \(B\) increases, the gap between these lines becomes larger, with RS the highest, followed by DS and the LAA-RLA group, respectively. However, since the number of nodes is small, the information gained by these algorithms is, on the whole, almost identical. Second, the gap between the number of queries of LAA and the others in Figure 2(b) is significantly large. 
For large values of \(B\), the lines of RLA, DS, and RS tend to converge, with RLA lying above the others, followed by RS and DS, respectively. Regarding Figure 2(c), these lines seem to overlap. Moreover, the query and time curves of the above algorithms are almost horizontal over the \(B\) milestones. This result illustrates that the query complexity of our algorithms is linear and comparable to that of the other algorithms. Across the two practical applications, kIMK and kSPK, our solutions are better than or equivalent to existing ones while requiring fewer queries, especially as \(n\) and \(B\) grow. The stability of the proposed linear deterministic algorithms becomes vital as the data grows. The experiments are consistent with the theory and also indicate the trade-off between the solution quality and the query complexity of our proposed algorithms. Overall, our proposed algorithms outperform or are comparable to the state-of-the-art. Fig. 1: Algorithm results for kIMK on Facebook: (a) The objective values, (b) The number of queries, (c) Time consumption ## VI Conclusion This paper studies the problem of maximizing a \(k\)-submodular function under a knapsack constraint in the non-monotone case. We propose two deterministic algorithms that take just \(O(kn)\) query complexity. The core of our algorithms is to keep the elements whose gain exceeds a suitably chosen threshold and then to choose among them the last added elements so that the total cost does not exceed a given budget \(B>0\). To investigate the performance of our algorithms in practice, we conducted experiments on two applications, Influence Maximization and Sensor Placement. Experimental results show that our algorithms not only return solutions of reasonable quality but also require a smaller number of queries than state-of-the-art algorithms. However, some open questions remain, such as how to improve the approximation ratio or the linear query complexity for the non-monotone kSMK problem, and these will motivate our future work.
2309.03128
Provably Unlinkable Smart Card-based Payments
The most prevalent smart card-based payment method, EMV, currently offers no privacy to its users. Transaction details and the card number are sent in cleartext, enabling the profiling and tracking of cardholders. Since public awareness of privacy issues is growing and legislation, such as GDPR, is emerging, we believe it is necessary to investigate the possibility of making payments anonymous and unlinkable without compromising essential security guarantees and functional properties of EMV. This paper draws attention to trade-offs between functional and privacy requirements in the design of such a protocol. We present the UTX protocol - an enhanced payment protocol satisfying such requirements, and we formally certify key security and privacy properties using techniques based on the applied pi-calculus.
Sergiu Bursuc, Ross Horne, Sjouke Mauw, Semen Yurkov
2023-09-06T16:06:40Z
http://arxiv.org/abs/2309.03128v1
# Provably Unlinkable Smart Card-based Payments ###### Abstract. The most prevalent smart card-based payment method, EMV, currently offers no privacy to its users. Transaction details and the card number are sent in cleartext, enabling the profiling and tracking of cardholders. Since public awareness of privacy issues is growing and legislation, such as GDPR, is emerging, we believe it is necessary to investigate the possibility of making payments anonymous and unlinkable without compromising essential security guarantees and functional properties of EMV. This paper draws attention to trade-offs between functional and privacy requirements in the design of such a protocol. We present the UTX protocol - an enhanced payment protocol satisfying such requirements, and we formally certify key security and privacy properties using techniques based on the applied \(\pi\)-calculus.
While there is no privacy in cleartext EMV, encrypting EMV by running BDH or UBDH as the first step does not help achieve an unlinkable protocol where an active attacker cannot link payment sessions, thereby tracing the cardholder. The fact that EMVCo officially abandoned efforts on EMV 2nd Gen to enhance privacy in 2019 (Cheng et al., 2019) also emphasises the need for a newly designed protocol (called UTX in the table) to meet future privacy demands. To the best of our knowledge, no existing solutions satisfy the basic functional and security requirements of EMV while relying exclusively on the computational resources of a smart card and being unlinkable at the same time. Mobile wallet apps like Apple Pay (Miller et al., 2019) protect the card number from being revealed by replacing it with a permanent Device Account Number (DAN) stored in the device (e.g. the smartphone). The DAN is exposed to an active attacker in the same way the PAN is exposed in a traditional EMV transaction1. At the same time, anonymous credential (AC) schemes (Kang et al., 2019; Wang et al., 2019) are a popular way of establishing unlinkability, e.g. in the context of anonymous access to online services. Some AC schemes have been effectively implemented on smart cards (Kang et al., 2019; Wang et al., 2019). In principle, an AC scheme could be employed to prove the legitimacy of the card to the terminal without revealing any identifying information. However, an EMV-like transaction requires much richer functionality. For example, the parties need to agree on the parameters of the transaction, the terminal may need to verify the user PIN, and the bank needs to check that the payment request comes from a valid interaction with the corresponding card. AC schemes can be augmented with attributes that can be used to encode richer functionality (e.g. attesting that the card is still valid at a certain date). However, such extensions typically rely on zero-knowledge proofs, which we aim to avoid since they would introduce too much overhead for a payment smart card. Furthermore, the design question remains, i.e. how to adapt an AC scheme for use in a larger payment system. In this paper, we demonstrate that a protocol with the desired functional, security and privacy requirements can be designed based on a particular and simple instance of anonymous credentials, namely self-blindable certificates (Wang et al., 2019). We discuss some deployment questions at the end of the paper and argue that our protocol could be implemented with minimal overhead on current smart cards. Footnote 1: However, an additional layer of security is provided in this case since the device should be ready for communication, e.g. unblocked with the proper app running, etc. The main contributions of the paper are as follows. * _A non-trivial threat model._ We build on recent work (Zhou et al., 2019) that explains why active attackers pose a real threat for contactless payments and how an appropriate Dolev-Yao model (Dolev and Yao, 2019) fully accounts for them. A key novelty of our model is that we account for both honest and dishonest terminals, but in very different ways. Attackers impersonating terminals not requiring the PIN are implicitly accounted for in the Dolev-Yao model. 
In contrast, honest terminals requiring the PIN are explicitly represented as processes. * _Requirements for privacy-preserving card payments._ From EMV we extract functional and security requirements. For privacy requirements we extract from the EMV 2nd Gen draft (Cheng et al., 2019) an unlinkability requirement and clarify it with respect to our threat model. * _A new payment protocol._ We design a non-trivial protocol that we argue is feasible to implement since it uses standard components that respect the limited computational resources of the card. The assemblage, however, is unique. We also explain that new demands imposed by the protocol on infrastructure may be handled by software updates for the existing EMV infrastructure. * _A proof that the protocol satisfies our requirements._ Notably, unlinkability is proven directly using state-of-the-art bisimulation techniques and does not make use of tools. Our experiments show that our particular combination of protocol and threat model is not yet in scope of current tools. We begin by presenting a design space where we determine the requirements of an unlinkable payment protocol in Section 2, and draw attention to trade-offs between functional and privacy requirements. We then present an unlinkable payment protocol UTX in Section 3, and provide formal analysis in Section 4. ## 2. Design Space for Unlinkable Transactions In this section, we explore the design space for a privacy-preserving payment protocol. This top-level design space is narrowed down in later sections to guide the design of our proposed protocol. We explain the architecture of a payment system that should be respected, and emphasise the functional, security and privacy requirements. ### EMV infrastructure We present an overview of the payment infrastructure, assumed by the current EMV standard, in Figure 1. The card \(C\) is manufactured by the _issuing bank BC_ in collaboration with the payment system _PaySys_ (e.g. Visa or Amex). The terminal \(T\) is connected to an _acquiring bank BT_ supporting _PaySys_ that processes payments on behalf of the terminal. The acquiring bank processes payments by connecting to the _PaySys_ network that exchanges messages between banks. A successful run of the protocol results in the generation of an _Application Cryptogram_ AC by \(C\). AC is eventually sent by \(T\) to \(BT\), either before or after the payment is approved by the terminal, depending on whether the payment is online or offline, respectively. Figure 1. Payment architecture. 
Functional requirements We consider smart card-based payments, hence we rely only on the computational resources of the smart card and the terminal. Devices like smartphones that can establish direct communication between the card and the bank are excluded from the discussion in this paper. We also prohibit indirect card-bank communication by means of, e.g. synchronised clocks since the card has no long-term power source. The card should use Elliptic-Curve Cryptography (ECC), as already required for the new iteration of the EMV standard (Bock, 2000). Since, currently, the card must be present within the reader's field for at most 500ms (Bock, 2000), computationally-heavy general-purpose zero-knowledge proofs are out of scope. The protocol should support contact and contactless transactions. For the purpose of this analysis we consider the PIN as the only cardholder verification method and the PIN is always required for high-value transactions. Hardware solutions that might help to replace the PIN are beyond the scope of this work. Cards can optionally support offline transactions which carry two risks resulting in the terminal not being paid (when AC is finally processed by the bank): either there is not enough money in the cardholder's account, or the card is blocked, e.g. reported as stolen. If offline transactions are supported, the insurance policy must cover these risks. #### 2.2.2. Security requirements Recall that some configurations of EMV have been shown to be insecure. The primary security goals we extract from good configurations of EMV are the following authentication and secrecy properties. * \(T\) must be sure that the presented card is a legitimate card that was issued by the \(PaySys\) that \(T\) supports and that \(C\) is not expired. * If the bank accepts the transaction, then \(T\), \(C\), and the bank must agree on the transaction. * Keys for message authentication and PIN are secret. Notice that the card does not authenticate the terminal. The reason is, in the philosophy of the EMV standard, that the payment system allows anyone to manufacture terminals. We strengthen these requirements by assuring the card that if the cryptogram is processed, then it is processed by a legitimate bank. In addition to the requirements extracted from EMV above we introduce the additional requirement that the application cryptogram AC must be secret. This is in line with the proposal of secret channel establishment (Bock, 2000), where a session-specific secret channel was introduced to protect all messages between the card and the terminal from eavesdroppers. Currently, the communication between the card and the terminal is in cleartext, and the AC, that contains transaction details, is always exposed. Formal security definitions reflecting these requirements are introduced in Section 4.5 where we present the analysis of our proposal for a protocol. #### 2.2.3. Privacy requirements As mentioned in the introduction and expanded upon next, currently no privacy properties are preserved by EMV. The privacy property we aim for in this paper is _unlinkability_. Unlinkability is standardised in the Common Criteria for Information Technology Security Evaluation ISO 15408 (Bock, 2000), as ensuring that two uses of a card cannot be linked. ISO standard 15408 also covers anonymity. Unlinkability is stronger in the sense that, if two sessions are not anonymous then they can be linked, but the converse does not hold. This explains why unlinkability is a suitable benchmark for privacy. 
For this initial discussion, we give an intuitive scheme for defining unlinkability. A formal definition is presented in Section 4.3, where we prove that the protocol we introduce in later sections satisfies unlinkability. **Scheme 1**.: (unlinkability) _Transactions are unlinkable if an attacker cannot distinguish between a system where a card can participate in multiple transactions and another system where a card can participate in at most one transaction._ Let us reflect on the above scheme. The former system represents a real-world scenario where the card is issued and within its lifespan can participate in several protocol sessions. The latter system is an idealised situation, where cards are disposed of after each transaction and can participate in one payment session at most, hence sessions are trivially unlinkable. Whenever, with respect to all attack strategies, there is no distinction between the two scenarios for a given payment protocol, such a protocol is unlinkable. Guaranteeing this property without compromising the aforementioned security and privacy requirements is our primary challenge. We explain that unlinkability cannot hold in all contexts, if we aim to fulfil also our functional and security requirements. As mentioned above, two sessions that are not anonymous can be linked. Therefore, to achieve unlinkability, certainly any identity unique either to the card or the cardholder must never be revealed to an attacker. We call such identities _strong_ and they include the cardholder's name, the PAN, the card's public key, and any signature on the data specific to the card. On the other hand, even if strong identities were protected, _coarse_ identities, that are common to a group of cards, may enable tracking of groups of cardholders. Coarse identities include the payment system, the validity date, the format of transaction data, and other implementation-specific features. Some coarse identities are inevitably exposed as a consequence of the requirements in Sections 2.2.1, 2.2.2. For instance, the terminal needs to know which payment system the card uses to authenticate the card, and needs to be able to distinguish between valid and expired cards. Other coarse identities include the network traffic response times, which may reveal information about whether the card belongs to a local or foreign bank. Coarse identities can be combined to _fingerprint_ a card. Thus we are obliged to accept that unlinkability can only be achieved up to their fingerprint, that is, we can link two sessions with the same fingerprint only. However, we require that this fingerprint is minimised, thereby limiting the capability of an attacker to perform unauthorised profiling of cardholders and their behaviours. ## 3. The UTX Protocol In this section, we introduce the UTX (Unlinkable Transactions) protocol that satisfies the security and privacy requirements introduced in Section 2.2. We pay particular attention to minimising the fingerprint given by the coarse identities thereby maximising unlinkability. We start by discussing the initialisation phase, then we introduce the message theory representing cryptographic primitives employed in the protocol. We then explain the key distribution between the participants of the protocol. Finally, we thoroughly explain transactions that can either be offline, online, high, or low-value. ### Application selection The card can generally support several payment methods, or, in EMV lingo, _applications_. In Fig. 
2 we schematically show how the terminal currently selects the application. First, the terminal asks the card to send the list of supported applications, then the card provides the list, and the terminal selects one (possibly with the help of the cardholder). Knowing the payment system, the terminal can select the appropriate public key to authenticate the data on the card. Notice that the list of payment applications is a coarse identity of the card even if this list consists of a single application, since it can still be distinguished from other cards. In order to avoid a coarse identity being exposed at this point, we design the protocol such that the card presents a list comprising a single element, Unlinkable. This means that a group of payment systems agree to provide privacy-preserving payments using the name Unlinkable for the respective application. Terminals, thus, should also be upgraded to support Unlinkable in order to accept unlinkable payments, before such cards are rolled out. An alternative is to allow each payment system to provide their own unlinkable application, and to tolerate that the payment system becomes part of the coarse identity of the card. Our analysis covers both choices.

### Keys required to set up Unlinkable

Here we explain who generates and holds keys and signatures involved in the UTX protocol. An authority, who is either a payment system or a delegate acting on behalf of a group of payment systems, produces signatures involved in the protocol using two types of signing keys. Firstly, a secret key \(s\) is used to produce _certificates for banks_, which are kept by the terminal and used by the card to check that the terminal is connected to a legitimate bank. Secondly, a list of secret keys \(\chi_{\mathsf{MM}}\) is maintained, one for each new calendar month. They are used by the authority upon request from the payment system to generate _month certificates_ unique to each card supporting Unlinkable for every month the card is valid. A card valid for five years would store 61 such month certificates, which the terminal checks to be sure that the card is valid at the month of a particular purchase. The public key for checking month certificates is broadcast to terminals from the first of every month. We take care to prohibit an attacker from learning the expiry or the issuing month, which would allow many cards to be distinguished. To do so, we introduce the following pointer mechanism (a short sketch in code is given below). The card maintains a pointer to the most recent month certificate that has been used in response to a legitimate request by the terminal. When the terminal asks the card to show the certificate for a month, the card compares the pointer with the received month. If the received month is greater than what the pointer references, the card advances the pointer to this month and shows the respective certificate. If the received month either coincides with or is one month behind the pointer, the card simply shows the certificate for this month and the pointer remains untouched. Otherwise, i.e. if the requested month is two or more months behind the pointer, the card terminates the session. A terminal cannot request a month in the future, assuming that the public keys for verification are carefully managed, such that they are never released in advance. We allow a window of two months, to allow time for offline terminals to eventually receive the most recent public key for the month.
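The following sketch makes the pointer logic concrete. It is our own illustration rather than part of the UTX specification: months are simplified to integers, the certificates are placeholder strings instead of Verheul signatures, and the check of the bank's certificate that authorises a newer month is omitted.

```python
# Illustrative sketch of the card's month-certificate pointer described above.
# Months are integers; certs[m] stands for the month certificate for month m.

class MonthPointer:
    def __init__(self, certs, issuing_month):
        self.certs = certs              # month -> certificate, loaded at issuance
        self.pointer = issuing_month    # most recent month seen in a valid request

    def show(self, requested_month):
        # In the real protocol a request for a newer month is backed by the
        # bank's signed certificate, which is proof that this month has arrived.
        if requested_month > self.pointer:
            self.pointer = requested_month
        elif requested_month < self.pointer - 1:
            # Two or more months behind the pointer: terminate the session.
            raise RuntimeError("session aborted: stale month requested")
        # An expired card simply holds no certificate for the requested month.
        return self.certs[requested_month]

# A card issued in month 1 carries 61 certificates, for months 0..60.
card = MonthPointer(certs={m: f"cert-{m}" for m in range(61)}, issuing_month=1)
print(card.show(0))   # month prior to issuance: accepted, pointer untouched
print(card.show(2))   # newer month: the pointer advances to 2
```

Whether the two-month window and these pointer updates leak anything about the card is exactly what the analysis in Section 4.3.2 examines.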
Because of this two-month window, a new card valid for 60 months is loaded with 61 month certificates, with a pointer referencing the issuing month. That way a newly issued card cannot be distinguished from cards already in circulation, as it is ready to present the certificate for the month prior to the month in which it was issued. Thus, the only coarse identities revealed are whether the card is outdated or has not been used since the beginning of the month.

Figure 2. Payment System Selection.

### Message theory

We now introduce cryptographic primitives employed by the UTX protocol. Since later in Section 4 we reason about UTX symbolically and assume perfect cryptography, low-level details such as ECC domain parameters are out of scope. In particular, we assume the use of an encryption scheme that guarantees message integrity. Fig. 3 presents the message theory, which consists of the syntax, defining the messages agents can form, and the equational theory \(E\), which axiomatises cryptographic operations. The message theory admits operations for ECC, i.e. multiplication between two field elements (scalars), and multiplication between a scalar and an element of the DH group. Whenever we say that "a message is blinded with a scalar", we mean multiplication by that scalar. Next, we include a standard set of cryptographic operations such as hashing, symmetric key cryptography, \(n\)-tuples, and generic digital signatures. Finally, we introduce the Verheul signature scheme (Verheul, 1997), which is invariant under blinding of the message-signature pair (hence can appear as "new" in each session). This scheme supports ECC and has been demonstrated to work sufficiently fast on smart cards (Kang et al., 2017). We also define several constants employed in UTX. The equational theory \(E\) captures the two types of multiplication and contains conventional destructor functions: decryption, projection, and two versions of signature verification. A digital signature is successfully verified whenever the message corresponds to the message extracted from the signature by applying the appropriate check function. Notice that the last equation ensures that if the function \(\mathrm{vcheck}(\cdot,\cdot)\) is applied to the signature, blinded with some scalar, and the matching Verheul public key, it returns the message, blinded with the same scalar.

### Before running the protocol: the setup

Before describing the protocol we explain how the payment system issues a card in collaboration with the issuing bank, how the acquiring bank joins the payment system, and how the terminal connects to the acquiring bank. In the next section, where we describe the transaction, we collapse the payment system, the issuing bank, and the acquiring bank into a single agent.

#### 3.4.1. Issuing a card

Here we outline how a card could be manufactured involving a signing authority that payment systems could share, as explained in Section 3.2. To issue a card, the payment system generates a new card's private key \(c\), computes the card's public key \(\phi(c,\mathfrak{g})\), and asks the signing authority to generate the following list of month Verheul signatures \(\{\langle\mathsf{MM},\mathsf{vsig}(\phi(c,\mathfrak{g}),\chi_{\mathsf{MM}})\rangle\}_{\mathsf{MM}=0}^{60}\), which it loads to the card together with \(\mathsf{pk}(s)\), \(c\), \(\phi(c,\mathfrak{g})\), PAN, and PIN.
Then the card is sent to the issuing bank together with \(\phi(c,\mathfrak{g})\), PAN, and PIN; the bank generates and loads to the card a new master key \(mk\), and finally sends the card to the user. Since no one should ever have access to \(c\) except the card, we assume the payment system never shares or stores \(c\).

Figure 3. UTX message theory.

#### 3.4.2. The keys used by the terminal to connect to the payment system

To allow an acquiring bank supporting the payment system to process payments in the month \(\mathsf{MM}\), the authority knowing \(s\) issues a certificate of the form \(\langle(\mathsf{MM},\phi(b_{t},\mathfrak{g})),\mathsf{sig}((\mathsf{MM},\phi(b_{t},\mathfrak{g})),s)\rangle\) to each acquiring bank, where \(b_{t}\) is the private key of the bank. In turn, the acquiring bank loads the terminal with both this data and a symmetric key \(kbt\) used for secure communication between the terminal and the bank. The terminal presents the bank's certificate at each run of the protocol. As explained in Section 3.2, the terminal and the bank must update the month key certificate and the month validation key regularly, without being offline for more than two months. First, we explain why the month \(\mathsf{MM}\) is signed. Recall that the card points to the most recent month it has seen. Hence, if the month requested by the terminal is the month pointed to by the card, or the month before, it is safe to reveal that the card is valid for either of these two months. The signature \(\mathsf{sig}((\mathsf{MM},\phi(b_{t},\mathfrak{g})),s)\) containing the month \(\mathsf{MM}\) is required in the situation where the next month is requested, in which case this signature serves as proof to the card that the next month has arrived. This prevents attackers learning whether the card is valid next month, and also avoids the pointer being advanced too quickly, thereby invalidating the card in the current month. Notice that \(\mathsf{vpk}(\chi_{\mathsf{MM}})\) is publicly known for the past few months and could have been transferred by the terminal to the card and used by the card to check whether a request for the next month is valid. However, since checking Verheul signatures is too expensive for the card, we avoid using the keys \(\mathsf{vpk}(\chi_{\mathsf{MM}})\) on the card, and instead only check the certificate \(\langle(\mathsf{MM},\phi(b_{t},\mathfrak{g})),\mathsf{sig}((\mathsf{MM},\phi(b_{t},\mathfrak{g})),s)\rangle\) against the generic \(\mathsf{pk}(s)\) already present in the card, which can employ a more efficient signature scheme since it does not need to support blinding. Second, the bank's certificate enables the card to verify that \(\phi(b_{t},\mathfrak{g})\) is a public key for a legitimate bank connected to the payment system providing Unlinkable, hence it can safely use \(\phi(b_{t},\mathfrak{g})\) to encrypt the application cryptogram at the end of the transaction. This signature helps to avoid the situation when an attacker introduces their own public key and thereby can look inside the cryptogram to gather sensitive information including the PAN. It is efficient to transmit the month and the bank's public key in a single message; however, in principle, the signatures on each could be separate. In this case, to prevent offline guessing attacks, the payment system should introduce certain padding to the small and publicly known constants \(\mathsf{MM}\) representing months. If a bank requires multiple keys, the payment system could produce multiple certificates. A symbolic sketch of how the card checks the bank's certificate is given below.
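The sketch below illustrates, in the symbolic spirit of Fig. 3, how the card uses \(\mathsf{pk}(s)\) to validate the bank's certificate: a signature is treated as an opaque term, and checking it against the matching public key returns the signed message. All concrete names in the sketch are ours and purely illustrative.

```python
# Symbolic sketch (our own illustration) of issuing and checking the bank's
# certificate <(MM, phi(b_t, g)), sig((MM, phi(b_t, g)), s)>.
# Cryptography is idealised: check(sig(m, s), pk(s)) = m and any other
# combination of signature and key fails.

def pk(sk):
    return ("pk", sk)

def sig(msg, sk):
    return ("sig", msg, sk)

def check(signature, public_key):
    tag, msg, sk = signature
    if tag != "sig" or public_key != pk(sk):
        raise ValueError("certificate check failed")
    return msg

# The authority holding s certifies the acquiring bank's public key for month MM.
s = "authority-signing-key"
MM, bank_pub = "month-42", "phi(b_t, g)"
certificate = ((MM, bank_pub), sig((MM, bank_pub), s))

# The card, pre-loaded only with pk(s), validates what the terminal presents,
# and can then trust both the month and the bank's key for the rest of the run.
data, signature = certificate
assert check(signature, pk(s)) == data
month, bank_public_key = data
print(month, bank_public_key)
```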
The secure channel between the bank and the terminal, modelled here as a symmetric key \(kbt\), could be established by other means, which is consistent with EMV, as it is not specified there.

### The UTX transaction

We introduce the online and offline modes of the UTX protocol in Fig. 4. The PIN is asked for in high-value purchases. In the offline mode, the PIN is sent to the card. As the PIN must be transferred to the card, and the card cannot leave the session until the PIN is entered, high-value offline transactions are always performed as a contact payment. In online mode, the PIN is not sent to the card; instead it is sent to the bank together with the application cryptogram. Parts of the protocol involving the PIN check are indicated by dashed lines and annotated as off and on, indicating these two modes of operation. In Fig. 4 the two messages exchanged between the terminal and the bank are either executed during the transaction (online mode) or postponed to the moment when the terminal goes online to upload collected cryptograms and, optionally, to update its bank's certificate (offline mode).

#### 3.5.1. Initialisation

When the card is close enough to the terminal, it is powered up, and the terminal asks which payment methods the card supports by issuing the SELECT command. The card, supporting unlinkable payments, replies with a singleton list containing only Unlinkable, as explained in Section 3.1. The terminal then selects this payment method and sends to the card the ephemeral public key \(\phi(t,\mathfrak{g})\). The card in response sends to the terminal \(\phi(a,\phi(c,\mathfrak{g}))\), which is its public key, blinded with a fresh scalar \(a\). After that the card and the terminal establish the symmetric session key \(k_{c}:=\mathsf{h}(\phi(a\cdot c,\phi(t,\mathfrak{g})))=\mathsf{h}(\phi(t,\phi(a,\phi(c,\mathfrak{g}))))\eqcolon k_{t}\), which they use _to encrypt all further communications_. In Fig. 4, phases of the protocol that are encrypted are represented by a box with a label in the top-left corner indicating the encryption key. A _passive eavesdropper_ who only observes messages is now locked out from the session since it has no access to the derived key. However, an _active attacker_ can choose their own public key and engage in the handshake. We will explain below how active attacks are mitigated. The only information about the card exposed at this point is the fact that the card supports the application Unlinkable.

#### 3.5.2. Validity check

After the secret key is established, the card presents evidence that it is valid. To do so, firstly, the terminal sends to the card \(\langle(\mathsf{MM},\phi(b_{t},\mathfrak{g})),\mathsf{sig}((\mathsf{MM},\phi(b_{t},\mathfrak{g})),s)\rangle\), the current bank's certificate. The card verifies this certificate against the public key \(\mathsf{pk}(s)\), hence believes that this terminal is connected to a legitimate acquiring bank, and that \(\mathsf{MM}\) and \(\phi(b_{t},\mathfrak{g})\) are authentic. Having received this legitimate request to show the month certificate corresponding to \(\mathsf{MM}\), the card updates its pointer, leaves it untouched, or aborts the transaction, as described in Section 3.2. (The session-key derivations used in this and the following phases are illustrated in the sketch below.)
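The sketch below illustrates why the two ends of each Diffie-Hellman exchange derive the same key even though the card's contribution is blinded: blinding commutes with the group operation. It uses a toy multiplicative group in place of the ECC arithmetic required for UTX, so the constants and the group are illustrative only, not a secure instantiation.

```python
# Toy illustration of k_c = h(phi(a*c, T)) = h(phi(t, phi(a, C))) and of the
# card-bank key k_cb, using modular exponentiation instead of ECC scalar
# multiplication.  Not a secure parameter choice; for intuition only.
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus
G = 5                   # toy generator

def phi(scalar, element):            # stands in for ECC scalar multiplication
    return pow(element, scalar, P)

def h(element):                      # key derivation by hashing the shared element
    return hashlib.sha256(str(element).encode()).hexdigest()

c   = secrets.randbelow(P - 2) + 1   # card's long-term secret
t   = secrets.randbelow(P - 2) + 1   # terminal's ephemeral secret
b_t = secrets.randbelow(P - 2) + 1   # acquiring bank's secret
a   = secrets.randbelow(P - 2) + 1   # fresh blinding factor for this session

C, T, B = phi(c, G), phi(t, G), phi(b_t, G)   # public keys
Z2 = phi(a, C)                                # blinded card key sent to the terminal

k_c_card     = h(phi(a * c, T))      # card's view of the card-terminal key
k_c_terminal = h(phi(t, Z2))         # terminal's view of the same key
assert k_c_card == k_c_terminal

k_cb_card = h(phi(a * c, B))         # card's view of the card-bank key
k_cb_bank = h(phi(b_t, Z2))          # bank's view, recomputed from Z2 later on
assert k_cb_card == k_cb_bank
print("session keys agree")
```

The same commutativity is what allows the acquiring bank in Section 3.5.5 to recompute \(k_{cb}\) from the blinded key forwarded by the terminal.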
After the decision about the pointer has been made, the card blinds the appropriate month Verheul signature \(\mathsf{vsig}(\phi(c,\mathfrak{g}),\chi_{\mathsf{MM}})\) with the scalar \(a\), and sends to the terminal the blinded pair \(\langle\phi(a,\phi(c,\mathfrak{g})),\phi(a,\mathsf{vsig}(\phi(c,\mathfrak{g}),\chi_{\mathsf{MM}}))\rangle\). The terminal verifies this blinded message-signature pair against the current month Verheul public key \(\mathsf{vpk}(\chi_{\mathsf{MM}})\) and additionally checks that the first element of the received pair coincides with the card's blinded public key used to establish the session key. This check ensures that the terminal is still communicating with the same card and prevents the construction of fake cards loaded with previously exposed blinded message-signature pairs. Since both elements of the message coming from the card at this stage are freshly blinded, as for the session key, they are distinct in each session; hence the terminal cannot use them to reidentify the card in future sessions by simply requesting the same month. At this point in the protocol the card exposes that it is valid at the month \(\mathsf{MM}\) (since the key \(\mathsf{vpk}(\chi_{\mathsf{MM}})\) fits), which is not a coarse identity of the card, as all other cards that have not yet expired and support unlinkable payments expose the same information.

#### 3.5.3. Cardholder verification (high-value)

In case of a high-value _offline_ transaction, the terminal asks the cardholder to enter the PIN and sends the entered number \(\mathsf{uPIN}\) to the card together with the transaction details. If this input matches the actual card's PIN, the card includes the ok message both in the reply to the terminal and in the cryptogram, to indicate to the issuing bank that the PIN has been successfully verified on the card's side. Otherwise, the card includes the no message in the reply and in the cryptogram, which the terminal has to send to the bank anyway to log failed attempts to enter the PIN for auditing purposes. In case of a high-value _online_ transaction, the terminal also asks the cardholder to enter the PIN, but instead keeps it and sends it to the acquiring bank together with the cryptogram.

#### 3.5.4. Cryptogram generation

The terminal sends to the card the transaction details \(\mathsf{TX}^{\prime}\), comprising the currency, the amount, and the date; and either \(\bot\) (when the transaction is low-value), or the entered \(\mathsf{uPIN}\) (when the transaction is high-value offline). The card computes \(k_{cb}:=\mathsf{h}(\phi(a\cdot c,\phi(b_{t},\mathfrak{g})))\), which serves as a symmetric session key between the card and the acquiring bank for this transaction only. Then the card generates one of the cryptograms.

* \(\mathsf{AC}^{\bot}\coloneqq(a,\mathsf{PAN},\mathsf{TX})\) if no \(\mathsf{uPIN}\) has been received.
* \(\mathsf{AC}^{\mathsf{ok}}\coloneqq(a,\mathsf{PAN},\mathsf{TX},\mathsf{ok})\) if the received \(\mathsf{uPIN}\) is correct.
* \(\mathsf{AC}^{\mathsf{no}}\coloneqq(a,\mathsf{PAN},\mathsf{TX},\mathsf{no})\) otherwise.

Finally, the card uses the master key \(mk\) that has already been shared between the card and the issuing bank to compute a hash-based message authentication code of the form \(\mathsf{h}(\langle\mathsf{AC},mk\rangle)\), and replies respectively with one of the following messages to the terminal.
* \(\langle\{\langle\mathsf{AC}^{\bot},\mathsf{h}(\langle\mathsf{AC}^{\bot},mk\rangle)\rangle\}_{k_{cb}},\bot,\mathsf{TX}\rangle\)
* \(\langle\{\langle\mathsf{AC}^{\mathsf{ok}},\mathsf{h}(\langle\mathsf{AC}^{\mathsf{ok}},mk\rangle)\rangle\}_{k_{cb}},\mathsf{ok},\mathsf{TX}\rangle\)
* \(\langle\{\langle\mathsf{AC}^{\mathsf{no}},\mathsf{h}(\langle\mathsf{AC}^{\mathsf{no}},mk\rangle)\rangle\}_{k_{cb}},\mathsf{no},\mathsf{TX}\rangle\)

Figure 4. The UTX protocol. Offline and online high-value modes are annotated as off and on respectively.

Each of these messages corresponds to one of the cryptograms described and contains, in addition, an indication of whether the PIN was successfully verified by the card (the ok entry) or the PIN verification has failed (the no entry); this extra tag is needed because the terminal cannot open the cryptogram encrypted for the acquiring bank. Notice that the card includes the nonce \(a\) in each of the cryptograms to make it unique per session. The fact that the same \(a\) is used for blinding the card's public key at the initialisation step allows the bank to strongly connect the cryptogram to the current session, thereby avoiding the cryptogram being replayed in other sessions. Although a trusted terminal is already assured that a valid card generated the cryptogram in the current session, it is beneficial for the bank to also check this. This is because the bank may not fully trust the terminal to be implemented correctly, in which case, if the terminal fails to authenticate the card properly as described in Section 3.5.2, the terminal cannot be reimbursed for a cryptogram that was generated by an honest card in another session and replayed in a session with an unauthenticated device posing as a card. Therefore UTX ensures recent aliveness of the card from the perspective of the bank even in the presence of compromised terminals.
* It checks that the transaction details \(\mathsf{TX}^{\prime}\) received from the terminal match the transaction details from the cryptogram: \(\mathsf{TX}^{\prime}=\mathsf{TX}\) * It checks that the blinding factor \(a\) from the cryptogram multiplied by the card's public key \(\phi(c,\mathfrak{g})\) matches the blinded public key \(\mathsf{Z}_{2}\) received from the terminal: \(\phi(a,\phi(c,\mathfrak{g}))=\mathsf{Z}_{2}\). * It checks the transaction history of the card and ensures that the received \(a\) has not been used for an identical transaction, hence preventing a replay of the cryptogram. This replaces the transaction counter ATC from the EMV standard. * If the transaction value is high, the bank checks if the ok tag is present in the cryptogram and proceeds with the reply, otherwise, if the ok tag is not present, the bank checks if the received uPIN matches the card's PIN: uPIN = PIN and proceeds with the reply. If the above is successful, the terminal receives the reply message (\(\mathsf{TX}\), \(\mathsf{accept}\)) encrypted with \(kbt\). Notice that in UTX the payment system still uses the PAN to route payments between acquiring and issuing banks, however, it is now hidden from the terminal in contrast to the current EMV standard, where it is exposed. The main changes to the infrastructure to roll out UTX are as follows. The acquiring bank requires a key for decrypting the cryptogram. The issuing bank is required to ensure itself that the nonce from the cryptogram is tied to the legitimate card-terminal session. In addition a substantial update is needed for public key infrastructure explained in Sections 3.2, 3.4.1, and 3.4.2. ## 4. Unlinkability and security analysis We specify and verify our proposed protocol in a variant of the applied \(\pi\)-calculus (Cockock, 2015). In the formulation of the property of transaction unlinkability, we employ _quasi-open bisimilarity_(Kolmogorov, 1957) - an equivalence notion that is preserved in all contexts and captures an attacker capable of making dynamic decisions - and its corresponding labelled transition system. For the properties that constitute payment security, we rely on the ProVerif tool and its notion of _correspondence assertions_(Kolmogorov, 1957; Kolmogorov, 1957). We focus the analysis on the core component of our protocol, modelling its key agreement and transaction authorisation steps. We omit the application selection step as it involves only constant messages that are the same for all sessions. ### Attacker model The attacker model we use for verification of the UTX protocol is a Dolev-Yao attacker (Dolev and Yao, 1990) who controls the communications between the card, the terminal, and the bank. Such attackers can intercept, block, modify, and inject messages. In the presence of contactless payments, the Dolev-Yao attacker is particularly relevant since, within a range of 100cm, an attacker can power up the card and interact with it (Dolev and Yao, 1990), explaining why we insist on this attacker model when verifying our protocol. The connection between the terminal and the bank is not necessarily secure and an attacker could manipulate this connection, e.g. cutting it and forcing the terminal to go offline. We assume that cardholders only enter their PIN into honest terminals. In other words, the cardholder uses terminals at reputable points of sale in the process of a conscious purchase and never enters their PIN into random terminals that pop up on the street. 
The properties of unlinkability and PIN secrecy are immediately compromised if the PIN is entered into a malicious terminal which reveals the PIN to attackers. If an attacker possesses a PIN, clearly the card can be stolen and then used for high-value purchases for which the PIN is required. While theft may be mitigated by cancelling cards, an attacker knowing the PIN may authorise high-value purchases by relaying the messages between an honest terminal and an honest card (Dolev and Yao, 1990), making it difficult for the cardholder to dispute the transaction, as legally a cardholder is always held liable for transactions authorised by a PIN; hence the primary goal of the security of the money in the account would be compromised. Supposing that relay attacks were mitigated, an attacker knowing the PIN may still attack unlinkability as follows. For high-value transactions, it becomes possible for a terminal that remembers the PIN to track cards by the fact that the same PIN is used. Moreover, even in a low-value contact scenario not requiring the PIN, the PIN can nonetheless be used to track specific individuals, since a terminal that remembers PINs can run a fake session with a high-value amount, requiring the PIN to be sent from the terminal to the card, in order to check whether it has already seen this card before processing the legitimate low-value transaction. In contrast to the above, if an attacker is physically unable to perform contact transactions, low-value contactless payments are unlinkable even if the PIN is compromised. We analyse this case separately in Appendix C. There are other attacker models. We could have verified with respect to a weaker distant attacker that operates within a distance of 100cm to 20m from the card and can only eavesdrop on communications (Zhou et al., 2017; Wang et al., 2018). This attacker would have been sufficient to establish privacy for the proposal already considered by EMVCo establishing a channel to encrypt regular EMV transactions (Beng et al., 2017). Other attackers may attempt side-channel attacks by measuring the execution time of cryptographic operations, or the response time from the bank, which are out of the scope of our analysis.

### Formal specification of the protocol

We use the applied \(\pi\)-calculus language (Han et al., 2017) to specify the formal model of the UTX protocol, where all cards are synchronised to execute within the same month \(\mathsf{MM}\). In essence, in this formalism we have processes that can communicate by sending and receiving messages using channels. We write \(\overline{ch}\langle M\rangle\) and \(ch(x)\) for sending the message \(M\) or receiving the input \(x\) on the channel \(ch\), respectively. A process can also generate private values (used e.g. for fresh secret keys and nonces), written as \(\nu a\), be replicated using the \(!\) operator (allowing an unbounded number of its instances to execute), and run in parallel with other processes using the \(\mid\) operator. In Fig. 5 we have three processes that model the execution of a session of our protocol by the three roles in the UTX protocol: the terminal, the card, and the bank. Events, marked with ev:, will be used in the security analysis and can be ignored until Section 4.5. Fig. 6 specifies the top-level process that expresses how these processes are assembled and instantiated across multiple payment sessions in a full execution of the protocol.

#### 4.2.1. The card process

\(C\), described in Fig. 5(a), represents the execution of a payment session by a card.
\[\nu ch.\overline{card}\langle ch\rangle.C(ch,c,pk_{s},\mathsf{vsig}_{\mathsf{MM}},\mathsf{PAN},mk,\mathsf{PIN})\]

It is parameterised by the session channel \(ch\), the card's secret key \(c\), the system-wide public key \(pk_{s}\) used to check the bank's certificate crt received from the terminal, the signature \(\mathsf{vsig}_{\mathsf{MM}}\) on the card's public key for the current month (considering the currently valid month only simplifies the initial analysis), the card number PAN, and the PIN. First, the card establishes a key with the terminal, then checks the certificate of the terminal and sends back its own month certificate (comprising its public key and the corresponding Verheul signature) blinded with the scalar \(a\) used in the shared key establishment. Using the data provided in the terminal's certificate, the card also generates \(k_{cb}\), which is a fresh symmetric key to be used by the card to communicate securely with the bank (the terminal cannot obtain this key). Upon receiving the transaction details, the card decides as follows: if no PIN has been provided or the corresponding PIN matches its own PIN, the card accepts the transaction and replies with the corresponding cryptogram. Otherwise, the rejection cryptogram \(\mathsf{AC}^{\mathsf{no}}\) is generated and sent as the reply to the terminal.

#### 4.2.2. The terminal process

The modes in which a terminal can operate are combined in a role \(T\) defined as follows.

\[\nu ch.\overline{term}\langle ch\rangle.T(user,ch,pk_{\mathsf{MM}},\mathsf{crt},kbt)\]

\(T\) is parametrised by the secret channel \(user\) used to enter the PIN, the session channel \(ch\), the public key \(pk_{\mathsf{MM}}\) used for verifying the card certificate for the given month, and the shared secret key \(kbt\) between the terminal and the bank. To incorporate various operation modes for the terminal, the terminal process \(T\) is built from three types of processes: the process for online high-value transactions \(T_{\mathsf{onhi}}\), for offline high-value transactions \(T_{\mathsf{offhi}}\), and for low-value transactions \(T_{\mathsf{lo}}\). Initially, each terminal proceeds with the key establishment phase with the card, sends its certificate, and checks the received month certificate. High-value terminals rely on the PIN entered by the cardholder to perform transaction authorisation. To represent the different types of transactions that can occur, we have constants \(\mathsf{lo}\) and \(\mathsf{hi}\) for low-value and high-value transactions respectively. The online high-value terminal process \(T_{\mathsf{onhi}}\) is given in Fig. 5(b). Since the transaction is high-value, the PIN is required, and after the initialisation the user enters the PIN using the private channel \(user\), which models that the PIN can only be entered into honest terminals. Then the terminal sends the transaction details to the card, receives the application cryptogram in the response, and sends it to the bank together with the entered PIN. Since we are in the online mode, the terminal authorises the transaction only after receiving confirmation from the bank. In contrast, offline terminals authorise transactions right after receiving the reply from the card. The offline high-value and low-value modes are similar, and their specifications appear in Appendix B. The offline high-value mode requires the terminal to send the entered PIN to the card, since only the card can verify the PIN if the terminal is offline.
Terminals operating in this mode accept transactions only if the ok reply has been received from the card; however, regardless of the outcome, the cryptogram is always sent to the bank eventually. Low-value transactions are PINless, hence the corresponding role specification \(T_{\mathsf{lo}}\) does not require that online and offline modes be distinguished.

#### 4.2.3. The bank process

\(B\), specified in Fig. 5(c), which connects to a terminal session identified by the shared key \(kbt\), is represented as follows.

\[\nu ch.\overline{bank}\langle ch\rangle.B(ch,si,kbt,b_{t})\]

In addition to \(kbt\), its parameters are the session channel \(ch\), the system-wide channel \(si\) that is used by the payment system to access the card database, and the bank's secret key \(b_{t}\). We model each entry inserted into the card database using the instruction \(!\overline{\langle si,\mathsf{PAN}\rangle}\langle\langle\mathsf{PIN},mk,\phi(c,\mathfrak{g})\rangle\rangle\), and the corresponding entry can be read by receiving a message on the channel consisting of the pair \(\langle si,\mathsf{PAN}\rangle\), where the first component of the channel keeps the database private to the bank and the second component indicates the entry to look up. After receiving a transaction request from a terminal, the bank derives the symmetric key with the card \(k_{bc}\), obtains the PAN from the cryptogram, and obtains the card's PIN, its master key \(mk\), and the public key \(\phi(c,\mathfrak{g})\) from the database channel \(si\). The integrity of the cryptogram is then checked against the corresponding information from the database, taking into account the verification of the PIN if the transaction is high-value. If all the checks succeed, the transaction is accepted, otherwise it is not; and in all cases a confirmation message is sent in reply to the terminal.

#### 4.2.4. The full protocol

To complete the specification, in Fig. 6 we present the full system, which operates as follows. At the start, the system-wide parameters are generated, and public data that includes the system public key \(\mathsf{pk}(s)\) and the month public key \(\mathsf{vpk}(\chi_{\mathsf{MM}})\) is announced on the public channel \(out\). A new card is issued by the generation of the card-specific parameters PIN, \(mk\), \(c\), and PAN, and can participate in many sessions, hence the replication operator "!" in its definition. Notice that together with the card the system has a \(\overline{user}\langle\mathsf{PIN}\rangle\) process that models the user entering the PIN into a terminal on the channel \(user\) known only to the terminals, and the process \(\overline{\langle si,\mathsf{PAN}\rangle}\langle\langle\mathsf{PIN},mk,\phi(c,\mathfrak{g})\rangle\rangle\) that models the entry in the card database that the bank can access to get the card's data. The bottom part of the figure specifies the back end of the system, i.e. the banks and the terminals. There is a system-wide secret key of the bank \(b_{t}\) and a session-wise (hence the replication) symmetric key \(kbt\) between the bank and the terminal. Notice also that we are using public session channels \(ch\) to give an attacker the power to observe which agents are communicating.

#### 4.2.5. The Dolev-Yao model accounts for malicious terminals

Terminals operated by attackers should be accounted for in our threat model, since, consistent with EMV, terminals are not authenticated by the card and hence can be implemented and operated by anyone.
In our model, indeed, an attacker can impersonate a terminal, either up until the point when the PIN is requested, or, in modes where the PIN is never requested, proceed to obtain the encrypted application cryptogram produced by the card. To operate as a terminal, an attacker only needs the bank's certificate \(\langle(\mathsf{MM},\phi(b_{t},\mathfrak{g})),\mathsf{sig}((\mathsf{MM},\phi(b_{t},\mathfrak{g})),s)\rangle\), which is straightforward to obtain since an honest terminal gives away this certificate to anyone it communicates with. Indeed, a fake card can be used to obtain new monthly certificates even if authorities only distribute them to honest terminals. Such a fake card would first engage in a Diffie-Hellman handshake with an honest terminal, which establishes a channel on which an attacker can receive the certificate currently loaded into the terminal. No knowledge of any private key is required to implement such fake cards. This viable threat is accounted for in the proofs of the unlinkability theorems in the next section.

### Unlinkability definition and analysis

In this section, we clarify the informal definition of unlinkability given by Scheme 1 presented in Section 2.2.3 and formally prove that UTX is unlinkable. We also present some variations on the unlinkability problem that show that unlinkability still holds even if certain marginal coarse identities are tolerated.

#### 4.3.1. The formal definition of unlinkability

Recall that the core of the unlinkability scheme is the equivalence between the idealised and the real-world system. We define both in Fig. 6. Notice that in the system \(\textit{UTX}_{\text{impl}}\), defining the real-world scenario, the card with the private key \(c\) can participate in any number of sessions, while in the system \(\textit{UTX}_{\text{spec}}\), defining the idealised situation, the card can participate in at most one session. The possibility of entering the PIN arbitrarily many times, given by the process \(!\overline{user}\langle\text{PIN}\rangle\), and of accessing the database in arbitrarily many bank-terminal sessions, given by the process \(!\overline{\langle si,\text{PAN}\rangle}\langle\langle\text{PIN},mk,\phi(c,\mathfrak{g})\rangle\rangle\), remains the same for both the real and the idealised worlds.

Figure 6. Specifications for the real UTX protocol and its ideal unlinkable version.

We are ready now to give the unlinkability definition.

Definition 1 (unlinkability). _We say that the payments are unlinkable if \(\textit{UTX}_{\text{impl}}\sim\textit{UTX}_{\text{spec}}\), where \(\sim\) is quasi-open bisimilarity._

There is a difference with the definition of unlinkability for key establishment considered in (Krishna et al., 2019) (provided in Appendix A), where the terminal and the bank are deliberately omitted. The reason is that the key establishment in isolation, i.e. the UTX protocol up to the _Cardholder verification_ phase, requires no shared secret between the parties, yet to execute, for instance, a full high-value transaction, at least the PIN is required to be shared between all three parties involved in the protocol. In addition, to validate a transaction there is a secret \(mk\) shared between the bank and the card, meaning that, even if only transactions without the PIN are modelled, the bank and the card must be explicitly modelled in a transaction. Finally, we are ready to formulate our first result.
Theorem 1. _\(\mathit{UTX}_{\mathit{impl}}\sim\mathit{UTX}_{\mathit{spec}}\)._

The detailed proof of Theorem 1 is given in Appendix B; however, we give a proof sketch here. The key is to give a relation \(\mathfrak{R}\) between processes representing states of the two worlds, demonstrating that an attacker has no strategy that distinguishes between these two worlds. We form such a relation by pairing the appropriate states and checking that it satisfies the conditions for a quasi-open bisimulation. We pair the states based on the number of sessions _started_ with terminals, cards, and banks and the respective stages of each session; and we ignore the number of exhausted processes that model entering the PIN and accessing the database for the card's details. Then we check that each possible transition that either world can make can be matched by the opposing world; that the resulting states are related by \(\mathfrak{R}\); that any two related states are statically equivalent, i.e. indistinguishable by an attacker who can only observe which messages are on the network in that state; and finally, that \(\mathfrak{R}\) is _open_, i.e. there is no way for an attacker to distinguish between the two worlds by manipulating free variables.

#### 4.3.2. Unlinkability in the face of coarse identities

Below we justify the observation made in Section 2.2.3, where we pointed out that unlinkability can only be achieved up to the fingerprint comprising the coarse identities of the card being revealed. We explain how such coarse identities of the card can exist in the system without compromising unlinkability.

_Signing authority._ We demonstrate that UTX is unlinkable even if an attacker can distinguish two cards that use different signing authorities. To do so, we exploit the fact that quasi-open bisimilarity is a congruence (Steiner, 2000), i.e. when a _smaller system_ satisfies unlinkability, then a _larger system_ containing the smaller one as a subsystem also satisfies unlinkability, since process equivalence is preserved in any context. A context is a process "with a hole", such as \(\mathcal{O}(\cdot)\coloneqq\,!(\cdot)\). Notice that by putting \(\mathit{UTX}_{\mathit{impl}}\) into \(\mathcal{O}\) we obtain a system with multiple signing authorities. Similarly, putting \(\mathit{UTX}_{\mathit{spec}}\) into \(\mathcal{O}\) results in an ideal world in which each card still engages in at most one session, but may use different signing authorities. Now, since quasi-open bisimilarity is a congruence, the following holds.

Corollary 1. _\(!\mathit{UTX}_{\mathit{impl}}\sim\,!\mathit{UTX}_{\mathit{spec}}\), i.e. UTX is unlinkable even in the presence of multiple signing authorities._

The above means that unlinkability holds for systems with multiple signing authorities as long as we tolerate that coarse identity. That is, we permit a coarse identity, the signing authority, to exist in the system, as represented by building multiple authorities into the ideal world \(!\mathit{UTX}_{\mathit{spec}}\), without compromising unlinkability. In particular, Corollary 1 concerns the degree of unlinkability that can be established in a deployment scenario where multiple payment systems might not agree to provide a common application for unlinkable payments, as discussed in Section 3.1, and therefore these different payment systems form a coarse identity of the card.
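Stepping back to Definition 1, the intuition behind these results can be conveyed by a toy experiment: since every session re-blinds the card's public key with a fresh scalar, an eavesdropper of the real world (one card reused across sessions) and of the ideal world (a fresh card per session) observes the same kind of trace. The sketch below, again over a toy multiplicative group rather than ECC, is only an illustration of this intuition and is no substitute for the bisimulation proof.

```python
# Toy illustration of the real vs. ideal worlds of Definition 1.  In both
# worlds each session outputs a freshly blinded public key, so the traces
# give an eavesdropper no obvious handle for linking sessions to one card.
import secrets

P, G = 2**127 - 1, 5

def phi(scalar, element):
    return pow(element, scalar, P)

def real_world_trace(sessions):
    c = secrets.randbelow(P - 2) + 1                   # one long-lived card
    return [phi(secrets.randbelow(P - 2) + 1, phi(c, G)) for _ in range(sessions)]

def ideal_world_trace(sessions):
    return [phi(secrets.randbelow(P - 2) + 1,          # fresh blinding of ...
                phi(secrets.randbelow(P - 2) + 1, G))  # ... a fresh card each time
            for _ in range(sessions)]

# Both traces are lists of group elements that look alike to an observer;
# the symbolic proof of Theorem 1 makes this indistinguishability precise.
print(real_world_trace(3))
print(ideal_world_trace(3))
```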
_The card has been used recently._ To clarify that the existence of cards valid for several months does not invalidate unlinkability, we consider a model of unlinkability where cards can respond to two months at any moment. Furthermore, this model admits transitions from one month to the next, maintaining a pointer as described in Section 3.2. To reflect such behaviour of cards, we build into the definition of a process modelling a card the ability to respond to two months at any time and, whenever the new month is requested, to invalidate the older of the two months. Notice that this requires a card to carry state, i.e. to remember that it should respond only to the most recent two months and never respond to older months if asked. In Appendix D we show how to employ recursion to model such behaviour and prove the following.

Theorem 2. _\(\mathit{UTXMM}_{\mathit{impl}}\sim\mathit{UTXMM}_{\mathit{spec}}\)._

In the above, \(\mathit{UTXMM}_{\mathit{impl}}\) and \(\mathit{UTXMM}_{\mathit{spec}}\) define the real and the ideal worlds in an enhanced model. The ideal world models an infinite supply of cards that are used only once, and in that single session may either respond to the two most recent months, or the three most recent months (the latter modelling the tolerance of cards that are one month behind and can still be updated to the current month). Therefore, the coarse identity of whether or not the card has already been used in a session with an up-to-date terminal in the current month can also exist in the system without compromising unlinkability. There is an additional assumption made in this model, specifically, that we do not verify the unlinkability of cards which have not been used at all in the previous month. The coarse identity of having used the card in the past month, but not in the current month, barely gives any identifying information away at all. However, a card that has not been used for over two months is relatively easy to identify among a pool of cards that are used in a normal, more frequent, manner, since it may be tracked with high probability by observing whether it responds rather than blocks when presented with a two-month-old certificate.

### Related methods for proving unlinkability

In the proofs of Theorems 1 and 2 establishing the unlinkability of UTX, we have constructed and checked by hand a bisimulation between the real and the ideal worlds of the respective models of the protocol. In this section we address the question of whether current tools can be used to confidently reach the same conclusion. Below we discuss existing tools and perform a small case study involving the full UTX protocol and the related, strictly simpler, key agreement phase defined in earlier work (Tamarino et al., 2019). The widely-used tools such as Tamarin (Tamarino et al., 2019) and ProVerif (Tamarino et al., 2019) offer limited support for bisimilarity checking - they can verify so-called diff-equivalence, i.e., equivalence between two processes that differ only in the messages exchanged. Definition 1 does not fall into the diff-equivalence category since the \(\mathit{UTX}_{\mathit{impl}}\) and \(\mathit{UTX}_{\mathit{spec}}\) processes have different structures. Hence we rule out the use of Tamarin. ProVerif, however, makes an attempt to represent the equivalence problem for arbitrary processes as a diff-equivalence problem, thus it can be considered as a candidate for verifying the unlinkability of UTX.
Moreover, recently, two ProVerif-related tools have been introduced with the aim of improving the verification of equivalence-based properties. In this work, we call them schematically T1 and T2. The T1 tool (Tamarino et al., 2019) transforms an existing ProVerif model into another model that ProVerif is more likely to verify. The T2 tool (Tamarino et al., 2019), which improves on Ukano (Ukano, 2019), is essentially a new version of ProVerif that may verify observational equivalence (a.k.a. early bisimilarity) between two arbitrary processes, lifting the restrictions of diff-equivalence. Neither ProVerif nor T1 nor T2 has been able to verify the unlinkability of the single-month model of UTX. This means that, for the time being, our manual proof of Theorem 1 is justified. However, to assess the reliability of T1 and T2, and out of curiosity, we performed the following test. Firstly, we attempted to verify the BDH key agreement protocol (Boh et al., 2017), which has already been proven to be linkable (Zhu et al., 2018). Secondly, we asked the tools to verify the UBDH protocol, which roughly corresponds to the _Initialisation_ followed by the _Validity check_ phases of UTX and has already been proven to be unlinkable (Zhu et al., 2018). For comparison, we also include the results delivered by basic ProVerif. We have verified two different models of both protocols - with and without terminals. The presence of the terminals should not affect the verification results because in BDH and UBDH, cards and terminals share no common secret, as explained in Section 4.3.1. The verification results are presented in the table below, where a dedicated mark stands for the _cannot be proved_ verdict, and \(\checkmark\) and \(\times\) indicate whether the verdict is correct or not, given that the correct results for the protocols tested are known from related work. We can immediately see that T1 is unreliable for the protocols we consider - it claims that BDH (with and without terminals) is unlinkable, which is not true. Thus, the verification results for both versions of UBDH in the T1 column would require further examination before they can be trusted. In contrast, T2 does not make incorrect claims for the protocols tested, e.g. in approximately 17 hours the tool concludes that observational equivalence cannot be proved for the full BDH protocol, and within 4 hours it was able to prove that the full UBDH model is unlinkable. Interestingly, ProVerif was able to verify only the restricted version of UBDH, which highlights the importance of compositionality, since it might be the case that a tool can only prove the property when a smaller subsystem is in a form the tool can handle. The final row shows that the tools freeze either without output or with an error when fed UTX. Based on this observation we hypothesise that it is the structure of UTX that the tools cannot deal with, rather than a scalability issue, which justifies the claim we made in the introduction. At the same time, no tool in the table is able to discover the known simple attack on BDH, which requires only two sessions with the same card. This holds even if we restrict the tools so they consider exactly two sessions instead of using replication, in which case the outcome is identical to the first line of the table, differing only in running time. The repository (Boh et al., 2017) contains files corresponding to each cell of the table.
We should mention here that the _noname_ tool (Zhu et al., 2018), implementing a promising parallel approach to modelling privacy called _alpha-beta privacy_, can discover the simple attack on BDH that eludes equivalence checkers.

### Authentication in UTX

Our security definition supporting the requirements identified in Section 2.2.2 relies on an authentication property called _injective agreement_ (Zhu et al., 2018). A party X injectively agrees with the parties Y and Z whenever the following holds: if X thinks it has authenticated Y and Z, then Y and Z executed the protocol exchanging the same messages as X (_agreement_), and each run of X corresponds to a unique run of Y and Z (_injectivity_). To verify injective agreement in UTX, we have included events in the role specifications in Fig. 5, marking certain stages reached by processes during the execution of the protocol, and then evaluate correspondence assertions (Boh et al., 2018) between the events listed in Fig. 7 using the ProVerif tool (Boh et al., 2018). Appendix E contains further details regarding the use of ProVerif. The correspondences in Fig. 7 are the following.

* The terminal agrees with the card (before contacting the bank): \(\mathsf{TComC}(z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\Rightarrow\mathsf{CRun}(z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\).
* The terminal agrees with the bank and the card: \(\mathsf{TComBC}(\mathit{req},\mathit{resp},z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\Rightarrow\mathsf{BRunT}(\mathit{req},\mathit{resp})\land\mathsf{CRun}(z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\).
* The bank agrees with the terminal and the card: \(\mathsf{BComTC}(\mathit{req})\Rightarrow\mathsf{TRunBC}(\mathit{req},z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\land\mathsf{CRun}(z_{1},z_{2},\mathit{ec},\mathit{emc},\mathit{etx},\mathit{eac})\).
* The bank agrees with the card on the encrypted cryptogram: \(\mathsf{BComC}(\mathsf{EAC})\Rightarrow\mathsf{CRunB}(\mathsf{EAC})\).

The events in Fig. 7 are parametrised by the messages the card, the terminal, and the bank exchange, i.e. \(z_{1}\) and \(z_{2}\) stand for the terminal's ephemeral key and the blinded card's public key; _ec_, _emc_, _etx_, _eac_ represent the messages the card exchanges with the terminal; and _req_, _resp_ the messages the terminal exchanges with the bank. Finally, EAC represents the encrypted cryptogram \(\{\langle\mathsf{AC},\mathsf{AC}_{\mathit{hmac}}\rangle\}_{k_{cb}}\). The first three assertions in Fig. 7 are straightforward - whenever the terminal or the bank thinks it has executed the session with the rest of the agents, they have exchanged the same messages, thereby agreeing on crucial data such as the derived keys, the transaction details, the cryptogram, etc. The last assertion, representing the agreement between the bank and the card, ensures that an honest card was involved in a low-value contactless payment even if terminals are fully compromised. In this scenario the terminals can be omitted in the specification, as explained in the related work (Zhu et al., 2018).

_Security under compromised terminals._ Using ProVerif, we support the point made at the end of Section 3.5.4 that even if a terminal neglects to perform the checks required to authenticate the card, the bank is still ensured that a valid card is executing a transaction. To model that, we remove the Verheul signature verification in the terminal's process. In that case, the first property, that the terminal authenticates the card, fails as expected, while the others are preserved.
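To convey what injective agreement demands operationally, the following toy checker (our own illustration; it is unrelated to how ProVerif establishes the correspondences) takes the commit events of one role and the run events of another and verifies that every commit is matched by a distinct run carrying identical parameters.

```python
# Toy illustration of injective agreement over an event trace: every commit
# event (e.g. TComC) must be justified by a distinct run event (e.g. CRun)
# with exactly the same parameters.

def injective_agreement(commits, runs):
    available = list(runs)
    for commit in commits:
        if commit in available:
            available.remove(commit)   # each run may justify at most one commit
        else:
            return False               # unmatched or replayed commit
    return True

# One commit matched by one run with identical parameters: agreement holds.
ok = injective_agreement(
    commits=[("z1", "z2", "eac#1")],
    runs=[("z1", "z2", "eac#1")])

# The same run "justifying" two commits models a replayed cryptogram: it fails.
replay = injective_agreement(
    commits=[("z1", "z2", "eac#1"), ("z1", "z2", "eac#1")],
    runs=[("z1", "z2", "eac#1")])

print(ok, replay)   # True False
```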
Figure 7. Injective agreement correspondences in UTX.

_Security under compromised \(\chi_{\mathsf{MM}}\)._ Another scenario in which the terminal accepts a potentially fake card is when the key \(\chi_{\mathsf{MM}}\) is leaked, allowing attackers to manufacture cards passing the terminal's check by producing valid Verheul signatures. The verification outcome in this case is similar - the terminal-card agreement fails, making offline transactions insecure, while online transactions are still safe, i.e. the injective agreement involving the bank holds. Therefore, the payment system should notify terminal owners to stop accepting offline payments if \(\chi_{\mathsf{MM}}\) has been compromised. The repository (Bartos et al., 2017) contains the code specifying the injective agreement in the UTX protocol and the expected secrecy of the private data. All properties are successfully verified within 100 minutes. The code verifying the additional scenarios described above is provided in the directory _compromised_.

#### 4.5.1. Remark on replay protection

We explain here a small difference, concerning replay protection, between the models used to verify unlinkability and authentication. Recall that replay protection is enforced by the bank checking the uniqueness of the triple (PAN, TX, \(a\)), as specified in Section 3.5.4 and in Fig. 4. This check is an essential ingredient for authentication, without which authentication could not be verified. Hence replay protection is accounted for in the threat model used in Section 4.5, specifically in line 165 of the ProVerif model. In contrast, the threat model we use for unlinkability simplifies this aspect by allowing terminals to replay cryptograms, that is, the bank skips the uniqueness check. This does not introduce problems, for the following two reasons. Firstly, the bank in the UTX protocol sends no message intended for the card, hence there is no way for the terminal to probe cards with such a message in an attempt to track them. Secondly, an observable auth output by the terminal that reveals whether the cryptogram was accepted by the bank introduces no issues regardless of the presence of the check. With replay protection, any attempt to replay the message from the terminal to the bank, i.e. the cryptogram with auxiliary data, would result in the absence of auth, while, without replay protection, the replay would result in the message auth always being present. In both cases it is impossible for an attacker to link the presence or the absence of the auth message with any session other than the one in which the cryptogram was created, and hence it cannot be used to link two sessions with the same card.

### Estimation of the runtime performance

Concluding the analysis, we give a rough estimate of the runtime performance of the UTX protocol, focusing on the card operations. Indeed, since the terminal is a more powerful device than the card, we expect its contribution to the runtime to be minuscule. We make our assessment based on the estimations reported in (Zhou et al., 2017; Wang et al., 2018) for the Multos Card ML3 supporting ECC scalar multiplication. The table below summarises the amount of time for individual operations performed by the card in the UTX protocol. As we expect the equality check and forming \(n\)-tuples operations to be negligible, we omit them in our calculation. Overall the numbers add up to 700ms per on-card computation per session.
We expect that further optimisation and using more recent smart card platforms would lower this number within the current 500 _ms_ recommendation (Bartos et al., 2017). The numbers from the third line correspond to the 256-bit security level for \(\phi\) and check operations, which are evaluated using the Barreto-Naehrig pairing-friendly curve since Verheul signatures are pairing-based, and ECDSA, respectively. To the best of our knowledge, there is no credible source for 256-bit security assessment for the rest, hence we use the available benchmarks - dec and { } are evaluated using 128-bit key AES in CBC mode on 128-bit message, and, finally, h has been tested using SHA-256 on 128-bit message. ## 5. Conclusion In this paper, we have identified in Section 2 the requirements for a smartcard-based payments protocol, and have demonstrated that at least one protocol satisfying these requirements exists - the UTX protocol presented in Fig. 4. We strengthen the initial security of EMV as explained in Section 2.2. In particular, we request that the application cryptogram is secret and can only be processed by a legitimate acquiring bank. This requirement is addressed in UTX by using the certified bank's public key that the card obtains at the beginning of each transaction and uses to encrypt the cryptogram as we have explained in Sections 3.4.2, 3.5.4. Fig. 7 summarises how we have proved in ProVerif that UTX satisfies all security requirements we have identified. We explain how ISO 15408 supports targeting unlinkability as our privacy requirement in Section 2.2.3, and highlight that the fingerprint of the card, comprising coarse identities of a card that permit groups of cards to be tracked, should be minimised. Since strong identities compromise unlinkability, we have hidden any strong identity of the card by utilising Verheul signatures to make the validity signature distinct in every session, as explained in Section 3.5.2, and by encrypting the cryptogram that contains the PAN to hide it from the terminal, as explained in Section 3.5.4. We have minimised the card's fingerprint by introducing certificates that reveal that the card is valid for the current and previous months without revealing the expiry date, as explained in Sections 3.2, 3.5.2. If payment systems agree on a common certification authority we may reduce the card's fingerprint further by introducing the Unlinkable application as explained in Section 3.1. Theorem 1 proves that these measures indeed achieve unlinkability in UTX. We provide precisely three modes which agents should implement to process UTX payments. The modes of payment should be standardised and be common to all cards supporting UTX. This avoids cards being distinguished by implementation differences. This contrasts to the current EMV standard, which has many different modes of operation, defined in 2000 pages split into several books; thus the variety of _implementations_ serves as a coarse identity of the card. Moreover, having a concise, coherent, and linear presentation can improve the reliability of the system. Our message sequence chart in Fig. 4 and the applied \(\pi\)-calculus specification of UTX in Fig. 5 go some way towards this aim. Roll-out of the UTX protocol is feasible. The software of banks and terminals can be updated in advance across a region so both accept unlinkable payments, while continuing the support of old payment methods. Then cards supporting unlinkable payments can be issued. 
Of course, new cards must implement only one application to avoid attacks that downgrade cards to EMV. Regarding the protocol design future work includes introducing relay protection [36]. Firstly, it should mitigate the situation where a high-value online transaction is compromised via a relay attack if the PIN is exposed, as we mention in Section 4.1. Secondly, it is essential for the protocol that supports PIN tries counter, which limits the number of incorrect attempts to enter the PIN. An active attacker can exploit PIN tries counter by relaying messages from the honest terminal waiting to process an online high-value transaction to the card and entering the PIN incorrectly enough times to exceed the limit, thereby blocking the card from any online transactions. Then to identify such cards, an attacker should yet again relay communication between the card and an online terminal - transactions would be declined with an explicit reason of the PIN tries exceeded. Relay protection would mitigate this scenario, making it impossible to enter the PIN remotely since the user should be physically close and aware of someone entering the PIN. Regarding the verification future work includes developing automated methods for proving the privacy of security protocols which would lower the analysis effort - the proof of the Theorem 1 in Appendix B illustrates the high cost of the manual analysis of a single protocol. Regardless of the proof method, hand or computer-assisted, we consider checking proofs essential to improve the reliability of the result since even tools occasionally cannot be trusted as we have demonstrated in Section 4.4.
2301.00203
The vertex coordinates of the Galaxy's stellar systems according to the Gaia DR3 catalogue
We present the results of determining the coordinates of the vertices of various stellar systems, the centroids of which are located in the Galactic plane. To do this, the positions, parallaxes, proper motions, and radial velocities of red giants and subgiants contained in the $Gaia$~DR3 catalogue have been used. When determining the components of the deformation velocity tensors in local coordinate systems, we found the coordinates of the vertices of the stellar systems under study. It turned out that there is a complex dependence of vertex deviations $l_{xy}$ in Galactocentric cylindrical ($R, \theta$) and Galactic rectangular ($X,Y$) coordinates. Based on the approach proposed in this paper, heliocentric distances to vertices have been determined for the first time. The results obtained show that in addition to the fact that the angular coordinates of the Galactic center and the vertices of stellar systems do not coincide, their heliocentric distances do not coincide as well. This presumably indicates that there are structures in the Galaxy that noticeably affect its axisymmetry.
A. M. Dmytrenko, P. N. Fedorov, V. S. Akhmetov, A. B. Velichko, S. I. Denyshchenko
2022-12-31T13:54:57Z
http://arxiv.org/abs/2301.00203v2
# The vertex coordinates of the Galaxy's stellar systems according to the Gaia DR3 catalogue ###### Abstract We present the results of determining the coordinates of the vertices of various stellar systems, the centroids of which are located in the Galactic plane. To do this, the positions, parallaxes, proper motions, and radial velocities of red giants and subgiants contained in the \(Gaia\) DR3 catalogue have been used. When determining the components of the deformation velocity tensors in local coordinate systems, we found the coordinates of the vertices of the stellar systems under study. It turned out that there is a complex dependence of vertex deviations \(l_{xy}\) in Galactocentric cylindrical (\(R\), \(\theta\)) and Galactic rectangular (\(X,Y\)) coordinates. Based on the approach proposed in this paper, heliocentric distances to vertices have been determined for the first time. The results obtained show that in addition to the fact that the angular coordinates of the Galactic center and the vertices of stellar systems do not coincide, their heliocentric distances do not coincide as well. This presumably indicates that there are structures in the Galaxy that noticeably affect its axisymmetry. keywords: methods: data analysis-proper motions-stars: kinematics and dynamics-Galaxy: kinematics and dynamics-solar neighbourhood. ## 1 Introduction The third release of the \(Gaia\) mission, \(Gaia\) DR3 data (Gaia collaboration et al. (2016, 2022)), made new data available for studying the stellar kinematics not only in the Solar neighborhood, but also in a significant part of the Milky Way. The availability of high-precision data of millions of stars in the releases of the \(Gaia\) mission makes it possible to obtain new information about the kinematics of stars. Particularly valuable for kinematic studies is the availability of data on radial velocities and parallaxes, which, together with the stellar proper motions, make it possible to analyze the three-dimensional velocity field \(\mathbf{V}(\mathbf{r})\). This entails the emergence of new possibilities for determining some global kinematic parameters, as shown in the works by Fedorov et al. (2021, 2023). In this work, we present the results of determining the kinematic centers of rotation of various stellar systems. And although there are certain difficulties in interpreting the results obtained, caused, for example, by the discrepancy between the distances to sources determined from the \(Gaia\) parallaxes and using the Bayesian method, usage of different estimates of the distances \(R_{\odot}\) to the Galactic center, or relatively small number of stars with known radial velocities (\(\sim\)33.7 million), the relevance of such works is beyond doubt. In this paper, we determine the vertex coordinates of stellar samples whose centroids are located in the Galactic plane, using 3 different parallax sets. These are trigonometric parallaxes given in \(Gaia\) DR3, the same parallaxes, but corrected using the Bayesian method (Bailer-Jones et al. (2021)), as well as parallaxes corrected with the use of the Parallax bias \(Z_{5}\) proposed by Lindegren et al. (2021). In addition, according to the recommendations from Cantat-Gaudin & Brandt (2021), the proper motions of our sample were corrected in the range of magnitudes \(M_{G}\) 9 - 13. One way to determine the coordinates of the vertex is to analyze the strain rate tensor of the stellar system (Bobylev (2004); Bobylev & Bajkova (2020)). The paper is structured as follows. 
In section 2, we describe the basic steps to form stellar samples in a rectangular coordinate system from giants and subgiants, which are contained in the \(Gaia\) DR3 catalogue. In section 3 we present the formulas that are used in this paper to calculate the angle \(l_{xy}\). Section 4 contains a description of the problem solution, analysis and interpretation of the results obtained. ## 2 The sample It is well known, that the Galactic rectangular coordinate system with the origin at the barycenter of the Solar System is defined by a right-handed triple of mutually orthogonal unit vectors \((\mathbf{i},\mathbf{j},\mathbf{k})\) directed as follows: the X axis from the observer towards the galactic center \(L=0^{\circ}\), \(B=0^{\circ}\), the axis Y in the direction of Galactic rotation \(L=90^{\circ}\), \(B=0^{\circ}\), the Z axis is parallel to the direction to the North Pole of the Galaxy \(B=90^{\circ}\). As noted in (Fedorov et al. (2021, 2023)), a similar coordinate system can be introduced at any arbitrary point on the Galactic plane, provided that the spatial coordinates and components of the spatial velocity are known for each star. The transition from the Galactic Cartesian coordinate system with the origin at the barycenter of the Solar System (\(XYZ\)) to such a local Cartesian system (\(xyz\)) with the origin at the chosen point (\(xyz\)) is equivalent to moving a fictitious observer from the barycenter of the Solar System to the point specified by the coordinates of the chosen origin of the system. In the local Cartesian coordinate system, as well as in the Galactic Cartesian system, the x-axis (\(l=L=0^{\circ}\), \(b=B=0^{\circ}\)) is always directed from a particular centroid to the center of the Galaxy, the \(Oy\)-axis (\(l=L=90^{\circ}\), \(b=B=0^{\circ}\)) in the direction of galactic rotation and perpendicular to \(Ox\), while the \(Oz\)-axis (\(b=B=90^{\circ}\)) is always perpendicular to the plane of the Galaxy. The orientation of the x and y axes of the local coordinate systems was specified using the value \(\mathrm{R}_{\odot}=8.28\) kpc. (GRAVITY Collaboration et al. (2021)). In a particular case, for an observer who is in the Sun, the local coordinate system will coincide with the rectangular Galactic coordinate system. In this work, 33 million stars from \(Gaia\) DR3 were selected for which the radial velocities are known. From this sample, as in our previous work (Fedorov et al. (2023)), were excluded those stars for which the following conditions are satisfied (Lindegren et al. (2018)): \[\begin{cases}RUWE>1.4,\\ \pi/\sigma_{\pi}<5,\\ (\mu_{\alpha}/\sigma_{\mu\alpha})^{2}+(\mu_{\delta}/\sigma_{\mu\delta})^{2}< 25.\end{cases}\] By cutting off the main sequence on the \(M_{G}-(BP-RP)\) Hertzsprung-Russell diagram with two linear functions, as shown in Fig. 1, from the approximately 30 million stars (remaining after applying the Lindegren criteria), giants and subgiants were selected. Finally, a sample of approximately 15 million giants and subgiants was used further in kinematic studies. The points from which, fictitious observations were made by a fictitious observer, were given as follows. Firstly, we single out spherical regions with a radius of 1 kpc, whose centers are located at the nodes of a rectangular grid coinciding with the Galactic plane. Coincidence with the Galactic plane is provided by setting the condition \(Z=0\) for the coordinates of any node. Thus, the position of each node is uniquely specified by a pair of coordinates \(X\) and \(Y\). 
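Before continuing with the grid construction, the quality filtering described above can be summarised in a short sketch. The snippet below is only an illustration, not the authors' pipeline: it assumes the sample is held in a pandas DataFrame with Gaia-archive-style column names (`ruwe`, `parallax`, `parallax_error`, `pmra`, `pmra_error`, `pmdec`, `pmdec_error`) and that a star is removed if any of the three conditions quoted above holds.

```python
# A minimal sketch of the sample cleaning quoted above (assumed column names,
# not the authors' code). A star is dropped if any of the three conditions holds.
import pandas as pd

def apply_quality_cuts(stars: pd.DataFrame) -> pd.DataFrame:
    bad = (
        (stars["ruwe"] > 1.4)
        | (stars["parallax"] / stars["parallax_error"] < 5.0)
        | ((stars["pmra"] / stars["pmra_error"]) ** 2
           + (stars["pmdec"] / stars["pmdec_error"]) ** 2 < 25.0)
    )
    return stars[~bad]
```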
The distance between adjacent nodes along both coordinates was set equal to 250 pc, and the boundaries of the entire region under study in the Galactic plane were limited by the range of heliocentric distances from -8 to +8 kpc along both coordinates. Each sphere circumscribed around a given node includes stars located at distances not exceeding 1 kpc from it. Thus, the nodes of the rectangular grid are centroids whose velocities are equal to the average velocity of the stars located within the corresponding spheres. Only spheres containing at least 500 stars have been used in the work. ## 3 Determining the coordinates of the vertex. In this paper, deformation velocity tensors were computed for all stellar systems whose centers are located at the nodes of a rectangular grid, as indicated above (see also Fedorov et al. (2023)). If in the local Cartesian coordinate system the radius-vector of stars is denoted as \(\mathbf{r}=(x,y,z)=q_{i}\), and their velocities are \(\mathbf{V}=(V_{x},V_{y},V_{z})=V_{i}\), then from the expansion of the velocity field \(\mathbf{V}(\mathbf{r})\) in the vicinity of the node (centroid), we get: \[V_{i}(d\mathbf{r})=V_{i}(0)+\frac{1}{2}\left(\frac{\partial V_{i}}{\partial q _{k}}-\frac{\partial V_{k}}{\partial q_{i}}\right)_{0}dq_{k}+\frac{1}{2} \left(\frac{\partial V_{i}}{\partial q_{k}}+\frac{\partial V_{k}}{\partial q _{i}}\right)_{0}dq_{k}, \tag{1}\] where \[\frac{1}{2}\left(\frac{\partial V_{i}}{\partial q_{k}}-\frac{\partial V_{k}}{ \partial q_{i}}\right)_{0}=\omega_{ik}=-\omega_{ki}=M^{-}\] are components of antisymmetric and \[\frac{1}{2}\left(\frac{\partial V_{i}}{\partial q_{k}}+\frac{\partial V_{k}}{ \partial q_{i}}\right)_{0}=m_{ik}^{+}=m_{ki}^{+}=M^{+}\] symmetric tensors respectively. \(i\), \(k=1\), \(2\), \(3\). By making the appropriate substitutions, this equation can also be written in Galactic spherical coordinates. In this form, it is known in the literature as the Ogorodnikov-Milne (O-M) kinematic model (Ogorodnikov (1932, 1965)). The second rank symmetric tensor \(M^{+}\) is called the deformation velocity tensor. The matrix of this tensor has the form: \[M^{+}=\begin{pmatrix}m_{11}^{+}&m_{12}^{+}&m_{13}^{+}\\ m_{21}^{+}&m_{22}^{+}&m_{23}^{+}\\ m_{31}^{+}&m_{32}^{+}&m_{33}^{+}\end{pmatrix}. \tag{2}\] The \(M^{+}\) tensor completely determines the rate of deformation motion in the stellar system under consideration. In the rectangular Galactic coordinate system, it has 9 components, 6 of which are independent. As is known, the kinematic interpretation of the diagonal components of the tensor \(m_{11}^{+}\), \(m_{22}^{+}\), \(m_{33}^{+}\) is that these quantities are the velocities of relative elongation (contraction/expansion) along the axes of the coordinate system, and the non-diagonal components \(m_{12}^{+}=m_{21}^{+}\), \(m_{13}^{+}\), \(m_{31}^{+}\), \(m_{23}^{+}=m_{32}^{+}\), characterize the velocities of angular deformation in the planes (\(xOy\)), (\(yOz\)) and (\(xOz\)), respectively. The velocity of angular deformation is understood as a change in the right angle in these planes as a result of deformation. It is convenient to present the results of calculations in the cylindrical Galactocentric coordinate system \(R\), \(\theta\), \(Z\), since its unit vectors are parallel to the unit vectors of local rectangular coordinate systems in each centroid. For example, a node centered on the Sun will have the following coordinates: \(R=R_{\odot}=8.28\,\mathrm{kpc}\), \(\theta=180^{\circ}\), \(Z=0\,\mathrm{kpc}\). Fig. 
2 shows the dependencies of the diagonal components of the deformation velocity tensors \(m_{11}^{+}\), \(m_{22}^{+}\), \(m_{33}^{+}\) on the Galactocentric cylindrical coordinate \(R\). These dependencies are additionally color-coded for different values of the coordinate \(\theta\). Fig. 3 shows the dependencies of the non-diagonal components of the velocity deformation tensors \(m_{12}^{+}\), \(m_{13}^{+}\), \(m_{23}^{+}\) on \(R\) and \(\theta\). It is known from continuum mechanics (Tarapov (2002)) that no matter how a particle of a continuous medium moves, all its deformation can be reduced to the simplest form - expansion (contraction) along three mutually perpendicular directions, which are the main axes \(x^{\prime}y^{\prime}z^{\prime}\) of the tensor.

Figure 1: Selection of red giants and subgiants.

In the system of its main axes \(x^{\prime}y^{\prime}z^{\prime}\), the velocity deformation tensor \(M^{+}\) will have the following matrix: \[M^{+}=\begin{pmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{pmatrix}. \tag{3}\] where the diagonal components \(\lambda_{1},\lambda_{2},\lambda_{3}\) are the principal values of the tensor, which are determined by solving the characteristic cubic equation: \[\det\begin{pmatrix}m_{11}^{+}-\lambda&m_{12}^{+}&m_{13}^{+}\\ m_{21}^{+}&m_{22}^{+}-\lambda&m_{23}^{+}\\ m_{31}^{+}&m_{32}^{+}&m_{33}^{+}-\lambda\end{pmatrix}=0. \tag{4}\] If \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) are positive, then the tensor surface is an ellipsoid; if \(\lambda_{1},\lambda_{2}\), \(\lambda_{3}\) have different signs, the tensor surface is a hyperboloid. Fig. 4 shows the dependencies of the principal values of the tensor on the Galactocentric cylindrical coordinates \(R\) and \(\theta\). As can be seen from the figures, \(\lambda_{1}\) and \(\lambda_{2}\) have opposite signs and different behavior, while the value of \(\lambda_{3}\) is almost independent of \(R\) and \(\theta\) and is close to zero on average. Only at Galactocentric distances greater than 11 kpc do we see a slight systematic deviation of \(\lambda_{3}\) from zero, which we neglected in this work. The numerical values of the parameters \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) indicate that the velocity deformation tensor in the principal axes is almost independent of the \(Z\) coordinate and, therefore, is very close to a flat (two-dimensional) tensor. The kinematic interpretation of the components \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) in the system of principal axes \(x^{\prime}y^{\prime}z^{\prime}\) is identical to that of \(m_{11}^{+},m_{22}^{+},m_{33}^{+}\) in the system of axes \(xyz\). These quantities are the velocities of relative elongation (contraction/expansion) along the principal axes of the tensor.

Figure 2: Diagonal components of the \(M^{+}\) tensor as a function of Galactocentric cylindrical coordinates.

Figure 3: Non-diagonal components of the \(M^{+}\) tensor as a function of Galactocentric cylindrical coordinates.

As can be seen from the figures, during the transition from the Galactic coordinate system to the system of principal axes, the \(m^{+}_{33}\) component almost exactly transforms into \(\lambda_{3}\), i.e., these two components practically coincide numerically. Indeed, it turned out that \(m^{+}_{33}\) almost coincides with \(\lambda_{3}\). However, it is clearly seen that the components \(m^{+}_{11}\) and \(m^{+}_{22}\) were not completely transformed into \(\lambda_{2}\) and \(\lambda_{1}\).
This is due to the fact that the \(m^{+}_{12}\) component, being non-zero, contributes to \(\lambda_{2}\) and \(\lambda_{1}\). Considering the components \(\lambda_{2}\) and \(\lambda_{1}\) in the range of 4-12 kpc, one can see that the dependence of \(m^{+}_{11}\) on \(R\) and \(\theta\), similar to a sinusoidal one, has been transformed into an almost analogous dependence of the component \(\lambda_{2}\). While the dependence \(m^{+}_{22}\) on \(R\) and \(\theta\) has been transformed into a similar dependence \(\lambda_{1}\). These facts indicate that the main axes \(O\alpha^{\prime}\) and \(O\gamma^{\prime}\) of the deformation velocity tensors are located almost in the Galactic plane, and the \(O\gamma^{\prime}\) axis practically coincides with the \(Oz\) axis. This is also confirmed by the behavior of the \(z\)-dependent parameters \(m^{+}_{33}\), \(m^{+}_{13}\) and \(m^{+}_{23}\), which were presented above. The values of these parameters in the entire range of \(R\) and \(\theta\) are close to zero and indicate that the motion of stars is predominantly parallel to the Galactic plane. Therefore, in the further analysis, we consider the tensor \(M^{+}\) as flat, i.e. having only four components \(m^{+}_{11}\), \(m^{+}_{12}\), \(m^{+}_{21}\), \(m^{+}_{22}\), three of which are independent (\(m^{+}_{12}=m^{+}_{21}\)): \[M^{+}=\begin{pmatrix}m^{+}_{11}&m^{+}_{12}\\ m^{+}_{21}&m^{+}_{22}\end{pmatrix}. \tag{5}\] The components of this tensor are related to the gradients of the velocity components in the local rectangular Galactic (\(V_{x}\), \(V_{y}\)) and in the Galactocentric cylindrical (\(V_{R}\), \(V_{\theta}\)) coordinate systems along their coordinate axes by the following relations (Chandrasekhar (1945); Ogorodnikov (1965)): \[\begin{array}{l}A=m^{+}_{22}=\frac{1}{2}\left(\frac{\partial V_{x}}{ \partial x}+\frac{\partial V_{x}}{\partial x}\right)=\frac{1}{2}\left(\frac{ \partial V_{x}}{\partial R}-\frac{V_{y}}{R}+\frac{\partial V_{y}}{\partial R} \right)\\ B=\omega_{3}=\frac{1}{2}\left(\frac{\partial V_{x}}{\partial x}+\frac{ \partial V_{x}}{\partial y}\right)=\frac{1}{2}\left(\frac{\partial V_{x}}{ \partial R}-\frac{1}{R}\frac{\partial V_{y}}{\partial R}+\frac{V_{y}}{ \partial R}\right)\\ C=\frac{m^{+}_{11}-m^{+}_{22}}{2}=\frac{1}{2}\left(\frac{\partial V_{x}}{ \partial R}-\frac{\partial V_{y}}{\partial R}\right)=\frac{1}{2}\left(\frac{ \partial V_{x}}{\partial R}-\frac{1}{R}\frac{\partial V_{y}}{\partial R}- \frac{V_{y}}{R}\right)\\ K=\frac{m^{+}_{11}+m^{+}_{22}}{2}=\frac{1}{2}\left(\frac{\partial V_{x}}{ \partial R}+\frac{\partial V_{y}}{\partial R}\right)=\frac{1}{2}\left(\frac{ \partial V_{x}}{\partial R}+\frac{1}{R}\frac{\partial V_{y}}{\partial R}+ \frac{V_{y}}{R}\right)\end{array} \tag{6}\] where \(A\), \(B\), \(C\) and \(K\) are parameters similar to the generalized Oort constants that can be found for each region of stars under study. In continuum mechanics (Tarapov (2002)), it is shown that for a two-dimensional tensor \(M^{+}\), the angles \(\beta_{1}\) and \(\beta_{2}\), which form the main axes of the tensor \(M^{+}\) with the axes \(Ox\) and \(Oy\) of the coordinate system used, are found from the expression: \[tg(2\beta)=tg(2\beta_{1})=tg(2\beta_{2})=\frac{2m^{+}_{12}}{m^{+}_{11}-m^{+}_{2 2}}, \tag{7}\] where \(\beta_{1}\) is the angle between \(Ox\) and \(Ox^{\prime}\), and \(\beta_{2}\) is between \(Oy\) and \(Oy^{\prime}\). 
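As a small numerical illustration of formula 7 (an assumed sketch, not the authors' code), the principal-axis angle \(\beta\) and the vertex deviation \(l_{xy}=\beta-45^{\circ}\) used in the next subsection can be computed directly from the three independent components of the flat tensor:

```python
# Illustrative sketch: recover beta from eq. (7) and the vertex deviation l_xy = beta - 45 deg.
import numpy as np

def vertex_deviation_deg(m11, m12, m22):
    beta = 0.5 * np.degrees(np.arctan2(2.0 * m12, m11 - m22))  # principal-axis angle beta
    return beta - 45.0                                          # vertex deviation l_xy

# Pure Oort rotation: only A = m12 is non-zero, so beta = 45 deg and l_xy = 0.
print(vertex_deviation_deg(0.0, 2.5, 0.0))    # ~0.0
# Non-axisymmetric case (C = (m11 - m22)/2 != 0): l_xy deviates from zero.
print(vertex_deviation_deg(1.0, 2.5, -1.0))   # about -10.9
```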
If the components of the \(M^{+}\) tensor are expressed in terms of the parameters \(A\), \(B\), \(C\), \(K\), then the matrix 5 will have the following form: \[M^{+}=\begin{pmatrix}K+C&A\\ A&K-C\end{pmatrix}. \tag{8}\] and formula 7 will accordingly be transformed to the form: \[tg(2\beta)=\frac{2A}{(K+C)-(K-C)}=\frac{A}{C}. \tag{9}\] In the case of a purely Oort rotation, there is only the rotational component \(V_{\theta}\), while the components \(V_{z}\) and \(V_{R}\) = 0. In this case, \(V_{\theta}\) does not depend on \(\theta\), i.e. \[\frac{\partial V_{R}}{\partial R}=0,\quad V_{R}+\frac{\partial V_{\theta}}{\partial\theta}=0. \tag{10}\] As a result, the terms by which the parameters \(C\) and \(K\) are determined in a cylindrical Galactocentric coordinate system are equal to zero: \[V_{R}=0,\quad\frac{\partial V_{\theta}}{\partial\theta}=0, \tag{11}\] and hence \(C\) and \(K\) are also equal to zero, and the tensor \(M^{+}\) will have zero diagonal components: \[M^{+}=\begin{pmatrix}0&A\\ A&0\end{pmatrix}. \tag{12}\] Thus, for a purely Oort rotation, it follows from formula 9 that the main axes of the deformation velocity tensor will be directed at an angle \(\beta=45^{\circ}\) to the axes of the local rectangular Galactic coordinate system. In this case, the angle \(\beta\), for any non-zero values of \(A\), will be equal to \(45^{\circ}\). If \(C\) or \(K\) is non-zero, this means that the rotation is not Oort (axisymmetric). In this case, the angle \(\beta\neq 45^{\circ}\).

Figure 4: Eigenvalues of the \(M^{+}\) tensor as a function of Galactocentric cylindrical coordinates.

## 4 Solution of the problem and analysis of the obtained results.

The term vertex commonly refers to a point on the sky relative to which the stellar system under consideration rotates. To determine the coordinates of the vertex, we used the deformation velocity tensors derived by expanding the velocity field of the various subsamples (stellar systems) described in Section 3.

### Using the Gaia DR3 trigonometric parallaxes

#### 4.1.1 Determination of the angular vertex coordinates

As shown above, with axisymmetric (Oort) rotation, the angle \(l_{xy}=(\beta-45^{\circ})\), called the deviation of the vertex longitudes from the direction to the Galactic center, is exactly equal to zero and does not depend on the coordinate angle \(\theta\). The angle \(\beta\) in this case is determined by formula 7. Fig. 5 shows the dependencies of the vertex deviations \(l_{xy}\) on the cylindrical coordinates \(R,\theta\). As can be seen from the figure, the values of the angles \(l_{xy}\) as functions of \(R\) for various \(\theta\) are noticeably different. In Fig. 6 we show the same values, but in the form of a map, where the rectangular Galactic coordinates \(X\) and \(Y\) are plotted along the axes and the \(l_{xy}\) value is displayed in color. The advantage of such a graphical presentation of the results is that one can immediately see the behavior of \(l_{xy}\) in the entire range of rectangular Galactic coordinates \(X\) and \(Y\) where the kinematic analysis has been carried out. In Fig. 5 it is clearly seen that the angle \(l_{xy}\) is not equal to zero, and it changes with \(R\) and \(\theta\). One can clearly see the "stratification and intertwining" of the dependencies \(l_{xy}(R)\) corresponding to various fixed values of the angle \(\theta\). This behavior of the dependencies \(l_{xy}(R,\theta)\) is probably due to the difference in the deformation velocities in different parts of the Galaxy.
This assumption is confirmed by Fig. 6, which demonstrates the differences in the orientations of the tensor surfaces due to the difference in the deformation velocities in different parts of the Galaxy. However, the similarity of the dependencies \(l_{xy}(R)\) in Figs. 5, at different values of the angle \(\theta\), as well as a noticeable predominance of red color in the upper part of Fig. 6, and blue at the bottom, suggests that "stratification and intertwining" may not be due to kinematic causes alone. One of these reasons may be an incorrect value of the accepted Galactocentric distance of the Sun \(R_{\odot}=8.28\) kpc. In this case, the use of the accepted value of \(R_{\odot}\) to determine the rotation angles of the \(x\) axes of local coordinate systems in the direction of the Galactic center will cause an inaccuracy in determining these angles. As a result, the behavior of \(l_{xy}(R,\theta)\) will be determined not only by kinematic differences, but also by the orientation inaccuracy of the axes of local coordinate systems. Checking this assumption, we found that the maximum convergence of the functions \(l_{xy}(R,\theta)\) is realized at a certain value \(R_{\rm V}\), which differs noticeably from the accepted one. In fact, when using the vertex coordinates to set the orientation of local coordinate systems, the dependencies \(l_{xy}(R,\theta)\) become closest and practically turn into one, "least stratified" function \(l_{xy}(R)\), which weakly depends on \(\theta\). It also turned out that the use of one vertex does not provide the best convergence of the functions \(l_{xy}(R,\theta)\) in the entire range of distances \(R\) used. Thus, achieving the best convergence of the functions \(l_{xy}(R,\theta)\) in the range of 5-10 kpc results in a noticeable deterioration in convergence within the range of 10-15 kpc (see Fig. 8 below). This result means that the vertices of different star systems are at different distances from the Sun. Amendt & Cuddeford (1991); Kuijken & Gilmore (1991); Smith et al. (2012) showed that in a stationary, axisymmetric disk galaxy, the axes of the stellar velocity ellipsoid of any local stellar system ideally coincide with the galactic coordinate axes (e.g., Binney & Tremaine (2008); Smith et al. (2012)). The main axes of the deformation velocity tensor for the Oort (axisymmetric) rotation, as shown above, rotate relative to the local axes by an angle of \(45^{\circ}\). It was shown by Dehnen (2000); Minchev & Famaey (2010); Vorobyov & Theis (2008); Saha et al. (2013) that structures which are not axisymmetric can have a noticeable effect on the observed orientation of the stellar velocity ellipsoids. They will have a similar effect on the orientation of the principal axes of the deformation velocity tensor. Figure 5: Vertex deviations depending on Galactocentric cylindrical coordinates (\(R\), \(\theta\)). Figure 6: Vertex deviations depending on rectangular Galactic coordinates (\(X,Y\)). Therefore, in the general case, when a non-axisymmetric rotation is realized, the direction to the Galactic center (a point on the celestial sphere with coordinates \(\alpha_{GC}=266^{\circ},40499\), \(\delta_{GC}=-28^{\circ},93617\), accepted by the Hipparcos consortium Perryman et al. (1997)) and the direction to the vertex (the point of the celestial sphere relative to which the stellar system rotates) do not coincide. 
Our result shows that not only the spherical coordinates of the Galactic center and the vertices of stellar systems, but also their distances from the Sun do not coincide. #### 4.1.2 Determination of distances to vertices The values of the \(R_{\rm V}\) distance can be estimated using the approach we proposed. Assuming that the accepted value \(R_{\odot}\)=8.28 kpc is correct, the \(x\) axes of local coordinate systems will always be directed to the same point - the Galactic center with coordinates \(L=0^{\circ}\), \(B=0^{\circ}\), \(R_{\odot}\)=8.28 kpc. In this case, the main axis \(x^{\prime}\) of the deformation velocity tensor, which is calculated in the local coordinate system \(XOY\), being rotated by 45 degrees, will be directed to the vertex. In other words, the longitude of the vertex in the local coordinate system will be numerically equal to the angle \(l_{xy}\). Using the values of the angles \(l_{xy}\) calculated for local systems, it is possible to construct rays that pass through their vertex point, with their origins locating in the centroids. To set the equations of rays (straight lines) passing through two points, we use rectangular Galactic coordinates of specific centroids and points lying on unit circles built around these centroids in the \(XOY\) plane. If their radii \(r\) are taken equal to 1 kpc, then the coordinates of the second point can be found as follows: \(X=X_{c}+r\cos l_{xy}\), \(Y=Y_{c}+r\sin l_{xy}\). Finding the coordinates of the intersection point of two arbitrary rays (straight lines) allows one to find the distance from the Sun to this point in the \(XOY\) coordinate system. Pairwise intersections of rays form a certain region of intersection in the Galactic plane. Fig. 7 show the region of intersection of the rays. Its rather large sizes and non-uniformity are visible (there are many peaks or nodes). The largest of these ray intersection nodes is located approximately at a distance of 9.3 kpc from the Sun. At the same time, it is not located on the \(OX\) axis of the rectangular Galactic coordinate system, but it is 0.5 kpc away from it along the \(OY\) axis in the positive direction. Similarly, we can select other nodes that can be seen in Fig. 7. The presence of many nodes can be explained by the present nonlinear dependence \(l_{xy}(R,\theta)\), which was considered earlier. The coordinates of the vertices \(G_{\rm V}^{j}(X_{\rm V}^{j},Y_{\rm V}^{j})\), at which the functions \(l_{xy}(R,\theta)\) will have the best convergence in certain ranges of Galactocentric distances \(R\), were estimated using the least squares method. To this end, a system of equations was compiled for those rays whose origins got inside the chosen range of Galactocentric distances \(\Delta R\). An additional condition for including the ray equation into the system of equations was the presence of at least 25 thousand stars in the stellar system. The solution of the system was the desired coordinates of the point of intersection of all rays -- the vertex. So, for the range \(7<R<10kpc\), we got point \(G_{\rm V}^{1}\) with coordinates: \(X_{\rm V}^{1}=9.82\) kpc, \(Y_{\rm V}^{1}=0.31\) kpc. For the range of Galactocentric distances \(10<R<15\) kpc, we got another point - \(G_{\rm V}^{2}\), which already has different coordinates: \(X_{\rm V}^{2}=8.99\) kpc, \(Y_{\rm V}^{2}=-0.36\) kpc. Also, we have estimated the coordinates of the point \(G_{\rm V}^{0}=G_{\rm V}\), obtained using the entire available range of \(R\) and called by us the general vertex. 
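The least-squares step described above can be illustrated with the following sketch (an assumed minimal implementation, not the authors' code): given the centroid positions and the unit direction of each ray towards the vertex, it returns the point minimising the summed squared perpendicular distances to all rays.

```python
# Least-squares intersection of 2D rays (illustrative sketch).
import numpy as np

def least_squares_intersection(origins, directions):
    """origins: (N, 2) centroid positions [kpc]; directions: (N, 2) unit vectors."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(origins, directions):
        P = np.eye(2) - np.outer(d, d)   # projector onto the normal of the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # (X_V, Y_V) in the same frame as the centroids

# Two rays aimed at (9.4, 0.6) kpc from different centroids recover that point.
origins = np.array([[0.0, 0.0], [2.0, -3.0]])
target = np.array([9.4, 0.6])
directions = target - origins
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
print(least_squares_intersection(origins, directions))   # ~[9.4, 0.6]
```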
The coordinates of this general vertex are: \(X_{\rm V}^{0}=9.39\) kpc, \(Y_{\rm V}^{0}=0.59\) kpc. In Table 1, we provide for all points \(G_{\rm V}^{j}\) the coordinates and the errors of their determination, as well as the calculated heliocentric distances. Now we can use the coordinates of the point \(G_{\rm V}^{j}(X,Y)\) instead of the \(X\)-coordinate of the Galactic center, equal to \(R_{\odot}\). This allows us to define new "vertexcentric" cylindrical coordinate systems \(R^{\prime},\theta^{\prime}\) and to orient properly the local coordinate system in order to construct new dependencies \(l_{xy}(R^{\prime},\theta^{\prime})\) and \(l_{xy}(X,Y)\). These results are shown in Figs. 8 and 9. Comparing Figs. 5 and 8, one can note an improvement in the convergence of the dependence \(l_{xy}(R^{\prime},\theta^{\prime})\) in those ranges of \(R\) where the corresponding \(G_{\rm V}^{j}\) was used. A similar conclusion can be reached by comparing Figs. 6 and 9. It should be noted that the local structures, which are clearly visible on the maps, remained the same, which indirectly confirms their purely kinematic, and not geometric, nature. The deviation of the angle \(l_{xy}\) from zero in the entire region under study can be considered as a measure of the non-axisymmetry of the Galaxy.

\begin{table} \begin{tabular}{c c c c} \hline & \(G_{\rm V}^{0}=G_{\rm V}\) & \(G_{\rm V}^{1}\) & \(G_{\rm V}^{2}\) \\ \hline \(\Delta R\), kpc & 0–16 & 7–10 & 10–15 \\ \(X_{\rm V}\), kpc & 9.39 & 9.82 & 8.99 \\ \(Y_{\rm V}\), kpc & 0.59 & 0.31 & -0.36 \\ \(R_{\rm V}\), kpc & 9.39 & 9.83 & 8.99 \\ \(\epsilon\left(X_{\rm V}\right)\), kpc & 0.04 & 0.04 & 0.07 \\ \(\epsilon\left(Y_{\rm V}\right)\), kpc & 0.02 & 0.02 & 0.03 \\ \hline \end{tabular} \end{table} Table 1: Results of estimation of Galactic rectangular coordinates of vertices

Figure 7: Rays directed to the vertex in the Galactic plane and the area of their intersection.

### Using corrected parallaxes and proper motions of Gaia DR3

It is known from Lindegren et al. (2021) that the \(Gaia\) DR3 parallaxes are biased, that is, they are systematically offset from the expected distribution around zero by several tens of microarcseconds. The authors show that this parallax offset depends in a non-trivial way on the magnitude, color, and ecliptic latitude of the source, and give recommendations for a systematic correction of the parallaxes. In particular, users are strongly encouraged to make their own judgment as to whether the specified offset correction is appropriate for their particular application. To correct the parallaxes, we used the parallax bias \(Z_{5}\) computed according to Table 9 in the paper by Lindegren et al. (2021). The distribution of the number of stars in our sample is limited to about magnitude 17 because the stars were selected taking into account the presence of radial velocity measurements. Therefore, we divided the range of stellar magnitudes into only two parts. In the magnitude range from 6 to 13, we applied an offset of -30 \(\mu\)as, and -40 \(\mu\)as in the range \(m>13\).

Figure 8: Vertex deviation depending on “vertexcentric” cylindrical coordinates. The determined vertex coordinates have been used to set the orientation of local Galactic rectangular coordinate systems.

Figure 9: Vertex deviation depending on rectangular Galactic coordinates. The determined vertex coordinates have been used to set the orientation of local Galactic rectangular coordinate systems.

Since in the work by Lindegren et al. (2021) in Fig.
20, jumps are seen in the range from 11.5 to 13 magnitudes, we decided to use an average value of approximately \(Z_{5}\) = -30 \(\mu\)as in this range. In addition, we have corrected the proper motions of stars in the magnitude range from 6 to 13. It was already shown by Fedorov et al. (2018) that the proper motions of TGAS stars in the range from 11.5 to 13 magnitudes were distorted by the magnitude equation. Lindegren et al. (2018) and Brandt (2018) showed that in the second release of \(Gaia\) (DR2) data, the reference frame of bright stars rotates at a rate of \(\sim\)0.15 mas yr\({}^{-1}\) relative to faint stars and quasars. In EDR3, this rotation has already been removed (see Section 4.5 in Lindegren et al. (2021)). To align the proper motion system of the stars in our sample brighter than \(G=13\) with the International Celestial Reference System, we used corrections calculated according to the recommendations presented by Cantat-Gaudin & Brandt (2021). Figures 10, 11, 12, 13, 14 and Tables 2, 3 show the results obtained after applying the corrections of parallaxes and proper motions to our stellar sample. We provide plots with dependencies similar to those presented in 4.1.1 and 4.1.2; however, we omit the description of some figures, since the general idea of the behavior of the values shown in them remains unchanged. At the same time, changes in these quantities eventually lead to some change in the dependence \(l_{xy}(R^{\prime},\theta^{\prime})\). This, in turn, leads to the fact that the best convergence of the functions \(l_{xy}(R^{\prime},\theta^{\prime})\) is realized for some other \(G_{\rm V}\) values, which are noticeably smaller than the \(G_{\rm V}\) found in the previous subsection. The results of determining the vertex coordinates in this case are presented in Table 2.

## 5 Conclusions

The approach used in this work, which places a fictitious observer at an arbitrary point in the Galaxy and determines the stellar velocity field there in the framework of the O-M model, made it possible to obtain a number of new results related to a significant part of the Galaxy. In this work, we have applied this approach to determine the coordinates of the vertices of various stellar systems contained in spherical regions with a radius of 1 kpc, whose centers are located in the Galactic plane. Since the analysis revealed that the \(M^{+}\) deformation velocity tensors calculated in local coordinate systems are almost flat, we used only four components \(m_{11}^{+}\), \(m_{12}^{+}\), \(m_{21}^{+}\), \(m_{22}^{+}\). It turned out that the deviations of the vertices of these stellar systems obey a certain law and are presented mainly in the form of the dependence \(l_{xy}(R)\), depending only weakly on \(\theta\). This result could not have been obtained without knowledge of the spatial coordinates and velocities of the stars contained in _Gaia_ DR3, which allow one to set a local Galactic coordinate system at an arbitrary point in the Galactic plane.

Figure 12: Vertex deviation depending on “vertexcentric” cylindrical coordinates. The determined vertex coordinates have been used to set the orientation of local Galactic rectangular coordinate systems. Corrected parallaxes (Lindegren et al. (2021)) and proper motions (Cantat-Gaudin & Brandt (2021)) have been used.

Figure 13: Vertex deviation depending on rectangular Galactic coordinates. The determined vertex coordinates have been used to set the orientation of local Galactic rectangular coordinate systems. Corrected parallaxes (Lindegren et al. (2021)) and proper motions (Cantat-Gaudin & Brandt (2021)) have been used.
Usually, in works on determining the deviations of the vertex, it is explicitly or implicitly assumed that the distance from the Sun to the Galactic center and to the vertex are the same. This is indeed true for axisymmetric systems. For the case when \(m_{11}^{\star}\) and \(m_{22}^{\star}\) are not equal to zero, we show that the vertices of different stellar systems are located at different distances that do not coincide with the accepted distance of the Sun \(R_{\odot}\)=8.28 kpc to the center of the Galaxy. This indicates that for the investigated part of the Galaxy there is no single center of rotation, as in the case of axisymmetric systems. Unfortunately, our approach does not allow us to indicate the exact heliocentric distances of the vertices, since they turn out to be dependent on the knowledge of the systematic errors of the parallaxes used. Although we cannot give exact values of the distances from the Sun to the vertices, it is still possible to indicate suitable values of \(G_{\rm V}\), at which the function \(l_{xy}(R^{\prime},\theta^{\prime})\), in a specific range of Galactocentric distances, has a minimum stratification. Our results indicate that not only the angular coordinates of the Galactic center and the vertices of stellar systems do not coincide, but also their distances to the Sun do not coincide either. These results can be useful in many kinematic and dynamic problems. ## 6 Acknowledgements This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC,[https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. We are immensely grateful to the Armed Forces of Ukraine for the fact that in wartime we still have the opportunity to work and do science. We sincerely thank the anonymous reviewer for a careful reading the paper, very useful comments, and most importantly, for constructive suggestions. ## Data Availability The used catalogue data is available in a standardised format for readers via the CDS ([https://cds.u-strasbg.fr](https://cds.u-strasbg.fr)). The software code used in this paper can be made available upon request by emailing the corresponding author.
2309.04647
Stochastic Differential Mean-Field Games in a Weak Formulation
We study a Mean Field Game system in a weak formulation through its associated McKean-Vlasov Forward-Backward Stochastic Differential Equation (FBSDE). Our main goal is to obtain existence and regularity results of this FBSDE using the techniques of Malliavin calculus, in particular, classical and Malliavin differentiability. Using these results, we characterize the decoupling (master) field.
Hector Sanchez Morgado, Jesus Sierra
2023-09-09T00:13:37Z
http://arxiv.org/abs/2309.04647v1
# Stochastic differential mean-field games in a weak formulation ###### Abstract. We study a Mean Field Game system in a weak formulation through its associated McKean-Vlasov Forward-Backward Stochastic Differential Equation (FBSDE). Our main goal is to obtain existence and regularity results of this FBSDE using the techniques of Malliavin calculus, in particular, classical and Malliavin differentiability. Using these results, we characterize the decoupling (master) field. Key words and phrases:Mean Field Games, Hormander condition, Malliavin Calculus, McKean-Vlasov FBSDE 2020 Mathematics Subject Classification: 49N80, 91A15, 35Q89 J. Sierra was supported by CONACYT through its program Estancias Posdoctorales por Mexico. ## 1. Introduction ### Mean field games A mean-field game (MFG) is a system that models interacting agents in very large populations [4]. The agents are rational and seek to optimize a value function by selecting appropriate controls. Our main goal is to study a \(weak\ formulation\) of the following model of MFG: \[\inf_{\alpha\in\mathbb{A}}J(t,x;\alpha)\text{ with }J(t,x;\alpha)=\mathbb{E} \left[\int_{t}^{T}L(X_{s},\alpha_{s},\mathcal{L}(X_{s}))ds+g(X_{t},\mathcal{L }(X_{t}))\right], \tag{1}\] subject to \[\begin{cases}dX_{s}&=\sigma(X_{s})\alpha_{s}ds+\sigma(X_{s})\circ dW_{s},\ s \in[t,T],\\ X_{t}&=x\in\mathbb{R}^{d},\end{cases} \tag{2}\] where \(\circ\) represents Stratonovich integration. The dynamics of the private state \(X_{s}\) of a representative agent is given by (2). \(W\) is an \(m-\)dimensional Brownian motion on \([0,T]\) with the canonical probability space \((\Omega,\mathcal{F},P)\), \(\Omega=C_{0}([0,T]\,;\mathbb{R}^{m})\), \(P\) is the \(m-\)dimensional Wiener measure, and \(\mathcal{F}\) is the completion of the Borel \(\sigma-\)field of \(\Omega\) with respect to \(P\). Furthermore, \(\sigma(x)=[\sigma_{1}(x),\ldots,\sigma_{m}(x)]\in\mathbb{R}^{d\times m}\), and \(\sigma_{1}(x),\ldots,\sigma_{m}(x)\) are \(C_{b}^{2,\alpha}\) vector fields, \(\alpha>0\), i.e., \((2,\alpha)\)-Holder continuous with bounded derivatives; this will allow us to ensure later that \(b\) belongs to \(C_{b}^{1,\alpha}\). In (1)-(2), the agent manages its state by choosing a control \(\alpha\in\mathbb{A}\); \(\alpha_{s}\) is a progressively measurable \(\mathbb{R}^{m}-\)valued stochastic process satisfying the admissibility condition \[\mathbb{E}\left[\int_{t}^{T}\left|\alpha_{r}\right|^{2}dr\right]<\infty \tag{3}\] The agent chooses a control driven by the desire of minimizing an expected cost, \(J(t,x;\alpha)\), over a period \([t,T]\). This expected cost is a combination of a running cost, \(L:\mathbb{R}^{d}\times\mathbb{R}^{m}\times\mathcal{P}(\mathbb{R}^{d})\to \mathbb{R}\), and a terminal cost, \(g:\mathbb{R}^{d}\times\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}\); \(\mathcal{P}(\mathbb{R}^{d})\) is the space of Borel probability measures on \(\mathbb{R}^{d}\) and \(\mathcal{L}(X_{s})\) stands for the law of \(X_{s}\). Both \(L\) and \(g\) include the interactions between the agent and the mean field represented by \(\mathcal{L}(X_{s})\). The fact that the statistical distribution of the agents has to be given by \(\mathcal{L}(X_{s})\) indicates that we are searching for an equilibrium in the sense of Nash (see, e.g., [6]). 
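For intuition about the state dynamics, the following one-dimensional Euler-Maruyama sketch (toy coefficients and a constant control; it is not part of the paper) simulates (2) in its Itô form, in which the Stratonovich correction drift \(b(x)=\tfrac{1}{2}\sigma(x)\sigma^{\prime}(x)\) appears explicitly, as written out just below.

```python
# Illustrative Euler-Maruyama simulation of the controlled state dynamics
# dX = (b(X) + sigma(X) * alpha) dt + sigma(X) dW, with the 1-D Stratonovich
# correction b(x) = 0.5 * sigma(x) * sigma'(x). Toy coefficients, not from the paper.
import numpy as np

def simulate_state(x0, alpha, sigma, dsigma, T=1.0, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        b = 0.5 * sigma(x) * dsigma(x)          # Ito correction drift
        dw = rng.normal(scale=np.sqrt(dt))      # Brownian increment
        x += (b + sigma(x) * alpha) * dt + sigma(x) * dw
    return x

# sigma(x) = 1 + 0.1 sin(x) (so sigma'(x) = 0.1 cos(x)), constant control alpha = 0.5.
print(simulate_state(0.0, 0.5, lambda x: 1.0 + 0.1 * np.sin(x),
                     lambda x: 0.1 * np.cos(x)))
```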
The control problem (1)-(2) was studied in [10] in a particular case, where they obtained regularity results for the value function \[u(t,x)=\inf_{\alpha\in\mathbb{A}}J(t,x;\alpha).\] In what follows, we consider the SDE (2) in terms of Ito integration, that is, \[\begin{cases}dX_{s}&=\left[b(X_{s})+\sigma(X_{s})\alpha_{s}\right]dt+\sigma(X_ {s})dW_{s},\ s\in[t,T]\,,\\ X_{t}&=x\in\mathbb{R}^{d}.\end{cases}. \tag{4}\] where \[b^{i}(x)=\frac{1}{2}\sum_{l=1}^{m}\sum_{j=1}^{d}\sigma_{l}^{j}(x)\partial_{j} \sigma_{l}^{i}(x),\ i=1,\ldots,d.\] We will also assume that the (smooth) vector fields \(\sigma_{1},\ldots,\sigma_{m}\) satisfy the Hormander condition \[\mathcal{L}(\sigma_{1}(x),\ldots,\sigma_{m}(x))=\mathbb{R}^{d},\ \forall x\in \mathbb{R}^{d}, \tag{5}\] where \(\mathcal{L}(\sigma_{1}(x),\ldots,\sigma_{m}(x))\) denotes the Lie algebra generated by the given vector fields. Recall that this Lie algebra is the span of \[\sigma_{1},\ldots,\sigma_{m},\ [\sigma_{i},\sigma_{j}]\,,\ [\sigma_{i},[\sigma_{j}, \sigma_{k}]]\,,\ldots,\ 1\leq i,j,k\leq m,\] i.e., the iterated commutators of the family of vector fields \(\sigma_{1},\ldots,\sigma_{m}\), with \[[X,Y]:=XY-YX=\sum_{i,j=1}^{d}(X^{j}Y^{i}_{x_{j}}-Y^{j}X^{i}_{x_{j}})\partial_{ x_{i}},\] for vector fields \(X\) and \(Y\). Since solutions of SDEs like (4) are expected to have finite moments, we shall work in the space \(\mathcal{P}_{2}(\mathbb{R}^{d})\) which consists of Borel probability measures with finite second moments, i.e., \[\mathcal{P}_{2}(\mathbb{R}^{d})=\left\{\mu\in\mathcal{P}(\mathbb{R}^{d}):\int _{\mathbb{R}^{d}}\left|x\right|^{2}d\mu(x)<\infty\right\}.\] Furthermore, we endow \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with the \(2-\)Wasserstein distance \(W_{2}\): if \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the \(2-\)Wasserstein distance \(W_{2}(\mu,\nu)\) is given by: \[W_{2}(\mu,\nu)=\inf_{\pi\in\Pi_{2}(\mu,\nu)}\left[\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}\left|x-y\right|^{2}\pi(dx,dy)\right]^{1/2}, \tag{6}\] where \(\Pi_{2}(\mu,\nu)\) is the set of probability measures in \(\mathcal{P}_{2}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) with marginals \(\mu\) and \(\nu\). Moreover, if \(X,X^{\prime}\) are square integrable \(\mathbb{R}^{d}-\)valued random variables, we have \[W_{2}(\mathcal{L}(X),\mathcal{L}(X^{\prime}))^{2}\leq\mathbb{E}\left[\left|X-X ^{\prime}\right|^{2}\right].\] For our functionals defined on the \(2\)-Wasserstein space, we will consider differentiation with respect to the probability measure \(\mu\) as introduced by P. L. Lions in [4]: for (a differentiable) \(f:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\), the derivative of \(f\) with respect to \(\mu\) is a function \(\partial_{\mu}f:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to\mathbb{ R}^{d}\). In particular, we will focus on the space \(C^{1,1}_{b}(\mathcal{P}_{2}(\mathbb{R}^{d}))\) of continuously differentiable functionals over \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with Lipschitz-continuos bounded derivatives; see Section 3 for details. The main idea in our analysis of (1)-(4) consists in giving a probabilistic representation of the value function of the optimization problem as the solution of a Backward Stochastic Differential Equation (BSDE). 
For this, we look for a minimizing control \[\hat{\alpha}(x,z,\mu)=\underset{a\in A}{\arg\min}\ L(x,a,\mu)-a\cdot z,\ z\in \mathbb{R}^{m}.\] We will assume that \(L(\cdot,\cdot,\mu)\) belongs to \(C^{3}(\mathbb{R}^{d}\times\mathbb{R}^{m})\) for all \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and \(L\) is strongly convex and has quadratic growth in \(a\), which will imply that there exists a unique minimizer \(\hat{\alpha}\), such that \(\hat{\alpha}(\cdot,\cdot,\mu)\) belongs to \(C^{3}(\mathbb{R}^{d}\times\mathbb{R}^{m})\) for all \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Therefore, we can represent (1)-(4) as the following McKean-Vlasov FBSDE (see [6, Section 4.4]): \[\begin{cases}dX_{s}=[b(X_{s})+\sigma(X_{s})\hat{\alpha}(X_{s},Z_{s},\mathcal{ L}(X_{s}))]\,ds+\sigma(X_{s})dW_{s}\\ dY_{s}=-L(X_{s},\hat{\alpha}(X_{s},Z_{s},\mathcal{L}(X_{s})),\mathcal{L}(X_{s} ))ds+Z_{s}\cdot dW_{s}\end{cases} \tag{7}\] for \(s\in[t,T]\), \(X_{t}=x\), and \(Y_{T}=g(X_{T},\mathcal{L}(X_{T}))\). Finally, we assume that the functions \(L(\cdot,a,\cdot)\) for all \(a\in\mathbb{R}^{m}\), and \(g\) satisfy the Lasry-Lions monotonicity condition: **Definition 1**.: A real valued function, \(B\), on \(\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\) is monotone (in the sense of Lasry and Lions) if, for all \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the mapping \(\mathbb{R}^{d}\ni x\mapsto B(x,\mu)\) is at most of quadratic growth, and, for all \(\mu,\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) we have: \[\int_{\mathbb{R}^{d}}\left[B(x,\mu)-B(x,\mu^{\prime})\right]d(\mu-\mu^{\prime })(x)\geq 0.\] In (7), the process \(Z\) is usually referred to as the _control process_. The function \(L\) is called the _driver_ and the random variable \(g(X(T))\) is the _terminal condition_. To study (1)-(4) through (7) we will rely heavily on the theory of Malliavin calculus, also called stochastic calculus of variations. For a thorough review of this topic, see, e.g., [14, 15]. ### Weak formulation To obtain regularity results using the techniques of Malliavin calculus, we will focus on the _weak formulation_ of (1)-(4). Let \((\Omega,\mathcal{F},\mathcal{F}_{s}^{t},P,W)\) be our generalized reference probability space [9] and assume now that \(\alpha(r)\), \(r\in[t,T]\) is an \(\mathcal{F}_{s}^{t}-\)progressively measurable process with values in \(\mathbb{R}^{m}\) such that \[\mathbb{E}\left[\exp(\frac{1}{2}\int_{t}^{T}\left|\alpha_{r}\right|^{2}dr) \right]<\infty\ \text{(Novikov's condition)}. \tag{8}\] Let \[K_{s}=-\int_{t}^{s}\alpha_{r}\cdot dW_{r}-\frac{1}{2}\int_{t}^{s}\left|\alpha _{r}\right|^{2}dr,\ s\in[t,T]\,,\] and \[M_{s}=\exp(K_{s}).\] Since the controls \(\alpha\) satisfy Novikov's condition (8), Girsanov's theorem ensures that \(M_{s}\) is a \(P-\)martingale and we can define a probability \(\tilde{P}\) by setting \(\tilde{P}(A)=\mathbb{E}\left[\mathbf{1}_{A}M_{t}\right],\)\(A\in\mathcal{F}\). Moreover, the process \[\tilde{W}_{s}=W_{s}-W_{t}+\int_{t}^{s}\alpha_{r}dr\] is an \(m-\)dimensional Brownian motion with respect to \(\mathcal{F}_{s}^{t}\) and \(\tilde{P}\), and \[\int_{t}^{s}\sigma(X_{r})d\tilde{W}_{r}= \int_{t}^{s}\sigma(X_{r})dW_{r}-\left[\int_{t}^{s}\sigma(X_{r}) dW_{r},K_{s}\right]\] \[= \int_{t}^{s}\sigma(X_{r})dW_{r}+\int_{t}^{s}\sigma(X_{r})\alpha_{ r}dr,\] where \([\cdot,\cdot]\) is the \(\mathbb{R}^{d}-\) valued covariation process. Considering the last expression, (4) becomes \[dX_{s}=b(X_{s})ds+\sigma(X_{s})d\tilde{W}_{s}. 
\tag{9}\] We define \[\inf_{\alpha\in\mathbb{A}}J^{weak}(\alpha)\text{ with }J^{weak}(\alpha)=E^{ \tilde{P}}\left[g(X_{T},\mathcal{L}(X_{T}))+\int_{t}^{T}L(X_{s},\alpha_{t}, \mathcal{L}(X_{s}))ds\right], \tag{10}\] and we call (9)-(10) the weak formulation of the MFG. We associate to (9)-(10) the following FBSDE: \[\begin{cases}dX_{s}=b(X_{s})ds+\sigma(X_{s})d\tilde{W}_{s},\\ dY_{s}=-L(X_{s},\hat{\alpha}(X_{s},Z_{s},\mathcal{L}(X_{s}),\mathcal{L}(X_{s} ))ds+Z_{s}\cdot d\tilde{W}_{s},\end{cases} \tag{11}\] \(s\in[t,T]\), \(X_{t}=x\), and \(Y_{T}=g(X_{T},\mathcal{L}(X_{T}))\), where \[\hat{\alpha}(x,z,\mu)=\operatorname*{arg\,min}_{a\in A}\left\{L(x,a,\mu)-a \cdot z\right\},\] We emphasize that one of the main difficulties in our analysis is related to the quadratic growth of \(L\) in its second variable. On the other hand, obtaining regularity information about the so-called _master field_\(u=u(t,x,\mu)\) (the decoupling field of the McKean-Vlasov FBSDE) should allow us to show that \(u\) solves the first order master equation: \[\partial_{t}u(t,x,\mu +b\left(x\right)\cdot\partial_{x}u\left(t,x,\mu\right)\] \[+\frac{1}{2}\operatorname{Tr}\left[\left(\sigma\sigma^{t}\right) \left(x\right)\partial_{xx}^{2}u\left(t,x,\mu\right)\right]\] \[+\frac{1}{2}\int_{\mathbb{R}^{d}}\operatorname{Tr}\left[\left( \sigma\sigma^{t}\right)\left(v\right)\partial_{v}\partial_{\mu}u\left(t,x, \mu\right)\left(v\right)\right]d\mu\left(v\right)\] \[+L\left(t,x,\mu,\hat{\alpha}\left(t,x,\mu,\partial_{x}u\left(t,x,\mu\right)\right)\right)=0. \tag{12}\] The rest of the paper is organized as follows. In Section 2, we present the results for our MFG system. In Section 3, we review the necessary theory for our analysis, in particular, Malliavin calculus and differentiability of functionals defined on \(\mathcal{P}_{2}(\mathbb{R}^{d})\). In Section 4, we recall the basic theory of FBSDE with quadratic growth and show some preliminary results for the case of McKean-Vlasov FBSDE. In Section 5, we give the proof of the main theorem. ## 2. Statement of results **Assumption 0**.: _We assume that_ 1. _The vector fields_ \(\sigma_{1},\ldots,\sigma_{m}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) _are_ \(C_{b}^{2,\alpha}\)_,_ \(\alpha>0\)_._ 2. _The vector fields_ \(\sigma_{1},\ldots,\sigma_{m}\) _are smooth and have bounded derivatives of all orders._ 3. \(g\in C_{b}^{1,1}(\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}))\)_, see subsection_ 3.2_._ 4. _The functions_ \(g\) _and_ \(L(\cdot,a,\cdot)\) _for all_ \(a\in\mathbb{R}^{m}\)_, satisfy the Lasry-Lions monotonicity condition._ **Assumption 1**.: \(L:\mathbb{R}^{d}\times\mathbb{R}^{m}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to \mathbb{R}\) _is such that_ \(L(\cdot,\cdot,\mu)\) _belongs to_ \(C^{3}(\mathbb{R}^{d}\times\mathbb{R}^{m})\) _for all_ \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) _and_ \(L(x,a,\cdot),\nabla L(x,a,\cdot),D_{aa}^{2}L(x,a,\cdot)\) _are differentiable for all_ \(x\in\mathbb{R}^{d}\)_,_ \(a\in\mathbb{R}^{m}\)_, see subsection_ 3.2_. Moreover, there exist constants_ \(C,\gamma>0\) _such that for all_ \(x\in\mathbb{R}^{d}\)_,_ \(a,\xi\in\mathbb{R}^{m}\)_,_ \(\mu\in\mathcal{P}(\mathbb{R}^{d})\) _we have_ 1. \(|L(x,0,\mu)|\leq C\)_,_ \(|\nabla_{x}L(x,0,\mu)|\leq C\)_,_ \(|\nabla_{a}L(x,0,\mu)|\leq C\)_._ 2. \(\xi^{T}D_{aa}^{2}L(x,a,\mu)\xi\geq\gamma\left|\xi\right|^{2}\)_,_ 3. \(\left|D_{aa}^{2}L(x,a,\mu)\right|\leq C\)_,_ 4. \(\left|D_{ax}^{2}L(x,a,\mu)\right|\leq C(1+\left|a\right|)\)_,_ 5. \(\left|D_{xx}^{2}L(x,a,\mu)\right|\leq C(1+\left|a\right|^{2})\)_._ 6. 
\(\left|\partial_{\mu}(\nabla_{a}L)(x,a,\mu)\right|\leq C(1+\left|a\right|)\)_,_ 7. \(\left|\partial_{\mu}(\nabla_{x}L)(x,a,\mu)\right|\leq C(1+\left|a\right|^{2})\)__ Assumption 1 implies that \(L\) has the following properties 1. \(\frac{\gamma}{2}\left|a\right|^{2}-C\leq L(x,a,\mu)\leq C(1+\left|a\right|^{2})\), 2. \(\left|\nabla_{x}L(x,a,\mu)\right|\leq C(1+\left|a\right|^{2})\), 3. \(\gamma\left|a\right|^{2}\leq a^{T}\nabla_{a}L(x,a,\mu)+C\left|a\right|\), 4. \(\gamma\left|a\right|\leq\left|\nabla_{a}L(x,a,\mu)\right|+C\leq C(1+\left|a \right|)\). **Assumption 2**.: _There exist constants \(C,\eta>0\) such that for all \(x,x^{\prime}\in\mathbb{R}^{d}\)\(a,a^{\prime},\xi\in\mathbb{R}^{m}\), \(\mu,\mu^{\prime}\in\mathcal{P}(\mathbb{R}^{d})\) the function \(L:\mathbb{R}^{d}\times\mathbb{R}^{m}\times\mathcal{P}(\mathbb{R}^{d})\to \mathbb{R}\) satisfies_ \[-C\left|\xi\right|^{2}\leq D_{aaa}^{3}L((D_{aa}^{2}L)^{-1}\xi, (D_{aa}^{2}L)^{-1}\xi, (D_{aa}^{2}L)^{-1}D_{a}L)\] \[\leq\xi^{T}(D_{aa}^{2}L)^{-1}\xi-\eta\left|\xi\right|^{2}, \tag{2}\] \[|D_{xaa}^{3}L(x,a,\mu)| \leq C,\] (3) \[|\partial_{\mu}(D_{aa}^{2}L)(x,a,\mu,v)| \leq C,\] (4) \[|\partial_{\mu}L(x^{\prime},a^{\prime},\mu^{\prime},v^{\prime})- \partial_{\mu}L(x,a,\mu,v)|\] \[\leq C(1+\left|a\right|+\left|a^{\prime}\right|)\{(1+\left|a \right|+\left|a^{\prime}\right|)(\left|x^{\prime}-x\right|+W_{2}(\mu^{\prime}, \mu)+\left|v^{\prime}-v\right|))+\left|a^{\prime}-a\right|\},\] (5) \[|D_{xa}^{2}L(x^{\prime},a^{\prime},\mu^{\prime})-D_{xa}^{2}L(x,a, \mu)|\] \[\leq C\{(1+\left|a\right|+\left|a^{\prime}\right|)(\left|x^{\prime }-x\right|+W_{2}(\mu^{\prime},\mu)+\left|a^{\prime}-a\right|\},\] (6) \[|\partial_{\mu}(\nabla_{a}L)(x^{\prime},a^{\prime},\mu^{\prime},v ^{\prime})-\partial_{\mu}(\nabla_{a}L)(x,a,\mu,v)|\] \[\leq C\{(1+\left|a\right|+\left|a^{\prime}\right|)(\left|x^{ \prime}-x\right|+W_{2}(\mu^{\prime},\mu)+\left|v^{\prime}-v\right|)+\left|a^{ \prime}-a\right|\}. \tag{1}\] _Remark 1_.: Assumption 2 specifies the quadratic behaviour of \(L\). One can verify that for \(f\in C^{1,1}_{b}(\mathbb{R}^{d}\times\mathcal{P}(\mathbb{R}^{d}))\) with \(\inf f>0\), the function \(L(x,a,\mu)=f(x,\mu)|a|^{2}\) satisfies assumptions 1 and 2. To characterize the regularity of the involved stochastic processes, we use the spaces \(\mathbb{D}^{\infty},\mathbb{L}^{1,2}\), which will be defined in subsection 3.1. We will use the Hormander condition to obtain regularity of the density of \(X_{s}\) at any \(s\in(t,T]\). **Theorem 1**.: _Under Assumptions 0, 1 and 2, there exists a unique solution \((X,Y,Z)\) of (11) such that \((\mathcal{L}(X_{s}))_{t\leq s\leq T}\) is the unique equilibrium of the MFG associated with the stochastic optimal control problem (9)-(10). Moreover_ 1. _For_ \(x\in\mathbb{R}^{d}\)_, let_ \((X^{x},Y^{x},Z^{x})\) _be the solution of (_11_). Then, there exists a function_ \(\Omega\times[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{m}\)_,_ \((\omega,t,x)\mapsto(X^{x}_{t},Y^{x}_{t},Z^{x}_{t})(\omega)\) _, such that for almost every_ \(\omega\)_, the mappings_ \((t,x)\mapsto X^{x}_{t}\) _and_ \((t,x)\mapsto Y^{x}_{t}\) _are continuous in_ \(t\) _and continuously differentiable in_ \(x\)_._ 2. 
_There exists a function_ \(u:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\)_, such that_ \(x\mapsto u(t,x,\mu)\) _is continuously differentiable for almost all_ \(t\in[0,T]\)_, and_ \(Y^{t,x}_{s}=u(s,X^{t,x}_{s},\mathcal{L}(X^{t,x}_{s}))\) _and_ \(Z^{t,x}_{s}=(\nabla_{x}u)(s,X^{t,x}_{s},\mathcal{L}(X^{t,x}_{s}))\sigma(X^{t,x} _{s})\)_._ 3. _Under Assumption 0(i') we have that_ \(X^{i}(s)\in\mathbb{D}^{\infty}\) _for all_ \(s\in[t,T]\) _and_ \(i=1,\ldots,d\)_._ 4. _Under Assumption 0(i') and with_ \(\sigma_{1},\ldots,\sigma_{m}\) _satisfying the Hormander condition, we have that for any_ \(s\in(t,T]\)_, the random vector_ \(X_{s}\) _has an infinitely differentiable density._ 5. _For any_ \(t\in[0,T]\) _and_ \(x\in\mathbb{R}^{d}\)_,_ \((Y^{x},Z^{x})\in\mathbb{L}^{1,2}\times(\mathbb{L}^{1,2})^{m}\)_. Furthermore,_ \(\{D_{t}Y^{x}_{t};t\in[0,T]\}\) _is a version of_ \(\{Z^{x}_{t};t\in[0,T]\}\)_._ 6. _There exists a continuous version of_ \((s,t)\mapsto D_{s}Y_{t}\) _in_ \(\{(s,t):0\leq s\leq t\leq T\}\)_. In particular, there exists a continuous version of_ \(t\mapsto Z_{t}\) _for_ \(t\in[0,T]\)_._ ## 3. Review This section presents the basic ideas and notation used throughout this paper. ### Malliavin calculus Let \(H\) be a real separable Hilbert space with scalar product and norm denoted by \(\langle\cdot,\cdot\rangle_{H}\) and \(\|\cdot\|_{H}\), respectively. Let \(W=\{W(h),h\in H\}\) be an isonormal Gaussian process defined on a complete probability space \((\Omega,\mathcal{F},P)\). Later on, we will focus on the case where \((\Omega,\mathcal{F},P)\) is the canonical probability space associated with an \(m-\)dimensional Brownian motion, \[\{W^{i}(t),t\in[0,T]\}\ i=1,\ldots,m,\ H=L^{2}([0,T];\mathbb{R}^{m}),\ \text{and}\ W(h)=\sum_{i=1}^{m}\int_{0}^{T}h^{i}_{t}dW^{i}_{t}\] (Wiener integral), but we start with the abstract setting to avoid cluttered notation. We want to differentiate a square integrable random variable, \(F:\Omega\to\mathbb{R}\), with respect to the chance parameter \(\omega\in\Omega\). For this, let \(\mathcal{S}\) be the class of smooth random variables of the form \[F=f(W(h_{1}),\ldots,W(h_{m})), \tag{13}\] where \(f\in C^{\infty}_{p}(\mathbb{R}^{m})\) (the set of infinitely continuously differentiable functions \(f:\mathbb{R}^{m}\to\mathbb{R}\) such that \(f\) and all of its partial derivatives have polynomial growth), \(h_{1},\ldots,h_{m}\in H\), and \(m\geq 1\). The (Malliavin) derivative of a smooth random variable, \(F\), of the form (13) is the \(H-\)valued random variable \[DF=\sum_{i=1}^{m}\partial_{i}f(W(h_{1}),\ldots,W(h_{m}))h_{i}.\] The operator \(D\) is closable from \(L^{p}(\Omega)\) to \(L^{p}(\Omega;H)\) for any \(p\geq 1\) (see [14, Proposition 1.2.1]). We denote the domain of \(D\) in \(L^{p}(\Omega)\) by \(\mathbb{D}^{1,p}\), i.e., \(\mathbb{D}^{1,p}\) is the closure of the class \(\mathcal{S}\) with respect to the norm \[\left\|F\right\|_{1,p}=\left[E(\left|F\right|^{p})+E(\left\|DF\right\|_{H}^{p })\right]^{\frac{1}{p}}.\] Moreover, the iterated derivative \(D^{k}F\) is a random variable with values in \(H^{\otimes k}\). For \(1\leq k\in\mathbb{N}\), \(p\geq 1\), and \(F\in\mathcal{S}\), let \[\left\|F\right\|_{k,p}=\left[E(\left|F\right|^{p})+\sum_{j=1}^{k}E(\left\|D^{ j}F\right\|_{H^{\otimes j}}^{p})\right]^{\frac{1}{p}}.\] \(\mathbb{D}^{k,p}\) denotes the completion of \(\mathcal{S}\) with respect to the norm \(\left\|\cdot\right\|_{k,p}\). 
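As a concrete illustration of these definitions (an example added here for orientation; it is not used in the sequel), take \(m=1\), \(h\in H\), and the smooth random variable \(F=W(h)^{2}\in\mathcal{S}\). Then
\[DF=2W(h)\,h,\qquad D^{2}F=2\,h\otimes h,\]
and, since \(W(h)\) is centered Gaussian with variance \(\|h\|_{H}^{2}\),
\[\left\|F\right\|_{2,2}=\left[E(|F|^{2})+E(\|DF\|_{H}^{2})+E(\|D^{2}F\|_{H^{\otimes 2}}^{2})\right]^{\frac{1}{2}}=\left[3\|h\|_{H}^{4}+4\|h\|_{H}^{4}+4\|h\|_{H}^{4}\right]^{\frac{1}{2}}=\sqrt{11}\,\|h\|_{H}^{2}<\infty,\]
so \(F\in\mathbb{D}^{2,2}\) (in fact \(F\in\mathbb{D}^{\infty}\), defined below). In the Brownian case \(H=L^{2}([0,T])\) with \(h=\mathbf{1}_{[0,s]}\), this reads \(W(h)=W_{s}\) and \(D_{t}W_{s}=\mathbf{1}_{[0,s]}(t)\).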
In addition, let \[\mathbb{D}^{k,\infty}=\bigcap_{p>1}\mathbb{D}^{k,p}\text{ and }\mathbb{D}^{ \infty}=\bigcap_{k>1}\bigcap_{p>1}\mathbb{D}^{k,p}.\] The previous definitions can be extended to Hilbert-valued random variables. Note that if \(H=L^{2}(\left[0,T\right];\mathbb{R}^{m})\), then \(L^{2}(\Omega;H)\simeq L^{2}(\left[0,T\right]\times\Omega;\mathbb{R}^{m})\) and hence the derivative of \(F\in\mathbb{D}^{1,2}\) is a square integrable \(\mathbb{R}^{m}-\)valued process. The adjoint of the operator \(D\) is the divergence operator; we denote it by \(\delta\). In particular, \(\delta\) is an unbounded operator on \(L^{2}(\Omega;H)\) with values in \(L^{2}(\Omega)\) such that for all \(u\in\text{Dom }\delta\subseteq L^{2}(\Omega;H)\) we have \(\left|E(\left\langle DF,u\right\rangle_{H})\right|\leq C\left\|F\right\|_{2}\), for every \(F\in\mathbb{D}^{1,2}\), where \(C\) depends on \(u\). Furthermore, \(E(F\delta(u))=E(\left\langle DF,u\right\rangle_{H})\). The space \(\mathbb{D}^{1,2}(H)\) is included in the domain of \(\delta\) (see [14, Proposition 1.3.1]. If \(H=L^{2}(\left[0,T\right])\), then \(\text{Dom }\delta\subset L^{2}(\left[0,T\right]\times\Omega)\); in this case, \(\delta(u)\) is called the Skorohod stochastic integral of the process \(u\). We denote by \(\mathbb{L}^{1,2}\) the space \(\mathbb{D}^{1,2}(L^{2}(\left[0,T\right]))\). \(\mathbb{L}^{1,2}\) coincides with the class of processes \(u\in L^{2}(\left[0,T\right]\times\Omega)\) such that \(u(t)\in\mathbb{D}^{1,2}\) for almost all \(t\), and there exists a measurable version of the two parameter process \(D_{s}u_{t}\) such that \(\mathbb{E}\int_{0}^{T}\int_{0}^{t}(D_{s}u_{t})^{2}dsdt<\infty\). By [14, Proposition 1.3.1], \(\mathbb{L}^{1,2}\subset\text{Dom }\delta\). On the other hand, \(\mathbb{L}^{1,2}\) is a Hilbert space with the norm \[\left\|u\right\|_{\mathbb{L}^{1,2}}^{2}=\left\|u\right\|_{L^{2}(\left[0,T \right]\times\Omega)}^{2}+\left\|Du\right\|_{L^{2}(\left[0,T\right]^{2}\times \Omega)}^{2}.\] If, in addition, we consider an \(m-\)dimensional Brownian motion, then the class of square integrable adapted processes (with respect to the filtration generated by the Brownian motion) belongs to the domain of \(\delta\). Furthermore, \(\delta\) restricted to such class coincides with the Ito integral (see [14, Proposition 1.3.11]). Later, we will use a generalization of the space \(\mathbb{L}^{1,2}\). To define it, consider \(H=L^{2}(\left[0,T\right];\mathbb{R}^{m})\) along with an \(m-\)dimensional Brownian motion. Let \(\mathcal{H}^{p}(\mathbb{R}^{d})\) be the space of progressively measurable processes \((X_{t})_{t\in\left[0,T\right]}\) with values in \(\mathbb{R}^{d}\) normed by \[\left\|X\right\|_{\mathcal{H}^{p}}=\mathbb{E}\left[\left(\int_{0}^{T}\left|X_{ s}\right|^{2}ds\right)^{p/2}\right]^{\frac{1}{p}}.\] For \(1\leq k\in\mathbb{N}\), \(p\geq 1\), let \(\mathbb{L}^{k,p}(\mathbb{R}^{d})\) be the class of \(\mathbb{R}^{d}-\)valued progressively measurable processes \(u=(u^{1},\ldots,u^{d})\) on \([0,T]\times\Omega\) such that 1. \(u(t,\cdot)\in(\mathbb{D}^{k,p})^{d}\) for almost all \(t\in[0,T]\); 2. \(t,\omega\to D^{k}_{s}u(t,\omega)\in(L^{2}([0,T]^{k+1}))^{m\times d}\), \(s=(s_{1},\ldots,s_{k})\in[0,T]^{k}\), admits a progressively measurable version; 3. \(\left\|u\right\|_{\mathbb{L}^{k,p}}=\left[\left\|u\right\|\right\|_{\mathcal{H }^{p}(\mathbb{R}^{d})}^{p}+\sum_{i=1}^{k}\left\|\left|D^{i}u\right|\right\|_{( \mathcal{H}^{p}(\mathbb{R}^{d}))^{i+1}}^{p}\right]^{1/p}<\infty\). 
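For orientation, here is a simple example (added for illustration, with \(m=1\)) of the spaces just introduced. The adapted process \(u_{t}=W_{t}\), \(t\in[0,T]\), satisfies \(W_{t}\in\mathbb{D}^{1,2}\) with \(D_{s}W_{t}=\mathbf{1}_{[0,t]}(s)\), and
\[\mathbb{E}\int_{0}^{T}\!\!\int_{0}^{t}(D_{s}u_{t})^{2}\,ds\,dt=\int_{0}^{T}t\,dt=\frac{T^{2}}{2}<\infty,\]
so \(u\in\mathbb{L}^{1,2}\) with \(\left\|u\right\|_{\mathbb{L}^{1,2}}^{2}=\int_{0}^{T}\mathbb{E}[W_{t}^{2}]\,dt+\frac{T^{2}}{2}=T^{2}\). Since \(u\) is adapted, its Skorohod integral coincides with the Ito integral,
\[\delta(u)=\int_{0}^{T}W_{t}\,dW_{t}=\frac{1}{2}\left(W_{T}^{2}-T\right).\]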
Finally, let \(\mathcal{S}^{p}(\mathbb{R}^{d})\) be the space of all measurable processes \((X_{t})_{t\in[0,T]}\) with values in \(\mathbb{R}^{d}\) normed by \[\left\|X\right\|_{\mathcal{S}^{p}}=\mathbb{E}\left[\left(\sup_{t\in[0,T]}|X_{ t}|\right)^{p}\right]^{\frac{1}{p}}.\] ### Differentiability of Functions of Probability Measures Consider a probability space \((\Omega,\mathcal{F},P)\) such that for every \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) there is a random variable \(\xi\in L^{2}(\Omega,\mathcal{F},P;\mathbb{R}^{d})\) with \(\mathcal{L}(\xi)=\mu\). A function \(f:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) is differentiable at \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) (in the sense of Lions) if, for \(\tilde{f}(\xi):=f(\mathcal{L}(\xi))\), \(\xi\in L^{2}(\mathcal{F};\mathbb{R}^{d})\), there is some \(\xi_{0}\in L^{2}(\mathcal{F};\mathbb{R}^{d})\) with \(\mathcal{L}(\xi_{0})=\mu\), such that the function \(\tilde{f}:L^{2}(\mathcal{F},\mathbb{R}^{d})\to\mathbb{R}\) is (Frechet) differentiable at \(\xi_{0}\), that is, there exists a linear continuous mapping, \(D\tilde{f}(\xi_{0}):L^{2}(\mathcal{F};\mathbb{R}^{d})\to\mathbb{R}\) such that \[\tilde{f}(\xi_{0}+\eta)-\tilde{f}(\xi_{0})=D\tilde{f}(\xi_{0})(\eta)+o(\|\eta \|_{L^{2}}),\] with \(\|\eta\|_{L^{2}}\to 0\) for \(\eta\in L^{2}(\mathcal{F};\mathbb{R}^{d})\). By the Riesz representation theorem, there is a (\(P-\)a.s.) unique random variable, \(\Theta_{0}\in L^{2}(\mathcal{F};\mathbb{R}^{d})\), such that \[D\tilde{f}(\xi_{0})(\eta)=\langle\Theta_{0},\eta\rangle_{L^{2}}=E[\Theta_{0} \cdot\eta],\] for all \(\eta\in L^{2}(\mathcal{F};\mathbb{R}^{d})\). Moreover, (see [4]) there is a Borel function \(l_{0}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that \(\Theta_{0}=l_{0}(\xi_{0})\), \(P-\)a.s. Therefore, we can write \[f(\mathcal{L}(\xi))-f(\mathcal{L}(\xi_{0}))=E[l_{0}(\xi_{0})\cdot(\xi-\xi_{0}) ]+o(\|\xi-\xi_{0}\|_{L^{2}}),\] \(\xi\in L^{2}(\mathcal{F};\mathbb{R}^{d})\). We define \[\partial_{\mu}f(\mathcal{L}(\xi_{0}),y):=l_{0}(y),\ y\in\mathbb{R}^{d},\] the derivative of \(f:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) at \(\mathcal{L}(\xi_{0})\). \(\partial_{\mu}f(\mathcal{L}(\xi_{0}),y)\) is \(\mathcal{L}(\xi_{0})(dy)-\) a.e. uniquely determined. Since we have to consider functions \(f:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) which are differentiable over the whole space \(\mathcal{P}_{2}(\mathbb{R}^{d})\), we will assume that \(\tilde{f}:L^{2}(\mathcal{F};\mathbb{R}^{d})\to\mathbb{R}\) is Frechet differentiable in all of \(L^{2}(\mathcal{F};\mathbb{R}^{d})\). In this case, \(\partial_{\mu}f(\mathcal{L}(\xi),y)\) is defined \(\mathcal{L}(\xi)(dy)-\)a.e. for all \(\xi\in L^{2}(\mathcal{F};\mathbb{R}^{d})\). Furthermore, Lemma 3.3[5] shows that if the Frechet derivative \(D\tilde{f}\) is Lipschitz continuous (Lipschitz constant \(K\)), then there exists for every \(\xi\in L^{2}(\mathcal{F};\mathbb{R}^{d})\) an \(\mathcal{L}(\xi)-\)version of \(\partial_{\mu}f(\mathcal{L}(\xi),\cdot):\mathbb{R}^{d}\to\mathbb{R}^{d}\) with \[|\partial_{\mu}f(\mathcal{L}(\xi),y)-\partial_{\mu}f(\mathcal{L}(\xi),y^{ \prime})|\leq K|y-y^{\prime}|\ \text{for all}\ y,y^{\prime}\in\mathbb{R}^{d}.\] In [3], these results motivate the following **Definition 2**.: 1. 
\(f\in C_{b}^{1,1}(\mathcal{P}_{2}(\mathbb{R}^{d}))\) if for all \(\xi\in L^{2}(\mathcal{F};\mathbb{R}^{d})\) there exists an \(\mathcal{L}(\xi)-\)modification of \(\partial_{\mu}f(\mathcal{L}(\xi),\cdot)\), again denoted by \(\partial_{\mu}f(\mathcal{L}(\xi),\cdot)\), such that \(\partial_{\mu}f:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to \mathbb{R}^{d}\) is bounded and Lipschitz continuous, that is, for \(v,v^{\prime}\in\mathbb{R}^{d}\), and \(\mu,\mu^{\prime}\in\mathcal{P}(\mathbb{R}^{d})\) we have \[|\partial_{\mu}f(\mu,v)| \leq C\] \[|\partial_{\mu}f(\mu,v)-\partial_{\mu}(\mu^{\prime},v^{\prime})| \leq C(W_{2}(\mu,\mu^{\prime})+|v-v^{\prime}|)\] where \(C\) is a constant. We consider this function \(\partial_{\mu}f\) as the derivative of \(f\). 2. Let \(g:\mathbb{R}^{d}\times\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}\) be such that for any \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), \(g(\cdot,\mu)\) is differentiable and for any \(x\in\mathbb{R}^{d}\), \(g(x,\cdot)\) is differentiable. We say that \(g\in C_{b}^{1}(\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}))\) if \(\nabla_{x}g\), \(\partial_{\mu}g\) are continuous and bounded. We say that \(g\in C_{b}^{1,1}(\mathbb{R}^{2}\times\mathcal{P}_{2}(\mathbb{R}^{d}))\) if for \(x,x^{\prime},v,v^{\prime}\in\mathbb{R}^{d}\), \(\mu,\mu^{\prime}\in\mathcal{P}(\mathbb{R}^{d})\) we have \[|\nabla_{x}g(x,\mu)-\nabla_{x}g(x^{\prime},\mu^{\prime})| \leq M(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime}))\] \[|\partial_{\mu}g(x,\mu,v)-\partial_{\mu}g(x^{\prime},\mu^{ \prime},v^{\prime})| \leq M(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime})+|v-v^{\prime}|)\] ## 4. Preliminary results Our main goal is to study the McKean-Vlasov FBSDE (11). We will consider the following FBSDE: \[X_{t}^{x}= x+\int_{0}^{t}b(X_{s}^{x})dr+\int_{0}^{t}\sigma(X_{s}^{x})dW_{s}, \tag{15}\] \[Y_{t}^{x}= g(X_{T}^{x})-\int_{t}^{T}Z_{s}^{x}\cdot dW_{s}+\int_{t}^{T}F(s,X_ {s}^{x},Z_{s}^{x},\mathcal{L}(X_{s}^{x}))ds, \tag{14}\] where \(b\), \(\sigma\), \(g\), and \(W\) are as in the previous sections and \(F:[t,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}\) satisfies the assumption 3 below. ### Basic results on SDE We have the following results for the SDE (14); see Theorem 3.1 in [13], Section 2.2 in [14], and Section 7.5 in [15] for details. **Theorem 2** (Existence and moment estimates).: _Let Assumption 0(i) hold, then (14) has a unique solution and for any \(p\geq 2\), \(x\in\mathbb{R}^{d}\) (initial condition), \(s,t\in[0,T]\), we have_ \[\mathbb{E}[\sup_{t\in[0,T]}|X_{t}|^{p}]\leq C(1+|x|^{p}), \tag{16}\] \[\mathbb{E}[\sup_{u\in[s,t]}|X_{u}-X_{s}|^{p}]\leq C(1+|x|^{p})|t-s|^{p/2}. \tag{17}\] _Moreover, given two initial conditions \(x,x^{\prime}\in\mathbb{R}^{d}\) with \(X^{x}\) and \(X^{x^{\prime}}\) denoting the respective solutions of (14), we have_ \[\mathbb{E}[\sup_{t\in[0,T]}|X_{t}^{x}-X_{t}^{x^{\prime}}|^{p}]\leq C|x-x^{\prime }|^{p}. \tag{18}\] **Theorem 3** (Classical differentiability).: _Let Assumption 0(i) hold, then the solution process X of (14) as a function of the initial condition \(x\) is continuously differentiable. Let \(\nabla_{x}X_{t}\) be the Jacobian matrix \(\frac{\partial X}{\partial x}(t,x)\), then_ \[\nabla_{x}X_{t}=I_{d}+\int_{0}^{t}(\nabla_{x}\sigma(X_{s})\cdot\nabla_{x}X_{s} )\cdot dW_{s}+\int_{0}^{t}\nabla_{x}b(X_{s})\nabla_{x}X_{s}ds,\] _Where "\(\cdot\)" stands for tensor inner product, i.e., tensor product and contraction. 
Furthermore, for any \(p\geq 2\)_ \[\sup_{x\in\mathbb{R}^{d}}\|\nabla_{x}X^{x}\|_{\mathcal{S}^{p}}\leq C _{p}, \tag{20}\] \[\mathbb{E}[\sup_{s\leq u\leq t}\|\nabla_{x}X^{x}_{u}-\nabla_{x}X^ {x}_{s}|^{p}]\leq C_{p}|t-s|^{p/2}, \tag{19}\] _and given \(x,x^{\prime}\in\mathbb{R}^{d}\) we have_ \[\mathbb{E}[\sup_{0\leq t\leq T}|\nabla_{x}X^{x}_{t}-\nabla_{x}X^{x^{\prime}}_{ t}|^{p}]\leq C_{p}|x-x^{\prime}|^{p/2}. \tag{21}\] _Moreover, \(\nabla_{x}X_{t}\) as an \(d\times d\) matrix is invertible for any \(t\in[0,T]\) and its inverse \((\nabla_{x}X_{t})^{-1}\) satisfies an SDE._ **Theorem 4** (Malliavin differentiability).: _Let Assumption 0(i) hold and let \(X\) be the solution of the stochastic differential equation (14). Then \(X^{i}(t)\in\mathbb{D}^{1,\infty}\) for all \(t\in[0,T]\) and \(i=1,\ldots,d\). Moreover,_ \[\sup_{0\leq r\leq t}\mathbb{E}\left(\sup_{r\leq s\leq T}\left|D^{j}_{r}X^{i} \left(s\right)\right|^{p}\right)<\infty. \tag{22}\] _The Malliavin derivative admits a version \((u,t)\mapsto D_{u}X_{t}\) which satisfies an SDE. By Theorem 3 we have the representation_ \[D_{u}X_{t}=\nabla_{x}X_{t}\left(\nabla_{x}X_{u}\right)^{-1}\sigma\left(X_{u} \right)1_{[0,u]}\left(t\right)\text{ for all }u,t\in[0,T]\,; \tag{23}\] _see eq. (2.59) in [14]._ **Theorem 5** (Smooth density).: _Let Assumption 0(i') hold and let \(\sigma_{1},\ldots,\sigma_{m}\) satisfy the Hormander condition. Let \(\{X(t),t\in[0,T]\}\) be the solution of (14). Then, for any \(t\in(0,T]\), the random vector \(X(t)\) has an infinitely differentiable density._ ### Results on Lipschitz BSDE We mention a relevant result for our computations concerning moment estimates for BSDE with drivers that satisfy Lipschitz conditions with random Lipschitz constant. For more details, particularly the BMO (Bounded Mean Oscillation) property, see Section 1.2.5 in [7]. For a process \((H_{t})_{t\in[0,T]}\), we will use the notation \[H\ast W=\int_{0}^{\cdot}H_{s}dW_{s}.\] Let \(F\) be a measurable function and let \(\zeta\) be a random variable. Consider the BSDE \[U_{t}=\zeta-\int_{t}^{t}V_{s}dW_{s}+\int_{t}^{t}F\left(\cdot,s,U_{s},V_{s} \right)ds,\quad t\in[0,T]\,. \tag{24}\] For \(p\geq 1\), assume **(A1):**: \(\zeta\) is an \(\mathcal{F}_{t}\)-adapted random variable and \(\zeta\in L^{2p}(\mathbb{R})\). **(A2):**: \(F:\Omega\times[0,T]\times\mathbb{R}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is measurable and there exists a positive constant, \(M\), and a positive predictable process, \((H_{t})_{t\in[0,T]}\), such that for all \(t\in[0,T]\), \(u,u^{\prime}\in\mathbb{R}\), and \(v,v^{\prime}\in\mathbb{R}^{m}\), we have \[|F(\cdot,t,u,v)-F(\cdot,t,u^{\prime},v^{\prime})|\leq M|u-u^{\prime}|+H_{t}|v -v^{\prime}|. \tag{25}\] Furthermore, \(H\ast W\) is a BMO martingale. **(A3):**: \((F\left(\cdot,t,0,0\right))_{t\in[0,T]}\) is a measurable \(\mathcal{F}_{t}\)-adapted process such that for every \(p\geq 1\) we have \(\mathbb{E}\left[\left(\int_{0}^{t}|F(\cdot,s,0,0)|ds\right)^{p}\right]<\infty\). We denote by \(\mathcal{E}(Z)\) the stochastic exponential of a process \(Z\). We have the following result (Lemma 2.1.1 in [7]): **Lemma 1**.: _Assume (A1)-(A3). Let \(p\geq 1\) and \(\bar{r}>1\) be such that \(\mathcal{E}(H*W)\in L^{\bar{r}}\left(\mathbb{P}\right)\). Assume that the pair \((U,V)\) is a square integrable solution of (24). 
Then, there exists a positive constant, C, depending only on \(p,\,T,\,M\), and the BMO norm of \(H*W\), such that, with the conjugate exponent \(\bar{q}\) of \(\bar{r}\), we have_ \[\|U\|_{\mathcal{S}^{2p}}^{2p}+\|V\|_{\mathcal{H}^{2p}}^{2p}\leq C\mathbb{E} \left[|\zeta|^{2p\bar{q}^{2}}+\left(\int_{0}^{t}|F(\cdot,s,0,0)|\ ds\right)^{2p \bar{q}^{2}}\right]^{1/q^{2}}.\] ### Results on FBSDE Basic results can be found in [1, 2, 7, 8, 11]. As in [7], we consider the parameterized BSDE \[Y_{t}^{x}=\xi(x)-\int_{t}^{T}Z_{s}^{x}\cdot dW_{s}+\int_{t}^{T}f(s,x,Z_{s}^{x} )\ ds \tag{26}\] under the condition **(C1):**: Let \(f:\Omega\times[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}\) be an adapted measurable function, differentiable in the spatial variables with continuous partial derivatives. There exists a positive constant \(M\) and a positive process \((K_{t}(x))_{t\in[0,T]}\) depending on \(x\in\mathbb{R}^{d}\) such that for all \((t,x,z)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\) \[|f(t,x,z)| \leq M(1+|z|^{2})\quad\text{a.s.},\] \[|\nabla_{x}f(t,x,z)| \leq K_{t}(x)(1+|y|+|z|^{2})\quad\text{a.s.},\] \[|\nabla_{z}f(t,x,z)| \leq M(1+|z|)\quad\text{a.s.}\] \[\sup_{x\in\mathbb{R}^{d}}\|K(x)\|_{\mathcal{S}^{2p}} <\infty\quad\text{for any $p\geq 1$.}\] For any \(x\in\mathbb{R}^{d}\) the random variable \(\xi(x)\) is \(\mathcal{F}_{T}\)-adapted and \(\sup_{x\in\mathbb{R}^{d}}\|\xi(x)\|_{L^{\infty}}<\infty\); for all \(p\geq 1\) the map \(\mathbb{R}^{d}\mapsto L^{2p}\), \(x\mapsto\xi(x)\) is differentiable, its derivative \(\nabla_{x}\xi\) belongs to \(L^{2p}(\mathbb{R}^{d})\), is continuous and \(\sup_{x\in\mathbb{R}^{d}}\|\nabla_{x}\xi(x)\|_{L^{2p}}<\infty\) **Theorem 6** (Differentiability).: _[_7_]_ _Assume (C1) holds. Then for all \(x\in\mathbb{R}^{d}\) and \(p\geq 1\) (26) has a unique solution \((Y^{x},Z^{x})\in\mathcal{S}^{p}(\mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^ {m})\). Moreover the map \(\mathbb{R}^{d}\to\mathcal{S}^{2p}(\mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R} ^{m})\), \(x\mapsto(Y^{x},Z^{x})\) is differentiable and the derivative is a solution of_ \[\nabla Y_{t}^{x}=\nabla\xi(x)-\int_{t}^{T}\nabla Z_{s}^{x}\cdot dW_{s}+\int_{t }^{T}(\nabla_{x}f(s,x,Z_{s}^{x})+\nabla_{z}f(s,x,Z_{s}^{x})\nabla Z_{s}^{x})\ ds. \tag{27}\] The following hypotheses fall in the setting proposed by Kobylanski for qgFBSDE [12]: **Assumption 3**.: \(\quad\)__ **(HY0):**: \(\quad g:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) _is measurable continuous and uniformly bounded by a positive constant_ \(K\)_;_ \(F:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\times\mathcal{P}_{2}(\mathbb{R} ^{d})\to\mathbb{R}\) _is measurable_ _and continuous in the spatial variables. There exists a positive constant, \(M\), such that for all \(t\in[0,T]\), \(x,x^{\prime}\in\mathbb{R}^{d}\), and \(z,z,\mu^{\prime}\in\mathbb{R}^{m}\), we have_ \[|F(t,x,z,\mu)|\leq M(1+|z|^{2}),\] \[|F(t,x,z,\mu)-F(t,x^{\prime},z,\mu^{\prime})|\leq M(1+|z|^{2})(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime})),\] \[|F(t,x,z,\mu)-F(t,x,z^{\prime},\mu)|\leq M\{(1+|z|+|z^{\prime}|)|z-z^{\prime}|\}.\] **(HY1):**: _(HY0) holds,_ \(g\in C^{1}_{b}(\mathbb{R}^{m}\times\mathcal{P}_{2}(\mathbb{R}^{d}))\)_, for each_ \((t,\mu)\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\)_,_ \(F(t,\cdot,\cdot,\mu)\in C^{1}(\mathbb{R}^{d}\times\mathbb{R}^{m})\) _and for each_ \((t,x,z)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\)_,_ \(F(t,x,z,\cdot)\) _is differentiable. 
Furthermore, there exists a positive constant,_ \(M\)_, such that for all_ \((t,x,z,\mu,v)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\times\mathbb{R} ^{d}\) _we have_ \[|\nabla_{x}F(t,x,z,\mu)|\leq M(1+|z|^{2}),\] \[|\nabla_{z}F(t,x,z,\mu)|\leq M(1+|z|)\] \[|\partial_{\mu}F(t,x,z,\mu,v)|\leq M(1+|z|^{2})\] **(HY1\(+\)):**: _(HY1) holds,_ \(g\in C^{1,1}_{b}(\mathbb{R}^{d}\times\mathcal{P}(\mathbb{R}^{d}))\)_. For_ \(x,x^{\prime},v,v^{\prime}\in\mathbb{R}^{d}\)_,_ \(z,z^{\prime}\in\mathbb{R}^{m}\) _and_ \(\mu,\mu^{\prime}\in\mathcal{P}(\mathbb{R}^{d})\) _we have_ \[|\nabla_{z}F(t,x,z,\mu)-\nabla_{z}F(t,x^{\prime},z^{\prime},\mu^{\prime})|\leq M \{(1+|z|+|z^{\prime}|)(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime}))+|z-z^{\prime}|\} \tag{28}\] \[|\nabla_{x}F(t,x,z,\mu)-\nabla_{x}F(t,x^{\prime},z^{\prime},\mu^{\prime})|\] \[\leq M(1+|z|+|z^{\prime}<|)\{(1+|z|+|z^{\prime}|)(|x-x^{\prime}|+W_{2}(\mu, \mu^{\prime}))+|z-z^{\prime}|\}\] \[|\partial_{\mu}F(t,x,z,\mu,v)-\partial_{\mu}F(t,x^{\prime},z^{\prime},\mu^{ \prime},v^{\prime})|\] \[\leq M(1+|z|+|z^{\prime}|)\{(1+|z|+|z^{\prime}|)(|x-x^{\prime}|+W_{2}(\mu, \mu^{\prime})+|v-v^{\prime}|)+|z-z^{\prime}|\}.\] We denote \(\Theta^{x}=(X^{x},Z^{x})\), and consider the following linear FBSDE system: \[\nabla X^{x}_{t} =I_{d}+\int_{0}^{t}(\nabla_{x}\sigma(X_{s})\cdot\nabla_{x}X_{s}) \cdot dW_{s}+\int_{0}^{t}\nabla_{x}b(X_{s})\nabla_{x}X_{s}ds, \tag{30}\] \[\nabla Y^{x}_{t} =\nabla_{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T}))\nabla X^{x}_{T}+ \mathbb{E}^{\omega}[\partial_{\mu}g(X^{x}_{T},\mathcal{L}(X^{x}_{T}),X^{x}_{T} (\omega))\nabla X^{x}_{T}(\omega)]\] \[-\int_{t}^{T}\nabla Z^{x}_{s}dW_{s}+\int_{t}^{T}(\nabla_{x}F, \nabla_{z}F)(s,\Theta^{x}_{s},\mathcal{L}(X^{x}_{s}))\nabla\Theta^{x}_{s}ds\] \[+\int_{t}^{T}\mathbb{E}^{\omega}[\partial_{\mu}F(s,\Theta^{x}_{s },\mathcal{L}(X^{x}_{s}),X^{x}_{s}(\omega))\nabla X^{x}_{s}(\omega)]ds. \tag{29}\] **Theorem 7**.: _Let Assumptions 0(i), 3 hold. Then, for \(x\in\mathbb{R}^{d}\) (the initial condition) and \(p\geq 1\), (14)-(15) has a unique solution \((X^{x},Y^{x},Z^{x})\in\mathcal{S}^{2p}(\mathbb{R}^{d})\times\mathcal{S}^{2p}( \mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^{m})\), and the map \(\mathbb{R}^{d}\to\mathcal{S}^{2p}(\mathbb{R}^{d})\times\mathcal{S}^{2p}( \mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^{m})\), \(x\mapsto(X^{x},Y^{x},Z^{x})\) is differentiable and the derivative is a solution of (29)-(30)._ Proof.: We need to check that Assumption 3 implies condition (C1). We start with the statement about the terminal condition. Considering \(\xi(x)=g(X^{x}_{T},\mathcal{L}(X^{x}_{T}))\), we have that \(\xi(x)\) is \(\mathcal{F}_{T}\)-adapted and \[\nabla\xi(x)=\nabla_{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T}))\nabla X^{x}_{T}+ \mathbb{E}^{\omega}[\partial_{\mu}g(X^{x}_{T},\mathcal{L}(X^{x}_{T}),X^{x}_{T} (\omega))\nabla X^{x}_{T}(\omega)].\] Thus \(x\mapsto\nabla\xi(x)\) is continuous and \[|\nabla\xi(x)|\leq K(|\nabla X^{x}_{T}|+\mathbb{E}[|\nabla X^{x}_{T}|])\leq K (|\nabla X^{x}_{T}|+\|\nabla X^{x}\|_{\mathcal{S}^{p}})\] Using inequality (19) we obtain \(\sup\limits_{x\in\mathbb{R}^{d}}\|\nabla\xi(x)\|_{L^{p}}<\infty\) for any \(p\geq 2\). We define the driver \(f:\Omega\times[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}\) by \(f(\omega,t,x,z)=F(t,X^{x}_{t}(\omega),z,\mathcal{L}(X^{x}_{t}))\). 
The continuity of \(x\mapsto X^{x}\) combined with (HY1) yields that \(f\) and \(\nabla_{z}f\) are continuous in \(x\) and from (HY1) there is \(M>0\) such that for all \((t,x,z)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\) we have that \[|f(t,x,z)|\leq M(1+|z|^{2})\] \[|\nabla_{z}f(t,x,z)|\leq M(1+|z|).\] Since \[\nabla_{x}f(t,x,z)=\nabla_{x}F(t,X^{x}_{t},z,\mathcal{L}(X^{x}_{t}))\nabla X^ {x}_{t}+\mathbb{E}^{\omega}[\partial_{\mu}F(t,X^{x}_{t},z,\mathcal{L}(X^{x}_{t }),X^{x}_{t}(\omega))\nabla X^{x}_{t}(\omega)]\] \((t,x,z))\mapsto\nabla_{x}f(t,x,z)\) is continuous and from (HY1) there is \(M>0\) such that for all \((t,x,z)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\) \[|\nabla_{x}f(t,x,z)| \leq M(1+|z|^{2})(|\nabla X^{x}_{t}+\mathbb{E}[|\nabla X^{x}_{t} |])\leq M(1+|z|^{2})(|\nabla X^{x}_{t}|+\|\nabla X^{x}\|_{\mathcal{S}^{p}})\] \[: =K_{t}(x)(1+|z|^{2})\] From (19) we have that \(\sup\limits_{x\in\mathbb{R}^{d}}\|K(x)\|_{\mathcal{S}^{p}}<\infty\) for any \(p\geq 2\). **Theorem 8** (Classical differentiability).: _Let Assumptions 0(i), 3 hold. For \(x\in\mathbb{R}^{d}\) and \(p\geq 1\), let \((X^{x},Y^{x},Z^{x})\in\mathcal{S}^{2p}(\mathbb{R}^{d})\times\mathcal{S}^{2p}( \mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^{m})\) be the solution of (14)-(15). Then, there exists a function \(\Omega\times[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{m}\), \((\omega,t,x)\mapsto(X^{x},Y^{x},Z^{x})(\omega)\), such that for almost every \(\omega\), the mappings \((t,x)\mapsto X^{x}_{t}\) and \((t,x)\mapsto Y^{x}_{t}\) are continuous in \(t\) and continuously differentiable in \(x\)._ In the proof we will use the following 2 Lemmas **Lemma 2**.: _Assume (HY1+) holds. For \(x\in\mathbb{R}^{d}\) let \((X^{x},Y^{x},Z^{x})\in\mathcal{S}^{2p}(\mathbb{R}^{d})\times\mathcal{S}^{2p}( \mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^{m})\) be the solution of (14)-(15). 
Then, for every \(p\geq 1\) there exists a constant \(C>0\), such that for all \(x,x^{\prime}\in\mathbb{R}^{d}\) we have_ \[\mathbb{E}\Big{[}|\nabla_{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T})) \nabla X^{x}_{T}-\nabla_{x}g(X^{x^{\prime}},\mathcal{L}(X^{x^{\prime}}))\nabla X ^{x^{\prime}}_{T}|^{2p}\Big{]}+\] \[\mathbb{E}\Big{[}|\mathbb{E}^{\omega}[\partial_{\mu}g(X^{x}_{T}, \mathcal{L}(X^{x}_{T}),X^{x}_{T}(\omega))\nabla X^{x}_{T}(\omega)]-\mathbb{E}^ {\omega}[\partial_{\mu}g(X^{x^{\prime}},\mathcal{L}(X^{x^{\prime}}_{T}),X^{x^{ \prime}}_{T}(\omega))\nabla X^{x^{\prime}}_{T}(\omega)]\big{|}^{2p}\Big{]}\] \[\leq C|x-x^{\prime}|^{p}\] Proof.: \[|\nabla_{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T})-\nabla_{x}g(X^{x^{ \prime}},\mathcal{L}(X^{x^{\prime}})|\] \[\leq|\nabla X^{x^{\prime}}_{T}||\nabla_{x}g(X^{x}_{T},\mathcal{L} (X^{x}_{T})-\nabla_{x}g(X^{x^{\prime}},\mathcal{L}(X^{x^{\prime}}_{T})|+|\nabla _{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T})||\nabla X^{x^{\prime}}_{T}-\nabla X^{x}_ {T}|\] \[\qquad\leq M^{2p}\mathbb{E}\Big{[}|\nabla X^{x^{\prime}}_{T}|^{2p }\big{(}|X^{x}_{T}-X^{x^{\prime}}_{T}|+W_{2}(\mathcal{L}(X^{x}),\mathcal{L}(X^ {x^{\prime}}))|\big{)}^{2p}\Big{]}\leq C|x-x^{\prime}|^{2p}\] \[\mathbb{E}\Big{[}|\nabla_{x}g(X^{x}_{T},\mathcal{L}(X^{x}_{T})|^ {2p}|\nabla X^{x^{\prime}}_{T}-\nabla X^{x}_{T}|^{2p}\Big{]}\leq K^{2p} \mathbb{E}\big{[}|\nabla X^{x^{\prime}}_{T}-\nabla X^{x}_{T}|^{2p}\big{]} \leq C|x-x^{\prime}|^{p}\] \[\begin{split}&\big{|}\mathbb{E}^{\omega}[\partial_{\mu}g(X_{T}^{x}, \mathcal{L}(X_{T}^{x}),X_{T}^{x}(\omega))\nabla X_{T}^{x}(\omega)]-\mathbb{E}^{ \omega}[\partial_{\mu}g(X^{x^{\prime}},\mathcal{L}(X^{x^{\prime}}),X_{T}^{x^{ \prime}}(\omega))\nabla X_{T}^{x^{\prime}}(\omega)]\big{|}\\ &\qquad\leq\big{|}\mathbb{E}^{\omega}[(\partial_{\mu}g(X_{T}^{x^{ \prime}},\mathcal{L}(X_{T}^{x^{\prime}}),X_{T}^{x^{\prime}}(\omega))-\partial_ {\mu}g(X_{T}^{x},\mathcal{L}(X_{T}^{x}),X_{T}^{x}(\omega)))\nabla X_{T}^{x^{ \prime}}(\omega)]\big{|}\\ &\qquad\qquad\qquad+\big{|}\mathbb{E}^{\omega}[\partial_{\mu}g(X_ {T}^{x},\mathcal{L}(X_{T}^{x}),X_{T}^{x}(\omega))(\nabla X_{T}^{x^{\prime}}( \omega)-\nabla X_{T}^{x}(\omega))]\big{|}\\ &\qquad\qquad\leq M\mathbb{E}^{\omega}\big{[}(|X_{T}^{x^{\prime} }<-X_{T}^{x}|+W_{2}(\mathcal{L}(X_{T}^{x}),\mathcal{L}(X_{T}^{x^{\prime}}))+|X _{T}^{x^{\prime}}(\omega)-X_{T}^{x}(\omega)|)|\nabla X_{T}^{x^{\prime}}(\omega )|\big{]}\\ &\qquad\qquad\qquad+K\mathbb{E}^{\omega}\big{[}|\nabla X_{T}^{x^{ \prime}}(\omega)-\nabla X_{T}^{x}(\omega)|\big{]}\leq C(|X_{T}^{x^{\prime}}-X_{ T}^{x}|+|x-x^{\prime}|+|x-x^{\prime}|^{\frac{1}{2}})\end{split}\] **Lemma 3**.: _Assume (HY1+) holds. For \(x\in\mathbb{R}^{d}\) let \((X^{x},Y^{x},Z^{x})\in\mathcal{S}^{2p}(\mathbb{R}^{d})\times\mathcal{S}^{2p}( \mathbb{R})\times\mathcal{H}^{2p}(\mathbb{R}^{m})\) solution of (14)-(15) and \((\nabla X^{x},\nabla Y^{x},\nabla Z^{x})\) be the solution of (29)-(30). 
Then, for every \(p\geq 1\) there exists a constant \(C>0\), such that for all \(x,x^{\prime}\in\mathbb{R}^{d}\) we have_ \[\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}\lvert\nabla_{x}F(s,\Theta_{s }^{x},\mathcal{L}(X_{s}^{x}))\nabla X_{s}^{x}-\nabla_{x}F(s,\Theta_{s}^{x^{ \prime}},\mathcal{L}(X_{s}^{x^{\prime}}))\nabla X_{s}^{x^{\prime}}\lvert ds \Big{)}^{2p}\Big{]}+\] \[\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}\lvert\mathbb{E}^{\omega}[ \partial_{\mu}F(s,\Theta_{s}^{x},\mathcal{L}(X_{s}^{x}),X_{s}^{x}(\omega)) \nabla X_{s}^{x}(\omega)]\] \[\qquad\qquad\qquad-\mathbb{E}^{\omega}[\partial_{\mu}F(s,\Theta ^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}),X_{s}^{x^{\prime}}(\omega)) \nabla X_{s}^{x^{\prime}}(\omega)]\big{|})ds\Big{)}^{2p}\Big{]}\leq C|x-x^{ \prime}|^{p}\] Proof.: \[\begin{split}\lvert\nabla_{x}F(s,\Theta_{s}^{x},& \mathcal{L}(X_{s}^{x}))\nabla X_{s}^{x}-\nabla_{x}F(s,\Theta_{s}^{x^{\prime}}, \mathcal{L}(X_{s}^{x^{\prime}}))\nabla X_{s}^{x^{\prime}}\rvert\\ &\qquad\qquad\leq\lvert\nabla_{x}F(s,\Theta_{s}^{x},\mathcal{L}(X_ {s}^{x}))-\nabla_{x}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}) )\lvert\lvert\nabla X_{s}^{x^{\prime}}\lvert+\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \[\begin{split}&\big{|}\mathbb{E}^{\omega}[\partial_{\mu}F(s,\Theta^{x}_{s}, \mathcal{L}(X^{x}_{s}),X^{x}_{s}(\omega))\nabla X^{x}_{s}(\omega)]-\mathbb{E}^ {\omega}[\partial_{\mu}F(s,\Theta^{x^{\prime}},\mathcal{L}(X^{x^{\prime}}_{s}), X^{x^{\prime}}_{s}(\omega))\nabla X^{x^{\prime}}_{s}(\omega)]\big{|}\\ &\leq\mathbb{E}^{\omega}[|\partial_{\mu}F(s,\Theta^{x}_{s}, \mathcal{L}(X^{x}_{s}),X^{x}_{s}(\omega))-\partial_{\mu}F(s,\Theta^{x^{\prime }},\mathcal{L}(X^{x^{\prime}}_{s}),X^{x^{\prime}}_{s}(\omega))||\nabla X^{x^{ \prime}}_{s}(\omega)||+\\ &\qquad\qquad\qquad\mathbb{E}^{\omega}[|\partial_{\mu}F(s,\Theta^ {x}_{s},\mathcal{L}(X^{x}_{s}),X^{x}_{s}(\omega))||\nabla X^{x}_{s}(\omega)- \nabla X^{x^{\prime}}_{s}(\omega)|]\\ &\qquad\qquad\qquad\qquad\leq M(1+|Z^{x}_{s}|+|Z^{x^{\prime}}_{s} |)\mathbb{E}^{\omega}\big{[}|\nabla 
X^{x^{\prime}}_{s}(\omega)|\\ &\{(1+|Z^{x}_{s}|+|Z^{x^{\prime}}_{s}|)(|X^{x}_{s}-X^{x^{\prime} }_{s}|+W_{2}(\mathcal{L}(X^{x}_{s}),\mathcal{L}(X^{x^{\prime}}_{s}))+|X^{x}_{ s}(\omega)-X^{x^{\prime}}_{s}(\omega)|)+|Z^{x}_{s}-Z^{x^{\prime}}_{s}|\}\big{]}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+K(1+|Z^{x}_{s}|^{2}) \mathbb{E}^{\omega}\big{[}|\nabla X^{x^{\prime}}_{s}(\omega)-\nabla X^{x}_{s} (\omega)|\big{]}\end{split}\] \[\begin{split}&\leq C(1+|Z^{x}_{s}|+|Z^{x^{\prime}}_{s}|)\sup_{ x}\|\nabla X^{x}\|_{\mathcal{S}^{2}}\\ &\qquad\{(1+|Z^{x}_{s}|+|Z^{x^{\prime}}_{s}|)(|X^{x}_{s}-X^{x^{ \prime}}_{s}|+\mathbb{E}[\sup_{s\in[0,T]}|X^{x}_{s}-X^{x^{\prime}}_{s}|])+|Z^{ x}_{s}-Z^{x^{\prime}}_{s}|\}\\ &\qquad\qquad\qquad\qquad\qquad\qquad+C(1+|Z^{x}_{s}|^{2}) \mathbb{E}\big{[}\sup_{s\in[0,T]}|\nabla X^{x}_{s}-\nabla X^{x^{\prime}}_{s}|^ {2}\big{]}^{\frac{1}{2}}\\ &\leq C(1+|Z^{x}_{s}|+|Z^{x^{\prime}}_{s}|)\{(1+|Z^{x}_{s}|+|Z^{x^ {\prime}}_{s}|)(|X^{x}_{s}-X^{x^{\prime}}_{s}|+|x-x^{\prime}|)+|Z^{x}_{s}-Z^{x^ {\prime}}_{s}|\}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ \(\delta\nabla Z_{t}\), \(t\in[0,T]\), \[\delta\nabla Y_{t} =\nabla_{x}g(X_{T}^{x},\mathcal{L}(X_{T}^{x}))-\nabla_{x}g(X^{x^{ \prime}},\mathcal{L}(X^{x^{\prime}}))+\mathbb{E}^{\omega}[\partial_{\mu}g(X_{T}^ {x},\mathcal{L}(X_{T}^{x}),X_{T}^{x}(\omega))\nabla X_{T}^{x}(\omega)]\] \[\qquad-\mathbb{E}^{\omega}[\partial_{\mu}g(X_{T}^{x^{\prime}}, \mathcal{L}(X_{T}^{x^{\prime}}),X_{T}^{x^{\prime}}(\omega))\nabla X_{T}^{x^{ \prime}}(\omega)]-\int_{t}^{T}\delta\nabla Z_{s}dW_{s}\] \[+\int_{t}^{T}[\nabla_{x}F(s,\Theta_{s}^{x},\mathcal{L}(X_{s}^{x} ))\nabla X_{s}^{x}-\nabla_{x}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x ^{\prime}}))\nabla X_{s}^{x^{\prime}}]ds\] \[+\int_{t}^{T}[(\nabla_{z}F(s,\Theta_{s}^{x},\mathcal{L}(X_{s}^{x} ))-\nabla_{z}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}))) \nabla Z_{s}^{x}+\nabla_{z}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x^{ \prime}}))\delta\nabla Z_{s}^{x}]ds\] \[+\int_{t}^{T}\mathbb{E}^{\omega}[\partial_{\mu}F(s,\Theta_{s}^{x },\mathcal{L}(X_{s}^{x}),X_{s}^{x}(\omega))\nabla X_{s}^{x}(\omega)-\partial_{ \mu}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}),X_{s}^{x^{ \prime}}(\omega))\nabla X_{s}^{x^{\prime}}(\omega)]ds\] We now apply Lemma 1 to this BSDE to obtain \[\mathbb{E}[\sup_{t\in[0,T]}|\nabla Y_{t}^{x}-\nabla Y_{t}^{x^{ \prime}}|^{2p}]\\ \leq C\Big{\{}\mathbb{E}\Big{[}|\nabla_{x}g(X_{T}^{x},\mathcal{L 
}(X_{T}^{x}))\nabla X_{T}^{x}-\nabla_{x}g(X^{x^{\prime}},\mathcal{L}(X^{x^{ \prime}}))\nabla X_{T}^{x}|^{2p\bar{q}^{2}}\\ +\big{|}\mathbb{E}^{\omega}[\partial_{\mu}g(X_{T}^{x},\mathcal{L }(X_{T}^{x}),X_{T}^{x}(\omega))\nabla X_{T}^{x}(\omega)-\partial_{\mu}g(X^{x^{ \prime}},\mathcal{L}(X_{T}^{x^{\prime}}),X_{T}^{x^{\prime}}(\omega))\nabla X_{ T}^{x^{\prime}}(\omega)]\big{|}^{2p\bar{q}^{2}}\\ +\Big{(}\int_{0}^{T}(|\nabla_{x}F(s,\Theta_{s}^{x},\mathcal{L}(X_ {s}^{x}))\nabla X_{s}^{x}-\nabla_{x}F(s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_ {s}^{x^{\prime}}))\nabla X_{s}^{x^{\prime}}|+\\ \big{|}\mathbb{E}^{\omega}[\partial_{\mu}F(s,\Theta_{s}^{x}, \mathcal{L}(X_{s}^{x}),X_{s}^{x}(\omega))\nabla X_{s}^{x}(\omega)-\partial_{\mu }F(s,\Theta^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}),X_{s}^{x^{\prime}}( \omega))\nabla X_{s}^{x^{\prime}}(\omega)]\big{|}\\ +|\nabla_{z}F(s,\Theta_{s}^{x},\mathcal{L}(X_{s}^{x}))-\nabla_{z}F (s,\Theta_{s}^{x^{\prime}},\mathcal{L}(X_{s}^{x^{\prime}}))||\nabla Z_{s}^{x} |)ds\Big{)}^{2p\bar{q}^{2}}\Big{]}^{\frac{1}{q^{2}}}\Big{\}}\] Applying Lemmas 2 and 3 we have that there is \(C>0\) such that \[\mathbb{E}[\sup_{t\in[0,T]}|\nabla Y_{t}^{x}-\nabla Y_{t}^{x^{\prime}}|^{2p}] \leq C|x-x^{\prime}|^{p}.\] Choosing \(p\) large enough and applying Kolmogorov's continuity criterion we obtaing the continuity of \(x\mapsto\nabla Y^{x}\). As in [7] we consider the BSDE \[Y_{t}=\xi-\int_{t}^{T}Z_{s}dWs+\int_{t}^{T}f(s,Z_{s})ds,\quad t\in[0,T] \tag{31}\] under the following assumptions **(E1):**: \(f:\Omega\times[0,T]\times\mathbb{R}^{m}\to\mathbb{R}\) is an adapted measurable function continuously differentiable in the spatial variable. There exists a positive constant \(M\) such that for all \((t,z)\in[0,T]\times\mathbb{R}^{m}\) \[|f(t,z)|\leq M(1+|z|^{2})\text{ a.s.}\] \[|\nabla f(t,z)|\leq M(1+|z|)\text{ a.s.}\] **(E2):**: For each \(z\in\mathbb{R}^{m}\), \((f(t,y,z))_{t\in[0,T]}\in\mathbb{L}^{1,2p}(\mathbb{R})\) for all \(p\geq 1\) and its Malliavin derivative is given by \((D_{u}f(t,y,z))_{u,t\in[0,T]}\). For each \(u,t\in[0,T]\) \(z\mapsto D_{u}f(t,z)\) is continuous. There exist two positive adapted processes \((K_{u}(t))u,t\in[0,T]\) and \((\tilde{K}_{u}(t)_{u,t}\in[0,T]\) satisfying for all \(p\geq 1\) \[\int_{0}^{T}\mathbb{E}[\|K_{u}(t)\|_{\mathcal{H}^{2p}}^{2p}+\|\tilde{K}_{u}(t) \|_{\mathcal{S}^{2p}}^{2p}]du<\infty\] such that for any \((u,t,z)\in[0,T]\times[0,T]\times\mathbb{R}^{m}\) we have \[|D_{u}f(t,y,z))|\leq K_{u}(t)(1+|z|)+\tilde{K}u(t)|z|^{2},\ \text{a.s}\] **(E3):**: \(\xi\) is a \(\mathcal{F}_{T}\)-adapted measurable random variable absolutely bounded and belongs to \(\mathbb{D}^{1,\infty}\). **Theorem 9**.: _[_7_]_ _Assume that \(f\) and \(\xi\) satisfy (E1), (E2) and (E3). Then, the solution process \((Y,Z)\) of (31) belongs to \(\mathbb{L}^{1,2}\times(\mathbb{L}^{1,2})^{m}\), and a version of \((D_{u}Y_{t},D_{u}Z_{t})_{u,t\in[0,T]}\) is the unique solution of_ \[D_{u}Y_{t}=0,\ D_{u}Z_{t}=0,\ t\in[0,u),\] \[D_{u}Y_{t}=D_{u}\xi-\int_{t}^{T}D_{u}Z_{s}\ dW_{s}+\int_{t}^{T}[(D_{u}f)(s,Z_{ s})+\nabla f(s,Z_{s})D_{u}Z_{s}]ds,\ t\in[u,T].\] _Moreover, \(\{D_{t}Y_{t};t\in[0,T]\}\), is a version of \(\{Z_{t};t\in[0,T]\}\)._ **Theorem 10** (Malliavin differentiability).: _Let Assumptions 0(i), 3 hold. For \(x\in\mathbb{R}^{d}\), let \((X^{x},Y^{x},Z^{x})\) be the unique solution of (14)-(15)). 
Then, for any \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \((Y^{x},Z^{x})\in\mathbb{L}^{1,2}\times(\mathbb{L}^{1,2})^{m}\) and a version of \(\{(D_{u}^{i}Y_{t}^{x},D_{u}^{i}Z_{t}^{x}):0\leq u,t\leq T\}\), \(i=1,\ldots,m\), is the unique solution of_ \[D_{u}Y_{t}^{x}=0,\ D_{u}Z_{t}^{x}=0,\ t\in[0,u),\] \[D_{u}Y_{t}^{x}= (\nabla_{x}g)(X_{t}^{x})D_{u}X_{t}^{x}-\int_{t}^{T}D_{u}Z_{s}^{ x}dW_{s} \tag{32}\] \[+\int_{t}^{T}(\nabla_{x}F,\nabla_{z}F)(s,\Theta_{s}^{x})(D_{u}X_ {s}^{x},D_{u}Z_{s}^{x})\ ds,\quad t\in[u,T].\] _Furthermore, \(\{D_{t}Y_{t}^{x};t\in[0,T]\}\), defined by the last equation, is a version of \(\{Z_{t}^{x};t\in[0,T]\}\). On the other hand, the following set of equations hold for any \(x\in\mathbb{R}^{d}\) and \(0\leq u\leq t\leq T\), \(P-\)almost surely,_ \[D_{u}X_{t}^{x}= \nabla_{x}X_{t}^{x}(\nabla_{x}X_{u}^{x})^{-1}\sigma(X_{u}^{x}),\] \[D_{u}Y_{t}^{x}= \nabla_{x}Y_{t}^{x}(\nabla_{x}X_{u}^{x})^{-1}\sigma(X_{u}^{x}),\] \[Z_{t}^{x}= \nabla_{x}Y_{t}^{x}(\nabla_{x}X_{t}^{x})^{-1}\sigma(X_{t}^{x}),\] _and \(D_{u}Z_{t}^{x}=\nabla_{x}Z_{t}^{x}(\nabla_{x}X_{u}^{x})^{-1}\sigma(X_{u}^{x})\) for almost all \((\omega,u,t)\in\Omega\times[0,T]\times[0,T]\) with \(0\leq u\leq t\leq T\)._ Proof.: For fixed \(x\in\mathbb{R}^{m}\) define \(f:\Omega\times\mathbb{R}^{m}\) by \(f(\omega,t,z)=F(t,X_{t}^{x}(\omega),z,\mathcal{L}(X_{t}^{x}))\). From (HY1) we get (E1) and moreover there is \(M>0\) such that \[|D_{u}f(t,z)|=|\nabla_{x}F(t,X_{t}^{x},z,\mathcal{L}(X_{t}^{x}))D_{u}X_{t}^{x} |\leq M|D_{u}X_{t}^{x}|(1+|z|^{2})\ \ \text{a.s.}\] By (22), \(\sup_{u\in[0,T]}\|D_{u}X^{x}\|_{\mathcal{S}^{2p}}<\infty\) for any \(p\geq 1\) and \(x\in\mathbb{R}^{d}\) and then we obtain (E2). Considering \(\xi(x)=g(X_{T}^{x},\mathcal{L}(X_{T}^{x}))\) for \(x\in\mathbb{R}^{m}\), we have that \(\xi(x)\) is \(\mathcal{F}_{T}\)-adapted and \[|D_{u}\xi(x)|=|\nabla_{x}g(X_{T}^{x},\mathcal{L}(X_{T}^{x}))D_{u}X_{T}^{x}| \leq K|D_{u}X_{T}^{x}|.\] By Theorem 4 and inequality (16) we have for any \(p\geq 2\) \[\sup_{u\in[0,T]}C\mathbb{E}[(1+|X_{T}^{x}|)|D_{u}X_{T}^{x}|^{p}]\leq C\mathbb{E}[ (1+|X_{T}^{x}|)^{2p}]^{\frac{1}{2}}\sup_{u\in[0,T]}\mathbb{E}[|D_{u}X_{T}^{x}|^{ 2p}]^{\frac{1}{2}}<\infty\] and we obtain (E3). Applying Theorem 9 we get the first part of the Theorem. For the second part of the Theorem fix \(x\in\mathbb{R}^{d}\) and \(0\leq u\leq t\leq T\). The representation formula for \(D_{u}X^{x}\) is given by (23). From Theorem 9 we have that \(\{D_{t}Y_{t};t\in[0,T]\}\), is a version of \(\{Z_{t};t\in[0,T]\}\). To simplify notation we omit the superscript \(x\) for the processes \(X^{x},Y^{x},Z^{x}\). Apply Ito's formula to \(\nabla_{x}Y_{t}(\nabla_{x}X_{u})^{-1}\sigma(X_{u})\) and use (27). Next, use the representation of \(D_{u}X\) given by (23) to account for the terminal condition. In this way we obtain (32) with \(D_{u}Y_{t},D_{u}Z_{t}\) replaced by \[\nabla_{x}Y_{t}(\nabla_{x}X_{u})^{-1}\sigma(X_{u}),\;\nabla_{x}Z_{t}(\nabla_{ x}X_{t})^{-1}\sigma(X_{t})\] By the uniqueness of solution of (32) we obtain the representations of \(D_{u}Y_{t},D_{u}Z_{t}\). The following theorems are consequences of the differentiability shown up to now. The proofs are similar to the ones presented in Chapter 4 of [7], but we must consider our results regarding classical and Malliavin differentiability. 
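To make the representation formulas above concrete, consider the following one-dimensional illustration (added as a sanity check; its coefficients are linear rather than bounded, so it falls outside Assumption 0, but the identities in question are classical for Lipschitz coefficients). Take \(d=m=1\), \(b(x)=bx\) and \(\sigma(x)=\sigma x\) with constants \(b,\sigma\), so that
\[X^{x}_{t}=x\exp\Big{(}\big{(}b-\tfrac{\sigma^{2}}{2}\big{)}t+\sigma W_{t}\Big{)},\qquad\nabla_{x}X^{x}_{t}=\frac{X^{x}_{t}}{x}.\]
For \(u\leq t\), the representation \(D_{u}X^{x}_{t}=\nabla_{x}X^{x}_{t}(\nabla_{x}X^{x}_{u})^{-1}\sigma(X^{x}_{u})\) gives
\[D_{u}X^{x}_{t}=\frac{X^{x}_{t}}{x}\cdot\frac{x}{X^{x}_{u}}\cdot\sigma X^{x}_{u}=\sigma X^{x}_{t},\]
the well-known Malliavin derivative of geometric Brownian motion. Likewise, if \(Y^{x}_{t}=U(t,X^{x}_{t},\mathcal{L}(X^{x}_{t}))\) for a decoupling field \(U\) that is \(C^{1}\) in \(x\), then \(\nabla_{x}Y^{x}_{t}=(\nabla_{x}U)(t,X^{x}_{t},\mathcal{L}(X^{x}_{t}))\nabla_{x}X^{x}_{t}\), and the identity \(Z^{x}_{t}=\nabla_{x}Y^{x}_{t}(\nabla_{x}X^{x}_{t})^{-1}\sigma(X^{x}_{t})\) reduces to \(Z^{x}_{t}=(\nabla_{x}U)(t,X^{x}_{t},\mathcal{L}(X^{x}_{t}))\sigma(X^{x}_{t})\), which is the content of Theorem 12 below.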
Let \(\mathcal{D}^{d}\), \(d\in\mathbb{N}\) be the \(\sigma-\)algebra on \(\mathbb{R}^{d}\) generated by the family of functions \(\mathbb{R}^{d}\ni x\mapsto\mathbb{E}\left[\int_{t}^{T}\varphi(s,X_{s}^{t,x}) ds\right]\) where \(\varphi:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) is a bounded and continuous function and \(X^{t,x}\) is the solution of (14)) with \((t,x)\in[0,T]\times\mathbb{R}^{d}\). **Theorem 11**.: _Let Assumptions 0(i), 3 hold. Take \((t,x)\in[0,T]\times\mathbb{R}^{d}\). Then, there exist two \(\mathcal{B}\left[0,T\right]\otimes\mathcal{D}^{d}-\)measurable deterministic functions \(U\) and \(V\) mapping \([0,T]\times\mathbb{R}^{d}\) onto \(\mathbb{R}\) and \(\mathbb{R}^{d}\) respectively such that_ \[Y_{s}^{t,x}=U(s,X_{s}^{t,x},\mathcal{L}\left(X_{s}^{t,x}\right))\text{ and }Z_{s}^{t,x}=V(s,X_{s}^{t,x},\mathcal{L}\left(X_{s}^{t,x}\right))\sigma(X_{s}^ {t,x}),\] _for \(dP\otimes ds-\)a.a. \((\omega,s)\in\Omega\times[t,T]\)._ **Theorem 12**.: _Let Assumptions 0(i), 3 hold. Let \(t\in[0,T]\) and \(x\in\mathbb{R}^{d}\). Then \(x\mapsto U(t,x,\mathcal{L}\left(X_{t}^{t,x}\right))\) is continuously differentiable for almost all \(t\in[0,T]\). Moreover,_ \[Z_{s}^{t,x}=(\nabla_{x}U)(s,X_{s}^{t,x},\mathcal{L}\left(X_{s}^{t,x}\right)) \sigma(X_{s}^{t,x}),\] _for \(dP\otimes ds-\)a.a. \((\omega,s)\in\Omega\times[t,T]\)._ In the proof of the last theorem (c.f. Theorem 4.1.2 in [7]) we have to use the (Malliavin) chain rule for the map \(x\mapsto U(t,x,\mathcal{L}\left(X_{t}^{t,x}\right))=Y_{t}^{t,x}\); this is posible due to our result in Theorem 8. **Theorem 13** (Time continuity).: _Let \(g:\mathbb{R}^{d}\to\mathbb{R}\) be smooth and Assumptions 0(i), 3 hold. Then, there exists a continuous version of \((u,t)\mapsto D_{u}Y_{t}\) in \(\{(u,t):0\leq u\leq t\leq T\}\). In particular, there exists a continuous version of \(t\mapsto Z_{t}\) for \(t\in[0,T]\)._ ## 5. Proof of Theorem 1 **Proposition 14**.: _Under Assumptions 0 and 1, there exists a unique solution \((X,Y,Z)\in\mathcal{S}^{2}(\mathbb{R}^{d})\times\mathcal{S}^{2}(\mathbb{R}) \times\mathcal{H}^{2}(\mathbb{R}^{m})\) of (7) such that \((\mathcal{L}(X_{s}))_{t\leq s\leq T}\) is the unique equilibrium of the MFG associated with the stochastic optimal control problem (1)-(4)._ Proof.: Theorem 4.44 along with remark 4.50 in [6] ensure that (7) is \(solvable\). Uniqueness of the associated MFG problem follows from [6, Theorem 3.29]. Proof of Theorem 1.: Let \[\hat{\alpha}(x,z,\mu)=\underset{a\in A}{\arg\min}\ L(x,a,\mu)-a\cdot z,\] and \[H(x,z,\mu)=\underset{a\in A}{\inf}\ L\left(x,a,\mu\right)-a\cdot z.\] Under our assumptions, \(\hat{\alpha}(x,z)\) is the unique solution of \(\nabla_{a}L(x,\hat{\alpha},\mu)=z\), i.e., for each \(x,\mu\), \(\hat{\alpha}(x,\cdot)\) is the inverse function of \(\zeta(x,\cdot,\mu)=\nabla_{a}L(x,\cdot,\mu)\). Furthermore, by the implicit function theorem, \(\hat{\alpha}(x,z,\mu)\) is continuously differentiable in all its arguments and property (d) of \(L\) implies that \(|\hat{\alpha}(x,z,\mu)|\leq C(1+|z|)\). 
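As a quick illustration of the minimizer (this example is not needed for the proof), take the quadratic Lagrangian of Remark 1, \(L(x,a,\mu)=f(x,\mu)|a|^{2}\) with \(\inf f>0\). Then \(\nabla_{a}L(x,a,\mu)=2f(x,\mu)a\), so
\[\hat{\alpha}(x,z,\mu)=\frac{z}{2f(x,\mu)},\qquad|\hat{\alpha}(x,z,\mu)|\leq\frac{|z|}{2\inf f},\]
consistent with the linear growth bound just obtained.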
Moreover denoting \(G=(D_{aa}^{2}L)^{-1}\) and writing \(\hat{\alpha}\) for \(\hat{\alpha}(x,z,\mu)\) \[D_{x}\hat{\alpha}= -G(x,\hat{\alpha},\mu)D_{xa}^{2}L(x,\hat{\alpha},\mu)\] \[D_{z}\hat{\alpha}= G(x,\hat{\alpha},\mu)\] \[\partial_{\mu}\hat{\alpha}(x,z,\mu,v)=-G(x,\hat{\alpha},\mu) \partial_{\mu}(\nabla_{a}L)(x,\hat{\alpha},\mu,v)\] Thus \[|D_{x}\hat{\alpha}| \leq C(1+|\hat{\alpha}|)\leq C(1+|z|),\] \[|D_{z}\hat{\alpha}| \leq C,\] \[|\partial_{\mu}\hat{\alpha}(x,z,\mu,v)| \leq C(1+|\hat{\alpha}|)\leq C(1+|z|)\] which imply \[|\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime})-\hat{\alpha}(x,z,\mu)|\leq C \{(1+|z|+|z^{\prime}|)(|x^{\prime}-x|+W_{2}(\mu^{\prime},\mu))+|z-z^{\prime}|\}\] Let \(X,Y,Z\) be the processes given in Proposition 14 and \(\hat{\alpha}_{s}=\hat{\alpha}\left(X_{s},Z_{s}\right)\). Since the process \[\left(\int_{t}^{\tau}Z_{s}\cdot dW_{s}\right)_{t\leq\tau\leq T}\] is a BMO (Bounded Mean Oscillation) martingale (see [6, Theorem 4.19]), so is the stochastic integral of \(\hat{\alpha}_{s}\). By [6, Proposition 4.18], the stochastic exponential of \(\int_{t}^{t}\hat{\alpha}_{s}\cdot dW_{s}\) is a true martingale. Therefore, we can use Girsanov's theorem to define the \(m-\)dimensional Brownian motion \[\tilde{W}_{s}=W_{s}-W_{t}+\int_{t}^{s}\hat{\alpha}_{r}dr.\] One proves as in Proposition 14 the existence of a solution \((X,Y,Z)\) to 11. Let \(F(x,z,\mu):=L(x,\hat{\alpha}(x,z,\mu),\mu)\), and consider the system \[\begin{cases}dX_{s}=b(X_{s})ds+\sigma(X_{s})dW_{s}\\ dY_{s}=-F(X_{s},Z_{s},\mathcal{L}(X_{s}))ds+Z_{s}\cdot dW_{s}\end{cases} \tag{33}\] for \(s\in\left[t,T\right],X_{t}=x,M_{T}=g(X_{T}).\) We will show that Assumption 3 holds, which allows us to use Theorems 8 to 13 and conclude the proof. Property (a) of \(L\) implies \(|F(x,0,\mu)|\leq C\). 
\[\nabla_{z}F(x,z,\mu)=\nabla_{a}L(x,\hat{\alpha},\mu)\nabla_{z}\hat{\alpha}=z^ {t}G(x,\hat{\alpha},\mu), \tag{34}\] so that \[\nabla_{z}F(x,z,\mu)D^{2}_{aa}L(x,\hat{\alpha},\mu)=z\] \[\nabla_{x}F(x,z,\mu) =\nabla_{x}L(x,\hat{\alpha},\mu)+\nabla_{a}L(x,\hat{\alpha},\mu) \nabla_{x}\hat{\alpha}\] \[=\nabla_{x}L(x,\hat{\alpha},\mu)-\nabla_{z}F(x,z,\mu)\nabla^{2}_{xa }L(x,\hat{\alpha},\mu)\] \[\partial_{\mu}F(x,z,\mu,v) =\partial_{\mu}L(x,\hat{\alpha},\mu)+\nabla_{a}L(x,\hat{\alpha}, \mu)\partial_{\mu}\hat{\alpha}\] \[=\partial_{\mu}L(x,\hat{\alpha},\mu,v)-\nabla_{z}F(x,z,\mu) \partial_{\mu}(\nabla_{a}L)(x,\hat{\alpha},\mu,v).\] Thus \[|\nabla_{x}F(x,z,\mu)| \leq C(1+|\hat{\alpha}|^{2}+|z|(1+|\hat{\alpha}|))\leq C(1+|z|^{2})\] \[|\partial_{\mu}F(x,z,\mu)| \leq C(1+|\hat{\alpha}|^{2}+|z|(1+|\hat{\alpha}|))\leq C(1+|z|^{2})\] \[D^{2}_{zz}F(x,z,\mu)D^{2}_{aa}L(x,\hat{\alpha},\mu)+\nabla_{z}F (x,z,\mu)\cdot D^{3}_{aaa}L(x,\hat{\alpha},\mu)\nabla_{z}\hat{\alpha}=I\] \[D^{2}_{zz}F(x,z,\mu) =G(x,\hat{\alpha},\mu)-D^{3}_{aaa}L(x,\hat{\alpha},\mu)[G(x,\hat {\alpha},\mu)z,G(x,\hat{\alpha},\mu),G(x,\hat{\alpha},\mu)]\] \[D^{2}_{xz}F(x,z,\mu)D^{2}_{aa}L(x,\hat{\alpha},\mu)+\nabla_{z}F (x,z,\mu)(D^{3}_{xaa}L(x,\hat{\alpha},\mu)+D^{3}_{aaa}L(x,\hat{\alpha},\mu)D_ {x}\hat{\alpha}=0\] \[D^{2}_{xz}F(x,z,\mu) =D^{2}_{xa}L(x,\hat{\alpha},\mu)(G(x,\hat{\alpha},\mu)-D^{2}_{zz }F(x,z,\mu))\] \[-\nabla_{z}F(x,z,\mu)D^{3}_{xaa}L(x,\hat{\alpha},\mu)G(x,\hat{ \alpha},\mu)\] \[\partial_{\mu}\nabla_{x}F(x,z,\mu,v) D^{2}_{aa}L(x,\hat{\alpha},\mu)+\nabla_{z}F(x,z,\mu)\partial_{ \mu}(D^{2}_{aa}L)(x,\hat{\alpha},\mu,v)+D^{3}_{aaa}L(x,\hat{\alpha},\mu) \partial_{\mu}\hat{\alpha}=0\] \[\partial_{\mu}\nabla_{x}F(x,z,\mu,v) =\partial_{\mu}(\nabla_{a}L)(x,\hat{\alpha},\mu,v)(G(x,\hat{ \alpha},\mu)-D^{2}_{zz}F(x,z,\mu))\] \[-\nabla_{z}F(x,z,\mu)\partial_{\mu}(D^{2}_{aa}L)(x,\hat{\alpha}, \mu,v)G(x,\hat{\alpha},\mu).\] Thus \[\xi^{t}D^{2}_{zz}F(x,z,\mu)\xi=\xi^{t}G(x,\hat{\alpha},\mu)\xi-D^{3}_{aaa}L(x,\hat{\alpha},\mu)[G(x,\hat{\alpha},\mu)z,G(x,\hat{\alpha},\mu)\xi,G(x,\hat{ \alpha},\mu)\xi] \tag{35}\] for \(\xi\in\mathbb{R}^{m}\). On the other hand, since \(C\left|\xi\right|^{2}\geq\xi^{T}D^{2}_{aa}l\)\(\xi\geq\gamma\left|\xi\right|^{2}\), we have \(C\geq\lambda_{1},\ldots,\lambda_{m}\geq\gamma\), where \(\lambda_{1},\ldots,\lambda_{m}\) are the eigenvalues of \(D^{2}_{aa}L\). It follows that the eigenvalues of \(G\) are bounded between \(1/C\) and \(1/\gamma\). Therefore, under Assumptions 1, 2 we have \[|D^{2}_{zz}F(x,z,\mu)| \leq C \tag{37}\] \[|D^{2}_{xz}F(x,z,\mu)| \leq C(1+|z|)\] (38) \[|\partial_{\mu}\nabla_{x}F(x,z,\mu)| \leq C(1+|z|) \tag{36}\] which imply (28). 
By Assumptions 1, 2 and (28),
\[|\nabla_{x}F(x^{\prime},z^{\prime},\mu^{\prime})-\nabla_{x}F(x,z,\mu)|\leq|\nabla_{x}L(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime})-\nabla_{x}L(x,\hat{\alpha}(x,z,\mu),\mu)|\]
\[+|\nabla_{z}F(x^{\prime},z^{\prime},\mu^{\prime})-\nabla_{z}F(x,z,\mu)||D^{2}_{xa}L(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime})|\]
\[+|\nabla_{z}F(x,z,\mu)||D^{2}_{xa}L(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime})-D^{2}_{xa}L(x,\hat{\alpha}(x,z,\mu),\mu)|\]
\[\leq C(1+|z|+|z^{\prime}|)\{(1+|z|+|z^{\prime}|)(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime}))+|z-z^{\prime}|\}\]
and
\[|\partial_{\mu}F(x^{\prime},z^{\prime},\mu^{\prime},v^{\prime})-\partial_{\mu}F(x,z,\mu,v)|\\
\leq|\partial_{\mu}L(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime},v^{\prime})-\partial_{\mu}L(x,\hat{\alpha}(x,z,\mu),\mu,v)|\\
+|\nabla_{z}F(x^{\prime},z^{\prime},\mu^{\prime})-\nabla_{z}F(x,z,\mu)||\partial_{\mu}(\nabla_{a}L)(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime},v^{\prime})|\\
+|\nabla_{z}F(x,z,\mu)||\partial_{\mu}(\nabla_{a}L)(x^{\prime},\hat{\alpha}(x^{\prime},z^{\prime},\mu^{\prime}),\mu^{\prime},v^{\prime})-\partial_{\mu}(\nabla_{a}L)(x,\hat{\alpha}(x,z,\mu),\mu,v)|\\
\leq C(1+|z|+|z^{\prime}|)\{(1+|z|+|z^{\prime}|)(|x-x^{\prime}|+W_{2}(\mu,\mu^{\prime})+|v-v^{\prime}|)+|z-z^{\prime}|\}.\]
This shows that \(F\) satisfies Assumption 3, so Theorems 8 to 13 apply to (33), which completes the proof.
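Continuing the illustration with the quadratic Lagrangian \(L(x,a,\mu)=f(x,\mu)|a|^{2}\) of Remark 1 (a worked example added for concreteness, not part of the argument), all of the objects appearing in the proof can be written explicitly. Here \(D^{2}_{aa}L=2f(x,\mu)I_{m}\), so \(G=(2f(x,\mu))^{-1}I_{m}\), \(\hat{\alpha}(x,z,\mu)=z/(2f(x,\mu))\), and the driver becomes
\[F(x,z,\mu)=L(x,\hat{\alpha}(x,z,\mu),\mu)=\frac{|z|^{2}}{4f(x,\mu)},\]
with
\[\nabla_{z}F(x,z,\mu)=\frac{z}{2f(x,\mu)},\qquad\nabla_{x}F(x,z,\mu)=-\frac{|z|^{2}}{4f(x,\mu)^{2}}\nabla_{x}f(x,\mu),\qquad\partial_{\mu}F(x,z,\mu,v)=-\frac{|z|^{2}}{4f(x,\mu)^{2}}\partial_{\mu}f(x,\mu,v).\]
Since \(\inf f>0\) and \(f\in C^{1,1}_{b}\), \(\nabla_{z}F\) grows at most linearly and \(\nabla_{x}F\), \(\partial_{\mu}F\) at most quadratically in \(z\), in agreement with (HY1), and one can check the Lipschitz-type bounds of (HY1+) in the same way using the Lipschitz continuity of \(\nabla_{x}f\) and \(\partial_{\mu}f\).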
2309.07366
Induced Distributions from Generalized Unfair Dice
In this paper we analyze the probability distributions associated with rolling (possibly unfair) dice infinitely often. Specifically, given a $q$-sided die, if $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^{\text{th}}$ toss, then the distribution function is $F(x)=\mathbb{P}[X\leq x]$, where $X = \sum_{i=1}^\infty x_i q^{-i}$. We show that $F$ is singular and establish a piecewise linear, iterative construction for it. We investigate two ways of comparing $F$ to the fair distribution -- one using supremum norms and another using arclength. In the case of coin flips, we also address the case where each independent flip could come from a different distribution. In part, this work aims to address outstanding claims in the literature on Bernoulli schemes. The results herein are motivated by emerging needs, desires, and opportunities in computation to leverage physical stochasticity in microelectronic devices for random number generation.
Douglas T. Pfeffer, J. Darby Smith, William Severa
2023-09-14T00:57:19Z
http://arxiv.org/abs/2309.07366v2
# Induced Distributions from Generalized Unfair Dice ###### Abstract In this paper we analyze the probability distributions associated with rolling (possibly unfair) dice infinitely often. Specifically, given a \(q\)-sided die, if \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss, then the distribution function is \(F(x)=\mathbb{P}[X\leq x]\), where \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\). We show that \(F\) is singular and establish a piecewise linear, iterative construction for it. We investigate two ways of comparing \(F\) to the fair distribution--one using supremum norms and another using arclength. In the case of coin flips, we also address the case where each independent flip could come from a different distribution. In part, this work aims to address outstanding claims in the literature on Bernoulli schemes. The results herein are motivated by emerging needs, desires, and opportunities in computation to leverage physical stochasticity in microelectronic devices for random number generation. ## 1 Introduction Contemporary computing approaches are dominated by deterministic operations. In terms of both the algorithmic approach and the underlying computing devices, determinism is deeply woven into our computing mindset. At the device level, stochastic behavior is often seen as a defect. Noise and fluctuations have been eliminated or constrained wherever possible. This is often beneficial; resistance to noise is one of the key benefits of _digital_ electronics. However, a direct consequence is that our everyday computers are deterministic. Of course, randomness plays a role in many algorithms, including those from scientific computing and cryptography. For example, particle methods and other probabilistic approaches are often applied to high-dimensional physics problems where direct numerical solutions can be intractable. However, there is an inherent misalignment between stochastic behavior and deterministic hardware. Well-distributed and difficult-to-predict numbers can be generated by a Pseudo-Random Number Generator (pRNG). These methods (generally) take a _seed_ value and generate a sequence of corresponding numbers through iteration. The sequence of values can appear to be random, but are entirely determined by the seed and the pRNG. The quality of the 'random' numbers is dependent on the quality of the pRNG. While this method is sufficient for many applications, deficiencies in either the seed setting or the pRNG can be disastrous.1 Footnote 1: We provide two examples, though we suggest the reader to explore the fascinating world of deficient pRNGs. The first is of historical notoriety: The RANDU generator’s iterates in three dimensions fall along planes [14], making predictions trivially easy in this case. The second is that system time is a common seed value. Knowing the system time at seed setting allows players to predict future events in games such as Pokémon [21]. Sources of noise can be used to help improve the quality of pRNG number generation. Small timing differences in keyboard input and mouse input or fluctuations in measured quantities, such as WiFi signal, can be used. In modern approaches, these noisy signals feed what is called an _entropy pool_. This entropy pool (e.g. via /dev/random/ on Linux) can then be combined, hashed, and otherwise manipulated to produce yet more unpredictable "random" numbers. Unfortunately, this entropy pool approach has three main challenges: 1. The sources of noise may not be truly random. 2. 
The pRNGs still produce non-random numbers. 3. The entropy pool can be depleted. We believe these motivate the study of probabilistic computing devices and, consequently, the study of how to best use a naturally stochastic computing device. This motivation is shared by those in the computing field, where probabilistic devices and true Random Number Generators (tRNGs) are an area of active study [22, 23, 24, 25, 26]. Certain devices, such as magnetic tunnel junctions [27, 28] or tunnel diodes [3], can be made to behave as an effective Bernoulli random variable (hereafter, a coin flip). These devices have some probability \(p\) of returning \(1\) and a probability \(1-p\) of returning \(0\). Simple random variables have great utility and can be exploited to return samples from a variety of distributions [11, 12]. Furthermore, when used as input to novel and emerging hardware, like spiking neurons in a neuromorphic computer [29, 10, 11], such simple stochastic input can be used to find approximate solutions to NP-hard problems such as MAXCUT [23, 14]. These formulations tend to model coin flips as precisely that--a two-outcome, even draw. We note that these devices conceivably behave not as just coins but as \(q\)-sided dice. Indeed, current devices considered for other purposes can be in one of five states [13].
To fix notation for the coin-flip case, let \(x_{i}\in\{0,1\}\) denote the outcome of the \(i^{\text{th}}\) flip and set \(X_{n}:=\sum_{i=1}^{n}x_{i}2^{-i}\). Each \(X_{n}\) is a decimal number in \([0,1]\) formed from a binary encoding of the first \(n\) outcomes \(\{x_{i}\}_{i=1}^{n}\). Let \(X:=\lim_{n\to\infty}X_{n}\) be the encoded value in \([0,1]\) obtained by flipping our coin infinitely many times. We ask, given \(y\in[0,1]\), what is \(\mathbb{P}[X\leq y]\)? That is, what is the cumulative distribution function (CDF) of \(X\)? Consider first the case that \(p=0.5\). For a finite number of flips \(N\), the probability of getting any single value of \(X\) is the probability of getting a particular string of outcomes. In this fair case, that probability is \(1/2^{N}\). Extending this probability mass function to a probability density function on the real line, we need to divide this probability by the width of the unit of mass it represents. Given the binary encoding, this width is \(1/2^{N}\). Hence the probability density induced on the real line by \(N\) flips is \(1\). The limit in \(N\) is still \(1\), and the associated probability density function (PDF) of \(X\) in the fair case is the uniform PDF. Therefore the CDF of \(X\) in the fair case is given by \[F(x)=\int_{-\infty}^{x}\mathbbm{1}_{[0,1]}\,d\mu=\begin{cases}0&\text{if }x<0\\ x&\text{if }0\leq x\leq 1\\ 1&\text{if }x>1\end{cases}. \tag{1}\] We remark that the uniform measure (the PDF) on \([0,1]\) coincides with Lebesgue measure restricted to \([0,1]\). What happens when \(p\neq 0.5\)? [1, Example 31.1] shows that the distribution turns out to be _singular_--a probability distribution concentrated on a set of Lebesgue measure zero. This is observed by showing that the cumulative distribution function \(F(x)\) for \(X\) in the non-fair case is continuous, strictly increasing, and has \(F^{\prime}(x)=0\) almost everywhere. A related concept is the Cantor distribution, a probability distribution whose cumulative distribution function is the Cantor function. After demonstrating that the cumulative distribution function, \(F(x)\), for the unfair coin is singular, Billingsley establishes a recursion definition for it. With \(\mathbb{P}[x_{i}=0]=p_{0}\) and \(p_{1}=1-p_{0}\), \[F(x)=\begin{cases}p_{0}F(2x)&\text{if }0\leq x\leq\frac{1}{2}\\ p_{0}+p_{1}F(2x-1)&\text{if }\frac{1}{2}\leq x\leq 1\end{cases}. \tag{2}\] As examples, we graph this recursive function for four different coins \(p_{0}=.15,.25,.40,\text{ and }.49\) in Figure 1.
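The recursion (2) also lends itself to direct numerical evaluation. The sketch below is a minimal illustration (assuming NumPy is available; the grid size, iteration count, and printed evaluation points are arbitrary choices, not values from the paper): it approximates \(F\) by repeatedly applying the right-hand side of (2) to the identity function, which converges because, as shown later in Theorem 4.1, the underlying map is a contraction. Plotting the returned arrays for the four values of \(p_{0}\) gives curves qualitatively like those in Figure 1.

```python
import numpy as np

def unfair_coin_cdf(p0, n_iter=30, n_grid=2**12 + 1):
    """Approximate F by iterating the recursion (2), starting from F_0(x) = x."""
    p1 = 1.0 - p0
    x = np.linspace(0.0, 1.0, n_grid)
    f = x.copy()                                        # initial guess: the fair CDF
    for _ in range(n_iter):
        left = p0 * np.interp(2.0 * x, x, f)            # branch for 0 <= x <= 1/2
        right = p0 + p1 * np.interp(2.0 * x - 1.0, x, f)  # branch for 1/2 <= x <= 1
        f = np.where(x <= 0.5, left, right)
    return x, f

# Approximate CDFs for the coins shown in Figure 1
for p0 in (0.15, 0.25, 0.40, 0.49):
    x, F = unfair_coin_cdf(p0)
    print(f"p0 = {p0:.2f}:  F(0.25) ~ {np.interp(0.25, x, F):.4f},  F(0.5) ~ {np.interp(0.5, x, F):.4f}")
```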
As any unfair coin produces a singular measure, all such unfair coin measures are singular with respect to Lebesgue measure (on \([0,1]\)) and therefore singular with respect to the uniform measure on \([0,1]\). In the sequel, we will extend this classic CDF result into a series of results on unfair \(q\)-sided dice. This natural extension of the CDF formula to unfair dice is conjectured by Billingsley as an exercise and, to our knowledge, no proof of this result exists in the literature. The following is a reinterpretation of this conjecture: **Conjecture 2.1** (Problem 31.1 in [1]).: Let \(p_{0},\ldots,p_{q-1}\) be non-negative numbers adding to \(1\), where \(q\geq 2\); suppose there is no \(j\) such that \(p_{j}=1\). Let \(x_{1},x_{2},\ldots\) be independent, identically distributed random variables such that \(P[x_{i}=j]=p_{j}\), \(0\leq j<q\), and put \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\). If \(F(x)=\mathbb{P}[X\leq x]\) is the distribution function of \(X\), then 1. \(F\) is continuous, 2. \(F\) is strictly increasing on \([0,1]\) if and only if \(p_{j}>0\) for all \(j\), 3. if \(p_{j}=q^{-1}\) for all \(j\), then \(F(x)=x\) on \([0,1]\); and 4. if \(p_{j}\neq q^{-1}\) for some \(j\), then \(F\) is singular. In addition to the preceding proposition on a single (unfair) die, recent research has referenced pairs of dice as well; notably, [10] and [10]. While their work is focused on the generalized _stationarity_ setting, they provide a discussion of the problem's history in the i.i.d. case--a so-called 'Bernoulli scheme'. They identify the \(q=2\) CDF as a 'Riesz-Nagy' function, and explicitly examine the Cantor function for \(q=3\) (Problem 31.2 in [11]). In [10, §1.1], the authors go on to make two claims without proof:

* (I) The measures \(\mathrm{d}F=\mu\) in all the Bernoulli schemes for any \(q\) are again all singular with respect to one another.
* (II) Only one measure is absolutely continuous relative to Lebesgue measure; namely where all \(j\in\{0,\ldots,q-1\}\) are equally likely. In this case, \(\mathrm{d}F=\lambda\) is Lebesgue measure itself on \([0,1]\).

They refer to the first of these as a 'folk theorem'. While they refer the reader to Section 14 of [11] for a discussion on the matter, we were unable to reproduce these observations from this text. That said, [11, Example 3.5] does allude to the base-\(q\) case for \(q\geq 2\). Here, similar questions are posed as those in [11, Problems 31.1 and 32.1], but still no proofs are given. In part, the next section will provide proofs of the aforementioned facts. Specifically, we prove Conjecture 2.1 in Theorems 3.1 and 3.2. We prove claims (I) and (II) in Theorems 3.2 and 3.3. Afterward, in Theorem 3.4, we provide a novel analogous result to [11, Example 31.1] for independent, but not identically distributed, coin flip sequences.

Figure 1: The CDF \(F(x)\) shown for various unfair coins with values for \(p_{0}\) indicated by color. In each case, \(p_{1}=1-p_{0}\).

## 3 Results on infinite (unfair) dice rolling.

In this section we begin by establishing qualitative results for the cumulative distribution functions associated with sequences obtained from unfair dice rolls. We then follow this discussion with the development of some machinery one can use to compare the unfair cumulative distribution function with the uniform distribution.

### Analysis of the CDF.
Our first goal is to prove parts (i) and (ii) of Conjecture 2.1: **Theorem 3.1**.: Consider a \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\). Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\), if \(F(x)=\mathbb{P}[X\leq x]\) is the distribution function obtained after tossing the die an infinite number of times, then 1. \(F\) is continuous; 2. \(F\) is strictly increasing on \([0,1]\) if and only if \(p_{j}>0\) for all \(j\); 3. \(F^{\prime}\) exists almost everywhere in \([0,1]\). Proof.: For an arbitrary sequence \((u_{1},u_{2},\ldots)\) taken from \(\{0,1,\ldots,q-1\}\), let \(p_{u_{i}}:=\mathbb{P}[x_{i}=u_{i}]\). Since each \(p_{u_{i}}<1\), we have \[\mathbb{P}[x_{i}=u_{i},\ i=1,2,\ldots]=\lim_{n\to\infty}p_{u_{1}}\cdot p_{u_{2}}\cdot\ldots\cdot p_{u_{n}}=0. \tag{3}\] Letting \(x=\sum_{i=1}^{\infty}\frac{u_{i}}{q^{i}}\) be the (essentially) unique base-\(q\) expansion for a number in \([0,1]\), we see immediately that \(\mathbb{P}[X=x]=0\). Hence, \[\mathbb{P}[X\leq x]=\mathbb{P}[X<x]+\mathbb{P}[X=x]=\mathbb{P}[X<x].\] It follows that \(F\) is left-continuous. As a distribution, \(F\) must be right-continuous. Therefore \(F\) is everywhere continuous. Now let \(k\in\mathbb{N}\) so that \(0\leq\frac{k}{q^{n}}\leq 1\). We can see that \[\frac{k}{q^{n}}=\sum_{i=1}^{n}\frac{u_{i}}{q^{i}}\] for some \(u_{i}\in\{0,1,\ldots,q-1\}\). Since \(F\) is continuous, \[F\left(\frac{k+1}{q^{n}}\right)-F\left(\frac{k}{q^{n}}\right) =\mathbb{P}\left[X<\frac{k+1}{q^{n}}\right]-\mathbb{P}\left[X<\frac{k}{q^{n}}\right]\] \[=\mathbb{P}\left[\frac{k}{q^{n}}<X<\frac{k+1}{q^{n}}\right] \tag{4}\] \[=\mathbb{P}[x_{i}=u_{i},\ i=1,2,\ldots,n]\] \[=p_{u_{1}}\cdot\ldots\cdot p_{u_{n}}.\] Therefore, since base-\(q\) expansions are dense in \([0,1]\), \(F\) is strictly increasing on \([0,1]\) if and only if \(p_{j}>0\) for all \(j\). In any case, \(F\) is non-decreasing and therefore, by Theorem 31.2 in [2], \(F^{\prime}\) exists almost everywhere in \([0,1]\). To proceed, we require two results on the frequency of digits in our base-\(q\) expansions (both of which are due to Emile Borel). To discuss both, we fix the following notation: Given a finite set \(B\) of \(b\) digits and an infinite sequence \(\omega\) taken from \(B\), let \(\#_{\omega}(a,n)\) denote the number of times \(a\) shows up in the first \(n\) terms of \(\omega\). The first result we need is known as _Borel's law of large numbers_ (see [21] for an analytic proof) which states that if \(S_{n}\), \(n\geq 1\), is the number of successes in the first \(n\) independent repetitions of a Bernoulli trial with success probability \(p\), \(0<p<1\), then \[\mathbb{P}\left(\lim_{n\to\infty}\frac{S_{n}}{n}=p\right)=1.\] In the context of our paper, our Bernoulli trial is the tossing of a \(q\)-sided die. Borel's law of large numbers is then asserting that, with probability \(1\), the frequency of each individual outcome tends toward its probability. From this, Lemma 3.1 directly follows. **Lemma 3.1**.: Consider a \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\).
Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\), if \(F(x)=\mathbb{P}[X\leq x]\) is the distribution function obtained after tossing the die an infinite number of times with \(\mu\) as its associated probability measure, and \(\omega_{q}=(d_{1}(x),d_{2}(x),\ldots)\) is the sequence of digits in the non-terminating base-\(q\) expansion of an \(x\in[0,1]\), then \[\mu\left(\left\{x\in(0,1]\colon\lim_{n\to\infty}\frac{\#_{\omega_{q}}(j,n)}{n} =p_{j}\right\}\right)=1.\] The second result we need is Borel's _Normal Number Theorem_. While this result was originally established in [1], we refer the reader to [19, Chapter 8] for more details. By definition, given a finite set of \(b\) digits, \(B\), an infinite sequence \(\omega\) on this set is **(simply) normal** if \[\lim_{n\to\infty}\frac{\#_{\omega}(a,n)}{n}=\frac{1}{b}\] for any \(a\in B\). Thus, a sequence \(\omega\) is normal for a set \(B\) if the relative frequency of each item in \(B\) is 'fair'. The normal number theorem says that almost every real number \(x\) is normal in any integral base \(b>1\). Utilizing our notation, we formally write **Lemma 3.2** (Normal Number Theorem).: Let \(x\in[0,1]\), \(b>1\) be an integer, and \(B=\{0,\ldots,b-1\}\). If \(\omega_{b}\) is the sequence of digits from \(B\) that form the base-\(b\) expansion of \(x\), then \[\lambda\left(\left\{x\in[0,1]\colon\lim_{n\to\infty}\frac{\#_{\omega_{b}}(j,n) }{n}=\frac{1}{q}\right\}\right)=1\] for all \(j\in B\), where \(\lambda\) is Lebesgue measure on \([0,1]\). Next, we record a measure-theoretic definition and proposition that have been localized to \([0,1]\). Both are reproduced directly from [1, pg.410]. **Definition 1**.: Two measures \(\mu\) and \(\lambda\) on \([0,1]\) have _disjoint supports_ if there exist Borel sets \(S_{\mu}\) and \(S_{\lambda}\) such that \[\mu([0,1]\setminus S_{\mu})=0,\ \ \ \ \ \lambda([0,1]\setminus S_{\lambda})=0,\ \ \ \ \text{and}\ \ \ \ S_{\mu}\cap S_{\lambda}=\emptyset.\] **Proposition 1**.: If \(F\colon[0,1]\to[0,1]\) is a differentiable function for which \(\mu((a,b])=F(b)-F(a)\), then \(\mu\) and Lebesgue measure \(\lambda\) have disjoint supports if and only if \(F^{\prime}(x)=0\) except on a set of Lebesgue measure \(0\). We may now state and prove our second main result which proves parts (iii) and (iv) of Conjecture 2.1 and simultaneously proves claim (II): **Theorem 3.2**.: Consider a \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\). Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\), if \(F(x)=\mathbb{P}[X\leq x]\) is the cumulative distribution function obtained after tossing the die an infinite number of times, then 1. If \(p_{j}=\frac{1}{q}\) for all \(j\), then \(F(x)=x\) on \([0,1]\) and 2. If \(p_{k}\neq\frac{1}{q}\) for some \(k\), then \(F\) is singular. In either case, \(F\) is given by the following recursion formula: \[F(x)=\begin{cases}p_{0}F(qx)&\text{if }0\leq x\leq\frac{1}{q}\\ p_{0}+p_{1}F(qx-1)&\text{if }\frac{1}{q}\leq x\leq\frac{2}{q}\\ \vdots&\vdots\\ (p_{0}+p_{1}+\ldots+p_{q-2})+p_{q-1}F(qx-(q-1))&\text{if }\frac{q-1}{q}\leq x \leq 1\end{cases}. 
\tag{5}\] Proof.: While (i) will follow from the recursion formula established independently, we can also use the setting detailed in (4) to observe that, if \(p_{j}=\frac{1}{q}\) for all \(j\), then \[F\left(\frac{k}{q^{n}}+\frac{1}{q^{n}}\right)-F\left(\frac{k}{q^{n}}\right)=F \left(\frac{k+1}{q^{n}}\right)-F\left(\frac{k}{q^{n}}\right)=p_{u_{1}}\cdot \ldots\cdot p_{u_{n}}=\frac{1}{q^{n}}.\] Hence, due to the density of base-\(q\) expansions in \([0,1]\), \(F(x)=x\). We now establish (ii). Take \(x\in(0,1]\) and let \(\omega_{q}=(d_{1}(x),d_{2}(x),\ldots)\) be the sequence of digits in its non-terminating base-\(q\) expansion. If \(\mu\) represents our probability measure, then \(\mu[x\colon d_{i}(x)=j]=p_{j}\) for \(j\in\{0,\ldots,q-1\}\). For every \(j\), form \[S_{j}:=\left\{x\in(0,1]\colon\lim_{n\to\infty}\frac{\#\omega_{q}(j,n)}{n}=p_{ j}\right\}\quad\text{ and consider }\quad\widetilde{S}:=\bigcap_{j=0}^{q-1}S_{j}.\] Lemma 3.1 asserts that \(\mu(S_{j})=1\) for every \(j\). Thus, by subadditivity of the measure \(\mu\), \[\mu((\widetilde{S})^{c})=\mu\left(\bigcup_{j=0}^{q-1}S_{j}^{c}\right)\leq\sum _{j=0}^{q-1}\mu(S_{j}^{c})=0,\] and therefore \(\mu(\widetilde{S})=1\). Similarly, for every \(j\) form \[T_{j}:=\left\{x\in(0,1]\colon\lim_{n\to\infty}\frac{\#\omega_{q}(j,n)}{n}= \frac{1}{q}\right\}\quad\text{ and consider }\quad\widetilde{T}:=\bigcap_{j=0}^{q-1}T_{j}.\] Lemma 3.2 asserts that \(\lambda(T_{j})=1\) for every \(j\), where \(\lambda\) is Lebesgue measure. Thus, again by the subadditivity of \(\lambda\), \(\lambda(\widetilde{T})=1\). Now suppose that \(p_{k}\neq\frac{1}{q}\) for some \(k\). Then, by the uniqueness of limits, \(S_{k}\cap T_{k}=\emptyset\) and therefore \(\widetilde{S}\cap\widetilde{T}=\emptyset\). By Definition 1, \(\mu\) and \(\lambda\) are seen to have disjoint supports. It now follows from Proposition 1 that \(F^{\prime}(x)=0\) except on a set of Lebesgue measure \(0\) and therefore \(F\) is singular. Finally, we establish the recursion formula given in (5). Note that \([0,1]\) can be divided into \(q\) intervals \([0,\frac{1}{q}],[\frac{1}{q},\frac{2}{q}],\ldots,[\frac{q-1}{q},1]\) -- the so-called base-\(q\) intervals of rank \(1\). All cases in the recursion proceed in an identical fashion. As such, we provide an explicit proof of the last case of the recursion formula only. Suppose \(x\in[\frac{q-1}{q},1]\), the \(q^{\text{th}}\) base-\(q\) interval of rank \(1\). Here, \(X\leq x\) can occur in \(q\) different ways. Specifically, either \(X\) lies in one of the previous base-\(q\) intervals, or it lies in the last interval with \(x\). Thus, \[\mathbb{P}[X\leq x] =\mathbb{P}\left[x_{1}=0\text{ or }\ldots\text{ or }x_{1}=q-2\text{ or }\left(x_{1}=q-1\text{ and }\frac{q-1}{q}+\sum_{i=2}^{\infty}\frac{x_{i}}{q^{i}}\leq x \right)\right]\] \[=(p_{0}+\ldots+p_{q-2})+p_{q-1}\mathbb{P}\left[q-1+\sum_{i=2}^{ \infty}\frac{x_{i}}{q^{i-1}}\leq qx\right]\] \[=(p_{0}+\ldots+p_{q-2})+p_{q-1}\mathbb{P}\left[\sum_{i=1}^{\infty }\frac{x_{i+1}}{q^{i}}\leq qx-(q-1)\right]\] \[=(p_{0}+\ldots+p_{q-2})+p_{q-1}\mathbb{P}\left[X\leq qx-(q-1)\right]\] \[=(p_{0}+\ldots+p_{q-2})+p_{q-1}F(qx-(q-1)).\] Therefore, when \(x\in[\frac{q-1}{q},1]\), we have the recursion \[F(x)=(p_{0}+\ldots+p_{q-2})+p_{q-1}F(qx-(q-1)).\] The rest of the cases follow similarly. **Remark 3.1**.: Note that the _Cantor distribution_ is the probability distribution whose cumulative distribution function is the Cantor function. This distribution is often given as an example of a singular distribution. 
As a result, it is worth noting that the singular distribution obtained in Theorem 3.2 is a generalization of the Cantor distribution. Indeed, if we let \(q=3\), \(p_{0}=p_{2}=0.5\), and \(p_{1}=0\), Theorem 3.2 states that the resulting cumulative distribution function, \(F(x)\), is singular and is given by the following recursion formula: \[F(x)=\begin{cases}\frac{1}{2}F(3x)&\text{if }0\leq x\leq\frac{1}{3}\\ \frac{1}{2}&\text{if }\frac{1}{3}\leq x\leq\frac{2}{3}\\ \frac{1}{2}+\frac{1}{2}F(3x-2)&\text{if }\frac{2}{3}\leq x\leq 1\end{cases}\] By comparing this formula to that given in [1] and [1, pg.9], we see that this formula exactly defines the Cantor distribution whose graph is the 'Devil's Staircase'. Notably, the proof given in Theorem 3.2 can be modified to form a stronger conclusion on singularity. Specifically, we can compare the probability measures obtained from differently weighted dice. The following result proves claim (I): **Theorem 3.3**.: Consider two \(q\)-sided dice with sides taken from \(Q:=\{0,\ldots,q-1\}\). Let \(x_{i}\in Q\) denote the outcome of the \(i^{\text{th}}\) toss of one die with corresponding probabilities \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in Q\), and let \(\widetilde{x}_{k}\) and \(\widetilde{p}_{k}\) be defined similarly for the second die (with \(k\in Q\)). Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\) and \(\widetilde{X}=\sum_{i=1}^{\infty}\widetilde{x}_{i}q^{-i}\), put \(F(x)=\mathbb{P}[X\leq x]\) and \(\widetilde{F}(x)=\mathbb{P}[\widetilde{X}\leq x]\) as the respective cumulative distribution functions obtained after tossing the corresponding die an infinite number of times. If there exists an outcome \(t\in Q\) so that \(p_{t}\neq\widetilde{p}_{t}\), then the associated probability measures, \(\mu\) and \(\widetilde{\mu}\), are mutually singular. Proof.: It suffices to show that \(\mu\) and \(\widetilde{\mu}\) have disjoint supports. We will essentially use the argument given in the proof of Theorem 3.2, but using two probability measures instead of one probability measure and Lebesgue measure. Take \(x\in(0,1]\) and let \(\omega_{q}=(d_{1}(x),d_{2}(x),\ldots)\) be the sequence of digits in its non-terminating base-\(q\) (equivalently, base-\(\widetilde{q}\)) expansion. If \(\mu\) and \(\widetilde{\mu}\) represent our two probability measures, then \(\mu[x\colon d_{i}(x)=j]=p_{j}\) and \(\widetilde{\mu}[x\colon d_{i}(x)=k]=\widetilde{p}_{k}\) for \(j,k\in Q\). For every \(j\), form \[S_{j}:=\left\{x\in(0,1]\colon\lim_{n\to\infty}\frac{\#_{\omega_{q}}(j,n)}{n}=p_ {j}\right\}\quad\text{ and }\quad\quad S:=\bigcap_{j\in Q}S_{j}.\] Lemma 3.1 asserts that \(\mu(S_{j})=1\) for every \(j\). Thus, by subadditivity of \(\mu\), we have \(\mu(\widetilde{S})=1\). Similarly, for every \(k\), form \[\widetilde{S}_{k}:=\left\{x\in(0,1]\colon\lim_{n\to\infty}\frac{\#_{\omega_{q }}(k,n)}{n}=\widetilde{p}_{k}\right\}\quad\text{ and }\quad\quad\widetilde{S}:=\bigcap_{k\in Q} \widetilde{S}_{k}.\] Again, by Lemma 3.1 and subadditivity of \(\widetilde{\mu}\), we have \(\widetilde{\mu}(\widetilde{S})=1\). By assumption, there exists an outcome \(t\) for which \(p_{t}\neq\widetilde{p}_{t}\). Thus, by the uniqueness of limits, we have \(S_{t}\cap\widetilde{S}_{t}=\emptyset\) and therefore \(S\cap\widetilde{S}=\emptyset\). Hence, \(\mu\) and \(\widetilde{\mu}\) have disjoint supports. We have so far only addressed i.i.d. random dice rolls and coin flips. 
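Before moving beyond the identically distributed setting, a quick Monte Carlo sanity check on the Cantor special case of Remark 3.1 can be run as below. This is a sketch assuming NumPy; the truncation depth, sample size, and evaluation points are arbitrary illustrative choices. The empirical CDF of truncated samples of \(X\) should match the values predicted by the recursion in Theorem 3.2.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_X(p, n_digits=30, n_samples=200_000):
    """Sample X = sum_i x_i q^{-i}, truncated after n_digits base-q digits."""
    q = len(p)
    digits = rng.choice(q, size=(n_samples, n_digits), p=p)
    weights = float(q) ** -np.arange(1, n_digits + 1)
    return digits @ weights

# Cantor-like die from Remark 3.1: q = 3 with p0 = p2 = 1/2 and p1 = 0.
X = sample_X([0.5, 0.0, 0.5])
for y in (1/3, 1/2, 2/3, 8/9):
    print(f"empirical P[X <= {y:.3f}] = {np.mean(X <= y):.3f}")
# The recursion (5) gives F(1/3) = F(1/2) = F(2/3) = 1/2 and F(8/9) = 3/4.
```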
For the coin-flip case, we can say a little bit more in the independent, but not necessarily identically distributed, setting. **Theorem 3.4**.: Suppose we flip an infinite number of \(2\)-sided unfair coins, each of which may have a different weighting. Specifically, let \(x_{i}\in\{0,1\}\) denote the outcome of the \(i^{\text{th}}\) flip and suppose \(0<\mathbb{P}[x_{i}=0]=:p_{i;0}\neq 0.5\). If \((p_{i;0})\not\to 0.5\), then \(F^{\prime}(x)=0\) almost everywhere and therefore \(F\) is singular. Proof.: Analogous arguments to those in Theorem 3.1 demonstrate that \(F\) is well-defined, continuous, and increasing, and therefore \(F^{\prime}\) exists almost everywhere in \([0,1]\). Suppose that \((p_{i;0})\not\to 0.5\). We will demonstrate that \(F^{\prime}(x)=0\). Let \(k\in\mathbb{N}\) so that \(0\leq\frac{k}{2^{n}}\leq 1\). Then, \(\frac{k}{2^{n}}=\sum_{i=1}^{n}\frac{u_{i}}{2^{i}}\) for some \(u_{i}\in\{0,1\}\) and \[F\left(\frac{k+1}{2^{n}}\right)-F\left(\frac{k}{2^{n}}\right) =\mathbb{P}\left[X<\frac{k+1}{2^{n}}\right]-\mathbb{P}\left[X<\frac{k}{2^{n}}\right]\] \[=\mathbb{P}\left[\frac{k}{2^{n}}<X<\frac{k+1}{2^{n}}\right]\] \[=\mathbb{P}[x_{i}=u_{i},\ i=1,2,\ldots,n]\] \[=p_{u_{1}}\cdot\ldots\cdot p_{u_{n}}.\] Let \(x\) be given and for each \(m\in\mathbb{N}\), choose \(k_{m}\) so that \(x\in I_{m}\), where \[I_{m}=\left(\frac{k_{m}}{2^{m}},\frac{k_{m}+1}{2^{m}}\right)\] is the dyadic interval of rank \(m\) that contains \(x\). It follows from the density of dyadics in \([0,1]\) that \[\lim_{m\to\infty}\frac{\mathbb{P}[X\in I_{m}]}{2^{-m}}=\lim_{m\to\infty}\frac{F\left(\frac{k_{m}+1}{2^{m}}\right)-F\left(\frac{k_{m}}{2^{m}}\right)}{2^{-m}}=F^{\prime}(x).\] Therefore, if we suppose, for the sake of contradiction, that \(F^{\prime}(x)\neq 0\), then on one hand we obtain the following: \[\lim_{m\to\infty}\frac{\frac{\mathbb{P}[X\in I_{m+1}]}{2^{-(m+1)}}}{\frac{\mathbb{P}[X\in I_{m}]}{2^{-m}}}=\frac{F^{\prime}(x)}{F^{\prime}(x)}=1.\] Thus, \[\lim_{m\to\infty}\frac{\mathbb{P}[X\in I_{m+1}]}{\mathbb{P}[X\in I_{m}]}=\frac{1}{2}. \tag{6}\] On the other hand, we know that \(I_{m}\) consists of those numbers in \([0,1]\) whose dyadic expansions match \(x\)'s for the first \(m\) terms. Thus, if \(X\in I_{m}\), then \[X=\sum_{i=1}^{m}\frac{u_{i}}{2^{i}}+\sum_{i=m+1}^{\infty}\frac{x_{i}}{2^{i}}.\] This implies that \(\mathbb{P}[X\in I_{m}]=p_{u_{1}}\cdot p_{u_{2}}\cdot\ldots\cdot p_{u_{m}}\). Therefore, \[\frac{\mathbb{P}[X\in I_{m+1}]}{\mathbb{P}[X\in I_{m}]}=\frac{p_{u_{1}}\cdot p_{u_{2}}\cdot\ldots\cdot p_{u_{m}}\cdot p_{u_{m+1}}}{p_{u_{1}}\cdot p_{u_{2}}\cdot\ldots\cdot p_{u_{m}}}=p_{u_{m+1}}.\] We assumed that \((p_{i;0})\not\to 0.5\); therefore \[\lim_{m\to\infty}\frac{\mathbb{P}[X\in I_{m+1}]}{\mathbb{P}[X\in I_{m}]}=\lim_{m\to\infty}p_{u_{m+1}}\neq\frac{1}{2}.\] This contradicts the conclusion yielded in (6). Thus, \(F^{\prime}(x)=0\) almost everywhere and therefore \(F\) is singular.

## 4 Comparisons of distributions to uniform.

If computational devices can be used to create distributions based on possibly unfair coin tosses or die rolls, it is natural to ask how far away results could be expected to be from uniform. In practice, such comparisons are likely to be statistical in nature. However, given the results of the previous sections, we now have firm distributional objects with which to compare. In this section we offer two analytic ways to compare an unfair distribution to the uniform and fair one. The first is done in the infinite toss limit and utilizes the sup-norm.
The second considers the practicality of the finite and compares a finite number of rolls or tosses to uniform through arclength. ### Comparison to Uniform under \(\|\cdot\|_{\infty}\) In this section we establish a method to compare the (possibly unfair) distribution \(F(x)\) with the uniform (fair) distribution using the sup-norm. To start, consider the operator \(T\colon C[0,1]\to C[0,1]\) defined by \[(Tf)(x)=\begin{cases}p_{0}f(qx)&\text{if }0\leq x\leq\frac{1}{q}\\ p_{0}+p_{1}f(qx-1)&\text{if }\frac{1}{q}\leq x\leq\frac{2}{q}\\ \vdots&\vdots\\ (p_{0}+p_{1}+\ldots+p_{q-2})+p_{q-1}f(qx-(q-1))&\text{if }\frac{q-1}{q}\leq x \leq 1\end{cases} \tag{7}\] **Lemma 4.1**.: Put \(p_{\max}=\max\{p_{i}\}_{i=1}^{q}\). If \(f,g\in C[0,1]\), then \(\|Tf-Tg\|_{\infty}\leq p_{\max}\|f-g\|_{\infty}\), and therefore \(T\) defines a contraction mapping. Proof.: Suppose \(\frac{q-1}{q}\leq x\leq 1\). Then, \[Tf-Tg =\left(\sum_{j=0}^{q-2}p_{j}+p_{q-1}f(qx-(q-1))\right)-\left(\sum_{j =0}^{q-2}p_{j}+p_{q-1}g(qx-(q-1))\right)\] \[=p_{q-1}(f(qx-(q-1))-g(qx-(q-1))).\] Therefore \(\|Tf-Tg\|_{\infty}=p_{q-1}\|f-g\|_{\infty}\) when restricted to \(\left[\frac{q-1}{q},1\right]\). Similar conclusions hold for the other subsets of the domain. Putting them all together (and supping over all of \([0,1]\)), we conclude \[\|Tf-Tg\|_{\infty}\leq p_{\max}\|f-g\|_{\infty}.\] **Theorem 4.1**.: Given the sequence of functions \((f_{n})_{n=0}^{\infty}\) defined by \(f_{0}=x\) and \(f_{n+1}=Tf_{n}\) and the distribution function \(F\) in Theorem 3.2, 1. \(f_{n}\to F\) 2. \(\|x-F(x)\|_{\infty}\leq\left(\frac{1}{1-p_{\max}}\right)\|x-f_{1}(x)\|_{\infty}\) Proof.: Lemma 4.1 showed that the operator \(T\colon C[0,1]\to C[0,1]\) is a contraction mapping. Thus, since \((C[0,1],\|\cdot\|_{\infty})\) is a (non-empty) complete metric space, the Banach Fixed Point Theorem guarantees that \(T\) admits a unique fixed point--a function \(G\in C[0,1]\) such that \(TG=G\), where \(G=\lim f_{n}\). This function is exactly the distribution \(F\) in Theorem 3.2. The Banach Fixed Point Theorem implies that you'll get the same fixed point no matter what starting function you pick (i.e., the choice of \(f_{0}\in C[0,1]\) is arbitrary). As such, we can choose our initial starting function as \(f_{0}=x\) and denote \(F(x)=:f_{\infty}(x)\). Now, since \(x-F(x)=f_{0}-f_{\infty}\) is seen to be a telescoping series,,we have the following: \[\|x-F(x)\|=\|f_{0}-f_{\infty}\|=\left\|\sum_{n=0}^{\infty}f_{n}-f_{n+1}\right\| \leq\sum_{n=0}^{\infty}\|f_{n}-f_{n+1}\|.\] Which, by Lemma 4.1 and the fact that \(|p_{\max}|<1\), yields \[\|x-F(x)\|\leq\sum_{n=0}^{\infty}(p_{\max})^{n}\|f_{0}-f_{1}\|=\left(\frac{1} {1-p_{\max}}\right)\|f_{0}-f_{1}\|=\left(\frac{1}{1-p_{\max}}\right)\|x-f_{1}\|,\] where all norms are sup-norms over \([0,1]\). Thus, to understand the sup-norm difference between the distribution \(F(x)\) and the uniform distribution \(y=x\), it suffices to understand the quantity \(\|x-f_{1}\|\), where \(f_{1}\) is a (finite) piece-wise linear function and has no fractal-like components. ### Comparisons via Arclength The previous section's result gave comparative information about the full singular distribution--one obtained after we roll our unfair die infinitely often. What if we wanted to compare our (possibly unfair) distribution after finitely many rolls? One route is to look at their arclengths. 
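Before developing the arclength route, it is worth noting that the sup-norm bound of Theorem 4.1 is itself easy to evaluate: it only requires one application of the operator \(T\) from (7) to the identity. The sketch below assumes NumPy; the example die and the grid resolution are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def apply_T(f, p):
    """Apply the operator T of Eq. (7) to a function f defined on [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = len(p)
    cum = np.concatenate(([0.0], np.cumsum(p)))       # partial sums p_0 + ... + p_{k-1}

    def Tf(x):
        x = np.asarray(x, dtype=float)
        k = np.clip(np.floor(x * q).astype(int), 0, q - 1)   # rank-1 interval index
        return cum[k] + p[k] * f(q * x - k)

    return Tf

p = [0.15, 0.35, 0.25, 0.25]                # an illustrative unfair 4-sided die
f1 = apply_T(lambda x: x, p)                # f_1 = T x
x = np.linspace(0.0, 1.0, 100_001)
gap = np.max(np.abs(x - f1(x)))
print(f"||x - f_1||_inf ~ {gap:.4f}")
print(f"Theorem 4.1 bound on ||x - F||_inf: {gap / (1.0 - max(p)):.4f}")
```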
We start by recording a small result from [13, Theorem 6.22]: **Lemma 4.2**.: Let \(F\colon[0,1]\to\mathbb{R}\) be a continuous, increasing function for which \(F(0)=0\) and \(F(1)=1\). Then the following two statements are equivalent 1. The length of the arc \(y=F(x)\) on \([0,1]\) is \(2\). 2. The function \(F\) is singular. **Proposition 2**.: Consider a \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\). Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\), if \(F(x)=\mathbb{P}[X\leq x]\) is the cumulative distribution function obtained after tossing the die an infinite number of times, then 1. If \(p_{j}=\frac{1}{q}\) for all \(j\), then the arclength of \(F(x)\) on \([0,1]\) is \(\sqrt{2}\). 2. If \(p_{j}\neq\frac{1}{q}\) for some \(j\), then the arclength of \(F(x)\) on \([0,1]\) is \(2\). Proof.: By Theorem 3.2, if \(p_{j}=\frac{1}{q}\) for all \(j\), then \(F(x)=x\) on \([0,1]\) and therefore its length is \(\sqrt{2}\). If, on the other hand, we have that \(p_{j}\neq\frac{1}{q}\) for some \(j\), then Theorem 3.2 guarantees that \(F(x)\) is singular. Moreover, by Theorem 3.1, we know that \(F\) is both continuous and increasing on \([0,1]\). Finally, we observe that \(F(0)=0\) and \(F(1)=1\) (this can be seen, for example, by using the recursion formula in Theorem 3.2). Thus, by Lemma 4.2, the arclength of \(F\) on \([0,1]\) is equal to \(2\). This theorem tells us the arclength for the cumulative distribution function after we roll our die infinitely often. If, however, we are given a die and roll it \(n\) times, we can ask: Are we getting closer to a fair distribution, or an unfair one? One way to answer this is to look at the \(n^{\text{th}}\) iterate of our recursion formula. That is, in the language of Theorem 4.1, we consider \(f_{n}=T^{n}x\) and look at its arclength as \(n\) gets larger. We fix some notation first. Given our \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\), we will put \(P=\{p_{j}\}_{j=0}^{q-1}\) and denote the set of its \(n\)-tuples by \(P^{n}\). Order this finite set lexicographically and put \(P^{n}=\{v_{\ell}\}_{\ell=0}^{q^{n}-1}\) so that, for example, \(v_{0}=(p_{0},p_{0},\ldots,p_{0})\), \(v_{1}=(p_{0},p_{1},p_{0},\ldots,p_{0})\), etc. Let \(\Pi_{n}\colon P^{n}\to\mathbb{R}\) be the mapping that multiplies the coordinates of a given tuple from \(P^{n}\). For example, if \(v_{\ell}=(p_{0},p_{1},p_{0},p_{2},\ldots,p_{3})\), then \(\Pi_{n}(v_{\ell})=p_{0}\cdot p_{1}\cdot p_{0}\cdot p_{2}\cdot\ldots\cdot p_{3}\). **Remark 4.1**.: The tuple of probabilities associated with a given \(v_{\ell}\), say, \((p_{\ell_{0}},\ldots,p_{\ell_{q-1}})\) provide a _unique_ 'tag' by which we can locate the \(q^{n}\) base-\(q\) intervals of rank \(n\). For example, if we let \(q=4\) so that our probabilities are \(p_{0},p_{1},p_{2}\), and \(p_{3}\), and we consider the base-\(4\) intervals of rank \(n=2\), then \[P^{2}=\{(p_{i},p_{j})\ :\ i,j\in\{0,1,2,3\}\}.\] Now, the interval \(\left[\frac{14}{4^{2}},\frac{15}{4^{2}}\right]\) is uniquely associated with the tuple \(v_{14}=(p_{3},p_{2})\) in the following way: Start with \([0,1]\), zoom in on the fourth subinterval \(\left[\frac{3}{4},1\right]\) and further zoom in on _its_ third subinterval to yield \(\left[\frac{14}{4^{2}},\frac{15}{4^{2}}\right]\). Note that order matters. 
For example, the tuple \(v_{7}=(p_{2},p_{3})\) corresponds to the interval \(\left[\frac{7}{4^{2}},\frac{8}{4^{2}}\right]\). In either case, \(\Pi_{2}((p_{3},p_{2}))=\Pi_{2}((p_{2},p_{3}))=p_{3}p_{2}\). **Theorem 4.2**.: Consider a \(q\)-sided die, where \(x_{i}\in\{0,\ldots,q-1\}\) denotes the outcome of the \(i^{\text{th}}\) toss and \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\ldots,q-1\}\). Let \(f_{n}=T^{n}x\) be the piece-wise linear function described in Theorem 4.1 and let \(P^{n}=\{v_{i}\}_{i=0}^{q^{n}-1}\) be the set of \(q\)-tuples described in the preceding conversation. Then \[\text{Arclength of }f_{n}=\sum_{i=0}^{q^{n}-1}\sqrt{\left(\frac{1}{q^{n}} \right)^{2}+(\Pi_{n}(v_{i}))^{2}}.\] Proof.: The function \(f_{n}\) is piece-wise linear on the base-\(q\) intervals of rank \(n\): \[\left[0,\frac{1}{q^{n}}\right],\left[\frac{1}{q^{n}},\frac{2}{q^{n}}\right], \ldots,\left[\frac{q^{n}-1}{q^{n}},1\right]. \tag{8}\] As such, the arclength of \(f_{n}\) is equal to the sum of the length of the linear components on each of these intervals. Let \([\frac{i}{q^{n}},\frac{i+1}{q^{n}}]\) be an arbitrary such interval and consider the points \(f_{n}(\frac{i}{q^{n}})\) and \(f_{n}(\frac{i+1}{q^{n}})\). Note that, if \(F\) is the full cumulative distribution function, then \(F\) and \(f_{n}\) agree on the endpoints of every base-\(q\) interval of rank \(n\). (In fact, this is true for any rank of base-\(q\) intervals.) Thus, we can use the recursive definition for \(F\) given in Theorem 3.2 to evaluate the endpoints \(f_{n}(\frac{i}{q^{n}})\) and \(f_{n}(\frac{i+1}{q^{n}})\). To see this, note that the points \(\frac{i}{q^{n}}\) and \(\frac{i+1}{q^{n}}\) must both live in some base-\(q\) interval of rank \(1\). That is, they must both live in one of \([0,\frac{1}{q}],[\frac{1}{q},\frac{2}{q}],\ldots,[\frac{q-1}{q},1]\). (Here we are allowing the possibility that one of these points is the endpoint of an interval). Suppose the two points live in the interval \([\frac{k}{q},\frac{k+1}{q}]\) for some \(k\). Then \[f_{n}\left(\frac{i}{q^{n}}\right)=F\left(\frac{i}{q^{n}}\right) =(p_{0}+p_{1}+\ldots+p_{k-1})+p_{k}F\left(q\cdot\frac{i}{q^{n}}-k\right)\] \[=(p_{0}+p_{1}+\ldots+p_{k-1})+p_{k}F\left(\frac{i-kq^{n-1}}{q^{n- 1}}\right)\] and \[f_{n}\left(\frac{i+1}{q^{n}}\right)=F\left(\frac{i+1}{q^{n}}\right) =(p_{0}+p_{1}+\ldots+p_{k-1})+p_{k}F\left(q\cdot\frac{i+1}{q^{n}} -k\right)\] \[=(p_{0}+p_{1}+\ldots+p_{k-1})+p_{k}F\left(\frac{(i-kq^{n-1})+1}{q ^{n-1}}\right)\] so that \[f_{n}\left(\frac{i+1}{q^{n}}\right)-f_{n}\left(\frac{i}{q^{n}}\right)=p_{k} \left(F\left(\frac{(i-kq^{n-1})+1}{q^{n-1}}\right)-F\left(\frac{i-kq^{n-1}}{q^ {n-1}}\right)\right) \tag{9}\] Notably, we are now looking at the endpoints of the base-\(q\) interval of rank \(n-1\) given by \(\left[\frac{i-kq^{n-1}}{q^{n-1}},\frac{(i-kq^{n-1})+1}{q^{n-1}}\right]\). Thus, (9) shows that when we run the difference \(f_{n}\left(\frac{i+1}{q^{n}}\right)-f_{n}\left(\frac{i}{q^{n}}\right)\) through an iteration of the recursive formula for \(F\), it returns the probability \(p_{k}\) (where \(k\) is the rank-\(1\) interval that the rank-\(n\) interval \(\left[\frac{i}{q^{n}},\frac{i+1}{q^{n}}\right]\) landed in) times another difference of \(F\)-function evaluations at the endpoints of a rank \(n-1\) interval. 
Repeating this process \(n\) times, we get that \[f_{n}\left(\frac{i+1}{q^{n}}\right)-f_{n}\left(\frac{i}{q^{n}}\right)=\Pi_{n}(v_{i})(F(1)-F(0))=\Pi_{n}(v_{i})(1-0)=\Pi_{n}(v_{i}),\] where, in light of Remark 4.1, \(v_{i}\) is exactly the unique \(q\)-tuple 'tag' for the interval \(\left[\frac{i}{q^{n}},\frac{i+1}{q^{n}}\right]\). Since \(f_{n}\) is piece-wise linear on the base-\(q\) intervals of rank \(n\), its arclength over \(\left[\frac{i}{q^{n}},\frac{i+1}{q^{n}}\right]\) is equal to \[\sqrt{\left(\frac{i+1}{q^{n}}-\frac{i}{q^{n}}\right)^{2}+\left(f_{n}\left(\frac{i+1}{q^{n}}\right)-f_{n}\left(\frac{i}{q^{n}}\right)\right)^{2}}=\sqrt{\left(\frac{1}{q^{n}}\right)^{2}+\left(\Pi_{n}(v_{i})\right)^{2}}.\] Computing this for each interval in (8) gives the total arclength as \[\text{Arclength of }f_{n}=\sum_{i=0}^{q^{n}-1}\sqrt{\left(\frac{1}{q^{n}}\right)^{2}+(\Pi_{n}(v_{i}))^{2}}.\] Observe that, indeed, if \(p_{j}=\frac{1}{q}\) for all \(j\), then \(\Pi_{n}(v_{i})=\frac{1}{q^{n}}\) for all \(v_{i}\in P^{n}\) so that \[\text{Arclength of }F\text{ on }[0,1] =\lim_{n\to\infty}\sum_{i=0}^{q^{n}-1}\sqrt{\left(\frac{1}{q^{n}}\right)^{2}+(\Pi_{n}(v_{i}))^{2}}\] \[=\lim_{n\to\infty}\sum_{i=0}^{q^{n}-1}\sqrt{\left(\frac{1}{q^{n}}\right)^{2}+\left(\frac{1}{q^{n}}\right)^{2}}\] \[=\sqrt{2}.\] Thus, this formula now provides an alternate way to recover (i) from Proposition 2. We also have the following corollary: **Corollary 4.1**.: If \(p_{j}\neq\frac{1}{q}\) for some \(j\), then \[\text{Arclength of }F\text{ on }[0,1]=\lim_{n\to\infty}\sum_{i=0}^{q^{n}-1}\sqrt{\left(\frac{1}{q^{n}}\right)^{2}+(\Pi_{n}(v_{i}))^{2}}=2.\] Proof.: The fact that \(p_{j}\neq\frac{1}{q}\) implies, by Proposition 2, that the arclength of \(F\) is \(2\). Using the formula in Theorem 4.2 gives the result. In general, the authors know of no way to compute this limit directly. This might be interesting due to the fact that the result can be rephrased as \[\lim_{n\to\infty}\left(\operatorname*{Average}_{v_{i}\in P^{n}}\left\{\sqrt{1+(q^{n}\Pi_{n}(v_{i}))^{2}}\right\}\right)=2,\] where we can view \(\log(p_{i})\) as the weights on a complete \(q\)-ary tree and the mapping \(\Pi_{n}\) as computing the lengths of paths (in a graph-theoretic sense) through the tree. The expression above is then taking a limit of the average distance, over all paths in the tree, between \(1\) and the perturbation-from-fair that a particular path yields. **Remark 4.2**.: In light of Remark 3.1, we know that the Cantor function is obtained when \(q=3\) and our probabilities are chosen so that \(p_{0}=p_{2}=0.5\) and \(p_{1}=0\). Proposition 2 therefore guarantees that its arclength on \([0,1]\) is equal to \(2\) and hence, we know that the limit considered in Corollary 4.1 is equal to \(2\). Interestingly, this is one of the few instances that one _can_ compute this limit directly. See [1, pg.4] for details.

## 5 Discussion

Motivated by the need to understand probabilistic computing devices and their inherent randomness, our paper aimed to investigate the distribution function associated with rolling a (possibly unfair) \(q\)-sided die. While the literature covers the \(q=2\) coin flip case extensively, the full \(q\)-sided die case had not yet been addressed. Theorems 3.1 and 3.2 helped answer Conjecture 2.1, while Theorems 3.2 and 3.3 addressed recent 'folk lore' claims (reproduced here as claims (I) and (II)).
In the spirit of these results, Theorem 3.4 provided a novel analogous result to [12, Example 31.1] for independent, but not identically distributed, coin flip sequences. Adding to these investigations, we have provided two theoretical tools to compare an unfair distribution, \(F(x)\), to a fair one, \(y=x\). Both use the iterative construction of the distribution given in (7). Theorem 4.1 in Section 4.1 provided an upper bound on \(\|x-F(x)\|_{\infty}\), and Theorem 4.2 in Section 4.2 provided a formula for calculating the arclength of the \(F(x)\) after finitely many dice rolls. ### Future Mathematical Work In this paper, we investigated the analytic properties of distributions associated with unfair dice. We looked at a single die in Theorems 3.1 and 3.2 and a pair of same-sided dice in Theorem 3.3. It is reasonable to ask about analytic comparisons between pairs of dice with differing number of sides. Consider a \(q\)-sided and \(\widetilde{q}\)-sided pair of dice. Let \(x_{i}\in\{0,\dots,q-1\}\) denote the outcome of the \(i\)th toss of the \(q\)-sided die with corresponding probabilities \(p_{j}:=\mathbb{P}[x_{i}=j]\) for \(j\in\{0,\dots,q-1\}\), and let \(\widetilde{x}_{k}\) and \(\widetilde{p}_{k}\) be defined similarly for the second die (with \(k\in\{0,\dots,\widetilde{q}-1\}\)). Given \(X=\sum_{i=1}^{\infty}x_{i}q^{-i}\) and \(\widetilde{X}=\sum_{i=1}^{\infty}\widetilde{x}_{i}\widetilde{q}^{-i}\), put \(F(x)=\mathbb{P}[X\leq x]\) and \(\widetilde{F}(x)=\mathbb{P}[\widetilde{X}\leq x]\) as the respective cumulative distribution functions obtained after tossing the corresponding die an infinite number of times. Let \(\mu\) and \(\widetilde{\mu}\) be the associated probability measures, respectively. **Question 5.1**.: If \(q\neq\widetilde{q}\), then what can be said of \(\mu\) and \(\widetilde{\mu}\)? As far as we can tell, this situation is more nuanced. It seems possible to show that when \(q>\widetilde{q}\), and the \(q\) die has only \(\widetilde{q}\) possible outcomes (with the \(q-\widetilde{q}\) outcomes having zero probability), the associated measures are mutually singular. However, when \(q\) and \(\widetilde{q}\) are not relatively prime (for example, when \(q=2\) and \(\widetilde{q}=4\)), it may be the case that, for some specific choices of probabilities, the support of \(\mu\) and \(\widetilde{\mu}\) could be the same. It is further unclear what happens when \(q\) and \(\widetilde{q}\)_are_ taken to be relatively prime. Tackling this generalization would require a more thorough understanding of how the measures \(\mu\) and \(\widetilde{\mu}\) interact with base expansions different from their own. In Corollary 4.1, we showed that when \(p_{j}\neq\frac{1}{q}\) for some \(j\), \[\text{Arclength of $F$ on $[0,1]=\lim_{n\to\infty}\sum_{i=0}^{q^{n}-1}\sqrt{ \left(\frac{1}{q^{n}}\right)^{2}+(\Pi_{n}(v_{i}))^{2}}=2$}.\] We proved this by showing the distribution is singular. Ideally, we would like a direct proof of this fact to facilitate a deeper understanding of the iteration approach given by the recursive formula in (7). Specifically, we would like to rigorously quantify how much additional arclength is witnessed every time we iterate. ### Future Computational Work The largely theoretical results shown inform us about how distributions should look when unfair coins or dice are thrown. Of immediate computational interest is the formula for arclength after \(n\) tosses given in Theorem 4.2. 
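As a starting point for such experiments, the finite-\(n\) arclength of Theorem 4.2 can be evaluated by brute-force enumeration of the rank-\(n\) base-\(q\) intervals, as in the sketch below (standard-library Python; the example dice are arbitrary illustrative choices, not values from the paper).

```python
import itertools
import math

def arclength_fn(p, n):
    """Arclength of f_n = T^n x via the sum in Theorem 4.2."""
    q = len(p)
    width = q ** (-n)                          # width of each rank-n base-q interval
    total = 0.0
    for v in itertools.product(p, repeat=n):   # one tuple per rank-n interval
        rise = math.prod(v)                    # Pi_n(v_i)
        total += math.hypot(width, rise)
    return total

p_fair = [1/3, 1/3, 1/3]
p_unfair = [0.7, 0.2, 0.1]
for n in (1, 2, 4, 8):
    print(f"n = {n}:  fair {arclength_fn(p_fair, n):.4f}   unfair {arclength_fn(p_unfair, n):.4f}")
# Every fair term equals sqrt(2)/q^n, so the fair column stays at sqrt(2) ~ 1.4142,
# while the unfair column increases toward the limit 2 guaranteed by Corollary 4.1.
```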
Given precisely tuned devices returning weighted dice or coin flip results, one could feasibly form a sort of hypothesis test on how many flips one must take before noticing a deviation from a uniform distribution. Verifying the arclength formula in device simulation and devising such a test is the subject of ongoing work. Beyond the fast application of the comparison metrics, the form of the singular measures themselves provides an inspiration point for future microelectronic design and verification. In our proof of the folk theorem, Theorem 3.3, an argument is made based on a set of full measure for one weighting being a set of zero measure for all other weightings. This result then provides a basis for verifying the distribution of an array of \(p\)-weighted coin-like devices. If one had the probability measure induced by each device and had the set of all binary numbers with density \(p\), then the measure of that set determines if the device is correctly weighted. Obviously, such a distributional object does not exist. However, this theory touchpoint can guide the discovery of future approximate methods and heuristics. Future work will see us put these comparative tools into practice via simulation. We will analyze the resulting data, discuss their merits and difficulties, and offer additional refinements to their implementation.

## Acknowledgements

This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan). The authors acknowledge support from the DOE Office of Science (ASCR/BES) Microelectronics Co-Design project COINFLIPS.
2301.13531
Effects of calibration uncertainties on the detection and parameter estimation of isotropic gravitational-wave backgrounds
Gravitational-wave backgrounds are expected to arise from the superposition of gravitational wave signals from a large number of unresolved sources and also from the stochastic processes that occurred in the Early universe. So far, we have not detected any gravitational wave background, but with the improvements in the detectors' sensitivities, such detection is expected in the near future. The detection and inferences we draw from the search for a gravitational-wave background will depend on the source model, the type of search pipeline used, and the data generation in the gravitational-wave detectors. In this work, we focus on the effect of the data generation process, specifically the calibration of the detectors' digital output into strain data used by the search pipelines. Using the calibration model of the current LIGO detectors as an example, we show that for power-law source models and calibration uncertainties $\lesssim 10 \%$, the detection of isotropic gravitational wave background is not significantly affected. We also show that the source parameter estimation and upper limits calculations get biased. For calibration uncertainties of $\lesssim 5 \%$, the biases are not significant ($\lesssim 2 \%$), but for larger calibration uncertainties, they might become significant, especially when trying to differentiate between different models of isotropic gravitational-wave backgrounds.
Junaid Yousuf, Shivaraj Kandhasamy, Manzoor A Malik
2023-01-31T10:25:57Z
http://arxiv.org/abs/2301.13531v2
Effects of calibration uncertainties on the detection and parameter estimation of isotropic gravitational-wave backgrounds ###### Abstract Gravitational-wave backgrounds are expected to arise from the superposition of gravitational wave signals from a large number of unresolved sources and also from the stochastic processes that occurred in the Early universe. So far, we have not detected any gravitational wave background, but with the improvements in the detectors' sensitivities, such detection is expected in the near future. The detection and inferences we draw from the search for a gravitational-wave background will depend on the source model, the type of search pipeline used, and the data generation in the gravitational-wave detectors. In this work, we focus on the effect of the data generation process, specifically the calibration of the detectors' digital output into strain data used by the search pipelines. Using the calibration model of the current LIGO detectors as an example, we show that for power-law source models and calibration uncertainties \(\lesssim 10\%\), the detection of isotropic gravitational wave background is not significantly affected. We also show that the source parameter estimation and upper limits calculations get biased. For calibration uncertainties of \(\lesssim 5\%\), the biases are not significant (\(\lesssim 2\%\)), but for larger calibration uncertainties, they might become significant, especially when trying to differentiate between different models of isotropic gravitational-wave backgrounds. ## I Introduction Since the first detection in September 2015 [1], the LIGO [2], and the Virgo [3] gravitational wave (GW) detectors have detected nearly one-hundred compact binary merger signals [4; 5; 6]. They correspond to individual merger signals with a high signal-to-noise ratio (SNR). In addition to those high SNR signals, assuming the merger events are outliers in a much larger population of compact mergers, we also expect many low SNR signals that are hard to detect individually. The superposition of such a large number of low SNR signals would give rise to a gravitational-wave background (GWB) that could be detected with the current or next generation of GW detectors [7; 8; 9; 10]. Apart from the compact binary mergers signals, superposition of other astrophysical GW signals such as from core-collapse supernovae [11; 12], magnetars [13; 14] could also give rise to GWB. In addition to these astrophysical sources, various events that took place in the early universe such as inflation and phase transitions could also give rise to GWB [15]. The detection of GWB from astrophysical sources can help us better understand the population and the evolution of stars in the universe [16; 17; 18] while the detection of GWB from cosmological sources can provide information about the processes in the very early universe which are otherwise difficult to obtain [19]. The LIGO-Virgo-KAGRA (LVK) collaboration, in their recent analyses using data from the observing run O3, did not find any evidence of GWBs and hence placed upper limits on the amplitudes of possible isotropic [20] and anisotropic GWBs [21]. With the proposed improvements to the current GW detectors [22], it might be possible to detect the GWB from compact binary mergers [10]. Also, the proposed next-generation GW detectors [23; 24] are expected to observe the GWB from compact binary mergers with high SNRs [25; 26]. 
The data generation and various aspects of the search are expected to affect the GWB search results, and hence it is important to understand them. In this paper, we focus on the effects of the data generation, specifically that of the calibration, on the analysis results. Calibration is the process of converting the raw digital outputs of the detectors into strain data that are further used in the GW analyses. Any uncertainties in that process could translate into biases and larger uncertainties in the final results, affecting our interpretations. Typically, cross-correlation-based searches correlating data from multiple detectors are used to detect GWBs [27]. In previous such searches using LIGO-Virgo data [28; 29; 20], upper limits were calculated after marginalizing over calibration uncertainties as outlined in [30]. However, that method does not capture any biases introduced by uncertainties and systematic errors in the calibration model. In this work, we try to address that issue. In the past, this has been studied primarily in the context of the search for GW signals from individual compact binaries [31; 32; 33; 34]. Recently, such questions have also been addressed for the detection and parameter estimation of individual compact binary merger signals [35; 36; 37]. We use a similar simulation-based method [35; 36] to address the effects of calibration uncertainties on the searches for GWB. In addition, we also show that one could try to estimate the GWB and calibration model parameters simultaneously and get a reasonable signal recovery. The remainder of this paper is organized as follows. In Sec. II, we briefly introduce the model and search for GWB using data from GW detectors. In Sec. III, we discuss the calibration model used to convert the raw digital output into strain data used in GW searches. In Sec. IV, we describe the method used to quantify the effects of calibration uncertainties on the isotropic GWB searches. In Sec. V, we show the results of our analyses, and in Sec. VI conclude with the main results and future outlook.

## II Modeling and search for isotropic gravitational-wave backgrounds

An isotropic GWB is usually characterized in terms of fractional energy density in gravitational waves \(\Omega_{gw}(f)\)[27], given by, \[\Omega_{gw}(f)=\frac{f}{\rho_{c}}\frac{d\rho_{gw}}{df}\, \tag{1}\] where \(f\) is the frequency, \(d\rho_{gw}\) is the energy in gravitational waves in the frequency interval from \(f\) to \(f+df\), and \(\rho_{c}\) is the critical energy density needed to close the universe. The value of \(\rho_{c}\) is given by \[\rho_{c}=\frac{3c^{2}H_{0}^{2}}{8\pi G}\, \tag{2}\] where \(c\) is the speed of light, \(G\) is the gravitational constant and \(H_{0}\) is the Hubble constant. In this work, we use the value of the Hubble constant measured by the Planck satellite, \(H_{0}=67.9\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\)[38]. However, the conclusions drawn are independent of the actual value of \(H_{0}\). Typically, \(\Omega_{gw}(f)\) is expressed in the form of a power law, \[\Omega_{gw}(f)=\Omega_{\alpha}\left(\frac{f}{f_{\mathrm{ref}}}\right)^{\alpha}\, \tag{3}\] where \(f_{\mathrm{ref}}\) is a reference frequency. For results reported in this paper, we use a reference frequency of \(f_{\mathrm{ref}}=25\ \mathrm{Hz}\) as used in the LVK analyses [28; 29; 20]. The value of the power-law index \(\alpha\) depends on the source of GWB we are interested in.
For cosmological GWB from inflationary scenarios, we typically expect \(\alpha=0\)[15] while for astrophysical GWB from the superposition of many compact binary merger signals \(\alpha=2/3\)[16]. Similar to LVK analyses [28; 29; 20], in addition to \(\alpha=0\) and \(\alpha=2/3\), we also look at \(\alpha=3\) representing astrophysical GWB models such as from supernovae [39]. Instead of searching for \(\Omega_{gw}(f)\), traditionally, isotropic GWB searches try to estimate \(\Omega_{\alpha}\) for different values of power-law index \(\alpha\). The optimal estimator of \(\Omega_{\alpha}\), for an isotropic GWB, at a time \(t\) and at a frequency bin \(f\) is given by [18; 40], \[\hat{\Omega}_{\alpha}(t;f)=\frac{2}{T}\frac{\Re[d_{I}^{*}(t;f)d_{J}(t;f)]}{\gamma_{IJ}(f)S_{\alpha}(f)}\, \tag{4}\] where \(d_{I}(t;f)\) and \(d_{J}(t;f)\) are short-time Fourier transforms of the strain data from the two detectors \((I,J)\) evaluated at time \(t\), \(T\) is the duration of the data segments used for Fourier transforms and \(\gamma_{IJ}(f)\) is the normalized overlap reduction function for the given two detectors \((I,J)\). The function \(S_{\alpha}(f)\) is proportional to the assumed spectral shape \(\alpha\) and is given by [18; 40], \[S_{\alpha}(f)=\frac{3H_{0}^{2}}{10\pi^{2}}\frac{1}{f^{3}}\left(\frac{f}{f_{\mathrm{ref}}}\right)^{\alpha} \tag{5}\] In the weak-signal limit, the variance of \(\hat{\Omega}_{\alpha}\) is given by [18; 40], \[\sigma_{\hat{\Omega}_{\alpha}}^{2}(t;f)=\frac{1}{2T\Delta f}\frac{P_{I}(f)P_{J}(f)}{\gamma_{IJ}^{2}(f)S_{\alpha}^{2}(f)} \tag{6}\] where \(P_{I}(f)\), \(P_{J}(f)\) are the one-sided power spectral densities of the strain data from the two detectors \((I,J)\), and \(\Delta f\) is the frequency resolution. For data spanning many segments and a large frequency band, the final optimal estimators are obtained by a weighted sum, \[\hat{\Omega}_{\alpha}=\frac{\sum_{t,f}\sigma_{\hat{\Omega}_{\alpha}}^{-2}(t;f)\hat{\Omega}_{\alpha}(t;f)}{\sum_{t,f}\sigma_{\hat{\Omega}_{\alpha}}^{-2}(t;f)},\quad\sigma_{\hat{\Omega}_{\alpha}}^{-2}=\sum_{t,f}\sigma_{\hat{\Omega}_{\alpha}}^{-2}(t;f), \tag{7}\] where \(t\) runs over available time segments and \(f\) runs over discrete frequency bins in the desired frequency band.

## III Calibration model

The raw outputs of gravitational wave detectors are digitized electrical signals from the photodetectors at the output port. The process of converting these electrical signals into strain data is called _calibration_. The LIGO, Virgo, and KAGRA detectors have similar fundamentals in optical layout and control system topology [2; 3; 41]. While their methods to describe and characterize that system are different (sometimes only in subtle ways that reflect their detailed differences), any of those methods could be used to describe current GW detectors. Thus, here, we follow and choose the methods of the LIGO detectors [42; 43]. For details of different calibration techniques used in the current generation of gravitational wave detectors, see [44; 45; 42; 46]. As shown in [43], after detailed modeling of the detectors, a response function \(R(f)\) is derived, which is then used to convert the digitized electrical output into strain data \(d(f)\) using the expression, \[d(f)=\frac{1}{L}e(f)R(f) \tag{8}\] where \(e(f)\) is the digitized signal from the output photo-detectors, \(R(f)\) is the response function that converts \(e(f)\) into the differential displacement of the two arms of the detector and \(L\) is the average (macroscopic) length of the two arms.
The response function of a gravitational wave detector, in the frequency domain, can be written as [43], \[R(f)=\frac{1+A(f)D(f)C(f)}{C(f)} \tag{9}\] where \(C(f)\) is the sensing function corresponding to the response of the detector to differential changes in its two arms without any feedback control, \(A(f)\) is the actuation function used to control the positions of the mirrors and \(D(f)\) is any digital filter(s) used in the control loop. ### Sensing function The sensing function \(C(f)\) can be modeled in the frequency domain as [43; 47], \[C(f) = \left(\frac{\kappa_{C}H_{C}}{1+iff_{cc}^{-1}}\right)\left(\frac{f ^{2}}{f^{2}+f_{s}^{2}-iff_{s}Q^{-1}}\right) \tag{10}\] \[\times~{}C_{R}(f)\] where optical gain \(H_{C}\) represents the overall gain, coupled-cavity pole frequency \(f_{cc}\) defines the detector bandwidth, \(f_{s}\) and \(Q\) correspond to optical anti-spring pole frequency and its quality factor, respectively. The term \(C_{R}\) represents the frequency dependencies not captured by the other terms (for example, the response of the electronics chain used for the digitization, etc.), and \(\kappa_{C}\) is a scale factor representing the changes in the sensing function with respect to a reference time. The sensing function we use in our analysis is shown in Fig. 1. We use the pyDARM package [48] to generate the calibration model used in this work. For LIGO detectors, during the past observing runs and for frequencies \(\gtrsim 20\) Hz, the optical spring term (second term in Eq. 10) was usually close to one (for example, see [49; 50]). Since in our work, we use \(20-1726\) Hz band as done in LVK analyses [29; 20; 28], we treat the optical spring term in Eq. 10 as constant and do not study its effects in this work. ### Actuation function The actuation function is modeled in the frequency domain as [43; 47], \[A(f)=\kappa_{U}A_{U}(f)+\kappa_{P}A_{P}(f)+\kappa_{T}A_{T}(f) \tag{11}\] where \(U\), \(P\), and \(T\) represent the lowest three stages of suspensions (upper intermediate mass, penultimate, and test mass stages) used to suspend the main optics [2; 43]. \(A_{i}(f)\) (where \(i=U,P,T\)) are frequency-dependent actuation models of the three stages of the suspensions, including digital filters in the control path and analog responses of the three stages of suspensions [43]. The scale factors \(\kappa_{i}\) capture any changes in the reference actuation model of each stage, and in general, they could be time- and frequency-dependent [51]. The plots of actuation models for the three stages and the combined actuation model used in this work are shown in Fig. 2. ### Interferometer response function Apart from the notch filters used to prevent the excitation of resonances of the test mass suspensions, \(D(f)\) Figure 1: The sensing function \(C(f)\) used in our analysis. It is one of the sensing functions of the LIGO Hanford detector during the observing run O3 that is available in the pyDARM package. The unit of \(C(f)\) is the counts produced in the Analog-to-Digital converter at the output port for a meter differential length change in the two arms of the GW detector [43]. Figure 2: The actuation functions of the bottom three stages (top, penultimate, and test mass stages) and the combined actuation function used in our analysis. This is one of the models of LIGO Hanford’s main optic suspension during the observing run O3 available in the pyDARM package. 
The unit of \(A(f)\) is the differential length change produced in the two arms for a unit count in the Digital-to-Analog converter that drives the actuators [43]. is a smooth function of frequency that is decided by the feedback control morphology used. The total response function, as shown in Eq. 9, is a function of \(C(f)\), \(A(f)\), and \(D(f)\). Fig. 3 shows the response function we use in our analysis. ## IV Analysis method In this work, we look at the effects of calibration uncertainties on the recovery of GWB and on the parameter estimation of the recovered GWB. Specifically, we look at the isotropic GWBs described by power-law models with power-law indices of \(\alpha=0,2/3,3\) (see Sec. II). If the response function used to calibrate the digitized signal in Eq. 8 is not the true response function, then we get, \[d_{\rm true}(f) =d_{\rm calc}(f)\times\frac{R_{\rm true}(f)}{R_{\rm calc}(f)} \tag{12}\] \[=d_{\rm calc}(f)\times\Lambda(f) \tag{13}\] where _true_ and _calc_ correspond to the true and calculated quantities respectively. In the above Eq. 12, we have defined \(\Lambda(f)\) as, \[\Lambda(f)=\frac{R_{\rm true}(f)}{R_{\rm calc}(f)} \tag{14}\] for convenience. The uncertainties in the calibration process enter the GW analyses as \(\Lambda(f)\) shown above. We note here that \(R_{\rm true}(f)\), with measurement uncertainty, can be calculated using a length (or frequency) reference such as a photon calibrator [52; 53; 54; 55; 56], but due to difficulty in the implementation \(R_{\rm calc}(f)\) is traditionally used in the calibration process leading to the difference we see in the Eq.12. The \(R_{\rm true}(f)\) is usually in a non-parametric form while \(R_{\rm calc}(f)\) is parameterized with a relatively small number of parameters (Eq. 9). Hence from an implementation point of view, \(R_{\rm calc}(f)\) is more convenient. Because of the simple parameterization, changes in \(R_{\rm calc}(f)\) can also be easily tracked, which is also important for calibration. Moreover, the ratios \(\Lambda(f)\) are usually very close to one, and hence use of \(R_{\rm calc}(f)\) is well justified. Due to the measurement uncertainties in \(R_{\rm true}(f)\), the estimation of the ratios \(\Lambda(f)\) has both systematic and statistical uncertainties associated with it. Using Eq. 12 in Eqs.4 and 6 we get, \[\hat{\Omega}_{\alpha}(f)=\frac{2}{T}\frac{\Re\left[d_{I,\rm calc}^{*}(f)d_{J, \rm calc}(f)\Lambda_{I}^{*}(f)\Lambda_{J}(f)\right]}{\gamma_{IJ}(f)S_{\alpha}( f)} \tag{15}\] and \[\sigma^{2}_{\hat{\Omega}_{\alpha}}(f)=\frac{1}{2T\Delta f}\frac{P_{I,\rm calc }(f)P_{J,\rm calc}(f)}{\gamma_{IJ}^{2}(f)S_{\alpha}^{2}(f)}|\Lambda_{I}|^{2}| \Lambda_{J}|^{2}. \tag{16}\] The Eqs. 15 and 16 provide a way to estimate the effects of calibration uncertainties on the signal estimate \(\hat{\Omega}_{\alpha}\) and its variance \(\sigma^{2}_{\hat{\Omega}_{\alpha}}\). If we further assume that the ratios \(\Lambda(f)\) are real, i.e., the difference is only in the magnitude, then we get, \[\hat{\Omega}_{\alpha}(f) =\hat{\Omega}_{\alpha,\rm nocal}(f)\Lambda_{I}(f)\Lambda_{J}(f)\;, \tag{17}\] \[\sigma^{2}_{\hat{\Omega}_{\alpha}}(f) =\sigma^{2}_{\hat{\Omega}_{\alpha},\rm nocal}(f)\Lambda_{I}^{2}( f)\Lambda_{J}^{2}(f), \tag{18}\] where _nocal_ subscript corresponds to the quantities calculated in the absence of calibration uncertainties that we want. With this assumption, the simulation becomes a little bit easier. 
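For reference, the calibration-model ingredients introduced in Sec. III (Eqs. 9–11) and the ratio \(\Lambda(f)\) of Eq. (14) can be summarized in the short Python sketch below. The parameter values are illustrative placeholders only; in this paper the reference sensing and actuation models come from the pyDARM package, whose interface is not reproduced here, and the optical-spring and \(C_{R}(f)\) factors are set to one as discussed for the \(>20\) Hz band.

```python
import numpy as np

def sensing(f, kappa_C=1.0, H_C=3.2e6, f_cc=400.0):
    """Simplified sensing function of Eq. (10): only the optical gain, scale factor,
    and coupled-cavity pole are kept; the optical-spring term and C_R(f) are set to one.
    H_C and f_cc values are placeholders, not the reference LIGO model."""
    return kappa_C * H_C / (1.0 + 1j * f / f_cc)

def actuation(A_U, A_P, A_T, kappa_U=1.0, kappa_P=1.0, kappa_T=1.0):
    """Actuation function of Eq. (11); A_U, A_P, A_T are per-stage frequency-domain
    models (here they must be supplied, e.g., from a calibration model)."""
    return kappa_U * A_U + kappa_P * A_P + kappa_T * A_T

def response(C, A, D):
    """Total interferometer response function of Eq. (9)."""
    return (1.0 + A * D * C) / C

def calib_ratio(R_true, R_calc):
    """Lambda(f) = R_true(f) / R_calc(f), Eq. (14)."""
    return R_true / R_calc
```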
We can start with \(\hat{\Omega}_{\alpha,\rm nocal}(f)\) and \(\sigma^{2}_{\hat{\Omega}_{\alpha},\rm nocal}(f)\) calculated from the simulated data and using Eqs. 17, 18 and 7 we can estimate the effects of calibration uncertainties on the calculation of \(\hat{\Omega}_{\alpha}(f)\) and \(\sigma^{2}_{\hat{\Omega}_{\alpha}}(f)\). However, in Sec.V we also show the results without using this assumption. Since the response functions, \(R_{I,J}\) themselves are functions of \(A\) (Eq. 11), \(C\) (Eq. 10) and \(D\) the number of free parameters in the above equations becomes large. Due to the large number of parameters, it is difficult to calculate the effects analytically, so we use numerical simulation to calculate the effects. This method becomes more valuable when including a more complicated signal model and additional calibration parameters. For the results reported in this paper, we use one week of simulated data for Hanford and Livingston detectors using advanced LIGO design sensitivity [22]. Here, one week of data is chosen to represent the traditional long-duration analyses of GWB and to avoid complexities arising from large SNRs in individual segments [27]. We use publicly available LVK code packages [57] to calculate \(\hat{\Omega}_{\alpha}(t;f)\) and \(\sigma_{\hat{\Omega}_{\alpha}}(t;f)\). We use standard search parameters of 192-sec segment duration and frequencies from 20 Hz to 1726 Hz with a frequency resolution of 1/32 Hz as used in the LVK isotropic GWB searches [28; 29; 20]. In this work, we use the same calibration model for Hanford and Livingston detectors described in Sec. III. Figure 3: The reference response function \(R(f)\) used in our analysis. We do the following to calculate the effects of calibration uncertainties on the recovery of GWB signal. As indicated in the Eqs. 17 and 18, we multiply the \(\hat{\Omega}_{\alpha,\text{nocal}}(t;f)\) and \(\sigma^{2}_{\hat{\Omega}_{\alpha,\text{nocal}}}(t;f)\) estimators of each segment calculated using LVK code packages by distributions representing the ratios \(\Lambda(f)\). We assume Gaussian distributions for \(\Lambda(f)\), centered at one with standard deviations defined by the desired calibration uncertainty. We also truncate the Gaussian distribution at 2-sigma points on both sides to avoid the realization of unrealistic values for \(\Lambda(f)\) (for example, values close to zero or even negative). Then, using Eqs. 7, we combine the segment-wise and frequency-dependent results of \(\hat{\Omega}_{\alpha}(t;f)\)\(\sigma_{\hat{\Omega}_{\alpha}}(t;f)\) to get the final estimate and its uncertainty. Then we use SNR, defined in a frequentist approach [58], given by, \[\text{SNR}=\frac{\hat{\Omega}_{\alpha}}{\sigma_{\hat{\Omega}_{\alpha}}}\] as the detection statistics in the search for an isotropic GWB. We then compare these results against the results obtained without any calibration uncertainties. Since the difference between these results is just the application of calibration uncertainties, the differences would typically show the effects of calibration uncertainties on \(\hat{\Omega}_{\alpha}\) and \(\sigma^{2}_{\hat{\Omega}_{\alpha}}\). We further look at the effects of calibration uncertainties on the parameter estimation, specifically on the \(\hat{\Omega}_{\alpha}\) and \(\hat{\alpha}\), by varying the values of various parameters in the \(R(f)\) (see Eqs.9, 10. 11). ## V Results In this section, we present the results of our studies. 
To generate these results, we initially assume that the ratios of response function \(\Lambda(f)\) are real and hence use Eqs. 17 and 18. We note that this assumption is used to marginalize calibration uncertainties in the LVK isotropic GWB analyses [20; 28; 29]. However, for comparison, we also produce results by additionally using 1-sigma phase uncertainties of \(5^{\circ}\), the maximum of what was seen in LIGO detectors during the observing run O3 [43]. This is to show how much phase uncertainties that are currently not included in the GWB analyses affect the final results. At each frequency, we model the magnitude of \(\Lambda(f)\) by a Gaussian distribution with a mean one and standard deviation \(\sigma_{\Lambda(f)}\) that is small compared to one and phase of \(\Lambda(f)\) by a Gaussian distribution with a mean zero and standard deviation of \(5^{\circ}\). As indicated earlier, we also truncate the Gaussian distribution at 2-sigma values to avoid unrealistic realizations of \(\Lambda(f)\). ### Effect of calibration uncertainties on the isotropic GWB detection The recovered values of the \(\hat{\Omega}_{\alpha}\), \(\sigma_{\hat{\Omega}_{\alpha}}\) and SNR at various levels of calibration uncertainties for the three power law models \(\alpha=0,2/3,3\) are shown in Fig. 4. In this analysis, we increase the uncertainty from \(0\,\%\) to \(20\,\%\) in steps of \(2\,\%\). We also repeat the analysis 20 times, regenerating the \(\Lambda(f)\) values 20 times at each uncertainty level to calculate the spread on the recovered values. We also compare the results, including 1-sigma phase uncertainties of \(5^{\circ}\). From the plots, we see that as we increase the values of uncertainties, there are changes in the recovered values of \(\hat{\Omega}_{\alpha}\), \(\sigma_{\hat{\Omega}_{\alpha}}\), and SNR. The recovered values are underestimated, and the trends are similar for the three \(\alpha\) values. However, the changes in the recovered SNRs are small, almost negligible, below the calibration uncertainties of \(\sim 10\%\). Since SNR is generally used as a detection statistic, this suggests that the detection of an isotropic GWB is not significantly affected by the uncertainties in the calibration. We also see a slight reduction in the SNR for larger calibration uncertainties. The SNR dependence on the calibration uncertainty goes as \((1-\sigma^{2}_{\Lambda(f)})\) where \(\sigma_{\Lambda(f)}\) is the standard deviation of the Gaussian distribution used for the different realizations of \(\Lambda(f)\). This quadratic dependence agrees with the results previously reported in the literature [31]. The \(\hat{\Omega}_{\alpha}\), \(\sigma_{\hat{\Omega}_{\alpha}}\) change by \(\sim 10\%\) when we change the uncertainty of response function by \(\sim 20\%\). The reduction in the estimated \(\sigma_{\hat{\Omega}_{\alpha}}\) can be attributed to how we combine different time segments and frequency bins. Since we use weighted average method (see Eq. 7), any downward fluctuations in individual \(\sigma_{\hat{\Omega}_{\alpha}}(t;f)\) due to calibration uncertainties will bring down the final \(\sigma_{\hat{\Omega}_{\alpha}}\). A similar effect could be attributed to the reduction in the final \(\Omega_{\alpha}\). This suggests that the recovered values of \(\Omega_{\alpha}\) and \(\sigma_{\hat{\Omega}_{\alpha}}\) are biased in the presence of calibration uncertainties. 
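The injection of calibration errors used to produce these results can be sketched as follows. Only the magnitude of \(\Lambda(f)\) is perturbed here (the \(5^{\circ}\) phase case would multiply by an additional complex phase factor), and \(\Lambda(f)\) realizations are drawn independently for each detector and each time–frequency bin, which is one possible reading of the procedure and should be treated as an assumption, as should the array layout.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def draw_lambda(sigma_lam, shape):
    """Realization of |Lambda(f)|: Gaussian with mean 1 and std sigma_lam,
    truncated at +/- 2 sigma as described in the text."""
    if sigma_lam == 0.0:
        return np.ones(shape)
    return truncnorm.rvs(-2.0, 2.0, loc=1.0, scale=sigma_lam,
                         size=shape, random_state=rng)

def apply_calibration_error(omega_tf, var_tf, sigma_lam):
    """Apply Eqs. (17)-(18) with independent Lambda_I, Lambda_J realizations."""
    lam_I = draw_lambda(sigma_lam, omega_tf.shape)
    lam_J = draw_lambda(sigma_lam, omega_tf.shape)
    return omega_tf * lam_I * lam_J, var_tf * lam_I**2 * lam_J**2

def snr(omega_tf, var_tf):
    """Weighted combination (Eq. 7) followed by SNR = Omega_hat / sigma."""
    w = 1.0 / var_tf
    omega = np.sum(w * omega_tf) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return omega / sigma, omega, sigma

# Example loop over uncertainty levels (2% to 20%), 20 realizations each, assuming
# omega_tf and var_tf are the "nocal" segment x frequency arrays:
# for sigma_lam in np.arange(0.02, 0.22, 0.02):
#     results = [snr(*apply_calibration_error(omega_tf, var_tf, sigma_lam))
#                for _ in range(20)]
```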
Since the upper limits on \(\Omega_{\alpha}\), for example, 95 % upper limit in the frequentist approach, can be written as \[\Omega_{\alpha,95\%}\approx\hat{\Omega}_{\alpha}+2\,\sigma_{\hat{\Omega}_{ \alpha}},\] calibration uncertainties are also expected to bias the upper limit calculations. From our results, we see that if the calibration (magnitude) uncertainty is \(10\%\,(\sigma_{\Lambda(f)}=0.1)\), the upper limit would be underestimated by \(\sim 3\%\). Since this dependence on the calibration uncertainty is quadratic, this effect could become significant at larger calibration uncertainties. Such biases are not completely taken into account when estimating \(\Omega_{\alpha}\) or while calculating upper limits on \(\Omega_{\alpha}\) in the analyses reported in the literature [20; 28; 29] and need to be accounted for in future analyses. The plots also suggest that including phase uncertainties at the level of \(\lesssim 5^{\circ}\) does not change the results significantly. Hence, as done in LVK analyses [20; 28; 29], phase uncertainties can be neglected if they are \(\lesssim 5^{\circ}\) when searching for isotropic GWB using LVK data. ### Effects of the calibration uncertainties on the parameter estimation of isotropic GWBs The second part of the study looks at the effects of calibration uncertainties on estimating the parameters of the isotropic GWB signals. Here we mainly focus on the estimation of \(\Omega_{\alpha}\) and \(\alpha\) (see Eq. 3). In Sec. V.1, Fig. 4 already shows the effect of the uncertainties of the response function as a whole on the recovery of \(\Omega_{\alpha}\). Instead of the uncertainties of the total response function, in this section, we look at the effects of individual calibration parameters on the recoveries of \(\Omega_{\alpha}\) and \(\alpha\). Since we are using the parameters that make up the calibration model, in the literature, this is considered a physically motivated approach to include calibration uncertainties in the signal analyses [35, 36]. In this study we mainly focus on the parameters \(\kappa_{C}\), \(f_{cc}\) (see Sec. III.1), \(\kappa_{U}\), \(\kappa_{P}\) and \(\kappa_{T}\) (see Sec. III.2). Other parameters in the response function tend to be more or less constant during an observing run, or their effects are small, and hence we do not include them here. The maximum likelihood values of the recovered parameters \(\Omega_{\alpha}\) and \(\alpha\), for \(\alpha=0,2/3,3\), as functions of errors on the various calibration parameters are shown in Fig. 5. The plots in Fig. 5 show the recovered values of \(\Omega_{\alpha}\) and \(\alpha\) as we increase the errors on the calibration parameters \(\kappa_{C}\), \(f_{cc}\), \(\kappa_{U}\), \(\kappa_{P}\) and \(\kappa_{T}\) in the response function \(R(f)\) used to calibrate the detector output. For testing the recovery, we inject isotropic GWBs with amplitudes of \(\Omega_{\alpha}=1.21\times 10^{-8},1.04\times 10^{-8},2.70\times 10^{-9}\) for \(\alpha=0,2/3,3\) respectively and try to recover them with and without errors on the above calibration parameters. On the right side of the plots in Fig. 5, we also show the difference between the injected and recovered values normalized by the 1-sigma uncertainties in the recovery. To have a common y-axis on the right side, for each \(\alpha\), we use the largest 1-sigma uncertainty we observe among different calibration parameters for the normalization. 
We use the maximum likelihood method described in [59] and use dynesty[60] sampler in bilby[61] package Figure 4: Plots showing the effect of calibration uncertainty on the recovery of \(\Omega_{\alpha}\), \(\sigma_{\Omega_{\alpha}}\) and SNR for injected isotropic GWB signals described by \(\alpha=0,2/3,3\). The calibration uncertainty is quantified by the standard deviation of the Gaussian distribution \(\sigma_{\Lambda(f)}\) used for the different realizations of \(\Lambda(f)\). The solid (blue) line corresponds to no phase uncertainty, while the dotted (red) line corresponds to \(5^{\circ}\) 1-sigma phase uncertainty. for sampling the likelihoods and estimating the maximum likelihood values of \(\Omega_{\alpha}\) and \(\alpha\) (shown in Fig. 5) from \(\hat{\Omega}_{\alpha}(f)\) and \(\sigma_{\hat{\Omega}_{\alpha}}(f)\). From the plots in Fig. 5, we see that when the errors on the calibration model parameters are zero, we recover the injected values very well. However, the recovered values of \(\Omega_{\alpha}\) and \(\alpha\) become biased as we increase the error on the calibration model parameters. The errors on \(\kappa_{P}\), \(\kappa_{T}\) and \(\kappa_{C}\) significantly bias the recoveries of \(\Omega_{\alpha}\) and \(\alpha\) while \(f_{cc}\) and \(\kappa_{U}\) have very little effect. For example, for \(\alpha=2/3\), with \(10\,\%\) error on the \(\kappa_{T}\) the recovered \(\Omega_{\alpha}\) is \(\approx 2.5\,\sigma_{\Omega_{\alpha}}\) away from its true value, while with \(10\,\%\) error on the \(\kappa_{P}\) the recovered \(\alpha\) is \(\approx 1.5\,\sigma_{\alpha}\) away from its true value. We also notice that, even though \(\kappa_{T}\) significantly affects the \(\Omega_{\alpha}\) estimate, it has minimal impact on the recovery of \(\alpha\). These effects are likely due to how these different terms contribute to the interferometer response function. Rewriting Eq. 9 into contributions from different components, we get, \[R(f) = 1/C(f)+\kappa_{U}D(f)A_{U}(f) \tag{19}\] \[+\kappa_{P}D(f)A_{P}(f)+\kappa_{T}D(f)A_{T}(f).\] Fig. 6 shows the relative contribution of the different terms in Eq. 19 to the response function and also \(90\,\%\) search sensitivity region for the \(\alpha=2/3\) isotropic GWB. The \(90\,\%\) isotropic GWB search sensitivity region increases as we increase the values of \(\alpha\). For \(\alpha=2/3\), the \(90\,\%\) search sensitivity region extends up to \(\approx 45\) Hz, while for \(\alpha=0\) and \(\alpha=3\), the \(90\,\%\) search sensitivity regions extend up to \(\approx 40\) Hz and \(\approx 175\) Hz respectively. We see that in the \(90\,\%\) sensitivity region, penultimate and test mass actuation and sensing functions make the most significant contributions. The top test mass actuation function contributes \(\lesssim 10\%\) to the response function in the \(20-1726\) Hz band and hence does not affect the signal recovery. In the sensing function (see Eq.10), the dominant contribution comes from \(\kappa_{C}\). Since the typical value of \(f_{cc}\) of advanced LIGO detectors during the O3 run was \(\sim 400\,\)Hz and the \(90\,\%\) search sensitivity region extents only up to a maximum of \(\sim 200\) Hz (for \(\alpha=3\)), the effect of \(f_{cc}\) on the estimation of the parameters is minimal. Since the \(\alpha\) values of \(0\) and \(2/3\) are relatively closer, the results of \(\alpha=0\) and \(\alpha=2/3\) in Fig. 5 are very similar. We also observe that the result for \(\alpha=3\) is slightly different. 
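The maximum-likelihood recovery of \((\Omega_{\alpha},\alpha)\) described above can be sketched with bilby and dynesty as follows. The Gaussian, per-frequency-bin likelihood shown here is a simplification of the method in [59] (normalization constants and the noise-only model used for Bayes factors are omitted), the prior ranges are arbitrary, and calibration scale factors such as \(\kappa_{T}\) or \(\kappa_{C}\) could be added as extra sampled parameters in the same way for the simultaneous estimation discussed below.

```python
import numpy as np
import bilby

class PowerLawGWBLikelihood(bilby.Likelihood):
    """Gaussian likelihood over frequency bins for the cross-correlation estimates
    (a simplified stand-in for the maximum-likelihood method of [59])."""

    def __init__(self, freqs, omega_hat_f, sigma_f, f_ref=25.0):
        super().__init__(parameters={"omega_ref": None, "alpha": None})
        self.freqs, self.omega_hat_f = freqs, omega_hat_f
        self.sigma_f, self.f_ref = sigma_f, f_ref

    def log_likelihood(self):
        model = (self.parameters["omega_ref"]
                 * (self.freqs / self.f_ref) ** self.parameters["alpha"])
        return -0.5 * np.sum(((self.omega_hat_f - model) / self.sigma_f) ** 2)

# Arbitrary, broad priors for illustration only.
priors = dict(
    omega_ref=bilby.core.prior.LogUniform(1e-10, 1e-7, name="omega_ref"),
    alpha=bilby.core.prior.Uniform(-4, 4, name="alpha"),
)

# result = bilby.run_sampler(
#     likelihood=PowerLawGWBLikelihood(freqs, omega_hat_f, sigma_f),
#     priors=priors, sampler="dynesty", nlive=512, outdir="outdir", label="gwb_pl",
# )
```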
Since \(\alpha=3\) probes a much larger frequency band of \(\sim 20-175\) Hz where contributions from \(\kappa_{C}\) and \(\kappa_{T}\) to the response function tend to be larger on average compared to the other parameters (see Fig. 6), \(\kappa_{C}\) and \(\kappa_{T}\) start to affect the recoveries of \(\Omega\) and \(\alpha\) significantly. We see this for \(\alpha=3\) in Fig. 5. The design of the detector, for example, the finesse of arm and recycling cavities, determines the cavity pole frequency, while the control architecture of the detector determines the relative contributions of different actuation stages. Thus, the effects of different calibration factors on the isotropic GWB search heavily depend on the detector's design and operation. We also try to simultaneously estimate the calibration and GWB signal parameters to see how well we can do. Figure 5: Effect of the errors in various calibration model parameters on the recovery of the signal parameter \(\Omega_{\alpha}\) and \(\alpha\) for isotropic GWB signals described by \(\alpha=0,2/3,3\). The solid lines correspond to the maximum likelihood values, and the shaded regions indicate \(68\,\%\) confidence interval. The injected values of \(\Omega_{\alpha}\) are \(1.21\times 10^{-8}\), \(1.04\times 10^{-8}\), and \(2.70\times 10^{-9}\) for \(\alpha=0,2/3,3\) respectively. Here we use (simulated) uncalibrated raw digital signals to extract all the parameters. Fig. 7 shows an example of the simultaneous estimation of all the parameters for the \(\alpha=2/3\) signal model. The plot shows that, along with the GWB model parameters, we can also infer the values \(\kappa_{P}\), \(\kappa_{T}\), and \(\kappa_{C}\) to some level, but recoveries of \(f_{cc}\) and \(\kappa_{U}\) are poor which are consistent with the results in Fig. 5. For comparison, we also show the recovery of GWB model parameters using calibrated data without any uncertainties. The plots also have the Bayes factors, comparing the signal vs. noise hypothesis for those two cases. We see that the Bayes factors do not change significantly in the two cases (as expected, it is slightly lower when we estimate calibration parameters also). However, the posteriors of GWB parameters are very broad and probably biased when we simultaneously estimate the GWB and calibration model parameters. So it is crucial to have well-calibrated data to get better posteriors on the signal parameters and a better Bayes factor. ## VI Conclusions In this work, we have studied the effect of calibration uncertainties on the detection and parameter estimation of isotropic GWB signals. We focused on the amplitude (\(\Omega_{\alpha}\)) and power law index (\(\alpha\)) of power-law isotropic GWBs. We find that, for the second generation of gravitational wave detectors, when the calibration uncertainties are less than \(\sim 10\%\), they do not significantly affect the detection of a GWB signal. The calibration uncertainties of the LIGO detectors reported during the last observing run O3 are well within this \(\sim 10\%\) limit [43]. We also find that the recovery of isotropic GWB model parameters could be affected depending on which calibration parameter is poorly constrained and its uncertainty level. The recovered values of signal parameters are biased due to errors in calibration model parameters. 
Even though the current errors on the individual model parameters of LIGO detectors are much smaller (\(\lesssim 1\%\)), the cumulative effect of the different parameters could bias the recovered GWB parameters. Currently, this bias is not considered during the GWB parameter estimation or upper limit calculation. For a calibration uncertainty of \(\sim 5\) % of the interferometer response function (the maximum reported for the LIGO detectors during O3 at the \(90\) % level), the biases in estimating GWB amplitudes or their upper limits are not significant (\(\lesssim 2\) %). However, this might become significant for larger calibration uncertainties, especially when we try to differentiate between different models of GWB. In this work, we also try to estimate the isotropic GWB and calibration model parameters simultaneously and find that we could detect the GWB signal, albeit with some loss of Bayes factor (SNR). However, the posteriors of the GWB signal parameters become very broad and probably biased due to their correlation with some of the calibration parameters. This highlights the importance of well-calibrated data for detecting and recovering GWB signals, which is expected to be available in the near future. We also note that the analysis presented in this paper depends strongly on the GW detectors' calibration model (parameters). Hence, one might need to repeat this study when the calibration model changes significantly, for example, for future detectors. However, if the calibration uncertainties are kept small (\(\lesssim 5\%\)), as we see in our analysis in this paper, the effects on the isotropic GWB analyses are expected to be small. Since the calibration model depends on the detector design and its control system architecture, one could also choose to design future detectors so as to reduce the effect of calibration uncertainties. This is something that could be studied further. One could also extend the study reported in this paper to estimate the effect of calibration uncertainties on GWBs with more complicated model parameters or on anisotropic GWBs.

## Acknowledgements

The authors thank Jeffrey S Kissel for providing useful comments on the draft. The authors acknowledge the use of the IUCAA LDG cluster Sarathi for the computational/numerical work. J. Yousuf also acknowledges IUCAA for providing accommodation while carrying out this work. J. Yousuf is thankful to the Department of Science and Technology (DST), Government of India, for providing financial assistance through the INSPIRE Fellowship. For this work, we used the software packages pyDARM [48], bilby [61], stochastic [57] and Matplotlib [62].

Figure 6: Relative contribution of various calibration parameters to the interferometer response function and \(90\) % search sensitivity region for the \(\alpha=2/3\) GWB search. For \(\alpha=0\) and \(\alpha=3\), the \(90\) % search sensitivity regions extend up to \(\approx 40\) Hz and \(\approx 175\) Hz respectively. Because of the non-trivial phase relationship between different components in Eq. 19, we see that individual components' relative contributions to the response function can even go above one.
2309.13475
Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers
Autonomous systems, such as self-driving cars and drones, have made significant strides in recent years by leveraging visual inputs and machine learning for decision-making and control. Despite their impressive performance, these vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs. Such errors can cascade to catastrophic system failures and compromise system safety. In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures. Specifically, we leverage a reachability-based framework to stress-test the vision-based controller offline and mine its system-level failures. This data is then used to train a classifier that is leveraged online to flag inputs that might cause system breakdowns. The anomaly detector highlights issues that transcend individual modules and pertain to the safety of the overall system. We also design a fallback controller that robustly handles these detected anomalies to preserve system safety. We validate the proposed approach on an autonomous aircraft taxiing system that uses a vision-based controller for taxiing. Our results show the efficacy of the proposed approach in identifying and handling system-level anomalies, outperforming methods such as prediction error-based detection, and ensembling, thereby enhancing the overall safety and robustness of autonomous systems.
Aryaman Gupta, Kaustav Chakraborty, Somil Bansal
2023-09-23T20:33:38Z
http://arxiv.org/abs/2309.13475v5
# Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers ###### Abstract Autonomous systems, such as self-driving cars and drones, have made significant strides in recent years by leveraging visual inputs and machine learning for decision-making and control. Despite their impressive performance, these vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs. Such errors can cascade to catastrophic system failures and compromise system safety. In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures. Specifically, we leverage a reachability-based framework to stress-test the vision-based controller offline and mine its system-level failures. This data is then used to train a classifier that is leveraged online to flag inputs that might cause system breakdowns. The anomaly detector highlights issues that transcend individual modules and pertain to the safety of the overall system. We also design a fallback controller that robustly handles these detected anomalies to preserve system safety. We validate the proposed approach on an autonomous aircraft taxiing system that uses a vision-based controller for taxing. Our results show the efficacy of the proposed approach in identifying and handling system-level anomalies, outperforming methods such as prediction error-based detection, and ensembling, thereby enhancing the overall safety and robustness of autonomous systems. **Website:** phoenixrider12.github.io/FailureMitigation ## I Introduction With the advances in deep learning and computer vision, modern autonomous and robotic systems have reached a level of competence that, in some instances, exceeds human capabilities [1]. Nevertheless, given the vast array of scenarios these vision-driven controllers might face in the real world, we can never entirely rule out the occurrence of uncommon corner cases and failure scenarios. Thus, even as we aspire for our robots to adapt to new conditions, there is a growing need for runtime anomaly detection systems that can provide early alerts when a system encounters anomalies, helping to counteract and mitigate potential rare failures [2]. Anomaly detection (AD) methods for learning components typically fall under two categories: distributional shift methods and functional uncertainty methods [3]. The former aims to detect and mitigate distribution shifts between training and the test times. These methods include approaches that artificially inject noise in the ground truth data (e.g., images) to force a distribution shift and create anomalous data [4]. Another line of research in this direction aims to develop learning algorithms that are distributionally robust, optimizing the worst-case performance within a pre-specified envelope of distributional shifts to guarantee out-of-distribution performance [5, 6] or performing domain randomization during training [7]. Since detecting distribution shifts can be challenging in general, especially when the test distribution is unknown _a priori_, the functional uncertainty methods instead detect the inputs that are either dissimilar to the training data [8, 9, 10, 11, 12] or lead to erroneous or low-confidence predictions [13, 14, 15, 16, 17, 18]. The above methods predominately detect anomalous inputs at the component level; however, such component level monitoring (e.g., detecting image classification errors) can be insufficient to prevent system-level faults. 
Seemingly minor errors in the individual modules can cascade into catastrophic effects at a system level. Hence, a system-level view of such problems is often encouraged and is the main motivation behind this work. In this work, we present an approach to detect and mitigate such system-level anomalies for autonomous systems that leverage learning-driven vision-based controllers for decision making. Our work builds upon the study conducted in [19] that leverages a visual simulator within a Hamilton-Jacobi Reachability framework to expose the closed-loop failures of the system under the vision-based controller. However, the proposed framework requires privileged information about the test environment and is computationally not suitable for online applications. Our key idea is to utilize this framework _offline_ to stress-test and automatically mine the system-level failures of the closed-loop system across a variety of environment conditions. Ultimately, this process provides us with a diverse set of input image sequences which when seen by the vision-based controller leads to an overall system failure. These images are then used to train a simple classifier that can serve as an anomaly detector during runtime. We demonstrate that the resultant AD implicitly leverages the revealed failure modes to classify whether a previously unseen input image has the potential to trigger a system failure, without requiring any domain-specific heuristics. Next, we employ the AD to trigger a simple fallback controller online that can trade the system's performance to assure system safety whenever the anomaly detector determines the system is at risk of entering the failure zone. We illustrate the joint properties of our anomaly detector and fallback controller on an autonomous aircraft taxiing case study, leveraging a vision-based controller. We compare our method against commonly used component-level AD techniques, such as predicting ensemble-based uncertainty, and prediction error-based anomaly detectors to highlight the key advantages of the proposed system-level anomaly detector. ## II Problem Setup Consider a robot whose dynamics are given by, \(\dot{\mathbf{x}}=f(\mathbf{x},u)\) where, state \(\mathbf{x}\in\mathbb{R}^{n}\) and control \(u\in\mathbb{U}\) (a compact set). The robot possesses a sensor \(S\), that allows it to perceive visual inputs from its surroundings at any given state \(\mathbf{x}\). We can express the mapping between the state to the input as, \(I=S(\mathbf{x})\). \(I\) could be an RGB image, a pointcloud, etc. Additionally, we have a vision-based controller, \(\pi\), that maps \(I\) to the control \(u\), and defined as, \(u\coloneqq\pi(I)\). Note that \(\pi\) can be an end-to-end policy or it can consists of several sub-modules some of which may not be data-driven. Let \(\zeta^{\pi}_{\mathbf{x}}(\tau)\) be the robot's state achieved at time \(\tau\) when it starts from state \(\mathbf{x}\) at time \(t=0\), and follows the policy \(\pi\) over \([0,\tau]\). Finally, let \(\mathcal{O}\) denote a set of undesirable states (or failure states) for the system. As an example, \(\mathcal{O}\) could represent obstacles for a ground robot. Thus, an initial state \(\mathbf{x}\) is considered unsafe for the system if \(\exists s\in[0,\tau]\), \(\zeta^{\pi}_{\mathbf{x}}(s)\in\mathcal{O}\). 
The set of input images the system sees, starting from such unsafe states, is thus considered _anomalous for the closed-loop system_ (as these images eventually steer the system to \(\mathcal{O}\)) and denoted as \(\mathcal{I}_{unsafe}\). In this work, we are primarily interested in obtaining a mapping, \(\sigma\), that provides a binary decision of whether a given input \(I\) can possibly lead to the failure of the system: \[\sigma:I\rightarrow\ \{0,1\} \tag{1}\] where 1 means that \(I\) is anomalous, and 0 means it is not. Note that an ideal \(\sigma\) should output 1 whenever \(I\in\mathcal{I}_{unsafe}\) and 0 otherwise. In addition, we seek to find a mitigation system that can preserve system safety even if the robot encounters an anomalous input in \(\mathcal{I}_{unsafe}\). Generating \(\sigma\) requires addressing a few challenges: (a) \(\mathcal{I}_{unsafe}\) is typically hard to obtain as the test environment is not known _a priori_; (b) even in known environments, obtaining high-dimensional inputs, e.g., RGB images, that lead to system-level failures is a challenging problem; finally, (c) for vision-based controllers, \(\mathcal{I}_{unsafe}\) varies with changes in the robot's surroundings, e.g., \(\mathcal{I}_{unsafe}\) for an indoor navigation robot might change as we move from one room to another. Thus, formulating a general AD pipeline that can recognize features of anomalous input images, without prior knowledge of the new surroundings, is non-trivial. _Running example (TaxiNet)_. We introduce the autonomous aircraft taxiing problem [20] as a running example to illustrate the key aspects of our framework. Here, the robot is a Cessna 208B Grand Caravan aircraft modeled as a three-dimensional non-linear system with dynamics: \[\dot{p}_{x}=v\cos(\theta)\quad\dot{p}_{y}=v\sin(\theta)\quad\dot{\theta}=u \tag{2}\] where \(p_{x}\) is the crosstrack error (CTE), \(p_{y}\) is the downtrack position (DTP) and \(\theta\) is the heading error (HE) of the aircraft in degrees from the centreline (Fig. 1(a) shows how these quantities are measured). \(v\) is the linear velocity of the aircraft, kept constant at 5 m/s, and the control \(u\) is the angular velocity. The goal of the aircraft is to follow the centreline as closely as possible using the images obtained through a camera mounted on its right wing. For this purpose, the aircraft uses a Convolutional Neural Network (CNN), which returns the estimated CTE and HE, \((\hat{p}_{x},\hat{\theta})\). A proportional controller (P-Controller) then takes these predicted tracking errors to return the control input as follows: \[u:=\tan(-0.74\hat{p}_{x}-0.44\hat{\theta}) \tag{3}\] Hence, the policy \(\pi\) is a composition of the CNN and the P-Controller. Intuitively, the P-controller is designed to steer the aircraft towards the centreline based on the state estimate provided by the CNN. The image observations are obtained using the X-Plane flight simulator, which can render the RGB image, \(I\), from a virtual camera (\(S\)) mounted on the right wing of the aircraft at any state and a given time of day (see Fig. 1(b)-(g) for representative images under different simulation conditions). Note that the CNN here is also trained in simulation using the data collected from X-Plane. We define the unsafe states for the aircraft as \(\mathcal{O}=\{\mathbf{x}:|p_{x}|\geq B\}\), where \(B\) is the runway width. Thus, \(\mathcal{O}\) corresponds to the aircraft leaving the runway.
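To make the closed-loop setting concrete, the following Python sketch rolls out the taxiing dynamics of Eq. (2) under the P-controller of Eq. (3) and checks whether the failure set \(\mathcal{O}\) is reached. The `perception` callable stands in for the TaxiNet CNN applied to a rendered X-Plane image; the time step, Euler integration, unit handling, and the numeric bound on \(|p_{x}|\) are assumptions made purely for illustration.

```python
import numpy as np

V = 5.0  # constant taxi speed [m/s], as in the text

def p_controller(cte_hat, he_hat_deg):
    """P-controller of Eq. (3); the heading-error estimate is in degrees as in the text."""
    return np.tan(-0.74 * cte_hat - 0.44 * he_hat_deg)

def rollout(x0, perception, bound_B=11.0, horizon=8.0, dt=0.05):
    """Euler rollout of Eq. (2) under the vision-based policy (perception + P-controller).
    Returns False if the failure set O = {|p_x| >= B} is entered within the horizon.
    bound_B = 11.0 corresponds to the X value quoted for KMWH later in the paper."""
    px, py, theta = x0                      # theta kept in radians for the dynamics
    for _ in range(int(horizon / dt)):
        cte_hat, he_hat_deg = perception((px, py, theta))
        u = p_controller(cte_hat, he_hat_deg)
        px += V * np.cos(theta) * dt
        py += V * np.sin(theta) * dt
        theta += u * dt
        if abs(px) >= bound_B:              # entered O: closed-loop failure
            return False
    return True

# Idealized perception (perfect state estimates) purely for illustration;
# in the actual pipeline the estimates come from the TaxiNet CNN and can be wrong.
safe = rollout((2.0, 100.0, np.deg2rad(10.0)),
               perception=lambda x: (x[0], np.degrees(x[2])))
print("trajectory stays on the runway:", safe)
```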
Our goal is to find the mapping, \(\sigma\), that is able to classify whether an input image \(I\) is likely to eventually drive the system off the runway. ## III Background: Hamilton-Jacobi Reachability Analysis In this work, we will use Hamilton-Jacobi (HJ) Reachability analysis to obtain a dataset of system-level anomalies offline. We now provide a brief overview of HJ reachability and refer the readers to [21] for more details. In reachability analysis, we focus on calculating the _Backward Reachable Tube (BRT)_ of the system. The BRT refers to the collection of initial states from which an agent, starting from these states and following policy \(\pi(\mathbf{x})\), can reach the target set \(\mathcal{O}\) within the time interval \([t,T]\): \[\mathcal{V}\coloneqq\{\mathbf{x}:\exists\tau\in[t,T],\zeta^{\pi}_{\mathbf{x}}( \tau)\in\mathcal{O}\} \tag{4}\] HJ reachability analysis allows us to compute the BRT for general nonlinear systems, even when dealing with control and disturbance inputs affecting the system within arbitrarily shaped target sets. Specifically, to compute the BRT, the target set is first represented as a sub-zero level set of a function \(l(\mathbf{x})\), denoted as \(\mathcal{O}=\{\mathbf{x}:l(\mathbf{x})\leq 0\}\)[22, 23]. The function \(l(\mathbf{x})\) typically represents the signed distance from Fig. 1: **(a)**\(p_{x}\), \(p_{y}\), \(\theta\) denote the state of the aircraft; dashed-white lines show FoV of the camera. Runway simulation images with clear sky at **(b)**9AM, **(c)**5PM, and **(d)**9PM, and overcast clouds at **(e)**9AM, **(f)**5PM, and **(g)**9PM taken by camera mounted on the aircraft for the KMWH runway showing the variations in lighting conditions and shadows (**(c),(f)**) for changes in the environment. a state to the target set \(\mathcal{O}\). With this formulation, the BRT computation can be reframed as an optimal control problem that involves finding a value function defined as: \[V(\mathbf{x},t)=\min_{\tau\in[t,T]}l(\zeta_{\mathbf{x}}^{\pi}(\tau)) \tag{5}\] This value function (defined in (5)), can be iteratively computed using dynamic programming principles leading to a partial differential equation known as the Hamilton-Jacobi-Bellman Variational Inequality (HJB-VI) [21]: \[\begin{split}&\min\{D_{t}V(\mathbf{x},t)+H(\mathbf{x},t),l( \mathbf{x})-V(\mathbf{x},t)\}=0\\ &\text{with }V(\mathbf{x},T)=l(\mathbf{x})\end{split} \tag{6}\] In this equation, \(D_{t}\) and \(\nabla\) represent the time and spatial gradients of the value function, respectively. The Hamiltonian, denoted as \(H\coloneqq\langle\nabla V(\mathbf{x},t),f(\mathbf{x},\pi(\mathbf{x}))\), embeds the system dynamics in the HJI-VI. Essentially, (6) is a continuous-time counterpart to the Bellman equation in discrete time. Once the value function is determined, the BRT is obtained as the set of states from which entry into the target set is unavoidable. Consequently, the BRT corresponds to the subzero level set of the value function: \[\mathcal{V}=\{\mathbf{x}:V(\mathbf{x},t)\leq 0\} \tag{7}\] In the next section, we describe how we can use \(\mathcal{V}\) for obtaining a system-level anomaly detector. ## IV Learning and Mitigating System Failures using Reachability Analysis In this work, we use a learned classifier model as our anomaly detector. Our key idea is to compute the BRT of the system offline under a vision-based controller for a diverse set of environments. The BRT can then be used to label the training data for our classifier. 
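For intuition, the sketch below approximates the value function of Eq. (5) for a fixed closed-loop policy by a simple discrete-time backward recursion over sampled states, so that the BRT of Eq. (7) is recovered as the sub-zero level set of the result. The paper instead solves the HJB-VI (Eq. 6) on a grid with the Level Set Toolbox; the state sampling and nearest-neighbour lookup used here are crude simplifications for illustration only.

```python
import numpy as np

def closed_loop_brt_values(states, step, l, horizon, dt):
    """Schematic approximation of V(x, t) in Eq. (5) for a fixed (closed-loop) policy:
    V is initialized to l(x) at the final time and updated backwards via
    V(x) <- min( l(x), V(x_next) ), with x_next the closed-loop successor after dt.

    states : (N, n) array of sampled states
    step   : maps a state to its closed-loop successor state after time dt
    l      : function with l(x) <= 0 exactly on the target (failure) set O
    """
    l_vals = np.array([l(x) for x in states])
    V = l_vals.copy()                       # V(x, T) = l(x)
    for _ in range(int(horizon / dt)):
        V_new = np.empty_like(V)
        for i, x in enumerate(states):
            x_next = step(x)
            # crude nearest-neighbour interpolation of V at the successor state
            j = int(np.argmin(np.linalg.norm(states - x_next, axis=1)))
            V_new[i] = min(l_vals[i], V[j])
        V = V_new
    return V   # BRT = { x : V(x) <= 0 }, cf. Eq. (7)
```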
We now describe this process in detail over three steps. ### _Automatic labels for system-level anomalous inputs_ To compute a dataset of anomalous inputs, we first compute the set of all starting states that lead the closed-loop system to \(\mathcal{O}\) under the vision-based controller. In other words, we compute the BRT of the system under the vision-based controller. Once the BRT is obtained, the images corresponding to the states inside the BRT can be used as examples of system-level anomalous inputs, i.e., \[I\in\mathcal{I}_{unsafe}\Leftrightarrow\mathbf{x}\in\mathcal{V},\text{ where, }I=S(\mathbf{x}) \tag{8}\] However, the BRT computation in (6) typically requires an analytical model of \(\pi(x)\), which is not possible in our case, since an analytical model of vision sensor \(S\) is generally not available. To overcome this challenge, we will follow the approach used in [19], wherein the BRT is computed using image samples collected from a photorealistic simulator. These image samples are then processed through the vision-based controller to obtain samples of \(u\), which are subsequently used to approximate the BRT. Thus, access to photorealistic simulators provides us with an opportunity to inexpensively sample from high-dimensional state spaces, which could otherwise be a challenging and tedious process. Moreover, since the image observed at a particular state depends on the environment conditions (e.g., lighting conditions, weather, etc.), we compute BRTs for a diverse set of environment conditions to capture a wider set of system anomalies in our training dataset, allowing for a better generalization during runtime. Once all the BRTs are computed, we can sample states randomly in the state space, render the corresponding images, and automatically label any sampled image as safe or anomalous depending on whether the state is outside or inside the BRT. Note that by the construction of the training dataset, our method targets image inputs that lead to system-level failures, without requiring any manual labeling. ### _Learning an anomaly detector_ Equipped with a training dataset of system-level anomalies, we train a binary classifier to predict the label for a novel input image. Specifically, we use a deep neural network (DNN)-based classifier, given their remarkable success with image classification tasks. The DNN takes as input an image and returns the softmax scores of the input being an anomaly. During training, we assume access to the system's BRT and hence the true labels of an image being anomalous. However, during testing, we do not possess the BRT information. Our model is seen to identify anomalies during testing without needing the BRT by learning the underlying features of anomalous inputs. We evaluate the performance and highlight some interesting cases of the detected anomalies by the trained classifier in Sec. V. ### _Fallback controller_ Equipped with an anomaly detector, we propose a safety-preserving controller pipeline for the system. Specifically, when the system detects a possible anomalous input, we switch the system's vision-based controller \(\pi\) with a fallback controller \(\pi^{*}\). For an input \(I\), the overall control input for the system is given as: \[u_{\text{filtered}}=\begin{cases}\pi^{*}(I)),&\text{if }\sigma(I)=1\\ \pi(I),&\text{otherwise,}\end{cases} \tag{9}\] where \(\sigma\) is the learned anomaly detector. The condition \(\sigma(I)=1\) is triggered if \(I\) is an anomaly, and hence the system switches to \(\pi^{*}\). 
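A sketch of the automatic labeling step of Sec. IV-A is shown below: states are sampled, the corresponding simulator images are rendered, and a state inside the BRT (non-positive value function) yields label 1 (anomalous), as in Eq. (8). All three callables are assumed interfaces standing in for the offline BRT, the state sampler, and the photorealistic simulator.

```python
import numpy as np

def label_states(value_fn, sample_state, render, n_samples=20000, rng=None):
    """Generate (image, label) pairs following Sec. IV-A.

    value_fn     : returns V(x) for a state x (from the precomputed BRT)
    sample_state : draws a random state from the state space
    render       : produces the simulator image I = S(x) at a state x
    """
    rng = rng or np.random.default_rng()
    dataset = []
    for _ in range(n_samples):
        x = sample_state(rng)
        label = int(value_fn(x) <= 0.0)   # 1 = anomalous (inside BRT), 0 = safe
        dataset.append((render(x), label))
    return dataset
```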
The choice of \(\pi^{*}\) is primarily aimed at maintaining system safety and may vary from system to system. For example, \(\pi^{*}\) might be an alternative controller that is safer but computationally intensive (e.g., a full-stack SLAM for navigation), or one that trades off performance for safety (e.g., coming to a complete stop), or one that is logistically expensive (e.g., relying on a human operator). Designing a good \(\pi^{*}\) for a system is itself an interesting research question that we defer to future work.

## V Case Study: Autonomous Aircraft Taxiing

_Generation of the Anomaly Dataset._ To generate data for training the AD, we compute the BRT of the aircraft under the TaxiNet controller across two different times of the day (9 AM and 9 PM), two cloud conditions (clear and overcast), and three different runways (codenamed KMWH, KATL, PAEI), leading to 12 different BRTs. Note that each of the above conditions might lead to a different visual input \(I\) at the same state **x** (see Fig. 1 (b)-(e) for sample images), and hence lead to a different control input being applied to the aircraft, and subsequently, different BRTs. To compute the BRTs, we use the Level Set Toolbox (LST) [24], which solves the HJB-VI in (6) numerically over a state-space grid for a time horizon of 8s. Following [19], we use a uniform \(101\times 101\times 101\) grid over \(p_{x}\in[-X,X]m\), \(p_{y}\in[100,250]m\) and \(\theta\in[-28^{\circ},28^{\circ}]\) (the value of \(X\) depends on the runway, e.g., for KMWH, X = 11). The LST requires the control input \(u\) at each of the grid points to compute the BRT, which is obtained by rendering the image using the X-Plane simulator at that state and querying the controller in (3). This consists of passing the rendered image through the TaxiNet CNN, followed by the P-controller. We refer the readers to [19] for more details on the BRT computation. Fig. 2 (left) shows a slice of the computed BRT for the KMWH runway at 9 AM in clear conditions for \(p_{y}=100m\). The gray area represents the set of starting states of the aircraft from which it will eventually leave the runway under TaxiNet, whereas white represents the safe area. We next randomly sample 20K states for each of the 12 conditions and render the images for these states, leading to an overall training dataset of 240K images. If an image is generated from a state present in the BRT, we label it as an anomaly. Otherwise, we label it as safe. _Anomaly Detector._ We next train a binary classifier on the collected dataset. Our classifier is a pre-trained EfficientNet-B0 model, with the last layer replaced by a fully connected layer that feeds into a softmax output. We trained the classifier using the cross-entropy loss and the Adam optimizer for 20 epochs. _Evaluation._ To test the generalization capabilities of the learned AD, we test it on an unseen time of the day (5 PM) for the three airports present in the training set, as well as on two unseen airports, KSFO and KEWR, across both cloud conditions (clear and overcast). Specifically, we measure the recall and accuracy of the learned AD. A higher recall value tells us that our AD can reliably detect true positives, i.e., the AD returns a conservative prediction if it is unsure of an input. Such behavior errs on the side of caution to produce a safety-first detector.
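A hypothetical training-loop sketch matching the description above (pre-trained EfficientNet-B0, two-way head, cross-entropy loss, Adam, 20 epochs) is given below; the learning rate, batch size, and the placeholder tensors standing in for the 240K labeled simulator images are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Placeholder data standing in for the BRT-labeled simulator images (assumption).
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

# Pre-trained EfficientNet-B0 with the final layer replaced by a 2-way (safe/anomaly) head;
# the softmax is applied implicitly inside CrossEntropyLoss.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    model.train()
    for batch_images, batch_labels in train_loader:
        batch_images, batch_labels = batch_images.to(device), batch_labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
```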
To compute these metrics, we also obtain the BRT of the system for the test environments; however, note that during testing, our network has access to only the sampled image _without_ any privileged information regarding the system, surroundings, or the BRTs. We summarize the performance of the anomaly detector in Table I (Here (C) = cloudy, (O) = overcast). As evident from the results, the learned AD is consistently able to detect system-level anomalies in new environments. We illustrate some of the representative images that were classified as anomalous in Fig. 3. The AD is able to learn the safe limits of the runway and that if the system is close to the runway boundary. TaxiNet may not be able to correctly estimate aircraft's state and it might fail under the vision-based controller. This may not be particularly surprising as most states near the runway boundary (white lines in Fig. 3 (a-b)) are unsafe in the training dataset (the region near the runway boundary is contained in the BRT in Fig. 2), potentially allowing the AD to learn that such images often lead to a failure. However, what's interesting is that the AD learns that a similar image _will not_ cause a system failure during night time because of the runway lights, which when are lit, help the TaxiNet to estimate the aircraft position more accurately, avoiding a failure. Finally, we noticed that several images classified as anomalies contained the aircraft runway markings (Fig. 3 (d-f)). Such an emergent understanding of semantic failure mode is quite impressive since in the prior work [19], the authors had to manually analyze the failures to recognize these patterns as anomalies. Our AD justifies the use of a learned classifier by practically automating the detection process without any inclusion of heuristics or manual intervention. _Baselines._ We next compare our method to a few commonly used techniques for anomaly detection. #### Iii-B1 Prediction Error-based Labels We use a prediction error-based labeling scheme instead of the proposed HJ Reachability-based scheme to collect the training dataset for our AD. Specifically, if the TaxiNet prediction error is above a certain threshold for a particular image, then we label this image as an anomaly. We overlay the prediction error-based anomaly labels (blue) on top of the BRT-based labels (red) for one of the environments in Fig. 4(a). It is evident that the prediction error-based labels may not be a good representative of the system-level failures. For example, the states near the Yellow star are anomalous as per prediction error but do not actually cause the system failure (the yellow trajectory in Fig. 4(c)), resulting in a pessimistic AD and hampering the system performance. On the other hand, certain states and images may have a small error from the TaxiNet module perspective (states near the Green star) and are not classified as anomalies; yet, they \begin{table} \begin{tabular}{c c c c c} \hline \hline **Airport ID** & \begin{tabular}{c} **Recall (\%)** \\ (C) \\ \end{tabular} & \begin{tabular}{c} **Accuracy (\%)** \\ (O) \\ \end{tabular} & \begin{tabular}{c} **(C)** \\ \end{tabular} & \begin{tabular}{c} **(O)** \\ \end{tabular} \\ \hline KMWH & 91.50 & 92.22 & 95.89 & 97.02 \\ KATL & 97.44 & 95.89 & 98.19 & 97.97 \\ PAEI & 93.24 & 93.71 & 97.71 & 97.99 \\ KEWR & 96.31 & 94.20 & 96.29 & 96.72 \\ KSFO & 90.34 & 90.56 & 89.72 & 90.62 \\ \hline \hline \end{tabular} \end{table} TABLE I: Performance of the learned anomaly detector. Fig. 
2: BRT of 9AM KMWH runway (part of train set) shown on the left along with a few training images and BRT of 5PM KSFO runway (part of test set) shown on the right, along with a few testing images, showing diversity in our training and testing scenarios. cascade to a system failure (the green trajectory in Fig. 4(c)). This results in an overly optimistic nature of the prediction of error-based AD in certain regions. This "unpredictable" nature of the prediction error-based AD persists with the change in the error threshold used for labeling (Fig. 4(b)). Unsurprisingly, we observe the same phenomenon in the AD trained on these prediction-error-based labels and, thus, omit those results for brevity. This leads us to conclude that component-level, prediction error-based labels may not be a good representative for determining system-level failures. #### Iv-A2 Ensembled Predictions Another popular mechanism to detect anomalies is based on the predictive uncertainty of an ensemble of neural networks. To design an ensemble, we train 5 different versions of the TaxiNet with different weight initializations. If the variance between the predictions exceeds a threshold for any input image, we assign such an image as an anomaly. The corresponding anomalies over the statespace are shown in Fig. 5. We observe that this method does not perform well because the ensemble confidently makes incorrect predictions for some states (around the top right of the statespace), leading to faulty labels in those states. On the other hand, for some states (near the central region of the statespace), the ensemble disagreed on the predictions, leading to states being incorrectly marked as an anomaly. Hence, an ensembling-based approach also fails to accurately predict system-level anomalies. _Fallback Mechanism._ Equipped with a capable AD, we designed a simple fallback mechanism to ensure our aircraft's safety under anomalous inputs. If at any point in its trajectory, the aircraft observes an image \(I\) that is classified as an anomaly (i.e., \(\sigma(I)=1\)), the linear velocity of the plane (\(v\) in Eqn. (2)) is reduced by \(0.01m/s\). Intuitively, this results in slowing the aircraft every time it encounters an anomaly, ultimately coming to a complete stop if it continues to encounter anomalies. As soon as the aircraft detects an image that is Fig. 4: **(a)** Comparison between prediction error (blue) and BRT-based (red) labels. **(b)** Prediction error-based labels for \(threshold=0.3\) (green) and \(threshold=0.6\) (red). **(c)** Yellow and Green lines show trajectories starting from the yellow and green stars, respectively. Fig. 5: Labels generated using ensembling denoting failures (red) and success (blue). Fig. 3: Some of the failures detected by AD. **(a, b)** Images correspond to the aircraft being close to the runway boundaries (highlighted with the magenta bounding boxes).**(c, d)** The visual controller confuses the runway markings (highlighted with the cyan bounding boxes) with the centerline and ultimately leads to a system failure. **(e, f)** Image (f) is (accurately) not classified as an anomaly during the night time (the same image is classified as anomaly during the day, shown in (e)), as the runway lights (highlighted with the yellow bounding boxes) help the visual controller to predict its position accurately and thereby avoid failure. Fig. 6: **(a, b)** Trajectory followed by the aircraft under the TaxiNet controller (dashed black line) and the safety pipeline (red line). 
The color shift into the red curve shows velocity variation due to the fallback controller. **(c)** The grey region represents the system BRT under the TaxiNet controller, and the blue region represents the BRT under the safety pipeline. The BRT obtained using the AD and the fallback controller is appreciably smaller than the one obtained using vanilla TaxiNet. **(d)** Input image at the start state in (a), causing system failure due to runway boundaries. **(e)** Input image at the start state in (b), causing system failure due to runway markings. to semantic failure of the runway markings. In both cases, the TaxiNet controller leads the system off the runway. On the other hand, the fallback controller decreases the aircraft velocity whenever an anomaly is triggered to ensure system safety. This can also be seen from the velocity variation along the red trajectories. To further illustrate the advantage of the fallback controller, we compute the system BRT under the Taxinet controller and the fallback controller pipeline for one of the test cases on KMWH runway (Fig.6(c)). The BRT (i.e., the set of failure states) is much smaller under the safety pipeline (the BRT volume decreases from 28.23% to 16.86%), showing that the proposed mechanism significantly reduces the number of closed-loop system failures. _Failure Modes of AD._ Even with all these exciting discoveries, the AD fails to detect some anomalies. We find that this happens particularly for the images corresponding to the states near the BRT boundary. For example, in Fig. 7(b) and 7(c), we show two visually similar images in our test dataset (corresponding to the green and yellow stars in Fig. 7(a)). Even though the two images are visually similar, one image is inside the BRT and leads to the system failure (hence an anomaly), while the other does not. Such similar images with minor differences are hard to detect for our AD. Finally, we noticed that some of the semantics of the test environment that cause anomalies are not present in the training dataset and are not predicted well by the proposed AD (Fig. 7(d)). Such issues highlight the need for a continual update of anomaly detectors as more data about the system anomalies is obtained. _Incremental Training._ A complementary approach to employing an AD can be to perform targeted incremental training of the TaxiNet model on the collected anomalous data. Essentially, these labeled anomalies represent scenarios where TaxiNet fails to perform optimally. Therefore, by conducting specialized training on these failure instances, we aim to fortify TaxiNet's robustness in handling such cases. We present some preliminary result of incremental training of TaxiNet in Fig. 8. We show the unmodified TaxiNet in grey and the incrementally trained version in blue. The incrementally trained version has significantly fewer failures. This is also evident from the system trajectory from the same starting state under the two controllers. Even with such promising results, it's important to acknowledge that incremental training demands a deep understanding of the base network's training process, such as hyperparameters, dataset augmentation, among others. Moreover, incremental training may suffer from catastrophic forgetting. In contrast, our approach to training a relatively simple AD, as demonstrated in our work, entails significantly less overhead. 
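The fallback mechanism described above can be sketched as a small control-loop wrapper. Note two assumptions on our part: the rule for restoring the nominal speed after a safe detection is inferred (the corresponding sentence in the text is interrupted by the figure captions), and the TaxiNet steering command is assumed to be retained while the speed is reduced.

```python
def filtered_control(image, anomaly_detector, taxinet_policy, speed_state,
                     v_nominal=5.0, decrement=0.01):
    """Safety pipeline of Eq. (9) combined with the velocity-reduction fallback:
    each anomalous frame lowers the taxi speed by 0.01 m/s (never below zero), while a
    safe frame restores the nominal speed (restoration rule assumed, see lead-in)."""
    if anomaly_detector(image) == 1:                 # sigma(I) = 1: anomalous input
        speed_state["v"] = max(0.0, speed_state["v"] - decrement)
    else:
        speed_state["v"] = v_nominal
    u = taxinet_policy(image)                        # steering command kept (assumption)
    return u, speed_state["v"]

# Example usage with stub components (placeholders, for illustration only):
speed_state = {"v": 5.0}
stub_detector = lambda img: 0   # stand-in for the learned AD
stub_policy = lambda img: 0.0   # stand-in for the TaxiNet controller
u, v = filtered_control(None, stub_detector, stub_policy, speed_state)
```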
Nevertheless, it is essential to appreciate the potential of incremental training as a valuable avenue to improve the performance of the vision-based controller which we will explore further in future work. ## VI Discussion and Future Work In this work, we present an approach aimed at identifying and mitigating system-level anomalies of autonomous systems that rely on vision-based controllers for decision-making. By leveraging insights from reachability analysis, our approach learns an anomaly detector that effectively tackles concerns related to system-level safety during run-time. The learned anomaly detector is combined with a fallback controller to significantly reduce the potential for catastrophic system failures. However, it was seen that the learned AD was unable to recognize failure cases that were near the BRT boundary. To further enhance the performance of the AD, one can take advantage of temporal information, such as the visual history instead of only the current image. Alternatively, instead of relying on a classifier, other approaches could explore unsupervised methods like clustering via support vector machines or k-means applied to our annotated dataset. These techniques aim to capture and analyze common characteristics found in anomalous images and present a promising avenue for future research. Finally, we will further explore leveraging targeted retraining and incremental training on the anomalous dataset to improve the performance of the vision-based controller.
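As one possible shape of the unsupervised route mentioned above, the sketch below clusters embeddings of the BRT-labeled anomalous images so that each cluster can be inspected for shared semantics (runway boundaries, markings, lighting). The choice of embedding, the number of clusters, and the use of k-means are illustrative assumptions for this sketch rather than an evaluated component of the pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_anomalies(features: np.ndarray, n_clusters: int = 3):
    """Cluster anomalous-image embeddings to surface common failure modes.

    `features` is an (N, D) array of embeddings for images labeled as
    anomalies by the reachability-based labeling; the embedding and the
    number of clusters are assumptions made for this illustration.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    return km.labels_, km.cluster_centers_

# Example with random placeholder embeddings:
labels, centers = group_anomalies(np.random.rand(200, 64), n_clusters=3)
```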
2308.00046
Degenerations of complete collineations and geometric Tevelev degrees of $\mathbb{P}^r$
We consider the problem of enumerating maps $f$ of degree $d$ from a fixed general curve $C$ of genus $g$ to $\mathbb{P}^r$ satisfying incidence conditions of the form $f(p_i)\in X_i$, where $p_i\in C$ are general points and $X_i\subset\mathbb{P}^r$ are general linear spaces. We give a complete answer in the case where the $X_i$ are points, where the counts, the ``Tevelev degrees'' of $\mathbb{P}^r$, were previously known only when $r=1$, when $d$ is large compared to $r,g$, or virtually in Gromov-Witten theory. We also give a complete answer in the case $r=2$ with arbitrary incidence conditions. Our main approach studies the behavior of complete collineations under various degenerations.
Carl Lian
2023-07-31T18:01:50Z
http://arxiv.org/abs/2308.00046v4
# Degenerations of complete collineations and geometric Tevelev degrees of \(\mathbb{P}^{r}\) ###### Abstract. We consider the problem of enumerating maps \(f\) of degree \(d\) from a fixed general curve \(C\) of genus \(g\) to \(\mathbb{P}^{r}\) satisfying incidence conditions of the form \(f(p_{i})\in X_{i}\), where \(p_{i}\in C\) are general points and \(X_{i}\subset\mathbb{P}^{r}\) are general linear spaces. We give a complete answer in the case where the \(X_{i}\) are points, where the counts, the "Tevelev degrees" of \(\mathbb{P}^{r}\), were previously known only when \(r=1\), when \(d\) is large compared to \(r,g\), or virtually in Gromov-Witten theory. We also give a complete answer in the case \(r=2\) with arbitrary incidence conditions. Our main approach studies the behavior of complete collineations under various degenerations. We expect the technique to have further applications. ## 1. Introduction ### New results In this paper, we prove the following. **Theorem 1.1**.: _Let \((C,p_{1},\ldots,p_{n})\in\mathcal{M}_{g,n}\) be a general curve. Let \(x_{1},\ldots,x_{n}\in\mathbb{P}^{r}\) be general points. Suppose that \(n\geq r+1\) and that \(d\geq 0\) is an integer for which_ \[n=\frac{r+1}{r}\cdot d-g+1. \tag{1}\] _Then, the number of maps \(f:C\to\mathbb{P}^{r}\) of degree \(d\) for which \(f(p_{i})=x_{i}\) for all \(i=1,\ldots,n\) is equal to_ \[\int_{\operatorname{Gr}(r+1,d+1)}\sigma_{1^{r}}^{g}\cdot\left(\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{\overline{\lambda}}\right)_{\lambda_{0}\leq n-r-1}.\] _Above, the partition \(\overline{\lambda}\) denotes the complement of \(\lambda\) inside the rectangle \((n-r-2)^{r}\)._ See SS2.1 for notation, and SS7.5 for additional combinatorial properties of the formula. We emphasize that Theorem 1.1 gives an actual count of curves, rather than a virtual intersection number. These counts are the _geometric Tevelev degrees_ \(\operatorname{\sf{Tev}}_{g,n,d}^{\mathbb{P}^{r}}\) of \(\mathbb{P}^{r}\), and complete a picture studied by many authors [6, 4, 26, 9, 5, 13, 7] dating back to the 19th century and passing through the development of Gromov-Witten theory, see SS1.3. Projective spaces are the first family of examples in which the geometric Tevelev degrees are fully understood. We also consider the natural generalization in which the conditions \(f(p_{i})=x_{i}\) are replaced by conditions \(f(p_{i})\in X_{i}\), where the \(X_{i}\subset\mathbb{P}^{r}\) are general linear spaces of any dimension. Our most explicit result in this direction is the following. **Theorem 1.2**.: _Let \((C,p_{1},\ldots,p_{n})\in\mathcal{M}_{g,n}\) be a general curve. Let \(x_{1},\ldots,x_{n_{0}}\in\mathbb{P}^{2}\) be general points and let \(X_{n_{0}+1},\ldots,X_{n}\subset\mathbb{P}^{2}\) be general lines. Suppose that \(d\geq 0\) is an integer for which_ \[n+n_{0}=3d-2g+2.
\tag{2}\] _Then, the number of non-degenerate maps \(f:C\to\mathbb{P}^{2}\) of degree \(d\) for which \(f(p_{i})=x_{i}\) for all \(i=1,\ldots,n_{0}\) and \(f(p_{i})\in X_{i}\) for all \(i=n_{0}+1,\ldots,n\) is equal to_ \[\int_{\operatorname{Gr}(3,d+1)}\sigma_{1^{2}}^{g}\cdot\left(\sum_{|\lambda|=n+ n_{0}-8}\Gamma_{2,\overrightarrow{n},d}^{\lambda}\cdot\sigma_{\lambda} \right),\] _where the coefficient \(\Gamma_{2,\overrightarrow{n},d}^{\lambda}\in\mathbb{Z}_{\geq 0}\) is equal to the cardinality of the following subset of \(\operatorname{\sf{SSYT}}_{3}(\lambda)\):_ * _the whole set_ \(\operatorname{\sf{SSYT}}_{3}(\lambda)\)_, if_ \(\lambda_{0}\leq n-5\) _the subset of SSYTs in which the entry \(2\) appears at most \(n_{0}-3\) times, if \(\lambda_{0}=n-4\),_ * _the subset of SSYTs containing neither a_ \((1,2)\)_-strip of length_ \(n_{0}-3\)_, nor a_ \((2,3)\)_-strip of length_ \(n-3\)_, if_ \(\lambda_{0}=n-3\)_, and_ * \(\emptyset\)_, if_ \(\lambda_{0}>n-3\)_._ See SS2.2 for notation and relevant definitions for Young tableaux. We will see that, in higher dimension, one should expect similar formulas involving coefficients \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}\) which are equal to the cardinality of a combinatorially meaningful subset of \(\mathsf{SSYT}_{r+1}(\lambda)\). In the rest of the introduction, we recall the history of the problem and give an overview of our new techniques. ### Tevelev degrees Tevelev degrees count the number of curves of fixed complex structure in an ambient space \(X\) passing through the maximal number of points. More precisely, let \(X\) be a smooth projective variety, and let \(\beta\in H_{2}(X,\mathbb{Z})\) be an effective curve class. Consider the forgetful map \[\tau:\mathcal{M}_{g,n}(X,\beta)\to\mathcal{M}_{g,n}\times X^{n}\] and suppose that \[\int_{\beta}c_{1}(T_{X})=\dim(X)(n+g-1), \tag{3}\] that is, the expected relative dimension of \(\tau\) is zero. Suppose further that all dominating components of \(\mathcal{M}_{g,n}(X,\beta)\) are generically smooth of the expected dimension. Then, the **geometric Tevelev degree**\(\mathsf{Te}^{X}_{g,n,\beta}\) of \(X\) is by definition equal to the degree of \(\tau\). We will soon specialize to the case \(X=\mathbb{P}^{r}\), see [19, 8] for recent partial computations of geometric Tevelev degrees of other targets. Buch-Pandharipande [5] study systematically a parallel set of counts in Gromov-Witten theory. Consider now the forgetful map \[\overline{\tau}:\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M }}_{g,n}\times X^{n}\] and assume (3). Then, the **virtual Tevelev degree**\(\mathsf{v}\mathsf{Te}^{X}_{g,n,\beta}\) of \(X\) is defined to be the unique rational number for which \[\overline{\tau}_{*}[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\mathsf{vir}}= \mathsf{v}\mathsf{Te}^{X}_{g,n,\beta}\cdot[\overline{\mathcal{M}}_{g,n} \times X^{n}].\] For virtual degrees, no transversality hypothesis is required. The virtual and geometric Tevelev degrees often, but do not always, agree, see [22, 2] for detailed investigations. The term "Tevelev degree" was introduced in 2021 by Cela-Pandharipande-Schmitt [9], after the formula (using our notation) \(\mathsf{Te}^{\mathbb{P}^{1}}_{g,g+3,(g+1)[\mathbb{P}^{1}]}=2^{g}\) appeared in the work of Tevelev [27, Theorem 6.2]. On the other hand, much further-reaching calculations, now understood to be on the virtual side, had already appeared in the preceding decades, as we review in SS1.3. 
Nevertheless, we will follow the most recent literature in using the name "geometric Tevelev degree" for the counts we consider, to highlight our interest in the geometric invariants that see only maps of degree \(d\) out of smooth curves. ### Tevelev degrees of \(\mathbb{P}^{r}\) In this section, we review the known results on the Tevelev degrees of \(\mathbb{P}^{r}\). We write \(\beta=d\) for the homology class equal to \(d\) times a line. Then, the condition (3) becomes (1). The virtual degrees \(\mathsf{v}\mathsf{Te}^{\mathbb{P}^{r}}_{g,n,d}\) are understood, following from a straightforward computation in the quantum cohomology ring of \(\mathbb{P}^{r}\), see [5, (3)]. **Theorem 1.3**.: _Assume (1). Then_ \[\mathsf{v}\mathsf{Te}^{\mathbb{P}^{r}}_{g,n,d}=(r+1)^{g}.\] This formula was first obtained by Bertram-Daskalopoulos-Wentworth [4] before the systematic development of the theory of virtual fundamental classes. In fact, the more general problem of counting curves on Grassmannians satisfying any collection of Schubert incidence conditions was considered by Bertram in [3], and it was proven by Siebert-Tian [26] in large degree and by Marian-Oprea [23] virtually in all degrees that these much more general counts are determined by the Vafa-Intrilligator formula. In particular, the virtual number of maps \(f:C\to\mathbb{P}^{r}\) with respect to arbitrary incidence conditions \(f(p_{i})\in X_{i}\subset\mathbb{P}^{r}\) is also equal to \((r+1)^{g}\). Further virtual calculations in this direction can be carried out on moduli spaces of stable quotients [24], which determine the Gromov-Witten theory of Grassmannians, and quasi-maps [10]. In fact, the original calculation of Bertram-Daskalopoulos-Wentworth [4] may be viewed as taking place on the space of quasi-maps to \(\mathbb{P}^{r}\), and quasi-maps seem to be a natural setting for other targets. We focus in this paper on geometric curve counts in \(\mathbb{P}^{r}\), which are much more subtle, and are in general not determined in an apparent way by any of the aforementioned virtual theories. If \(n\geq r+1\), then a map \(f:C\to\mathbb{P}^{r}\) out of a general curve with general point incidence conditions \(f(p_{i})=x_{i}\) is automatically non-degenerate, and the Brill-Noether theorem guarantees that the needed transversality hypothesis on \[\tau:\mathcal{M}_{g,n}(\mathbb{P}^{r},d)\to\mathcal{M}_{g,n}\times(\mathbb{P}^ {r})^{n}\] is satisfied. When instead \(n<r+1\), it is often the case that there are more degenerate maps \(f\) than expected1, so we assume throughout the discussion of geometric Tevelev degrees of \(\mathbb{P}^{r}\) that \(n\geq r+1\). Footnote 1: For example, when \(r=2\), \(n=2\), and \(g=\frac{3}{2}d-1\geq 2\), there are infinitely many \(f\) of degree \(d\) from \(C\) to the line between the two points \(X_{1},X_{2}\in\mathbb{P}^{2}\), satisfying \(f(p_{i})=X_{i}\) for \(i=1,2\). The case in which \(d=r+\frac{rg}{r+1}\) and \(n=r+2\) are as small as possible is classical. Then, the data of a map \(f:C\to\mathbb{P}^{r}\) with \(f(p_{i})=x_{i}\) for \(i=1,2,\ldots,r+2\) is equivalent to the data of a linear series of minimal degree on \(C\). The celebrated 19th century result of Castelnuovo [6] gives the number of such as \[\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}=\int_{\operatorname{Gr}(r+1,d+1)} \sigma_{1^{r}}^{g}=g!\cdot\frac{1!\cdot 2!\cdot\cdots\cdot r!}{s!\cdot(s+1)! \cdot\cdots\cdot(s+r)!}, \tag{4}\] where \(s=\frac{d}{r}-1=\frac{g}{r+1}\). 
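For concreteness, the right-hand side of (4) is easy to evaluate numerically. The short script below is only an illustration (the function name and test values are not from the text); it reproduces two classical values, namely the two \(g^{1}_{3}\)'s carried by a general curve of genus \(4\) and the unique \(g^{2}_{4}\) (the canonical series) on a general curve of genus \(3\).

```python
from math import factorial

def castelnuovo_count(r: int, g: int) -> int:
    """Evaluate the right-hand side of formula (4).

    Assumes (r+1) divides g, so that s = g/(r+1) = d/r - 1 is an integer,
    as in the minimal case n = r+2, d = r + rg/(r+1).
    """
    assert g % (r + 1) == 0, "formula (4) requires s = g/(r+1) to be integral"
    s = g // (r + 1)
    numerator = factorial(g)
    for i in range(1, r + 1):
        numerator *= factorial(i)
    denominator = 1
    for i in range(s, s + r + 1):
        denominator *= factorial(i)
    return numerator // denominator

print(castelnuovo_count(1, 4))  # 2: the two g^1_3's on a general genus-4 curve
print(castelnuovo_count(2, 3))  # 1: the canonical g^2_4 on a general genus-3 curve
```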
In particular, Castelnuovo's count is quite far from the virtual (Gromov-Witten) count. At the other extreme, where \(d,n\) are sufficiently large, the geometric counts match the virtual ones, as had been essentially understood in the early work [4, 26]. **Theorem 1.4**.: _[_13_, Theorem 1.1, Theorem 1.2]_ _Assume (1) and that \(d\geq rg+r\) (equivalently, \(n\geq d+2\)). Then_ \[\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}} =(r+1)^{g}\] \[=\int_{\operatorname{Gr}(r+1,d+1)}\sigma_{1^{r}}^{g}\cdot\sum_{a_ {0}+\cdots+a_{r}=r(n-r-2)}\sigma_{a_{0}}\cdots\sigma_{a_{r}}\] The equality of the two formulas is non-trivial. They are obtained by two independent computations in [13], but a direct combinatorial proof of their equality via the RSK algorithm was given in [15]. When \(r=1\), the geometric degrees \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{1}}\) have been computed in full. **Theorem 1.5**.: _[_9, 13, 7_]_ _Assume (1) for \(r=1\). Then_ \[\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{1}} =2^{g}-\sum_{j=0}^{g-d-1}\binom{g}{j}+(g-d-1)\binom{g}{g-d}+(d-g-1 )\binom{g}{g-d+1}\] \[=\int_{\operatorname{Gr}(2,d+1)}\sigma_{1}^{g}\cdot\sum_{a_{0}+a _{1}=n-3}\sigma_{a_{0}}\sigma_{a_{1}}\] Binomial coefficients \(\binom{g}{j}\) with \(j<0\) are interpreted to vanish. In particular, when \(d\geq g+1\), we obtain simply \(\mathsf{Tev}_{g,d,n}^{\mathbb{P}^{1}}=2^{g}\), agreeing with the first formula of Theorem 1.4. The second formula of Theorem 1.5 shows that the Schubert calculus formula of Theorem 1.4 holds when \(r=1\) for _all_\(d\) (whereas the \((r+1)^{g}=2^{g}\) formula does not), but this will not be the case when \(r>1\). The geometric counts when \(r>1\) and \(r+\frac{rg}{g+1}<d<rg+r\), which one may view as interpolating between Castelnuovo's count (4) and the Gromov-Witten count \((r+1)^{g}\), have remained open. ### Complete collineations We now discuss the new ingredients of this paper. The geometric approaches of [4, 13] to Tevelev degrees encounter the same difficulty: in intersection theory calculations, one may obtain contributions from "maps with base-points," that is, \((r+1)\)-tuples of sections \([f_{0}:\cdots:f_{r}]\), where \(f_{j}\in H^{0}(C,\mathcal{L})\) are sections all vanishing at some (or all) of the \(p_{i}\). Essentially the same issue arises in the Gromov-Witten setting, where one has virtual contributions from stable maps obtained by attaching rational tails at \(p_{i}\in C\) whose images pass through \(x_{i}\in\mathbb{P}^{r}\). This in particular explains the discrepancy between the virtual and geometric degrees. In order to access the geometric Tevelev degrees \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}\) in all cases, one needs to avoid such contributions. It is essentially a consequence of the Brill-Noether theorem that such contributions only arise when the \(f_{j}\) are linearly dependent. We therefore pass to the moduli space of _complete collineations_, which is obtained by blowing up the loci where the linear map \(\mathbb{C}^{r+1}\to H^{0}(C,\mathcal{L})\) defined by the \(f_{j}\) drops rank. This on the one hand isolates the desired contributions from "honest" maps \(f:C\to\mathbb{P}^{r}\) of degree \(d\), and on the other facilitates a limit linear series degeneration from genus \(g\) to genus \(0\). In fact, the method works for arbitrary linear incidence conditions \(f(p_{i})\in X_{i}\). Let \(V,W\) be vector spaces of dimensions \(r+1,d+1\), respectively, and let \(\operatorname{Coll}(V,W)\) be the moduli space of complete collineations \(\phi:V\to W\), see SS3. 
Let \(b:\operatorname{Coll}(V,W)\to\mathbb{P}\operatorname{Hom}(V,W)\) be the iterated blowup remembering the linear map \(\phi_{0}:V\to W\), and let \(\pi:\operatorname{Coll}(V,W)\to\operatorname{Gr}(r+1,W)\) be the map remembering the image of \(\phi\). Let \(L\subset V\) be a vector subspace of any dimension and let \(M\subset W\) be a hyperplane. We will define the _incidence locus_ \(\mathsf{Inc}(L,M)\subset\operatorname{Coll}(V,W)\) by the proper transform under \(b\) of the locus of maps for which \(\phi_{0}(L)\subset M\), and define \(\gamma_{\dim(L)}\in H^{2\dim(L)}(\operatorname{Coll}(V,W))\) to be the corresponding cycle class. These may be regarded as tautological classes in the cohomology of \(\operatorname{Coll}(V,W)\) which may be of independent interest. The relevance of these classes for us is the following. We may regard \(L\subset V\cong\mathbb{C}^{r+1}\) as corresponding to a linear subspace \(X_{i}\subset\mathbb{P}^{r}\) of codimension \(\dim(L)\), and \(M\subset W\) as the hyperplane of sections of \(H^{0}(C,\mathcal{L})\) vanishing at a fixed point \(p_{i}\). Then, the locus \(\mathsf{Inc}(L,M)\) corresponds to the condition \(f(p_{i})\in X_{i}\). We prove: **Theorem 1.6**.: _Let \((C,p_{1},\dots,p_{n})\in\mathcal{M}_{g,n}\) be a general curve. Let \(X_{1},\dots,X_{n}\subset\mathbb{P}^{r}\) be general linear spaces of dimensions \(k_{1},\dots,k_{n}<r\), respectively. For \(j=0,1,\dots,r-1\), let \(n_{j}\) be the number of \(X_{i}\) of dimension \(j\), and write \(\overrightarrow{n}=(n_{0},\dots,n_{r-1})\)._ _Assume that_ \[(r+1)(d+1)-1-rg=\sum_{i=1}^{n}(r-\dim X_{i})=\sum_{k=0}^{r-1}(r-k)n_{k}=:|\overrightarrow{n}|. \tag{5}\] _Then, the number \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) of non-degenerate maps \(f:C\to\mathbb{P}^{r}\) of degree \(d\) with \(f(p_{i})\in X_{i}\) is equal to_ \[\int_{\operatorname{Coll}(V,W)}\pi^{*}(\sigma_{1^{r}}^{g})\cdot\prod_{j=1}^{r}\gamma_{j}^{n_{r-j}}.\] The class \(\prod_{j=1}^{r}\gamma_{j}^{n_{r-j}}\) is the cycle class of an intersection \[Y_{r,\overrightarrow{n},d}=\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\subset\operatorname{Coll}(V,W),\] where the \(L_{i}\subset V\) are general subspaces of dimension \(\operatorname{codim}(X_{i})\) and the \(M_{i}\subset W\) are general hyperplanes. Theorem 1.6 should be viewed as a transversality statement: the space of complete collineations resolves all excess intersections from (for example) the moduli space of stable maps and is therefore the correct place to obtain the geometric fixed-domain curve counts for \(\mathbb{P}^{r}\). By the projection formula, to compute \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) (and, by taking \(n_{0}=n\), the geometric Tevelev degrees of \(\mathbb{P}^{r}\)), it suffices to understand the classes \[\Gamma_{r,\overrightarrow{n},d}:=\pi_{*}([Y_{r,\overrightarrow{n},d}])\] on \(\operatorname{Gr}(r+1,W)=\operatorname{Gr}(r+1,d+1)\). These classes seem to be new and may be of independent interest. In fact, we will reduce further in SS6 to the case \(d=n-1\), showing that the classes \(\Gamma_{r,\overrightarrow{n},n-1}\) determine all classes \(\Gamma_{r,\overrightarrow{n},d}\). ### Degenerations of subspace arrangements Our main approach to studying the classes \(\Gamma_{r,\overrightarrow{n},d}\) is by degeneration. First, consider the case in which the \(L_{i}\) are hyperplanes, corresponding to the geometric Tevelev degrees; we abuse notation and write \(n=\overrightarrow{n}=(n,0,\ldots,0)\).
When \(d=n-1\), we show that \(Z_{r,n,d}:=\pi(Y_{r,n,d})\) is equal to a generic torus orbit closure in the Grassmannian. In this case, the class \(\Gamma_{r,n,d}=[Z_{r,n,d}]\) has been computed by Klyachko [17] and Berget-Fink [1], see also Theorem 7.2. Along with the reductions in SS6, this will complete the proof of Theorem 1.1. However, we outline an independent computation of the class \(\Gamma_{r,n,d}\) that proceeds by gradually degenerating the \(L_{i}\) to contain successively larger subspaces \(\Lambda\subset V\), showing that the subscheme \(Z_{r,\overrightarrow{n},d}\) degenerates to a union of Richardson varieties whose classes are transparent. In the language of maps \(f:C\to\mathbb{P}^{r}\), we degenerate the points \(x_{i}\) to lie in successively smaller linear spaces and study the corresponding degenerations of \(f\) (viewed as complete collineations). We have essentially carried out this degeneration in the more general setting of torus orbit closures on full flag varieties in [20], but we revisit the method in order to apply it in the more general setting of \(L_{i}\) of arbitrary dimension. Specifically, we prove Theorem 1.2 by moving the points and lines \(x_{1},\ldots,x_{n_{0}},X_{n_{0}+1},\ldots,X_{n}\) into successively more special position. The method here is much more delicate. A key feature of the orbit closure case is missing here: \(Z_{2,\overrightarrow{n},d}=\pi(Y_{2,\overrightarrow{n},d})\) is no longer toric, and its degenerations are no longer controlled by polyhedral subdivisions. In particular, we must prove by hand that no components appear in the limit of \(Z_{2,\overrightarrow{n},d}\) other than those that we construct explicitly. ### Further directions Our degeneration method extends to computations of the classes \(\Gamma_{r,\overrightarrow{n},d}\), and hence of the counts \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\), when \(r>2\) and the \(L_{i}\) have arbitrary dimension. However, the combinatorics become increasingly complicated, and we have not yet obtained simple formulas. The combinatorial interpretation of \(\Gamma_{2,\overrightarrow{n},d}^{\lambda}\) in Theorem 1.2 as well as the upper bound \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\leq|\mathsf{SSYT}_{r+1}(\lambda)|\) given in Proposition 5.8 suggest, however, that such formulas may await discovery. Another approach to the counts \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) afforded by Theorem 1.6 is to apply the localization formula to \(\operatorname{Coll}(V,W)\), on which a product of tori of dimensions \(r+1\) and \(d+1\) acts with isolated fixed points. As was carried out in earlier versions of this work, one can write down an entirely explicit residue formula for \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) in this way, but any combinatorial properties of \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) seem to be invisible. We omit the calculation here. In the same way that the space of complete collineations allows one to access arbitrary geometric Tevelev degrees of \(\mathbb{P}^{r}\), we expect that passing to related moduli spaces will allow for new calculations for other targets \(X\). For example, a candidate moduli space for computing geometric Tevelev degrees of Grassmannians is the space of _complete quotients_ [16]. ### Outline The paper is organized as follows. We discuss preliminaries in SS2 and SS3.
In particular, we introduce and study the geometry of the loci \(\mathsf{Inc}(L,M)\subset\operatorname{Coll}(V,W)\) in SS3. Sections SS4, SS5, and SS6 establish generalities relating to the counts \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\). In particular, we prove Theorem 1.6 in SS4. The coefficients \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) in the Schubert basis of the classes \(\Gamma_{r,\overrightarrow{n},d}\) are compared to the numbers \(|\mathsf{SSYT}_{r+1}(\lambda)|\) in SS5. We reduce the computation of \(\Gamma_{r,\overrightarrow{n},d}\) to the case \(d=n-1\) in SS6. Finally, our main calculations take place in SS7 and SS8. In SS7, we relate geometric Tevelev degrees of \(\mathbb{P}^{r}\) to torus orbit closures on \(\operatorname{Gr}(r+1,W)\), proving Theorem 1.1. It is here that our main degeneration technique is introduced. Finally, we prove Theorem 1.2 in SS8. ### Conventions * We work exclusively over \(\mathbb{C}\). * If \(S\) is a finite set, we denote its cardinality by \(|S|\). * If \(V\) is a vector space, \(\mathbb{P}(V)\) is the space of lines in \(V\), and \(\mathbb{P}(V^{\vee})\) is the space of hyperplanes in \(V\). Similarly, if \(W\) is a vector space, then \(\operatorname{Gr}(r+1,W)\) is the Grassmannian of \((r+1)\)-dimensional subspaces of \(W\). * Angle brackets \(\langle-\rangle\) denote linear span in a vector space or projective space. * Let \(Y\subset X\) be a pure-dimensional subscheme of a smooth, projective variety \(X\). Then, the cycle class in \(H^{2*}(X)\) associated to \(Y\) is denoted \([Y]\). * Let \(W\) be a vector space, let \(U\subset W\) be a subspace, and let \(g\in GL(W)\) be an automorphism. We say that \(g\) _stabilizes_ \(U\) if \(g(U)=U\). We say that \(g\) _fixes_ \(U\) if \(g\) stabilizes \(U\) and in addition restricts to the identity map on \(U\). ### Acknowledgments We thank Alessio Cela, Izzet Coskun, Gavril Farkas, Alex Fink, Maria Gillespie, Eric Larson, Alina Marian, Rahul Pandharipande, Andrew Reimer-Berg, Johannes Schmitt, and Hunter Spink for helpful discussions. This project was completed with support from an NSF postdoctoral fellowship, grant DMS-2001976, and the MATH+ incubator grant "Tevelev degrees." ## 2. Preliminaries ### Schubert calculus Let \(W\) be a vector space of dimension \(d+1\) and fix a complete flag \(F\) of subspaces \[0=F_{0}\subset F_{1}\subset\cdots\subset F_{d+1}=W.\] Let \(\lambda=(\lambda_{0},\ldots,\lambda_{r})\) be a partition, where \(d-r\geq\lambda_{0}\geq\cdots\geq\lambda_{r}\geq 0\). Then, the Schubert cycle \(\Sigma_{\lambda}^{F}\subset\operatorname{Gr}(r+1,W)=\operatorname{Gr}(r+1,d+1)\) is by definition the subvariety consisting of subspaces \(V\subset W\) of dimension \(r+1\) for which \[\dim(V\cap F_{d-r+i+1-\lambda_{i}})\geq i+1\] for \(i=0,1,\ldots,r\). The class of \(\Sigma_{\lambda}^{F}\) in \(H^{2|\lambda|}(\operatorname{Gr}(r+1,W))\), where \(|\lambda|=\lambda_{0}+\cdots+\lambda_{r}\) denotes the size of the partition, is denoted \(\sigma_{\lambda}\). The top cohomology group \(H^{2(r+1)(d-r)}(\operatorname{Gr}(r+1,W))\) is generated by the single class \(\sigma_{(d-r)^{r+1}}\). The unique \(\mathbb{Q}\)-linear map \(H^{2(r+1)(d-r)}(\operatorname{Gr}(r+1,W))\to\mathbb{Q}\) sending \(\sigma_{(d-r)^{r+1}}\) to \(1\) is denoted by \(\int_{\operatorname{Gr}(r+1,d+1)}\).
If \(\zeta\in H^{*}(\operatorname{Gr}(r+1,W))\) is any class, then \((\zeta)_{\lambda_{0}\leq m}\) is defined by expanding \(\zeta\) in the basis of Schubert classes \(\sigma_{\lambda}\) and projecting to the subgroup generated by classes with \(\lambda_{0}\leq m\). Similarly, one can replace \(\lambda_{0}\leq m\) with other inequalities on the parts of \(\lambda\). ### Young tableaux Let \(\lambda=(\lambda_{0},\ldots,\lambda_{r})\) be a partition. We adopt the convention throughout that \(\lambda_{0}\geq\cdots\geq\lambda_{r}\). We identify \(\lambda\) with its Young diagram. **Definition 2.1**.: _A strip\(S\) of \(\lambda\) is a collection of \(k\) boxes in the Young diagram of \(\lambda\) with the property that:_ * _Exactly one box of_ \(S\) _lies in each of the first_ \(k\) _columns of_ \(\lambda\)_, and_ * _Given any distinct boxes_ \(b_{1},b_{2}\) _of_ \(S\)_, if_ \(b_{1}\) _lies in a column to the left of_ \(b_{2}\)_, then_ \(b_{1}\) _does not also lie in a row above_ \(b_{2}\)_._ Unlike in [21], we do not require \(k\) to be as large as possible with respect to an ambient rectangle; in particular, we allow \(k<\lambda_{0}\). An example of a strip in the partition \(\lambda=(12,9,4,2)\) is shaded below. Recall that a _semi-standard Young Tableau (SSYT)_ of shape \(\lambda\) is a filling of the boxes of \(\lambda\) with the entries \(1,2,\ldots,r+1\) so that entries increase weakly across rows and strictly down columns. The number of SSYTs with \(r+1\) allowed entries of shape \(\lambda\) is denoted \(|\mathsf{SSYT}_{r+1}(\lambda)|\), and is given by a hook length formula, see [28, Corollary 7.21.4]. For \(i=1,2,\ldots,r+1\) and a fixed SSYT, we denote the number of appearances of the entry \(i\) by \(c_{i}\). **Definition 2.2**.: _For \(i=1,2,\ldots,r\), an \((i,i+1)\)-strip of a SSYT is a strip, all of whose boxes are filled with the entry \(i\) or \(i+1\), and for which all instances of \(i\) all appear to the left of all instances of \(i+1\). A \(1\)-strip is, by definition, an \((i,i+1)\)-strip for some \(i\)._ Note that a strip filled entirely with the entry \(i\) is both a \((i-1,i)\)- and a \((i,i+1)\)-strip. Below, the SSYT of shape \(\lambda=(10,9,4,2)\) has a \((2,3)\)-strip of length \(10\). The longest \((1,2)\)-strip has length \(6\) and the longest \((3,4)\)-strip has length \(5\). \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 1 & 2 & 2 & 3 & 3 & 3 & 3 \\ \hline 2 & 2 & 2 & 2 & 3 & 4 & 4 & 4 & 4 \\ \hline 3 & 3 & 3 & 4 & & & & & & \\ \hline 4 & 4 & & & & & & & & & \\ \hline \end{tabular} ## 3. Complete Collineations ### Basic notions The main reference is the work of Vainsencher [30]; see also the work of Thaddeus [29] for a more modern treatment. **Definition 3.1**.: _Let \(V,W\) be vector spaces of dimensions \(r+1,d+1\), respectively. We assume throughout that \(r\leq d\). A **complete collineation**\(\phi=\{\phi_{i}:V_{j}\to W_{j}\}_{j=0}^{\ell}\), often abusively denoted \(\phi:V\to W\), is a collection of non-zero linear maps \(\phi_{j}:V_{j}\to W_{j}\), each considered up to scaling, such that \(V=V_{0}\), \(W=W_{0}\), \(V_{j}=\ker(\phi_{j-1})\) and \(W_{i}=\operatorname{coker}(\phi_{j-1})\), and the last map \(\phi_{\ell}:V_{\ell}\to W_{\ell}\) is of full rank (that is, is injective)._ Due to the requirement that the \(\phi_{j}\) be non-zero, the dimensions of the \(V_{j}\) and \(W_{j}\) are strictly decreasing, so any sequence of such maps must terminate in one of full rank. 
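As a computational aside to SS2.2: the counts \(|\mathsf{SSYT}_{r+1}(\lambda)|\), which bound the coefficients \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) in Proposition 5.8 and enter the description of Theorem 1.2, can be evaluated with the hook content formula of [28, Corollary 7.21.4]. A minimal sketch, with illustrative test shapes:

```python
from fractions import Fraction

def num_ssyt(shape, k):
    """Number of SSYTs of the given shape with entries in {1, ..., k}, via the
    hook content formula: product over boxes (i, j) of (k + j - i) / hook(i, j),
    with 0-based row index i and column index j."""
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])] if shape else []
    total = Fraction(1)
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            hook = (row_len - j) + (conj[j] - i) - 1
            total *= Fraction(k + j - i, hook)
    return int(total)

# Sanity checks: a single box admits k fillings; a single column of length k admits 1.
assert num_ssyt([1], 3) == 3
assert num_ssyt([1, 1, 1], 3) == 1
print(num_ssyt([10, 9, 4, 2], 4))  # count for the shape of the SSYT displayed above
```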
For sake of brevity, we often refer to complete collineations throughout this paper as simply "collineations." **Definition 3.2**.: _We refer to the integer \(\ell=\ell(\phi)\) as the **length** of \(\phi\), and the \((\ell+1)\)-tuple of positive integers \(\overrightarrow{r}=(\operatorname{rank}(\phi_{0}),\ldots,\operatorname{rank }(\phi_{\ell}))=(r_{0},\ldots,r_{\ell})\) as the **type** of \(\phi\)._ _We say that \(\phi\) is of **full rank** if it is of type \((r+1)\) (equivalently, of length 0), and is **totally degenerate** if it is of type \((1,1,\ldots,1)\) (equivalently, of length \(r\))._ Note that the possible types \(\overrightarrow{r}\) of \(\phi\) index the \(GL(V)\times GL(W)\)-orbits of \(\operatorname{Coll}(V,W)\). **Definition 3.3**.: _Denote by \(\operatorname{Coll}(V,W)=\operatorname{Coll}(r+1,d+1)\) the moduli space of complete collineations from \(V\) to \(W\). Let \(b:\operatorname{Coll}(V,W)\to\operatorname{\mathbb{P}Hom}(V,W)\) be the canonical morphism remembering the map \(\phi_{0}:V\to W\)._ In fact, \(b\) is an iterated blowup at smooth subvarieties: the locus of rank \(1\) maps \(\phi_{0}:V\to W\) in \(\operatorname{\mathbb{P}Hom}(V,W)\) is blown up first, followed by the proper transform of the locus of maps of rank at most \(2\), and so on. In particular, \(\operatorname{Coll}(V,W)\) is smooth, projective, and irreducible of dimension \((r+1)(d+1)-1\). **Definition 3.4**.: _Let \(\pi:\operatorname{Coll}(V,W)\to\operatorname{Gr}(r+1,W)\) be the map sending \(\phi\) to \(\operatorname{pr}_{\ell}^{-1}(\operatorname{im}(\phi_{\ell}))\subset W\), where \(\operatorname{pr}_{i}:W\to W_{i}\) denotes the canonical quotient map._ The subspace \(\operatorname{pr}_{\ell}^{-1}(\operatorname{im}(\phi_{\ell}))\subset W\) may alternatively be regarded as the span of the images of _all_ of the \(\phi_{i}\) upon pullback to \(W\). We will refer to this subspace of \(W\) as the **image** of \(\phi\), denoted \(\operatorname{im}(\phi)\). For a fixed subspace \(W^{\prime}\subset W\), the fiber of \(\pi\) over \(W^{\prime}\in\operatorname{Gr}(r+1,W)\) is isomorphic to \(\operatorname{Coll}(V,W^{\prime})\). Note now that \(V,W^{\prime}\) have the same dimension; the space of collineations may be identified with the wonderful compactification of \(PGL(r+1)\). We will not use this identification in what follows. **Definition 3.5**.: _Fix a type \(\overrightarrow{r}=(r_{0},\ldots,r_{\ell})\) with \(r_{0}+\cdots+r_{\ell}=r+1\). Let \(\operatorname{Coll}_{\overrightarrow{r}}^{\circ}(V,W)\) be the locally closed subvariety of \(\operatorname{Coll}(V,W)\) of collineations of type \(\overrightarrow{r}\), and let \(\operatorname{Coll}_{\overrightarrow{r}}(V,W)\) be its closure._ As a scheme, \(\operatorname{Coll}_{\overrightarrow{r}}(V,W)\) may be constructed inductively, as an open subset of a sequence of iterated projective and Grassmannian bundles. In particular, we find that \(\operatorname{Coll}_{\overrightarrow{r}}(V,W)\subset\operatorname{Coll}(V,W)\) is smooth and irreducible of codimension \(\ell\). ### Limits of full rank maps Let \(\phi^{t}:V\to W\) be a \(1\)-parameter family of linear maps, which we assume are injective near, but not at, \(t=0\). We explain a non-canonical procedure for computing \(\lim_{t\to 0}\phi^{t}\) in the space \(\operatorname{Coll}(V,W)\). First, define \(\phi_{0}=\lim_{t\to 0}\phi^{t}\) as a projectivized linear map, and define \(V_{1}=\ker(\phi_{0})\), \(W_{1}=\operatorname{coker}(\phi_{0})\). 
Write \(r_{0}\) for the rank of \(\phi_{0}\); we may assume \(r_{0}\leq r\), or else \(\phi_{0}\) is also the limit of \(\phi^{t}\) as a collineation. Note that the projectivized map \(\phi_{1}:V_{1}\to W_{1}\) is determined by the associated rational map \(\overline{\phi_{1}}:\mathbb{P}V_{1}\dashrightarrow\mathbb{P}W_{1}\). Choose a decomposition \(V=V_{0}=V_{1}\oplus V_{1}^{\prime}\). For any \(v_{1}\in\mathbb{P}V_{1}\), define \[\widetilde{\phi_{1}}(v_{1}):=\lim_{t\to 0}\phi^{t}(\langle V_{1}^{ \prime},v_{1}\rangle)\subset W.\] Then, \(\widetilde{\phi_{1}}(v_{1})\) is a subspace of \(W\) of dimension \(r_{0}+1\) containing the image of \(\phi_{0}\), and can therefore be regarded as an element of \(\mathbb{P}W_{1}\). For each \(v_{1}\), we also define \(k_{1}(v_{1})\) to be the minimal positive integer \(k\) for which the \(k\)-th order thickening near \(t=0\) of \(\phi^{t}(\langle V_{1}^{\prime},v_{1}\rangle)\), regarded as a module over \(\mathbb{C}[\epsilon]/\epsilon^{k+1}\), has length \(r_{0}+1\). That is, \(k_{1}(v_{1})\) is the minimal positive integer \(k\) for which the restriction of \(\phi_{0}\) to \(\langle V_{1}^{\prime},v_{1}\rangle\) "smooths to order \(k\) to an injection." Now, let \(k_{1}=\min_{v_{1}\in\mathbb{P}V_{1}}k_{1}(v_{1})\). Then, we have \(\overline{\phi_{1}}(v_{1})=\widetilde{\phi_{1}}(v_{1})\) if \(k_{1}(v_{1})=k_{1}\), and \(\overline{\phi_{1}}(v_{1})=0\), that is, \(v_{1}\in V_{2}\), if \(k_{1}(v_{1})>k_{1}\). The further maps \(\phi_{j}:V_{j}\to W_{j}\) are determined similarly by iterating this procedure. ### The incidence loci Fix integers \(d\geq r\geq 1\) and vector spaces \(V,W\) of dimensions \(r+1,d+1\), respectively. Let \(L\subset V\) be a vector subspace of codimension \(k+1\geq 1\), and let \(M\subset W\) be a vector subspace of codimension \(1\)(hyperplane). **Definition 3.6**.: _Let \(\mathsf{Inc}^{\prime}(L,M)\subset\mathbb{P}\operatorname{Hom}(V,W)\) be the locus of \(\phi:V\to W\) for which \(\phi(L)\subset M\)._ _Let \(\mathsf{Inc}(L,M)\subset\operatorname{Coll}(V,W)\) be the proper transform of \(\mathsf{Inc}^{\prime}(L,M)\) under the iterated blowup \(b:\operatorname{Coll}(V,W)\to\mathbb{P}\operatorname{Hom}(V,W)\)._ The locus \(\mathsf{Inc}^{\prime}(L,M)\subset\mathbb{P}\operatorname{Hom}(V,W)\) is a linear subspace of codimension \(r-k\), so \(\mathsf{Inc}(L,M)\subset\operatorname{Coll}(V,W)\) also has codimension \(r-k\). We denote by \(\gamma_{r-k}\in H^{2(r-k)}(\operatorname{Coll}(L,M))\) the cycle class of \(\mathsf{Inc}(L,M)\). **Remark 3.7**.: _Vainsencher [30, SS4] considers similar loci, where the dimension of \(L\) is equal to the codimension of \(M\), and the composition_ \[L\hookrightarrow V\to W\twoheadrightarrow W/M\] _is required to drop in rank. The corresponding loci are divisors in \(\mathbb{P}\operatorname{Hom}(V,W)\) and \(\operatorname{Coll}(V,W)\)._ _More generally, one can take \(L\) and \(M\) to have arbitrary dimension and consider arbitrary degeneracy conditions on the composite map \(L\to W/M\). Many of the techniques of this paper can be extended to this general setting; it would be of independent interest to develop a theory of the associated cycles and their intersection theory on \(\operatorname{Coll}(V,W)\), but we will stick to the setting above._ **Example 3.8**.: _Take \(V=\mathbb{C}^{r+1}\) and \(W=H^{0}(\mathbb{P}^{1},d)\). Let \(L\subset V\) be a linear space of codimension \(k+1\), let \(p\in\mathbb{P}^{1}\), and let \(M\subset W\) be the hyperplane of sections vanishing at \(p\). 
We interpret \(\mathbb{P}\operatorname{Hom}(V,W)\) as the locus of maps (possibly with base-points) \(f:\mathbb{P}^{1}\to\mathbb{P}^{r}=\mathbb{P}(V^{\vee})\) of degree \(d\)._ _In this case, \(\mathsf{Inc}(L,M)\) is (away from the locus where \(\operatorname{im}(\phi)\subset M\)) the locus of maps sending \(p\) to the linear subspace \(X_{L}\subset\mathbb{P}^{r}\) of dimension \(k\) corresponding to \(L\)._ We now give a set-theoretic description of \(\mathsf{Inc}(L,M)\). First, for any collineation \(\phi\), for integers \(j=0,1,\dots,\ell(\phi)\), let \(\operatorname{pr}_{j}:W\to W_{j}\) be the projection map, and denote by \((\dagger)_{j}\) the condition that \[\phi_{j}(L\cap V_{j})\subset\operatorname{pr}_{j}(M).\] **Proposition 3.9**.: _We have \(\phi\in\mathsf{Inc}(L,M)\subset\operatorname{Coll}(V,W)\) if and only if property \((\dagger)_{j}\) holds for all \(j\) for which \(L,V_{j}\subset V\) are transverse, that is, intersect in the expected codimension of \((k+1)+(r_{0}+\dots+r_{j-1})\)._ Note the following: * If \((k+1)+(r_{0}+\cdots+r_{j-1})\geq r+1\), then the transversality condition is that \(L\cap V_{j}=0\), in which case \((\dagger)_{j}\) is automatic. Thus, the property \((\dagger)_{j}\) only needs to be checked for \(j\) sufficiently small, depending on \(k\) and \(\overrightarrow{r}\). * If \(V_{j}\) is transverse to \(L\) and \((k+1)+(r_{0}+\cdots+r_{j-1})\leq r+1\), then the same is true of \(V_{0},\ldots,V_{j-1}\). Furthermore, \(V_{0}=V\) is always transverse to \(L\). * The image \(\operatorname{pr}_{j}(M)\subset W_{j}\) is either a hyperplane in \(W_{j}\) or all of \(W_{j}\). If the latter, then \((\dagger)_{j}\) is automatic, and if the former, then \(\operatorname{pr}_{j^{\prime}}(M)=W_{j^{\prime}}\) for all \(j^{\prime}>j\). Therefore, the property \((\dagger)_{j}\) non-trivial for at most one value of \(j\), namely, the minimal \(j\) for which \(\operatorname{pr}_{j}(M)\neq W_{j}\). Proof of Proposition 3.9.: We first show that the conditions \((\dagger)_{j}\) are necessary. Suppose that \(\phi=\{\phi_{j}\}_{j=0}^{\ell}\) is the limit of a one-parameter family of full rank collineations \(\phi^{t}\) in \(\operatorname{\mathsf{lnc}}^{\prime}(L,M)\subset\mathbb{P}\operatorname{Hom }(V,W)\). We wish to show that \(\phi\) satisfies \((\dagger)_{j}\) whenever \(V_{j}\) is transverse to \(L\). For \(j=0\), the claim is that \(\phi_{0}(L)\subset M\); this is clear from taking the limit in \(\mathbb{P}\operatorname{Hom}(V,W)\). We now proceed by induction on \(j\); we have already noted that the transversality condition for \(V_{j}\) implies the same for \(V_{0},\ldots,V_{j-1}\). We assume further that \(L\cap V_{j}\neq 0\), otherwise there is nothing further to prove. For \(h=0,\ldots,j-1\), the map \(L\cap V_{h}\to\operatorname{im}(\phi_{h})\) induced by \(\phi_{h}\) is surjective, as \(L\cap V_{h}\) is transverse to \(\ker(\phi_{h})=V_{h+1}\) in \(V_{h}\). Thus, there exist decompositions \(V_{h}=V_{h+1}\oplus V_{h+1}^{\prime}\), where \(V_{h+1}^{\prime}\subset L\), for \(h=0,\ldots,j-1\), and \(\phi_{h}\) maps \(V_{h+1}^{\prime}\) isomorphically to a subspace of \(\operatorname{pr}_{h}(M)\), by the inductive hypothesis. Now, let \(v_{j}\in L\cap V_{j}\) be any vector. 
Then, if \(\phi_{j}(v_{j})\neq 0\), then, adopting the notation of SS3.2, we have \[\widetilde{\phi}_{j}(v_{j})=\lim_{t\to 0}\phi^{t}(\langle V_{1}^{\prime}, \ldots,V_{j}^{\prime},v_{j}\rangle),\] where the right-hand side is viewed as a line in \(W_{j}=W/\langle\phi_{0}(V_{1}^{\prime}),\ldots,\phi_{j-1}(V_{j}^{\prime})\rangle\). Because \(\langle V_{1}^{\prime},\ldots,V_{j}^{\prime},v_{j}\rangle\subset L\) and \(\phi^{t}\in\operatorname{\mathsf{lnc}}^{\prime}(L,M)\), it follows that \(\phi_{j}(v_{j})\subset\operatorname{pr}_{j}(M)\). Conversely, suppose \(\phi\) satisfies \((\dagger)_{j}\) whenever \(V_{j}\) is transverse to \(L\). We wish to express \(\phi\) as the limit of linear maps \(\phi^{t}:V\to W\) of full rank for which \(\phi^{t}(L)\subset M\). We first reduce by induction to the case \(\ell=1\). The case \(\ell=0\) is easy, dealing only with linear equations on \(\mathbb{P}\operatorname{Hom}(V,W)\), so we assume that \(\ell>1\). If either the intersection of \(L\) and \(V_{\ell-1}\) is non-transverse (or zero), then we have no constraints on the maps \(\phi_{\ell-1},\phi_{\ell}\), and the length \(1\) collineation \(\{\phi_{\ell-1},\phi_{\ell}\}\) from \(V_{\ell-1}\) to \(W_{\ell-1}\) is a limit of full rank collineations \(\phi_{\ell-1}^{t}:V_{\ell-1}\to W_{\ell-1}\), by the irreducibility of \(\operatorname{Coll}(V_{\ell-1},W_{\ell-1})\). We may then replace the last two maps \(\phi_{\ell-1},\phi_{\ell}\) with a generic \(\phi_{\ell-1}^{t}\). On the other hand, if \(L\) and \(V_{\ell-1}\) are transverse, then property \((\dagger)_{\ell-1}\) and the claim for \(\ell=1\) (along with property \((\dagger)_{\ell}\) if \(L\) is additionally transverse to \(V_{\ell}\)) allow us again to replace \(\{\phi_{\ell-1},\phi_{\ell}\}\) with a generic \(\phi_{\ell-1}^{t}:V_{\ell-1}\to W_{\ell-1}\), now with the property that \(\phi_{\ell-1}^{t}(L\cap V_{\ell-1})\subset\operatorname{pr}_{\ell-1}(M)\). (Note that this argument is still valid if \(\operatorname{pr}_{\ell-1}(M)=W_{\ell-1}\); the incidence conditions again become vacuous.) The claim for all \(\ell\) now follows by induction. We therefore assume henceforth that \(\ell=1\). First, suppose that \(L\) is transverse to \(V_{1}\). We may represent a \(1\)-parameter family of maps \(\phi^{t}:V\to W\) by one of the following two \((d+1)\times(r+1)\) matrices. \[A_{+}^{t}=\begin{bmatrix}a_{0,0}^{\prime}t&\cdots&a_{0,k}^{\prime}t&0&\cdots&0 &0&\cdots&0\\ a_{1,0}^{\prime}t&\cdots&a_{1,k}^{\prime}t&a_{1,k+1}^{\prime}t&\cdots&a_{1,r -1}^{\prime}t&a_{1,r_{1}}&\cdots&a_{1,r}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\\ a_{d,0}^{\prime}t&\cdots&a_{d,k}^{\prime}t&a_{d,k+1}^{\prime}t&\cdots&a_{d,r-1 }^{\prime}t&a_{d,r_{1}}&\cdots&a_{d,r}\end{bmatrix}\] \[A_{-}^{t}=\begin{bmatrix}a_{0,0}^{\prime}t&\cdots&a_{0,r_{1}-1}^{\prime}t&a_{0,r_ {1}}&\cdots&a_{0,k}&0&\cdots&0\\ a_{1,0}^{\prime}t&\cdots&a_{1,r_{1}-1}^{\prime}t&a_{1,r_{1}}&\cdots&a_{1,k}&a_{1,k+1}&\cdots&a_{1,r}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\\ a_{d,0}^{\prime}t&\cdots&a_{d,r_{1}-1}^{\prime}t&a_{d,r_{1}}&\cdots&a_{d,k}&a_{d,k+1}&\cdots&a_{d,r}\end{bmatrix}\] In both cases, the subspace \(V_{1}\subset V\) of dimension \(r_{1}=r+1-r_{0}\) is represented by the first \(r_{1}\) columns, and the subspace \(L\subset V\) of codimension \(k+1\) is represented by the last \(r-k\) columns. 
The first case corresponds to the case \(r_{1}>k\), in which case \(V_{1}\) and \(L\) intersect non-trivially and transversely, and the second case corresponds to the case \(r_{1}\leq k\), in which case \(V_{1}\cap L=0\). The hyperplane \(M\subset W\) corresponds to the (dual of the) first row of the matrix, and the incidence condition \(\mathsf{Inc}(L,M)\) is reflected in the vanishing of the last \(r-k\) entries of this row. If the \(a^{\prime}_{i,j}\) and \(a_{i,j}\) are chosen generally, then the maps \(\phi^{t}\) are injective in a neighborhood of \(t=0\), and limit to a general collineation of type \((r_{0},r_{1})\) with \(\ker(\phi_{0})=V_{1}\) and satisfying \((\dagger)_{0}\) and \((\dagger)_{1}\). Indeed, \(\phi_{0}\) is the linear map obtained by setting \(t=0\), and \(\phi_{1}\) is obtained by restricting to the first \(r_{1}\) columns, dividing by \(t\), and post-composing with the quotient \(W\to W/\operatorname{im}(\phi_{0})\). Now, suppose that \(L\) is not transverse to \(V_{1}\). Then, for some \(s\leq k,r_{1}-1\), we may represent a \(1\)-parameter family of maps \(\phi^{t}:V\to W\) by the matrix \[A_{+}^{t}=\begin{bmatrix}a^{\prime}_{0,0}t&\cdots&a^{\prime}_{0,s-1}t&0&\cdots &0&0&\cdots&0&a_{0,s+r-k}&\cdots&a_{0,r}\\ a^{\prime}_{1,0}t&\cdots&a^{\prime}_{1,s-1}t&a^{\prime}_{1,s}t&\cdots&a^{ \prime}_{1,r_{1}-1}t&a_{1,r_{1}}&\cdots&a_{1,s+r-k-1}&a_{1,s+r-k}&\cdots&a_{1, r}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\vdots\\ a^{\prime}_{d,0}t&\cdots&a^{\prime}_{d,s-1}t&a^{\prime}_{d,s}t&\cdots&a^{ \prime}_{d,r_{1}-1}t&a_{d,r_{1}}&\cdots&a_{d,s+r-k-1}&a_{d,s+r-k}&\cdots&a_{d, r}\end{bmatrix}\] The subspace \(V_{1}\subset V\) is again represented by the first \(r_{1}\) columns, but now \(L\) is represented by columns \(s\) through \(s+r-k-1<r\). In particular, the last column corresponds to a non-zero vector of \(V\) not in \(\langle V_{1},L\), yet \(V_{1}\cap L\neq 0\). If the \(a^{\prime}_{i,j}\) and \(a_{i,j}\) are chosen generally, then the maps \(\phi^{t}\) are injective in a neighborhood of \(t=0\), and limit to a general collineation of type \((r_{0},r_{1})\) with \(\ker(\phi_{0})=V_{1}\) and satisfying \((\dagger)_{0}\). Here, there is _no condition_ on \(\phi_{1}\), which is again obtained by dividing the first \(r_{1}\) columns by \(t\). Indeed, the image of \(\phi_{0}\) is not constrained to lie in \(M\), corresponding to the fact that \(a_{0,d}\neq 0\) generically, so any map \(\phi_{1}:V_{1}\to W/\operatorname{im}(\phi_{0})\) can be obtained with the appropriate choice of \(a^{\prime}_{i,j}\). This completes the proof. By a straightforward parameter count, one can deduce: **Corollary 3.10**.: _For any type \(\overrightarrow{r}=(r_{0},\ldots,r_{\ell})\) and any \(L,M\) as above, the intersection_ \[\mathsf{Inc}(L,M)\cap\operatorname{Coll}_{\overrightarrow{r}}(V,W)\subset \operatorname{Coll}(V,W)\] _is pure of the expected codimension \((r-k)+\ell\)._ ## 4. From complete collineations to curve counts In this section, we relate the curve counts \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) to the cycles \(\mathsf{Inc}(L,M)\) defined in the previous section, proving Theorem 1.6. ### Setup We first recall the relevant definitions. Let \((C,p_{1},\ldots,p_{n})\in\mathcal{M}_{g,n}\) be a general curve. Let \(X_{1},\ldots,X_{n}\subset\mathbb{P}^{r}\) be general linear spaces of dimensions \(k_{1},\ldots,k_{n}<r\), respsectively. 
For \(j=0,1,\ldots,r-1\), let \(n_{j}\) be the number of \(X_{i}\) of dimension \(j\), and write \(\overrightarrow{n}=(n_{0},\ldots,n_{r-1})\) **Definition 4.1**.: _Let \(d\geq 0\) be an integer for which (5) holds. Then, we define \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) to be the number of non-degenerate maps \(f:C\to\mathbb{P}^{r}\) of degree \(d\) with \(f(p_{i})\in X_{i}\)._ _When \(\overrightarrow{n}=(n,0,\ldots,0)\), the counts \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}:=\mathsf{T}_{g,\overrightarrow{n},d}^{ \mathbb{P}^{r}}\) are the geometric Tevelev degrees of \(\mathbb{P}^{r}\)._ Throughout, we will often abbreviate \(\overrightarrow{n}=(n,0,\ldots,0)\), by simply \(n\) when there is no opportunity for confusion. The Brill-Noether theorem ensures that, under the assumption (5) and the stated generality assumptions, the number of maps in question is finite and transverse, in the sense that such maps admit no first-order deformations. Let \(V,W\) be vector spaces of dimensions \(r+1,d+1\) as before, with \(r\leq d\). Let \(L_{1},\ldots,L_{n}\subset V\) be general linear subspaces of dimensions \(r-k_{1},\ldots,r-k_{n}\), and let \(M_{1},\ldots,M_{n}\subset W\) be general hyperplanes. **Definition 4.2**.: _We define the subscheme_ \[Y_{r,\overrightarrow{n},d}:=\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\subset \operatorname{Coll}(V,W).\] _and the subscheme_ \[Z_{r,\overrightarrow{\pi},d}:=\pi(Y_{r,\overrightarrow{\pi},d})\subset\operatorname{ Coll}(V,W).\] ### Specializing the \(M_{i}\subset W\) We now specialize the previous discussion of complete collineations to the following setting. Let \(V=\mathbb{C}^{r+1}\) and let \(W=H^{0}(\mathbb{P}^{1},d)\). Let \(L_{1},\ldots,L_{n}\subset V\) be general linear subspaces of dimensions \(r-k_{1},\ldots,r-k_{n}\) as above, and let \(p_{0},p_{1},\ldots,p_{n}\in\mathbb{P}^{1}\) be general points. For each \(p_{i}\), let \(M_{i}\subset W\) be the hyperplane of \(d\)-forms vanishing at \(p_{i}\). Define the subschemes \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\subset\operatorname{Coll}(V,W)\) as before. More generally, to any point \(p\in\mathbb{P}^{1}\) we may associate the hyperplane \(M_{p}\subset W\) of \(d\)-forms vanishing at \(p\). We refer to this 1-parameter family of hyperplanes as the _bp-hyperplanes_ (where bp stands for "base-point"). We say that a collineation \(\phi\in\operatorname{Coll}(V,W)\) has a _base-point_ at \(p\) if \(\operatorname{im}(\phi)\subset M_{p}\), and is _bpf_ (base-point free) if the image of \(\phi\) is not contained in any bp-hyperplane. Let \[0=F_{0}\subset F_{1}\subset\cdots\subset F_{d+1}=W\] be the complete flag defined by \(F_{h}=H^{0}(\mathbb{P}^{1},\mathcal{O}(d)(-(d+1-h)p_{0}))\), that is, \(F_{h}\subset W\) is the subspace of sections vanishing to order \(d+1-h\) at \(p_{0}\). We consider the subschemes \[Y^{\operatorname{pt}}_{r,\overrightarrow{\pi},d}:=\bigcap_{i=1}^{n} \operatorname{\mathsf{Inc}}(L_{i},M_{i})\subset\operatorname{Coll}(V,W)\] If \(Y^{\operatorname{pt}}_{r,\overrightarrow{\pi},d}\) is irreducible of the expected codimension \(|\overrightarrow{n}|\), then as cycle _classes_ in \(H^{2|\overrightarrow{\pi}|}(\operatorname{Coll}(V,W))\), then we have \[[Y^{\operatorname{pt}}_{r,n,d}]=\prod_{j=1}^{r}\gamma_{j}^{n_{r-j}}.\] Note, however, that this transversality is not immediate from a Kleiman-Bertini-type argument, as the \(M_{i}\subset W\) are not a general collection of hyperplanes. 
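To make the bp-hyperplanes concrete, one can identify \(W=H^{0}(\mathbb{P}^{1},\mathcal{O}(d))\) with polynomials of degree at most \(d\) in an affine coordinate, so that \(M_{p}\) is the kernel of the evaluation functional at \(p\). The snippet below is only a numerical illustration (the variable names and random test points are placeholders): distinct points give evaluation functionals forming a Vandermonde matrix of full rank, which is the sense in which the special collection \(M_{p_{1}},\ldots,M_{p_{n}}\) still imposes independent conditions.

```python
import numpy as np

# A section of O(d) on P^1 is recorded by its coefficient vector in C^{d+1}
# (degree <= d polynomial in an affine coordinate t).  The bp-hyperplane M_p is
# the kernel of the evaluation functional ev_p = (1, p, p^2, ..., p^d).
def evaluation_functionals(points, d):
    return np.vander(np.asarray(points, dtype=float), N=d + 1, increasing=True)

d = 5
pts = np.random.rand(4)               # four distinct generic points on the affine line
E = evaluation_functionals(pts, d)    # 4 x (d+1) matrix of functionals cutting out the M_{p_i}
print(np.linalg.matrix_rank(E))       # 4: the functionals are linearly independent
```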
**Proposition 4.3**.: _Let \(\lambda=(\lambda_{0},\cdots,\lambda_{r})\) be a partition, and let \(\Sigma_{\lambda}^{F}\subset\operatorname{Gr}(r+1,W)\) be the corresponding Schubert variety with respect to the flag \(F\) defined above. Assume as above that the points \(p_{0},p_{1},\ldots,p_{n}\in\mathbb{P}^{1}\) and the subspaces \(L_{1},\ldots,L_{n}\subset V\) are general. Then, the intersection_ \[\pi^{-1}(\Sigma_{\lambda}^{F})\cap Y^{\operatorname{pt}}_{r,\overrightarrow{ \pi},d}\subset\operatorname{Coll}(V,W)\] _is generically smooth of the expected codimension \(|\lambda|+|\overrightarrow{n}|\)._ _Moreover, any generic point \(\phi\) of the intersection is a collineation of full rank, bpf except possibly at \(p_{0}\) (that is, with image contained in no bp-hyperplane except possibly \(M_{p_{0}}=F_{d}\)), and \(\operatorname{im}(\phi)\) has ramification sequence at \(p_{0}\) given exactly by \(\lambda\)._ Recall that the _ramification sequence_ of a linear series \(U\subset H^{0}(C,\mathcal{L})\) of rank \(r\) (which is to say that \(\dim(U)=r+1\)) on a curve \(C\) at a point \(p\in C\) is the sequence of integers \((\mu_{0},\ldots,\mu_{r})\) for which \(\dim(U\cap H^{0}(C,\mathcal{L}(-\mu_{j}-r+j)p))\geq j+1\) for \(j=1,\ldots,r+1\) and each \(\mu_{j}\) is as large as possible. Here, we have \(C=\mathbb{P}^{1}\), \(p=p_{0}\), and \(\mathcal{L}=H^{0}(\mathbb{P}^{1},\mathcal{O}(d))\). In particular, taking \(j=r+1\), part of the claim is that \(\operatorname{im}(\phi)\subset F_{d+1-\mu_{r}}\), but \(\operatorname{im}(\phi)\not\subset F_{d-\mu_{r}}\). We will prove Proposition 4.3 via a dimension count on strata of \(\operatorname{Coll}(V,W)\). Fix a collineation type \(\overrightarrow{r}=(r_{0},\ldots,r_{\ell})\) and a nested sequence of subsets \[S_{\ell}\subseteq S_{\ell-1}\subseteq\cdots\subseteq S_{0}\subseteq\{p_{1}, \ldots,p_{n}\}.\] Write \(s_{j}=|S_{j}|\) for each \(j\). Let \(\operatorname{Coll}^{\circ}_{\overrightarrow{r},S}\subset\operatorname{Coll}(V,W)\) be the locally closed subscheme of collineations \[\phi=\{\phi_{j}:V_{j}\to W_{j}\}_{j=0}^{\ell}\] of type \(\overrightarrow{r}\) which furthermore have the property, for all \(j=0,1,\ldots,\ell\), that \[\phi_{j}(V_{j})\subset\operatorname{pr}_{j}(M_{i})\] if and only if \(p_{i}\in S_{j}\). That is, \(S_{\ell}\) is the set of base-points of \(\phi\), and more generally, \(S_{j}\) is the set of base-points of \(\phi\) "up to level \(j\)." The following is a straightforward consequence of the fact that any distinct points on \(\mathbb{P}^{1}\) impose independent conditions on sections of line bundles of any degree. In fact, the points \(p_{0},p_{1},\ldots,p_{n}\) need only be distinct; one needs no further generality hypothesis. **Lemma 4.4**.: _The intersection \(\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S}:=\pi^{-1}(\Sigma^{ F}_{\lambda})\cap\operatorname{Coll}^{\circ}_{\overrightarrow{r},S}\), if non-empty, is pure of the expected codimension_ \[|\lambda|+\ell+\sum_{j=0}^{\ell}r_{j}s_{j}\] _in \(\operatorname{Coll}(V,W)\). Moreover, given a generic point \(\phi\) of \(\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S}\), the ramification sequence of \(\operatorname{im}(\phi)\) at \(p_{0}\) is exactly \(\lambda\)._ Proof of Proposition 4.3.: Fix \(\overrightarrow{r},S\), and consider the product \(\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S}\times(\mathbb{P}^ {r})^{n}\). 
Regarding the \(M_{i}\subset W\) as fixed and the \(L_{i}\subset V\) as varying, we form the loci \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\)_relatively_ over \(P=\prod_{i=1}\operatorname{Gr}(r-k_{i},V)\), parametrizing the possible choices of \(L_{i}\). Let \(\mathcal{Y}\subset\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S} \times P\) be the intersection of the relative incidence loci, whose fiber over a point \((L_{1},\ldots,L_{n})\in P\) is the restriction of \[\pi^{-1}(\Sigma^{F}_{\lambda})\cap Y^{\operatorname{pt}}_{r,\overrightarrow{n},d}\subset\operatorname{Coll}(V,W)\] to \(\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S}\). Consider now the other projection \(\mathcal{Y}\to\operatorname{Coll}^{\circ}_{\lambda,\overrightarrow{r},S}\). By Proposition 3.9, the fiber over \(\phi\) consists of collections of linear subspaces \(L_{i}\subset V\) with the property that: * either \(\phi_{j_{i}+1}(L_{i}\cap V_{j_{i}+1})\subset\operatorname{pr}_{j_{i}+1}(M_{i})\) (property \((\dagger)_{j_{i}+1}\)), or * \(L_{i}\) and \(V_{j_{i}+1}\) fail to be transverse in \(V\), where \(j_{i}\) is the largest value of \(j\) for which \(\operatorname{im}(\phi_{j})\subset\operatorname{pr}_{j}(M_{i})\). By convention, we take \(j_{i}=-1\) if \(\operatorname{im}(\phi_{0})\not\subset M_{i}\). The second condition is never satisfied when \(j_{i}=-1\), and both of the conditions are vacuously satisfied \(j_{i}=\ell\). The first condition imposes \(\max((r-k_{i})-(r_{0}+\cdots+r_{j_{i}}),0)\) conditions on \(L_{i}\) (note that this is valid even when \(j_{i}=\ell\)). The second imposes \(|(r-k_{i})-(r_{0}+\cdots+r_{j_{i}})|+1\) conditions if \(j_{i}\geq 0\) and is never satisfied if \(j_{i}=-1\); in particular, we have strictly more conditions than in the first case. In both cases, the subvariety of \(\operatorname{Gr}(r-k_{i},V)\) defined by these conditions is a Schubert variety, and in particular is generically smooth. Combining with Lemma 4.4 and restricting to a general point \((L_{1},\ldots,L_{n})\in\prod_{i}\operatorname{Gr}(r-k_{i},V)\), we conclude that the intersection of interest is generically smooth of codimension at least \[\left(\sum_{i=1}^{n}(r-k_{i})-\sum_{j=0}^{\ell}r_{j}s_{j}\right)+\left(| \lambda|+\ell+\sum_{j=0}^{\ell}r_{j}s_{j}\right)=\sum_{i=1}^{n}(r-k_{i})+| \lambda|+\ell\] in \(\operatorname{Coll}(V,W)\). This number is at least the expected codimension, with equality only when \(\ell=0\), so it follows that the intersection in question must be supported on the open stratum, on which it has expected dimension. The statements about \(\phi\) being bpf (except possibly at \(p_{0}\)) and having ramification exactly given by \(\lambda\) at \(p_{0}\) follow from repeating the dimension counts with \(W\) replaced by an intersection of bp-hyperplanes or \(\lambda\) replaced with a larger partition. Note in particular that requiring \(\operatorname{im}(\phi)\subset M_{i}\) imposes \(r+1\) conditions, which is strictly more than the \(r-k_{i}\) conditions lost from \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\); requiring instead that \(\operatorname{im}(\phi)\) is contained in _any_ bp-hyperplane imposes \(r>0\) additional conditions, as the space of such hyperplanes is \(1\)-dimensional. We remark that Proposition 4.3 immediately implies the following genericity statement by regenerating the \(M_{i}\) back to general hyperplanes and \(F\) back to a general flag. (Here, one can alternatively argue directly by Kleiman-Bertini.) 
**Corollary 4.5**.: _Let \(F\) be a general complete flag \(0=F_{0}\subset F_{1}\subset\cdots\subset F_{d+1}=W\). Let \(\lambda=(\lambda_{0},\cdots,\lambda_{r})\) be a partition, and let \(\Sigma_{\lambda}^{F}\subset\operatorname{Gr}(r+1,W)\) be the corresponding Schubert cycle with respect to \(F\)._ _Let \(M_{1},\ldots,M_{n}\subset W\) be general hyperplanes, and let \(L_{1},\ldots,L_{n}\subset V\) be general subspaces. Then, the intersection_ \[\pi^{-1}(\Sigma_{\lambda}^{F})\cap Y_{r,\overrightarrow{n},d}\subset \operatorname{Coll}(V,W)\] _is generically smooth of the expected codimension \(|\lambda|+|\overrightarrow{n}|\)._ _Moreover, any generic point \(\phi\) of the intersection is a collineation of full rank, for which \(\operatorname{im}(\phi)\) is not contained in \(M_{i}\). Finally, for such a point, the dimensions \(\dim(F_{j}\cap\operatorname{im}(\phi))\) are dictated exactly by \(\lambda\)._ We also record the following: **Lemma 4.6**.: _The subschemes \(Y_{r,\overrightarrow{n},d}^{\operatorname{pt}}\) are \(Y_{r,\overrightarrow{n},d}\) of \(\operatorname{Coll}(V,W)\) are irreducible of the expected codimension of \(|\overrightarrow{n}|=\sum_{i=1}^{n}(r-k_{i})\)._ Proof.: Taking \(\lambda=\emptyset\) we have already seen that \(Y_{r,\overrightarrow{n},d}^{\operatorname{pt}}\) and \(Y_{r,\overrightarrow{n},d}\) are of expected dimension and generically supported on the locus of full rank collineations. On the other hand, both subschemes are cut out by _linear_ equations on \(\operatorname{Coll}^{\circ}(V,W)\), which is itself an open subset of a projective space. The conclusion follows. In particular, we have \([Y_{r,\overrightarrow{n},d}^{\operatorname{pt}}]=[Y_{r,\overrightarrow{n},d}]= \prod_{j=1}^{r}\gamma_{j}^{n_{r-j}}\). **Definition 4.7**.: _We define_ \[\Gamma_{r,\overrightarrow{n},d}:=\pi_{*}([Y_{r,\overrightarrow{n},d}])=\pi_{* }\left(\prod_{j=1}^{r}\gamma_{r}^{n_{r-j}}\right)\in H^{2(|\overrightarrow{n} |-r(r+2))}(\operatorname{Gr}(r+1,W)).\] ### Degenerations of limit linear collineations Let \(B\) be the spectrum of a discrete valuation ring. Let \((\mathcal{C}\to B,p_{1},\ldots,p_{n})\) be a stable pointed curve of genus \(g\) whose general fiber \(C_{\eta}\) is smooth and whose special fiber \(C_{0}\) has the following form, depicted in Figure 1. To a rational component (spine) \(C_{\text{sp}}\) elliptic tails \(E_{1},\ldots,E_{g}\) are attached at general points \(s_{1},\ldots,s_{g}\), and a rational tail \(R\) is attached at a general point \(s_{0}\), so that \(R\) contains the markings \(p_{1},\ldots,p_{n}\) in general position. Let \(\mathcal{G}(\mathcal{C}/B)\) be the space of relative limit linear series on \(\mathcal{C}\) of rank \(r\) and degree \(d\), see [12]. Let \(\mathcal{G}(\mathcal{C}/B)^{\circ}\) be the open subset of limit linear series that do not have base-points at any of the \(p_{i}\). On the special fiber, the base-point condition is taken as a constraint on the \(R\)-aspect of the limit linear series. Let \(\mathcal{P}(\mathcal{C}/B)\to\mathcal{G}(\mathcal{C}/B)\) be the projective bundle of \((r+1)\)-tuples of sections, taken up to simultaneous scaling, where on the special fiber we take sections of the \(R\)-aspect. The associated vector bundle is \(\operatorname{Hom}(\mathbb{C}^{r+1},\mathcal{W}_{R})\), where \(\mathcal{W}_{R}\) is the universal rank \((r+1)\) bundle given by the universal linear series on \(\mathcal{C}_{\eta}\) and the \(R\)-aspect of the universal series on \(\mathcal{C}_{0}\). Figure 1. The curve \(C_{0}\). 
Finally, let \(\operatorname{Coll}(\mathcal{C}/B)\to\mathcal{P}(\mathcal{C}/B)\) be the fiber-wise blowup whose fibers are complete collineations \(V\cong\mathbb{C}^{r+1}\to W_{R}\), where the \(W_{R}\) are the fibers of \(\mathcal{W}_{R}\). The points of \(\operatorname{Coll}(\mathcal{C}/B)\) on the general fiber are therefore linear series \((\mathcal{L}_{\eta},W_{\eta}\subset H^{0}(\mathcal{C},\mathcal{L}_{\eta}))\) of rank \(r\) and degree \(d\) on \(\mathcal{C}_{\eta}\) with a collineation \(V\to W_{\eta}\) (or equivalently, collineations \(V\to H^{0}(\mathcal{C},\mathcal{L}_{\eta})\)), and the points of \(\operatorname{Coll}(\mathcal{C},B)\) are limit linear series on \(C_{0}\) with a collineation from \(V\) to the \(R\)-aspect \(W_{R}\). See also [7, SS6] for a detailed construction in the case \(r=1\). The loci \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\) can be globalized to \(\operatorname{Coll}(\mathcal{C}/B)\). More precisely, fix general linear spaces \(L_{i}\subset V\cong\mathbb{C}^{r+1}\) of dimension \(k_{i}\). On the pullback \(\operatorname{Coll}(\mathcal{C}/B)^{\circ}\) of \(\operatorname{Coll}(\mathcal{C}/B)\) over \(\mathcal{G}(\mathcal{C}/B)^{\circ}\), we have, for each \(i\), a rank \(r\) sub-bundle \(\mathcal{M}_{i}\subset\mathcal{W}_{R}\) of sections vanishing at \(p_{i}\). Then, define \(\operatorname{\mathsf{Inc}}(L_{i},\mathcal{M}_{i})^{\circ}\subset \operatorname{Coll}(\mathcal{C}/B)_{bpf}\) fiberwise by the usual incidence locus \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\). Then, we define \(\operatorname{\mathsf{Inc}}(L_{i},\mathcal{M}_{i})\subset\operatorname{Coll} (\mathcal{C}/B)\) to be the closure of \(\operatorname{\mathsf{Inc}}(L_{i},\mathcal{M}_{i})^{\circ}\), and \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\) to be the intersection of all of the \(\operatorname{\mathsf{Inc}}(L_{i},\mathcal{M}_{i})\). **Proposition 4.8**.: _Assume (5). Then, the subscheme \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\subset \operatorname{Coll}(\mathcal{C}/B)\) is finite and etale of degree_ \[\int_{\operatorname{Coll}(V,W)}\pi^{*}(\sigma_{1^{r}}^{g})\cdot\prod_{j=1}^{r} \gamma_{j}^{n_{r-j}}\] _over \(B\)._ _Furthermore, any geometric point of \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\) over the generic point is a collineation of full rank, and is base-point free._ Proof.: We first show that the restriction of \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\) to the special fiber is reduced with the claimed number of points. Let \(W\) be a limit linear series on \(C_{0}\) with a collineation \(\phi:\mathbb{C}^{r+1}\to W_{R}\) lying in \(\mathcal{Y}\). We first claim that the ramification sequence of \(W\) on each \(E_{j}\) must be \((d-r,d-r-1,\ldots,d-r-1)\) at the node, in which case there is a unique linear series on \(E_{j}\) with this property. The argument is as in [13, SS2]. First, if there are strictly more than \(r(d-r-1)+(d-r)\) ramification conditions on \(E_{j}\), then no such \(E_{j}\)-aspect can exist, and when there are exactly this number of conditions, then no such \(E_{j}\)-aspect can exist unless the ramification sequence is equal to \((d-r,d-r-1,\ldots,d-r-1)\), in which case it is uniquely determined. On the other hand, if there are strictly fewer than this number of conditions at any \(E_{j}\), the total number of ramification conditions imposed on \(C_{\mathrm{sp}}\) at the points \(s_{j}\) is at least \(rg+1\), and so the same must be true at \(s_{0}\) on \(R\). 
However, Proposition 4.3 then shows that no collineation \(\mathbb{C}^{r+1}\to W_{R}\) lying in \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\) with \(rg+1\) Schubert conditions at \(p_{0}\) can exist. Therefore, the ramification sequence at each \(s_{j}\) on \(C_{\mathrm{sp}}\) must be (at least) \((1,\ldots,1,0)\), corresponding to the class \(\sigma_{1^{r}}\). Arguing as above, the number of Schubert conditions imposed at \(s_{0}\) on \(C_{\mathrm{sp}}\) must be exactly \((r+1)(d-r)-rg\), and at \(s_{0}\) on \(R\) must be exactly \(rg=(r+1)(d-r)-r(n-r-2)\). For any pair \((\lambda,\overline{\lambda})\) of complementary partitions in \((r+1)^{d-r}\) of these sizes, we obtain, by the Mukhin-Tarasov-Varchenko Theorem [25] (guaranteeing the transversality of the cycles in the first integral) and Proposition 4.3 (guaranteeing the transversality in the second), \[\left(\int_{\operatorname{Gr}(r+1,d+1)}\sigma_{1^{r}}^{g}\cdot\sigma_{\lambda} \right)\cdot\left(\int_{\operatorname{Coll}(V,W)}\pi^{*}(\sigma_{\overline{ \lambda}})\cdot\prod_{j=1}^{r}\gamma_{j}^{n_{r-j}}\right)\] reduced points of \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\). Summing over all \((\lambda,\overline{\lambda})\) yields the first part of the claim. Now, because \(\mathcal{Y}_{g,r,\overrightarrow{n},d}^{\operatorname{pt}}\) is cut out locally on the special fiber of \(\operatorname{Coll}(\mathcal{C}/B)\) by \(rg+|\overrightarrow{n}|=(r+1)(d+1)-1\) equations, and is reduced of the expected dimension of zero in the special fiber, the same is true in the general fiber. Finally, a geometric point of the generic fiber of length greater than \(0\) or with a base point would specialize to a limit linear series with a complete collineation on \(R\) with the same property (possibly after semi-stable reduction, if the base-point specializes to the node \(s_{0}\in R\)), which does not exist by Proposition 4.3. This completes the proof. Proof of Theorem 1.6.: A point of \(\mathcal{M}_{g,n}^{\operatorname{o}}(\mathbb{P}(V^{\vee}),d)\) is interpreted as an _injective_ map \(\phi:V\to H^{0}(C,\mathcal{L})\), where \(\mathcal{L}\) is a line bundle of degree \(d\) on \(C\), such that the image of \(\phi\) is not contained in any \(H^{0}(C,\mathcal{L}(-p_{i}))\). Proposition 4.8 implies that any point in the general fiber of \(\mathcal{Y}\to B\) has this form. In fact, a standard local computation shows that the pullback of the map \(\tau:\mathcal{M}^{\circ}_{g,n}(\mathbb{P}(V^{\vee}),d)\to\mathcal{M}_{g,n}\times X ^{n}\) by the inclusion of \([(C,p_{1},\dots,p_{n})]\times\prod X_{i}\) is isomorphic as a _scheme_ to the general fiber of \(\mathcal{Y}\to B\). The Theorem now follows from the rest of Proposition 4.8, giving the degree of \(\mathcal{Y}\) over \(B\). ## 5. Comparison to Young tableaux **Definition 5.1**.: _Define the integers \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}\) by_ \[\Gamma_{r,\overrightarrow{n},d}=:\sum_{\lambda}\Gamma^{\lambda}_{r, \overrightarrow{n},d}\cdot\sigma_{\lambda},\] _where the sum is over \(\lambda\subset(d-r)^{r+1}\) with \(|\lambda|=|\overrightarrow{n}|-r(r+2)\)._ Clearly, these coefficients determine the classes \(\Gamma_{r,\overrightarrow{n},d}\) and thus the curve counts \(\mathsf{T}_{r,\overrightarrow{n},d}\). 
Moreover, because \(\Gamma_{r,\overrightarrow{n},d}\) is the push-forward of an effective cycle from \(\operatorname{Coll}(V,W)\), it is itself effective on \(\operatorname{Gr}(r+1,W)\), and we have \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}\geq 0\) for all \(\lambda\). In this section, we show that, under some conditions, we have the simple formula \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}=|\mathsf{SSYT}_{r+1}(\lambda)|\), where the right hand side is a count of semi-standard Young tableaux, see SS2.2. We do so by replacing the space \(\operatorname{Coll}(V,W)\) with a more naive resolution of the rational map \(\pi:\mathbb{P}\operatorname{Hom}(V,W)\dashrightarrow\operatorname{Gr}(r+1,W)\). The calculation is essentially the main one of [13], in a more general setting. We will deduce that \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}\leq|\mathsf{SSYT}_{r+1}(\lambda)|\) in general, and that the virtual (Gromov-Witten) count \((r+1)^{g}\) gives an upper bound for \(\mathsf{T}^{\mathbb{P}^{r}}_{g,\overrightarrow{n},d}\). ### Calculation on \(\boldsymbol{S(V,W)}\) **Definition 5.2**.: _Let \(S(V,W)\subset\mathbb{P}\operatorname{Hom}(V,W)\times\operatorname{Gr}(r+1,W)\) be the locus of pairs \((\phi,U)\), where \(\phi:V\to W\) is a non-zero linear map (up to scaling) and \(U\in\operatorname{Gr}(r+1,W)\) satisfies \(U\supset\operatorname{im}(\phi)\)._ _Abusing notation, let \(\pi:S(V,W)\to\operatorname{Gr}(r+1,W)\) be the projection remembering \(U\), and let \(\psi:S(V,W)\to\mathbb{P}\operatorname{Hom}(V,W)\) be the projection remembering \(\phi\)._ The map \(\pi\) is the \(\mathbb{P}^{(r+1)^{2}-1}\)-bundle associated to the vector bundle \(\mathcal{U}^{r+1}\) on \(\operatorname{Gr}(r+1,W)\), where \(\mathcal{U}\) is the universal subbundle. Indeed, the fiber over a point \(U\in\operatorname{Gr}(r+1,W)\) is the space of non-zero linear maps \(\mathbb{C}^{r+1}\to U\), up to scaling. Over an injective map \(\phi\in\mathbb{P}\operatorname{Hom}(V,W)\), the map \(\psi\) is an isomorphism, as it must be the case that \(U=\operatorname{im}(\phi)\). When \(r=1\), we have \(S(V,W)=\operatorname{Coll}(V,W)\). Indeed, if \(\phi:V\to W\) has rank \(1\), the point \((\phi,U)\in S(V,W)\) is identified with the length \(1\) collineation given by \(\phi_{0}=\phi\) and \(\phi_{1}:\mathbb{C}\cong\ker(\phi)\to\operatorname{coker}(\phi)\) the unique map for which \(\operatorname{im}(\{\phi_{j}\}_{j=0}^{1})=U\). Let \(L\subset V\) be a linear subspace of dimension \(r-k\) and let \(M\subset W\) be a hyperplane. We (again, abusively) denote by \(\mathsf{Inc}^{\prime}(L,M)\subset S(V,W)\) the _pullback_ of \(\mathsf{Inc}^{\prime}(L,M)\subset\mathbb{P}\operatorname{Hom}(V,W)\) by \(\psi\). Then, in terms of the projective bundle description of \(\pi:S(V,W)\to\operatorname{Gr}(r+1,W)\), the locus \(\mathsf{Inc}^{\prime}(L,M)\) is scheme-theoretically a relative linear subspace of codimension \(r-k\) and class \(c_{1}(\mathcal{O}_{\mathbb{P}(\mathcal{U}^{r+1})}(1))^{r-k}\). Fix now general linear subspaces \(L_{1},\dots,L_{n}\subset V\), where \(L_{i}\) has codimension \(r-k_{i}\), and general hyperplanes \(M_{1},\dots,M_{n}\subset W\). Define \[Y^{\prime}_{r,\overrightarrow{n},d}=\bigcap_{i=1}^{n}\mathsf{Inc}^{\prime}(L_{i },M_{i})\subset S(V,W).\] **Proposition 5.3**.: _Assume one of the following:_ 1. \(n_{0}\geq d+2\)_, or_ 2. \(n_{0}=\dots=n_{r-2}=0\) _(that is,_ \(k_{i}=r-1\) _for all_ \(i\)_, so the_ \(L_{i}\) _are all lines), or_ _,_ 3. 
\(r=2\) _and_ \(n\geq d+3\)_._ _Then, the intersection of the subschemes \(Y^{\prime}_{r,\overrightarrow{n},d}\) lies generically in the locus where \(\phi\) is injective and has image not contained in any of the \(M_{i}\), and is generically smooth of the expected codimension of \(|\overrightarrow{n}|\)._ Proof.: We first prove the claim under assumption (a). Without loss of generality, suppose that \(L_{1},\ldots,L_{d+2}\subset V\) are all (general) hyperplanes. Let \((\phi,U)\) be any point (we will see that we need not assume this point be general) of the intersection of \(Y^{\prime}_{r,\overrightarrow{n},d}\), and suppose that \(\phi\) has rank \(r_{0}\leq r\). Let \(L=\ker(\phi)\subset V\). Let \(k\) be the number of hyperplanes among \(L_{1},\ldots,L_{d+2}\subset V\) containing \(L\). As the \(L_{i}\) are general, we have \(k\leq r_{0}\). Now, for any \(i\) for which \(L\not\subset L_{i}\), we must have \(\operatorname{im}(\phi)\subset M_{i}\). Indeed, if \(\operatorname{im}(\phi)\not\subset M_{i}\), then \(\phi^{-1}(M_{i})\) is a proper hyperplane containing, and thus equal to, \(L_{i}\), but \(\phi^{-1}(0)=L\), a contradiction. Therefore, \(\operatorname{im}(\phi)\subset W\) has dimension \(r_{0}\), but is contained in the intersection of at least \(d+2-k\geq d+2-r_{0}\) general hyperplanes, which is impossible. We conclude that \(\phi\) must be injective. From here onward, it suffices to work instead on the open locus where \(\phi\) is injective, and therefore on the open subset of \(\mathbb{P}\operatorname{Hom}(V,W)^{\circ}\) consisting of injective maps. We now relativize the construction of the loci \(\mathsf{Inc}^{\prime}(L_{i},M_{i})\subset\mathbb{P}\operatorname{Hom}(V,W)^{\circ}\). Define \[\mathcal{Y}\subset\mathbb{P}\operatorname{Hom}(V,W)^{\circ}\times\prod_{i=1} ^{n}\operatorname{Gr}(r-k_{i},V)\times(\mathbb{P}(W^{\vee}))^{n}\] by the intersection of the linear subspaces \(\mathsf{Inc}^{\prime}(L_{i},M_{i})\) upon restriction to any collection of \(L_{i}\in V\) and \(M_{i}\in W\). We claim that \(\mathcal{Y}\) is smooth of the expected codimension \(|\overrightarrow{n}|\). Indeed, the projection \[\mathcal{Y}\to\mathbb{P}\operatorname{Hom}(V,W)^{\circ}\times\prod_{i=1}^{n} \operatorname{Gr}(r-k_{i},V)\] is a relative product of Grassmannian bundles of relative dimension \(\sum_{i=1}^{n}k_{i}\): the fiber over a point \((\phi,L_{1},\ldots,L_{n})\) consists of space of hyperplanes \(M_{i}\) with \(M_{i}\supset\phi(L_{i})\cong\mathbb{C}^{r-k_{i}}\). Now, by generic smoothness, restricting to a general point \((\{L_{i}\},\{M_{i}\})\in\prod_{i=1}^{n}\operatorname{Gr}(r-k_{i},V)\times( \mathbb{P}(W^{\vee}))^{n}\) yields that the intersection of the \(\mathsf{Inc}^{\prime}(L_{i},M_{i})\) is irreducible and generically smooth of the expected dimension. (In fact, this intersection is smooth, because it is a linear subspace of \(\mathbb{P}\operatorname{Hom}(V,W)\).) Finally, suppose now that for a general point \(\phi\in Y^{\prime}_{r,\overrightarrow{n},d}\), we have \(\operatorname{im}(\phi)\subset M_{i}\) for some \(i\); without loss of generality, take \(i=n\). (We may drop the assumption that the \(L_{1},\ldots,L_{d+2}\) are all hyperplanes, in case \(n_{0}=d+2\), which would not allow \(L_{n}\) to be a hyperplane; the argument that follows works for \(L_{n}\) of any dimension.) 
Then, we are in the following situation: we have general subspaces \(L_{1},\ldots,L_{n-1}\subset V\) and hyperplanes \(M_{1}\cap M_{n},\ldots,M_{1}\cap M_{n-1}\subset M_{n}\), and, at least set-theoretically, we have \[\bigcap_{i=1}^{n}\mathsf{Inc}^{\prime}(L_{i},M_{i})=\left(\bigcap_{i=1}^{n-1} \mathsf{Inc}^{\prime}(L_{i},M_{i}\cap M_{n})\right)\cap S(V,M)\] We may argue by induction that the right hand side (if non-empty) has codimension \(|\overrightarrow{n}|-(r-k_{n})\) in \(S(V,M)\), and hence the left hand side has codimension \(|\overrightarrow{n}|+k_{n}+1\) in \(S(V,W)\). Thus, the closure of the original point \(\phi\) has codimension greater than the expected, and \(\phi\) cannot be general. This completes the proof under assumption (a). Under assumption (b), Proposition 3.9 implies that \(Y^{\prime}_{r,\overrightarrow{n},d}\) pulls back exactly to \(Y_{r,\overrightarrow{n},d}\subset\operatorname{Coll}(V,W)\) under the map \(b^{\prime}:\operatorname{Coll}(V,W)\to S(V,W)\) remembering the data of \(\phi_{0}\) and \(\operatorname{im}(\phi)\). Then, the claim follows from Proposition 4.5. Finally, assume that we are in the setting of assumption (c). Let \((\phi,U)\) be a general point of \(Y^{\prime}_{r,\overrightarrow{n},d}\). If \(\phi\) is injective, then the further statements follows from Proposition 4.5. If \(\phi\) has rank \(2\), then \((\phi,U)\) can be identified with a point of \(\operatorname{Coll}_{(2,1)}(V,W)\), and we get a contradiction, again from Proposition 4.5. Finally, if \(\phi\) has rank \(1\), then for each \(i\), either \(M_{i}\supset\operatorname{im}(\phi)\) or \(L_{i}\subset\ker(\phi)\). The latter can hold for at most two \(i\) if the \(L_{i}\) are general, so at least \(n-2\geq d+1\) of the \(M_{i}\) must contain \(\operatorname{im}(\phi)\). However, if the \(M_{i}\) are general, the intersection of \(d+1\) of the \(M_{i}\) is zero, a contradiction. One can refine the arguments of Proposition 5.3 to obtain the same conclusion under more general assumptions, but we do not carry out a detailed analysis here. **Corollary 5.4**.: _Assume the setup of Proposition 5.3, and one of the hypotheses (a), (b), (c). Then,_ \[\Gamma_{r,n,d}=\sum_{a_{0}+\cdots+a_{r}=|\overrightarrow{n}|-r(r+2)}\sigma_{a_{0 }}\cdots\sigma_{a_{r}}.\] _In particular,_ \[\Gamma_{r,\overrightarrow{n},d}^{\lambda}=|\mathsf{SSYT}_{r+1}(\lambda)|.\] _for all \(\lambda\subset(d-r)^{(r+1)}\) with \(|\lambda|=|\overrightarrow{n}|-r(r+2)\)._ Proof.: The second formula follows from the first by the Pieri rule. Proposition 5.3 shows that \[\Gamma_{r,n,d}=\pi_{*}([Y^{\prime}_{r,\overrightarrow{n},d}])=\pi_{*}\left( \prod_{i=1}^{n}[\mathsf{lnc}^{\prime}(L_{i},M_{i})]\right),\] where here \(\pi\) denotes the map \(\pi:S(V,W)\to\operatorname{Gr}(r+1,W)\). The class of \(\mathsf{Inc}^{\prime}(L_{i},M_{i})\) is cut out by \(r-k_{i}\) relative linear conditions in the projective bundle \(\pi:S(V,W)\to\operatorname{Gr}(r+1,W)\). Therefore, the class of \(\bigcap_{i=1}^{n}\mathsf{Inc}^{\prime}(L_{i},M_{i})\) is given by \(c_{1}(\mathcal{O}(1))^{|\overrightarrow{n}|}\), where \(\mathcal{O}(1)\) denotes the relative hyperplane class, and upon pushforward, we obtain the Segre class \[s_{|\overrightarrow{n}|-r(r+2)}(\mathcal{U}^{r+1}) =\{s(\mathcal{U})^{r+1}\}_{|\overrightarrow{n}|-r(r+2)}\] \[=\left\{\left(\sum_{a\geq 0}\sigma_{a}\right)^{r+1}\right\}_{| \overrightarrow{n}|-r(r+2)}\] \[=\sum_{a_{0}+\cdots+a_{r}=|\overrightarrow{n}|-r(r+2)}\sigma_{a_{ 0}}\cdots\sigma_{a_{r}},\] as needed. 
**Corollary 5.5**.: _Assume (5) and one of the hypotheses (a), (b), (c), and furthermore that \(d\geq g+r\) (this is immediate in case (a)). Then,_ \[\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}=(r+1)^{g}.\] Proof.: By Theorem 1.6 and Corollary 5.4, this reduces to the statement \[\int_{\operatorname{Gr}(r+1,d+1)}\sigma_{1^{r}}^{g}\cdot\left(\sum_{a_{0}+ \cdots+a_{r}=|\overrightarrow{n}|-r(r+2)}\sigma_{a_{0}}\cdots\sigma_{a_{r}} \right)=(r+1)^{g},\] which is [15, Theorem 1.3]. Alternatively, one can compare \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\) to the Gromov-Witten count, which is equal to \((r+1)^{g}\). The bound \(d\geq g+r\) ensures that only non-degenerate maps out of \(C\) contribute to the virtual count. ### Upper bounds **Lemma 5.6**.: _Fix general linear spaces \(L,L^{\prime}\subset V\) and hyperplanes \(M,M^{\prime}\subset W\). Let \(\iota:\operatorname{Coll}(V,M)\subset\operatorname{Coll}(V,W)\) be the inclusion. Let \(M^{\prime\prime}\) be a hyperplane in \(M\). Suppose that \(\dim(L)+\dim(L^{\prime})\geq r+1\)._ _Then, we have an equality in \(H^{*}(\operatorname{Coll}(V,W))\)_ \[[\mathsf{Inc}(L,M)]\cdot[\mathsf{Inc}(L^{\prime},M^{\prime})]=[Y]+\iota_{*}[ \mathsf{Inc}(L\cap L^{\prime},M^{\prime\prime})],\] _where \(Y\subset\operatorname{Coll}(V,M)\) is a subscheme of pure codimension \(\dim(L)+\dim(L^{\prime})\)._ If \(\dim(L)+\dim(L^{\prime})=r+1\), then \([\mathsf{Inc}(L_{1}\cap L_{2},M^{\prime})]=[\operatorname{Coll}(V,M)]\). If \(\dim(L)+\dim(L^{\prime})<r+1\), then one has an analogous statement that we will not need. Proof.: Consider a 1-parameter family in which \(M^{\prime}\) degenerates to \(M\). We get a corresponding family of intersections \(\mathsf{Inc}(L,M)\cap\mathsf{Inc}(L^{\prime},M^{\prime})\). The class in \(H^{*}(\operatorname{Coll}(V,W))\) of the flat limit is then equal to \([\mathsf{Inc}(L,M)]\cdot[\mathsf{Inc}(L^{\prime},M^{\prime})]\). Let \(M^{\prime\prime}=\lim(M\cap M^{\prime})\subset M\). Then, it suffices to identify \(\iota(\mathsf{Inc}(L\cap L^{\prime},M^{\prime\prime}))\) (scheme-theoretically) with a component of the flat limit; \(Y\) will then be the union of the remaining components of the flat limit. In fact, because the intersection \(\mathsf{Inc}(L,M)\cap\mathsf{Inc}(L^{\prime},M^{\prime})\) is contained generically in the locus of collineations of full rank, as is \(\iota(\mathsf{Inc}(L\cap L^{\prime},M^{\prime\prime}))\), it suffices to work on the locus of collineations of full rank, which we identify with an open subset of \(\mathbb{P}\operatorname{Hom}(V,W)\). Then, the loci in question are cut out by linear equations, and it is straightforward to verify the claim directly. **Remark 5.7**.: _The preceding lemma can also be cast in the language of maps to \(\mathbb{P}^{r}\). The operation of degenerating \(M^{\prime}\) to \(M\) corresponds to the collision of two points \(p,p^{\prime}\in\mathbb{P}^{1}\), and the codimension 2 linear space \(M^{\prime\prime}\subset W\) may be regarded as \(H^{0}(\mathbb{P}^{1},\mathcal{O}(d)(-2p))\). 
The statement of the lemma is then that the loci of maps satisfying the conditions \(f(p)\in X\) and \(f(p^{\prime})\in X^{\prime}\), in the limit, contains as a multiplicity 1 component the locus of maps \(f\) with a simple base-point at \(p\), for which, after twisting down by this base-point, we have \(f(p)\in\langle X,X^{\prime}\rangle\)._ **Proposition 5.8**.: _Assume \(r,\overrightarrow{n},d\) are arbitrary, and \(\lambda\subset(d-r)^{(r+1)}\) with \(|\lambda|=|\overrightarrow{n}|-r(r+2)\). Then, we have \(\Gamma^{\lambda}_{r,\overrightarrow{n},d}\leq|\mathsf{SSYT}_{r+1}(\lambda)|\)._ Proof.: For any integer \(k\geq 0\), we write \(\lambda+kr\) for the partition \((\lambda_{0}+kr,\ldots\lambda_{r}+kr)\) and \(\overrightarrow{n}+k(r+1)\) for the tuple \((n_{0}+k(r+1),n_{1},\ldots,n_{r})\). We will show more generally that \[\Gamma^{\lambda}_{r,\overrightarrow{n},d}\leq\Gamma^{\lambda+r}_{r, \overrightarrow{n}+(r+1),d+r}\] so that \[\Gamma^{\lambda}_{r,\overrightarrow{n},d}\leq\Gamma^{\lambda+kr}_{r, \overrightarrow{n}+k(r+1),d+kr}=|\mathsf{SSYT}_{r+1}(\lambda)|\] for any \(k\) sufficiently large, by Corollary 5.4 under assumption (a). Fix now \(L_{1},\ldots,L_{n+r+1}\subset V\) general, where \(L_{n+1},\ldots,L_{n+r+1}\subset V\) are hyperplanes and the dimensions of the remaining \(L_{i}\) are determined (in some order) by \(\overrightarrow{n}\). Fix general hyperplanes \(M_{1},\ldots,M_{n+r+1}\subset W\). Consider the product \[\bigcap_{i=1}^{n+r+1}\left[\mathsf{Inc}(L_{i},M_{i})\right]\] and its pushforward by \(\pi\). We will apply Lemma 5.6\(r\) times, first with \[(L,L^{\prime},M,M^{\prime})=(L_{n+r},L_{n+r+1},M_{n+r},M_{n+r+1}).\] We have \[\prod_{i=1}^{n+r+1}\left[\mathsf{Inc}(L_{i},M_{i})\right]=\left(\prod_{i=1}^{n +r-1}\left[\mathsf{Inc}(L_{i},M_{i})\right]\right)\cdot\left([Y]+\iota_{*}[ \mathsf{Inc}(L_{n+r}\cap L_{n+r+1},M^{\prime\prime})]\right),\] for some \(Y\subset\operatorname{Coll}(V,W)\) and hyperplane \(M^{\prime\prime}\subset M_{n+r}\). For general choices of \(L_{i},M_{i}\), the intersection of \(Y\) with the remaining \(\mathsf{Inc}(L_{i},M_{i})\) may be taken to have the expected codimension. Indeed, we may apply Kleiman-Bertini to a stratum \(\operatorname{Coll}_{\overrightarrow{n}}(V,W)\) on which \(Y\) is supported, and on which, by Corollary 3.10, the \(\mathsf{Inc}(L_{i},M_{i})\) have the expected codimension. In particular, this intersection with \(Y\) pushes forward to an effective cycle on \(\operatorname{Gr}(r+1,W)\). On the other hand, by the projection formula and the fact that \(\iota^{*}\mathsf{Inc}(L_{i},M_{i})=\mathsf{Inc}(L_{i},M_{i}\cap M_{n+r})\), we have \[\left(\prod_{i=1}^{n+r-1}\left[\mathsf{Inc}(L_{i},M_{i})\right] \right)\cdot\iota_{*}[\mathsf{Inc}(L_{n+r}\cap L_{n+r+1},M^{\prime\prime})]\] \[=\iota_{*}\left(\left(\prod_{i=1}^{n+r-1}\mathsf{Inc}(L_{i},M_{i} \cap M_{n+r})\right)\cdot\left[\mathsf{Inc}(L_{n+r}\cap L_{n+r+1},M^{\prime \prime})\right]\right).\] We repeat the argument, next with \(L=L_{n+r-1}\) and \(L^{\prime}=L_{n+r}\cap L_{n+r+1}\), which now has dimension \(r-1\). 
After \(r+1\) iterations, we obtain that \[\pi_{*}\left(\bigcap_{i=1}^{n+r+1}[\mathsf{Inc}(L_{i},M_{i})]\right)-\iota_{*}^{ (r)}\left(\bigcap_{i=1}^{n}[\mathsf{Inc}(L_{i},M_{i})]\right)\] is an effective cycle on \(\operatorname{Gr}(r+1,W)\), where \(\iota^{(r)}:\operatorname{Gr}(r+1,W^{r})\to\operatorname{Gr}(r+1,W)\) is the inclusion induced by a subspace \(W^{r}\subset W\) of codimension \(r\) (which can be taken to be equal to \(M_{n+2}\cap\dots\cap M_{n+r+1}\)). The coefficient of \(\sigma_{\lambda+r}\) in this difference is on the one hand non-negative, and on the other equal to precisely \(\Gamma_{r,\overrightarrow{n}+(r+1),d+r}^{\lambda+r}-\Gamma_{r,\overrightarrow {n},d}^{\lambda}\). This completes the proof. **Corollary 5.9**.: _Assume (5). Then, \(\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}}\leq(r+1)^{g}\)._ Proof.: Recall from the proof of Proposition 5.8 that \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\leq\Gamma_{r,\overrightarrow{n}+k( r+1),d+kr}^{\lambda+kr}\) for any \(k\geq 0\). Therefore, we also have, by Theorem 1.6, that \[\mathsf{T}_{g,\overrightarrow{n},d}^{\mathbb{P}^{r}} =\sum_{\lambda}\Gamma_{r,\overrightarrow{n},d}^{\lambda}\cdot \int_{\operatorname{Gr}(r+1,d+1)}\sigma_{1^{r}}^{g}\cdot\sigma_{\lambda}\] \[\leq\sum_{\lambda}\Gamma_{r,\overrightarrow{n}+k(r+1),d+kr}^{ \lambda+kr}\cdot\int_{\operatorname{Gr}(r+1,d+1+kr)}\sigma_{1^{r}}^{g}\cdot \sigma_{\lambda+kr}\] \[\leq\mathsf{T}_{g,\overrightarrow{n}+k(r+1),d+kr}^{\mathbb{P}^{ r}}\] \[=(r+1)^{g}\] if \(k\) is sufficienty large. In particular, we have the inequality \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}\leq\mathsf{Tev}_{g,n+r+1,d+r}^{\mathbb{P} ^{r}}\) for geometric Tevelev degrees. Thus, \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}\) is bounded below by the Castelnuovo count (4), so is therefore strictly positive. This implies that the map \(\mathcal{M}_{g,n}^{\circ}(\mathbb{P}^{r},d)\to\mathcal{M}_{g,n}\times X^{n}\) is dominant whenever \(n\geq r+1\) and \(n=\frac{r+1}{r}\cdot d-g+1\), which was previously proven by E. Larson [18, Corollary 1.3]. ## 6. Reduction to \(d=n-1\) We show in this section that the classes \(\Gamma_{r,\overrightarrow{n},d}\) with \(d=n-1\), where we recall that \(n\) is the length of the vector \(\overrightarrow{n}\), determine the classes \(\Gamma_{r,\overrightarrow{n},d}\) for \(d,n\) arbitrary. This will follow from the two statements below. **Proposition 6.1**.: _Let \(\lambda\subset(d-r)^{r+1}\) be a partition with \(|\lambda|=|\overrightarrow{n}|-r(r+2)\). Then, we have \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}=\Gamma_{r,\overrightarrow{n},d+1}^{ \lambda}\)._ Note that the right hand side of Proposition 6.1 makes sense because we also have \(\lambda\subset((d+1)-r)^{r+1}\) and the relation \(|\lambda|=|\overrightarrow{n}|-r(r+2)\) does not depend on \(d\). **Proposition 6.2**.: _Let \(\lambda\subset(d-r)^{r+1}\) be a partition with \(|\lambda|=|\overrightarrow{n}|-r(r+2)\). Suppose further that \(\lambda_{0}>n-1-r\). (In particular, it must be the case that \(n\leq d\).) Then, we have \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}=0\)._ Fix \(r,\overrightarrow{n},\lambda\) with \(\lambda_{0}\leq n-r-1\), and suppose that the coefficients \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) when \(d=n-1\) are known. 
Then, Proposition 6.1 determines all of the coefficients \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) when \(d\geq n-1\), by taking \(d=n-1,n,\dots\) successively; the assumption \(\lambda_{0}\leq n-r-1\) ensures that the hypothesis \(\lambda\subset(d-r)^{r+1}\) is satisfied. Similarly, taking \(d=n-2,\dots,\lambda_{0}+r\) determines all of the coefficients \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}\) when \(\lambda_{0}+r\leq d\leq n-2\). When instead \(d<\lambda_{0}+r\), then \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}=0\) is not defined (or can be taken to be zero). Finally, when \(\lambda_{0}>n-r-1\), Proposition 6.2 shows that \(\Gamma_{r,\overrightarrow{n},d}^{\lambda}=0\) independently of \(d\). Proof of Proposition 6.1.: Write \(\overline{\lambda}\) for the complement of \(\lambda\) in \((d-r)^{r+1}\) and \(\overline{\lambda}^{\prime}\) for the complement of \(\lambda\) in \((d+1-r)^{r+1}\). Note that \[\sigma_{\overline{\lambda}^{\prime}}=\sigma_{\overline{\lambda}}\cdot\sigma_{ 1^{r+1}},\] in \(H^{*}(\operatorname{Gr}(r+1,d+2))\), and furthermore that \(\sigma_{1^{r+1}}\) may be regarded as the class of the loci of \((r+1)\)-planes contained in a fixed hyperplane. Fix vector spaces \(V,W,W^{\prime}\) of dimensions \(r+1,d+1,d+2\), respectively, along with an inclusion \(W\subset W^{\prime}\). Fix general linear spaces \(L_{i}\subset V\) of codimension \(k_{i}+1\), and hyperplanes \(M_{i}^{\prime}\subset W^{\prime}\) intersecting \(W\) transversely in the hyperplanes \(M_{i}\subset W\). Then, the space \(\operatorname{Coll}(V,W)\) is realized naturally as the subscheme of \(\operatorname{Coll}(V,W^{\prime})\) of collineations with image contained in \(W\). Furthermore, the subschemes \(\operatorname{\mathsf{Inc}}(L_{i},M_{i}^{\prime})\) pull back to \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\) under this inclusion. We therefore conclude: \[\Gamma_{r,\overline{\mu},d}^{\lambda} =\int_{\operatorname{Gr}(r+1,W)}\Gamma_{r,\overline{\mu},d}\cdot \sigma_{\overline{\lambda}}\] \[=\int_{\operatorname{Coll}(V,W)}\prod_{j=1}^{r}\gamma_{j}^{n_{r- j}}\cdot\pi^{*}(\sigma_{\overline{\lambda}})\] \[=\int_{\operatorname{Coll}(V,W^{\prime})}\prod_{j=1}^{r}\gamma_{j }^{n_{r-j}}\cdot\pi^{*}(\sigma_{\overline{\lambda}})\cdot\pi^{*}(\sigma_{1^{( r+1)}})\] \[=\int_{\operatorname{Gr}(r+1,W^{\prime})}\Gamma_{r,\overline{ \mu},d+1}\cdot\sigma_{\overline{\lambda}^{\prime}}\] \[=\Gamma_{r,\overline{\mu},d+1}^{\lambda}.\] We now turn to the proof of Proposition 6.2. Fix \(d+1\) hyperplanes \(M_{0},\ldots,M_{d}\subset W\) in linearly general position. In this case, the \(M_{i}\) have no moduli: if we take \(w_{0},\ldots,w_{d}\) to be a basis of \(W\), then we may suppose that \(M_{i}\) is the hyperplane \(\langle w_{0},\ldots,\widehat{w_{i}},\ldots,w_{d}\rangle\). **Definition 6.3**.: _For any \(n\leq d+1\), let \(P_{n}\subset GL(W)\) denote the parabolic subgroup consisting of automorphisms of \(W\) that stabilize (but do not necessarily fix) all of \(M_{0},\ldots,M_{n-1}\)._ With respect to the basis \(w_{0},\ldots,w_{d}\), the subgroup \(P_{n}\) consists of the matrices \(g\) for which \(g_{ij}=0\) whenever \(i\leq n-1\) and \(i\neq j\). From here, it is clear that \(\dim(P_{n})=n+(d+1)(d+1-n)\). When \(n=d+1\), the subgroup \(P_{n}\) is the maximal torus \(T\subset GL(W)\) of diagonal matrices. **Definition 6.4**.: _Fix a subspace \(U\subset W\) of dimension \(r+1\). 
We denote by \(P_{n}(U)\subset P_{n}\) the subgroup of automorphisms that stabilize the \(M_{i}\) and fix \(U\)._ **Lemma 6.5**.: _Suppose \(U\) satisfies the property that it intersects any intersection of \(k\leq n\) of the hyperplanes \(M_{i}\) (for all \(i=1,2,\ldots,d+1\)) in the expected dimension of \(\max(0,r+1-k)\). Then, we have \(\dim(P_{n}(U))=(d-r)(d+1-n)\)._ In the language of matroids, the hypothesis is that the matroid determined by \(U\) with respect to the \(M_{i}\) is _uniform_. Proof.: Regarding \(P_{n}(U)\) as a locally closed subset of the \((d+1)^{2}\)-dimensional affine space of linear operators \(g:W\to W\), we will show that \(P_{n}(U)\) is cut out by \((r+1)(d+1)+n(d-r)\) independent (affine-)linear equations on the open subset \(GL(W)\) of invertible operators. The claim will then follow from the fact that \(P_{n}(U)\) is non-empty (as it contains the identity) and that \[(d+1)^{2}-(r+1)(d+1)-n(d-r)=(d-r)(d+1-n).\] We work in the basis \(\langle w_{0},\ldots,w_{d}\rangle\) of \(W\) as above. For each hyperplane \(M_{i}\), \(i=0,1,\ldots,n-1\), fix an arbitrary \((d-r)\)-element subset \(S_{i}\) of \(\{0,1,\ldots,d\}-\{i\}\). We claim that the dimension \((d-r)\) subspace \(W_{i}\subset M_{i}\) spanned by the vectors \(w_{j}\), \(j\in S_{i}\), is a complement to \(U\cap M_{i}\) in \(M_{i}\). Indeed, we have \(\dim(U\cap M_{i})=r\), \(\dim(W_{i})=d-r\), and \[\dim((U\cap M_{i})\cap W_{i})=\dim(U\cap W_{i})=0,\] because \(W_{i}\) is an intersection of \(r+1\) of the hyperplanes \(M_{1},\ldots,M_{d+1}\). Fix now a basis \[u_{i}=\sum_{j=0}^{d}u_{ij}w_{j}\] of \(U\), where \(i=0,1,\ldots,r\), and abusively denote by \(U\) the \((r+1)\times(d+1)\) matrix \(\{u_{ij}\}\), where \(0\leq i\leq r\) and \(0\leq j\leq d\). We now cut out \(P_{n}(U)\subset GL(W)\) by linear equations coming from the following two sources: * \(g(u)=u\) for \(u\in U\) (\((r+1)(d+1)\) equations) * \(g(W_{i})\subset M_{i}\), \(i=0,1,\ldots,n-1\) (\(n(d-r)\) equations) First, we see that the above two conditions cut out \(P_{n}(U)\) set-theoretically. Indeed, once we have the first condition that \(g\) restricts to the identity on \(U\), to ensure that \(g(M_{i})\subset M_{i}\), we only need to check this condition on the complement \(W_{i}\) of \(U\cap M_{i}\subset M_{i}\). Regarding \(g\in GL(W)\) as a matrix \(\{g_{ij}\}\), where \(0\leq i,j\leq d\), let us make the conditions more explicit. That \(g\) restricts to the identity on \(U\) amounts to the affine-linear equation \[u_{ij}=u_{i0}g_{j0}+u_{i1}g_{j1}+\cdots+u_{id}g_{jd}\] for \(0\leq i\leq r\) and \(0\leq j\leq d\). On the other hand, the condition \(g(W_{i})\subset M_{i}\) amounts to the linear condition \(g_{ij}=0\) for \(i=0,1,\ldots,n-1\) and \(j\in S_{i}\). It remains to check that these \((r+1)(d+1)+n(d-r)\) equations are linearly independent. Suppose instead that \[\sum_{0\leq i\leq r,0\leq j\leq d}\alpha_{ij}(u_{i0}g_{j0}+u_{i1}g_{j1}+\cdots +u_{id}g_{jd})+\sum_{0\leq i\leq n-1,j\in S_{i}}\beta_{ij}g_{ij}=0\] for scalars \(\alpha_{ij},\beta_{ij}\). For fixed \(i,j\) with \(0\leq i,j\leq d\), the coefficient of \(g_{ij}\) above is \[\sum_{k=0}^{r}\alpha_{ki}u_{kj}+\beta_{ij}\] if \(i\leq n-1\) and \(j\in S_{i}\), and simply \(\sum_{k=0}^{r}\alpha_{ki}u_{kj}\) otherwise (e.g. if \(i\geq n\)); by assumption, this coefficient is \(0\) for all \(i,j\). 
Summing over all \(j\), we have \[0 =\sum_{j=0}^{d}\left(\sum_{k=0}^{r}\alpha_{ki}u_{kj}\right)w_{j}+ \sum_{j\in S_{i}}\beta_{ij}w_{j}\] \[=\sum_{k=0}^{r}\alpha_{ki}\left(\sum_{j=0}^{d}u_{kj}w_{j}\right)+ \sum_{j\in S_{i}}\beta_{ij}w_{j}\] \[=\sum_{k=0}^{r}\alpha_{ki}u_{k}+\sum_{w_{j}\in W_{i}}\beta_{ij}w _{j}.\] The first term is an element of \(U\) and the second is an element of \(W_{i}\), but because \(U\cap W_{i}=0\), it must be the case that all of the \(\alpha_{ij}\) (reindexed now as \(\alpha_{ki}\)) and \(\beta_{ij}\) are equal to \(0\). Proof of Proposition 6.2.: Recall that we are free to assume that \(n\leq d\). In particular, the general hyperplanes \(M_{1},\ldots,M_{n}\) may be taken to be those in the previous discussion above. Let \(\mu=(\mu_{0},\ldots,\mu_{r})=\overline{\lambda}\) be the complementary partition to \(\lambda\) in \((d-r)^{r+1}\), so that \(\mu_{i}=\lambda_{d-r-i}\) for \(i=0,1,\ldots,r\), and in particular, \(\mu_{r}\leq d-n\). It suffices to show that \[\int_{\operatorname{Gr}(r+1,W)}\Gamma_{r,\overline{\mu},d}\cdot\sigma_{\mu}=0.\] By Corollary 4.5, for a general complete flag \(0=F_{0}\subset F_{1}\subset\cdots\subset F_{d+1}=W\), the subscheme \[Y_{r,\overrightarrow{n},d}=\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\subset \operatorname{Coll}(V,W)\] intersects the pullback by \(\pi\) of the Schubert cycle \(\Sigma_{\mu}^{F}\) in finitely many points, all of which come from full rank collineations whose image is not contained in any \(M_{i}\). We will show that if a general flag \(F\) has the property that the intersection \(Y_{r,\overrightarrow{n},d}\cap\pi^{-1}(\Sigma_{\mu}^{F})\) is non-empty, then, in fact, the intersection is positive-dimensional, meaning that the generic number of intersection points must be zero. Let \(\phi\in\operatorname{Coll}(V,W)\) be a point of \(Y_{r,\overrightarrow{n},d}\cap\pi^{-1}(\Sigma_{\mu}^{F})\). Then, \(\phi:V\to W\) is a full-rank map with image contained in \(F_{d+1-\mu_{r}}\supset F_{n+1}\). We may therefore take \(\phi\) to be a full rank point of \(\mathbb{P}\operatorname{Hom}(V,F_{d+1-\mu_{r}})^{\circ}\subset\operatorname{ Coll}(V,F_{d+1-\mu_{r}})\) contained in the incidence loci \(\mathsf{Inc}(L_{i},M_{i}\cap F_{d+1-\mu_{r}})\). Let \(r^{\prime}<r\) be the largest integer for which \(\mu_{r^{\prime}}>\mu_{r}\). Such an \(r^{\prime}\) exists because, if \(\lambda_{0}=\cdots=\lambda_{r}\geq n-r\), then \(|\lambda|>rn-r(r+2)=\overrightarrow{n}-r(r+2)\), a contradiction. Then, writing \(U=\operatorname{im}(\phi)\), we require \(\dim(U\cap F_{d-r+1+r^{\prime}-\mu_{r^{\prime}}})\geq r^{\prime}+1\); we may assume by a further application of Corollary 4.5 that we have an equality. Let \(U^{\prime}=U\cap F_{d-r+1+r^{\prime}-\mu_{r^{\prime}}}\). By the generality hypotheses, we may assume that the \(n\leq(d+1-\mu_{r})-1\) hyperplanes \(M_{i}\cap F_{d+1-\mu_{r}}\subset F_{d+1-\mu_{r}}\) are in linearly general position. We may also assume the subspaces \(U^{\prime},U\subset F_{d+1-\mu_{r}}\) satisfy the hypothesis of Lemma 6.5, where we replace \(W\) with \(F_{d+1-\mu_{r}}\), by a standard incidence correspondence argument. Applying Lemma 6.5, we have \(\dim(P_{n}(U))<\dim(P_{n}(U^{\prime}))\), and in particular there is a positive-dimensional family of automorphisms of \(F_{d+1-\mu_{r}}\) fixing \(U^{\prime}\) but not \(U\), and also stabilizing each \(M_{i}\cap F_{d+1-\mu_{r}}\). Let \(g\in P_{n}(U^{\prime})\subset GL(F_{d+1-\mu_{r}})\) be any such automorphism. 
Now, the full rank collineation \(g\circ\phi:V\to F_{d+1-\mu_{r}}\subset W\), by construction, lies in \(Y_{r,\overrightarrow{n},d}\cap\pi^{-1}(\Sigma_{\mu}^{F})\). Indeed, we have \(g(U)=\operatorname{im}(g\circ\phi)\subset F_{d+1-\mu_{r}}\), so \(\dim(g(U)\cap F_{d-r+i+1-\mu_{i}})\geq i+1\) for \(i=r^{\prime}+1,\ldots,r\), and the fact that \(g(U^{\prime})=U^{\prime}\) guarantees the same for \(i=0,1,\ldots,r^{\prime}\). Moreover, because \(g\) stabilizes the \(M_{i}\cap F_{d+1-\mu_{r}}\), it remains the case that \(g\circ\phi\in\mathsf{Inc}(L_{i},M_{i})\) for \(i=1,2,\ldots,n\). Finally, because \(g\notin P_{n}(U)\), we have \(g\circ\phi\neq\phi\) as points of \(\operatorname{Coll}(V,W)\) (note that \(g\) cannot act on \(U\) by \(\lambda\cdot\operatorname{Id}\) for \(\lambda\neq 1\), because \(g\) fixes \(U^{\prime}\) by assumption). Varying over all \(g\), we obtain a positive-dimensional family in the intersection in question, and the proof is complete. ## 7. Torus orbits and geometric Tevelev degrees We now complete the calculation of the geometric Tevelev degrees of \(\mathbb{P}^{r}\) via torus orbit closures on Grassmannians, proving Theorem 1.1. By Theorem 1.6 and the results of the previous section, it suffices to compute the classes \(\Gamma_{r,n,n-1}\). We therefore assume that \(\dim(W)=n\), and that \(L_{1},\ldots,L_{n}\subset V\) and \(M_{1},\ldots,M_{n}\subset W\) are general hyperplanes. Recall that we also assume that \(n\geq r+1\) in order for the maps \(f:C\to\mathbb{P}^{r}\) in question automatically to be non-degenerate. On the other hand, if \(n=r+1\), then \[\dim(Y_{r,n,d})=(r+1)(d-r)+r>\dim(\operatorname{Gr}(r+1,W)),\] so \(\pi_{*}[Y_{r,\overrightarrow{n},d}]=0\) and \(\mathsf{Tev}_{r,n,d}^{\mathbb{P}^{r}}=0\). The formula of Theorem 1.1 also gives zero in this case. We therefore assume that \(n\geq r+2\). Write simply \[Y:=Y_{r,n,d}=\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\subset\operatorname{ Coll}(V,W),\] which is irreducible and generically smooth of codimension \(nr\) by Corollary 4.5, and write \(Z=\pi(Y)\) for its scheme-theoretic image in \(\operatorname{Gr}(r+1,W)\). Recall that we have defined \[\Gamma_{r,n,n-1}=\pi_{*}[Y]\in H^{2r(n-r-2)}(\operatorname{Gr}(r+1,W)).\] ### Torus orbit closures Consider the standard action of \(T\cong(\mathbb{C}^{\times})^{n}\) on \(W\), such that \(M_{1},\ldots,M_{n}\subset W\) are the torus-invariant hyperplanes, which induces an action on \(\operatorname{Gr}(r+1,W)\). The key observation, which one may regard as an incarnation of the Gelfan'd-Macpherson correspondence [14], is the following. **Proposition 7.1**.: _The map \(\pi\) is generically injective on \(Y\). The image \(Z\) is a generic \(T\)-orbit closure in \(\operatorname{Gr}(r+1,W)\)._ Proof.: For the first statement, it is enough to show that \(\pi\) is injective on \(Y\) upon restriction to the locus of full-rank collineations whose image \(U\) is transverse to any intersection of the \(M_{i}\). Indeed, this locus is non-empty and therefore (being open) dense in \(Y\), as the \(L_{i}\) can be chosen generally. For such a \(U\), any full-rank collineation in the fiber over \(U\) in \(Y\) is given by a linear isomorphism \(\phi:V\to U\) with the property that \(\phi(L_{i})=M_{i}\) for \(i=1,2,\ldots,r+1\). If such an isomorphism exists, by the transversality assumption, it must be unique. In particular, the scheme-theoretic image \(Z\subset\operatorname{Gr}(r+1,W)\) of \(Y\) is irreducible and reduced of dimension \((r+1)(d+1)-1-nr=n-1\). 
Furthermore, because the \(T\)-action on \(W\) stabilizes the \(M_{i}\), it also stabilizes \(Z\). On the other hand, the orbit closure of a generic point \(U\in Z\) as above is also irreducible of dimension \(n-1\), as, by Lemma 6.5, the stabilizer of \(U\) consists only of scalar matrices, and is also contained in \(Z\). Therefore, these two subschemes coincide. The final ingredient we will need to compute \(\operatorname{\mathsf{Tev}}_{g,n,d}^{\mathbb{P}^{r}}\) is the following formula of Berget-Fink [1, Theorem 5.1]. **Theorem 7.2**.: _Let \(Z_{gen}\) be a generic torus orbit closure on \(\operatorname{Gr}(r+1,W)\). Then,_ \[[Z_{gen}]=\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{\overline{ \lambda}}.\] Proof of Theorem 1.1.: Let \(V,W^{\prime}\) be vector spaces of dimensions \(r+1,d+1\), respectively, where we use \(W^{\prime}\) to distinguish from the vector space \(W\) above of dimension \(n\). Applying, in order, Theorem 1.6, Proposition 6.2, and Proposition 6.1, we have \[\operatorname{\mathsf{Tev}}_{g,n,d}^{\mathbb{P}^{r}} =\int_{\operatorname{Coll}(V,W^{\prime})}\pi^{*}(\sigma_{1^{r}}^ {g})\cdot(\gamma_{r})^{n}\] \[=\int_{\operatorname{Gr}(r+1,W^{\prime})}\sigma_{1^{r}}^{g}\cdot \Gamma_{r,n,d}\] \[=\int_{\operatorname{Gr}(r+1,W^{\prime})}\sigma_{1^{r}}^{g}\cdot \left(\sum_{|\lambda|=r(n-r-2)}\Gamma_{r,n,d}^{\lambda}\cdot\sigma_{\lambda}\right)\] \[=\int_{\operatorname{Gr}(r+1,W^{\prime})}\sigma_{1^{r}}^{g}\cdot \left(\sum_{\begin{subarray}{c}|\lambda|=r(n-r-2)\\ \lambda_{0}\leq n-r-1\end{subarray}}\Gamma_{r,n,d}^{\lambda}\cdot\sigma_{ \lambda}\right)\] \[=\int_{\operatorname{Gr}(r+1,W^{\prime})}\sigma_{1^{r}}^{g}\cdot \left(\sum_{\begin{subarray}{c}|\lambda|=r(n-r-2)\\ \lambda_{0}\leq n-r-1\end{subarray}}\Gamma_{r,n,n-1}^{\lambda}\cdot\sigma_{ \lambda}\right).\] On the other hand, we also have, by Proposition 7.1 and Theorem 7.2, an equality of cycle classes on \(\operatorname{Gr}(r+1,W)\) \[\sum_{\begin{subarray}{c}|\lambda|=r(n-r-2)\\ \lambda_{0}\leq n-r-1\end{subarray}}\Gamma_{r,n,n-1}^{\lambda}\cdot\sigma_{ \lambda} =\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{\overline{ \lambda}}\] \[=\Gamma_{r,n,n-1}\] \[=[Z_{gen}]\] \[=\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{\overline {\lambda}},\] where we recall that \(\dim(W)=n\). The expansion in the Schubert basis of the last sum on \(\operatorname{Gr}(r+1,d^{\prime})\) is independent of the value of \(d^{\prime}\), as long as the classes \(\sigma_{\mu}\) are interpreted to be zero whenever \(\mu\not\subset(d^{\prime}-r-1)^{r+1}\). Thus, \[\sum_{\begin{subarray}{c}|\lambda|=r(n-r-2)\\ \lambda_{0}\leq n-r-1\end{subarray}}\Gamma_{r,n,n-1}^{\lambda}\cdot\sigma_{ \lambda}=\left(\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{ \overline{\lambda}}\right)_{\lambda_{0}\leq n-r-1},\] from which Theorem 1.1 follows. ### Degenerations of orbit closures: outline In what follows, we sketch an independent proof of Theorem 7.2. The argument essentially appears in [20, SS7] after specializing from the more general setting of torus orbits in the full flag variety, but we explain the method in preparation of our proof of Theorem 1.2. We outline the calculation in the language of maps to \(\mathbb{P}^{r}\), to emphasize the geometry. We are interested in the locus of linear series underlying non-degenerate \(f:\mathbb{P}^{1}\to\mathbb{P}^{r}\) of degree \(n-1\) satisfying \(n\) incidence conditions \(f(p_{i})=x_{i}\). We study such \(f\) under degeneration of the points \(x_{i}\). 
Specifically, choose a general hyperplane \(H\subset\mathbb{P}^{r}\), and move the points \(x_{i}\) onto \(H\) one-by-one. At each step, \(f\) either remains non-degenerate, or becomes contained in \(H\). If the latter occurs after \(x_{\alpha}\in H\) for some \(\alpha\), then \(p_{\alpha+1},\ldots,p_{n}\) must become base-points of \(f\); one may regard this as a Schubert condition on the linear series underlying \(f\). On the other hand, on the general fiber, there is also a section (secant) underlying \(f\) that vanishes at the divisor \(p_{1}+\cdots+p_{\alpha-1}\); this section vanishes in the limit. However, if we take the limit of \(f\) in the space of complete collineations, rather than the space of maps, then the limit remembers this secant, giving a second Schubert condition on the special fiber. From here, one repeats the degeneration, now moving the points \(x_{1},\ldots,x_{\alpha}\in H\) into a codimension \(2\) linear space, and so forth. As \(f\) degenerates further, we keep track of two sets of Schubert conditions, which may be regarded as coming from base-point and secancy conditions on \(f\), respectively. When \(f\) becomes totally degenerate, these base-point and secancy conditions eventually become the cycles \(\sigma_{\lambda}\), \(\sigma_{\overline{\lambda}}\), respectively, appearing in Theorem 7.2. Our degeneration therefore replaces the orbit closure \(Z\subset\operatorname{Gr}(r+1,W)\) in the end with a union of Richardson varieties (intersections of Schubert varieties) whose class is visibly equal to the right hand side of Theorem 7.2. ### The degenerate components: first step We now set up the same degeneration on \(\operatorname{Coll}(V,W)\). Fix \(T\)-invariant hyperplanes \(M_{1},\ldots,M_{n}\subset W\), and a line \(\Lambda_{1}\subset V\). We introduce the following notation to be used for the rest of the paper: if \([\alpha_{1},\alpha_{2}]\) is an interval, then \(M_{[\alpha_{1},\alpha_{2}]}\) denotes the intersection \(M_{\alpha_{1}}\cap\cdots\cap M_{\alpha_{2}}\). For some integer \(\alpha\in[0,n-1]\), let \(L_{1},\ldots,L_{\alpha}\subset V\) be general hyperplanes containing \(\Lambda_{1}\), and let \(L_{\alpha+1},\ldots,L_{n}\subset V\) be general hyperplanes (with no other restrictions). We begin by defining subschemes \(Y_{\alpha}\) and \(Y_{(\alpha,0)}\) of \(\operatorname{Coll}(V,W)\), along with their images \(Z_{\alpha},Z_{(\alpha,0)}\) in \(\operatorname{Gr}(r+1,W)\). **Definition 7.3**.: _For \(\alpha\in[0,n-1]\), define \(Y_{\alpha}\subset\operatorname{Coll}(V,W)\) by the closure of the intersection_ \[\left(\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\right)\cap\operatorname{Coll }_{(r+1)}^{\circ}(V,W)\] _in \(\operatorname{Coll}(V,W)\). Define \(Z_{\alpha}=\pi(Y_{\alpha})\subset\operatorname{Gr}(r+1,W)\)._ Note that \(Y=Y_{0}\) and \(Z=Z_{0}\). In the language of degenerations of maps given above, one can regard \(Y_{\alpha}\) as the closure of the locus of non-degenerate maps \(f:\mathbb{P}^{1}\to\mathbb{P}^{r}\) with \(f(p_{i})=x_{i}\), after \(\alpha\) of the points \(x_{i}\) are specialized to lie on a hyperplane \(H\subset\mathbb{P}^{r}\). A generic point \(\phi\in Y_{\alpha}\) may be regarded as a \((d+1)\times(r+1)\) matrix \[A_{\alpha}=\begin{bmatrix}0&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ 0&*&\cdots&*\\ *&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ *&*&\cdots&*\end{bmatrix}\] whose first \(\alpha\) rows have a 0 in the left-most column (corresponding to \(\Lambda_{1}\subset V\)), and whose other entries are generic. 
The assumption that \(\alpha<n\) ensures that the left-most column of \(A_{\alpha}\) is not identically zero. A generic point \(U\in Z_{\alpha}\) may be regarded as the span of the column vectors of \(A_{\alpha}\) as above, and in fact, it is easily checked that \(Z_{\alpha}\) is the \(T\)-orbit closure of \(U\). The subscheme \(Y_{\alpha}\) has the expected dimension of \(n-1\), but the same is only true of \(Z_{\alpha}\) if additionally \(\alpha\leq n-2\). **Definition 7.4**.: _Suppose that \(\alpha\in[r+1,n-1]\). Then, define \(Y_{(\alpha,0)}\subset\operatorname{Coll}_{(r,1)}(V,W)\) to be the closure of the locus of complete collineations \(\phi=\{\phi_{j}\}_{j=0}^{1}\) satisfying the following properties:_ 1. \(\phi\) _has type_ \((r,1)\)_, and_ \(V_{1}=\ker(\phi_{0})=\Lambda_{1}\)_,_ 2. _the map_ \(\phi_{0}:V/\Lambda_{1}\to W\) _satisfies the incidence conditions_ \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\) _for_ \(i=1,2,\ldots,\alpha\)_,_ 3. _the image_ \(\operatorname{im}(\phi_{0})\) _is contained in_ \(M_{[\alpha+1,n]}\)_, but none of_ \(M_{1},\ldots,M_{\alpha}\)_,_ 4. \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha-1]}=0\)_, but_ \(\operatorname{im}(\phi)\cap M_{[1,\alpha-1]}\neq 0\)_._ _Define \(Z_{(\alpha,0)}=\pi(Y_{(\alpha,0)})\subset\operatorname{Gr}(r+1,W)\)._ A generic point of \(Y_{(\alpha,0)}\) may be represented by a \((d+1)\times(r+1)\) matrix \[A_{(\alpha,0)}=\begin{bmatrix}0&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ 0&*&\cdots&*\\ *&*&\cdots&*\\ *&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots\\ *&0&\cdots&0\end{bmatrix}\] whose first \(\alpha-1\) rows have a 0 in the left-most column, and whose last \(n-\alpha\) rows are zero in all other columns. The assumption that \(\alpha>r\) ensures that the last \(r\) columns are independent, if the non-zero entries are chosen generically. The map \(\phi_{0}:V/\Lambda_{1}\to W\) is given by the last \(r\) columns of \(A_{(\alpha,0)}\), and the left-most column spans \(\operatorname{im}(\phi)\cap M_{[1,\alpha-1]}\). A generic point \(U\in Z_{(\alpha,0)}\) is again given by the column span of \(A_{(\alpha,0)}\), and \(Z_{(\alpha,0)}\) is equal to the \(T\)-orbit closure of \(U\). Both \(Y_{(\alpha,0)},Z_{(\alpha,0)}\) have the expected dimension of \(n-1\). **Proposition 7.5**.: _Fix \(\alpha\in[1,n-1]\). Then, there exists a 1-parameter degeneration of \(Y_{\alpha-1}\) into a union of components including \(Y_{\alpha}\) and \(Y_{(\alpha,0)}\) with multiplicity 1._ _In particular, the same 1-parameter family degenerates \(Z_{\alpha-1}\) into a union of components including \(Z_{\alpha}\) and \(Z_{(\alpha,0)}\) with multiplicity 1._ By convention, if either component \(Y_{\alpha},Y_{(\alpha,0)}\) is "out of range," which is to say, undefined for a given value of \(\alpha\), we simply ignore it from the conclusion of the first part of Proposition 7.5. We also ignore the subscheme \(Z_{n-1}\), which has dimension strictly less than \(n-1\), from the second part of the conclusion. Proof.: The degeneration in question sends the entry of \(A_{\alpha-1}\) in the \(\alpha\)-th row and first column from a generic value (represented by \(*\)) to zero. Geometrically, this corresponds exactly to moving the point \(x_{\alpha}\) onto \(H\), or moving the hyperplane \(L_{\alpha}\) to contain \(\Lambda_{1}\). 
More precisely, let \(Y_{\alpha-1}^{t}\subset\operatorname{Coll}(V,W)\times\mathbb{D}\) be the family of subschemes over a disk \(\mathbb{D}\) such that, for \(t\neq 0\), a generic point \(\phi\in Y_{\alpha-1}^{t}\) takes the form \[A_{\alpha-1}^{t}=\begin{bmatrix}0&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ 0&*&\cdots&*\\ *t&*&\cdots&*\\ *&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ *&*&\cdots&*\end{bmatrix}.\] Taking all of the entries \(*\) to be constant and sending \(t\to 0\), we find the component \(Y_{\alpha}\) in the limit of this degeneration with multiplicity 1. On the other hand, the matrix \[(A_{\alpha-1}^{t})_{0}=\begin{bmatrix}0&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ 0&*&\cdots&*\\ *t&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ *t&*t&\cdots&*t\end{bmatrix}\] is also a point of \(Y_{\alpha-1}^{t}\), whose limit in \(\operatorname{Coll}(V,W)\) is a generic point of \(Y_{(\alpha,0)}\). Thus, we also find the component \(Y_{(\alpha,0)}\) in the limit of this degeneration with multiplicity 1. Geometrically, in the degeneration of \(Y_{\alpha-1}\), the component \(Y_{\alpha}\) represents the maps \(f:\mathbb{P}^{1}\to\mathbb{P}^{r}\) that remain non-degenerate as \(x_{\alpha}\) moves into \(H\), and the component \(Y_{(\alpha,0)}\) represents those that become degenerate. The degenerate maps are contained in \(H\) and have base-points at \(p_{\alpha+1},\ldots,p_{n}\) (captured by the data of \(\phi_{0}\)), but also come with the additional data of a secant vanishing at \(p_{1}+\cdots+p_{\alpha-1}\) (captured by the data of a non-zero element of \(\operatorname{im}(\phi)\cap M_{[1,\alpha-1]}\)). **Corollary 7.6**.: _There exists a degeneration of the generic \(T\)-orbit closure \(Z\) into the union of components containing \(Z_{(r+1,0)},\ldots,Z_{(n-1,0)}\), each with multiplicity 1._ Note that we have not yet claimed that no other components appear. ### Iterating the degeneration We repeat the procedure of the previous section, moving \(L_{1},\ldots,L_{\alpha-1}\) one at a time to general hyperplanes containing a plane \(\Lambda_{2}\supset\Lambda_{1}\), and extracting components parametrizing \(\phi\) of type \((r-1,1,1)\). We continue until reaching totally degenerate collineations. We describe only these objects obtained in the end. Let now \(\alpha=(\alpha_{0},\cdots,\alpha_{r-1})\) denote a tuple of integers with \(n>\alpha_{0}>\cdots>\alpha_{r-1}>1\). Write also \(\alpha_{-1}=n+1\) and \(\alpha_{r}=0\). We define a collection of matrices \(A_{\alpha}\) as follows. We index the columns by the integers \(0,1,\ldots,r\) and the rows by the integers \(1,2,\ldots,n\). For \(j=0,\ldots,r-1\), place generic entries, denoted \(*\), in row \(\alpha_{j}\) and in columns \(j\) and \(j+1\). Consider now all other rows: in the \(k\)-th row, where \(k\in(\alpha_{j},\alpha_{j-1})\) for some \(j\in[0,r]\), place a generic entry in column \(j\). Finally, make all other entries of \(A_{\alpha}\) zero. The case \(r=3\) \(n=12\), \(m=3\), and \(\alpha=(9,7,4)\) is shown below. \[A_{\alpha}=\begin{bmatrix}0&0&0&*\\ 0&0&0&*\\ 0&0&0&*\\ 0&0&*&*\\ 0&0&*&0\\ 0&0&*&0\\ 0&*&*&0\\ 0&*&0&0\\ *&*&0&0\\ *&0&0&0\\ *&0&0&0\end{bmatrix}\] **Definition 7.7**.: _Fix \(\alpha\) as above. 
Define \(Y_{\alpha}\subset\operatorname{Coll}_{1^{r+1}}(V,W)\) by the locus of totally degenerate collineations whose generic point \(\phi=\{\phi_{j}\}_{j=0}^{r}\) has the property that, for each \(j\), \(\operatorname{im}(\phi_{j})\) is equal to the span of the last \(j+1\) columns of a matrix of the form \(A_{\alpha}\)._

_Define \(Z_{\alpha}\) to be equal to \(\pi(Y_{\alpha})\), or equivalently, to be the closure of the locus of subspaces given by the column span of a matrix of the form \(A_{\alpha}\)._

Each \(Y_{\alpha},Z_{\alpha}\) has the expected dimension of \(n-1\), and \(Z_{\alpha}\) is additionally the \(T\)-orbit closure of a generic matrix \(A_{\alpha}\). Iterating the degeneration of the previous section, we obtain:

**Proposition 7.8**.: _There exists a degeneration of \(Y\) into a union of components including \(Y_{\alpha}\), each with multiplicity 1, and ranging over all tuples \(\alpha\) defined above._

_Furthermore, the resulting degeneration of \(Z=\pi(Y)\) contains only the components \(Z_{\alpha}\)._

Checking that no other components appear in the degeneration of \(Z\) requires more work. This is a consequence of the fact that the associated matroid polytope (moment map image) of the toric variety \(Z\) subdivides into a union of the matroid polytopes of the components \(Z_{\alpha}\). See [20, §7] for details.

Now, \(Z_{\alpha}\) is visibly an intersection of two Schubert varieties (that is, a Richardson variety). More precisely, let \(F_{M}\) denote the flag

\[0\subset M_{[1,n-1]}\subset\cdots\subset M_{[1,2]}\subset M_{1}\subset W,\]

and let \(F^{\prime}_{M}\) denote the transverse flag

\[0\subset M_{[2,n]}\subset\cdots\subset M_{[n-1,n]}\subset M_{n}\subset W.\]

Let \(\lambda=(\lambda_{0},\ldots,\lambda_{r-1},0)\) be the partition given by \(\lambda_{j}=(\alpha_{j}-1)-(r-j)\) for \(j=0,1,\ldots,r-1\); for instance, the case \(r=3\), \(n=12\), \(\alpha=(9,7,4)\) displayed above gives \(\lambda=(5,4,2,0)\subset(n-r-2)^{r}=(7)^{3}\). Then, we have

\[Z_{\alpha}=\Sigma_{\lambda}^{F_{M}}\cap\Sigma_{\overline{\lambda}}^{F^{\prime}_{M}}.\]

In the language of maps to \(\mathbb{P}^{r}\), the first Schubert variety simultaneously packages all of the secancy conditions imposed on \(f:\mathbb{P}^{1}\to\mathbb{P}^{r}\) upon iterated degeneration as a complete collineation, and the second packages all of the base-point conditions. We now put everything together:

Proof of Theorem 7.2.: By the above discussion, we have

\[\Gamma_{r,n,n-1}=[Z]=\sum_{\alpha}[Z_{\alpha}]=\sum_{\lambda\subset(n-r-2)^{r}}\sigma_{\lambda}\sigma_{\overline{\lambda}}.\]

We remark that the entire proof of Theorem 7.2 goes through without change \(T\)-equivariantly. Thus, one recovers in fact Berget-Fink's formula [1, Theorem 5.1] in the equivariant cohomology of \(\operatorname{Gr}(r+1,W)\), using instead the equivariant classes of the Schubert varieties \(\Sigma_{\lambda}^{F_{M}},\Sigma_{\overline{\lambda}}^{F^{\prime}_{M}}\).

### The coefficients \(\Gamma^{\lambda}_{r,n,n-1}\)

The integers \(\Gamma^{\lambda}_{r,n,n-1}\), and hence also \(\Gamma^{\lambda}_{r,n,d}\), are understood.
Klyachko [17, Theorem 6] proved that \[\Gamma^{\lambda}_{r,n,n-1}=\sum_{j=0}^{m(\lambda)}(-1)^{j}\binom{n}{j}| \mathsf{SSYT}_{r+1-j}(\lambda^{j})|, \tag{6}\] where: * for any partition \(\lambda\subset(n-r-1)^{r+1}\) with \(|\lambda|=r(n-r-2)\), we define \(m(\lambda)\geq 0\) to be the unique integer for which \(\lambda_{0}=\cdots=\lambda_{m(\lambda)-1}=n-r-1\) and \(\lambda_{m(\lambda)}<n-r-1\), and * for \(j=0,1,\ldots,m(\lambda)-1\), we define \(\lambda^{j}=(\lambda_{j},\ldots,\lambda_{r})\) to be the partition obtained by removing the first \(j\) parts of \(\lambda\) (which are all equal to \(n-r-1\)). Proposition 5.8 shows that \(\Gamma^{\lambda}_{r,n,n-1}\) should be the cardinality of a subset of \(\mathsf{SSYT}_{r+1}(\lambda)\); note that \(|\mathsf{SSYT}_{r+1}(\lambda)|\) is the first term in the alternating sum above. We indeed have: **Theorem 7.9**.: _[_21_, Theorem 4]_ _The coefficient \(\Gamma^{\lambda}_{r,n,n-1}\) is equal to the cardinality of the subset of \(\mathsf{SSYT}_{r+1}(\lambda)\) consisting of SSYTs with no \((i,i+1)\)-strip of length \(n-r-1\) for any \(i=1,\ldots,r\), see Definition 2.2._ In particular, we have \(\Gamma^{\lambda}_{r,n,n-1}=|\mathsf{SSYT}_{r+1}(\lambda)|\) unless \(\lambda_{0}=n-r-1\), which is on the one hand clear from (6), and on the other follows from combining Proposition 6.1 and part (a) of Corollary 5.4. Combining Theorem 7.9 with Theorem 1.1 gives the following combinatorial interpretation of \(\mathsf{Tev}^{p^{r}}_{g,n,d}\). See also [13, SS4], but we find it convenient to modify the objects slightly, in addition to adding the final two conditions below to handle geometric Tevelev degrees for all curve classes. Namely, the count \(\mathsf{Tev}^{p^{r}}_{g,n,d}\) is equal to the number of fillings of the boxes of a \((r+1)\times(d-r)\) grid with: * \(rg\) red integers among \(1,2,\ldots,g\), with each appearing exactly \(r\) times * \(r(n-r-2)\) blue integers among \(1,2,\ldots,r+1\), with each appearing any number of times, subject to the following conditions: * the blue integers are top- and left- justified, i.e., they appear above the red integers in the same column and to the left of red integers in the same row, * the red integers are strictly decreasing across rows and weakly decreasing down columns (that is, they form an SSYT after rotation by 180 degrees and conjugation), * the blue integers are weakly increasing across rows and strictly increasing down columns (that is, they form an SSYT), * the blue integers only appear in the leftmost \(n-r-1\) columns of the grid, and * no \((i,i+1)\)-strip of length \(n-r-1\) appears among the blue integers, for any \(i=1,\ldots,r\). An example filling is given in the case \((r,g,n,d)=(3,6,11,12)\) below. \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 1 & 3 & 4 & 4 & 4 & 2 \\ \hline 2 & 2 & 2 & 2 & 6 & 5 & 4 & 3 & 1 \\ \hline 3 & 3 & 3 & 4 & 6 & 5 & 3 & 2 & 1 \\ \hline 4 & 4 & 4 & 6 & 5 & 4 & 3 & 2 & 1 \\ \hline \end{tabular} ## 8. Curve counts in \(\mathbb{P}^{2}\) ### Outline In this section, we extend the degeneration method from our torus-orbit calculation to prove Theorem 1.2. That is, we determine the number of non-degenerate maps from a general pointed curve to \(\mathbb{P}^{2}\) incident at \(n_{0}\) general points \(x_{1},\ldots,x_{n_{0}}\) and \(n_{1}=n-n_{0}\) general lines \(X_{n_{0}+1},\ldots,X_{n}\). We summarize the calculation. 1. Fix a line \(\ell\in\mathbb{P}^{2}\). Move the points \(x_{1},\ldots,x_{n_{0}}\) onto \(\ell\) one at a time. 
The map \(f:C\to\mathbb{P}^{2}\) either degenerates to a multiple cover of \(\ell\) after \(\alpha_{0}\leq n_{0}\) of the points are contained in \(\ell\), or remains non-degenerate after all of the points are on \(\ell\). If the former, proceed to step 2; if the latter, proceed to step 3.
2. The limit of \(f\) retains, in addition to the data of a multiple cover of \(\ell\), the data of a non-zero section vanishing on \(p_{1}+\cdots+p_{\alpha_{0}-1}\). Continue the degeneration by moving the points \(x_{1},\ldots,x_{\alpha_{0}-1}\) onto a fixed point \(x\in\ell\). Either \(f\) degenerates to a constant map to \(x\) after \(\alpha_{1}\leq\alpha_{0}\) of the points are equal to \(x\), or a new phenomenon occurs: \(f\) remains a multiple cover of \(\ell\), but the section vanishing on \(p_{1}+\cdots+p_{\alpha_{1}}\) becomes equal to that vanishing on \(p_{1}+\cdots+p_{\alpha_{0}-1}\), which was previously not seen by the multiple cover \(f\). In order for this to be possible, \(f\) acquires base-points at \(p_{\alpha_{1}+1},\ldots,p_{\alpha_{0}}\), in addition to those at \(p_{n_{0}+1},\ldots,p_{n}\). In the former case, go to step (a) below; in the latter, go to step (b).
(a) \(f\) is now a complete collineation of type \((1,1,1)\), and such objects can be enumerated (more precisely, pushed forward to \(\operatorname{Gr}(3,W)\)) via Schubert calculus as in the orbit closure calculation.
(b) Move the points \(X_{n_{0}+1}\cap\ell,\ldots,X_{n}\cap\ell\) onto \(x\), until \(f\) degenerates into a constant map. We again get collineations of type \((1,1,1)\) which can be enumerated.
3. Fix a point \(z\notin\ell\), and move the lines \(X_{n_{0}+1},\ldots,X_{n}\) to contain \(z\). Eventually, say, after \(X_{n_{0}+1},\ldots,X_{\alpha_{0}}\ni z\), \(f\) will degenerate to a constant map with image \(z\). However, the limit of \(f\) as a complete collineation will retain the information of a map \(\widetilde{f}:C\to\mathbb{P}^{1}\), which may be regarded as the limit of the maps obtained from \(f\) by post-composition with projection from \(z\).
4. Move the points \(x_{1},\ldots,x_{n_{0}},X_{n_{0}+1}\cap\ell,\ldots,X_{\alpha_{0}-1}\cap\ell\) onto \(x\in\ell\) until the map \(\widetilde{f}\) degenerates. From here, we will again be able to enumerate the degenerate objects. The situation is different depending on whether \(\widetilde{f}\) degenerates before or after the \(n_{0}\)-th step, but we defer a discussion to later.

### Length 1 components

We now carry out the program described in §8.1 on the space of complete collineations. Let \(L_{1},\ldots,L_{n_{0}}\subset V\cong\mathbb{C}^{3}\) be general planes and \(L_{n_{0}+1},\ldots,L_{n}\subset V\) be general lines. We are interested in the image of the locus

\[Y:=\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\subset\operatorname{Coll}(V,W)\]

in \(\operatorname{Gr}(3,W)\), which we denote by \(Z\), and in the class \([Z]\in H^{2(n+n_{0}-8)}(\operatorname{Gr}(3,W))\). We study this class by degenerating the \(L_{i}\) and studying the corresponding degeneration of \(Z\).

We first carry out the analogue of the degeneration in §7.3. Fix a line \(\Lambda_{1}\subset V\). For a fixed \(\alpha_{0}\leq n_{0}\), now assume that \(L_{1},\ldots,L_{\alpha_{0}}\supset\Lambda_{1}\); all \(L_{i}\) are otherwise assumed to be general.
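Before introducing the relevant loci, we record the numerology for orientation; this is only a routine bookkeeping check. Each condition \(\mathsf{Inc}(L_{i},M_{i})\) with \(L_{i}\) a plane imposes \(2\) conditions on a full rank collineation, and each condition with \(L_{i}\) a line imposes \(1\), so

\[\dim Y=(3n-1)-(2n_{0}+(n-n_{0}))=(3n-1)-(n+n_{0})=2n-n_{0}-1,\]

while \(\dim\operatorname{Gr}(3,W)=3(n-3)\). The cycles of interest on \(\operatorname{Gr}(3,W)\) therefore live in codimension

\[(3n-9)-(2n-n_{0}-1)=n+n_{0}-8,\]

which is the degree \(|\lambda|\) appearing in Propositions 8.23 and 8.24 below.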
**Definition 8.1**.: _For \(\alpha_{0}\in[0,\min(n-1,n_{0})]\), define \(Y_{\alpha_{0}}^{(3)}\) by the closure of the intersection_ \[\left(\bigcap_{i=1}^{n}\mathsf{Inc}(L_{i},M_{i})\right)\cap\operatorname{Coll} _{(3)}^{\circ}(V,W)\] _in \(\operatorname{Coll}(V,W)\). Define \(Z_{\alpha_{0}}^{(3)}=\pi(Y_{\alpha_{0}}^{(3)})\subset\operatorname{Gr}(3,W)\)._ In particular, we have \(Y=Y_{0}^{(3)}\) and \(Z=Z_{0}^{(3)}\). **Definition 8.2**.: _For \(\alpha_{0}\in[3,\min(n-1,n_{0})]\), define \(Y_{\alpha_{0}}^{(2,1)}\subset\operatorname{Coll}_{(2,1)}(V,W)\) to be the closure of the locus of complete collineations \(\phi:V\to W\) satisfying the following properties._ 1. \(\phi\) _has type_ \((2,1)\)_, and_ \(V_{1}=\ker(\phi_{0})=\Lambda_{1}\)_,_ 2. _the map_ \(\phi_{0}:V/\Lambda_{1}\to W\) _satisfies the incidence conditions_ \(\mathsf{Inc}(L_{i},M_{i})\) _for_ \(i=1,2,\ldots,\alpha_{0}\)_, and the incidence conditions_ \(\mathsf{Inc}(\langle L_{i},\Lambda_{1}\rangle,M_{i})\) _for_ \(i=n_{0}+1,\ldots,n\)_._ 3. _the image_ \(\operatorname{im}(\phi_{0})\) _is contained in_ \(M_{[\alpha_{0}+1,n_{0}]}\)_, but none of_ \(M_{1},\ldots,M_{\alpha_{0}}\) _or_ \(M_{n_{0}+1},\ldots,M_{n}\)_,_ 4. \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{0}-1]}=0\)_, but_ \(\operatorname{im}(\phi)\cap M_{[1,\alpha_{0}-1]}\neq 0\)_._ Both \(Y_{\alpha_{0}}^{(3)}\) and \(Y_{\alpha_{0}}^{(2,1)}\) are checked, by building the usual incidence correspondences, to be irreducible and generically smooth of the expected codimension \(n+n_{0}\) in \(\operatorname{Coll}(V,W)\). We now consider a degeneration over \(\mathbb{D}\) in which, over the general point \(t\in\mathbb{D}\), only \(L_{1},\ldots,L_{\alpha_{0}-1}\) contain \(\Lambda_{1}\), and over \(0\in\mathbb{D}\), \(L_{\alpha_{0}}\supset\Lambda_{1}\); all other \(L_{i}\) do not change. **Proposition 8.3**.: _The flat limit \((Y^{(3)}_{\alpha_{0}-1})_{0}\) of the subscheme \(Y^{(3)}_{\alpha_{0}-1}\subset\operatorname{Coll}(V,W)\), under the degeneration described above, contains the components \(Y^{(3)}_{\alpha_{0}}\) (if \(\alpha_{0}\neq n\)) and \(Y^{(2,1)}_{\alpha_{0}}\) (if \(\alpha_{0}\geq 3\)). Both components appear with multiplicity 1, and there no other components._ Proof.: The following matrices over \(\mathbb{D}\) exhibit a general point of either component as a limit of full rank collineations on the general fiber: \[A^{t}_{(3)}:=\begin{bmatrix}0&a_{1,1}&a_{1,2}\\ \vdots&\vdots&\vdots\\ 0&a_{\alpha_{0}-1,1}&a_{\alpha_{0}-1,2}\\ t&a_{\alpha_{0},1}&a_{\alpha_{0},2}\\ a_{\alpha_{0}+1,0}&a_{\alpha_{0}+1,1}&a_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ a_{n_{0},0}&a_{n_{0},1}&a_{n_{0},2}\\ a_{n_{0}+1,0}&a_{n_{0}+1,1}&a_{n_{0}+1,2}\\ \vdots&\vdots&\vdots\\ a_{n,0}&a_{n,1}&a_{n,2}\end{bmatrix},A^{t}_{(2,1)}:=\begin{bmatrix}0&a_{1,1}&a_{ 1,2}\\ \vdots&\vdots&\vdots\\ 0&a_{\alpha_{0}-1,1}&a_{\alpha_{0}-1,2}\\ t&a_{\alpha_{0},1}&a_{\alpha_{0},2}\\ ta_{\alpha_{0}+1,0}&ta_{\alpha_{0}+1,1}&ta_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ ta_{n_{0},0}&ta_{n_{0},1}&ta_{n_{0},2}\\ ta_{n_{0}+1,0}&a_{n_{0}+1,1}+ta^{\prime}_{n_{0}+1,1}&a_{n_{0}+1,2}+ta^{\prime}_ {n_{0}+1,1}\\ \vdots&\vdots&\vdots\\ ta_{n,0}&a_{n,1}+ta^{\prime}_{n,1}&a_{n,2}+ta^{\prime}_{n,2}\end{bmatrix}\] The first \(n_{0}\) row vectors of each matrix are constrained to be scalar multiples of the given ones. For \(i>n_{0}+1\), the \(i\)-th row is assumed implicitly to satisfy a single linear relation corresponding to the condition \(\mathsf{Inc}(L_{i},M_{i})\). 
Taking \(t=0\) in \(A^{t}_{(3)}\) gives a general point of \(Y^{(3)}_{\alpha_{0}}\). The limit in \(\operatorname{Coll}(V,W)\) of \(A^{t}_{(2,1)}\) as \(t\to 0\) is given by the data of the map \(\phi_{0}:V/\Lambda_{1}\to W\) obtained from setting \(t=0\) in the right-most two columns, and the additional section of \(\operatorname{im}(\phi)\) given by dividing the first column by \(t\). In this way, we obtain in the limit a general point of \(Y^{(2,1)}_{\alpha_{0}}\). The fact that both components appear with multiplicity 1 is reflected in the fact that both matrices have full rank over \(k[t]/t^{2}\). It remains to check that there are no other components in the limit. The details are straightforward, but somewhat cumbersome, so we do not give them all. First, we rule out components \(Y^{\prime}\subset(Y^{(3)}_{\alpha_{0}-1})_{0}\) for which \(V_{\ell}\neq\Lambda_{1}\), where \(\ell\in\{1,2\}\) is the length of a collineation corresponding to a general point. Let \(\overrightarrow{r}\) be the type of such a collineation. Then, one checks that the closed conditions \(\mathsf{Inc}(L_{i},M_{i})\) (taking \(L^{0}_{\alpha}\supset\Lambda_{1}\) when \(i=\alpha\)) impose the expected number of conditions on \(\operatorname{Coll}_{\overrightarrow{r}}(V,W)\), and thus, too many on \(\operatorname{Coll}(V,W)\). This in particular rules out all components whose generic \(\phi\) has type \((1,2)\). Next, when a generic \(\phi\in Y^{\prime}\) has type \((2,1)\) with \(V_{1}=\Lambda_{1}\), then we must at least have the condition \(\operatorname{im}(\phi)\cap M_{[1,\alpha_{0}-1]}\neq 0\), as this condition is satisfied on the generic fiber and is closed. One checks that requiring additionally that \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{0}-1]}\neq 0\) imposes too many conditions when combined with the conditions \(\mathsf{Inc}(L_{i},M_{i})\). (In particular, at least \(\alpha_{0}-2\) of the hyperplanes \(M_{1},\ldots,M_{\alpha_{0}-1}\) must contain \(\operatorname{im}(\phi_{0})\).) Similarly, if we assume property (iv) in the definition of \(Y^{(2,1)}_{\alpha_{0}}\), then requiring additionally that \(\operatorname{im}(\phi_{0})\) contain be contained in any of \(M_{1},\ldots,M_{\alpha_{0}},M_{n_{0}+1},\ldots,M_{n}\) imposes too many conditions. We therefore get no additional components \(Y^{\prime}\) in this way. It is left to consider \(\phi\) of type \((1,1,1)\) and with \(V_{2}=\Lambda_{1}\). We first observe that if \(V_{1}\) is not equal to one of \(L_{1},\ldots,L_{\alpha_{0}}\) or \(\langle\Lambda_{1},L_{n_{0}+1}\rangle,\ldots,\langle\Lambda_{1},L_{n}\rangle\), then \(\operatorname{im}(\phi_{0})\) will need to be contained in \(M_{[1,n]}=0\) owing to the conditions \(\mathsf{Inc}(L_{i},M_{i})\), a contradiction. If \(V_{1}\) is equal to one of \(L_{1},\ldots,L_{\alpha_{0}-1}\) - without loss of generality, we may take \(V_{1}=L_{1}\) - then we will have \(\operatorname{im}(\phi_{0})\subset M_{[2,n]}\). Now, combining with the requirements that \(\operatorname{im}(\phi_{1})\subset M_{[\alpha_{0}+1,n_{0}]}\) (coming from \(\mathsf{Inc}(L_{i},M_{i})\) for \(i=\alpha_{0}+1,\ldots,n_{0}\)) and \(\operatorname{im}(\phi)\cap M_{[1,\alpha_{0}-1]}\neq 0\) imposes too many conditions on \(\phi\). Finally, suppose instead that \(V_{1}=L_{\alpha_{0}}\) (where we take the plane \(L_{\alpha_{0}}\supset\Lambda_{1}\) in its special position); the case \(V_{1}=\langle L_{i},\Lambda_{1}\rangle\) with \(i>n_{0}\) can be handled similarly. 
We may change basis on \(V\) in such a way that a degeneration to \(\phi\) takes the form \[\phi^{t}=\begin{bmatrix}0&ta_{1,1}&ta_{1,2}\\ \vdots&\vdots&\vdots\\ 0&ta_{\alpha_{0}-1,1}&ta_{\alpha_{0}-1,2}\\ t&0&1\\ ta_{\alpha_{0}+1,0}&ta_{\alpha_{0}+1,1}&ta_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ ta_{n_{0},0}&ta_{n_{0},1}&ta_{n_{0},2}\\ ta_{n_{0}+1,0}&ta_{n_{0}+1,1}&ta_{n_{0}+1,2}\\ \vdots&\vdots&\vdots\\ ta_{n,0}&ta_{n,1}&ta_{n,2}\end{bmatrix}\] to first order. Indeed, possibly after a base change, the limit of \(\phi\) as a linear map is assumed to be zero upon restriction to the first two columns, representing \(V_{1}=L_{\alpha_{0}}=\langle v_{0},v_{1}\rangle\). The \(a_{i,j}\), with \(i>n_{0}\), are allowed to be zero as long as \(\phi^{t}\) generically has rank \(3\). We must have \(V_{1}=\langle v_{0},v_{1}\rangle=L_{\alpha_{0}}\) and \(V_{2}=\langle v_{0}\rangle=\Lambda_{1}\). Because \(\phi_{1}(V_{2})=0\), the column vectors \(\phi^{t}(v_{0})\) and \(\phi^{t}(v_{2})\) must be linearly dependent to first order, see SS3.2. By computing the \(2\times 2\) minor obtained from rows \(\alpha_{0}\) and \(i=\alpha_{0}+1,\ldots,n\), we find that, in fact, \(a_{\alpha_{0}+1,0}=\cdots=a_{n,0}=0\). Next, assume for the moment that \(a_{1,1},\ldots,a_{\alpha_{0}-1,1},a_{\alpha_{0}+1,1},\ldots,a_{n,1}\) are not all zero. Then, in the limit of \(\phi=\lim_{t\to 0}\phi^{t}\) as a complete collineation, we have \[\operatorname{im}(\phi_{0})=\operatorname{span}\begin{bmatrix}0\\ \vdots\\ 0\\ 1\\ 0\\ \vdots\\ 0\end{bmatrix},\operatorname{im}(\phi_{1})=\operatorname{span}\begin{bmatrix}0 &a_{1,1}\\ \vdots&\vdots\\ 0&a_{\alpha_{0}-1,1}\\ 1&0\\ 0&a_{\alpha_{0}+1,1}\\ \vdots&\vdots\\ 0&a_{n,1}\end{bmatrix}.\] If at least \(\alpha_{0}-2\) of \(a_{1,1},\ldots,a_{\alpha_{0}-1,1}\) are equal to zero, say, \(a_{1,1}=\cdots=,a_{\alpha_{0}-2,1}=0\), then we obtain the condition that \(\operatorname{im}(\phi_{1})\) is contained in \(M_{[1,\alpha_{0}-2]}\). The conditions \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\) for \(i=\alpha_{0}+1,\ldots,n_{0}\) force further that \(\operatorname{im}(\phi_{1})\subset M_{[\alpha_{0}+1,n_{0}]}\). Combining with the condition \(\operatorname{im}(\phi_{0})\subset M_{[1,\alpha_{0}-1]}\cap M_{[\alpha_{0}+1,n]}\), we obtain too many conditions on \(\phi\). Thus, we may conclude that \[\operatorname{im}(\phi)=\begin{bmatrix}0&a_{1,1}&a_{1,2}\\ \vdots&\vdots&\vdots\\ 0&a_{\alpha_{0}-1,1}&a_{\alpha_{0}-1,2}\\ 1&0&0\\ 0&a_{\alpha_{0}+1,1}&a^{\prime}_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ 0&a_{n,1}&a^{\prime}_{n,2}\end{bmatrix}\] where the entries \(a^{\prime}_{i,2}\) depend on the (undepicted) second derivatives of the \((i,0)\)-entry of \(\phi^{t}\). In particular, the \(2\times(\alpha_{0}-1)\) matrix appearing in the top right has full rank, as at least two of its rows are non-zero, and all of its rows are constrained to be scalar multiples of the given ones. We therefore read off the following condition on the limit collineation \(\phi\): there exists a rank \(2\) map \(\phi^{\perp}:\langle v_{1},v_{2}\rangle\to W\) as above satisfying the conditions \(\operatorname{\mathsf{Inc}}(L_{i}\cap\langle v_{1},v_{2}\rangle,M_{i})\) for \(i=1,\ldots,\alpha_{0}-1\), and furthermore, with \(\phi^{\perp}(v_{1})\subset\operatorname{im}(\phi_{1})\) and \(\operatorname{im}(\phi^{\perp})\subset\operatorname{im}(\phi)\). 
A parameter count now shows that the space of \(\phi\) satisfying both this condition and in addition the condition \(\operatorname{im}(\phi_{0})\subset M_{[1,\alpha_{0}-1]}\cap M_{[\alpha_{0}+1,n]}\) has too small a dimension. Finally, if \(a_{1,1},\ldots,a_{\alpha_{0}-1,1},a_{\alpha_{0}+1,1},\ldots,a_{n,1}\) are instead all zero, then the columns \(\phi^{t}(v_{1}),\phi^{t}(v_{2})\) are also dependent to first order. We may then repeat the argument by passing to second order in the first two columns, and iterate. **Corollary 8.4**.: _We have_ \[[Y]=[Y_{n_{0}}^{(3)}]+\sum_{\alpha_{0}=3}^{n_{0}}[Y_{\alpha_{0}}^{(2,1)}]\] _as cycles on \(\operatorname{Coll}(V,W)\). (If \(n_{0}=n\), we take the first term to be zero.)_ We next study the component \(Y_{n_{0}}^{(3)}\) under further degeneration. Fix general planes \(L_{1},\ldots,L_{n_{0}}\) containing \(\Lambda_{1}\). Fix now a _general_ plane \(\Lambda_{2}\) (not containing \(\Lambda_{1}\)), and, for some integer \(\alpha_{0}>n_{0}\), suppose that \(L_{n_{0}+1},\ldots,L_{\alpha_{0}}\subset\Lambda_{2}\) are general lines, and that \(L_{\alpha_{0}+1},\ldots,L_{n}\) are further general lines with no additional constraints. As before, denote by \(Y_{\alpha_{0}}^{(3)}\) the closure in \(\operatorname{Coll}(V,W)\) of the locus of _full rank_ collineations \(\phi:V\to W\) satisfying each of the conditions \(\mathsf{Inc}(L_{i},M_{i})\). We now introduce additional parameter spaces. **Definition 8.5**.: _Suppose that \(\alpha_{0}\geq n_{0}+3\). Define the closed subvariety_ \[\widetilde{Y}_{\alpha_{0}}\subset\mathbb{P}W\times\operatorname{Coll}( \Lambda_{2},W)\times\operatorname{Gr}(3,W)\] _to be the set of points \((w_{0},\widetilde{\phi_{1}},U)\) satisfying:_ 1. \(w_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\)_,_ 2. \(\widetilde{\phi_{1}}\) _satisfies the incidence conditions_ \(\mathsf{Inc}(L_{i}\cap\Lambda_{2},M_{i})\) _for_ \(i=1,2,\ldots,n_{0}\) _and_ \(\mathsf{Inc}(L_{i},M_{i})\) _for_ \(i=n_{0}+1,\ldots,\alpha_{0}-1\)_._ 3. \(U\supset\langle w_{0},\operatorname{im}(\widetilde{\phi_{1}})\rangle\)_._ It is routine to check that \(\widetilde{Y}_{\alpha_{0}}\) is irreducible and generically smooth of the expected dimension \[(\alpha_{0}-n_{0}-1)+[(2n-1)-(\alpha_{0}-1)]=(3n-1)-(n+n_{0}),\] and that a generic point satisfies all of the needed conditions "generically." That is, the line \(w_{0}\) lies in no other \(M_{i}\), the collineation \(\widetilde{\phi_{1}}\) is of type (2) and has image contained in no \(M_{i}\), and that \(w_{0}\notin\operatorname{im}(\widetilde{\phi_{1}})\), so in fact \(U=\langle w_{0},\operatorname{im}(\widetilde{\phi_{1}})\rangle\). Define \(\widetilde{\pi}:\widetilde{Y}_{\alpha_{0}}\to\operatorname{Gr}(3,W)\) by projection to the last factor. There is a _rational_ map \(\psi:\widetilde{Y}_{\alpha_{0}}\dashrightarrow\operatorname{Coll}_{(1,2)}(V,W)\) sending \((w_{0},\widetilde{\phi_{1}},U)\) to the unique collineation \(\phi\) with the properties: 1. \(\phi\) has type \((1,2)\), and \(V_{1}=\ker(\phi_{0})=\Lambda_{2}\), 2. \(\operatorname{im}(\phi_{0})=w_{0}\), 3. \(\phi_{1}\) is given by composing \(\widetilde{\phi_{1}}\) with the quotient map \(W\to W/w_{0}\). (iii) makes sense if \(\widetilde{\phi_{1}}\) has type (2) and \(\langle w_{0}\rangle\notin\operatorname{im}(\widetilde{\phi_{1}})\). However, the definition of \(\psi\) can be extended further. 
* If \(\widetilde{\phi_{1}}\) has type (2) and \(\langle w_{0}\rangle\in\operatorname{im}(\widetilde{\phi_{1}})\), then \(\phi_{1}\) may be replaced by the collineation of type \((1,1)\) for which the rank \(1\) map \((\phi_{1})_{0}\) is given by post-composition with the quotient \(W\to W/w_{0}\), and \(\operatorname{im}(\phi_{1})\) is determined by \(U\).
* If instead \(\widetilde{\phi_{1}}\) has type \((1,1)\) and \(\langle w_{0}\rangle\notin\operatorname{im}(\widetilde{\phi_{1}})\), then one can make sense of the quotient of \(\widetilde{\phi_{1}}\) by \(w_{0}\), again making \(\phi_{1}\) of type \((1,1)\).
* More generally, if \(\widetilde{\phi_{1}}\) has type \((1,1)\) and \(\operatorname{im}((\widetilde{\phi_{1}})_{0})\neq w_{0}\), then one can define \((\phi_{1})_{0}\) by the quotient of \(\langle(\widetilde{\phi_{1}})_{0},w_{0}\rangle\) by \(w_{0}\), and \(\operatorname{im}(\phi_{1})\) by \(U\).

Finally, if \(\widetilde{\phi_{1}}\) has type \((1,1)\) and \(\operatorname{im}((\widetilde{\phi_{1}})_{0})=w_{0}\), then \(\psi\) is indeterminate, but this will not be a problem for us.

**Definition 8.6**.: _Define \(Y_{\alpha_{0}}^{(1,2)}\) to be the closure in \(\operatorname{Coll}(V,W)\) of the image of \(\widetilde{Y}_{\alpha_{0}}^{(1,2)}\) under \(\psi\)._

The map \(\psi\) is birational onto its image. Indeed, if \((w_{0},\widetilde{\phi_{1}},\langle w_{0},\operatorname{im}(\widetilde{\phi_{1}})\rangle)\) is a general point of \(\widetilde{Y}_{\alpha_{0}}\), at which, in particular, \(w_{0}\) and \(\operatorname{im}(\widetilde{\phi_{1}})\) are not contained in either \(M_{n_{0}+1}\) or \(M_{n_{0}+2}\), then replacing \(\widetilde{\phi_{1}}\) with a different lift of \(\phi_{1}=\widetilde{\phi_{1}}/w_{0}\) will result in a collineation \(\widetilde{\phi_{1}}^{\prime}\) no longer satisfying the incidence conditions \(\mathsf{Inc}(L_{i},M_{i})\). (Note here that we use \(\alpha_{0}\geq n_{0}+3\).) In particular, \(Y_{\alpha_{0}}^{(1,2)}\) is irreducible and generically smooth of codimension \(n+n_{0}\) in \(\operatorname{Coll}(V,W)\).

Take now a degeneration over \(\mathbb{D}\) in which, initially, \(L_{1},\ldots,L_{n_{0}}\supset\Lambda_{1}\) and \(L_{n_{0}+1},\ldots,L_{\alpha_{0}-1}\subset\Lambda_{2}\), and then \(L_{\alpha_{0}}\) is moved into \(\Lambda_{2}\).

**Proposition 8.7**.: _The flat limit of the subscheme \(Y_{\alpha_{0}-1}^{(3)}\subset\operatorname{Coll}(V,W)\) under the degeneration described above contains the components \(Y_{\alpha_{0}}^{(3)}\) and \(Y_{\alpha_{0}}^{(1,2)}\) (if \(\alpha_{0}\geq n_{0}+3\)).
Both components appear with multiplicity 1, and there are no other components._ Proof.: The following matrices over \(\mathbb{D}\) exhibit a general point of either component as a limit of full rank collineations on the general fiber: \[\begin{bmatrix}0&a_{1,1}&a_{1,2}\\ \vdots&\vdots&\vdots\\ 0&a_{n_{0},1}&a_{n_{0},2}\\ a_{n_{0}+1,0}&a_{n_{0}+1,1}&a_{n_{0}+1,2}\\ \vdots&\vdots&\vdots\\ a_{\alpha_{0}-1,0}&a_{\alpha_{0}-1,1}&a_{\alpha_{0}-1,2}\\ a_{\alpha_{0},0}&a_{\alpha_{0},1}+ta_{\alpha_{0},1}^{\prime}&a_{\alpha_{0},2}+ ta_{\alpha_{0},2}^{\prime}\\ a_{\alpha_{0}+1,0}&a_{\alpha_{0}+1,1}&a_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ a_{n,0}&a_{n,1}&a_{n,2}\end{bmatrix},\begin{bmatrix}0&ta_{1,1}&ta_{1,2}\\ \vdots&\vdots&\vdots\\ a_{n_{0}+1,0}&ta_{n_{0}+1,1}&ta_{n_{0}+1,2}\\ \vdots&\vdots&\vdots\\ a_{\alpha_{0}-1,0}&ta_{\alpha_{0}-1,1}&ta_{\alpha_{0}-1,2}\\ a_{\alpha_{0},0}&ta_{\alpha_{0},1}&ta_{\alpha_{0},2}\\ ta_{\alpha_{0}+1,0}&ta_{\alpha_{0}+1,1}&ta_{\alpha_{0}+1,2}\\ \vdots&\vdots&\vdots\\ ta_{n,0}&ta_{n,1}&ta_{n,2}\end{bmatrix}\] The first \(n_{0}\) rows each satisfy a linear relation in the rightmost two columns in addition to the vanishing in the first column. Rows \(n_{0}+1\) through \(\alpha_{0}-1\) also satisfy a linear relation in the rightmost two columns. The last \(n-\alpha_{0}\) rows each satisfy a single linear relation involving all three columns. Finally, row \(\alpha_{0}\) satisfies a linear relation of the form \(tv_{0}+\gamma_{1}v_{1}+\gamma_{2}v_{2}=0\). The matrix on the left has full rank upon substituting \(t=0\), which determines the limit of the full rank maps \(\phi^{t}\) in \(\mathrm{Coll}(V,W)\). The corresponding component \(Y_{\alpha_{0}}^{(3)}\) appears with multiplicity 1 because it is cut out by linear equations on \(\mathbb{P}\operatorname{Hom}(V,W)^{\circ}\). On the right, the rightmost two columns become zero upon substituting \(t=0\); substituting \(t=0\) in the left-most column gives the vector \(w_{0}=\mathrm{im}(\phi_{0})\), and dividing the other two columns by \(t\) gives the lift \(\widetilde{\phi_{1}}:\Lambda_{2}\to W\). Here, the multiplicity 1 statement amounts to the fact that the 1-parameter family of _matrices_ defined by \(t\) is transverse to the locus of rank 1 matrices (embedded in \(\mathbb{P}\operatorname{Hom}(V,W)\) by a Segre embedding) when \(t=0\), hence the same is true in \(\mathrm{Coll}(V,W)\) after blowing up this locus. Finally, one needs to argue that there are no further components in the limit, by following the strategy of Proposition 8.3. Note in particular that one can rule out components where \(V_{\ell}=\Lambda_{1}\) by the same calculations appearing there, except now that \(\mathrm{im}(\phi)\) is now constrained to intersect \(M_{[1,\alpha_{0}]}\) non-trivially, not just \(M_{[1,\alpha_{0}-1]}\). Thus, one finds that the only components that can appear at the boundary of \(\mathrm{Coll}(V,W)\) must have \(V_{1}=\Lambda_{2}^{\prime}\), and that all candidates other than \(Y_{\alpha_{0}}^{(1,2)}\) will be over-constrained. The details are omitted. **Corollary 8.8**.: _We have_ \[[Y]=[Y_{n}^{(3)}]+\sum_{\alpha_{0}=3}^{n_{0}}[Y_{\alpha_{0}}^{(2,1)}]+\sum_{ \alpha_{0}=n_{0}+3}^{n}[Y_{\alpha_{0}}^{(1,2)}]\] _as cycles on \(\mathrm{Coll}(V,W)\)._ We will study the components \(Y^{(2,1)}_{\alpha_{0}}\) and \(Y^{(1,2)}_{\alpha_{0}}\) in the next two sections. However, we now observe that we can, for our purposes, ignore the full rank component. 
**Proposition 8.9**.: _The class \([Y^{(3)}_{n}]\) pushes forward to 0 under \(\pi\)._

Proof.: It suffices to show that the restriction of \(\pi\) to \(Y^{(3)}_{n}\) has positive-dimensional fibers. Let \(\phi:V\to W\) be a general point of \(Y^{(3)}_{n}\). We may choose a basis \(v_{0},v_{1},v_{2}\) of \(V\) for which \(\Lambda_{1}=\langle v_{0}\rangle\) and \(\Lambda_{2}=\langle v_{1},v_{2}\rangle\). The map \(\phi\) is determined by the triple \((\phi(v_{0}),\phi(v_{1}),\phi(v_{2}))\in W^{3}\). Now, multiplying \(\phi(v_{1}),\phi(v_{2})\) by the same scalar \(\gamma\neq 0\) gives a 1-dimensional locus of elements of \(\operatorname{Coll}(V,W)\) that on the one hand still lie in \(Y^{(3)}_{n}\), and on the other hand map to the same point of \(\operatorname{Gr}(3,W)\) under \(\pi\).

### Degenerations of \(\boldsymbol{Y^{(2,1)}}\)

We now study degenerations of \(Y^{(2,1)}_{\alpha_{0}}\), defined in the previous section, in order to compute its integral in \(\operatorname{Gr}(3,W)\). Recall that we have \(L_{1},\ldots,L_{\alpha_{0}}\supset\Lambda_{1}\), for some fixed \(\alpha_{0}\in[3,n_{0}]\). Fix a general plane \(\Lambda^{\prime}_{2}\supset\Lambda_{1}\) (we use the notation \(\Lambda^{\prime}_{2}\) to distinguish from \(\Lambda_{2}\), used in the previous section). We will move \(L_{1},\ldots,L_{\alpha_{0}}\) successively to be equal to \(\Lambda^{\prime}_{2}\).

Suppose that \(L_{1},\ldots,L_{\alpha_{1}}\) have all been made equal to \(\Lambda^{\prime}_{2}\). If \(0\leq\alpha_{1}\leq\alpha_{0}-2\), we first define \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1})}\subset\operatorname{Coll}(V,W)\) exactly in the same way as \(Y^{(2,1)}_{\alpha_{0}}=:Y^{(2,1)}_{(\alpha_{0},0)}\), except that now the planes \(L_{1},\ldots,L_{\alpha_{1}}\) are all equal. The subscheme \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1})}\) is irreducible and generically smooth of the expected codimension \(n+n_{0}\).

As soon as \(\alpha_{1}=\alpha_{0}-1\), the condition that \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{0}-1]}=0\) can no longer be satisfied if \(\phi_{0}\) remains of rank 2, because \(\phi_{0}\) satisfies the conditions \(\mathsf{Inc}(L_{i},M_{i})=\mathsf{Inc}(\Lambda^{\prime}_{2},M_{i})\) for \(i=1,2,\ldots,\alpha_{1}=\alpha_{0}-1\). Furthermore, a dimension count shows that requiring that \(\operatorname{im}(\phi)\) contain _two_ sections in \(M_{[1,\alpha_{0}-1]}\) (one of which is given by \(\phi_{0}(\Lambda^{\prime}_{2})\)) imposes too many conditions.

On the other hand, starting from \(Y^{(2,1)}_{(\alpha_{0},\alpha_{0}-2)}\), if \(L_{\alpha_{0}-1}\) moves to \(\Lambda^{\prime}_{2}\), then the section initially in \((\operatorname{im}(\phi)-\operatorname{im}(\phi_{0}))\cap M_{[1,\alpha_{0}-1]}\) may "jump" to \(\operatorname{im}(\phi_{0})\) in the limit, becoming the "secant" \(\phi_{0}(\Lambda^{\prime}_{2})\). The general fiber necessarily has two sections in \(\operatorname{im}(\phi)\cap M_{[1,\alpha_{0}-2]}\), so the special fiber must as well.

In fact, the same phenomenon may occur upon degeneration of any \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1}-1)}\) with \(1\leq\alpha_{1}\leq\alpha_{0}-1\), but the conditions \(\mathsf{Inc}(L_{i},M_{i})\) for \(i=\alpha_{1}+1,\ldots,\alpha_{0}-1\) force \(M_{\alpha_{1}+1},\ldots,M_{\alpha_{0}-1}\) to be bp-hyperplanes for \(\phi_{0}\). We therefore make the following definition.
**Definition 8.10**.: _If \(2\leq\alpha_{1}\leq\alpha_{0}-1\), we define the subscheme \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\subset\operatorname{Coll}(V,W)\) to be the closure of the locus of collineations \(\phi\) with the following properties._

1. \(\phi\) _has type_ \((2,1)\)_, and_ \(V_{1}=\Lambda_{1}\)_,_
2. \(\phi_{0}:V/\Lambda_{1}\to W\) _satisfies the incidence conditions_ \(\mathsf{Inc}(L_{i},M_{i})\) _for_ \(i=1,\ldots,\alpha_{1}\) _and_ \(\alpha_{0}\)_, and the incidence conditions_ \(\mathsf{Inc}(\langle L_{i},\Lambda_{1}\rangle,M_{i})\) _for_ \(i=n_{0}+1,\ldots,n\)_._
3. \(\operatorname{im}(\phi_{0})\) _is contained in_ \(M_{[\alpha_{1}+1,\alpha_{0}-1]}\) _and_ \(M_{[\alpha_{0}+1,n_{0}]}\)_, but no other_ \(M_{i}\)_,_
4. \(\dim(\operatorname{im}(\phi)\cap M_{[1,\alpha_{1}-1]})=2\) _and_ \(\dim(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{1}-1]})=1\) _(in fact,_ \(\dim(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{1}]})=1\)_, with the intersection given by_ \(\phi_{0}(\Lambda^{\prime}_{2})\)_)._

In the language of maps to \(\mathbb{P}^{2}\), such a \(\phi\) corresponds to \(f:\mathbb{P}^{1}\to H\subset\mathbb{P}^{2}\) with base-points at \(p_{\alpha_{1}+1},\ldots,p_{\alpha_{0}-1},p_{\alpha_{0}+1},\ldots,p_{n_{0}}\), sending \(p_{1},\ldots,p_{\alpha_{1}}\) all to the same point \(x\) (so that the divisor \(p_{1}+\cdots+p_{\alpha_{1}}\) is a multi-secant), and with \(f(p_{i})=X_{i}\) for \(i=\alpha_{0},n_{0}+1,\ldots,n\). The collineation \(\phi\) additionally retains the data of a 2-dimensional subspace of \(M_{[1,\alpha_{1}-1]}\) in its image, which is not seen by the map \(f\).

We give the naive dimension count:

* property (i) imposes 3 conditions,
* property (ii) imposes \(\alpha_{1}+1+(n-n_{0})\) conditions,
* property (iii) imposes \(2(n_{0}-\alpha_{1}-1)\) conditions, and
* property (iv) imposes \(\alpha_{1}-2\) _additional_ conditions.

In fact, \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\) is irreducible and generically smooth of the expected codimension \(n+n_{0}\), proven using the usual method. These components make sense in the setting of the previous section, when \(n_{0}=n\), and, in fact, do appear in the limit of the component \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1}-1)}\) under the same degeneration. However, we will see later (Proposition 8.14) that the reason that they play no role there is that they push forward to zero under \(\pi\).

We now define the further degenerate loci on \(\operatorname{Coll}(V,W)\) whose integrals can be computed directly. In analogy with the loci \(Y^{(1,1,1)}\) of the previous section, we have:

**Definition 8.11**.: _If \(2\leq\alpha_{1}\leq\alpha_{0}-1\), define \(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}\) to be the closure of the locus of collineations \(\phi\) satisfying the following properties._

1. \(\phi\) _has type_ \((1,1,1)\)_, with_ \(V_{1}=\Lambda_{2}^{\prime}\) _and_ \(V_{2}=\Lambda_{1}\)_,_
2. \(\operatorname{im}(\phi_{0})\subset M_{[\alpha_{1}+1,n]}\) _and_ \(\operatorname{im}(\phi_{1})\subset M_{[\alpha_{0}+1,n_{0}]}\)_,_
3. \(\operatorname{im}(\phi_{1})\cap M_{[1,\alpha_{1}-1]}\neq 0\) _and_ \(\operatorname{im}(\phi)\cap M_{[1,\alpha_{0}-1]}\neq 0\)_, but_ \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{1}-1]}=0\) _and_ \(\operatorname{im}(\phi_{1})\cap M_{[1,\alpha_{0}-1]}=0\)_.
(In particular,_ \(\dim(\operatorname{im}(\phi)\cap M_{[1,\alpha_{1}-1]})=2\)_.)_ We can now state: **Proposition 8.12**.: _Fix some \(\alpha_{1}\leq\alpha_{0}-1\). Upon the degeneration of \(L_{\alpha_{1}}\to\Lambda_{2}^{\prime}\), the flat limit of \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1}-1)}\) consists of the components:_ * \(Y^{(2,1)}_{(\alpha_{0},\alpha_{1})}\)_, if_ \(\alpha_{1}\leq\alpha_{0}-2\)_,_ * \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\)_, if_ \(\alpha_{1}\geq 2\)_, and_ * \(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}\)_, if_ \(\alpha_{1}\geq 2\)_._ _Furthermore, all of these components appear with multiplicity 1, and there are no others._ _In particular, we have, as cycles in \(\operatorname{Coll}(V,W)\):_ \[[Y^{(2,1)}_{\alpha_{0}}]=[Y^{(2,1)}_{(\alpha_{0},0)}]=\sum_{\alpha_{1}=2}^{ \alpha_{0}-1}[Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}]+\sum_{ \alpha_{1}=2}^{\alpha_{0}-1}[Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}].\] The proof follows the same strategy as that of Proposition 8.3 and is omitted. We finally consider degenerations of \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\). Write \(\overline{L}_{i}:=\langle L_{i},\Lambda_{1}\rangle/\Lambda_{1}\) for \(i=n_{0}+1,\ldots,n\), and we abusively write \(\Lambda_{2}^{\prime}\) for \(\Lambda_{2}^{\prime}/\Lambda_{1}\). It will also be convenient to set \(\overline{L}_{n+1}:=L_{\alpha_{0}}/\Lambda_{1}\). We now successively move the lines \(\overline{L}_{n_{0}+1},\ldots,\overline{L}_{n+1}\) to be equal to \(\Lambda_{2}^{\prime}\); suppose that \(\overline{L}_{n_{0}+1},\ldots,\overline{L}_{\alpha_{2}}=\Lambda_{2}^{\prime}\) and the other \(\overline{L}_{i}\) are general. Define now \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\) in exactly the same way as \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\), except with the new arrangement of \(\overline{L}_{i}\). However, it may also happen that a generic \(\phi\) on \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2}-1)}\) degenerates at this step to one of type \((1,1,1)\). **Definition 8.13**.: _If \(\alpha_{1}\leq\alpha_{2}\leq n\), define \(Y^{(1,1,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\) to be the closure of the locus of collineations \(\phi\) satisfying the following properties._ 1. \(\phi\) _has type_ \((1,1,1)\)_, with_ \(V_{1}=\Lambda_{2}^{\prime}\) _and_ \(V_{2}=\Lambda_{1}\)_,_ 2. \(\operatorname{im}(\phi_{0})\subset M_{[\alpha_{1}+1,n_{0}]}\cap M_{[\alpha_{2} +1,n]}\) _and_ \(\operatorname{im}(\phi_{1})\subset M_{[\alpha_{1}+1,\alpha_{0}-1]}\cap M_{[ \alpha_{0}+1,n_{0}]}\)_,_ 3. \(\dim(\operatorname{im}(\phi)\cap M_{[1,\alpha_{1}-1]})=2\) _and_ \(\dim(\operatorname{im}(\phi_{1})\cap(M_{[1,\alpha_{1}]}\cap M_{[n_{0}+1, \alpha_{2}-1]}))=1\)_, but_ \(\operatorname{im}(\phi_{0})\cap M_{[1,\alpha_{1}-1]}=0\)_._ Note that in the above definition, we have not allowed \(\alpha_{2}=n+1\), that is, for \(\overline{L}_{n+1}\) to become equal to \(\Lambda_{2}^{\prime}\). This is explained by the following: **Proposition 8.14**.: _The class \([Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},n]})\) pushes forward to 0 under \(\pi\)._ Proof.: Given a \(\phi=(\phi_{0},\phi_{1})\in Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{ 1},n)}\), there exist infinitely many collineations in \(Y^{(2,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},n)}\) in the same fiber of \(\phi\). 
Indeed, one can replace \(\phi_{0}\) with a 1-parameter family of maps still satisfying the conditions \(\operatorname{\mathsf{Inc}}(\Lambda_{2}^{\prime},M_{i})\) for \(i=1,2,\ldots,\alpha_{1},n_{0}+1,\ldots,n\), in addition to the condition \(\operatorname{\mathsf{Inc}}(L_{i},M_{i})\) for \(i=\alpha_{0}\), without changing \(\operatorname{im}(\phi_{0})\). Geometrically, this corresponds to the fact that a map \(f:\mathbb{P}^{1}\to\mathbb{P}^{1}\) constrained to send \(\alpha_{1}+(n-n_{0})\) points to \(0\in\mathbb{P}^{1}\) and one point to \(\infty\in\mathbb{P}^{1}\) may be translated to infinitely many more by post-composing with the \(\mathbb{C}^{*}\)-action on the target. In particular, when \(n_{0}=n\), the class \([Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1})}]\) already pushes forward to zero under \(\pi\), so these subschemes do not contribute in the orbit closure calculation of the previous section (as we already saw, a posteriori). We can now describe the degeneration of the \(Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1})}\) in a straightforward way. **Proposition 8.15**.: _Fix some \(\alpha_{2}\) with \(n_{0}\leq\alpha_{2}\leq n\). For the purposes of this statement, we write \(Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1},n_{0}-1)}:=Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1})}\)._ _Then, the flat limit of the subscheme \(Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2}-1)}\) under the degeneration of \(\overline{L}_{\alpha_{2}}\mapsto\Lambda^{\prime}_{2}\) contains the components:_ * \(Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\)_, if_ \(\alpha_{2}\leq n-1\)_, and_ * \(Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\)_,_ _both with multiplicity 1, and no other components with non-zero push-forward under \(\pi\)._ In particular, we have \[[Y^{(2,1)-\sec}_{(\alpha_{0},\alpha_{1})}]=\sum_{\alpha_{2}=n_{0}+1}^{n}[Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}]\] _after push-forward by \(\pi\)._ Combining with Proposition 8.12, we conclude: **Corollary 8.16**.: _Modulo cycles pushing forward to 0 under \(\pi\), we have:_ \[[Y^{(2,1)}_{\alpha_{0}}]=\sum_{\alpha_{1}=2}^{\alpha_{0}-1}[Y^{(1,1,1)}_{( \alpha_{0},\alpha_{1})}]+\sum_{\alpha_{1}=2}^{\alpha_{0}-1}\sum_{\alpha_{2}=n_ {0}+1}^{n}[Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}].\] _for all \(\alpha_{0}\in[3,n_{0}]\)._ We will compute the pushforwards of the terms on the right hand side to \(\operatorname{Gr}(3,W)\) in SS8.5. ### Degenerations of \(\widetilde{Y}^{(1,2)}\) Because \(Y^{(1,2)}_{\alpha_{0}}\) and \(\widetilde{Y}^{(1,2)}_{\alpha_{0}}\) are birational over \(\operatorname{Gr}(3,W)\), it suffices to consider the push-forwards of \(\widetilde{Y}^{(1,2)}_{\alpha_{0}}\) under \(\widetilde{\pi}\). We therefore study the \(\widetilde{Y}^{(1,2)}_{\alpha_{0}}\) under degeneration. Recall that, for any \(\alpha_{0}\geq n_{0}+3\), the subscheme \(\widetilde{Y}_{\alpha_{0}}\subset\mathbb{P}W\times\operatorname{Coll}(\Lambda _{2},W)\times\operatorname{Gr}(3,W)\) is the closure of the locus of \((w_{0},\widetilde{\phi_{1}},U)\) satisfying: * \(w_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\), and \(w_{0}\) is contained in no other \(M_{i}\), * \(\widetilde{\phi_{1}}\) satisfies the incidence conditions \(\mathsf{Inc}(L_{i},M_{i})\) for \(i=1,2,\ldots,\alpha_{0}-1\), and \(\operatorname{im}(\widetilde{\phi_{1}})\not\subset M_{i}\) for all such \(i\), * \(U=\langle w_{0},\operatorname{im}(\widetilde{\phi_{1}})\rangle\) (in particular, \(w_{0}\notin\operatorname{im}(\widetilde{\phi_{1}})\)). 
We have abusively replaced \(L_{i}\cap\Lambda_{2}\) with simply \(L_{i}\) for \(i=1,2,\ldots,n_{0}\), in order to simplify notation. Now, fix a general line \(\Lambda^{\prime}_{1}\subset\Lambda_{2}\). We will degenerate the \(L_{i}\) to become equal to the \(\Lambda^{\prime}_{i}\). For some \(\alpha_{1}\in[1,\alpha_{0}-1]\), suppose that \(L_{1}=\cdots=L_{\alpha_{1}}=\Lambda^{\prime}_{1}\), and the \(L_{i}\) are otherwise general. Then, we define \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}\) by the same properties as above, with the \(L_{i}\) now in special position. The definition is the same whether \(\alpha_{1}\leq n_{0}\) or \(\alpha_{1}>n_{0}\). The \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}\) are easily seen to be irreducible and generically smooth of the correct dimension \(2n-n_{0}-1\). Furthermore, upon the degeneration \(L_{\alpha_{1}}\to\Lambda^{\prime}_{1}\), the component \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}\) appears in the limit of \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\) with multiplicity 1. **Lemma 8.17**.: \([\widetilde{Y}_{(\alpha_{0},\alpha_{0}-2)}]\) _and \([\widetilde{Y}_{(\alpha_{0},\alpha_{0}-1)}]\) push forward to zero under \(\widetilde{\pi}\)._ Proof.: One verifies that the restriction of \(\widetilde{\pi}\) to \(\widetilde{Y}_{(\alpha_{0},\alpha_{0}-2)}\) and \(\widetilde{Y}_{(\alpha_{0},\alpha_{0}-1)}\) have positive-dimensional fibers. We now describe the further components arising in this degeneration. There are two main ways in which a point \((w_{0},\widetilde{\phi_{1}},U)\) can become degenerate: either \(w_{0}\) can end up inside \(\operatorname{im}(\widetilde{\phi_{1}})\), or \(\widetilde{\phi_{1}}\) can degenerate into a collineation of type \((1,1)\) (It will follow from the considerations below that both cannot happen at once.) Fix \(\alpha_{1}\geq 2\), and suppose further that \(\alpha_{1}\leq n_{0}\). We consider the conditions on \((w_{0},\widetilde{\phi_{1}},U)\) in the limit of a general point of \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\), for which \(w_{0}\in\operatorname{im}(\widetilde{\phi_{1}})\). We still have that \(w_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\), and the incidence conditions \(\mathsf{Inc}(L_{i},M_{i})\), for \(i=1,2,\ldots,\alpha_{0}-1\), on \(\widetilde{\phi_{1}}\). In particular, we have \(\widetilde{\phi_{1}}(\Lambda^{\prime}_{2})\in M_{[1,\alpha_{1}]}\). However, we also have \(w_{0}\in\operatorname{im}(\widetilde{\phi_{1}})\), so we should expect in fact \(\widetilde{\phi_{1}}(\Lambda^{\prime}_{2})=w_{0}\) (up to scaling), and so \(\widetilde{\phi_{1}}(\Lambda^{\prime}_{2})\subset M_{[1,n_{0}]}\cap M_{[ \alpha_{0}+1,n]}\). (The only other possibility is that \(\operatorname{im}(\widetilde{\phi_{1}})\) be contained in \(M_{[1,\alpha_{1}]}\), but one can rule this out by a parameter count.) On the other hand, because \(L_{\alpha+1},\ldots,L_{n_{0}}\neq\Lambda^{\prime}_{2}\), this forces \(\operatorname{im}(\widetilde{\phi_{1}})\subset M_{[\alpha_{1}+1,n_{0}]}\). Finally, it is true on the general fiber that \(\dim(U\cap M_{[1,\alpha_{1}-1]})=2\), so the same must be true on the special fiber. We now define: **Definition 8.18**.: _Fix \(\alpha_{1}\in[2,n_{0}]\). We define \(\widetilde{Y}^{\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\subset\mathbb{P }W\times\operatorname{Coll}(\Lambda_{2},W)\times\operatorname{Gr}(3,W)\) to be the closure of the locus of \((w_{0},\widetilde{\phi_{1}},U)\) satisfying:_ 1. 
\(w_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\)_, and_ \(w_{0}\) _is contained in no other_ \(M_{i}\)_,_
2. \(\widetilde{\phi_{1}}(\Lambda^{\prime}_{2})=w_{0}\)_, and_ \(\widetilde{\phi_{1}}\) _satisfies the further incidence conditions_ \(\mathsf{Inc}(L_{i},M_{i})\) _for_ \(i=n_{0}+1,\ldots,\alpha_{0}-1\)_, as well as_ \(\operatorname{im}(\widetilde{\phi_{1}})\subset M_{[\alpha_{1}+1,n_{0}]}\)_,_
3. \(\dim(U\cap M_{[1,\alpha_{1}-1]})=2\)_._

The space of \(\widetilde{\phi_{1}}\) satisfying the first two properties has dimension

\[(2n-1)-2(n_{0}-\alpha_{1})-(n-1-n_{0}+\alpha_{1}).\]

Then, the space of \(U\) containing a generic such \(\operatorname{im}(\widetilde{\phi_{1}})\) and satisfying the last property has dimension \((n-3)-(\alpha_{1}-2)\). Therefore, \(\widetilde{Y}^{\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\) has dimension \(2n-n_{0}-1\). The usual incidence correspondence argument shows that it is in fact generically smooth and irreducible of this dimension. Moreover, \(\widetilde{Y}^{\operatorname{sec}}_{(\alpha_{0},\alpha_{1})}\) appears in the flat limit of \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\) with multiplicity \(1\).

Suppose instead that we consider the analogous situation when \(\alpha_{1}>n_{0}\). Then, we see on the one hand that \(\widetilde{\phi_{1}}\) satisfies incidence conditions \(\mathsf{Inc}(L_{i},M_{i})\) for \(i=1,2,\ldots,\alpha_{0}-1\) and \(\mathsf{Inc}(\Lambda^{\prime}_{2},M_{i})\) for \(i=\alpha_{0}+1,\ldots,n\) (as \(\widetilde{\phi_{1}}(\Lambda^{\prime}_{2})=w_{0}\)), and on the other that \(\dim(U\cap M_{[1,n_{0}]})=2\). This imposes too many conditions on \(\mathbb{P}W\times\operatorname{Coll}(\Lambda_{2},W)\times\operatorname{Gr}(3,W)\), and we see no such limit components.

We next consider the other degenerate possibility, that \(\widetilde{\phi_{1}}\) degenerates into a collineation of type \((1,1)\), with \(\ker((\widetilde{\phi_{1}})_{0})=\Lambda^{\prime}_{1}\) (other possibilities are ruled out via straightforward parameter counts). Suppose first that \(\alpha_{1}\leq n_{0}+1\). We consider the conditions imposed just on \(U\). We have:

* \(\dim(U\cap M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]})\geq 1\), because \(w_{0}\in U\),
* \(\dim(U\cap M_{[1,\alpha_{0}-1]})\geq 2\), because the same is true on the general fiber,
* \(\dim(U\cap M_{[\alpha_{1}+1,\alpha_{0}-1]})\geq 1\), due to the incidence conditions \(\mathsf{Inc}(L_{i},M_{i})\) for \(i=\alpha_{1}+1,\ldots,\alpha_{0}-1\).

The first two constraints impose \((n-\alpha_{0}+n_{0}-2)+(\alpha_{1}-2)\) conditions on \(\operatorname{Gr}(3,W)\), and the last imposes \(\alpha_{0}-\alpha_{1}-3\) more, for \(n+n_{0}-7\) in total. As a result, one cannot obtain a non-trivial contribution in \(H^{2(n+n_{0}-8)}(\operatorname{Gr}(3,W))\) after push-forward by \(\widetilde{\pi}\).

Suppose on the other hand that \(\alpha_{1}\geq n_{0}+2\). We may define:

**Definition 8.19**.: _Fix \(\alpha_{1}\in[n_{0}+2,\alpha_{0}-1]\). We define \(\widetilde{Y}^{(1,1)}_{(\alpha_{0},\alpha_{1})}\subset\mathbb{P}W\times\operatorname{Coll}(\Lambda_{2},W)\times\operatorname{Gr}(3,W)\) to be the closure of the locus of \((w_{0},\widetilde{\phi_{1}},U)\) satisfying:_

1. \(w_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\)_,_
2. \(\widetilde{\phi_{1}}\) _has type_ \((1,1)\) _with_ \(\ker((\widetilde{\phi_{1}})_{0})=(\Lambda_{2})_{0}=\Lambda^{\prime}_{1}\) _and_ \(\operatorname{im}((\widetilde{\phi_{1}})_{0})\in M_{[\alpha_{1}+1,\alpha_{0}-1]}\)_,_
3.
\(\operatorname{im}(\widetilde{\phi_{1}})\cap M_{[1,\alpha_{1}-1]}\neq 0\) _._ * \(w_{0}\notin\operatorname{im}(\widetilde{\phi_{1}})\)_. In particular,_ \(U=\langle w_{0},\operatorname{im}(\widetilde{\phi_{1}})\rangle\) _and_ \(\dim(U\cap M_{[1,n_{0}]})\neq 2\)_._ The subscheme \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}^{(1,1)}\) is generically smooth and irreducible of dimension \[(\alpha_{0}-n_{0}-1)+(n-\alpha_{0}+\alpha_{1}-2)+(n-\alpha_{1}+2)=2n-n_{0}-1,\] and appears in the limit of \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\) with multiplicity 1. One can also check that if it is required instead that \(w_{0}\in\operatorname{im}(\widetilde{\phi_{1}})\), then we get too many conditions. Also, once \(L_{\alpha_{0}-3}=\Lambda_{2}^{\prime}\), The degeneration of \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\) is now summarized as follows. **Proposition 8.20**.: _Fix any \(\alpha_{0}\in[n_{0}+3,n]\) and \(\alpha_{1}\in[1,\alpha_{0}-2]\). Up to cycles pushing forward to zero under \(\widetilde{\pi}\), in the limit as \(L_{\alpha_{1}}\to\Lambda_{1}^{\prime}\), the subscheme \(\widetilde{Y}_{(\alpha_{0},\alpha_{1}-1)}\) degenerates to a union of the components:_ * \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}\)_, if_ \(\alpha_{1}\leq\alpha_{0}-3\) _(see Lemma_ 8.4_),_ * \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}^{\operatorname{sec}}\)_, if_ \(2\leq\alpha_{1}\leq n_{0}\)_,_ * \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}^{(1,1)}\)_, if_ \(\alpha_{1}\geq n_{0}+2\)_._ _each with multiplicity 1._ We finally, consider the \(\widetilde{Y}_{(\alpha_{0},\alpha_{1})}^{\operatorname{sec}}\) under degeneration, where the \(L_{i}\), \(i=n_{0}+1,\ldots,\alpha_{0}-1\) are made equal to \(\Lambda_{1}^{\prime}\), one by one. We obtain the following degenerate components in the limits (each with multiplicity 1): **Definition 8.21**.: _Fix \(\alpha_{0}\in[n_{0}+3,n]\), \(\alpha_{1}\in[2,n_{0}]\), and \(\alpha_{2}\in[n_{0}+1,\alpha_{0}-1]\). Define \(\widetilde{Y}_{(\alpha_{0},\alpha_{1},\alpha_{2})}^{(1,1)-\operatorname{sec }}\subset\mathbb{P}W\times\operatorname{Coll}(\Lambda_{2},W)\times\operatorname {Gr}(3,W)\) to be the closure of the locus of \((w_{0},\widetilde{\phi_{1}},U)\) satisfying:_ * \(w_{0}\in M_{[1,\alpha_{2}-1]}\cap M_{[\alpha_{0}+1,n]}\)_,_ * \(\widetilde{\phi_{1}}\) _has type_ \((1,1)\) _with_ \(\ker((\widetilde{\phi_{1}})_{0})=\Lambda_{1}^{\prime}\) _and_ \(\operatorname{im}((\widetilde{\phi_{1}})_{0})=w_{0}\)_,_ * \(\operatorname{im}(\widetilde{\phi_{1}})\subset M_{[\alpha_{1}+1,n_{0}]}\) _and_ \(\operatorname{im}(\widetilde{\phi_{1}})\cap M_{[\alpha_{2}+1,\alpha_{0}-1]}\neq 0\)_,_ * \(\dim(U\cap M_{[1,\alpha_{1}-1]})=2\)_._ Furthermore, once \(L_{\alpha_{0}-2}=\Lambda_{1}^{\prime}\), the locus of \((w_{0},\widetilde{\phi_{1}},U)\) where the \(\widetilde{\phi_{1}}\) still have rank 2 pushes forward to 0 under \(\widetilde{\pi}\). We therefore conclude: **Proposition 8.22**.: _Modulo cycles pushing forward to 0 under \(\widetilde{\pi}\), we have_ \[[\widetilde{Y}_{\alpha_{0}}]=\sum_{\alpha_{1}=n_{0}+2}^{\alpha_{0}-2}[ \widetilde{Y}_{(\alpha_{0},\alpha_{1})}^{(1,1)}]+\sum_{\alpha_{1}=2}^{n_{0}} \sum_{\alpha_{2}=n_{0}+1}^{\alpha_{0}-2}[\widetilde{Y}_{(\alpha_{0},\alpha_{1 },\alpha_{2})}^{(1,1)-\operatorname{sec}}].\] ### Pushing forward to \(\mathbf{Gr(3,n)}\) It is left to compute the pushforwards of the four families of components parametrizing totally degenerate collineations found above. 
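These are the families \(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}\) and \(Y^{(1,1,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\) of Corollary 8.16, and \(\widetilde{Y}^{(1,1)}_{(\alpha_{0},\alpha_{1})}\) and \(\widetilde{Y}^{(1,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\) of Proposition 8.22. As a quick consistency check, recorded here only for orientation, the classes obtained in §8.5.1 below for the first family, namely \(\sigma_{n-\alpha_{1}-2,n_{0}-\alpha_{0}-1}\cdot\sigma_{\alpha_{0}-3,\alpha_{1}-2}\), have total codimension

\[(n-\alpha_{1}-2)+(n_{0}-\alpha_{0}-1)+(\alpha_{0}-3)+(\alpha_{1}-2)=n+n_{0}-8,\]

matching the degree \(|\lambda|\) in Propositions 8.23 and 8.24.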
In the entirety of this section, the SSYTs we consider will always be filled with the entries \(1,2,3\) (not all necessarily appearing).

#### 8.5.1. \(Y^{(1,1,1)}\)

As in the case \(n=n_{0}\), the image \(\pi\left(Y_{(\alpha_{0},\alpha_{1})}^{(1,1,1)}\right)\) is a generically transverse intersection of two Schubert cycles, of classes \(\sigma_{n-\alpha_{1}-2,n_{0}-\alpha_{0}-1}\) (corresponding to property (ii) in Definition 8.11) and \(\sigma_{\alpha_{0}-3,\alpha_{1}-2}\) (corresponding to property (iii)). Thus, we have

\[\sum_{2\leq\alpha_{1}<\alpha_{0}\leq n_{0}}[Y_{(\alpha_{0},\alpha_{1})}^{(1,1,1)}]=\sum_{2\leq\alpha_{1}<\alpha_{0}\leq n_{0}}\sigma_{n-\alpha_{1}-2,n_{0}-\alpha_{0}-1}\cdot\sigma_{\alpha_{0}-3,\alpha_{1}-2}.\]

**Proposition 8.23**.: _Let \(\lambda\subset(n-3)^{3}\) be a partition with \(|\lambda|=n+n_{0}-8\). Then, the coefficient of \(\sigma_{\lambda}\) in_

\[\sum_{2\leq\alpha_{1}<\alpha_{0}\leq n_{0}}\sigma_{n-\alpha_{1}-2,n_{0}-\alpha_{0}-1}\cdot\sigma_{\alpha_{0}-3,\alpha_{1}-2}\]

_is equal to the number of SSYTs of shape \(\lambda\) such that_

* \(\lambda\) _has no_ \((1,2)\)_-strip of length_ \(n_{0}-3\)_, and_
* \(\lambda\) _has no_ \((2,3)\)_-strip of (maximal) length_ \(n-3\)_._

The first condition is equivalent to the requirement that the total number of \(1\)'s and \(2\)'s in the first row of \(\lambda\) is at most \(n_{0}-3\).

Proof.: This is an application of Coskun's geometric Littlewood-Richardson rule [11] similar to the main calculation of [21]. Choose a basis \(w_{1},\ldots,w_{n}\) of \(W\) dual to the basis of hyperplanes \(M_{1},\ldots,M_{n}\). A generic point of \(\pi\left(Y_{(\alpha_{0},\alpha_{1})}^{(1,1,1)}\right)\) has a basis \(u_{0},u_{1},u_{2}\) with:

* \(u_{0}\in\langle w_{1},\ldots,w_{\alpha_{1}}\rangle\),
* \(u_{1}\in\langle w_{\alpha_{1}},\ldots,w_{\alpha_{0}},w_{n},\ldots,w_{n_{0}+1}\rangle\),
* \(u_{2}\in\langle w_{\alpha_{0}},\ldots,w_{n}\rangle\).

The corresponding subscheme of \(\operatorname{Gr}(3,W)\) is associated to the _Mondrian tableau_ \(\mathcal{T}\) depicted in Figure 2. The tableau \(\mathcal{T}\) consists of the data of three squares \(S_{0},S_{1},S_{2}\) corresponding to the conditions on the vectors \(u_{0},u_{1},u_{2}\), respectively, and the integers appearing in each square correspond to the basis vectors spanning the subspaces of \(W\) prescribed to contain the \(u_{j}\). Note that we have reversed the order of \(\alpha_{0}+1,\ldots,n\) to make the squares contiguous.

Consider now the Mondrian tableaux \(\mathcal{T}^{\prime},\mathcal{T}^{\prime\prime}\) depicted in Figure 3. These define subschemes \(Z^{\prime},Z^{\prime\prime}\), respectively, of \(\operatorname{Gr}(3,W^{+})\), where \(W^{+}\) is obtained from \(W\) by adding an additional basis vector \(w_{0}\). Note that \(\mathcal{T}\) is obtained from \(\mathcal{T}^{\prime}\) by shifting \(S_{0}\) to the northeast by one unit, and \(\mathcal{T}^{\prime\prime}\) is obtained from \(\mathcal{T}^{\prime}\) by replacing \(S_{0}\) and \(S_{1}\) with the union (or span) \(S_{0}\cup S_{1}\) in \(\mathcal{T}^{\prime}\) and the intersection \(S_{0}\cap S_{1}\) in \(\mathcal{T}\). The geometric Littlewood-Richardson rule [11, Theorem 3.32] implies that

\[[Z^{\prime}]=[Z^{\prime\prime}]+\left[\pi\left(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}\right)\right],\]

on \(\operatorname{Gr}(3,W^{+})\), where the last term is pushed forward from \(\operatorname{Gr}(3,W)\) to \(\operatorname{Gr}(3,W^{+})\) via the inclusion \(W\subset W^{+}\).
The geometric content of this relation is that, in the limit of the degeneration of the basis of \(W^{+}\) sending \(w_{0}\to tw_{0}+(1-t)w_{\alpha_{1}}\) over \(t\in\mathbb{D}\), the subvariety \(Z^{\prime}\) breaks into a union of \(Z^{\prime\prime}\) and \(\pi(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})})\). We now compute \([Z^{\prime}]\) and \([Z^{\prime\prime}]\). First, \(Z^{\prime}\) is the generically transverse intersection of a Schubert variety of class \(\sigma_{n-\alpha_{1}-1}\) (corresponding to \(S_{0}\)) and a subscheme of class \((\sigma_{\alpha_{0}-2}\cdot\sigma_{\alpha_{1}+n_{0}-\alpha_{0}-2})_{\lambda_{ 0}\leq n_{0}-3}\) (corresponding to \(S_{0}\) and \(S_{1}\)). The computation of the latter class itself follows from the geometric Pieri rule, as the subscheme in question is defined by Schubert cycles of classes \(\sigma_{\alpha_{0}-2},\sigma_{\alpha_{1}+n_{0}-\alpha_{0}-2}\), and applying [11, Algorithm 3.12] will result in an inner square of size at least \(n-n_{0}+2\). Alternatively, one can view this subscheme as pushed forward from \(\operatorname{Gr}(3,\langle w_{\alpha_{1}},\ldots,w_{n}\rangle)\), on which it is given by a _transverse_ intersection of Schubert cycles. Now, by the Pieri rule, the coefficient of \(\sigma_{\lambda}\) in \((\sigma_{\alpha_{0}-2}\cdot\sigma_{\alpha_{1}+n_{0}-\alpha_{0}-2})_{\lambda_{ 0}\leq n_{0}-3}\cdot\sigma_{n-\alpha_{1}-1}\) is given by the number of SSYTs of shape \(\lambda\) with \((c_{1},c_{2},c_{3})=(\alpha_{0}-2,\alpha_{1}+n_{0}-\alpha_{0}-2,n-\alpha_{1}-1)\), with the property that at most \(n_{0}-3\) of the boxes in the first row are filled with \(1\)'s and \(2\)'s (equivalently, there is no \((1,2)\)-strip of length \(n_{0}-2\)). Summing over all \(\alpha_{0},\alpha_{1}\) with \(2\leq\alpha_{1}<\alpha_{0}\leq n_{0}\), we get exactly the SSYTs with \(1,2,3\) each appearing any number of times, with no such \((1,2)\)-strips. On the other hand, \(Z^{\prime\prime}\) is the generically transverse intersection of Schubert varieties of classes \(\sigma_{(n-2,n_{0}-\alpha_{0}-1)}\) (corresponding to the two new squares replacing \(S_{0},S_{1}\)) and \(\sigma_{\alpha_{0}-2}\). By the same argument as in [21, Proposition 11], the coefficient of \(\sigma_{\lambda}\) in the product of these two classes is equal to the number of SSYTs of shape \(\lambda\) with \((c_{1},c_{2},c_{3})=(\alpha_{0}-2,\alpha_{1}+n_{0}-\alpha_{0}-2,n-\alpha_{1}-1)\) and with a \((2,3)\)-strip of maximal length \(n-2\). Summing over all \(\alpha_{0},\alpha_{1}\) with \(2\leq\alpha_{1}<\alpha_{0}\leq n_{0}\), we get exactly the SSYTs with a \((2,3)\)-strip of maximal length. Finally, to obtain \(\left[\pi\left(Y^{(1,1,1)}_{(\alpha_{0},\alpha_{1})}\right)\right]\), we subtract \([Z^{\prime\prime}]\) from those of \([Z^{\prime}]\) and divide by \(\sigma_{1^{3}}\). (One can see directly that the cofficient of \(\sigma_{\lambda}\) in the difference is zero when \(\lambda_{2}=0\), but this was known a priori.) The conclusion follows. #### 8.5.2. \(Y^{(1,1,1)-\sec}\) **Proposition 8.24**.: _Let \(\lambda\subset(n-3)^{3}\) be a partition with \(|\lambda|=n+n_{0}-8\). 
If \(\lambda_{0}\leq n-4\), then, the coefficient of \(\sigma_{\lambda}\) in_ \[\sum_{\alpha_{0}=3}^{n_{0}}\sum_{\alpha_{1}=2}^{\alpha_{0}-1}\sum_{\alpha_{2} =n_{0}+1}^{n}\pi_{*}\left([Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2} )}]\right)\] _is equal to the number of SSYTs of shape \(\lambda\) such that_ * \(c_{2}\leq n_{0}-3\)_, and_ * \(\lambda\) _has a_ \((1,2)\)_-strip of length (at least)_ \(n_{0}-3\)_._ _If instead \(\lambda_{0}=n-3\), then the coefficient of \(\sigma_{\lambda}\) is zero._ Note that if \(\lambda\) has a \((1,2)\)-strip of length at least \(n_{0}-3\), then such a strip already exists in the first row of \(\lambda\). Thus, this condition is equivalent to the stipulation that the total number of \(1\)'s and \(2\)'s in the first row of \(\lambda\) is at least \(n_{0}-3\). Proof.: A general point of \(\pi\left(Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\right)\) has a basis \(u_{0},u_{1},u_{2}\) with the property that: * \(u_{0}\in M_{[\alpha_{1}+1,n_{0}]}\cap M_{[\alpha_{2}+1,n]}\), * \(u_{1}\in M_{[1,\alpha_{0}-1]}\cap M_{[\alpha_{0}+1,\alpha_{2}-1]}\), * \(u_{2}\in M_{[1,\alpha_{1}-1]}\). Furthermore, \(\pi\) is generically injective on \(Y^{(1,1,1)-\sec}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\). The condition on \(u_{0}\) corresponds to a Schubert subvariety of \(\operatorname{Gr}(3,W)\) of class \(\sigma_{(n-\alpha_{2})+(n_{0}-\alpha_{1})-2}\), and the conditions on \(u_{1},u_{2}\) correspond to one of class \(\sigma_{\alpha_{2}-4,\alpha_{1}-2}\). These two subschemes are not in general position with respect to one another, due to the fact that \(M_{\alpha_{1}}\) does not appear in any of the intersections constraining the \(u_{j}\). Dually, if we choose a dual basis \(w_{1},\ldots,w_{n}\) to the \(M_{i}\), then \(w_{\alpha_{1}}\) appears in all of the subspaces constraining the \(u_{j}\). However, passing from \(W\) to \(W/w_{j}\) and taking the same conditions on \(\operatorname{Gr}(3,W/W_{j})\), one does obtain a generically transverse intersection of Schubert subvarieties of class \(\sigma_{\alpha_{2}-4,\alpha_{1}-2}\cdot\sigma_{(n-\alpha_{2})+(n_{0}-\alpha_{ 1})-2}\). One can in particular apply the geometric Littlewood-Richardson algorithm to deform this intersection into a union of Schubert varieties \(\Sigma_{\lambda}\) in \(\operatorname{Gr}(3,W/w_{\alpha_{1}})\) defined with respect to the basis \(w_{1},\ldots,w_{\alpha_{1}-1},w_{\alpha_{1}+1},\ldots,w_{n}\). More precisely, the \(\Sigma_{\lambda}\) are defined by degeneracy conditions \(\dim(U\cap W^{\prime}_{j})\geq j\) for \(j=1,2,3\), where the subspaces \(W^{\prime}_{3}\subset W^{\prime}_{2}\subset W^{\prime}_{1}\subset W/w_{\alpha _{1}}\) are each spanned by some subset of the \(w_{1},\ldots,w_{\alpha_{1}-1},w_{\alpha_{1}+1},\ldots,w_{n}\). One can now run exactly the same algorithm in \(\operatorname{Gr}(3,W)\) with the conditions on \(u_{0},u_{1},u_{2}\) above, keeping track of the fact that the constraining subspaces now all contain the additional basis vector \(w_{\alpha_{1}}\). In particular, one obtains in the end precisely the same union of Schubert varieties \(\Sigma_{\lambda}\), now defined on \(\operatorname{Gr}(3,W)\) by the conditions \(\dim(U\cap W_{j})\geq j\) for \(j=1,2,3\), where \(W_{j}\) is the pullback of \(W^{\prime}_{j}\) under the quotient by \(w_{\alpha_{1}}\). 
The class in question is therefore given by computing the product \(\sigma_{\alpha_{2}-4,\alpha_{1}-2}\cdot\sigma_{(n-\alpha_{2})+(n_{0}-\alpha_{ 1})-2}\) in \(\operatorname{Gr}(3,W/W_{j})\), and then applying the inclusion \(H^{2(n+n_{0}-8)}(\operatorname{Gr}(3,W/W_{j}))\to H^{2(n+n_{0}-8)}( \operatorname{Gr}(3,W))\) defined by \(\sigma_{\lambda}\mapsto\sigma_{\lambda}\). Equivalently, one computes \[(\sigma_{\alpha_{2}-4,\alpha_{1}-2}\cdot\sigma_{(n-\alpha_{2})+(n_{0}-\alpha_ {1})-2})_{\lambda_{0}\leq n-4},\] by which we mean that the product is computed on \(\operatorname{Gr}(3,W)\) and only the terms with \(\sigma_{\lambda}\) with \(\lambda_{0}\leq n-4\) are kept. It remains to match the terms in this product, as determined by the Pieri rule, with the described SSYTs after summing over all \((\alpha_{0},\alpha_{1},\alpha_{2})\) with \(2\leq\alpha_{1}<\alpha_{0}\leq n_{0}<\alpha_{2}\leq n\). This is done as follows: given \((\alpha_{0},\alpha_{1},\alpha_{2})\) fixed, a term \(\sigma_{\lambda}\) in the product \(\sigma_{\alpha_{2}-4,\alpha_{1}-2}\cdot\sigma_{(n-\alpha_{2})+(n_{0}-\alpha_{ 1})-2}\) is given by an inclusion of Young tableaux \((\alpha_{2}-4,\alpha_{1}-2)\subset\lambda\), where the boxes of \(\lambda-(\alpha_{2}-4,\alpha_{1}-2)\) are filled with 3's, no two in the same column. Then, the value of \(\alpha_{0}\) further determines the filling of the remaining boxes corresponding to the shape \((\alpha_{2}-4,\alpha_{1}-2)\) with 1's and 2's: one takes the unique such filling with \((c_{1},c_{2})=(\alpha_{2}-\alpha_{0}+\alpha_{1}-3,\alpha_{0}-3)\). Because \(\alpha_{2}-4\geq n_{0}-3\), the condition on \((1,2)\) is automatically satisfied, and we also have \(c_{2}\leq\alpha_{0}-3\). Conversely, given a SSYT as in the statement of the Proposition, the values of \(\alpha_{0},\alpha_{1},\alpha_{2}\) are uniquely determined and satisfy the needed chain of inequalities. (In particular, the assumption that \(\lambda_{0}\leq n-4\) gives that \(\alpha_{2}\leq n\).) This completes the proof. #### 8.5.3. \(\widetilde{Y}^{(1,1)}\) **Proposition 8.25**.: _Let \(\lambda\subset(n-3)^{3}\) be a partition with \(|\lambda|=n+n_{0}-8\). Then, the coefficient of \(\sigma_{\lambda}\) in_ \[\sum_{\alpha_{0}=n_{0}+3}^{n}\sum_{\alpha_{1}=n_{0}+2}^{\alpha_{0}-2}\pi_{*} \left([\widetilde{Y}^{(1,1)}_{(\alpha_{0},\alpha_{1})}]\right)\] _is equal to the number of SSYTs of shape \(\lambda\) with at least \(n_{0}-1\) 2's in the second row._ In particular, we need \(\lambda_{1}\geq n_{0}-1\) in order for this number to be non-zero, or equivalently, \(\lambda_{0}+\lambda_{2}\leq n-7\). Proof.: A general point of \(\widetilde{\pi}\left(Y^{(1,1)}_{(\alpha_{0},\alpha_{1})}\right)\) has a basis \(u_{0},u_{1},u_{2}\) with the property that: * \(u_{0}\in M_{[1,n_{0}]}\cap M_{[\alpha_{0}+1,n]}\), * \(u_{1}\in M_{[\alpha_{1}+1,\alpha_{0}-1]}\), * \(u_{2}\in M_{[1,\alpha_{1}-1]}\). We see that if \(\alpha_{1}\geq n_{0}+2\), \(\alpha_{0}\geq\alpha_{1}+3\), and \(\alpha_{0}<n\), then \(\widetilde{\pi}\) is generically injective on \(Y^{(1,1,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\); otherwise, \(\widetilde{\pi}\) has positive-dimensional fibers and the corresponding summand above is zero. In the former case, we again apply the geometric Littlewood-Richardson rule implicitly. 
The subvariety of \(\operatorname{Gr}(3,n)\) of \(U\subset W\) having a 2-dimensional subspace of the form \(\langle u_{0},u_{2}\rangle\) as above has class \((\sigma_{n-\alpha_{0}+n_{0}-2}\cdot\sigma_{\alpha_{1}-3})_{\lambda_{1}\geq n_{0 }-1}\). Then, the locus of \(U\subset W\) containing a vector \(u_{1}\) as above is a Schubert variety of class \(\sigma_{\alpha_{0}-\alpha_{1}-3}\). These two subvarieties of \(\operatorname{Gr}(3,n)\) are defined with respect to disjoint subsets of the hyperplanes \(M_{1},\dots,M_{n}\), so their intersection has class \[(\sigma_{n-\alpha_{0}+n_{0}-2}\cdot\sigma_{\alpha_{1}-3})_{\lambda_{1}\geq n_{ 0}-1}\cdot\sigma_{\alpha_{0}-\alpha_{1}-3}.\] For fixed \(\alpha_{0},\alpha_{1}\), the coefficient of \(\sigma_{\lambda}\) is equal to the number of SSYT of shape \(\lambda\) with \((c_{1},c_{2},c_{3})=(n-\alpha_{0}+n_{0}-2,\alpha_{1}-3,\alpha_{0}-\alpha_{1}-3)\) with at least \(n_{0}-1\) 2's in the second row. Note that \(\alpha-3\geq n_{0}-1\). Conversely, given such a SSYT with \((c_{1},c_{2},c_{3})\) arbitrary, we may uniquely recover \(\alpha_{1}=c_{2}+3\geq n_{0}+2\) and \(\alpha_{0}=c_{3}+\alpha_{1}+3=c_{2}+c_{3}+6\leq n-1\). Thus, summing over all \(\alpha_{0},\alpha_{1}\) gives the claimed SSYTs. #### 8.5.4. \(\widetilde{Y}^{(1,1)-\operatorname{sec}}\) **Proposition 8.26**.: _Let \(\lambda\subset(n-3)^{3}\) be a partition with \(|\lambda|=n+n_{0}-8\). If \(\lambda_{0}\leq n-5\), then, the coefficient of \(\sigma_{\lambda}\) in_ \[\sum_{\alpha_{0}=n_{0}+3}^{n}\sum_{\alpha_{1}=2}^{n_{0}}\sum_{\alpha_{2}=n_{0} +1}^{\alpha_{0}-2}[\widetilde{Y}^{(1,1)-\operatorname{sec}}_{(\alpha_{0}, \alpha_{1},\alpha_{2})}].\] _is equal to the number of SSYTs of shape \(\lambda\) with \(c_{2}\geq n_{0}-2\), but with at most \(n_{0}-2\) 2's in the second row._ _If instead \(\lambda_{0}\geq n-4\), then this coefficient is zero._ Proof.: A general point of \(\widetilde{\pi}\left(Y^{(1,1)}_{(\alpha_{0},\alpha_{1})}\right)\) has a basis \(u_{0},u_{1},u_{2}\) with the property that: * \(u_{0}\in M_{[1,\alpha_{2}-1]}\cap M_{[\alpha_{0}+1,n]}\), * \(u_{1}\in M_{[\alpha_{1}+1,n_{0}]}\cap M_{[\alpha_{2}+1,\alpha_{0}-1]}\), * \(u_{2}\in M_{[1,\alpha_{1}-1]}\). Moreover, the map \(\widetilde{\pi}\) is generically injective on \(Y^{(1,1,1)-\operatorname{sec}}_{(\alpha_{0},\alpha_{1},\alpha_{2})}\). The conditions on \(u_{0},u_{2}\) define a Schubert variety of class \(\sigma_{n-\alpha_{0}+\alpha_{2}-3,\alpha_{1}-2}\), and that on \(u_{1}\) defines a Schubert variety of class \(\sigma_{\alpha_{0}-\alpha_{2}+n_{0}-\alpha_{1}-3}\). These Schubert varieties are not defined with respect to transverse flags, but this is true upon passing to the quotient of \(W\) by vectors dual to the hyperplanes \(M_{\alpha_{2}}\) and \(M_{\alpha_{0}}\), which do not appear in any of the intersections above. Thus, the desired class is \[(\sigma_{n-\alpha_{0}+\alpha_{2}-3,\alpha_{1}-2}\cdot\sigma_{\alpha_{0}- \alpha_{2}+n_{0}-\alpha_{1}-3})_{\lambda_{0}\leq n-5}.\] If \(\lambda_{0}\leq n-5\) and \(\alpha_{0},\alpha_{1},\alpha_{2}\) are fixed, then each term \(\sigma_{\lambda}\) in the expansion of the above product corresponds to an inclusion of Young tableaux \((n-\alpha_{0}+\alpha_{2}-3,\alpha_{1}-2)\subset\lambda\), whose complement is filled by 3's, no two in the same column. 
Then, we fill of the remaining boxes in the unique semi-standard way with \((c_{1},c_{2})=(n-\alpha_{0}+\alpha_{1}-2,\alpha_{2}-3)\); this is possible because \(n-\alpha_{0}+\alpha_{1}-2\leq n-\alpha_{0}+\alpha_{2}-3\), as \(\alpha_{2}>n_{0}\geq\alpha_{1}\). Furthermore, note that \(c_{2}=\alpha_{2}-3\geq n_{0}-2\), and that the number of 2's in the second row is \(\alpha_{1}-2\leq n_{0}-2\). Conversely, given an SSYT of shape \(\lambda\) of the desired form, then we take \(\alpha_{1}\) to be 2 more than the number of 2's in the second row, so that \(2\leq\alpha_{1}\leq n_{0}\), we take \(\alpha_{2}=c_{2}+3\geq n_{0}+1\), and \(\alpha_{0}=n+\alpha_{1}-2-c_{1}\). Then, \(\alpha_{0}\leq n\) because \(c_{1}\) at least the number of 2's in the second row, and the statement \(\alpha_{0}\geq\alpha_{2}+2\) reduces to the fact that the total number of 1's and 2's is at most \(n-5\). This completes the proof. #### 8.5.5. Proof of Theorem 1.2 It suffices to assume, as we have throughout, that \(d=n-1\). We combine the SSYTs underlying the contributions to \([Z]\) coming from components of the four types \(Y^{(1,1,1)},Y^{(1,1,1)-\operatorname{sec}},\widetilde{Y}^{(1,1)},\widetilde{Y}^ {(1,1)-\operatorname{sec}}\). We will see that the subsets of SSYTs of a given shape \(\lambda\) from each case are disjoint, and that their union gives exactly the subsets described in Theorem 1. First, suppose that \(\lambda_{0}\leq n-5\). Then, we get from \(\widetilde{Y}^{(1,1)}\) all SSYTs with at least \(n_{0}-1\) 2's in the 2nd row. Of the remaining SSYTs, with at most \(n_{0}-2\) 2's in the 2nd row, we then get from \(\widetilde{Y}^{(1,1)-\operatorname{sec}}\) those with \(c_{2}\geq n_{0}-2\). The remaining SSYTs are exactly those with \(c_{2}\leq n_{0}-3\). Of those, we get from \(Y^{(1,1,1)-\operatorname{sec}}\) the SSYTs with a \((1,2)\)-strip of length \(n_{0}-3\), and the only remaining SSYTs are those without such a \((1,2)\)-strip, obtained from \(Y^{(1,1,1)}\). Note here that there are automatically no \((2,3)\)-strips of length \(n-3\), and also that an SSYT without a \((1,2)\)-strip of length \(n_{0}-3\) necessarily has \(c_{2}\leq n_{0}-3\). In all, we get exactly the set of all SSYTs, each appearing exactly once in one of the four families of components. Next, suppose that \(\lambda_{0}=n-4\). We only get contributions from \(Y^{(1,1,1)},Y^{(1,1,1)-\sec}\). In both cases, we get SSYTs with \(c_{2}\leq n_{0}-3\). Then, the SSYTs coming from \(Y^{(1,1,1)-\sec}\) are those with a \((1,2)\)-strip of length \(n_{0}-3\), and those coming from \(Y^{(1,1,1)}\) are exactly those without such a strip. Finally, suppose that \(\lambda_{0}=n-3\). Then, the only SSYTs come from \(Y^{(1,1,1)}\), and here we get precisely the claimed subset. \(\square\) As a check, Proposition 5.4 under assumption (c) gives that \(\Gamma^{\lambda}_{2,\overrightarrow{n},d}=|\mathsf{SSYT}_{3}(\lambda)|\) whenever \(n\geq d+3\); by Proposition 6.1, this is equivalent to the statement that \(\Gamma^{\lambda}_{2,\overrightarrow{n},n-1}=|\mathsf{SSYT}_{3}(\lambda)|\) when \(\lambda_{0}\leq n-5\).
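For concreteness, the tableau counts entering Theorem 1.2 can be enumerated directly for small shapes. The following is a minimal illustrative Python sketch (not part of the argument above): it lists the SSYTs of a given shape with entries in \(\{1,2,3\}\), cross-checks the total count \(|\mathsf{SSYT}_{3}(\lambda)|\) against the Weyl dimension formula for \(GL(3)\), and applies the first-row reformulation of the \((1,2)\)-strip condition stated after Proposition 8.23; the specific shape and the bound `k` standing in for \(n_{0}-3\) are arbitrary illustration choices.

```python
from math import prod

def ssyt_fillings(shape, max_entry=3):
    """Yield all semistandard fillings of `shape` (weakly decreasing row lengths)
    with entries in {1, ..., max_entry}: rows weakly increase left to right,
    columns strictly increase top to bottom."""
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]

    def rec(idx, tab):
        if idx == len(cells):
            yield dict(tab)
            return
        i, j = cells[idx]
        lo = 1
        if j > 0:
            lo = max(lo, tab[(i, j - 1)])       # weak increase along rows
        if i > 0:
            lo = max(lo, tab[(i - 1, j)] + 1)   # strict increase down columns
        for v in range(lo, max_entry + 1):
            tab[(i, j)] = v
            yield from rec(idx + 1, tab)
            del tab[(i, j)]

    yield from rec(0, {})

def weyl_dim_gl3(shape):
    """|SSYT_3(lambda)| via the Weyl dimension formula for GL(3); used here only
    as an independent cross-check of the enumerator."""
    l = list(shape) + [0] * (3 - len(shape))
    num = prod(l[i] - l[j] + j - i for i in range(3) for j in range(i + 1, 3))
    den = prod(j - i for i in range(3) for j in range(i + 1, 3))
    return num // den

if __name__ == "__main__":
    lam = (3, 2, 1)
    fillings = list(ssyt_fillings(lam))
    assert len(fillings) == weyl_dim_gl3(lam)
    # First-row form of the (1,2)-strip condition of Proposition 8.23:
    # at most k entries of the first row are 1 or 2.
    k = 2  # hypothetical value standing in for n_0 - 3
    good = [t for t in fillings
            if sum(1 for j in range(lam[0]) if t[(0, j)] <= 2) <= k]
    print(len(fillings), len(good))
```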
2309.08733
Optimal path planning of multi-agent cooperative systems with rigid formation
In this article, we consider the path-planning problem of a cooperative homogeneous robotic system with rigid formation. An optimal controller is designed for each agent in such rigid systems based on Pontryagin's minimum principle theory. We found that the optimal control for each agent is equivalent to the optimal control for the Center of Mass (CoM). This equivalence is then proved by using some analytical mechanics. Three examples are finally simulated to illustrate our theoretical results. One application could be utilizing this equivalence to simplify the original multi-agent optimal control problem.
Ananda Rangan Narayanan, Mi Zhou, Erik Verriest
2023-09-15T19:52:31Z
http://arxiv.org/abs/2309.08733v1
# Optimal path planning of multi-agent cooperative systems with rigid formation ###### Abstract In this article, we consider the path-planning problem of a cooperative homogeneous robotic system with rigid formation. An optimal controller is designed for each agent in such rigid systems based on Pontryagin's minimum principle theory. We found that the optimal control for each agent is equivalent to the optimal control for the Center of Mass (CoM). This equivalence is then proved by using some analytical mechanics. Three examples are finally simulated to illustrate our theoretical results. One application could be utilizing this equivalence to simplify the original multi-agent optimal control problem. s + Footnote †: footnoteinfo \(f(r_{1},r_{2},r_{3},...,t)=0\), then the constraints are said to be **holonomic**. _Fact 3._ The rigid body has holonomic constraints. _Theorem 4._ (Chasles' theorem). Any general displacement of a rigid body can be represented by a translation plus a rotation. This theorem suggests that we can split the problem of rigid body motion into two separate phases, one concerned solely with the translational motion of the body, and the other, with its rotational motion. This lays the foundation for our proposed method. ### Problem formulation Consider a system composed of \(N\) agents. Each agent has dynamics \(\dot{r}_{i}=f(r_{i},u_{i}),\;i=1,2,...,N\). The objective is to regard the whole robotic system as a rigid system and steer it optimally from position \(\mathcal{P}(0)=\{r_{1}(0),r_{2}(0),\ldots,r_{N}(0)\}\) to \(\mathcal{P}(t_{f})=\{r_{1}(t_{f}),r_{2}(t_{f}),\ldots,r_{N}(t_{f})\}\) while keeping their formation \(\mathcal{S}(t)=\{r_{ij}(t)=r_{ij}(0)|\forall i,j\in{1,\ldots,N}\}\) the same during the whole time period. One example is multi-agent systems cooperatively moving a big object from one position to another position while each agent has its all intelligence. As shown in Fig. 1, five agents are at initial position \(\mathcal{P}(0)\) and have initial formation \(\mathcal{S}(0)\) at \(t=0\). Each agent \(i\) has its own controller \(u_{i}\) and they want to move cooperatively to position \(\mathcal{P}(t_{f})\) while keeping the relative position of each agent in this system the same. With these definitions and objectives, we can thus formulate a Bolza-type optimal control problem with some equality constraints: \[\min J =\phi(r(t_{f}))+\int_{0}^{t_{f}}\sum_{i}L_{i}(r_{i},u_{i})dt \tag{1}\] \[s.t.,\;\dot{r}_{i}=f_{i}(r_{i},u_{i}),\;i=1,2,...,N\] (2) \[\mathcal{S}(0)=\mathcal{S}(t_{f})=\mathcal{S}(t),\;\forall t\in[ 0,t_{f}] \tag{3}\] where \(r_{i}\in\mathbb{R}^{2}\) is the state, \(u_{i}\in\mathcal{U}\) is the control input, \(t_{f}\) denotes the terminal time. The system dynamics \(f_{i}:\mathbb{R}^{n}\times\mathcal{U}\rightarrow\mathbb{R}^{n}\) is a continuous and differentiable function. The stage cost \(L_{i}(r_{i},u_{i}):\mathbb{R}^{n}\times\mathcal{U}\rightarrow\mathbb{R}\) is also continuous and differentiable. With these assumptions, we can use Pontryagin minimum principle to obtain the optimality conditions and the Euler-Lagrangian equation, thus solving this optimal control problem for each agent. This is well-developed already so our aim here is to introduce a new analytical-mechanics-based method to simplify this multi-state-multi-controller problem. ## 3 Theoretic Results In this section, we will introduce our method to make the original multi-agent system optimal control problem equivalent to a single-agent optimal control problem. 
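Before the analytical reduction, problem (1)-(3) can also be sanity-checked numerically by direct transcription. The sketch below is illustrative only and is not the method developed in this paper: it discretizes the single-integrator dynamics \(\dot{r}_{i}=u_{i}\) with piecewise-constant controls, imposes the terminal and formation constraints at the grid points, and minimizes the discretized energy with an off-the-shelf SQP solver. The grid size, the forward-Euler integrator, and the boundary data (taken from the two-agent pipe example of Section 4 below) are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Problem data: the two-agent "pipe" example used later in the paper.
r0 = np.array([[0.0, 0.0], [0.0, 1.0]])              # initial positions
rf = np.array([[0.5, 0.0], [1.0, np.sqrt(3) / 2]])   # terminal positions
T, tf = 20, 1.0
dt = tf / T
N = r0.shape[0]
l2 = np.sum((r0[0] - r0[1]) ** 2)                    # squared pipe length

def rollout(u_flat):
    """Integrate r_dot_i = u_i with piecewise-constant controls (forward Euler)."""
    u = u_flat.reshape(T, N, 2)
    r = np.empty((T + 1, N, 2))
    r[0] = r0
    for k in range(T):
        r[k + 1] = r[k] + dt * u[k]
    return r, u

def energy(u_flat):
    # Discretization of the cost (5): one half of the integrated control energy.
    _, u = rollout(u_flat)
    return 0.5 * dt * np.sum(u ** 2)

def terminal_defect(u_flat):
    r, _ = rollout(u_flat)
    return (r[-1] - rf).ravel()

def rigidity_defect(u_flat):
    # Formation constraint (6), imposed at every grid point.
    r, _ = rollout(u_flat)
    return np.sum((r[:, 0] - r[:, 1]) ** 2, axis=1) - l2

cons = [{"type": "eq", "fun": terminal_defect},
        {"type": "eq", "fun": rigidity_defect}]
u_guess = np.tile(((rf - r0) / tf).reshape(1, N, 2), (T, 1, 1)).ravel()
sol = minimize(energy, u_guess, constraints=cons, method="SLSQP",
               options={"maxiter": 500})
print(sol.fun)  # should approach the analytical optimum as T grows
```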
We first derive the optimality conditions for the two-agent case with a simple position-velocity model. Multi-agent cases are then extended using the same logic. ### Two-agent system To make it more clear, we first provide a two-agent system in a two-dimensional plane as an example. Consider the following two-agent system: \[\dot{r}_{i}=u_{i},\;i=1,2 \tag{4}\] where \(r_{i}\triangleq[x,y]^{\top}\), \(u_{i}\in\mathbb{R}^{2}\). The performance index is defined as the kinetic energy expended: \[J=\frac{1}{2}\int_{0}^{t_{f}}\left\|u_{1}\right\|_{2}^{2}+\left\|u_{2}\right\| _{2}^{2}\mathrm{dt}. \tag{5}\] To make the two agents keep the same distance during the whole process, the following rigidity constraint should be added: \[\left\|r_{1}-r_{2}\right\|_{2}=l \tag{6}\] where \(l\) is a constant denoting the distance between two agents. Construct the following Hamiltonian \[H=\frac{1}{2}\left\|u_{1}\right\|_{2}^{2}+\frac{1}{2}\left\|u_{2}\right\|_{2}^ {2}+\lambda_{1}^{\top}u_{1}+\lambda_{2}^{\top}u_{2}+\frac{1}{2}\mu\left\|r_{1 }-r_{2}\right\|_{2}^{2} \tag{7}\] where \(\lambda_{i}\in\mathbb{R}^{2}\) is the Lagrangian multiplier, \(\mu\) is also a multiplier corresponding to the equality constraints (6). Applying the Pontryagin Minimum Principle, we get equations for the optimal control \(u_{i}\) and the dynamics of the Lagrange multipliers \(\lambda_{i}\). The optimality conditions yield: \[\frac{\partial H}{\partial u_{i}}=0\implies u_{i}+\lambda_{i}=0,\;i=1,2.\] The co-state equations yield: \[\dot{\lambda}_{i}=-\frac{\partial H}{\partial r_{i}}\implies\dot{\lambda}_{ i}=-\mu(r_{i}-r_{j}),\;(i,j)=\{(1,2),(2,1)\}.\] Hence, \[\ddot{r}_{i}=\mu(r_{i}-r_{j}),\;(i,j)=\{(1,2),(2,1)\}.\] This leads to \[\ddot{r}_{1}+\ddot{r}_{2} =0, \tag{8}\] \[\ddot{r}_{1}-\ddot{r}_{2} =2\mu(r_{1}-r_{2}). \tag{9}\] Define \[r_{c}=\frac{r_{1}+r_{2}}{2}. \tag{10}\] Eqn. (8) thus implies that the \(\ddot{r}_{c}=0\). In other words, the center of mass does not accelerate. Rewriting the coordinates of the agents \(r_{i}\) in terms of the center of mass, \[r_{1} =r_{c}+\frac{l}{2}\hat{s}(\theta), \tag{11}\] \[r_{2} =r_{c}-\frac{l}{2}\hat{s}(\theta). \tag{12}\] Figure 1: Problem description: a five-agent example where \(r_{c}\) is the coordinate of the center of mass as defined in Eqn.(10) and \(\hat{s}(\theta)\) is a unit vector at an angle \(\theta\) from the horizontal. \[\hat{s}(\theta)=\begin{bmatrix}\cos(\theta)\\ \sin(\theta)\end{bmatrix}.\] This representation of \(r_{i}\) automatically satisfies the constraint equation (6). Taking the translational and angular velocity of the center of mass as \(u_{c}\) and \(\omega_{c}\) respectively, velocities of the agents \(r_{i}\) can be represented as, \[\begin{cases}u_{1}&=\dot{r}_{1}=u_{c}+\dfrac{l\omega}{2}\hat{s}^{\bot}(\theta )\\ u_{2}&=\dot{r}_{2}=u_{c}-\dfrac{l\omega}{2}\hat{s}^{\bot}(\theta)\end{cases} \tag{13}\] where \(\hat{s}\hat{s}^{\bot}(\theta)=0\). The performance index (5) can then be rewritten as, \[J=\frac{1}{2}\int_{0}^{t_{f}}2\left\|u_{c}\right\|_{2}^{2}+\frac{l^{2}\omega^ {2}}{2}\mathrm{dt}. \tag{14}\] Thus the optimal trajectory has the center of mass traveling with constant translation and rotational velocity \[u_{c}=\frac{r_{c}(t_{f})-r_{c}(0)}{t_{f}},\,\omega=\frac{\theta(t_{f})-\theta (0)}{t_{f}}, \tag{15}\] which is obvious from Pontryagin's minimum principle. ### Multi-agent systems The results from the above two-agent system can be extended to a multi-agent system of \(N\) agents. 
**Lemma 5**: Planar rigidity constraints: For an \(N\)-agent planar system to be rigid, there must be at least \(2N-3\) constraint equations. **Proof.** A rigid planar system has three degrees of freedom, typically represented by the Cartesian coordinates and the heading angles. A point \(r_{i}\) in \(\mathbb{R}^{2}\) has two degrees of freedom, generally denoted by its Cartesian coordinates. Thus, for \(N\) points to form a rigid system, there must be at least \(2N-3\) constraints, leaving exactly \(3\) degrees of freedom. Let us define the system of \(N\) agents by their center of mass and a heading angle. \[\mathcal{P}(r_{c},\theta)=\begin{cases}r_{1}=r_{c}+l_{1}\hat{s}(\theta)\\ r_{i}=r_{c}+l_{i}\hat{s}(\alpha_{i}+\theta),\;i=2,\ldots,n-1\\ r_{N}=r_{c}+\sum_{i=1}^{n-1}(r_{c}-r_{i})\end{cases}. \tag{16}\] As shown in Fig. 2, the first point is at a fixed distance \(l_{1}\) from \(r_{c}\) and chosen to match the heading \(\theta\). The points \(r_{i}\) for \(i=2,\ldots,N-1\) are at a distance \(l_{i}\) from \(r_{c}\) and the angle between \(r_{i}\) and \(r_{1}\) is fixed with respect to the center of mass, \(\angle r_{i}r_{c}r_{1}=\alpha_{i}\). The last point \(r_{N}\) is chosen in such a way that the center of mass of the resulting system lies at \(r_{c}\). This representation automatically satisfies the \(2N-3\) constraints for a rigid system of \(N\) agents since \[\left\|r_{i}-r_{c}\right\|_{2}=l_{i},\;i=1,2,\ldots,N-1\] and \[\left\langle r_{i}-r_{c},r_{1}-r_{c}\right\rangle=d_{1}d_{i}\cos(\alpha_{i}), \;i=2,\ldots,N-1.\] Taking \(\alpha_{1}=0\) the velocity of the agents \(r_{i}\) can now be represented as, \[u_{i}=\dot{r}_{i} =u_{c}+l_{i}\omega\hat{s}^{\bot}(\alpha_{i}+\theta),\;i=1,\ldots,N-1\] \[u_{N}=\dot{r}_{N} =u_{c}-\sum_{i=1}^{N-1}l_{i}\omega\hat{s}^{\bot}(\alpha_{i}+\theta)\] The original performance metric can then be rewritten as \[J=\frac{1}{2}\int_{0}^{t_{f}}N\left\|u_{c}\right\|_{2}^{2}+\\ \omega^{2}(\sum_{i=1}^{N-1}l_{i}^{2}+\sum_{i=1}^{N-1}\sum_{j=1}^{ N-1}l_{i}l_{j}\cos(\alpha_{i}-\alpha_{j}))\mathrm{dt}.\] In other words, the total energy of the \(N\)-agents rigid system can be rewritten as the sum of translational energy and rotational energy of the center of mass. Thus we transform the original multi-agent optimal control problem to a single-agent (i.e., the center of mass) optimal control problem with another performance index function. Hence the solution from (15) holds for the multi-agent case too. ## 4 Numerical applications In this section, we provide three examples to illustrate our theoretical results. The first example is a two-agent system called the pipe model. Then the second example is a three-agent system. The last example is a four-agent linear system. ### Example 1: Pipe model with linear dynamics Imagine two people moving a pipe cooperatively from one position to another in a fixed time interval. The question is how each person's movement can make the energy cost minimal. Eqn. (4), (5), (6) had combined this problem in mathematics language. Here we will find the optimal solution with a specific example. The initial position is set as \([0,0]^{\top}\), \([0,1]^{\top}\) for two agents respectively. The terminal position of the two agents is respectively \([\frac{1}{2},0]^{\top}\) and \([1,\frac{\sqrt{3}}{2}]^{\top}\). The initial time is \(t_{0}=0\) and the terminal time is \(t_{f}=1\). Solving this problem optimally, we obtain the optimal trajectory for each agent and the CoM as shown in Fig. 3. 
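For this example the closed-form solution can be evaluated directly. The following minimal Python sketch (illustrative, not the authors' code) computes the constant CoM velocity and angular rate from Eqn. (15), recovers the agent trajectories from Eqns. (11)-(12), and evaluates the cost (14); choosing the shortest rotation between the two boundary headings is an assumption, justified here because it minimizes the rotational term. The printed value can be compared with the optimal cost reported next.

```python
import numpy as np

# Two-agent pipe example: boundary data as given in the text.
r1_0, r2_0 = np.array([0.0, 0.0]), np.array([0.0, 1.0])
r1_f, r2_f = np.array([0.5, 0.0]), np.array([1.0, np.sqrt(3) / 2])
tf = 1.0
l = np.linalg.norm(r1_0 - r2_0)

def com_and_angle(r1, r2):
    """Center of mass (10) and heading angle in r1 = rc + (l/2) s_hat(theta)."""
    rc = 0.5 * (r1 + r2)
    d = r1 - rc
    return rc, np.arctan2(d[1], d[0])

rc0, th0 = com_and_angle(r1_0, r2_0)
rcf, thf = com_and_angle(r1_f, r2_f)

# Optimal controls (15): constant translational and angular velocity of the CoM.
uc = (rcf - rc0) / tf
dth = (thf - th0 + np.pi) % (2 * np.pi) - np.pi  # shortest rotation (an assumption)
omega = dth / tf

# Agent trajectories recovered from (11)-(12).
t = np.linspace(0.0, tf, 101)
theta = th0 + omega * t
rc = rc0[None, :] + t[:, None] * uc[None, :]
s_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
r1 = rc + 0.5 * l * s_hat
r2 = rc - 0.5 * l * s_hat
print(np.allclose(r1[-1], r1_f), np.allclose(r2[-1], r2_f))  # boundary check

# Optimal cost from (14): J = (1/2) * (2*||uc||^2 + l^2 * omega^2 / 2) * tf.
J = 0.5 * (2 * np.dot(uc, uc) + 0.5 * (l * omega) ** 2) * tf
print(J)  # compare with the optimal cost reported in the text
```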
The trajectory of this two-mountaineers-pipe model includes a rotation and a sliding at the same time. The corresponding optimal cost is \(J=0.6355\). In this problem, the optimal control of the center of mass is indeed a constant value, which steers the center of mass from the initial position to the terminal position along a straight line, as shown by the red line in Fig. 3. Figure 2: Illustrating the definition of notations. Fig. 4 shows the trajectory of the CoM and the angle \(\theta\) of the mountaineer-pipe system with respect to the global coordinates, confirming that the CoM moves with a constant velocity and a constant angular velocity \(\omega\), as suggested in Eqn. (15). ### Example 2: three-agents with irregular shape We then consider a three-agent system. The terminal time is \(t_{f}=1\). Solving this problem, we obtain the optimal cost \(J=9.9131\). Fig. 5 and Fig. 6 show that the CoM moves with a constant speed and a constant angular velocity. ### Example 3: four-agent linear system For the four-agent linear system, the dimension of the reduced state space is 3 (i.e., \(r_{c}\in\mathbb{R}^{2}\) and \(\theta\in\mathbb{R}^{1}\)) and there are 3 controllers (i.e., \(u_{c}\in\mathbb{R}^{2}\) and \(\omega\in\mathbb{R}^{1}\)) without other constraints. We can thus solve the optimal control problem for the center of mass, then recover the optimal controller for each agent. ## 5 Conclusion In this work, we studied the optimal path planning of homogeneous multi-agent systems with rigid formation. We proved the equivalence of the original multi-agent optimal control problem to the new single-CoM optimal control problem. Finally, we provided simulation results to illustrate our theoretical findings. In the future, we can extend our work to heterogeneous multi-agent systems, in which agents may have different dynamics. Another extension lies in higher-dimensional (\(\geq 3\)) control. A further possible extension is to add constraints that do not make the entire formation rigid. Another interesting extension could be the same path-planning problem with minimized energy in a heterogeneous terrain, as proposed in Verriest (2011a), but with multiple agents.
2309.16519
AtomSurf : Surface Representation for Learning on Protein Structures
While there has been significant progress in evaluating and comparing different representations for learning on protein data, the role of surface-based learning approaches remains not well-understood. In particular, there is a lack of direct and fair benchmark comparison between the best available surface-based learning methods against alternative representations such as graphs. Moreover, the few existing surface-based approaches either use surface information in isolation or, at best, perform global pooling between surface and graph-based architectures. In this work, we fill this gap by first adapting a state-of-the-art surface encoder for protein learning tasks. We then perform a direct and fair comparison of the resulting method against alternative approaches within the Atom3D benchmark, highlighting the limitations of pure surface-based learning. Finally, we propose an integrated approach, which allows learned feature sharing between graphs and surface representations on the level of nodes and vertices $\textit{across all layers}$. We demonstrate that the resulting architecture achieves state-of-the-art results on all tasks in the Atom3D benchmark, while adhering to the strict benchmark protocol, as well as more broadly on binding site identification and binding pocket classification. Furthermore, we use coarsened surfaces and optimize our approach for efficiency, making our tool competitive in training and inference time with existing techniques. Our code and data can be found online: $\texttt{github.com/Vincentx15/atomsurf}$
Vincent Mallet, Souhaib Attaiki, Yangyang Miao, Bruno Correia, Maks Ovsjanikov
2023-09-28T15:25:17Z
http://arxiv.org/abs/2309.16519v3
# AtomSurf : Surface Representation for ###### Abstract Recent advancements in Cryo-EM and protein structure prediction algorithms have made large-scale protein structures accessible, paving the way for machine learning-based functional annotations. The field of geometric deep learning focuses on creating methods working on geometric data. An essential aspect of learning from protein structures is representing these structures as a geometric object (be it a grid, graph, or surface) and applying a learning method tailored to this representation. The performance of a given approach will then depend on both the representation and its corresponding learning method. In this paper, we investigate representing proteins as _3D mesh surfaces_ and incorporate them into an established representation benchmark [31]. Our first finding is that despite promising preliminary results, the surface representation alone does not seem competitive with 3D grids. Building on this, we introduce a synergistic approach, combining surface representations with graph-based methods, resulting in a general framework that incorporates both representations in learning. We show that using this combination, we are able to obtain state-of-the-art results across _all tested tasks_. Our code and data can be found online: [https://github.com/Vincentx15/atom2D](https://github.com/Vincentx15/atom2D). ## 1 Introduction Structural bioinformatics data is becoming available at an unprecedented pace. Advances in cryogenic Electron Microscopy (cryo-EM) in particular, have led to the production of evermore experimentally derived structures, as well as larger systems and better resolutions [9]. The development of AlphaFold [18] along with many subsequent works on protein structure prediction have made protein structures abundantly available, with more than a million high-quality predictions included in the Protein Data Bank (PDB) [2] and over 600 million in the ESM Metagenomic Atlas (ESMatlas) [21]. Machine learning looks promising to leverage this growing data to help advance the fields of structural bioinformatics and drug design. However, structural biology data displays certain additional geometric properties compared to images or sequences. For instance, biological systems lack a canonical orientation due to the insignificance of gravity. Addressing these challenges, the field of geometric deep learning has emerged [4], offering specialized methods to process data ranging from graphs [5; 19], point clouds [25; 34], surfaces [22; 24], _equivariant_ methods that respect a group symmetry of the data [6; 7], and more. A few pioneer works have applied geometric deep learning to structural biology data, using 3D convolutional networks [17], equivariant convolutional networks [36], surfaces [10], graphs [1] and equivariant discrete networks [29]. They were followed by several others, especially in the post-AlphaFold era and we refer the reader to [16] for a review of these methods. At the core of this endeavor lies a dual challenge: selecting a suitable mathematical representation of protein structures (see Figure 1) and devising an effective learning method compatible with the chosen representation. Although benchmarking learning methods is relatively straightforward, the optimal pairing of representation and learning method remains a complex task. The seminal work of Atom3d [31] addresses this question by proposing a set of nine benchmark tasks for three-dimensional molecular structures. 
They also compare representations with one another by using vanilla networks based on 3D convolutional, graph, and equivariant networks on these tasks and comparing their performance. This comparison does not, however, include the surface representation, despite promising results [10; 30]. The results were achieved with one of the first surface methods [22] and an ad-hoc method. Novel surface methods, such as _DiffusionNet_[27], are now well established. A concurrent work has successfully applied this method to protein data [35]. However, approaches based on the surface representation have followed the initial MaSIF paper validation [11], and hence have never been directly compared to other representations in the context of a single well-established benchmark. Beyond comparing the use of a single representation for proteins, a few papers mention using several representations simultaneously. In [14], the authors propose to use a graph representation with different edge types for sequential and geometric neighborhoods. In [23; 28], the edges of a surface mesh are also used, along with edges connecting residue nodes and mesh vertices. However, these methods still rely on the flexibility of the graph representation to accommodate different representations, losing the specific properties of surface methods. In our study, we bridge this gap by applying _DiffusionNet_ to the Atom3d benchmark, offering a rigorous comparison of the surface representation against other representations for the first time and introducing a batched implementation of _DiffusionNet_ along the way. Our explorations extend to marrying surface architectures with graph methods, evaluating multiple strategies, and culminating in the proposal of a hybrid representation technique that sets new standards in the benchmark. Figure 1: This figure illustrates the diverse mathematical objects used to represent a protein structure, ranging from sequences to atom-level and residue-level point clouds, graphs, and molecular surfaces. Effective machine learning for protein structures hinges on selecting the appropriate mathematical representation, followed by a compatible machine-learning technique. This dual-layered modeling approach underscores the complexity of extrapolating performance solely based on machine learning benchmarks in protein structure tasks. Methods In this section, we describe the methodology behind our proposed approach. We begin by explaining our 3D surface mesh modeling for protein data and the deep neural network architecture designed for its processing in 2.1. Subsequently, in 2.2, we introduce our novel hybrid representation, which synergistically integrates both graph and surface information within a singular architecture. ### Single Representation Learning Given a protein, denoted as \(\mathbf{P}\) and represented by its sequence of amino acids, we cast it in two representations: as a graph \(\mathcal{G}_{\mathbf{P}}=(\mathcal{V}_{g},\mathcal{E}_{g})\) and as a surface \(\mathcal{S}_{\mathbf{P}}\) characterized by vertices \(\mathcal{V}_{s}\). We start by deriving the protein surface using MSMS [26], gradually augmenting the vertex density until a minimum of 128 vertices is obtained. We then apply quadratic decimation [12] to reduce the size of the largest resulting meshes. With the mesh refined, we subsequently precompute the Laplacian operator along with its 128 eigenvectors using the cotangent Laplace-Beltrami decomposition [32]. In parallel, we also compute the graph representation. 
In the spirit of fairness, we closely adhere to the protocol set by Atom3d. Thus, the graph nodes are set to be all the atoms. For the edges, we consider as neighbors all pairs of atoms below a 4.5 Å radius cutoff. Finally, we use a one-hot vector encoding the atom type as initial node features. For methods relying on surfaces, we project the graph's initial node features onto the vertices using a Gaussian kernel. The representation of \(\mathbf{P}\) is illustrated in Figure 2. Figure 2: The MDM2 protein (pdb 1ycr) is represented as a graph (green) and as a surface (blue). For readability, the graph displayed here relies on chemical connectivity instead of neighborhood. As mentioned above, we base our evaluation of the utility of surface-based learning methods on the recent approach _DiffusionNet_[27], which has demonstrated state-of-the-art results on a wide range of tasks such as classification, segmentation, and shape matching, and is particularly robust to changes in the underlying mesh structure. Specifically, to create our benchmark surface method, we apply three or four _DiffusionNet_ blocks to the aforementioned protein surfaces. To facilitate learning, we introduce a batched implementation of _DiffusionNet_. Learning with batches proved critical for certain tasks, particularly due to enabling the use of BatchNorm [15]. The graph encoder adopted remains consistent with that of Atom3d: it is composed of five layers of alternated Graph Convolutional Networks (GCN) [19] and BatchNorm operations. ### Multi-Representation Learning In addition to evaluating the performance of different representations used in isolation, we also study the utility of combining multiple representations in a single coherent architecture. As we demonstrate below, this allows the resulting method to leverage the strengths of the respective representations, achieving state-of-the-art performance. Our approach combines surface and graph-based representations. For this, we first formulate a bipartite graph \(G=(\mathcal{V},E)\), with \(\mathcal{V}=\mathcal{V}_{g}\cup\mathcal{V}_{s}\) representing the nodes in the graph juxtaposed with the vertices on the surface. We calculate the geometric distance between node pairs spanning the two sets, establishing an edge within the bipartite graph if the distance falls below a predetermined threshold. A restrictive threshold might lead to a disjointed graph, whereas an overly lenient one could produce a biclique. Optionally, we also incorporate edge features based on distances. We designate surface and graph encoding blocks as \(f^{i}_{\theta}\) and \(g^{i}_{\theta}\) respectively. Hence, the outputs after \(i\) blocks are given as \(\mathcal{H}^{i}=\{h^{i}_{n},n\in\mathcal{V}\}\), where \(h^{i}_{n}=f^{i}_{\theta}(x^{i}_{n})\) for a node \(n\in\mathcal{V}_{s}\) (or \(h^{i}_{n}=g^{i}_{\theta}(x^{i}_{n})\) for \(n\in\mathcal{V}_{g}\)). Our general methodology incorporates message-passing neural networks, denoted \(\text{MP}^{i}_{\theta}\), over the bipartite graph. By employing distinct parameter sets, \(\theta^{i}_{sg}\) and \(\theta^{i}_{gs}\), the architecture handles messages traversing from the surface to the graph and vice versa. The primary node features of the graphs are represented as \(\mathcal{X}=\mathcal{X}_{g}\cup\mathcal{X}_{s}\), with the embeddings after \(i\) blocks labeled as \(\mathcal{X}^{i}\), such that \(\mathcal{X}^{i}=\text{MP}^{i}_{\theta}(\mathcal{H}^{i})\). In what follows, we drop the \(i\) superscript for simplicity and assume we are at a certain layer of the graph.
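To make the construction above concrete, the following minimal Python sketch (illustrative only, not the released implementation) assembles the three ingredients just described: the 4.5 Å radius graph over atoms, the Gaussian projection of one-hot atom features onto surface vertices, and the bipartite surface-graph edges. The kernel width, the bipartite distance threshold, and the toy coordinates are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_edges(coords, cutoff):
    """Undirected edge list of all pairs of points closer than `cutoff`."""
    tree = cKDTree(coords)
    return np.array(sorted(tree.query_pairs(r=cutoff)), dtype=int)

def gaussian_projection(atom_coords, atom_feats, vert_coords, sigma=1.0):
    """Project per-atom features onto surface vertices with a Gaussian kernel.
    The kernel width `sigma` and the normalization are illustrative choices."""
    d2 = np.sum((vert_coords[:, None, :] - atom_coords[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return w @ atom_feats

def bipartite_edges(atom_coords, vert_coords, threshold=4.0):
    """Surface-vertex <-> atom edges below a distance threshold (value assumed)."""
    tree = cKDTree(atom_coords)
    neighbors = tree.query_ball_point(vert_coords, r=threshold)
    edges = [(v, a) for v, atoms in enumerate(neighbors) for a in atoms]
    return np.array(edges, dtype=int)

# Toy usage with random coordinates standing in for a parsed structure.
rng = np.random.default_rng(0)
atoms = rng.uniform(0, 20, size=(50, 3))
verts = rng.uniform(0, 20, size=(200, 3))
onehot = np.eye(4)[rng.integers(0, 4, size=50)]   # 4 fictitious atom types
graph_e = radius_edges(atoms, cutoff=4.5)
vert_x = gaussian_projection(atoms, onehot, verts)
bip_e = bipartite_edges(atoms, verts)
```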
Different forms for the message passing above result in different mixed architectures. The model we propose is denoted bipartite, and it is based on the following equation : \[x_{n}=\alpha h_{n}+\text{MP}_{gs}(\mathcal{H})(n)+\text{MP}_{sg}(\mathcal{H})( n), \tag{1}\] with \(\alpha\in\mathbb{R}\). In our proposed setting, we set \(\alpha\) to a value of one, and use a Graph Attention Layer for the message-passing network. We discuss several ablations in the Results section. Apart from our primary method, we also propose two baseline strategies for using different representations during our learning process. In the sequential setting, we alternate surface and graph encoding. We can write this in two steps. In the first step, a surface encoding is performed, then the features are projected toward the graph using message passing: \(h^{g}_{n}=\text{MP}_{sg}(f_{\theta}(\mathcal{X}))(n)\), resulting in intermediate graph node embeddings that will be denoted as \(\mathcal{H}^{g}=\{h^{g}_{n},n\in\mathcal{V}_{g}\}\). Then, we propagate these node embeddings in the graph and obtain again surface embeddings using: \(\mathcal{X}=\text{MP}_{gs}(g_{\theta}(\mathcal{H}^{g})\). The architecture proposed in [28] falls into this setting with just one block. Alternatively, in the parallel approach, the output from message passing is concatenated with the original embeddings. A multi-layer perceptron (MLP) then processes the resulting vector, which can be formulated as \(x_{n}=\text{MLP}(h_{n}||+\text{MP}(\mathcal{H})(n)\). We illustrate these baseline strategies, as well as our proposed method in Figure 3. Figure 3: Different ways to leverage surface and graph information. Results ### Experimental Setup Our validation employs the Atom3d benchmark, focusing on its three tasks exclusive to proteins. We now briefly introduce these three tasks : Protein Interaction Prediction (PIP) :This task aims to predict which part of a protein interacts with which part of another. Framed as a classification task, pairs of residues from two proteins are labeled 'positive' if they interact and 'negative' if they don't. The dataset comprises 87k, 31k, and 15k training, validation, and test examples. Mutation Stability Prediction (MSP) :The objective here is to determine if a mutation enhances the stability of protein-protein interaction. Given a protein-protein interaction structure and its mutated version, this classification task labels the pair as 'positive' if it exhibits increased stability. This task includes 2864, 937, and 347 examples in each data split. For both PIP and MSP, the performance metric is the Area under the Receiver Operating Characteristic curve (AuROC). Protein Structure Ranking (PSR) :PSR is a regression task and aims to assign a quality score to predicted protein structures from the Critical Assessment of Methods of Protein Structure Prediction (CASP) [20] competition. The PSR data train, validation, and test splits hold 25.4k, 2.8k, and 16k systems respectively. The "global \(R_{S}\)" term represents the mean correlation across all systems and proposals. Meanwhile, "mean \(R_{S}\)" refers to the average correlation for each system. ### Performance of Surface Representation Our initial training is focused on a surface-centric approach. For the sake of fairness, we only use atom types as input and keep the same number of parameters as other methods of the Atom3d benchmark. 
We expect the surface representation to mostly perform well on tasks relative to interactions, and to underperform on the PSR task that also pertains to subtle changes inside the protein volume with no impact on the protein surface. We present the results in Table 1. Surprisingly, we observe that the surface method consistently falls short in its performance, even on the protein-protein interaction task. Such an observation challenges MaSIF's assertions, as it might turn out that a 3D CNN gives better results on their benchmark, which demonstrates the usefulness of benchmarks in general. Despite promising modeling of protein interfaces, which are intrinsically surface objects, the network's training revealed unstable and could not reach satisfactory test performance. We underline that all networks are trained in a vanilla setting, in particular, unlike MaSIF, our input features are minimalistic. Perhaps surface networks shine when supplemented with richer information. However, when all input features are equal, surface networks are not top performers. ### Synergy in Combined Representations In this section, we assess the performance of our proposed method, which has the particularity of combining different representations, i.e. surface and graph. We use the same experimental setup and \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & 3DCNN & Graph & ENN & Surface \\ \hline PIP & AuROC & **0.844** & 0.669 & - & 0.837 \\ \hline MSP & AuROC & 0.574 & **0.609** & 0.574 & 0.5 \\ \hline PSR & local R & **0.431** & 0.411 & - & 0.32 \\ & global R & **0.789** & 0.75 & - & 0.64 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different representations, including surface performance. The dashes in the equivariant methods’ column refer to the impossibility to use the method because of memory constraints. compare the use of previous state-of-the-art (SOTA) to models exclusively grounded in graphs or surfaces, and to ours. The results are presented in Table 2. Our method outperforms the state-of-the-art in all three tasks. Interestingly, its results surpass both graph-only and surface-only strategies, hinting at a synergy advantage between the two representations. Achieving this is noteworthy because, to maintain a consistent parameter count, both the graph and surface encoders are considerably condensed. An interesting result is that even in the case of PSR, where the use of surface does not intuitively seem relevant, the mixed model outperforms its surface counterpart with a comfortable margin. One possible interpretation for this result is _DiffusionNet_'s ability to perform long-range message passing. ### Further Analysis #### 3.4.1 Qualitative Results To visualize our PIP model's predictions, we aim to display patches of probability across the protein surface. The challenge is that the PIP task operates over pairs of residues in the graph and doesn't inherently provide a global interactability score for individual residues. Given a pair of protein chains A and B, let us assume, without loss of generality, that we want to plot the interaction site of protein B. We retrieve the nodes in protein A that belong to at least one positive pair and call the sets of such nodes \(\mathcal{N}_{A}^{\mathcal{P}}\). We define the interactability score of a node \(I(n)\) as the indicator function of this set over graph nodes. 
We compute our model predicted probability \(\hat{p}\) on all pairs and get \(\hat{p}(n_{1},n_{2})\in\mathcal{N}_{A}^{\mathcal{P}}\times\mathcal{N}_{B}\) and get a set of predictions. The final predicted interactability \(\hat{I}\) of our model for a node \(n_{b}\in\mathcal{N}_{B}\) is given as \(\hat{I}(n_{b})=\max_{n_{a}\in\mathcal{N}_{A}^{\mathcal{P}}}\hat{p}(n_{a},n_{b})\). For a visually appealing representation of the protein surface, we employ a straightforward message-passing (MP) mechanism without convolution on our standard bipartite graph. This aggregates the data using a distance-weighted average. We project both the ground truth and the predicted interactabilities following this procedure and present the results in Figure 4. \begin{table} \begin{tabular}{l l c c c c} \hline \hline & & SOTA & Graph & Surface & Ours \\ \hline PIP & AuROC & 0.844 & 0.669 & 0.837 & **0.876** \\ \hline MSP & AuROC & 0.609 & 0.609 & 0.5 & **0.707** \\ \hline PSR & local R & 0.432 & 0.411 & 0.32 & **0.452** \\ & global R & 0.796 & 0.75 & 0.64 & **0.83** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of our proposed approach with methods relying only on graphs or surfaces and with state-of-the-art (SOTA). Our method improves current best performance on all tasks. Figure 4: A qualitative view of our results. The top row is the ground truth with interaction sites in red. The bottom row displays our prediction. The two leftmost columns show the chains C and D of the protein with PDB id 2jbr under two rotated views of 180\({}^{\circ}\). On the right, the interaction between chains A and B of the system with PDB id 2ono is shown. From this illustration, it's evident that our model identifies binding sites on proteins. There are observable errors, such as misidentified residues on the lower part of 2jbr. However, we emphasize that given the pairwise formulation of the task, these inaccurately labeled residues may exhibit complementarity to one of the partner's interface residues. #### 3.4.2 Ablation Study Finally, we examine the impact of different design choices on our tasks. We already introduced in the methods three scenarios: parallel, sequential and bipartite. As mentioned above, the proposed mixed scenario HoloProt [28] falls into the _sequential_ framework with just one encoding block. Hence, we add this setting in our benchmark - with an enhanced protein encoder since we replace MeshCNN [13] with DiffusionNet - and refer to it as holo. Another major design choice is the choice of the Message Passing (MP) component. Hence, we explore the use of three possible message-passing networks. Motivated by the success of DGCNN [34], in our Att. setting, we discard the geometric notion of a neighborhood, allowing for potentially long-distance message passing. In this setting, all nodes from the graphs attend to all vertices from the surface. To deal with the incurred computational burden, we use the recent memory-efficient Flash Attention [8]. We also explore the use of more conventional Graph Convolutional Networks (GCN setting) [19] and the use of Graph ATtention networks [3; 33] for our final GAT setting. Finally, we also try using three or four blocks in our networks, always adjusting the network width to keep the number of parameters constant. Our results are presented in Table 3. Both the sequential and parallel strategies display underwhelming results. 
This is explained in part by their challenging optimization, with the employed message-passing likely being the root cause for this instability. Similarly, holo do not display a top performance, suggesting that the results in their paper could be enhanced by using the better-performing mixing strategies. Among the bipartite settings, Att. is consistently outperformed by the localized message passing networks, with the exception of the MSP task where one network has a good performance. The other scenarios give an overall close performance, with an edge for the GAT network, and more particularly the deeper one. This is especially true for the PSR task, where the surface methods alone were failing. As hypothesized above, the mixed approach could simply use the surface as a way to diffuse information efficiently and at a long distance. This could explain why in this scenario, an attentive mechanism results in a performance boost. ## 4 Conclusion In this paper, we investigated the impact of protein representation and its associated geometric deep-learning network on protein data outcomes, particularly emphasizing surface and mixed surface-graph methods. Although surface methods showed promise, they consistently failed to deliver top-tier performance. However, when integrated with the graph representation through our novel architecture, the results significantly improved, achieving state-of-the-art results. \begin{table} \begin{tabular}{l r r|r r r r} \hline \hline & & & MSP & PIP & \multicolumn{2}{c}{PSR} \\ Method & MP & Depth & AuROC & AuROC & local R & global R \\ \hline parallel & - & - & 0.55 & 0.5 & 0.39 & 0.77 \\ sequential & - & - & 0.609 & 0.855 & 0.319 & 0.71 \\ holo & - & - & 0.537 & 0.824 & 0.383 & 0.715 \\ \hline & Att. & 3 & 0.689 & 0.791 & 0.4 & 0.792 \\ & - & 4 & 0.648 & 0.793 & 0.388 & 0.799 \\ bipartite & GCN & 3 & 0.626 & 0.858 & 0.42 & 0.8 \\ & - & 4 & 0.697 & 0.868 & 0.421 & 0.797 \\ & GAT & 3 & **0.707** & 0.859 & 0.434 & **0.833** \\ & - & 4 & 0.646 & **0.876** & **0.452** & **0.83** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of our method. We compare different architecture designs, message passing methods and depth on our task. A detailed explanation of the different setting is available in text. Conversely, certain combined methods like the sequential and parallel approaches encountered challenges, emphasizing the need for careful planning when merging techniques. In short, while individual methods have their strengths, it's the combination of different approaches that seems most promising for future advances in protein studies. Our work highlights the importance of ongoing innovation and reevaluation in this ever-evolving field. Future directions include applying these mixed approaches to other tasks in structural bioinformatics and optimizing the pipeline to make surface-based methods faster and less memory-demanding. Another promising avenue is to try out more representations for learning on protein structures, especially point clouds, for which recent geometric transformers were designed with outstanding results. ## Acknowledgments and Disclosure of Funding V.M. is paid by DataIA and Sanofi. This work was performed using HPC resources from GENCI-IDRIS (Grant 2023-AD010613356). Parts of this work are supported by the ANR Chair AIGRETTE and the ERC Starting Grant No. 758800 (EXPROTEA).
2302.03500
Rotating Lifshitz-like black holes in F(R) gravity
Regarding a particular class of pure $F(R)$ gravity in three dimensions, we obtain an analytical rotating Lifshitz-like black hole solution. We first investigate some geometrical properties of the obtained solution that reduces to a charged rotating BTZ black hole in a special limit. Then, we study the optical features of such a black hole like the photon orbit and the energy emission rate and discuss how electric charge, angular momentum, and exponents affect them. In order to have an acceptable optical behavior, we should apply some constraints on the exponents. We continue our investigation with the study of the thermodynamic behavior of the solutions in the extended phase space and examine the validity of the first law of thermodynamics besides local thermal stability through using of heat capacity. Evaluating the existence of van der Waals-like phase transition, we obtain critical quantities and show how they change under the variation of black hole parameters. Finally, we construct a holographic heat engine of such a black hole and obtain its efficiency in a cycle. By comparing the obtained efficiency with the Carnot one, we examine the second law of thermodynamics.
Kh. Jafarzade, E. Rezaei, S. H. Hendi
2023-01-31T05:57:58Z
http://arxiv.org/abs/2302.03500v2
# Rotating Lifshitz-like black holes in F(R) gravity ###### Abstract One of the alternative theories of gravitation with a possible UV completion of general relativity is Horava-Lifshitz gravity. Regarding a particular class of pure \(F(R)\) gravity in three dimensions, we obtain an analytical rotating Lifshitz-like black hole solution. We first investigate some geometrical properties of the obtained solution, which reduces to a charged rotating BTZ black hole in a special limit. Then, we study the optical features of such a black hole, such as the photon orbit and the energy emission rate, and discuss how the electric charge, angular momentum, and exponents affect them. In order to have an acceptable optical behavior, some constraints must be imposed on the exponents. We continue our investigation with the study of the thermodynamic behavior of the solutions in the extended phase space and examine the validity of the first law of thermodynamics as well as local thermal stability by using the heat capacity. Evaluating the existence of a van der Waals-like phase transition, we obtain the critical quantities and show how they change under the variation of the black hole parameters. Finally, we construct a holographic heat engine from such a black hole and obtain its efficiency in a cycle. By comparing the obtained efficiency with the Carnot one, we examine the second law of thermodynamics. ## I Introduction \(F(R)\) theory of gravity is one of the most straightforward generalizations of Einstein's theory of general relativity (GR), in which the Ricci scalar of the GR Lagrangian is replaced with an arbitrary function of \(R\) [1; 2; 3]. Unlike Einstein's gravity, \(F(R)\) theory can explain the accelerated expansion as well as the structure formation of the Universe without considering dark sectors [4; 5; 6]. Other motivations to consider \(F(R)\) gravity include: i) this theory seems to be the only one that can avoid the long-known and fatal Ostrogradski instability [7]; ii) \(F(R)\) theories of gravitation can be compatible with Newtonian and post-Newtonian approximations [8; 9]; iii) some viable \(F(R)\) models have no ghosts (\(dF/dR>0\)), and the stability condition \(d^{2}F/dR^{2}\geq 0\) essentially amounts to guaranteeing that the scalaron is not a tachyon [10; 11]; iv) although \(F(R)\) theory is the simplest modification of the gravitational interaction to higher order known so far, its action is sufficiently general to encapsulate some of the basic characteristics of higher-order gravity. From the geometrical point of view, Einstein's gravity cannot describe a non-relativistic scale-invariant theory. To describe a non-relativistic scale-invariant system that enjoys Galilean symmetry, one can employ the Horava-Lifshitz approach [12; 13]. In this approach, an anisotropic scaling between the time and space directions is considered, i.e. \((t,x)\rightarrow(\lambda^{z}t,\lambda x)\), where the degree of anisotropy is measured by the dynamical exponent \(z\). Theories with \(z\neq 1\) are invariant under non-relativistic transformations [14], while for \(z=1\) the theory reduces to the relativistic, isotropically scale-invariant model corresponding to AdS spacetime. Systems with such a Lifshitz scaling appear frequently in the quantum/statistical field theory of condensed-matter physics and ultra-cold atomic gases (see [15] for more details). Motivated by the above, here we consider a Lifshitz-like geometry in a particular class of three-dimensional \(F(R)\) gravity. 
The study of three dimensional black holes known as BTZ (Banados-Teitelboim-Zanelli) solutions [16] has opened different aspects of physics in low dimensional spacetimes. The geometry of \((2+1)-\)dimensional manifold has various interesting properties such as the existence of specific relations between the BTZ black holes and an effective action in string theory [17; 18], developing our understanding of gravitational interaction in low dimensional manifolds [19], improvement in the quantum theory of gravity and gauge field theory [20; 21], the possibility of the existence of gravitational Aharonov Bohm effect in the noncommutative spacetime [22], and so on. Therefore, the study of three-dimensional black holes has attracted physicists not only in the context of Einstein's gravity, but also in modified theories such as massive gravity [23], dilaton gravity [24], gravity's rainbow [25] and also massive gravity's rainbow [26]. Besides, the static vacuum solutions of a Lifshitz model in \((2+1)-\)dimensions have been investigated in [27]. In addition, three dimensional Lifshitz-like charged black hole solutions in \(F(R)\) gravity have been also studied in [28]. In this paper, we introduce a new Lifshitz-like charged rotating black hole solution in three-dimensional \(F(R)\) gravity and investigate its optical and thermodynamical properties. From the theoretical viewpoint, one of the challenging subjects of black hole physics is the thermodynamic behavior of a typical black hole. The possible identification of a black hole as a thermodynamic object was first realized by Bardeen, Carter, and Hawking [29]. They clarified the laws of black hole mechanics and showed that these laws are corresponding to ordinary thermodynamics. Thereafter, various thermodynamic properties of black holes have been widely studied, and we now have a considerable understanding of the microscopic origin of these properties due to a pioneering work by Strominger and Vafa [30]. The investigation of black hole thermodynamics in anti-de Sitter (AdS) spacetime provided us with a deep insight into understanding the quantum nature of gravity which has been one of the open problems of physical communities [31; 32; 33; 34]. Besides, the existence of the cosmological constant can change both the geometry and thermodynamic properties of spacetime. Notably, the study in the context of black hole thermodynamics showed that the correspondence between black hole mechanics and ordinary thermodynamic systems is completed by considering the cosmological constant as a variable parameter [35]. To complete the thermodynamic properties of a system, it is inevitable to investigate the existence of phase transition and thermal stability. The investigation of phase transition has a crucial role in exploring the critical behavior of a system near its critical point. The black hole phase transition was first studied by Hawking and Page who demonstrated the existence of a certain phase transition (so-called Hawking-Page) between thermal AdS and Schwarzschild-AdS BH which corresponds to the confinement/deconfinement phase transition in the dual strongly coupled gauge theory [36]. The discovery of the first-order phase transition in the charged AdS black hole spacetime has gained a lot of attention in recent years [37; 38]. This transition displays a classical critical behavior and is superficially analogous to a van der Waals liquid-gas phase transition. 
Especially, considering the cosmological constant as a thermodynamical variable and working in the extended phase space led to finding the additional analogy between the black holes and the behavior of the van der Waals liquid/gas system [57; 58; 59; 60]. In this regard, some efforts have been made in the context of \(P-V\) criticality of black holes in modified theories of gravitation, such as Horava-Lifshitz gravity [39; 40; 41], Gauss-Bonnet gravity [42; 43; 44], Lovelock gravity [45; 46], dilaton gravity [47; 48], \(F(R)\) gravity [49; 50], massive gravity [51; 52; 53], gravity's rainbow [54; 55], and massive gravity's rainbow [56]. In addition, from the thermodynamics point of view, one may consider a black hole as a heat engine in the extended phase space. Indeed, the mechanical term PdV in the first law provides the possibility of extracting the mechanical work and consequently calculating the efficiency of a typical heat engine. The concept of the holographic heat engine was first proposed by Johnson in Ref.[61]. He creatively employed the charged AdS black hole as a heat engine working substance to construct a holographic heat engine and calculated the heat engine efficiency. Afterward, holographic heat engines were investigated in different black hole backgrounds, such as the rotating BHs [62; 63], Horava-Lifshitz BHs [64], Born-Infeld BHs [65], charged BTZ BHs [66], accelerating BHs [67], black holes in massive gravity [68] and gravity's rainbow [69]. This paper is organized in the following manner: In Sec. II, we consider three-dimensional Lifshitz-like background spacetime and obtain charged rotating black hole solutions in a special class of \(F(R)\) gravity. We also determine the null geodesics equations as well as the radius of the photon orbit, and explore the conditions to find an acceptable optical behavior. The energy emission rate for these black holes and the influence of the model's parameters on the emission of particles are investigated. In Sec. III, we study the thermodynamic properties of the corresponding black hole. With thermodynamic quantities in hand, we study the thermal stability of these black holes in the context of the canonical ensemble. We also investigate the critical behavior of the system and discuss how the parameters of the black holes affect critical quantities. The heat engine efficiency is the other interesting quantity that we will evaluate in an independent subsection. Finally, we present our conclusions in the last section. ## II Geometrical properties Our line of work in this paper includes investigating three-dimensional rotating Lifshitz-like black holes in \(F(R)\) gravity and studying their geometrical and thermodynamical properties. In this section, we first introduce the action of \(3-\)dimensional \(F(R)\) gravity, and then we obtain the metric function and confirm the existence of black holes. At the end of this section, we investigate optical features of the black hole including the photon orbit and energy emission rate, and examine the effects of electric charge, angular momentum and dynamical exponents on the optical properties. ### Constructing the solutions Here, we intend to construct three-dimensional rotating Lifshitz-like black holes in \(F(R)\) gravity and study their geometrical properties. To do so, we consider the following action \[S=\int_{\mathcal{M}}d^{2+1}x\sqrt{-g}F(R), \tag{1}\] in which \(\mathcal{M}\) is a \(3-\)dimensional spacetime and \(F(R)\) is an arbitrary function of Ricci scalar \(R\). 
Variation with respect to metric tensor, \(g_{\mu\nu}\), leads to the following field equation \[G_{\mu\nu}F_{R}-\frac{1}{2}g_{\mu\nu}[F(R)-RF_{R}]-[\nabla_{\mu}\nabla_{\nu}-g _{\mu\nu}\square]F_{R}=0, \tag{2}\] where \(G_{\mu\nu}\) is the Einstein tensor and \(F_{R}\equiv dF(R)/dR\). Now, we want to obtain the rotating Lifshitz-like solutions of Eq. (2). For this purpose, we assume that the metric has the following ansatz \[ds^{2}=-\left(\frac{r}{r_{0}}\right)^{z}f\left(r\right)dt^{2}+\frac{dr^{2}}{f \left(r\right)}+r^{2}\left(d\varphi-\frac{J\left(\frac{r}{r_{0}}\right)^{w}}{ 2r^{2}}dt\right)^{2}, \tag{3}\] where \(z\) and \(w\) play the role of dynamical exponents and \(r_{0}\) is an arbitrary (positive) length scale. Here, we study black hole solutions with constant Ricci scalar with the condition of \(F(R_{0})=F_{R}=0\), and therefore, it is easy to show that the equation of motion reduces to the following differential equation \[R=R_{0}=\frac{J^{2}}{2r^{4}}\left(\frac{r}{r_{0}}\right)^{2w-z}\left(\frac{w^{ 2}}{4}-w+1\right)-\frac{z^{2}f}{2r^{2}}-\left(2+\frac{3z}{2}\right)\frac{f^{ \prime}}{r}-f^{\prime\prime}, \tag{4}\] with the following metric function as the solution \[f\left(r\right)=-\Lambda r^{2}-\frac{m}{r^{\gamma}}+\frac{2^{-\frac{1}{4}}q^{ \frac{3}{2}}}{r^{\delta}}+\frac{(w-2)^{2}\left(\frac{r}{r_{0}}\right)^{(2w-z) }J^{2}}{8r^{2}\left[4w^{2}-(z+6)w+2\right]}, \tag{5}\] where \(m\) and \(q\) are two integration constants related to the total mass and electric charge of the black hole, respectively. It is worth mentioning that these two integration constants are set in such a way that for the case of \(w=z=0\), the solution reduces to the rotating BTZ black hole solution in the presence of a special model of the Power-Maxwell field. Here, \(\Lambda\) is a (positive/negative or zero) constant which depends on the sign/value of \(R_{0}\) as \[\Lambda=\frac{2R_{0}}{z^{2}+6z+12}. \tag{6}\] Besides, \(\gamma\) and \(\delta\) are defined as \[\gamma = \frac{1}{4}\left(3z+2-\sqrt{z^{2}+12z+4}\right), \tag{7}\] \[\delta = \frac{1}{4}\left(3z+2+\sqrt{z^{2}+12z+4}\right).\] ### Singularity and Event horizon With the exact solution in hand, we examine whether the obtained solution could be interpreted as a black hole. The interpretation of solution as a (singular) black hole has two criteria: I) Presence of singularity, II) Existence of an event horizon covering the singularity. In order to look for the singularity, we use the Kretschmann scalar as \[K = f^{\prime\prime^{2}}+\left(\frac{3zf^{\prime}}{r}+\frac{z\left(z-2 \right)f}{r^{2}}-\frac{3J^{2}\left(\frac{r}{r_{0}}\right)^{2w-z}\left(w-2 \right)^{2}}{4r^{2}}\right)f^{\prime\prime}+ \tag{8}\] \[\frac{\left(\frac{9z^{2}}{4}+2\right)}{r^{2}}f^{\prime^{2}}+ \left(\frac{3\left(z^{2}-2z+\frac{4}{3}\right)zf}{2r^{3}}-\frac{J^{2}\left( \frac{r}{r_{0}}\right)^{2w-z}\left(w-2\right)^{2}\left(\frac{9z}{4}-1\right)} {2r^{5}}\right)f^{\prime}+\] \[\frac{z^{2}\left(\frac{z^{2}}{4}-z+2\right)}{r^{4}}f^{2}-\frac{J ^{2}\left(\frac{r}{r_{0}}\right)^{2w-z}\left(w-2\right)^{2}\left[z^{2}-z\left( w+2\right)+w^{2}\right]}{2r^{6}}f+\frac{11J^{2}\left(\frac{r}{r_{0}}\right)^{4w-2z} \left(w-2\right)^{4}}{64r^{8}}.\] Inserting the metric function \(f(r)\) into Eq. 
(8), one finds \[K = \Upsilon_{1}r^{2(w-z-4)}+(\Upsilon_{2}r^{\gamma}+\Upsilon_{3}r^{ \gamma+\delta+2}+\Upsilon_{4}r^{\delta})r^{2w-2.5z-7}\] \[+\left[(\Upsilon_{5}r^{\gamma+2}+\Upsilon_{6})r^{\gamma+\delta}+( \Upsilon_{7}r^{2\gamma+4}+\Upsilon_{8}r^{\gamma+2}+\Upsilon_{9})r^{2\delta}+ \Upsilon_{10}r^{2\gamma}\right]r^{-3(z+2)},\] where \(\Upsilon_{i}\)'s are functions of \(z\), \(w\), \(J\), \(r_{0}\), \(\Lambda\), \(m\) and \(q\). According to our analysis, the scalar curvature diverges in the limit of \(r\to 0\) which confirms that there is a curvature singularity at \(r=0\). Figure 1: The admissible parameter space to have a physical solution for \(m=1\), \(r_{0}=1\), \(\Lambda=-0.5\) (dotted line), \(\Lambda=-1\) (dash-dotted line) and \(\Lambda=-1.5\) (dashed line). Now, we require to investigate the second condition (existence of a horizon(s)). Strictly speaking, roots of the metric function, \(g^{rr}=f(r)\), are where black hole's horizons are. The absence of a root for the metric function indicates that the solution is not a black hole but a naked singularity. Due to the fact that the metric function goes to \(+\infty\) for spatial infinity and also near the origin, one can find that such a function has a minimum (\(r_{min}\)). Depending the sign of \(f(r_{min})\), one may find a black hole with two horizons (\(f(r_{min})<0\)), an extreme black hole (\(f(r_{min})=0\)) or naked singularity (\(f(r_{min})>0\)). We examine the condition of extreme BH by studying the following criteria \[f(r_{min})=0=f^{\prime}(r_{min}). \tag{9}\] Equation (9) shows that the metric function has one degenerate horizon at \(r_{min}\) which corresponds to the radius of extremal black holes (the coincidence of the inner and outer BH horizons). Solving these two equations, simultaneously, leads to \[q = \left(\frac{\left[(2w-z-4)\Lambda r_{min}^{\gamma+2}+m(\gamma+2w- z-2)\right]r_{min}^{\delta}2^{\frac{1}{4}}}{r_{min}^{\gamma}\left(2w-z-2+ \delta\right)}\right)^{\frac{2}{3}}, \tag{10}\] \[J = \left(\frac{8\left(m(\gamma-\delta)-\Lambda(\delta+2)r_{min}^{ \gamma+2}\right)\left(w(z+6)-2-4w^{2}\right)}{r_{min}^{\gamma-2}\left(2w-z-2+ \delta\right)\left(w-2\right)^{2}\left(\frac{r_{min}}{r_{0}}\right)^{2w-z}} \right)^{\frac{1}{2}}.\] The resultant curve provides a lower bound for the existence of the black hole, and is depicted in Fig. 1 see dotted, dash-dotted and dashed lines and Fig. 2 by the dotted line, denoting the extremal limit. Below this line, black holes Figure 2: The admissible parameter space to have a physical solution for \(m=1\), \(r_{0}=1\) and \(\Lambda=-1\). (with two horizons) are present, whereas no black hole exists above it. As it was observed, the second condition for the existence of roots (horizons) for the metric function can be satisfied. In other words, the curvature singularity can be covered by an event horizon. So, the obtained solution can be interpreted as a black hole solution. A significant point regarding these figures is that the admissible parameter space is highly affected by values of the exponent \(w\). For case of \(z=0\), there is no physical solution for values of \(w=1\) and \(w=2\). For case of \(z\neq 0\), a physical solution cannot be observed for small values of the electric charge for \(w=1\) (see the middle panels of Fig. 2). For the same \(w\), the admissible parameter space decreases with increase of \(z\) (compare left panels of these two figures with each other). Also, from Fig. 
1, one can find that the cosmological constant has a decreasing effect on admissible parameter space. ### Optical features Here, we are going to investigate another geometric property of the black hole, photon orbit. To do so, we investigate the photon orbit radius of the black hole and explore the effect of the exponents \(z\) and \(w\) and parameters \(q\) and \(J\) on the radii size. At the first step, we employ the Hamilton-Jacobi equation for null curves as [70] \[\frac{\partial S}{\partial\lambda}=-\frac{1}{2}g^{\mu\nu}\frac{\partial S}{ \partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}}, \tag{11}\] where \(S\) and \(\lambda\) denote, respectively, the Jacobi action of the photon and the affine parameter of the null geodesic. Using known constants of the motion, one can separate the Jacobi function as follows \[S=-Et+L\phi+S_{r}(r), \tag{12}\] where \(E\) and \(L\) are, respectively, the energy and angular momentum of the photon in the direction of rotation axis. By inserting the Jacobi action (12) into the Hamilton-Jacobi equation (11), and using also the metric components, we acquire \[r^{2}f(r)\left(\frac{r}{r_{0}}\right)^{z}\left(\frac{dS_{r}}{dr}\right)^{2}- \frac{L^{2}J^{2}}{4r^{2}f(r)}\left(\frac{r}{r_{0}}\right)^{2w}+L^{2}\left( \frac{r}{r_{0}}\right)^{z}-\frac{E^{2}r^{2}}{f(r)}-\frac{ELJ}{f(r)}\left(\frac {r}{r_{0}}\right)^{w}=0. \tag{13}\] Considering \(S_{r}^{\prime}(r)=\frac{\sqrt{\mathcal{R}(r)}}{f(r)}\) and inserting it into Eq. (13), one finds \[\mathcal{R}(r)=\frac{f(r)}{r^{2}\left(\frac{r}{r_{0}}\right)^{z}}\left(\frac{ L^{2}J^{2}}{4r^{2}f(r)}\left(\frac{r}{r_{0}}\right)^{2w}-L^{2}\left(\frac{r}{r_{0} }\right)^{z}+\frac{E^{2}r^{2}}{f(r)}+\frac{ELJ}{f(r)}\left(\frac{r}{r_{0}} \right)^{w}\right). \tag{14}\] Thus, the photon propagation obeys the following three equations of motion, obtained from the variation of the Jacobi action with respect to the affine parameter \(\lambda\) \[\frac{dt}{d\lambda} = \frac{E}{f(r)\left(\frac{r}{r_{0}}\right)^{z}}-\frac{LJ}{2r^{2}f( r)}\left(\frac{r}{r_{0}}\right)^{w-z} \tag{15}\] \[\frac{dr}{d\lambda} = \sqrt{\mathcal{R}(r)},\] (16) \[\frac{d\varphi}{d\lambda} = \frac{L}{r^{2}}-\frac{LJ^{2}}{4r^{4}f(r)}\left(\frac{r}{r_{0}} \right)^{2w-z}-\frac{EJ}{2r^{2}f(r)}\left(\frac{r}{r_{0}}\right)^{w-z}. \tag{17}\] In order to investigate the photon trajectories, one usually expresses the radial geodesics in terms of the effective potential \(V_{\text{eff}}\) as \[\left(\frac{dr}{d\lambda}\right)^{2}+V_{\text{eff}}=0,\] with \[V_{\text{eff}}=\frac{f(r)}{r^{2}\left(\frac{r}{r_{0}}\right)^{z}}\left(L^{2} \left(\frac{r}{r_{0}}\right)^{z}-\frac{E^{2}r^{2}}{f(r)}-\frac{ELJ}{f(r)} \left(\frac{r}{r_{0}}\right)^{w}-\frac{L^{2}J^{2}}{4r^{2}f(r)}\left(\frac{r}{ r_{0}}\right)^{2w}\right). \tag{18}\] Now, we are in a position to obtain the photon critical circular orbit. Therefore, the following unstable conditions should be satisfied, simultaneously \[V_{\rm eff}(r_{ph})=0,\qquad\frac{dV_{\rm eff}(r_{ph})}{dr}=0,\quad\frac{d^{2}V_{ \rm eff}(r_{ph})}{dr^{2}}<0. \tag{19}\] To have an acceptable optical behavior, we need to examine the condition \(r_{e}<r_{ph}\) where \(r_{ph}\) and \(r_{e}\) are the radius of photon orbit and event horizon radius, respectively. Figures (3) and (4) display the admissible parameter space to have a real horizon and real photon orbit. From these two figures, one can find that an acceptable optical behavior cannot be observed for \(w-z<3\). 
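The horizon-existence criterion used above can also be checked with a few lines of numerics: one locates the minimum of the metric function (5) and, when \(f(r_{min})<0\), brackets the outer root. The sketch below is only an illustration (the helper names are ours, and the parameter values are chosen to match the first entry of Table I); scanning it over \(q\) and \(J\) gives the kind of admissible/forbidden regions shown in Figs. 1 and 2, while the photon-orbit admissibility of Figs. 3 and 4 additionally requires solving Eq. (19).

```python
# Minimal numeric check of the horizon-existence criterion of Sec. II.B
# (a sketch; parameters follow the first entry of Table I:
#  m=1, q=0.3, J=0.4, Lambda=-1, r0=1, z=0, w=3).
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def exponents(z):
    """gamma and delta of Eq. (7)."""
    root = np.sqrt(z**2 + 12*z + 4)
    return (3*z + 2 - root) / 4, (3*z + 2 + root) / 4

def f_metric(r, m=1.0, q=0.3, J=0.4, Lam=-1.0, r0=1.0, z=0.0, w=3.0):
    """Metric function f(r) of Eq. (5)."""
    gam, dlt = exponents(z)
    A2 = 4*w**2 - (z + 6)*w + 2
    return (-Lam*r**2 - m*r**(-gam) + 2**(-0.25)*q**1.5*r**(-dlt)
            + (w - 2)**2 * (r/r0)**(2*w - z) * J**2 / (8*r**2*A2))

def outer_horizon(**pars):
    """Return r_e when f(r) has roots, otherwise None (naked singularity)."""
    res = minimize_scalar(lambda r: f_metric(r, **pars),
                          bounds=(1e-3, 50.0), method="bounded")
    if res.fun > 0:        # f(r_min) > 0: the singularity is not covered
        return None
    return brentq(lambda r: f_metric(r, **pars), res.x, 1e3)

print(outer_horizon())     # about 0.92, cf. the q=0.3 column of Table I
```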
In fact, by comparing the left panels with the right panels of each figure, one can see that the photon orbit radius will be imaginary in the region where the event horizon is real. In other words, in a given region one cannot observe the real horizon and real photon orbit, simultaneously. As was already mentioned, in order to have an acceptable optical result, the condition \(w-z\geq 3\) should be satisfied which is quite evident in the tables I-III. Since relation (19) leads to a complicated equation, it is not possible to solve the equation analytically. Thus, we employ numerical methods to obtain the radius of photon orbit. In this regard, several values of the event horizon and the photon orbit radius are listed in tables I (\(z=0\)), II (\(z=1\)), and III (\(z=2\)). According to these tables, only for limited regions of the electric charge and cosmological constant, one can observe acceptable optical results. As one can see, the increase of \(q\) and \(|\Lambda|\) leads to an imaginary event horizon which is not a physical consequence. From these three tables, it can also be seen that the electric charge, angular momentum and the absolute value of the cosmological constant have decreasing effects on the event horizon and the radius of photon orbit. Taking a close look at the tables, one can notice that the effect of parameter \(r_{0}\) is opposite of that of electric charge and angular momentum. Studying the effect of the exponent \(w\) shows that increasing this Figure 3: The admissible parameter space to have a real horizon (left panels) and a real photon sphere (right panels) for \(m=1\), \(\Lambda=-1\), \(r_{0}=1\) and \(J=0.4\). parameter leads to increasing (decreasing) the event horizon (photon orbit). Comparing these three tables to each other, one can examine the effect of exponent \(z\). Our analysis shows that as the parameter \(z\) increases the size of the event horizon and photon orbit radius increase. ### Energy emission rate Now, we are interested in studying the effect of the black hole parameters on the emission of particles around the black hole. It has been known that at very high energies, the absorption cross-section for black holes oscillates around a limiting constant value \(\sigma_{lim}\) which is defined in the following form for an arbitrary spacetime dimension [71] \[\sigma_{lim}=\frac{\pi^{\frac{d-2}{2}}b_{c}^{d-2}}{\Gamma(\frac{d}{2})}, \tag{20}\] where the critical impact parameter \(b_{c}\) is given by \[b_{c}=\frac{r_{ph}}{\sqrt{f(r_{ph})}}. \tag{21}\] The energy emission rate for three-dimensional spacetime is obtained as [72] \[\frac{d^{2}E(\omega)}{dtd\omega}=\frac{4\pi^{2}\omega^{2}b_{c}}{e^{\frac{ \omega}{2}}-1}, \tag{22}\] Figure 4: The admissible parameter space to have a real horizon (left panels) and a real photon sphere (right panels) for \(m=1\), \(\Lambda=-1\), \(r_{0}=1\) and \(J=0.5\). where \(\omega\) is the emission frequency and \(T\) is the Hawking temperature. For the corresponding black hole, the Hawking temperature is given by \[T=\frac{\kappa}{2\pi}=\frac{f^{\prime}(r)}{4\pi}\left(\frac{r}{r_{0}}\right)^{ \frac{\kappa}{2}}\Bigg{|}_{r=r_{e}}, \tag{23}\] in which \(\kappa\) is the surface gravity. Using Eqs. 
(23) and (5), one can find \[T=\frac{\left(\frac{r_{e}}{r_{0}}\right)^{\frac{\kappa}{2}}}{4\pi}\left(-\frac {A_{1}\left(w-2\right)^{2}J^{2}\left(\frac{r_{e}}{r_{0}}\right)^{2w-z}}{32A_{2} r_{e}^{3}}-\Lambda\left(\gamma+2\right)r_{e}-\frac{2^{-\frac{1}{4}}q^{\frac{3}{2}} \left(\delta-\gamma\right)}{r_{e}^{\gamma+1}}\right), \tag{24}\] where \[A_{1} = 2\delta-2\gamma-8w+z+6, \tag{25}\] \[A_{2} = 2-zw-6w+4w^{2},\] To study the impact of black hole parameters on the energy emission rate, we have plotted Fig. 5. Figure 5(a) illustrates the influence of the electric charge on the emission rate of a Lifshitz rotating black hole. As it is clear, there exists a peak of the energy emission rate for the black hole which shifts to the low frequency with the increase of \(q\). From this figure, one can also find that this parameter has a decreasing contribution to the energy emission rate. This reveals the fact that the evaporation process would be slow for a black hole located in a powerful electric field. The effect of the angular momentum on the emission rate is depicted in Fig. 5(b), indicating that the impact of this parameter is opposite of that of the electric charge. Studying the impact of parameter \(r_{0}\) and cosmological constant, we observe that both parameters have a decreasing effect on this optical quantity similar to the electric charge (see \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline \(q\) & \(0.3\) & \(0.4\) & \(0.5\) & \(0.6\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(J=0.4\), \(r_{0}=1\), \(w=3\)) & \(0.9216\) & \(0.8684\) & \(0.7890\) & \(0.57+0.05I\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(J=0.4\), \(r_{0}=1\), \(w=3\)) & \(1.7455\) & \(1.6989\) & \(1.6385\) & \(1.5576\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\check{\check{\check{\chi}}}}}}}\) & \(\check{\check{\check{\check{\check{\chi}}}}}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\times\) \\ \hline \hline & & & & \\ \(J\) & \(0.3\) & \(0.4\) & \(0.5\) & \(0.6\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(q=0.3\), \(r_{0}=1\), \(w=3\)) & \(0.9217\) & \(0.9216\) & \(0.9213\) & \(0.9210\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(q=0.3\), \(r_{0}=1\), \(w=3\)) & \(1.9067\) & \(1.7455\) & \(1.6325\) & \(1.5474\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\check{\chi}}}}}\) & \(\check{\check{\check{\check{\check{\chi}}}}}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\check{\check{\check{\check{\chi}}}}\) \\ \hline \hline & & & & \\ \(r_{0}\) & \(0.7\) & \(0.9\) & \(1\) & \(1.1\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(q=0.3\), \(J=0.4\), \(w=3\)) & \(0.9204\) & \(0.9212\) & \(0.9216\) & \(0.9217\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(q=0.3\), \(J=0.4\), \(w=3\)) & \(1.4354\) & \(1.5883\) & \(1.7455\) & \(1.9057\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\check{\check{\check{\chi}}}\) & \(\check{\check{\check{\chi}}}\) \\ \hline \hline & & & & \\ \(\Lambda\) & \(-0.5\) & \(-0.8\) & \(-1\) & \(-1.2\) \\ \hline \(r_{e}\) (\(r_{0}=1\), \(q=0.3\), \(J=0.4\), \(w=3\)) & \(1.3367\) & \(1.0404\) & \(0.9216\) & \(0.83+0.06I\) \\ \hline \(r_{ph}\) (\(r_{0}=1\), \(q=0.3\), \(J=0.4\), \(w=3\)) & \(2.0506\) & \(1.8348\) & \(1.7455\) & \(1.6777\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\check{\check{\check{\chi}}}\) & \(\check{\check{\check{\chi}}}\) & \(\times\) \\ \hline \hline & & & & \\ \(w\) & \(4\) & \(5\) & \(6\) & \(7\) \\ \hline \(r_{e}\) (\(r_{0}=1\), \(q=0.2\), \(J=0.4\), 
\(\Lambda=-1.5\)) & \(0.77572\) & \(0.77576\) & \(0.77581\) & \(0.77584\) \\ \hline \(r_{ph}\) (\(r_{0}=1\), \(q=0.2\), \(J=0.4\), \(\Lambda=-1.5\)) & \(1.2488\) & \(1.1189\) & \(1.0566\) & \(1.0213\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\chi}}}}\) & \(\check{\check{\check{\chi}}}\) & \(\check{\check{\check{\chi}}}\) & \(\check{\check{\check{\chi}}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The event horizon (\(r_{e}\))and photon sphere radius (\(r_{ph}\)) for the variation of \(q\), \(J\), \(r_{0}\), \(w\) and \(\Lambda\) for \(m=1\) and \(z=0\). Figs. 5(c) and 5(d)). This reveals the fact that as the effect of these two parameters get weak the energy emission rate becomes significant. To study the effects of two exponents, we plot Figs. 5(e) and 5(f). From Fig. 5(e), it is clear that the parameter \(w\) has an increasing contribution to the energy emission rate, while the effect of the exponent \(z\) is to decrease it (see Fig. 5(f)). From what was expressed, one can find that the black hole has a longer lifetime when it rotates slowly or when it is located in a high curvature background or a powerful electric field. ## III Thermodynamic properties In this section, we would like to study the thermodynamic structure of the system. We first calculate the conserved and thermodynamics quantities of the black hole solution and examine the first law of thermodynamics. Then, we investigate phase transition and thermal stability of the black hole in the context of the canonical ensemble by calculating the heat capacity. We also examine the effects of the black hole parameters on phase transition and stability of the system and show that a certain relation between the exponents \(z\) and \(w\) should be satisfied in order to have a phase transition. We investigate the possibility of the existence of van der Waals-like phase transition and critical behavior for the solutions, and determine critical values. Finally, we construct a heat engine by taking into account this black hole as the working substance, and obtain the heat engine efficiency. Comparing the engine efficiency with Carnot efficiency, we investigate the criteria of having a consistent thermodynamic second law. ### Thermodynamic quantities and the first law In this subsection, we obtain the thermodynamical quantities of the solutions and check the validity of the first law of black hole thermodynamics. Before we go on, we introduce a new notion for our solutions. We consider the negative branch of cosmological constant to be a thermodynamical quantity known as pressure. Considering the cosmological \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline \(q\) & \(0.3\) & \(0.35\) & \(0.4\) & \(0.45\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(J=0.5\), \(r_{0}=1\), \(w=4\)) & \(0.9216\) & \(0.8932\) & \(0.8588\) & \(0.72+0.06I\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(J=0.5\), \(r_{0}=1\), \(w=4\)) & \(1.5012\) & \(1.4898\) & \(1.4768\) & \(1.4552\) \\ \hline \(r_{ph}>r_{e}\) & \(\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{ \check{\check{\check{\checkcheck{\ constant as a thermodynamical pressure and its conjugate quantity as a thermodynamical volume leads to a new insight into thermodynamical structure of the black holes, called extended phase space thermodynamics. From now on, we replace the cosmological constant with the pressure using the following relation [35] \[\Lambda\ =\ -8\pi P\] The finite mass is the first quantity that we would like to calculate. 
In the non-extended phase space (the cosmological constant is not allowed to vary), the total mass of the black holes is depicted as internal energy. Considering the variable cosmological constant, the role of the mass is changed to enthalpy. There are several methods for calculating the mass. Here, to calculate this property, we use ADM (Arnowitt-Deser-Misner) approach which yields \[M=\frac{m}{16\pi}r_{0}^{-\gamma}. \tag{26}\] Evaluating the metric function on horizon (\(f(r=r_{e})=0\)) and solving it with respect to geometrical mass result into the following relation for total mass of the black hole \[M=\left(\frac{(w-2)^{2}\left(\frac{r_{e}}{r_{0}}\right)^{2w-z}}{128\pi A_{2}r_ {e}^{2}}J^{2}+\frac{r_{e}^{2}P}{2}+\frac{2^{-\frac{1}{4}}q^{\frac{3}{2}}}{16 \pi r_{e}^{\delta}}\right)\left(\frac{r_{e}}{r_{0}}\right)^{\gamma}. \tag{27}\] The temperature of the black hole has already been obtained from Eq. (24). Replacing cosmological constant with pressure, it can be rewritten as \[T=\frac{\left(\frac{r_{e}}{r_{0}}\right)^{\frac{2}{2}}}{4\pi}\left(-\frac{A_{ 1}\left(w-2\right)^{2}J^{2}\left(\frac{r_{e}}{r_{0}}\right)^{2w-z}}{32A_{2}r_ {e}^{3}}+8\pi P\left(\gamma+2\right)r_{e}-\frac{2^{-\frac{1}{4}}q^{\frac{3}{2 }}\left(\delta-\gamma\right)}{r_{e}^{\gamma+1}}\right). \tag{28}\] \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline \(q\) & \(0.2\) & \(0.3\) & \(0.4\) & \(0.5\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(J=0.5\), \(r_{0}=1\), \(w=5\)) & \(0.9640\) & \(0.9248\) & \(0.8281\) & \(0.81+0.15I\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(J=0.5\), \(r_{0}=1\), \(w=5\)) & \(1.4011\) & \(1.3891\) & \(1.3733\) & \(1.3525\) \\ \hline \(r_{ph}>r_{e}\) & ✓ & ✓ & ✓ & \(\times\) \\ \hline \hline \(r_{ph}>r_{e}\) & ✓ & ✓ & ✓ & \(\times\) \\ \hline \hline \(r_{0}\) & \(0.9\) & \(1\) & \(1.1\) & \(1.2\) \\ \hline \(r_{e}\) (\(\Lambda=-1\), \(q=0.4\), \(J=0.5\), \(w=5\)) & \(0.8248\) & \(0.8281\) & \(0.8290\) & \(0.8300\) \\ \hline \(r_{ph}\) (\(\Lambda=-1\), \(q=0.4\), \(J=0.5\), \(w=5\)) & \(1.1836\) & \(1.3733\) & \(1.5908\) & \(1.8348\) \\ \hline \(r_{ph}>r_{e}\) & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \(r_{e}\) (\(r_{0}=1\), \(q=0.4\), \(J=0.5\), \(w=5\)) & \(0.9872\) & \(0.8417\) & \(0.8281\) & \(0.76+0.07I\) \\ \hline \(r_{ph}\) (\(r_{0}=1\), \(q=0.4\), \(J=0.5\), \(w=5\)) & \(1.4472\) & \(1.3738\) & \(1.3733\) & \(1.3726\) \\ \hline \hline \(r_{ph}>r_{e}\) & ✓ & ✓ & ✓ & \(\times\) \\ \hline \hline \(r_{e}\) (\(r_{0}=1\), \(q=0.2\), \(J=0.4\), \(\Lambda=-1.5\)) & \(0.81788\) & \(0.81794\) & \(0.81799\) & \(0.81800\) \\ \hline \(r_{ph}\) (\(r_{0}=1\), \(q=0.2\), \(J=0.4\), \(\Lambda=-1.5\)) & \(1.2768\) & \(1.1616\) & \(1.1013\) & \(1.0649\) \\ \hline \hline \(r_{ph}>r_{e}\) & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 3: The event horizon (\(r_{e}\)) and photon sphere radius (\(r_{ph}\)) for the variation of \(q\), \(J\), \(r_{0}\), \(w\) and \(\Lambda\) for \(m=1\) and \(z=2\). As the next step, we calculate the entropy of the black hole. The method for obtaining the entropy of black holes depends on gravities under consideration and topological structure of the black holes. In Einsteinian black holes, without higher curvature terms, the entropy could be obtained by using the area law. 
But, since our solutions are obtained in a class of \(F(R)\) gravity with \(F_{R}=0\), we suppose the validity of the first law of thermodynamics to calculate the entropy as \[\delta S=\frac{1}{T}\delta M, \tag{29}\] yielding \[S=\frac{\left(4+2\delta-\gamma\right)r_{e}\left(\frac{r_{e}}{r_{0}}\right)^{ \gamma-\frac{\gamma}{2}}}{24}. \tag{30}\] As we see, entropy depends only on the exponent \(z\) and there is no direct contributions of matter field. Besides, using the concept of enthalpy, one can obtain the volume of these black holes as \[V=\left(\frac{\partial H}{\partial P}\right)\Bigg{|}_{S,Q,J}=\frac{r_{e}^{2} \left(\frac{r_{e}}{r_{0}}\right)^{\gamma}}{2}. \tag{31}\] Evidently, there is a direct relationship between the total volume of the black holes and the horizon radius, indicating that one can use the horizon radius instead of using volume in calculations. Figure 5: Energy emission rate for the corresponding black hole with \(m=1\) and different values of black hole parameters. The total electric charge of the black hole can be obtained from the power Maxwell nonlinear electrodynamics as \[Q=\frac{3\sqrt{q}^{2\frac{3}{4}}}{32\pi}, \tag{32}\] and the electric potential is determined as \[U=r_{e}^{-\delta}q\left(\frac{r_{e}}{r_{0}}\right)^{\gamma}. \tag{33}\] To obtain the angular velocity, we take advantage of the standard equation as \[\Omega=-\frac{g_{t\varphi}}{g_{\varphi\varphi}}=\frac{J\left(\frac{r_{e}}{r_{0 }}\right)^{w}}{2r_{e}^{2}}. \tag{34}\] Considering the above equation and the first law, the angular momentum can be obtained as \[\xi=\frac{\left(\frac{r_{e}}{r_{0}}\right)^{w+\gamma-z}\left(w-2\right)^{2}}{32 \pi\left(2-zw-6w+4w^{2}\right)}J. \tag{35}\] It is easy to show that the first law of thermodynamics is as follows \[dM=TdS+UdQ+VdP+\Omega d\xi, \tag{36}\] Taking into account the scaling argument for our Lifshitz like solutions in the extended phase space, one can find the following Smarr relation holds \[\gamma M=\frac{\left(4-\delta+2\gamma\right)}{3}TS+\frac{\delta}{3}QU-2PV- \frac{\left(3w-\delta-\gamma\right)}{3}\Omega\xi. \tag{37}\] It is notable that for \(z=w=0\), Eq. (37) reduces to that of nonlinearly charged rotating BTZ black holes in which mass term has no scaling. ### Thermal stability and phase transition Heat capacity is one of the interesting thermodynamical quantities which could be used to extract two important properties of the solutions: I) Phase transition points. II) Thermal stability of the solutions. The signature of heat capacity determines the thermal stability/instability of the system. The positivity of heat capacity represents the black hole being in thermally stable state, while the opposite corresponds to thermally unstable case. As was already mentioned, the heat capacity can provide a mechanism to study the phase transition of the system. In fact, this thermodynamic quantity can be employed to investigate two distinctive points, bound and phase transition points. The bound point is where the sign of temperature is changed. In other words, the root of temperature (or heat capacity) indicates a limitation point, which separates physical solutions (positive temperature) from non-physical ones (negative temperature). The phase transition point may be related to the divergence points of \(C\). Indeed, the divergencies of the heat capacity are where the system goes under phase transition. The heat capacity is given by \[C_{P,Q,J}\ =\ T\left(\frac{\partial S}{\partial T}\right)_{P,Q,J}. \tag{38}\] Employing Eqs. 
(30), (28) and (38), one can find \[C_{P,Q,J}\ =\ \frac{\pi r_{e}\left(\frac{r_{e}}{r_{0}}\right)^{\gamma-\frac{ 3}{2}}}{B_{1}+B_{2}-B_{3}}((1+\gamma-\frac{z}{2})(\gamma-2\delta-4)B_{4}), \tag{39}\] in which \[B_{1} = -3A_{1}r_{e}^{1+\delta}\left(w-2\right)^{2}\left(4w-z-6\right) \left(\frac{r_{e}}{r_{0}}\right)^{2w-z}J^{2}\] \[B_{2} = 768\pi PA_{2}(2+z)(\gamma+2)r_{e}^{5+\delta}\] \[B_{3} = -48A_{2}(A_{1}+8w)(\delta-\gamma)^{2-\frac{1}{4}}q^{\frac{3}{2}} r_{+}^{3}\] \[B_{4} = \frac{A_{1}J^{2}\left(w-2\right)^{2}r_{e}^{1+\delta}}{2}\left( \frac{r_{e}}{r_{0}}\right)^{2w-z}-512r_{e}^{3}\left((\gamma+2)\pi Pr_{e}^{2+ \delta}-\frac{2^{-\frac{1}{4}}q^{\frac{3}{2}}A_{2}\left(\gamma+2\right)}{32} \right).\] The behavior of the heat capacity with respect to the horizon radius is addressed in Fig. 6. According to this figure, there are three possibilities for the heat capacity: Case I) One root: there are two phases of small and large black holes. The small one is not physical due to the negativity of the temperature. Whereas, the large black hole phase is thermally stable. Case II) One root and one divergency: in this case, there are three phases small, medium, and large black holes. The small black holes have negative temperatures and therefore, are not physical. The medium and large phases are separated by a divergence point. At this point, two phases of medium and large black holes are in equilibrium and go from one to the other via a critical process. Case III) One root and two divergencies: In this case, four distinguishable phases can be observed for black holes; very small, small, medium, and large black holes. For the very small black hole phase, the temperature is negative and so this phase is not a physical one. For small and large black hole phases, heat capacity is positive and these two phases are thermally stable. Medium black hole phases which are located between two divergencies have negative heat capacity. Therefore, this phase is not physical and accessible to the black holes. Figure 6 also displays the effects of different parameters on the heat capacity. In general, we can highlight the following effects of variation of different parameters on the heat capacity. i) According to Fig. 6(a), there is a critical value for the electric charge where for values smaller than it, two divergencies exist for the heat capacity. These two divergencies coincide with each other for this critical value of the electric charge. For the electric charges larger than this critical value, no divergency appears in the structure of the Figure 6: Heat capacity versus \(r_{e}\) for different values of parameters. heat capacity. ii) Figure 6(b) shows that there is a critical value for the angular momentum as well. The only difference is that for angular momentums smaller than this critical value, no divergence point is observed. iii) The effect of parameter \(r_{0}\) is depicted in Fig. 6(c), indicating that its contribution to the heat capacity is the same as the effect of the electric charge. In other words, for the values of \(r_{0}\) smaller than its critical value, two divergence points appear. While, no divergency observe for larger than the critical value. iv) Figure 6(d) illustrates the effect of the exponent \(w\) on the heat capacity. Taking a closer look at this figure, one can find that its effect is similar to the angular momentum. 
The difference is that for fixed parameters \(q\), \(J\), \(z\), \(r_{0}\) and \(\Lambda\), there is a specific value of \(w\) for which the heat capacity has only one divergency without any root (see the dashed curve of Fig. 6(d)). For this specific value, two phases exist: small and large black holes. Small black holes have a negative heat capacity and are thermally unstable, whereas large black holes are in a stable state due to the positivity of the heat capacity. For values of \(w\) between this specific value and the critical value, there are two divergencies of the heat capacity. v) To study the effect of the exponent \(z\) and the cosmological constant, we plot Figs. 6(e) and 6(f), indicating that their effects are similar to that of the electric charge. To have a more precise picture regarding the effects of the different parameters on the thermal stability/instability of the solutions, we have plotted Fig. 7. As we see, by decreasing the electric charge, the cosmological constant and the parameter \(r_{0}\), or by increasing the angular momentum, the stability region of the system decreases. Figure 7: Thermally stable and/or unstable regions of the black holes. In the case of the exponents, the effect of each one on the stability of the system depends on the value of the other. In fact, the values of \(w\) for which the system is thermally unstable depend strongly on the value of \(z\), and vice versa. ### van der Waals like behavior Here, we look for the possibility of a van der Waals-like phase transition for the black holes. We also extract the critical thermodynamic quantities and analyze the effects of the black hole parameters on the critical values. To do so, we need to determine the equation of state, which is obtained by writing the pressure as a function of the temperature and the thermodynamic volume. Since there is a direct relationship between the thermodynamic volume and the horizon radius, we use the horizon radius instead of the volume in the equation of state. From Eq. (28), the equation of state is obtained as \[P=\frac{A_{1}J^{2}\left(w-2\right)^{2}\left(\frac{r_{e}}{r_{0}}\right)^{2w-z} }{256A_{2}\pi r_{e}^{4}}+\frac{2^{-\frac{1}{4}}q^{\frac{3}{2}}\left(\delta- \gamma\right)}{8\pi\left(\gamma+2\right)r_{e}^{\delta+2}}+\frac{T}{2r_{e} \left(\frac{r_{e}}{r_{0}}\right)^{\frac{z}{2}}\left(\gamma+2\right)}. \tag{40}\] The behaviors of the pressure and the temperature under variation of the event horizon radius are depicted in Fig. 8. Figure 8: van der Waals-like phase diagrams for \(q=0.1\). Left panels: \(P-r_{e}\) diagram for \(T<T_{c}\) (continuous line), \(T=T_{c}\) (dash-dotted line), and \(T>T_{c}\) (dotted line). Middle panels: \(T-r_{e}\) diagram for \(P<P_{c}\) (continuous line), \(P=P_{c}\) (dash-dotted line), and \(P>P_{c}\) (dotted line). Right panels: \(G-T\) diagram for \(P<P_{c}\) (continuous line), \(P=P_{c}\) (dash-dotted line) and \(P>P_{c}\) (dotted line). Evidently, a van der Waals-like phase transition can be observed for these black holes for suitable choices of the different parameters. Indeed, the presence of subcritical isobars in the \(T-r_{e}\) diagrams and of subcritical isotherms in the \(P-r_{e}\) diagrams confirms the existence of a van der Waals-like phase transition. As is well known, the van der Waals fluid undergoes a first-order phase transition for temperatures smaller than the critical temperature (\(T<T_{c}\)), whereas at the critical temperature the phase transition is second order [35]. 
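To make the critical-point construction explicit, the equation of state (40) can be handed to a computer-algebra system: since (40) is linear in \(T\), the condition \((\partial P/\partial r_{e})_{T}=0\) fixes \(T\) as a function of \(r_{e}\), and the remaining condition \((\partial^{2}P/\partial r_{e}^{2})_{T}=0\) becomes a single equation for \(r_{c}\). The sketch below does this numerically for an illustrative parameter set; the symbol names are ours, and it is meant only to show the procedure, not to replace the closed forms given in Eq. (43) below.

```python
# Sketch: critical point of the equation of state (40) from the inflection
# conditions dP/dr = d^2P/dr^2 = 0 at fixed T (illustrative parameters).
import sympy as sp

r, T = sp.symbols('r T', positive=True)
q, J, r0 = sp.Rational(1, 10), sp.Rational(3, 5), sp.Rational(1, 5)
z, w = sp.Rational(1, 2), sp.Rational(3, 2)

gam = (3*z + 2 - sp.sqrt(z**2 + 12*z + 4)) / 4
dlt = (3*z + 2 + sp.sqrt(z**2 + 12*z + 4)) / 4
A1 = 2*dlt - 2*gam - 8*w + z + 6
A2 = 2 - z*w - 6*w + 4*w**2

# equation of state P(r_e, T), Eq. (40)
P = (A1*J**2*(w - 2)**2*(r/r0)**(2*w - z) / (256*A2*sp.pi*r**4)
     + 2**sp.Rational(-1, 4)*q**sp.Rational(3, 2)*(dlt - gam)
       / (8*sp.pi*(gam + 2)*r**(dlt + 2))
     + T / (2*r*(r/r0)**(z/2)*(gam + 2)))

T_of_r = sp.solve(sp.diff(P, r), T)[0]              # (dP/dr)_T = 0 solved for T
r_c = sp.nsolve(sp.diff(P, r, 2).subs(T, T_of_r), r, 1.3)
T_c = T_of_r.subs(r, r_c)
P_c = P.subs({r: r_c, T: T_c})
print([float(x) for x in (r_c, T_c, P_c, P_c*r_c/T_c)])
```

Eliminating \(T\) first avoids a two-dimensional root search and makes the numerics robust; the same routine can be rerun for other parameter sets to trace how the critical quantities move.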
The formation of the swallow-tail shape in the \(G-T\) diagram is another evidence of the first-order (small-large) phase transition. Figures 8(c) and 8(f), confirm the first-order phase transition for our black hole solutions. Our analysis shows that a first-order phase transition occurs for values of the exponents that satisfy the condition \(x_{1}<z+w<x_{2}\). It is worth mentioning that the values of \(x_{1}\) and \(x_{2}\) are highly governed by the parameter \(z\). Some values of \(x_{1}\) and \(x_{2}\) for different values of \(z\) are as follows \[z = 0.5 \longrightarrow 0.9<z+w<2.1 \tag{41}\] \[z = 1 \longrightarrow 1.35<z+w<2.75\] \[z = 1.5 \longrightarrow 1.8<z+w<3.35\] \[z = 2 \longrightarrow 2.3<z+w<3.85,\] A significant point here is that if \(w>z\) in the mentioned region (see the relation (41)), the Gibbs free energy is positive. Since the Gibbs free energy is obtained as \(G=H-ST\), this reveals the fact that the system is energy-dominated. But for \(z>w\), the Gibbs free energy is negative, indicating that the system is entropy-dominated (see Fig. 9). To obtain the critical values of thermodynamic quantities, we use the concept of the inflection point of isothermal \(P-V\) diagram given by \[\left(\frac{\partial P}{\partial r_{e}}\right)_{T}=0,\quad\left(\frac{\partial ^{2}P}{\partial r_{e}^{2}}\right)_{T}=0 \tag{42}\] It is a matter of calculation to show that the critical horizon radius (volume), temperature and pressure are given by \[r_{c} = \frac{32wA_{2}r_{0}^{2w-z}2^{-\frac{1}{2}}q^{\frac{3}{2}}\left( \delta+2\right)\left(-2\delta+z-2\right)\left(\gamma-\delta\right)}{A_{1}J^{2} \left(w-2\right)^{2}\left(A_{2}-2\right)(2w-z-4)},\] \[T_{c} = \frac{2^{-\frac{1}{4}}q^{\frac{3}{2}}\left(\delta+2\right)\left( \gamma-\delta\right)\left(\frac{r_{c}}{r_{0}}\right)^{\frac{5}{2}}r_{c}^{ \delta-2}}{2(2+z)\pi r_{c}^{3}}+\frac{A_{1}\left(2w-z-4\right)\left(w-2\right)^ {2}J^{2}\left(\frac{r_{c}}{r_{0}}\right)^{2w-\frac{\delta}{2}}}{32A_{2}(2+z) \pi r_{c}^{3}},\] \[P_{c} = \frac{A_{1}J^{2}\left(w-2\right)^{2}\left(A_{2}-2\right)\left( \frac{r_{c}}{r_{0}}\right)^{2w-z}}{256\pi wA_{2}r_{c}^{4}\left(\gamma+2\right) \left(2+z\right)}-\frac{2^{-\frac{1}{4}}q^{\frac{3}{2}}\left(\gamma-\delta \right)\left(z-2-2\gamma\right)}{8(\gamma+2)(2+z)\pi r_{c}^{2+\delta}}. \tag{43}\] Table 4 shows how critical quantities and universal critical ratio \(\left(\frac{P_{c}r_{c}}{T_{c}}\right)\) change under variation of black hole parameters. From this table, one can find that as the electric charge increases, the critical pressure, temperature, and universal critical ratio decrease, whereas the critical horizon radius (volume) increases. Regarding the effect of angular momentum on the critical quantities, one can see that its effect is opposite of that of the electric charge. Studying the effects of exponents and parameter \(r_{0}\) indicates that their contribution to critical values is the same as the electric charge. In other words, the critical volume is an increasing function of these three parameters, whereas the critical pressure, temperature, and universal critical ratio are decreasing functions of them. ### Heat Engine As the final step, we would like to consider the Lifshitz rotating black hole as a heat engine and discuss its efficiency. A heat engine is a physical system that works between two hot and cold reservoirs and its main role is transferring heat from the hot reservoir to the cold one. The total mechanical work done, by the First Law, is \(W=Q_{H}\) - \(Q_{C}\). 
So, the efficiency of the heat engine is \[\eta=\frac{W}{Q_{H}}=1-\frac{Q_{C}}{Q_{H}}. \tag{44}\] In order to calculate the efficiency, one may use the heat capacity. According to Eqs. (30) and (31), the entropy and thermodynamic volume are related to the horizon radius. So, these two quantities are dependent to each other \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline \(q\) & 0.1 & 0.12 & 0.14 & 0.16 \\ \hline \(P_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(r_{0}=0.2\)) & \(27\times 10^{-5}\) & \(22\times 10^{-5}\) & \(19\times 10^{-5}\) & \(17\times 10^{-5}\) \\ \hline \(T_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(r_{0}=0.2\)) & 0.032 & 0.031 & 0.03 & 0.029 \\ \hline \(r_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(r_{0}=0.2\)) & 1.77 & 2.01 & 2.24 & 2.45 \\ \hline \(P_{c}r_{c}/T_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(r_{0}=0.2\)) & 0.0153 & 0.0148 & 0.0144 & 0.0141 \\ \hline \hline & & & & & \\ \(J\) & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline \(P_{c}\) (\(z=0.5\), \(w=1.5\), \(q=0.1\), \(r_{0}=0.2\)) & \(14\times 10^{-5}\) & \(27\times 10^{-5}\) & \(46\times 10^{-5}\) & \(73\times 10^{-5}\) \\ \hline \(T_{c}\) (\(z=0.5\), \(w=1.5\), \(q=0.1\), \(r_{0}=0.2\)) & 0.021 & 0.032 & 0.045 & 0.060 \\ \hline \(r_{c}\) (\(z=0.5\), \(w=1.5\), \(q=0.1\), \(r_{0}=0.2\)) & 2.10 & 1.77 & 1.54 & 2.45 \\ \hline \(P_{c}r_{c}/T_{c}\) (\(z=0.5\), \(w=1.5\), \(q=0.1\), \(r_{0}=0.2\)) & 0.0147 & 0.0153 & 0.0159 & 0.0163 \\ \hline \hline & & & & & \\ \(r_{0}\) & 0.1 & 0.15 & 0.2 & 0.25 \\ \hline \(P_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(q=0.1\)) & \(51\times 10^{-4}\) & \(93\times 10^{-5}\) & \(27\times 10^{-5}\) & \(11\times 10^{-5}\) \\ \hline \(T_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(q=0.1\)) & 0.263 & 0.076 & 0.032 & 0.016 \\ \hline \(r_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(q=0.1\)) & 0.801 & 1.27 & 1.77 & 2.29 \\ \hline \(P_{c}r_{c}/T_{c}\) (\(z=0.5\), \(w=1.5\), \(J=0.6\), \(q=0.1\)) & 0.0157 & 0.0155 & 0.0153 & 0.0152 \\ \hline \hline & & & & & \\ \(w\) & 1 & 1.25 & 1.5 & 1.6 \\ \hline \(P_{c}\) (\(z=0.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.038 & \(31\times 10^{-4}\) & \(27\times 10^{-5}\) & \(19\times 10^{-6}\) \\ \hline \(T_{c}\) (\(z=0.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.188 & 0.059 & 0.032 & 0.026 \\ \hline \(r_{c}\) (\(z=0.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.342 & 0.787 & 1.77 & 3.87 \\ \hline \(P_{c}r_{c}/T_{c}\) (\(z=0.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.0695 & 0.0423 & 0.0153 & 0.0029 \\ \hline \hline & & & & & \\ \(z\) & 0.5 & 1 & 1.5 & 2 \\ \hline \(P_{c}\) (\(w=1.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & \(27\times 10^{-5}\) & \(91\times 10^{-6}\) & \(16\times 10^{-6}\) & \(19\times 10^{-7}\) \\ \hline \(T_{c}\) (\(w=1.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.032 & 0.016 & 0.007 & 0.002 \\ \hline \(r_{c}\) (\(w=1.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 1.77 & 2.07 & 2.64 & 3.45 \\ \hline \(P_{c}r_{c}/T_{c}\) (\(w=1.5\), \(J=0.6\), \(q=0.1\), \(r_{0}=0.2\)) & 0.0153 & 0.0113 & 0.0057 & 0.0023 \\ \hline \hline \end{tabular} \end{table} Table 4: Critical values for the variation of \(q\), \(J\), \(r_{0}\), \(w\) and \(z\). for this kind of solution. This shows that the specific heat at constant volume vanishes \(C_{V}=0\) which is the "isochore equals adiabat" result [61]. In this case, the specific heat at constant pressure is not zero. An explicit expression for \(C_{P}\) would suggest that we can consider a rectangular cycle such as Fig. 
10, involving two isobars (paths of \(1\to 2\) and \(3\to 4\)) and two isochores/adiabats (paths of \(2\to 3\) and \(4\to 1\)). Figure 10 shows a schematic of the proposed cycle. We can calculate the work done along the heat cycle as \[W = \oint PdV=W_{1\longrightarrow 2}+W_{2\longrightarrow 3}+W_{3 \longrightarrow 4}+W_{4\longrightarrow 1} \tag{45}\] \[= W_{1\longrightarrow 2}+W_{3\longrightarrow 4}=P_{1}\left(V_{2}-V_ {1}\right)+P_{4}\left(V_{4}-V_{3}\right).\] The upper isobar will give the net inflow of heat (\(Q_{H}\)) as follows \[Q_{H}=\int_{T_{1}}^{T_{2}}C_{p}\left(P_{1},T\right)dT=\int_{r_{+1}}^{r_{+2}}C_ {p}\left(P_{1},T\right)\frac{\partial T}{\partial r}dr=Q_{H2}-Q_{H1}. \tag{46}\] Taking advantage of Eqs. (28) and (39), the input heat flow to the cycle is \[Q_{H} = \frac{A_{1}(w-2)^{2}J^{2}(1+\gamma-\frac{z}{2})\left(2\delta+4- \gamma\right)\left(\frac{r_{+}}{r_{0}}\right)^{2w-z+\gamma}}{3072\pi A_{2} \left(2w-z+\gamma-2\right)r_{e}^{2}}\Bigg{|}_{r_{e1}}^{r_{e2}} \tag{47}\] \[+\frac{\left(1+\gamma-\frac{z}{2}\right)\left(\gamma-4-2\delta \right)(2^{\frac{4}{3}}q^{\frac{3}{2}}+16P_{1}\pi r_{+}^{\delta+2})(\frac{r_{ +}}{r_{0}})^{\gamma}}{192r_{e}^{\delta}}\Bigg{|}_{r_{e1}}^{r_{e2}}.\] Using Eq. (46) and (47), one can obtain the engine efficiency as \[\eta = \frac{W}{Q_{H}}=\frac{64\pi D_{1}A_{2}r_{e2}^{2}r_{e1}^{2}}{-(D_{2}+D_{3} )\left((6+z)^{2}-(2\delta-2\gamma)^{2}\right)}, \tag{48}\] in which \[D_{1} = \frac{r_{e1}^{2+\gamma}-r_{e2}^{2+\gamma}}{r_{0}^{\gamma}}\] \[D_{2} = \left(4A_{2}P_{1}\pi r_{e1}^{\delta+2}+\frac{J^{2}(w-2)^{2}r_{e1} ^{2w-z+\gamma-1}}{32r_{0}^{2w-z}}+\frac{1}{4}A_{2}2^{-\frac{1}{4}}q^{\frac{3}{ 2}}\right)r_{e2}^{\delta}\left(\frac{r_{e1}}{r_{0}}\right)^{\gamma}\] \[D_{3} = \left(4A_{2}P_{1}\pi r_{e2}^{\delta+2}+\frac{J^{2}(w-2)^{2}r_{e2} ^{2w-z+\gamma-1}}{32r_{0}^{2w-z}}+\frac{1}{4}A_{2}2^{-\frac{1}{4}}q^{\frac{3}{ 2}}\right)r_{e1}^{\delta}\left(\frac{r_{e2}}{r_{0}}\right)^{\gamma}.\] Figure 10: Our engine cycle Among different classical cycles, the Carnot cycle is one of the interesting simplest cycle that can be considered. The efficiency of this cycle is the maximum efficiency of the heat engines in such a way that any higher efficiency would violate the second law of thermodynamics. To calculate the Carnot efficiency, we consider the \(T_{H}\) and \(T_{C}\) in our cycle to correspond to \(T_{2}\) and \(T_{4}\), respectively. So, this efficiency is \[\eta_{c} = 1-\frac{T_{C}}{T_{H}}=1-\frac{X_{1}(\frac{r_{e1}}{r_{0}})^{\frac{ \eta}{2}}}{X_{2}(\frac{r_{e2}}{r_{0}})^{\frac{\eta}{2}}} \tag{49}\] where \[X_{1} = -\frac{A_{1}J^{2}(w-2)^{2}(\frac{r_{e1}}{r_{0}})^{2w-z}}{32A_{2}r_ {e1}^{3}}+8\pi P_{4}r_{e1}\left(\gamma+2\right)-\frac{2^{-\frac{1}{2}}q^{\frac {3}{2}}(\delta-\gamma)}{r_{e1}^{\delta+1}}\] \[X_{2} = -\frac{A_{1}J^{2}(w-2)^{2}(\frac{r_{e2}}{r_{0}})^{2w-z}}{32A_{2}r _{e2}^{3}}+8\pi P_{4}r_{e2}\left(\gamma+2\right)-\frac{2^{-\frac{1}{2}}q^{\frac {3}{2}}(\delta-\gamma)}{r_{e2}^{\delta+1}},\] The behavior of the heat engine efficiency \(\eta\) and the ratio \(\frac{\eta}{\eta c}\) under variation of black hole parameters is depicted in Figs. (11)-(13). In Fig. 11, we examine the influence of electric charge and angular momentum on \(\eta\) and the ratio \(\frac{\eta}{\eta_{C}}\) for the fixed exponents, parameter \(r_{0}\) and pressures \(P_{1}\), \(P_{4}\). As one can check, from the up panels of Fig. 11, both \(\eta\) and the ratio \(\frac{\eta}{\eta_{C}}\) are decreasing functions of the angular momentum. 
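Before turning to the detailed parameter dependence, it may help to see the cycle evaluated end to end. The sketch below uses illustrative corner values \(P_{1}>P_{4}\) and \(r_{e1}<r_{e2}\) (not taken from the figures) together with the entropy (30), the volume (31) and the equation of state (40) solved for \(T\); it computes \(W\) from Eq. (45), \(Q_{H}\) by integrating \(T\,dS\) along the upper isobar as in Eq. (46), and compares \(\eta=W/Q_{H}\) with the Carnot value built from the corner temperatures as in Eq. (49). The helper names are ours.

```python
# Sketch of the rectangular heat-engine cycle of Sec. III.D
# (illustrative parameters and corner values; not the figure data).
import numpy as np
from scipy.integrate import quad

z, w, q, J, r0 = 0.5, 1.5, 0.1, 0.6, 0.2
root = np.sqrt(z**2 + 12*z + 4)
gam, dlt = (3*z + 2 - root)/4, (3*z + 2 + root)/4
A1 = 2*dlt - 2*gam - 8*w + z + 6
A2 = 2 - z*w - 6*w + 4*w**2

def T_of(r, P):
    """Temperature from the equation of state (40) solved for T."""
    rest = (A1*J**2*(w - 2)**2*(r/r0)**(2*w - z)/(256*A2*np.pi*r**4)
            + 2**(-0.25)*q**1.5*(dlt - gam)/(8*np.pi*(gam + 2)*r**(dlt + 2)))
    return 2*r*(r/r0)**(z/2)*(gam + 2)*(P - rest)

def S_of(r):                      # entropy, Eq. (30)
    return (4 + 2*dlt - gam)*r*(r/r0)**(gam - z/2)/24

def V_of(r):                      # thermodynamic volume, Eq. (31)
    return r**2*(r/r0)**gam/2

def dSdr(r, h=1e-6):              # numerical derivative of the entropy
    return (S_of(r + h) - S_of(r - h))/(2*h)

# cycle corners: isobars at P1 > P4, isochores/adiabats at r1 < r2
P1, P4, r1, r2 = 2e-3, 1e-3, 1.0, 3.0

W = (P1 - P4)*(V_of(r2) - V_of(r1))                    # Eq. (45)
QH = quad(lambda r: T_of(r, P1)*dSdr(r), r1, r2)[0]    # Eq. (46)
eta = W/QH                                             # Eq. (44)
eta_carnot = 1 - T_of(r1, P4)/T_of(r2, P1)             # Eq. (49), corners 4 and 2
print(f"eta = {eta:.3f},  eta_Carnot = {eta_carnot:.3f}")
```

For these inputs the routine returns \(\eta\approx 0.16\) against \(\eta_{C}\approx 0.30\), so the bound \(\eta<\eta_{C}\) is respected, in line with the behavior reported below for \(w>0.5\).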
For large values of \(J\), the efficiency monotonically increases as the volume \(V_{2}\) grows (see the bold dotted line of Fig. 11(a)). This means that for rapidly rotating charged black holes, increasing the volume difference between the smaller black hole (\(V_{1}\)) and the larger black hole (\(V_{2}\)) makes the heat engine more efficient. For small angular momentum, the efficiency curve has a local minimum, indicating that there exists a specific value of the volume \(V_{2}\) at which the black hole heat engine works at its lowest efficiency (see the bold continuous line of Fig. 11(a)). In the absence of the electric charge, for all values of the angular momentum the heat engine efficiency monotonically increases with the growth of \(V_{2}\) and then tends to a constant value (see the thin lines of Fig. 11(a)). Taking a close look at Fig. 11(a), one can find that charged rotating black holes have a larger efficiency than their uncharged counterparts (compare the bold and thin lines). Only for a large volume difference \(\Delta V=V_{2}-V_{1}\) does their efficiency become smaller than that of a rapidly rotating black hole (compare the bold-dotted and thin-dotted lines in Fig. 11(a)). The lower panels of Fig. 11 display the effect of the electric charge on \(\eta\) and the ratio \(\frac{\eta}{\eta_{C}}\). As we see, although this parameter has an increasing contribution to the efficiency, its effect is to decrease the ratio \(\frac{\eta}{\eta_{C}}\). For non-rotating black holes, all curves first decrease rapidly, and then the efficiency reaches a constant value beyond a certain value of the volume \(V_{2}\) (see the thin lines of Fig. 11(c)). For slowly rotating charged black holes, the efficiency gradually increases as the volume \(V_{2}\) increases and then tends to a constant value (see the bold continuous line of Fig. 11(c)). For rapidly rotating charged black holes, in contrast, the efficiency first decreases to a minimum as \(V_{2}\) increases, then gradually grows as the volume increases further, and finally reaches a constant value in the limit where \(V_{2}\) goes to infinity (see the bold dotted line of Fig. 11(c)). Fig. 11(c) also shows that non-rotating black holes have a larger efficiency compared to the charged rotating black holes (compare the bold lines to the thin lines). Comparing Fig. 11(a) to Fig. 11(c), one can find that the variation of \(J\) has a stronger effect on the efficiency than that of the electric charge. Figure 12 shows how \(\eta\) and the ratio \(\frac{\eta}{\eta_{C}}\) are affected by the exponents. According to the upper panels of this figure, the exponent \(z\) has an increasing effect on the efficiency and a decreasing effect on the ratio \(\frac{\eta}{\eta_{C}}\). For small values of \(z\), the efficiency monotonically increases as the volume \(V_{2}\) grows and then tends to a saturation value (see the continuous line of Fig. 12(a)), while for large values the opposite behavior is observed. Figure 12(c) displays the influence of the exponent \(w\) on the efficiency. Our findings indicate that for \(w<0.5\) the efficiency increases with increasing \(w\), while for \(w>0.5\) increasing \(w\) decreases the heat engine efficiency. Regarding the effect of this parameter on the ratio \(\frac{\eta}{\eta_{C}}\), in both regions (\(w<0.5\) and \(w>0.5\)) one finds that increasing this parameter increases the ratio \(\frac{\eta}{\eta_{C}}\) (see Fig. 12(d)). Taking a close look at the right panels of Fig. 12, one can notice that in the region of volume \(V_{2}\) near \(V_{1}\), and for small (large) values of \(w\) (\(z\)), the efficiency becomes larger than the Carnot efficiency, which violates the second law of thermodynamics. This shows that small (large) values of \(z\) (\(w\)) should be considered to observe an acceptable efficiency of the system. The effects of the parameter \(r_{0}\) and of the pressure on \(\eta\) and the ratio \(\frac{\eta}{\eta_{C}}\) are reflected in Fig. 13. We find that the parameter \(r_{0}\) has an increasing effect on both \(\eta\) and \(\frac{\eta}{\eta_{C}}\). For small (large) values of \(r_{0}\), the efficiency gradually grows (decreases) with the increase of \(V_{2}\) (see Fig. 13(a)). Regarding the pressure, the lower panels of Fig. 13 show that increasing the pressure difference \(\Delta P\) increases both \(\eta\) and \(\frac{\eta}{\eta_{C}}\). Looking at the right panels of Fig. 13, one can find that the second law of thermodynamics is satisfied for all values of these two parameters. ## IV Conclusion In this paper, we have obtained a new Lifshitz-like rotating black hole solution in three-dimensional \(F(R)\) gravity. Investigating the geometrical properties of the solution, we have found that it reduces to a charged rotating BTZ-like black hole in a special limit. We also studied the optical features of the black hole and noticed that some constraints should be imposed on the exponents in order to have an acceptable optical behavior. Studying the impact of the parameters of the model on the photon orbit radius illustrated that as the electric charge, the angular momentum and the absolute value of the cosmological constant increase, both the event horizon and photon orbit radii decrease. Regarding the effect of the exponent \(z\), our analysis showed that this parameter has an increasing contribution to the horizon and photon orbit radii. After that, we continued our analysis by investigating the energy emission rate and examining the influence of the parameters on the radiation process. The results indicated that the angular momentum and the exponent \(z\) have an increasing contribution to the emission rate, namely, the emission of particles around the BH increases by increasing these two parameters. Regarding the role of the electric charge, the cosmological constant and the exponent \(w\), we have found that as the effects of these parameters get stronger, the evaporation process gets slower. In other words, the lifetime of a black hole would be longer under such conditions. As the next step, we have studied the thermodynamic properties of the system in the extended phase space. We calculated the thermodynamic quantities of the black holes and showed that these quantities satisfy the first law of thermodynamics. We also obtained the modified Smarr relation and found that, apart from the cosmological constant term, the scaling of the other thermodynamic quantities is modified. Using the heat capacity, we investigated the thermal stability of the system and showed how the parameters of the model affect the region of stability. Moreover, we looked for possible phase transitions and found that the three-dimensional Lifshitz-like rotating black hole experiences first-order/second-order phase transitions for a suitable choice of parameters. Finally, we have considered this kind of black hole as a working substance and studied the holographic heat engine by taking a rectangular heat cycle in the \(P-V\) plane. 
Investigating the black hole heat engine efficiency and comparing the obtained results with the Carnot efficiency led to the following interesting results: I) The angular momentum (electric charge) has a decreasing (an increasing) contribution to the efficiency of the system. For all values of these two parameters, the efficiency is always smaller than the Carnot efficiency, which is consistent with the second law of thermodynamics. II) The charged rotating black hole has a bigger (smaller) efficiency than its uncharged (non-rotating) counterpart. It is worth pointing out that for a very large volume difference, the efficiency of the rapidly rotating black hole becomes bigger than theirs. III) The heat engine efficiency is an increasing function of the exponent \(z\). For small values of this parameter, the condition \(\frac{\eta}{\eta_{C}}<1\) is satisfied all the time, but for large values of \(z\) this condition is violated in the region of volume \(V_{2}\) near \(V_{1}\). The contribution of the exponent \(w\) to the efficiency is a little different. For \(w<0.5\), the efficiency increases with the increase of \(w\), whereas for \(w>0.5\), increasing this parameter leads to a decrease of the heat engine efficiency. For \(w<0.5\), the second law of thermodynamics is violated for a very small volume difference, whereas for \(w>0.5\) it is always preserved. IV) Increasing the pressure difference increases the efficiency of the system. For all values of the pressure, the efficiency is always smaller than the Carnot efficiency, which is consistent with the second law of thermodynamics. ###### Acknowledgements. The authors thank Shiraz University Research Council. KhJ is grateful to the Iran Science Elites Federation for the financial support.
2308.16630
A lattice-ordered monoid on multilayer networks
In the present paper we introduce a lattice-ordered partial monoid structure on a suitable set of multilayer networks. We first study a class of mappings that preserve the partial order and describe the order structure. After that we define the lattice-ordered monoid and deduce its main properties. Keywords: lattice-ordered monoid, multilayer network, interior mapping, partial operation.
Joaquin Diaz Boils, Orlando Galdames Bravo
2023-08-31T10:51:31Z
http://arxiv.org/abs/2308.16630v1
# A lattice-ordered monoid on multilayer networks ###### Abstract. In the present paper we introduce a lattice-ordered partial monoid structure on a suitable set of multilayer networks. We first study a kind of mappings that preserve the partial order and describe the order structure. After that we define the lattice-ordered monoid and deduce the main properties. lattice-ordered monoid, multilayer network, interior mapping, partial operation. Key words and phrases:lattice-ordered monoid, multilayer network, interior mapping, partial operation 2020 Mathematics Subject Classification: Primary 06A06, Secondary 05C99 ## 1. Introduction On the one hand a multilayer network can be seen as a graph or a multigraph of graphs structures and they are habitually used as a tool for the study in applied science by means of mathematical formulations evolving for instance graph theory, topology or statistics, see for instance [9, 3] and references therein. On the other hand lattice ordered monoids [2] has been widely studied from several points of view (see e.g. [10, 16] and references therein). In the present paper we propose a join scheme of both concepts, multilayer network and lattice ordered monoid. Our original interest on such structures is due to the fact that they provide an algebraic framework for an abstract notion of _embodiment_ in Neuroscience by means of multilayer networks with a partial structure developed by the first author in [14]. This structure opens the possibility to a dynamical behaviour, which needs a suitable setting for being studied. At this point we obviate the classical interaction of an static network and focus on the algebraic structure that we define and how it can change the network structure. The ideas we develop are mainly oriented to the original example described in [14], but we notice that one can easily extrapolate it to any other contexts where it appear multilayer networks or related structures as, for example, multiplex networks, general networks or simply graphs and multigraphs. We also notice the structure we define is actually a partial commutative monoid for our convenience, but the theory we develope apply to general commutative monoids. As far as we know there is not in the ## 1. Introduction Let \(X\) be a set of \(n\)-dimensional subsets of \(X\). A _multisubset_\(G\) is a set of subsets \(X\) of \(X\) such that \(X\) is a subset of \(X\). A _multisubset_\(G\) is a subset of \(X\) such that \(X\) is a subset of \(X\) such that \ **Definition 1.1**.: Let \(C\) and \(V\) be fixed sets of colors and nodes respectively. Let a \(s\)-colored layer \(G\in MG^{\otimes C}\) and a \(q\)-colored layer \(H\in MG^{\otimes C}\), and assume that \(C_{1}:=col(E(G))\subseteq C\), \(C_{2}:=col(E(H))\subseteq C\) and that \(V(G),V(H)\subseteq V\). Then the operation \[\odot\colon MG^{\otimes C}(n)\times MG^{\otimes C}(m)\longrightarrow MG^{ \otimes C}\] produces a new \((s+q-r)\)-colored layer \(G\odot H\), where \(r=|C_{1}\cap C_{2}|\) with \(n+m-p\) vertices where \(p=|V(G)\cap V(H)|\) defined as \(V(G\odot H):=V(G)\sqcup V(H)\), \(E(G\odot H):=E(G)\cup E(H)\), \(m_{G\odot H}:=m_{G}+m_{H}\) and \(col_{G\odot H}:=col_{G}\cup col_{H}\), where the mappings are defined by a natural way. We set \(\odot\) to be a commutative operation and \(\otimes\) not be and also establish that \(\odot\) has priority over \(\otimes\), that is: \[G\otimes H\odot K=G\otimes(H\odot K)\] Notice we have defined two different ways of composing: \(\otimes\) and \(\odot\). 
That is, we consider sets \(MG^{\otimes C}\) of concatenations in the form \(G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k}\) with \(\oslash^{i}\in\{\otimes,\odot\}\) for \(|C|=k\) and \(i=1,\ldots,k-1\). Also notice that, with this notation, we have obviated the interactions between layers which are present by the tensor product, but not explicitly: we just take into account the case when the relation between layers dissapear by means of the composition operation \(\odot\). **Example 1.2**.: For \(k=3\) we have the concatenations \[\begin{array}{l}MG^{\otimes C}:=\{G\otimes H\otimes K,G\otimes K\otimes H,H \otimes G\otimes K,H\otimes K\otimes G,K\otimes G\otimes H,K\otimes H\otimes G,\\ G\odot H\otimes K,G\odot K\otimes H,H\odot K\otimes G,G\otimes H\odot K,H \otimes G\odot K,K\otimes G\odot H,G\odot H\odot K\}\end{array}\] The following example illustrates the composition operation \(\odot\): **Example 1.3**.: For \(n=3,m=4,s=q=2\) and \(p=3\): Note that new colors appear in a layer after more applications of \(\odot\). ## 2. The partial ordered structure Operation \(\odot\) defined in previous section can be seen as an accumulation of vertices and edges of two given layers that becomes a new layer with more colors than the original ones. For example, given the multilayers \(G\otimes H\otimes K,G\odot H\otimes K\in MG^{\otimes C}\), we understand that \(G\odot H\otimes K\) is, in some sense, over or below from \(G\otimes H\otimes K\). By convention we say that \(G\otimes H\otimes K\leq G\odot H\otimes K\), since we consider that \(G\odot H\) is more complex, in some sense, than \(G\otimes H\). Let us formalize this idea. A partially ordered set or a _poset_ is a set with a binary operation \(\leq\) wich is reflexive, antisymmetric and transitive (see e.g. [2]). We define the relation \(\leq\) in \(MG^{\otimes C}\) by ordering the concatenations of multigraphs as given in the following. Let \(k=|C|\) for the rest of the section. **Definition 2.1**.: Given \(G_{1}\oslash\cdots\oslash^{k-1}G_{k}\) and \(G_{1}\ominus^{1}\cdots\ominus^{k-1}G_{k}\) in \(MG^{\otimes C}\) with \(\oslash^{i},\ominus^{i}\in\{\otimes,\odot\}\) for \(i=1,\ldots,k-1\) we write \[G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k}\leq G_{1}\ominus^{1}\cdots\ominus^{k-1 }G_{k}\] if and only if there is no \(i\in\{1,\ldots,k-1\}\) such that \(\oslash^{i}=\otimes\) and \(\ominus^{i}=\odot\). This partial order allows us to define the following mappings. In order to simplify the notation, we sometimes will use lowercase letters as multilayers of \(MG^{\otimes C}\). **Definition 2.2**.: Let the mapping \(f_{j}\colon MG^{\otimes C}\to MG^{\otimes C}\): \[f_{j}(x)=\begin{cases}G_{1}\oslash^{1}\cdots G_{j}\odot G_{j+1}\cdots\oslash^{k -1}G_{k}&\text{if }x=G_{1}\oslash^{1}\cdots G_{j}\otimes G_{j+1}\cdots\oslash^{k-1}G_{k}\\ x&\text{otherwise}\end{cases}\] for \(j=1,\ldots,k-1\). We say that \(x,y\in MG^{\otimes C}\) are _comparable through_\(f_{j}\) if \(f_{j}(x)=y\). By adding \(f_{0}\) as the identity, it is easy to see that \(f_{j}\) are order-preserving. For the sake of clarity we use the notation \(f_{j}\) for any mapping defined above, avoiding the list of indexes. These mappings will be useful in the sequel, the next example illustrates how these functions work and describe, in some sense, a flow on \(MG^{\otimes C}\) as a poset. **Example 2.3**.: For the elements in Example 1.2 we have: From the example above we extract two immediate results. 
The first one establishes that one can obtain the top element after an action of every \(f_{j}\) over a given concatenation whatever ordering could be and the second that \(f_{j}\) are increasing. **Proposition 2.4**.: \(f_{i_{1}}\cdots f_{i_{k}}(G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k})=G_{1}\odot\cdots \odot G_{k}\) _for \(i_{1}<\cdots<i_{k}\) a permutation of \(1,\ldots,k\)._ **Proposition 2.5**.: \(f_{j}(G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k})\geq G_{1}\oslash^{1}\cdots\oslash^ {k-1}G_{k}\)_._ To the aim of simplicity we will focus our study to a fixed set of multilayers/multigraphs. Let us fix a list of multigraphs \(G(k):=(G_{1},\ldots,G_{k})\in(MG^{\otimes C})^{k}\) and denote \[\bigcirc G(k):=\left\{G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k}:\oslash^{i}\in \{\otimes,\odot\}\right\}.\] Notice that \(MG^{\otimes C}=\bigcup_{k=|C|}\{\bigcirc G(k):G(k)\in(MG^{\otimes C})^{k}\}\) and moreover that such subsets of \(MG^{\otimes C}\) are invariant by \(f_{j}\), i.e. \(f_{j}(\bigcirc G(k))\subseteq\bigcirc G(k)\). Hence, from these comments we deduce that \(f_{j}|_{\bigcirc G(k)}\colon\bigcirc G(k)\to\bigcirc G(k)\) is well defined and from now on we understand \(f_{j}\) as \(f_{j}|_{\bigcirc G(k)}\) for some \(\bigcirc G(k)\). Let \(P\) be a poset. We say that \(b\in P\) is a _bottom_ element if \(b\leq x\) for every \(x\in P\) and \(a\in P\) is a _top_ element if \(a\geq x\) for every \(x\in P\) (see [2]). **Lemma 2.6**.: \(\bigcirc G(k)\) _is a partial ordered set with top \(G_{1}\odot\cdots\odot G_{k}\)._ Proof.: Observe the order of \(\bigcirc G(k)\) is described by the mappings \(f_{j}\) (see Example 2.3). Reflexivity is given by \(f_{0}\) while transitivity is immediate by definition of the mappings \(f_{j}\). For antisymmetry we recall the form of the ordering given in the previous definition, now a concatenation can only be compared both ways with another concatenation if they are both the same. In that case they are compared by means of the same \(f_{j}\) whenever a \(\odot\) appears in the \(j\)-position of the concatenation. **Example 2.7**.: The following diagram illustrates the argument used for the antisymmetry in the proof above: while \(G\otimes H\odot K\) and \(G\odot H\otimes K\) are not comparable through any mapping \(f_{j}\). Notice that we cannot dualize the above since inverse mappings in such as \(g_{1}\) for which \[g_{1}(G\odot H\otimes K)=G\otimes H\otimes K\] lose the well-definedness condition for the non commutativity of \(\otimes\). We now prove a notable property that we develop in section below. **Definition 2.8**.: A _closure mapping_ on a poset \(P\) is a monotone map \(g:P\to P\) that 1. increasing, i.e. for all \(x\in P,gx\geq x\) and 2. idempotent, i.e. for all \(x\in P,g^{2}x=gx\). **Proposition 2.9**.: _The mappings \(f_{j}\) are closure mappings._ The mapping determined by two elements is defined in [7] as \[f_{a,b}(x)=\left\{\begin{array}{ll}b&\text{ if }x=a\\ x&\text{ otherwise.}\end{array}\right.\] Let us see by an example that these mappings are closely related to our mappings \(f_{j}\). If we change elements by tuples we obtain the following example. 
**Example 2.10**.: Let \(\vec{a}_{j}=(G_{1}\oslash^{1}\cdots\oslash^{j-1}G_{j}\otimes G_{j+1}\oslash^{j+1 }\cdots\oslash^{k-1}G_{k})_{\oslash^{i}\in\{\otimes,\odot\}}\) and \(\vec{b}_{j}=(G_{1}\oslash^{1}\cdots\oslash^{j-1}G_{j}\odot G_{j+1}\oslash^{j+1} \cdots\oslash^{k-1}G_{k})_{\oslash^{i}\in\{\otimes,\odot\}}\), where the tuples run all the combinations of \(\oslash^{i}\in\{\otimes,\odot\}\) and \(i\) runs the set \(\{1,\ldots,k-1\}\setminus\{j\}\), taking into account that \(\odot\) is commutative and \(\otimes\) is not commutative. For instance, we get \(k=6\), \(j=2\) and fix the multigraphs \(G_{1},\ldots,G_{6}\) all differents. Then the set of multilayers with the form \(G_{1}\oslash G_{2}\otimes G_{3}\oslash G_{4}\oslash G_{5}\oslash G_{6}\) represent the tuple \(\vec{a}\), namely \[\vec{a}= (G_{1}\otimes G_{2}\otimes G_{3}\otimes G_{4}\otimes G_{5}\otimes G _{6},G_{1}\odot G_{2}\otimes G_{3}\otimes G_{4}\otimes G_{5}\otimes G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\odot G_{4}\otimes G_{5}\otimes G _{6},G_{1}\otimes G_{2}\otimes G_{3}\otimes G_{4}\odot G_{5}\otimes G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\otimes G_{4}\otimes G_{5}\odot G _{6},G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\otimes G_{5}\odot G_{6},\] \[G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\otimes G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\otimes G_{5}\odot G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6},\] \[G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6},\] \[G_{1}\otimes G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6 },G_{1}\odot G_{2}\otimes G_{3}\odot G_{4}\odot G_{5}\odot G_{6})\,.\] And \(G_{1}\oslash G_{2}\odot G_{3}\oslash G_{4}\oslash G_{5}\oslash G_{6}\) represents the tuple \(\vec{b}\), so \(f_{2}=f_{\vec{a},\vec{b}}\). Observe that we must choose an order for the tuple. Also notice that all these elements are different, since we have choosen all multigraphs different. Taking into account that \(\vec{b}\) is the same tuple, just changing the second "\(\otimes\)" by "\(\odot\)" in all entries. Let us finish the section with two interpretations of the content defined so far that can be considered for further developments. ### Levels into \(\bigcirc G(k)\) Looking at Example 2.3 we can organize \(\bigcirc G(k)\) as a disjoint union of _levels_ according to the number of \(\odot\) appearing in every concatenation. That is: \[\bigcirc G(k)=\bigsqcup_{0\leq l\leq k-1}\bigcirc G(k)_{l}\] where every \(\bigcirc G(k)_{l}\) is the set of all concatenations with exactly \(l\) operators \(\odot\) in it. In fact: \[f_{j}:\bigcirc G(k)_{l}\longrightarrow\bigcirc G(k)_{l+1}\] for which, when composing, we can jump more than one level in one step by defining \[f_{i}\circ f_{j}=f_{ij}:\bigcirc G(k)_{l}\longrightarrow\bigcirc G(k)_{l+2}\] which suggests considering mappings \(f_{i_{1}\cdots i_{l}}\) where \(i_{j}\in\{1,\ldots,k-1\}\) for \(1\leq j\leq l\) in the expected way for simultaneous mergings, this allows to jump various levels at a time into the poset. 
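To make the above concrete, the following minimal Python sketch (an illustration of ours, not part of the original construction) encodes a concatenation \(G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k}\) for a fixed list of multigraphs simply by its word of operators, orders these words pointwise with \(\otimes\) below \(\odot\) (matching the convention \(G\otimes H\otimes K\leq G\odot H\otimes K\) and the fact that the \(f_{j}\) are increasing), and checks the closure and level behaviour of the mappings \(f_{j}\) from Propositions 2.4, 2.5 and 2.9 and from this subsection.

```python
from itertools import product

TENSOR, ODOT = "⊗", "⊙"   # the two composition operators; the layers G_1,...,G_k stay implicit

def leq(x, y):
    """Pointwise order on operator words: x <= y when x never has ⊙ where y has ⊗,
    so that e.g. (⊗,⊗) <= (⊙,⊗), matching the convention G⊗H⊗K <= G⊙H⊗K."""
    return all(not (a == ODOT and b == TENSOR) for a, b in zip(x, y))

def f(j, x):
    """The mapping f_j of Definition 2.2: replace the j-th operator by ⊙."""
    return x[:j] + (ODOT,) + x[j + 1:]

def level(x):
    """Level of a concatenation (this subsection): the number of ⊙ operators."""
    return sum(op == ODOT for op in x)

k = 4                                        # four layers, hence k - 1 = 3 operator slots
words = list(product((TENSOR, ODOT), repeat=k - 1))
top = (ODOT,) * (k - 1)

for x in words:
    for j in range(k - 1):
        assert leq(x, f(j, x))               # f_j is increasing (Proposition 2.5)
        assert f(j, f(j, x)) == f(j, x)      # f_j is idempotent, hence a closure mapping (Prop. 2.9)
        assert level(f(j, x)) in (level(x), level(x) + 1)   # f_j moves up at most one level
    y = x
    for j in range(k - 1):
        y = f(j, y)
    assert y == top                          # applying every f_j reaches the top (Proposition 2.4)

print("closure and level checks passed for all", len(words), "operator words")
```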
### Monads as modalities All the content introduced so far can be interpreted in terms of Category Theory as follows, where the terminology can be found for instance in [1] and [13]. Considering posets as categories, it can be proved that \(f_{j}\) are idempotent endofunctors and the fact that they are _monads_. Then, one can see \(f_{j}\) as (possibility) modalities \(\diamondsuit_{j}\) for a certain Modal Logic system where we write \(\diamondsuit_{\alpha}=\diamondsuit_{i_{1}}\cdots\diamondsuit_{i_{l}}\) for \(\alpha=i_{1}...i_{l}\) all distinct for idempotence. Now we have a multimodal system where every modality is a conjunction of possible applications of functions \(f_{j}\) satisfying the following axioms: 1. \(\diamondsuit_{\alpha}(x\wedge y)=x\wedge\diamondsuit_{\alpha}y\) 2. \(\diamondsuit_{\alpha}(x\lor y)=x\vee\diamondsuit_{\alpha}y\) We deduce the following easy property: \(\diamondsuit_{\alpha}\) are _strong functors_ with the identity as the strength, since Axiom (2) implies \(\diamondsuit_{\alpha}(x\lor y)\geq\diamondsuit_{\alpha}x\wedge y\). ## 3. Interior mappings In section above we show that mappings \(f_{j}\) can be seen as determined by two tuples. Despite our study is for interior mappings, as they are dual of closure mappings, results for \(f_{j}\)'s are easily deduced. Interior mappings have important properties for the analysis of posets as is shown in [15]. In this section provide conditions for a mapping defined by two elements be interior mapping and so, conditions to be closure mappings. The following definitions can be found in [7]. Let \((P,\leq)\) be a poset. Let \(A\subset P\), the sets \(\mathcal{L}(A):=\{x\in P:x\leq A\}\) and \(\mathcal{U}(A):=\{x\in P:x\geq A\}\) are respectively the _lower_ and _upper cone_ of \(A\). Let the tuples \(\vec{a}:=(a_{1},\ldots,a_{n}),\vec{b}:=(b_{1},\ldots,b_{n})\in P^{n}\) such that \(a_{i}\neq b_{i}\) for \(i=1,\ldots,n\). The mapping determined by such tuples is defined as \[f_{\vec{a},\vec{b}}(x)=\left\{\begin{array}{ll}b_{i}&\text{ if }x=a_{i}\text{ for }i=1, \ldots,n\\ x&\text{ otherwise.}\end{array}\right.\] This mapping is strictly monotone if and only if \(\vec{a}\) and \(\vec{b}\) are not comparable, \(\mathcal{L}(\{\vec{a}\})\setminus\{\vec{a}\}\subseteq\mathcal{L}(\{\vec{b} \})\setminus\{\vec{b}\}\) and \(\mathcal{U}(\{\vec{a}\})\setminus\{\vec{a}\}\subseteq\mathcal{U}(\{\vec{b} \})\setminus\{\vec{b}\}\) (see [7, Proposition 3.1]). These conditions can be easily changed to obtain a monotone mapping. A mapping \(f\colon P\to P\) is _interior_ when for any \(x,y\in P\): \(f(x)\leq y\) if and only if \(f(x)\leq f(y)\) (see [15, Definition 3.1 and Remark 3.2]). This definition is equivalent to the following three axioms for \(x,y\in P\): 1. Monotonicity: \(x\leq y\) implies \(f(x)\leq f(y)\). 2. Contraction: \(f(x)\leq x\). 3. Idempotence: \(f(f(x))=x\). We say that \(P\) is a _bounded poset_ if it has bottom and top elements, it can also be _lower bounded_ and _upper bounded_. The product poset \((P^{n},\leq)\) is defined by means of the natural order and the fact that \(P\) is a bounded poset implies that \(P^{n}\) is bounded again and the top and bottom are \((a,\ldots,a)\) and \((b,\ldots,b)\) respectively, where \(a\) and \(b\) are bottom and top elements of \(P\). **Proposition 3.1**.: _Let \(P\) be a lower bounded poset. 
If \(\vec{b}\) is a bottom of \(P^{n}\) and \(f_{\vec{a},\vec{b}}\) is monotone, then the mapping \(f_{\vec{a},\vec{b}}\) is interior._ Proof.: Notice that \(\vec{b}=(b,\ldots,b)\) for the bottom \(b\) of \(P\). For the sake of clarity we denote \(f:=f_{\vec{a},\vec{b}}\). Therefore \(f(x)=b\) or \(f(x)=x\) for every \(x\in P\). Assume \(f(x)\leq y\). We have two cases: If \(f(x)=b\), then \(f(x)\leq f(y)\), since \(b\) is bottom in \(P\). If \(f(x)=x\leq y\), then \(f(x)\leq f(y)\), since \(f\) is monotone. Now assume \(f(x)\leq f(y)\). If \(f(x)=b\), then \(f(x)\leq y\) since \(b\) is bottom in \(P\). If \(f(x)=x\) and \(f(y)=b\), then \(x\leq b\), so necessarily \(x=b\). Thus, \(f(x)=b\leq y\) since \(b\) is bottom. If \(f(x)=x\) and \(f(y)=y\), then \(f(x)\leq y\) trivially. Observe that the converse of proposition above is not true in general as is shown in the following example. **Example 3.2**.: Let \(\vec{a}_{j}=(G_{1}\oslash^{1}\cdots\oslash^{j-1}G_{j}\otimes G_{j+1}\oslash^{j+ 1}\cdots\oslash^{k-1}G_{k})_{\oslash^{i}\in\{\otimes,\odot\}}\) and \(\vec{b}_{j}=(G_{1}\oslash^{1}\cdots\oslash^{j-1}G_{j}\odot G_{j+1}\oslash^{j+ 1}\cdots\oslash^{k-1}G_{k})_{\oslash^{i}\in\{\otimes,\odot\}}\), where the tuples run all the combinations of \(\oslash^{i}\in\{\otimes,\odot\}\) and \(i\) runs the set \(\{1,\ldots,k-1\}\setminus\{j\}\), taking into account that \(\odot\) is commutative and \(\otimes\) is not commutative. Then the mapping \(f_{\vec{a}_{j},\vec{b}_{j}}\) is interior, but \(\vec{b}\) is not a bottom. What happens is that \(b\preceq a\), i.e. \(b\) is covered by \(a\) or in other words, there is no elements between \(b\) and \(a\), formally \(b\leq a\) and if \(x\leq a\), then \(x\leq b\). This idea allows us to obtain a better result. Previous lemma could be useful in case we were not be able to find an element covered by another. Observe that \(\vec{b}\preceq\vec{a}\) if and only if \(a_{i}\preceq b_{i}\) for \(i=1,\ldots,n\). We define the set of the tuple as \(\{\vec{a}\}:=\{a_{1},\ldots,a_{n}\}\). **Proposition 3.3**.: _Let \(P\) be a poset. If \(\{\vec{b}\}\cap\{\vec{a}\}=\emptyset\) and \(\vec{b}\preceq\vec{a}\), then the mapping \(f_{\vec{a},\vec{b}}\) is interior._ Proof.: Let us denote \(f:=f_{\vec{a},\vec{b}}\). * Monotonicity: Assume \(x\leq y\) and \(i,j\in\{1,\ldots,n\}\) such that \(i\neq j\). The case \(x=y\) is clear from definition of mapping. Having in mind that \(\{\vec{b}\}\cap\{\vec{a}\}=\emptyset\), it is clear that \(f(b_{i})=b_{i}\) for every \(i\in\{1,\ldots,n\}\), we have the following cases: * If \(x=a_{i}\) and \(y=b_{j}\), then \(f(x)=b_{i}\) and \(f(y)=b_{j}\), by hypothesis \(b_{i}\leq a_{i}=x\leq y=b_{j}\), hence \(f(x)\leq f(y)\). * If \(x=a_{i}\) and \(y=a_{j}\), then \(f(x)=b_{i}\) and \(f(y)=b_{j}\), by hypothesis \(b_{i}\leq a_{i}=x\leq y=a_{j}\), but \(b_{j}\preceq a_{j}\), hence \(a_{i}\leq b_{j}\), so \(f(x)\leq f(y)\). * If \(x\neq a_{i}\) and \(y=a_{j}\), then \(f(x)=x\) and \(f(y)=b_{j}\), by hypothesis \(x\leq y=a_{j}\), but \(b_{j}\preceq a_{j}\), hence \(x\leq b_{j}\), so \(f(x)\leq f(y)\). * The rest of cases brings us to \(f(x)=x\) and \(f(y)=y\), so by hypothesis \(f(x)\leq f(y)\). * Contraction: If \(x=a_{i}\) for some \(i\in\{1,\ldots,n\}\), \(f(x)=b_{i}\leq a_{i}\). If \(x\neq a_{i}\) for every \(i\in\{1,\ldots,n\}\), \(f(x)=x\). In both cases \(f(x)\leq x\). * Idempotence: If \(x=a_{i}\) for some \(i\in\{1,\ldots,n\}\), then \(f(x)=b_{i}\preceq a_{i}\). 
Since \(\{\vec{b}\}\cap\{\vec{a}\}=\emptyset\), necessarily \(f(b_{i})=b_{i}\), i.e. \(f(f(x))=x\). If \(x\neq a_{i}\) for all \(i\in\{1,\ldots,n\}\), is clear that \(f(f(x))=x\). And the proof is ended. We can obtain, by duality, versions of propositions above for closure mappings. We have omitted the proofs, since they are analog to the ones above. **Proposition 3.4**.: _Let \(P\) be a upper bounded poset. If \(\vec{b}\) is a top of \(P^{n}\) and \(f_{\vec{a},\vec{b}}\) is monotone, then the mapping \(f_{\vec{a},\vec{b}}\) is closure._ **Proposition 3.5**.: _Let \(P\) be a poset. If \(\{\vec{b}\}\cap\{\vec{a}\}=\emptyset\) and \(\vec{a}\preceq\vec{b}\), then the mapping \(f_{\vec{a},\vec{b}}\) is closure._ ## 4. The lattice structure We saw above that we can define a partial order in a set of multilayer networks and show that this order yields several properties in such a framework. In this section we go a little further and provide a lattice structure for \(\bigcirc G(k)\). A meet (resp. join) semilattice is a poset \((L,\leq)\) such that any two elements \(x\) and \(y\) have a greatest lower bound (called meet or infimum) (resp. a smallest upper bound (called join or supremum)), denoted by \(x\wedge y\) (resp. \(x\lor y\)). A poset \((L,\leq)\) is called a _lattice_ and denoted by \((L,\leq,\wedge,\vee)\) if for every pair of elements we can construct into the lattice their meet and their join. These definitions can be found for instance in [2]. Let us define a meet and a join operators for the poset \((\bigcirc G(k),\leq)\): **Definition 4.1**.: Given \(G_{1}\oslash^{1}\cdots\oslash^{k-1}G_{k}\), \(G_{1}\ominus^{1}\cdots\ominus^{k-1}G_{k}\in\bigcirc G(k)\) (in short \(\oslash G\) and \(\ominus G\)) we write \(\oslash G\wedge\ominus G=\ominus G\) for \(G_{1}\ominus^{1}\cdots\ominus^{k-1}G_{k}\) such that \[\ominus^{j}=\begin{cases}\otimes&\text{if $\oslash^{j}=\otimes$ or $\ominus^{j}=\otimes$}\\ \odot&\text{otherwise}\end{cases}\] and we write \(\oslash G\vee\ominus G=\otimes G\) for \(G_{1}\ominus^{1}\cdots\ominus^{k-1}G_{k}\) such that \[\ominus^{j}=\begin{cases}\odot&\text{if $\oslash^{j}=\odot$ or $\ominus^{j}=\odot$}\\ \otimes&\text{otherwise}\end{cases}\] It can be easily checked the usual properties of both operations, that is: \(x\wedge y\leq x,y\) and for every \(z\leq x,y\) one has \(z\leq x\wedge y\) and dually: \(x\lor y\geq x,y\) and for every \(z\geq x,y\) one has \(z\geq x\lor y\) for every \(x,y,z\in\bigcirc G(k)\). **Proposition 4.2**.: _The absorption laws are satisfied for every \(x,y\in\bigcirc G(k)\):_ * \(x\vee(x\wedge y)=x\)__ * \(x\wedge(x\lor y)=x\)__ Proof.: Let us prove the first assertion. For \(x=\ominus G\) and \(y=\Box G\) we construct \(x\wedge y=\oslash G\) such that \[\oslash^{j}=\begin{cases}\otimes&\text{if $\Box^{j}=\otimes$ or $\ominus^{j}=\otimes$}\\ \odot&\text{otherwise}\end{cases}\] and \(x\vee(x\wedge y)=\otimes G\) as \[\ominus^{j}=\begin{cases}\odot&\text{if $\ominus^{j}=\odot$ or $(\Box^{j}=\odot$ and $\ominus^{j}=\odot$)}\\ \otimes&\text{otherwise}\end{cases}\] which can be expressed as \[\begin{cases}\odot&\text{if $\ominus^{j}=\odot$}\\ \otimes&\text{otherwise}\end{cases}\] and becomes the same assignation considered for \(x=\ominus G\). Let us recall that a _minimal element_ into a poset is an element such that it is not greater than any other element in the poset. 
**Proposition 4.3**.: \((\bigcirc G(k),\leq,\wedge,\vee,1_{\odot},m_{\pi})\) _is an upper-bounded lattice where:_ * \(1_{\odot}=G_{1}\odot\cdots\odot G_{k}\) _is the top and_ * \(m_{\pi}=G_{\pi(1)}\otimes\cdots\otimes G_{\pi(k)}\) _are_ \(k!\) _minimal elements for_ \(\pi\) _a permutation of the set_ \(\{1,\ldots,k\}\)_._ Proof.: Check that \(x\wedge 1_{\odot}=x,x\lor 1_{\odot}=1_{\odot}\) and \(x\wedge m_{\pi}=m_{\pi},x\lor m_{\pi}=x\). **Proposition 4.4**.: \((\bigcirc G(k),\leq,\wedge,\vee,1_{\odot},m_{\pi})\) _is distributive._ Proof.: Let \(x_{1}=\bigcirc G\), \(x_{2}=\ominus G\), \(x_{3}=\Box G\). Now \(x_{1}\wedge(x_{2}\lor x_{3})=\ominus G\) where \[\otimes^{j}=\begin{cases}\otimes&\text{if $\ominus^{j}=\otimes$ or $\Box^{j}=\otimes$ and $\oslash^{j}=\otimes$}\\ \odot&\text{if $\ominus^{j}=\Box^{j}=\odot$ or $\oslash^{j}=\odot$}\end{cases}\] which is exactly the same operator as \[\begin{cases}\otimes&\text{if no $(\oslash^{j}=\odot$ or $\ominus^{j}=\odot$) or no $(\oslash^{j}=\odot$ or $\Box^{j}=\odot$)}\\ \odot&\text{otherwise}\end{cases}\] for \((x_{1}\wedge x_{2})\vee(x_{1}\wedge x_{3})\). **Proposition 4.5**.: _Mappings \(f_{j}\) preserve meets and joins._ Following the notation of previous section we try some conditions in order to find mappings defined by two tuples that also preserve meets and joins. Let \((L,\leq,\wedge,\vee)\) be lattice and define the cartesian product \((L^{n},\leq)\) and the meet and join operations defined coordinatewise for it, i.e. for \(\vec{a},\vec{b}\in L^{n}\) we define \(\vec{a}\wedge\vec{b}:=(a_{1}\wedge b_{1},\ldots,a_{n}\wedge b_{n})\) and \(\vec{a}\vee\vec{b}:=(a_{1}\lor b_{1},\ldots,a_{n}\lor b_{n})\) from which one can easily verify the distributive properties. We need the following property for \(\vec{a}\in L^{n}\): \[x\neq\vec{a}\neq y\Longleftrightarrow x\wedge y\neq\vec{a}\,,\] that we say \(\vec{a}\) is _strictly not absorbing for \(\wedge\)_. In an analogous way we define _strictily not absorbing for \(\vee\)_. In order to simplify the proof of the following proposition we have included the hypothesis \(\{\vec{a}\}\cap\{\vec{b}\}=\emptyset\). **Proposition 4.6**.: _Let \(L\) be a lattice. Assume that \(\{\vec{a}\}\cap\{\vec{b}\}=\emptyset\)._ 1. _If_ \(\vec{b}\) _is bottom element of_ \(L^{n}\) _and_ \(\vec{a}\) _is strictily not absorbing for_ \(\wedge\)_, then mappings_ \(f_{\vec{a},\vec{b}}\) _preserve meets._ 2. _If_ \(\vec{b}\) _is top element of_ \(L^{n}\) _and_ \(\vec{a}\) _is strictily not absorbing for_ \(\vee\)_, then mappings_ \(f_{\vec{a},\vec{b}}\) _preserve joins._ Proof.: (1) Assume \(\vec{b}\) is bottom, then \(\vec{b}\wedge\vec{b}=x\wedge\vec{b}=\vec{b}\wedge y=\vec{b}\). Observe that \(f_{\vec{a},\vec{b}}(x\wedge y)\in\{\vec{b},x\wedge y\}\). Also \(f_{\vec{a},\vec{b}}(x)\in\{\vec{b},x\}\) and \(f_{\vec{a},\vec{b}}(y)\in\{\vec{b},y\}\), thus \(f_{\vec{a},\vec{b}}(x)\wedge f_{\vec{a},\vec{b}}(y)\in\{\vec{b},x\wedge y\}\). As \(\vec{b}\) is bottom \(f_{\vec{a},\vec{b}}(x)\wedge f_{\vec{a},\vec{b}}(y)=x\wedge y\) if and only if \(x\neq\vec{a}\neq y\), in consequence \(x\wedge y\neq\vec{a}\) and we can say that \(f_{\vec{a},\vec{b}}\) preserve meets. (2) The proof is analogous. ### Complements In [2] a _complemented lattice_ is defined as a bounded lattice (with least element \(0\) and greatest element \(1\)), in which every element \(a\) has a _complement_, i.e. an element \(b\) such that \(a\lor b=1\) and \(a\wedge b=0\). 
Also, given a lattice \(L\) and \(x\in L\) we say that \(\hat{x}\) is an _orthocomplement of \(x\)_ if the following conditions are satisfied: * \(\hat{x}\) is a complement of \(x\) * \(\hat{\hat{x}}=x\) * if \(x\leq y\) then \(\hat{y}\leq\hat{x}\). A lattice is _orthocomplemented_ if every element has an orthocomplement. We give a slightly different approach: **Definition 4.7**.: We say that an upper bounded lattice \((L,\leq,\wedge,\vee,1)\) with a set of minimal elements \(\{m_{1},...,m_{k}\}\) is _semi-orthocomplemented_ if every element \(a\in L\) has a complement, i.e. an element \(b\) such that \(a\lor b=1\) and \(a\wedge b=m_{i}\) for a certain \(i\in\{1,...,k\}\). **Proposition 4.8**.: \((\bigcirc G(k),\leq,\wedge,\vee,1_{\odot},s_{\pi})\) _is a semi-orthocomplemented lattice._ Proof.: For \(x=\bigcirc G\) consider \(\hat{x}=\bigcirc G\) where \[\bigodot^{j}=\begin{cases}\otimes&\text{if }\bigcirc^{j}=\odot\\ \odot&\text{if }\bigcirc^{j}=\otimes\,,\end{cases}\] ### Ideals into \(\bigcirc G(k)\) Now we consider the existence of certain subsets of our lattice in order to show a way to find and organize _autonomous_ subsystems into \(\bigcirc G(k)\). **Definition 4.9**.: Given a lattice \((L,\leq,\wedge,\vee)\), \(I\subseteq L\) is an _ideal_ if and only if for every \(x,y\in I\) it follows that \(x\lor y\in I\). It can be also considered an equivalent definition: **Definition 4.10**.: Given a lattice \((L,\leq)\), \(I\subseteq L\) is an _ideal_ if the following conditions are satisfied: * for every \(a\in I\) and every \(x\in L\) such that \(x\leq a\) then \(x\in I\) * for every \(a,b\in I\) there is \(c\in I\) such that \(a,b\leq c\). One can found these definitions in [2]. **Example 4.11**.: For \(k=3\) we can construct the following ideals into \(\bigcirc G(3)\): * \(\bigcirc G(3)\) itself is an ideal * every subgraph in the form \(K\otimes G\odot H\) is an ideal * every subgraph in the form \(K\otimes G\odot H\) is an ideal * every subgraph in the form \(K\otimes G\odot H\) is an ideal ## 5. Lattice-ordered partial monoid In this section some concepts from [12] are taken and adapted for the case of a partial operation. Observe that the election of the binary operation is fundamental since it will represent the behavior on which we are interested for analyzing. **Definition 5.1**.: A system \((A,+,\leq,\wedge,\vee)\) is called a _lattice-ordered partial monoid_ if * \((A,+)\) is a partial monoid * \((A,\leq)\) is a lattice with \(\wedge\) and \(\vee\) * \(a\leq b\) implies \(a+x\leq b+x\) and \(x+a\leq x+b\) * \(a+(b\lor c)=(a+b)\vee(a+c),(b\lor c)+a=(b+a)\vee(c+a)\) * \(a+(b\wedge c)=(a+b)\wedge(a+c),(b\wedge c)+a=(b+a)\wedge(c+a)\) for every \(a,b,c,x\in A\). We are introducing a different feature from the operation considered in [12] since \(+\) defined here is partial, this is oriented to the study of \(\bigcirc G(k)\) as a lattice-ordered partial monoid. For that we need a partial semigroup structure for our set, this is obtained by endowing it with the partial operation \(+\) defined for \(x,y\in\bigcirc G(k)\) in the form: \[x+y=\begin{cases}y&\text{if }x\geq y\\ x&\text{if }y\geq x\end{cases}\] Now \(+\) is an associative, commutative and partial operation. It is actually a _partial minimum_. The election of this operation is due to the idea that the composition of two comparable multilayers annihilates the bigger one. 
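Keeping the same illustrative operator-word encoding as in the earlier sketch of Section 2, the fragment below spells out the meet and join of Definition 4.1 and the partial operation \(+\) as a partial minimum, and verifies the absorption laws of Proposition 4.2 together with the translation-invariance required in Definition 5.1 on the pairs for which \(+\) is defined; the encoding and names are again ours.

```python
from itertools import product

TENSOR, ODOT = "⊗", "⊙"

def leq(x, y):
    return all(not (a == ODOT and b == TENSOR) for a, b in zip(x, y))

def meet(x, y):
    """Meet of Definition 4.1: position-wise, ⊗ wins."""
    return tuple(TENSOR if TENSOR in (a, b) else ODOT for a, b in zip(x, y))

def join(x, y):
    """Join of Definition 4.1: position-wise, ⊙ wins."""
    return tuple(ODOT if ODOT in (a, b) else TENSOR for a, b in zip(x, y))

def plus(x, y):
    """The partial operation +: the minimum of two comparable elements, undefined otherwise."""
    if leq(x, y):
        return x
    if leq(y, x):
        return y
    return None   # + is only partial: incomparable multilayers cannot be composed

words = list(product((TENSOR, ODOT), repeat=3))
for x in words:
    for y in words:
        # absorption laws of Proposition 4.2
        assert join(x, meet(x, y)) == x and meet(x, join(x, y)) == x
        for z in words:
            # translation-invariance of Definition 5.1, on the pairs where + is defined
            if leq(x, y) and plus(x, z) is not None and plus(y, z) is not None:
                assert leq(plus(x, z), plus(y, z))

print("lattice and partial-monoid checks passed")
```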
**Proposition 5.2**.: \((\bigcirc G(k),+)\) _is a partial commutative monoid._ Proof.: Operation \(+\) satisfies associativity: suppose that \(x,y,z\in\bigcirc G(k)\) are comparable to each other. Now: \[x+y+z=min(x,y,z)=min(min(x,y),z)=min(x,min(y,z))\,.\] As \(\bigcirc G(k)\) is finite, the unique top element (see Lemma 2.6) is the identity element. Let \((M,\cdot)\) be a partial monoid and let \(f\colon M\to M\) be a mapping. Recall that a _partial homomorphism_ between partial monoids is mapping that preserves the binary operation, namely \(f(x+y)=f(x)+f(y)\), \(f(1)=1\) and \(x+y\in M\) implies that \(f(x)+f(y)\in M\). A mapping between lattice-ordered partial monoids is a _lattice partial homomorphism_ if it is a partial homomorphism of partial monoids that preserves meets and joins. **Proposition 5.3**.: _Mappings \(f_{j}\) are partial homomorphisms._ Proof.: By virtue of Proposition 4.5, mappings \(f_{j}\) preserve meets and joins. From definition of the partial operation \(+\), we konw that \(x\) and \(y\) are comparable if and only if there exists \(x+y\). As \(f_{j}\) is monotone, if \(x\leq y\), then \(f_{j}(x)\leq f_{j}(y)\) and \(f_{j}(x)\) and \(f_{j}(y)\) are comparable. So \(f_{j}(x+y)=f_{j}(\min(x,y))=f_{j}(x)=\min(f_{j}(x),f_{j}(y))=f_{j}(x)+f_{j}(y)\). Finally as \(1\) is the top element \(x\leq 1\) for every \(x\), thus \(f_{j}(1)\leq 1\). But \(f_{j}\) is closure, hence \(1\leq f_{j}(1)\). Therefore \(f(1)=1\). Let us prove a version for mappings defined by two tuples from Section 3. We notice, as in Proposition 4.6, that the disjointness hypothesis is for simplify the proof. We follow the same notation and definition of strictily not absorbing given in the previous section. Also notice that we show the result for monoids (not partial monoids). **Proposition 5.4**.: _Let \((L,+)\) be a monoid and assume that \(\{\vec{a}\}\cap\{\vec{b}\}=\emptyset\). If \(\vec{b}\) is an absorbing element and \(\vec{a}\) is strictily not absorbing for \(+\), then mapping \(f_{\vec{a},\vec{b}}\) is an homomorphism._ Proof.: As \(\vec{b}\) is absorbing: \[f_{\vec{a},\vec{b}}(x)+f_{\vec{a},\vec{b}}(y)=\begin{cases}\vec{b}+\vec{b}= \vec{b}&\text{if }x=y=a\\ x+\vec{b}=\vec{b}&\text{if }x\neq a;y=a\\ \vec{b}+y=\vec{b}&\text{if }x=a;y\neq a\\ x+y&\text{if }x\neq a;y\neq a\end{cases}\] As \(\vec{a}\) is strictily not absorbing for \(+\): \(f_{\vec{a},\vec{b}}(x)+f_{\vec{a},\vec{b}}(y)=x+y\) if and only if \(x\neq\vec{a}\) and \(y\neq\vec{a}\) if and only if \(x+y\neq\vec{a}\) if and only if \(f_{\vec{a},\vec{b}}(x+y)=x+y\). **Proposition 5.5**.: \((\bigcirc G(k),+,\leq)\) _is a lattice-ordered partial monoid._ Proof.: Suppose that \(x,y,z\in\bigcirc G(k)\) are comparable. Observe that \[x+(y\lor z)=(x+y)\vee(x+z),(y\lor z)+x=(y+x)\vee(z+x)\] and \[x+(y\wedge z)=(x+y)\wedge(x+z),(y\wedge z)+x=(y+x)\wedge(z+x)\] together with the fact that for \(x\leq y\): \[x+z\leq y+z,z+x\leq z+y\,.\] Notice in particular that \[x+(y\lor z)=min(x,y\lor z)=\begin{cases}min(x,\odot)&\text{if }y=\odot\text{ or }z=\odot\\ min(x,\otimes)&\text{else}\end{cases}=\begin{cases}x&\text{if }y=\odot\text{ or }z=\odot\\ \otimes&\text{else}\end{cases}\] equals to \[(x+y)\vee(x+z)=min(x,y)\lor min(x,z)=\begin{cases}\odot&\text{if }min(x,y)= \odot\text{ or }min(x,z)=\odot\\ \otimes&\text{else}\end{cases}\] In [12] we found that if for elements \(x,y\in\bigcirc G(k)\) there exist a least \(a\in\bigcirc G(k)\) such that \(x+a\geq y\), then the element \(a\) is denoted by \(y-x\). 
**Definition 5.6**.: A system \((A,+,\leq,0,\wedge,\vee,-)\) is called a _dually residuated lattice partial monoid_ (notation DRl-partial monoid) if 1. \((A,+,\leq,\wedge,\vee)\) is a lattice ordered partial monoid with \(0\); 2. for each \(x,y\in A\) there exist an element \(y-x\); 3. \(b+((a-b)\lor 0)\leq a\lor b\), \(((a-b)\lor 0)+b\leq a\lor b\) for each \(x,y\in A\); 4. \(x-x\geq 0\) for each \(x\in A\). **Proposition 5.7**.: \(\bigcirc G(k)\) _is a DRl-partial monoid._ Proof.: For every \(x=\ominus G,y=\Box G\in\bigcirc G(k)\) we define the element \(y-x=\bigcirc G\) as \[\bigcirc^{j}=\begin{cases}\odot&\text{if }\ominus^{j}=\otimes\text{ and } \Box^{j}=\odot\\ \otimes&\text{else}\end{cases}\] and prove condition 3. leaving condition 4. as an easy exercise. For every \(x=\ominus G,y=\Box G\in\bigcirc G(k)\) we have \[\ominus G+(\oslash G\lor 0)=\ominus G+(\oslash G\lor 1_{\odot})=\ominus G+1_{ \odot}=\ominus G\leq\ominus G\lor\Box G\] ### The deletion property We finish the paper with the _deletion property_, which is studied in [6] and also apply to our context. **Definition 5.8**.: A _left-regular band_ is a semigroup \((S,+)\) such that for every \(x\in S\): * \(x\) is idempotent * \(x+y+x=x+y\) The second condition is known as _the deletion property_ (see [6]) since it amounts to the fact that we can remove from every addition a summand that has appeared earlier without changing the value of the addition. **Lemma 5.9**.: _(\(\bigcirc G(k),+\)) is a left-regular band._ Proof.: That the deletion property is satisfied in \(\bigcirc G(k)\) is straightforward and says essentially that \[min(x,y,x)=min(y,x,y)=min(x,y)\] Observe that we could have defined the ordering into \(\bigcirc G(k)\) by means of \[x\leq y\text{ if and only if }x+y=y\] see [6]. ## Acknowledgment We thank the referee for carefully reading and valuable suggestions.
2309.07414
PromptASR for contextualized ASR with controllable style
Prompts are crucial to large language models as they provide context information such as topic or logical relationships. Inspired by this, we propose PromptASR, a framework that integrates prompts in end-to-end automatic speech recognition (E2E ASR) systems to achieve contextualized ASR with controllable style of transcriptions. Specifically, a dedicated text encoder encodes the text prompts and the encodings are injected into the speech encoder by cross-attending the features from two modalities. When using the ground truth text from preceding utterances as content prompt, the proposed system achieves 21.9% and 6.8% relative word error rate reductions on a book reading dataset and an in-house dataset compared to a baseline ASR system. The system can also take word-level biasing lists as prompt to improve recognition accuracy on rare words. An additional style prompt can be given to the text encoder and guide the ASR system to output different styles of transcriptions. The code is available at icefall.
Xiaoyu Yang, Wei Kang, Zengwei Yao, Yifan Yang, Liyong Guo, Fangjun Kuang, Long Lin, Daniel Povey
2023-09-14T03:43:07Z
http://arxiv.org/abs/2309.07414v3
# Promptast for Contextualized ASR with Controllable Style ###### Abstract Prompts are crucial to large language models as they provide context information such as topic or logical relationships. Inspired by this, we propose PromptASR, a framework that integrates prompts in end-to-end automatic speech recognition (E2E ASR) systems to achieve contextualized ASR with controllable style of transcriptions. Specifically, a dedicated text encoder encodes the text prompts and the encodings are injected into the speech encoder by cross-attending the features from two modalities. When using the ground truth text from preceding utterances as content prompt, the proposed system achieves 21.9% and 6.8% relative word error rate reductions on a book reading dataset and an in-house dataset compared to a baseline ASR system. The system can also take word-level biasing lists as prompt to improve recognition accuracy on rare words. An additional style prompt can be given to the text encoder and guide the ASR system to output different styles of transcriptions. The code is available at icefall1. Xiaoyu Yang, Wei Kang, Zengwei Yao, Yifan Yang, Liyong Guo, Fangjun Kuang, Long Lin, Daniel Povey Xiaomi Corp. Beijing, China {yangxiaoyu6, dpovey}@xiaomi.com Contextualized ASR, Prompts, Transducer, ## 1 Introduction External text information is commonly used to improve E2E ASR systems. Traditional approaches use external language models trained on the text corpora and re-rank the n-best hypotheses [1, 2] of ASR systems or perform shallow fusion [3] to modify the posterior predicted by the ASR system. Recently, various methods have been proposed to utilize contextual information to improve the accuracy of speech recognition [4, 5, 6, 7], namely contextualized ASR. Depending on the form of the context, most existing contextualized ASR systems fall into two categories: word-level context and utterance-level context. Word-level context biasing aim to improve the recognition accuracy of rare words such as contact names or application names. Sun et.al [5] used a tree-constrained pointer generator to boost the posterior of words in a context list if the prefix matches during decoding. Huang et.al [6] proposed a neural network-based method to improve rare-word recognition on various E2E ASR architectures with a context phrase prediction network. Unlike word-level context, utterance-level context carries more sophisticated information such as topic and logical relationships. Text embeddings encoded by BERT [8] are utilized [9] to improve ASR performance in multi-turn dialogues. Similarly, Chang et. al [7] improve long-form ASR [10] performance on neural transducer [11] using self-attentive embeddings from BERT. Li et.al [12] used LLaMa [13] as the decoder of a speech encoder to facilitate domain adaptation through text prompts such as titles and topic descriptions. In large language models, prompts are crucial to the correctness, fluency and format of the generated text [14, 15]. Inspired by this, we propose a novel E2E ASR framework named PromptASR, which utilizes text prompts for contextualized speech recognition. In specific, a dedicated text encoder is added to the E2E ASR system to ingest two types of prompt: content prompt and style prompt. The prior provides contextual information and the latter specifies the style of desired ASR transcriptions (e.g. casing and punctuation). The encoded prompts are injected to the ASR encoder via cross-attention with hidden speech representations. 
Unlike most existing approaches for contextualized ASR that are specialized for either word or utterance-level context, PromptASR is able to benefit from both of them. When decoded with the ground truth preceding text as content prompt, PromptASR achieves 21.9% and 6.3% relative word-error-rate (WER) reduction compared to a baseline ASR system on a book reading dataset and an in-house dataset. On a word-level context biasing task [4], PromptASR achieves 13.4% relative WER reduction even with biasing lists containing 1000 distractors. Finally, we show that the style prompt effectively guides the style of transcriptions. It is noteworthy that Whisper [16] mentioned having a similar prompting mechanism, but the details are not included in their paper. ## 2 Promptasr ### System Architecture The architecture of PromptASR is illustrated in Fig. 1. It consists of a pre-trained text encoder \({\it Enc}^{T}\), a speech encoder \({\it Enc}^{A}\) and an ASR decoder \({\it Dec}^{A}\). The text encoder \({\it Enc}^{T}\) processes prompts and generates text embeddings. \({\it Enc}^{A}\) consists of N transformer-like layers, each with a cross-attention module (with residual connection) placed after the self-attention module. Each layer receives not only the acoustic embed dings, but the text embeddings encoded by \(\textit{Enc}^{T}\). The fusion between the text modality and speech modality is achieved by cross attention, where the text embeddings serve as query and acoustic hidden states serve as key. The whole system can be trained with any ASR objective functions. ### Prompts Two types of prompts are defined in PromptASR: content prompt and style prompt. Content prompt should contain semantic and context-related information, which are usually in the form of sentences or a list of rare-words to be boosted. Most existing ASR systems produce normalized transcriptions, requiring inverse text normalization for production scenarios. Motivated by this, we would like the model to output different styles of transcription given different style prompts. The style prompt should indicate the desired style of the ASR transcription, such as casing and punctuation. During training, the style of the target text should always match the style prompt. Two training samples of PromptASR with different styles are shown in Table 1. The tokenized content prompt \(\textbf{P}_{c}=\textbf{P}_{c,1},\cdots,\textbf{P}_{c,n}\)\(\textbf{P}_{c}\) and style prompt \(\textbf{P}_{s}=\textbf{P}_{s,1},\cdots,\textbf{P}_{s,m}\) are feed to the text encoder to produce prompt embeddings \(\mathcal{E}_{c}\in\mathbb{R}^{n\times c}\) and \(\mathcal{E}_{s}\in\mathbb{R}^{m\times c}\). To distinguish between content prompts and style prompts, a trainable style indicator vector \(v\in\mathbb{R}^{c}\) is added to the embeddings of the style prompts. The forward process of PromptASR is formulated in Eqn 1. \[\mathcal{E}_{c} =\textit{Enc}^{T}(\textbf{P}_{c}); \tag{1}\] \[\mathcal{E}_{s} =\textit{Enc}^{T}(\textbf{P}_{s})+v;\] \[\mathcal{G} =\textit{Enc}^{A}(\textbf{Concat}(\mathcal{E}_{c},\mathcal{E}_{s }),\textbf{X});\] \[\textbf{y} =\textit{Dec}^{A}(\mathcal{G}),\] where **X** is the input speech features and **y** is the output transcription. **Concat** is the operation of concatenating two tensors along time axis. To deal with situations where prompts are unavailable, both prompts are dropped out by a small probability during training so that the model learns to transcribe without prompts. 
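As a rough illustration of Eqn 1, the sketch below wires a text encoder, a stack of cross-attending speech-encoder layers and a decoder together in PyTorch. The module names, sizes and the choice of letting the acoustic frames query the prompt embeddings (prompt as key/value, as drawn in Fig. 1) are our assumptions for illustration only and are not taken from the released icefall implementation.

```python
import torch
import torch.nn as nn

class PromptedEncoderLayer(nn.Module):
    """One speech-encoder layer as sketched in Section 2.1: self-attention over the
    acoustic frames, then a residual cross-attention block in which the frames attend
    to the prompt embeddings (prompt as key/value, following the Fig. 1 caption)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, speech, prompt):            # speech: (B, T, C), prompt: (B, U, C)
        speech = speech + self.self_attn(speech, speech, speech)[0]
        speech = speech + self.cross_attn(speech, prompt, prompt)[0]
        return speech + self.ffn(speech)

def promptasr_forward(text_encoder, encoder_layers, decoder, P_c, P_s, v, X):
    """Forward process of Eqn 1: encode both prompts, add the style indicator v to the
    style-prompt embeddings, concatenate along the time axis and run Enc^A and Dec^A."""
    E_c = text_encoder(P_c)                       # (B, n, C) content-prompt embeddings
    E_s = text_encoder(P_s) + v                   # (B, m, C), v broadcast over positions
    prompt = torch.cat([E_c, E_s], dim=1)         # Concat(E_c, E_s)
    G = X                                         # input speech features
    for layer in encoder_layers:                  # Enc^A: N layers cross-attending the prompt
        G = layer(G, prompt)
    return decoder(G)                             # Dec^A produces the transcription y
```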
In real-life scenarios, the content prompts can be completely irrelevant to the current utterance, requiring the model to learn to ignore such prompts. Therefore, a small proportion of the content prompts within the mini-batch are exchanged to simulate this scenario for better robustness to irrelevant prompts. ## 3 Experiment Setup ### Dataset The open-sourced dataset Libriheavy [17] is chosen as one of the training sets of PromptASR, as each utterance in Libriheavy is provided with a ground truth transcription and its preceding text of length 1000 bytes. Casing and punctuation of both texts are preserved. The medium subset containing around 5000 hours transcribed book readings is adopted in this work. The official Libriheavy test-clean and test-other sets are adopted for evaluation, which are approximately 20% harder (in terms of word error rates) than the LibriSpeech [18] test-clean and test-other sets. Additional 2000 hours recordings of conversations and podcasts covering different topics are also collected from the National Public Radio (NPR). An official transcript with casing and punctuation is provided for each recording. The text-audio alignments are obtained based on the Levenshtein distance between the output of a pre-trained ASR system and the official transcript and verified by human experts. The 1000-byte-long preceding text for each utterance is also extracted according to the alignment. 18 hours of recordings are hold-out and form the NPR evaluation set. ### Model Selection The pre-trained BERT [8] model is selected as the text encoder as it captures contextualized information through the masked language modeling pre-training. In addition, we also pre-train two transformers using the BERT objective on the text data of Libriheavy or NPR. Note that there are no overlaps between the training and testing books/recordings. The parameters of the text encoder are frozen during training. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline Style Prompt & WITHOUT CASING OR PUNCTUATION \\ \hline Content Prompt & Welcome to the UEFA Champions League final! \\ \hline Reference text & TODAY’s MATCH IS BETWEEN REAL MADRID AND LIVERPOOL \\ \hline \hline Style Prompt & Mixed-cased English with punctuation \\ \hline Content Prompt & Welcome to the UEFA Champions League final! \\ \hline Reference text & Today’s match is between Real Madrid and Liverpool. \\ \hline \end{tabular} \end{table} Table 1: Two training samples with different style prompts. Figure 1: The architecture of PromptASR. The module in the dashed block is a transformer-like layer with cross-attention (other modules omitted). The text embeddings are injected as key/value pairs in the cross-attention module. The ASR system is a neural transducer with a Zipformer [19] speech encoder. A stateless decoder [20] seeing two previous tokens and a joint network is added and pruned-RNNT [21] loss is used as the training objective. Chunk-wise streaming [19] is adopted for training streaming ASR models. ### PromptASR Training Each utterance's preceding 1000 bytes are used as the content prompt during training. Two types of style transforms are pre-defined: **U**pper-**C**ased without punctuation (UC) and **M**ixed-**C**ased with **P**unctuation (MCP). A style is sampled for each utterance in the mini-batch and is applied to its style prompt and reference prompt. The **MCP** has a higher sampling probability (0.7) since it is more production-friendly. 
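A minimal sketch of the two style transforms and of the per-utterance sampling just described is given below; the exact text normalization used for the UC style is not spelled out in the paper, so the regular expression here is only an assumption.

```python
import random
import re

def to_uc(text: str) -> str:
    """UC style: upper-cased, punctuation stripped (apostrophes kept, as in the Table 1 example)."""
    return re.sub(r"[^\w\s']", "", text).upper()

def to_mcp(text: str) -> str:
    """MCP style: mixed casing and punctuation are kept unchanged."""
    return text

def sample_style(p_mcp: float = 0.7):
    """Per-utterance style sampling; MCP is drawn with probability 0.7."""
    return ("MCP", to_mcp) if random.random() < p_mcp else ("UC", to_uc)

# The sampled transform is applied consistently to the style prompt and to the reference text,
# so the training target always matches the requested style.
style_name, transform = sample_style()
print(style_name, "->", transform("Today's match is between Real Madrid and Liverpool."))
```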
The style prompt is a sub-string of content prompt from other samples in the same mini-batch. To construct word-list based content prompt, the number of total appearances of each word in the training set is counted and the words outside the most common 10000 words are regarded as rare words. A context list is formed for each training sample by picking up rare words present in this sample and adding 50-100 randomly distractors. Together with the preceding text, both types of content prompts are used by probability of 0.5. During training, SpecAug [22] and MUSAN [23] are adopted to augment the training data. Speed perturbation is not used. The 80-D mel filter bank features are used as the input acoustic features. Byte-pair encoding [24] with byte fallback is used as modelling units and the vocabulary size is 500. The model is trained for 50 epochs on Libriheavy medium subset, and 60 epochs on the NPR dataset. The checkpoints of last ten epochs are averaged and beam search of size 4 is used for inference. ### Metrics Two types of evaluation scenarios are investigated. First, each test sample is given its ground truth preceding 1000 bytes as content prompt during decoding. This can be seen as the performance upper bound of the PromptASR model when dealing with utterance-level content prompt. However, the ground truth preceding text is not always available in real-life scenarios and the model has to rely on the erroneous decoding results of the previous utterances. Therefore, a 15-hour long-form recordings test set is also collected from NPR to validate the long-form ASR performance. The long recordings are split into individual sentences without overlap. The average length of the recording is 20 minutes. The decoding results from previous sentences are concatenated to construct the content prompt for the current utterance and a fixed style prompt irrelevant to the recording is used. To test the capability of PromptASR on word-level context biasing, the biasing list for LibriSpeech [18] test sets in [4] is used. Each utterance in the test set is provided with a biasing word list containing biasing words and distractors. Note that the biasing lists of some utterances are purely distractors. ## 4 Experiment Results ### Utterance-level Context Biasing Experiments are first carried out to validate the benefit of content prompts, where the ground truth preceding text is used as content prompt during decoding. Baseline neural transducer models without text encoder are trained with UC and MCP styles separately. The word-error-rates (WERs) are shown in Table 2 and the following points can be drawn. First, PromptASR model significantly improves the WERs owing to the contextual information from the content prompts. For non-streaming models, relative WER reductions (WERs) of 21.9% and 6.8% are achieved compared to the baseline (**B1-UC**) on the Libriheavy test-other and NPR with the in-domain text encoder using style UC. Similar relative WERRs of 20.0% and 7.8% are observed for streaming models. If no content prompt is given, PromptASR still achieves comparable results as the baseline model, suggesting that the model is robust to decode without any prompts. Second, pre-training the text encoder on in-domain data further improves the performance on the in-domain test sets. Finally, the style change in PromptASR does not affect the WER. After normalizing the transcript with UC style, decoding with MCP style yields similar WERs as using UC style. 
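Before turning to the long-form results, the evaluation protocol described in the metrics subsection, in which the concatenated hypotheses of previous sentences are fed back as the content prompt, can be summarized by the following loop; `asr_decode` and the byte-level truncation are illustrative stand-ins rather than the actual inference code.

```python
def decode_long_recording(segments, asr_decode, style_prompt, max_history_bytes=1000):
    """Long-form decoding loop from the metrics description: each sentence is decoded with
    the concatenated hypotheses of the preceding sentences as content prompt.
    `asr_decode(audio, content_prompt, style_prompt)` is a generic stand-in for the model;
    the byte truncation mirrors the 1000-byte prompts used during training."""
    history, hypotheses = "", []
    for audio in segments:
        content_prompt = history[-max_history_bytes:]     # only the most recent context is kept
        hyp = asr_decode(audio, content_prompt, style_prompt)
        hypotheses.append(hyp)
        history = (history + " " + hyp).strip()
    return hypotheses
```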
### Long-form ASR The performance of PromptASR on long-form ASR is investigated and results are shown in Fig 2. The non-streaming PromptASR model with in-domain text encoder trained on NPR is decoded in UC style with either erroneous decoding results (red) or the ground truth transcripts (blue) of the history utterances as content prompt. For reference, the WER of a baseline ASR system **B1-UC** from Table 2 is also plotted (black). Though both content prompts reduce the WER compared to the baseline, the gain from using erroneous decoding \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & \begin{tabular}{c} Content \\ Prompt \\ \end{tabular} & \begin{tabular}{c} Style \\ Prompt \\ \end{tabular} & \begin{tabular}{c} Libriheavy \\ clean \\ \end{tabular} & \multirow{2}{*}{NPR} \\ \cline{3-5} results of preceding utterances is smaller and it fails to further improve the WER with a history of longer than 4 utterances. This could be caused by the wrong prediction of named entities or keywords, which undermines the benefit from context information for PromptASR. ### Word-level Context Biasing The potential of applying PromptASR for rare-word recognition is investigated. All the models in Table 3 are trained on Libriheavy medium subset. Note that only the in-domain text encoder can be used for this task, as the official release of BERT has a maximum input length constraint of 512 tokens. **B1-UC** is the non-streaming baseline model from Table 2. **M1** and **M2** are PromptASR models sharing the same text encoder pre-trained on the Libriheavy large subset (disjoint from LibriSpeech test sets). **M1** only uses the previous utterances as content prompt during training, while **M2** additionally uses content prompts constructed from rare-words list as described in Sec 3.3. The WERs with different sizes of context lists on the LibriSpeech biasing task are shown in Table 3. All three models yield similar performance without biasing lists. When decoded with biasing lists, **M1** fails to benefit from the word-level context. However, only at the cost of slight WER degradation on the utterance-level context biasing (last two columns), constructing content prompts from rare-words list (**M2**) during training significantly helps word-level contextual biasing, achieving relative WERRs of 29.7% and 21.7% at \(N=100\) compared to the baseline model. However, as the number of distractors increases, the performance gain decreases very quickly and the relative WERRs at \(N=1000\) are 13.4% and 8.6%. One possible reason is that the context list used during training is shorter than 100 words, thus the model did not generalize well to larger \(N\). Despite this, the relative improvements are still comparable with the existing neural network-based context biasing methods [6], which yields 10.6% and 12.6% relative WERRs with \(N=1000\). ### Output Examples A few examples of the output of PromptASR model is shown below. In Table 4, the first block shows an example where the model outputs transcriptions with accurate casing and punctuations using MCP style. The second block shows that the PromptASR model corrects the ASR output with the help of content prompt - it learns from the content prompt that the topic is about "horse", and guides the model to output the in-domain word "phaeton" instead of a made-up word "faithon". ## 5 Conclusion We propose the PromptASR framework, which performs contextualized ASR with controllable style of transcriptions. 
By passing either the ground truth transcript or decoded transcript of previous utterances as content prompt, the PromptASR model utilizes the cross-utterance context and improves the WER compared to a baseline ASR system. If the content prompt is a list of biasing words, the PromptASR model can also perform word-level biasing and achieve significant WER reduction on biasing words while being robust to distractors. The model can switch the output style (e.g. casing and punctuation) given different style prompts. In the future, we hope to explore more efficient utilization of text embeddings to reduce the computational cost. We are also very interested in how to incorporate large language models into PromptASR. \begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|} \hline style prompt & Mixed-cased English with punctuations. \\ \hline PromptASR output & “Do you believe in some education?” asked Mary Taylor. \\ \hline \hline content prompt &... I knew how hard it was upon slow- paced **horses** to be put with fast ones;... \\ \hline Baseline output & She was often used in the faith, and was very much liked by some of the ladies. \\ \hline PromptASR output & She was often used in the phaeton, and was very much liked by some of the ladies. \\ \hline \end{tabular} \end{table} Table 4: Outputs of PromptASR versus normal ASR system. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multirow{2}{*}{Model} & \multicolumn{8}{c|}{Word-level LibriSpeech Biasing} & \multicolumn{3}{c|}{Uterance-level} \\ \cline{2-11} & \multicolumn{2}{c|}{No Biasing} & \multicolumn{2}{c|}{N=0} & \multicolumn{2}{c|}{N=100} & \multicolumn{2}{c|}{N=500} & \multicolumn{2}{c|}{N=1000} & \multicolumn{2}{c|}{Librieavy Biasing} \\ \cline{2-11} & clean & other & clean & other & clean & other & clean & other & clean & other \\ \hline **B1-UC** & 2.46 & 5.11 & 2.46 & 5.11 & 2.46 & 5.11 & 2.46 & 5.11 & 2.46 & 5.11 & 3.0 & 6.72 \\ \hline **M1** & 2.45 & 5.09 & 2.37 & 4.90 & 2.49 & 5.36 & 2.62 & 5.71 & 2.69 & 5.82 & 2.48 & 5.25 \\ \hline **M2** & 2.43 & 5.07 & **1.73** & **4.0** & **1.73** & **4.07** & **2.0** & **4.45** & **2.13** & **4.67** & 2.59 & 5.55 \\ \hline \end{tabular} \end{table} Table 3: WERs (%) with different context list size on the LibriSpeech biasing task. \(N\) is the number of distractors added to the context list. The WERs of the Libriheavy utterance-level biasing test are also shown. Figure 2: WERs (%) of baseline model and PromptASR models after normalization on the long-form ASR task.
2309.10168
Few-Shot Adaptation for Parsing Contextual Utterances with LLMs
We evaluate the ability of semantic parsers based on large language models (LLMs) to handle contextual utterances. In real-world settings, there typically exists only a limited number of annotated contextual utterances due to annotation cost, resulting in an imbalance compared to non-contextual utterances. Therefore, parsers must adapt to contextual utterances with a few training examples. We examine four major paradigms for doing so in conversational semantic parsing i.e., Parse-with-Utterance-History, Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To facilitate such cross-paradigm comparisons, we construct SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with additional annotations. Experiments with in-context learning and fine-tuning suggest that Rewrite-then-Parse is the most promising paradigm when holistically considering parsing accuracy, annotation cost, and error types.
Kevin Lin, Patrick Xia, Hao Fang
2023-09-18T21:35:19Z
http://arxiv.org/abs/2309.10168v1
# Few-Shot Adaptation for Parsing Contextual Utterances with LLMs ###### Abstract We evaluate the ability of semantic parsers based on large language models (LLMs) to handle contextual utterances. In real-world settings, there typically exists only a limited number of annotated contextual utterances due to annotation cost, resulting in an imbalance compared to non-contextual utterances. Therefore, parsers must adapt to contextual utterances with a few training examples. We examine four major paradigms for doing so in conversational semantic parsing _i.e.,_ Parse-with-Utterance-History, Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To facilitate such cross-paradigm comparisons, we construct SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with additional annotations. Experiments with in-context learning and fine-tuning suggest that Rewrite-then-Parse is the most promising paradigm when holistically considering parsing accuracy, annotation cost, and error types. ## 1 Introduction A key challenge in conversational semantic parsing (CSP) is handling _contextual_ utterances (_i.e.,_ utterances that can only be understood with its context) by mapping them to _non-contextual_ programs that can be fulfilled by an executor without relying on the dialogue state. Many approaches have been proposed, _e.g.,_ directly mapping the contextual utterance with utterance history to a non-contextual program Suhr et al. (2018), or mapping to an intermediate contextual program which is then resolved (usually in a deterministic manner) to a non-contextual program Semantic Machines et al. (2020); Cheng et al. (2020). In these prior works, there is often an assumption of having a substantial corpus of annotated data encompassing both non-contextual utterances and contextual utterances for training a parser. However, in practice, it is more expensive to collect and annotate contextual utterances compared to non-contextual utterances, due to the dependency on the conversation history. Furthermore, annotating non-contextual utterances usually precedes annotating contextual utterances. To reflect such real-world settings, we study few-shot adaptation for parsing contextual utterances, where we first build a parser using a large number of annotated non-contextual utterances, and then adapt it for parsing contextual utterances using a few (or even zero) annotated contextual utterances. Recent work has shown that large language models (LLMs) are capable of semantic parsing using a few examples Shin et al. (2021); Shin and Van Durme (2022). Hence, in this work, we conduct a focused study on few-shot adaptation using LLMs for CSP. Specifically, we consider four major paradigms: Parse-with-Utterance-History, Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. One challenge of carrying out a comparative study on these paradigms is the lack of annotated data, since existing CSP datasets such as SMCalFlow Semantic Machines et al. (2020) and CoSQL Yu et al. (2019) are often annotated based on a single paradigm. Therefore, we construct a new dataset, SMCalFlow-EQ, derived from a subset of SMCalFlow dialogues with annotations for all four paradigms. Our experiments consider both in-context learning (ICL) using GPT-3.5 and fine-tuning (FT) using T5-base 220M Raffel et al. (2020) for building and adapting parsers. ICL typically has lower accuracy compared to FT, although the two are not strictly comparable as they use different models. 
The only exception is Parse-with-Reference-Program, suggesting that GPT-3.5 is effective at editing programs using natural language. Overall, we find Rewrite-then-Parse to be the most promising approach, as it achieves similar accuracy to other paradigms in both ICL and FT experiments, while requiring only a few annotated examples for to de velop a query rewriter and no additional program annotations. We release code and data to facilitate future work on parsing contextual utterances.1 Footnote 1: [https://github.com/microsoft/few_shot_adaptation_for_parsing_contextual_utterances_with_llms](https://github.com/microsoft/few_shot_adaptation_for_parsing_contextual_utterances_with_llms) ## 2 Background: LLM-Based Parsing Following Shin et al. (2021) and Roy et al. (2022), we formulate parsing as a constrained decoding problem, where an LLM is used to predict the next token and a context-free grammar (CFG) is used to validate the predicted token. A program is represented as a sequence of S-expression tokens \(y_{1}y_{2}\ldots y_{L}\). The space of all valid S-expressions is governed by a CFG denoted by \(\mathcal{G}\), which can be automatically derived from function definitions and types used in the domain (see Appendix A). To generate the program for a user utterance, we first feed the LLM with the user utterance and necessary context information as a sequence of tokens. Then the S-expression of the program is generated incrementally. At each decoding step \(l\), we only keep the partial prefix sequence \(y_{1}y_{2}\ldots y_{l}\) if it is allowed by \(\mathcal{G}\). This validation can be efficiently performed via Earley's parsing algorithm (Earley, 1970) using the parsing state of the partial sequence \(y_{1}y_{2}\ldots y_{l-1}\). In this paper, we consider both ICL and FT for constructing LLM-based parsers. For ICL, we prompt the pre-trained LLM with \(K_{\text{ICL}}\) demonstration examples retrieved via BM25 (Robertson and Walker, 1994; Robertson and Zaragoza, 2009), following Rubin et al. (2022) and Roy et al. (2022). For FT, we continue training the LLM on \(K_{\text{FT}}\) demonstration examples, producing a new model to be used during constrained decoding. ## 3 Few-Shot Adaptation In this paper, we assume there are a large number (\(M\)) of annotated non-contextual utterances, \(\mathcal{D}=\{(\mathbf{x}^{(1)},\mathbf{y}^{(1)}),\ldots,(\mathbf{x}^{(M)},\mathbf{y}^{(M)})\}\), where \(\mathbf{x}^{(i)}\) denotes the \(i\)-th non-contextual utterance in the dataset, \(\mathbf{y}^{(i)}\) is the corresponding non-contextual program, and \(M\) is the number of annotated examples. These examples are used to derive a grammar \(\mathcal{G}_{1}\) and build the parser \(\mathcal{P}_{1}\) for non-contextual utterances via either ICL or FT. For a contextual utterance \(\mathbf{u}_{t}\) at the \(t\)-th turn of a dialogue, the goal is to obtain the non-contextual program \(\mathbf{y}_{t}\) using the _utterance history_\(\mathbf{h}_{t}=[\mathbf{u}_{<t}]\), the corresponding programs \(\mathbf{y}_{<t}\), and/or other information recorded in the dialogue state. Figure 1 illustrates four canonical paradigms for parsing contextual utterances. For each of these paradigms, we would like to obtain a new parser by adapting from the base parser \(\mathcal{P}_{1}\) using \(N\) demonstration examples, where \(N\ll M\). 
### Parsing Paradigms **Parse-with-Utterance-History**: In this paradigm, the parser directly predicts \(\mathbf{y}_{t}\) by conditioning on the contextual utterance \(\mathbf{u}_{t}\) and its history \(\mathbf{h}_{t}\). This paradigm has been used in contextual semantic parsing (Zettlemoyer and Collins, 2009; Suhr et al., 2018) and belief state tracking (Mrksic et al., 2017). **Parse-with-Reference-Program**: This paradigm assumes that the salient additional context to parse \(\mathbf{u}_{t}\) is captured by a _reference program_, which is a non-contextual program to be revised and typically that from the preceding turn, \(\mathbf{y}_{t-1}\). The parsing process can be viewed as editing the reference program based on the contextual utterance which directly yields \(\mathbf{y}_{t}\). Zhang et al. (2019) employs a similar strategy by using a copy operation during parsing to copy tokens from the reference program for text-to-SQL. **Parse-then-Resolve**: This paradigm divides the task into two steps, leading to a modularized system with a parser followed by a resolver. \(\mathbf{u}_{t}\) is first mapped to an intermediate program \(\tilde{\mathbf{y}}_{t}\) which contains specialized contextual symbols. These context Figure 1: Four canonical paradigms of conversational semantic parsing for contextual utterances. tual symbols (marking ellipsis or coreference) are resolved deterministically using the dialogue state determined from \(\mathbf{y}_{<t}\), resulting in the final non-contextual prediction \(\mathbf{y}_{t}\). Several recent datasets for CSP have adopted this paradigm (Semantic Machines et al., 2020; Cheng et al., 2020). **Rewrite-then-Parse**: This paradigm modularizes the system using a rewriter followed by a parser. The history \(\mathbf{h}_{t}\) and contextual utterance \(\mathbf{u}_{t}\) are first rewritten into a single non-contextual utterance \(\mathbf{u}^{\prime}_{t}\) Then, \(\mathbf{u}^{\prime}_{t}\) is parsed to \(\mathbf{y}_{t}\) by a single-turn semantic parser. This paradigm is closely related to incomplete utterance rewriting (Liu et al., 2020) and conversational query rewriting (Rastogi et al., 2019; Yu et al., 2020; Chen et al., 2020; Song et al., 2020; Inoue et al., 2022; Mao et al., 2023) though the parsing step is usually unnecessary or overlooked in these related studies. Using this paradigm, the rewriter and the parser can be independently developed and maintained. ### Adaptation via ICL For ICL, we use GPT-3.5 and the following prompt template provided by Shin et al. (2021) and Roy et al. (2022), where placeholders {X1}, {X2},... are demonstrations input, {Y1}, {Y2},... are demonstrations output, and {X'} is the test input. ``` Let's translatewhatahumanusersysintowhatscomputermightsay. Human:(X1) Computer:(Y1) Human:(X2) Computer:(Y2)... Human:(X') Computer: ``` For Parse-with-Utterance-History, Parse-with-Reference-Program, and Parse-then-Resolve, the input placeholders are respectively instantiated as \(\mathbf{h}\mid\mathbf{u}\), \(\mathbf{r}\mid\mathbf{u}\), and \(\mathbf{u}\), where the character \(\mid\) is used as the separator. The output placeholders are all instantiated by non-contextual programs \(\mathbf{y}\), except for Parse-then-Resolve which uses \(\tilde{\mathbf{y}}\) instead. The test input placeholder follows the same form as demonstration input placeholders. 
New CFG rules are derived from the program annotations of contextual utterances, _i.e._, \(\tilde{\mathbf{y}}\) and \(\mathbf{y}\), yielding two new grammars \(\mathcal{G}_{\alpha}\) and \(\mathcal{G}_{\beta}\), respectively. During constrained decoding, the joint grammar \(\mathcal{G}_{1}\cup\mathcal{G}_{\alpha}\) is used for Parse-then-Resolve, whereas \(\mathcal{G}_{1}\cup\mathcal{G}_{\beta}\) is used for the other three paradigms. In other words, the adaptation only changes the set of demonstration examples used during prompt instantiation and augments the CFG used during constrained decoding. For Rewrite-then-Parse, we can re-use the same grammar \(\mathcal{G}_{1}\) and parser \(\mathcal{P}_{1}\) used for non-contextual utterances, without any annotated programs for contextual utterances. ### Adaptation via FT For FT, the parser \(\mathcal{P}_{1}\) for non-contextual utterances uses an LLM \(\mathcal{M}_{1}\) fine-tuned from T5-base 220M (Raffel et al., 2020). To adapt this parser for contextual utterances, we continue fine-tuning \(\mathcal{M}_{1}\) on annotated contextual utterances, except for Rewrite-then-Parse which uses \(\mathcal{P}_{1}\) itself. Similar to ICL, different forms of token sequences are used for different paradigms, _i.e., \(\mathbf{h}\mid\mathbf{u}\mid\mathbf{y}\)_ for Parse-with-Utterance-History, \(\mathbf{r}\mid\mathbf{u}\mid\mathbf{y}\) for Parse-with-Utterance-History, and \(\mathbf{u}\mid\tilde{\mathbf{y}}\) for Parse-then-Resolve. The new grammar is constructed identically to ICL as well. ### Data Annotation Effort An important axis when comparing different parsing paradigms is the data annotation effort. For Parse-with-Utterance-History, annotating the non-contextual program for a contextual utterance can be a cognitively demanding task, as it needs to account for the full utterance history. Data annotation for Parse-with-Reference paradigm is similar to the Parse-with-Utterance-History, though it may be less cognitively intensive because the human annotator only needs to make a a few edits as opposed to performing a full parse. Compared with Parse-with-Utterance-History, annotations of intermediate programs in the Parse-then-Resolve paradigm are much less context-dependent and more concise, which potentially makes the parser more data efficient. However, this comes at a cost of placing a greater burden on the resolver, which uses custom-designed contextual symbols based on the domain; their expressiveness can greatly affect the quality of the annotations and the complexity of the resolver. Finally, collecting annotations for the the utterance rewriting task is relatively easy and domain independent compared to collecting annotations for parsers which often requires learning a domain-specific language. Experiments ### Data Existing CSP datasets are often annotated based on only one or two paradigms, making it difficult to compare across different paradigms comprehensively. To address this challenge, we construct a dataset SMCalFlow-EventQueries (SMCalFlow-EQ) derived from a subset of SMCalFlow (Semantic Machines et al., 2020). It contains 31 training and 100 test instances in total. 
Each instance consists of a contextual user utterance \(\mathbf{u}\) during an event-related query (_e.g., "what about Tuesday?"_), the corresponding contextual/intermediate program \(\tilde{\mathbf{y}}\) and non-contextual program \(\mathbf{y}\), the utterance history \(\mathbf{h}\), the reference program \(\mathbf{r}\), and the rewritten non-contextual utterance \(\mathbf{u}^{\prime}\). The programs (\(\mathbf{y}\), \(\tilde{\mathbf{y}}\), \(\mathbf{r}\)) are semi-automatically derived from the original SMCalFlow annotations. The rewritten non-contextual utterances \(\mathbf{u}^{\prime}\) are manually annotated by domain experts. See Appendix B for details of the dataset construction and examples. We additionally use 8892 training and 100 test instances of non-contextual utterances (_e.g., "do I have any meetings scheduled after Thursday?"_), each paired with their corresponding non-contextual programs, semi-automatically derived from SMCalFlow as well. These instances are used to construct and evaluate the base parser \(\mathcal{P}_{1}\) for non-contextual utterances. ### Experimental Results For Parse-with-Reference-Program, we use the oracle reference program, which is the non-contextual program of the preceding turn.2 For Parse-then-Resolve, we assume an oracle resolver is available, which in practice can be implemented as a rule-based system. The rewriter used for Rewrite-then-Parse is implemented via GPT-3.5, and details are provided in Appendix D. We also consider using the oracle rewritten utterances annotated in the contextual subset of SMCalFlow-EQ. Footnote 2: It is possible that the reference program is from an earlier turn or does not appear in the history, though the contextual subset does not contain such examples. We evaluate the program exact match accuracy on the SMCalFlow-EQ test set for all paradigms. Table 1 presents the experimental results. Across all paradigms, FT achieves higher exact match than ICL by 7.9% to 29.4% absolute gain. For FT, Rewrite-then-Parse with oracle rewritten utterances performs the best. There is no significant difference among other approaches, including Rewrite-then-Parse using the GPT-3.5 rewriter which does not require additional fine-tuning. For ICL, Parse-with-Reference-Program performs the best, suggesting it is easier for GPT-3.5 to softly edit a program than parsing directly from natural language. Rewrite-then-Parse using oracle rewritten utterances is still better than the remaining approaches. By comparing the results of Rewrite-then-Parse, it is clear that improving the rewriter can lead to a corresponding improvement in parsing accuracy. We manually examine incorrect predictions made by parsers for contextual utterances and identify common error categories: incorrect top-level program types, alternative parses for the input, extra constraints, missing constraints, and constraints with incorrect arguments/functions (see Table A5 for examples). For ICL, the most common error type is incorrect function calls. 30% of the errors made by Parse-with-Reference-Program are due to incorrect function use. In particular, the model struggles with predicting rare functions such as negations, potentially because the only knowledge of the target language is from the contextual subset of SMCalFlow-EQ. For FT, 33% of the errors in Parse-then-Resolve are from incorrect top-level program types. 
Introducing new symbols increases the program space, especially different intermediate programs that have similar functions, suggesting that the design of these specialized contextual symbols is crucial. For Parse-with-Utterance-History, we find that 40% of the errors come from missing constraints, indicating that jointly learn parsing and consolidating constraints from multiple turns is challenging for the parsing model. For Rewrite-then-Parse, 55% of the errors are due to incorrect arguments, and 45% are due to differences in capitalization (_e.g.,_ the \begin{table} \begin{tabular}{l l l} \hline \hline **Paradigm** & **ICL** & **FT** \\ \hline Parse-with-Utterance-History & 51.8 & 81.2 \\ Parse-with-Reference-Program & 86.1* & 78.2 \\ Parse-then-Resolve & 70.5* & 82.4 \\ Rewrite-then-Parse & 65.3* & 75.2 \\ Rewrite-then-Parse (oracle) & 76.2* & 94.0* \\ \hline \hline \end{tabular} \end{table} Table 1: Exact match accuracy on SMCalFlow-EQ test set. For both ICL and FT, we test each paradigm against the corresponding Parse-with-Utterance-History predictions using McNemar’s test and show statistically significant (\(p<0.05\)) results with *. rewriter converts a lowercase name to uppercase) which is arguably less critical. We also examine the overall parsing accuracy on the joint test set of contextual and non-contextual utterances. We use a binary classifier which takes the user utterance as input and determines whether to use the parser for non-contextual utterances or the parser for contextual utterances. The classifier is obtained by fine-tuning the RoBERTa-base Liu et al. (2019) to on SMCalFlow-EQ utterances. The overall classification accuracy is 95.5%. The results are summarized in Table 2. We use exact match accuracy as the evaluation metric, where the prediction is treated as correct only when classification and parsing are both correct. ## 5 Conclusion We study a real-world CSP setting, _i.e.,_ few-shot adaptation for parsing contextual utterances with LLMs, and compare four different paradigms using both ICL and FT. To facilitate the study, we construct a new dataset, SMCalFlow-EQ with annotations for all paradigms. Experiments show that ICL with GPT-3.5 usually underperforms FT with T5-base except for Parse-with-Reference-Program, suggesting GPT-3.5 is good at editing programs via natural language in these data conditions. Overall, Rewrite-then-Parse stands out as a promising approach for future development of LLM-based CSP, as it performs as well as other paradigms but require only a few annotated exampels for the rewriter and no additional program annotation. ## 6 Limitations Due to the cost of collecting program annotations for all paradigms, the size of the SMCalFlow-EQ test set is relatively small and we only study dialogues from SMCalFlow. While the experiments results are informative under significance test, it would be useful for future work to conduct a similar study on larger and diverse datasets. The LLMs used in this work are pre-trained primarily on English, and the SMCalFlow-EQ also only contains English utterances. It would be interesting to study the few-shot adaptation problem on other languages. ## Acknowledgements We would like to thank Benjamin Van Durme, Matt Gardner, Adam Pauls, and Jason Wolfe for valuable discussions on this paper.
2309.05203
From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery
Molecule discovery serves as a cornerstone in numerous scientific domains, fueling the development of new materials and innovative drug designs. Recent developments of in-silico molecule discovery have highlighted the promising results of cross-modal techniques, which bridge molecular structures with their descriptive annotations. However, these cross-modal methods frequently encounter the issue of data scarcity, hampering their performance and application. In this paper, we address the low-resource challenge by utilizing artificially-real data generated by Large Language Models (LLMs). We first introduce a retrieval-based prompting strategy to construct high-quality pseudo data, then explore the optimal method to effectively leverage this pseudo data. Experiments show that using pseudo data for domain adaptation outperforms all existing methods, while also requiring a smaller model scale, reduced data size and lower training cost, highlighting its efficiency. Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery. Our code and data are available at https://github.com/SCIR-HI/ArtificiallyR2R.
Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin
2023-09-11T02:35:36Z
http://arxiv.org/abs/2309.05203v3
From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery ###### Abstract Molecule discovery serves as a cornerstone in numerous scientific domains, fueling the development of new materials and innovative drug designs. Recent developments of in-silico molecule discovery have highlighted the promising results of cross-modal techniques, which bridge molecular structures with their descriptive annotations. However, these cross-modal methods frequently encounter the issue of data scarcity, hampering their performance and application. In this paper, we address the low-resource challenge by utilizing artificially-real data generated by Large Language Models (LLMs). We first introduce a retrieval-based prompting strategy to construct high-quality pseudo data, then explore the optimal method to effectively leverage this pseudo data. Experiments show that using pseudo data for domain adaptation outperforms all existing methods, while also requiring a smaller model scale, reduced data size and lower training cost, highlighting its efficiency. Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery. ## 1 Introduction Molecule discovery plays a critical role in numerous scientific domains including chemistry [23, 14], pharmacology [17], and materials science [26]. However, traditional molecule design methods are frequently faced with challenges such as high costs, lengthy development processes, and limited success rates. Introducing a new drug to the market, for instance, might demand over a billion dollars and more than a decade of development [1]. With the advent of artificial intelligence (AI), innovative cross-modal methods are ushering in new ways to synthesize and analyze complex molecular structures, enhancing efficiency and reshaping the fields of computational chemistry and material science. Edwards et al. (2022) proposed a novel approach to directly translate molecules to corresponding captions and generate molecular structures from natural language text, shown in Figure 1. This cross-modal method heralds a future in which the design and study of specialized molecules can be achieved through simple natural language sentences. Various attempts have been made to resolve these tasks. MoIoT5 [1] uses SMILES (Simplified Molecular Input Line Entry System) [13] and molecule description respectively for masked language modeling (MLM) [15] as pre-training. Liu et al. (2023) pre-train models with causal language modeling (CLM) on the sequences that blend biomedical literature with molecular structural representations, derived from replacing molecular entities with their SMILES representations. However, these studies are limited by the scarcity of parallel molecule-description pairs, rendering direct sequence-to-sequence training unfeasible. The effectiveness of sequence-to-sequence (seq2seq) training is evident in Christofidellis et al. (2023), where the annotated data from the downstream dataset is incorporated for pre-training, albeit in a significantly lower ratio compared to the unannotated data. The primary bottleneck is the annotation process itself: the annotation of these pairs demands specialized knowledge in molecular chemistry, rendering large-scale human annotation both expensive and difficult. 
Inspired by the great success of LLMs in natural language processing (NLP) and related fields [1, 13, 14], we propose to mitigate the low-resource difficulty by using artificially-real data generated by LLMs. Unlike "real data", which originates from genuine experimental or observational sources, Figure 1: Illustration of translation between molecule and description in cross-modal molecule discovery. this "pseudo data" or "artificially-real data" is crafted artificially. While it mirrors the format of real data, its content does not depict actual real-world observations, making it potentially unsuitable for direct real-world applications. Our approach begins by creating a comprehensive pseudo dataset intended for seq2seq pre-training. We collect 1M unlabeled molecules from PubChem and use the in-context learning ability of LLMs to generate descriptive captions for these molecules. To ensure the integrity and diversity of this pseudo data, we adopt a retrieval-based one-shot prompting strategy during generation. Through this way, we construct the first artificially-real dataset, PseudoMD-1M, consisting of 1,020,139 pseudo molecule-description pairs. Based on this dataset, we explore the optimal method to leverage pseudo data. We propose two primary methods: 1) using pseudo data exclusively during pre-training for domain adaptation, and 2) integrating pseudo data with real data during fine-tuning as a data augmentation technique. To offer a comprehensive evaluation, we further compile DrugBank-23, a novel dataset derived from a different data source than existing datasets. In summary, our contributions are as follows: * We are the first to incorporate LLMs for low-resource molecule discovery. Using artificially-real data generated by LLMs, we are able to mitigate the data scarcity for the tasks. We release PseudoMD-1M, the first artificially-real dataset for cross-modal molecule discovery, which is 33\(\times\) larger than existing real datasets. * We explore the effective construction and utilization of pseudo data. We specifically investigate two principal techniques, including using pseudo data as domain adaptation and data augmentation. We conduct comprehensive experiments on existing datasets, and provide our new dataset called DrugBank-23, which adds a novel data source compared to current datasets. * Experimental results show that despite smaller model size and amount of pre-training data, models using artificially-real data as domain adaptation outperform all prior methods. Furthermore, our method shows continuous improvement with increasing volumes of pseudo data, underscoring its promising future applications. ## 2 Related Work ### Cross-modal Molecule Discovery With the advancement of in-silico molecule discovery methods, the field of molecule exploration is undergoing a transformative shift away from its resource-intensive and costly origins [14, 15]. Edwards, zhai2019multiresolution introduce a new task Text2Mol, which uses descriptions as search queries to retrieve the target molecules. Following this, Edwards et al. Edwards2020multiresolution propose two innovative tasks: molecule captioning and text-guided de novo molecule generation. These tasks aim at translating between molecular structures and natural language texts. MolXPT [13] leverages literature annotations of molecules to construct a pre-training dataset. Christofidellis et al. 
Christofidellis2020multiresolution further improves the field with multi-task learning, which combines single-domain and cross-domain datasets for joint training. Most recently, Li et al. Li2023multiresolution propose a strategy that enables LLMs to accomplish both molecule captioning and text-guided molecule generation tasks. Here we take one step further to construct a large number of high-quality parallel data pairs, in response to the data scarcity that limits the performance of the above approaches. ### Large Language Models LLMs have achieved significant success in natural language processing by scaling up to billions of parameters [13, 14, 15]. Trained on vast corpora [12], LLMs show more general intelligence [15] and remarkable capabilities such as in-context learning [16, 17]. They have also obtained promising performance in chemical [1, 18], biological [19] and medical [12] domains. Due to their great generation capability, numerous works have relied on LLMs to generate data for various purposes, including creating semantic textual similarity datasets [13], augmenting natural language inference [13], automatically formulating instructions [12] and improving few-shot retrieval [14]. Inspired by these achievements, we aim to employ LLMs to generate parallel data, addressing data scarcity in cross-modal molecule discovery. ## 3 Methodology ### Task Overview Here we introduce two primary tasks for cross-modal molecule discovery. First proposed by Edwards et al. Edwards2020multiresolution, the two tasks act as a bridge between molecule discovery and NLP and can be considered as cross-modal translation tasks. Molecular captioningAs illustrated in Figure 0(a), Given the SMILES representation \(\mathcal{S}_{\mathcal{M}}\) of molecule \(\mathcal{M}\), the task is to generate the corresponding descriptions \(\mathcal{D}_{\mathcal{M}}\). Text-Based de novo Molecule GenerationAs shown in Figure 0(b), given the descriptions \(\mathcal{D}_{\mathcal{M}}\) of molecules \(\mathcal{M}\), the task is to generate its corresponding SMILES \(\mathcal{S}_{\mathcal{M}}\). ### Artificially-real Data Generation High-quality pseudo data is the foundation for further exploration. Here we propose PseudoMD-1M, the first pseudo dataset composed of 1M parallel molecule-description data pairs. To acquire sufficient data, we leverage a vast number of unlabeled molecules and use LLMs to generate corresponding descriptions. We begin by collecting 1.1 million unannotated SMILES strings of molecules from PubChem [13]. We then employ a rigorous filtering procedure to filter out the SMILES in downstream datasets to ensure that there is no overlap between the collected molecules and those contained in the real datasets [15, 16]. By doing so, we ensure that no supplementary information about the molecules present in the real datasets is accidentally incorporated, thereby maintaining the integrity and independence of the training process. With ChatGPT API, we generate textual descriptions that encompass key aspects such as properties and structural features for each unannotated molecule. To improve the quality of generated descriptions, we implement a retrieval-based prompt paradigm that comprises two main stages as follows: Molecule Retrieval and Few-Shot Prompting. Molecule RetrievalIn-context learning (Brown et al., 2020) is one of the emergent abilities of LLMs, and the instances used in the prompts given to the LLMs play an important role in the generation quality. 
As molecules with similar structures often display corresponding characteristics (Wang et al., 2016), we retrieve the descriptions of annotated molecules that resemble the unlabeled molecule, using them as the few-shot instance during prompting. Specifically, we collect 37,898 annotated molecules with captions from PubChem(Kim et al., 2023), then retrieve the molecules with top-k Tanimoto similarity (Tanimoto, 1958), a standard measure in cheminformatics. To prevent information leakage during testing, we exclude the molecules that are contained in the real data test set (Edwards et al., 2021; Zeng et al., 2022). This process enables the models to learn from the information embedded within the descriptions of molecules that possess similar properties, ensuring a more tailored and accurate representation. Figure 3 shows the estimate of the data quality, indicating that the few-shot prompting approach (in blue) yields higher-quality data, more closely resembling real data than without. Few-Shot PromptingUpon retrieving the top-k results for each unlabeled molecule from our local database, we select one example using a weighted distribution, where molecules with higher similarity have a greater chance of being chosen. This selected example is then incorporated into the final prompt. We opt for one-shot prompting to minimize generation costs, as expenses increase linearly with the number of instances included in few-shot prompts. This weighted selection method prevents repetitive selection of the same molecule as the few-shot example, thereby improving the diversity during generation while maintaining the similarity between the molecule to be annotated and the few-shot example. As shown in Figure 2, the complete prompt comprises role definition, task description, few-shot example, and output control. The role definition and task description give LLMs the general context and enable its learned knowledge, while the few-shot example acts like a supplementary material for the LLMs to refer to. Then, with the output control for format clarification, the LLMs should be able to generate the desired description. ### Approaches to Utilize Artificially Real Data The ways to utilize the pseudo data decide how the model will perform on real data. We propose and explore two primary strategies to optimize the use of pseudo data. Figure 3: Comparison of data quality. We use the method proposed by Edwards et al. (2022) to evaluate the similarity between molecule-description pairs as an estimation of the data quality. The distribution is visualized using Kernel Distribution Estimation. A higher Text2Mol score signifies closer molecule-description resemblance, and “Density” represents the data concentration in a given region. Figure 2: The workflow for pseudo data generation. Starting with an unlabeled molecule represented by its Morgan Fingerprints, two stages are involved. In stage 1, the input molecule serves as a search query to retrieve the top-k similar molecules from a local database containing 37,898 annotated molecule-caption pairs. In stage 2, the retrieved molecules and their captions are integrated into a prompt. Then LLMs perform in-context learning and generate a description for the input molecule. Pseudo Data as Data AugmentationData augmentation strategy can be roughly categorized into two kinds, modification of existing data and generation of pseudo data. 
The former takes an existing data instance and makes certain alterations to it without changing its inherent meaning or label, such as rotation, flipping, and cropping for images [10], or synonym replacement for text [23, 24, 25]. This method is more about adding variability and noise to existing data instances than generating completely new ones. The latter, on the other hand, involves creating new data instances that did not exist in the original dataset based on the characteristics and distribution of the original data, which is an efficient alternative when real data is scarce or when creating new real data is costly or unfeasible. Existing applications include back translation for text [23], and GANs for images [1]. Inspired by the latter techniques, we explore the use of pseudo data as data augmentation. As shown in Figure 4, we keep the original data in the training set and augment them with pseudo data during fine-tuning. Using the same method as described in Figure 3, we assess the distribution of the real training set and the sample the augmented pseudo data based on the same distribution, ensuring consistency in the overall dataset distribution before and after data augmentation. We hope that this data augmentation approach using pseudo data will expose the model to a broader range of data patterns and scenarios, thus enhancing its ability to recognize complex patterns and generalize its learning to unseen data. Pseudo Data as Domain AdaptationModels pre-trained on general domain might perform less ideally when it is applied to specific domains for which they were not explicitly trained [1, 22]. In our case, the SMILES appears as an unfamiliar symbol to such models, making the direct fine-tuning approach less efficient. To bridge this gap, we use pseudo data as a second pre-training stage for domain adaptation. As shown in Figure 4, we train the model using pseudo data for two concurrent cross-modal translation tasks: molecular captioning and text-based de novo molecule generation. Using a direct and bidirectional seq2seq approach, this stage is intended to empower the model to not only recognize the SMILES representation but also to grasp the relationship between natural language and SMILES. Given that our primary focus at this stage is not on data authenticity, pseudo data emerges as a preferable choice, particularly because it provides a large number of parallel data pairs for supervised seq2seq training compared to real datasets. We then further fine-tune it on real data to refine and enhance the model's understanding of SMILES for further authenticity - a critical aspect for applications like drug discovery. ## 4 Experiments To validate the effectiveness of using pseudo data, we conduct comprehensive experiments comparing our proposed approaches with existing methods. We further conduct experiments to demonstrate how the balance between real data and pseudo data could affect model performance. All the experiments are conducted on both molecular captioning and molecule generation. The implementation details are listed in Appendix C. ### Settings DatasetsCurrently, only a few datasets with parallel molecule-description pairs exist, including ChEBI-20 [1] and PCdes [20], both constructed using data from PubChem [13]. To enhance evaluation comprehensiveness, we assemble a new dataset called DrugBank-23, based on DrugBank [12]. We experiment on all three datasets (ChEBI-20, PCdes, and DrugBank-23). The detailed information about these datasets is listed in Table 1. 
ModelsWe evaluate the following methods: * **T5**[14]. T5 directly fine-tuned on downstream datasets. * **MoIT5**[1]. T5 pre-trained with MLM using SMILES and molecule descriptions respectively, then fine-tuned on downstream datasets. Figure 4: Different methods for utilizing pseudo data. Traditional training employs only the real dataset for fine-tuning. The data augmentation approach fine-tunes the model on the combined dataset with pseudo data incorporated. In the domain adaptation method, the model is 1 initially pre-trained on two concurrent cross-modal translation tasks using pseudo data as domain adaptation, and 2 further trained on each task using real data. * **ChatGPT**[11]. GPT-3.5-Turbo using few-shot prompting strategy. We cite the results from the original paper on ChEBI-20, then apply the same strategy to test on the other datasets. * **MolXPT**[12]. GPT-2 pre-trained with CLM using abstracts of biomedical literature where molecules are replaced with the corresponding SMILES, then fine-tuned on downstream datasets. As the model is currently unavailable, we cite their results on ChEBI-20. * **Text&Chem T5**[13]. T5 pre-trained using multi-task learning, then fine-tuned on downstream datasets. * **Aug-T5** (ours). T5 fine-tuned on datasets augmented with pseudo data from PseudoMD-1M, sampled from 1k to 512k, doubling at each step. We report the optimal performances for each dataset. See Appendix D for details. * **Ada-T5** (ours). T5 pre-trained using molecule-description pairs from PseudoMD-1M as domain adaptation, then fine-tuned on downstream datasets. As shown in Table 2, both our proposed methods utilize the smallest model scale, pre-training data, and steps, while Aug-T5 requires no additional pre-training. We first test our methods on T5\({}_{\text{small}}\) (Aug-T5/Ada-T5) and then apply them to T5\({}_{\text{base}}\) (Aug-T5\({}_{\text{base}}\)/Ada-T5\({}_{\text{base}}\)). MetricsFollowing existing studies [1, 19, 12], we evaluate the results for molecular captioning with BLEU-2, BLEU-4 [13], ROUGE-1, ROUGE-2, ROUGE-L [12] and METEOR [14], and BLEU-4 [15], Accuracy [1], Validity [16], Levenshtein distance [12], MACCS-FTS [12], RDK-FTS [13], Morgan-FTS [16] and FCD [21] for text-based de novo molecule generation. Selected metrics are presented in Tables 3, 4 and Figures 5 and 6, with comprehensive results in Appendix D. ### Comparison with Existing Methods Results on Molecular CaptioningTable 3 shows the results of different models for molecule captioning. Ada-T5 outperforms all previous methods and achieves the state-of-the-art on all three datasets across all the metrics. Compared to the previous state-of-the-art, Ada-T5 uses less than 3% of the pre-training data and only a third of the model parameters, yet requires fewer training steps, demonstrating the effectiveness and computational efficiency of high-quality pseudo data. On the other hand, Aug-T5 outperforms T5, MolT5, ChatGPT and has comparable performance with MolXPT and Text&Chem T5, using 9%-30% of the parameters and requires no pre-training. This highlights the benefit from the enhanced diversity of descriptions by incorporating pseudo data into the training set. Meanwhile, Ada-T5\({}_{\text{base}}\) makes an extra but relatively little progress compared to Ada-T5, indicating that although using pseudo data for domain adaptation could also benefit from the expansion of model size like most methods, the exploitation of pseudo data only demands a relatively small number of parameters. 
In contrast, Aug-T5\({}_{\text{base}}\) mirrors the results of its smaller version, indicating that for data augmentation, simply increasing the model scale may not offer substantial benefits. One thing to notice is that despite the data used to train the model is generated by ChatGPT API, both our trained models can still beat ChatGPT across different metrics. This indicates that although ChatGPT can accomplish the task to a certain extent, the data it generated can still help the models achieve a more seamless transition through pre-training from general domain to this domain. Results on Text-Based Molecule GenerationTable 4 presents the results of different models for molecule generation. Ada-T5 achieves the best performance in all three datasets across almost all metrics, demonstrating its capability to generate high-quality SMILES. The only exception is that the MolXPT slightly surpasses Ada-T5 by 0.009 in ChEBI-20 dataset on the validity metric, which is calculated using RDkit to simply check whether the string can be successfully converted to a molecule object without errors and whether the molecule represents a realistic and feasible chemical structure, without any comparison to the targeted SMILES and the input descriptions. Despite this one slight superiority, MolXPT performs significantly worse than Ada-T5 on other metrics, meaning that although it can generate slightly more valid SMILES, it does not take into account the designated instructions, ergo making it one step away from real-world application. On the other hand, Aug-T5 surpasses some existing methods in certain datasets on specific metrics. However, its consistency falls short compared to Ada-T5. This variabil \begin{table} \begin{tabular}{c|c c c} \hline \hline Info & ChEBI-20 & PCdes & DrugBank-23 \\ \hline Train & 26,407 & 10,500 & 17,109 \\ Validation & 3,301 & 1,500 & 3,667 \\ Test & 3,300 & 3,000 & 3,666 \\ \(\mathcal{L}_{\text{SMILES}}\) & 81.56 & 56.47 & 54.11 \\ \(\mathcal{L}_{\text{Description}}\) & 52.88 & 72.47 & 65.04 \\ Data source & PubChem & DrugBank \\ \hline \hline \end{tabular} \end{table} Table 1: Details about the existing datasets and ours (DrugBank-23). \(\mathcal{L}_{\text{SMILES}}\) denotes the average length of SMILES while \(\mathcal{L}_{\text{Description}}\) denotes the average word count per description. \begin{table} \begin{tabular}{c|c c c} \hline \hline Model & Data scale & Steps & Backbone \\ \hline MolT5 & 500M & 1M & T5\({}_{\text{large}}\) \\ MolXPT & 8M & 200k & GPT2\({}_{\text{medium}}\) \\ Text\&Chem T5 & 33.5M & 131k & T5\({}_{\text{base}}\) \\ Aug-T5 & 0 & 0 & T5\({}_{\text{small}}\) \\ Ada-T5 & 1M & 100k & T5\({}_{\text{small}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Pre-training details for different Models. “M” stands for million and “k” denotes thousand. ity may be traced back to the construction of molecule-description data pairs in pseudo data: the LLMS use the real SMILES are used as the input, leaving only the description part of the pseudo data genuinely "pseudo". This means that when training Aug-T5 on molecule captioning, it gets the authentic SMILES; but when training on molecule generation, it gets the pseudo description. Consequently, the gap between the input training data leads to the gap between the model performance on different tasks. 
Furthermore, compared with the results for molecular captioning, the base counterparts of both methods for molecule generation exhibit pronounced enhancements, which could also attributed to the gap between the input data, as using the "pseudo" part as the input for molecule generation might offer more space for improvements, especially for larger-scale models that can better tolerate the "pseudo" data nuances. The difference between Aug-T5 and Ada-T5 also indicates the importance of data authenticity and the difference between real data and pseudo data: as Ada-T5 is later fine-tuned with 100% real data (in comparison with Aug-T5, which is fine-tuned with the mix of real data and pseudo data), its misunderstandings about SMILES during domain adaptation through pseudo data are corrected and therefore has a better overall performance. This further stresses that using pseudo data for direct application may not be the optimal way to exploit its potential. ### Effect of the amount of pseudo data In order to further demonstrate how the amount of pseudo data could affect model performance, we experiment on ChEBI-20, the largest and most widely used dataset, with varying numbers of pseudo data samples \(\mathcal{N}\) from 1k to 512k. Results on Molecular CaptioningFigure 5 shows the results of Ada-T5 and Aug-T5 for molecular captioning with different amounts of pseudo data. Both Ada-T5 and Aug-T5 exhibit significant improvements when a modest amount of pseudo data is incorporated into their training. With just 1k pseudo data, both methods can surpass T5large and ChatGPT and achieve a comparative performance to MoI75large and MoIXPT. This phenomena is often seen in other data augmentation strategies [20, 14], and can be attributed to the moder \begin{table} \begin{tabular}{c|c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Parameters} & \multicolumn{3}{c|}{ChEBI-20} & \multicolumn{3}{c|}{PCdes} & \multicolumn{3}{c}{DrugBank-23} \\ & & Acc & Val & MAC & Acc & Val & MAC & Acc & Val & MAC \\ \hline T5 & 800M & 0.2791\({}^{*}\) & 0.9021\({}^{*}\) & 0.8231\({}^{*}\) & 0.089\({}^{\dagger}\) & 0.9101\({}^{*}\) & 0.698\({}^{\dagger}\) & 0.1311\({}^{*}\) & 0.9231\({}^{*}\) & 0.682\({}^{\dagger}\) \\ MoiT5 & 800M & 0.3111\({}^{*}\) & 0.9051\({}^{*}\) & 0.8341\({}^{*}\) & 0.097\({}^{\dagger}\) & 0.925\({}^{\dagger}\) & 0.695\({}^{\dagger}\) & 0.1451\({}^{*}\) & 0.947\({}^{\dagger}\) & 0.686\({}^{\dagger}\) \\ MolXPT & 350M & 0.2151\({}^{*}\) & **0.983** & 0.8591\({}^{*}\) & - & - & - & - & - & - \\ Text\&Chem T5 & 250M & 0.3221\({}^{*}\) & 0.9431\({}^{*}\) & 0.901\({}^{\dagger}\) & 0.105\({}^{\dagger}\) & 0.8491\({}^{*}\) & 0.697\({}^{\dagger}\) & 0.149\({}^{\dagger}\) & 0.8981\({}^{*}\) & 0.705 \\ ChatGPT & - & 0.1391\({}^{*}\) & 0.8871\({}^{*}\) & 0.8471\({}^{*}\) & 0.0441\({}^{*}\) & 0.8671\({}^{*}\) & 0.6711\({}^{*}\) & 0.0481\({}^{*}\) & 0.8521\({}^{*}\) & 0.6651\({}^{*}\) \\ \hline Aug-T5 & 77M & 0.305 & 0.907 & 0.877 & 0.070 & 0.892 & 0.700 & 0.141 & 0.911 & 0.685 \\ Aug-T5base & 250M & 0.386 & 0.955 & 0.884 & 0.098 & 0.927 & 0.696 & 0.158 & 0.952 & 0.681 \\ Ada-T5 & 77M & 0.449 & 0.967 & 0.905 & 0.135 & 0.945 & 0.725 & 0.170 & 0.955 & 0.696 \\ Ada-T5base & 250M & **0.486** & 0.974 & **0.911** & **0.150** & **0.956** & **0.743** & **0.192** & **0.969** & **0.706** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of different models for molecule generation on ChEBI-20, PCdes and DrugBank-23 datasets. 
\({}^{\dagger}/^{*}\) denotes that Ada-T5base/Aug-T5base perform significantly better than baselines at \(p-\mathrm{value}<0.01\) using t-test. The **best** scores are in bold. **Acc**: Accuracy. **Val**: Validity. **MAC**: MACCS FTS. \begin{table} \begin{tabular}{c|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Parameters} & \multicolumn{3}{c|}{ChEBI-20} & \multicolumn{3}{c|}{PCdes} & \multicolumn{3}{c}{DrugBank-23} \\ & & BL & RG & MET & BL & RG & MET & BL & RG & MET \\ \hline T5 & 800M & 0.4671\({}^{*}\) & 0.4781\({}^{*}\) & 0.5861\({}^{*}\) & 0.2521\({}^{*}\) & 0.2591\({}^{*}\) & 0.3671\({}^{*}\) & 0.27271\({}^{*}\) & 0.2991\({}^{*}\) & 0.3961\({}^{*}\) \\ MoiT5 & 800M & 0.508\({}^{\dagger}\) & 0.5101\({}^{*}\) & 0.6141\({}^{\dagger}\) & 0.2666\({}^{\dagger}\) & 0.2721\({}^{\dagger}\) & 0.3801\({}^{*}\) & 0.2931\({}^{\dagger}\) & 0.317\({}^{\dagger}\) & 0.4161\({}^{\dagger}\) \\ MoIXPT & 350M & 0.5051\({}^{*}\) & 0.5111\({}^{*}\) & 0.6266\({}^{\dagger}\) & - & - & - & - & - & - \\ Text\&Chem T5 & 250M & 0.542\({}^{\dagger}\) & 0.543\({}^{\dagger}\) & 0.648\({}^{\dagger}\) & 0.2666\({}^{\dagger}\) & 0.274\({}^{\dagger}\) & 0.382\({}^{\dagger}\) & 0.2801\({}^{*}\) & 0.3121\({}^{*}\) & 0.4131\({}^{*}\) \\ ChatGPT & - & 0.4821\({}^{*}\) & 0.4501\({}^{*}\) & 0.5851\({}^{*}\) & 0.1941\({}^{*}\) & 0.1931\({}^{*}\) & 0.3151\({}^{*}\) & 0.1911\({}^{*}\) & 0.2181\({}^{*}\) & 0.3251\({}^{*}\) \\ \hline Aug-T5 & 77M & 0.515 & 0.517 & 0.621 & 0.270 & 0.275 & 0.385 & 0.297 & 0.322 & 0.421 \\ Aug-T5base & 250M & 0.516 & 0.520 & 0.620 & 0.268 & 0.272 & 0.383 & 0.294 & 0.316 & 0.416 \\ Ada-T5 & 77M & 0.553 & 0.552 & 0.652 & **0.295** & 0.295 & 0.406 & 0.310 & 0.337 & 0.435 \\ Ada-T5base & 250M & **0.564** & **0.562** & **0.660** & **0.295** & **0.297** & **0.409** & **0.322** & **0.346** & **0.445** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of different models for molecular captioning on ChEBI-20, PCdes and DrugBank-23 datasets. \({}^{\dagger}/^{*}\) denotes that Ada-T5base/Aug-T5base perform significantly better than baselines at \(p-\mathrm{value}<0.01\) using t-test. The **best** scores are in bold. **BL**: BLEU-4. **RG**: ROUGE-2. **MET**: METEOR. ate noise introduced by the pseudo data, which in turn bolsters model generalization. As the amount of pseudo data increases, Ada-T5 and Aug-T5 exhibit different tendencies. The performance of Aug-T5 begins to decline when the number of pseudo data samples reaches 4k, and sees a sharp drop when it exceeds 32k. This is possibly due to the imbalance between real data and pseudo data: As the model becomes increasingly exposed to unreal patterns from the pseudo data, it might shift its attention away from genuine patterns. Consequently, the real patterns are overlooked by the model that focuses on the artificial ones. In contrast, Ada-T5 thrives with the increasing amount of pseudo data, evidenced by the growth of overall metrics. One possible explanation is that Ada-T5 only uses pseudo data for pre-training, with follow-up fine-tuning using real data. Thus, the increase of pseudo data does not twist its grasp of genuine patterns, but instead, further amplifies the proficiency of the model during subsequent training. Results on Text-Based Molecule GenerationFigure 6 shows the results of Ada-T5 and Aug-T5 for molecule generation with different amounts of pseudo data. 
Ada-T5 shows the same superiority and trend as it does in molecular captioning as more pseudo data is incorporated, while Aug-T5 displays a non-linear trend, with the optimal amount of pseudo data significantly larger than when Aug-T5 is applied to molecular captioning. The reason might lie in the dual nature of pseudo data: it introduces both linguistic patterns and noise. Initially, a small amount of pseudo data bolsters model generalization by acting as a regularizer. But as more is added, an overabundance of noise degrades the results. However, once a critical mass of pseudo data is reached, the model starts to recognize more subtle and broader linguistic patterns amidst the noise, which helps in generating more accurate SMILES strings, leading to the observed spike in performance. After this peak, the overwhelming volume of pseudo data might reintroduce the dominance of noise, causing a decrease in performance. The distinct behavior of Aug-T5 in molecular captioning versus molecule generation highlights their inherent differences. Molecular captioning, being more flexible, can buffer linguistic variations, downplaying minor gains from pseudo data while being more affected by its noise. In contrast, molecule generation requires recognizing specific linguistic cues in the descriptions that lead to exact structural changes in the SMILES output, making it more receptive to subtle intricacies and thus better able to discern and benefit from the subtle patterns present in pseudo data. Overall, these results indicate that the impact of pseudo data varies, depending on its inherent nature and the specific task at hand. Figure 5: Results of the molecular captioning task using different amounts of pseudo data. Figure 6: Results of the molecule generation task using different amounts of pseudo data. ## 5 Conclusion In this paper, we introduce a novel approach that enhances low-resource cross-modal molecule discovery by leveraging artificially-real data generated by LLMs. By incorporating a retrieval-based few-shot prompting strategy, we are able to produce high-quality pseudo molecule-description pairs. To mitigate the scarcity of data, we released two datasets: PseudoMD-1M, the first artificially-real dataset for molecule description, and DrugBank-23, a real molecule-description dataset constructed from a novel source. We propose to use pseudo data for domain adaptation and for data augmentation to explore its optimal utilization. Experiments across different datasets show that the former can best exploit the potential of pseudo data, achieving better performance with fewer parameters and training data. Furthermore, as the performance of the model continues to benefit from the increasing amount of pseudo data, our approach shows the great potential of pseudo data, thereby providing a novel and promising approach for addressing the low-resource challenge in cross-modal molecule discovery.
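To make the distinction between the two strategies explicit, the following minimal sketch shows how the training material would be assembled for a given pseudo-data budget: Aug-T5 fine-tunes once on the real/pseudo mixture, whereas Ada-T5 pre-trains on pseudo data and then fine-tunes on real data only. Function and variable names are illustrative, not taken from the released code.

```python
import random

def assemble_training_data(real_pairs, pseudo_pairs, n_pseudo, seed=0):
    """Illustrative only: contrast the two uses of pseudo (description, SMILES) pairs.

    real_pairs   -- human-annotated molecule-description pairs (e.g., a ChEBI-20 train split)
    pseudo_pairs -- artificially-real pairs generated by the LLM (e.g., PseudoMD-1M)
    n_pseudo     -- pseudo-data budget, varied from 1k to 512k in the experiments above
    """
    rng = random.Random(seed)
    pseudo_subset = rng.sample(pseudo_pairs, n_pseudo)

    # Data augmentation (Aug-T5): a single fine-tuning stage on the real/pseudo mixture,
    # so a large n_pseudo lets artificial patterns crowd out the genuine ones.
    augmentation_corpus = real_pairs + pseudo_subset

    # Domain adaptation (Ada-T5): pseudo data is used only for pre-training, and the final
    # fine-tuning stage sees 100% real data, which corrects pseudo-data artifacts.
    adaptation_stages = {"pretrain": pseudo_subset, "finetune": real_pairs}

    return augmentation_corpus, adaptation_stages
```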
2309.05565
Introduction to the special issue dedicated to Michael J. Duff FRS on the occasion of his 70th birthday
This special feature, dedicated to Michael J. Duff FRS on the occasion of his 70th birthday, concerns topics in 'Quantum gravity, branes and M-theory'. These three intertwining subjects have been central to Duff's work; indeed many of his contributions have come to define significant aspects of what we actually mean by these terms. From the discovery of Weyl anomalies to recognising superstrings in 10 dimensions as a special case of membranes in an 11-dimensional M-theory, Duff's insights have shaped major developments across these themes. So it is an apposite setting for such a celebration and we are delighted to be able to include in this collection contributions from many of the pioneers of quantum gravity, branes and M-theory. The breadth of these topics has placed little constraint on the multiplicity of ideas appearing in these pages, from astrophysical black holes to chaotic condensed matter. Again, this is fitting as Duff's scientific remit spans a remarkable diversity of motifs, from the fundamentals of M-theory to entanglement in quantum information.
L. Borsten, A. Marrani, C. N. Pope, K. Stelle
2023-09-11T15:52:43Z
http://arxiv.org/abs/2309.05565v1
# The Royal Society PUBLISHING ###### Abstract Introduction to the special issue dedicated to Michael J. Duff FRS on the occasion of his 70th birthday, with a brief scientific biography. This special feature, dedicated to Michael J. Duff FRS on the occasion of his 70th birthday, concerns topics in 'Quantum gravity, branes and M-theory'. These three intertwining subjects have been central to Duff's work; indeed many of his contributions have come to define significant aspects of what we actually mean by these terms. From the discovery of Weyl anomalies to recognising superstrings in 10 dimensions as a special case of membranes in an 11-dimensional M-theory, Duff's insights have shaped major developments across these themes. So it is an apposite setting for such a celebration and we are delighted to be able to include in this collection contributions from many of the pioneers of quantum gravity, branes and M-theory. The breadth of these topics has placed little constraint on the multiplicity of ideas appearing in these pages, from astrophysical black holes to chaotic condensed matter. Again, this is fitting as Duff's scientific remit spans a remarkable diversity of motifs, from the fundamentals of M-theory to entanglement in quantum information. ## 2 A brief scientific biography of Michael J. Duff FRS Michael J. Duff FRS (Mike, from here onwards) did his PhD at Imperial College London under the supervision of Nobel Laureate Abdus Salam KBE FRS, with mentorship also from Christopher J. Isham1. He was somewhat thrown in at the deep end, charged with resolving a bet between Salam and Sir Hermann Bondi KCB FRS, Nobel Laureate Sir Roger Penrose OM FRS HonFlInstP and John Archibald Wheeler, some of the most influential quantum field theorists and general relativists of the twentieth century. Salam maintained that the Schwarzschild black hole solution of general relativity could be perturbatively reconstructed via the Feynman diagrams of quantum field theory. Mike confirmed this speculation [3] in a calculation that could be regarded as an early precursor to a now thriving industry applying scattering amplitudes to classical general relativity [4]2. Mike took up his first postdoctoral position at the International Centre for Theoretical Physics (ICTP), Trieste, Italy, recently established by Salam and so the destination of choice for many a protege. There, in a follow-up paper, Mike showed that loop contributions implied a \(1/r^{3}\) correction to the classical Schwarzschild solution. One should keep in mind that the problem of quantum gravity was still viewed with suspicion, or even contempt3, in certain quarters. Mike would then initiate a fruitful collaboration with Derek M. Capper, who had also recently taken the road from Imperial to ICTP, and Leopold Halpern, further developing the interface between quantum theory and gravity [6, 7]. Although this early work was somewhat forgotten for a period, it pre-empted many future and current themes in quantum gravity. As we shall see, the farsightedness of Mike's work would become a recurring theme. Footnote 1: See [1, 2] for personal tributes to Salam and Isham, authored by Mike. Footnote 2: Before we go any further, we should first sincerely apologise for our omission of the many crucial contributions by others to the story we shall tell. Giving due credit even only to those with direct connections to Mike, with anything like an even hand, would turn this lightning summary into a full blown review of the development of M-theory. 
Footnote 3: Mike recalls his thesis title “Problems in the Classical and Quantum Theories of Gravitation” being met with “hoots of derision” [5]. How things have changed. On returning to the UK as part of Dennis Sciama's Oxford group, Mike discovered with Capper [8] the Weyl anomaly. The vanishing of the trace of the stress-energy tensor implied by the local scale (Weyl) invariance, first proposed by Hermann Weyl in 1918, is not preserved quantum mechanically. This was a surprise, so much so that it was largely dismissed as wrong [5] by many of the leading lights of the day4. Such doubts, however, were quelled by an influential paper of Mike, Stanley Deser and Isham [9], which provided the most general form of the trace in various dimensions and made it plain that the anomaly could not be removed by local counterterms. It was there to stay. The possibility of Weyl anomalies is, of course, now universally recognised and has had tremendous implications across diverse contexts: Hawking radiation [10]; asymptotic safety [11, 12]; string theory [13]; supersymmetry and supergravity [14, 15, 16, 17]; inflation [18, 19, 20]; holography [21, 22]; braneworlds [23]; condensed matter [24] and conformal colliders [25]. For instance, Tohru Eguchi and Peter G.O. Freund had identified the Pontryagin number as characterising the axial fermion number current anomaly, but noted that there did not seem to be any analogous role for the Euler characteristic [26]. Motivated by this apparent gap, Mike showed [27] that the Euler characteristic corresponds to the integrated trace anomaly, of course! In particular, in \(d\!=\!2\) dimensions the Weyl anomaly is just \(aR\), where \(R\) is the Ricci scalar and \(a\) is the anomaly coefficient. In the context of string theory Polyakov famously showed [13] that the vanishing of the world-sheet Weyl anomaly picks out the critical dimensions, where the \(a\) anomaly coefficient is related to the Virasoro algebra central charge by \(c\!=\!a/24\pi\). Moreover, on including spacetime background fields the vanishing of the world-sheet Weyl anomaly implies the spacetime Einstein equations of (super)gravity [28, 29], a remarkable result sitting at the foundations of string theory. Crossing the pond to Brandeis, Waltham, MA, USA, in 1977, Mike joined forces with Steven M. Christensen at Harvard to compute Weyl and axial anomalies in the then recently discovered theory of supergravity. In particular, they were to show that the superpartner to the graviton, the gravitino, contributes an axial anomaly -21 times that of a Dirac spinor [30]. This was again met with some disbelief, but perhaps most interesting was their approach, generalising the classical index theorems, such as Atiyah-Singer, to arbitrary spin [31]. Such calculations revealed some unexpected subtleties. Together with Peter van Nieuwenhuizen, Mike demonstrated that the partition function and Weyl anomaly of a given field may not coincide with those of its electromagnetic dual [32]. Here the anomaly is given by \(\operatorname{tr}\langle T\rangle_{\text{reg}}-\langle\operatorname{tr}T\rangle_{\text{reg}}\), where \(\operatorname{tr}\) denotes the trace, \(\langle-\rangle_{\text{reg}}\) is the regularized expectation value and \(T\) is the stress-energy tensor [27].
They used this observation to argue that theories, classically equivalent under electromagnetic duality, may fail to be so quantum mechanically [32], which is by now a well-recognised property of quantum field theory on topologically non-trivial manifolds [33, 34, 35, 36]. This anomaly should not be confused with \(\operatorname{tr}\langle T\rangle_{\text{reg}}\) alone which yields equivalent results [37, 38, 39]. Fast-forward some 42 years, Mike demonstrated that the Weyl anomaly of (the massless sector of) type IIA string theory compactified on a 6-manifold is given by a product of Euler characteristics \(\chi(M\times X)=\chi(M)\chi(X)\), where \(M\) is the (Euclidean) spacetime 4-manifold and \(X\) is the internal 6-manifold. Moreover, for (the massless sector of) M-theory compactified on a 7-manifold \(Y\), the Weyl anomaly is given by the product \(\rho(M\times Y)=\chi(M)\rho(Y)\), where \(\rho(Y)\) is a topological invariant reminiscent of the Ray-Singer torsion [36]. If you like, \(\rho\) is to M-theory what \(\chi\) is to strings. This early foray into supergravity marked the beginning of Mike's next major movement: Kaluza-Klein theory. In the early 1980s Mike was to return to Imperial College London and also spend time at CERN, Meyrin, Switzerland, two institutes that played an important role in the development of Kaluza-Klein supergravity. At this time supergravity offered much promise as a unified theory, necessarily including gravity. First, it was hoped that supersymmetry might ameliorate the UV divergences plaguing perturbative quantum gravity. Second, supergravity is unique and particularly elegant in \(D=11\) spacetime dimensions, the maximum allowed by supersymmetry. Thus, when combined with Kaluza-Klein compactification, supergravity stood out as an approach to unification5. In this context, Mike and his colleagues made several key advances. With Christopher N. Pope, Mike showed that \(D=4\), \(\operatorname{SO}(8)\) gauged \(\mathcal{N}=8\) supergravity theory could be derived as a spontaneous Kaluza-Klein compactification of \(D=11\) supergravity on \(\operatorname{AdS}_{4}\times S^{7}\)[40]. Besides its importance for unification at that time, this particular compactification has been a cornerstone of many of the subsequent advances in supergravity and string/M-theory. With Mike's PhD student, Moustafa A. Awada, they further showed that by preserving the \(S^{7}\) topology while deforming its geometry one could break the \(\mathcal{N}=8\) supersymmetry down to \(\mathcal{N}=1\)[41]. This entailed two important insights that would shape much future work on string/M-theory compactifications. First, the holomony of the internal manifold dictates the degree of supersymmetry preserved. In the context of heterotic superstring compactifications with vanishing fluxes this famously picks out Calabi-Yau 3-folds as the internal manifolds of choice for model building. Second, Mike, Pope and Bengt E. W. Nilsson subsequently showed that the supersymmetry breaking induced by the squashed \(S^{7}\) corresponded to a Higgs mechanism from the \(D=4\) perspective [42]. Not long after, the same trio performed the first \(K3\) compactification [43]. This was motivated, in part, by its special \(\operatorname{SU}(2)\) holonomy, a prelude to the all important \(\operatorname{SU}(3)\) holonomy Calabi-Yau 3-fold superstring compactifications that would be initiated shortly after [44]. 
Moreover, the \(\operatorname{SU}(2)\) holonomy implies that \(K3\) compactifications preserve one half of the supersymmetries, opening the door to type IIA on \(K3\) and heterotic on \(T^{4}\) string/string dualities. More on that later. These developments, along with manifold pioneering contributions made by many others (some of whom can be found in this very collection), were pulled together by Mike, Nilsson and Pope in what has become a standard reference for Kaluza-Klein supergravity [45]. The sharp crescendo of excitement surrounding supergravity was just as quickly muffled6. It had started to seem unlikely that supergravity could ultimately stave off the divergences inherent to a perturbative quantum _field theory_ of gravity (almost 50 years on this chapter is still not quite closed, however) and Edward Witten had demonstrated that \(D\!=\!11\) supergravity compactified on a manifold could not accommodate the chirality needed to make contact with the Standard Model [46]. By the end of 1985 the groundbreaking discoveries of the Green-Schwarz mechanism, heterotic superstrings and Calabi-Yau 3-fold compactifications had firmly, and rightly, cemented themselves as the most promising route to superunification. Footnote 6: For a sense of just how quickly things were moving, during review process of [45] a note was added summarising the developments, which would divert the attention of most away from \(D\!=\!11\) supergravity, that had emerged between submission and acceptance! Yet, Mike and many like-minded folk had not yet given up on \(D\!=\!11\). On the one hand, superstrings were not an open and shut case and in his 1987 'Not the standard superstring review' [47] Mike erred on the side of caution, "In order not to be misunderstood, let me say straight away that I share the conviction that superstrings are the most exciting development in theoretical physics for many years, and that they offer the best promise to date of achieving the twin goals of a consistent quantum gravity and a unification of all the forces and particles of Nature. Where I differ is the degree of emphasis that I would place on the unresolved problems of superstrings, and the likely time scales involved before superstrings (or something like superstrings) make contact with experimental reality." He emphasised, in particular, the challenges (and opportunities) posed by the landscape problem and non-perturbative phenomena, such as black holes. On the other hand, the tension between 10 and 11 raised its own questions. Why did supersymmetry allow for 11, while superstrings only 10? If supergravity was the low-energy effective field theory of superstrings, where did that leave \(D\!=\!11\) supergravity? Mike vigorously maintained that 11 should be taken seriously. Indeed, various clues that \(D\!=\!11\) might yet play a role had been amassing. While there are no superstrings in \(D\!=\!11\), there are supermembranes that couple to \(D\!=\!11\) supergravity [48]. It turns out that this is one of the key bridges between \(D\!=\!10\) string theory and \(D\!=\!11\) M-theory. In 1987 Mike, Paul S. Howe, Takeo Inami and Kellogg Stelle showed [49] by compactifying the \(D\!=\!11\) spacetime manifold on \(S^{1}\) and simultaneously wrapping the supermembrane around the circle ones finds precisely the type IIA superstring in \(D\!=\!10\)! This result pre-empted7 important facets of the M-theory revolution of 1995 by connecting strings and membranes, along with 10 and 11 dimensions. In the same year, Mike and Miles P. 
Blencowe, again inspired by the discovery of supermembranes in \(D\!=\!11\), conjectured the existence of super \(p\)-branes on the \(S^{1}\times S^{p}\) boundary of AdS\({}_{p+2}\) and presented the corresponding (free) superconformal field theories [51]. The maximal \(p\!=\!2\) case corresponded to the supermembrane on AdS\({}_{4}\times S^{7}\) with the superconformal group OSp\((8|4)\). The maximal \(p\!=\!3\) and \(p\!=\!5\) cases corresponded to the yet to be discovered D3-brane and M5-brane on AdS\({}_{5}\times S^{5}\) and AdS\({}_{7}\times S^{4}\) with superconformal groups SU\((2,2|4)\) and OSp\((8^{*}|4)\), respectively. Footnote 7: In his review of string theory [50], Joseph Conlon remarks that “When I first read this paper I was quite shocked by its existence; according to the supposed history of string theory that I had ‘learned’, such a paper could not have been written for almost another decade.” Further telling clues on the road to M-theory arose in the context of branes and dualities, themselves closely related. By the mid-eighties five _a priori_ independent consistent superstring theories had been established. However, they were not islands; for instance, the IIA and IIB theories could be connected by T-duality or mirror symmetry. What was to emerge over the next decade or so was a web of dualities, suggesting that each string theory was but a corner of a larger framework. During this period of intense activity (in 10 and 11 dimensions) Mike relocated to Texas A&M, just in time for it to host the inaugural 'Strings 89' conference. Aptly, that year Mike addressed the question of _manifest_ T-duality [52]. By considering two _dual_ string theories, he introduced the notion of a doubled spacetime with a generalised O\((D,D)\) metric \(H(g,B)\), built from the standard metric \(g\) and the Kalb-Ramond two-form \(B\). This is, today, a key ingredient in the thriving domain of double field theory. The following year 'Strings' would return to Texas A&M and this time around Mike and his then PhD student, Jian Xin Lu, generalised these notions to membranes, where the role of \(B\) is replaced by the three-form \(C\) of \(D\!=\!11\) supergravity [53]. The goal here was to make manifest, from the membrane's perspective, the global symmetries of \(D\!=\!11\) supergravity compactified on an \(n\)-torus, which would later be recognised as shadows of the U-dualities of M-theory. This time the spacetime is not merely doubled, but extended by \(C_{2}^{n}\) coordinates corresponding to the possible ways one can wrap a membrane on an \(n\)-torus. There is a generalised metric \(H(g,C)\) manifesting the appropriate symmetry group; for example, \(\mathrm{SL}(5,\mathbb{R})\) for \(n\!=\!4\). It is interesting to note that Mike and Lu puzzled over the cases \(n\!>\!4\), which do not naively work out as expected. They resolved this question, quite naturally, by introducing additional coordinates corresponding to the Hodge dual of \(C\) and so recovered the symmetries of \(D\!=\!11\) supergravity on an \(n\)-torus, for \(1\!\leq\!n\!\leq\!8\). Of course, we now understand these coordinates as corresponding to the possible wrappings of the M5-brane that kick in at \(n\!=\!5\). The extended spacetimes and their generalised metrics \(H(g,C)\) are, today, central to the developments of exceptional field theory.
Another central theme of Mike's time as a Texan was the role of solitonic supersymmetric \(p\)-brane solutions that carry topological magnetic charge, and their dual relationship to elementary singular \((D-p-4)\)-brane solutions carrying electric Noether charge [54, 55, 56, 57]. For example, in 1991 Mike and Stelle [58] discovered the elementary multiple membrane solutions of \(D\!=\!11\) supergravity, shortly followed by the dual solitonic superfivebrane solution of Gueven [59]. An other idea introduced by Mike, with Ramzi R. Khuri, Ruben Minasian, and Joachim Rahmfeld, during this period was the identification of solitonic magnetic string states as extremal black holes [60]. Then applying S-duality led Mike and Rahmfeld to relate supersymmetric massive string states with elementary black holes [61]. These ideas generalise to the black and super \(p\)-branes solutions in various dimensions [56] and have become a key concept in the understanding of black holes in string/M-theory. The profound contributions unravelling this web of ideas, by Mike and many others, are far too numerous to do justice to here. Fortunately, Mike, Lu and Khuri put together an influential review [62] of these developments up to 1994 that we can defer to. An important consequence of the \(p/(D-p-4)\)-brane dualities [63] is the implied equivalences among string compactifications; for example, the \(D\!=\!10\) heterotic string compactified on a 4-torus is quantum equivalent to the \(D\!=\!10\) type IIA string on \(K3\). Another related idea introduced by Mike and his colleagues at Texas A&M, including Lu, Khuri, Minasian as well as the newer arrival James T. Liu, was that \(p\)-brane dualities could be used to explain electromagnetic duality in lower dimensions [64], as described by Witten [65]: "Mike Duff and Ramzi Khuri in 1993 had written a paper on what they called string/string duality. They had said there should be a self-dual string theory in six dimensions that, looked at in two different ways, would give electric-magnetic duality of gauge theory in four dimensions. It was actually a brilliant idea. The only trouble was they didn't have an example in which it worked." Mike and his colleagues rapidly developed an intricate web of dualities amongst strings and \(p\)-branes, and their implications for strong/weak coupling dualities, over the following years [66, 67, 66, 67]. Note, the particular case of the self-dual string in \(D\!=\!6\) relates to the role of the \((2,0)\) theory in the geometric Langlands programme. In particular, Mike, Liu and Minasian gave evidence that membrane/fivebrane duality provides an eleven dimensional origin of string/string duality, which in turn bolsters the S-duality conjecture [63]. The original hope of Mike and Khuri was also realised together with Minasian and Witten in the context of a heterotic/heterotic duality [68]. These observations contributed (along with crucial insights of a great many others that we are shamefully unable to pay due homage to here) to the 1995 M-theory revolution led by Witten, which marked a new phase in the development of strings and branes. The supermembrane and fivebrane were duly promoted to the M2- and M5-brane and \(D=11\) found its place, after all, as the low-energy limit of M-theory. Mike's conviction that \(D=11\) should be canon was vindicated. 
This period was followed by an explosion of ideas in string/M-theory, the anti-de Sitter/conformal field theory correspondence (AdS/CFT) and Randall-Sundrum brane world models to name but two examples. Mike's past work, such as the AdS compactifications and brane-scans [69], fed into many aspects of the renewed research avenues. In fact, jumping ahead a little, his 1973 work on loop corrections to the Schwarzschild solution proved important to (AdS/CFT)/Randall-Sundrum complementarity, as shown by Mike and Liu [70]. Who in 1973 saw that one coming! In particular, Mike and colleagues developed asymptotically flat and AdS black hole and \(p\)-brane solutions to M-theory [71, 72, 73, 74], crucial to the applications of AdS/CFT and the question of Bekenstein-Hawking entropy. During this period Mike left the Texas triangle to, fittingly, take up the Oskar Klein professorship at the University of Michigan, where he would be elected as the first director of the newly created Michigan Center for Theoretical Physics. Again, Mike arrived just in time for Michigan to host Strings, this time the millennial edition, 'Strings 2000' (Figure 1). To a degree it was time to take stock. Mike, David Gross and Witten solicited 'big questions' from the attendees and selected the ten best. Some transcended any particular approach to physics beyond the standard model or quantum gravity, for example 'Why does the universe appear to have one time and three space dimensions?'. But one was squarely in the domain of M-theory: "What are the fundamental degrees of freedom of M-theory (the theory whose low-energy limit is eleven-dimensional supergravity and which subsumes the five consistent superstring theories) and does the theory describe Nature?" This is perhaps the question with which Mike himself has since been most preoccupied: elucidating what M-theory _is._ Although we have collectively uncovered a patchwork understanding, its ultimate formulation requires new ideas and insights, an endeavour Mike has constantly championed. In 2005 Mike would come full circle, returning to Imperial College London, now as the Abdus Salam Professor of Theoretical Physics. Here he would embark on several new research journeys (but always touching on M-theory), such as black holes and qubits [75], quantum optics and Hawking radiation [76] and gravity as the 'square' of Yang-Mills theory [77]. For instance, not long after arriving, Mike and Sergio Ferrara, with various of their colleagues and students, would build a dictionary between string/M-theory black holes and various concepts from quantum information theory, qubits and entanglement measures [78, 79, 80, 81, 82, 83]. Figure 1: Mike presiding over 'Strings 2000' at the University of Michigan, with (left to right) James T. Liu, Lars Brink, Ignatios Antoniadis and Sergio Ferrara. This programme grew out of the observation that the entropy of the \(STU\) black hole8 and the entanglement shared by three qubits are both described by Cayley's hyperdeterminant [78]. One can only assume that Cayley had anticipated both M-theory and quantum computing. Footnote 8: \(STU\) supergravity was introduced by Mike, Liu and Rahmfeld back in Texas. It is special in that its symmetries correspond not to a string/string duality, but a string/string/string triality! It has since been a paradigmatic model for elucidating facets of string/M-theory. It also connects black hole entropy to the number theory of Manjul Bhargava [84].
In completely separate developments, Mike initiated a programme to understand 'Einstein as the square of Yang-Mills' at the level of off-shell field theories. The notion of gravity as the 'product' of two gauge theories has a long history, but was in particular made concrete through the tree-level Kawai-Lewellen-Tye 'closed \(=\) open \(\times\) open' string scattering relations. This idea has witnessed a recent renaissance driven by the 2008 Bern-Carrasco-Johansson colour/kinematics duality conjecture, which allows one to build graviton scattering amplitudes from the 'double copy' of gluon amplitudes to all orders in perturbation theory.9 Inspired, in part, by the relationship between the symmetries of supergravity and those of super Yang-Mills theory, Mike took an off-shell field theory approach to 'gravity \(=\) gauge \(\times\) gauge'. This yielded remarkable and unexpected insights such as the appearance of the Freudenthal magic square of U-dualities [85, 86] and the Yang-Mills origin of (super)diffeomorphisms [87, 88]. Today scattering amplitudes and the 'double copy' are being used to understand classical gravity, in particular black hole collisions. We would imagine that Salam (or a 20-year-old Mike) would have been surprised and delighted at this. Footnote 9: This reopened the debate concerning the perturbative finiteness of \({\cal N}=8\) supergravity, costing one of us some bottles of wine. ###### Acknowledgements. We are most grateful to all the authors for their wonderful contributions to this collection. We are also immensely grateful to the Proceedings of the Royal Society Publishing Editor Joanna Harries, and all the PRSA staff, for their tremendous work and support in bringing this special issue together.
2309.03114
NUV-DoA: NUV Prior-based Bayesian Sparse Reconstruction with Spatial Filtering for Super-Resolution DoA Estimation
Achieving high-resolution Direction of Arrival (DoA) recovery typically requires high Signal to Noise Ratio (SNR) and a sufficiently large number of snapshots. This paper presents NUV-DoA algorithm, that augments Bayesian sparse reconstruction with spatial filtering for super-resolution DoA estimation. By modeling each direction on the azimuth's grid with the sparsity-promoting normal with unknown variance (NUV) prior, the non-convex optimization problem is reduced to iteratively reweighted least-squares under Gaussian distribution, where the mean of the snapshots is a sufficient statistic. This approach not only simplifies our solution but also accurately detects the DoAs. We utilize a hierarchical approach for interference cancellation in multi-source scenarios. Empirical evaluations show the superiority of NUV-DoA, especially in low SNRs, compared to alternative DoA estimators.
Mengyuan Zhao, Guy Revach, Tirza Routtenberg, Nir Shlezinger
2023-09-06T15:53:03Z
http://arxiv.org/abs/2309.03114v3
NUV-DOA: NUV prior-based Bayesian sparse reconstruction with spatial filtering for super-resolution DOA estimation ###### Abstract Achieving high-resolution Direction of Arrival (DoA) recovery typically requires high Signal to Noise Ratio (SNR) and a sufficiently large number of snapshots. This paper presents NUV-DoA algorithm, that augments Bayesian sparse reconstruction with spatial filtering for super-resolution DoA estimation. By modeling each direction on the azimuth's grid with the sparsity-promoting normal with unknown variance (NUV) prior, the non-convex optimization problem is reduced to iteratively reweighted least-squares under Gaussian distribution, where the mean of the snapshots is a sufficient statistic. This approach not only simplifies our solution but also accurately detects the DoAs. We utilize a hierarchical approach for interference cancellation in multi-source scenarios. Empirical evaluations show the superiority of NUV-DoA, especially in low SNRs, compared to alternative DoA estimators. Mengyuan Zhao, Guy Revach, Tirza Routtenberg, and Nir Shlezinger + Footnote †: M. Zhao and G. Revach contributed equally to this work. M. Zhao was doing this work while at ETH Zurich, now at the Division of ISE, KTH Royal Institute of Technology, Sweden. G. Revach is with ISI, D-ITET, ETH Zurich (email: [email protected]), T. Routtenberg and N. Shlezinger are with the School of ECE, Ben-Gurion University of the Negev, Be'er Sheva, Israel. We thank Hans-Andrea Loeliger for the helpful discussions. DoA estimation, sparse recovery. Mengyuan Zhao, Guy Revach, Tirza Routtenberg, and Nir Shlezinger DoA estimation, sparse recovery. ## 1 Introduction Direction of Arrival (DoA) estimation is the task of determining the azimuth of source signals using multiple measurements from a sensor array [1]. It plays a crucial role in contemporary applications across various fields [2]. While the resolution of conventional approaches based on covariance recovery via beamforming [3] and MVDR [4], is limited by the array geometry, subspace methods (e.g., MUSIC [5]) achieve super-resolution. However, they rely on a relatively large number of snapshots and high Signal to Noise Ratios (SNRs), and their accuracy degrades when these are not met. To address some of the limitations of previous approaches, deep learning-based methods have emerged. These methods utilize either end-to-end neural networks for direct DoA recovery [6, 7, 8, 9] or augment classic algorithms [10, 11], with trainable architectures via model-based deep learning [12]. While these methods often function effectively under challenging conditions, they are inherently data-driven, relying on labeled data from which the estimation mapping is learned. The unique structure of the DoA estimation problem has facilitated the adoption of sparse signal reconstruction (SSR) [13] and compressed sensing (CS) [14]. In this approach, the azimuth range is quantized into a grid, with the steering vectors serving as the dictionary. Methods such as \(\ell_{1}\)-SVD [15] and SPICE [16], as well as others [17, 18], have demonstrated their merit in challenging scenarios, particularly those with few snapshots or correlated sources [2]. Within the scope of _sparse_ modeling, the _Bayesian_ framework [19] can be interpreted as a form of regularization where sparsity is promoted through the adoption of specific prior distributions. Notably, methods such as RVM, SBL, and BCS [20, 21, 22] present an effective surrogate for addressing \(\ell_{0}\) minimization. 
This often results in more accurate recovery performance compared to many \(\ell_{1}\) minimization approaches. Consequently, Bayesian-based DoA estimation techniques have been proposed, including [23, 24, 25, 26, 27, 28]. Although these methods all stem from a shared concept, they exhibit distinct differences in modeling, choice of sparsifying prior, estimation objective, choice of estimation algorithm, tuning parameters, the handling of multiple measurement vectors (MMV), and decision rules. These nuances impact resolution, recovery performance, stability, and complexity. Despite their potential, SSR-based methods exhibit inherent limitations arising from the unique structure of the DoA estimation problem and the associated resolution trade-off. Achieving higher resolution necessitates refining the grid quantization. However, this refinement leads to a more complex problem and a dictionary with greater mutual coherence, which can, in turn, impair the recovery performance. A further limitation arises from the handling of MMV. While some approaches address MMV by combining multiple single-snapshot (SMV) estimators, others tackle the full MMV problem, frequently incorporating joint sparsity. However, these methods often increase the problem's complexity proportional to the number of snapshots, rendering them less efficient. To tackle these challenges, we present _NUV-DoA_, a super-resolution SSR-based DoA estimation algorithm that utilizes the mean of the snapshots as its sufficient statistic. Despite its simplicity, it offers high accuracy and robustness, especially in scenarios with low SNR and a limited number of snapshots. In this approach, each quantized cell in the azimuth grid is modeled as a complex decision variable, adhering to a sparsity-inducing normal with unknown variance (NUV) prior [29, 30, 31]. The NUV, deeply rooted in _Bayesian_ sparsity, facilitates the transformation of the SSR problem into an iteratively reweighted least-squares Gaussian estimation problem, where the mean of the snapshots serves as a sufficient statistic. To further improve the DoA recovery performance, we harness the inherent spatial correlation of the DoA estimation problem. We employ a spatial filtering-based super-resolution algorithm that segments a single, global SSR problem spanning the entire grid into a series of localized problems. Finally, we propose a hierarchical algorithm designed to reduce computational complexity for super-resolution. This approach is especially vital in scenarios with multiple sources. By incorporating an interference cancellation step, we simplify the multi-source DoA recovery into \(K\) single-source problems without incurring additional complexity. The remainder of this paper is structured as follows: Section 2 formulates the problem of DoA estimation and outlines its sparse representation; Sections 3-4 describe and evaluate NUV-DoA, respectively, while Section 5 concludes the paper. ## 2 Problem Formulation and Model ### DoA Estimation Problem Formulation We study the estimation of the DoAs of \(K\) far-field radio signals originating from directions \(\boldsymbol{\theta}\in\Theta^{K}\), where \(\Theta=\left[-\frac{\pi}{2},\frac{\pi}{2}\right)\). The DoAs are recovered from \(L\) noisy snapshots measured using a uniform linear antenna array (ULA) consisting of \(N\) elements spaced at half-wavelength intervals. 
A single snapshot \(\mathbf{y}(t)\) can be expressed as \[\mathbf{y}(t)=\mathbf{A}\left(\boldsymbol{\theta}\right)\cdot\mathbf{s}\left( t\right)+\mathbf{v}\left(t\right)\in\mathbb{C}^{N},\quad t\in\left\{1,...,L \right\}. \tag{1}\] Here, \(\mathbf{A}\left(\boldsymbol{\theta}\right)\in\mathbb{C}^{N\times K}\) represents the steering matrix, which projects the \(K\) narrow-band impinging source signals, denoted by vector \(\mathbf{s}\left(t\right)\in\mathbb{C}^{K}\), onto the ULA. The steering matrix is comprised of the \(K\) steering vectors, defined as \[\mathbf{a}\left(\theta_{k}\right)=\left(e^{-i\pi\sin(\theta_{k})},\ldots,e^{- i\pi\left(N-1\right)\sin(\theta_{k})}\right)^{\top}. \tag{2}\] The term \(\mathbf{v}\left(t\right)\) in (1) is an additive noise following a complex Gaussian distribution, \(\mathcal{CN}\left(0,\mathbf{R}\right)\). Aggregating the \(L\) snapshots as the matrix \(\mathbf{Y}\), our objective is to estimate \(\boldsymbol{\theta}\) from \(\mathbf{Y}\). ### Sparse Modeling To frame the DoA estimation problem as an instance of SSR, we quantize the azimuth \(\Theta\) into \(M\) equidistant grid cells \[\boldsymbol{\vartheta}\triangleq\left[\vartheta_{0},\ldots,\vartheta_{M-1} \right],\ \vartheta_{m}=m\cdot\Delta\vartheta-\frac{\pi}{2},\ \Delta\vartheta=\frac{\pi}{M}.\] Using these quantized directions, we can approximate a snapshot as a _sparse_ linear combination of steering vectors. Specifically, let us denote \[\hat{\mathbf{y}}\left(t\right)=\mathbf{A}\left(\boldsymbol{\vartheta}\right) \cdot\mathbf{x}\left(t\right),\ \ \mathbf{A}\left(\boldsymbol{\vartheta}\right)\in\mathbb{C}^{N\times M},\ \ \mathbf{x}\left(t\right)\in\mathbb{C}^{M}, \tag{3}\] where \(\mathbf{A}\left(\boldsymbol{\vartheta}\right)\) is an over-complete dictionary matrix with the \(m\)-th column being a steering vector represented by \(\mathbf{a}\left(\vartheta_{m}\right)\) as in (2). Furthermore, \(\mathbf{x}\left(t\right)\) is a sparse vector in which each non-zero entry corresponds to an active direction. The SMV SSR optimization problem with a single realization \(\mathbf{y}\) (omitting \(t\) for brevity), is represented as: \[\mathbf{x}^{*}=\arg\min_{\mathbf{x}}\left\{\left\|\mathbf{y}-\mathbf{A}\left( \boldsymbol{\vartheta}\right)\cdot\mathbf{x}\right\|_{2}^{2}+\gamma\left\| \mathbf{x}\right\|_{0}\right\}. \tag{4}\] Problem (4) is challenging to address due to the non-convexity introduced by the \(\ell_{0}\) norm. Several approaches have been proposed in the literature for solving this problem, including greedy OMP methods [32], optimization-based methods using \(\ell_{1}\) and \(\ell_{2}\) relaxations [13], and _Bayesian_ methods [21]. For MMV [33], the conventional method employs a block of measurements coupled with a block of sparse vectors, assuming joint sparsity. Yet, this method sees its complexity increase with the number of snapshots. In the following, we demonstrate that by adopting a _Bayesian_ approach with the NUV prior, the MMV problem can be distilled down to an SMV. In this context, \(\bar{\mathbf{y}}_{L}\), representing the mean of multiple snapshots, emerges as a sufficient statistic. 
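To make the data model concrete, a minimal NumPy sketch is given below; the array sizes, source directions, and noise level are illustrative placeholders, and the code is not taken from the paper's released implementation. It builds the half-wavelength ULA dictionary of (2)-(3), simulates \(L\) snapshots according to (1), and forms the snapshot mean \(\bar{\mathbf{y}}_{L}\):

```python
import numpy as np

def steering_dictionary(n_sensors, grid):
    """A(theta) for a half-wavelength ULA: column m is exp(-i*pi*n*sin(theta_m)), n = 0..N-1."""
    n = np.arange(n_sensors)[:, None]                               # sensor index, shape (N, 1)
    return np.exp(-1j * np.pi * n * np.sin(grid)[None, :])          # dictionary, shape (N, M)

# Azimuth grid of M equidistant cells over [-pi/2, pi/2), as in the sparse model
N, M, L = 16, 360, 50
grid = np.arange(M) * (np.pi / M) - np.pi / 2

# Simulate L noisy snapshots from K = 2 sources (illustrative values only)
true_doas = np.deg2rad([-20.0, 35.0])
A_true = steering_dictionary(N, true_doas)
S = (np.random.randn(2, L) + 1j * np.random.randn(2, L)) / np.sqrt(2)   # source signals s(t)
V = 0.1 * (np.random.randn(N, L) + 1j * np.random.randn(N, L)) / np.sqrt(2)  # noise v(t)
Y = A_true @ S + V                                                   # snapshots, eq. (1)

A = steering_dictionary(N, grid)   # over-complete dictionary A(vartheta), shape (N, M)
y_bar = Y.mean(axis=1)             # snapshot mean: the sufficient statistic used by NUV-DoA
```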
## 3 NUV-DoA High-Resolution Estimation Our NUV-DoA algorithm spans three key ingredients: 1) the NUV-SSR, a general Bayesian-based SSR algorithm leveraging the NUV prior to promote sparsity that is versatile enough to be applied beyond DoA estimation to any dictionary matrix; 2) a _super-resolution_ algorithm that capitalizes on the inherent spatial correlation of the DoA setting through spatial filtering over overlapping sub-bands; and 3) a hierarchical strategy for multi-source _interference cancellation_. ### Nuv-Ssr In our model (3), it holds that the vectors \(\left\{\mathbf{x}\left(t\right)\right\}_{t=1}^{L}\) are jointly sparse; that is, they all share the same support. Moreover, every snapshot is considered independent and of equal importance. Consequently, we represent \(\mathbf{x}\) as a vector of complex decision variables, where each component \(\mathrm{x}_{m}\in\mathbb{C}\) corresponds to a potential hypothesis, such as a direction \(\vartheta_{m}\), and follows a complex NUV prior, specifically: \[\mathrm{x}_{m}\sim\mathcal{CN}\left(0,\mathrm{q}_{m}^{2}\right),\quad\mathbf{ p}\left(\mathrm{q}_{m}^{2}\right)\propto 1,\quad\mathrm{q}_{m}^{2}\in\mathbb{R}_{+}. \tag{5}\] The joint likelihood of the snapshots, given the decision vector, can be decomposed into conditionally independent and identical Gaussian distributions. Given that our prior adheres to a Gaussian distribution, the posterior is Gaussian as well. Thus, the temporal mean \(\bar{\mathbf{y}}_{L}=\frac{1}{L}\sum_{t=1}^{L}\mathbf{y}(t)\), emerges as a sufficient statistic, significantly simplifying inference. The conventional approach for recovery involves a Type I method: specifically, a maximum _a posteriori_ (MAP) estimation of \(\mathbf{x}\) given the sufficient statistic \(\bar{\mathbf{y}}_{L}\). However, studies have shown that Type II offers superior recovery performance [34]. In Type I, the vector of unknown variances \(\mathbf{q}^{2}=\left(\mathrm{q}_{1}^{2},\ldots,\mathrm{q}_{M}^{2}\right)^{\top}\) is integrated out, while here, in Type II, it is estimated through evident maximization: \[\hat{\mathbf{q}}^{2}=\arg\max_{\mathbf{q}^{2}\in\mathbb{R}_{+}^{M}}\left\{ \mathbf{p}\left(\mathbf{q}^{2}\left|\bar{\mathbf{y}}_{L}\right.\right)\right\}. \tag{6}\] To solve this optimization problem, we employ a two-step expectation maximization (EM) algorithm [35]. By considering \(\mathbf{x}\) as a latent vector dependent on \(\mathbf{q}^{2}\) and maximizing the log-posterior lower bound difference, the algorithm reduces to an iteratively reweighted least-squares estimation of the unknown variance, based on the following update rule: \[\hat{\mathbf{q}}_{m,[i+1]}^{2}=\mathds{E}_{[i]}\left(\left|\mathbf{x}_{m} \right|^{2}\right)=\left|\mathds{E}_{[i]}\left(\mathbf{x}_{m}\right)\right|^{2 }+\mathds{V}_{[i]}\left(\mathbf{x}_{m}\right). \tag{7}\] Here, \(\mathds{E}_{[i]}\), and \(\mathds{V}_{[i]}\) are the expectation and variance, respectively, computed under the posterior distribution \(\mathbf{p}\left(\mathbf{x}\left|\mathbf{\tilde{y}}_{L};\hat{\mathbf{q}}_{[i]}^ {2}\right.\right)\), with the NUVs set to their values from the preceding iteration. 
These values arise from minimizing a least-squares cost function, for which the closed-form solution is \[\hat{\mathbf{x}}_{[i]}=\mathds{E}_{[i]}\left(\mathbf{x}\right)= \Gamma\left(\hat{\mathbf{q}}_{[i]}^{2}\right)\cdot\mathbf{A}^{\mathsf{H}} \cdot\mathbf{W}_{[i]}\cdot\mathbf{\tilde{y}}_{L}, \tag{8a}\] \[\mathds{V}_{[i]}\left(\mathbf{x}\right)=\hat{\mathbf{q}}_{[i]}^{2}- \Gamma\left(\hat{\mathbf{q}}_{[i]}^{4}\right)\cdot\text{diag}\left(\mathbf{A} ^{\mathsf{H}}\cdot\mathbf{W}_{[i]}\cdot\mathbf{A}\right), \tag{8b}\] where \(\mathbf{W}_{[i]}\) is a precision matrix of size \(n\times n\) given by: \[\mathbf{W}_{[i]}=\left(\mathbf{A}\cdot\Gamma\left(\hat{\mathbf{q}}_{[i]}^{2} \right)\cdot\mathbf{A}^{\mathsf{H}}+\frac{\sigma^{2}}{L}\cdot\mathbf{I} \right)^{-1}. \tag{9}\] In the above, \(\left(\boldsymbol{\vartheta}\right)\) was omitted to indicate that \(\mathbf{A}\) can serve as a general dictionary. \(\Gamma\left(\mathbf{u}\right)\) is a diagonal matrix with \(\mathbf{u}\) being its main diagonal, and \(\text{diag}\left(\mathbf{B}\right)\) denotes an operator of extracting the diagonal of matrix \(\mathbf{B}\) and formatting it as a vector. The NUV-SSR is initialized with \(\hat{\mathbf{q}}_{[0]}^{2}\) as a non-zero random value, and continues until a predefined convergence criterion is met. A notable benefit of this method is its single-sparsity tuning parameter \(\sigma^{2}\), further elaborated upon in Section 4. A key feature of the NUV representation is that when \(\mathbf{q}_{m}^{2}=0\), it implies that \(\left|\mathbf{x}_{m}\right|=0\) as well. To select the \(K\) active hypotheses, we introduce \(\Omega=\left(\Omega_{m}=\left|\mathbf{x}_{m}\right|\right)^{\top}\) as the NUV-_spectrum_ from which we choose the \(K\) most dominant peaks. If \(K\) is not pre-defined, we select the most dominant peaks that exceed a threshold \(\eta\). ### Super-Resolution by Spatial Filtering The outlined NUV-SSR is powerful and robust, and we successfully employed it to address an SSR problem encompassing \(M=3000\) atoms. Despite the computational complexity of the problem, the performance is commendable. In DoA recovery, \(M=3000\) translates to a resolution of \(\Delta\vartheta=0.06^{\circ}\). However, aiming for finer resolutions like \(\Delta\vartheta=0.01^{\circ}\) dramatically inflates the problem size to \(M=18,000\) atoms, presenting substantial challenges to all SSR algorithms. The unique structure of the DoA recovery problem further complicates matters: the steering vectors exhibit spatial correlation, thus increasing the dictionary's mutual coherence which can potentially reduce performance. To further enhance DoA recovery performance, we augment NUV-SSR with super-resolution by leveraging the inherent spatial correlation, leading to the implementation of NUV-DoA by spatial filtering. Here, we segment the single, global SSR problem spanning the entire azimuth grid into a series of smaller localized overlapping tasks. Each task targets a sub-band of the azimuth around \(\vartheta_{m}\) of size \(2\alpha\), represented as \(\vartheta_{m}^{\sqcap}=\left[\vartheta_{m}-\alpha,\vartheta_{m}+\alpha\right]\). For every sub-band \(\vartheta_{m}^{\sqcap}\), an SSR problem is solved based on the steering submatrix \(\mathbf{A}_{m}^{\sqcap}\left(\boldsymbol{\vartheta}\right)\) using NUV-SSR. Within each of these sub-bands, only the center element from the sub-spectrum \(\Omega_{m}^{\sqcap}\left(0\right)\) is extracted. 
Following this, these elements are combined to produce the full spectrum that spans the entire azimuth, after which peak detection is executed. This strategy excels in delivering high resolution, precision, and a universal tuning principle. When targeting a resolution of \(\Delta\vartheta=0.01^{\circ}\) with \(\alpha=0.5\), each sub-band spans \(1^{\circ}\). This results in \(M^{\sqcap}=101\) decision variables for each sub-task. This method proves more effective and efficient than other alternatives. Yet, its efficiency can be further enhanced by adopting a hierarchical approach, as we show next. ### Hierarchical Approach and Multi-Source Estimation While the spatial filtering strategy significantly enhances resolution, it can also introduce an off-grid effect. Energy from sources located outside the band might leak into the sub-band of interest, causing interference. To address this, we employ a hierarchical approach for adjacent channel interference cancellation, transforming multi-source DoA recovery into \(K\) single-source problems, without adding extra complexity. We start with a low-resolution preprocessing phase to coarsely estimate the \(K\) directions, using NUV-SSR in low SNRs and Root-MUSIC in high SNRs. For each single estimated source, interference from its \(K-1\) neighboring sources is canceled by subtracting a linear combination of their steering vectors from the observations. This leaves a single source DoA estimation, to which our high-resolution algorithm can be applied. Given our reliable direction estimates, high-resolution processing is confined to a narrow azimuth sub-band of size \(6\cdot\epsilon\) around the low-resolution estimate, where \(\epsilon\) represents the empirical standard deviation of the low-resolution algorithm's error at the given SNR. This hierarchical method is also adaptable for single-source estimation, offering a significant reduction in complexity. ### Discussion While the sparsifying NUV prior is deeply rooted in SBL, its potential in the Gaussian setting has not been fully harnessed for general SSR and DoA estimation tasks. The fact that the mean of the snapshots is a sufficient statistic not only simplifies the problem but also enhances the robustness of our NUV-DoA, particularly in low SNRs and with a limited number of snapshots. Additionally, since our method does not rely on the covariance of the snapshots, it can effectively manage scenarios with coherent sources. Having only a single tuning parameter, \(\sigma^{2}\) makes NUV-DoA easy to implement. Here, estimating a positive variance directly correlates with the recovery of an atom from the support and with detecting a source signal in a particular direction. From a statistical perspective, our parameter estimation algorithm translates into a simultaneous statistical test spanning a family of \(M\) latent hypotheses, of which only \(K\) are active. Owing to the sparsity-enforcing nature of the NUV, we attain high detection rates while maintaining low false alarms. ## 4 Empirical Evaluation Here, we empirically evaluate the properties and performance of our proposed NUV-DoA.1 We begin by highlighting the operation of sub-spectrums combining. For a single source at \(\theta=0^{\circ}\), the intermediate sub-spectrum centered on this direction, as shown in Fig. 2, achieves a gain of \(\approx 17\,\mathrm{[dB]}\) over those centered at \(\theta=-20^{\circ}\) and \(\theta=+20^{\circ}\), depicted in Figs. 1 and 3, respectively. The combined spectrum, illustrated in Fig. 
4 is both sharp and accurate. Footnote 1: The source code can be found at [https://github.com/MengyuanZhao/ICASSP24-NUV-DoA](https://github.com/MengyuanZhao/ICASSP24-NUV-DoA) Next, we examine the impact of the tuning parameter \(\sigma^{2}\). A low \(\sigma^{2}\) means the NUV does not promote sparsity, leading to an imprecise spectrum with numerous false alarms (Fig. 5). Conversely, a high \(\sigma^{2}\) aggressively enforces sparsity, yielding a spectrum that is accurate but wide and of low magnitude (Fig. 7). An optimal \(\sigma^{2}\) produces a spectrum that is precise, narrow, and of high magnitude, as seen in Fig. 6. We conclude by comparing the performance of NUV-DoA, across multiple SNR conditions, with benchmark algorithms. Fig. 8 reveals that our method, with a resolution of \(\Delta\theta^{\circ}=0.01\), significantly surpasses its counterparts in both high and low SNRs, especially when focusing on the mid-interval of the azimuth \((|\theta^{\circ}|\leq 75)\). This performance differential is even more pronounced at the azimuth boundaries \((75\leq|\theta^{\circ}|\leq 85)\), as seen in Fig. 9. Notably, while competing algorithms falter even in medium SNR or with limited snapshots (e.g., \(L=10\)), NUV-DoA retains efficacy even at an extreme low of \(L=2\), as depicted in Fig. 10. Lastly, Fig. 11 highlights the efficacy of our interference-cancellation mechanism, particularly when contending with two non-coherent sources spaced as closely as \(15^{\circ}\) apart. ## 5 Conclusions We proposed NUV-DoA, a high-precision DoA estimation algorithm. NUV-DoA combines Bayesian sparse recovery using NUV priors, alongside spatial filtering for super-resolution and hierarchical multi-source estimation. Our numerical results show the superiority of NUV-DoA in robustness and accuracy, particularly in low SNRs.
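As a supplement to Section 3.1, the following minimal sketch spells out one way the iteratively reweighted least-squares updates (7)-(9) could be realised on the snapshot mean; the variable names, fixed iteration count, and initialization are our own choices, so this should be read as an illustrative interpretation of the equations rather than the authors' released code.

```python
import numpy as np

def nuv_ssr(A, y_bar, sigma2, L, n_iter=200, q_init=1.0):
    """Illustrative NUV-SSR iteration, eqs. (7)-(9): EM / reweighted least squares
    on the snapshot mean y_bar, with one unknown variance q_m^2 per grid cell."""
    N, M = A.shape
    q2 = np.full(M, q_init)                                   # unknown variances q_m^2
    for _ in range(n_iter):
        W = np.linalg.inv(A @ np.diag(q2) @ A.conj().T        # eq. (9): precision matrix
                          + (sigma2 / L) * np.eye(N))
        x_mean = q2 * (A.conj().T @ W @ y_bar)                 # eq. (8a): posterior mean
        x_var = q2 - (q2 ** 2) * np.real(np.diag(A.conj().T @ W @ A))  # eq. (8b): posterior variance
        q2 = np.abs(x_mean) ** 2 + x_var                       # eq. (7): EM update of the NUVs
    return np.abs(x_mean)                                      # NUV-spectrum Omega; pick the K peaks
```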
2309.07372
Training Audio Captioning Models without Audio
Automated Audio Captioning (AAC) is the task of generating natural language descriptions given an audio stream. A typical AAC system requires manually curated training data of audio segments and corresponding text caption annotations. The creation of these audio-caption pairs is costly, resulting in general data scarcity for the task. In this work, we address this major limitation and propose an approach to train AAC systems using only text. Our approach leverages the multimodal space of contrastively trained audio-text models, such as CLAP. During training, a decoder generates captions conditioned on the pretrained CLAP text encoder. During inference, the text encoder is replaced with the pretrained CLAP audio encoder. To bridge the modality gap between text and audio embeddings, we propose the use of noise injection or a learnable adapter, during training. We find that the proposed text-only framework performs competitively with state-of-the-art models trained with paired audio, showing that efficient text-to-audio transfer is possible. Finally, we showcase both stylized audio captioning and caption enrichment while training without audio or human-created text captions.
Soham Deshmukh, Benjamin Elizalde, Dimitra Emmanouilidou, Bhiksha Raj, Rita Singh, Huaming Wang
2023-09-14T01:16:02Z
http://arxiv.org/abs/2309.07372v1
# Training Audio Captioning Models Without Audio ###### Abstract Automated Audio Captioning (AAC) is the task of generating natural language descriptions given an audio stream. A typical AAC system requires manually curated training data of audio segments and corresponding text caption annotations. The creation of these audio-caption pairs is costly, resulting in general data scarcity for the task. In this work, we address this major limitation and propose an approach to train AAC systems using only text. Our approach leverages the multi-modal space of contrastively trained audio-text models, such as CLAP. During training, a decoder generates captions conditioned on the pretrained CLAP text encoder. During inference, the text encoder is replaced with the pretrained CLAP audio encoder. To bridge the modality gap between text and audio embeddings, we propose the use of noise injection or a learnable adapter, during training. We find that the proposed text-only framework performs competitively with state-of-the-art models trained with paired audio, showing that efficient text-to-audio transfer is possible. Finally, we showcase both stylized audio captioning and caption enrichment while training without audio or human-created text captions. Soham Deshmukh\({}^{1}\), Benjamin Elizalde\({}^{1}\), Dimitra Emmanouilidou\({}^{1}\), Bhiksha Raj\({}^{2}\), Rita Singh\({}^{2}\), Huaming Wang\({}^{1}\)\({}^{1}\)Microsoft, \({}^{2}\)Carnegie Mellon University {sdeshmukh, benjaminm, diemmano, huawang}@microsoft.com, {bhiksha, rsingh}@cs.cmu.edu automated audio captioning, text-only training, prefix tuning, contrastive learning ## 1 Introduction Automated Audio Captioning (AAC) task involves describing the content of an audio stream in natural language. This involves describing the audio in terms of audio events, acoustic scenes, temporal relationships between events, actions, interactions between objects, and the environment. The typical AAC model uses an encoder-decoder architecture [1] that can be semantically divided into two components: an audio understanding component and a language generation component. The audio understanding component is an audio encoder that extracts audio features. Examples of audio encoders are the pretrained sound event models like PANN [2], AST [3], HTSAT [4]. The language generation component is a decoder like BART [5] which generates a natural language description conditioned on the audio features. To improve the language generation component, recent approaches use Large Language Models (LLM) with encyclopedic knowledge like GPT2 as the choice of decoder [6, 7, 8]. However, the generated captions still suffer from short or repetitive text descriptions, of limited vocabulary and diversity [9, 10]. Large audio-text corpora can improve AAC systems. However, collecting annotated training data for audio captioning is a challenging task: it requires human annotators to listen to each audio recording and then describe what was heard. This process is time-consuming, expensive, and introduces biases [11, 10]. Recent works try to solve this problem from either a modeling or data-driven perspective. From the modeling side, researchers leverage Sound Event Detection (SED) datasets [12, 13, 14], contrastive losses [15, 16] and use training procedures that increase the diversity of captions [17]. The data-driven approaches include scaling the audio captioning dataset by sourcing data from the web or generating data from LLM. 
The challenge of sourcing data from the web is the sparsity of audio-text pairs that are aligned and also human-readable. On the other hand, with LLMs, one can generate audio captions with metadata and keywords [18]. However, the audio file corresponding to the generated caption is still required to train the AAC system. Therefore, we ask the question: _"Can we train AAC systems using only text?"_. This way, the requirement for audio segments paired with text descriptions can be lifted during AAC training. In this paper, we propose a method to train an AAC system using only text. Our method is based on the key insight that multimodal contrastive learning models like CLAP [19, 20] force the audio and text embeddings in a common space. First, we train a text decoder to generate captions conditioned on the pretrained CLAP text encoder. Then, during inference, we replace the text encoder with the pretrained CLAP audio encoder. To bridge any modality gap [21], we explore different lightweight approaches during training. We evaluate our method on two AAC datasets and show that our approach achieves competitive results with the state-of-the-art models trained on paired audio-text. We perform multiple ablation studies including LLM-generated text and stylized captions to verify the effectiveness of text-only training. ## 2 Approach CLAP [19] jointly trains an audio and text encoder using contrastive learning. After training, the audio and text encoder can be used in zero-shot setup for classification, retrieval [19, 22, 23] and supervised downstream tasks [18, 24, 25, 8]. Our approach leverages this multimodal space to enable text-to-audio transfer. An AAC model learns \(P(O|E_{a})\) where \(E_{a}\) is the audio embedding and \(O\) is the output caption. In our setup, \(E_{a}\) is the pretrained CLAP audio embedding. As CLAP learns a joint multimodal space between audio and text, \(P(E_{a})=P(E_{t})\) where \(E_{t}\) is the pretrained CLAP text embedding. Therefore, for AAC, we can instead learn \(P(O|E_{t})=P(O|E_{a})\). This has two implications. First, we can use a text encoder during training and then swap it with an audio encoder during inference. Second, this enables training on text-only data without requiring aligned audio-text pairs or corresponding audio of generated caption. ### Modality Gap In practice, \(P(E_{a})\neq P(E_{t})\) but instead \(P(E_{a})\approx P(E_{t})\). This prevents swapping audio and text encoders as the distribution of audio and text embeddings do not exactly coincide. This effect is known as Modality Gap [21] and is shown in Fig 1. It was initially observed in image-text models like CLIP but applies to all multimodal contrastive learning models including CLAP. To bridge the gap in visual models, recent works have explored adding offsets and learning adapters [26, 27]. In this work, we explore two methods to bridge the modality gap. First, we add zero-mean Gaussian noise to CLAP's text embeddings during training. The noise helps in spreading out the text embeddings and intersecting them with the audio embeddings. The Standard Deviation \(\epsilon\) of the Gaussian noise determines the robustness of the model- to the variations in embeddings and the shift caused by the modality switch. Second, we train a lightweight linear adapter to transform text embeddings into audio embeddings. We study the choice of adaptation and performance in Section 4. ### Training The text-only training approach and AAC model architecture are shown in Fig 1. 
Let the training data in text-to-text format be referred to as \(\{t^{i},c^{i}\}\), where \(t^{i}\) and \(c^{i}\) are the \(i^{th}\) input and \(i^{th}\) output text caption respectively. Text encoder \(g_{\psi}\) encodes the input text \(t^{i}\) into an embedding. We inject random Gaussian noise \(\mu\sim\mathcal{N}(0,\epsilon)\) with a standard deviation of \(\epsilon\) into the text embedding. \[v=g_{\psi}(t^{i})+\mu \tag{1}\] To create a prefix \(p^{i}\), the embedding \(v\) is projected to a sequence of \(k\) embeddings. The prefix \(p^{i}\) is used to prompt the pretrained frozen language model \(f_{\theta}\). \[p^{i}=p^{i}_{1},...,p^{i}_{k}=m(v) \tag{2}\] The language model \(f_{\theta}\) is fed with the prefix-caption concatenation of all \(\{z^{i}\}_{i=1}^{N}\), where \(z^{i}\) is: \[z^{i}=p^{i}_{1},...,p^{i}_{k},c^{i}_{1},...,c^{i}_{l} \tag{3}\] The model is trained as a standard captioning system, where it learns to predict a caption (text tokens) \(c^{i}\) conditioned on the prefix in an autoregressive fashion. We use Cross-Entropy as the loss function: \[\mathcal{L}=-\sum_{i=1}^{N}\sum_{j=1}^{l}\log p_{\gamma}(c^{i}_{j}|p^{i}_{1},...,p^{i}_{k},c^{i}_{1},...,c^{i}_{j-1}) \tag{4}\] where \(\gamma\) denotes the model's trainable parameters, contained only within the mapping network \(m\) (Figure 1). We use a model based on the prefix-tuning architecture [7, 8]. However, in principle, text-only training can be applied to any encoder-decoder AAC model. The text encoder and GPT2 are frozen. ### Inference At inference time, we swap the text encoder \(g_{\psi}\) with the audio encoder \(h_{\phi}\). We do not inject Gaussian noise or perform any modality adaptation. The audio encoder \(h_{\phi}\) and the mapping network \(m\) project the test audio \(x^{i}\) into a sequence of \(k\) embeddings. \[p^{i}=p^{i}_{1},...,p^{i}_{k}=m(h_{\phi}(x^{i})) \tag{5}\] The causal language model \(f_{\theta}\) generates the next token sequentially conditioned on the prefix \(p^{i}\). The language model assigns probabilities to all vocabulary tokens at each prediction, which are used to determine the next token depending on the choice of decoding. We use beam search with size 5. Figure 1: The first panel depicts the modality gap between CLAP pretrained audio and pretrained text embeddings in the joint audio-text space. The second panel shows the proposed method of text-only training for Automated Audio Captioning. At inference, the text encoder is swapped with the audio encoder and a caption is produced for the input audio. Only the mapping network \(m\) is trainable, while modules marked with \(\mathfrak{G}\) are frozen. The Prefix is the output of \(m\). Singular arrows depict embedding vectors while multiple arrows indicate a sequence of vectors.
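For readers who want to see the pipeline end to end, the following is a minimal PyTorch-style sketch of the text-only training step and the encoder swap at inference. It is not the released implementation: the CLAP encoders are passed in as generic callables, the 1024-dimensional CLAP embedding size and the single linear mapper are simplifying assumptions (the paper's mapper also includes an 8-layer transformer and a learnable constant), padding and device handling are omitted, and `generate` with `inputs_embeds` assumes a recent version of the transformers library.

```python
# Minimal sketch of text-only AAC training with an encoder swap at inference.
# `clap_text_encoder` / `clap_audio_encoder` are assumed callables returning one
# CLAP embedding per input item; only the mapping network is trained.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
decoder = GPT2LMHeadModel.from_pretrained("gpt2")            # frozen language model
for p in decoder.parameters():
    p.requires_grad = False

class MappingNetwork(nn.Module):
    """Maps one CLAP embedding to a sequence of k prefix embeddings (Eq. 2)."""
    def __init__(self, clap_dim=1024, lm_dim=768, k=40):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(clap_dim, lm_dim * k)

    def forward(self, emb):                                   # emb: (B, clap_dim)
        return self.proj(emb).view(emb.size(0), self.k, -1)   # (B, k, lm_dim)

mapper = MappingNetwork()                                     # the only trainable module

def text_only_loss(captions, clap_text_encoder, noise_std=0.015):
    """Eqs. 1-4: noisy text embedding -> prefix -> cross-entropy on caption tokens."""
    with torch.no_grad():                                     # CLAP text encoder is frozen
        emb = clap_text_encoder(captions)                     # (B, clap_dim)
    emb = emb + noise_std * torch.randn_like(emb)             # Gaussian noise bridges the gap
    prefix = mapper(emb)                                      # (B, k, lm_dim)
    ids = tokenizer(captions, return_tensors="pt", padding=True).input_ids
    tok_emb = decoder.transformer.wte(ids)                    # caption token embeddings
    inputs = torch.cat([prefix, tok_emb], dim=1)
    labels = torch.cat([torch.full(prefix.shape[:2], -100), ids], dim=1)  # loss on caption only
    return decoder(inputs_embeds=inputs, labels=labels).loss

@torch.no_grad()
def caption_audio(audio_batch, clap_audio_encoder, max_new_tokens=30):
    """Eq. 5: swap in the audio encoder at inference; no noise or adapter is applied."""
    prefix = mapper(clap_audio_encoder(audio_batch))
    out = decoder.generate(inputs_embeds=prefix, num_beams=5,
                           max_new_tokens=max_new_tokens,
                           pad_token_id=tokenizer.eos_token_id)
    return tokenizer.batch_decode(out, skip_special_tokens=True)
```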
## 3 Experiments **Datasets.** We use text data from Clotho [28] and AudioCaps [29] for the text-only audio captioning training. In all, we use 65,002 captions for training: 24,420 captions from the Clotho train set and 40,582 from the AudioCaps train set. For benchmarking, we use the audio files from the test set of Clotho and the test set of AudioCaps. In Section 5.1 we use the WavCaps dataset [18] to simulate text data generated by LLMs. **Encoders.** We use the audio encoder and text encoder from CLAP [20]: the audio encoder is the audio transformer HTSAT [4] and the text encoder is GPT2. Audio is sampled at 44.1 kHz and converted to 64-bin log Mel spectrograms, with a 1024-sample window in the 50-8000 Hz range and a 320-sample hop size. **Mapper and decoder.** The mapping network \(m\) shown in Figure 1 consists of a linear layer, an 8-layer transformer, and a learnable constant. The prefix length is set to be 40. For the decoder, we use GPT2, specifically GPT2-base (124M parameters). The decoder is frozen through all the experiments. **Training.** We use the Adam optimizer for 30 epochs with a linear schedule with 2000 warmup steps and a base learning rate of 1e-4. The batch size is 128, and models are trained on 4 V100 GPUs. ## 4 Results and Discussion SPIDEr is used as the primary evaluation metric in line with the IEEE DCASE 2022 competition. The experiments in this section are designed to understand the utility of the text-only method (Section 4.1) and adapter choices (Sections 4.2 and 4.3). ### Text-only training Table 1 shows results of various models trained on both AudioCaps and Clotho. Models in rows 1-4 use both audio and text in training. The proposed text-only model (row 5) uses only text data and random Gaussian noise with a std of 0.015. It achieves comparable performance with the best audio captioning models in the literature and obtains a SPIDEr score of 0.256 on Clotho and 0.455 on AudioCaps, higher than 0.215 and 0.437 reported by Kim et al. [7]. Text-only training is a valid alternative to training and/or initializing audio captioning systems. We also train our model architecture made for text-only training with audio-text pairs. The architecture is similar to Fig 1, where during training we use audio files with an audio encoder instead of text with a text encoder and Gaussian noise. These are the last, grayed rows in Table 1. The difference in SPIDEr score between the audio-text and the text-only training is small: +0.02 on AudioCaps and +0.01 on Clotho. This indicates that our text-only training can achieve comparable results without audio data. The main benefit of text-only training is training on unpaired, openly available text. We explore this in Section 5.1, where, by using LLM-generated text, we show that text-only training can improve over the audio-text training. ### Gaussian Noise Injection The choice and magnitude of noise used have a significant effect on text-only training. To determine the value of variance, we take the infinity norm between the audio and text embeddings of 30 randomly chosen examples. The variance value obtained is \(\sim\) 0.015. We also verify this empirically. We change the value of Gaussian noise variance from 0.0 to 1.0 and train a text-only model for each value. The SPIDEr score on AudioCaps and Clotho with respect to variance is shown in Figure 2. The experimental result of \(\sim\) 0.015 confirms that with only 30 examples we can approximate the noise variance required for text-only training. ### Learnable Adapter Random Gaussian noise is used as the default adapter in our experiments. However, with access to audio-text pairs, an adapter can be trained to bridge the modality gap. We consider learning a linear adapter for this purpose. Let \(f(a)\) be the pretrained CLAP audio encoder and \(g(t)\) the pretrained CLAP text encoder. The linear adapter \(h\) is applied on top of the pretrained text encoder \(g(t)\). The loss function is the MSE between \(f(a)\) and \(h(g(t))\). We use the AudioCaps and Clotho train sets for training the linear adapter \(h\). \begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline \multicolumn{1}{c}{Model} & Eval.
dataset & BLUE\({}_{1}\) & BLUE\({}_{2}\) & BLUE\({}_{3}\) & BLUE\({}_{4}\) & METEOR & ROUGE\({}_{L}\) & CIDEr & SPIDEr \\ \hline Chen et al. & AudioCaps & 0.489 & 0.292 & 0.178 & 0.106 & 0.152 & 0.346 & 0.265 & 0.093 & 0.179 \\ Gontier et al. & AudioCaps & 0.635 & 0.461 & 0.322 & 0.219 & 0.208 & 0.450 & 0.612 & 0.153 & 0.383 \\ Mei et al. & AudioCaps & 0.682 & 0.507 & 0.369 & 0.266 & 0.238 & 0.488 & 0.701 & 0.166 & 0.434 \\ Kim et al. & AudioCaps & 0.708 & 0.547 & 0.402 & 0.283 & 0.238 & 0.499 & 0.710 & 0.167 & 0.438 \\ Text-only (proposed) & AudioCaps & 0.645 & 0.481 & 0.338 & 0.227 & 0.220 & 0.458 & 0.697 & **0.178** & 0.437 \\ Audio-text (proposed) & AudioCaps & 0.647 & 0.480 & 0.337 & 0.223 & 0.223 & 0.462 & 0.729 & 0.181 & 0.455 \\ \hline Chen et al. & Clotho & 0.516 & 0.325 & 0.215 & 0.141 & 0.153 & 0.350 & 0.314 & 0.102 & 0.208 \\ Gontier et al. & Clotho & 0.461 & 0.282 & 0.182 & 0.117 & 0.136 & 0.318 & 0.251 & 0.083 & 0.167 \\ Mei et al. & Clotho & 0.516 & 0.318 & 0.204 & 0.127 & 0.157 & 0.351 & 0.313 & 0.105 & 0.209 \\ Kim et al. & Clotho & 0.539 & 0.346 & 0.227 & 0.142 & 0.159 & 0.366 & 0.319 & 0.111 & 0.215 \\ Text-only (proposed) & Clotho & 0.524 & 0.339 & 0.222 & 0.136 & **0.173** & **0.371** & **0.379** & **0.132** & **0.256** \\ Audio-text (proposed) & Clotho & 0.574 & 0.375 & 0.250 & 0.155 & 0.173 & 0.381 & 0.398 & 0.123 & 0.261 \\ \hline \hline \end{tabular} \end{table} Table 1: The proposed text-only method performs comparably with the best Audio Captioning models in the literature, which were trained on audio-text pairs. All models use AudioCaps and Clotho datasets in training. Higher is better for all metrics. The last two gray rows indicate model performance when audio-text pairs are used in training. Once \(h\) is trained, we perform the same text-only training shown in Figure 1 with an additional linear adapter applied before the Gaussian Noise. The performance with linear adapter is shown in Table 3. There is performance improvement achieved on Clotho dataset but the performance drops on the AudioCaps dataset. A potential solution is to train an adapter on 4.3M audio-text pairs used in CLAP training. However, this deviates from our work's theme of lightweight adaptation and hence left for future work. ## 5 Enriching and Stylizing Captions Results above show that efficient text-to-audio transfer is attainable. And if text-only data can sufficiently train an AAC system, this has an important practical significance: additional sources of text can be used for training, including existing datasets and web data, and "unlimited" text from LLMs. Moreover, domain adaption can be less laborious, allowing for stylizing captions with no or limited human curation. ### Training on LLM generated text In this section, we invoke WavCaps [18] to simulate LLM-generated text. WavCaps is a publicly available dataset with text captions generated by OpenAI's ChatGPT, after utilizing human-curated metadata. We augment the training of the text-only model with the WavCaps text data, on top of AudioCaps' and Clotho's training sets. Results, marked with 1 in Table 2, show performance improvements on N-gram and text matching metrics (BLUE\({}_{i}\)) for both datasets. This finding illustrates the efficient utilization of augmented text-only training data, which could also help improve vocabulary diversity. 
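A practical upshot of the result above is that the text-only training pool can be extended with machine-generated captions. The snippet below is a hypothetical illustration of assembling such a pool; the file names and JSON fields are assumptions rather than the actual WavCaps release format, and in text-only training each caption can serve as both the encoder input \(t^{i}\) and the target \(c^{i}\).

```python
# Hypothetical assembly of a text-only training pool from human-written and
# LLM-generated captions. File names and the "caption" field are assumptions.
import json
import random

def load_captions(path, field="caption"):
    with open(path, encoding="utf-8") as f:
        return [row[field] for row in json.load(f)]

human_caps = load_captions("clotho_train.json") + load_captions("audiocaps_train.json")
llm_caps = load_captions("wavcaps_chatgpt_captions.json")   # ChatGPT-generated text

# Light filtering of generated text before mixing it into the training pool.
llm_caps = [c.strip() for c in llm_caps if 5 <= len(c.split()) <= 30]

train_pool = human_caps + llm_caps
random.shuffle(train_pool)
print(f"{len(human_caps)} human + {len(llm_caps)} generated captions")
```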
Footnote 1: [https://github.com/microsoft/NoAudioCaptioning](https://github.com/microsoft/NoAudioCaptioning) ### Stylized Audio Captioning An additional consideration for text or caption descriptions is that various sources of text may also contain diverse styles. In this section we demonstrate the ability of the proposed text-only training framework to produce accurate and stylistically correct audio captions. We use Clotho and convert the original human-curated captions to a different style, "humorous", by invoking OpenAI's GPT-4, and prompting the model to stay close to the acoustic descriptions of the original captions. For example, the original caption "Sand is being shoveled and dumped on the ground", was transformed to "Sand relocation program: from shovel to ground, it's a gritty story". Ideally, one can adapt the model to any snippet of text from the web. In Table 4, we show that training the text-only AAC on stylized captions is possible, and useful for model adaptation to different domains. The captions will be released1. Footnote 1: [https://github.com/microsoft/NoAudioCaptioning](https://github.com/microsoft/NoAudioCaptioning) ## 6 Conclusion We introduce a method to train an AAC system on text-only data, without the need for paired audio samples. Our method uses the insight that contrastive models like CLAP force the audio and text embeddings in a common space, with some modality gap. To bridge the modality gap, we explore different light-weight approaches that can be used during training. We evaluate the proposed method on two AAC datasets and show that our text-only approach achieves competitive results with state-of-the-art audio captioning models trained on audio-text data, while it effectively allows for straightforward text-data augmentation and for stylized generated outputs. \begin{table} \begin{tabular}{c c c c c} \hline \hline Train Dataset & Eval. dataset & BLUE\({}_{1}\) & BLUE\({}_{2}\) & SPIDEr \\ \hline Original Clotho & Humor Clotho & 0.370 & 0.162 & 0.092 \\ Humor Clotho & Humor Clotho & **0.410** & **0.214** & **0.102** \\ \hline \hline \end{tabular} \end{table} Table 4: Text-only AAC model trained with the original or humorous stylized captions and evaluated on the humorous styled captions. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Model & Exul. dataset & BLUE\({}_{1}\) & BLUE\({}_{2}\) & BLUE\({}_{3}\) & BLUE\({}_{4}\) & METEOR & ROUGE\({}_{L}\) & CIDEr & SPICE & SPIDEr \\ \hline Text-only & AudioCaps & 0.645 & 0.481 & 0.338 & 0.227 & 0.220 & 0.458 & 0.696 & 0.178 & 0.437 \\ Text-only\({}^{\dagger}\) & AudioCaps & **0.653** & **0.484** & **0.342** & **0.232** & **0.226** & **0.459** & **0.697** & **0.179** & **0.438** \\ \hline Text-only & Clotho & 0.524 & 0.339 & 0.222 & 0.136 & 0.173 & 0.371 & 0.379 & 0.132 & 0.256 \\ Text-only\({}^{\ddagger}\) & Clotho & **0.530** & **0.342** & **0.224** & **0.143** & 0.164 & 0.367 & 0.377 & 0.117 & 0.247 \\ \hline \hline \end{tabular} \end{table} Table 2: Text-only uses AudioCaps and Clotho datasets in training. Symbol \({}^{\dagger}\) indicates LLM generated text [18] is added to training data. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Model & Adapter & Eval. 
dataset & BLUE\({}_{1}\) & BLUE\({}_{2}\) & BLUE\({}_{3}\) & BLUE\({}_{4}\) & METEOR & ROUGE\({}_{L}\) & CIDEr & SPICE & SPIDEr \\ \hline Text-only & Gaussian & AudioCaps & **0.645** & **0.481** & **0.338** & **0.227** & **0.220** & **0.458** & **0.696** & **0.178** & **0.437** \\ Text-only & Linear\({}_{1}\) & AudioCaps & 0.609 & 0.423 & 0.286 & 0.181 & 0.204 & 0.429 & 0.602 & 0.174 & 0.388 \\ \hline Text-only & Gaussian & Clotho & 0.524 & 0.339 & 0.222 & 0.136 & **0.173** & 0.371 & 0.379 & 0.132 & 0.256 \\ Text-only & Linear\({}_{1}\) & Clotho & **0.568** & **0.375** & **0.251** & **0.158** & 0.172 & **0.378** & **0.394** & 0.127 & **0.261** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of random Gaussian noise and a learnable linear adapter for bridging the modality gap (Section 4.3). All models use AudioCaps and Clotho datasets in training. Figure 2: Effect of random Gaussian noise variance on the SPIDEr score for AudioCaps and Clotho.
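As a companion to Section 4.2 and Figure 2, the sketch below shows one reading of the noise-scale heuristic: average the infinity norm of the audio-text embedding gap over a handful of paired examples. The encoder callables are placeholders, and averaging the per-pair norms is an assumption about how the single value is obtained.

```python
# Sketch of estimating the Gaussian noise scale from a few paired examples
# (Section 4.2). The CLAP encoder callables are placeholders.
import torch

def estimate_noise_scale(pairs, clap_text_encoder, clap_audio_encoder):
    """pairs: iterable of (audio, caption); returns a scalar noise scale."""
    gaps = []
    with torch.no_grad():
        for audio, caption in pairs:
            a = clap_audio_encoder([audio])        # (1, d) audio embedding
            t = clap_text_encoder([caption])       # (1, d) text embedding
            gaps.append((a - t).abs().max())       # infinity norm of the gap
    return torch.stack(gaps).mean().item()         # ~0.015 is reported in the paper

# noise_std = estimate_noise_scale(paired_examples[:30], text_encoder, audio_encoder)
```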
2303.04620
Followback Clusters, Satellite Audiences, and Bridge Nodes: Coengagement Networks for the 2020 US Election
The 2020 United States presidential election was, and has continued to be, the focus of pervasive and persistent mis- and disinformation spreading through our media ecosystems, including social media. This event has driven the collection and analysis of large, directed social network datasets, but such datasets can resist intuitive understanding. In such large datasets, the overwhelming number of nodes and edges present in typical representations create visual artifacts, such as densely overlapping edges and tightly-packed formations of low-degree nodes, which obscure many features of more practical interest. We apply a method, coengagement transformations, to convert such networks of social data into tractable images. Intuitively, this approach allows for parameterized network visualizations that make shared audiences of engaged viewers salient to viewers. Using the interpretative capabilities of this method, we perform an extensive case study of the 2020 United States presidential election on Twitter, contributing an empirical analysis of coengagement. By creating and contrasting different networks at different parameter sets, we define and characterize several structures in this discourse network, including bridging accounts, satellite audiences, and followback communities. We discuss the importance and implications of these empirical network features in this context. In addition, we release open-source code for creating coengagement networks from Twitter and other structured interaction data.
Andrew Beers, Joseph S. Schafer, Ian Kennedy, Morgan Wack, Emma S. Spiro, Kate Starbird
2023-02-28T23:59:02Z
http://arxiv.org/abs/2303.04620v2
# Followback Clusters, Satellite Audiences, and Bridge Nodes: Coengagement Networks for the 2020 US Election ###### Abstract The 2020 United States (US) presidential election was -- and has continued to be -- the focus of pervasive and persistent mis- and disinformation spreading through our media ecosystems, including social media. This event has driven the collection and analysis of large, directed social network datasets, but such datasets can resist intuitive understanding. In such large datasets, the overwhelming number of nodes and edges present in typical representations create visual artifacts, such as densely overlapping edges and tightly-packed formations of low-degree nodes, which obscure many features of more practical interest. We apply a method, coengagement transformations, to convert such networks of social data into tractable images. Intuitively, this approach allows for parameterized network visualizations that make shared audiences of engaged viewers salient to viewers. Using the interpretative capabilities of this method, we perform an extensive case study of the 2020 United States presidential election on Twitter, contributing an empirical analysis of coengagement. By creating and contrasting different networks at different parameter sets, we define and characterize several structures in this discourse network, including bridging accounts, satellite audiences, and followback communities. We discuss the importance and implications of these empirical network features in this context. In addition, we release open-source code for creating coengagement networks from Twitter and other structured interaction data. 1Department of Human Centered Design and Engineering. University of Washington, Seattle, WA 2Department of Sociology. University of Washington, Seattle, WA 3Department of Political Science. University of Washington, Seattle, WA 4Information School. University of Washington, Seattle, WA ## Introduction The 2020 United States (US) presidential election was -- and has continued to be -- the focus of pervasive and persistent mis- and disinformation spreading through our media ecosystems, including social media (Center for an Informed Public et al. 2021; Benkler et al. 2020). Efforts to understand these dynamics have driven the collection and curation of large social media datasets, and the subsequent production of large, directed network representations of social interactions to make sense of them (Abilov et al. 2021; Kennedy et al. 2022). But such network datasets can resist intuitive understanding. In large network datasets, the overwhelming number of nodes and edges present in typical representations create visual artifacts, such as densely overlapping edges and tightly-packed formations of low-degree nodes, which obscure many features of more practical interest (Nocaj, Ortmann, and Brandes 2015; Schulz and Hurter 2013) (Figure 1). In the case of the US presidential election, one feature of particular interest is the functional level of interaction between different political communities who, due partly to pervasive misinformation spread in this country's right-wing media ecosystems, no longer share a common understanding of the election's outcome (Pennycook and Rand 2021; Reuters 2021). Critical to understanding these inter-community interactions is characterizing the role of platform elites, who are responsible for a disproportionate share of election-related misinformation (Center for an Informed Public et al. 2021).
Here, we present an extensive case study on a dataset of English-language Twitter posts relating to the 2020 US presidential election. We take advantage of the interpretative capabilities of coengagement networks, which are similar to the co-citation networks widely used in bibliographic scholarship. This dataset, totaling 585M retweets collected from September 1st, 2020 to December 18, 2020, contains tweets referencing generic English-language terms related to voting and the election, with a focus on tweets relating to election misinformation. In practice, this dataset contains public discourse related not only to the presidential election, but also discourse related to the persistent and false claims that the results of the election were fraudulent. We create and interrogate three different coengagement networks of retweets filtered under different parameter sets, describing via a mixed-methods analysis how the salient features of these networks correspond to different discourse phenomena. These phenomena include bridge nodes, users that are retweeted by multiple and disparate audiences; satellite audiences, groups of detached users which connect to mainstream conversations in very specific ways; and followback clusters, unique and highly active groups of users that incessantly retweet each other and very specific mainstream accounts. Our analysis of followback clusters particularly shows how Twitter's much-noted mass account removals in the wake of the 2021 attack on the US Capitol Building particularly affected these followback groups. Our empirical and methodological contributions together are themselves a case study in the proposed _triangulation_ analysis method, where the intersection of understandings from multiple, sometimes contradictory networks generates greater knowledge than any one network alone [1, 13]. We conclude this paper by discussing the advantages of coengagement networks over other social network formats, the importance of triangulation as a method for analysis of social networks, and future extensions and ethical considerations for using such a method. ## Background ### Mis/Disinformation, Platform Elites, and US Presidential Elections Researchers have demonstrated the critical value of networks and network visualizations in efforts to identify key actors in political mis- and disinformation campaigns [2, 16]. Even within work in this domain, however, the terms mis- and disinformation themselves have been variously defined [1], sometimes eschewed in favor of the broader term "influence operations" [20], and sometimes even criticized as a contemporary moral panic [15, 1]. For this paper, we simply define mis- and disinformation of interest as the unintentional and intentional spread of false or misleading claims that the results of the 2020 US presidential election were fraudulent. While previous research on disinformation in the 2016 US presidential election focused on foreign interference [11], recent analyses of disinformation in the 2020 US presidential election have focused on domestic right-wing campaigns coordinated by elites on social media and beyond [3]. Recent research has shown how platform elites vary between different political groupings in the US, and have highlighted their role in spreading misinformation during the COVID-19 pandemic [1]. 
### Challenges in Visualizing Social Networks Network visualizations of large social data can provide valuable insight into the structure of online conversations, and these visualizations have become increasingly popular as representations of computational social science's promise [17]. A common goal in social network visualization is to highlight influential nodes and characterize the relationships they hold with one another [16, 15]. The simplest approach is to visualize the network in its observable entirety, with nodes representing user accounts and edges representing interactions between accounts. However, the large size of social media datasets, now often numbering in the millions or billions of nodes and edges so defined, can be intractable to visualize and render the exercise of doing so meaningless. Large social networks often encode multiple and seemingly contradictory dynamics at different scales, further exacerbating the difficulty in faithfully representing these phenomena to scholarly peers and the lay public [1]. These representational ambiguities can become especially misleading in the case of social data, where the lay-public often has strong priors about what to expect from the social world [17]. A common solution to this problem of large graphs is to heuristically filter unimportant nodes (e.g. with low node degree) or edges (e.g. with low edge weight) until the visualization reaches a tractable size [14, 1]. Identifying those unimportant nodes and edges is a significant challenge, as the concept of _importance_ is highly contingent on the interpretative aims of the researcher, and individually unimportant nodes may yet in the aggregate encode relevant structural information. Furthermore, there may be no single definition of node importance which addresses the full spectrum of phenomena represented by a social network. One innovation of particular relevance is the co-citation network developed in bibliographic network science, in which two published articles or authors are connected in a new work if other articles cite both of them together [23]. Importance filtering based on frequency of interaction in co-citation networks is frequently implemented, and its effects on apparent resulting clusters have been analyzed in [24]. Co-citation networks have typically focused on scientific literature, although others have used similar principles in relation to the hyperlink structure of the web, most notably with Kleinberg's concept of online _hubs_ and _authorities_ in search engine retrieval [10]. Here, we extend the concept of co-citation networks to social interaction data in what we call the coengagement network. Like co-citation networks, this method significantly reduces the number of nodes visualized in large datasets, while encoding information from missing nodes in visible edges that can reveal significant relationships. Additionally, this method is also tunable, meaning that researchers can produce different visualizations according to different no Figure 1: Directed retweet networks with typical node layout algorithms. Here, each node is an account, an edge represents a directed retweet, and edge weights represent the number of retweets. Edges with fewer than 50 retweets have been filtered out to aid visualization. The following layout algorithms as implemented in the software package Gephi are used from left to right: ForceAtlas2, YifanHu, OpenOrd. 
The bottom row contains close-ups of the same figures in the top row, highlighting dense node formations which obscure cluster interpretability. Dataset derived from Twitter data on the 2020 presidential election (size = 142K nodes, 424K edges). tions of node importance as defined by two interpretable parameters. A primary contribution is the application of co-citation principles to social data representing users, rather than documents, interacting with one another online. ## Coengagement Networks Coengagement networks are closely related to the method of projection in bipartite graphs. In a typical bipartite projection, a network with two types of nodes and no within-group connections (such as a network composed of researchers and the papers that they author [15]) is projected into one primary node type, with the remaining node type being collapsed into edge representations. This operation can productively reduce the number of nodes and edges to be visualized and analyzed, and preserves information about a primary node type while still retaining structural information from the projected node type. While we would not expect engagement networks among social media users to be bipartite, we aim to take advantage of the visual and analytical properties of bipartite projections, and therefore propose a transformation of non-bipartite graphs into bipartite graphs via the duplication of each node into two types: engaging and receiving nodes. We then project the resulting bipartite network such that engaging nodes are collapsed into edges between receiving nodes. In the context of users on Twitter, for example, this method privileges users with large audiences engaging in retweeting, commenting, liking, or even viewing, and defines relationships between users in terms of sharing similarly engaged audiences. Formally, we define a directed graph \(G=(V,\,E,\,w)\) with vertices \(V\), edges \(E\), and edge weights \(w\). In the current case study, we interpret \(G\) as a collection of Twitter users (\(V\)) retweeting other users (\(E\)), with edge weights w defined as the total number of retweets from one user to another. We then define a new graph \(G^{\prime}\) with vertices \(V^{\prime}\), which contains duplicate sets of vertices \(V_{S}\) and \(V_{R}\) that send and receive retweets, respectively. We similarly define a new set of undirected, weighted edges \(E^{\prime}\) in \(G^{\prime}\), where directed edges from \(V_{i}\) to \(V_{j}\) are represented as undirected edges from \(V_{Si}\) to \(V_{Rj}\), with the same weights \(w^{\prime}\). In effect, each original user has a vertex representing the instances in which they retweet others, and a separate vertex representing when they are retweeted. We then define a projection of \(G^{\prime}\) as \(X\), such that vertices in \(X\) are the interaction-receiving vertices \(V_{R}\), and edges in \(X\) are defined such that the edge weight between any two vertices \(i,\,j\) in \(X\) is the number of vertices in \(V_{S}\) that have defined edges to both \(V_{Ri}\) and \(V_{Rj}\). This final vertex set in the projected graph represents users that are retweeted and draws edges between them when they are jointly retweeted by at least one other user. We finally define two edge filtering parameters \(n\) and \(s\) on the resulting graph \(X\). Specifically, an edge between two users in \(X\) is defined if at least \(n\) other users have retweeted both users at least \(s\) times each. 
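To make the projection and the two filters concrete, the following is a small illustrative Python sketch, not the released implementation mentioned in the abstract. It assumes retweet interactions have already been aggregated into (retweeting user, retweeted user, count) triples, and it folds the \(s\) threshold into the projected edge weight, so each weight counts engaging users who retweeted both endpoints at least \(s\) times.

```python
# Illustrative coengagement projection: nodes are retweeted accounts; an edge is
# kept when at least n engaging users retweeted both accounts at least s times each.
from collections import defaultdict
from itertools import combinations
import networkx as nx

def coengagement_network(retweet_edges, n=100, s=5):
    # 1) For each engaging user, the set of accounts they retweeted at least s times.
    engaged = defaultdict(set)
    for sender, receiver, weight in retweet_edges:
        if weight >= s:
            engaged[sender].add(receiver)

    # 2) For each pair of retweeted accounts, count the engaging users shared
    #    between them; this becomes the projected (undirected) edge weight.
    pair_counts = defaultdict(int)
    for receivers in engaged.values():
        for u, v in combinations(sorted(receivers), 2):
            pair_counts[(u, v)] += 1

    # 3) Keep only pairs shared by at least n engaging users.
    X = nx.Graph()
    for (u, v), shared in pair_counts.items():
        if shared >= n:
            X.add_edge(u, v, weight=shared)
    return X

# Case 1 parameters: X = coengagement_network(weighted_retweet_edges, n=10000, s=1)
```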
The parameter \(n\) represents a minimum diversity of users retweeting two users, while \(s\) represents the minimum volume of retweeting a user must do to be considered in \(n\). The \(n\) and \(s\) parameters, which in practice control the number and distribution of edges in a coengagement network, are a powerful tool for targeting specific visualizations. These edge filtering parameters allow researchers to shape the output of their networks along two important, yet distinct, qualitative dimensions by modifying the distribution of edges between users. When researchers filter with a higher \(n\) value, influential users are related only if they attract engagement from large, diverse audiences, a typical goal in influential user analysis. When researchers filter with a higher \(s\) value, nodes are instead related by audiences that frequently retweet their content within a dataset, which can reveal dedicated rather than transient audiences. Different ratios of \(n\) to \(s\) can reveal other relationships: low \(n\) with high \(s\) can make highly active and coordinated audiences more salient, whereas low \(s\) with high \(n\) makes more salient those infrequent instances in which content is shared widely across different communities. Figure 2: Schematic representation of coengagement visualizations. Panel 1) of this figure shows a schematic for transforming A) a unipartite directed graph into B) a bipartite directed graph, and then into C) a coengagement network. The blue node is linked to the yellow node because of their shared engaging node in red, and the blue node is linked to the green node because of their shared engaging node in yellow. Panel 2) shows the effect of the node filtering parameters on an example graph. Engaging nodes, colored grey, are sized by their average out-degree. Under different combinations of the filtering parameters \(n\) and \(s\), different edges will result. In what follows, we present a series of case studies using a dataset of tweets related to the 2020 US presidential election. Specifically, we use a dataset of 585M Twitter posts containing substrings related to voting and the election ('vote', 'voting', 'mail', 'ballot', 'poll', and 'election'), define nodes as the authors of these posts, and (unprojected) edges as retweets from one user to another. We do not include quote tweets. We show that depending on the choice of the parameters \(n\) and \(s\), different clusters of influential nodes can be distinguished, and different forms of qualitative analysis can be applied. In doing so, we demonstrate the practical implications of coengagement networks and their interpretation. Empirically, these case studies offer a unique look at engagement during the 2020 presidential election. In the first case study, we choose a very high value of \(n\) and \(s=1\) to create a network that shows a broadly two-part structure to Twitter discussions around the presidential election, aligned with pro-Trump and pro-Biden accounts. The low \(s\) parameter highlights transient instances of high-volume crossover between these two groups but does not necessarily represent _sustained_ engagement across these groups. We then choose a parameter set with much lower \(n\) and slightly higher \(s\), to illustrate how a third pro-socialist grouping becomes salient at different audience sizes, and how some crossover nodes do not hold sustained engagement.
We end with a third case study at very high \(s\) and very low \(n\), to highlight two new followback communities that become salient when active, sustained engagement is prioritized over large audiences. ## Findings ### Case 1: Bridging Between Clusters at High Audience Sizes In Case 1, we generate a coengagement network where node relationships are defined by low restrictions on retweet frequency (_s_ = 1), but high restrictions on total retweet volume (_n_ = 10,000, Figure 3). In this and future visualizations, we use the ForceAtlas2 (Jacomy et al., 2014) visualization algorithm as implemented in the application Gephi (Bastian, Heymann, and Jacomy, 2009). Intuitively, this means that two nodes are connected if at least 10,000 users retweeted both at least once during the period of this dataset, and nodes with more users retweeting both of them are more tightly linked together. Visualizing this projection reveals two tightly interconnected communities of influential users discussing the US presidential election, which we term and briefly describe as the pro-Trump and pro-Biden clusters. The pro-Trump cluster is anchored around Donald Trump's account, and includes an array of pro-Trump political activists, political organizations, politicians, media outlets, journals, anonymous and self-identified online influencers, activists from the antifeminist "manosphere" online culture, and conspiracy-based QAnon communities. The pro-Biden cluster includes an array of politicians, journalists, media outlets, and online influencers, some of which self-identify as pro-Biden or liberal, and others of which, such as the television network CNN, identify as non-partisan but rejected false pro-Trump claims of election fraud. Overall, 2,499 nodes are generated in this graph, with 1,385 nodes in the pro-Trump cluster (55%) and 1,114 nodes in the pro-Biden cluster (45%). However, this set of retweets may be biased towards pro-Trump accounts due to our focus on terms related to election misinformation disproportionately spread by pro-Trump accounts. We use the "pro-" descriptor to describe clusters as a whole, but some individual members of these clusters may identify themselves otherwise. We also use the terms pro-Trump and pro-Biden, rather than Republican and Democrat, to illustrate the extent to which traditional US party alignments are contradicted in the membership of these clusters. Figure 3: Case 1, election discourse for large audiences (\(s\) = 1, \(n\) = 10,000). A coengagement visualization of retweet relationships in a collection of tweets related to the US presidential election. Each node represents a Twitter account, and two nodes are linked if at least 10,000 users retweeted both nodes at least once. Edges are undirected and weighted according to how many users retweeted both nodes. Nodes are sized according to their weighted degree, i.e. the sum of the weights of their incoming edges. Highlighted nodes represent nodes with connections to both pro-Trump and pro-Biden clusters. Figure 4: Example accounts with bipartisan engagement in Case 1. Plots of partisan retweets of four different bridging accounts labeled in Figure 3. Tweet dates are marked by the first time they were retweeted in our dataset. "Pro-Trump Only" partisan retweets indicate users who only retweeted accounts labeled in the pro-Trump cluster in Figure 3. "Pro-Biden Only" signifies the same totals for the pro-Biden cluster. Retweets of bridging nodes themselves are excluded in both retweet totals.
The Lincoln Project, a Republican advocacy group that supports conservative causes, is one of the most prominent accounts in the pro-Biden cluster, while many accounts in the pro-Trump cluster have ambivalent stances towards the Republican party outside of Trump. Their membership in a community is contingent not on their core beliefs or associations, but on their behavior in the dataset we observe, namely their posting behavior in the months before and after the presidential election. In datasets with the same users but different topics of discussion, the membership of individual users may change. Audiences rarely retweet across clusters in large numbers, and when they do, they tend to retweet a select few cluster-spanning nodes. Almost all (98%) cross-cluster connections route through only eight nodes, labelled in Figure 3. These bridging nodes serve different functions in this discourse environment, and connect people in different ways (Figure 4). The most common form of bridging node was one where the account generally created two types of tweets, one that appealed to pro-Trump audiences, and one that appealed to pro-Biden audiences. The most transparent examples of these accounts were those that tracked polling results (@PpollingResults, @Politics_Polls, @AP_Politics, @NBCNews, @APPolitics), where those results and polls that favored Biden were retweeted by pro-Biden accounts, and those that favored Trump were retweeted by pro-Trump accounts. However, this form of apparent bridging also occurred when media accounts reported in neutral tones on events that fed preexisting pro-Trump and pro-Biden narratives respectively. The journalist account @joshgerstein was separately retweeted by both clusters for neutrally reporting on Trump's attempts to contest the election results, with apparent pro-Trump legal judgments being more retweeted by pro-Trump users and their subsequent legal refutations more retweeted by pro-Biden users. The alternating quality of these nodes complicates the notion that they bridge communities, as their tweets are most often disproportionately shown to only one community at a time. A special form of this alternating bridging occurred with the account @FrankLuntz, which posted updates on polls and predictions about the outcome of the presidential election. Before election day, this account was sometimes critical of Biden and released some predictions favorable to Trump, which led it to garner a slightly right-leaning cumulative audience. After the election, this account was resolute in affirming Biden's victory in the face of false pro-Trump claims of election fraud, then earning a growing pro-Biden audience. This account displays the sensitivity of such analyses to dataset selection, as a pre-election discourse analysis would likely place the account firmly in the pro-Trump cluster, whereas a post-election discourse in the pro-Biden, and when combined, firmly between. It was rare when accounts created posts that consistently appealed to audiences in both clusters. Some individual posts had equal appeal across clusters, such as when polling accounts released vote tallies tied at nearly 50% in critical states. Other posts that had a similar appeal were simply neutral statements of fact about recent news relating to the presidential election, which were made particularly often by the account for the online news organization The Hill (@thehill). 
While pro-Biden leaning, The Hill's account was one of the only accounts to consistently find engagement from both pro-Biden and pro-Trump accounts across this election time period. The last significant point of crossover between the pro-Trump and pro-Biden clusters is the account for former president Trump itself (@realDonaldTrump). This circumstance reveals that though we imply for much of this analysis that retweets constitute endorsements, they do not always behave as such. Many pro-Biden accounts may be retweeting Trump's account simply because his tweets are often consequential in and of themselves, even when they are clearly opposed to Biden's election to the presidency. In several instances, pro-Biden users likely retweeted Trump's tweets sarcastically, such as when pro-Biden users disproportionately retweeted an old tweet from 2012 reading: "Scary thought-@JoeBiden is a heartbeat away from the Presidency." ### Case 2: Third Party and Satellite Audiences at Lower Audience Sizes In Case 1, a high \(n\) parameter demonstrated the nodes and graph structure of accounts with a relatively high volume of retweets over this dataset. While revealing the relatively few shared points of reference between pro-Biden and pro-Trump audiences, groups of users with smaller retweet bases are not visible in this graph. To visualize these communities, we generate a network from the same data with a lower \(n\), instead filtering by \(s\) to keep the size of the node and edge set tractable. Specifically, we generate a network in which links are defined when two nodes are retweeted by at least 100 shared users at least five times each (\(n=100\), \(s=5\), Figure 5). After removing non-US clusters, this reveals a third group of nodes that we term the pro-socialist cluster. The pro-socialist cluster, much smaller than either the pro-Biden or pro-Trump clusters, is centered around multiple political activists, journalists, and influencers associated with US democratic socialist candidates and causes. We note that, like the pro-Biden and pro-Trump clusters, this cluster does not contain all pro-socialist accounts, and some of its members may not identify as such. Indeed, popular democratic socialist presidential candidate Bernie Sanders is located in the pro-Biden cluster rather than the pro-socialist cluster, likely due to his widespread popularity and continued public support for Biden during this election. At a lower volume of retweeting, more bridges appear between the pro-Biden and pro-Trump clusters, and new bridges are generated between the pro-socialist and other clusters. Notably, the relative importance of some bridges changes under the new requirement for repeated engagement (\(s=5\)). For example, the account for reporter @joshgerstein has no cross-cutting connections in this graph, despite being responsible for 7% of all connections in the first graph. This discrepancy is likely caused by how much of this account's pro-Trump retweet engagement comes from only two tweets, both describing, in a neutral tone, updates on pro-Trump attempts to legally invalidate the results of the election. This change demonstrates the effect of the \(s\) parameter, which can be tuned upward to select nodes for sustained engagement over a dataset, rather than widespread but momentary engagement in critical posts. The opposite effect can also be seen in those new nodes that appear as significant bridges.
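Bridging accounts like those discussed here can be surfaced directly from a coengagement network once clusters have been labeled. The sketch below is illustrative rather than the authors' pipeline: it assumes a graph `X` of the kind built earlier and a `cluster` mapping from accounts to labels, and it attributes each cluster-spanning edge to both of its endpoints, which is only one possible accounting of the cross-cluster shares reported in Table 1.

```python
# Illustrative ranking of bridging accounts by their share of cluster-spanning edges.
# `X` is an undirected coengagement graph; `cluster` maps account -> community label.
from collections import Counter

def bridge_shares(X, cluster, group_a, group_b):
    spanning = Counter()
    total = 0
    for u, v in X.edges():
        if {cluster.get(u), cluster.get(v)} == {group_a, group_b}:
            total += 1               # this edge crosses the two clusters
            spanning[u] += 1
            spanning[v] += 1
    if total == 0:
        return {}
    return {node: count / total for node, count in spanning.most_common()}

# e.g. bridge_shares(X, cluster, "pro-Trump", "pro-Biden") ranks candidate bridge nodes
```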
The spam media account @spectatorindex becomes a bridging node in Case 2, due to its sustained but low level of engagement, as do circumstantially important accounts like the governor of Arizona's (@dougducey), whose state was the center of many false election fraud claims. Bridges between the pro-socialist and other clusters illustrate the different modes of inter-cluster commonality that can exist between different user clusters in this graph (Table 1). Between the pro-Biden and pro-socialist clusters, there are many nodes that draw consistent engagement, including popular democratic socialist politicians Bernie Sanders, Alexandria Ocasio-Cortez, and Ilhan Omar, and an array of writers, podcast hosts and other online influencers associated with the socialist movement. Between the pro-Trump and pro-socialist clusters, however, there is little engagement, and all of it is mediated through three nodes. Most prominently among these nodes is Glenn Greenwald, a journalist who combined left-leaning views on topics such as surveillance with frequent engagement with right-wing media and criticism of mainstream media. This second case study introduces a visualization feature which we had previously aimed to eliminate: tightly-packed clusters of low-degree nodes, in this case one-degree nodes mostly connected only to @realDonaldTrump. We note that while in ordinary graphs such nodes are usually uninteresting, in coengagement visualizations the relative isolation of these nodes reveals an important function in the election discourse environment. Particularly, many of these one-degree nodes are from communities plausibly isolated from mainstream US political discourse, but still displaying, for example, a contextual support for Trump or Biden. We term edges stemming from these nodes as _satellite audiences_, with inspiration from [2]. These nodes include accounts from other countries and/or in other languages, such as high-follower right-wing accounts writing for Japanese or Brazilian audiences. Such accounts are unlikely to interact with the majority of English-language right-wing accounts prominent in this graph, but may retweet Trump as a signal of nominal allegiance to his movement. Other low-degree nodes may originate in popular English-language, US-based communities that focus on topics usually unrelated to electoral politics. For example, the low-degree account for pop musician Ariana Grande, one of the most followed accounts on Twitter, connects only to Biden and fellow musician Lady Gaga, signaling a possible separation between entertainment-focused audiences and mainstream election-focused audiences. ### Case 3: Followback Clusters at High-Frequency Engagement Implicitly in the previous cases, structure is mostly determined by the number of users choosing to retweet two different accounts. However, as \(s\) increases and \(n\) decreases, structure is increasingly determined by repeated interactions by relatively small groups of users, which makes the actions of well-coordinated groups more salient. To illustrate this, we generate a graph where links are defined by 25 users retweeting two nodes at least 25 times each over the course of the dataset (\(n=25\), \(s=25\), Figure 6). In this case, three new clusters emerge with ties to the existing pro-Trump and pro-Biden clusters. 
We term the new \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{2}{|l|}{Top Election Accounts by Cross-Cluster Connections (Case 2)} & \multicolumn{2}{|l|}{Biden-socialist} & \multicolumn{2}{|l|}{Trump-socialist} \\ \hline \multirow{4}{*}{Account Name} & Share of & \multirow{4}{*}{Account Name} & Share of & \multirow{4}{*}{Account Name} & Share of \\ & Cross-Cluster & & Cross-Cluster & & Cross-Cluster \\ & Connections & & Connections & & Connections \\ & (\(n=2\),\(515\)) & & (\(n=299\)) & & (\(n=34\)) \\ \hline thehill & 24\% & Proudsocialist & 16\% & ggreenwald & 79\% \\ \hline PpollingNumbers & 18\% & BernieSanders & 13\% & jimmy\_dore & 9\% \\ \hline Politics\_Polls & 11\% & davidsirota & 12\% & TulsiGabbard & 6\% \\ \hline realDonaldTrump & 7\% & AOC & 11\% & aaronjmate & 6\% \\ \hline AP\_Politics & 6\% & briebriejoy & 10\% & & \\ \hline threaderadepapp & 5\% & KyleKulinski & 7\% & & \\ \hline Garrett\_Archer & 5\% & IlhanMN & 6\% & & \\ \hline spectatorindex & 4\% & ryangrim & 5\% & & \\ \hline FrankLuntz & 3\% & peterdaou & 5\% & & \\ \hline DecisionDeskHQ & 3\% & RBReich & 3\% & & \\ \hline \end{tabular} \end{table} Table 1: Bridging connections between pro-Biden, pro-Trump and pro-socialist clusters. The top 10 Twitter accounts in terms of total cross-cluster edges for each two-cluster pairing in Case 2. These accounts share audiences of at least 100 users retweeting at least 5 times each across election discourse clusters. clusters in this graph _followback_ communities, due to their unique method of gaining followers and using Twitter. Accounts in these communities attempt to gain followers by mass-following other accounts in expectation of reciprocal follows, and sometimes explicitly coordinate with other accounts to expose themselves to a wider audience of potential followers. Because Twitter limits the number of users an account can follow by that account's current follower number, these accounts can often be distinguished from others by their nearly 1:1 ratio between followers and following totals (Figure 7). In addition to this follower manipulation practice, these communities have other unique behaviors compared to the clusters previously identified. Their median retweet total is much higher than that of the previously-identified clusters, and retweets are a much higher percentage of their total tweeting behavior. Their frequent retweeting likely propels their visibility in this visualization. Qualitatively, their behavior is also different from other users on the platform. They engage in retweet "trains,' in which they make posts tagging members of their own community and then retweet these posts incessantly in an attempt to garner more followers for all participants (Gallagher 2020). They are almost entirely pseudonymous, with screen names and profile information often detached from any offline presence. One of the followback clusters is much larger than the others and associated with the core pro-Trump cluster, one is smaller and an off-shoot of the larger pro-Trump followback cluster, and the smallest cluster is associated with the core pro-Biden cluster. As with previous clusters, we can investigate the nodes which bridge one cluster to another. In this case, however, both followback clusters have no connections to non-followback clusters not aligned with their preferred candidate. 
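The behaviors described above suggest simple account-level signals for spotting followback-style activity. The sketch below is a hypothetical filter, not the study's method: it assumes a pandas DataFrame of per-account statistics, and the thresholds for the near 1:1 follower-to-following ratio and the retweet-heavy activity (compare Figure 7) are illustrative.

```python
# Hypothetical filter for followback-style accounts based on profile and activity stats.
# `accounts` is assumed to have columns: followers, following, n_tweets, n_retweets.
import pandas as pd

def flag_followback(accounts: pd.DataFrame,
                    ratio_band=(0.8, 1.25),
                    min_retweet_share=0.9) -> pd.DataFrame:
    df = accounts.copy()
    df["ff_ratio"] = df["followers"] / df["following"].clip(lower=1)
    df["retweet_share"] = df["n_retweets"] / df["n_tweets"].clip(lower=1)
    near_one_to_one = df["ff_ratio"].between(*ratio_band)      # near-diagonal in Figure 7
    mostly_retweets = df["retweet_share"] >= min_retweet_share
    return df[near_one_to_one & mostly_retweets]

# followback_candidates = flag_followback(accounts)
```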
In all three cases, an important point of cross-cluster connection are the accounts for the two presidential candidates themselves (@JoeBiden and @realDonaldTrump), but unlike other groups, most nodes in the followback clusters are connected to these nodes. This feature of dense cross-cluster connectedness reflects a critical function of the engagement from followback clusters: to retweet the followback community, but also to retweet the influential nodes supporting their presidential candidate of choice. Multiple followback clusters can exist supporting the same presidential candidate, and different clusters may share unique user characteristics. By inspecting the usernames and Figure 5: Case 2, Election discourse for medium audience sizes (\(s\) = 5, \(n\) = 100). A coengagement visualization of retweet relationships in a collection of tweets related to the US presidential election. Each node represents a Twitter account, and two nodes are linked if at least 100 users retweeted them both nodes at least five times. Edges are undirected and weighted according to how many users retweeted both nodes. Nodes are sized according to their weighted degree, i.e. the sum of the weights of their incoming edges. Nodes bridging between the pro-socialist and other clusters are labelled, as well as what we term satellite audiences: clusters of nodes with low-degree representing audiences with only tangential connection to mainstream US election discourse. Figure 6: Case 3, Election discourse for small but active audiences (\(s\) = 25, \(n\) = 25). A coengagement visualization of retweet relationships in a collection of tweets related to the US presidential election. Each node represents a Twitter account, and two nodes are linked if at least 25 users retweeted them both nodes at least 25 times. Edges are undirected and weighted according to how many users retweeted both nodes. Nodes are sized according to their weighted degree, i.e. the sum of the weights of their incoming edges. Followback clusters, comprising nodes which retweet and follow other accounts to a relatively extreme degree, are labeled (Pro-Biden Followback Cluster, Pro-Trump Followback Cluster, “Trump’s Italians”). user-entered profile descriptions of the smaller pro-Trump followback cluster, we found that 29/41 accounts provided some indicator of Italian American identity, either explicitly identifying as such, identifying as part of "Trump's Italian Army," or including Italian flag emojis paired with US flag emojis. While the larger pro-Trump followback also contained some self-identified Italian users, they were by no means the majority as in the smaller cluster. This smaller cluster was linked to the main pro-Trump followback cluster only through four bridging nodes, two of which identified as part of Trump's Italian army, and two which self-described as duplicate accounts for the same user. This cluster otherwise only formed internal connections and connections with @realDonaldTrump and @Llinwood, the account of L. Lin Wood, a lawyer who advanced many false conspiracy theories relating to the election and litigated on Trump's behalf in some post-election court cases. All three followback clusters reflect the curious quality of coengagement networks where users can play both the role of edge and node. Recall that coengagement networks are graph projections, where each edge is comprised of engagements from many unseen nodes present in a ordinary, unprojected version of the graph. 
Almost 38% of the unprojected nodes that comprise the edges in followback clusters are themselves represented as nodes in the coengagement networks, compared to less than 1% of the accounts in the core pro-Trump, pro-Biden, and pro-socialist clusters. This is to say that followback clusters uniquely play the role of their own audience, relentlessly sharing their own members' content in addition to that of their chosen candidates. This feature sheds light on how the smaller pro-Trump followback cluster can separate itself in this graph, as its self-identified Italian American accounts, specifically and at the threshold we set, retweet only each other. This analysis does not allow us to speculate as to the level of explicit coordination or the motives behind these accounts acting in this way. While some of these clusters' unusual behavior could indicate automated or semi-automated activities, previous reporting has indicated that at least some of these accounts are likely operated and coordinated by otherwise ordinary users who simply aim to support Trump's candidacy on Twitter (Gallagher, 2020).

### Synthesis: Cluster Contingency and Differential Moderation

By examining the full _n/s_ parameter space, we can develop a sense for the relative size and sharing characteristics of the clusters that we have identified so far. We calculate the results of repeated network clusterings using the Louvain algorithm on a range of possible \(n\) and \(s\) parameter combinations, assigning cluster labels via high-degree landmark nodes associated with each previously-identified cluster. In Figure 8, we illustrate the boundaries at which these clusters are no longer salient. The pro-Trump, pro-Biden, and pro-socialist clusters are all contained within a maximum number of engaged users that steadily declines with restrictions on these users' number of retweets. By contrast, the followback clusters cannot be detected unless users' minimum retweets are elevated, but are never salient at a size of greater than 110 users. We note that the point at which a cluster fails to be detectable is not the point at which all nodes in these clusters are removed, but rather the point at which these nodes become subsumed into larger clusters. For example, the two highest degree pro-socialist nodes from Case 2 are relatively marginal nodes in Case 1, while many of the most popular Trump followback accounts in Case 3 are found in the core pro-Trump cluster in Cases 1 and 2. These facts reiterate that the clusters we identify here are not found with respect to the actual ideologies or social ties between nodes, but rather through an understanding of how nodes are perceived by their engaged audience variously defined.

We conclude these case studies by noting how the clusters we identify here were subject to different levels of moderation in the wake of the US presidential election (Figure 9). On January 6, 2021, a large group of rioters, associated with a range of pro-Trump movements contesting the results of the 2020 presidential election, entered the US Capitol building while its representatives were in session. Following this event, Twitter suspended a large number of accounts said to have encouraged this violent protest, as well as accounts associated with the QAnon conspiracy movement (Romm and Dwoskin, 2021). To measure the effect of these and other more recent suspensions on our dataset, we identified all accounts that were suspended as of September 3, 2021. Since the election, most of the pro-Trump followback cluster in this case study (71%) has been suspended by Twitter, compared to 32% of accounts in the core pro-Trump clusters, 2% of the core pro-Biden cluster, 7% of the pro-Biden followback cluster, and 2% of the pro-socialist cluster. In other words, these suspensions have disproportionately affected the pro-Trump followback users we identified in Case 3, which is to say groups of pro-Trump users with frequent intracommunity retweet activity.

Figure 7: Follower to following ratios for Case 3 clusters. Scatter plots of the number of followers and the number of accounts followed for each Twitter account visualized in Figure 6 (n = 25, s = 25). Follower and following totals are counted at the time of the latest tweet recorded in the dataset. Plots are separated and colored according to cluster membership, and both axes are on logarithmic scale.

### Comparison: Directed Engagement Graph and Flows

We conclude our case studies with a brief comparison to a more standard network form, which we refer to as the directed engagement graph. In directed graphs, each node is an account, each edge is a directed retweeting relationship, and edge weights signify the frequency of engagement. In the present dataset, the unfiltered directed graph consists of 22M nodes and 299M edges. This graph cannot be visualized effectively with our current software and computing power, although a version with heavily filtered edge weights can be observed in Figure 1. We first assess whether coengagement graphs represent nodes typically perceived to be important in the directed graph. We compare the weighted (in)degrees of both graphs, which in the case of the directed engagement graph is simply equal to the number of times an account has been retweeted. We find that of the top 1000 most retweeted accounts, 95%, 96% and 85% are represented across Cases 1, 2, and 3 respectively. Furthermore, we find that the nodes we choose to include in Cases 1, 2, and 3 account for 54%, 64%, and 51% of _all_ retweets in this dataset, despite representing fewer than 0.1% of the accounts in our data. There are some highly-retweeted nodes in the directed graph that are not visible in each coengagement network. However, these missing nodes are also mostly unrelated to the 2020 US presidential election. For example, the most retweeted nodes missing from Cases 1, 2, and 3 respectively are a fan account for pop musician Justin Bieber, a fan account for the Korean pop music group BTS, and the account for Billboard, a music media outlet. These nodes are highly retweeted in these datasets because of competitions in which fans "vote" for their favorite musicians. They may be missing from our coengagement visualization simply because they are essentially apart from the apparent main topic of the dataset, and thus have fewer opportunities to "share" an audience with another node that is collected under these terms. Some relevant highly-retweeted accounts, such as the Twitter account for Donald Trump's daughter Ivanka Trump, are also missing from Case 3, likely because they did not publish enough election-related tweets to reach the 25-retweet threshold. Second, we assess whether structures found in the directed graph are significantly different from those in the coengagement graph.
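Before turning to that structural comparison, the following minimal sketch illustrates the per-cluster suspension measurement reported above; it assumes a node table with hypothetical `cluster` and `suspended` columns and is not the authors' code.

```python
# A minimal sketch of the per-cluster suspension rates summarized above: the share of
# each cluster's accounts suspended as of a given snapshot date. Column names are assumed.
import pandas as pd

def suspension_rates(nodes: pd.DataFrame) -> pd.Series:
    """nodes needs a 'cluster' label and a boolean 'suspended' flag per account."""
    return nodes.groupby("cluster")["suspended"].mean().sort_values(ascending=False)

# Example:
# nodes = pd.read_csv("case3_nodes_with_suspensions.csv")   # hypothetical input
# print(suspension_rates(nodes))  # e.g. pro-Trump followback ~0.71, core pro-Biden ~0.02
```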
To do this, we cluster the directed graph using the Infomap algorithm (Rosvall, Axelsson, and Bergstrom, 2009), a different clustering method which views directed, weighted edge interactions as flows of information between nodes. We find that the pro-Biden, pro-Trump, socialist, and pro-Trump followback clusters are all still found under this algorithm, with the pro-Biden followback cluster being subsumed into the pro-Biden cluster. No other large cluster is found which combines nodes found in these case studies into new mixtures. We do find, however, that clusters apparent in these case studies, such as the pro-Biden cluster, are often subdivided into smaller clusters in the directed network. These smaller clusters reveal the tradeoffs between directed networks and coengagement networks, as they most likely stem from Infomap's tendency to privilege connections between high-indegree nodes, i.e. retweets between influential accounts. For example, a pro-Biden subcluster in the directed graph is centered around official accounts and reporters for the media outlet The New York Times. These popular accounts frequently retweet each other, probably to promote each other's work, which strengthens their association in graph forms and algorithms which understand high-degree nodes as routes through which users travel. By contrast, in coengagement graphs, the influence of interactions between influential nodes on the resulting form and clusters is dampened, due to their focus on _large_ but not necessarily _important_ shared audiences. In social engagement data in which user engagements can be seen as travelling from node to node, such as user clicks through profile pages, directed retweet networks and clustering algorithms that consider them may be more appropriate. In data such as this Twitter retweet dataset, where engagements are enacted from relatively stable positions and circumstances, coengagement graphs may be more appropriate.

Figure 8: Existence map for clusters across filtering values n, s. For each cluster identified in these three case studies, we determine the maximum parameter values at which these clusters can be identified. Clusters are labeled at each (n, s) parameter value if they contain specific high-degree landmark nodes identified in the previous case studies. In the shaded regions, individual clusters are salient to the clustering algorithm. Outside of the shaded regions, these clusters cannot be identified either because their constituent nodes are not present or because they have been subsumed into other clusters.

## Discussion

We introduce coengagement networks and illustrate their value for a mixed methods analysis of a dataset of Twitter posts related to the 2020 US presidential election. We illustrate how the number of apparent clusters perceived in these networks is contingent on the minimum proposed size and activity level of their engaged audiences. When seen through the lens of large, momentarily interested audiences, there appear to be two dominant pro-Trump and pro-Biden communities. When including smaller engaged audiences, a pro-socialist cluster emerges, and when focused on highly-active but even smaller engaged audiences, unique followback clusters emerge with markedly different user behavior. These networks make clear that Twitter's moderation in the wake of the attack on the US Capitol Building disproportionately affected these pro-Trump followback accounts, while also affecting a number of pro-Trump accounts with more ordinary retweeting behavior.
Taken together, the insights from these networks depict an ecosystem of popular and activist discourse communities in the presidential election with few but crucial points of overlap, and the _de facto_ removal of the majority of influential members in one of these communities in the wake of the US Capitol attacks. The purpose of our case study is not necessarily to draw definitive conclusions or cause-and-effect relationships, but to use visualization to expose those features of this discourse ecosystem that deserve further study. We describe a mostly binary structure of pro-Trump and pro-Biden engagement in English-language US election discourse, and provide a typification of the points of crossover between these two clusters of accounts. We identify a detectable and yet marginal tide of third-party US political discourse in the pro-socialist cluster, whose growth and comparative influence on non-election discourses may yet have further importance. We identify the phenomenon of satellite audiences, where high-degree nodes, and particularly the account @realDonaldTrump, serve as singular points of reference for many communities with only marginal connection to English-language election discourse. And we characterize followback communities, which use unique posting strategies and engage in unusually partisan rhetoric to support specific candidates in our election dataset.

The social network visualization approach described here makes transparent many features which contribute to this understanding of Twitter discourse around this election, while reducing visual clutter and artifacts typical of equivalent large datasets. However, we stress that the sum total narrative could not be found from any of these visualizations taken alone, and indeed some visualizations have contradictory features which could, when viewed in isolation, generate misleading insights as to the nature of these communities. For example, we have shown that the pro-Trump followback cluster is, from the perspective of the size of their engaged audience, a somewhat marginal phenomenon, ceasing to be organizationally coherent at the level of more than a hundred users. Portraying them with equal prominence to more typical political communities may mislead viewers as to their relative impact on the overall election conversation compared to, for example, the core pro-Trump cluster. Yet small groups of coordinated users can nonetheless have an outsize impact in online communities (Center for Countering Digital Hate 2021), and our analysis of Twitter suspensions since the presidential election shows that this followback community was specifically targeted for moderation at a much greater rate than other communities. Depending on the visualization goals of the researcher -- proportional impact, or behavioral diversity -- forefronting this community may or may not be informative to their chosen audiences. Portraying multiple visualizations that provide both interpretations, and analyzing the discrepancies between them, provides a level of explanatory power that no single visualization is likely to otherwise achieve.

Overall, what we describe in this paper is an interpretative, mixed-methods workflow, in which visual artifacts derived from quantitative network transformations are combined with a deep qualitative understanding of the US Twitter election context, to the benefit of both. Given the contingency of network visualizations on their initial parameters, deep understanding of the context of a given dataset is necessary to interpret and triangulate their different incarnations. However, given the size of contemporary social interaction datasets, quantitative methods such as the coengagement network are necessary to reduce the complexity of online conversations to artifacts which are tractable to qualitative researchers. We advocate for the continued use of coengagement networks in mixed-methods research, as a tool that both stimulates and benefits from deep contextual understanding of increasingly unwieldy social data.

Figure 9: Visualization of suspensions across all case studies, labeled (1-3). The original networks from all three case studies are visualized in increasing order from left-to-right. Nodes that have been suspended by Twitter as of January 6, 2021 are colored in black; otherwise coloring remains the same as in the original case studies.

### Ethical Considerations, Limitations, and Software Sharing

As network visualizations continue to be central in social media analysis, we believe it is necessary to briefly examine the ethical considerations of whether network visualizations such as these _should_ be used in every circumstance. We have chosen here to visualize users participating in a high-prominence topic, consisting of mostly public-facing accounts such as politicians and media outlets. The same methods applied to communities with a higher expectation of privacy, or who face higher risks from exposure, may be unethical surveillance if researchers have not derived consent from members of these communities. There are also ethical implications to naming accounts visualized as nodes in networks. Some users, due to gender, race, or other factors, are at higher risk of harassment if identified as influential in a given community, while other users who explicitly seek attention in online communities may use their identification in networks as a propaganda tool in hateful campaigns. In our reporting of this work, we have declined to name some accounts for both reasons. We also stress here how the data collection procedure, and subsequent description of that procedure, affects which communities appear to be participating in a phenomenon. Several politically-active Twitter communities in the US that have been previously described in research, such as Black Twitter [13] and non-English language communities [14, 15], are not explicitly visible in our analysis, likely due to their different posting volumes and the choice of terms and topics on which we chose to center our data collection. Researchers working with such visualizations whose research bears on policy and public perception must explain such limitations in the communication of their work. We have described the basic form of an approach for visualizing engagements in social network data, and there are many ways in which this method can be modified to more saliently capture engagement dynamics. For example, the current formulation of this method places emphasis on users who frequently share content, which is not necessarily undesirable given the external impact of this behavior. However, different formulations of the network projection scheme, such as those that weight users' engagements relative to their average level of engagement, may be a better reflection of real discourse communities that exist at lower sharing volumes. We also observe that many of these coengagement networks create densely-connected subgraphs in which most nodes are connected to most other nodes, making internode relationships difficult to visually identify.
Accordingly, these networks may be complementary to other techniques for improving visualizations of social networks, such as the edge sparsification procedures for densely-connected networks proposed by Nocaj et al. [10]. In the hope that others may replicate our methods of both visualization and analysis on new datasets, we make the code available for generating these graphs either from structured data received from the Twitter API, or in general JSON and CSV-based formats.1 This code uses the visualization capabilities of the open-source network visualization library Gephi, and its implementation of the ForceAtlas2 algorithm for network visualization [17, 1]. We have packaged this code in publicly-available Docker containers, a relatively portable and stable code format which can be run on many machines with relatively few installation requirements. Additionally, we have made available node and link data for all visualizations displayed in this paper, as well as a list of Twitter ID numbers for the tweets and users corresponding to the data used to generate these graphs. We hope that, by making the code for generating these graphs open-source, researchers both qualitative and quantitative will explore the potential and limitations of this method, as well as contribute modifications to this scheme as appropriate. Footnote 1: [https://github.com/uwcip-research/Coengagement-Networks](https://github.com/uwcip-research/Coengagement-Networks)

## Acknowledgments and Funding

We would like to acknowledge Clement Levallois for his advice on the open-source implementation of this project, and Paul Lockaby for ample support in data collection. We received funding from the University of Washington's Center for an Informed Public, the John S. and James L. Knight Foundation (G-2019-58788), Craig Newmark Philanthropies, and the Omidyar Network.
2309.11510
When is a Foundation Model a Foundation Model
Recently, several studies have reported on the fine-tuning of foundation models for image-text modeling in the field of medicine, utilizing images from online data sources such as Twitter and PubMed. Foundation models are large, deep artificial neural networks capable of learning the context of a specific domain through training on exceptionally extensive datasets. Through validation, we have observed that the representations generated by such models exhibit inferior performance in retrieval tasks within digital pathology when compared to those generated by significantly smaller, conventional deep networks.
Saghir Alfasly, Peyman Nejat, Sobhan Hemati, Jibran Khan, Isaiah Lahr, Areej Alsaafin, Abubakr Shafique, Nneka Comfere, Dennis Murphree, Chady Meroueh, Saba Yasir, Aaron Mangold, Lisa Boardman, Vijay Shah, Joaquin J. Garcia, H. R. Tizhoosh
2023-09-14T18:03:33Z
http://arxiv.org/abs/2309.11510v1
# When is a Foundation Model a Foundation Model ###### Abstract Recently, several studies have reported on the fine-tuning of foundation models for image-text modeling in the field of medicine, utilizing images from online data sources such as Twitter and PubMed. Foundation models are large, deep artificial neural networks capable of learning the context of a specific domain through training on exceptionally extensive datasets. Through validation, we have observed that the representations generated by such models exhibit inferior performance in retrieval tasks within digital pathology when compared to those generated by significantly smaller, conventional deep networks. The term _foundation model_ (FM) refers to large (i.e., deep) artificial neural networks that, after extensive pre-training and fine-tuning with a very large amount of data, can serve as the backbone (foundation) for a wide range of applications [5, 4]. Training FMs to acquire comprehensive and expressive representations of complex data, such as natural language and digital images, requires massive amounts of data. FMs can be further fine-tuned to better understand domain-specific contexts. Understandably, we need an extremely large and diverse corpus of data to train FMs. This commonly includes, for general-purpose FMs, books, articles, websites, and social media posts, among other sources. However, it is important to note that some of these sources may be inaccessible (e.g., due to copyright issues for medical books) or may not be founded on solid medical evidence (e.g., individual opinions and social media) when seeking reliable medical data. One of the most popular architectures for multimodal foundation models is CLIP (Contrastive Language-Image Pre-training) [9]. Developed by OpenAI, CLIP is designed to learn "joint representations" of images and their corresponding textual descriptions, enabling tasks like automatic image captioning, image retrieval, and even image generation based on textual descriptions. Leveraging a large dataset containing images and their associated textual descriptions, CLIP learns to closely map similar images and their descriptions in the feature space while effectively discriminating between dissimilar pairs. CLIP has demonstrated impressive capabilities across various tasks, including image classification, object detection, and generating images from textual descriptions [9]. Its ability to generalize across diverse datasets and tasks has positioned it as a versatile and powerful tool in the field of AI research and applications. Recently, Huang et al. published a paper introducing PLIP, which is a fine-tuned version of CLIP using histology images from the pathology communities on Twitter [7]. They also conducted comparisons with search techniques for image retrieval, a task of significant interest in the field of digital pathology. Before this, BiomedCLIP had been introduced, once again employing CLIP but fine-tuned on a dataset comprising 15 million online biomedical image-text pairs. Can these foundation models aid in the field of medicine? The adoption of digital pathology, which entails the digitization of tissue slides, has been steadily increasing in recent years [10; 11]. Histopathology primarily involves the visual examination of tissue samples using light microscopy. The process of digitizing glass tissue slides is accomplished through specialized scanners that capture high-resolution whole-slide images (WSIs) from tissue samples. 
Consequently, pathologists can perform visual inspections of tissue morphology on a computer screen, replacing traditional microscope-based examinations. The availability of tissue samples in digital format opens up opportunities for the application of computer vision and artificial intelligence in the field of pathology [12]. WSI-to-WSI comparison stands as one of the pivotal tasks in computational and diagnostic pathology. It has the potential to enable 'patient matching,' paving the way for real-time, evidence-based, and individualized medicine, especially when combined with other data modalities. Although image search technologies, or more precisely, content-based image retrieval, have been available for nearly three decades, WSI-to-WSI matching has only recently become feasible [1; 2]. The challenge of accurately matching one patient to another at the WSI level persists and relies heavily on representations or embeddings (i.e., features or attributes) extracted by deep networks from patches of a WSI. Processing the entire WSI in one go is infeasible due to its large dimensions. **Note**: WSI matching is merely an organized way of combining multiple patch comparisons. Hence, when we perform WSI matching we are still depending on the quality of embeddings of single patches. Huang et al. did not conduct tests for WSI matching. The search method they tested is a derivative of Yottixel [1], which, due to the ranking concept it employs, cannot perform WSI-to-WSI matching. Furthermore, and more importantly, Huang et al. did not compare PLIP against other deep networks that are not foundational but have been trained specifically for histology/histopathology. Notably, they did not include KimiaNet [3] in their comparisons, despite it being trained on all diagnostic slides from the TCGA repository. This raises an urgent question regarding the use of foundation models: _Can FMs trained on datasets other than high-quality clinical data provide the best-of-breed embeddings (i.e., features) necessary to support downstream tasks like patient matching?_ To answer this urgent question we examined three FMs that can handle digital images: * CLIP (trained with 400 million image-caption pairs) is a collection of multiple deep models with approximately 100 million parameters [9]. * BiomedCLIP (fine-tuned CLIP with 15 million biomedical image-text pairs scraped from online sources) [8]. * PLIP (fine-tuned CLIP with 208,414 histopathology patches scraped from Twitter) [7]. To provide a comprehensive context, we also employed a simple conventional deep network architecture (i.e., KimiaNet which is based on the DenseNet architecture) and a Transformer-based architecture (DinoSSLPath [13] which is based on a small vision transformer). Both models have been trained on TCGA repository in which KimiaNet [3] was trained in a supervised manner, whereas DinoSSLPath was trained in a self-supervised manner. We extracted tissue features at the patch-level using CLIP, BiomedCLIP, PLIP, DinoSSLPath, and KimiaNet. Subsequently, we utilized Yottixel to conduct WSI-to-WSI matching based on the extracted features [1; 2]. Common convolutional neural networks like KimiaNet are expected to be inferior to foundation models like CLIP and its derivatives when it comes to "representation" learning. Foundation models are expected to excel in extracting optimal features for representing input data, thanks to their more complex structure and being trained with substantially more data. 
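As an illustration of how individual patch comparisons are organized into a WSI-to-WSI distance, the following is a minimal sketch of the "median-of-minimum" idea used by Yottixel-style matching (see Figure 1 and the caption of Table 1 below). It is a simplified stand-in rather than the Yottixel implementation; the Euclidean metric, array layout, and function name are assumptions.

```python
# A minimal numpy sketch of median-of-minimum WSI matching: each WSI is reduced to a
# mosaic of patch embeddings; for every patch of slide A we take its distance to the
# closest patch of slide B, then report the median of those minima. Illustrative only.
import numpy as np

def median_of_min_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """emb_a: (n_patches_a, d) and emb_b: (n_patches_b, d) patch embeddings of two WSIs."""
    diffs = emb_a[:, None, :] - emb_b[None, :, :]    # pairwise differences between mosaics
    dists = np.linalg.norm(diffs, axis=-1)           # shape (n_patches_a, n_patches_b)
    return float(np.median(dists.min(axis=1)))       # min over B's patches, median over A's

# Patient matching then ranks all candidate WSIs by this distance; any backbone
# (CLIP, PLIP, BiomedCLIP, DinoSSLPath, KimiaNet) can supply the patch embeddings.
```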
Our primary focus was on achieving "zero-shot retrieval" using similarity measurements for WSI-to-WSI matching as a downstream task (see Figure 1). We used four internal datasets to examine the quality of representations:

* **Breast Epithelial Tumors** (73 patients) [16 subtypes: 'Adenoid Cystic Carcinoma', 'Adenomyoepithelioma', 'DCIS', 'LCIS', 'Microglandular Adenosis', etc.]
* **Fatty Liver Disease** (324 patients) [3 classes: Normal tissue, Non-alcoholic steatohepatitis, Alcoholic steatohepatitis]
* **Cutaneous Squamous Cell Carcinoma** (660 patients) [4 classes: Normal tissue, well/moderately/poorly differentiated]
* **Colorectal Polyps** (209 patients) [3 classes: Cancer Adjacent Polyp, Non-recurrent Polyp, Recurrent Polyp]

We also tested the models on two public datasets. The results of WSI-to-WSI matching are reported in Table 1.

Figure 1: _Patching to build a mosaic and median-of-min distance measurements enable WSI-to-WSI comparison (patient matching) [1]. Any deep network (conventional or foundational) can be used to extract embeddings (deep features). [The number of patches, and the number and length of deep feature vectors do not correspond to actual numbers in a WSI, for the sake of less crowded visualization]_

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{2}{c}{**Internal Datasets**} & \multicolumn{1}{c}{PLIP} & \multicolumn{1}{c}{BiomedCLIP} & CLIP & \multicolumn{1}{c}{DinoSSLPath} & \multicolumn{1}{c}{KimiaNet} \\ \hline \hline Breast & Top 1 & 45\% & 39\% & 33\% & 55\% & 56\% \\ \hline \multirow{3}{*}{Liver} & Top 1 & 58\% & 59\% & 52\% & 65\% & 62\% \\ & MV@3 & 68\% & 68\% & 50\% & 69\% & 67\% \\ & MV@5 & 59\% & 64\% & 56\% & 74\% & 65\% \\ \hline \multirow{3}{*}{Skin} & Top 1 & 63\% & 61\% & 62\% & 71\% & 78\% \\ & MV@3 & 65\% & 62\% & 65\% & 68\% & 70\% \\ & MV@5 & 67\% & 67\% & 67\% & 66\% & 69\% \\ \hline \multirow{3}{*}{Colorectal} & Top 1 & 59\% & 54\% & 54\% & 60\% & 60\% \\ & MV@3 & 60\% & 58\% & 55\% & 61\% & 61\% \\ & MV@5 & 61\% & 57\% & 50\% & 63\% & 60\% \\ \hline Total F1 Score & & 60\% \(\pm\) 6\% & 59\% \(\pm\) 8\% & 55\% \(\pm\) 10\% & **65\% \(\pm\) 5\%** & **65\% \(\pm\) 6\%** \\ \hline \hline \multicolumn{2}{l}{**Public Datasets**} & \multicolumn{1}{c}{PLIP} & \multicolumn{1}{c}{BiomedCLIP} & CLIP & \multicolumn{1}{c}{DinoSSLPath} & \multicolumn{1}{c}{KimiaNet} \\ \hline DigestPath crs & MV@5 & 84.1\% & 87.2\% & 86.9\% & 91.5\% & 89.1\% \\ DigestPath sncc & MV@5 & 90.8\% & 95.4\% & 96.1\% & 98.8\% & 98.0\% \\ \hline WSSS4LUAD & MV@5 & 47.4\% & 53.6\% & 48.6\% & 56.7\% & 51.3\% \\ \hline Total F1 Score & & 79.1\(\pm\)23 & 78.7\(\pm\)22 & 77.2\(\pm\)25 & **82.3\(\pm\)22** & **79.5\(\pm\)24** \\ \hline \hline \end{tabular} \end{table} Table 1: The results of patient matching for classification, subtyping and/or grading for the four internal and two public datasets. We used the "mosaic" patching method and the "median-of-minimum" matching method introduced by the Yottixel search engine to find the most similar patients. The validation was done using the "leave-one-patient-out" method. The embeddings (i.e., representations or features) were provided by CLIP, BiomedCLIP, PLIP, DinoSSLPath, and KimiaNet. We employed the average of F1 Scores to include both precision and recall values using MV@k (majority vote among the top-k retrieved WSIs). The results in green are the best results, whereas the second-best results are highlighted in yellow.

Upon analyzing the results, it becomes evident that both BiomedCLIP and PLIP have enhanced the performance of the original CLIP (when applied to our internal data), which aligns with the common expectation of what fine-tuning any deep model, whether foundational or not, should achieve. The surprising observation is that KimiaNet and DinoSSLPath, relatively standard CNN/transformer models with fewer parameters and training data, provide superior representations compared to CLIP architectures with approximately 100 million parameters. While there is no empirical or theoretical doubt about the capabilities and reliability of the CLIP topology, this suggests that the data used for fine-tuning (in our case, histopathology data) compensates for this difference. This situation highlights the possibility that models labeled as 'foundational' may struggle to match the performance of 'conventional' models when the latter are trained on more robust data sources. A foundation model earns its title when it manages to surpass the hurdles of generalization and delivers anatomically and semantically reliable, and thus accurate, image representations in histopathology. This underscores the importance of investing time and resources in the creation of high-quality datasets that can truly unlock the potential of foundation models.
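For completeness, a minimal sketch of the MV@k protocol mentioned in the caption of Table 1: under leave-one-patient-out retrieval, the query WSI's predicted label is the majority vote among the labels of its top-k most similar retrieved WSIs. The label values in the example are made up for illustration.

```python
# A minimal sketch of MV@k (majority vote among top-k retrieved WSIs). Illustrative only.
from collections import Counter

def mv_at_k(retrieved_labels, k=5):
    """retrieved_labels: labels of retrieved WSIs, already sorted from most to least similar."""
    top_k = retrieved_labels[:k]
    return Counter(top_k).most_common(1)[0][0]   # most frequent label among the top-k hits

# Example with toy labels: mv_at_k(["class_a", "class_b", "class_a", "class_c", "class_a"])
# returns "class_a"; repeating this over all held-out patients yields the MV@k scores.
```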
2309.07399
Geometric Derivation of the Finite $N$ Master Loop Equation
In this paper we provide a geometric derivation of the master loop equation for the lattice Yang-Mills model with structure group $G \in \{SO(N),SU(N), U(N)\}$. This approach is based on integration by parts on $G$. In the appendix we compare our approach to that of \cite{Ch19a} and \cite{J16} based on Schwinger-Dyson equations, and \cite{SheSmZh22} based on stochastic analysis. In particular these approaches are all easily seen to be equivalent. The novelty in our approach is the use of intrinsic geometry of $G$ which we believe simplifies the derivation.
Omar Abdelghani, Ron Nissim
2023-09-14T03:02:27Z
http://arxiv.org/abs/2309.07399v1
# Geometric derivation of the finite \(N\) master loop equation ###### Abstract. In this paper we provide a geometric derivation of the master loop equation for the lattice Yang-Mills model with structure group \(G\in\{SO(N),SU(N),U(N)\}\). This approach is based on integration by parts on \(G\). In the appendix we compare our approach to that of [1] and [2] based on Schwinger-Dyson equations, and [13] based on stochastic analysis. In particular these approaches are all easily seen to be equivalent. The novelty in our approach is the use of intrinsic geometry of \(G\) which we believe simplifies the derivation. ###### Contents * 1 Introduction * 2 Definitions and notation * 2.1 Lattice gauge theory * 2.2 Path operations * 2.3 \(SO(N)\) definitions and conventions * 2.4 \(SU(N)\) and \(U(N)\) definitions and conventions * 3 Main Results * 4 \(SO(N)\) analysis * 4.1 Gradient identities for Wilson loops * 4.2 Laplacian of Wilson loops * 4.3 \(SO(N)\) master loop equation * 5 \(SU(N)\) and \(U(N)\) analysis * 5.1 Gradients of Wilson loops * 5.2 Laplacian of Wilson loops * 5.3 Master loop equation for \(U(N)\) and \(SU(N)\) * A Comments on Other Approaches * A.1 Integration by parts and exchangeable pairs * A.2 Symmetrized Master Loop Equation and Langevin Dynamics * B Deriving the extrinsic integration by parts formula ## 1. Introduction The study of a Quantum Yang-Mills theory is of great importance for understanding the standard model of particle physics. Although the physically relevant theory takes place in Minkowski space, standard arguments indicate that it is sufficient to study the Euclidean/probabilistic theory. This paper will only deal with the Euclidean theory, for a general survey paper see [1]. Specifically, we will be dealing with the the lattice Yang-Mills model with structure group \(G\in\{SO(N),SU(N),U(N)\}\). One approach to'solving' the lattice Yang-Mills model is through computing Wilson loop expectations. Wilson loop expectations are known to satisfy an equation known as a master loop equation. Master loop equations for lattice gauge theories were first published by Makeenko and Migdal [14] in 1979. However this derivation assumed a 'factorization property' without proof. In recent years there have been several rigorous derivations of master loop equations for classical groups such as \(SO(N),SU(N),U(N)\). For instance see [10], [11], [20], [21], [22], and [23]. The purpose of this paper is to provide a geometric derivation of the master loop equation for the lattice Yang-Mills model with structure group \(G\in\{SO(N),SU(N),U(N)\}\). This approach is based on integration by parts on \(G\), and is inspired by the derivations in [10] and [11]. In particular, the approaches of [10] and [11] rely on Schwinger-Dyson equations which they obtain from Stein's idea of exchangeable pairs. As remarked in a talk of Chatterjee [10] (pointed out by Thierry Levy), these Schwinger-Dyson equations are simply integration by parts on \(G\) written in extrinsic coordinates. We show the equivalence of integration by parts and the Schwinger-Dyson equation in Appendix A.1 and Appendix B, and generalize this to any compact Lie group with Riemannian structure inducing the Haar measure. The main sections of the paper are devoted to deriving the master loop equation for \(G\in\{SO(N),SU(N),U(N)\}\) directly from integration by parts, relying on the intrinsic geometry of \(G\) in order to simplify the calculations of Chatterjee and Jafarov. 
Lastly in Appendix A.2 we compare our approach to the stochastic analyis approach in [20]. ## Acknowledgements We thank Scott Sheffield and Sky Cao for many useful discussions. ## 2. Definitions and notation ### Lattice gauge theory In the following, \(\Lambda\) is a finite two dimensional cell complex. **Definition 2.1**.: _A path \(\gamma\) is a sequence \(e_{1},\dots,e_{n}\) of oriented \(1\)-cells such that their union is connected in \(\Lambda\). The set of all paths forms a groupoid \(\mathscr{P}(\Lambda)\), where two paths \(\gamma_{1}\) and \(\gamma_{2}\) may be concatenated if_ \[\partial_{0}\gamma_{2}=\partial_{1}\gamma_{1}(1)\] The idea of a lattice gauge theory is to discretize a notion of a connection on a principal \(G\)-bundle. By restricting paths to lie in a discrete set of \(1\)-cells, we may identify a connection with its finite set of parallel transport maps. This leads to the following: **Definition 2.2**.: _Let \(\Lambda\) be as before. A \(G\)-connection or \(G\)-gauge field is a homomorphism_ \[Q:\mathscr{P}(\Lambda)\to G\] _. In other words, if \(\gamma\) is a path from \(a\) to \(b\), and \(\psi\) is a path from \(b\) to \(c\), then_ \[Q(\psi*\gamma)=Q(\psi)Q(\gamma)\] _And_ \[Q(\psi^{-1})=Q^{-1}(\psi)\] _It's clear from the definition that such a homomorphism is determined by its values on an oriented edge. Pick an arbitrary orientation for each edge. Let the set of such edges be denoted \(E_{\Lambda}^{+}\). Then we have_ \[\text{Hom}(\mathscr{P}(\Lambda),G)\cong G^{E_{\Lambda}^{+}}\] _This characterization will be useful in defining lattice Yang-Mills measures._ **Definition 2.3**.: _A path \(\ell\in\mathscr{P}(\Lambda)\) is called a loop if its image has empty boundary._ The most important observables in a lattice gauge theory are the Wilson loops. **Definition 2.4**.: _Let \(\chi\) be an irreducible character on \(G\), and let \(\ell\in\mathcal{P}(\Lambda)\) be a loop. A Wilson loop is a functional of the form_ \[W_{\ell}^{\chi}=\chi(Q(\ell))\] **Remark 2.1**.: _For the rest of the paper we will only consider Wilson loops defined with the character \(\chi(Q)=\operatorname{Tr}(Q)\) for \(G=SO(N)\), and \(\chi(Q)=\operatorname{ReTr}(Q)\) for \(G\in\{SU(N),U(N)\}\). The proofs do not immediately generalize to other irreducible characters._ **Definition 2.5**.: _Let \(f\) be any class function (for our purposes usually an irreducible character). A plaquette is a \(2\)-cell in \(\Lambda\). Let the set of faces be denoted \(\mathcal{P}(\Lambda)\). Pick an arbitrary orientation, and let \(\mathcal{P}_{\Lambda}^{+}\) denote the set of positively oriented plaquettes. The lattice Yang-Mills measure with 't Hooft coupling is the probability measure_ \[Z_{\beta,\Lambda,N}^{-1}\exp\left(\beta N\sum_{p\in\mathcal{P}_{\Lambda}^{+}}f (Q(\partial p))\right)\prod_{e\in E_{\Lambda}}dg_{e}\] _Where \(dg\) is the Haar measure._ ### Path operations The following is a list of operations on loops that will be relevant in defining the master loop equations. The terminology and notation is based on that of [10]. The master loop equation is based around integration by parts on an edge \(e\) in \(\Lambda\). Thus, for every path, We define a set \(C_{\ell}\) indexing the occurrences of \(e^{\pm}\) in \(\ell\). Moreover, if \(x\in C_{\ell}\), then \(\omega_{x}\in\{-1,1\}\) is the orientation of that instance of \(e\). **Definition 2.6**.: _Let \(\ell=a_{1}e^{\omega_{1}}\dots a_{n}e^{\omega_{n}}a_{n+1}\) be a loop. 
We define_ \[\ell\setminus e_{x}=a_{x+1}\dots e^{\omega_{n}}a_{n+1}\dots a_{x}\] _In other words, this is the string formed by excising \(e_{x}\), ordered starting from the edge after \(e_{x}\)._ **Definition 2.7** (Positive merger).: _Let \(\ell_{1}\) and \(\ell_{2}\) be loops._ \[\ell_{1}\oplus_{x,y}\ell_{2}=(\ell_{1}\setminus e_{x})e^{\omega_{x}}(\ell_{2} \setminus e_{y}e)^{\omega_{x}\omega_{y}}\] **Definition 2.8** (Negative merger).: _Let \(\ell_{1}\) and \(\ell_{2}\) be loops._ \[\ell_{1}\oplus_{x,y}\ell_{2}=(\ell_{1}\setminus e_{x})(\ell_{2}\setminus e_{y })^{-\omega_{x}\omega_{y}}\] **Definition 2.9** (Positive split).: _Let \(\ell=a_{1}e^{\omega_{1}}\dots a_{n}e^{\omega_{n}}a_{n+1}\) be a loop such that \(|C_{\ell}|>1\). If \(x\neq y\in C_{\ell}\) and \(\omega_{x}\omega_{y}=1\), the positive split at \(x,y\) is the pair of loops_ \[\times_{x,y}^{1}\ell =a_{x+1}e^{\omega_{x+1}}\dots a_{y-1}e^{\omega_{y-1}}e^{\omega_{y}}\] \[\times_{x,y}^{2}\ell =a_{y+1}e^{\omega_{y+1}}\dots a_{x-1}e^{\omega_{x-1}}e^{\omega_{x}}\] **Definition 2.10** (Negative split).: _Let \(\ell=a_{1}e^{\omega_{1}}\dots a_{n}e^{\omega_{n}}a_{n+1}\) be a loop such that \(|C_{\ell}|>1\). If \(x\neq y\in C_{\ell}\) and \(\omega_{x}\omega_{y}=-1\), the positive split at \(x,y\) is the pair of loops_ \[\times_{x,y}^{1}\ell =a_{x+1}e^{\omega_{x+1}}\dots a_{y-1}e^{\omega_{y-1}}\] \[\times_{x,y}^{2}\ell =a_{y+1}e^{\omega_{y+1}}\dots a_{x-1}e^{\omega_{x-1}}\] **Definition 2.11** (Positive twist).: _Let \(\ell\) be a loop such that \(|C_{\ell}|>1\). WLOG suppose \(e_{x}\in\times_{x,y}^{1}\ell\) and \(e_{y}\in\times_{x,y}^{2}\). If \(x\neq y\in C_{\ell}\) and \(\omega_{x}\omega_{y}=-1\), the positive twist of \(\ell\) at \(x,y\) is the loop_ \[\propto_{x,y}\ell=\left(\times_{x,y}^{1}\ell\right)e^{\omega_{x}}\left(\times _{x,y}^{2}\ell\right)^{-1}\!\!e^{\omega_{y}}\] **Definition 2.12** (Negative twist).: _Let \(\ell\) be a loop such that \(|C_{\ell}|>1\). Let \(x\neq y\in C_{\ell}\) and \(\omega_{x}\omega_{y}=1\). WLOG let \(e_{x}\in\times^{1})_{x,y}\ell\). Then the negative twist is_ \[\propto_{x,y}\ell=\times_{x,y}^{1}\ell\ominus_{x,y}\left(\times_{x,y}^{2}\ell \right)^{-1}\] ### \(SO(N)\) definitions and conventions Recall the following: \[SO(N)=\left\{g\in GL_{N}(\mathbb{R})|g^{T}g=I\right\}\] The tangent space at \(g\) is the space of matrices \[T_{g}SO(N)=g\mathfrak{so}(N)=\left\{M\in\mathbb{R}^{N^{2}}|gX+X^{T}g^{-1}=0\right\}\] There is a natural bi-invariant metric on \(SO(N)\): **Definition 2.13**.: _Let \(X,Y\in T_{g}SO(N)\). Then_ \[\langle X,Y\rangle=\frac{1}{2}\operatorname{Tr}\bigl{(}X^{T}Y\bigr{)}\] _(Note that this is the restriction of half the Euclidean metric to \(SO(N)\))_ With respect to this metric, \[X_{ij}=g(e_{i}e_{j}^{T}-e_{j}e_{i}^{T})\] is an orthonormal frame on \(TG\). For \(SO(N)\) lattice gauge theory, we use the lattice Yang-Mills measure defined by the character \(f(Q)=\operatorname{Tr}Q\). 
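As a quick sanity check on the convention just fixed (this remark is ours, not part of the original derivation), the factor \(\frac{1}{2}\) in Definition 2.13 is exactly what makes the frame \(\{X_{ij}\}_{i<j}\) orthonormal rather than merely orthogonal: for \(i<j\) and \(k<l\),

\[\langle X_{ij},X_{kl}\rangle=\frac{1}{2}\operatorname{Tr}\big{(}(e_{i}e_{j}^{T}-e_{j}e_{i}^{T})^{T}g^{T}g(e_{k}e_{l}^{T}-e_{l}e_{k}^{T})\big{)}=\frac{1}{2}\operatorname{Tr}\big{(}(e_{j}e_{i}^{T}-e_{i}e_{j}^{T})(e_{k}e_{l}^{T}-e_{l}e_{k}^{T})\big{)}=\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk},\]

using \(g^{T}g=I\), so \(\langle X_{ij},X_{ij}\rangle=1\) and distinct frame fields are orthogonal.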
### \(Su(n)\) and \(U(n)\) definitions and conventions Recall that \[SU(N)=\bigl{\{}g\in GL_{N}(\mathbb{C})|g^{\dagger}g=I,\det g=1\bigr{\}}\] \[U(N)=\bigl{\{}g\in GL_{N}(\mathbb{C})|g^{\dagger}g=I\bigr{\}}\] The tangent space at \(g\) is the space of matrices \[T_{g}SU(N)=g\mathfrak{su}(N)=\Bigl{\{}M\in\mathbb{R}^{N^{2}}|gX+X^{\dagger}g^ {-1}=0,\operatorname{Tr}\bigl{(}g^{-1}X\bigr{)}=0\Bigr{\}}\] \[T_{g}U(N)=g\mathfrak{su}(N)=\Bigl{\{}M\in\mathbb{R}^{N^{2}}|gX+X^{\dagger}g^{- 1}=0\Bigr{\}}\] There is a natural bi-invariant metric on \(SU(N)\): **Definition 2.14**.: _Let \(X,Y\in T_{g}SU(N)\) or \(T_{g}U(N)\). Then_ \[\langle X,Y\rangle=\frac{1}{2}\operatorname{Re}\operatorname{Tr}\bigl{(}X^{ \dagger}Y\bigr{)}\] (Note that this is the restriction of half the Euclidean metric to \(SU(N)\)) For \(SU(N)\) lattice gauge theory, we use the lattice Yang-mills measure defined by the class function \(f(Q)=\operatorname{Re}\operatorname{Tr}Q\) The \(SU(N)\) case is complicated in that the character associated with the defining representation is now complex valued. Thus we need to make sense of gradients of functions \(f\in C^{\infty}(SU(N),\mathbb{C})\) **Definition 2.15**.: _Let \(f\in C^{\infty}(SU(N),\mathbb{C})\). Define \(\nabla f\in\mathfrak{X}(SU(N))\otimes\mathbb{C}\) by_ \[\nabla f=\nabla\operatorname{Re}f+i\nabla\operatorname{Im}f\] _In other words, we extend the gradient in the natural way to a complex-linear map of smooth functions._ Similarly, we extend the metric. **Definition 2.16**.: _Let \(X,Y\in\mathfrak{X}(SU(N))\otimes\mathbb{C}\). Then_ \[\langle X,Y\rangle=\langle\operatorname{Re}X,\operatorname{Re}Y\rangle-\langle \operatorname{Im}X,\operatorname{Im}Y\rangle+i(\langle\operatorname{Re}X, \operatorname{Im}Y\rangle+\langle\operatorname{Im}X,\operatorname{Re}Y\rangle)\] With this definition in mind, we can formulate the basis of the proof of the main theorem: **Lemma 2.1** (Laplacian integration by parts).: _Let \(f,g\in C^{\infty}(G;\mathbb{C})\), with \(G\) a compact Lie group. Equip \(G\) with a bi-invariant metric. With respect to this metric,_ \[\int_{G}g\Delta f=-\int_{G}\langle\nabla f,\nabla g\rangle\] In the sequel we will show that when \(f\) and \(g\) are Wilson loop correlation functions, laplacian integration by parts directly reduces to the master loop equation. ## 3. Main Results The first main result is the master loop equation for \(SO(N)\) **Theorem 3.1** (\(SO(N)\) master loop equation).: _Let \((\ell_{1}\dots\ell_{n})\) be a sequence of loops. Let \(e\) be an edge that lies in at least one of \(\ell_{i}\). Let \(\mathbb{E}\) denote expectations with respect to the \(SO(N)\) lattice yang mills measure. Let \(\mathcal{P}^{+}(e)\) denote the set of positively oriented plaquettes containing \(e\). Finally, let \(A_{i}\) be the set of occurrences of the edge \(e\in E_{\Lambda}^{+}\) in \(\ell_{i}\), \(B_{i}\) the set of occurrences of \(e^{-1}\), and \(C_{i}=A_{i}\cup B_{i}\). 
Then_ \[(N-1)m\mathbb{E}[W_{\ell_{1}}\dots W_{\ell_{n}}]=\sum_{x\neq y\in C _{1},\omega_{x}\omega_{y}=1}\mathbb{E}[W_{\infty_{x,y}\ell_{1}}W_{\ell_{2}} \dots W_{\ell_{n}}]\] \[-\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=-1}\mathbb{E}[W_{ \infty_{x,y}\ell_{1}}W_{\ell_{2}}\dots W_{\ell_{n}}]+\sum_{x,y\in C_{1}, \omega_{x}\omega_{y}=-1}\mathbb{E}[W_{\times^{1}_{x,y}\ell_{1}}W_{\times^{2}_ {x,y}\ell_{1}}W_{\ell_{2}}\dots W_{\ell_{n}}] \tag{1}\] \[-\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=1}\mathbb{E}[W_{ \times^{1}_{x,y}\ell_{1}}W_{\times^{2}_{x,y}\ell_{1}}W_{\ell_{2}}\dots W_{ \ell_{n}}]+\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\Bigg{[}W_{\ell _{1}\ominus_{x,y}\ell_{i}}\prod_{j\neq 1,i}W_{\ell_{j}}\Bigg{]}\] \[-\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\Bigg{[}W_{ \ell_{1}\oplus_{x,y}\ell_{i}}\prod_{j\neq i,1}W_{\ell_{j}}\Bigg{]}+N\beta\sum_ {p\in\mathcal{P}^{+}(e)}\sum_{x\in C_{1}}\mathbb{E}[W_{\ell_{1}\ominus_{x}p}W_ {\ell_{2}}\dots W_{\ell_{n}}]\] \[-N\beta\sum_{p\in\mathcal{P}^{+}(e)}\sum_{x\in C_{1}}\mathbb{E}[W_ {\ell_{1}\oplus_{x}p}W_{\ell_{2}}\dots W_{\ell_{n}}]\] **Theorem 3.2** (\(SU(N)\) and \(U(N)\) master loop equation).: _Let \((\ell_{1},\dots,\ell_{n})\) be a sequence of loops. Let \(e\) be an edge that lies in at least one of \(\ell_{i}\). Let \(\eta=0\) for \(G=U(N)\) and \(\eta=1\) when \(G=SU(N)\). Let \(A_{i}\), \(B_{i}\), \(C_{i}\), \(\mathcal{P}^{+}(e)\) be as before. In addition, let \(\mathcal{P}(e)\) be the set of plaquettes containing \(e\). Finally, let \(t_{i}=|A_{i}|-|B_{i}|\) and \(t=\sum_{i}t_{i}\). Then_ \[\bigg{(}mN-\frac{\eta t_{1}t}{N}\bigg{)}\mathbb{E}[W_{\ell_{1}} \dots W_{\ell_{n}}]=\sum_{x,y\in C_{1},\omega_{x}\omega_{y}=-1}\mathbb{E} \Big{[}W_{\times^{1}_{x,y}\ell_{1}}W_{\times^{2}_{x,y}\ell_{1}}W_{\ell_{2}} \dots W_{\ell_{n}}\Big{]}\] \[-\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=1}\mathbb{E}\Big{[}W _{\times^{1}_{x,y}\ell_{1}}W_{\times^{2}_{x,y}\ell_{1}}W_{\ell_{2}}\dots W_{ \ell_{n}}\Big{]}+\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}\omega_{x}\omega_{y }=-1}\mathbb{E}\Bigg{[}W_{\ell_{1}\ominus_{x,y}\ell_{i}}\prod_{j\neq i,1}W_{ \ell_{j}}\Bigg{]}\] \[-\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}\omega_{x}\omega_{y}=1} \mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x,y}\ell_{i}}\prod_{j\neq i,1}W_{\ell_{j} }\Bigg{]}+\frac{\beta N}{2}\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C_{ 1}}\mathbb{E}[W_{\ell_{1}\ominus_{x}p}W_{\ell_{2}}\dots W_{\ell_{n}}]\] \[-\frac{\beta N}{2}\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x \in C_{1}}\mathbb{E}[W_{\ell_{1}\oplus_{x}p}W_{\ell_{2}}\dots W_{\ell_{n}}]- \eta\beta\sum_{p\in\mathcal{P}(e)}t_{1}t_{p}\mathbb{E}[W_{\ell_{1}}W_{p^{-1}}W _{\ell_{2}}\dots W_{\ell_{n}}]\] ## 4. \(So(n)\) analysis The goal of this section is to prove theorem 3.1 ### Gradient identities for Wilson loops **Lemma 4.1**.: _Let \(\ell\) be a Wilson loop. Let \(C\) denote the set of occurrences of the edge \(e\) in \(\ell\). Let \(\nabla_{e}\) denote the gradient with respect to the edge \(e\). Then_ \[\nabla_{e}W_{\ell}=\sum_{x\in C}Q^{-\omega_{x}}_{\ell\ominus_{e}x}-Q_{e}Q^{ \omega_{x}}_{\ell\ominus_{e}x}Q_{e} \tag{2}\] Proof.: \(W_{\ell}\) extends to a smooth function in a neighborhood of \(SO(N)\) in the obvious way. We may thus first compute the euclidean differential: \[d_{e}W_{\ell}=\sum_{x\in C}d_{x}W_{\ell}\] For \(d_{x}\), we may rewrite \(W_{\ell}=\operatorname{Tr}\bigl{(}Q_{\ell\setminus e_{x}}g^{\omega_{x}}\bigr{)}\). 
By orthogonality, \[=\operatorname{Tr}\Bigl{(}Q_{\ell\setminus e_{x}}^{\omega_{x}}g\Bigr{)}\] Thus, the euclidean differential is \[d_{e}W_{\ell}(H)=\sum_{x\in C}\operatorname{Tr}\Bigl{(}Q_{\ell\setminus e_{x} }^{\omega_{x}}H\Bigr{)}\] Recall that the gradient is defined by the identity \(\langle\nabla f,H\rangle=\frac{1}{2}\operatorname{Tr}\bigl{(}\nabla f^{T}H \bigr{)}=df(H)\). Therefore, the euclidean gradient is \[\nabla_{e}^{euclidean}W_{\ell}(g)=\sum_{x\in C}2Q_{\ell\setminus e_{x}}^{- \omega_{x}}\] Recall that the tangent projection onto \(SO(N)\) is \[P_{g}(X)=\frac{1}{2}X-\frac{1}{2}gX^{T}g\] Setting \(g=Q_{e}\), we finally arrive at the result: \[\nabla_{e}W_{\ell}=\sum_{x\in C}Q_{\ell\setminus e_{x}}^{-\omega_{x}}-Q_{e}Q_ {\ell\setminus e_{x}}^{\omega_{x}}Q_{e}\] **Lemma 4.2**.: _Let \(\ell_{1}\) and \(\ell_{2}\) be Wilson loops._ \[\langle\nabla W_{\ell_{1}},\nabla W_{\ell_{2}}\rangle=\sum_{x\in C_{1},y\in C _{2}}W_{\ell_{1}\ominus_{x},y\ell_{2}}-\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1 }\oplus_{x},y\ell_{2}} \tag{3}\] Proof.: \[\langle\nabla W_{\ell_{1}},\nabla W_{\ell_{2}}\rangle=\sum_{x\in C _{1},y\in C_{2}}\Bigl{\langle}Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}-gQ_{ \ell_{1}\setminus e_{x}}^{\omega_{x}}g,Q_{\ell_{2}\setminus e_{y}}^{-\omega_{ y}}-gQ_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g\Bigr{\rangle}\] \[=\sum_{x\in C_{1},y\in C_{2}}\frac{1}{2}\Bigl{(}\operatorname{Tr} \Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{- \omega_{y}}\Bigr{)}+\operatorname{Tr}\Bigl{(}g^{-1}Q_{\ell_{1}\setminus e_{x} }^{-\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g\Bigr{)}\Bigr{)}- \frac{1}{2}\Bigl{(}\operatorname{Tr}\Bigl{(}g^{-1}Q_{\ell_{1}\setminus e_{x} }^{-\omega_{x}}g^{-1}Q_{\ell_{2}\setminus e_{y}}^{-\omega_{y}}\Bigr{)}+ \operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}gQ_{\ell_{2} \setminus e_{y}}^{\omega_{y}}\Bigr{)}\] \[=\sum_{x\in C_{1},y\in C_{2}}\operatorname{Tr}\Bigl{(}Q_{\ell_{1 }\setminus e_{x}}^{\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{-\omega_{y}}\Bigr{)} -\operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}gQ_{\ell_{2 }\setminus e_{y}}^{\omega_{y}}g\Bigr{)}=\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1 }\ominus_{x},y\ell_{2}}-W_{\ell_{1}\oplus_{x},y\ell_{2}}\] ### Laplacian of Wilson loops **Lemma 4.3**.: _Let \(P_{g}\) denote the tangent projection. Let \(L_{X}\) and \(R_{X}\) denote left and right multiplication, respectively. Then_ \[\operatorname{Tr}(P_{g}L_{X}R_{Y}P_{g})=\frac{1}{2}\operatorname{Tr}X \operatorname{Tr}Y-\frac{1}{2}\operatorname{Tr}\bigl{(}g^{-1}X^{T}gY\bigr{)} \tag{4}\] Proof.: \[\operatorname{Tr}(P_{g}L_{X}R_{Y}P_{g})=\sum_{i<j}\big{\langle}ge_{i }e_{j}^{T}-ge_{j}e_{i}^{T},X(ge_{i}e_{j}^{T}-ge_{j}e_{i}^{T})Y\big{\rangle}\] \[=\frac{1}{2}\sum_{i<j}\big{[}\operatorname{Tr}\bigl{(}e_{j}e_{i}^ {T}g^{-1}Xge_{i}e_{j}^{T}Y\bigr{)}-\operatorname{Tr}\bigl{(}e_{i}e_{j}^{T}g^{-1 }Xge_{i}e_{j}^{T}Y\bigr{)}-\operatorname{Tr}\bigl{(}e_{j}e_{i}^{T}g^{-1}Xge_{ j}e_{i}^{T}Y\bigr{)}+\operatorname{Tr}\bigl{(}e_{i}e_{j}^{T}g^{-1}Xge_{j}e_{i}^{T}Y \bigr{)}\big{]}\] \[=\frac{1}{2}\sum_{i<j}\big{[}(g^{-1}Xg)_{ii}Y_{jj}+(g^{-1}Xg)_{ jj}Y_{ii}-(g^{-1}Xg)_{ij}Y_{ij}-(g^{-1}Xg)_{ji}Y_{ji}\big{]}\] \[=\frac{1}{2}\operatorname{Tr}(X)\operatorname{Tr}(Y)-\frac{1}{2 }\operatorname{Tr}\bigl{(}g^{-1}X^{T}gY\bigr{)}\] **Lemma 4.4**.: _Let \(W_{\ell}\) be a Wilson loop and \(\Delta_{e}\) the Laplace-Beltrami operator at the edge \(e\). Let \(m\) be the number of occurrences of \(\pm e\) in \(\ell\). 
Then_ \[\begin{split}\Delta_{e}W_{\ell}=-(N-1)mW_{\ell}&- \sum_{x\neq y\in C,\omega_{2}\omega_{y}=1}W_{\ell\times^{1}_{x,y}}W_{\ell \times^{2}_{x,y}}+\sum_{x\neq y\in C,\omega_{2}\omega_{y}=1}W_{\ell\times x_{ x,y}}\\ &+\sum_{x,y\in C,\omega_{2}\omega_{y}=-1}W_{\ell\times^{1}_{x,y} }W_{\ell\times^{2}_{x,y}}-\sum_{x,y\in C,\omega_{2}\omega_{y}=-1}W_{\ell\times x _{x,y}}\end{split} \tag{5}\] Proof.: Recall that for a vector field \(X\), \(\operatorname{div}X=\operatorname{Tr}\nabla X\) where \(\nabla X\) is the covariant derivative of \(X\). On a submanifold, the covariant derivative is the tangent projection of the ambient covariant derivative. Thus it suffices to compute \[\operatorname{div}X=\operatorname{Tr}(P_{g}DXP_{g})\] Where \(DX\) is the euclidean covariant derivative. Now, by lemma 4.1: \[D\nabla_{e}W_{\ell}=\sum_{x\in C}DQ^{-\omega_{x}}_{\ell\setminus e_{x}}-\sum _{x\in C}D(gQ^{\omega_{x}}_{\ell\setminus e_{x}}g)\] \[D\nabla_{e}W_{\ell}(H)=\sum_{x\neq y\in C}D_{y}Q^{-\omega_{x}}_{\ell\setminus e _{x}}(H)-\sum_{x\in C}\Big{(}HQ^{\omega_{x}}_{\ell\setminus e_{x}}g+gQ^{\omega _{x}}_{\ell\setminus e_{x}}H\Big{)}-\sum_{x\neq y\in C}gD_{y}Q^{\omega_{x}}_{ \ell\setminus e_{x}}(H)g\] Now expanding further requires casework. In particular, in the first term, the \(y\)th occurrence of \(e\) is \(g\) if \(\omega_{x}\omega_{y}=1\) and is \(g^{-1}\) otherwise. The opposite is true for the third term. Recall that \[D(g\mapsto g^{-1})(H)=-g^{-1}Hg^{-1}\] Let \(W_{\ell}(g_{x}\to H)\) denote the linear map formed by substituting \(H\) for the \(x\)th occurrence of \(e\). We thus have \[\begin{split} D\nabla_{e}W_{\ell}(H)&=\sum_{x,y\in C,\omega_{2}\omega_{y}=-1}Q^{-\omega_{x}}_{\ell\setminus e_{x}}(g_{y}\mapsto H )-\sum_{x\neq y\in C,\omega_{x}\omega_{y}=1}Q^{-\omega_{x}}_{\ell\setminus e_ {x}}(g_{y}^{-1}\mapsto g^{-1}Hg^{-1})\\ &-\sum_{x\in C}\Big{(}HQ^{\omega_{x}}_{\ell\setminus e_{x}}g+gQ^ {\omega_{x}}_{\ell\setminus e_{x}}H\Big{)}-\sum_{x\neq y\in C,\omega_{x} \omega_{y}=1}gQ^{\omega_{x}}_{\ell\setminus e_{x}}(g_{y}\mapsto H)g\\ &+\sum_{x,y\in C,\omega_{x}\omega_{y}=-1}gQ^{\omega_{x}}_{\ell \setminus e_{x}}(g_{y}\mapsto g^{-1}Hg^{-1})g\end{split} \tag{6}\] We can now apply lemma 4.3, as every term in this sum is an operator of the form \(L_{X}R_{Y}\). We can first consider \[\operatorname{Tr}\Bigl{(}H\mapsto P_{g}Q^{-\omega_{x}}_{\ell\setminus e_{x}} (g_{y}\to H)P_{g}\Bigr{)}\] We can write \(Q_{\ell\setminus e_{x}}=P_{+}g^{\omega_{y}}P_{-}\). Then \(Q^{-\omega_{x}}_{\ell\setminus e_{x}}=P^{-\omega_{x}}_{-\omega_{x}}gP^{- \omega_{x}}_{\omega_{x}}\) and so the trace is \[\frac{1}{2}\operatorname{Tr}(P_{+})\operatorname{Tr}(P_{-})-\frac{1}{2} \operatorname{Tr}\bigl{(}g^{-1}P_{+}^{-1}gP_{-}\bigr{)}=\frac{1}{2}W_{\times^{ 1}_{x,y}\ell}W_{\times^{2}_{x,y}\ell}-\frac{1}{2}W_{\infty_{x,y}\ell}\] Similarly, for the case \(\omega_{x}\omega_{y}=1\), \(Q^{-\omega_{x}}_{\ell\setminus e_{x}}=P^{-\omega_{x}}_{-\omega_{x}}g^{-1}P^{- \omega_{x}}_{\omega_{x}}\). 
So we compute the trace of \(P^{-\omega_{x}}_{-\omega_{x}}g^{-1}Hg^{-1}P^{-\omega_{x}}_{\omega_{x}}\) Which equals \[\frac{1}{2}\operatorname{Tr}\bigl{(}P^{-\omega_{x}}_{-\omega_{x}}g^{-1} \bigr{)}\operatorname{Tr}\bigl{(}g^{-1}P^{-\omega_{x}}_{\omega_{x}}\bigr{)}- \frac{1}{2}\operatorname{Tr}\bigl{(}P^{\omega_{x}}_{-\omega_{x}}P^{-\omega_{x} }_{\omega_{x}}\bigr{)}=\frac{1}{2}W_{\times^{1}_{x,y}\ell}W_{\times^{2}_{x,y} \ell}-\frac{1}{2}W_{\times_{x,y}\ell}\] Next we have \[\operatorname{Tr}\Bigl{(}H\mapsto HQ^{\omega_{x}}_{\ell\setminus e_{x}}g+gQ^ {\omega_{x}}_{\ell\setminus e_{x}}H\Bigr{)}\] By lemma 4.3, \[\begin{split}&=\frac{N}{2}\operatorname{Tr}\Bigl{(}Q^{\omega_{x} }_{\ell\setminus e_{x}}g\Bigr{)}-\frac{1}{2}\operatorname{Tr}\Bigl{(}Q^{ \omega_{x}}_{\ell\setminus e_{x}}g\Bigr{)}+\frac{N}{2}\operatorname{Tr} \Bigl{(}Q^{\omega_{x}}_{\ell\setminus e_{x}}g\Bigr{)}-\frac{1}{2} \operatorname{Tr}\Bigl{(}Q^{\omega_{x}}_{\ell\setminus e_{x}}g\Bigr{)}\\ &=(N-1)W_{\ell}\end{split}\] We finally come to our last type of expression. \[\operatorname{Tr}\Bigl{(}H\mapsto gQ^{\omega_{x}}_{\ell\setminus e_{x}}(g_{y} \mapsto H)g\Bigr{)}\] Writing \(Q_{\ell\backslash e_{x}}=P_{+}g^{\omega_{y}}P_{-}\), we have \(gQ_{\ell\backslash e_{x}}^{\omega_{x}}g(H)=gP_{\omega_{x}}^{\omega_{x}}HP_{- \omega_{x}}^{\omega_{x}}g\) Thus, the trace is \[=\frac{1}{2}\operatorname{Tr}\!\left(gP_{\omega_{x}}^{\omega_{x}}\right) \operatorname{Tr}\!\left(P_{-\omega_{x}}^{\omega_{x}}g\right)-\frac{1}{2} \operatorname{Tr}\!\left(g^{-1}P_{\omega_{x}}^{-\omega_{x}}g^{-1}gP_{-\omega_ {x}}^{\omega_{x}}g\right)=\frac{1}{2}W_{\times_{\pm,y}}W_{\times_{\pm,y}^{2} \ell}-\frac{1}{2}W_{\times_{x,y}\ell}\] For \(\omega_{x}\omega_{y}=-1\), \(gQ_{\ell\backslash e_{x}}^{\omega_{x}}g(H)=gP_{\omega_{x}}^{\omega_{x}}g^{-1} Hg^{-1}P_{-\omega_{x}}^{\omega_{x}}g\). Thus the final trace is \[\frac{1}{2}\operatorname{Tr}\!\left(gP_{\omega_{x}}^{\omega_{x}}g^{-1}\right) \operatorname{Tr}\!\left(g^{-1}P_{-\omega_{x}}^{\omega_{x}}g\right)-\frac{1}{ 2}\operatorname{Tr}\!\left(g^{-1}gP_{\omega_{x}}^{-\omega_{x}}g^{-1}gg^{-1}P_ {-\omega_{x}}^{\omega_{x}}g\right)\] We can now insert these identities back into eq. (6). \[\Delta_{e}W_{\ell} =\sum_{x,y\in C\omega_{x}\omega_{y}=-1}\left(\frac{1}{2}W_{\times_ {\pm,y}^{1}\ell}W_{\times_{\pm,y}^{2}\ell}-\frac{1}{2}W_{\times_{x,y}\ell} \right)-\sum_{x\neq y\in C,\omega_{x}\omega_{y}=1}\left(\frac{1}{2}W_{\times_ {\pm,y}^{1}\ell}W_{\times_{\pm,y}^{2}\ell}-\frac{1}{2}W_{\times_{x,y}\ell}\right)\] \[-\sum_{x\in C}(N-1)W_{\ell}-\sum_{x\neq y\in C,\omega_{x}\omega_{y }=1}\left(\frac{1}{2}W_{\times_{\pm,y}^{1}\ell}W_{\times_{\pm,y}^{2}\ell}- \frac{1}{2}W_{\times_{x,y}\ell}\right)\] \[+\sum_{x,y\in C\omega_{x}\omega_{y}=-1}\left(\frac{1}{2}W_{\times _{\pm,y}^{1}\ell}W_{\times_{\pm,y}^{2}\ell}-\frac{1}{2}W_{\times_{x,y}\ell}\right)\] \[=-(N-1)mW_{\ell}+\sum_{x,y\in C\omega_{x}\omega_{y}=-1}\left(W_{ \times_{\pm,y}^{1}\ell}W_{\times_{\pm,y}^{2}\ell}-W_{\times_{x,y}\ell}\right)- \sum_{x\neq y\in C,\omega_{x}\omega_{y}=1}\left(W_{\times_{\pm,y}^{1}\ell}W_{ \times_{\pm,y}^{2}\ell}-W_{\times_{x,y}\ell}\right)\] ### \(So(n)\) master loop equation Proof of theorem 3.1.: Let \((\ell_{1},\ldots\ell_{n})\) be a sequence of loops. We apply Laplace integration by parts to the Wilson loop correlation function: \[-\mathbb{E}[(\Delta_{e}W_{\ell_{1}})W_{\ell_{2}}\ldots W_{\ell_{ n}}]=Z^{-1}\int_{G^{\pm}_{\Lambda}}(\Delta_{e}W_{\ell_{1}})W_{\ell_{2}}\ldots W _{\ell_{n}}\exp\!\left(\beta N\sum_{p\in\mathcal{P}_{\Lambda}^{+}}\! 
\operatorname{Tr}\!\left(Q_{p}\right)\right)d\mu\] \[=Z^{-1}\int_{G^{\pm}_{\Lambda}}\left\langle\nabla W_{\ell_{1}}, \nabla\!\left(W_{\ell_{2}}\ldots W_{\ell_{n}}\exp\!\left(\beta N\sum_{p\in \mathcal{P}_{\Lambda}^{+}}\operatorname{Tr}\!\left(Q_{p}\right)\right)\right) \right\rangle d\mu\] \[=\sum_{i=2}^{n}\mathbb{E}\!\left[\left\langle\nabla W_{\ell_{1}}, \nabla W_{\ell_{i}}\right\rangle\prod_{j\neq 1,i}W_{\ell_{j}}\right]+\beta N\sum_{p\in \mathcal{P}_{\Lambda}^{+}}\mathbb{E}\!\left[\left\langle\nabla W_{\ell_{1}}, \nabla W_{p}\right\rangle W_{\ell_{2}}\ldots W_{\ell_{n}}\right]\] \[=\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\!\left[W_{ \ell_{1}\oplus_{x,y}\ell_{i}}\prod_{j\neq 1,i}W_{\ell_{j}}\right]-\sum_{i=2}^{n}\sum_{x\in C _{1},y\in C_{i}}\mathbb{E}\!\left[W_{\ell_{1}\oplus_{x,y}\ell_{i}}\prod_{j\neq 1,i}W_{ \ell_{j}}\right]\] \[+\beta N\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\! \left[W_{\ell_{1}\oplus_{x,y}p}W_{\ell_{2}}\ldots W_{\ell_{n}}\right]-\beta N \sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\!\left[W_{\ell_{1}\oplus _{x,y}p}W_{\ell_{2}}\ldots W_{\ell_{n}}\right]\] On the left hand side, we have \[-\mathbb{E}[(\Delta_{e}W_{\ell_{1}})W_{\ell_{2}}\ldots W_{\ell_{n }}]\] \[=(N-1)m\mathbb{E}[W_{\ell_{1}}\ldots W_{\ell_{n}}]+\sum_{x\neq y \in C_{1},\omega_{x}\omega_{y}=1}\left(\mathbb{E}[W_{\times_{\pm,y}^{1}\ell_{1} }W_{\times_{\pm,y}^{2}\ell_{1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]-\mathbb{E}[W_{ \times_{x,y}\ell_{1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]\right)\] \[-\sum_{x,y\in C_{1},\omega_{x}\omega_{y}=-1}\left(\mathbb{E}[W_{ \times_{\pm,y}\ell_{1}}W_{\times_{\pm,y}^{2}\ell_{1}}W_{\ell_{2}}\ldots W_{\ell_ {n}}]-\mathbb{E}[W_{\times_{x,y}\ell_{1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]\right)\] Setting the two sides equal and rearranging terms gives the result: \[(N-1)m\mathbb{E}[W_{\ell_{1}}\dots W_{\ell_{n}}]=\sum_{x,y\in C_{1}, \omega_{x}\omega_{y}=-1}\Big{(}\mathbb{E}[W_{\times_{2}^{1},\ell_{1}}W_{\times _{2}^{2},y\ell_{1}}W_{\ell_{2}}\dots W_{\ell_{n}}]-\mathbb{E}[W_{\times_{x,y }\ell_{1}}W_{\ell_{2}}\dots W_{\ell_{n}}]\Big{)}\] \[-\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=1}\Big{(}\mathbb{E}[ W_{\times_{2}^{1},\ell_{1}}W_{\times_{2}^{2},y\ell_{1}}W_{\ell_{2}}\dots W_{ \ell_{n}}]-\mathbb{E}[W_{\times_{x,y}\ell_{1}}W_{\ell_{2}}\dots W_{\ell_{n}}] \Big{)}\] \[=\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\Bigg{[}W_{ \ell_{1}\ominus_{x},y\ell_{i}}\prod_{j\neq 1,i}W_{\ell_{j}}\Bigg{]}-\sum_{i=2}^{n} \sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x},y\ell_{i}} \prod_{j\neq 1,i}W_{\ell_{j}}\Bigg{]}\] \[+\beta N\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\big{[} W_{\ell_{1}\ominus_{x},y\ell_{2}}\dots W_{\ell_{n}}\big{]}-\beta N\sum_{i=2}^{n} \sum_{x\in C_{1},y\in C_{i}}\mathbb{E}\big{[}W_{\ell_{1}\oplus_{x},y\ell}W_{ \ell_{2}}\dots W_{\ell_{n}}\big{]}\] This concludes the proof of the \(SO(N)\) master loop equation. ## 5. \(Su(n)\) and \(U(n)\) analysis We now move on to the proof of theorem 3.2 The procedure is mostly analogous. However unlike \(SO(N)\), not every element of these groups is conjugate to its inverse. Thus, the orientations of Wilson loops will become more relevant in the analysis. This is reflected in the fact that Wilson loops are now complex valued. We will introduce a parameter \(\eta\) that vanishes when \(G=U(N)\) and is \(1\) when \(G=SU(N)\). 
### Gradients of Wilson loops **Lemma 5.1**.: _Let \(\ell\) be a loop._ \[\nabla\operatorname{Re}W_{\ell}=\sum_{x\in C_{\ell}}Q_{\ell\setminus e_{x}}^{ -\omega_{x}}-Q_{e}Q_{\ell\setminus e_{x}}^{\omega_{x}}Q_{e}+\eta\frac{2i\omega _{x}}{N}\operatorname{Im}W_{\ell}Q_{e} \tag{7}\] Proof.: As before, the differential is \[d\operatorname{Re}W_{\ell}(H)=\sum_{x\in C_{\ell}}d_{x}\operatorname{Re}W_{ \ell}(H)=\sum_{x\in C_{\ell}}\operatorname{Re}\operatorname{Tr}\Big{(}Q_{ \ell\setminus e_{x}}^{\omega_{x}}H\Big{)}\] Thus, \[\nabla_{e}^{\operatorname{euc}}\operatorname{Re}W_{\ell}=\sum_{x\in C_{\ell}}2 Q_{\ell\setminus e_{x}}^{-\omega_{x}}\] Recalling that the tangent projection is \[P_{g}X=\frac{1}{2}X-\frac{1}{2}gX^{\dagger}g-\frac{\eta}{2N}\operatorname{Im} \operatorname{Tr}\bigl{(}g^{-1}X-X^{\dagger}g^{-1}\bigr{)}g\] We thus get \[\nabla\operatorname{Re}W_{\ell}(g) =\sum_{x\in C_{\ell}}Q_{\ell\setminus e_{x}}^{-\omega_{x}}-gQ_{ \ell\setminus e_{x}}^{\omega_{x}}g+\eta\frac{2i}{N}\operatorname{Im} \operatorname{Tr}\Bigl{(}Q_{\ell\setminus e_{x}}^{\omega_{x}}g\Bigr{)}g\] \[=\sum_{x\in C_{\ell}}Q_{\ell\setminus e_{x}}^{-\omega_{x}}-gQ_{ \ell\setminus e_{x}}^{\omega_{x}}g+\eta\frac{2i\omega_{x}}{N}\operatorname{Im} W_{\ell}g\] **Lemma 5.2**.: _Let \(\ell\) be a loop._ \[\nabla\operatorname{Im}W_{\ell}=\sum_{x\in C_{\ell}}\omega_{x}iQ_{\ell\setminus e _{x}}^{-\omega_{x}}+iQ_{e}\omega_{x}Q_{\ell\setminus e_{x}}^{\omega_{x}}Q_{e}- \eta\frac{2i\omega_{x}}{N}\operatorname{Re}W_{\ell}Q_{e}\] Proof.: Recall that \(\operatorname{Im}W_{\ell}=\operatorname{Re}(-iW_{\ell})\). Note that \[\operatorname{Im}\operatorname{Tr}(Ag^{\omega}B)=\operatorname{Im} \operatorname{Tr}((BA)g^{\omega})=\omega\operatorname{Im}\operatorname{Tr}((BA)^ {\omega}g)\] Thus, \[d\operatorname{Im}W_{\ell}(H)=\sum_{x\in C_{\ell}}\omega_{x}\operatorname{Re} \operatorname{Tr}\Bigl{(}-iQ_{\ell\setminus c_{x}}^{\omega_{x}}H\Bigr{)}\] and the euclidean gradient is \[\sum_{x\in C_{\ell}}2i\omega_{x}Q_{\ell\setminus c_{x}}^{-\omega_{x}}\] Applying the tangent projection again finally gives for \[\nabla\operatorname{Im}W_{\ell}=\sum_{x\in C_{\ell}}\omega_{x}iQ_{\ell \setminus c_{x}}^{-\omega_{x}}+ig\omega_{x}Q_{\ell\setminus c_{x}}^{\omega_{ x}}g-\eta\frac{2i\omega_{x}}{N}\operatorname{Re}W_{\ell}g\] **Lemma 5.3**.: _Let \(\ell_{1}\) and \(\ell_{2}\) be loops. Then_ \[\left\langle\nabla W_{\ell_{1}},\nabla W_{\ell_{2}}\right\rangle=\sum_{x\in C _{1},y\in C_{2},\omega_{x}\omega_{y}=-1}2W_{\ell_{1}\ominus_{x},y\ell_{2}}- \sum_{\omega_{x}\omega_{y}=1}2W_{\ell_{1}\oplus_{x},y\ell_{2}}+\eta\frac{2t_{ 1}t_{2}}{N}W_{\ell_{1}}W_{\ell_{2}}\] Proof.: Note that because \(\operatorname{Re}\operatorname{Tr}(X)=\operatorname{Re}\operatorname{Tr} \bigl{(}X^{\dagger}\bigr{)}\), the first part of the gradient has algebra identical to that of the \(SO(N)\) case. 
Thus \[\left\langle\nabla f^{R},\nabla f^{\prime R}\right\rangle=\sum_{x\in C_{1},y \in C_{2}}\operatorname{Re}(W_{\ell_{1}\ominus_{x},y\ell_{2}})-\operatorname {Re}(W_{\ell_{1}\oplus_{x},y\ell_{2}})+\eta\left\langle\frac{2i\omega_{x}}{N} \operatorname{Im}W_{\ell_{1}}g,Q_{\ell_{2}\setminus c_{y}}^{-\omega_{y}}-gQ_{ \ell\setminus c_{x}}^{\omega_{y}}g\right\rangle\] \[+\eta\left\langle Q_{\ell_{1}\setminus c_{x}}^{-\omega_{x}}-gQ_{\ell_{1} \setminus c_{x}}^{\omega_{x}}g,\frac{2i\omega_{y}}{N}\operatorname{Im}W_{\ell _{2}}g\right\rangle+\eta\left\langle\frac{2i\omega_{x}}{N}\operatorname{Im}W_{ \ell_{1}}g,\frac{2i\omega_{y}}{N}\operatorname{Im}W_{\ell_{2}}g\right\rangle\] The first term after the mergers equals \[\eta\frac{\omega_{x}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname{Im} \operatorname{Tr}\Bigl{(}g^{-1}Q_{\ell_{2}\setminus c_{y}}^{-\omega_{y}}-Q_{ \ell_{2}\setminus c_{y}}^{\omega_{y}}g\Bigr{)}=-\eta\frac{2\omega_{x}\omega_ {y}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname{Im}W_{\ell_{2}}\] Similarly, the second term after mergers is \[\eta\frac{\omega_{y}}{N}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell_{2} \setminus c_{y}}^{\omega_{y}}g\Bigr{)}\operatorname{Re}\operatorname{Tr} \Bigl{(}iQ_{\ell_{1}\setminus c_{x}}^{\omega_{x}}g-ig^{-1}Q_{\ell_{1}\setminus c _{x}}^{-\omega_{x}}\Bigr{)}=-\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{ Im}W_{\ell_{1}}W_{\ell_{2}}\] Finally, the last term is \[\eta\frac{4\omega_{x}\omega_{y}}{N^{2}}\operatorname{Im}W_{\ell_{1}} \operatorname{Im}W_{\ell_{2}}\left\langle g,g\right\rangle=\frac{2\omega_{x} \omega_{y}}{N}\operatorname{Im}W_{\ell_{1}}W_{\ell_{2}}\] In total, \[\left\langle\nabla f^{R},\nabla f^{\prime R}\right\rangle=\sum_{x\in C_{\ell} }\operatorname{Re}(W_{\ell_{1}\ominus_{x},y\ell_{2}})-\operatorname{Re}(W_{ \ell_{1}\oplus_{x},y\ell_{2}})-\eta\frac{2\omega_{x}\omega_{y}}{N} \operatorname{Im}W_{\ell_{1}}W_{\ell_{2}}\] Now for \(f^{I}\). The first term is almost algebraically identical (the \(i\)s cancel and \(\omega\)s factor out). But the negative sign is gone. So \[\left\langle\nabla f^{I},\nabla f^{\prime I}\right\rangle=\sum_{x\in C_{1},y \in C_{2}}\omega_{x}\omega_{y}\operatorname{Re}(W_{\ell_{1}\ominus_{x},y\ell _{2}})+\omega_{x}\omega_{y}\operatorname{Re}(W_{\ell_{1}\oplus_{x},y\ell_{2}})- \eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Re}W_{\ell_{1}}\operatorname{ Re}W_{\ell_{2}}\] \[-\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Re}W_{\ell_{2}}\operatorname{ Re}W_{\ell_{2}}+\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Re}W_{\ell_{1}} \operatorname{Re}W_{\ell_{2}}\] \[=\sum_{x\in C_{1},y\in C_{2}}\omega_{x}\omega_{y}\operatorname{Re}W_{\ell_{1} \ominus_{x},y\ell_{2}}+\omega_{x}\omega_{y}\operatorname{Re}W_{\ell_{1}\oplus_ {x},y\ell_{2}}-\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Re}W_{\ell_{1}} \operatorname{Re}W_{\ell_{2}}\] In total then we have \[\operatorname{Re}\left\langle\nabla W_{\ell_{1}},\nabla W_{\ell_{2}}\right\rangle\] \[=\sum_{x\in C_{1},y\in C_{2}}(1-\omega_{x}\omega_{y})\operatorname{Re}(W_{\ell_{1} \oplus_{x},y\ell_{2}})-(1+\omega_{x}\omega_{y})\operatorname{Re}W_{\ell_{1} \oplus_{x},y\ell_{2}}+\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Re}(W_{ \ell_{1}}W_{\ell_{2}})\] Now for the imaginary part. 
\[\left\langle Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}-gQ_{\ell_{1}}^{\omega_{x }}g,i\omega_{y}Q_{\ell_{2}\setminus e_{y}}^{-\omega_{y}}+i\omega_{y}gQ_{\ell_{ 2}\setminus e_{y}}^{\omega_{y}}g\right\rangle=\] \[\frac{\omega_{y}}{2}\operatorname{Re}\operatorname{Tr}(iQ_{\ell_{1}\setminus e _{x}}^{\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{-\omega_{y}}+iQ_{\ell_{1} \setminus e_{x}}^{\omega_{x}}gQ_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g-ig^ {-1}Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}g^{-1}Q_{\ell_{2}\setminus e_{ y}}^{-\omega_{y}}\] \[-ig^{-1}Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}Q_{\ell_{2}\setminus e_{y} }^{\omega_{y}}g)\] \[=-\frac{\omega_{y}}{2}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell_{1} \setminus e_{x}}^{\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{-\omega_{y}}-Q_{ \ell_{1}\setminus e_{x}}^{-\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{\omega_{y }}\Bigr{)}\] \[-\frac{\omega_{y}}{2}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell_{1} \setminus e_{x}}^{\omega_{x}}gQ_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g-g^ {-1}Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}g^{-1}Q_{\ell_{2}\setminus e_{ y}}^{-\omega_{y}}\Bigr{)}\] \[=\omega_{y}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e _{x}}^{-\omega_{x}}Q_{\ell_{2}\setminus e_{y}}^{\omega_{y}}\Bigr{)}-\omega_{ y}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{ \omega_{x}}gQ_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g\Bigr{)}\] Recall that in the definition of \(\ell_{1}\oplus\ell_{2}\) or \(\ell_{1}\ominus\ell_{2}\), the orientation of the first term does not change. As a result, \[=-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{1}\ominus\omega_{x},y\ell_{2 }}-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{1}\oplus_{x},y\ell_{2}}\] Now for the remaining terms, \[\eta\left\langle Q_{\ell_{1}\setminus e_{x}}^{-\omega_{x}}-gQ_{\ell_{1} \setminus e_{x}}^{\omega_{x}}g,-\frac{2i\omega_{y}}{N}\operatorname{Re} \operatorname{Tr}\Bigl{(}Q_{\ell_{2}\setminus e_{y}}^{\omega_{y}}g\Bigr{)} \right\rangle=\eta\frac{\omega_{y}}{N}\operatorname{Re}\operatorname{Tr}\Bigl{(}Q _{\ell_{2}\setminus e_{y}}^{\omega_{y}}g\Bigr{)}\operatorname{Im} \operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}g-g^{-1}Q_{ \ell_{1}\setminus e_{x}}^{-\omega_{x}}\Bigr{)}\] \[=\eta\frac{2\omega_{y}}{N}\operatorname{Re}\operatorname{Tr}\Bigl{(}Q_{\ell_{2 }\setminus e_{y}}^{\omega_{y}}g\Bigr{)}\operatorname{Im}\operatorname{Tr} \Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}g\Bigr{)}=\eta\frac{2\omega_{ x}\omega_{y}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname{Re}W_{\ell_{2}}\] Similarly, \[=\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname {Re}W_{\ell_{2}}\] And finally, \[\eta\left\langle\frac{2i}{N}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell _{1}\setminus e_{x}}^{\omega_{x}}g\Bigr{)}g,-\frac{2i\omega_{y}}{N} \operatorname{Re}\operatorname{Tr}\Bigl{(}Q_{\ell_{2}\setminus e_{y}}^{\omega _{y}}g\Bigr{)}g\right\rangle=-\eta\frac{2\omega_{y}}{N}\operatorname{Im} \operatorname{Tr}\Bigl{(}Q_{\ell_{1}\setminus e_{x}}^{\omega_{x}}g\Bigr{)} \operatorname{Re}\operatorname{Tr}\Bigl{(}Q_{\ell_{2}\setminus e_{y}}^{ \omega_{y}}g\Bigr{)}\] \[=-\frac{2\omega_{x}\omega_{y}}{N}\eta\operatorname{Im}W_{\ell_{1}} \operatorname{Re}W_{\ell_{2}}\] In total, \[\left\langle\nabla f^{R},\nabla f^{I}\right\rangle=\sum_{x\in C_{1},y\in C_{2 }}-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{1}\ominus\omega_{x},y\ell_{2 }}-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{1}\oplus_{x},y\ell_{2}}+\eta 
\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname{Re}W_ {\ell_{2}}\] By symmetry, \(\left\langle\nabla f^{\prime R},\nabla f^{I}\right\rangle\) is the same as \(\left\langle\nabla f^{R},\nabla f^{\prime I}\right\rangle\) with \(\ell_{1}\to\ell_{2}\) and \(\ell_{2}\to\ell_{1}\). So \[\left\langle\nabla f^{\prime R},\nabla f^{I}\right\rangle=\sum_{x\in C_{1},y\in C _{2}}-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{2}\ominus\omega_{x},y\ell_{ 1}}-\omega_{x}\omega_{y}\operatorname{Im}W_{\ell_{2}\oplus_{x},y\ell_{1}}+\eta \frac{2\omega_{x}\omega_{y}}{N}\operatorname{Im}W_{\ell_{2}}\operatorname{Re}W_ {\ell_{1}}\] Now we need to examine the relationship between \(W_{\ell_{1}\oplus\ell_{2}}\) and \(W_{\ell_{2}\oplus\ell_{1}}\) and similarly for \(\ominus\). For a negative merger, an orientation reversal occurs if \(\omega\omega^{\prime}=1\). Otherwise it doesn't happen. Similarly for a positive merger, an orientation reversal happens if \(\omega\omega^{\prime}=-1\). Thus we get \[=\sum_{x\in C_{1},y\in C_{2}}\operatorname{Im}W_{\ell_{1}\ominus\omega_{x},y \ell_{2}}-\operatorname{Im}W_{\ell_{1}\oplus_{x},y\ell_{2}}+\eta\frac{2\omega_{ x}\omega_{y}}{N}\operatorname{Im}W_{\ell_{2}}\operatorname{Re}W_{\ell_{1}}\] And so, \[\operatorname{Im}\left\langle\nabla W_{\ell_{1}},\nabla W_{\ell_{2}}\right\rangle= \sum_{x\in C_{1},y\in C_{2}}(1-\omega_{x}\omega_{y})\operatorname{Im}W_{\ell_{1} \oplus_{x,y}\ell_{2}}-(\omega_{x}\omega_{y}+1)\operatorname{Im}W_{\ell_{1} \oplus_{x,y}\ell_{2}}+\eta\frac{2\omega_{x}\omega_{y}}{N}(\operatorname{Im}W_{ \ell_{1}}\operatorname{Re}W_{\ell_{2}}+\operatorname{Re}W_{\ell}\] \[=\sum_{x\in C_{1},y\in C_{2}}(1-\omega_{x}\omega_{y})\operatorname{Im}W_{\ell _{1}\oplus_{x,y}\ell_{2}}-(1+\omega_{x}\omega_{y})\operatorname{Im}W_{\ell_{1} \oplus_{x,y}\ell_{2}}+\eta\frac{2\omega_{x}\omega_{y}}{N}\operatorname{Im}(W_{ \ell_{1}}W_{\ell_{2}})\] Thus we can conclude: \[\left\langle\nabla_{e}W_{\ell_{1}},\nabla_{e}W_{\ell_{2}}\right\rangle=\sum_{ x\in C_{1},y\in C_{2}}(1-\omega_{x}\omega_{y})W_{\ell_{1}\oplus_{x,y}\ell_{2}}-(1+ \omega_{x}\omega_{y})W_{\ell_{1}\oplus_{x,y}\ell_{2}}+\eta\frac{2\omega_{x} \omega_{y}}{N}W_{\ell_{1}}W_{\ell_{2}}\] \[=\sum_{\omega_{x}\omega_{y}=-1}2W_{\ell_{1}\ominus_{x,y}\ell_{2}}-\sum_{ \omega_{x}\omega_{y}=1}2W_{\ell_{1}\oplus_{x,y}\ell_{2}}+\eta\sum_{x,y}\frac{2 \omega_{x}\omega_{y}}{N}W_{\ell_{1}}W_{\ell_{2}}\] Note that \[\sum_{x\in C_{1},y\in C_{2}}\omega_{x}\omega_{y}=(|A_{1}||A_{2}|+|B_{1}||B_{2} |)-(|A_{1}||B_{2}|+|B_{1}||A_{2}|)\] \[=(|A_{1}|-|B_{1}|)(|A_{2}|-|B_{2}|)=t_{1}t_{2}\] Completing the proof. This final lemma is required to account for terms from the measure. **Lemma 5.4**.: _Let \(\ell_{1}\) and \(\ell_{2}\) be loops. Then_ \[\left\langle\nabla W_{\ell_{1}},\operatorname{Re}\nabla W_{\ell_{2}}\right\rangle =\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1}\ominus_{x,y}\ell_{2}}-W_{ \ell_{1}\oplus_{x,y}\ell_{2}}+\eta\frac{t_{1}t_{2}}{N}\Big{(}W_{\ell_{1}}W_{ \ell_{2}^{-1}}-W_{\ell_{1}}W_{\ell_{2}}\Big{)}\] Proof.: Most of the algebra has already been carried out. 
This inner product is \[\left\langle\nabla\operatorname{Re}W_{\ell_{1}},\nabla\operatorname{Im}W_{ \ell_{1}}\right\rangle+i\left\langle\nabla\operatorname{Im}W_{\ell_{1}}, \nabla\operatorname{Re}W_{\ell_{1}}\right\rangle\] The first term is, as we've computed before, \[=\sum_{x\in C_{1},y\in C_{2}}\operatorname{Re}(W_{\ell_{1}\ominus_{x,y}\ell_{ 2}})-\operatorname{Re}(W_{\ell_{1}\oplus_{x,y}\ell_{2}})-\eta\frac{2\omega_{x} \omega_{y}}{N}\operatorname{Im}W_{\ell_{1}}\operatorname{Im}W_{\ell_{2}}\] The next term is \[=\sum_{x\in C_{1},y\in C_{2}}\operatorname{Im}W_{\ell_{1}\ominus_{x,y}\ell_{2} }-\operatorname{Im}W_{\ell_{1}\oplus_{x,y}\ell_{2}}+\eta\frac{2\omega_{x} \omega_{y}}{N}\operatorname{Re}W_{\ell_{1}}\operatorname{Im}W_{\ell_{2}}\] Putting them together, this gives \[\left\langle\nabla_{e}W_{\ell},\nabla_{e}\operatorname{Re}W_{\ell}\right\rangle =\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1}\ominus_{x,y}\ell_{2}}-W_{ \ell_{1}\oplus-x,y\ell_{2}}-\eta\frac{2\omega_{x}\omega_{y}}{N}(i \operatorname{Re}W_{\ell_{1}}\operatorname{Im}W_{\ell_{2}}-\operatorname{Im}W_ {\ell_{1}}\operatorname{Im}W_{\ell_{2}})\] That last term can be simplified: \[=(i\operatorname{Re}W_{\ell_{1}}-\operatorname{Im}W_{\ell_{1}})\operatorname{ Im}W_{\ell_{2}}=iW_{\ell_{1}}\operatorname{Im}W_{\ell_{1}}=\frac{1}{2}W_{\ell_{1}}W_{ \ell_{2}}-\frac{1}{2}W_{\ell_{1}}W_{-\ell_{2}}\] So in total, \[\left\langle\nabla_{e}W_{\ell_{1}},\nabla_{e}\operatorname{Re}W_{\ell_{2}} \right\rangle=\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1}\ominus_{x,y}\ell_{2}}-W_ {\ell_{1}\oplus_{x,y}\ell_{2}}+\eta\frac{\omega_{x}\omega_{y}}{N}W_{\ell_{1}}W_ {-\ell_{2}}-\eta\frac{\omega_{x}\omega_{y}}{N}W_{\ell_{1}}W_{\ell_{2}}\] \[=\sum_{x\in C_{1},y\in C_{2}}W_{\ell_{1}\ominus_{x,y}\ell_{2}}-W_{\ell_{1} \oplus_{x,y}\ell_{2}}+\frac{t_{1}t_{2}}{N}W_{\ell_{1}}W_{-\ell_{2}}-\eta\frac{t _{1}t_{2}}{N}W_{\ell_{1}}W_{\ell_{2}}\] ### Laplacian of Wilson loops **Lemma 5.5**.: _Let \(L_{X}\) and \(R_{Y}\) denote left and right-multiplication, respectively. Then_ \[\operatorname{Tr}(P_{g}L_{X}R_{Y}P_{g})=\operatorname{Re}(\operatorname{Tr}X \operatorname{Tr}Y)-\frac{\eta}{N}\operatorname{Re}\operatorname{Tr}\bigl{(}g^ {-1}XgY\bigr{)}\] Proof.: We first compute the trace on \(U(N)\). An orthonormal basis is \(g(e_{i}e_{j}^{T}-e_{j}e_{i}^{T})\)\(i<j\), \(ig(e_{i}e_{j}^{T}+e_{j}e_{i}^{T})\), \(i<j\), and \(g\sqrt{2}e_{i}e_{i}^{T}\). \[\operatorname{Tr}(P_{g}L_{X}L_{Y}P_{g})=\sum_{i<j}\left<e_{i}e_{j} ^{T}-e_{j}e_{i}^{T},g^{-1}Xg(e_{i}e_{j}^{T}-e_{j}e_{i}^{T})Y\right>+\sum_{i<j} \left<e_{i}e_{j}^{T}+e_{j}e_{i}^{T},g^{-1}Xg(e_{i}e_{j}^{T}+e_{j}e_{i}^{T})Y\right>\] \[+\sum_{i}2\left<e_{i}e_{i}^{T},g^{-1}Xge_{i}e_{i}^{T}\right>\] \[=\sum_{i<j}2\bigl{(}\left<e_{i}e_{j}^{T},g^{-1}Xge_{i}e_{j}^{T}Y \right>+\left<e_{j}e_{i}^{T},g^{-1}Xge_{j}e_{i}^{T}\right>\bigr{)}+\sum_{i}2 \left<e_{i}e_{i}^{T},g^{-1}Xge_{i}e_{i}^{T}\right>\] \[=\sum_{ij}2\left<e_{i}e_{j}^{T},g^{-1}Xge_{i}e_{j}^{T}\right>-2 \sum_{i}\left<e_{i}e_{i}^{T},g^{-1}Xge_{i}e_{i}^{T}\right>+2\sum_{i}\left<e_{ i}e_{i},g^{-1}Xge_{i}e_{i}^{T}\right>=\sum_{ij}\operatorname{Re}(g^{-1}Xg)_{ii}Y_{jj}\] \[=\operatorname{Re}\operatorname{Tr}X\operatorname{Tr}Y\] Now, \(g\mathfrak{u}(N)=g\mathfrak{su}(N)\oplus i\mathbb{R}g\). 
Thus the trace on \(SU(N)\) is \[\operatorname{Tr}_{g\mathfrak{u}(N)}(L_{X}L_{Y})-\left<\sqrt{ \frac{2}{N}}ig,\sqrt{\frac{2}{N}}iXgY\right>=\operatorname{Tr}_{g\mathfrak{u} (N)}(L_{X}L_{Y})-\frac{1}{N}\operatorname{Re}\operatorname{Tr}\bigl{(}g^{-1} XgY\bigr{)}\] \[=\operatorname{Re}\operatorname{Tr}X\operatorname{Tr}Y-\frac{1} {N}\operatorname{Re}\operatorname{Tr}\bigl{(}g^{-1}XgY\bigr{)}\] **Lemma 5.6**.: _Let \(W_{\ell}\) be a Wilson loop and \(\Delta_{e}\) the Laplace-Beltrami operator at the edge \(e\). Then_ \[\Delta_{e}W_{\ell}=-\sum_{x\in C_{1},y\in C_{2},\omega_{x}\omega_{y}=1}2W_{ \times_{x,y}^{1}}W_{\times_{x,y}^{2}\ell}+\sum_{\omega_{x}\omega_{y}=-1}2W_{ \times_{x,y}^{1}}W_{\times_{x,y}^{2}\ell}-\left(2mN-\frac{2\eta t^{2}}{N} \right)W_{\ell}\] Proof.: The laplacian does not involve any complex multiplication. We can therefore separately compute \(\Delta_{e}\operatorname{Re}W_{\ell}\) and \(\Delta_{e}\operatorname{Im}W_{\ell}\). \[\nabla_{e}\operatorname{Re}W_{\ell}=\sum_{x\in C}\nabla^{x}\operatorname{Re}W _{\ell}\] so \[\nabla_{H}^{euc}\nabla_{e}\operatorname{Re}W_{\ell}=\sum_{x,y\in C}d_{y} \nabla^{x}\operatorname{Re}W_{\ell}(H)\] Now, recall: \[\nabla_{e}\operatorname{Re}W_{\ell}=\sum_{x\in C_{\ell}}Q_{\ell\setminus e_{ x}}^{-\omega_{x}}-Q_{e}Q_{\ell\setminus e_{x}}^{\omega_{x}}Q_{e}+\eta\frac{2i\omega_{x}}{N} \operatorname{Im}W_{\ell}Q_{e}\] Taking the differential and accounting for orientations as in the \(SO(N)\) proof, \[d\nabla_{e}\operatorname{Re}W_{\ell}(H) =\sum_{x,y\in C_{\ell},\omega_{x}\omega_{y}=-1}Q_{\ell\setminus e_ {x}}^{-\omega_{x}}(g_{y}\to H)-\sum_{x\neq y\in C_{\ell},\omega_{x} \omega_{y}=1}Q_{\ell\setminus e_{x}}^{-\omega_{x}}(g_{y}\to g^{-1}Hg^{-1})\] \[-HQ_{\ell\setminus e_{x}}^{\omega_{x}}Q_{e}-Q_{e}Q_{\ell\setminus e _{x}}^{\omega_{x}}H-\sum_{x\neq y\in C_{\ell},\omega_{x}\omega_{y}=1}Q_{\ell \setminus e_{x}}^{\omega_{x}}(g_{y}\to H)\] \[+\sum_{x,y\in C_{\ell},\omega_{x}\omega_{y}=-1}Q_{e}Q_{\ell\setminus e _{x}}^{\omega_{x}}(g_{y}\to g^{-1}Hg^{-1})+\eta\frac{2i\omega_{x}}{N} \operatorname{Im}W_{\ell}H+\eta\frac{2i\omega_{x}}{N}d\operatorname{Im}W_{\ell}Q _{e}\] For that last term, \[d\operatorname{Im}\operatorname{Tr}W_{\ell}(H)=\sum_{y\in C}\omega_{y} \operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell\setminus e_{y}}^{\omega_{y}}H \Bigr{)}\] Putting it all together, \[d\nabla_{e}\operatorname{Re}W_{\ell}(H)=-\sum_{\omega_{z}\omega_{y}=1}Q_{\ell \backslash e_{x}}^{-\omega_{z}}(Q_{y}\to Q_{e}^{-1}HQ_{e}^{-1})+\sum_{\omega_{z} \omega_{y}=-1}Q_{\ell\backslash e_{x}}^{-\omega_{z}}(Q_{y}\to H)\] \[-\sum_{x\in C}HQ_{\ell\backslash e_{x}}^{\omega_{x}}Q_{e}+Q_{e}Q_{\ell \backslash e_{x}}^{\omega_{x}}H-\sum_{\omega_{z}\omega_{y}=1}Q_{e}Q_{\ell \backslash e_{x}}^{\omega_{x}}(Q_{y}\to H)Q_{e}\] \[+\sum_{\omega_{z}\omega_{y}=-1}Q_{e}Q_{\ell\backslash e_{x}}^{\omega_{x}}(Q_{e }\to Q_{e}^{-1}HQ_{e}^{-1})Q_{e}+\eta\sum_{x\in C}\frac{2i\omega_{x}}{N}H \operatorname{Im}W_{\ell}+\eta\sum_{x\in C,y\in C}\frac{2i\omega_{x}\omega_{y }}{N}Q_{e}\operatorname{Im}\operatorname{Tr}\Bigl{(}Q_{\ell\backslash e_{y}}^ {\omega_{y}}H\Bigr{)}\] The last two terms can be dropped, as both of their images lie in the normal bundle. We now compute the trace (recall lemma 5.5). 
Once again recall that if \(Q_{\ell\backslash e_{x}}=P_{+}g^{\omega_{y}}P_{-}\), then \[Q_{\ell\backslash e_{x}}^{\pm\omega_{x}}=P_{\pm\omega_{x}}^{\pm\omega_{x}}g^{ \pm\omega_{x}\omega_{y}}P_{\mp\omega_{x}}^{\pm\omega_{x}}\] First: \[\sum_{\omega_{z}\omega_{y}=1}\operatorname{Tr}Q_{\ell\backslash e _{x}}^{-\omega_{x}}(Q_{y}\to Q_{e}^{-1}HQ_{e}^{-1})=\sum_{\omega_{z} \omega_{y}=1}\operatorname{Tr}P_{-\omega_{x}}^{-\omega_{x}}Q_{e}^{-1}HQ_{e}^{ -1}P_{\omega_{x}}^{-\omega_{x}}\] \[=\sum_{\omega_{z}\omega_{y}=1}\operatorname{Re}(\operatorname{ Tr}\bigl{(}P_{-\omega_{x}}^{-\omega_{x}}Q_{e}^{-1}\bigr{)}\operatorname{Tr} \bigl{(}Q_{e}^{-1}P_{\omega_{x}}^{-\omega_{x}}\bigr{)})-\frac{\eta}{N} \operatorname{Re}\operatorname{Tr}\bigl{(}g^{-1}(P_{-\omega_{x}}^{-\omega_{x} }g^{-1})gg^{-1}P_{\omega_{x}}^{-\omega_{x}}\bigr{)}\] \[=\sum_{\omega_{z}\omega_{y}=1}\operatorname{Re}(W_{\times^{1}_{ \text{$\frac{1}{y}$}}\ell}W_{\times^{2}_{\text{$\frac{2}{y}$}}\ell})-\frac{ \eta}{N}\operatorname{Re}\operatorname{Tr}\bigl{(}g^{-1}P_{-\omega_{x}}^{- \omega_{x}}g^{-1}P_{\omega_{x}}^{-\omega_{x}}\bigr{)}=\sum_{\omega_{z}\omega _{y}=1}\operatorname{Re}(W_{\times^{1}_{\text{$\frac{1}{y}$}}\ell}W_{\times^ {2}_{\text{$\frac{2}{y}$}}\ell})-\frac{\eta}{N}\operatorname{Re}W_{\ell}\] Next, \[\operatorname{Tr}\Bigl{(}HQ_{\ell\backslash e_{x}}^{\omega_{x}}Q_{e}+Q_{e}Q_{ \ell\backslash e_{x}}^{\omega_{x}}H\Bigr{)}=2N\operatorname{Re}\operatorname{ Tr}\Bigl{(}Q_{\ell\backslash e_{x}}^{-\omega_{x}}Q_{e}\Bigr{)}-\frac{2\eta}{N} \operatorname{Re}\operatorname{Tr}\Bigl{(}Q_{\ell\backslash e_{x}}^{\omega_ {x}}Q_{e}\Bigr{)}=\left(2N-\frac{2\eta}{N}\right)\operatorname{Re}W_{\ell}\] Next, \[\sum_{\omega_{z}\omega_{y}=1}\operatorname{Tr}Q_{e}Q_{\ell \backslash e_{x}}^{\omega_{x}}(Q_{y}\to H)Q_{e}=\operatorname{Tr}\bigl{(}Q_{e }P_{\omega_{x}}^{\omega_{x}}HP_{-\omega_{x}}^{\omega_{x}}Q_{e}\bigr{)}\] \[=\operatorname{Re}\operatorname{Tr}\bigl{(}Q_{e}P_{\omega_{x}}^{ \omega_{x}}\bigr{)}\operatorname{Tr}\bigl{(}P_{\omega_{x}}^{\omega_{x}}Q_{e} \bigr{)}-\frac{\eta}{N}\operatorname{Re}(Q_{e}^{-1}Q_{e}P_{\omega_{x}}^{\omega _{x}}Q_{e}P_{-\omega_{x}}^{\omega_{x}}Q_{e})\] \[=\operatorname{Re}W_{\times^{1}_{\text{$\frac{1}{y}$}}\ell}W_{ \times^{2}_{\text{$\frac{2}{y}$}}\ell}-\frac{\eta}{N}\operatorname{Re}W_{\ell}\] Finally, \[\sum_{\omega_{z}\omega_{y}=-1}\operatorname{Tr}\Bigl{(}Q_{e}Q_{ \ell\backslash e_{x}}^{\omega_{x}}(Q_{y}\to Q_{e}^{-1}HQ_{e}^{-1})\Bigr{)}= \sum_{\omega_{z}\omega_{y}=-1}\operatorname{Tr}\bigl{(}Q_{e}P_{\omega_{x}}^{ \omega_{x}}Q_{e}^{-1}HQ_{e}^{-1}P_{-\omega_{x}}^{\omega_{x}}Q_{e}\bigr{)}\] \[=\sum_{\omega_{x}\omega_{y}=-1}\operatorname{Re}(\operatorname{ Tr}\bigl{(}Q_{e}P_{\omega_{x}}^{\omega_{x}}Q_{e}^{-1}\bigr{)}\operatorname{Tr} \bigl{(}Q_{e}P_{-\omega_{x}}^{\omega_{x}}Q_{e}^{-1}\bigr{)})-\frac{\eta}{N} \operatorname{Re}\operatorname{Tr}\bigl{(}Q_{e}^{-1}Q_{e}P_{\omega_{x}}^{ \omega_{x}}Q_{e}^{-1}Q_{e}Q_{e}^{-1}P_{-\omega_{x}}^{\omega_{x}}Q_{e}\bigr{)}\] \[=\sum_{\omega_{z}\omega_{y}=-1}\operatorname{Re}W_{\times^{1}_{ \text{$\frac{1}{y}$}}\ell}W_{\times^{2}_{\text{$\frac{2}{y}$}}\ell}-\frac{ \eta}{N}\operatorname{Re}\operatorname{Tr}\bigl{(}P_{\omega_{x}}^{\omega_{x}}Q_{e }^{-1}P_{-\omega_{x}}^{\omega_{x}}Q_{e}\bigr{)}=\sum_{\omega_{z}\omega_{y}=-1} \operatorname{Re}W_{\times^{1}_{\text{$\frac{1}{y}$}}\ell}W_{\times^{2}_{ \text{$\frac{2}{y}$}}\ell}-\frac{\eta}{N}\operatorname{Re}W_{\ell}\] Inserting these identities back in, \[\Delta_{e}W_{\ell} =-\!\left(\sum_{\omega_{x}\omega_{y}=1}\operatorname{Re}(W_{\times 
\mathbb{1},q}W_{\times\mathbb{2},y}\ell)-\frac{\eta}{N}\operatorname{Re}W_{\ell }\right)+\left(\sum_{\omega_{x}\omega_{y}=-1}\operatorname{Re}(W_{\times \mathbb{1},y}\ell W_{\times\mathbb{2},y}\ell)-\frac{\eta}{N}\operatorname{Re}W_{ \ell}\right)\] \[-\sum_{x}\left(2N-\frac{2\eta}{N}\right)\operatorname{Re}W_{\ell }-\sum_{\omega_{x}\omega_{y}=1}\left(\operatorname{Re}W_{\times\mathbb{1},y} \ell W_{\times\mathbb{2},y}\ell-\frac{\eta}{N}\operatorname{Re}W_{\ell}\right)\] \[+\left(\sum_{\omega_{x}\omega_{y}=-1}\operatorname{Re}W_{\times \mathbb{1}_{x,y}^{1}}W_{\times\mathbb{2},y}\ell-\frac{\eta}{N}\operatorname{ Re}W_{\ell}\right)\] \[=-\sum_{x\in C}\left(2N-\frac{2\eta}{N}\right)\operatorname{Re}W _{\ell}-2\sum_{\omega_{x}\omega_{y}=1}\operatorname{Re}(W_{\times\mathbb{1}_{x,y}^{1}}W_{\times\mathbb{2},y}\ell)+2\sum_{\omega_{x}\omega_{y}=-1} \operatorname{Re}(W_{\times\mathbb{1}_{x,y}^{1}}W_{\times\mathbb{2},y}\ell)\] \[+\frac{2\eta}{N}\bigg{(}\sum_{x\neq y,\omega_{x}\omega_{y}=1}1- \sum_{\omega_{x}\omega_{y}=-1}1\bigg{)}\operatorname{Re}W_{\ell}\] The last term can be simplified as follows: \(\sum_{\omega_{x}\omega_{y}=1,x\neq y}=|A|(|A|-1)+|B|(|B|-1)\) and \(\sum_{\omega_{x}\omega_{y}=-1}=2|A||B|\) So that term reduces to \[\frac{2\eta}{N}\big{[}|A|^{2}+|B|^{2}-2|A||B|-|A|-|B|\big{]} \operatorname{Re}W_{\ell}=\frac{2\eta}{N}(t^{2}-m)\] where \(t=|A|-|B|\) and \(m=|A|+|B|\). \(A,B\) are the number of \(e\)s and \(-e\)s respectively. So in total, \[\Delta_{e}\operatorname{Re}W_{\ell}=-\sum_{\omega_{x}\omega_{y}=1}2 \operatorname{Re}W_{\ell\times\mathbb{1}_{x,y}^{1}}W_{\ell\times\mathbb{2}_{x,y}^{2}}+\sum_{\omega_{x}\omega_{y}=-1}2\operatorname{Re}W_{\ell\times \mathbb{1}_{x,y}^{1}}W_{\ell\times\mathbb{2}_{x,y}^{2}}-\left(2mN-\frac{2\eta t ^{2}}{N}\right)\operatorname{Re}W_{\ell}\] Now for the imaginary part of the laplacian, notice that no part of the algebra actually used that the matrices between occurrences of the edge \(e\) belong to \(SU(N)\). We only needed that they are unitary. Thus, pick any edge that is not \(\pm e\), and replace the matrix there with \(-iA\). This gives \[\Delta_{e}\operatorname{Im}W_{\ell}=-2\sum_{\omega_{x}\omega_{y}=1} \operatorname{Im}W_{\ell\times\mathbb{1},y}W_{\ell\times\mathbb{2}_{x,y}^{2}} +\sum_{\omega_{x}\omega_{y}=-1}2\operatorname{Im}W_{\ell\times\mathbb{1}_{x,y} ^{1}}W_{\ell\times\mathbb{2}_{x,y}^{2}}-\left(2mN-\frac{2\eta t^{2}}{N}\right) \operatorname{Im}W_{\ell}\] And so, we have \[\Delta W_{\ell}=-2\sum_{\omega_{x}\omega_{y}=1}W_{\ell\times\mathbb{1}_{x,y} ^{1}}W_{\ell\times\mathbb{2}_{x,y}^{2}}+\sum_{\omega_{x}\omega_{y}=-1}2W_{\ell \times\mathbb{1}_{x,y}^{1}}W_{\ell\times\mathbb{2}_{x,y}^{2}}-\left(2mN-\frac {2\eta t^{2}}{N}\right)\!W_{\ell}\] ### Master loop equation for \(U(n)\) and \(Su(n)\) Proof of theorem 3.2.: Let \((\ell_{1},\dots,\ell_{n})\) be a sequence of loops. 
By Laplacian integration by parts, \[-\mathbb{E}[(\Delta_{e}W_{\ell_{1}})W_{\ell_{2}}\dots W_{\ell_{n }}]=Z^{-1}\int_{G^{E_{\Lambda}^{+}}}\left\langle\nabla W_{\ell_{1}},\nabla \Bigg{(}W_{\ell_{2}}\dots W_{\ell_{n}}\exp\!\left(\beta N\sum_{p\in\mathcal{P }_{\Lambda}^{+}}\operatorname{Re}\operatorname{Tr}W_{p}\right)\right)\right\rangle d\mu\] \[=\sum_{i=2}^{n}\mathbb{E}\Bigg{[}\langle\nabla W_{\ell_{1}}, \nabla W_{\ell_{i}}\rangle\prod_{j\neq 1,i}W_{\ell_{i}}\Bigg{]}+\beta N\sum_{i=2}^{n}\sum_{p\in \mathcal{P}^{+}(e)}\mathbb{E}[\langle\nabla W_{\ell_{1}},\operatorname{Re} \nabla W_{p}\rangle\,W_{\ell_{2}}\dots W_{\ell_{n}}]\] Applying lemma 5.3 and lemma 5.4, \[=\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y}=-1}2 \mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{\ell_{j}} \Bigg{]}-\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y}=1}2 \mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{\ell_{j}} \Bigg{]}\] \[+\eta\sum_{i=2}^{n}\frac{2t_{1}t_{i}}{N}\mathbb{E}[W_{\ell_{1}} \ldots W_{\ell_{n}}]+\beta N\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C_{ 1}}\mathbb{E}[W_{\ell_{1}\ominus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] \[-\beta N\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C_{1}} \mathbb{E}[W_{\ell_{1}\oplus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]+\eta\beta N \sum_{p\in\mathcal{P}^{+}_{\Lambda}(e)}\frac{t_{1}}{N}\mathbb{E}[W_{\ell_{1}} W_{p^{-1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] \[-\eta\beta N\sum_{p\in\mathcal{P}^{+}_{\Lambda}(e)}\frac{t_{1}}{ N}\mathbb{E}[W_{\ell_{1}}W_{p}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] Regrouping the plaquettes in the expansion terms, \[=\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y}=-1 }2\mathbb{E}\Bigg{[}W_{\ell_{1}\ominus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{\ell _{j}}\Bigg{]}-\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y}=1} 2\mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{\ell_{ j}}\Bigg{]}\] \[+\eta\sum_{i=2}^{n}\frac{2t_{1}t_{i}}{N}\mathbb{E}[W_{\ell_{1}} \ldots W_{\ell_{n}}]+\beta N\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C _{1}}\mathbb{E}[W_{\ell_{1}\ominus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] \[-\beta N\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C_{1}} \mathbb{E}[W_{\ell_{1}\oplus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]-\eta\beta N \sum_{p\in\mathcal{P}_{\Lambda}(e)}\frac{t_{1}t_{p}}{N}\mathbb{E}[W_{\ell_{1}} W_{p^{-1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] Now for the left hand side, lemma 5.6 gives \[-\mathbb{E}[(\Delta_{e}W_{\ell_{1}})W_{\ell_{2}}\ldots W_{\ell_{n }}]=\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=1}2\mathbb{E}\Big{[}W_{\times_ {x,y}^{1}\ell_{1}}W_{\times_{x,y}^{2}\ell_{1}}W_{\ell_{2}}\ldots W_{\ell_{n}} \Big{]}\] \[-\sum_{x,y\in C_{1},\omega_{x}\omega_{y}=-1}2\mathbb{E}\Big{[}W_{ \times_{x,y}^{1}\ell_{1}}W_{\times_{x,y}^{2}\ell_{1}}W_{\ell_{2}}\ldots W_{ \ell_{n}}\Big{]}+\bigg{(}2mN-\frac{2\eta t_{1}^{2}}{N}\bigg{)}\mathbb{E}[W_{ \ell_{1}}\ldots W_{\ell_{n}}]\] Setting the two sides equal,rearranging terms, and dividing by 2 gives \[\bigg{(}mN-\frac{\eta t_{1}t}{N}\bigg{)}\mathbb{E}[W_{\ell_{1}} \ldots W_{\ell_{n}}]=\sum_{x,y\in C_{1},\omega_{x}\omega_{y}=-1}\mathbb{E} \Big{[}W_{\times_{x,y}^{1}\ell_{1}}W_{\times_{x,y}^{2}\ell_{1}}W_{\ell_{2}} \ldots W_{\ell_{n}}\Big{]}\] \[-\sum_{x\neq y\in C_{1},\omega_{x}\omega_{y}=1}\mathbb{E}\Big{[}W_ {\times_{x,y}^{1}\ell_{1}}W_{\times_{x,y}^{2}\ell_{1}}W_{\ell_{2}}\ldots W_{ \ell_{n}}\Big{]}+\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y 
}=-1}\mathbb{E}\Bigg{[}W_{\ell_{1}\ominus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{ \ell_{j}}\Bigg{]}\] \[-\sum_{i=2}^{n}\sum_{x\in C_{1},y\in C_{i},\omega_{x}\omega_{y}=1 }\mathbb{E}\Bigg{[}W_{\ell_{1}\oplus_{x},y\ell_{i}}\prod_{j\neq i,1}W_{\ell _{j}}\Bigg{]}+\frac{\beta N}{2}\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x \in C_{1}}\mathbb{E}[W_{\ell_{1}\ominus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]\] \[-\frac{\beta N}{2}\sum_{i=2}^{n}\sum_{p\in\mathcal{P}^{+}(e),x\in C _{1}}\mathbb{E}[W_{\ell_{1}\oplus_{x}p}W_{\ell_{2}}\ldots W_{\ell_{n}}]-\eta \beta\sum_{p\in\mathcal{P}_{\Lambda}(e)}t_{1}t_{p}\mathbb{E}[W_{\ell_{1}}W_{p^{- 1}}W_{\ell_{2}}\ldots W_{\ell_{n}}]\]
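As a numerical sanity check on the projection-trace identity of lemma 5.5, which drives the Laplacian computations above, the following sketch compares both sides for \(U(N)\) (\(\eta=0\)) and \(SU(N)\) (\(\eta=1\)). It assumes the real inner product \(\langle A,B\rangle=\tfrac{1}{2}\operatorname{Re}\operatorname{Tr}(A^{\dagger}B)\) implicit in the orthonormal basis used in the proof; this normalization and the variable names are assumptions of the sketch, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

def inner(a, b):
    # assumed real inner product <A, B> = (1/2) Re Tr(A^dagger B)
    return 0.5 * np.real(np.trace(a.conj().T @ b))

# a random unitary g and arbitrary complex matrices X, Y
g, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Y = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# orthonormal basis of the tangent space g*u(N), as listed in the proof of lemma 5.5
basis = []
for i in range(N):
    for j in range(i + 1, N):
        a = np.zeros((N, N), complex); a[i, j], a[j, i] = 1, -1      # antisymmetric part
        s = np.zeros((N, N), complex); s[i, j] = s[j, i] = 1j        # i * symmetric part
        basis += [g @ a, g @ s]
for i in range(N):
    d = np.zeros((N, N), complex); d[i, i] = np.sqrt(2) * 1j         # diagonal directions
    basis.append(g @ d)

lhs_u = sum(inner(b, X @ b @ Y) for b in basis)      # Tr(P_g L_X R_Y P_g) on U(N)
u1 = np.sqrt(2.0 / N) * 1j * g                        # unit vector along the removed u(1) direction
lhs_su = lhs_u - inner(u1, X @ u1 @ Y)                # same trace restricted to g*su(N)

rhs_u = np.real(np.trace(X) * np.trace(Y))
rhs_su = rhs_u - np.real(np.trace(np.conj(g).T @ X @ g @ Y)) / N

print(np.isclose(lhs_u, rhs_u), np.isclose(lhs_su, rhs_su))
```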
2309.14483
Unveiling the Potential of Deep Learning Models for Solar Flare Prediction in Near-Limb Regions
This study aims to evaluate the performance of deep learning models in predicting $\geq$M-class solar flares with a prediction window of 24 hours, using hourly sampled full-disk line-of-sight (LoS) magnetogram images, particularly focusing on the often overlooked flare events corresponding to the near-limb regions (beyond $\pm$70$^{\circ}$ of the solar disk). We trained three well-known deep learning architectures--AlexNet, VGG16, and ResNet34 using transfer learning and compared and evaluated the overall performance of our models using true skill statistics (TSS) and Heidke skill score (HSS) and computed recall scores to understand the prediction sensitivity in central and near-limb regions for both X- and M-class flares. The following points summarize the key findings of our study: (1) The highest overall performance was observed with the AlexNet-based model, which achieved an average TSS$\sim$0.53 and HSS$\sim$0.37; (2) Further, a spatial analysis of recall scores disclosed that for the near-limb events, the VGG16- and ResNet34-based models exhibited superior prediction sensitivity. The best results, however, were seen with the ResNet34-based model for the near-limb flares, where the average recall was approximately 0.59 (the recall for X- and M-class was 0.81 and 0.56 respectively) and (3) Our research findings demonstrate that our models are capable of discerning complex spatial patterns from full-disk magnetograms and exhibit skill in predicting solar flares, even in the vicinity of near-limb regions. This ability holds substantial importance for operational flare forecasting systems.
Chetraj Pandey, Rafal A. Angryk, Berkay Aydin
2023-09-25T19:30:02Z
http://arxiv.org/abs/2309.14483v1
# Unveiling the Potential of Deep Learning Models for Solar Flare Prediction in Near-Limb Regions ###### Abstract This study aims to evaluate the performance of deep learning models in predicting \(\geq\)M-class solar flares with a prediction window of 24 hours, using hourly sampled full-disk line-of-sight (LoS) magnetogram images, particularly focusing on the often overlooked flare events corresponding to the near-limb regions (beyond \(\pm\)70\({}^{\circ}\) of the solar disk). We trained three well-known deep learning architectures-AlexNet, VGG16, and ResNet34 using transfer learning and compared and evaluated the overall performance of our models using true skill statistics (TSS) and Heidke skill score (HSS) and computed recall scores to understand the prediction sensitivity in central and near-limb regions for both X- and M-class flares. The following points summarize the key findings of our study: (1) The highest overall performance was observed with the AlexNet-based model, which achieved an average TSS\(\sim\)0.53 and HSS\(\sim\)0.37; (2) Further, a spatial analysis of recall scores disclosed that for the near-limb events, the VGG16- and ResNet34-based models exhibited superior prediction sensitivity. The best results, however, were seen with the ResNet34-based model for the near-limb flares, where the average recall was approximately 0.59 (the recall for X- and M-class was 0.81 and 0.56 respectively) and (3) Our research findings demonstrate that our models are capable of discerning complex spatial patterns from full-disk magnetograms and exhibit skill in predicting solar flares, even in the vicinity of near-limb regions. This ability holds substantial importance for operational flare forecasting systems. deep learning, solar flares, near-limb prediction ## I Introduction Solar flares are temporary occurrences on the Sun, considered to be the central phenomena in space weather forecasting, manifested as the sudden large eruption of electromagnetic radiation on the outermost atmosphere of the Sun. They are observed by the X-ray sensors on Geostationary Operational Environmental Satellite (GOES) and classified according to their peak X-ray flux level (measured in watt per square meter) into the following five categories by the National Oceanic and Atmospheric Administration (NOAA): \(\mathrm{X}\) (\(\geq 10^{-4}Wm^{-2}\)), M (\(\geq 10^{-5}\) and \(<10^{-4}Wm^{-2}\)), C (\(\geq 10^{-6}\) and \(<10^{-5}Wm^{-2}\)), B (\(\geq 10^{-7}\) and \(<10^{-6}Wm^{-2}\)), and A (\(\geq 10^{-8}\) and \(<10^{-7}Wm^{-2}\)) [1]; areas on the Sun with peak X-ray flux less than \(<10^{-8}Wm^{-2}\) (less than A-class flares) are regarded as flare-quiet (FQ) regions. Large flares (M- and X-class) are rare events and significantly more powerful than other flare classes, with the capability to disrupt several infrastructures in space (e.g., satellite operations) and on Earth (e.g., the electricity power grid and avionics). Hence, a precise and reliable system for solar flare prediction is essential. Active regions (ARs) on the Sun are areas with localized magnetic disturbances, visually indicated by scattered flags in the full-disk magnetogram image shown in Fig. 1. In most operational flare forecasting systems, these ARs are used as regions of interest because they are considered to be the main initiators of space weather events. In AR-based flare prediction, the underlying models issue predictions for each AR individually. 
To issue a full-disk forecast with an AR-based model, the output flare probabilities for each active region are usually aggregated using a heuristic function as mentioned in [2]. This heuristic function assumes conditional independence among ARs and equal contributions from all ARs to the aggregated full-disk forecast. However, this uniform weighting scheme may not accurately represent the actual impact of each AR on the probability of full-disk flare prediction [3].
Fig. 1: An annotated full-disk line-of-sight magnetogram observed on 2013-01-09 at 00:00:00 UTC as an example, showing the approximate central location (within \(\pm\)70\({}^{\circ}\)) and near-limb (beyond \(\pm\)70\({}^{\circ}\) to \(\pm\)90\({}^{\circ}\)) region with all the NOAA active regions (ARs) present at the noted timestamp. ARs in central and near-limb regions are indicated by blue and red flags respectively. Note that the directions East (E) and West (W) are reversed in solar coordinates.
Furthermore, the magnetic field measurements used by AR-based techniques are susceptible to severe projection effects [4] as ARs get closer to the limbs (to such an extent that magnetic field readings become distorted beyond \(\pm\)60\({}^{\circ}\) of the solar disk [5]). Therefore, the aggregated full-disk flare probability is, in fact, limited to ARs in central locations. This further underscores the inherent challenges in issuing a full-disk flare forecast using an AR-based model. As studies on AR-based models for flare prediction include ARs located within \(\pm\)30\({}^{\circ}\)[6] to \(\pm\)70\({}^{\circ}\)[7], in the context of this paper, this upper limit (\(\pm\)70\({}^{\circ}\)) is used as the boundary between central locations (within \(\pm\)70\({}^{\circ}\)) and near-limb regions (beyond \(\pm\)70\({}^{\circ}\)), as shown in Fig. 1. Full-disk models, on the other hand, typically use compressed images derived from the original full-depth magnetogram rasters representing the entire solar disk. While projection effects persist in these magnetograms, which contain actual magnetic field readings, the compressed images retain only the shape-based spatial patterns of ARs such as size, directionality, sunspot borders, and inversion lines in grayscale. Thus, by incorporating the entire full-disk magnetogram, this approach enables the prediction of solar flares in the Sun's near-limb areas as well, which are often overlooked by AR-based methods. To the best of our knowledge, flare prediction employs four major approaches: (i) empirical human prediction (e.g., [8]), (ii) statistical prediction (e.g., [9]), (iii) physics-based numerical simulations (e.g., [10]), and (iv) machine learning and deep learning approaches (e.g., [11, 12, 13, 14, 15]). The use of machine learning in extracting forecast patterns from the Sun has been an active area of research since the early 1990s [16]. Since then, there has been significant advancement in machine learning and deep learning techniques, leading to a surge of interest in applying these methods to build more precise flare forecasting models, as they can automatically extract relevant spatial features discerning the intricate structures associated with ARs prone to eventual solar flares. In [6], a convolutional neural network (CNN) model was trained using solar Active Region (AR) patches extracted from line-of-sight (LoS) magnetograms within \(\pm\)30\({}^{\circ}\) of the central meridian to predict \(\geq\)C-, \(\geq\)M-, and \(\geq\)X-class flares.
Similarly, [11] developed a CNN-based model that issued binary class predictions for \(\geq\)C- and \(\geq\)M-class flares within 24 hours using Space-Weather Helioseismic and Magnetic Imager Active Region Patches (SHARP) data [17]. The SHARP data was extracted from solar magnetograms using AR patches located within \(\pm\)45\({}^{\circ}\) of the central meridian. Notably, both models [6, 11] had limited operational capability, as they were restricted to small portions of the observable disk in central locations (\(\pm\)30\({}^{\circ}\) and \(\pm\)45\({}^{\circ}\)). More recently, we presented full-disk models trained with limited data in [18, 19]. However, these were preliminary studies that did not provide insights into the models' capability for near-limb events. Moreover, our prior work on explainable full-disk deep learning models [13, 20] shows that the features learned by these models are linked to relevant ARs on full-disk magnetograms, which underscores their possible implications as a complementary approach. In this study, we present a more comprehensive view of deep learning-based full-disk models for predicting \(\geq\)M-class solar flares in binary mode. We evaluate and compare the performance of three widely used pre-trained CNNs - AlexNet [21], VGG16 [22], and ResNet34 [23] - by utilizing hourly sampled instances of full-disk LoS magnetogram images covering solar cycle 24. The focus of this work is to study whether our models can be relied upon for critical applications, particularly for near-limb forecasting with quantitative spatial analytics. The novel contributions of this paper are as follows: (i) We show an improved overall performance of a full-disk solar flare prediction model by building and comparing three CNN architectures on full-disk magnetogram images, (ii) We provide an extended spatial analysis of the predictive capability of full-disk models on near-limb and central locations, and (iii) We provide results that underscore the role of full-disk models in the prediction of solar flares in near-limb regions of the Sun. The remainder of the paper is structured as follows. In Sec. II, we detail the process of data preparation with the consequent data distribution. In Sec. III, we describe all three model architectures explored in this work. Sec. IV presents the experimental design and model evaluation with skill scores and provides the models' prediction sensitivity in central and near-limb regions. Finally, in Sec. V, we summarize our findings and suggest avenues for future research. ## II Data We use full-disk LoS magnetogram images obtained from the Helioseismic and Magnetic Imager (HMI) instrument onboard the Solar Dynamics Observatory (SDO) [25], publicly available from Helioviewer1. We collected a total of 63,649 magnetogram images by sampling them every hour of the day, starting at 00:00 and ending at 23:00, from December 2010 to December 2018. These images are resized to 512\(\times\)512 for computational efficiency and labeled using a 24-hour prediction window based on the maximum peak X-ray flux (converted to NOAA flare classes), as illustrated in Fig. 2. To elaborate, if the maximum X-ray intensity of a flare was weaker than M, we labeled the observation as "No Flare" (NF: \(<\)M), and if it was \(\geq\)M, we labeled it as "Flare" (FL: \(\geq\)M). This resulted in 54,649 instances for the NF class and 9,000 instances (8,120 instances of M-class and 880 instances of X-class flares) for the FL class. Due to the scarcity of M- and X-class flares, the overall class imbalance in our dataset is \(\sim\)1:6 (FL:NF).
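To make the labeling procedure concrete, the following sketch applies the NOAA peak-flux thresholds quoted in the introduction and the 24-hour prediction window described above; the flare-catalog column names and the helper functions are hypothetical, intended only as an illustration of the rule, not as the authors' released code.

```python
import pandas as pd

def flux_to_class(flux):
    """Map a GOES peak X-ray flux (W/m^2) to a NOAA flare class."""
    if flux >= 1e-4: return "X"
    if flux >= 1e-5: return "M"
    if flux >= 1e-6: return "C"
    if flux >= 1e-7: return "B"
    if flux >= 1e-8: return "A"
    return "FQ"  # flare-quiet

def label_observation(obs_time, flare_catalog):
    """Binary FL/NF label for one hourly magnetogram using a 24-hour prediction window.

    `flare_catalog` is a hypothetical DataFrame with columns `peak_time` and `peak_flux`.
    """
    window = flare_catalog[(flare_catalog.peak_time > obs_time) &
                           (flare_catalog.peak_time <= obs_time + pd.Timedelta(hours=24))]
    max_flux = window.peak_flux.max() if len(window) else 0.0
    return "FL" if flux_to_class(max_flux) in ("M", "X") else "NF"
```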
Finally, we created a non-chronological split of our data into four temporally non-overlapping tri-monthly partitions introduced in [18] for our 4-fold cross-validation experiments, where three of the partitions are used as a training set, and one partition is used as a test set. The detailed class-wise data distribution is shown in Fig. 3. Footnote 1: Helioviewer: https://api.helioviewer.org/ ## III Models In this work, we use three CNN architectures: AlexNet [21], VGG16 [22], and ResNet34 [23]. We used AlexNet [21] due to its inherent architectural simplicity, consisting of 5 convolutional layers, 3 max pool layers, 1 adaptive average pool layer, and three fully connected layers. Moreover, our study included VGG16 [22], a relatively more complex model, to evaluate the hypothesis that an increase in the number of layers might engender enhanced performance. This model augments the foundational structure of AlexNet by integrating additional convolutional layers, all employing uniform 3x3 convolutional kernels. The VGG16 architecture consists of 13 convolutional layers, 5 max pool layers, 1 adaptive average pool layer, and 3 fully connected layers. Lastly, we included ResNet34 [23], a CNN model that extends the complexity of the VGG16 design by facilitating the training of deeper networks with fewer parameters. Diverging from the approach of AlexNet and VGG16, ResNet34 integrates residual connections from each layer into subsequent connected layers. The architecture of ResNet34 consists of 33 convolutional layers, including a 7x7 kernel for the initial layer and 3x3 kernels for the remaining layers, along with one max pool layer, one adaptive average pool layer, and one fully connected layer. The primary reason behind our choice of these three architectures was to analyze and evaluate the influence of varying architectural designs and increasing layer depths on performance. Additionally, we factored the simplicity of the architectures into our selection process, in light of the relatively modest scale of our dataset for deep learning models. These pre-trained models require a 3-channel image as input; however, our data comprises compressed solar magnetogram images, which are grayscale. To reconcile this, we incorporated an additional convolutional layer at the onset of the network architecture, which accepts a 1-channel input. This layer employs a 3\(\times\)3 kernel with a size-1 stride, padding, and dilation, and consequently generates a 3-channel feature map. We initialize the added convolutional layer in all three models using Kaiming initialization [26], while all other layers are initialized with pre-trained weights. ## IV Experimental Evaluation ### _Experimental Settings_ We trained our full-disk flare prediction models using Stochastic Gradient Descent (SGD) as the optimizer and Negative Log-Likelihood (NLL) as the objective function. We initialized each of the models with its corresponding pre-trained weights and then further trained it for 50 epochs while employing a dynamic learning rate scheduling strategy, OneCycleLR with cosine annealing [27]. The initial learning rate (LR) employed for all three models was \(1e-5\), with a maximum LR set to \(1e-5\) for VGG16 and ResNet34, and \(1e-4\) for the AlexNet model. This scheduler adjusts the learning rate in a cyclical pattern, gradually increasing it to help the model quickly converge and then decreasing it to fine-tune the performance.
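A minimal PyTorch sketch of the model adaptation and training configuration described above is given below. The `GrayscaleWrapper` class, the choice of Kaiming-normal initialization, the binary head replacement, and the `steps_per_epoch` value are illustrative assumptions; the pretrained backbones and scheduler come from standard torchvision/PyTorch APIs.

```python
import torch
import torch.nn as nn
from torchvision import models

class GrayscaleWrapper(nn.Module):
    """Prepend a 1->3 channel conv (3x3 kernel, stride/padding/dilation of 1) to a pretrained backbone."""
    def __init__(self, backbone):
        super().__init__()
        self.to_three = nn.Conv2d(1, 3, kernel_size=3, stride=1, padding=1, dilation=1)
        nn.init.kaiming_normal_(self.to_three.weight)  # Kaiming init for the added layer only
        self.backbone = backbone                       # remaining layers keep pretrained weights

    def forward(self, x):
        # log-probabilities pair with the NLL objective
        return torch.log_softmax(self.backbone(self.to_three(x)), dim=1)

backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
backbone.classifier[6] = nn.Linear(4096, 2)            # binary FL/NF head (AlexNet-specific index)
model = GrayscaleWrapper(backbone)

criterion = nn.NLLLoss()                               # class weights can be passed via `weight=`
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
steps_per_epoch = 600                                  # illustrative; set to the number of training batches
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-4, epochs=50, steps_per_epoch=steps_per_epoch, anneal_strategy="cos")
```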
The steps per epoch were set to the number of batches in training data, and the batch size was 64. Using the OneCycleLR scheduler simplifies and optimizes the models' learning rate schedule for hyperparameter tuning. As mentioned earlier in Sec. II, our dataset has an inherent class imbalance issue. This imbalance can significantly influence the performance of the models, potentially leading to less precise and reliable predictions for the minority class. To address this, we employed data augmentation and adjusted class weights in the loss function only on the training set. Specifically, we applied three augmentation techniques: vertical flipping, horizontal flipping, and rotations between +5\({}^{\circ}\) and -5\({}^{\circ}\) to both classes. For each instance in the minority class (FL), we applied all three augmentations, quadrupling the total number of instances for the entire FL-class. For each instance in NF-class, we randomly selected one of the three aforementioned augmentation techniques, doubling the total instances for this class. The goal of augmenting the NF-class instances was to ensure that the NF-class, though not uniformly augmented, retained a diversity in its data akin to the FL-class and expanded the overall dataset. Post augmentation, we adjusted the class weights to be inversely proportional to class frequencies, thereby penalizing misclassifications of the minority class. Finally, we evaluated our models using a 4-fold cross-validation approach on tri-monthly partitions. We evaluate the overall performance of our models using two widely-used forecast skills scores: true skill statistic (TSS, in Eq. 1) and Heidke skill score (HSS, in Eq. 2), derived from the elements of confusion matrix: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). In the context of this paper, the FL class is considered as the positive outcome, while NF is negative. Lastly, we report the subclass and overall recall (shown in Eq. 3) for flaring instances (M- and X-class) to assess the prediction sensitivity of our models in central and near-limb regions. \[TSS=\frac{TP}{TP+FN}-\frac{FP}{FP+TN} \tag{1}\] \[HSS=2\times\frac{TP\times TN-FN\times FP}{((P\times(FN+TN)+(TP+FP)\times N))}, \tag{2}\] where \(N=TN+FP\) and \(P=TP+FN\). \[Recall=\frac{TP}{TP+FN} \tag{3}\] TSS and HSS values range from -1 to 1, where 1 indicates all correct predictions, -1 represents all incorrect predictions, and 0 represents no skill. In contrast to TSS, HSS is an imbalance-aware metric, and it is common practice to use HSS in combination with TSS for the solar flare prediction models due to the high class-imbalance ratio present in the datasets. ### _Evaluation_ This section presents an analysis of the results, focusing on the performance comparison of the three models used in this study. The findings reveal that the AlexNet-based model exhibits better performance in relation to both the VGG16- and ResNet34-based models, as evidenced by the HSS and TSS scores provided in Table I. Notably, the AlexNet-based model demonstrates enhanced robustness, as indicated by the lower standard deviation, and achieves an approximate 2% improvement (for both TSS and HSS) compared to the VGG16-based model. Furthermore, when compared to the ResNet34-based model, the AlexNet-based model showcases a 1% higher skill score (for both TSS and HSS). 
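The skill scores in Eqs. (1)-(3) translate directly into code; the short helper below is a sketch that treats FL as the positive class, with the confusion-matrix counts supplied by the caller.

```python
def skill_scores(tp, fp, tn, fn):
    """TSS, HSS, and recall from confusion-matrix counts, with FL as the positive class."""
    p, n = tp + fn, tn + fp                   # P = TP + FN, N = TN + FP as in Eq. (2)
    tss = tp / (tp + fn) - fp / (fp + tn)     # Eq. (1)
    hss = 2 * (tp * tn - fn * fp) / (p * (fn + tn) + (tp + fp) * n)  # Eq. (2)
    recall = tp / (tp + fn)                   # Eq. (3)
    return tss, hss, recall

# e.g., for a hypothetical fold: skill_scores(tp=1500, fp=4000, tn=9500, fn=700)
```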
It is important to highlight that the skill scores of the VGG16 and ResNet34 models exhibit greater variability, primarily influenced by the outcomes of Fold-3 in the 4-fold cross-validation experiment, with details presented in Table II. Furthermore, our best results exceed those reported in [13, 20] by approximately 2% in terms of TSS (reported \(\sim\)0.51), and in [20] by about 2% in terms of HSS (\(\sim\)0.35), while remaining comparable to the HSS in [13]. For reproducibility, the source code for the models along with the results is available in our open-source project repository [28]. Moreover, we scrutinized the proficiency of our models from both a quantitative and a qualitative standpoint by conducting an intricate spatial analysis of their performance in relation to the locations of the M- and X-class solar flares that were used as the labels. For the purpose of our analysis, we used the predictions made on the test set and created a heatmap by gathering the flares grouped by their location in the Heliographic Stonyhurst (HGS) coordinate system, where each bin represents a 5\({}^{\circ}\)\(\times\) 5\({}^{\circ}\) spatial cell in terms of latitude and longitude. Initially, we computed the recall for the \(\geq\)M-class flares (combined M- and X-class flares) in each spatial cell, providing a comprehensive assessment of the models' performance. Subsequently, we evaluated the recall separately for M-class and X-class flares, allowing us to analyze the models' sensitivity at a more granular level. The heatmaps that illustrate the spatial distribution of recall scores for \(\geq\)M-, X-, and M-class flares are shown in Fig. 4, 5 (a), and 5 (b) respectively.
Fig. 5: Individual heatmaps illustrating all three models' recall performance for subclasses in FL. (a) **X-class flares** and (b) **M-class flares**. The spatially aggregated recall scores in Fig. 4 are isolated for the two subclasses. White cells in the grid represent unavailable instances.
This allowed us to compare all three models on their capabilities to learn spatial patterns that localize the regions where the models were more effective in making accurate predictions and vice versa. Our findings indicate that all three models demonstrated reasonable proficiency in predicting X-class flares in central locations. However, among these, the ResNet34-based model stood out for its overall better performance in accurately forecasting X-class flares, regardless of whether they were in near-limb or central locations, as shown in Fig. 5 (a). Upon analysis of the heatmaps for \(\geq\)M- and only M-class flares, as depicted in Fig. 4 and Fig. 5 (b) respectively, we observed that the ResNet34-based model generally yielded more accurate predictions across diverse spatial locations in comparison to the other models. Nonetheless, a common limitation across all three models was an elevated rate of false negatives in near-limb areas for M-class flares. Notably, these regions are often associated with unreliable readings due to projection effects. Despite this challenge, our study signifies a substantial progression in solar flare prediction, enabling the prediction of flares even in these intricate near-limb regions with distorted magnetic fields. This ability to predict flares in near-limb areas has considerable implications in operational forecasting.
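The spatial recall analysis described above can be sketched as follows: flare events are binned into 5\({}^{\circ}\)\(\times\)5\({}^{\circ}\) Heliographic Stonyhurst cells and a per-cell recall is computed, with the \(\pm\)70\({}^{\circ}\) longitude boundary separating central and near-limb regions. The column names of the events table and the helper names are hypothetical illustrations, not the released analysis code.

```python
import numpy as np
import pandas as pd

def cell_recall(events, bin_deg=5):
    """Per-cell recall over bin_deg x bin_deg degree HGS cells.

    `events` is a hypothetical DataFrame with columns `hgs_lat`, `hgs_lon` (degrees),
    `label` (1 for an actual FL event) and `pred` (1 for a predicted FL event).
    """
    ev = events[events.label == 1].copy()
    ev["lat_bin"] = (np.floor(ev.hgs_lat / bin_deg) * bin_deg).astype(int)
    ev["lon_bin"] = (np.floor(ev.hgs_lon / bin_deg) * bin_deg).astype(int)
    recall = ev.groupby(["lat_bin", "lon_bin"]).apply(lambda g: (g.pred == 1).mean())
    return recall.unstack("lon_bin")   # heatmap-ready table of TP / (TP + FN) per cell

def is_near_limb(hgs_lon, boundary=70):
    """Central vs. near-limb split at |longitude| = 70 degrees, as used in the paper."""
    return abs(hgs_lon) > boundary
```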
## V Conclusion and Future Work Our study involved the evaluation of three deep learning models, namely AlexNet, VGG16, and ResNet34, for the prediction of solar flares, with a primary focus on their performance for near-limb events. Through rigorous evaluation of our models and analysis of the prediction results, we observed a notable performance advantage of the ResNet34-based model in predicting near-limb flares. This finding highlights the efficacy of employing deeper architectures with residual connections, which enhance feature extraction and facilitate the capture of subtle patterns associated with near-limb events. Moreover, our study highlighted the variability in model performance across different flare types and event locations, emphasizing the importance of tailoring models and analyzing results in context-specific manners. This shows the need for further exploration of deep learning models to effectively capture the diverse nature of flare events. The implications of our research extend to operational forecasting systems, where the precise and reliable prediction of solar flares, including near-limb events, holds significant importance. Future research directions can explore the integration of multi-modal data and the development and utilization of deep learning models that can learn from temporally evolving solar activity. **Acknowledgements:** This work is supported in part under two NSF grants (Award #2104004 and #1931555) and a NASA SWR2O2R grant (Award #80NSSC22K0272).
2309.04848
Probing the Supersymmetry-Mass Scale With F-term Hybrid Inflation
We consider F-term hybrid inflation and supersymmetry breaking in the context of a model which largely respects a global U(1) R symmetry. The Kaehler potential parameterizes the Kaehler manifold with an enhanced U(1)x(SU(1,1)/U(1)) symmetry, where the scalar curvature of the second factor is determined by the achievement of a supersymmetry-breaking de Sitter vacuum without ugly tuning. The magnitude of the emergent soft tadpole term for the inflaton can be adjusted in the range (1.2-460) TeV -- increasing with the dimensionality of the representation of the waterfall fields -- so that the inflationary observables are in agreement with the observational requirements. The mass scale of the supersymmetric partners turns out to lie in the region (0.09-253) PeV which is compatible with high-scale supersymmetry and the results of LHC on the Higgs boson mass. The mu parameter can be generated by conveniently applying the Giudice-Masiero mechanism and assures the out-of-equilibrium decay of the R saxion at a low reheat temperature Trh<~163 GeV.
G. Lazarides, C. Pallis
2023-09-09T17:30:14Z
http://arxiv.org/abs/2309.04848v3
# Probing the Supersymmetry-Mass Scale With F-term Hybrid Inflation ###### Abstract We consider F-term hybrid inflation and supersymmetry breaking in the context of a model which largely respects a global \(U(1)\)\(R\) symmetry. The Kahler potential parameterizes the Kahler manifold with an enhanced \(U(1)\times(SU(1,1)/U(1))\) symmetry, where the scalar curvature of the second factor is determined by the achievement of a supersymmetry-breaking de Sitter vacuum without ugly tuning. The magnitude of the emergent soft tadpole term for the inflaton can be adjusted in the range \((1.2-460)\) TeV - increasing with the dimensionality of the representation of the waterfall fields - so that the inflationary observables are in agreement with the observational requirements. The mass scale of the supersymmetric partners turns out to lie in the region \((0.09-253)\) PeV which is compatible with high-scale supersymmetry and the results of LHC on the Higgs boson mass. The \(\mu\) parameter can be generated by conveniently applying the Giudice-Masiero mechanism and assures the out-of-equilibrium decay of the \(R\) saxion at a low reheat temperature \(T_{\rm rh}\leq 163\) GeV. pacs: 98.80.Cq, 12.60.Jv **Published in** Phys. Rev. D **108**, no.9, 095055 (2023) ## I Introduction Among the various inflationary models - for reviews see Refs. [1; 2] -, the simplest and most well-motivated one is undoubtedly the _F-term hybrid inflation_ (FHI) model [3]. It is tied to a renormalizable superpotential uniquely determined by a global \(U(1)\)\(R\) symmetry, it does not require fine tuned parameters and transplanckian inflaton values, and it can be naturally followed by a _Grand Unified Theory_ (GUT) phase transition - see, e.g., Refs. [4; 5; 6]. In the original implementation of FHI [3], the slope of the inflationary path which is needed to drive the inflaton towards the _Supersymmetric_ (SUSY) vacuum is exclusively provided by the inclusion of _radiative corrections_ (RCs) in the tree level (classically flat) inflationary potential. This version of FHI is considered as strongly disfavored by the _Planck_ data [7] fitted to the standard power-law cosmological model with _Cold Dark Matter_ (CDM) and a cosmological constant (\(\Lambda\)CDM). A more complete treatment, though, incorporates also corrections originating from _supergravity_ (SUGRA) which depend on the adopted Kahler potential [8; 9; 10; 11] as well as soft SUSY-breaking terms [12; 13; 14; 15; 16; 17]. Mildly tuning the parameters of the relevant terms, we can achieve [18] mostly hilltop FHI fully compatible with the data [7; 19; 20] - observationally acceptable implementations of FHI can also be achieved by invoking a two-step inflationary scenario [21] or a specific generation [5; 22; 23] of the \(\mu\) term of the _Minimal Supersymmetric Standard Model_ (MSSM). Out of the aforementioned realizations of FHI we focus here on the "tadpole-assisted" one [14; 15] in which the suitable inflationary potential is predominantly generated by the cooperation of the RCs and the soft SUSY-breaking tadpole term. A crucial ingredient for this is the specification of a convenient SUSY-breaking scheme - see, e.g., Refs. [24; 25; 26; 27]. Here, we extend the formalism of FHI to encompass SUSY breaking by imposing a mildly violated \(R\) symmetry introduced in Ref. [28]. Actually, it acts as a junction mechanism of the (visible) _inflationary sector_ (IS) and the _hidden sector_ (HS). 
A first consequence of this combination is that the \(R\) charge \(2/\nu\) of the goldstino superfield - which is related to the geometry of the HS - is constrained to values with \(0<\nu<1\). A second byproduct is that SUSY breaking is achieved not only in a Minkowski vacuum, as in Ref. [28], but also in a _de Sitter_ (dS) one which allows us to control the notorious _Dark Energy_ (DE) problem by mildly tuning a single superpotential parameter to a value of order \(10^{-12}\). A third consequence is the stabilization [24; 25; 26; 27] of the sgoldstino to low values during FHI. Selecting minimal Kahler potential for the inflaton and computing the suppressed contribution of the sgoldstino to the mass squared of the inflaton, we show that the \(\eta\)-problem of FHI can be elegantly resolved. After these arrangements, the imposition of the inflationary requirements may restrict the magnitude of the naturally induced tadpole term which is a function of the inflationary scale \(M\) and the dimensionality \(\mathsf{N}_{\rm G}\) of the representation of the waterfall fields. The latter quantity depends on the GUT gauge symmetry \(\mathbb{G}\) in which FHI is embedded. We exemplify our proposal by considering three possible \(\mathbb{G}\)'s which correspond to the values \(\mathsf{N}_{\rm G}=1,2\), and \(10\). The analysis for the two latter \(\mathsf{N}_{\rm G}\) values is done for the first time. We find that the required magnitude of the tadpole term increases with \(\mathsf{N}_{\rm G}\). For \(\mathsf{N}_{\rm G}=1\) the scale of formation of the \(B-L\)_cosmic strings_ (CSs) fits well with the bound [29] induced by the observations [19] on the anisotropies of the _cosmic microwave background_ (CMB) radiation. These \(B-L\) CSs are rendered metastable, if the \(U(1)_{B-L}\) symmetry is embedded in a GUT based on a group with higher rank such as \(SO(10)\). In such a case, the CS network decays generating a stochastic background of gravitational waves that may interpret [30; 31] the recent data from NANOGrav [32] and other pulsar timing array experiments [33] - see also Ref. [34]. Finally, a solution to the \(\mu\) problem of MSSM - for an updated review see Ref. [35] - may be achieved by suitably ap plying [28] the Giudice-Masiero mechanism [36; 37]. Contrary to similar attempts [22; 23], the \(\mu\) term here plays no role during FHI. This term assures [38; 39; 40; 41; 42] the timely decay of the sgoldstino (or \(R\) saxion), which dominates the energy density of the Universe, before the onset of the _Big Bang Nucleosynthesis_ (BBN) at cosmic temperature \((2-4)\) MeV [43]. In a portion of the parameter space with \(3/4<\nu<1\) non-thermal production of gravitinos (\(\widetilde{G}\)) is prohibited and so the moduli-induced \(\widetilde{G}\) problem [44] can be easily eluded. Finally, our model sheds light to the rather pressing problem of the determination of the SUSY mass scale \(\widetilde{m}\) which remains open to date [45] due to the lack of any SUSY signal in LHC - for similar recent works see Refs. [46; 47; 48; 49; 50; 51]. In particular, our setting predicts \(\widetilde{m}\) close to the PeV scale and fits well with high-scale SUSY and the Higgs boson mass discovered at LHC [52] if we assume a relatively low \(\tan\beta\) and stop mixing [53]. We describe below how we can interconnect the inflationary and the SUSY-breaking sectors of our model in Sec. II. Then, we propose a resolution to the \(\mu\) problem of MSSM in Sec. III and study the reheating process in Sec. IV. 
We finally present our results in Sec. VI confronting our model with a number of constraints described in Sec. V. Our conclusions are discussed in Sec. VII. General formulas for the SUGRA-induced corrections to the potential of FHI are arranged in the Appendix. ## II Linking FHI with the SUSY breaking sector As mentioned above, our model consists of two sectors: the HS responsible for the F-term (spontaneous) SUSY breaking and the IS responsible for FHI. In this Section we first - in Sec. II.1 - specify the conditions under which the coexistence of both sectors can occur and then - in Sec. II.2 - we investigate the vacua of the theory. Finally, we derive the inflationary potential in Sec. II.3. ### Set-up Here we determine the particle content, the superpotential, and the Kahler potential of our model. These ingredients are presented in Secs. II.1.1, II.1.2, and II.1.3. Then in Sec. II.1.4 we present the general structure of the SUGRA scalar potential which governs the evolution of the HS and IS. #### ii.1.1 Particle Content As well-known, FHI can be implemented by introducing three superfields \(\tilde{\Phi}\), \(\Phi\), and \(S\). The two first are left-handed chiral superfields oppositely charged under a gauge group \(\mathbb{G}\) whereas the latter is the inflaton and is a \(\mathbb{G}\)-singlet left-handed chiral superfield. Singlet under \(\mathbb{G}\) is also the SUSY breaking (goldstino) superfield \(Z\). In this work we identify \(\mathbb{G}\) with three possible gauge groups with different dimensionality \(\mathsf{N}_{\mathbb{G}}\) of the representations to which \(\bar{\Phi}\) and \(\Phi\) belong. Namely, we consider \(\mathbb{G}=\mathbb{G}_{B-L}\) with \(\mathbb{G}_{B-L}=\mathbb{G}_{\rm SM}\times U(1)_{B-L}\), where \(\mathbb{G}_{\rm SM}\) is the Standard Model gauge group. In this case \(\Phi\) and \(\bar{\Phi}\) belong [14] to the \((\mathbf{1},\mathbf{1},0,-1)\) and \((\mathbf{1},\mathbf{1},0,1)\) representation of \(\mathbb{G}_{B-L}\) respectively and so \(\mathsf{N}_{\mathbb{G}}=1\). \(\mathbb{G}=\mathbb{G}_{\rm LR}\) with \(\mathbb{G}_{\rm LR}=SU(3)_{\mathbb{C}}\times SU(2)_{\rm L}\times SU(2)_{\rm R} \times U(1)_{B-L}\). In this case \(\Phi\) and \(\bar{\Phi}\) belong [4; 5] to the \((\mathbf{1},\mathbf{1},\mathbf{2},-1)\) and \((\mathbf{1},\mathbf{1},\bar{\mathbf{2}},1)\) representation of \(\mathbb{G}_{\rm LR}\) respectively and so \(\mathsf{N}_{\mathbb{G}}=2\). \(\mathbb{G}=\mathbb{G}_{5\rm X}\) with \(\mathbb{G}_{5\rm X}=SU(5)\times U(1)_{X}\), the gauge group of the flipped \(SU(5)\) model. In this case \(\Phi\) and \(\bar{\Phi}\) belong [6] to the \((\mathbf{10},1)\) and \((\overline{\mathbf{10}},-1)\) representation of \(\mathbb{G}_{5\rm X}\) respectively and so \(\mathsf{N}_{\mathbb{G}}=10\). In the cases above, we assume that \(\mathbb{G}\) is completely broken via the _vacuum expectation values_ (VEVs) of \(\Phi\) and \(\bar{\Phi}\) to \(\mathbb{G}_{\rm SM}\). No magnetic monopoles are generated during this GUT transition, in contrast to the cases where \(\mathbb{G}=SU(4)_{\mathbb{C}}\times SU(2)_{\rm L}\times SU(2)_{\rm R}\), \(SU(5)\), or \(SO(10)\). The production of magnetic monopole can be avoided, though, even in these groups if we adopt the shifted [54] or smooth [55] variants of FHI. #### ii.1.2 Superpotential The superpotential of our model has the form \[W=W_{\rm I}(S,\Phi,\bar{\Phi})+W_{\rm H}(Z)+W_{\rm GH}(Z,\bar{\Phi},\Phi), \tag{1}\] where the subscripts "I" and "H" stand for the IS and HS respectively. 
The three parts of \(W\) are specified as follows: _a._\(W_{\rm I}\) is the IS part of \(W\) written as [3] \[W_{\rm I}=\kappa S\left(\bar{\Phi}\Phi-M^{2}\right),\] (2a) where \[\kappa\] and \[M\] are free parameters which may be made positive by field redefinitions. _b._\(W_{\rm H}\) is the HS part of \[W\] which reads [28] \[W_{\rm H}=mm_{\rm P}^{2}(Z/m_{\rm P})^{\nu}.\] (2b) Here \[m_{\rm P}=2.4\times 10^{18}\ \text{GeV}\] is the reduced Planck mass, \[m\] is a positive free parameter with mass dimensions, and \[\nu\] is an exponent which may, in principle, acquire any real value if \[W_{\rm H}\] is considered as an effective superpotential valid close to the non-zero vacuum value of \[Z\]. We will assume though that the effective superpotential is such that only positive powers of \[Z\] appear. \(W_{\rm GH}\) is an unavoidable term - see below - which mixes \[Z\] with \[\bar{\Phi}\] and \[\Phi\] and has the form \[W_{\rm GH}=-\lambda m_{\rm P}(Z/m_{\rm P})^{\nu}\bar{\Phi}\Phi, \tag{2c}\] with \(\lambda\) a real coupling constant. \(W\) is fixed by imposing an \(R\) symmetry under which \(W\) and the various superfields have the following \(R\) characters \[R(W)=R(S)=2,\ \ R(Z)=2/\nu\ \text{ and }\ R(\bar{\Phi}\Phi)=0. \tag{3}\] As we will see below, we confine ourselves in the range \(3/4<\nu<1\). We assume that \(W\) is holomorphic in \(S\) and so appears with positive integer exponents \(\nu_{s}\). Mixed terms of the form \(S^{\nu_{s}}Z^{\nu_{s}}\) must obey the \(R\) symmetry and thus \[\nu_{s}+\nu_{z}/\nu=1\ \ \Rightarrow\ \nu_{z}=(1-\nu_{s})\nu, \tag{4}\] leading to negative values of \(\nu_{z}\). Therefore no such mixed terms appear in the superpotential. #### ii.1.3 Kahler Potential The Kahler potential has two contributions \[K=K_{\rm I}(S,\Phi,\bar{\Phi})+K_{\rm H}(Z), \tag{5}\] which are specified as follows: \(K_{\rm I}\) is the part of \(K\) which depends on the fields involved in FHI - cf. Eq. (2a). We adopt the simplest possible form \[K_{\rm I}=|S|^{2}+|\Phi|^{2}+|\bar{\Phi}|^{2},\] (6a) which parameterizes the \[U(1)_{S}\times U(1)_{\Phi}\times U(1)_{\bar{\Phi}}\] Kahler manifold - the indices here indicate the moduli which parameterize the corresponding manifolds. \(K_{\rm H}\) is the part of \[K\] devoted to the HS. We adopt the form introduced in Ref. [28] where \[K_{\rm H}=Nm_{\rm P}^{2}\ln\left(1+\frac{|Z|^{2}-k^{2}Z_{-}^{4}/m_{\rm P}^{2} }{Nm_{\rm P}^{2}}\right),\] (6b) with \(Z_{\pm}=Z\pm Z^{*}\). Here, \(k\) is a parameter which mildly violates \(R\) symmetry endowing \(R\) axion with phenomenologically acceptable mass. Despite the fact that there is no string-theoretical motivation for \(K_{\rm H}\), we consider it as an interesting phenomenological option since it ensures a vanishing potential energy density in the vacuum without tuning for \[N=\frac{4\nu^{2}}{3-4\nu} \tag{7}\] when \(\nu\) is confined to the following ranges \[\frac{3}{4}<\nu<\frac{3}{2}\ \ \text{for}\ \ N<0\ \ \text{and}\ \ \nu<\frac{3}{4}\ \ \text{for}\ \ N>0. \tag{8}\] As we will see below the same \(\nu-N\) relation assists us to obtain a dS vacuum of the whole field system with tunable cosmological constant. Our favored \(\nu\) range will finally be \(3/4<\nu<1\). This range is included in Eq. (8) for \(N<0\). Therefore, \(K_{\rm H}\) parameterizes the \(\left(SU(1,1)/U(1)\right)_{Z}\) hyperbolic Kahler manifold. The total \(K\) in Eq. (5) enjoys an enhanced symmetry for the \(S\) and \(Z\) fields, namely \(U(1)_{S}\times\left(SU(1,1)/U(1)\right)_{Z}\). 
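As a quick consistency check of the structure just described (a sketch for illustration, not part of the original analysis), the snippet below verifies that every term of \(W\) in Eqs. (2a)-(2c) carries \(R\) charge 2 under the assignments of Eq. (3), that a holomorphic mixed term \(S^{\nu_{s}}Z^{\nu_{z}}\) would require a negative \(\nu_{z}\) as in Eq. (4), and that Eq. (7) gives \(N=-49/8\) for \(\nu=7/8\), the value employed in the numerical examples below.

```python
from fractions import Fraction as F

nu = F(7, 8)                                      # exponent of Z in W_H, with 3/4 < nu < 1
R = {"S": F(2), "Z": F(2) / nu, "Phibar_Phi": F(0)}   # R charges of Eq. (3)

# R charge of each superpotential term (sum over the charges of its factors):
terms = {
    "W_I  = kappa*S*(Phibar*Phi - M^2)":          R["S"] + R["Phibar_Phi"],
    "W_H  = m*m_P^2*(Z/m_P)^nu":                  nu * R["Z"],
    "W_GH = -lambda*m_P*(Z/m_P)^nu*Phibar*Phi":   nu * R["Z"] + R["Phibar_Phi"],
}
for name, charge in terms.items():
    assert charge == 2, name                      # every term must carry R(W) = 2
    print(f"{name:45s} R = {charge}")

# Eq. (4): a mixed term S^ns Z^nz with R = 2 forces nz = (1 - ns)*nu < 0 for ns >= 2.
for ns in range(2, 6):
    print(f"ns = {ns}:  nz = {(1 - ns) * nu}  (negative -> term absent)")

# Eq. (7): N = 4 nu^2 / (3 - 4 nu); nu = 7/8 gives N = -49/8.
print("N =", 4 * nu**2 / (3 - 4 * nu))
```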
Thanks to this symmetry, mixing terms of the form \(S^{\tilde{\nu}_{s}}Z^{\tilde{\nu}_{s}}\) can be ignored although they may be allowed by the \(R\) symmetry for \(\tilde{\nu}_{z}=\nu\tilde{\nu_{s}}\). #### ii.1.4 SUGRA Potential Denoting the various superfields of our model as \(X^{\alpha}=S,Z,\Phi,\bar{\Phi}\) and employing the same symbol for their complex scalar components, we can find the F-term (tree level) SUGRA scalar potential \(V_{\rm F}\) from \(W\) in Eq. (1) and \(K\) in Eq. (5) by applying the standard formula [56] \[V_{\rm F}=e^{K/m_{\rm P}^{2}}\left(K^{\alpha\bar{\beta}}D_{\alpha}WD_{\bar{ \beta}}W^{*}-3|W|^{2}/m_{\rm P}^{2}\right), \tag{9}\] with \(K_{\alpha\bar{\beta}}=\partial_{X^{\alpha}}\partial_{X^{\beta}}K\), \(K^{\bar{\beta}\alpha}K_{\alpha\bar{\gamma}}=\bar{\delta}_{\bar{\gamma}}^{\bar {\beta}}\) and \[D_{\alpha}W=\partial_{X^{\alpha}}W+W\partial_{X^{\alpha}}K/m_{\rm P}^{2}. \tag{10}\] Thanks to the simple form of \(K\) in Eqs. (5), (6a), and (6b), the Kahler metric \(K_{\alpha\bar{\beta}}\) has diagonal form with only one nontrivial element \[\begin{split}& K_{ZZ^{*}}=\left(Nm_{\rm P}^{4}-k^{2}Z_{-}^{4}+m_{ \rm P}^{2}|Z|^{2}\right)^{-2}m_{\rm P}^{2}N\times\\ &\left(m_{\rm P}^{6}N+12Nk^{2}m_{\rm P}^{4}Z_{-}^{2}+4k^{4}Z_{-}^ {6}+3k^{2}m_{\rm P}^{2}Z_{-}^{2}Z_{+}^{2}\right).\end{split} \tag{11}\] The resulting \(V_{\rm F}\) can be written as \[V_{\rm F}=e^{\frac{K}{m_{\rm P}^{2}}}\Big{(}|v_{S}|^{2}+|v_{\Phi}|^{2}+|v_{ \bar{\Phi}}|^{2}+K_{ZZ^{*}}^{-1}|v_{Z}|^{2}-3|v_{W}|^{2}\Big{)}, \tag{12}\] where the individual contributions are \[v_{S} =\kappa(\bar{\Phi}\Phi-M^{2})\left(|S/m_{\rm P}|^{2}+1\right)\] \[-S^{*}Z^{\nu}/m_{\rm P}^{\nu+1}\left(mm_{\rm P}-\lambda\bar{\Phi} \Phi\right), \tag{13a}\] \[v_{\Phi} =\kappa S(M^{2}m_{\rm P}^{2}-\Phi^{*}-\bar{\Phi}(|\Phi|^{2}m_{\rm P }^{-2}+1))\] \[+Z^{\nu}m_{\rm P}^{1-\nu}\left(\lambda\bar{\Phi}(|\Phi/m_{\rm P}|^ {2}+1)-mm_{\rm P}^{-1}\Phi^{*}\right),\] (13b) \[v_{Z} =\nu(Z/m_{\rm P})^{\nu-1}\left(mm_{\rm P}-\lambda\bar{\Phi}\Phi\right)\] \[+N\left(Z^{*}m_{\rm P}^{2}-4k^{2}Z_{-}^{3}\right)\left(Nm_{\rm P}^{ 4}-k^{2}Z_{-}^{4}+|Z|^{2}m_{\rm P}^{2}\right)^{-1}\] \[\times\left((Z/m_{\rm P})^{\nu}\left(mm_{\rm P}^{2}-\lambda\bar{ \Phi}\Phi\right)+\kappa S(\bar{\Phi}\bar{\Phi}-M^{2})\right),\] (13c) \[v_{W} =\kappa Sm_{\rm P}^{-1}\left(\bar{\Phi}\Phi-M^{2}\right)+Z^{\nu} m_{\rm P}^{-\nu}(mm_{\rm P}-\lambda\bar{\Phi}\Phi). \tag{13d}\] Note that \(v_{\bar{\Phi}}\) is obtained from \(v_{\Phi}\) by interchanging \(\Phi\) with \(\bar{\Phi}\). Obviously, Eq. (7) was not imposed in the formulas above. D-term contributions to the total SUGRA scalar potential arise only from the \(\mathbb{G}\) non-singlet fields. They take the form \[V_{\rm D}=\frac{g^{2}}{2}\left(|\Phi|^{2}-|\bar{\Phi}|^{2}\right)^{2}, \tag{14}\] where \(g\) is the gauge coupling constant of \(\mathbb{G}\). During FHI and at the SUSY vacuum we confine ourselves along the D-flat direction \[|\bar{\Phi}|=|\Phi|, \tag{15}\] which ensures that \(V_{\rm D}=0\). ### SUSY and \(\mathbb{G}\) Breaking Vacuum As we can verify numerically, \(V_{\rm F}\) in Eq. (12) is minimized at the \(\mathbb{G}\)-breaking vacuum \[|\langle\Phi\rangle|=\left|\langle\bar{\Phi}\rangle\right|=M. \tag{16}\] It has also a stable valley along \(\langle\theta\rangle=0\) and \(\langle\theta_{S}/m_{\rm P}\rangle=\pi\), with these fields defined by \[Z=(z+i\theta)/\sqrt{2}\;\;{\rm and}\;\;S=\sigma\;e^{i\theta_{S}/m_{\rm P}}/ \sqrt{2}. 
\tag{17}\] As we will see below, \(\theta_{S}/m_{\rm P}=\pi\) holds during FHI and we assume that it is also valid at the vacuum. Substituting Eq. (17) in Eq. (12), we obtain the partially minimized \(V_{\rm F}\) as a function of \(z\) and \(\sigma\), i.e., \[V_{\rm F}(z,\sigma)=2^{-(\nu+1)}e^{\langle K_{\rm H}\rangle/m_{ \rm P}^{2}}\Bigg{(}(\lambda M^{2}-mm_{\rm P})^{2}(z/m_{\rm P})^{2\nu}\] \[\bigg{(}\frac{(2Nm_{\rm P}^{2}\nu+(\nu+N)z^{2})^{2}}{N^{2}z^{2}m_ {\rm P}^{2}}-6+\frac{\sigma^{2}}{m_{\rm P}^{2}}\bigg{)}+\Big{(}2^{\frac{1+\nu }{2}}\kappa M\sigma\] \[+\frac{\left(2M(\lambda(M^{2}+m_{\rm P}^{2})-mm_{\rm P})\,z^{\nu }\right)}{m_{\rm P}^{\nu+1}}\Bigg{)}^{2}\Bigg{)}. \tag{18}\] The minimization of the last term implies \[\sigma=-2^{(1-\nu)/2}\left(\lambda(M^{2}+m_{\rm P}^{2})-mm_{\rm P}\right)\;z^ {\nu}/m_{\rm P}^{(\nu+1)}, \tag{19}\] whereas imposing the condition in Eq. (7) we obtain [28] \[\frac{(2Nm_{\rm P}^{2}\nu+(\nu+N)z^{2})^{2}}{N^{2}z^{2}m_{\rm P}^{2}}-6=\frac{ (3z^{2}-8\nu m_{\rm P}^{2})^{2}}{16\nu^{2}z^{2}m_{\rm P}^{2}}. \tag{20}\] Substituting the two last relations into Eq. (18) we arrive at the result \[V_{\rm F}(z)=e^{\langle K_{\rm H}\rangle/m_{\rm P}^{2}}(\lambda M ^{2}-mm_{\rm P})^{2}z^{2\nu}\] \[\left(\frac{((\lambda(M^{2}+m_{\rm P}^{2})-mm_{\rm P})^{2}}{2^{2 \nu}\kappa^{2}m_{\rm P}^{4(1+\nu)}}z^{2\nu}+\frac{(8\nu^{2}m_{\rm P}^{2}-3z^{2 })^{2}}{2^{5+\nu}\nu^{2}z^{2}m_{\rm P}^{2(\nu+1)}}\right), \tag{21}\] which is minimized _with respect to_ (_u.r._t) \(\sigma\) too. From the last expression we can easily find that \(z\) acquires the VEV \[\langle z\rangle=2\sqrt{2/3}|\nu|m_{\rm P}, \tag{22}\] which yields the constant potential energy density \[\langle V_{\rm F}\rangle= \left(\frac{16\nu^{4}}{9}\right)^{\nu}\left(\frac{\lambda M^{2}- mm_{\rm P}}{8m_{\rm P}^{2}}\right)^{2}\omega^{N}\times \tag{23}\] \[\left(\lambda(M^{2}+m_{\rm P}^{2})-mm_{\rm P}\right)^{2},\] with \[\omega=e^{\langle K_{\rm H}\rangle/Nm_{\rm P}^{2}}\simeq 2(3-2\nu)/3, \tag{24}\] given that \(M\ll m_{\rm P}\). Tuning \(\lambda\) to a value \(\lambda\sim m/m_{\rm P}\simeq 10^{-12}\) we can obtain a post-inflationary dS vacuum which corresponds to the current DE density parameter. By virtue of Eq. (19), we also obtain \(\langle\sigma\rangle\simeq 0\). The gravitino (\(\widetilde{G}\)) acquires mass [56] \[m_{3/2}=\langle e^{\frac{\kappa_{\rm H}}{2m_{\rm P}^{2}}}W_{\rm H}\rangle \simeq 2^{\nu}3^{-\nu/2}|\nu|^{\nu}m\omega^{N/2}.\] (25a) Deriving the mass-squared matrix of the field system \[S-\Phi-\widetilde{\Phi}-Z\] at the vacuum we find the residual mass spectrum of the model. Namely, we obtain a common mass for the IS \[m_{\rm I}=e^{\frac{\kappa_{\rm H}}{2m_{\rm P}^{2}}}\sqrt{2}\left(\kappa^{2}M^ {2}+(4\nu^{2}/3)^{\nu}(1+4M^{2}/m_{\rm P}^{2})m^{2}\right)^{\frac{1}{2}}, \tag{25b}\] where the second term arises due to the coexistence of the IS with the HS - cf. Ref. [14]. We also obtain the (canonically normalized) sgoldstino (or \(R\) saxion) and the pseudo-sgoldstino (or \(R\) axion) with respective masses \[m_{z}\simeq\frac{3\omega}{2\nu}m_{3/2}\;\;{\rm and}\;\;m_{\theta}\simeq 12k \omega^{\frac{2}{2}}m_{3/2}. \tag{25c}\] Comparing the last formulas with the ones obtained in the absence of the IS [28] we infer that no mixing appears between the IS and the HS. As in the "isolated" case of Ref. [28] the role of \(k\) in Eq. (6b) remains crucial in providing \(\theta\) with a mass. 
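A minimal numerical sketch of Eqs. (22), (24), (25a), and (25c), assuming \(\nu=7/8\) and a hidden-sector scale \(m\simeq 1.15\) PeV (the column-B inputs of Table 1 below); it approximately reproduces \(\langle z\rangle\simeq 1.43\,m_{\rm P}\), \(m_{3/2}\simeq 2\) PeV, and \(m_{z}\simeq 2.9\) PeV.

```python
import math

m_P = 2.4e18            # reduced Planck mass in GeV
nu = 7 / 8              # goldstino R-charge parameter (column B of Table 1)
m = 1.15e6              # hidden-sector scale ~1.15 PeV, in GeV
N = 4 * nu**2 / (3 - 4 * nu)                                   # Eq. (7) -> -49/8

z_vev = 2 * math.sqrt(2 / 3) * abs(nu) * m_P                   # Eq. (22)
omega = 2 * (3 - 2 * nu) / 3                                   # Eq. (24), for M << m_P
m_32 = 2**nu * 3**(-nu / 2) * abs(nu)**nu * m * omega**(N / 2) # Eq. (25a)
m_z = 3 * omega / (2 * nu) * m_32                              # Eq. (25c), R saxion

print(f"<z>   = {z_vev / m_P:.2f} m_P")   # ~1.43 m_P
print(f"m_3/2 = {m_32 / 1e6:.2f} PeV")    # ~2 PeV
print(f"m_z   = {m_z / 1e6:.2f} PeV")     # ~2.9 PeV
```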
Some representative values of the masses above are arranged in Table 1 for specific \(\kappa,\nu,\) and \(k\) values and for the three \(\mathbb{G}\)'s considered in Sec. II.1.1. We employ values for \(M\) and the tadpole parameter \({\rm a}_{S}\) compatible with the inflationary requirements exposed in Sec. V - for the definition of \({\rm a}_{S}\) see Sec. II.3.4. We observe that \(m_{\rm I}\) turns out to be of order \(10^{12}\) GeV - cf. Ref. [14] - whereas \(m_{3/2},m_{z},\) and \(m_{\theta}\) lie in the PeV range. For the selected value \(\nu=7/8>3/4\) the phenomenologically desired hierarchy \(m_{z}<2m_{3/2}\) - see Sec. V - is easily achieved. In the same Table we find it convenient to accumulate the values of some inflationary parameters introduced in Secs. II.3.4 and VI and some parameters related to the \(\mu\) term of the MSSM and the reheat temperature given in Secs. III and IV. Our analytic findings related to the stabilization of the vacuum in Eqs. (16) and (22) can be further confirmed by Fig. 1, where the dimensionless quantity \(V_{\rm F}/m^{2}m_{\rm P}^{2}\) in Eq. (18) is plotted as a function of \(z\) and \(\sigma\). We employ the values of the parameters listed in column B of Table 1. We see that the dS vacuum in Eq. (22) - indicated by the black thick point - is placed at \((\langle z\rangle,\langle\sigma\rangle)=(1.43m_{\rm P},0)\) and is stable w.r.t. both directions. Figure 1: The (dimensionless) SUGRA potential \(V_{\rm F}/m^{2}m_{\rm P}^{2}\) in Eq. (18) as a function of \(z\) and \(\sigma\) for the inputs shown in column B of Table 1. The location of the dS vacuum in Eq. (22) is also depicted by a thick point. ### Inflationary Period It is well known [2; 3] that FHI takes place for sufficiently large \(|S|\) values along a F- and D- flat direction of the SUSY potential \[\bar{\Phi}=\Phi=0, \tag{26}\] where the potential in global SUSY \[V_{\rm SUSY}\left(\Phi=0\right)\equiv V_{10}=\kappa^{2}M^{4} \tag{27}\] provides a constant potential energy density with corresponding Hubble parameter \(H_{1}=\sqrt{V_{10}/3m_{\rm P}^{2}}\). In a SUGRA context, though, we first check - in Sec. II.3.1 - the conditions under which such a scheme can be achieved and then we include a number of corrections described in Secs. II.3.2 and II.3.3 below. The final form of the inflationary potential is given in Sec. II.3.4. #### ii.3.1 Hidden Sector's Stabilization The implementation of FHI is feasible in our set-up if \(Z\) is well stabilized during it. As already emphasized [27], \(V_{10}\) in Eq. (26) is expected to transport the value of \(Z\) from the value in Eq. (22) to values well below \(m_{\rm P}\). To determine these values, we construct the complete expression for \(V_{\rm F}\) in Eq. (12) along the inflationary trough in Eq. (26) and then expand the resulting expression for low \(S/m_{\rm P}\) values, assuming that the \(\theta=0\) direction is stable as in the vacuum. Under these conditions \(V_{\rm F}\) takes the form \[V_{\rm F}(z)=e^{\frac{E_{\rm H}}{\mu-\Phi}}\left(\kappa^{2}M^{4}+m^{2}\frac{z ^{2(\nu-1)}(8\nu^{2}m_{\rm P}^{2}-3z^{2})^{2}}{2^{5+\nu}\nu^{2}m_{\rm P}^{2\nu }}\right). \tag{28}\] The extremum condition obtained for \(V_{\rm F}(z)\) w.r.t. 
\(z\) yields \[m^{2}m_{\rm P}^{-2\nu}\langle z\rangle_{\rm I}^{2(\nu-2)}(64\nu^ {4}m_{\rm P}^{4}-9\langle z\rangle_{\rm I}^{4})\times\] \[\left(8(1-\nu)\nu^{2}m_{\rm P}^{2}+(3-\nu)\langle z\rangle_{\rm I }^{2}\right)=2^{(7+\nu)}\nu^{4}V_{\rm I0}, \tag{29}\] where the subscript I denotes that the relevant quantity is calculated during FHI. Given that \(\langle z\rangle_{\rm I}/m_{\rm P}\ll 1\), the equation above implies \[\langle z\rangle_{\rm I}\simeq\left(\sqrt{3}\times 2^{\nu/2-1}H_{\rm I}/m\nu \sqrt{1-\nu}\right)^{1/(\nu-2)}m_{\rm P}, \tag{30}\] which is in excellent agreement with its precise numerical value. We remark that \(\nu<1\) assures the existence and the reality of \(\langle z\rangle_{\rm I}\), which is indeed much less than \(m_{\rm P}\)since \(H_{\rm I}/m\ll 1\). To highlight further this key point of our scenario, we plot in Fig. 2 the quantity \(10^{5}(V_{\rm F}/\kappa^{2}M^{4}-1)\) with \(V_{\rm F}\) given by Eq. (12) for fixed \(\Phi=\bar{\Phi}=0\) - see Eq. (26) - and the remaining parameters listed in column B of Table 1. In the left panel we use as free coordinates \(z\) and \(\sigma\) with fixed \(\theta=0\). We see that the location of \((\langle z\rangle_{\rm I},\sigma_{\star})=(1.5\times 10^{-3}m_{\rm P},1.4637M)\), where \(\sigma_{\star}\) is the value of \(\sigma\) when the pivot scale crosses outside the horizon and is indicated by a black thick point, is independent from \(\sigma\) as expected from Eq. (30). In the right panel of this figure we use as coordinates \(z\) and \(\theta\) and fix \(\sigma=\sigma_{\star}\). We observe that \((\langle z\rangle_{\rm I},\theta)=(1.5\times 10^{-3}m_{\rm P},0)\) - indicated again by a black thick point - is well stabilized in both directions. The (canonically normalized) components of sgoldstino acquire masses squared, respectively, \[m_{\rm Iz}^{2}\simeq 6(2-\nu)H_{\rm I}^{2}\ \ \mbox{and}\ \ m_{\rm I\theta}^{2} \simeq 3H_{\rm I}^{2}-\] \[m^{2}(8\nu^{2}m_{\rm P}^{2}-3\langle z\rangle_{\rm I}^{2})\frac{4\nu(1-\nu)m _{\rm P}^{2}+(1-96k^{2}\nu)\langle z\rangle_{\rm I}^{2}}{2^{3+\nu}m_{\rm P}^{2 \nu}\langle z\rangle_{\rm I}^{2(2-\nu)}},\] (31a) whereas the mass of \(\widetilde{G}\) turns out to be \[m_{13/2}\simeq\left(\nu(1-\nu)^{1/2}m^{2/\nu}/\sqrt{3}H_{\rm I}\right)^{\nu/( 2-\nu)}. 
\tag{31b}\] It is evident from the results above that \(m_{\rm Iz}\gg H_{\rm I}\) and therefore \(\langle z\rangle_{\rm I}\) is well stabilized during FHI whereas \(m_{\rm I\theta}\simeq H_{\rm I}\) and \begin{table} \begin{tabular}{c||c c c} \hline \hline Case: & A & B & C \\ \hline \multicolumn{4}{c}{Input Parameters} \\ \hline \(\kappa=5\times 10^{-4},\nu=7/8\) (\(N=-49/8\)) and \(k=0.1\) \\ \hline \({\sf N}_{\rm G}\) & 1 & 2 & 10 \\ \(M\) (\(10^{15}\) GeV) & 1.4 & 1.9 & 3.6 \\ \(m\) (\({\rm PeV}\))1 & 0.5 & 1.15 & 6.3 \\ \(\lambda\) (\(10^{-12}\)) & 0.2 & 1.7 & 2.6 \\ \hline \multicolumn{4}{c}{HS Parameters During FHI} \\ \hline \(\langle z\rangle_{\rm I}\) (\(10^{-3}m_{\rm P}\)) & 1.1 & 1.5 & 2.5 \\ \(m_{\rm I3/2}\) (TeV) & 1.2 & 2.98 & 25 \\ \(m_{\rm Iz}\) (EeV) & 0.64 & 1.1 & 4.1 \\ \(m_{\rm I\theta}\) (EeV) & 0.15 & 0.32 & 1.2 \\ \hline \multicolumn{4}{c}{In Inflationary Parameters} \\ \hline \({\rm a}_{S}\) (TeV) & 2.63 & 6.7 & 56.3 \\ \(H_{\rm I}\) (EeV) & 0.25 & 0.4 & 1.6 \\ \hline \(\sigma_{\star}/\sqrt{2}M\) & 1.026 & 1.035 & 1.067 \\ \(N_{\rm I\star}\) & 40.5 & 40.8 & 40.6 \\ \(\Delta_{\rm c\star}\) (\%) & 2.6 & 3.5 & 6.7 \\ \(\Delta_{\rm max\star}\) (\%) & 2.9 & 3.9 & 7.3 \\ \hline \multicolumn{4}{c}{Inflationary Observables} \\ \hline \(n_{\rm s}\) & \multicolumn{4}{c}{0.967} \\ \(-\alpha_{\rm s}\) (\(10^{-4}\)) & 2.3 & 2.5 & 2.9 \\ \(r\) (\(10^{-12}\)) & 0.9 & 3.1 & 39.7 \\ \hline \multicolumn{4}{c}{Spectrum at the Vacuum} \\ \hline \(m_{\rm I}\) (\(10^{12}\) GeV) & 1.8 & 2.4 & 4.5 \\ \(m_{3/2}\) (PeV) & 0.9 & 2. & 11.2 \\ \(m_{z}\) (PeV) & 1.3 & 2.9 & 16 \\ \(m_{\theta}\) (PeV) & 0.8 & 1.8 & 10 \\ \hline \multicolumn{4}{c}{Reheat Temperature} \\ \multicolumn{4}{c}{For \(\mu=\widetilde{m}\) (\(\lambda_{\mu}=0.69\)) and \(K=K_{1}\)} \\ \hline \(T_{\rm rh}\) (GeV) & 0.07 & 0.18 & \(2.05\) \\ \hline \hline \end{tabular} \end{table} Table 1: A Case Study Overview gets slightly increased as \(k\) increases. We do not think that this fact causes any problem with isocurvature perturbations since these can be observationally dangerous only for \(m_{\rm I\theta}\ll H_{\rm I}\). As verified by our numerical results, all the masses above display no \(S\) dependence and so they do not contribute to the inclination of the inflationary potential via RCs - see Sec. II.3.3 below. #### iii.2.2 SUGRA Corrections The SUGRA potential in Eq. (9) induces a number of corrections to \(V_{\rm I0}\) originating not only from the IS but also from the HS. These corrections are displayed in the Appendix for arbitrary \(W_{\rm H}\) and \(K_{\rm H}\). If we consider the \(W_{\rm H}\) and \(K_{\rm H}\) in Eqs. (2b) and (6b) respectively, the \(v\)'s in Eq. (43) are found to be \[v_{1} =2\kappa M^{2}m_{\rm I3/2}(2-\nu-3\langle z\rangle_{1}^{2}/8\nu m _{\rm P}^{2}), \tag{32a}\] \[v_{2} =\kappa^{2}M^{4}\langle z\rangle_{1}^{2}/2m_{\rm P}^{2},\] (32b) \[v_{3} =\kappa M^{2}m_{\rm I3/2}(1-\nu-3\langle z\rangle_{1}^{2}/8\nu m _{\rm P}^{2}),\] (32c) \[v_{4} =\kappa^{2}M^{4}(1+\langle z\rangle_{1}^{2}/m_{\rm P}^{2})/2. \tag{32d}\] Since \(\langle z\rangle_{\rm I}\ll m_{\rm P}\) we do not discriminate between \(\kappa\) and its rescaled form following the formulas of the Appendix. Despite the fact that \(v_{2}\) and \(v_{4}\) receive contributions from both IS and HS, as noted in the Appendix, here the IS does not participate in \(v_{2}\) thanks to the selected canonical Kahler potential for the \(S\) field in Eq. (6a). 
This fact together with the smallness of \(\langle z\rangle_{\rm I}^{2}\) assists us to overcome the notorious \(\eta\) problem of FHI. #### iii.2.3 Radiative Corrections These corrections originate [3] from a mass splitting in the \(\Phi-\bar{\Phi}\) supermultiplets due to SUSY breaking on the inflationary valley. To compute them we work out the mass spectrum of the fluctuations of the various fields about the inflationary trough in Eq. (26). We obtain \(2\mathsf{N}_{\rm G}\) Weyl fermions and \(2\mathsf{N}_{\rm G}\) pairs of real scalars with mass squared respectively \[m_{\rm f}^{2}=\kappa^{2}S_{\lambda}^{2}\ \text{and}\ m_{\pm}^{2}=\kappa^{2}(S_{ \lambda}^{2}\pm M^{2}) \tag{33}\] with \(S_{\lambda}=|S|-\lambda\langle Z\rangle_{\rm I}^{\nu}m_{\rm P}^{1-\nu}/\kappa\). SUGRA corrections to these masses are at most of order \(M^{4}/m_{\rm P}^{2}\) and can be safely ignored. Inserting these masses into the well-known Coleman-Weinberg formula, we find the correction \[V_{\rm RC}=\frac{\kappa^{2}\mathsf{N}_{\rm G}}{32\pi^{2}}V_{\rm I0}\left(\sum _{i=\pm}m_{i}^{4}\ln\frac{m_{i}^{2}}{Q^{2}}-2m_{\rm f}^{4}\ln\frac{m_{\rm f}^{ 2}}{Q^{2}}\right), \tag{34}\] where \(Q\) is a renormalization scale. Assuming positivity of \(m_{-}^{2}\), we obtain the lowest possible value \(S_{\rm c}\) of \(S\) which assures stability of the direction in Eq. (26). This critical value is equal to \[|S_{\rm c}|=M+\lambda\langle Z\rangle_{\rm I}^{\nu}m_{\rm P}^{1-\nu}/\kappa. \tag{35}\] Needless to say, the mass spectrum and \(|S_{\rm c}|\) deviate slightly from their values in the simplest model of FHI [3] due to the mixing term in \(W\) - see Eq. (2c). #### iii.2.4 Inflationary Potential Substituting Eqs. (32a) - (32d) into \(V_{\rm F}\) in Eq. (43), including \(V_{\rm RC}\) from Eq. (34), and introducing the canonically normalized inflaton \[\sigma=\sqrt{2K_{SS^{*}}}|S|\ \ \text{with}\ \ K_{SS^{*}}=1, \tag{36}\] the inflationary potential \(V_{\rm I}\) can be cast in the form \[V_{\rm I}\simeq V_{\rm I0}\left(1+C_{\rm RC}+C_{\rm SSB}+C_{\rm SUGRA}\right), \tag{37}\] where the individual contributions are specified as follows: Figure 2: The SUGRA potential \(10^{5}(V_{\rm F}/\kappa^{2}M^{4}-1)\) in Eq. (12) along the path in Eq. (26) as a function of \(z\) and \(\sigma\) for \(\theta=0\) (left panel) or \(z\) and \(\theta\) for \(\sigma=\sigma_{*}\) (right panel). In both cases we take the parameters of column B in Table 1. The location of \((\langle z\rangle_{1},\sigma_{*})\) (left panel) or \((\langle z\rangle_{1},0)\) (right panel) is also depicted by a thick black point. \(C_{\rm RC}\) represents the RCs to \(V_{\rm I}/V_{\rm I0}\) which may be written consistently with Eq. (34) as [2] \[C_{\rm RC}=\frac{\kappa^{2}{\sf N}_{\rm G}}{128\pi^{2}}\left(8\ln\frac{\kappa^{2} M^{2}}{Q^{2}}+f_{\rm RC}(x)\right),\] (38a) with \[x=(\sigma-\sqrt{2}\lambda\langle Z\rangle_{1}^{\nu}m_{\rm P}^{1-\nu}/\kappa)/ M>\sqrt{2}\] and \[f_{\rm RC}(x) =8x^{2}\tanh^{-1}\left(2/x^{2}\right)-4(\ln 4+x^{4}\ln x)\] \[+(4+x^{4})\ln(x^{4}-4). \tag{38b}\] \(C_{\rm SSB}\) is the contribution to \(V_{\rm I}/V_{\rm I0}\) from the soft SUSY-breaking effects [12] parameterized as follows: \[C_{\rm SSB}=m_{\rm I3/2}^{2}\sigma^{2}/2V_{\rm I0}-{\rm a}_{S}\,\sigma/\sqrt{2 V_{\rm I0}}, \tag{38c}\] where the tadpole parameter reads \[{\rm a}_{S}=2^{1-\nu/2}m\frac{\langle z\rangle_{1}^{\nu}}{m_{\rm P}^{\nu}} \left(1+\frac{\langle z\rangle_{1}^{2}}{2Nm_{\rm P}^{2}}\right)\left(2-\nu- \frac{3\langle z\rangle_{1}^{2}}{8\nu m_{\rm P}^{2}}\right). 
\tag{38d}\] The minus sign results from the minimization of the factor \((S+S^{*})=\sqrt{2}\sigma\cos(\theta_{S}/m_{\rm P})\) which occurs for \(\theta_{S}/m_{\rm P}=\pi\,\,({\sf mod}\,\,2\pi)\) - the decomposition of \(S\) is shown in Eq. (17). We further assume that \(\theta_{S}\) remains constant during FHI. Otherwise, FHI may be analyzed as a two-field model of inflation in the complex plane [15]. Trajectories, though, far from the real axis require a significant amount of tuning. The first term in Eq. (38c) does not play any essential role in our set-up due to low enough \(m_{3/2}\)'s - cf. Ref. [14]. \(C_{\rm SUGRA}\) is the SUGRA correction to \(V_{\rm I}/V_{\rm I0}\), after subtracting the one in \(C_{\rm SSB}\). It reads \[C_{\rm SUGRA}=c_{2\nu}\frac{\sigma^{2}}{2m_{\rm P}^{2}}+c_{4\nu}\frac{\sigma^ {4}}{4m_{\rm P}^{4}}, \tag{38e}\] where the relevant coefficients originate from Eqs. (32b) and (32d) and read \[c_{2\nu}=\langle z\rangle_{1}^{2}/2m_{\rm P}^{2}\,\,\,\mbox{and}\,\,\,c_{4\nu }=(1+\langle z\rangle_{1}^{2}/m_{\rm P}^{2})/2. \tag{38f}\] Note that in similar models - cf. Ref. [14; 15] - without the presence of a HS, \(c_{2\nu}\) is taken identically equal to zero. Our present set-up shows that this assumption may be well motivated. ## III Generation of the \(\mu\) term of MSSM An important issue, usually related to the inflationary dynamics - see, e.g., Refs. [5; 22; 57] - is the generation of the \(\mu\) term of MSSM. Indeed, we would like to avoid the introduction by hand into the superpotential of MSSM of a term \(\mu H_{u}H_{d}\) with \(\mu\) being an energy scale much lower than the GUT scale - \(H_{u}\) and \(H_{d}\) are the Higgs superfields coupled to the up and down quarks respectively. To avoid this we assign \(R\) charges equal to \(2\) to both \(H_{u}\) and \(H_{d}\) whereas all the other fields of MSSM have zero \(R\) charges. Although we employ here the notation used in a \(\mathbb{G}_{B-L}\) model, our construction can be easily extended to the cases of the two other \(\mathbb{G}\)'s considered - see Sec. II.1.1. Indeed, \(H_{u}\) and \(H_{d}\) are included in a bidoublet superfield belonging to the representation \(({\bf 1},{\bf 2},{\bf 2},0)\) in the case of \(\mathbb{G}_{\rm LR}\)[5]. On the other hand, these superfields are included in the representations \(({\bf\bar{5}},2)\) and \(({\bf 5},-2)\) in the case of \(\mathbb{G}_{5\rm X}\)[6]. The mixing term between \(H_{u}\) and \(H_{d}\) may emerge if we incorporate (somehow) into the Kahler potential of our model the following higher order terms \[K_{\mu}=\lambda_{\mu}\frac{{Z^{*}}^{2\nu}}{m_{\rm P}^{2\nu}}H_{u}H_{d}\,\,+\, \,{\rm h.c.}, \tag{39}\] where the dimensionless constant \(\lambda_{\mu}\) is taken real for simplicity. To exemplify our approach - cf. Ref. [28] - we complement the Kahler potential in Eq. (5) with terms involving the left-handed chiral superfields of MSSM denoted by \(Y_{\alpha}\) with \(\alpha=1,...,7\), i.e., \[Y_{\alpha}=Q,L,d^{c},u^{c},e^{c},H_{d},\,\,\,\mbox{and}\,\,\,H_{u},\] where the generation indices are suppressed. 
Namely we consider the following variants of the total \(K\), \[K_{1} =K_{\rm H}+K_{\rm I}+K_{\mu}+|Y_{\alpha}|^{2}, \tag{40a}\] \[K_{2} =Nm_{\rm P}^{2}\ln\left(1+\frac{1}{N}\left(\frac{|Z|^{2}-k^{2}Z_{ -}^{4}/m_{\rm P}^{2}}{m_{\rm P}^{2}}+K_{\mu}\right)\right)\] \[+K_{\rm I}+|Y_{\alpha}|^{2},\] (40b) \[K_{3} =Nm_{\rm P}^{2}\ln\left(\frac{1+|Z|^{2}-k^{2}Z_{-}^{4}/m_{\rm P}^ {2}+|Y_{\alpha}|^{2}}{Nm_{\rm P}^{2}}\right)\] \[+K_{\rm I}+K_{\mu},\] (40c) \[K_{4} =Nm_{\rm P}^{2}\ln\left(1+\frac{1}{N}\frac{|Z|^{2}-k^{2}Z_{-}^{4} /m_{\rm P}^{2}+|Y_{\alpha}|^{2}}{Nm_{\rm P}^{2}}+\frac{K_{\mu}}{N}\right)\] \[+K_{\rm I}\,. \tag{40d}\] Expanding these \(K\)'s for low values of \(S,\Phi,\bar{\Phi},\) and \(Y_{\alpha}\), we can bring them into the form \[K \simeq K_{\rm H}(Z)+K_{\rm I}+\widetilde{K}(Z)\sum_{\alpha}|Y_{\alpha}|^{2} \tag{41}\] \[+ \lambda_{\mu}\left(c_{H}\frac{{Z^{*}}^{2\nu}}{m_{\rm P}^{2\nu}}H_{u }H_{d}+{\rm h.c.}\right),\] where \(\widetilde{K}\) is determined as follows \[\widetilde{K}=\begin{cases}1&\mbox{for}\,\,\,K=K_{1},K_{4},\\ \left(1+\frac{|Z|^{2}-k^{2}Z_{-}^{4}/m_{\rm P}^{2}}{m_{\rm P}^{2}N}\right)^{-1 }&\mbox{for}\,\,\,K=K_{2},K_{3},\end{cases} \tag{42}\] whereas \(c_{H}\) is found to be \[c_{H}=\begin{cases}1&\mbox{for}\,\,\,K=K_{1},K_{3},\\ \left(1+\frac{|Z|^{2}-k^{2}Z_{-}^{4}/m_{\rm P}^{2}}{m_{\rm P}^{2}N}\right)^{-1 }&\mbox{for}\,\,\,K=K_{2},K_{4}.\end{cases} \tag{43}\] Consistently with our hypothesis about the enhanced symmetry of \(K\) in Sec. II.1.3, we do not consider the possibility of including \(K_{\rm I}\) in the argument of the logarithm of \(K_{\rm H}\) as we have done for \(K_{\mu}\) and/or \(|Y_{\alpha}|\). Applying the relevant formulas of Refs. [28; 37], we find a non-vanishing \(\mu\) term in the superpotential of MSSM \[\mu\widehat{H}_{u}\widehat{H}_{d}, \tag{44}\] where \(\widehat{Y}_{\alpha}=\langle\widetilde{K}\rangle^{1/2}Y_{\alpha}\) and the \(\mu\) parameter reads \[\frac{|\mu|}{m_{3/2}}=\lambda_{\mu}\left(\frac{4\nu^{2}}{3}\right)^{\nu}\times \begin{cases}(5-4\nu)&\text{for}\;\;K=K_{1},\\ 3(4\nu-1)/4\nu&\text{for}\;\;K=K_{2},\\ (5-4\nu)\omega&\text{for}\;\;K=K_{3},\\ 3\omega(4\nu-1)/4\nu&\text{for}\;\;K=K_{4}.\end{cases} \tag{45}\] Moreover, in the effective low energy potential we obtain a common soft-SUSY-breaking mass parameter \(\widetilde{m}\) which is \[\widetilde{m}=m_{3/2}\times\begin{cases}1&\text{for}\;\;K=K_{1}\;\;\text{and} \;\;K_{2},\\ (3/2\nu-1)&\text{for}\;\;K=K_{3}\;\;\text{and}\;\;K_{4},\end{cases} \tag{46}\] Therefore, \(\widetilde{m}\) is a degenerate SUSY mass scale which can indicatively represent the mass level of the SUSY partners. The results in Eqs. (45) and (46) are consistent with those presented in Ref. [28], where further details of the computation are given. The magnitude of the \(\mu\)'s in Eq. (45) is demonstrated in Fig. 3, where we present the ratios \(|\mu|/\lambda_{\mu}m_{3/2}\) for \(K=K_{1}\) (solid line), \(K_{2}\) (dashed line), \(K_{3}\) (dot-dashed line), and \(K_{4}\) (dotted line) versus \(\nu\) for \(3/4<\nu<1\). By coincidence all cases converge at the value \(|\mu|/\lambda_{\mu}m_{3/2}\simeq 1.6\) for \(\nu=3/4\). For \(\lambda_{\mu}\)'s of order unity, the \(|\mu|\) values are a little enhanced w.r.t. \(m_{3/2}\) and increase for \(K=K_{2}\) and \(K_{4}\) or decrease for \(K=K_{1}\) and \(K_{3}\) as \(\nu\) increases. 
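The behavior shown in Fig. 3 can be retraced directly from Eq. (45); the short sketch below (with \(\lambda_{\mu}\) scaled out, so the ratio \(|\mu|/\lambda_{\mu}m_{3/2}\) is plotted) recovers the common value \(\simeq 1.6\) at \(\nu=3/4\) quoted in the text and shows the split between the four Kahler potentials at \(\nu=7/8\).

```python
def mu_over_m32(nu, K):
    """|mu| / (lambda_mu * m_3/2) from Eq. (45) for K = K1..K4."""
    pre = (4 * nu**2 / 3) ** nu
    omega = 2 * (3 - 2 * nu) / 3          # Eq. (24)
    factor = {
        "K1": 5 - 4 * nu,
        "K2": 3 * (4 * nu - 1) / (4 * nu),
        "K3": (5 - 4 * nu) * omega,
        "K4": 3 * omega * (4 * nu - 1) / (4 * nu),
    }[K]
    return pre * factor

for nu in (0.75, 7 / 8):
    ratios = {K: round(mu_over_m32(nu, K), 2) for K in ("K1", "K2", "K3", "K4")}
    print(nu, ratios)
# nu = 0.75 gives ~1.61 for every K (the common value ~1.6 noted in the text);
# nu = 7/8 gives ~1.53 (K1), ~2.18 (K2), ~1.27 (K3), ~1.82 (K4).
```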
## IV Reheating stage Soon after FHI the Hubble rate \(H\) becomes of the order of their masses and the IS and \(z\) enter into an oscillatory phase about their minima and eventually decay via their coupling to lighter degrees of freedom. Note that \(\theta\) remains well stabilized at zero during and after FHI and so it does not participate in the phase of dumped oscillations. Since \(\langle z\rangle\sim m_{\rm P}\) - see Eq. (22) -, the initial energy density of its oscillations is \(\rho_{z1}\sim m_{z}^{2}\langle z\rangle^{2}\). It is comparable with the energy density of the Universe at the onset of these oscillations \(\rho_{\rm t}=3m_{\rm P}^{2}H^{2}\simeq 3m_{\rm P}^{2}m_{z}^{2}\) and so we expect that \(z\) will dominate the energy density of the Universe until completing its decay through its weak gravitational interactions. Actually, this is a representative case of the infamous cosmic moduli problem [38; 39] where reheating is induced by long-lived massive particles with mass around the weak scale. The reheating temperature is determined by [58] \[T_{\rm rh}=\left(72/5\pi^{2}g_{\rm rh*}\right)^{1/4}\Gamma_{\delta z}^{1/2}m_{ \rm P}^{1/2}, \tag{47}\] where \(g_{\rm rh*}\simeq 10.75-100\) counts the effective number of the relativistic degrees of freedom at \(T_{\rm rh}\). Moreover, the total decay width \(\Gamma_{\delta z}\) of the (canonically normalized) sgoldstino \[\widehat{\delta z}=\langle K_{ZZ^{*}}^{1/2}\rangle\delta z\;\;\text{with}\; \;\delta z=z-\langle z\rangle\;\;\text{and}\;\;\langle K_{ZZ^{*}}\rangle= \langle\omega\rangle^{-2} \tag{48}\] predominantly includes the contributions from its decay into pseudo-sgoldstinos and Higgs bosons via the kinetic terms \(K_{XX^{*}}\partial_{\mu}X\partial^{\mu}X^{*}\) with \(X=Z,H_{u}\) and \(H_{d}\)[40; 41; 42; 39] of the Lagrangian. In particular, we have \[\Gamma_{\delta z}\simeq\Gamma_{\theta}+\Gamma_{\tilde{h}}, \tag{49}\] where the individual decay widths are given by \[\Gamma_{\theta}\simeq\frac{\lambda_{\theta}^{2}m_{z}^{3}}{32\pi m_{\rm P}^{2} }\sqrt{1-4m_{\theta}^{2}/m_{3/2}^{2}}\] (50a) with \[\lambda_{\theta}=-\langle z\rangle/Nm_{\rm P}=(4\nu-3)/\sqrt{6}\nu\], and \[\Gamma_{\tilde{h}}=\frac{3^{2\nu+1}}{2^{4\nu+1}}\lambda_{\mu}^{2}\frac{\omega ^{2}}{4\pi}\frac{m_{z}^{3}}{m_{\rm P}^{2}}\nu^{-4\nu}\,. \tag{50b}\] Other possible decay channels into gauge bosons through anomalies and three-body MSSM (s)particles are subdominant. On the other hand, we kinematically block the decay of \(\widehat{\delta z}\) into \(\widetilde{G}\)'s [44; 39] in order to protect our setting from complications with BBN due to possible late decay of the produced \(\widetilde{G}\) and problems with the abundance of the subsequently produced lightest SUSY particles. In view of Eqs. (25c) and (25a), this aim can be elegantly achieved if we set \(\nu>3/4\). Taking \(\kappa\) and \(m_{z}\) values allowed by the inflationary part of our model - see Sec. VI - and selecting some specific \(K\) from Eqs. (40a) - (40d), we evaluate \(T_{\rm rh}\) as a function of \(\kappa\) and determine the regions allowed by the BBN constraints in Eqs. (59a) and (59b) - see Sec. V below. The results of this computation are displayed in Fig. 4, where we design allowed contours in the \(\kappa-T_{\rm rh}\) plane for the various \(\mathsf{N}_{\rm G}\)'s and \(\nu=7/8\). This is an intermediate value in the selected margin \((3/4-1)\). 
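For orientation, a rough estimate of the reheat temperature can be obtained from Eqs. (47) and (50b) alone, assuming the width is dominated by the Higgs channel (the \(\theta\theta\) channel is kinematically blocked for the spectra of Table 1), \(g_{\rm rh*}\simeq 100\), \(\lambda_{\mu}\simeq 0.7\) (close to the value quoted in Table 1), and the column-C value \(m_{z}\simeq 16\) PeV; the result lands at the GeV scale, of the same order as the \(T_{\rm rh}\) entries of Table 1 and the strips of Fig. 4.

```python
import math

m_P = 2.4e18                  # reduced Planck mass in GeV
nu = 7 / 8
omega = 2 * (3 - 2 * nu) / 3
g_star = 100.0                # relativistic d.o.f. at reheating (assumed)
lam_mu = 0.7                  # Giudice-Masiero coupling, order one (assumed)
m_z = 16e6                    # sgoldstino mass ~16 PeV (column C of Table 1), in GeV

# Eq. (50b): width of the sgoldstino decay into Higgses.
Gamma_h = (3**(2 * nu + 1) / 2**(4 * nu + 1)) * lam_mu**2 * omega**2 / (4 * math.pi) \
          * m_z**3 / m_P**2 * nu**(-4 * nu)

# Eq. (47): reheat temperature from the total width (here Gamma ~ Gamma_h).
T_rh = (72 / (5 * math.pi**2 * g_star))**0.25 * math.sqrt(Gamma_h * m_P)

print(f"Gamma_h ~ {Gamma_h:.1e} GeV,  T_rh ~ {T_rh:.1f} GeV")   # a few GeV
```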
The boundary curves of the allowed regions correspond to \(\mu=\widetilde{m}\) or \(\lambda_{\mu}=0.65\) (dot-dashed line) and Figure 3: The ratios \(|\mu|/\lambda_{\mu}m_{3/2}\) for \(K=K_{1},K_{2},K_{3}\), and \(K_{4}\) (solid, dashed, dot-dashed, and dotted line respectively) versus \(\nu\) in the range \(0.75-1\). \(\mu=3\widetilde{m}\) or \(\lambda_{\mu}=1.96\) (dashed line). The \(|\mu|/\widetilde{m}-\lambda_{\mu}\) correspondence is determined via Eq. (45) for a selected \(K\). Here we set \(K=K_{1}\). Qualitatively similar results are obtained for an alternative \(K\) choice. We see that there is an ample parameter space consistent with the BBN bounds depicted by two horizontal lines. Since the satisfaction of the inflationary requirements leads to an increase of the scale \(m\) with \(\mathsf{N}_{\rm G}\) and \(m\) heavily influences \(m_{z}\) and consequently \(T_{\rm rh}\) - see Eq. (47) - this temperature increases with \(\mathsf{N}_{\rm G}\). The maximal values of \(T_{\rm rh}\) for the selected \(\nu\) are obtained for \(\mu=3\widetilde{m}\) and are estimated to be \[T_{\rm rh}^{\rm max}\simeq 14\;\text{GeV},\;33\;\text{GeV},\;\text{and}\;\;49\; \text{GeV} \tag{51}\] for \(\mathsf{N}_{\rm G}=1,\;2,\) and \(10\) respectively. Obviously, reducing \(\mu\) below \(\widetilde{m}\), the parameters \(\lambda_{\mu}\), \(\Gamma_{\delta z}\), and so \(T_{\rm rh}\) decrease too and the slice cut by the BBN bound increases. Therefore, our setting fits better with high-scale SUSY [53] and not with split [53] or natural [39] SUSY which assume \(\mu\ll\widetilde{m}\). ## V Observational requirements Our set-up must satisfy a number of observational requirements specified below. The number of e-foldings that the pivot scale \(k_{\star}=0.05/\text{Mpc}\) undergoes during FHI must be adequately large for the resolution of the horizon and flatness problems of standard Big Bang cosmology. Assuming that FHI is followed, in turn, by a decaying-particle, radiation and matter dominated era, we can derive the relevant condition [7; 18]: \[N_{1\star}=\int_{\sigma_{\rm f}}^{\sigma_{\star}}\frac{d\sigma}{m_{\rm P}^{2} }\,\frac{V_{\rm I}}{V_{\rm I}^{\prime}}\simeq 19.4+\frac{2}{3}\ln\frac{V_{ \rm I0}^{1/4}}{1\;\text{GeV}}+\frac{1}{3}\ln\frac{T_{\rm rh}}{1\;\text{GeV}}, \tag{52}\] where the prime denotes derivation w.r.t. \(\sigma\), \(\sigma_{\star}\) is the value of \(\sigma\) when \(k_{\star}\) crosses outside the inflationary horizon, and \(\sigma_{\rm f}\) is the value of \(\sigma\) at the end of FHI. The latter coincides with either the critical point \(\sigma_{\rm c}=\sqrt{2}|S_{\rm c}|\) - see Eq. (33) -, or the value of \(\sigma\) for which one of the slow-roll parameters [1] \[\epsilon=m_{\rm P}^{2}\,\left(V_{\rm I}^{\prime}/\sqrt{2}V_{\rm I}\right)^{2} \;\;\text{or}\;\;\eta=m_{\rm P}^{2}\;V_{\rm I}^{\prime\prime}/V_{\rm I} \tag{53}\] exceeds unity in absolute value. For \(\lambda\sim 10^{-12}\) as required by the cosmic coincidence problem - see below - we obtain \(\langle\sigma\rangle\simeq 0\) which does not disturb the inflationary dynamics since \(\langle\sigma\rangle\ll\sigma_{\rm c}\). The amplitude \(A_{\rm s}\) of the power spectrum of the curvature perturbation generated by \(\sigma\) during FHI and calculated at \(k_{\star}\) as a function of \(\sigma_{\star}\) must be consistent with the data [19], i.e., \[\sqrt{A_{\rm s}}=\frac{1}{2\sqrt{3}\,\pi m_{\rm P}^{3}}\,\,\frac{V_{\rm I}^{ 3/2}(\sigma_{\star})}{|V_{\rm I}^{\prime}(\sigma_{\star})|}\,\simeq\;4.588\times 1 0^{-5}. 
\tag{54}\] The observed curvature perturbation is generated wholly by \(\sigma\) since the other scalars are adequately massive during FHI - see Sec. II.3.1. The scalar spectral index \(n_{\rm s}\), its running \(\alpha_{\rm s}\), and the scalar-to-tensor ratio \(r\) must be in agreement with the fitting of the _Planck_ TT, TE, EE+lowE+lensing, BICEP/Keck Array (BK18), and BAO data [7; 20] with the \(\Lambda\)CDM\(+r\) model which approximately requires that, at 95\(\%\)_confidence level_ (c.l.), \[n_{\rm s}=0.967\pm 0.0074\;\;\text{and}\;\;r\leq 0.032, \tag{55}\] with \(|\alpha_{\rm s}|\ll 0.01\). These observables are calculated employing the standard formulas \[n_{\rm s}=1-6\epsilon_{\star}\;+\;2\eta_{\star}, \tag{56a}\] \[\alpha_{\rm s}=2\left(4\eta_{\star}^{2}-(n_{\rm s}-1)^{2}\right)/3-2 \xi_{\star},\;\text{and}\;\;r=16\epsilon_{\star}, \tag{56b}\] where \(\xi\simeq m_{\rm P}^{4}\;V_{\rm I}^{\prime}V_{\rm I}^{\prime\prime\prime}/V_{ \rm I}^{2}\) and all the variables with the subscript \(\star\) are evaluated at \(\sigma=\sigma_{\star}\). The dimensionless tension \(G\mu_{\rm cs}\) of the \(B-L\) CSs produced at the end of FHI in the case \(\mathbb{G}=\mathbb{G}_{B-L}\) is [59] \[G\mu_{\rm cs}\simeq\frac{1}{2}\left(\frac{M}{m_{\rm P}}\right)^{2}\epsilon_{ \rm cs}(r_{\rm cs})\;\;\text{with}\;\;\epsilon_{\rm cs}(r_{\rm cs})=\frac{2.4 }{\ln(2/r_{\rm cs})}. \tag{57}\] Here \(G=1/8\pi m_{\rm P}^{2}\) is the Newton gravitational constant and \(r_{\rm cs}=\kappa^{2}/2g^{2}\leq 10^{-2}\) with \(g\simeq 0.7\) being the gauge coupling constant at a scale close to \(M\). \(G\mu_{\rm cs}\) is restricted by the level of the CS contribution to the observed anisotropies of CMB radiation reported by _Planck_[29] as follows: \[G\mu_{\rm cs}\lesssim 2.4\times 10^{-7}\;\;\text{at 95$\%$ c.l.}\] (58a) On the other hand, the primordial CS loops and segments connecting monopole pairs decay by emitting stochastic gravitational radiation which is measured by the pulsar timing array experiments [32; 33]. If the CS network is stable, the recent observations require [31] \[G\mu_{\rm cs}\lesssim 2\times 10^{-10}\;\;\text{at 95$\%$ c.l.} \tag{58b}\] Figure 4: Allowed strips in the \(\kappa-T_{\rm rh}\) plane compatible with the inflationary requirements in Sec. V for \(n_{\rm s}=0.967\), and various \(\mathsf{N}_{\rm G}\) values indicated in the graph. We take \(K=K_{1}\), \(\nu=7/8\), and \(\mu=\widetilde{m}\) (dot-dashed lines) or \(\mu=3\widetilde{m}\) (dashed lines). The BBN lower bounds on \(T_{\rm rh}\) for hadronic branching ratios \(\mathsf{B}_{\rm h}=1\) and \(0.001\) are also depicted by two thin lines. However, if the CSs are metastable, due to the embedding of \(\mathbb{G}_{B-L}\) into a larger group \(\mathbb{G}_{\rm GUT}\) whose breaking leads to monopoles which can break the CSs, the interpretation [31] of the recent observations [32; 33] dictates \[10^{-8}\lesssim G\mu_{\rm cs}\lesssim 2\times 10^{-7}\;\;{\rm for}\;\;8.2 \gtrsim\sqrt{r_{\rm ms}}\gtrsim 7.9 \tag{58c}\] at \(2\sigma\) where the upper bound originates from Ref. [34] and is valid for a standard cosmological evolution and CSs produced after inflation. Here \(r_{\rm ms}\) is the ratio of the monopole mass squared to \(\mu_{\rm cs}\). Since we do not specify further this possibility in our work, the last restriction does not impact on our parameters. 
Consistency between theoretical and observational values of light element abundances predicted by BBN imposes a lower bound on \(T_{\rm rh}\), which depends on the mass of the decaying particle \(z\) and the hadronic branching ratio \(\mathsf{B}_{\rm h}\). Namely, for large \(m_{z}\sim 10^{5}\) GeV, the most up-to-date analysis of Ref. [43] entails \[T_{\rm rh} \geq 4.1\,{\rm MeV}\;\;{\rm for}\;\;\mathsf{B}_{\rm h}=1 \tag{59a}\] \[{\rm and}\;\;T_{\rm rh} \geq 2.1\,{\rm MeV}\;\;{\rm for}\;\;\mathsf{B}_{\rm h}=10^{-3}. \tag{59b}\] The BBN bound is mildly softened for larger \(m_{z}\) values. Moreover, the possible production of \(\widetilde{G}\) from the \(z\) decay is mostly problematic [39] since it may lead to overproduction of the LSP (i.e., the lightest SUSY particle), whose non-thermally produced abundance from the \(\widetilde{G}\) decay can drastically overshadow its thermally-produced one. As a consequence, the LSP abundance can easily violate the observational upper bound [19] from CDM considerations. This is the moduli-induced [44] LSP overproduction problem via the \(\widetilde{G}\) decay [39]. To avoid this complication, we kinematically forbid the decay of \(z\) into \(\widetilde{G}\) selecting \(\nu>3/4\) which ensures that \(m_{z}<2m_{3/2}\) - see Eq. (25c). We identify \(\langle V_{\rm F}\rangle\) in Eq. (23) with the DE energy density, i.e., \[\langle V_{\rm F}\rangle=\Omega_{\Lambda}\rho_{\rm c0}=7.2\times 10^{-121}m_{ \rm F}^{4}, \tag{60}\] where \(\Omega_{\Lambda}=0.6889\) and \(\rho_{\rm c0}=2.4\times 10^{-120}h^{2}m_{\rm P}^{4}\) with \(h=0.6732\)[19] are the density parameter of DE and the current critical energy density of the Universe respectively. By virtue of Eq. (23), we see that Eq. (60) can be satisfied for \(\lambda\sim m/m_{\rm P}\). Explicit values are given for the cases in Table 1. Scenarios with large \(\widetilde{m}\), although not directly accessible at the LHC, can be probed via the measured value of the Higgs boson mass. Within high-scale SUSY, updated analysis requires [52; 53] \[3\times 10^{3}\lesssim\widetilde{m}/\text{GeV}\lesssim 3\times 10^{11}, \tag{61}\] for degenerate sparticle spectrum, \(\mu\) and \(\tan\beta\) in the ranges \(\widetilde{m}/3\leq\mu\leq 3\widetilde{m}\) and \(1\leq\tan\beta\leq 50\), and varying the stop mixing. ## VI Results As deduced from Secs. II.1 - II.1.3 and III, our model depends on the parameters \[\mathsf{N}_{\rm G},\;\kappa,\;M,\;m,\;\lambda,\;\nu,\;k,\;\;\text{and}\;\; \lambda_{\mu}\] (recall that \(N\) is related to \(\nu\) via Eq. (7)). Let us initially clarify that \(\lambda\) can be fixed at a rather low value as explained below Eq. (60) and does not influence the rest of our results. Moreover, \(k\) affects \(m_{\theta}\) and \(m_{\rm I\theta}\) via Eqs. (25c) and (31a) and helps us to avoid massless modes. We take \(k=0.1\) throughout our investigation. As shown in Ref. [14], the confrontation of FHI with data for any fixed \(\mathsf{N}_{\rm G}\) requires a specific adjustment between \(\kappa\) or \(M\) and the \(\mathrm{a}_{S}\) which is given in Eq. (38d) as a function of \(m\), \(\nu\), \(\kappa\), and \(M\) - see Eq. (30). Obviously a specific \(\mathrm{a}_{S}\) value can be obtained by several choices of the initial parameters \(\nu\) and \(m\). These parameters influence also the requirement in Eq. (52) via \(T_{\rm rh}\), which is given in Eq. (47). However, to avoid redundant solutions we first explore our results for the IS in terms of the variables \(\kappa,M\), and \(\mathrm{a}_{S}\) in Sec. 
VI.1 taking a representative \(T_{\rm rh}\) value, e.g., \(T_{\rm rh}\simeq 1\) GeV. Variation of \(T_{\rm rh}\) over one or two orders of magnitude does not affect our findings in any essential way. Therefore, we do not impose in Sec. VI.1 the constraints from the BBN in Eqs. (59a) and (59b). In Sec. VI.2, we then interconnect these results with the HS parameters \(\nu\) and \(m\). ### Inflation Analysis Enforcing the constraints in Eqs. (52) and (54) we can find \(M\) and \(\sigma_{\star}\), for any given \(\mathsf{N}_{\rm G}\), as functions of our free parameters \(\kappa\) and \(\mathrm{a}_{S}\). Let us clarify here that for \(\mathsf{N}_{\rm G}=1\) the parameter space is identical with the one explored in Ref. [14], where the HS is not specified. As explained there - see also Ref. [15] - observationally acceptable values of \(n_{\rm s}\) can be achieved by implementing hilltop FHI. This type of FHI requires a non-monotonic \(V_{\rm I}\) with \(\sigma\) rolling from its value \(\sigma_{\rm max}\) at which the maximum of \(V_{\rm I}\) lies down to smaller values. As for any model of hilltop inflation, \(V_{\rm I}^{\prime}\) and therefore \(\epsilon\) in Eq. (53) and \(r\) in Eq. (56) decrease sharply as \(N_{\rm 1\star}\) increases - see Eq. (52) -, whereas \(V_{\rm I}^{\prime\prime}\) (or \(\eta\)) becomes adequately negative, thereby lowering \(n_{\rm s}\) within its range in Eq. (55). These qualitative features are verified by the approximate computation of the quantities in Eq. (53) for \(\sigma<\sigma_{\rm max}\) which are found to be \[\epsilon\simeq m_{\rm P}^{2}\left(C_{\rm RC}^{\prime}+C_{\rm SSB}^{\prime} \right)^{2}/2\;\text{and}\;\;\eta\simeq m_{\rm P}^{2}C_{\rm RC}^{\prime\prime}, \tag{62}\] where the derivatives of the various contributions read \[C_{\rm SSB}^{\prime} \simeq-\mathrm{a}_{S}/\sqrt{2V_{\rm I0}}, \tag{63a}\] \[C_{\rm RC}^{\prime} \simeq\frac{\mathsf{N}_{\rm G}\kappa^{2}x}{32M\pi^{2}}\left(4 \tanh^{-1}\left(2/x^{2}\right)+x^{2}\ln(1-4/x^{4})\right),\] (63b) \[C_{\rm RC}^{\prime\prime} \simeq\frac{\mathsf{N}_{\rm G}\kappa^{2}}{32M^{2}\pi^{2}}\left(4 \tanh^{-1}\left(2/x^{2}\right)+3x^{2}\ln(1-4/x^{4})\right). \tag{63c}\] The required behavior of \(V_{\rm I}\) in Eq. (37) can be attained, for given \({\sf N}_{\rm G}\), thanks to the similar magnitudes and the opposite signs of the terms \(C^{\prime}_{\rm RC}\) and \(C^{\prime}_{\rm SSB}\) in Eqs. (63a) and (63b) which we can obtain for carefully selecting \(\kappa\) and \({\rm a}_{S}\). Apparently, we have \(C^{\prime}_{\rm SSB}<0\) and \(C^{\prime}_{\rm RC}>0\) for \(\sigma_{\star}<\sigma_{\rm max}\) since \(\left|4\tanh^{-1}\left(2/x^{2}\right)\right|>\left|x^{2}\ln(1-4/x^{4})\right|\). On the contrary, \(C^{\prime\prime}_{\rm RC}<0\), since the negative contribution \(3x^{2}\ln(1-4/x^{4})\) dominates over the first positive one, and so we obtain \(\eta<0\) giving rise to acceptably low \(n_{\rm s}\) values. We can roughly determine \(\sigma_{\rm max}\) by expanding \(C^{\prime}_{\rm RC}\) for large \(\sigma\) and equating the result with \(C^{\prime}_{\rm SSB}\). We obtain \[\frac{{\sf N}_{\rm G}\kappa^{2}}{8\pi^{2}\sigma_{\rm max}}=\frac{{\rm a}_{S}} {\sqrt{2}\kappa M^{2}}\;\Rightarrow\;\sigma_{\rm max}\simeq\frac{\kappa^{3} M^{2}{\sf N}_{\rm G}}{4\sqrt{2}\pi^{2}{\rm a}_{S}}. 
\tag{64}\] Needless to say, \(V_{\rm I}\) turns out to be bounded from below for large \(\sigma\)'s since in this regime \(C_{\rm SUGRA}\) starts dominating over \(C_{\rm RC}\), thereby generating a (\({\sf N}_{\rm G}\)-independent) minimum at about \[\sigma_{\rm min}\simeq\left(\frac{{\rm a}_{S}m_{\rm P}^{4}}{\sqrt{2}c_{4\nu}\kappa M^{2}}\right)^{1/3}. \tag{65}\] For \(\sigma>\sigma_{\rm min}\), \(V_{\rm I}\) becomes a monotonically increasing function of \(\sigma\) and so the boundedness of \(V_{\rm I}\) is assured. From our numerical computation we observe that, for constant \({\sf N}_{\rm G}\), \(\kappa\), and \({\rm a}_{S}\), the lower the value for \(n_{\rm s}\) we wish to attain, the closer we must set \(\sigma_{\star}\) to \(\sigma_{\rm max}\). Given that \(\sigma_{\rm max}\) turns out to be comparable to \(\sigma_{\rm c}\) and the hierarchy \(\sigma_{\rm c}<\sigma_{\star}<\sigma_{\rm max}\) has to hold, we see that we need two types of mild tunings in order to obtain successful FHI. To quantify the amount of these tunings, we define the quantities \[\Delta_{{\rm c}\star}=\frac{\sigma_{\star}-\sigma_{\rm c}}{\sigma_{\rm c}}\;\;{\rm and}\;\;\Delta_{{\rm max}\star}=\frac{\sigma_{\rm max}-\sigma_{\star}}{\sigma_{\rm max}}\,. \tag{66}\] The naturalness of the hilltop FHI increases with \(\Delta_{{\rm c}\star}\) and \(\Delta_{{\rm max}\star}\). To get an impression of the amount of these tunings and their dependence on the parameters of the model, we display in Table 1 the resulting \(\Delta_{{\rm c}\star}\) and \(\Delta_{{\rm max}\star}\) together with \(M\), \({\rm a}_{S}\), \(\alpha_{\rm s}\), and \(r\) for \(\kappa=0.0005\) and \(n_{\rm s}\) fixed to its central value in Eq. (55). In all cases, we obtain \(N_{{\sf I}\star}\simeq 40.5\) from Eq. (52). We notice that \(\Delta_{{\rm max}\star}>\Delta_{{\rm c}\star}\) and that their values, increasing with \({\sf N}_{\rm G}\) (and \({\rm a}_{S}\)), may reach up to \(10\%\). Recall that in Ref. [14] it is shown that \(\Delta_{{\rm c}\star}\) and \(\Delta_{{\rm max}\star}\) increase with \(\kappa\) (and \(M\)). From the observables listed in Table 1 we also infer that \(|\alpha_{\rm s}|\) turns out to be of order \(10^{-4}\), whereas \(r\) is extremely tiny, of order \(10^{-11}\), and therefore far outside the reach of the forthcoming experiments devoted to detecting primordial gravity waves. For the preferred \(n_{\rm s}\) values, we observe that \(r\) and \(|\alpha_{\rm s}|\) increase with \({\rm a}_{S}\). The structure of \(V_{\rm I}\) described above is visualized in Fig. 5, where we display a typical variation of \(V_{\rm I}\) as a function of \(\sigma/M\) for the values of the parameters shown in column B of Table 1. The maximum of \(V_{\rm I}\) is located at \(\sigma_{\rm max}/M=1.52\left\{1.38\right\}\), whereas its minimum lies at \(\sigma_{\rm min}/M=29.1\left\{29.5\right\}\) - the values obtained via the approximate Eqs. (64) and (65) are indicated in curly brackets. The values of \(\sigma_{\star}/M\simeq 1.4637\) and \(\sigma_{\rm f}/M\simeq 1.41421\) are also depicted together with \(\sigma_{\rm max}/M\) in the subplot of this figure. We remark that the key \(\sigma\) values for the realization of FHI are squeezed very close to one another and so their accurate determination is essential for obtaining reliable predictions from Eqs. (56a) and (56b). Moreover, \(N_{{\sf I}\star}\) in Eq. (52) can only be found numerically taking all the possible contributions to \(V^{\prime}_{\rm I}\) from Eqs.
(63a) and (63b) and thus \(\sigma_{\star}\) cannot be expressed analytically in terms of \(N_{{\sf I}\star}\). For these reasons, the results presented in the following are exclusively based on our numerical analysis. We first display in Fig. 6 the contours which are allowed by Eqs. (52) and (54) in the \(\kappa-{\rm a}_{S}\) plane taking \(n_{\rm s}=0.967\) and \({\sf N}_{\rm G}=1\) (dot-dashed line), \({\sf N}_{\rm G}=2\) (solid line), and \({\sf N}_{\rm G}=10\) (dashed line). The various lines terminate at \(\kappa\) values close to \(10^{-3}\) beyond which no observationally acceptable inflationary solutions are possible. We do not depict the very narrow strip obtained for each \({\sf N}_{\rm G}\) by varying \(n_{\rm s}\) in its allowed range in Eq. (55), since the obtained boundaries are almost indistinguishable. From the plotted curves we notice that the required \({\rm a}_{S}\)'s increase with \({\sf N}_{\rm G}\). Working in the same direction, we delineate in Fig. 7 the regions in the \(M-{\rm a}_{S}\) plane allowed by Eqs. (52), (54), and (55) for the considered \(\mathbb{G}\)'s. In particular, we use \({\rm N}_{\rm G}=1,2,\) and \(10\) in subfigures (a), (b), and (c) respectively. The boundaries of the allowed areas in Fig. 7 are determined by the dashed [dot-dashed] lines corresponding to the upper [lower] bound on \(n_{\rm s}\) in Eq. (55). We also display by solid lines the allowed contours for \(n_{\rm s}=0.967\). We observe that the maximal allowed \(M\)'s increase with \({\rm N}_{\rm G}\). The maximal \(r\)'s are encountered in the upper right end of the dashed lines, which correspond to \(n_{\rm s}=0.974\), with the maximal value being \(r=6.2\times 10^{-10}\) for \({\rm N}_{\rm G}=10\). On the other hand, the maximal \(|\alpha_{\rm s}|\)'s are achieved along the dot-dashed lines and the minimal value of \(\alpha_{\rm s}\) is \(-3.2\times 10^{-4}\) for \({\rm N}_{\rm G}=10\) too. Summarizing our findings from Fig. 7 for the central \(n_{\rm s}\) value in Eq. (55) and \({\rm N}_{\rm G}=1,2,\) and \(10\) respectively, we end up with the following ranges: \[0.07\lesssim M/10^{15}\ {\rm GeV}\lesssim 2.56\ \ {\rm and}\ \ 0.1\lesssim{\rm a}_{S}/{\rm TeV}\lesssim 100, \tag{67a}\] \[0.82\lesssim M/10^{15}\ {\rm GeV}\lesssim 3.7\ \ {\rm and}\ \ 0.09\lesssim{\rm a}_{S}/{\rm TeV}\lesssim 234, \tag{67b}\] \[1.22\lesssim M/10^{15}\ {\rm GeV}\lesssim 4.77\ \ {\rm and}\ \ 0.2\lesssim{\rm a}_{S}/{\rm TeV}\lesssim 460. \tag{67c}\] Within these margins, \(\Delta_{\rm c*}\) ranges between \(0.5\%\) and \(20\%\) and \(\Delta_{\rm max*}\) between \(0.4\%\) and \(12\%\). The lower bounds of these inequalities are expected to be displaced to slightly larger values due to the post-inflationary requirements in Eqs. (59a) and (59b) which are not considered here for the sake of generality. Recall that precise incorporation of these constraints requires the adoption of a specific \(K\) from Eqs. (40a) - (40d) and the corresponding \(\mu/\widetilde{m}\) relation from Eq. (45). In the case \(\mathbb{G}=\mathbb{G}_{B-L}\), CSs may be produced after FHI with \(G\mu_{\rm cs}=(6.5-89)\times 10^{-9}\) for the parameters in Eq. (67a). Therefore, the corresponding parameter space is totally allowed by Eq. (58a) but completely excluded by Eq. (58b), if the CSs are stable. If these CSs are metastable, the explanation [30] of the recent data [32; 33] on stochastic gravity waves is possible for \(M\gtrsim 9\times 10^{14}\) GeV in Eq. (67a) where Eq. (58c) is fulfilled.
No similar restrictions exist if \(\mathbb{G}=\mathbb{G}_{\rm LR}\) or \(\mathbb{G}_{5\times}\), which do not lead to the production of any cosmic defect. On the other hand, the unification of gauge coupling constants within MSSM close to \(M_{\rm GUT}=2.86\times 10^{16}\) GeV remains intact if \(\mathbb{G}=\mathbb{G}_{B-L}\) despite the fact that \(M\ll M_{\rm GUT}\) for \(M\) given in Eq. (67a). Indeed, the gauge boson associated with the \(U(1)_{B-L}\) breaking is neutral under \(\mathbb{G}_{\rm SM}\) and so it does not contribute to the relevant renormalization group running. If \(\mathbb{G}=\mathbb{G}_{\rm LR}\) or \(\mathbb{G}_{5\times}\) we may invoke threshold corrections or additional matter supermultiplets to restore the gauge coupling unification - for \(\mathbb{G}=\mathbb{G}_{5\times}\) see Ref. [60]. ### Link to the MSSM The inclusion of the HS in our numerical computation helps us gain information about the mass scale of the SUSY particles through the determination of \(\widetilde{m}\sim m_{3/2}\) - see Eq. (46). Indeed, \({\rm a}_{S}\), which is already restricted as a function of \(\kappa\) or \(M\) for given \({\rm N}_{\rm G}\) in Figs. 6 and 7, can be correlated to \(m\) via Eq. (38d). Taking into account Eq. (30) and the fact that \(\langle z\rangle_{1}/m_{\rm P}\sim 10^{-3}\) - see Table 1 - we can solve Eq. (38d) analytically and very accurately w.r.t. \(m\). We find \[m\simeq\left(\frac{{\rm a}_{S}}{2^{1+\nu}(2-\nu)}\right)^{(2-\nu)/2}\left(\frac{3H_{\rm I}^{2}}{(1-\nu)\nu^{2}}\right)^{\nu/4}. \tag{68}\] Let us clarify here that in our numerical computation we use an iterative process (which converges quickly) in order to extract \(m\) consistently as a function of \(\kappa\) and \(M\). This is because the determination of the latter parameters via the conditions in Eqs. (52) and (54) requires the introduction of a trial \(m\) value which allows us to use as input the form of \(V_{\rm I}\) in Eq. (37). Thanks to the aforementioned smallness of \(\langle z\rangle_{\rm I}\) in Eq. (38d), \(m\) turns out to be two to three orders of magnitude larger than \({\rm a}_{S}\), suggesting that \(\widetilde{m}\) lies clearly at the PeV scale via Eqs. (46) and (25a). In fact, taking advantage of the resulting \(m\) for fixed \(\nu\) in Eq. (68), we can compute \(m_{3/2}\) from Eq. (25a), and \(m_{z}\) and \(m_{\theta}\) from Eq. (25c). All these masses turn out to be of the same order of magnitude - see Table 1. Then \(\widetilde{m}\) and \(T_{\rm rh}\) can also be estimated from Eqs. (46) and (47) for a specific \(K\) from Eqs. (40a) - (40d). The magnitude of \(\widetilde{m}\) and the necessity for \(\mu\sim\widetilde{m}\), established in Sec. IV, hint towards the high-scale MSSM. To highlight our expectations numerically, we take \(K=K_{1}\) and fix initially \(\nu=7/8\), which is a representative value. The predicted \(\widetilde{m}\) as a function of \(\kappa\) is depicted in Fig. 8 for the three \(\mathsf{N}_{\rm G}\)'s considered in our work. We use the same type of lines as in Fig. 6. Assuming also that \(\mu=\widetilde{m}\) we can determine the segments of these lines that can be excluded by the BBN bound in Eq. (59b).
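As an illustrative aside (not part of the original analysis), the closed form in Eq. (68) is straightforward to evaluate programmatically. The minimal Python sketch below encodes it, assuming all quantities are expressed in reduced Planck units; the inputs in the example call are placeholders rather than fitted values.

```python
# Minimal sketch of Eq. (68): the HS mass parameter m as a function of the
# tadpole coefficient a_S, for fixed nu and inflationary Hubble rate H_I.
# Assumption: every quantity is expressed in reduced Planck units (m_P = 1);
# the numbers in the example call are placeholders, not values from Table 1.
def m_from_aS(a_S, nu, H_I):
    """Evaluate m ~ (a_S/(2^(1+nu)(2-nu)))^((2-nu)/2) * (3 H_I^2/((1-nu) nu^2))^(nu/4)."""
    first = (a_S / (2**(1 + nu) * (2 - nu)))**((2 - nu) / 2)
    second = (3 * H_I**2 / ((1 - nu) * nu**2))**(nu / 4)
    return first * second

# Example call with nu = 7/8 (the representative value used in the text) and
# purely illustrative a_S and H_I:
print(m_from_aS(a_S=4e-16, nu=7/8, H_I=2e-10))
```

In a full scan, such a call would sit inside the iterative loop described above, with the trial \(m\) updated until Eqs. (52), (54), and (38d) are satisfied simultaneously.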
In all, we find that \(\widetilde{m}\) turns out to be confined in the ranges \[0.34\lesssim\widetilde{m}/\text{PeV}\lesssim 13.6\;\;\text{for}\;\;\mathsf{N}_{\rm G}=1, \tag{69a}\] \[0.21\lesssim\widetilde{m}/\text{PeV}\lesssim 32.9\;\;\text{for}\;\;\mathsf{N}_{\rm G}=2, \tag{69b}\] \[0.58\lesssim\widetilde{m}/\text{PeV}\lesssim 46.8\;\;\text{for}\;\;\mathsf{N}_{\rm G}=10. \tag{69c}\] Allowing \(\nu\) and \(\mu\) to vary within their possible respective margins \((0.75-1)\) and \((1-3)\widetilde{m}\), we obtain the gray shaded region in Fig. 8. We present an overall region for the three possible \(\mathsf{N}_{\rm G}\)'s, since the separate ones overlap each other. Obviously, the lower boundary curve of the displayed region is obtained for \(\mathsf{N}_{\rm G}=1\) and \(\nu\simeq 0.751\), whereas the upper one corresponds to \(\mathsf{N}_{\rm G}=10\) and \(\nu\simeq 0.99\). The hatched region is ruled out by Eq. (59b). All in all, we obtain the predictions \[1.2\lesssim{\rm a}_{S}/\text{TeV}\lesssim 460\;\;\text{and}\;\;0.09\lesssim\widetilde{m}/\text{PeV}\lesssim 253 \tag{70}\] and \(T_{\rm rh}^{\rm max}\simeq 71\;\text{GeV},139\;\text{GeV}\), and \(163\;\text{GeV}\) for \(\mathsf{N}_{\rm G}=1,2\), and \(10\) respectively, attained for \(\mu=3\widetilde{m}\) and \(\nu\simeq 0.99\). The derived allowed margin of \(\widetilde{m}\), which is included in Eq. (61), and the employed \(\mu\) values render our proposal compatible with the mass of the Higgs boson discovered at the LHC [52] if we adopt as a low energy effective theory the high-scale version of MSSM [53]. ## VII Conclusions We considered the realization of FHI in the context of an extended model based on the superpotential and Kahler potential in Eqs. (1) and (5), which are consistent with an approximate \(R\) symmetry. The minimization of the SUGRA scalar potential at the present vacuum constrains the curvature of the internal space of the goldstino superfield and provides a tunable energy density which may be interpreted as the DE without the need for an unnaturally small coupling constant. On the other hand, this same potential causes a displacement of the sgoldstino to values much smaller than \(m_{\rm P}\) during FHI. Combining this fact with minimal kinetic terms for the inflaton resolves the \(\eta\) problem and allows hilltop FHI. The slope of the inflationary path is generated by the RCs and a tadpole term with a minus sign and values which increase with the dimensionality of the representation of the relevant Higgs superfields. Embedding \(\mathbb{G}_{B-L}\) into a larger gauge group \(\mathbb{G}_{\rm GUT}\) that predicts the production of monopoles prior to FHI, which can eventually break the CSs, allows us to attribute the observed gravitational-wave data to the decay of metastable \(B-L\) CSs. We also discussed the generation of the \(\mu\) term of MSSM following the Giudice-Masiero mechanism and restricted further the curvature of the goldstino internal space so that phenomenologically dangerous production of \(\widetilde{G}\) may be avoided. This same term assists in the decay of the sgoldstino, which normally dominates the energy density of the Universe, at a reheat temperature which can be as high as \(163\;\text{GeV}\) provided that the \(\mu\) parameter is of the order of the \(\widetilde{G}\) mass, i.e., of order \(\text{PeV}\).
Linking the inflationary sector to a degenerate MSSM mass scale \(\widetilde{m}\) we found that \(\widetilde{m}\) lies in a range consistent with the Higgs boson mass measured at the LHC within high-scale SUSY. The long-lasting matter domination obtained in our model because of the sgoldstino oscillations after the end of FHI leads [61] to a suppression at relatively large frequencies (\(f>0.1\;\text{Hz}\)) of the spectrum of the gravitational waves from the decay of the metastable CSs. This effect may be beneficial for spectra based on \(G\mu_{\rm cs}\) values which violate the upper bound of Eq. (58c) from the results of Ref. [34]. Since we do not achieve such \(G\mu_{\rm cs}\) values here we do not analyze this implication of our scenario further. On the other hand, the low reheat temperature encountered in our proposal makes the achievement of baryogenesis difficult. However, there are currently attempts [62] based on the idea of cold electroweak baryogenesis [63] which may overcome this problem. It is also not clear which particle could play the role of CDM in a high-scale SUSY regime. Let us just mention that a thorough investigation is needed including the precise solution of the relevant Boltzmann equations as in Ref. [58] in order to assess if the abundance of the lightest SUSY particle can be confined within the observational limits in this low-reheating scenario. ###### Acknowledgements. We would like to thank I. Antoniadis, H. Baer, and E. Kiritis for useful discussions. This research work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project Number: 2251). ## Appendix A SUGRA corrections to the inflationary potential of FHI As shown in Sec. II.3.2, the presence of \(W_{\rm H}\) and \(K_{\rm H}\) in Eqs. (2b) and (6b) respectively transmits (potentially important) corrections to the inflationary potential. We present here, for the first time to the best of our knowledge, these corrections without specifying the form of these functions. The corrections from the IS are also taken into account. In particular, we consider the following superpotential and Kahler potential resulting from the ones in Eqs. (1) and (5) by setting \(\Phi\) and \(\tilde{\Phi}\) to zero: \[W=W_{\rm I}(S)+W_{\rm H}(Z)\;\;\mbox{and}\;\;K=K_{\rm I}(S)+K_{\rm H}(Z), \tag{10}\] where \(W_{\rm I}\) and \(K_{\rm I}\) are given by \[W_{\rm I}=-\hat{\kappa}M^{2}S\;\;\mbox{and}\;\;K_{\rm I}=K_{\rm I}(|S|^{2}) \tag{11}\] (cf. Eqs. (2a) and (6a)). We also assume that \(K_{\rm I}\) can be reliably expanded in powers of \(|S|/m_{\rm P}\) as follows: \[K_{\rm I}\simeq|S|^{2}+\frac{k_{4}}{4}\frac{|S|^{4}}{m_{\rm P}^{2}}+\frac{k_{6}}{9}\frac{|S|^{6}}{m_{\rm P}^{4}}+\cdots. \tag{12}\] Under these circumstances, the inverse Kahler metric reads \[K_{\rm I}^{SS^{*}}\simeq 1-k_{4}|S|^{2}/m_{\rm P}^{2}+(k_{4}^{2}-k_{6})|S|^{4}/m_{\rm P}^{4}+\cdots \tag{13a}\] and the exponential prefactor of \(V_{\rm F}\) in Eq. (9) is well approximated by \[e^{K_{\rm I}/m_{\rm P}^{2}}\simeq 1+\frac{|S|^{2}}{m_{\rm P}^{2}}+\frac{1+2k_{4}}{2}\frac{|S|^{4}}{m_{\rm P}^{4}}+\cdots. \tag{13b}\] Taking into account the two last expressions and expanding \(V_{\rm F}\) in Eq. (9) with \(W\) and \(K\) from Eq.
(10) up to the fourth power in \(|S|/m_{\rm P}\), we obtain the quite generic formula below \[V_{\rm F}\simeq v_{0}+m_{\rm I3/2}^{2}|S|^{2}+(v_{1}S^{*}+{\rm c.c})+v_{2}|S|^{2}/m_{\rm P}^{2}\] \[+(v_{3}S^{*}+{\rm c.c})\,|S|^{2}/m_{\rm P}^{2}+v_{4}|S|^{4}/m_{\rm P}^{4}+\cdots, \tag{14}\] where the various \(v\)'s are found to be \[v_{0}=\kappa^{2}M^{4}, \tag{15a}\] \[v_{1}=\kappa M^{2}m_{\rm I3/2}\left\langle 2-K_{\rm H}^{ZZ^{*}}\partial_{2}G_{\rm H}\right\rangle_{1}, \tag{15b}\] \[v_{2}=\kappa^{2}M^{4}\left\langle K_{\rm H}^{ZZ^{*}}|\partial_{Z}K_{\rm H}|^{2}/m_{\rm P}^{2}-k_{4}\right\rangle_{1}, \tag{15c}\] \[v_{3}=\kappa M^{2}m_{\rm I3/2}\left\langle(1+k_{4}/2)-K_{\rm H}^{ZZ^{*}}\partial_{2}G_{\rm H}\right\rangle_{1}, \tag{15d}\] \[v_{4}=\kappa^{2}M^{4}\Big{(}1/2+k_{4}(4k_{4}-7)/4-k_{6} \tag{15e}\] Here \(\kappa\) is the rescaled coupling constant \(\hat{\kappa}\) after absorbing the relevant prefactor \(e^{\langle K_{\rm I}\rangle_{1}/2m_{\rm P}}\) in Eq. (9) and we used the definition of the \(\widetilde{G}\) mass \[m_{\rm I3/2}=\left\langle e^{K_{\rm H}/2m_{\rm P}^{2}}W_{\rm H}/m_{\rm P}^{2}\right\rangle_{1},\] and the Kahler invariant function, see, e.g., Ref. [56], \[G_{\rm H}=K_{\rm H}/m_{\rm P}^{2}+\ln|W_{\rm H}/m_{\rm P}^{3}|^{2}. \tag{16}\] From these results we see that \(v_{2}\) and \(v_{4}\) generically receive contributions from both the IS and HS, whereas \(v_{1}\) and \(v_{3}\) receive contributions exclusively from the HS - cf. Refs. [8; 10]. Specifically, from Eq. (15c), we can recover the miraculous cancellation occurring within minimal FHI [4; 14], where the HS is ignored and \(k_{4}=k_{6}=0\) in Eq. (12). Switching on \(K_{\rm H}\) and noticing that \[k_{4}=\partial_{S}^{2}\partial_{S^{*}}^{2}K_{\rm I}(S=S^{*}=0), \tag{17}\] we can also see that Eq. (15c) agrees with that presented in Ref. [8]. The applicability of our results can be easily checked for other HS settings [23; 24; 26] too.
2309.12794
A weakly coupled system of $p$-Laplace type in a heat conduction problem
We study the temperature distribution in a heat conduction problem for a system of p-Laplace equations, giving rise to a free boundary.
Morteza Fotouhi, Mohammad Safdari, Henrik Shahgholian
2023-09-22T11:09:53Z
http://arxiv.org/abs/2309.12794v1
# A weakly coupled system of \(p\)-Laplace type in a heat conduction problem ###### Abstract. We study the temperature distribution in a heat conduction problem for a system of \(p\)-Laplace equations, giving rise to a free boundary. Key words and phrases: Free boundary, Heat conduction, p-Laplacian. 2010 Mathematics Subject Classification: 35R35 ###### Contents * 1 Introduction * 1.1 Background * 1.2 Structure of the paper * 2 The Penalized Problem * 3 Regularity of solutions to the penalized problem * 4 The Original Problem * 5 Regularity of the free boundary (case \(p=2\)) ## 1. Introduction ### Background This paper delves into an extension of a classical optimization problem in heat conduction, presenting a succinct description as follows: Given a surface \(\partial\Omega\) in \(\mathbb{R}^{n}\) and positive constant functions defined on it (representing the temperature distribution), the aim is to enclose \(\partial\Omega\) with a prescribed volume of insulating material to minimize heat loss in a stationary scenario. Mathematically, the objective is to discover a vector-valued function \(\mathbf{u}=(u^{1},\cdots,u^{m})\) (\(m\geq 1\)) that corresponds to the temperature within \(\Omega\). Whenever the components of \(\mathbf{u}\) are nonnegative and the volume of its support is equal to \(1\), they become \(p\)-harmonic. The target is to minimize the heat flow, which can be regarded as a continuous family of convex functions dependent on \(\nabla\mathbf{u}\) along \(\partial\Omega\). Our research was inspired by a series of papers [1, 2, 3] and their generalization presented in [12]. The initial two articles focused on studying constant temperature distributions, specifically in the linear case where \(\Gamma(x,t)=t\). This linear setting enabled [1, 3] to reduce the quantity to be minimized to the Dirichlet integral. However, even within the linear case, the problem of nonconstant temperature distribution, examined in [2], introduced various new challenges. The main objective of our article is to explore the system version of the nonlinear case with a nonconstant temperature distribution, wherein the equation is governed by the \(p\)-Laplacian. The nonlinearity addressed in this paper holds significant physical importance, as problems involving monotone operators, akin to those studied in [12], arise in the optimization of domains for electrostatic configurations. The nonlinearity associated with \(\nabla\mathbf{u}\) introduces various new challenges. For instance, computing normal derivatives of \(W^{1,p}\)-functions becomes problematic, leading to difficulties in providing a reasonable mathematical model. In [2], this challenge was overcome by minimizing the total mass of \(\Delta u\), which can be treated as a nonnegative measure when \(u\) is subharmonic. However, in the present case, there is no integral representation available for \(\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{u}(x)\big{)}\,d\sigma\). To address this issue, similar to [12], we solve appropriate auxiliary variational problems and compare them with the minimizer. Now let us introduce the problem in a mathematical framework. Let \(\Omega\subset\mathbb{R}^{n}\) (\(n\geq 2\)) be a bounded open set with smooth boundary whose volume \(|\Omega|>1\).
Consider the \(p\)-Laplace differential operator (\(1<p<\infty\)) \[\Delta_{p}u^{i}=\operatorname{div}\!\left(|\nabla u^{i}|^{p-2}\,\nabla u^{i} \right)=\operatorname{div}\!\left(A[u^{i}]\right)\!,\] where we set \(A[u^{i}]=A(\nabla u^{i}):=|\nabla u^{i}|^{p-2}\,\nabla u^{i}\) to simplify the notation. Let \(\boldsymbol{\varphi}:\partial\Omega\to\mathbb{R}^{m}\) be a \(C^{1}\) function with positive components \(\varphi^{i}>0\). For \(\mathbf{u}:\Omega\to\mathbb{R}^{m}\) (\(m\geq 1\)) satisfying \[\begin{cases}\Delta_{p}u^{i}=0&\text{in }\{|\mathbf{u}|>0\},\\ u^{i}=\varphi^{i}&\text{on }\partial\Omega,\\ \operatorname{vol}(\operatorname{spt}|\mathbf{u}|)=1,\end{cases} \tag{1.1}\] we want to minimize the functional \[J(\mathbf{u}):=\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}u^{1}(x),\ldots,A_{ \nu}u^{m}(x)\big{)}\,d\sigma(x),\] where \(\nu\) is the outward normal vector on \(\partial\Omega\), \[A_{\nu}u^{i}:=|\nabla u^{i}|^{p-2}\,\partial_{\nu}u^{i},\] and \(\Gamma(x,\xi):\partial\Omega\times\mathbb{R}^{m}\to\mathbb{R}\) is a continuous function that satisfies: 1. For each fixed \(x\), \(\Gamma(x,\cdot)\) is a convex function. 2. For every \(i\), \(\partial_{\xi_{i}}\Gamma(\cdot,\cdot)\) is positive and has a positive lower bound on any set of the form \(\{(x,\xi):\xi_{i}\geq a\}\). In addition, \(\partial_{\xi_{i}}\Gamma(\cdot,\cdot)\) is bounded above on any set of the form \(\{(x,\xi):\xi_{i}\leq b\}\). (The bounds can depend on \(a,b\).) 3. For each fixed \(\xi\), \(\partial_{\xi_{i}}\Gamma(\cdot,\xi)\) is a \(C^{1}\) function. Note that, as a result, for every \(\xi\) we have \[\Gamma(x,\xi_{1},\ldots,\xi_{m}) \geq\sum_{i\leq m}\partial_{\xi_{i}}\Gamma(x,0)\xi_{i}+\Gamma(x,0)\] \[\geq\sum_{i\leq m}\psi_{i}(x)\xi_{i}-C,\] where \(\psi_{i}(x):=\partial_{\xi_{i}}\Gamma(x,0)>0\) are positive \(C^{1}\) functions and \(C\) is a constant. In particular we have \[\Gamma(x,A_{\nu}u_{1},\ldots,A_{\nu}u_{m})\geq\sum_{i=1}^{m}\psi_{i}(x)A_{\nu} u^{i}-C. \tag{1.2}\] A typical example of \(\Gamma\) is \[\Gamma(x,\xi)=\psi_{1}(x)\gamma_{1}(\xi_{1})+\cdots+\psi_{m}(x)\gamma_{m}(\xi _{m}),\] where \(\psi_{i}\)'s are \(C^{1}\) and positive, and \(\gamma_{i}\)'s are \(C^{1}\) increasing convex functions with positive derivative. ### Structure of the paper The structure of our paper is as follows: In Section 2, we introduce the physical problem under consideration. We then formulate a penalized version of the variational problem for the temperature \(\mathbf{u}\) and define suitable constraint sets as part of our strategy to overcome the challenges arising from the nonlinearity. We solve the optimization problem over weakly closed subsets of \(W^{1,p}\) (the sets \(V_{\delta}\)), establishing the optimal regularity properties of the minimizers, including Lipschitz regularity. These results are crucial for proving the existence of an optimal configuration for the original penalized problem, as discussed in Section 3. Here we also present fundamental geometric-measure properties of the optimal configuration, such as linear growth away from the free boundary and uniformly positive density. These properties allow us to establish a representation theorem following the framework of [3]. In Section 4, we recover the original physical problem from the penalized problem by showing that for sufficiently small \(\varepsilon\), the volume of \(\{|\mathbf{u}_{\varepsilon}|>0\}\) automatically adjusts to be equal to \(1\). Section 5 is dedicated to the optimal regularity of the free boundary, for the case \(p=2\). 
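As a quick aside (not part of the argument), the pointwise expansion \(\Delta_{p}u=|\nabla u|^{p-4}\big(|\nabla u|^{2}\Delta u+(p-2)\Delta_{\infty}u\big)\), where \(\Delta_{\infty}u=\sum_{i,j}\partial_{i}u\,\partial_{j}u\,\partial_{ij}u\), is valid for smooth \(u\) wherever \(\nabla u\neq 0\) and is used later in the proof of Lemma 2.4. It can be checked symbolically; the minimal sketch below does this in two dimensions, with the test function and the value of \(p\) being arbitrary choices made only for this check.

```python
# Symbolic spot check in 2D of
#   div(|grad u|^(p-2) grad u) = |grad u|^(p-4) ( |grad u|^2 Lap u + (p-2) Lap_inf u ),
# where Lap_inf u = sum_ij u_i u_j u_ij.  The function u and the value of p
# below are arbitrary; any smooth u with nonvanishing gradient at the sample
# point would do.
import sympy as sp

x, y = sp.symbols('x y', real=True)
p = sp.Rational(7, 2)
u = x**3 + 2*x*y + sp.exp(y)

grad = [sp.diff(u, v) for v in (x, y)]
norm2 = sum(g**2 for g in grad)                      # |grad u|^2

# left-hand side: div(|grad u|^(p-2) grad u)
lhs = sum(sp.diff(norm2**((p - 2)/2) * g, v) for g, v in zip(grad, (x, y)))

# right-hand side: |grad u|^(p-4) ( |grad u|^2 Lap u + (p-2) Lap_inf u )
lap = sum(sp.diff(u, v, 2) for v in (x, y))
lap_inf = sum(grad[i]*grad[j]*sp.diff(sp.diff(u, vi), vj)
              for i, vi in enumerate((x, y)) for j, vj in enumerate((x, y)))
rhs = norm2**((p - 4)/2) * (norm2*lap + (p - 2)*lap_inf)

# evaluate the difference at a sample point; it vanishes up to rounding
pt = {x: sp.Rational(7, 10), y: -sp.Rational(3, 10)}
print(abs(sp.N((lhs - rhs).subs(pt), 30)) < 1e-20)   # True
```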
We demonstrate that the normal derivative of the minimizer along the free boundary is a Holder continuous function, leading to the conclusion that the free boundary is a \(C^{1,\alpha}\) surface. Furthermore, using the free boundary condition obtained during the proof of Holder continuity, we establish that the free boundary is an analytic surface, except for a small singular set. ## 2. The Penalized Problem Let \(\Omega_{\delta}:=\{x\in\Omega:\operatorname{dist}(x,\partial\Omega)<\delta\}\) and \[V_{\delta}:=\{\mathbf{u}\in W^{1,p}(\Omega;\mathbb{R}^{m}):u^{i} \geq 0,\,\Delta_{p}u^{i}\geq 0,\] \[\Delta_{p}u^{i}=0\text{ in }\Omega_{\delta},\,u^{i}=\varphi^{i} \text{ on }\partial\Omega\}.\] Furthermore, we set \[V:=\bigcup_{\delta>0}V_{\delta}.\] Observe that the above definition is consistent due to the assumption \(\varphi^{i}>0\) on \(\partial\Omega\). Also, by \(\Delta_{p}u^{i}\geq 0\) we mean that for any test function \(\zeta\in C_{c}^{\infty}(\Omega)\) with \(\zeta\geq 0\) we have \[-\int_{\Omega}\nabla\zeta\cdot|\nabla u^{i}|^{p-2}\nabla u^{i}\,dx\geq 0.\] This implies that there is a Radon measure \(\mu^{i}\) such that for any test function \(\zeta\in C_{c}^{\infty}(\Omega)\) we have \[\int_{\Omega}\zeta\,d\mu^{i}=-\int_{\Omega}\nabla\zeta\cdot|\nabla u^{i}|^{p-2 }\nabla u^{i}\,dx.\] To simplify the notation, we denote \(\mu^{i}\) by \(\Delta_{p}u^{i}\), and \(d\mu^{i}\) by \(\Delta_{p}u^{i}\,dx\). (It should be noted that this notation is not meant to imply \(\mu^{i}\) is absolutely continuous with respect to the Lebesgue measure. In fact, for the minimizer, the two measures are mutually singular as we will see in Theorem 3.5.) Let \(f_{\varepsilon}:\mathbb{R}\to\mathbb{R}\) be \[f_{\varepsilon}(t):=\begin{cases}1+\frac{1}{\varepsilon}(t-1)&t\geq 1,\\ 1+\varepsilon(t-1)&t<1.\end{cases}\] We are interested in minimizing the penalized functional \[J_{\varepsilon}(\mathbf{u}):=\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{u} (x)\big{)}\,d\sigma+f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|}\big{)}\] over \(V\). The significance of the above penalization is that it forces the volume \(\big{|}\{|\mathbf{u}|>0\}\big{|}\) to be \(1\) for small enough \(\varepsilon\); see Theorem 4.3. (Notice that the components of \(\mathbf{u}\in V\) are \(p\)-harmonic near \(\partial\Omega\); therefore they are smooth enough near the boundary, and it makes sense to compute their derivatives along \(\partial\Omega\).) We first consider the minimizer of \(J_{\varepsilon}\) over \(V_{\delta}\). **Lemma 2.1**.: _Let \(\mathbf{u}\in V\). Then we have_ \[\int_{\Omega}\psi_{i}\Delta_{p}u^{i}\,dx+\int_{\Omega}\nabla\psi_{i}\cdot A[u ^{i}]\,dx=\int_{\partial\Omega}\psi_{i}A_{\nu}u^{i}\,d\sigma, \tag{2.1}\] _where \(\psi_{i}\)'s are \(C^{1}\) functions._ Proof.: Let \(\phi_{k}\in C^{\infty}(\Omega)\) be such that \(\phi_{k}\equiv 1\) on \(\tilde{\Omega}_{k}:=\Omega-\Omega_{1/k}\) and \(\phi_{k}\equiv 0\) on \(\partial\Omega\). We know that \(\mathbf{u}\in V_{\delta}\) for some \(\delta\). Suppose \(k\) is large enough so that \(1/k<\delta\), and thus \(\Omega_{1/k}\subset\Omega_{\delta}\). 
Then we have \[\int_{\Omega}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]\,dx =\int_{\Omega_{1/k}}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]\,dx\] \[\qquad\qquad+\int_{\tilde{\Omega}_{k}}\nabla(\underbrace{\phi_{k}}_{\equiv\,1}\psi_{i})\cdot A[u^{i}]\,dx\] \[=\int_{\Omega_{1/k}}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]\,dx+\int_{\tilde{\Omega}_{k}}\nabla\psi_{i}\cdot A[u^{i}]\,dx.\] Now, noting that \(\partial\Omega_{1/k}=\partial\tilde{\Omega}_{k}\cup\partial\Omega\), and by using the integration by parts formula proved in [5], we get \[\int_{\Omega_{1/k}}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]\,dx =\int_{\Omega_{1/k}}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]+\phi_{k}\psi_{i}\underbrace{\Delta_{p}u^{i}}_{=\,0\text{ on }\Omega_{1/k}}dx\] \[=-\int_{\partial\tilde{\Omega}_{k}}\underbrace{\phi_{k}}_{\equiv\,1}\psi_{i}A_{\nu}u^{i}\,d\sigma-\int_{\partial\Omega}\underbrace{\phi_{k}}_{\equiv\,0}\psi_{i}A_{\nu}u^{i}\,d\sigma\] \[=-\int_{\partial\tilde{\Omega}_{k}}\psi_{i}A_{\nu}u^{i}\,d\sigma\underset{k\to\infty}{\longrightarrow}-\int_{\partial\Omega}\psi_{i}A_{\nu}u^{i}\,d\sigma.\] In addition, we have \(\int_{\tilde{\Omega}_{k}}\nabla\psi_{i}\cdot A[u^{i}]\,dx\underset{k\to\infty}{\longrightarrow}\int_{\Omega}\nabla\psi_{i}\cdot A[u^{i}]\,dx\), and \[\int_{\Omega}\nabla(\phi_{k}\psi_{i})\cdot A[u^{i}]\,dx=-\int_{\Omega}\phi_{k}\psi_{i}\,\Delta_{p}u^{i}\,dx\] \[\underset{k\to\infty}{\longrightarrow}-\int_{\Omega}\psi_{i}\Delta_{p}u^{i}\,dx,\] which together give the desired result. We can similarly show that \[\int_{\Omega}\Delta_{p}u^{i}\,dx=\int_{\partial\Omega}A_{\nu}u^{i}\,d\sigma.\] In addition, note that \(\int_{\Omega}u^{i}\Delta_{p}u^{i}\,dx\) is meaningful (since \(u^{i}-\varphi^{i}\in W^{1,p}_{0}\) while \(\Delta_{p}u^{i}\in W^{-1,p/(p-1)}\) and \(\varphi^{i}\) is continuous) and we can similarly show that \[\int_{\Omega}u^{i}\Delta_{p}u^{i}+|\nabla u^{i}|^{p}\,dx=\int_{\partial\Omega}\varphi^{i}A_{\nu}u^{i}\,d\sigma. \tag{2.2}\] Note that \(u^{i}=\varphi^{i}\) on \(\partial\Omega\). **Lemma 2.2**.: _For \(\mathbf{u}\in V\) we have_ \[\sum_{i=1}^{m}\int_{\Omega}|\nabla u^{i}|^{p}\,dx\leq C+C\int_{\partial\Omega}\sum_{i\leq m}\psi_{i}(x)A_{\nu}u^{i}\,d\sigma.\] _Remark_.: As we will see, the above inequality actually holds for each summand. Furthermore, with a slight modification of the last part of the proof we obtain that \[\int_{\Omega}\Delta_{p}u^{i}\,dx\leq C+C\int_{\partial\Omega}\psi_{i}(x)A_{\nu}u^{i}\,d\sigma.\] Proof.: Let \(\mathbf{h}_{0}\) be the vector-valued function in \(\Omega\) satisfying \(\Delta_{p}h^{i}_{0}=0\), and taking the boundary values \(\boldsymbol{\varphi}\) on \(\partial\Omega\). Note that \(h^{i}_{0}\) is \(C^{1}\) and we can plug it in (2.1). By subtracting the resulting relation from (2.2) we get \[\int_{\Omega}(u^{i}-h^{i}_{0})\Delta_{p}u^{i}\,dx+\int_{\Omega}\nabla(u^{i}-h^{i}_{0})\cdot A[u^{i}]\,dx=0.\] Hence \[\int_{\Omega}|\nabla u^{i}|^{p}\,dx =\int_{\Omega}\nabla u^{i}\cdot A[u^{i}]\,dx=\int_{\Omega}(h_{0}^{i}-u^{i})\Delta_{p}u^{i}\,dx+\int_{\Omega}\nabla h_{0}^{i}\cdot A[u^{i}]\,dx\] \[\leq\int_{\Omega}h_{0}^{i}\Delta_{p}u^{i}\,dx+C\int_{\Omega}|\nabla h_{0}^{i}|^{p}\,dx+\frac{1}{2}\int_{\Omega}|A[u^{i}]|^{\frac{p}{p-1}}\,dx\] \[\leq C\int_{\Omega}\Delta_{p}u^{i}\,dx+C+\frac{1}{2}\int_{\Omega}|\nabla u^{i}|^{p}\,dx,\] where we have used the facts that \(u^{i},\Delta_{p}u^{i}\geq 0\) and \(|A[u^{i}]|^{\frac{p}{p-1}}=|\nabla u^{i}|^{p}\).
Thus we have \[\int_{\Omega}|\nabla u^{i}|^{p}\,dx\leq C+C\int_{\Omega}\Delta_{p}u^{i}\,dx.\] But since \(\psi_{i},\Delta_{p}u^{i}\geq 0\) we get \[\int_{\Omega}|\nabla u^{i}|^{p}\,dx\leq C+C\int_{\Omega}\Delta_{p}u^{i}\,dx\leq C+CC_{i}\int_{\Omega}\psi_{i}\,\Delta_{p}u^{i}\,dx,\] where \(C_{i}=\max_{\overline{\Omega}}\frac{1}{\psi_{i}}>0\). Hence by (2.1) we get \[\int_{\Omega}|\nabla u^{i}|^{p}\,dx \leq C+C\int_{\Omega}\psi_{i}\,\Delta_{p}u^{i}\,dx\] \[=C-C\int_{\Omega}\nabla\psi_{i}\cdot A[u^{i}]\,dx+C\int_{\partial\Omega}\psi_{i}A_{\nu}u^{i}\,d\sigma\] \[\leq C+\tilde{C}\int_{\Omega}|\nabla\psi_{i}|^{p}\,dx+\frac{1}{2}\int_{\Omega}|A[u^{i}]|^{\frac{p}{p-1}}\,dx+C\int_{\partial\Omega}\psi_{i}A_{\nu}u^{i}\,d\sigma\] \[\leq C+\frac{1}{2}\int_{\Omega}|\nabla u^{i}|^{p}\,dx+C\int_{\partial\Omega}\psi_{i}A_{\nu}u^{i}\,d\sigma,\] which gives the desired. **Theorem 2.3**.: _There exists a minimizer \(\mathbf{u}_{\varepsilon}^{\delta}\in V_{\delta}\) for \(J_{\varepsilon}\)._ Proof.: Let \(\{\mathbf{u}_{k}\}\subset V_{\delta}\) be a minimizing sequence. Then by the above lemma and (1.2) we have \[\sum_{i=1}^{m}\int_{\Omega}|\nabla u_{k}^{i}|^{p}\,dx \leq C+C\int_{\partial\Omega}\sum_{i\leq m}\psi_{i}A_{\nu}u_{k}^{i}\,d\sigma\] \[\leq C+C\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}u_{k}^{1},\ldots,A_{\nu}u_{k}^{m}\big{)}+C\,d\sigma\] \[\leq C+CJ_{\varepsilon}(\mathbf{u}_{k}).\] Hence \(\|\nabla\mathbf{u}_{k}\|_{L^{p}}\) is bounded. In addition, for the dual exponent \(q=\frac{p}{p-1}\) we can see that \(\|A[\mathbf{u}_{k}]\|_{L^{q}}=\|\nabla\mathbf{u}_{k}\|_{L^{p}}^{p-1}\) is also bounded. Hence, up to a subsequence, we can assume that \(\nabla u_{k}^{i}\rightharpoonup\nabla u^{i}\) in \(L^{p}\), \(A[u_{k}^{i}]\rightharpoonup A[u^{i}]\) in \(L^{q}\), and \(\mathbf{u}_{k}\to\mathbf{u}\) a.e. in \(\Omega\). Thus \(u^{i}\geq 0\). Also, \(u^{i}=\varphi^{i}\) on \(\partial\Omega\), since \(u^{i}_{k}-\varphi^{i}\in W^{1,p}_{0}(\Omega)\), which is a closed and convex set, hence weakly closed. Finally, to see that \(\Delta_{p}u^{i}\) has the desired properties, notice that for an appropriate test function \(\phi\) we have \[\int_{\Omega}\nabla\phi\cdot A[u^{i}]\,dx=\lim_{k\to\infty}\int_{\Omega}\nabla\phi\cdot A[u^{i}_{k}]\,dx\] due to the weak convergence of \(A[\mathbf{u}_{k}]\). Therefore \(\mathbf{u}\in V_{\delta}\). Now we can repeat the proof of Lemma 3.3 in [12] to deduce the weak lower semicontinuity of \(J_{\varepsilon}\) with respect to this sequence, and conclude the proof (the convexity of \(\Gamma\) is needed here). **Lemma 2.4** (**Hopf's lemma for \(p\)-harmonic functions**).: _Suppose \(h\) is a \(p\)-harmonic function on \(B_{1}(0)\) with nonnegative boundary values on \(\partial B_{1}\). Then we have_ \[h(x)\geq c(n,p)\operatorname{dist}(x,\partial B_{1})\sup_{B_{1/2}}h.\] Proof.: Consider the function \(g(x)=e^{-\lambda|x|^{2}}-e^{-\lambda}\) for some \(\lambda>0\). Note that \(g=0\) on \(\partial B_{1}\), and \(0<g<1\) on \(B_{1}\).
We also have \[\partial_{i}g=-2\lambda x_{i}e^{-\lambda|x|^{2}},\qquad\partial_{ij}g=(4 \lambda^{2}x_{i}x_{j}-2\lambda\delta_{ij})e^{-\lambda|x|^{2}}.\] Now we have \(\Delta g=(4\lambda^{2}|x|^{2}-2n\lambda)e^{-\lambda|x|^{2}}\), and \[\Delta_{\infty}g:=\sum_{i,j}\partial_{i}g\partial_{j}g\partial_{ ij}g =\sum_{i,j}4\lambda^{2}(4\lambda^{2}x_{i}^{2}x_{j}^{2}-2\lambda \delta_{ij}x_{i}x_{j})e^{-3\lambda|x|^{2}}\] \[=4\lambda^{2}(4\lambda^{2}|x|^{4}-2\lambda|x|^{2})e^{-3\lambda|x |^{2}}.\] Therefore \[\Delta_{p}g =\operatorname{div}(|\nabla g|^{p-2}\nabla g)=|\nabla g|^{p-4} \big{(}|\nabla g|^{2}\Delta g+(p-2)\Delta_{\infty}g\big{)}\] \[=(2\lambda|x|)^{p-4}\big{(}4\lambda^{2}|x|^{2}(4\lambda^{2}|x|^{2 }-2n\lambda)\] \[\qquad\qquad\qquad+(p-2)4\lambda^{2}(4\lambda^{2}|x|^{4}-2 \lambda|x|^{2})\big{)}e^{-(p-1)\lambda|x|^{2}}\] \[=(2\lambda)^{p-1}|x|^{p-2}\big{(}2\lambda|x|^{2}-n+(p-2)(2 \lambda|x|^{2}-1)\big{)}e^{-(p-1)\lambda|x|^{2}}\] \[=(2\lambda)^{p-1}|x|^{p-2}\big{(}2(p-1)\lambda|x|^{2}-n-p+2\big{)} e^{-(p-1)\lambda|x|^{2}}.\] Thus for \(\frac{1}{2}\leq|x|\leq 1\) and large enough \(\lambda\) we have \[\Delta_{p}g\geq 2\lambda^{p-1}\big{(}(p-1)\lambda/2-n-p+2\big{)}e^{-(p-1)\lambda} >0.\] Now we have \(h\geq\inf_{\overline{B}_{1/2}}h>(\inf_{\overline{B}_{1/2}}h)g\) on \(\overline{B}_{1/2}\) (note that \(h\) is positive on \(B_{1}\) by maximum principle), and on \(B_{1}-\overline{B}_{1/2}\) we have \(\Delta_{p}h=0<\Delta_{p}g\). Also on \(\partial B_{1}\) we have \(h\geq 0=g\). Hence by the maximum principle we have \(h(x)\geq g(x)(\inf_{\overline{B}_{1/2}}h)\) for \(x\in B_{1}\). But by the Harnack's inequality we have \[\inf_{\overline{B}_{1/2}}h\geq C\sup_{\overline{B}_{1/2}}h\] for some constant \(C\) which does not depend on \(h\). Hence we obtain \[h(x)\geq Cg(x)\sup_{\overline{B}_{1/2}}h.\] On the other hand note that \[g(x) =g(x)-g(x/|x|)=\int_{1/|x|}^{1}\frac{d}{dt}g(tx)\,dt=\int_{1/|x|}^ {1}x\cdot\nabla g(tx)\,dt\] \[=\int_{1/|x|}^{1}-2\lambda t|x|^{2}e^{-\lambda t^{2}|x|^{2}}\,dt= 2\lambda|x|^{2}\int_{1}^{1/|x|}te^{-\lambda t^{2}|x|^{2}}\,dt\] \[\geq 2\lambda|x|^{2}\int_{1}^{1/|x|}te^{-\lambda}\,dt=\lambda e^{ -\lambda}|x|^{2}\Big{(}\frac{1}{|x|^{2}}-1\Big{)}=\lambda e^{-\lambda}(1-|x|^ {2})\] \[\geq\lambda e^{-\lambda}(1-|x|)=\lambda e^{-\lambda}\operatorname{ dist}(x,\partial B_{1}),\] which gives the desired. If \(h\) is a \(p\)-harmonic function on \(B_{r}(x_{0})\), then \(\tilde{h}(x):=h(x_{0}+rx)\) is a \(p\)-harmonic function on \(B_{1}(0)\). Hence we have \[h(x_{0}+rx) =\tilde{h}(x)\geq c(n,p)\operatorname{dist}(x,\partial B_{1})\sup _{B_{1/2}}\tilde{h}\] \[=c(n,p)\,(1-|x|)\sup_{B_{1/2}}\tilde{h}=c(n,p)\,\frac{(r-r|x|)}{r }\sup_{B_{r/2}(x_{0})}h\] \[=c(n,p)\operatorname{dist}\bigl{(}x_{0}+rx,\partial B_{r}(x_{0}) \bigr{)}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h.\] **Lemma 2.5**.: _Let \(w\in W^{1,p}(\Omega)\) be a nonnegative function. Then there exists \(c>0\), depending only on \(p\) and the dimension, such that for any ball \(\overline{B}_{r}(x_{0})\subset\Omega\) we have_ \[\Big{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\Big{)}^{p}\cdot\bigl{|}B_{r}(x_{0}) \cap\{w=0\}\bigr{|}\leq c\int_{B_{r}(x_{0})}|\nabla(w-h)|^{p}\,dy,\] _where \(h\) satisfies \(\Delta_{p}h=0\) in \(B_{r}(x_{0})\) taking boundary values equal to \(w\) on \(\partial B_{r}(x_{0})\)._ Proof.: Let \(\tau\in(0,1)\) be fixed. For \(\xi\) with \(|\xi|=1\) we set \[t_{\xi}:=\inf\{t\in[\tau r,r]:w(x_{0}+t\xi)=0\}\] provided that this set is nonempty. Otherwise we set \(t_{\xi}:=r\). 
Now note that \(w-h\) and \(w\) are absolutely continuous in almost every direction \(\xi\); in particular we have \(w(x_{0}+t_{\xi}\xi)=0\) (note that this will not be necessarily true if we allow \(\tau\) to be zero). Also \(w-h\) is \(\mathcal{H}^{n-1}\)-a.e. zero on \(\partial B_{r}(x_{0})\) as its trace is zero there, so \((w-h)(x_{0}+r\xi)=0\). Thus for almost every \(\xi\) for which \(t_{\xi}<r\) we have \[h(x_{0}+t_{\xi}\xi) =(w-h)(x_{0}+r\xi)-(w-h)(x_{0}+t_{\xi}\xi)\] \[=\int_{t_{\xi}}^{r}\frac{d}{dt}\big{(}(w-h)(x_{0}+t\xi)\big{)}\,dt\] \[=\int_{t_{\xi}}^{r}\nabla_{\xi}(w-h)(x_{0}+t\xi)\,dt\] \[\leq(r-t_{\xi})^{\frac{p-1}{p}}\Big{(}\int_{t_{\xi}}^{r}\big{|}\nabla(w-h)(x_{0}+t\xi)\big{|}^{p}\,dt\Big{)}^{\frac{1}{p}}.\] On the other hand, using Hopf's lemma we get \[h(x_{0}+t_{\xi}\xi) \geq c(n,p)\,\mathrm{dist}\big{(}x_{0}+t_{\xi}\xi,\,\partial B_{r}(x_{0})\big{)}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\] \[=c(n,p)(r-t_{\xi})\frac{1}{r}\sup_{B_{r/2}(x_{0})}h.\] Hence we obtain \[(r-t_{\xi})\Big{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\Big{)}^{p}\leq C(n,p)\int_{t_{\xi}}^{r}\big{|}\nabla(w-h)(x_{0}+t\xi)\big{|}^{p}\,dt.\] Note that this inequality is trivially satisfied if \(t_{\xi}=r\). Now by integrating over \(\xi\) we get \[C(n,p) \int_{B_{r}(x_{0})}\big{|}\nabla(w-h)(x)\big{|}^{p}\,dx\] \[\geq C(n,p)\int_{\partial B_{1}(0)}\int_{t_{\xi}}^{r}\big{|}\nabla(w-h)(x_{0}+t\xi)\big{|}^{p}\,dtd\xi\] \[\geq\Big{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\Big{)}^{p}\int_{\partial B_{1}(0)}(r-t_{\xi})\,d\xi\] \[=\Big{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\Big{)}^{p}\int_{\partial B_{1}(0)}\int_{t_{\xi}}^{r}1\,dtd\xi\] \[\geq\Big{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h\Big{)}^{p}\int_{B_{r}(x_{0})-B_{\tau r}(x_{0})}\chi_{\{w=0\}}\,dx,\] where the last inequality follows from the definition of \(t_{\xi}\). Finally, we get the desired by letting \(\tau\to 0\). **Lemma 2.6**.: _Let \(\mathbf{u}=\mathbf{u}_{\varepsilon}^{\delta}\) be a minimizer of \(J_{\varepsilon}\) over \(V_{\delta}\), and \(B\subset\Omega\) be a ball. Then there exists a unique \(v^{i}\in W^{1,p}(\Omega)\) that minimizes the functional_ \[\int_{\Omega}|\nabla v^{i}|^{p}\,dx\] _among all functions with \(v^{i}=\varphi^{i}\) on \(\partial\Omega\) and \(v^{i}\leq 0\) on \(\{u^{i}=0\}-B\). The functions \(v^{i}\) also satisfy_ 1. \(v^{i}=0\) _on_ \(\{u^{i}=0\}-B\)_,_ 2. \(\mathbf{v}=(v^{1},\ldots,v^{m})\in V_{\delta}\)_,_ 3. \(0\leq u^{i}\leq v^{i}\leq C_{0}=\max_{\partial\Omega}|\boldsymbol{\varphi}|\)_,_ 4. \(\int_{\Omega}v^{i}\,\Delta_{p}v^{i}\,dx=0\)_._ _Remark_.: Instead of a ball \(B\), we can also use other open subsets of \(\Omega\) in the above lemma. Proof.: The proof is similar to that of Lemma 3.7 in [12] (see also the proof of Theorem 3.6 in [2]). **Theorem 2.7**.: _Let \(\mathbf{u}=\mathbf{u}_{\varepsilon}^{\delta}\) be a minimizer of \(J_{\varepsilon}\) over \(V_{\delta}\). There exists a constant \(M=M_{\varepsilon}\), independent of \(\delta\), such that if for some \(j\) we have_ \[\frac{1}{r}\sup_{B_{r/2}(x)}u^{j}\geq M,\] _then \(B_{r}(x)\subset\{|\mathbf{u}|>0\}\), and \(\Delta_{p}u^{i}=0\) in \(B_{r}(x)\) for every \(i\)._ Proof.: Let \(\mathbf{v}\in V_{\delta}\) be the function given by Lemma 2.6 for \(B_{r}(x)\). Then we have \[J_{\varepsilon}(\mathbf{u})\leq J_{\varepsilon}(\mathbf{v}).\] Let \(\mathbf{h}_{0}\) be the vector-valued function in \(\Omega\) satisfying \(\Delta_{p}h_{0}^{i}=0\), and taking the boundary values \(\boldsymbol{\varphi}\) on \(\partial\Omega\).
Since \(0\leq u^{i}\leq v^{i}\leq h_{0}^{i}\), for each \(z\in\partial\Omega\) we have \[\partial_{\nu}h_{0}^{i}(z)\leq\partial_{\nu}v^{i}(z)\leq\partial_{\nu}u^{i}(z )\leq 0.\] Then by using the fact that \(\mathbf{u},\mathbf{v},\mathbf{h}_{0}\) take the same boundary values and therefore have equal tangential derivatives on \(\partial\Omega\), we deduce that \[a\leq A_{\nu}h_{0}^{i}(z)\leq A_{\nu}v^{i}(z)\leq A_{\nu}u^{i}(z),\] where \(a\) is a lower bound for \(A_{\nu}h_{0}^{i}\) (note that \(a\) does not depend on \(\delta\)). Hence by the property (2) of \(\Gamma\) we have \[\begin{split}\int_{\partial\Omega}&\Gamma\big{(}x,A_{ \nu}\mathbf{u}(x)\big{)}-\Gamma\big{(}x,A_{\nu}\mathbf{v}(x)\big{)}\,d\sigma\\ &=\sum_{i=1}^{m}\int_{\partial\Omega}\Gamma(x,A_{\nu}u^{1},\dots, A_{\nu}u^{i-1},A_{\nu}u^{i},A_{\nu}v^{i+1},\dots,A_{\nu}v^{m})\\ &\qquad\qquad-\Gamma(x,A_{\nu}u^{1},\dots,A_{\nu}u^{i-1},A_{\nu} v^{i},A_{\nu}v^{i+1},\dots,A_{\nu}v^{m})\,d\sigma\\ &\geq C_{a}\sum_{i=1}^{m}\int_{\partial\Omega}A_{\nu}u^{i}-A_{\nu }v^{i}\,d\sigma,\end{split} \tag{2.3}\] where \(C_{a}>0\) is the lower bound of \(\partial_{\xi_{i}}\Gamma\)'s on the set \(\{(x,\xi):\xi_{i}\geq a\}\). On the other hand, using the identity (2.2) we get \[\begin{split} C_{0}\int_{\partial\Omega}A_{\nu}u^{i}-A_{\nu}v^{i }\,d\sigma&\geq\int_{\partial\Omega}\varphi^{i}\big{(}A_{\nu}u^{i }-A_{\nu}v^{i}\big{)}\,d\sigma\\ &=\int_{\Omega}u^{i}\,\Delta_{p}u^{i}+|\nabla u^{i}|^{p}\,dy\\ &\qquad\qquad-\int_{\Omega}v^{i}\,\Delta_{p}v^{i}+|\nabla v^{i}|^ {p}\,dy\\ &\geq\int_{\Omega}|\nabla u^{i}|^{p}\,dy-\int_{\Omega}|\nabla v ^{i}|^{p}\,dy,\end{split} \tag{2.4}\] where \(C_{0}=\max_{\partial\Omega}|\boldsymbol{\varphi}|\), and in the last line we used the facts that \(\int_{\Omega}v^{i}\,\Delta_{p}v^{i}\,dy=0\) and \(u^{i},\Delta_{p}u^{i}\geq 0\). Now consider the function \(h^{i}\) in \(B_{r}(x)\) satisfying \(\Delta_{p}h^{i}=0\), and taking boundary values equal to \(u^{i}\). We extend \(h^{i}\) to be equal to \(u^{i}\) outside of \(B_{r}(x)\). Then we have \(\mathbf{h}=(h_{1},\dots,h_{m})\in V_{\delta}\). In addition, \(h^{i}=u^{i}=\varphi^{i}\) on \(\partial\Omega\) and \(h^{i}=u^{i}=0\) on \(\{u^{i}=0\}-B_{r}(x)\). Hence due to the minimality property of \(v^{i}\) given by Lemma 2.6 we have \[\int_{\Omega}|\nabla v^{i}|^{p}\,dy\leq\int_{\Omega}|\nabla h^{i}|^{p}\,dy.\] Combining this with the above inequality we get \[\begin{split} C_{0}\int_{\partial\Omega}A_{\nu}u^{i}-A_{\nu}v^{i }\,d\sigma&\geq\int_{\Omega}|\nabla u^{i}|^{p}-|\nabla v^{i}|^{p} \,dy\\ &\geq\int_{\Omega}|\nabla u^{i}|^{p}-|\nabla h^{i}|^{p}\,dy\\ &\geq C\int_{B_{r}(x)}|\nabla(u^{i}-h^{i})|^{p}\,dy,\end{split}\] where the last inequality can be proved similarly to the proof of Lemma 3.1 in [7]. (Note that in the last line we have also used the fact that \(u^{i}=h^{i}\) outside \(B_{r}(x)\).) 
Summing the above inequality for each \(i\), and using the facts that \(J_{\varepsilon}(\mathbf{u})\leq J_{\varepsilon}(\mathbf{v})\), and \(f_{\varepsilon}\) has Lipschitz constant equal to \(\frac{1}{\varepsilon}\), we get \[\frac{C_{a}}{C_{0}}\sum_{i\leq m}\int_{B_{r}(x)}|\nabla(u^{i}-h^{i})|^{p}\,dy \leq C_{a}\int_{\partial\Omega}\sum_{i\leq m}(A_{\nu}u^{i}-A_{\nu}v^{i})\,d\sigma\] \[\leq\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{u}(x)\big{)}-\Gamma\big{(}x,A_{\nu}\mathbf{v}(x)\big{)}\,d\sigma\] \[\leq f_{\varepsilon}\big{(}\big{|}\{|\mathbf{v}|>0\}\big{|}\big{)}-f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|}\big{)}\] \[\leq\frac{1}{\varepsilon}\big{|}B_{r}(x)\cap\{|\mathbf{u}|=0\}\big{|},\] since \(0\leq u^{i}\leq v^{i}\), and outside of \(B_{r}(x)\), \(|\mathbf{u}|=0\) implies \(|\mathbf{v}|=0\). Therefore by Lemma 2.5 applied to \(u^{j}\) we obtain \[\big{|}B_{r}(x)\cap\{|\mathbf{u}|=0\}\big{|} \geq\frac{\varepsilon C_{a}}{C_{0}}\sum_{i\leq m}\int_{B_{r}(x)}|\nabla(u^{i}-h^{i})|^{p}\,dy\] \[\geq\frac{\varepsilon C_{a}}{C_{0}}\int_{B_{r}(x)}|\nabla(u^{j}-h^{j})|^{p}\,dy\] \[\geq\frac{\varepsilon C_{a}}{cC_{0}}\Big{(}\frac{1}{r}\sup_{B_{r/2}(x)}h^{j}\Big{)}^{p}\cdot\big{|}B_{r}(x)\cap\{u^{j}=0\}\big{|}\] \[\geq\frac{\varepsilon C_{a}M^{p}}{cC_{0}}\big{|}B_{r}(x)\cap\{|\mathbf{u}|=0\}\big{|},\] since \(|\mathbf{u}|=0\) implies \(u^{j}=0\), and \(h^{j}\geq u^{j}\) as \(u^{j}\) is \(p\)-subharmonic. Hence if \(M>(\frac{cC_{0}}{\varepsilon C_{a}})^{\frac{1}{p}}\) then \(\big{|}B_{r}(x)\cap\{|\mathbf{u}|=0\}\big{|}\) must be zero, as desired. Note that in this case the above inequality also implies that \(u^{i}=h^{i}\) in \(B_{r}(x)\) for each \(i\); so \(u^{i}\) satisfies the equation in \(B_{r}(x)\). **Corollary 2.8**.: _All minimizers \(\mathbf{u}_{\varepsilon}^{\delta}\) are Lipschitz, and for every \(\Omega^{\prime}\subset\subset\Omega\) there exists a constant \(K_{\varepsilon}=K_{\varepsilon}(\Omega^{\prime})\), independent of \(\delta\), such that_ \[\big{\|}\mathbf{u}_{\varepsilon}^{\delta}\big{|}_{\Omega^{\prime}}\big{\|}_{\mathrm{Lip}}\leq K_{\varepsilon}.\] _In addition, \(\Delta_{p}(u_{\varepsilon}^{\delta})^{i}=0\) in the open set \(\{|\mathbf{u}_{\varepsilon}^{\delta}|>0\}\)._ Proof.: For simplicity we set \(\mathbf{u}=\mathbf{u}_{\varepsilon}^{\delta}\). First let us show that \(\{|\mathbf{u}|>0\}\) is an open set. Suppose \(x\in\{|\mathbf{u}|>0\}\). Then \(u^{j}(x)>0\) for some \(j\), and for small enough \(r\) we must have \[\frac{1}{r}\sup_{B_{r/2}(x)}u^{j}\geq\frac{1}{r}u^{j}(x)\geq M.\] Hence the previous theorem implies that \(B_{r}(x)\subset\{|\mathbf{u}|>0\}\) and we have \(\Delta_{p}u^{i}=0\) in \(B_{r}(x)\). Next note that \(\nabla\mathbf{u}=0\) a.e. in \(\{|\mathbf{u}|=0\}\). So suppose \(x\in\{|\mathbf{u}|>0\}\cap\Omega^{\prime}\). Let \(\Omega^{\prime}\subset\subset\tilde{\Omega}\subset\subset\Omega\), and \(B=B_{d}(x)\), where \(d=\text{dist}\big{(}x,\partial(\{|\mathbf{u}|>0\}\cap\tilde{\Omega})\big{)}\). If \(\partial B\) touches \(\partial\{|\mathbf{u}|=0\}\) then \(B_{d+d^{\prime}}(x)\) intersects \(\{|\mathbf{u}|=0\}\), and by the previous theorem we have \[\frac{1}{d+d^{\prime}}\sup_{B_{(d+d^{\prime})/2}(x)}u^{i}\leq M\] for every \(i\).
Hence in the limit \(d^{\prime}\to 0\) we get \[\frac{1}{d}\sup_{B_{d/2}(x)}u^{i}\leq M.\] Now since \(u^{i}\)'s are \(p\)-harmonic in \(B\), as shown in the proof of Lemma 3.1 of [6], we have \[|\nabla u^{i}(x)|\leq C\frac{1}{d}\sup_{B_{d/2}(x)}u^{i}\leq CM,\] where the constant \(C\) depends only on \(p\) and the dimension \(n\). On the other hand, if \(\partial B\) touches \(\partial\tilde{\Omega}\) then, by the interior derivative estimate of [9], we obtain (the dependence on \(d\) follows from the proof of this estimate; see equation (3.4) in [9]) \[|\nabla u^{i}(x)|\leq\frac{C(n,p)}{d^{n}}\|\mathbf{u}\|_{W^{1,p}}\leq C,\] since \(d\geq\text{dist}(\Omega^{\prime},\partial\tilde{\Omega})\), and \(\|\mathbf{u}\|_{W^{1,p}}\) is bounded independently of \(\delta\) as will be shown now. Let \(\Omega^{\prime}\subset\subset\Omega\) be a smooth open set with \(|\Omega-\Omega^{\prime}|=1\). Let \(\mathbf{u}_{0}\) be a vector-valued function on \(\Omega-\Omega^{\prime}\) that satisfies the equation \(\Delta_{p}u^{i}_{0}=0\), and takes the boundary values \(\boldsymbol{\varphi}\) on \(\partial\Omega\) and \(0\) on \(\partial\Omega^{\prime}\). Then for every small enough \(\delta\) we have \(\mathbf{u}_{0}\in V_{\delta}\). Hence (remember that \(\mathbf{u}=\mathbf{u}_{\varepsilon}^{\delta}\)) \[C=J_{\varepsilon}(\mathbf{u}_{0})\geq J_{\varepsilon}(\mathbf{ u}_{\varepsilon}^{\delta}) \geq\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{u}_{ \varepsilon}^{\delta}(x)\big{)}\,d\sigma\] \[\geq\int_{\partial\Omega}\sum_{i=1}^{m}\psi_{i}(x)A_{\nu}(u_{ \varepsilon}^{\delta})^{i}-C\,d\sigma,\] where we used (1.2) in the last line. Thus by Lemma 2.2 the \(\|\nabla\mathbf{u}_{\varepsilon}^{\delta}\|_{L^{p}(\Omega;\mathbb{R}^{m})}\) is bounded as \(\delta\to 0\), and the boundedness of \(\|\mathbf{u}_{\varepsilon}^{\delta}\|_{W^{1,p}}\) follows from Poincare inequality and the fact that all of \(\mathbf{u}_{\varepsilon}^{\delta}\)'s have the same boundary values. Finally, to see that \(\mathbf{u}\) is Lipschitz continuous on all of \(\Omega\), note that \(\mathbf{u}\) has \(p\)-harmonic components near the smooth boundary \(\partial\Omega\), attaining smooth boundary conditions \(\boldsymbol{\varphi}\); hence the gradient of \(\mathbf{u}\) is bounded near the boundary too. **Lemma 2.9**.: _There exists \(\delta_{0}=\delta_{0}(\varepsilon)>0\), such that for every \(\delta\) we have \(|\mathbf{u}_{\varepsilon}^{\delta}|>0\) on \(\Omega_{\delta_{0}}\)._ _Remark_.: Note that as a consequence, \(\Delta_{p}(u_{\varepsilon}^{\delta})^{i}=0\) on \(\Omega_{\delta_{0}}\) for every \(\delta\) (by Theorem 2.7). In other words, \(\mathbf{u}_{\varepsilon}^{\delta}\in V_{\delta_{0}}\) for every \(\delta\). Proof.: Suppose to the contrary that there is a sequence \(\mathbf{u}_{k}=\mathbf{u}_{\varepsilon}^{\delta_{k}}\) for which we have \[2d_{k}:=\operatorname{dist}(\{|\mathbf{u}_{k}|=0\},\partial\Omega)\to 0.\] Then the midpoint of the closest points on \(\{|\mathbf{u}_{k}|=0\}\) and \(\partial\Omega\), which we call \(x_{k}\), has distance \(d_{k}\) from both of these sets. So the boundary of the ball \(B_{d_{k}}(x_{k})\) touches both of these sets. In addition, by Theorem 2.7, for every \(t>0\) we must have \[\frac{1}{d_{k}}\sup_{B_{d_{k}/2}(x_{k}+t\nu_{k})}u_{k}^{i}\leq M_{\varepsilon}\] for every \(i\) (here \(\nu_{k}\) is the direction of the line segment from \(x_{k}\) to its closest point on \(\{|\mathbf{u}_{k}|=0\}\)). 
So in the limit \(t\to 0\) we get \[\sup_{B_{d_{k}/2}(x_{k})}u_{k}^{i}\leq M_{\varepsilon}d_{k}. \tag{2.5}\] We also have \[\sup_{B_{d_{k}}(x_{k})}|\mathbf{u}_{k}|\geq c_{0},\] where \(c_{0}=\min_{i}\min_{\partial\Omega}\varphi^{i}>0\). Indeed, at the point \(y_{k}\in\partial B_{d_{k}}(x_{k})\cap\partial\Omega\) we have \(u_{k}^{i}(y_{k})=\varphi^{i}(y_{k})\geq c_{0}\) (note that \(u_{k}^{i}\) is continuous up to the boundary). Next consider the functions \[\hat{\mathbf{u}}_{k}(x):=\frac{\mathbf{u}_{k}(x_{k}+d_{k}x)}{\sup_{B_{d_{k}}(x_{k})}|\mathbf{u}_{k}|}\] on \(B_{1}\). Then \(\hat{u}_{k}^{i}\) is positive and \(p\)-harmonic on \(B_{1}\), and we have \(\sup_{B_{1}}|\hat{\mathbf{u}}_{k}|=1\). In addition, by (2.5) we have \[\sup_{B_{1/2}}\hat{u}_{k}^{i}=\frac{\sup_{B_{d_{k}/2}(x_{k})}u_{k}^{i}}{\sup_{B_{d_{k}}(x_{k})}|\mathbf{u}_{k}|}\leq\frac{M_{\varepsilon}d_{k}}{c_{0}}\underset{k\to\infty}{\longrightarrow}0.\] Furthermore, note that \(\hat{u}_{k}^{i}\) is a uniformly bounded sequence of \(p\)-harmonic functions on \(B_{1}\), so there is \(\alpha>0\) such that for all \(r<1\) the Holder norms \(\|\hat{u}_{k}^{i}\|_{C^{0,\alpha}(\overline{B}_{r})}\) are uniformly bounded (see page 251 of [8]). Hence, by a diagonal argument, we can construct a subsequence of \(\hat{u}_{k}^{i}\), which we still denote by \(\hat{u}_{k}^{i}\), that locally uniformly converges to a nonnegative \(p\)-harmonic function \(\hat{u}_{\infty}^{i}\) on \(B_{1}\). In addition, \(\hat{u}_{\infty}^{i}\) must vanish on \(B_{1/2}\) by the above estimate. Thus by the strong maximum principle we must have \(\hat{\mathbf{u}}_{\infty}\equiv 0\) on \(B_{1}\). Now for \(y_{k}\in\partial B_{d_{k}}(x_{k})\cap\partial\Omega\) and \(r<d_{k}\) we have \[\operatorname*{osc}_{B_{r}(y_{k})\cap\Omega}u_{k}^{i}\leq C(n,p)\Big{(}r^{\alpha}+\operatorname*{osc}_{B_{r}(y_{k})\cap\partial\Omega}\varphi^{i}\Big{)}\leq Cr^{\alpha}\] for some \(\alpha\in(0,1)\). This estimate holds by Theorem 4.19 of [11] when \(1<p\leq n\). And when \(p>n\) this estimate holds due to the uniform Holder continuity of \(u_{k}^{i}\) on \(\overline{\Omega}\), since \(\|\mathbf{u}_{k}\|_{W^{1,p}(\Omega)}\) is uniformly bounded as we have seen in the proof of Corollary 2.8. Hence for \(r=d_{k}/2\) we have \[\min_{B_{d_{k}/2}(y_{k})\cap B_{d_{k}}(x_{k})}u_{k}^{i}\geq\min_{B_{d_{k}/2}(y_{k})\cap\Omega}u_{k}^{i}\geq\frac{1}{2}c_{0},\] where \(c_{0}=\min_{i}\min_{\partial\Omega}\varphi^{i}\). Therefore for \(\hat{y}_{k}=\frac{1}{d_{k}}(y_{k}-x_{k})\in\partial B_{1}\) we have \[\min_{B_{1/2}(\hat{y}_{k})\cap B_{1}}\hat{u}_{k}^{i}=\frac{1}{\sup_{B_{d_{k}}(x_{k})}|\mathbf{u}_{k}|}\min_{B_{d_{k}/2}(y_{k})\cap B_{d_{k}}(x_{k})}u_{k}^{i}\geq c>0,\] since \(\sup_{B_{d_{k}}(x_{k})}|\mathbf{u}_{k}|\leq mC_{0}\) where \(C_{0}=\max_{\partial\Omega}|\boldsymbol{\varphi}|\). Thus \(\hat{u}_{k}^{i}\) has a uniform positive lower bound on a subset of \(B_{1}\) with positive volume (where the volume is independent of \(k\)). So no subsequence of \(\hat{\mathbf{u}}_{k}\) can converge locally uniformly to \(\hat{\mathbf{u}}_{\infty}\equiv 0\), because otherwise it would converge uniformly to \(0\) outside a set of small volume, contradicting the uniform boundedness from below. Now we can find a minimizer for \(J_{\varepsilon}\) over \(V\). **Theorem 2.10**.: _There exists a minimizer \(\mathbf{u}_{\varepsilon}\in V\) for \(J_{\varepsilon}\).
Moreover, \(\mathbf{u}_{\varepsilon}\) is a Lipschitz function, and \(\Delta_{p}u_{\varepsilon}^{i}=0\) in the open set \(\{|\mathbf{u}_{\varepsilon}|>0\}\)._ _Remark_.: As we will see in the following proof, \(\mathbf{u}_{\varepsilon}^{\delta}\in V_{\delta_{0}}\) for \(\delta_{0}=\delta_{0}(\varepsilon)\) given by the above lemma. So in fact \(\mathbf{u}_{\varepsilon}\) is a minimizer of \(J_{\varepsilon}\) over some \(V_{\delta}\), and therefore it has all the properties of \(\mathbf{u}_{\varepsilon}^{\delta}\)'s that we have proved so far. In particular, we have \(|\mathbf{u}_{\varepsilon}|>0\) on \(\Omega_{\delta_{0}}\). Proof.: As we have shown in the proof of Corollary 2.8, \(\|\nabla\mathbf{u}_{\varepsilon}^{\delta}\|_{L^{p}(\Omega;\mathbb{R}^{m})}\) is bounded as \(\delta\to 0\). Hence there is a subsequence such that \(\mathbf{u}_{\varepsilon}^{\delta}\rightharpoonup\mathbf{u}_{\varepsilon}\) weakly in \(W^{1,p}\) (and also a.e.) with \(A(\nabla(u_{\varepsilon}^{\delta})^{i})\rightharpoonup A(\nabla u_{\varepsilon} ^{i})\) in \(L^{q}\) as \(\delta\to 0\). So, in particular, \(u^{i}_{\varepsilon}\geq 0\), \(u^{i}_{\varepsilon}\) is \(p\)-subharmonic, and attains the boundary condition \(\varphi^{i}\). Furthermore, by Corollary 2.8, \(\mathbf{u}^{\delta}_{\varepsilon}\to\mathbf{u}_{\varepsilon}\) uniformly on compact subsets of \(\Omega\). Hence for each ball \(\overline{B}\subset\{|\mathbf{u}_{\varepsilon}|>0\}\) and all small enough \(\delta\) we have \(\overline{B}\subset\{|\mathbf{u}^{\delta}_{\varepsilon}|>0\}\). Therefore by using test functions with support in \(B\) together with \(A(\nabla(u^{\delta}_{\varepsilon})^{i})\rightharpoonup A(\nabla u^{i}_{ \varepsilon})\) we can conclude that \(u^{i}_{\varepsilon}\) is \(p\)-harmonic in \(B\). The same reasoning applied to test functions with support in \(\Omega_{\delta_{0}}\), for \(\delta_{0}\) given by the previous lemma, implies that \(u^{i}_{\varepsilon}\) is \(p\)-harmonic in \(\Omega_{\delta_{0}}\), and thus \(\mathbf{u}_{\varepsilon}\in V_{\delta_{0}}\subset V\). In particular, \(u^{i}_{\varepsilon}\) is \(p\)-harmonic near the smooth boundary \(\partial\Omega\), attaining smooth boundary conditions \(\varphi^{i}\), so it is Lipschitz near \(\partial\Omega\). Moreover, \(\mathbf{u}_{\varepsilon}\) is Lipschitz inside \(\Omega\) away from its boundary, because it is the uniform limit of a sequence of Lipschitz functions with uniformly bounded Lipschitz constants. Hence \(\mathbf{u}_{\varepsilon}\) is Lipschitz on all of \(\Omega\). Finally note that \(\mathbf{u}_{\varepsilon}\) minimizes \(J_{\varepsilon}\) over \(V\), since for every \(\mathbf{w}\in V\) we have \(\mathbf{w}\in V_{\delta}\) for some \(\delta\). Thus \(J_{\varepsilon}(\mathbf{u}^{\delta}_{\varepsilon})\leq J_{\varepsilon}( \mathbf{w})\). However, \(\mathbf{u}^{\delta}_{\varepsilon}\to\mathbf{u}_{\varepsilon}\), so we get \(J_{\varepsilon}(\mathbf{u}_{\varepsilon})\leq J_{\varepsilon}(\mathbf{w})\) due to the semicontinuity of \(J_{\varepsilon}\). ## 3. Regularity of solutions to the penalized problem To simplify the notation, throughout this section we will suppress the index \(\varepsilon\) in \(\mathbf{u}_{\varepsilon}\). 
**Theorem 3.1**.: _For \(\tau\in(0,1/4)\) there exists \(m_{\varepsilon}(\tau)\) such that if for each \(i\) we have_ \[\frac{1}{r}\sup_{B_{r/2}(x)}u^{i}\leq m_{\varepsilon}(\tau),\] _then \(B_{\tau r}(x)\subset\{|\mathbf{u}|=0\}\)._ Proof.: Similarly to Lemma 2.6, we can show that there is \(v^{i}\in W^{1,p}(\Omega)\) that minimizes the functional \(\int_{\Omega}|\nabla v^{i}|^{p}\,dx\) among all functions with \(v^{i}=\varphi^{i}\) on \(\partial\Omega\) and \(v^{i}\leq 0\) on \(\{u^{i}=0\}\cup\overline{B}_{\tau r}(x)\). The function \(v^{i}\) also satisfies \(\Delta_{p}v^{i}\geq 0\), \(\int_{\Omega}v^{i}\,\Delta_{p}v^{i}\,dx=0\), and \(u^{i}\geq v^{i}\geq 0\) (to see this, note that \(\Delta_{p}v^{i}\geq\Delta_{p}u^{i}\) on \(\Omega-\big{(}\{u^{i}=0\}\cup\overline{B}_{\tau r}(x)\big{)}\subset\{|\mathbf{ u}|>0\}\), and \(v^{i}-u^{i}\leq 0\) on \(\{u^{i}=0\}\cup\overline{B}_{\tau r}(x)\) or \(\partial\Omega\)). In addition, we have \(\mathbf{v}=(v_{1},\ldots,v_{m})\in V_{\delta_{1}}\subset V\) (where \(\delta_{1}\) is small enough so that \(\overline{B}_{\tau r}(x)\subset\Omega-\Omega_{\delta_{1}}\)). Thus \[J_{\varepsilon}(\mathbf{u})\leq J_{\varepsilon}(\mathbf{v}).\] Let us assume that \(\delta_{1}\) is small enough so that \(\overline{B}_{r}(x)\subset\Omega-\Omega_{\delta_{1}}\) and \(\mathbf{u}\in V_{\delta_{1}}\). Let \(\mathbf{w}\) be a vector-valued \(p\)-harmonic function in \(\Omega_{\delta_{1}}\) with boundary values equal to \(\boldsymbol{\varphi}\) on \(\partial\Omega\) and equal to \(0\) on \(\partial\Omega_{\delta_{1}}-\partial\Omega\). Then we have \(u^{i}\geq v^{i}\geq w^{i}\geq 0\) (since \(\mathbf{u},\mathbf{v}\) are also \(p\)-harmonic on \(\Omega_{\delta_{1}}\), and nonnegative everywhere). Thus for each \(z\in\partial\Omega\) we have \[0\geq\partial_{\nu}w^{i}(z)\geq\partial_{\nu}v^{i}(z)\geq\partial_{\nu}u^{i}(z).\] Next using the fact that \(\mathbf{u},\mathbf{v},\mathbf{w}\) take the same boundary values on \(\partial\Omega\), and therefore have equal tangential derivatives on \(\partial\Omega\), we deduce that \[0\geq A_{\nu}w^{i}(z)\geq A_{\nu}v^{i}(z)\geq A_{\nu}u^{i}(z).\] Now similar to (2.3) we can show that \[\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{v}(x)\big{)}-\Gamma\big{(} x,A_{\nu}\mathbf{u}(x)\big{)}\,d\sigma\leq C_{1}\sum_{i=1}^{m}\int_{\partial\Omega} A_{\nu}v^{i}-A_{\nu}u^{i}\,d\sigma,\] where \(C_{1}>0\) is the upper bound of \(\partial_{\xi_{i}}\Gamma\)'s on the set \(\{(x,\xi):\xi_{i}\leq 0\}\). On the other hand, using the identity (2.2) we obtain (using the notation \(c_{0}=\min_{i}\min_{\partial\Omega}\varphi^{i}\)) \[c_{0}\int_{\partial\Omega}A_{\nu}v^{i}-A_{\nu}u^{i}\,d\sigma \leq\int_{\partial\Omega}\varphi^{i}\big{(}A_{\nu}v^{i}-A_{\nu}u^ {i}\big{)}\,d\sigma\] \[=\int_{\Omega}v^{i}\,\Delta_{p}v^{i}+|\nabla v^{i}|^{p}\,dy\] \[\qquad\qquad-\int_{\Omega}u^{i}\,\Delta_{p}u^{i}+|\nabla u^{i}|^ {p}\,dy \tag{3.1}\] \[=\int_{\Omega}|\nabla v^{i}|^{p}\,dy-\int_{\Omega}|\nabla u^{i}|^ {p}\,dy,\] where in the last line we used the facts that \(\int_{\Omega}v^{i}\,\Delta_{p}v^{i}\,dy=0\), and \(\Delta_{p}u^{i}=0\) on \(\{u^{i}\neq 0\}\subset\{|\mathbf{u}|>0\}\). 
Summing the above inequality for each \(i\), and using the facts that \(J_{\varepsilon}(\mathbf{u})\leq J_{\varepsilon}(\mathbf{v})\), and the derivative of \(f_{\varepsilon}\) is bounded below by \(\varepsilon\), we get \[\frac{C_{1}}{c_{0}}\sum_{i\leq m}\int_{\Omega}|\nabla v^{i}|^{p}-| \nabla u^{i}|^{p}\,dy \geq C_{1}\int_{\partial\Omega}\sum_{i\leq m}(A_{\nu}v^{i}-A_{ \nu}u^{i})\,d\sigma\] \[\geq\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{v}(x) \big{)}-\Gamma\big{(}x,A_{\nu}\mathbf{u}(x)\big{)}\,d\sigma\] \[\geq f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|}\big{)} -f_{\varepsilon}\big{(}\big{|}\{|\mathbf{v}|>0\}\big{|}\big{)} \tag{3.2}\] \[\geq\varepsilon\big{|}\{|\mathbf{u}|>0\}\cap\{|\mathbf{v}|=0\} \big{|}\] \[\geq\varepsilon\big{|}\{|\mathbf{u}|>0\}\cap B_{\tau r}(x)\big{|},\] since \(u^{i}\geq v^{i}\geq 0\) and \(v^{i}=0\) in \(B_{\tau r}(x)\). Next we define \(g:(0,\infty)\to\mathbb{R}\) by \[g(t):=\begin{cases}t^{\frac{p-n}{p-1}}-(\tau r)^{\frac{p-n}{p-1}}&p>n,\\ \log t-\log(\tau r)&p=n,\\ (\tau r)^{\frac{p-n}{p-1}}-t^{\frac{p-n}{p-1}}&p<n.\end{cases}\] Note that \(g\) is an increasing function that vanishes at \(t=\tau r\), and is negative for \(t<\tau r\). In addition, \(g(|x|)\) is a \(p\)-harmonic function in \(\mathbb{R}^{n}-\{0\}\), which is negative on \(B_{\tau r}(x)\) and vanishes on \(\partial B_{\tau r}(x)\). Now let us define \(h^{i}:B_{\sqrt{\tau}r}(x)\to\mathbb{R}\) by \[h^{i}(y):=\min\{u^{i}(y),\,\frac{s_{i}}{g(\sqrt{\tau}r)}\big{(}g(|y-x|)\big{)} ^{+}\},\] where \(s_{i}:=\max_{\overline{B}_{\sqrt{\tau}r}(x)}u^{i}\). We extend \(h^{i}\) by \(u^{i}\) outside of \(B_{\sqrt{\tau}r}(x)\). Note that we have \(h^{i}=0\) on \(\{u^{i}=0\}\cap\overline{B}_{\tau r}(x)\) and \(h^{i}=u^{i}=\varphi^{i}\) on \(\partial\Omega\). Hence \(h^{i}\) competes with \(v^{i}\), and we have \(\int_{\Omega}|\nabla v^{i}|^{p}\,dx\leq\int_{\Omega}|\nabla h^{i}|^{p}\,dx\). Therefore we can exchange \(v^{i}\) by \(h^{i}\) in the inequality (3.2) to get \[\frac{\varepsilon c_{0}}{C_{1}}\big{|}\{|\mathbf{u}|>0\}\cap B_{\tau r}(x) \big{|}\leq\sum_{i\leq m}\int_{B_{\sqrt{\tau}r}(x)}|\nabla h^{i}|^{p}-|\nabla u ^{i}|^{p}\,dy.\] Now since \(h^{i}=0\) on \(B_{\tau r}(x)\), we can rewrite the above inequality as \[\frac{\varepsilon c_{0}}{C_{1}}\big{|}\{|\mathbf{u}|>0\}\cap B_{ \tau r}(x)\big{|}+ \sum_{i\leq m}\int_{B_{\tau r}(x)}|\nabla u^{i}|^{p}\,dy \tag{3.3}\] \[\leq\sum_{i\leq m}\int_{B_{\sqrt{\tau}r}(x)-B_{\tau r}(x)}|\nabla h ^{i}|^{p}-|\nabla u^{i}|^{p}\,dy.\] But \[|\nabla h^{i}|^{p}-|\nabla u^{i}|^{p}\leq-p|\nabla h^{i}|^{p-2}\nabla h^{i} \cdot\nabla(u^{i}-h^{i}),\] since for two vectors \(a,b\) we have \(|a|^{p}-|b|^{p}\leq-p|a|^{p-2}a\cdot(b-a)\). 
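(To see this elementary inequality, note that the map \(\xi\mapsto|\xi|^{p}\) is convex for \(p>1\), with gradient \(p|\xi|^{p-2}\xi\); hence for any two vectors \(a,b\) we have \[|b|^{p}\geq|a|^{p}+p|a|^{p-2}a\cdot(b-a),\] which rearranges to the stated bound \(|a|^{p}-|b|^{p}\leq-p|a|^{p-2}a\cdot(b-a)\).)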
So we can estimate the right hand side of (3.3) as follows (using integration by parts, and the facts that \(\Delta_{p}h^{i}=0\) on \(\{u^{i}>h^{i}\}\), \(h^{i}=0\) on \(\partial B_{\tau r}(x)\) and \(h^{i}=u^{i}\) on \(\partial B_{\sqrt{\tau}r}(x)\)): \[\int_{B_{\sqrt{\tau}r}(x)-B_{\tau r}(x)} |\nabla h^{i}|^{p}-|\nabla u^{i}|^{p}\,dy\] \[\leq-p\int_{B_{\sqrt{\tau}r}(x)-B_{\tau r}(x)}|\nabla h^{i}|^{p-2} \nabla(u^{i}-h^{i})\cdot\nabla h^{i}\,dy\] \[=p\int_{\partial B_{\tau r}(x)}(u^{i}-h^{i})|\nabla h^{i}|^{p-2} \nabla h^{i}\cdot\nu\,d\sigma\] \[\qquad-p\int_{\partial B_{\sqrt{\tau}r}(x)}(u^{i}-h^{i})|\nabla h ^{i}|^{p-2}\nabla h^{i}\cdot\nu\,d\sigma\] \[=p\int_{\partial B_{\tau r}(x)}u^{i}|\nabla h^{i}|^{p-2}\nabla h^ {i}\cdot\nu\,d\sigma=C(n,p,\tau)\frac{s_{i}^{p-1}}{r^{p-1}}\int_{\partial B_{ \tau r}(x)}u^{i}\,d\sigma,\] where the last equality is calculated using the fact \(h^{i}(y)=\frac{s_{i}}{g(\sqrt{\tau}r)}\big{(}g(|y-x|)\big{)}^{+}=0\) on \(\overline{B}_{\tau r}(x)\); hence on \(\partial B_{\tau r}(x)\) we have \[\nabla h^{i}=C(\tau)s_{i}r^{\frac{n-p}{p-1}}\begin{cases}\frac{2|p-n|}{p-1}|y- x|^{\frac{2-n-p}{p-1}}(y-x)&p\neq n,\\ |y-x|^{-2}(y-x)&p=n,\end{cases}\] and thus \[|\nabla h^{i}|^{p-2}\nabla h^{i}\cdot\nu =C(n,p,\tau)s_{i}^{p-1}r^{n-p}|y-x|^{2-n-p}|y-x|^{p-2}(y-x)\cdot \frac{(y-x)}{\tau r}\] \[=C(n,p,\tau)s_{i}^{p-1}r^{n-p}|y-x|^{-n+2}\frac{1}{\tau r}=C(n,p, \tau)\frac{s_{i}^{p-1}}{r^{p-1}}.\] Hence (3.3) becomes \[\frac{\varepsilon c_{0}}{C_{1}}\big{|}\{|\mathbf{u}|>0\}\cap B_{ \tau r}(x)\big{|}+ \sum_{i\leq m}\int_{B_{\tau r}(x)}|\nabla u^{i}|^{p}\,dy \tag{3.4}\] \[\leq C(n,p,\tau)\sum_{i\leq m}\frac{s_{i}^{p-1}}{r^{p-1}}\int_{ \partial B_{\tau r}(x)}u^{i}\,d\sigma.\] On the other hand we have \[\int_{\partial B_{\tau r}(x)}u^{i}\,d\sigma \leq c(n,\tau)\Big{(}\int_{B_{\tau r}(x)}u^{i}\,dy+\int_{B_{ \tau r}(x)}|\nabla u^{i}|\,dy\Big{)} \tag{3.5}\] \[\leq c(n,\tau)\Big{(}(s_{i}+1)\cdot\big{|}\{|\mathbf{u}|>0\}\cap B _{\tau r}(x)\big{|}+\int_{B_{\tau r}(x)}|\nabla u^{i}|^{p}\,dy\Big{)},\] where in the last line we estimated \(u^{i},|\nabla u^{i}|\) from above by \(s_{i},1+|\nabla u^{i}|^{p}\) on the set \(\{u^{i}>0\}\subset\{|\mathbf{u}|>0\}\). Next note that \[s_{i}=\max_{\overline{B}_{\sqrt{\tau}r}(x)}u^{i}\leq\sup_{B_{r/2}(x)}u^{i}\leq rm _{\varepsilon}(\tau), \tag{3.6}\] since \(\sqrt{\tau}<1/2\). Combining the inequalities (3.4), (3.5), and (3.6) we get \[\frac{\varepsilon c_{0}}{C_{1}}\big{|}\{|\mathbf{u}|>0\}\cap B_{ \tau r}(x)\big{|}+\sum_{i\leq m}\int_{B_{\tau r}(x)}|\nabla u^{i}|^{p}\,dy\] \[\leq cC\sum_{i\leq m}\frac{s_{i}^{p-1}}{r^{p-1}}\Big{(}(s_{i}+1) \cdot\big{|}\{|\mathbf{u}|>0\}\cap B_{\tau r}(x)\big{|}+\int_{B_{\tau r}(x)}| \nabla u^{i}|^{p}\,dy\Big{)}\] \[\leq cC\,m_{\varepsilon}^{p-1}(\tau)\Big{(}\big{|}\{|\mathbf{u}| >0\}\cap B_{\tau r}(x)\big{|}\sum_{i\leq m}(s_{i}+1)+\sum_{i\leq m}\int_{B_{ \tau r}(x)}|\nabla u^{i}|^{p}\,dy\Big{)}.\] Now if \(m_{\varepsilon}(\tau)\) is small enough, we must necessarily have \(|\mathbf{u}|=0\) on \(B_{\tau r}(x)\), as desired. Now let us set \[U :=\{x\in\Omega:|\mathbf{u}(x)|>0\},\] \[E :=\{x\in\Omega:|\mathbf{u}(x)|=0\}.\] **Lemma 3.2**.: _For every \(i\) we have_ \[U=\{x\in\Omega:u^{i}(x)>0\},\qquad E=\{x\in\Omega:u^{i}(x)=0\}.\] Proof.: By Theorem 2.10, each \(u^{i}\) is \(p\)-harmonic in the open set \(U\). So in each component of \(U\) either \(u^{i}>0\) or \(u^{i}\equiv 0\) (by the strong maximum principle). Now consider a component of \(U\), say \(U_{1}\). 
If \(\partial U_{1}\) does not intersect \(\partial\Omega\) then it must be a subset of \(E\). Therefore every \(u^{i}\) vanishes on \(\partial U_{1}\), and hence every \(u^{i}\) vanishes on \(U_{1}\) by the maximum principle. So we would have \(U_{1}\subset E\), which is a contradiction. Thus \(\partial U_{1}\) must intersect \(\partial\Omega\). Hence each \(u^{i}>0\) on \(U_{1}\), since they are positive on \(\partial\Omega\). Therefore each \(u^{i}\) is positive on every component of \(U\), as desired. **Corollary 3.3**.: _There are \(c,C>0\) such that for \(x\in U\) near \(\partial E\) we have_ \[c\cdot\mathrm{dist}(x,\partial E)\leq|\mathbf{u}(x)|\leq C\cdot\mathrm{dist}(x,\partial E).\] Proof.: The right hand side inequality holds according to the Lipschitz regularity of the solutions, Theorem 2.10. To see the left hand side inequality, we argue indirectly. Assume to the contrary that there exists a sequence \(x_{k}\in U\) such that \[|\mathbf{u}(x_{k})|\leq\frac{1}{k_{2}}\mathrm{dist}(x_{k},\partial E). \tag{3.7}\] Let \(r_{k}=\operatorname{dist}(x_{k},\partial E)\) and define \[\mathbf{u}_{k}(x)=\frac{\mathbf{u}(x_{k}+r_{k}x)}{r_{k}}.\] The sequence \(\mathbf{u}_{k}\) is uniformly bounded and uniformly Lipschitz in \(B_{1}\) due to Lipschitz regularity of \(\mathbf{u}\) and assumption (3.7). Recall that \(\Delta_{p}u_{k}^{i}=0\) in \(U\), then we may choose a converging subsequence \(\mathbf{u}_{k}\to\mathbf{u}_{0}\) such that \(u_{0}^{i}\) is also \(p\)-harmonic. Furthermore, by Theorem 3.1 we get that \[\sup_{B_{1/2}(0)}|\mathbf{u}_{0}|=\lim_{k\to\infty}\sup_{B_{1/2}(0)}|\mathbf{u }_{k}|\geq m_{\varepsilon}>0,\] since \(|\mathbf{u}_{k}(0)|>0\). Also, (3.7) yields that \(\mathbf{u}_{0}(0)=0\), which contradicts the maximum (minimum) principle; remember that each component of \(\mathbf{u}_{0}\) is nonnegative. **Corollary 3.4**.: _There exists \(c=c_{\varepsilon}\in(0,1)\) such that for any \(x\in\partial U\) and small enough \(r\) we have_ \[c\leq\frac{\left|E\cap B_{r}(x)\right|}{\left|B_{r}(x)\right|}\leq 1-c. \tag{3.8}\] Proof.: The proof is similar to the proof of Theorem 4.2 in [7]. By Theorem 3.1, there exists \(z\in B_{r/2}(x)\) such that \(|\mathbf{u}(z)|\geq m_{\varepsilon}r>0\). Now for any \(y\in B_{\tau r}(z)\) we have \[|\mathbf{u}(y)-\mathbf{u}(z)|\leq\operatorname{Lip}(\mathbf{u})|y-z|< \operatorname{Lip}(\mathbf{u})\tau r<\frac{m_{\varepsilon}r}{2},\] provided that \(\tau\) is small enough. Hence we must have \(|\mathbf{u}(y)|>\frac{m_{\varepsilon}r}{2}>0\). This gives the upper estimate in (3.8). To prove the estimate from below, suppose to the contrary that there exists a sequence of points \(x_{k}\in\partial U\) and radii \(r_{k}\to 0\) such that \[\left|\{|\mathbf{u}|=0\}\cap B_{r_{k}}(x_{k})\right|<\frac{1}{k}|B_{r_{k}}(x_ {k})|=\frac{1}{k}r_{k}^{n}\left|B_{1}\right|.\] Now let us define \[\mathbf{u}_{k}(x)=\frac{\mathbf{u}(x_{k}+r_{k}x)}{r_{k}}.\] Note that \(\mathbf{u}_{k}(0)=\mathbf{u}(x_{k})=0\), and thus \(\mathbf{u}_{k}\) is uniformly bounded and uniformly Lipschitz in \(B_{1}=B_{1}(0)\) due to Lipschitz regularity of \(\mathbf{u}\). Also \[\left|\{|\mathbf{u}_{k}|=0\}\cap B_{1}\right|=\frac{1}{r_{k}^{n}}\big{|}\{| \mathbf{u}|=0\}\cap B_{r_{k}}(x_{k})\big{|}\underset{k\to\infty}{\longrightarrow}0.\] Let \(v_{k}^{i}\) be a \(p\)-harmonic function in \(B_{1/2}\) with boundary data \(v_{k}^{i}=u_{k}^{i}\) on \(\partial B_{1/2}\). 
Then \(h_{k}^{i}(x)=r_{k}v_{k}^{i}(\frac{x-x_{k}}{r_{k}})\) is a \(p\)-harmonic function in \(B_{r_{k}/2}(x_{k})\) with boundary data \(h^{i}_{k}=u^{i}\) on \(\partial B_{r_{k}/2}(x_{k})\). Now, similarly to the proof of Theorem 2.7, we can show that \[\int_{B_{1/2}}|\nabla(u^{i}_{k}-v^{i}_{k})|^{p}\,dx =\frac{1}{r_{k}^{n}}\int_{B_{r_{k}/2}(x_{k})}|\nabla(u^{i}-h^{i}_{k})|^{p}\,dx\] \[\leq\frac{C}{\varepsilon}\frac{1}{r_{k}^{n}}\big{|}\{|\mathbf{u}|=0\}\cap B_{r_{k}}(x_{k})\big{|}\underset{k\to\infty}{\longrightarrow}0. \tag{3.9}\] (Note that the constant \(C\) does not depend on the radius \(r_{k}\) or the point \(x_{k}\).) Since \(u^{i}_{k}\) and therefore \(v^{i}_{k}\) are uniformly Lipschitz in \(B_{1/4}\), we may assume that \(u^{i}_{k}\to u^{i}_{0}\) and \(v^{i}_{k}\to v^{i}_{0}\) uniformly in \(B_{1/4}\). Observe that \(\Delta_{p}v^{i}_{0}=0\), and (3.9) implies that \(u^{i}_{0}=v^{i}_{0}+C\) for some constant \(C\). Thus \(\Delta_{p}u^{i}_{0}=0\) in \(B_{1/4}\), and from the strong minimum principle it follows that \(u^{i}_{0}\equiv 0\) in \(B_{1/4}\), since \(u^{i}_{0}\geq 0\) and \(u^{i}_{0}(0)=\lim u^{i}_{k}(0)=0\). On the other hand, the nondegeneracy property, Theorem 3.1, implies that (since \(x_{k}\) is not in the interior of \(\{|\mathbf{u}|=0\}\)) \[\|\mathbf{u}_{k}\|_{L^{\infty}(B_{1/4})}=\frac{1}{r_{k}}\|\mathbf{u}\|_{L^{\infty}(B_{r_{k}/4}(x_{k}))}\geq m_{\varepsilon}/2>0.\] Therefore we get \(\|\mathbf{u}_{0}\|_{L^{\infty}(B_{1/4})}\geq m_{\varepsilon}/2\), which is a contradiction. Hence we can apply the results in section 4 of [3] and in section 3 of [4] to conclude the following (see also sections 5 and 6 of [7]). **Theorem 3.5**.: _Let \(\mathbf{u}=\mathbf{u}_{\varepsilon}\) be a minimizer of \(J_{\varepsilon}\) over \(V\). Then we have_ 1. _The_ \((n-1)\)_-dimensional Hausdorff measure of_ \(\partial E\) _is locally finite, i.e._ \(\mathcal{H}^{n-1}(\Omega^{\prime}\cap\partial E)<\infty\) _for every_ \(\Omega^{\prime}\subset\subset\Omega\)_. Moreover, there exist positive constants_ \(c_{\varepsilon},C_{\varepsilon}\)_, depending on_ \(n,p,\Omega,\Omega^{\prime},\varepsilon\)_, such that for each ball_ \(B_{r}(x)\subset\Omega^{\prime}\) _with_ \(x\in\partial E\) _we have_ \[c_{\varepsilon}r^{n-1}\leq\mathcal{H}^{n-1}(B_{r}(x)\cap\partial E)\leq C_{\varepsilon}r^{n-1}.\] 2. _There exist Borel functions_ \(q^{i}=q^{i}_{\varepsilon}\) _such that_ \[\Delta_{p}u^{i}=q^{i}\,\mathcal{H}^{n-1}\llcorner\,\partial E,\] _that is, for any_ \(\zeta\in C_{0}^{\infty}(\Omega)\) _we have_ \[-\int_{\Omega}A[u^{i}]\cdot\nabla\zeta\,dy=\int_{\partial E}\zeta q^{i}\,d\mathcal{H}^{n-1}.\] 3. _For_ \(\mathcal{H}^{n-1}\)_-a.e. points_ \(x\in\partial E\) _we have_ \[c_{\varepsilon}\leq\sum_{i=1}^{m}q^{i}(x)\leq C_{\varepsilon}.\] 4. _For_ \(\mathcal{H}^{n-1}\)_-a.e. points_ \(x\in\partial E\) _an outward unit normal_ \(\nu=\nu_{E}(x)\) _is defined, and_ \[u^{i}(x+y)=(q^{i}(x))^{\frac{1}{p-1}}(y\cdot\nu)^{+}+o(|y|),\] _which allows us to define_ \(A_{\nu}u^{i}(x)=q^{i}(x)\) _at those points._ 5. _The reduced boundary_ \(\partial_{\mathrm{red}}E\) _satisfies_ \(\mathcal{H}^{n-1}(\partial E-\partial_{\mathrm{red}}E)=0\)_._ ## 4. The Original Problem In this section we will show that for \(\varepsilon>0\) small enough, a minimizer of \(J_{\varepsilon}\) over \(V\) satisfies \(\big{|}\{|\mathbf{u}_{\varepsilon}|>0\}\big{|}=1\), and hence it can be regarded as a solution to our original problem (1.1). 
Remember that \[U=U_{\varepsilon}=\{|\mathbf{u}_{\varepsilon}|>0\},\qquad E=E_{\varepsilon}= \{|\mathbf{u}_{\varepsilon}|=0\}.\] Note that by Lemma 2.9, the free boundary \(\partial E\) has a positive distance from the fixed boundary \(\partial\Omega\). We say \(x\in\partial E\) is a _regular point_ of the free boundary if it satisfies (3) and (4) in Theorem 3.5. The set of such regular points of the free boundary will be denoted by \(\mathcal{R}=\mathcal{R}_{\varepsilon}\); Theorem 3.5 shows that \(\mathcal{H}^{n-1}(\partial E-\mathcal{R})=0\). **Lemma 4.1**.: _There is a constant \(C>0\), independent of \(\varepsilon\), such that_ \[\inf_{\mathcal{R}_{\varepsilon}}\Big{(}\sum_{i\leq m}q^{i}_{\varepsilon} \Big{)}\leq C.\] _Remark_.: Note that \(\sum_{i\leq m}q^{i}_{\varepsilon}\geq c_{\varepsilon}>0\) by Theorem 3.5. Proof.: Let \(\Omega^{\prime}\subset\subset\Omega\) be a smooth open set with \(|\Omega-\Omega^{\prime}|=1\). Let \(\mathbf{u}_{0}\) be a vector-valued function on \(\Omega-\Omega^{\prime}\) that satisfies the equation \(\Delta_{p}u^{i}_{0}=0\), and takes the boundary values \(\boldsymbol{\varphi}\) on \(\partial\Omega\) and \(0\) on \(\partial\Omega^{\prime}\). Then for some small enough \(\delta_{0}\) we have \(\mathbf{u}_{0}\in V_{\delta_{0}}\subset V\); hence \[C=\int_{\partial\Omega}\Gamma(x,A_{\nu}\mathbf{u}_{0})\,d\sigma +1 =J_{\varepsilon}(\mathbf{u}_{0})\geq J_{\varepsilon}(\mathbf{u}_{ \varepsilon})\] \[=\int_{\partial\Omega}\Gamma(x,A_{\nu}\mathbf{u}_{\varepsilon}) \,d\sigma+f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}_{\varepsilon}|>0\}\big{|} \big{)}\] \[\geq\int_{\partial\Omega}\sum_{i=1}^{m}\psi_{i}A_{\nu}u^{i}_{ \varepsilon}-C\,d\sigma+f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}_{\varepsilon }|>0\}\big{|}\big{)}\] \[\geq C\sum_{i=1}^{m}\int_{\Omega}|\nabla u^{i}_{\varepsilon}|^{p} \,dx-C+f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}_{\varepsilon}|>0\}\big{|} \big{)}\] \[\geq-C+\frac{1}{\varepsilon}\big{(}\big{|}\{|\mathbf{u}_{ \varepsilon}|>0\}\big{|}-1\big{)},\] where we have used (1.2) and Lemma 2.2. Thus we get the bound \[|U|=\big{|}\{|\mathbf{u}_{\varepsilon}|>0\}\big{|}\leq 1+C\varepsilon.\] Note that \(J_{\varepsilon}(\mathbf{u}_{0})\), and thus \(C\), does not depend on \(\varepsilon\) due to the definition of \(f_{\varepsilon}\). As a result, we have a lower bound for the volume of \(E\). Hence, by the isoperimetric inequality, we have a lower bound for \(\mathcal{H}^{n-1}(\partial E)\), independent of \(\varepsilon\). Now note that (keep in mind that \(\nu_{E}\) points to the interior of \(U\)) \[\int_{\partial\Omega}A_{\nu}u_{\varepsilon}^{i}\,d\sigma-\int_{\partial E}A_{ \nu}u_{\varepsilon}^{i}\,d\mathcal{H}^{n-1}=\int_{U}\Delta_{p}u_{\varepsilon}^ {i}\,dx=0.\] Therefore we get \[\int_{\partial E}A_{\nu}u_{\varepsilon}^{i}\,d\mathcal{H}^{n-1}= \int_{\partial\Omega}A_{\nu}u_{\varepsilon}^{i}\,d\sigma =\int_{\Omega}\Delta_{p}u_{\varepsilon}^{i}\,dx\] \[\leq C+C\int_{\partial\Omega}\psi_{i}A_{\nu}u_{\varepsilon}^{i} \,d\sigma,\] where the last inequality follows from the remark below Lemma 2.2. 
Thus we have \[\inf_{\mathcal{R}_{\varepsilon}}\Big{(}\sum_{i\leq m}A_{\nu}u_{ \varepsilon}^{i}\Big{)}\mathcal{H}^{n-1}(\partial E) \leq\int_{\partial E}\sum_{i\leq m}A_{\nu}u_{\varepsilon}^{i}\, d\mathcal{H}^{n-1}\] \[\leq C+C\int_{\partial\Omega}\sum_{i\leq m}\psi_{i}A_{\nu}u_{ \varepsilon}^{i}\,d\sigma\] (by (1.2)) \[\leq C+C\int_{\partial\Omega}\Gamma\big{(}x,A_{\nu}\mathbf{u}_{ \varepsilon}\big{)}\,d\sigma\] \[\leq C+CJ_{\varepsilon}(\mathbf{u}_{\varepsilon})\leq C+CJ_{ \varepsilon}(\mathbf{u}_{0})\leq C,\] which gives the desired (noting that \(q^{i}=A_{\nu}u_{\varepsilon}^{i}\) by Theorem 3.5). **Lemma 4.2**.: _For small enough \(\varepsilon\) we have_ \[\big{|}\{|\mathbf{u}_{\varepsilon}|>0\}\big{|}\geq 1.\] Proof.: Consider a point \(z_{0}\in\Omega^{c}\) which has distance \(\delta_{0}\) from \(\partial\Omega\). Then the ball \(B_{\delta_{0}}(z_{0})\) is an exterior tangent ball to \(\partial\Omega\). Let \(t=t(\varepsilon)\) be the first time at which \(\partial B_{\delta_{0}+t}(z_{0})\) intersects \(\partial\{|\mathbf{u}_{\varepsilon}|=0\}\), at a point \(x_{0}=x_{0}(\varepsilon)\). Now let \(v\) be a \(p\)-harmonic function in \(B_{\delta_{0}+t}(z_{0})-\overline{B}_{\delta_{0}}(z_{0})\) with boundary values \(0\) on \(\partial B_{\delta_{0}+t}(z_{0})\) and \(c_{0}\) on \(\partial B_{\delta_{0}}(z_{0})\), where \(c_{0}=\min_{i}\min_{\partial\Omega}\varphi^{i}>0\). Then on \(\partial\big{(}\Omega\cap B_{\delta_{0}+t}(z_{0})\big{)}\) we have \(v\leq u^{i}\); so by the maximum principle we have \(v\leq u^{i}\) in \(\Omega\cap B_{\delta_{0}+t}(z_{0})\). However, by an easy modification of the proof of Hopf's lemma (Lemma 2.4), we can see that \[v(x)\geq cc_{0}\operatorname{dist}\bigl{(}x,\partial B_{\delta_{0}+t}(z_{0}) \bigr{)},\] where the constant \(c\) only depends on \(n,p,\delta_{0}\). Therefore, for points \(x\) in the line segment between \(x_{0},z_{0}\) we have \[u^{i}(x)\geq v(x)\geq cc_{0}\operatorname{dist}\bigl{(}x,\partial B_{\delta_{0}+ t}(z_{0})\bigr{)}=cc_{0}|x-x_{0}|.\] Now consider the ball \(B_{r}(x_{0})\) for small enough \(r\). Then we have \[\frac{1}{r}\sup_{B_{r/2}(x_{0})}u^{i}\geq\frac{1}{r}cc_{0}\frac{r}{2}=\frac{cc_ {0}}{2},\] independently of \(\varepsilon\). Let \(\mathbf{h}\) be the vector-valued function which satisfies \(\Delta_{p}h^{i}=0\) in \(B_{r}(x_{0})\), and is equal to \(\mathbf{u}\) in \(\Omega-B_{r}(x_{0})\). By Lemma 2.5 and the fact that \(h^{i}\geq u^{i}\) we have \[\int_{B_{r}(x_{0})}|\nabla(u^{i}-h^{i})|^{p}\,dy \geq C\Bigl{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}h^{i}\Bigr{)}^{p} \cdot\bigl{|}B_{r}(x_{0})\cap\{u^{i}=0\}\bigr{|}\] \[\geq C\Bigl{(}\frac{1}{r}\sup_{B_{r/2}(x_{0})}u^{i}\Bigr{)}^{p} \cdot\bigl{|}B_{r}(x_{0})\cap\{u^{i}=0\}\bigr{|}\] \[\geq C\bigl{|}B_{r}(x_{0})\cap\{u^{i}=0\}\bigr{|}\geq C\bigl{|}B_ {r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|}.\] Next let \(\mathbf{v}\) be the function given by Lemma 2.6 for \(B_{r}(x_{0})\). We know that \(J_{\varepsilon}(\mathbf{u})\leq J_{\varepsilon}(\mathbf{v})\). Then similarly to the proof of Theorem 2.7 we can see that \[C\sum_{i\leq m}\int_{B_{r}(x_{0})}|\nabla(u^{i}-h^{i})|^{p}\,dy \leq\int_{\partial\Omega}\Gamma(x,A_{\nu}\mathbf{u})-\Gamma(x,A_{ \nu}\mathbf{v})\,d\sigma\] \[\leq f_{\varepsilon}\bigl{(}\bigl{|}\{|\mathbf{v}|>0\}\bigr{|} \bigr{)}-f_{\varepsilon}\bigl{(}\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}\bigr{)}.\] (A closer inspection of the proof of Theorem 2.7 reveals that the constant \(C\) in the above estimate only depends on \(n,p,\Omega,\boldsymbol{\varphi},\Gamma\).) 
Now suppose to the contrary that \(\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}<1\). Then, since \(0\leq u^{i}\leq v^{i}\), and outside of \(B_{r}(x_{0})\), \(|\mathbf{u}|=0\) implies \(|\mathbf{v}|=0\), we have \[\bigl{|}\{|\mathbf{v}|>0\}\bigr{|}\leq\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}+ \bigl{|}B_{r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|}<1\] for small enough \(r\). Hence (using the monotonicity of \(f_{\varepsilon}\)) we have \[f_{\varepsilon}\bigl{(}\bigl{|}\{|\mathbf{v}|>0\}\bigr{|}\bigr{)} -f_{\varepsilon}\bigl{(}\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}\bigr{)}\] \[\leq f_{\varepsilon}\Bigl{(}\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}+ \bigl{|}B_{r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|}\Bigr{)}-f_{\varepsilon} \bigl{(}\bigl{|}\{|\mathbf{u}|>0\}\bigr{|}\bigr{)}\] \[=\varepsilon\bigl{|}B_{r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|}.\] Combining this estimate with the estimates of the above paragraph, and using (3.8), we obtain \[0<C\bigl{|}B_{r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|}\leq\varepsilon\bigl{|}B _{r}(x_{0})\cap\{|\mathbf{u}|=0\}\bigr{|},\] which gives a positive lower bound for \(\varepsilon\), and results in a contradiction. **Theorem 4.3**.: _When \(\varepsilon\) is small enough we have_ \[\bigl{|}\{|{\mathbf{u}}_{\varepsilon}|>0\}\bigr{|}=1.\] Proof.: By the above lemma we only need to show that \(\bigl{|}\{|{\mathbf{u}}_{\varepsilon}|>0\}\bigr{|}\leq 1\). To this end, we will compare \({\mathbf{u}}_{\varepsilon}\) with a suitable perturbation of itself. Let \(x_{0}\in\mathcal{R}\), and let \(\rho:\mathbb{R}\to\mathbb{R}\) be a nonnegative smooth function supported in \((0,1)\). For small enough \(r,\lambda>0\) we consider the vector field \[T_{r}(x):=\begin{cases}x+r\lambda\rho(|x-x_{0}|/r)\nu(x_{0})&\text{if }x\in B_{r}(x _{0}),\\ x&\text{elsewhere}.\end{cases}\] Here, \(\nu(x_{0})\) is the outward normal vector provided in (4) of Theorem 3.5. We can easily see that for \(x\) in \(B_{r}(x_{0})\) we have \[DT_{r}(x)\cdot=I\cdot+\lambda\rho^{\prime}(|x-x_{0}|/r)\frac{\langle x-x_{0},\cdot\,\rangle}{|x-x_{0}|}\nu(x_{0}), \tag{4.1}\] where \(I\) is the identity matrix. Hence, if \(\lambda\) is small enough, \(T_{r}\) is a diffeomorphism that maps \(B_{r}(x_{0})\) onto itself. Now consider \[{\mathbf{v}}_{r}(x):={\mathbf{u}}(T_{r}^{-1}(x))\] for \(r>0\) small enough. Similarly to the proof of Theorem 3.1, we consider the vector-valued function \({\mathbf{w}}\) whose components minimize the Dirichlet \(p\)-energy subject to the condition \[w^{i}\leq 0\ \ \text{on}\ \ \{{\mathbf{u}}=0\}\cup\bigl{(}\overline{B}_{r}(x _{0})\cap\{{\mathbf{v}}_{r}=0\}\bigr{)}.\] With a calculation similar to (3.1) and (2.3) we get \[0\leq J_{\varepsilon}({\mathbf{w}})-J_{\varepsilon}({\mathbf{u}}) \leq C\sum_{i=1}^{m}\int_{\Omega}|\nabla w^{i}|^{p}-|\nabla u^{i}|^{ p}\,dx\] \[\qquad\qquad+f_{\varepsilon}\bigl{(}|\{|{\mathbf{w}}|>0\}|\bigr{)} -f_{\varepsilon}\bigl{(}|\{|{\mathbf{u}}|>0\}|\bigr{)}\] \[\leq C\sum_{i=1}^{m}\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}-| \nabla u^{i}|^{p}\,dx\] \[\qquad\qquad+f_{\varepsilon}\bigl{(}|\{|{\mathbf{w}}|>0\}|\bigr{)} -f_{\varepsilon}\bigl{(}|\{|{\mathbf{u}}|>0\}|\bigr{)}, \tag{4.2}\] where in the last inequality we have compared the Dirichlet \(p\)-energy of \({\mathbf{w}}\) with that of \({\mathbf{v}}_{r}\chi_{B_{r}(x_{0})}+{\mathbf{u}}\chi_{\Omega-B_{r}(x_{0})}\). 
Now notice that \[\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}\,dx =\int_{B_{r}(x_{0})}\left|DT_{r}(T_{r}^{-1}(x))^{-1}\nabla u^{i}(T_ {r}^{-1}(x))\right|^{p}dx\] \[=\int_{B_{r}(x_{0})}\left|DT_{r}(y)^{-1}\nabla u^{i}(y)\right|^{p} \left|\det DT_{r}(y)\right|dy\] \[(z=\tfrac{y-x_{0}}{r}) =r^{n}\int_{B_{1}}\left|DT_{r}(y)^{-1}\nabla u^{i}(y)\right|^{p} \left|\det DT_{r}(y)\right|dz.\] From (4.1), for small enough \(\lambda\) we can write \[DT_{r}(y)^{-1} =I+\Big{(}\sum_{k=1}^{\infty}(-1)^{k}\lambda^{k}\rho^{\prime}(|z| )^{k}\frac{\langle z,\nu\rangle^{k-1}}{|z|^{k-1}}\Big{)}\frac{\langle z,\cdot \,\rangle}{|z|}\nu(x_{0}) \tag{4.3}\] \[=I-\lambda\rho^{\prime}(|z|)\frac{\langle z,\cdot\,\rangle}{|z|} \nu(x_{0})+\lambda^{2}g(\lambda,z)\frac{\langle z,\cdot\,\rangle}{|z|}\nu(x_{0})\] for some \(g\). Hence we have \[DT_{r}(y)^{-1}\nabla u^{i}(y)=\nabla u^{i}(y)-\lambda\rho^{\prime}(|z|)\frac{ \langle z,\nabla u^{i}(y)\rangle}{|z|}\nu(x_{0})+O(\lambda^{2}).\] Thus \[\left|DT_{r}(y)^{-1}\nabla u^{i}(y)\right|^{2}=|\nabla u^{i}(y)|^{2}-2\lambda \rho^{\prime}(|z|)\frac{\langle z,\nabla u^{i}(y)\rangle}{|z|}\langle\nu(x_{0} ),\nabla u^{i}(y)\rangle+O(\lambda^{2}),\] and therefore \[\left|DT_{r}(y)^{-1}\nabla u^{i}(y)\right|^{p} =|\nabla u^{i}(y)|^{p}\Big{(}1-p\lambda\rho^{\prime}(|z|)\frac{ \langle z,\nabla u^{i}(y)\rangle}{|z||\nabla u^{i}(y)|^{2}}\langle\nu(x_{0}), \nabla u^{i}(y)\rangle\Big{)}\] \[\qquad+O(\lambda^{2}).\] Also, we have (noting that \(DT_{r}\) is the identity matrix plus a rank \(1\) matrix) \[|\det DT_{r}(y)|=1+\lambda\rho^{\prime}(|z|)\frac{\langle z,\nu(x_{0})\rangle }{|z|}.\] All these together, we obtain (remember that \(y=x_{0}+rz\)) \[r^{-n}\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}-|\nabla u^{i}|^{ p}\,dx\] \[\qquad=\lambda\int_{B_{1}}|\nabla u^{i}(y)|^{p}\rho^{\prime}(|z|) \left(\frac{\langle z,\nu(x_{0})\rangle}{|z|}-p\frac{\langle z,\nabla u^{i}(y) \rangle\langle\nabla u^{i}(y),\nu(x_{0})\rangle}{|z||\nabla u^{i}(y)|^{2}} \right)dz\] \[\qquad+O(\lambda^{2}).\] Now consider the blowup sequence \(\mathbf{u}_{r}(z):=\mathbf{u}(x_{0}+rz)/r\). We know that as \(r\to 0\) (see [4]) \[\{u_{r}^{i}>0\}\cap B_{1}\to\{z:z\cdot\nu(x_{0})>0\}\cap B_{1},\] \[\nabla u^{i}(y)=\nabla u_{r}^{i}(z)\to(q^{i}(x_{0}))^{\frac{1}{p-1 }}\nu(x_{0})\chi_{\{z\cdot\nu(x_{0})>0\}},\qquad\text{a.e. in }B_{1}.\] Therefore we get \[r^{-n}\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}-|\nabla u^{i}|^{p }\,dx\] \[\qquad\xrightarrow[r\to 0]{}-(p-1)\lambda|q^{i}(x_{0})|^{\frac{p}{p-1 }}\int_{B_{1}\cap\{z\cdot\nu(x_{0})>0\}}\rho^{\prime}(|z|)\frac{\langle z,\nu (x_{0})\rangle}{|z|}\,dz+O(\lambda^{2}).\] Note that the formula (4.3) for \((DT_{r})^{-1}\) does not depend on \(r\), and the function \(\cdot\mapsto|\cdot|^{p}\) is continuous; so the \(O(\lambda^{2})\) term converges to an \(O(\lambda^{2})\) term as \(r\to 0\). Next note that \[\operatorname{div}(\rho(|z|)\nu)=\frac{\rho^{\prime}(|z|)}{|z|}\langle z,\nu\rangle.\] Thus (noting that \(\rho(|z|)\) is zero near \(\partial B_{1}\)) \[\int_{B_{1}\cap\{z\cdot\nu(x_{0})>0\}}\rho^{\prime}(|z|)\frac{ \langle z,\nu(x_{0})\rangle}{|z|}\,dz =-\int_{B_{1}\cap\{z\cdot\nu(x_{0})=0\}}\rho(|z|)\,dz\] \[=-\omega_{n-1}\int_{0}^{1}\rho(t)t^{n-1}\,dt=-C_{\rho}\omega_{n-1},\] where \(\omega_{n-1}\) is the volume of the \((n-1)\)-dimensional ball of radius \(1\), and \(C_{\rho}\) depends only on \(\rho\). 
Hence we can write \[\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}-|\nabla u^{i}|^{p}\,dx\] \[\qquad=\big{[}(p-1)\lambda C_{\rho}\omega_{n-1}|q^{i}(x_{0})|^{ \frac{p}{p-1}}+O(\lambda^{2})\big{]}r^{n}+o(r^{n}).\] On the other hand, \[\lim_{r\to 0}r^{-n}\big{|}B_{r}(x_{0})\cap\{|\mathbf{v}_{r}|>0\} \big{|}=\lim_{r\to 0}r^{-n}\int_{\{|\mathbf{v}_{r}|>0\}\cap B_{r}(x_{0})}dx\] \[\qquad=\lim_{r\to 0}r^{-n}\int_{\{|\mathbf{u}|>0\}\cap B_{r}(x_{0 })}|\det DT_{r}(y)|\,dy\] \[\qquad=\int_{B_{1}\cap\{z\cdot\nu(x_{0})>0\}}1+\lambda\rho^{ \prime}(|z|)\frac{\langle z,\nu(x_{0})\rangle}{|z|}\,dz\] \[\qquad=\frac{1}{2}\omega_{n}-\lambda\omega_{n-1}\int_{0}^{1}\rho( t)t^{n-1}\,dt=\frac{1}{2}\omega_{n}-\lambda C_{\rho}\omega_{n-1}.\] Thus for \(A_{0}:=\big{(}\{|\mathbf{u}|>0\}-B_{r}(x_{0})\big{)}\cup\big{(}\{|\mathbf{v}_{r}|>0 \}\cap B_{r}(x_{0})\big{)}\) we have \[|A_{0}|-\big{|}\{|\mathbf{u}|>0\}\big{|} =\big{|}B_{r}(x_{0})\cap\{|\mathbf{v}_{r}|>0\}\big{|}-\big{|}B_{r}( x_{0})\cap\{|\mathbf{u}|>0\}\big{|}\] \[=-\lambda C_{\rho}\omega_{n-1}r^{n}+o(r^{n}).\] In addition, it is easy to see that \(\{|\mathbf{w}|>0\}\subset A_{0}\). Now suppose to the contrary that \(\big{|}\{|\mathbf{u}|>0\}\big{|}>1\). Then we can choose \(r\) small enough so that \[|A_{0}|=\big{|}\{|\mathbf{u}|>0\}\big{|}-\lambda C_{\rho}\omega_{n-1}r^{n}+o(r ^{n})>1.\] Therefore, using the monotonicity of \(f_{\varepsilon}\) we get \[f_{\varepsilon}\big{(}\big{|}\{|\mathbf{w}|>0\}\big{|}\big{)} -f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|}\big{)} \leq f_{\varepsilon}(|A_{0}|)-f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\} \big{|}\big{)}\] \[=\frac{1}{\varepsilon}\big{(}|A_{0}|-\big{|}\{|\mathbf{u}|>0\} \big{|}\big{)}=-\frac{1}{\varepsilon}\lambda C_{\rho}\omega_{n-1}r^{n}+o(r^{n}).\] Finally, by putting all these estimates in (4.2), we obtain \[0 \leq C\sum_{i=1}^{m}\int_{B_{r}(x_{0})}|\nabla v_{r}^{i}|^{p}-| \nabla u^{i}|^{p}\,dx+f_{\varepsilon}(|A_{0}|)-f_{\varepsilon}\big{(}\big{|}\{ |\mathbf{u}|>0\}\big{|}\big{)}\] \[=\big{[}(p-1)\lambda C_{\rho}\omega_{n-1}\sum_{i=1}^{m}|q^{i}(x_{ 0})|^{\frac{p}{p-1}}+O(\lambda^{2})\big{]}r^{n}-\frac{1}{\varepsilon}\lambda C _{\rho}\omega_{n-1}r^{n}+o(r^{n}).\] Dividing by \(r^{n}\) and letting \(r\to 0\), and then dividing by \(\lambda\) and letting \(\lambda\to 0\), we get \[\frac{1}{\varepsilon}\leq(p-1)\sum_{i=1}^{m}|q^{i}(x_{0})|^{\frac{p}{p-1}}.\] Now if we choose \(x_{0}\) such that \[\sum_{i\leq m}q^{i}(x_{0})\leq\inf_{\mathcal{R}_{\varepsilon}}\Big{(}\sum_{i \leq m}q^{i}\Big{)}+1,\] then by Lemma 4.1 (and the equivalence of all norms on the finite-dimensional space \(\mathbb{R}^{m}\)) we have \(\sum_{i\leq m}|q^{i}(x_{0})|^{\frac{p}{p-1}}\leq C\), independently of \(\varepsilon\). However, this implies that \(\varepsilon\) has a positive lower bound, which is a contradiction. ## 5. Regularity of the free boundary (case \(p=2\)) We are going to show that \(\mathcal{R}\) is an analytic hypersurface when \(p=2\). To see this, we first derive the free boundary condition, also known as the optimality condition, in the following lemma. We perturb the optimal set \(\Omega\) and compute the first variation of the energy functional \(J_{\varepsilon}\). To perform this computation, it is crucial to ensure that the \(p\)-harmonic solution within the perturbed domain is differentiable with respect to the perturbation parameter. When \(p=2\), this can be established through the implicit function theorem. 
However, it is noteworthy that for \(p\neq 2\) the proof does not hold, primarily due to the ill-posedness of the derivative of the map \(u\mapsto\Delta_{p}u\). **Lemma 5.1**.: _Let \(\mathbf{u}\) be a solution of the minimization problem (1.1) for \(p=2\). Let \(h^{i}\) be the solution of_ \[\begin{cases}\Delta h^{i}=0&\text{in }\Omega-E,\\ h^{i}=0&\text{on }E,\\ h^{i}=\partial_{\xi_{i}}\Gamma(x,\partial_{\nu}\mathbf{u})&\text{on }\partial \Omega.\end{cases}\] _Then, on the regular part of the free boundary we have_ \[\sum_{i=1}^{m}\partial_{\nu}h^{i}\partial_{\nu}u^{i}=C \tag{5.1}\] _for some positive constant \(C\)._ Proof.: Let \(x_{1}\) and \(x_{2}\) be two regular points in \(\mathcal{R}\) with corresponding unit normal vectors \(\nu(x_{1})\) and \(\nu(x_{2})\). Also, let \(\rho:\mathbb{R}\to\mathbb{R}\) be a nonnegative smooth function supported in \((0,1)\). Similarly to the proof of Theorem 4.3 we define the vector field \[T_{r,\lambda}(x):=\begin{cases}x-r\lambda\rho(|x-x_{1}|/r)\nu(x_{1})&\text{if } x\in B_{r}(x_{1}),\\ x+r\lambda\rho(|x-x_{2}|/r)\nu(x_{2})&\text{if }x\in B_{r}(x_{2}),\\ x&\text{elsewhere},\end{cases}\] for small enough \(r,\lambda>0\) (which makes \(T_{r,\lambda}\) a diffeomorphism from \(B_{r}(x_{a})\) onto itself for \(a=1,2\)). Now for some fixed \(r>0\) let \(E_{\lambda}=T_{r,\lambda}^{-1}(E)\), and assume that \(\mathbf{w}_{\lambda}\) solves \[\begin{cases}\Delta w_{\lambda}^{i}=0&\text{in }\Omega-E_{\lambda},\\ w_{\lambda}^{i}=\varphi^{i}&\text{on }\partial\Omega,\\ w_{\lambda}^{i}=0&\text{on }\partial E_{\lambda}.\end{cases}\] Define \(\mathbf{v}_{\lambda}(y):=\mathbf{w}_{\lambda}(T_{r,\lambda}^{-1}(y))\). We are going to show that \(\lambda\mapsto\mathbf{v}_{\lambda}\) is a \(C^{1}\) map from a neighborhood of \(\lambda=0\) into \(W^{1,2}(\Omega-E)\). We know that each \(v_{\lambda}^{i}\) satisfies an elliptic PDE of the form \[F[v,\lambda]=F(D_{y}^{2}v,\nabla_{y}v,y,\lambda)=0,\qquad\text{ in }\ U=\Omega-E.\] We also know that \(F=\Delta\) when \(y\notin B_{r}(x_{1})\cup B_{r}(x_{2})\) or when \(\lambda=0\). In addition, we can consider \(F\) as a \(C^{1}\) map \[F:W^{1,2}(U)\times\mathbb{R} \to W^{-1,2}(U),\] \[(v,\lambda) \mapsto F[v,\lambda]\] where \(U=\Omega-E\). Now we employ the implicit function theorem to show that \(\lambda\mapsto\mathbf{v}_{\lambda}\) is \(C^{1}\). This can be readily deduced from the fact that \[\partial_{v}F|_{\lambda=0}:W^{1,2}_{0}(U)\to W^{-1,2}(U)\] is invertible, since we have \[\partial_{v}F|_{\lambda=0}\cdot=\frac{d}{ds}\Big{|}_{s=0}F[v+s\cdot,0]=\frac{ d}{ds}\Big{|}_{s=0}\Delta(v+s\cdot)=\Delta\cdot.\] Therefore, \(\mathbf{v}_{\lambda}=\mathbf{u}+\lambda\mathbf{u}_{0}+o(\lambda)\) in \(W^{1,2}(U)\), where \(\mathbf{u}_{0}\in W^{1,2}_{0}(U)\) solves \[0=\frac{d}{d\lambda}\Big{|}_{\lambda=0}F[v^{i}_{\lambda},\lambda]=\partial_{v }Fu^{i}_{0}+\partial_{\lambda}F.\] In other words \[\Delta u^{i}_{0}=-\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}.\] Note that we also have \(\nabla\mathbf{v}_{\lambda}=\nabla\mathbf{u}+\lambda\nabla\mathbf{u}_{0}+o(\lambda)\), since \(\lambda\mapsto\mathbf{v}_{\lambda}\) is a \(C^{1}\) map into \(W^{1,2}(U)\); so \(\lambda\mapsto\nabla\mathbf{v}_{\lambda}\) is a \(C^{1}\) map into \(L^{2}(U)\). Now let \(h^{i}\) be the solution of \(\Delta h^{i}=0\) in \(U=\Omega-E\) with boundary data \(h^{i}=\partial_{\xi_{i}}\Gamma(x,\partial_{\nu}\mathbf{u})\) on \(\partial\Omega\) and \(h^{i}=0\) on \(\partial E\). 
Then for small \(\lambda>0\) we have (note that for \(p=2\) we have \(A_{\nu}=\partial_{\nu}\)) \[\int_{\partial\Omega} \Gamma(x,\partial_{\nu}\mathbf{v}_{\lambda})-\Gamma(x,\partial_{ \nu}\mathbf{u})\,d\sigma\] \[=\int_{\partial\Omega}\sum_{i}\partial_{i}\Gamma(x,\partial_{\nu} \mathbf{u})(\partial_{\nu}v^{i}_{\lambda}-\partial_{\nu}u^{i})\,d\sigma+o(\lambda)\] \[=\lambda\int_{\partial\Omega}\sum_{i}\partial_{i}\Gamma(x, \partial_{\nu}\mathbf{u})\partial_{\nu}u^{i}_{0}\,d\sigma+o(\lambda)\] \[=\lambda\int_{\partial\Omega}\sum_{i}h^{i}\partial_{\nu}u^{i}_{0 }\,d\sigma+o(\lambda)\] \[=\lambda\sum_{i}\int_{U}\nabla h^{i}\cdot\nabla u^{i}_{0}+h^{i} \Delta u^{i}_{0}\,dx+o(\lambda)\] \[=-\lambda\sum_{i}\int_{\left(B_{r}(x_{1})\cup B_{r}(x_{2})\right) -E}h^{i}\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}\,dx+o(\lambda).\] Note that in the last line we have used the facts that \(\Delta h^{i}=0\) in \(U\) and \(u^{i}_{0}=0\) on \(\partial U=\partial\Omega\cup\partial E\). Also, \(\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}=0\) outside \(B_{r}(x_{1})\cup B_{r}(x_{2})\), because in that region \(F=\Delta\) for all \(\lambda\). Now let us extend \(\mathbf{w}_{\lambda}\) to all of \(\Omega\) by setting it equal to \(0\) on \(E_{\lambda}\). Note that \(w^{i}_{\lambda}\) is positive on \(\Omega-E_{\lambda}\) by the maximum principle. Hence \[\{|\mathbf{w}_{\lambda}|>0\}=\Omega-E_{\lambda}.\] Furthermore, similarly to the proof of Theorem 4.3, we obtain \[f_{\varepsilon}\big{(}\big{|}\{|\mathbf{w}_{\lambda}|>0\}\big{|} \big{)} -f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|}\big{)}\leq \frac{1}{\varepsilon}(|E|-|E_{\lambda}|)\] \[=\frac{\lambda}{\varepsilon}\Big{(}\int_{B_{r}(x_{2})\cap\{| \mathbf{u}|>0\}}\rho^{\prime}(|x-x_{2}|)\frac{\langle x-x_{2},\nu(x_{2}) \rangle}{|x-x_{2}|}\,dx\] \[\qquad-\int_{B_{r}(x_{1})\cap\{|\mathbf{u}|>0\}}\rho^{\prime}(|x- x_{1}|)\frac{\langle x-x_{1},\nu(x_{1})\rangle}{|x-x_{1}|}\,dx\Big{)}=\frac{ \lambda}{\varepsilon}o(r^{n}).\] Therefore if we compare the energy of \(\mathbf{u}\) with \(\mathbf{w}_{\lambda}\) (it is easy to see that \(\mathbf{w}_{\lambda}\in V\)) we get (in the second equality below we use the fact that \(\mathbf{v}_{\lambda}=\mathbf{w}_{\lambda}\) near \(\partial\Omega\)) \[0\leq J_{\varepsilon}(\mathbf{w}_{\lambda})-J_{\varepsilon}( \mathbf{u}) =\int_{\partial\Omega}\Gamma(x,\partial_{\nu}\mathbf{w}_{\lambda})- \Gamma(x,\partial_{\nu}\mathbf{u})\,d\sigma\] \[\qquad\qquad+f_{\varepsilon}\big{(}\big{|}\{|\mathbf{w}_{\lambda }|>0\}\big{|}\big{)}-f_{\varepsilon}\big{(}\big{|}\{|\mathbf{u}|>0\}\big{|} \big{)}\] \[=\int_{\partial\Omega}\Gamma(x,\partial_{\nu}\mathbf{v}_{\lambda} )-\Gamma(x,\partial_{\nu}\mathbf{u})\,d\sigma+\frac{\lambda}{\varepsilon}o(r^{ n})\] \[=-\lambda\sum_{i}\int_{\big{(}B_{r}(x_{1})\cup B_{r}(x_{2})\big{)} -E}h^{i}\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}\,dx+o(\lambda)+\frac{ \lambda}{\varepsilon}o(r^{n}).\] Hence if we divide by \(\lambda\) and let \(\lambda\to 0\) we obtain \[0\leq-\sum_{i}\int_{\big{(}B_{r}(x_{1})\cup B_{r}(x_{2})\big{)}-E}h^{i} \partial_{\lambda}F|_{v=u^{i},\,\lambda=0}\,dx+o(r^{n}). \tag{5.2}\] So we need to compute \(\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}\). Next let us compute \(F\) explicitly. Set \(x=T_{r,\lambda}^{-1}(y)\) so that \(y=T_{r,\lambda}(x)\). To simplify the notation we suppress the \(\lambda\) or \(r\) in the indices. We have \(v^{i}(T(x))=v^{i}(y)=w^{i}(x)\). 
Hence \[\partial_{x_{k}}w^{i} =\sum_{j}\partial_{y_{j}}v^{i}\partial_{x_{k}}T^{j},\] \[\partial_{x_{k}x_{k}}^{2}w^{i} =\sum_{j}\partial_{x_{k}}\big{(}\partial_{y_{j}}v^{i}\partial_{x _{k}}T^{j}\big{)}\] \[=\sum_{j,\ell}\partial_{y_{j}y_{\ell}}^{2}v^{i}\partial_{x_{k}}T ^{j}\partial_{x_{k}}T^{\ell}+\sum_{j}\partial_{y_{j}}v^{i}\partial_{x_{k}x_{k }}^{2}T^{j}.\] Therefore \[0=\Delta w^{i}=\sum_{j,\ell,k}\partial_{y_{j}y_{\ell}}^{2}v^{i}\partial_{x_{k }}T^{j}\partial_{x_{k}}T^{\ell}+\sum_{j,k}\partial_{y_{j}}v^{i}\partial_{x_{k }x_{k}}^{2}T^{j}.\] It is easy to see that inside \(B_{r}(x_{a})\) (\(a=1,2\)) we have \[\partial_{x_{k}}T^{j} =\delta_{jk}+(-1)^{a}\lambda\rho^{\prime}(|z|)\frac{z_{k}}{|z|} \nu^{j}(x_{a}),\qquad\big{(}z=\frac{x-x_{a}}{r}\big{)},\] \[\partial_{x_{k}x_{k}}^{2}T^{j} =(-1)^{a}\lambda\partial_{x_{k}}\big{(}\rho^{\prime}(|z|)\frac{z_ {k}}{|z|}\big{)}\nu^{j}(x_{a}).\] Thus \[F[v,\lambda] =\sum_{j,\ell,k}\partial_{y_{j}y_{\ell}}^{2}v\partial_{x_{k}}T^{j }\partial_{x_{k}}T^{\ell}+\sum_{j,k}\partial_{y_{j}}v\partial_{x_{k}x_{k}}^{2} T^{j}\] \[=\sum_{j,\ell,k}\big{[}\delta_{jk}+(-1)^{a}\lambda\rho^{\prime}( |z|)\frac{z_{k}}{|z|}\nu^{j}(x_{a})\big{]}\big{[}\delta_{\ell k}+(-1)^{a} \lambda\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\nu^{\ell}(x_{a})\big{]}\partial_{ y_{j}y_{\ell}}^{2}v\] \[\qquad\qquad+\sum_{j,k}\big{[}(-1)^{a}\lambda\partial_{x_{k}} \big{(}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\big{)}\nu^{j}(x_{a})\big{]}\partial _{y_{j}}v\] in \(B_{r}(x_{a})\) for \(a=1,2\), and \(F[v,\lambda]=\Delta v\) elsewhere. Now note that \[\sum_{k}\partial_{x_{k}}\big{(}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\big{)}=\sum _{k}\big{(}\rho^{\prime\prime}(|z|)\frac{z_{k}^{2}}{r|z|^{2}}+\rho^{\prime}(| z|)\frac{1}{r|z|}-\rho^{\prime}(|z|)\frac{z_{k}^{2}}{r|z|^{3}}\big{)}=\frac{1}{r} \rho^{\prime\prime}(|z|).\] Hence we get \[\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}=(-1)^{a}\Big{(}2\rho^{\prime}(|z|) \sum_{j,k}\frac{z_{k}}{|z|}\nu^{j}(x_{a})\partial_{jk}^{2}u^{i}+\frac{1}{r} \rho^{\prime\prime}(|z|)\sum_{j}\nu^{j}(x_{a})\partial_{j}u^{i}\Big{)}\] in \(B_{r}(x_{a})\) for \(a=1,2\). Note that although a priori \(z,u^{i}\) in the above equation are functions of \(y\), at \(\lambda=0\) we have \(y=x\), and thus we can regard them as functions of \(x\) too. Let \(\mathbf{u}_{r}(z)=\frac{1}{r}\mathbf{u}(x_{a}+rz)=\frac{1}{r}\mathbf{u}(x)\) and \(h^{i}_{r}(z)=\frac{1}{r}h^{i}(x_{a}+rz)=\frac{1}{r}h^{i}(x)\). 
Putting all these in (5.2) we get (note that in the following integration by parts the boundary term is zero, since \(\rho\) is \(0\) for \(z\) near \(\partial B_{1}\) and \(h^{i}\) is \(0\) on \(\partial E\)) \[0 \leq-\sum_{i}\int_{\big{(}B_{r}(x_{1})\cup B_{r}(x_{2})\big{)}-E}h ^{i}\partial_{\lambda}F|_{v=u^{i},\,\lambda=0}\,dx+o(r^{n})\] \[=\sum_{a,i}(-1)^{a+1}\int_{B_{r}(x_{a})-E}h^{i}\Big{(}2\rho^{ \prime}(|z|)\sum_{j,k}\frac{z_{k}}{|z|}\nu^{j}(x_{a})\partial_{jk}^{2}u^{i}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{1}{r}\rho^{ \prime\prime}(|z|)\sum_{j}\nu^{j}(x_{a})\partial_{j}u^{i}\Big{)}dx+o(r^{n})\] \[=\sum_{a,i}(-1)^{a+1}\int_{B_{r}(x_{a})-E}\Bigl{(}-2\sum_{k}\partial_ {k}\Bigl{[}h^{i}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\Bigr{]}\sum_{j}\nu^{j}(x_{a}) \partial_{j}u^{i}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{1} {r}h^{i}\rho^{\prime\prime}(|z|)\sum_{j}\nu^{j}(x_{a})\partial_{j}u^{i}\Bigr{)} dx+o(r^{n})\] \[=\sum_{a,i}(-1)^{a+1}\int_{B_{r}(x_{a})-E}\Bigl{(}-2\sum_{k} \Big{[}\partial_{k}h^{i}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}+h^{i}\partial_{k} \bigl{(}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\bigr{)}\Bigr{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+\frac{1}{r}h^{i}\rho^{\prime\prime}(|z|)\Bigr{)}\sum_{j}\nu^{j}(x_{a}) \partial_{j}u^{i}\,dx+o(r^{n})\] \[=\sum_{a,i}(-1)^{a+1}\int_{B_{r}(x_{a})-E}\Bigl{(}-2\sum_{k} \Big{[}\partial_{k}h^{i}\rho^{\prime}(|z|)\frac{z_{k}}{|z|}\Bigr{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad-\frac{1}{r}h^{i}\rho^{\prime\prime}(|z|)\Bigr{)}\sum_{j}\nu^{j}(x _{a})\partial_{j}u^{i}\,dx+o(r^{n})\] \[=\sum_{a,i}(-1)^{a+1}r^{n}\int_{B_{1}\cap\{|\mathbf{u}_{r}|>0\}} \Bigl{(}-2\sum_{k}\Big{[}\partial_{k}h^{i}_{r}\rho^{\prime}(|z|)\frac{z_{k}}{ |z|}\Bigr{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad-\frac{1}{r}rh^{i}_{r}\rho^{\prime\prime}(|z|)\Bigr{)}\sum_{j }\nu^{j}(x_{a})\partial_{j}u^{i}_{r}\,dz+o(r^{n}).\] Now note that \(\partial_{j}u^{i}_{r}(z)\to q^{i}(x_{a})\nu^{j}(x_{a})=\partial_{j}u^{i}(x_{a})\) when \(z\cdot\nu(x_{a})>0\) by the results of [4]. Next note that \(h^{i}\) is Lipschitz continuous, since \(u^{i}\) is Lipschitz and we have \(0\leq h^{i}\leq cu^{i}\) for some constant \(c\). To see this note that the function \(\partial_{\xi_{i}}\Gamma(x,\partial_{\nu}\mathbf{u})\) is positive and continuous on the compact set \(\partial\Omega\), so it is bounded there, and thus for some \(c>0\) we have \(h^{i}=\partial_{\xi_{i}}\Gamma(x,\partial_{\nu}\mathbf{u})\leq c\varphi^{i}=cu ^{i}\) on \(\partial\Omega\). Hence the claim follows by the maximum principle. Therefore, by Lemma B.1 in [7], we also have \(\partial_{k}h^{i}_{r}(z)\to p^{i}(x_{a})\nu^{k}(x_{a})=\partial_{k}h^{i}(x_{a})\) for some function \(p^{i}\), and \(h^{i}_{r}(z)\to\nabla h^{i}(x_{a})\cdot z\) as \(h^{i}(x_{a})=0\). 
Thus if we divide the above expression by \(r^{n}\) and let \(r\to 0\) we obtain \[0\leq\sum_{a,i}(-1)^{a+1}\int_{B_{1}\cap\{z\cdot\nu(x_{a})>0\}} \Bigl{(}-2\sum_{k}\Big{[}\partial_{k}h^{i}(x_{a})\rho^{\prime}(|z|)\frac{z_{k} }{|z|}\Bigr{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\bigl{(}\nabla h ^{i}(x_{a})\cdot z\bigr{)}\rho^{\prime\prime}(|z|)\Bigr{)}\sum_{j}\nu^{j}(x_{a} )\partial_{j}u^{i}(x_{a})\,dz\] \[=\sum_{a,i}(-1)^{a+1}\int_{B_{1}\cap\{z\cdot\nu(x_{a})>0\}} \Bigl{(}-2\sum_{k}\Big{[}p^{i}(x_{a})\nu^{k}(x_{a})\rho^{\prime}(|z|)\frac{z_{k }}{|z|}\Bigr{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad-p^{i}(x_{a})\bigl{(}\nu(x_{a})\cdot z\bigr{)}\rho^{ \prime\prime}(|z|)\Bigr{)}\partial_{\nu}u^{i}(x_{a})\,dz\] \[=\sum_{a,i}(-1)^{a}\int_{B_{1}\cap\{z\cdot\nu(x_{a})>0\}}\Big{(} \frac{2}{|z|}\rho^{\prime}(|z|)+\rho^{\prime\prime}(|z|)\Big{)}\big{(}\nu(x_{a}) \cdot z\big{)}p^{i}(x_{a})\partial_{\nu}u^{i}(x_{a})\,dz\] \[=\sum_{a,i}(-1)^{a}\partial_{\nu}h^{i}(x_{a})\partial_{\nu}u^{i}(x _{a})\int_{B_{1}\cap\{z\cdot\nu(x_{a})>0\}}\Big{(}\frac{2}{|z|}\rho^{\prime}(|z |)+\rho^{\prime\prime}(|z|)\Big{)}\big{(}\nu(x_{a})\cdot z\big{)}dz\] \[=C_{\rho}\Big{(}\sum_{i}\partial_{\nu}h^{i}(x_{2})\partial_{\nu}u ^{i}(x_{2})-\sum_{i}\partial_{\nu}h^{i}(x_{1})\partial_{\nu}u^{i}(x_{1})\Big{)},\] where \(C_{\rho}=\int_{B_{1}\cap\{z\cdot\nu(x_{a})>0\}}\big{(}\frac{2}{|z|}\rho^{ \prime}(|z|)+\rho^{\prime\prime}(|z|)\big{)}\big{(}\nu(x_{a})\cdot z\big{)}dz\) does not depend on \(x_{a}\); we have also used the fact that \(p^{i}(x_{a})=\partial_{\nu}h^{i}(x_{a})\). By switching the role of \(x_{1},x_{2}\) we conclude that \[\sum_{i}\partial_{\nu}h^{i}(x_{2})\partial_{\nu}u^{i}(x_{2})-\sum_{i}\partial_ {\nu}h^{i}(x_{1})\partial_{\nu}u^{i}(x_{1})\] must be zero, as desired. The main idea to show the regularity of the free boundary lies in utilizing the boundary Harnack principle, which allows us to reduce the system into a scalar problem. The key tool in employing this approach is non-tangential accessibility of the domain; for the definition of non-tangentially accessible (NTA) domains we refer to [2]. **Lemma 5.2**.: _Let \(\mathbf{u}\) be a solution of the minimization problem (1.1) for \(p=2\). Then \(U=\{x:|\mathbf{u}(x)|>0\}\) is a non-tangentially accessible domain._ Proof.: This result follows from the same analysis as of Theorem 4.8 in [2] for the function \(u=u^{1}+\cdots+u^{m}\). Note that \(u\) is harmonic in \(\{|\mathbf{u}|>0\}=\{\nu>0\}\) (these two sets are equal due to Lemma 3.2), and the function \(u\) is also Lipschitz continuous and satisfies the nondegeneracy property by Corollary 3.3. **Theorem 5.3**.: _Let \(x_{0}\in\mathcal{R}\) be a regular point of the free boundary. Then there is \(r>0\) such that \(B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\) is a \(C^{1,\alpha}\) hypersurface for some \(\alpha>0\)._ Proof.: We may assume that \(u^{1}>0\) in \(B_{r_{0}}(x_{0})\cap\{|\mathbf{u}|>0\}\) for some \(r_{0}>0\). First we show that for some \(0<r\leq r_{0}\) there is a Holder continuous function \(g\) defined on \(B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\), such that in the viscosity sense we have \[\partial_{\nu}h^{1}\partial_{\nu}u^{1}=g\quad\text{ on }\partial E,\] where \(h^{1}\) is defined in Lemma 5.1. 
Since \(B_{r_{0}}(x_{0})\cap\{|\mathbf{u}|>0\}\) is an NTA domain, the boundary Harnack inequality implies that \(G^{i}:=u^{i}/u^{1}\) and \(H^{i}:=h^{i}/h^{1}\) are Hölder continuous functions in \(B_{r}(x_{0})\cap\overline{\{|\mathbf{u}|>0\}}\) for some \(0<r\leq r_{0}\). Now if we consider a one-sided tangent ball at some point \(y\in B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\), we have the asymptotic developments (see Lemma B.1 in [7], noting that \(h^{i}\) is Lipschitz as we have shown in the proof of Lemma 5.1) \[u^{i}(y+x) =q^{i}(y)(x\cdot\nu(y))^{+}+o(|x|),\] \[h^{i}(y+x) =p^{i}(y)(x\cdot\nu(y))^{+}+o(|x|).\] Therefore \(G^{i}(y)=q^{i}(y)/q^{1}(y)\) and \(H^{i}(y)=p^{i}(y)/p^{1}(y)\). Thus from (5.1) we can infer that \[p^{1}(y)q^{1}(y)\Big{(}1+\sum_{i>1}G^{i}(y)H^{i}(y)\Big{)}=\sum_{i}p^{i}(y)q^{i}(y)=\sum_{i}\partial_{\nu}h^{i}\partial_{\nu}u^{i}\] is constant for every \(y\in B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\). Note that \(G^{i},H^{i}>0\) at \(y\), as \(p^{i},q^{i}>0\). Hence by applying Theorem 3.1 in [10] we get the desired result. **Corollary 5.4**.: _Let \(\mathbf{u}\) be a solution of the minimization problem (1.1) for \(p=2\). Then the regular part of the free boundary, \(\mathcal{R}\), is analytic._ Proof.: Suppose \(0\in\mathcal{R}\) and \(u^{1}>0\) in \(B_{r}\cap\{|\mathbf{u}|>0\}\). Then we apply the hodograph-Legendre transformation \(x\mapsto y=(x_{1},\ldots,x_{n-1},u^{1})\). Next we define the partial Legendre functions \[v^{1}(y):=x_{n},\qquad v^{i}(y):=u^{i}(x),\qquad\text{for }i=2,\cdots,m,\] \[w^{i}(y):=h^{i}(x),\qquad\text{for }i=1,\cdots,m.\] As \(\mathcal{R}\) is \(C^{1,\alpha}\), it follows that \(u^{i}\) and \(h^{i}\) are in \(C^{1,\alpha}(\overline{B_{r}\cap\{|\mathbf{u}|>0\}})\). So \(v^{i}\) and \(w^{i}\) are \(C^{1,\alpha}\) in a neighborhood of the origin in \(\{y_{n}\geq 0\}\). Now we have verified all the hypotheses of Theorem 7.1 in [2], and through a similar argument we can obtain the analyticity of \(\mathcal{R}\). ## Acknowledgements This project was initiated while the authors stayed at Institut Mittag-Leffler (Sweden), during the program "Geometric aspects of nonlinear PDE". H. Shahgholian was supported by the Swedish Research Council. ## Declarations **Data availability statement:** All data needed are contained in the manuscript. **Funding and/or Conflicts of interests/Competing interests:** The authors declare that they have no financial or competing interests and no conflicts of interest.
2309.03841
B-modes from galaxy cluster alignments in future surveys
Intrinsic alignment (IA) of source galaxies represents an important contaminant for upcoming cosmic shear surveys. In particular, it is expected on general grounds that IA contains a B-mode while the weak lensing signal does not. Thus, a detection of B-modes offers the possibility to study directly the IA signal of the sources. Galaxy clusters exhibit strong IA and are therefore a natural candidate to look for a B-mode signal. We forecast the signal-to-noise ratio (SNR) for B-modes from IA of galaxy clusters in the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). We use a perturbative model for the IA multipoles based on the Effective Field Theory of Intrinsic Alignments (EFT of IA), which has recently been validated against N-body simulations. We forecast SNR $\approx 12$ and find that this detectability is not significantly impacted by different analysis choices. Lastly, we also apply our forecast to clusters in the redMaPPer SDSS and DESY1 samples. We find SNR $\approx 5$ and SNR $\approx 3$, respectively, suggesting a detection is within reach, provided accurate redshift information is available.
Christos Georgiou, Thomas Bakx, Juliard van Donkersgoed, Nora Elisa Chisari
2023-09-07T16:55:10Z
http://arxiv.org/abs/2309.03841v2
# B-modes from galaxy cluster alignments in future surveys ###### Abstract Intrinsic alignment (IA) of source galaxies represents an important contaminant for upcoming cosmic shear surveys. In particular, it is expected on general grounds that IA contains a B-mode while the weak lensing signal does not. Thus, a detection of B-modes offers the possibility to study directly the IA signal of the sources. Galaxy clusters exhibit strong IA and are therefore a natural candidate to look for a B-mode signal. We forecast the signal-to-noise ratio (SNR) for B-modes from IA of galaxy clusters in the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). We use a perturbative model for the IA multipoles based on the Effective Field Theory of Intrinsic Alignments (EFT of IA), which has recently been validated against N-body simulations. For LSST, we forecast \(\mathrm{SNR}\approx 12\). We find that this detectability is not significantly impacted by different analysis choices. Lastly, we also apply our forecast to clusters in the SDSS redMaPPer and DESY1 samples, finding \(\mathrm{SNR}\approx 5\) and \(\mathrm{SNR}\approx 3\), respectively. This implies that a detection may already be within reach of current cluster samples. + Footnote †: slugcomment: Version November 8, 2023 ## 1. Introduction Over the past decades, the \(\Lambda\)CDM model has emerged as the dominant paradigm for describing the history of the Universe and its various epochs across a wide range of redshifts with a small number of free parameters. On the other hand, given that several tensions in the measurements of these parameters between different probes within this model have reared their heads in recent years, it is no longer the case that \(\Lambda\)CDM is unchallenged (see Abdalla et al., 2022, for a recent review). In addition, for future experiments such as the ESA _Euclid_ mission1(Laureijs et al., 2011) and the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST2, Ivezic et al., 2019), we must face the challenge of controlling an ever-growing number of sources of systematics when analysing new datasets of cosmological probes. Crucially, our ability to continue improving our understanding of cosmology hinges on the ability to _model_ these systematics, rather than including them in inflated error budgets. Footnote 1: [http://www.cosmos.esa.int/web/euclid](http://www.cosmos.esa.int/web/euclid) One probe of great importance is weak gravitational lensing (Weinberg et al., 2013): the coherent distortion of light from distant galaxies due to the gravitational pull of matter along the line-of-sight, also known as cosmic shear, which directly provides information on the statistics of the dark matter distribution (Bartelmann & Schneider, 2001). A difficulty in using weak lensing as a cosmology probe is the presence of several systematic errors that need to be controlled for an accurate measurement (see, e.g., Mandelbaum, 2018). A prominent source of systematics for weak gravitational lensing is that of intrinsic alignments (IA): the correlation of the intrinsic, unlensed orientation of galaxies with large scale structure in its vicinity also causes coherent distortions, which are in principle degenerate with the lensing signal (Troxel & Ishak, 2015; Kiessling et al., 2015). This effect has indeed been observed (see, e.g., Singh et al., 2015; Johnston et al., 2019; Fortuna et al., 2021; Samoroff et al., 2023, for recent measurements). 
While the alignment signal for luminous red galaxies (LRGs) is well understood, they only comprise a small portion of typical samples relevant for cosmic shear analyses. As such, the significance for cosmic shear analyses is as of yet unclear (Amon et al., 2022; Secco et al., 2022; Sugiyama et al., 2023; DES and KiDS Collaboration et al., 2023). Various theoretical models for the IA signal have been proposed (Catelan et al., 2001; Hirata & Seljak, 2004; Schneider & Bridle, 2010; Blazek et al., 2015, 2019; Schmitz et al., 2018; Vlah et al., 2020; Fortuna et al., 2021; Maion et al., 2023). Among these, Schneider & Bridle (2010) and Fortuna et al. (2021) offer approaches based on the halo model, which requires making assumptions about the galaxy-halo connection and their IA. All other approaches are based on perturbation theory and they rely on fewer physical assumptions. However, they may not be able to extend to the smaller scales from which lensing derives much of its constraining power. The range of validity of these models is thus a current area of investigation (Bakx et al., 2023; Maion et al., 2023). Perturbative schemes that go beyond linear order have the interesting consequence that the alignment signal exhibits a B-mode. This B-mode is not expected to arise due to weak lensing itself, at least in a leading approximation (but see Schneider et al., 2002; Cooray & Hu, 2002; Hilbert et al., 2009; Krause & Hirata, 2010; Bradshaw et al., 2019). This implies that a detection of such a B-mode which is compatible with predictions of exist ing alignment models can represent a clean and important validation test of the IA model in question, independently of lensing. It is known from N-body simulations that dark matter halos exhibit a B-mode IA signal compatible with expectations from perturbative models (see e.g. Kurita et al., 2021; Bakx et al., 2023). In comparison, nowadays, the B-mode signal is mostly utilised as a null check for systematics in the weak gravitational lensing measurements in the literature (see e.g. Asgari et al., 2019, and references therein). Galaxy clusters are among the largest gravitationally bound objects known in the Universe, and they exhibit strong IA (see e.g. Smargon et al., 2012; van Uitert & Joachimi, 2017). We additionally expect galaxy clusters to be relatively clean tracers of their dark matter halo. For these reasons, they offer a useful first opportunity to validate quasi-linear alignment models and a starting point for studying the IA signal of different tracer populations. A validation of the presence of B-modes from IA could point to the need to include such quasi-linear terms in weak lensing analyses of galaxy shapes. In such a scenario, B-modes might play a role as a source of information in alignment bias parameters (Pen, 2000). In this work, we explore the possibility of detecting an alignment B-mode in galaxy clusters detected in the upcoming LSST survey by using Fourier-space multipoles of the B-mode. In Section 2, we explain the alignment model we employ in this work, which is based on the Effective Field Theory of Intrinsic Alignments (EFT of IA, see below) as well as the observables considered. In Section 3 we detail the forecasting method and subsequently Section 4 shows our baseline forecasting result and several robustness checks. 
Throughout this paper, we assume a cosmological model in agreement with the Planck 2018 results (Planck Collaboration et al., 2020, TT,EE+lowE+lensing, marginalised means in Table 1) with \(\Omega_{m}=0.315,\Omega_{b}=0.049,h=0.674,n_{s}=0.965\) and \(\sigma_{8}=0.811\). ## 2 Modelling On large, quasi-linear scales, the alignment of tracers of the dark matter field can be modelled with perturbation theory. In particular, the familiar linear alignment model from Catelan et al. (2001); Hirata & Seljak (2004) is simply the leading term in an expansion of the IA signal in terms of the dark matter density perturbation \(\delta\), which, when smoothed on large enough scales, can be assumed to be small. In the past decade, the Effective Field Theory of Large Scale Structure (EFT of LSS, Baumann et al., 2012; Carrasco et al., 2012) paradigm has emerged as a consistent way to treat populations of dark matter tracers on cosmological scales (Desjacques et al., 2018). It was applied to IA in Vlah et al. (2020) and Bakx et al. (2023), thus giving rise to the Effective Field Theory of Intrinsic Alignments (EFT of IA) following earlier work by Blazek et al. (2019); Schmitz et al. (2018). Alignment B-modes only appear at next-to-leading order in perturbation theory. Indeed, the linear alignment model does not predict any B-mode signal. At second order in perturbation theory however, B-modes do appear. Specifically, the model we consider for the intrinsic shape perturbation reads \[g_{ij}=b_{i}^{g}K_{ij}+b_{\delta K}\delta K_{ij}+b_{KK}\mathrm{TF}(K^{2})_{ij}, \tag{1}\] where the trace-free operators appearing on the r.h.s. are given by \[K_{ij}=\bigg{(}\frac{\partial_{i}\partial_{j}}{\nabla^{2}}-\frac{1}{3}\delta _{ij}\bigg{)}\delta =\frac{2}{3\Omega_{m}(aH)^{2}}(\partial_{i}\partial_{j}-\frac{1}{3 }\delta_{ij}\nabla^{2})\Phi, \tag{2}\] \[\mathrm{TF}(K^{2})_{ij} =K_{ik}K_{kj}-\frac{1}{3}\delta_{ij}K_{kl}K_{kl},\] and all quantities are evaluated at position \(\mathbf{x}\) and redshift \(z\). Here \(\Phi\) is the gravitational potential obeying the Poisson equation \(\nabla^{2}\Phi=\frac{3}{2}\Omega_{m}(aH)^{2}\delta\). This model contains all the terms needed to compute B-mode auto-correlations at next-to-leading order in the EFT framework3. In the flat-sky (i.e. distant observer) approximation where the line-of-sight is the \(x^{3}\)-direction, the components of the shape field we are able to observe are Footnote 3: We do not include the operator \(t_{ij}\) at second order. This operator is a pure quadrupole in Fourier space and hence does not produce B–modes for the same reason the tidal field \(K_{ij}\) does not. Additionally, B–modes do not receive any contributions from third–order operators in the EFT of IA. \[\epsilon_{1} =\frac{1}{2}(g_{11}-g_{22}), \tag{3}\] \[\epsilon_{2} =g_{12}.\] These fields are coordinate-dependent, but we can construct E- and B-mode fields (in Fourier space4) as Footnote 4: We choose to use the notation \(f(\mathbf{k})\) rather than \(\tilde{f}(\mathbf{k})\) for the Fourier transform of the quantity \(f(\mathbf{x})\) to avoid clutter, i.e. the argument \(\mathbf{k}\) implies that a Fourier transform is considered. \[E(\mathbf{k}) =\epsilon_{1}(\mathbf{k})\cos 2\phi_{\mathbf{k}}+\epsilon_{2}( \mathbf{k})\sin 2\phi_{\mathbf{k}}, \tag{4}\] \[B(\mathbf{k}) =-\epsilon_{1}(\mathbf{k})\sin 2\phi_{\mathbf{k}}+\epsilon_{2}( \mathbf{k})\cos 2\phi_{\mathbf{k}},\] where \(\phi_{\mathbf{k}}\) is the angle between \(\mathbf{k}\) and the \(x^{1}\)-axis. 
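To make the decomposition of Eqs. (3)-(4) concrete, the following minimal numpy sketch builds the E- and B-mode fields from the two ellipticity components on a periodic grid, with the line of sight along the third axis. The grid size, box length and the random input field are placeholder choices for illustration only, not quantities used in our analysis.

```python
import numpy as np

def eb_decompose(eps1, eps2, box_size):
    """Build Fourier-space E- and B-mode fields from the ellipticity components,
    following Eq. (4); the line of sight is taken along the third axis."""
    n = eps1.shape[0]
    e1_k = np.fft.fftn(eps1)
    e2_k = np.fft.fftn(eps2)

    # Angular wavenumbers of the grid; phi_k is the angle of the transverse
    # wave vector with respect to the x1-axis.
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    k1, k2, _ = np.meshgrid(k, k, k, indexing="ij")
    phi_k = np.arctan2(k2, k1)

    E = e1_k * np.cos(2 * phi_k) + e2_k * np.sin(2 * phi_k)
    B = -e1_k * np.sin(2 * phi_k) + e2_k * np.cos(2 * phi_k)
    return E, B

# Toy usage with a random shape field (illustration only).
rng = np.random.default_rng(0)
eps1, eps2 = rng.normal(size=(2, 64, 64, 64))
E, B = eb_decompose(eps1, eps2, box_size=1000.0)  # box side in Mpc/h
```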
The Fourier-space auto-spectra of these shape fields are defined as \[\langle E(\mathbf{k})E(\mathbf{k}^{\prime})\rangle =(2\pi)^{3}\delta^{D}(\mathbf{k}+\mathbf{k}^{\prime})P_{EE}( \mathbf{k}), \tag{5}\] \[\langle B(\mathbf{k})B(\mathbf{k}^{\prime})\rangle =(2\pi)^{3}\delta^{D}(\mathbf{k}+\mathbf{k}^{\prime})P_{BB}( \mathbf{k}).\] The B-mode auto-correlation is the only non-vanishing two-point statistic involving B-modes; the cross-correlation \(P_{EB}\) vanishes in a parity-invariant Universe (Biagetti & Orlando, 2020), while the cross-correlation with the matter density \(P_{mB}=0\) even without this assumption. Note that the power spectra \(P_{EE}(\mathbf{k}),P_{BB}(\mathbf{k})\) depend not only on \(k=|\mathbf{k}|\) but also on \(\mu=\mathbf{k}_{3}/k\), since statistical isotropy is broken by the line-of-sight direction \(x^{3}\). In the context of weak lensing, one typically considers observables that are integrated along the line-of-sight and are thus functions of the projected separation (or in Fourier space, the length of the perpendicular mode \(\ell\)). For example, Vlah et al. (2021) derived expressions for flat-sky angular power spectra \(C_{XY}(\ell)\) of IA. Unsurprisingly, if one instead attempts to use full three-dimensional information on the positions of the tracer, higher signal-to-noise can be achieved (Singh et al., 2023; Kurita & Takada, 2023). All information in the three-dimensional power spectra \(P_{XY}\) in Equation (5) can be summarised in terms of multipole moments \(P_{XY}^{(\ell)}(k)\), de fined as \[P^{(\ell)}_{XY}(k)=\frac{2\ell+1}{2}\int_{-1}^{1}\mathrm{d}\mu\mathcal{L}_{\ell} (\mu)P_{XY}(k,\mu), \tag{6}\] where \(\mathcal{L}_{\ell}(\mu)\) is a Legendre polynomial of order \(\ell\). Because the 3D shape field is a statistically isotropic rank two tensor5, the \(\mu\)-dependence of the E- and B-modes is only second order in \(\mu\). As such, the associated spectra are at most fourth order in \(\mu\), and clearly they should be even in \(\mu\). In fact, the B-mode is fully captured by the first two moments \(\ell=0,2\). These arguments are based solely on symmetries and hold independently of perturbation theory. Within the perturbative model of Eq. (1), we obtain the following explicit expressions for the B-mode multipoles: Footnote 5: Throughout this paper, we ignore redshift–space distortions. \[\begin{split} P^{(0)}_{BB}(k,z)=&\frac{1}{3}\big{[} (b_{2,1}^{g})^{2}(2I_{55}+I_{66})+2(b_{2,1}^{g}b_{2,3}^{g})(I_{66}-I_{67})\\ &+(b_{2,3}^{g})^{2}(I_{66}-2I_{67}+I_{77})\big{]},\end{split} \tag{7}\] \[\begin{split} P^{(2)}_{BB}(k,z)=&\frac{2}{3} \big{[}(b_{2,1}^{g})^{2}(I_{66}-I_{55})+2(b_{2,1}^{g}b_{2,3}^{g})(I_{66}-I_{67} )\\ &+(b_{2,3}^{g})^{2}(I_{66}-2I_{67}+I_{77})\big{]},\end{split} \tag{8}\] where the bias parameters \(b_{2,1}^{g}\) and \(b_{2,3}^{g}\) are related to those from the basis above via the linear combinations \[b_{2,1}^{g}=\frac{1}{3}b_{KK}+b_{\delta K};\qquad b_{2,3}^{g}=-\frac{2}{3}b_{ KK}+b_{\delta K}. \tag{9}\] The linear alignment bias \(b_{1}^{g}\) does not appear explicitly in the expressions for the B-modes. The expressions for the loop integrals \(I_{55},I_{66},I_{67},I_{77}\) as a function of wavenumber \(k\) are provided in Appendix A. We adhere to the notation of Vlah et al. (2020); Bakx et al. (2023). These integrals are computed in Mathematica with the FFT-Log method from Simonovic et al. (2018). In Figure 1 we show the dependence of these integrals on wavenumber for our fiducial cosmology. 
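As an illustration of how Eqs. (7)-(9) are assembled in practice, the short sketch below combines precomputed loop-integral arrays with the two quadratic bias parameters. The loop integrals are assumed to be supplied externally (e.g. from the FFT-Log evaluation), and the numerical values shown are placeholders rather than results of our pipeline.

```python
import numpy as np

def bmode_biases(b_KK, b_deltaK):
    """Combinations of Eq. (9) entering the B-mode multipoles."""
    b21 = b_KK / 3.0 + b_deltaK
    b23 = -2.0 * b_KK / 3.0 + b_deltaK
    return b21, b23

def p_bb_multipoles(I55, I66, I67, I77, b_KK, b_deltaK):
    """B-mode monopole and quadrupole of Eqs. (7)-(8) from the loop integrals."""
    b21, b23 = bmode_biases(b_KK, b_deltaK)
    cross = 2.0 * b21 * b23 * (I66 - I67) + b23**2 * (I66 - 2.0 * I67 + I77)
    p0 = (b21**2 * (2.0 * I55 + I66) + cross) / 3.0
    p2 = 2.0 * (b21**2 * (I66 - I55) + cross) / 3.0
    return p0, p2

# Placeholder usage: in practice the I_nm arrays come from the FFT-Log evaluation.
k = np.logspace(-2, 0, 50)
I55 = I66 = I67 = I77 = np.ones_like(k)       # dummy values, illustration only
P0, P2 = p_bb_multipoles(I55, I66, I67, I77, b_KK=0.5, b_deltaK=0.3)
```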
Note that both of these expressions scale with redshift as \(D(z)^{4}\), since the loop integrals \(I_{\mathrm{nm}}(k,z)\) involve the square of the linear power spectrum. We emphasise that these relations are valid for any tensorial tracer of large-scale structure, regardless of its precise definition. This includes halos, galaxies, and galaxy clusters. In this framework, modelling the galaxy-halo connection (or the cluster-halo connection, for that matter) is not necessary, which is a major advantage in terms of reducing the number of free parameters of the model. We expect the EFT framework to yield valid predictions up to wavenumbers of order the non-linear wavenumber \(k_{\mathrm{NL}}\), which we define here through the relation (Chisari and Pontzen, 2019), \[k_{\mathrm{NL}}^{-2}(z)=\frac{1}{12\pi^{2}}\int_{0}^{\infty}P_{\mathrm{L}}(k^ {\prime},z)\mathrm{d}k^{\prime}\,. \tag{10}\] Note that \(k_{\mathrm{NL}}\) grows with increasing redshift, because gravitational non-linearities grow with time. Put differently, the (RMS) density contrast was smaller in the past, because structure has had less time to grow. From Eq. (10) we can deduce that \(k_{\mathrm{NL}}(z)\propto D(z)^{-1}\) where \(D(z)\) is the growth factor. In Bakx et al. (2023) it was demonstrated that the EFT framework describes the B-mode signal for dark matter halos successfully up to \(k_{\mathrm{max}}\simeq 0.28\)\(h/\mathrm{Mpc}\) at \(z=0\). If we assume that the maximum wavenumber is proportional to \(k_{\mathrm{NL}}\) at all redshifts, we see that \(k_{\mathrm{max}}\) also grows as \(D(z)^{-1}\) with increasing redshift. Hence, \[k_{\mathrm{max}}(z)=k_{\mathrm{max}}(z_{*})\frac{D(z_{*})}{D(z)}=(0.28\,h/ \mathrm{Mpc})\frac{1}{D(z)}. \tag{11}\] This is the strategy we adopt to determine scale cuts in this work. That is, we assume the range of validity of the perturbative expansion to be the same for clusters as for the dark matter halos hosting them6. Footnote 6: It is possible that very massive tracers require the inclusion of higher-derivative terms (Fujita et al., 2020). This is beyond the scope of this paper, although we may note that the first term of this kind, namely \(\nabla^{2}K_{ij}\), does not contribute to B–modes. Nontrivial corrections would arise from spatial derivatives of second order quantities. In principle, all coefficients appearing in Eq. (1) are independent, if no assumptions are made about the origin and evolution of the IA signal. However, several works have suggested that, for dark matter halos, to a good approximation, intrinsic alignment is local in Lagrangian space (Bakx et al., 2023; Maion et al., 2023; Akitsu et al., 2023). That is to say, the linear alignment model is a good approximation in the far past, at the initial time when alignments originated. The non-linear terms appearing in Eq. (1) then arise due to advection effects displacing the halo from its initial position. This treatment yields the following co-evolution relations among the higher-order bias parameters (Bakx et al., 2023): \[b_{2,1}^{g}=(b_{1}^{s}-1)b_{1}^{g},\quad b_{2,3}^{g}=b_{1}^{s}b_{1}^{g}, \tag{12}\] where \(b_{1}^{s}\) is the linear galaxy bias defined through \(\delta_{g}=b_{1}^{s}\delta\). Strictly speaking, the coevolution relations have been verified only for halos (rather than clusters). The work of Akitsu et al. (2023) applied the quadratic field method from Schmittfull et al. (2015) to measure second order bias parameters with high accuracy. 
Their measurements showed good agreement with the coevolution relations in N-body simulations across a halo mass range of \(\sim 10^{12}-10^{15}M_{\odot}/h\) and for redshifts \(0<z<1\). The cluster samples we will consider below have a typical host halo mass of \(\sim 10^{14}M_{\odot}/h\). (At these masses it is safe to assume that every such halo hosts a galaxy cluster; see, e.g., Kravtsov et al. 2004.) Moreover, Shi et al. (2023) found, using cluster mock catalogues from Sunayama et al. (2020), that the shape of the dark matter halo traces the shape of the cluster (as determined from all the member galaxies) well. For this reason, we expect that clusters trace the shapes and positions of their host halos very well, and therefore the coevolution relations hold for clusters too. One caveat in this argument, as argued in Shi et al. (2023), is that the shape of clusters as determined by cluster finders using photometric redshift data is not equal to the true shape of the cluster, because of measurement uncertainties in the redshift and missed satellite members. Their work indicates that this reduces the entire alignment signal by a factor of \(\sim 2.5\) at separations \(r_{\rm p}\sim 6-70\) Mpc/\(h\) irrespective of cluster richness, which includes the scales relevant for this work. Hence the observed cluster alignment also obeys the coevolution relations, since both sides of those relations are linear in the shape bias parameters. The reduction in the alignment signal is taken into account in our analysis, because the model we use for estimating \(b_{1}^{g}\) for clusters is based on observational data rather than estimates from simulations.

Figure 1.— The integrals \(I_{\mathrm{nm}}\), evaluated at redshift zero, that go into the calculation of the B-mode power spectrum multipoles, Eqs. (7) and (8).

## 3 Forecasting Setup

The objective of this analysis is to estimate the signal of B-mode shape-shape correlations of galaxy clusters that is expected to be observed by next-generation galaxy surveys. In addition, we are interested in estimating the noise on this observable and quantifying the expected significance of the detection of the B-mode signal. Specifically, we will focus on estimating this signal for an experimental setup that simulates the upcoming LSST survey. To this end, we need to estimate the redshift distribution of the observed galaxy clusters and the covariance matrix of the measured signal. In this section, we outline the procedure to do so.

### Clusters in LSST

We use the halo model formalism to predict the expected number of observed galaxy clusters. According to this, the cluster count can be estimated by \[\frac{\mathrm{d}N}{\mathrm{d}z}=\Omega_{\rm s}\,\frac{\mathrm{d}^{2}V_{\rm c}}{\mathrm{d}z\mathrm{d}\Omega}\int\mathrm{d}M\frac{\mathrm{d}n}{\mathrm{d}M}\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{\mathrm{d}\lambda}{\lambda}\,p(\ln\lambda|M)\,, \tag{12}\] where \(\Omega_{\rm s}\) is the survey area, \(V_{\rm c}\) the comoving volume, \(n(M,z)\) the halo mass function and \(\lambda\) the richness of the cluster, with the rightmost integral carried out between the minimum and maximum richness considered. We use the model from Tinker et al. (2010) for the halo mass function. The probability density function \(p(\ln\lambda|M)\) is the mass-richness relation and quantifies the distribution of cluster richness given the halo mass and redshift. We model this relation following Lima and Hu (2005); Murata et al.
(2018), \[p(\ln\lambda|M)=\frac{1}{\sqrt{2\pi}\sigma_{\ln\lambda|M}}\exp\left(-\frac{ \left(\ln\lambda-\left\langle\ln\lambda\right\rangle(M)\right)^{2}}{2\sigma_{ \ln\lambda|M}^{2}}\right)\,. \tag{13}\] The distribution above is a log-normal with the mean relation often parametrised by \[\langle\ln\lambda\rangle(M)=A+B\ln\left(\frac{M}{M_{\rm piv}}\right) \tag{14}\] where typically it is chosen that \(M_{\rm piv}=3\times 10^{14}\) M\({}_{\odot}/h\) and the scatter given by \[\sigma_{\ln\lambda|M}=\sigma_{0}+q\,\ln\left(\frac{M}{M_{\rm piv}}\right)\,. \tag{15}\] In both the mean and scatter of the mass-richness relation, a redshift dependence can also be considered, in the form of an additive \(\ln(1+z)\) term, multiplied by a factor \(q_{z}\). However, no such term is supported by current data and we do not consider it in this analysis. We assume the observed cluster sample to be volume limited and base our forecast on a sample generated from red sequence cluster finders, such as redMaPPer (Rykoff et al., 2014). This is the same cluster finder that was used in the analysis of Murata et al. (2018) and we use the best-fit values for the parameters of the mass-richness relation from that work. Specifically, we use \(A=3.207\), \(B=0.993\), \(\sigma_{0}=0.456\) and \(q=-0.169\). In the integration of Equation (13) we use \(\lambda_{\rm min}=20\), \(\lambda_{\rm max}=100\) and integrate over \(10^{12}\leq M/[\mbox{M}_{\odot}/h]\leq 2\times 10^{15}\), following the ranges used in Murata et al. (2018). The redshift out to which the cluster sample is volume limited is determined by the depth of the survey and the wavelength at which the observations are made. When the 4000 A break is redshifted beyond the wavelength of observations, the galaxy red sequence can no longer be detected and that defines the limit of the cluster sample. However, the survey must also be deep enough to detect enough galaxies out to that redshift. A galaxy of \(0.2L_{*}\) luminosity needs to be detected at that redshift for optimum richness estimation of the volume limited cluster catalogue (Rykoff et al., 2014). The filters for LSST are \(u,g,r,i,z\) and \(y\), with \(y\) being the reddest available filter. The turn-on point for the \(z\) filter1 is approximately at 8500 A, which means the maximum redshift that the red sequence can be reliably estimated is \(z_{\rm max}\approx 1.1\). The limiting magnitude in the \(i\)-band is also forecasted to be \(i_{\rm lim}=25.3\) for the Year 10 LSST data (The LSST Dark Energy Science Collaboration et al., 2018). Most recently, the redMaPPer algorithm was applied on data from the Dark Energy Survey Year 1 data release (DESY1, Drlica-Wagner et al., 2018; McClintock et al., 2019) using filters up to \(z\)-band for the determination of the galaxy red sequence. The limiting magnitude in the \(i\)-band for DESY1 is \(i_{\rm lim}\approx 22.9\). We therefore assume2 that survey depth will not be an issue and that our forecasted cluster sample is volume limited out to redshift of \(z_{\rm lim}=1.1\). We note here that the fiducial value for \(q\) in Eq. (15) has been chosen as 0 in several other forecasts of LSST clusters (e.g. The LSST Dark Energy Science Collaboration et al., 2018; Eifler et al., 2021). These parameters can depend on the cluster finder algorithm and even on the specific setup choices of the algorithm. In this work, we do not assume \(q\) to be zero, since our forecast is based on the same cluster finder algorithm and setup as in Murata et al. 
(2018), where \(q\) was shown to be significantly non-zero. In Section 4, we repeat our analysis with slightly different values for the mass-richness relation parameters and show how these affect our results. Note that requiring a \(0.2L_{*}\) galaxy to be detectable at \(z=1.1\) corresponds to a limiting magnitude of 24.85 mag; as we will see in Section 4, the majority of the SNR is gained at redshifts lower than 1.1, hence inaccuracies in the magnitude limit estimation are not expected to qualitatively alter our results.

### Bias coefficients

Using the halo model formalism, we are also able to compute the average linear galaxy cluster position bias \(b_{1}^{\rm s}\) using the halo mass function \(n(M,z)\), the halo bias function \(b_{\rm h}(M,z)\) and the mass-richness relation. We can write \[b_{1}^{\rm s}=\frac{1}{\bar{n}}\int{\rm d}M\,\frac{{\rm d}n}{{\rm d}M}\,b_{\rm h}(M,z)\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{{\rm d}\lambda}{\lambda}\,p(\ln\lambda|M)\,, \tag{16}\] where \(\bar{n}\) is the average number density given by \[\bar{n}=\int{\rm d}M\,\frac{{\rm d}n}{{\rm d}M}\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{{\rm d}\lambda}{\lambda}\,p(\ln\lambda|M)\,. \tag{17}\] The halo bias function we use is described in Tinker et al. (2010). In a similar way, we can compute the average linear alignment bias \(b_{1}^{g}\) by assuming a mass dependence of the amplitude of the linear alignment model, \(A_{\rm IA}\). To model this, we use measurements from Fortuna et al. (in prep.), who measured the mass dependence of the linear alignment amplitude from several samples of luminous red galaxies and clusters from the SDSS redMaPPer cluster catalogue (Rykoff et al., 2014; van Uitert & Joachimi, 2017). This relation had already been shown to hold for dark matter halos (Piras et al., 2018; Kurita et al., 2021). Therefore, we assume a power-law model, \[A_{\rm IA}(M)=A_{\rm IA,0}\left(\frac{M}{M_{\rm p}}\right)^{\beta_{M}}\,, \tag{18}\] with a pivot mass \(M_{\rm p}=10^{13.5}\)\({\rm M}_{\odot}/h\). The alignment amplitude is related to the linear alignment bias via \[b_{1}^{g}(M)=-2A_{\rm IA}(M)\frac{C_{1}\rho_{\rm crit}\Omega_{\rm m}}{D(z)}\,, \tag{19}\] where \(\rho_{\rm crit}\) is the critical density of the Universe and \(C_{1}=5\times 10^{-14}\)\({\rm M}_{\odot}^{-1}h^{-2}{\rm Mpc}^{3}\) is a normalisation constant typically used in intrinsic alignment measurements, first determined by Brown et al. (2002). Following Equation (19), we can estimate \[b_{1}^{g}=\frac{1}{\bar{n}}\int{\rm d}M\,\frac{{\rm d}n}{{\rm d}M}\,b_{1}^{g}(M)\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\frac{{\rm d}\lambda}{\lambda}\,p(\ln\lambda|M)\,. \tag{20}\] We use the best-fit values from Fortuna et al. (in prep.) and have \(A_{\rm IA,0}=5.74\) and \(\beta_{M}=0.44\). The amplitude, \(A_{\rm IA,0}\), was determined using redMaPPer galaxy clusters to pinpoint the IA behaviour at high masses. The shapes of these galaxy clusters are obtained from the positions of observed satellite members, weighted by their membership probability. As noted in Section 2, these shapes have been shown to suffer from projection effects and are expected to have a lower alignment amplitude compared to alignments of dark matter halos (Shi et al., 2023). However, we do not need to correct for this, since our forecasting setup is based on a redMaPPer-like galaxy cluster sample. Thus, we use the value of \(A_{\rm IA,0}\) as is; the projection effects are implicitly already taken into account.
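To make the selection and averaging explicit, the sketch below evaluates the richness selection of Eqs. (13)-(15) analytically and the sample-averaged biases of Eqs. (16), (17) and (20) by quadrature. The halo mass function and halo bias are left as user-supplied callables (the toy functions at the end are stand-ins, not the Tinker et al. 2010 fits), the mass-richness and alignment-amplitude parameters are the fiducial values quoted above, and \(\rho_{\rm crit}=2.775\times 10^{11}\,h^{2}\,{\rm M}_{\odot}\,{\rm Mpc}^{-3}\) is assumed.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import simpson

# Fiducial parameters quoted in the text (Murata et al. 2018; Fortuna et al., in prep.).
A, B, SIG0, Q = 3.207, 0.993, 0.456, -0.169
M_PIV = 3.0e14                          # Msun/h, pivot of the mass-richness relation
A_IA0, BETA_M = 5.74, 0.44
M_P = 10**13.5                          # Msun/h, pivot of the alignment amplitude
C1_RHOC_OM = 5e-14 * 2.775e11 * 0.315   # C1 * rho_crit * Omega_m (h factors cancel)

def richness_selection(M, lam_min=20.0, lam_max=100.0):
    """Inner richness integral of Eq. (12) for the log-normal of Eqs. (13)-(15)."""
    mean = A + B * np.log(M / M_PIV)
    sig = SIG0 + Q * np.log(M / M_PIV)
    lo = (np.log(lam_min) - mean) / (np.sqrt(2.0) * sig)
    hi = (np.log(lam_max) - mean) / (np.sqrt(2.0) * sig)
    return 0.5 * (erf(hi) - erf(lo))

def b1_shape(M, growth):
    """Linear alignment bias of Eq. (19) with the power-law amplitude of Eq. (18)."""
    return -2.0 * A_IA0 * (M / M_P) ** BETA_M * C1_RHOC_OM / growth

def averaged_biases(dn_dM, b_halo, growth, lnM):
    """Sample-averaged b_1^s and b_1^g (Eqs. 16, 17 and 20) and number density."""
    M = np.exp(lnM)
    weight = dn_dM(M) * M * richness_selection(M)   # extra M because dM = M dlnM
    nbar = simpson(weight, x=lnM)
    b1s = simpson(weight * b_halo(M), x=lnM) / nbar
    b1g = simpson(weight * b1_shape(M, growth), x=lnM) / nbar
    return b1s, b1g, nbar

# Toy usage with schematic mass function and halo bias (stand-ins, not Tinker 2010).
lnM = np.linspace(np.log(1e12), np.log(2e15), 200)
b1s, b1g, nbar = averaged_biases(
    dn_dM=lambda M: 1e-5 * (M / 1e13) ** -1.9 * np.exp(-M / 1e15) / M,
    b_halo=lambda M: 1.0 + (M / 1e13) ** 0.5,
    growth=0.8, lnM=lnM)
```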
### Covariance estimation

We model the expected error on the measured B-mode IA signal with a Gaussian covariance following Yamamoto et al. (2006); Taruya et al. (2011). The covariance is given by \[{\rm Cov}^{\ell\ell^{\prime}}(k_{n})=\frac{(2\ell+1)(2\ell^{\prime}+1)}{V_{n}\,V_{s}}\int_{-1}^{1}{\rm d}\mu\,\mathcal{L}_{\ell}(\mu)\mathcal{L}_{\ell^{\prime}}(\mu)\left(P_{\rm BB}(k_{n},\mu)+\frac{\sigma_{\varepsilon}^{2}}{\bar{n}}\right)^{2}\,, \tag{21}\] where \(\sigma_{\varepsilon}\) is the RMS ellipticity of the galaxy cluster sample, \(\bar{n}\) is the number density, \(V_{s}\) is the volume of the survey and \(V_{n}\) the volume element of the thin shell in Fourier space, given by \[V_{n}=\frac{4\pi^{2}k_{n}^{2}{\rm d}k_{n}}{(2\pi)^{3}}\,. \tag{23}\] The volume element above is computed in bins of \(k\) with width \({\rm d}k_{n}\) and central value \(k_{n}\). At each combination of \(\ell,\ell^{\prime}\), the covariance matrix has \(N_{k}\times N_{k}\) dimensions, where \(N_{k}\) is the number of \(k\)-bins used in the analysis. The SNR is then given by \[\left(\frac{\rm S}{\rm N}\right)^{2}=\sum_{\ell,\ell^{\prime}}\left(P_{BB}^{(\ell)}(k_{n})\right)^{\rm T}\left({\rm Cov}^{\ell\ell^{\prime}}(k_{n})\right)^{-1}P_{BB}^{(\ell^{\prime})}(k_{n})\,, \tag{24}\] where the sum is carried over \(\ell,\ell^{\prime}\in\{0,2\}\). The covariance above ignores the connected non-Gaussian terms and super-sample covariance terms (Takada & Hu, 2013; Kurita et al., 2021). The mean redshift of our forecasted galaxy cluster sample is \(\bar{z}=0.676\), which gives \(k_{\rm max}\approx 0.4\,h/\)Mpc. In Kurita et al. (2021, Figure 9) it was shown that the Gaussian covariance overestimates the signal-to-noise ratio by approximately 25% at \(k_{\rm max}\approx 0.4\,h/\)Mpc when considering the cross-power spectra of halo number counts and E-modes, \(P_{\rm HE}\). We expect this effect to be less dramatic when considering B-mode auto-power spectra, because these are primarily shape-noise dominated. As will be clear in Section 4, our results do not change qualitatively even if the SNR is reduced by 25%. Hence, we limit our analysis to modelling the Gaussian covariance only. One additional assumption that goes into this covariance is that the shape noise statistics are purely Poissonian, meaning the only contribution to the Gaussian noise is the Poissonian shape noise, \(\sigma_{\epsilon}^{2}/\bar{n}\). An additional contribution arises from the non-linear evolution of IA (Blazek et al., 2019). This non-Poissonian shape noise is measured in Kurita et al. (2021) to be 5-10% of the Poissonian contribution. In Section 4, we repeat our analysis with a shape noise that is increased by 10% and show that the effect on the overall SNR is small.

## 4. Results

In our fiducial setup, we consider a galaxy cluster sample obtained via a redMaPPer-like algorithm applied to LSST Y10 data, the final data products from 10 years of observations. The sample is assumed to be volume limited. The minimum redshift is assumed to be 0.2, similarly to what was found in McClintock et al. (2019), while the maximum redshift is 1.1 (see Section 3). The redshift distribution is obtained via Eq. (12) and the total number of clusters forecasted is approximately 124,000, with a mean redshift of \(\bar{z}=0.676\) and a total number density of \(\bar{n}=4.44\times 10^{-7}\) [Mpc/\(h\)]\({}^{-3}\).
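Before quoting the fiducial numbers, the covariance and SNR assembly of Eqs. (21)-(24) can be summarised in a short numpy/scipy sketch. The anisotropic B-mode spectrum is left as a user-supplied callable, and the survey volume, shot noise and binning used in the toy call below are illustrative placeholders rather than the exact values of our pipeline.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def gaussian_covariance(p_bb, k_cen, dk, v_s, nbar, sigma_eps, ells=(0, 2), nmu=64):
    """Gaussian multipole covariance of Eq. (21), diagonal in k.

    p_bb(k, mu) : callable, anisotropic B-mode power spectrum model.
    Returns an array of shape (n_ell, n_ell, n_k)."""
    mu, w = leggauss(nmu)                                     # nodes/weights on [-1, 1]
    v_n = 4.0 * np.pi**2 * k_cen**2 * dk / (2.0 * np.pi)**3   # shell volume, Eq. (23)
    p_tot = p_bb(k_cen[:, None], mu[None, :]) + sigma_eps**2 / nbar
    cov = np.empty((len(ells), len(ells), len(k_cen)))
    for i, l1 in enumerate(ells):
        for j, l2 in enumerate(ells):
            leg = eval_legendre(l1, mu) * eval_legendre(l2, mu)
            cov[i, j] = ((2 * l1 + 1) * (2 * l2 + 1) / (v_n * v_s)
                         * np.sum(w * leg * p_tot**2, axis=1))
    return cov

def total_snr(p0, p2, cov):
    """SNR of Eq. (24), summing ell, ell' over {0, 2} and over the k-bins kept;
    bins above the scale cut k_max should be removed before calling this."""
    snr2 = 0.0
    for n in range(cov.shape[-1]):
        vec = np.array([p0[n], p2[n]])
        snr2 += vec @ np.linalg.inv(cov[:, :, n]) @ vec
    return np.sqrt(snr2)

# Toy usage: a made-up P_BB(k, mu) model, purely for illustration.
k_cen = np.logspace(np.log10(0.005), 0.0, 117)
dk = k_cen * np.log(10.0) * 0.02                              # widths of the log bins
p_toy = lambda k, mu: 50.0 * (k / 0.1)**2 * (1.0 + mu**2)
cov = gaussian_covariance(p_toy, k_cen, dk, v_s=2.7e10, nbar=4.4e-6, sigma_eps=0.098)
p0_toy = 50.0 * (k_cen / 0.1)**2 * (4.0 / 3.0)                # monopole of the toy model
p2_toy = 50.0 * (k_cen / 0.1)**2 * (2.0 / 3.0)                # quadrupole of the toy model
print(total_snr(p0_toy, p2_toy, cov))
```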
We compute the B-mode theory power spectra in 117 logarithmically spaced bins of \(k\), with central values \(k_{n}\) from 0.005 \(h\)/Mpc up to 1.0 \(h\)/Mpc. The width of the base-10 logarithm of each bin is given by 0.02. The power spectra are computed at the mean redshift of the survey. We assume a survey with \(f_{\rm sky}=0.43\) sky coverage fraction (Ivezic et al., 2019). We also assume an ellipticity RMS dispersion of \(\sigma_{\epsilon}=0.098\), based on what has been measured from existing redMaPPer cluster data (Vedder and Chisari, 2021). The expected IA signal of the galaxy cluster sample together with the forecasted error on the signal are shown in Figure 2.

Figure 2.— The signal (solid/dashed lines, computed at the cluster sample mean redshift) and noise (dotted lines, square-root of the variance) of the forecasted B-mode power spectra (monopole and quadrupole, \(\ell=0\) and \(\ell=2\), respectively). Solid lines represent positive signal and dashed ones negative. The vertical dashed black line shows the \(k_{\rm max}\) at the redshift considered in our fiducial analysis when calculating the SNR.

On large scales, the B-mode signal is several orders of magnitude lower than the expected level of noise. On smaller scales, the signal increases, especially in the monopole, which drives all the SNR. As \(k_{\rm max}\) increases, the SNR also increases. We note, however, that this is unlikely for a more realistic covariance and the actual SNR is expected to plateau around \(k_{\rm max}\approx 0.4\,h\)/Mpc (Kurita et al., 2021). This is also the maximum wavenumber we consider in our fiducial analysis. The resulting SNR for our fiducial analysis is equal to 11.79. This is much higher than the typical threshold for detection of 5 and suggests that B-mode correlations from galaxy cluster shape-shape 3D power spectra are going to be detectable with high significance in a survey like LSST.

So far, we have assumed that the signal measured from shape-shape correlations of the cluster sample can be described by computing the theory power spectra at the mean redshift of the sample. This is a good approximation when the redshift bin has a small width but might not be accurate over larger redshift baselines. However, the result over the full redshift range is still indicative of the expected SNR, and we keep our fiducial setup with this caveat. Motivated by this, it is interesting to explore the expected SNR in a smaller redshift bin, which will also reveal the redshift range that drives the constraining power. Therefore, we divide the redshift distribution into 5 equidistant redshift bins and repeat the analysis in each bin. In Figure 3 we show the forecasted SNR evaluated for each redshift bin and plotted at the mean redshift of each bin.

Figure 3.— The dependence of the SNR of the forecasted B-modes of LSST cluster alignments as a function of redshift. The points show the expected SNR in each redshift bin assuming 5 equidistant bins between \(0.2<z<1.1\). The \(x\)-axis shows the mean redshift of each bin.

We see that the intermediate redshift bins around 0.45 have a higher SNR. This is important to note since the analysis choices were motivated by cluster samples at low redshift, below 0.35. While the true SNR might be different due to extrapolation of these choices to higher redshift, it is not expected to differ by a very significant amount, since the low-redshift clusters contain a large portion of the constraining power. In addition, already at the first redshift bin, with mean redshift of 0.3, the forecasted SNR is significant. It is also useful to note that the overall SNR from binning the sample is higher than the SNR obtained when the sample is analysed as a whole. This can be understood in two ways: a larger redshift baseline means that the total volume of the sample increases and can contribute negatively to the overall SNR due to the increased noise that comes with it. In addition, a large redshift bin includes correlations between shapes of very widely separated clusters, which are expected to be very small. Therefore, it is clear that a binning strategy is preferred for increasing the overall SNR of the measurement, and the one presented in Figure 3 is only one such possible strategy. However, optimising the binning is more relevant when measuring this signal in real data and is beyond the scope of this work.

### Testing the SNR robustness

As noted above, the forecast is based on an extrapolation of existing data to the depth and redshift of LSST. Moreover, there is an inherent uncertainty on the fiducial values we have chosen, which might significantly impact our forecasted SNR. In this section, we vary these analysis choices within their expected errors from observations and check how the forecasted SNR is affected and whether it remains high enough for a conclusive detection. We decide to vary these parameters individually, since it is difficult to assess the correlation between them. This might be possible for the same class of parameters, such as the linear alignment bias mass scaling or the mass-richness relation. Nevertheless, we choose to vary them independently for simplicity and note that this is pessimistic, since correlations in the mass-richness relation restrict the allowed values of these parameters in a way that improves the SNR (see e.g. Murata et al., 2018, Figure 6). We vary \(A_{\mathrm{IA},0}\) and \(\beta_{M}\) by 2 times the error on their measurement in each direction, with \(\sigma_{A_{\mathrm{IA},0}}=0.32\) and \(\sigma_{\beta_{M}}=0.04\). We also vary the mass-richness relation parameters in the same way, where \(\sigma_{A}=+0.044/-0.046\), \(\sigma_{B}=+0.041/-0.055\), \(\sigma_{\sigma_{0}}=+0.047/-0.039\) and \(\sigma_{q}=+0.035/-0.026\) (meaning we vary them differently in the positive and negative direction). We also run our fiducial analysis with the assumed \(\sigma_{\epsilon}^{2}\) increased by 10% and another time without evolving \(k_{\mathrm{max}}\) with redshift but fixing it to the value at \(z=0\). The results from these tests are presented in Figure 4.

Figure 4.— Signal-to-noise ratio of the forecasted B-mode power spectra from intrinsic alignments of galaxy clusters in LSST, for a number of different analysis choices, grouped by colour and symbol. Several setup parameters are allowed to vary individually within twice the value of their standard error. The last two points correspond to fixing the maximum \(k\) to its value at redshift 0 and to increasing the \(\sigma_{\epsilon}\) in the covariance by 10%. The horizontal dashed line shows the threshold for detection, SNR \(=5\). The horizontal blue line shows the SNR obtained for our fiducial analysis choices.

We see that for most of these tests the SNR is significant, above the threshold of 5 for detection. However, it is clear that the mass-richness relation parameters have the largest impact on the signal, with \(B\) and \(q\) being the most relevant. These two parameters are also highly negatively correlated (Murata et al., 2018), where a low value for \(B\) is preferred only for a value of \(q\) that is closer to zero (note that \(q\) is negative, so \(q+2\sigma_{q}\) is closer to zero) and vice versa. Therefore, the negative impact of one of these on the SNR might be alleviated by the expected shift of the other in the opposite direction. It is also clear that assuming a low \(k_{\mathrm{max}}\) has a drastic effect on the SNR, as is evident in Figure 2.
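Schematically, the robustness test described above amounts to a one-at-a-time loop over shifted parameters. In the sketch below, forecast_snr is a hypothetical stand-in for the full pipeline (cluster counts, averaged biases, B-mode multipoles, covariance, SNR) and is stubbed out so that the snippet runs; the parameter values and errors are those quoted in the text.

```python
def forecast_snr(**params):
    """Hypothetical stand-in for the full forecast pipeline; a real implementation
    would recompute the cluster sample, biases, P_BB multipoles and covariance."""
    return 11.79   # placeholder: the fiducial SNR, not a real computation

fiducial = {"A_IA0": 5.74, "beta_M": 0.44,
            "A": 3.207, "B": 0.993, "sigma_0": 0.456, "q": -0.169}
# Two-sided (plus, minus) errors quoted above; each parameter is shifted
# one at a time by twice these values.
errors = {"A_IA0": (0.32, 0.32), "beta_M": (0.04, 0.04), "A": (0.044, 0.046),
          "B": (0.041, 0.055), "sigma_0": (0.047, 0.039), "q": (0.035, 0.026)}

snr_scan = {}
for name, (err_plus, err_minus) in errors.items():
    for label, shift in (("+2sig", 2.0 * err_plus), ("-2sig", -2.0 * err_minus)):
        params = dict(fiducial)
        params[name] = fiducial[name] + shift
        snr_scan[name + label] = forecast_snr(**params)
```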
The strong dependence on \(k_{\mathrm{max}}\) highlights the importance of developing accurate models of the intrinsic alignment power spectra down to small scales for measuring the B-modes with a high SNR.

### Forecast for existing cluster catalogues

Since many cluster catalogues have already been produced from previous galaxy surveys, it is interesting to ask whether a B-mode signal would be expected to have been observed in shape correlations of these clusters. In particular, the redMaPPer cluster catalogue from the Sloan Digital Sky Survey (SDSS, York et al., 2000; Rykoff et al., 2014) has been used to produce the highest signal-to-noise measurements of galaxy cluster alignments in data to date (van Uitert & Joachimi, 2017). A similar, deeper catalogue on a much smaller area has been generated using data from DESY1 (McClintock et al., 2019). Here, we forecast the B-mode SNR for these two cluster catalogues. We choose to model the characteristics of these catalogues with the same methodology used for the LSST cluster sample, instead of using the released data. We have checked that our methodology reproduces the redshift distributions of these catalogues. We leave our fiducial setup unchanged, since the mass-richness relation and alignment bias mass scaling were determined on the SDSS redMaPPer cluster catalogue itself. For the SDSS redMaPPer catalogue, we use a survey area of \(\Omega_{\mathrm{s}}=10,400\) deg\({}^{2}\) and a redshift range of \(0.1<z<0.35\), where the sample is shown to be volume limited. We forecast an SNR \(=5.23\), which is slightly above the threshold of 5 for a detection. Repeating this analysis for the DESY1 sample, we use \(\Omega_{\mathrm{s}}=1,500\) deg\({}^{2}\) and \(0.2<z<0.7\), and we forecast SNR \(=3.28\). This result suggests that a marginal detection might already be within reach with current galaxy cluster catalogues.

## 5. Discussion

In this work, we forecast the detection of IA B-modes for a sample of galaxy clusters constructed with the redMaPPer algorithm applied to data from the upcoming LSST survey after 10 years of observations. We use existing measurements of the mass-richness relation and mass scaling of the linear alignment bias on similar existing samples to inform our forecast. We predict the signal and covariance of the B-mode shape-shape 3D power spectra for such a sample, arising from next-to-leading-order contributions in the EFT model for intrinsic alignments. We forecast a high expected SNR of 11.79. We assess the change in SNR when varying the fiducial parameters and analysis choices of the forecast within their observed errors and conclude that the SNR does not change so significantly as to prohibit a detection. Therefore, we conclude that B-modes from intrinsic alignments of galaxy clusters should be detectable in LSST Y10 data. Throughout this work, we assume that the shapes of the forecasted clusters are measured using the positions of observed satellite members.
Another option is to trace the cluster shape using the shape of the brightest central galaxy (BCG). The alignment amplitude using the BCG shape has been shown to be lower than using the satellite positions, due to the mis-alignment angle of the BCG and the cluster halo (Okumura et al., 2009; Shi et al., 2023). However, the expected SNR is large enough that a BCG B-mode detection might still be worth exploring in the future. It is interesting to consider whether the B-mode signal will be detectable in earlier data releases from LSST, for example, the Y1 data. These data will cover the same area on the sky as the Y10 data but to a different limiting depth, with forecasts showing \(i_{\rm lim}=24.1\) (The LSST Dark Energy Science Collaboration et al., 2018). This magnitude limit is likely not deep enough to allow the detection of clusters out to \(z\sim 1.1\), but it is higher than the \(i_{\rm lim}\) in DESY1 data. This means the LSST Y1 cluster sample will be volume limited between \(0.2<z<0.7\) as well, and Figure 3 indicates the majority of the SNR is gained at those redshifts. Imposing this redshift limit, we forecast that for the LSST Y1 sample the SNR is 11.3, already high enough to be measured and close to the Y10 value. This result stems from the fact that clusters between \(0.7<z<1.1\) do not contribute to the SNR significantly. Measurements of galaxy cluster alignments on similar data have already been performed, most notably in van Uitert & Joachimi (2017). In that work, authors used projected correlation functions to measure the alignment signal. The B-mode correlation (\(w_{\times\times}\)) was not presented in that work. We forecast that, for such a sample, \({\rm SNR}=5.23\) is expected if using the full-shape power spectra. We note that projected correlation functions compress the information along the line-of-sight, resulting in a factor 2 decrease in the signal compared to power spectra estimators (Kurita & Takada, 2023). That would imply the B-mode signal would not have been detected in van Uitert & Joachimi (2017). The forecast carried out in this work has important implications for cosmic shear surveys that measure shape-shape correlations. Typically, B-modes are assumed to be zero in such measurements, since weak gravitational lensing does not produce them and the statistical power of existing surveys is not high enough to reveal a B-mode signal from the many sources that faintly produce it. However, here we show that, at least for galaxy clusters, such a signal will likely be detectable in stage-IV cosmic shear surveys and might even be present in existing data, although with a relatively low significance. Nonetheless, it is difficult to estimate the expected B-mode signal from intrinsic alignments in future cosmic shear data due to the difficulty in understanding the way the signal manifests in galaxies. The co-evolution relations and linear alignment bias mass scaling that was assumed in this work has been shown to hold for dark matter halo masses between \(10^{12}\)-\(10^{15}\) M\({}_{\odot}/h\) and redshift up to \(\sim 1\) and \(\sim 0.5\), respectively. Cosmic shear samples typically extend far beyond these limits, especially for upcoming stage-IV surveys. Moreover, while galaxy clusters are clean tracers of dark matter halos and these relations are true for them, it is harder to understand how these relations transform for the galaxy samples that are typically employed in cosmic shear analyses. 
The selection effects for cosmic shear samples need to be carefully taken into account in the halo model formalism and the mis-alignment of galaxies with their host halo must be accurately modelled for such a forecast. While forecasting for cosmic shear samples is beyond the scope of this work, it is clear that the B-mode signal of cosmic shear measurements for stage-IV surveys might not be a clean test of systematic errors in the weak gravitational lensing measurements. On the contrary, it is likely a source of extra information that could be leveraged and might contribute to tightening the constraining power of such surveys. For this, the alignment signal needs to be accurately modelled down to quasi-linear scales. However, there exist other sources of B-mode correlations that might also contribute, such as the clustering of source galaxies (Schneider et al., 2002) or the lensing bias (Krause & Hirata, 2010). A careful analysis of the relative amplitudes of the various sources of B-mode shape-shape correlations is, therefore, necessary in order to be able to confidently model the measured B-mode signal. We leave this question for future work. The authors would like to thank Dr. Michel Aguena and Dr. Eli Rykoff for useful discussions on the forecast of the cluster sample. This publication is part of the project "A rising tide: Galaxy intrinsic alignments as a new probe of cosmology and galaxy evolution" (with project number VI.Vidi.203.011) of the Talent programme Vidi which is (partly) financed by the Dutch Research Council (NWO). This work is also part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). We thank Henk Hoekstra and Maria Cristina Fortuna for sharing preliminary results from their manuscript in preparation with us, and Henk Hoekstra for useful discussions. ## Appendix A Next-to-leading order perturbative contributions to the B-mode signal The relevant integrals for B-mode autocorrelations at one-loop order in the EFT of IA are given by \[I_{55}(k,z) =-\int_{\mathbf{q}}\frac{(\mathbf{k}\cdot\mathbf{q}-q^{2})^{2}(k^{2 }-2\mathbf{k}\cdot\mathbf{q})^{2}((\mathbf{k}\cdot\mathbf{q})^{2}-k^{2}q^{2})}{4 k^{4}q^{4}|\mathbf{k}-\mathbf{q}|^{4}}P_{L}(q,z)P_{L}(|\mathbf{k}-\mathbf{q}|,z);\] (A1) \[I_{66}(k,z) =\int_{\mathbf{q}}\frac{(\mathbf{k}\cdot\mathbf{q}-q^{2})^{2}(( \mathbf{k}\cdot\mathbf{q})^{2}-k^{2}q^{2})}{4k^{4}q^{4}|\mathbf{k}-\mathbf{q}|^ {4}}P_{L}(q,z)P_{L}(|\mathbf{k}-\mathbf{q}|,z);\] \[I_{67}(k,z) =-\int_{\mathbf{q}}\frac{(\mathbf{k}\cdot\mathbf{q}-q^{2})(k^{2 }+2q^{2}-2\mathbf{k}\cdot\mathbf{q})((\mathbf{k}\cdot\mathbf{q})^{2}-k^{2}q ^{2})}{8k^{4}q^{4}|\mathbf{k}-\mathbf{q}|^{4}}P_{L}(q,z)P_{L}(|\mathbf{k}- \mathbf{q}|,z);\] \[I_{77}(k,z) =\int_{\mathbf{q}}\frac{(k^{2}+2q^{2}-2\mathbf{k}\cdot\mathbf{q })^{2}((\mathbf{k}\cdot\mathbf{q})^{2}-k^{2}q^{2})}{16k^{4}q^{4}|\mathbf{k}- \mathbf{q}|^{4}}P_{L}(q,z)P_{L}(|\mathbf{k}-\mathbf{q}|,z);\] where the shorthand notation \[\int_{\mathbf{q}}=\int\frac{\mathrm{d}^{3}\mathbf{q}}{(2\pi)^{3}}\] (A2) is used for the integration measure in Fourier space. The integrals all asymptote to the same positive, scale-independent constant in the \(k\to 0\) limit, and this constant is subtracted before computing the signal for a given set of bias parameters (cf. Figure 1).
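As a cross-check of the FFT-Log evaluation, these convolution integrals can also be computed by direct two-dimensional quadrature in \((q,\mu)\). The sketch below does this for \(I_{55}\) as written in Eq. (A1), with a user-supplied linear power spectrum; the power-law spectrum in the usage line is a toy stand-in, and this brute-force route is intended only for validation, since it is far slower than the FFT-Log approach.

```python
import numpy as np
from scipy.integrate import simpson

def i55(k, p_lin, qmin=1e-4, qmax=10.0, nq=400, nmu=201):
    """Brute-force evaluation of I_55 (Eq. A1) by 2D quadrature in (q, mu).

    p_lin : callable, linear matter power spectrum P_L(q) at the chosen redshift.
    The azimuthal integral is trivial and gives a factor 2*pi."""
    q = np.logspace(np.log10(qmin), np.log10(qmax), nq)
    mu = np.linspace(-1.0, 1.0, nmu)
    qq, mm = np.meshgrid(q, mu, indexing="ij")

    kq = k * qq * mm                                        # k . q
    kmq2 = np.maximum(k**2 + qq**2 - 2.0 * kq, 1e-30)       # |k - q|^2, clipped at zero
    kernel = -((kq - qq**2)**2 * (k**2 - 2.0 * kq)**2 * (kq**2 - k**2 * qq**2)
               / (4.0 * k**4 * qq**4 * kmq2**2))
    integrand = kernel * p_lin(qq) * p_lin(np.sqrt(kmq2))

    # d^3q / (2 pi)^3 = 2 pi q^2 dq dmu / (2 pi)^3 = q^2 dq dmu / (4 pi^2)
    inner = simpson(integrand, x=mu, axis=1)
    return simpson(inner * q**2, x=q) / (4.0 * np.pi**2)

# Toy usage with a power-law spectrum standing in for P_L (illustration only).
value = i55(0.1, p_lin=lambda q: 2.0e4 * (q / 0.05)**-1.5)
```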