Mobility-as-a-Service (MaaS) integrates different transport modalities and can support more personalisation of travellers' journey planning based on their individual preferences, behaviours and wishes. To fully achieve the potential of MaaS, a range of AI (including machine learning and data mining) algorithms are needed to learn personal requirements and needs, to optimise the journey planning of each traveller and of all travellers as a whole, to help transport service operators and relevant governmental bodies to operate and plan their services, and to detect and prevent cyber attacks from various threat actors including dishonest and malicious travellers and transport operators. The increasing use of different AI and data processing algorithms in both centralised and distributed settings opens the MaaS ecosystem up to diverse cyber and privacy attacks at both the AI algorithm level and the connectivity surfaces. In this paper, we present the first comprehensive review of the coupling between AI-driven MaaS design and the diverse cyber security challenges related to cyber attacks and countermeasures. In particular, we focus on how current and emerging AI-facilitated privacy risks (profiling, inference, and third-party threats) and adversarial AI attacks (evasion, extraction, and gamification) may impact the MaaS ecosystem. These risks often combine novel attacks (e.g., inverse learning) with traditional attack vectors (e.g., man-in-the-middle attacks), exacerbating the risks for the wider set of participating actors and for emerging new business models.
arXiv
NGC 5139 ($\omega$ Cen) is the closest candidate for a nuclear star cluster that has been stripped of its host galaxy in the Milky Way. Despite extensive studies over the last decades, many open questions about the cluster remain, including the properties of the binary population. In this study, we use MUSE multi-epoch spectroscopy to identify binary systems in $\omega$ Cen. The observations span 8 years, with a total of 312 248 radial velocity measurements for 37 225 stars. Following the removal of known photometric variables, we identify 275 stars that show RV variations, corresponding to a discovery fraction of 1.4$\pm$0.1%. Using dedicated simulations, we find that our data are sensitive to 70$\pm$10% of the binaries expected in the sample, resulting in a completeness-corrected binary fraction of 2.1$\pm$0.4% in the central region of $\omega$ Cen. We find similar binary fractions for all stellar evolutionary stages covered by our data, the only notable exception being the blue straggler stars, which show an enhanced binary fraction. We also find no distinct correlation with distance from the cluster centre, indicating a limited amount of mass segregation within the half-light radius of $\omega$ Cen.
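As a back-of-the-envelope illustration of the completeness correction quoted above (a minimal sketch using only the numbers from the abstract; the simple quadrature error propagation is our assumption, not necessarily the procedure used in the study):

```python
# Completeness-corrected binary fraction from the quoted discovery fraction
# and sensitivity, with quadrature error propagation (illustrative assumption).
f_disc, df_disc = 0.014, 0.001   # discovery fraction: 1.4 +/- 0.1 %
sens, dsens = 0.70, 0.10         # sensitivity to expected binaries: 70 +/- 10 %

f_bin = f_disc / sens
df_bin = f_bin * ((df_disc / f_disc) ** 2 + (dsens / sens) ** 2) ** 0.5

print(f"binary fraction ~ {100 * f_bin:.1f} +/- {100 * df_bin:.1f} %")
# prints roughly 2.0 +/- 0.3 %, in line with the quoted 2.1 +/- 0.4 %
```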
arXiv
The dissociation of quarkonium states with different binding energies produced in heavy-ion collisions is a powerful probe for investigating the formation and properties of the quark-gluon plasma. The ratio of production cross-sections of $\psi(2S)$ and $J/\psi$ mesons times the ratio of their branching fractions into the dimuon final state is measured as a function of centrality using data collected by the LHCb detector in PbPb collisions at $\sqrt{s_{\text{NN}}}$ = 5.02 TeV. The measured ratio shows no dependence on the collision centrality, and is compared to the latest theory predictions and to recent measurements in the literature.
arXiv
We define the Super-Unique-Tarski problem, which is a Tarski instance in which all slices are required to have a unique fixed point. We show that Super-Unique-Tarski lies in UEOPL under promise-preserving reductions.
arXiv
We assess the prospects for detecting gravitational wave echoes arising due to the quantum nature of black hole horizons with LISA. In a recent proposal, Bekenstein's black hole area quantization is connected to a discrete absorption spectrum for black holes in the context of gravitational radiation. Consequently, for incoming radiation at the black hole horizon, not all frequencies are absorbed, raising the possibility that the unabsorbed radiation is reflected, producing an echo-like signal closely following the binary coalescence waveform. In this work, we further develop this proposal by introducing a robust, phenomenologically motivated model for black hole reflectivity. Using this model, we calculate the resulting echoes for an ensemble of Numerical Relativity waveforms and examine their detectability with the LISA space-based interferometer. Our analysis demonstrates promising detection prospects and shows that, upon detection, LISA provides a direct probe of the Bekenstein-Hawking entropy. In addition, we find that the information extractable from LISA data offers valuable constraints on a wide range of quantum gravity theories.
arXiv
There is a persistent $\sim 5 \sigma$ tension between the value of the Hubble constant, as derived from either the local distance ladder or the cosmic microwave background, signaling either unaccounted-for systematics in the measurements or `new physics'. Determining the Hubble constant using Type Ia supernovae requires non-trivial and accurate corrections for dust extinction. To circumvent this obstacle, we here determine the Hubble constant from blue, and hence presumably unextinguished, supernovae only. For two different compilations of Type Ia supernova data and lightcurve fitting methods we find that the derived Hubble constant is consistently lower by $\sim$ 3 km s$^{-1}$ Mpc$^{-1}$ (yielding $\sim 70$ km s$^{-1}$ Mpc$^{-1}$), and within 1 $\sigma$ of the cosmic microwave background measurement, when using only blue supernovae as opposed to using all supernovae. Although the number of blue calibrating Type Ia supernovae is small, this indicates potential systematic effects in dust corrections in standard supernova cosmology. Upcoming major transient surveys will discover numerous unextinguished SNe~Ia, and will thus be able to increase the precision of the Hubble constant measured from blue SNe~Ia, heralding a promising path toward resolving the Hubble constant tension.
arXiv
To address the challenges of high computational costs and long-distance dependencies in existing video understanding methods, such as CNNs and Transformers, this work introduces RWKV to the video domain in a novel way. We propose an LSTM CrossRWKV (LCR) framework, designed for spatiotemporal representation learning to tackle the video understanding task. Specifically, the proposed linear-complexity LCR incorporates a novel Cross RWKV gate to facilitate interaction between current frame edge information and past features, enhancing the focus on the subject through edge features and globally aggregating inter-frame features over time. LCR stores long-term memory for video processing through an enhanced LSTM recurrent execution mechanism. By leveraging the Cross RWKV gate and recurrent execution, LCR effectively captures both spatial and temporal features. Additionally, the edge information serves as a forgetting gate for the LSTM, guiding long-term memory management. A tube masking strategy reduces redundant information in the video and reduces overfitting. These advantages enable LSTM CrossRWKV to set a new benchmark in video understanding, offering a scalable and efficient solution for comprehensive video analysis. All code and models are publicly available.
arXiv
Extended gravitational models have gained considerable attention in the last couple of decades. In this work, we examine the solution space of vacuum, static, and spherically symmetric spacetimes within $F(R)$ theories, introducing novel methods that reduce the vacuum equations to a single second-order equation. For the first time, we derive analytic expressions for the metric functions in terms of the arbitrary functional $F(R)$, providing detailed insight into how the gravitational action impacts the structure of spacetime. We analyze conditions under which solutions are asymptotically flat, regular at the core, and contain an event horizon, obtaining explicit expressions for entropy, temperature, and specific heat in terms of $F(R)$. By using a single metric degree of freedom, we identify the most general solution and examine its (un)physical properties, showing that resolving singularities is not possible within this restricted framework in vacuum. For the general case involving two metric functions, we use several approximation schemes to explore corrections to Schwarzschild-(anti)de Sitter spacetimes, finding that $F(R)$ extensions to General Relativity induce instabilities that are not negligible. Finally, through an analysis of axial perturbations, we derive a general expression for the potential of the quasinormal modes of a black hole as a function of the arbitrary Lagrangian.
arXiv
This paper presents a data-driven min-max model predictive control (MPC) scheme for linear parameter-varying (LPV) systems. Contrary to existing data-driven LPV control approaches, we assume that the scheduling signal is unknown during offline data collection and online system operation. Assuming a quadratic matrix inequality (QMI) description for the scheduling signal, we develop a novel data-driven characterization of the consistent system matrices using only input-state data. The proposed data-driven min-max MPC minimizes a tractable upper bound on the worst-case cost over the set of consistent system matrices and over all scheduling signals satisfying the QMI. The proposed approach guarantees recursive feasibility, closed-loop exponential stability and constraint satisfaction if it is feasible at the initial time. We demonstrate the effectiveness of the proposed method in simulation.
arXiv
The butterfly diagram of the solar cycle exhibits a poleward migration of the diffuse magnetic field resulting from the decay of trailing sunspots. It is one component of what is sometimes referred to as the "rush to the poles". We investigate under which conditions the rush to the poles can be reproduced in flux-transport Babcock-Leighton dynamo models. We identify three main ways to reproduce it: a flux emergence probability that decreases rapidly with latitude; a threshold in subsurface toroidal field strength between slow and fast emergence; and an emergence rate based on magnetic buoyancy. We find that all three mechanisms lead to solar-like butterfly diagrams, albeit with notable differences between them. The shape of the butterfly diagram is very sensitive to model parameters for the threshold prescription, while most models incorporating magnetic buoyancy converge to very similar butterfly diagrams, with butterfly wing widths of $\lesssim\pm 30^\circ$, in very good agreement with observations. With turbulent diffusivities above $35~\text{km}^2/\text{s}$ but below about $40~\text{km}^2/\text{s}$, buoyancy models are strikingly solar-like. The threshold and magnetic buoyancy prescriptions make the models non-linear and as such can saturate the dynamo through latitudinal quenching. The period of the models involving buoyancy is independent of the source term amplitude, but emergence loss increases it by $\simeq 60\%$. For the rush to the poles to be visible, a mechanism suppressing (enhancing) emergences at high (low) latitudes must operate. It is not sufficient that the toroidal field be stored at low latitudes for emergences to be limited to low latitudes. From these models we infer that the Sun is neither in the advection-dominated regime nor in the diffusion-dominated regime. The cycle period is set through a balance between advection, diffusion and flux emergence.
arXiv
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and a Large Language Model (LLM) to generate disease diagnoses based on the predicted concepts. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2-step-concept-based-skin-diagnosis.
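A minimal sketch of the two-step inference described above; `vlm_predict_concepts` and `llm_generate` are hypothetical stand-ins for the pretrained VLM and LLM calls, and the prompt wording and 0.5 threshold are our assumptions rather than the authors' implementation:

```python
from typing import Callable, Dict

def two_step_diagnosis(image_path: str,
                       vlm_predict_concepts: Callable[[str], Dict[str, float]],
                       llm_generate: Callable[[str], str]) -> str:
    """Stage 1: a VLM scores clinical concepts; stage 2: an LLM maps them to a diagnosis."""
    # Stage 1: concept prediction, e.g. {"asymmetry": 0.9, "blue-whitish veil": 0.2, ...}
    concepts = vlm_predict_concepts(image_path)
    present = [name for name, score in concepts.items() if score >= 0.5]

    # Stage 2: diagnosis from concepts only -- the image is never shown to the LLM.
    prompt = (
        "The following dermoscopic concepts were detected in a skin lesion: "
        + ", ".join(present)
        + ". Based only on these concepts, state the most likely diagnosis."
    )
    return llm_generate(prompt)
```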
arXiv
We study all the ways that a given convex body in $d$ dimensions can break into countably many pieces that move away from each other rigidly at constant velocity, with no rotation or shearing. The initial velocity field is locally constant, but may be continuous and/or fail to be integrable. For any choice of mass-velocity pairs for the pieces, such a motion can be generated by the gradient of a convex potential that is affine on each piece. We classify such potentials in terms of a countable version of a theorem of Alexandrov for convex polytopes, and prove a stability theorem. For bounded velocities, there is a bijection between the mass-velocity data and optimal transport flows (Wasserstein geodesics) that are locally incompressible. Given any rigidly breaking velocity field that is the gradient of a continuous potential, the convexity of the potential is established under any of several conditions, such as the velocity field being continuous, the potential being semi-convex, the mass measure generated by a convexified transport potential being absolutely continuous, or there being a finite number of pieces. Also we describe a number of curious and paradoxical examples having fractal structure.
arXiv
Document-level Event Argument Extraction (EAE) faces two challenges due to increased input length: 1) difficulty in distinguishing semantic boundaries between events, and 2) interference from redundant information. To address these issues, we propose two methods. The first method introduces the Co and Structure Event Argument Extraction model (CsEAE) based on Small Language Models (SLMs). CsEAE includes a co-occurrences-aware module, which integrates information about all events present in the current input through context labeling and co-occurrences event prompts extraction. Additionally, CsEAE includes a structure-aware module that reduces interference from redundant information by establishing structural relationships between the sentence containing the trigger and other sentences in the document. The second method introduces new prompts to transform the extraction task into a generative task suitable for Large Language Models (LLMs), addressing gaps in EAE performance using LLMs under Supervised Fine-Tuning (SFT) conditions. We also fine-tuned the LLM on multiple datasets to develop a model that performs better across most datasets. Finally, we applied insights from CsEAE to LLMs, achieving further performance improvements. This suggests that reliable insights validated on SLMs are also applicable to LLMs. We tested our models on the RAMS, WikiEvents, and MLEE datasets. The CsEAE model achieved improvements of 2.1\%, 2.3\%, and 3.2\% in the Arg-C F1 metric compared to the baseline, PAIE~\cite{PAIE}. For LLMs, we demonstrated that their performance on document-level datasets is comparable to that of SLMs~\footnote{All code is available at https://github.com/simon-p-j-r/CsEAE}.
arXiv
Let $M$ be a compact complex $n$-manifold. A Gauduchon metric is a Hermitian metric whose fundamental 2-form $\omega$ satisfies the equation $dd^c(\omega^{n-1})=0$. Paul Gauduchon has proven that any Hermitian metric is conformally equivalent to a Gauduchon metric, which is unique (up to a constant multiplier) in its conformal class. Then $d^c(\omega^{n-1})$ is a closed $(2n-1)$-form; the set of cohomology classes of all such forms, called the Lee-Gauduchon cone, is a convex cone, superficially similar to the Kahler cone. We prove that the Lee-Gauduchon cone is a bimeromorphic invariant, and compute it for several classes of non-Kahler manifolds.
arXiv
Background: Screening trials require large sample sizes and long time-horizons to demonstrate mortality reductions. We recently proposed increasing statistical power by testing stored control-arm specimens, called the Intended Effect (IE) design. To evaluate feasibility of the IE design, the US National Cancer Institute (NCI) is collecting blood specimens in the control-arm of the NCI Vanguard Multicancer Detection pilot feasibility trial. However, key assumptions of the IE design require more investigation and relaxation. Methods: We relax the IE design to (1) reduce costs by testing only a stratified sample of control-arm specimens, incorporating inverse-probability sampling weights, (2) correct for potential loss-of-signal in stored control-arm specimens, and (3) correct for non-compliance with control-arm specimen collections. We also examine sensitivity to unintended effects of screening. Results: In simulations, testing all primary-outcome control-arm specimens and a 50% sample of the rest maintains nearly all the power of the IE while testing only half the control-arm specimens. Power remains increased from the IE analysis (versus the standard analysis) even if unintended effects exist. The IE design is robust to some loss-of-signal scenarios, but otherwise requires retest-positive fractions that correct bias at a small loss of power. The IE can be biased and lose power under control-arm non-compliance scenarios, but corrections remove the bias and can increase power. Conclusions: The IE design can be made more cost-efficient and robust to loss-of-signal. Unintended effects will not typically reduce the power gain over the standard trial design. Non-compliance with control-arm specimen collections can cause bias and loss of power that can be mitigated by corrections. Although promising, practical experience with the IE design in screening trials is necessary.
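A minimal simulation sketch of relaxation (1), the stratified testing of stored control-arm specimens with inverse-probability sampling weights; all rates below are hypothetical, and the Horvitz-Thompson estimator is our illustrative choice, not necessarily the exact estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_control = 10_000
is_case = rng.random(n_control) < 0.02            # hypothetical primary-outcome (cancer) rate
test_positive = np.where(is_case,
                         rng.random(n_control) < 0.6,    # hypothetical test sensitivity
                         rng.random(n_control) < 0.01)   # hypothetical false-positive rate

# Stratified testing of stored specimens: all primary-outcome specimens, 50% of the rest.
p_sampled = np.where(is_case, 1.0, 0.5)
tested = rng.random(n_control) < p_sampled

# Horvitz-Thompson estimate of control-arm screen positives,
# using inverse-probability sampling weights 1 / p_sampled.
weights = 1.0 / p_sampled
estimated = weights[tested & test_positive].sum()
print(f"estimated {estimated:.0f} vs. true {test_positive.sum()} control-arm screen positives")
```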
arXiv
Game solving is the process of finding the theoretical outcome for a game, assuming that all player choices are optimal. This paper focuses on a technique that can reduce the heuristic search space significantly for 7x7 Killall-Go. In Go and Killall-Go, live patterns are stones that are protected from opponent capture. Mutual life, also referred to as seki, is when both players' stones achieve life by sharing liberties with their opponent. Whichever player attempts to capture the opponent first will leave their own stones vulnerable. Therefore, it is critical to recognize seki patterns to avoid putting oneself in jeopardy. Recognizing seki can reduce the search depth significantly. In this paper, we enumerate all seki patterns up to a predetermined area size, then store these patterns in a seki table. This allows us to recognize seki during search, which significantly improves solving efficiency for the game of Killall-Go. Experiments show that a position that could not be solved within a day becomes solvable in 482 seconds with the addition of a seki table. For general positions, a 10% to 20% improvement in wall clock time and node count is observed.
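A minimal sketch of the seki-table idea: enumerated seki patterns are stored under a canonical key (here, the minimum over the eight board symmetries) so that a local area reached during search can be recognised with a constant-time lookup. The pattern encoding and symmetry handling below are our illustrative assumptions, not the solver's actual data structures:

```python
from typing import Tuple

Pattern = Tuple[Tuple[str, ...], ...]   # e.g. (('B', 'W'), ('W', 'B')) for a 2x2 local area

def _symmetries(p: Pattern):
    """Yield the 8 rotations/reflections of a rectangular pattern."""
    cur = p
    for _ in range(4):
        cur = tuple(zip(*cur[::-1]))            # rotate 90 degrees
        yield cur
        yield tuple(row[::-1] for row in cur)   # and its horizontal mirror

def canonical(p: Pattern) -> Pattern:
    return min(_symmetries(p))

class SekiTable:
    def __init__(self):
        self._table = set()

    def add(self, pattern: Pattern) -> None:
        """Store an enumerated seki pattern under its canonical key."""
        self._table.add(canonical(pattern))

    def is_seki(self, pattern: Pattern) -> bool:
        # Constant-time lookup during search: if the local area matches a stored
        # seki, the solver can prune capturing attempts in this region.
        return canonical(pattern) in self._table
```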
arXiv
In this work, we present a novel Koopman spectrum-based reachability verification method for nonlinear systems. Contrary to conventional methods that focus on characterizing all potential states of a dynamical system over a presupposed time span, our approach seeks to verify reachability by assessing the non-emptiness of estimated time-to-reach intervals, without engaging in the explicit computation of the reachable set. Based on the spectral analysis of the Koopman operator, we reformulate the problem of verifying the existence of reachable trajectories into the problem of determining feasible time-to-reach bounds required for system reachability. By solving linear programming (LP) problems, our algorithm can effectively estimate all potential time intervals during which a dynamical system can enter (and exit) target sets from given initial sets over an unbounded time horizon. Finally, we demonstrate our method in challenging settings, such as verifying the reachability between non-convex or even disconnected sets, as well as backward reachability and multiple entries into target sets. Additionally, we validate its applicability to real-world challenges and its scalability to high-dimensional systems through case studies in verifying the reachability of cart-pole and multi-agent consensus systems.
arXiv
This study explores the effectiveness of Large Language Models in meal planning, focusing on their ability to identify and decompose compound ingredients. We evaluated three models-GPT-4o, Llama-3 (70b), and Mixtral (8x7b)-to assess their proficiency in recognizing and breaking down complex ingredient combinations. Preliminary results indicate that while Llama-3 (70b) and GPT-4o excel in accurate decomposition, all models encounter difficulties with identifying essential elements like seasonings and oils. Despite strong overall performance, variations in accuracy and completeness were observed across models. These findings underscore LLMs' potential to enhance personalized nutrition but highlight the need for further refinement in ingredient decomposition. Future research should address these limitations to improve nutritional recommendations and health outcomes.
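A minimal sketch of the kind of decomposition query such an evaluation relies on; the prompt wording and the `chat` stand-in for the model API are our assumptions, since the abstract does not give the exact prompts:

```python
import json
from typing import Callable, List

PROMPT_TEMPLATE = (
    "Decompose the compound ingredient '{ingredient}' into its basic ingredients, "
    "including seasonings and oils. Answer with a JSON list of strings only."
)

def decompose(ingredient: str, chat: Callable[[str], str]) -> List[str]:
    """Ask an LLM (GPT-4o, Llama-3, Mixtral, ...) to break a compound ingredient apart."""
    reply = chat(PROMPT_TEMPLATE.format(ingredient=ingredient))
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return []   # a malformed reply counts as a failed decomposition

# Example: decompose("pesto", chat) might ideally return
# ["basil", "pine nuts", "parmesan", "garlic", "olive oil", "salt"]
```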
arXiv
Rotation is an important, yet poorly-modelled phenomenon of stellar structure and evolution. Accurate estimates of internal rotation rates are therefore valuable for constraining stellar evolution models. We aim to assess the accuracy of asteroseismic estimates of internal rotation rates and how these depend on the fundamental stellar parameters. We apply the recently-developed method called extended-MOLA inversions to infer localised estimates of internal rotation rates of synthetic observations of red giants. We search for suitable reference stellar models following a grid-based approach, and assess the robustness of the resulting inferences to the choice of reference model. We find that matching the mixed mode pattern between the observation and the reference model is an important criterion to select suitable reference models. We propose to i) select a set of reference models based on the correlation between the observed rotational splittings and the mode-trapping parameter, ii) compute rotation rates for all these models, and iii) use the mean value obtained across the whole set as the estimate of the internal rotation rates. We find that the effect of a near-surface perturbation in the synthetic observations on the rotation rates estimated based on the correlation between the observed rotational splittings and the mode-trapping parameter is negligible. We conclude that when using an ensemble of reference models, constructed based on matching the mixed mode pattern, the input rotation rates can be recovered across a range of fundamental stellar parameters like mass, mixing-length parameter and composition. Further, red-giant rotation rates determined in this way are also independent of a near-surface perturbation of stellar structure.
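A minimal numpy sketch of steps i)-iii), assuming (hypothetically) that each grid model provides a mode-trapping parameter evaluated on the observed modes and an inferred rotation rate; the selection size `n_keep` is an illustrative parameter, not a value from the paper:

```python
import numpy as np

def ensemble_rotation_rate(obs_splittings, trapping_by_model, rate_by_model, n_keep=10):
    """
    obs_splittings    : (n_modes,) observed rotational splittings
    trapping_by_model : dict  model_name -> (n_modes,) mode-trapping parameter
    rate_by_model     : dict  model_name -> rotation rate inferred with that reference model
    Returns mean and scatter of the rotation rate over the n_keep best-matching models.
    """
    # i) rank reference models by how well the splittings correlate with the trapping parameter
    corr = {m: np.corrcoef(obs_splittings, z)[0, 1] for m, z in trapping_by_model.items()}
    best = sorted(corr, key=corr.get, reverse=True)[:n_keep]
    # ii)-iii) average the rotation rates obtained with the selected models
    rates = np.array([rate_by_model[m] for m in best])
    return rates.mean(), rates.std(ddof=1)
```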
arXiv
The design and optimization of optical components, such as Bragg gratings, are critical for applications in telecommunications, sensing, and photonic circuits. To overcome the limitations of traditional design methods that rely heavily on computationally intensive simulations and large datasets, we propose an integrated methodology that significantly reduces these burdens while maintaining high accuracy in predicting the optical response. First, we employ a Bayesian optimization technique to strategically select a limited yet informative number of simulation points from the design space, ensuring that each contributes maximally to the model's performance. Second, we utilize singular value decomposition (SVD) to effectively parametrize the entire reflectance spectra into a reduced set of coefficients, allowing us to capture all significant spectral features without losing crucial information. Finally, we apply XGBoost, a robust machine learning algorithm, to predict the entire reflectance spectra from the reduced dataset. The combination of Bayesian optimization for data selection, SVD for full-spectrum fitting, and XGBoost for predictive modeling provides a powerful, generalizable framework for the design of optical components.
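A minimal sketch of the SVD parametrization and spectrum-prediction steps (the Bayesian-optimization sampling step is omitted); using numpy's SVD and xgboost's `XGBRegressor` wrapped in scikit-learn's `MultiOutputRegressor` is our assumption about one way to realise the pipeline:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor   # any gradient-boosted tree regressor could stand in

def fit_spectrum_model(X_design, R_spectra, n_modes=10):
    """
    X_design  : (n_samples, n_params) grating design parameters from the simulator
    R_spectra : (n_samples, n_wavelengths) simulated reflectance spectra
    Learns X -> SVD coefficients, so a full spectrum is predicted from a few numbers.
    """
    # Parametrize all spectra with a truncated SVD basis (the reduced set of coefficients).
    mean = R_spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(R_spectra - mean, full_matrices=False)
    basis = Vt[:n_modes]                        # (n_modes, n_wavelengths)
    coeffs = (R_spectra - mean) @ basis.T       # (n_samples, n_modes)

    model = MultiOutputRegressor(XGBRegressor(n_estimators=300, max_depth=4))
    model.fit(X_design, coeffs)

    def predict_spectrum(X_new):
        # Map predicted coefficients back to a full reflectance spectrum.
        return model.predict(np.atleast_2d(X_new)) @ basis + mean

    return predict_spectrum
```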
arXiv
We review the multivariate holomorphic functional calculus for tuples in a commutative Banach algebra and establish a simple "na\"ive" extension to commuting tuples in a general Banach algebra. The approach is na\"ive in the sense that the na\"ively defined joint spectrum may be too big. The advantage of the approach is that the functional calculus is then given by a simple concrete formula from which all its continuity properties can easily be derived. We apply this framework to multivariate functions arising as divided differences of a univariate function. This provides a rich set of examples to which our na\"ive calculus applies. Foremost, we offer a natural and straightforward proof of the Connes-Moscovici Rearrangement Lemma in the context of the multivariate holomorphic functional calculus. Secondly, we show that the Daletski-Krein type noncommutative Taylor expansion is a natural consequence of our calculus. Also, Magnus' Theorem, which gives a nonlinear differential equation for the $\log$ of the solutions to a linear matrix ODE, follows naturally and easily from our calculus. Finally, we collect various combinatorial related formulas.
arXiv
Differential privacy (DP) is a formal notion that restricts the privacy leakage of an algorithm when running on sensitive data, and the privacy-utility trade-off is one of the central problems in private data analysis. In this work, we investigate the fundamental limits of differential privacy in online learning algorithms and present evidence that separates three types of constraints: no DP, pure DP, and approximate DP. We first describe a hypothesis class that is online learnable under approximate DP but not online learnable under pure DP in the adaptive adversarial setting. This indicates that approximate DP must be adopted when dealing with adaptive adversaries. We then prove that any private online learner must make an infinite number of mistakes for almost all hypothesis classes. This essentially generalizes previous results and shows a strong separation between private and non-private settings, since a finite mistake bound is always attainable (as long as the class is online learnable) when there is no privacy requirement.
arXiv
We explore the phenomenology of Weinberg's $Z_2\times Z_2$ symmetric three-Higgs-doublet potential, allowing for spontaneous violation of CP due to complex vacuum expectation values. An overview of all possible ways of satisfying the stationary-point conditions is given, with one, two or three non-vanishing vacuum expectation values, together with conditions for CP conservation in terms of basis invariants. All possible ways of satisfying the conditions for CP conservation are given. Scans of allowed parameter regions are given, together with measures of CP violation, in terms of the invariants. The light states identified in an earlier paper are further explored in terms of their CP-violating couplings. Loop-induced CP violation in $WWZ$ couplings, as well as charge-asymmetric scattering are also commented on.
arXiv
In this paper, we revisit the concept of noncommuting common causes; refute two objections raised against them, the triviality objection and the lack of causal explanatory force; and explore how their existence modifies the EPR argument. More specifically, we show that 1) product states screening off all quantum correlations do not compromise noncommuting common causal explanations; 2) noncommuting common causes can satisfy the law of total probability; 3) perfect correlations can have indeterministic noncommuting common causes; and, as a combination of the above claims, 4) perfect correlations can have noncommuting common causes which are both nontrivial and satisfy the law of total probability.
arXiv
Given a symmetric monoidal stable $\infty$-category $\mathcal{C}$ which is rigidly-compactly generated and a set of compact objects $\mathcal{K}$ of $\mathcal{C}$, one can form the subcategories of $\mathcal{K}$-complete and $\mathcal{K}$-local objects. The goal of this paper is to explain how to recover $\mathcal{C}$ from its $\mathcal{K}$-local and $\mathcal{K}$-complete subcategories while retaining the symmetric monoidal structure. Specializing to the case where $\mathcal{C}$ is the $\infty$-category of $G$-spectra for a finite group $G$, our result can be viewed as a symmetric monoidal variant of the isotropy separation decomposition, a version of which appeared previously in work of Krause.
arXiv
We analyze the universality and generalization of graph neural networks (GNNs) on attributed graphs, i.e., with node attributes. To this end, we propose pseudometrics over the space of all attributed graphs that describe the fine-grained expressivity of GNNs. Namely, GNNs are both Lipschitz continuous with respect to our pseudometrics and can separate attributed graphs that are distant in the metric. Moreover, we prove that the space of all attributed graphs is relatively compact with respect to our metrics. Based on these properties, we prove a universal approximation theorem for GNNs and generalization bounds for GNNs on any data distribution of attributed graphs. The proposed metrics compute the similarity between the structures of attributed graphs via a hierarchical optimal transport between computation trees. Our work extends and unites previous approaches which either derived theory only for graphs with no attributes, derived compact metrics under which GNNs are continuous but without separation power, or derived metrics under which GNNs are continuous and separate points but the space of graphs is not relatively compact, which prevents universal approximation and generalization analysis.
arXiv
In the acquisition of Magnetic Resonance (MR) images, shorter scan times lead to higher image noise. Therefore, automatic image denoising using deep learning methods is of high interest. MR images containing line-like structures such as roots or vessels have special characteristics, as they display connected structures and sparse information. For this kind of data, it is important to consider voxel neighborhoods when training a denoising network. In this paper, we translate the Perceptual Loss to 3D data by comparing feature maps of untrained networks in the loss function, as done previously for 2D data. We tested the performance of the untrained Perceptual Loss (uPL) on 3D image denoising of MR images displaying brain vessels (MR angiograms - MRA) and of images of plant roots in soil. We investigate the impact of various uPL characteristics such as weight initialization, network depth, kernel size, and pooling operations on the results. We tested the performance of the uPL loss on four Rician noise levels using evaluation metrics such as the Structural Similarity Index Metric (SSIM). We observe that our uPL outperforms conventional loss functions such as the L1 loss or a loss based on the SSIM. The uPL network's initialization is not important, while network depth and pooling operations impact denoising performance. For example, for both datasets a network with five convolutional layers led to the best performance, while a network with more layers led to a performance drop. We also find that small uPL networks led to better or comparable results compared with large networks such as VGG. We observe superior performance of our loss for both datasets, all noise levels, and three network architectures. In conclusion, for images containing line-like structures, uPL is an alternative to other loss functions for 3D image denoising.
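A minimal PyTorch sketch of an untrained 3D perceptual loss: feature maps of a small, randomly initialized and frozen 3D CNN are compared between the denoised output and the target. Depth, kernel size and pooling are exactly the knobs whose influence the paper studies; the specific architecture below is our illustrative choice, not the authors' network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UntrainedPerceptualLoss3D(nn.Module):
    """Compare feature maps of a frozen, randomly initialized 3D CNN (uPL)."""
    def __init__(self, depth=5, channels=16, kernel_size=3, pool=True):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(depth):
            layers += [nn.Conv3d(in_ch, channels, kernel_size, padding=kernel_size // 2),
                       nn.ReLU(inplace=True)]
            if pool:
                layers.append(nn.AvgPool3d(2))
            in_ch = channels
        self.features = nn.Sequential(*layers)
        for p in self.features.parameters():     # untrained and frozen
            p.requires_grad_(False)

    def forward(self, denoised, target):
        # Gradients still flow to the denoising network through `denoised`.
        return F.l1_loss(self.features(denoised), self.features(target))

# usage sketch: loss = UntrainedPerceptualLoss3D()(denoiser(noisy_volume), clean_volume)
```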
arXiv
In the rapidly evolving landscape of cyber security, intelligent chatbots are gaining prominence. Artificial Intelligence, Machine Learning, and Natural Language Processing empower these chatbots to handle user inquiries and deliver threat intelligence. This helps make cyber security knowledge readily available to both professionals and the public. Traditional rule-based chatbots often lack flexibility and struggle to adapt to user interactions. In contrast, Large Language Model-based chatbots offer contextually relevant information across multiple domains and adapt to evolving conversational contexts. In this work, we develop IntellBot, an advanced cyber security chatbot built on top of cutting-edge technologies like Large Language Models and LangChain, alongside a Retrieval-Augmented Generation model, to deliver superior capabilities. This chatbot gathers information from diverse data sources to create a comprehensive knowledge base covering known vulnerabilities, recent cyber attacks, and emerging threats. It delivers tailored responses, serving as a primary hub for cyber security insights. By providing instant access to relevant information and resources, IntellBot enhances threat intelligence, incident response, and overall security posture, saving time and empowering users with knowledge of cyber security best practices. Moreover, we analyzed the performance of our copilot using a two-stage evaluation strategy. We achieved a BERT score above 0.8 with the indirect approach and a cosine similarity score ranging from 0.8 to 1, which affirms the accuracy of our copilot. Additionally, we utilized RAGAS to evaluate the RAG model, and all evaluation metrics consistently produced scores above 0.77, highlighting the efficacy of our system.
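A minimal sketch of the retrieval-augmented answering loop; TF-IDF retrieval stands in for the actual vector store and `llm` is a hypothetical callable, so this illustrates the RAG pattern rather than IntellBot's LangChain implementation:

```python
from typing import Callable, List
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class MiniRAG:
    def __init__(self, documents: List[str], llm: Callable[[str], str], top_k: int = 3):
        self.docs = documents
        self.llm = llm
        self.top_k = top_k
        self.vectorizer = TfidfVectorizer().fit(documents)
        self.doc_matrix = self.vectorizer.transform(documents)

    def answer(self, question: str) -> str:
        # Retrieve the most relevant threat-intelligence snippets...
        q_vec = self.vectorizer.transform([question])
        scores = cosine_similarity(q_vec, self.doc_matrix).ravel()
        context = "\n".join(self.docs[i] for i in scores.argsort()[::-1][:self.top_k])
        # ...and let the LLM answer grounded in that retrieved context.
        prompt = (f"Use only the context below to answer the cyber security question.\n"
                  f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
        return self.llm(prompt)
```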
arXiv
Twists are defects that are used to encode and process quantum information in topological codes like surface and color codes. Color codes can host three basic types of twists, viz., charge-permuting, color-permuting and domino twists. In this paper, we study domino twists from the viewpoint of computation. Specifically, we give a systematic construction for domino twists in qubit color codes. We also present protocols for measurement of logical qubits. Finally, we show that all Clifford gates can be implemented by braiding twists.
arXiv
In this study, we report a detailed calculation of the static dipole polarizabilities for group 12 elements using the finite-field approach combined with the relativistic coupled-cluster method, including single, double, and perturbative triple excitations. We examine three types of relativistic effects on dipole polarizabilities: scalar-relativistic, spin-orbit coupling (SOC), and fully relativistic Dirac-Coulomb contributions. The final recommended polarizability values, with their associated uncertainties, are $37.95 \pm 0.72$ for Zn, $45.68 \pm 1.16$ for Cd, $34.04 \pm 0.67$ for Hg, and $27.92 \pm 0.24$ for Cn. Our results align closely with the recommended values in the 2018 Table of static dipole polarizabilities for neutral atoms [Mol. Phys. \textbf{117}, 1200 (2019)], while providing reduced uncertainties for Cd and Cn. The analysis indicates that scalar-relativistic effects are the dominant relativistic contributions to atomic dipole polarizabilities for these atoms, with SOC effects found to be negligible. Furthermore, we evaluate the influence of electron correlation across all relativistic regimes, underscoring its critical role in the precise determination of dipole polarizabilities.
arXiv
A partition is finitary if all its blocks are finite. For a cardinal $\mathfrak{a}$ and a natural number $n$, let $\mathrm{fin}(\mathfrak{a})$ and $\mathscr{B}_{n}(\mathfrak{a})$ be the cardinalities of the set of finite subsets and the set of finitary partitions with exactly $n$ non-singleton blocks of a set which is of cardinality $\mathfrak{a}$, respectively. In this paper, we prove in $\mathsf{ZF}$ (without the axiom of choice) that for all infinite cardinals $\mathfrak{a}$ and all non-zero natural numbers $n$, \[ (2^{\mathscr{B}_{n}(\mathfrak{a})})^{\aleph_0}=2^{\mathscr{B}_{n}(\mathfrak{a})} \] and \[ 2^{\mathrm{fin}(\mathfrak{a})^n}=2^{\mathscr{B}_{2^n-1}(\mathfrak{a})}. \] It is also proved consistent with $\mathsf{ZF}$ that there exists an infinite cardinal $\mathfrak{a}$ such that \[ 2^{\mathscr{B}_{1}(\mathfrak{a})}<2^{\mathscr{B}_{2}(\mathfrak{a})}<2^{\mathscr{B}_{3}(\mathfrak{a})}<\cdots<2^{\mathrm{fin}(\mathrm{fin}(\mathfrak{a}))}. \]
arXiv
By cross-matching the eclipsing binary catalog from TESS with that from LAMOST MRS, semi-detached eclipsing binaries with radial velocity coverage spanning more than 0.3 in phase were identified. The absolute parameters for these systems were determined by simultaneous modeling of light curves and radial velocities using the Wilson-Devinney program. Additionally, the secular orbital variations were further analyzed using O-C curves. Eight semi-detached eclipsing binaries have been identified. Among them, seven feature primary stars situated within the main-sequence band, while their secondaries are all in evolved stages. This suggests that these systems likely originated as detached binaries and have undergone a reversal of the mass ratio. However, TIC 428257299 is an exception where the primary is Roche lobe-filling, and its secondary has experienced mass loss events. Additionally, TIC 8677671 and TIC 318217844 demonstrate secular cyclical changes of their orbital periods. Specifically, for TIC 8677671, the cyclical change could result from magnetic activity or a third body, which is likely to be compact, with a mass of at least 2.97 M$_{\odot}$.
arXiv
Practical superconducting nanowire single photon detectors (SNSPDs) demonstrate a strong trade-off between detection sensitivity and the reset time. Often, there are wide variations in sensitivity and response times from SNSPDs of the same superconducting material. Here, using detailed physical models, we show that the dirtiness in a superconductor enforces a sensitivity and bandwidth trade-off in all practical devices. More importantly, a certain degree of dirtiness is a necessary requirement for achieving single photon detection. Under typical bias conditions close to the transition setpoints, the minimum number of photons required to register a voltage pulse decreases by the dirtiness parameter (Ioffe-Regel parameter) and the reset time of SNSPD increases by the same dirtiness parameter, thereby giving a constant value for the sensitivity-bandwidth product. The constant is weakly modified by biasing current and the temperature. Since dirtiness in the superconducting nanowire is a physically controllable parameter with an important bearing on the final response of an SNSPD, this work opens new opportunities to develop SNSPD devices with engineered sensitivity-bandwidth setpoint as dictated by an application.
arXiv
Facial landmark detection is a fundamental problem in computer vision for many downstream applications. This paper introduces a new facial landmark detector based on vision transformers, which consists of two unique designs: Dual Vision Transformer (D-ViT) and Long Skip Connections (LSC). Based on the observation that the channel dimension of feature maps essentially represents the linear bases of the heatmap space, we propose learning the interconnections between these linear bases to model the inherent geometric relations among landmarks via Channel-split ViT. We integrate such channel-split ViT into the standard vision transformer (i.e., spatial-split ViT), forming our Dual Vision Transformer to constitute the prediction blocks. We also suggest using long skip connections to deliver low-level image features to all prediction blocks, thereby preventing useful information from being discarded by intermediate supervision. Extensive experiments are conducted to evaluate the performance of our proposal on the widely used benchmarks, i.e., WFLW, COFW, and 300W, demonstrating that our model outperforms the previous SOTAs across all three benchmarks.
arXiv
Contemporary machine learning models, such as language models, are powerful, but come with immense resource requirements both at training and inference time. It has been shown that decoder-only language models can be trained to a competitive state with ternary weights (1.58 bits per weight), facilitating efficient inference. Here, we start our exploration with non-transformer model architectures, investigating 1.58-bit training for multi-layer perceptrons and graph neural networks. Then, we explore 1.58-bit training in other transformer-based language models, namely encoder-only and encoder-decoder models. Our results show that in all of these settings, 1.58-bit training is on par with or sometimes even better than the standard 32/16-bit models.
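A minimal PyTorch sketch of ternary (1.58-bit) weight training with a straight-through estimator, following the BitNet-style absmean quantization; whether the paper uses exactly this scheme is our assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Round weights to {-1, 0, +1} times a per-tensor scale (absmean quantization)."""
    scale = w.abs().mean().clamp(min=1e-8)
    return torch.round(w / scale).clamp(-1, 1) * scale

class TernaryLinear(nn.Linear):
    def forward(self, x):
        w_q = ternarize(self.weight)
        # Straight-through estimator: forward with ternary weights,
        # backward through the full-precision latent weights.
        w_ste = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w_ste, self.bias)

# usage sketch: swap nn.Linear layers of an MLP / GNN / transformer block for TernaryLinear
```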
arXiv
Let $E$ be a subset in $\mathbb{F}_p^2$ and $S$ be a subset in the special linear group $SL_2(\mathbb{F}_p)$ or the $1$-dimensional Heisenberg linear group $\mathbb{H}_1(\mathbb{F}_p)$. We define $S(E):= \bigcup_{\theta \in S} \theta (E)$. In this paper, we provide optimal conditions on $S$ and $E$ such that the set $S(E)$ covers a positive proportion of all elements in the plane $\mathbb{F}_p^2$. When the sizes of $S$ and $E$ are small, we prove structural theorems that guarantee that $|S(E)|\gg |E|^{1+\epsilon}$ for some $\epsilon>0$. The main ingredients in our proofs are novel results on algebraic incidence-type structures associated with the groups, in which energy estimates play a crucial role. The higher-dimensional version will also be discussed in this paper.
arXiv
Multimodal foundation models, such as Gemini and ChatGPT, have revolutionized human-machine interactions by seamlessly integrating various forms of data. Developing a universal spoken language model that comprehends a wide range of natural language instructions is critical for bridging communication gaps and facilitating more intuitive interactions. However, the absence of a comprehensive evaluation benchmark poses a significant challenge. We present Dynamic-SUPERB Phase-2, an open and evolving benchmark for the comprehensive evaluation of instruction-based universal speech models. Building upon the first generation, this second version incorporates 125 new tasks contributed collaboratively by the global research community, expanding the benchmark to a total of 180 tasks, making it the largest benchmark for speech and audio evaluation. While the first generation of Dynamic-SUPERB was limited to classification tasks, Dynamic-SUPERB Phase-2 broadens its evaluation capabilities by introducing a wide array of novel and diverse tasks, including regression and sequence generation, across speech, music, and environmental audio. Evaluation results indicate that none of the models performed well universally. SALMONN-13B excelled in English ASR, while WavLLM demonstrated high accuracy in emotion recognition, but current models still require further innovations to handle a broader range of tasks. We will soon open-source all task data and the evaluation pipeline.
arXiv
In this paper, we define double almost-Riordan arrays and find that the set of all double almost-Riordan arrays forms a group, called the double almost-Riordan group. We also obtain the sequence characteristics of double almost-Riordan arrays and give the production matrices for double almost-Riordan arrays. We define the compression of double almost-Riordan arrays and present their sequence characterization. Finally, we give a characterization of the total positivity of double Riordan arrays, which we use to discuss the total positivity of several double almost-Riordan arrays.
arXiv
We establish global well-posedness for both the defocusing and focusing complex-valued modified Korteweg--de Vries equations on the real line in modulation spaces $M_p^{s,2}(\mathbb{R})$, for all $1\leq p<\infty$ and $0\leq s<3/2-1/p$. We will also show that such solutions admit global-in-time bounds in these spaces and that equicontinuous sets of initial data lead to equicontinuous ensembles of orbits. Indeed, such information forms a crucial part of our well-posedness argument.
arXiv
In streaming media services, video transcoding is a common practice to alleviate bandwidth demands. Unfortunately, traditional methods employing a uniform rate factor (RF) across all videos often result in significant inefficiencies. Content-adaptive encoding (CAE) techniques address this by dynamically adjusting encoding parameters based on video content characteristics. However, existing CAE methods are often tightly coupled with specific encoding strategies, leading to inflexibility. In this paper, we propose a model that predicts both RF-quality and RF-bitrate curves, which can be utilized to derive a comprehensive bitrate-quality curve. This approach facilitates flexible adjustments to the encoding strategy without necessitating model retraining. The model leverages codec features, content features, and anchor features to predict the bitrate-quality curve accurately. Additionally, we introduce an anchor suspension method to enhance prediction accuracy. Experiments confirm that the actual quality metric (VMAF) of the compressed video stays within 1 point of the target, achieving an accuracy of 99.14%. By incorporating our quality improvement strategy with the rate-quality curve prediction model, we conducted online A/B tests, obtaining improvements of +0.107% in both video views and video completions, and of +0.064% in app duration time. Our model has been deployed on the Xiaohongshu App.
arXiv
During intracranial aneurysm (IA) treatment with Flow Diverters (FDs), the device/parent artery diameter ratio may influence the ability of the device to induce an aneurysm healing response. Oversized FDs are safer to deploy but may not induce enough hemodynamic resistance to ensure aneurysm occlusion. Methods based on Computational Fluid Dynamics (CFD) could allow optimal device selection but are time-consuming and inadequate for intra-operative guidance. To address this limitation, we propose to investigate a method for optimal FD selection using Angiographic Parametric Imaging (API) and machine learning (ML). We selected 128 pre-treatment angiographic sequences of IAs which demonstrated full occlusion at six months follow-up. For each IA, we extracted five API parameters from the aneurysm dome and normalized them to the corresponding feeding artery parameters. We dichotomized the dataset based on the FD/proximal artery diameter ratio as undersized if the ratio was <1 or if multiple FDs were used, and as oversized otherwise. Single API parameter and ML analyses were used to determine whether API parameters could be used to determine the need for FD under-sizing (i.e., increased flow resistance). Classification accuracy was assessed using the area under the receiver operating characteristic curve (AUROC). In total, we identified 51 and 77 cases for the undersized and oversized cohorts, respectively. Single API parameter analysis yielded an inadequate AUROC of ~0.5, while machine learning using all five API parameters yielded an AUROC of 0.72.
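A minimal sketch of the ML step: a classifier over the five normalized API parameters evaluated with a cross-validated AUROC. The choice of a random forest and of five-fold cross-validation is our assumption, as the abstract does not name the model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def evaluate_sizing_classifier(X_api: np.ndarray, y_undersized: np.ndarray) -> float:
    """
    X_api        : (n_cases, 5) dome API parameters normalized to the feeding artery
    y_undersized : (n_cases,)   1 if the FD was undersized / multiple FDs were used, else 0
    Returns a cross-validated AUROC for predicting the need for FD under-sizing.
    """
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    proba = cross_val_predict(clf, X_api, y_undersized, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y_undersized, proba)
```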
arXiv
We present Fox-1, a series of small language models (SLMs) consisting of Fox-1-1.6B and Fox-1-1.6B-Instruct-v0.1. These models are pre-trained on 3 trillion tokens of web-scraped document data and fine-tuned with 5 billion tokens of instruction-following and multi-turn conversation data. Aiming to improve the pre-training efficiency, the Fox-1-1.6B model introduces a novel 3-stage data curriculum across all the training data with 2K-8K sequence length. In architecture design, Fox-1 features a deeper layer structure, an expanded vocabulary, and utilizes Grouped Query Attention (GQA), offering a performant and efficient architecture compared to other SLMs. Fox-1 achieves better or on-par performance in various benchmarks compared to StableLM-2-1.6B, Gemma-2B, Qwen1.5-1.8B, and OpenELM-1.1B, with competitive inference speed and throughput. The model weights have been released under the Apache 2.0 license, where we aim to promote the democratization of LLMs and make them fully accessible to the whole open-source community.
arXiv
Distractions in mixed reality (MR) environments can significantly influence user experience, affecting key factors such as presence, reaction time, cognitive load, and Break in Presence (BIP). Presence measures immersion, reaction time captures user responsiveness, cognitive load reflects mental effort, and BIP represents moments when attention shifts from the virtual to the real world, breaking immersion. However, the effects of distractions on these elements remain insufficiently explored. To address this gap, we have presented a theoretical model to understand how congruent and incongruent distractions affect all these constructs. We conducted a within-subject study (N=54) where participants performed image-sorting tasks under different distraction conditions. Our findings show that incongruent distractions significantly increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
arXiv
We study the eigenfunctions of the classical Liouville operator and investigate the conditions they must obey to be separable as a product state. We point out that the conditions for separability are equivalent to the requirements of Liouville's integrability theorem, that is, the eigenfunctions are separable if and only if the system is integrable. On the other hand, if the classical system is not integrable, then the eigenfunctions are entangled in all canonical coordinates. This results in a link between the classical notions of chaos and integrability and mathematical concepts that are usually restricted to quantum mechanics.
arXiv
We associate a sequence of positive integers, termed the type sequence, with a cochordal graph. Using this type sequence, we compute all graded Betti numbers of its edge ideal. We then classify all positive integers $n$ such that the zero divisor graph of $\mathbb{Z}/n \mathbb{Z}$ is cochordal and determine all the graded Betti numbers of its edge ideal.
arXiv
The Pettitt test has been widely used in climate change and hydrological analyses. However, studies show that this test has difficulties in detecting change points, especially in small samples. This study presents a bootstrap application of the Pettitt test, which is numerically compared with the classical Pettitt test through an extensive Monte Carlo simulation study. The proposed test outperforms the classical test in all simulated scenarios. An application of the tests is conducted on the historical series of naturalized flows of the Itaipu Hydroelectric plant in Brazil, where several studies have shown a change point in the 1970s. When the series is split into shorter series, to simulate actual small-sample situations, the proposed test is more powerful than the classical Pettitt test at detecting the change point. The proposed test can be an important tool to detect abrupt changes in water availability, supporting decision making on hydroclimatological resources.
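A minimal sketch of a bootstrap variant of the Pettitt test: the usual statistic $K=\max_t |U_t|$ is computed on the observed series and compared against the statistic recomputed on resampled series. Resampling with replacement under the no-change null is our illustrative assumption; the paper's exact resampling scheme may differ:

```python
import numpy as np

def pettitt_K(x: np.ndarray) -> float:
    """Pettitt statistic K = max_t |U_t|, with U_t = sum_{i<=t, j>t} sign(x_j - x_i)."""
    n = len(x)
    signs = np.sign(x[None, :] - x[:, None])        # signs[i, j] = sign(x_j - x_i)
    U = np.array([signs[: t + 1, t + 1:].sum() for t in range(n - 1)])
    return np.abs(U).max()

def bootstrap_pettitt(x: np.ndarray, n_boot: int = 2000, seed: int = 0) -> float:
    """p-value: fraction of series resampled under H0 whose statistic reaches the observed K."""
    rng = np.random.default_rng(seed)
    k_obs = pettitt_K(x)
    k_null = np.array([pettitt_K(rng.choice(x, size=len(x), replace=True))
                       for _ in range(n_boot)])
    return float((k_null >= k_obs).mean())
```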
arXiv
Out-of-time-order correlation (OTOC) functions measure transport properties of dynamical systems. They are ubiquitously used to measure quantum mechanical quantities, such as scrambling times and criticality in phase transitions, and to detect the onset of thermalisation. We characterise the computational complexity of estimating OTOCs over all eigenstates and show that it is complete for the One Clean Qubit model (DQC1). We then generalise our setup to establish the DQC1-completeness of N-time correlation functions over all eigenstates. Building on previous results, the DQC1-completeness of OTOCs and N-time correlation functions then allows us to highlight a dichotomy between the query complexity and the circuit complexity of estimating correlation functions.
arXiv
Context. Star-forming regions are gaining considerable interest in the high-energy astrophysics community as possible Galactic particle accelerators. In general, the role of electrons has not been fully considered in this kind of cosmic-ray source. However, the intense radiation fields inside these regions might make electrons significant gamma-ray contributors. Aims. We study the young and compact star-forming region NGC 3603, a well known gamma-ray emitter. Our intention is to test whether its gamma-ray emission can be produced by cosmic-ray electrons. Methods. We build a novel model by creating realistic 3D distributions of the gas and the radiation field in the region. We introduce these models into PICARD to perform cosmic-ray transport simulations and produce gamma-ray emission maps. The results are compared with a dedicated Fermi Large Area Telescope data analysis at high energies. We also explore the radio and neutrino emissions of the system. Results. We improve the existing upper limits of the NGC 3603 gamma-ray source extension. Although the gamma-ray spectrum is well reproduced with the injection of CR protons, it requires nearly 30\% acceleration efficiency. In addition, the resulting extension of the simulated hadronic source is in mild tension with the extension data upper limit. The radio data disfavours the lepton-only scenario. Finally, combining both populations, the results are consistent with all observables, although the exact contributions are ambiguous.
arXiv
Recent research suggests that the failures of Vision-Language Models (VLMs) at visual reasoning often stem from erroneous agreements -- when semantically distinct images are ambiguously encoded by the CLIP image encoder into embeddings with high cosine similarity. In this paper, we show that erroneous agreements are not always the main culprit, as Multimodal Large Language Models (MLLMs) can still extract distinct information from them. For instance, when distinguishing objects on the left vs right in the What'sUp benchmark, the CLIP image embeddings of the left/right pairs have an average cosine similarity $>0.99$, and CLIP performs at random chance; but LLaVA-1.5-7B, which uses the same CLIP image encoder, achieves nearly $100\%$ accuracy. We find that the extractable information in CLIP image embeddings is likely obscured by CLIP's inadequate vision-language alignment: Its matching score learned by the contrastive objective might not capture all diverse image-text correspondences. We also study the MMVP benchmark, on which prior work has shown that LLaVA-1.5 cannot distinguish image pairs with high cosine similarity. We observe a performance gain brought by attending more to visual input through an alternative decoding algorithm. Further, the accuracy significantly increases if the model can take both images as input to emphasize their nuanced differences. Both findings indicate that LLaVA-1.5 did not utilize extracted visual information sufficiently. In conclusion, our findings suggest that while improving image encoders could benefit VLMs, there is still room to enhance models with a fixed image encoder by applying better strategies for extracting and utilizing visual information.
arXiv
Value-based reinforcement learning (RL) can in principle learn effective policies for a wide range of multi-turn problems, from games to dialogue to robotic control, including via offline RL from static previously collected datasets. However, despite the widespread use of policy gradient methods to train large language models for single turn tasks (e.g., question answering), value-based methods for multi-turn RL in an off-policy or offline setting have proven particularly challenging to scale to the setting of large language models. This setting requires effectively leveraging pretraining, scaling to large architectures with billions of parameters, and training on large datasets, all of which represent major challenges for current value-based RL methods. In this work, we propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning (SFT) problem where the probabilities of tokens directly translate to Q-values. In this way we obtain an algorithm that smoothly transitions from maximizing the likelihood of the data during pretraining to learning a near-optimal Q-function during finetuning. Our algorithm has strong theoretical foundations, enjoying performance bounds similar to state-of-the-art Q-learning methods, while in practice utilizing an objective that closely resembles SFT. Because of this, our approach can enjoy the full benefits of the pretraining of language models, without the need to reinitialize any weights before RL finetuning, and without the need to initialize new heads for predicting values or advantages. Empirically, we evaluate our method on both pretrained LLMs and VLMs, on a variety of tasks including both natural language dialogue and robotic manipulation and navigation from images.
arXiv
Data assimilation (DA) combines partial observations with a dynamical model to improve state estimation. Filter-based DA uses only past and present data and is the prerequisite for real-time forecasts. Smoother-based DA exploits both past and future observations. It aims to fill in missing data, provide more accurate estimations, and develop high-quality datasets. However, the standard smoothing procedure requires using all historical state estimations, which is storage-demanding, especially for high-dimensional systems. This paper develops an adaptive-lag online smoother for a large class of complex dynamical systems with strong nonlinear and non-Gaussian features, which has important applications to many real-world problems. The adaptive lag allows the DA to utilize only observations within a nearby window, significantly reducing computational storage. Online lag adjustment is essential for tackling turbulent systems, where temporal autocorrelation varies significantly over time due to intermittency, extreme events, and nonlinearity. Based on the uncertainty reduction in the estimated state, an information criterion is developed to systematically determine the adaptive lag. Notably, the mathematical structure of these systems facilitates the use of closed analytic formulae to calculate the online smoother and the adaptive lag, avoiding empirical tunings as in ensemble-based DA methods. The adaptive online smoother is applied to studying three important scientific problems. First, it helps detect online causal relationships between state variables. Second, its advantage of computational storage is illustrated via Lagrangian DA, a high-dimensional nonlinear problem. Finally, the adaptive smoother advances online parameter estimation with partial observations, emphasizing the role of the observed extreme events in accelerating convergence.
arXiv
Quantum states cannot change instantaneously; the speed of quantum evolution is bounded from above by a value known as the quantum speed limit (QSL). Engineering the QSL is an important task for quantum information and computation science and technologies. This paper is devoted to engineering the QSL and quantum correlations in a simple two-qubit system undergoing dephasing, using the Periodic Dynamical Decoupling (PDD) method in both Markovian and non-Markovian dynamical regimes. The results show that when decoupling pulses are applied to both qubits, this method completely removes all undesirable effects of the dephasing process. Applying PDD to only one of the qubits also works, but with lower efficiency. Additionally, an ultra-high speedup of the quantum processes becomes possible during the pulse-application period for a sufficiently large number of pulses. These results are useful for implementing high-speed quantum gates.
arXiv
Is it possible to comprehensively destroy a piece of quantum information, so that nothing is left behind except the memory of whether one had it at one point? For example, various works, most recently Morimae, Poremba, and Yamakawa (TQC 2024), show how to construct a signature scheme with certified deletion where a user who deletes a signature on m cannot later produce a signature for m. However, in all of the existing schemes, even after deletion the user is still able to keep irrefutable evidence that m was signed, and thus they do not fully capture the spirit of deletion. In this work, we initiate the study of certified deniability in order to obtain a more comprehensive notion of deletion. Certified deniability uses a simulation-based security definition, ensuring that any information the user has kept after deletion could have been learned without being given the deletable object to begin with; meaning that deletion leaves no trace behind! We define and construct two non-interactive primitives that satisfy certified deniability in the quantum random oracle model: signatures and non-interactive zero-knowledge arguments (NIZKs). As a consequence, for example, it is not possible to delete a signature/NIZK and later provide convincing evidence that it used to exist. Notably, our results utilize uniquely quantum phenomena to bypass the celebrated result of Pass (CRYPTO, 2003) showing that deniable NIZKs are impossible even in the random oracle model.
arXiv
The Gaussian process (GP) is a widely used probabilistic machine learning method for stochastic function approximation, stochastic modeling, and analyzing real-world measurements of nonlinear processes. Unlike many other machine learning methods, GPs include an implicit characterization of uncertainty, making them extremely useful across many areas of science, technology, and engineering. Traditional implementations of GPs involve stationary kernels (also termed covariance functions) that limit their flexibility and exact methods for inference that prevent application to data sets with more than about ten thousand points. Modern approaches to address stationarity assumptions generally fail to accommodate large data sets, while all attempts to address scalability focus on approximating the Gaussian likelihood, which can involve subjectivity and lead to inaccuracies. In this work, we explicitly derive an alternative kernel that can discover and encode both sparsity and nonstationarity. We embed the kernel within a fully Bayesian GP model and leverage high-performance computing resources to enable the analysis of massive data sets. We demonstrate the favorable performance of our novel kernel relative to existing exact and approximate GP methods across a variety of synthetic data examples. Furthermore, we conduct space-time prediction based on more than one million measurements of daily maximum temperature and verify that our results outperform state-of-the-art methods in the Earth sciences. More broadly, having access to exact GPs that use ultra-scalable, sparsity-discovering, nonstationary kernels allows GP methods to truly compete with a wide variety of machine learning methods.
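The specific sparsity-discovering, nonstationary kernel derived in the work is not reproduced here; as a generic illustration of what nonstationarity means for a covariance function, the sketch below implements a standard Gibbs-type kernel with an input-dependent lengthscale. The lengthscale function and all parameter values are assumptions chosen only for the example.

```python
import numpy as np

def ell(x):
    """Illustrative input-dependent lengthscale (longer correlations at larger |x|)."""
    return 0.2 + 0.5 * np.abs(x)

def gibbs_kernel(x1, x2, variance=1.0):
    """Nonstationary squared-exponential (Gibbs) kernel in 1D:
    k(x, x') = sigma^2 * sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
                       * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))."""
    l1, l2 = ell(x1)[:, None], ell(x2)[None, :]
    dist2 = (x1[:, None] - x2[None, :]) ** 2
    denom = l1 ** 2 + l2 ** 2
    return variance * np.sqrt(2.0 * l1 * l2 / denom) * np.exp(-dist2 / denom)

x = np.linspace(-2, 2, 50)
K = gibbs_kernel(x, x) + 1e-8 * np.eye(len(x))   # jitter for numerical stability
print(np.all(np.linalg.eigvalsh(K) > 0))          # positive-definiteness check
```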
arXiv
We introduce average-distortion sketching for metric spaces. As in (worst-case) sketching, these algorithms compress points in a metric space while approximately recovering pairwise distances. The novelty is studying average-distortion: for any fixed (yet, arbitrary) distribution $\mu$ over the metric, the sketch should not over-estimate distances, and it should (approximately) preserve the average distance with respect to draws from $\mu$. The notion generalizes average-distortion embeddings into $\ell_1$ [Rabinovich '03, Kush-Nikolov-Tang '21] as well as data-dependent locality-sensitive hashing [Andoni-Razenshteyn '15, Andoni-Naor-Nikolov-et-al. '18], which have been recently studied in the context of nearest neighbor search. $\bullet$ For all $p \in [1, \infty)$ and any $c$ larger than a fixed constant, we give an average-distortion sketch for $([\Delta]^d, \ell_p)$ with approximation $c$ and bit-complexity $\text{poly}(cp \cdot 2^{p/c} \cdot \log(d\Delta))$, which is provably impossible in (worst-case) sketching. $\bullet$ As an application, we improve on the approximation of sublinear-time data structures for nearest neighbor search over $\ell_p$ (for large $p > 2$). The prior best approximation was $O(p)$ [Andoni-Naor-Nikolov-et-al. '18, Kush-Nikolov-Tang '21], and we show it can be any $c$ larger than a fixed constant (irrespective of $p$) by using $n^{\text{poly}(cp \cdot 2^{p/c})}$ space. We give some evidence that $2^{\Omega(p/c)}$ space may be necessary by giving a lower bound on average-distortion sketches which produce a certain probabilistic certificate of farness (which our sketches crucially rely on).
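Written out, the average-distortion requirement described above amounts to something like the following (a formalisation of the prose definition; the exact quantifiers and approximation factors used in the paper may differ). Here $\tilde{d}(x,y)$ denotes the distance estimate recovered from the sketches of $x$ and $y$:
\[
\tilde{d}(x,y) \le d(x,y) \ \ \text{for all } x,y,
\qquad
\mathbb{E}_{x,y \sim \mu}\big[\tilde{d}(x,y)\big] \;\ge\; \frac{1}{c}\,\mathbb{E}_{x,y \sim \mu}\big[d(x,y)\big].
\]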
arXiv
Radiology report generation (RRG) aims to create free-text radiology reports from clinical imaging. Grounded radiology report generation (GRRG) extends RRG by including the localisation of individual findings on the image. Currently, there are no manually annotated chest X-ray (CXR) datasets to train GRRG models. In this work, we present a dataset called PadChest-GR (Grounded-Reporting) derived from PadChest and aimed at training GRRG models for CXR images. We curate a public bilingual dataset of 4,555 CXR studies with grounded reports (3,099 abnormal and 1,456 normal), each containing complete lists of sentences describing individual present (positive) and absent (negative) findings in English and Spanish. In total, PadChest-GR contains 7,037 positive and 3,422 negative finding sentences. Every positive finding sentence is associated with up to two independent sets of bounding boxes labelled by different readers and has categorical labels for finding type, locations, and progression. To the best of our knowledge, PadChest-GR is the first manually curated dataset designed to train GRRG models for understanding and interpreting radiological images and generated text. By including detailed localization and comprehensive annotations of all clinically relevant findings, it provides a valuable resource for developing and evaluating GRRG models from CXR images. PadChest-GR can be downloaded upon request from https://bimcv.cipf.es/bimcv-projects/padchest-gr/
arXiv
In $(2+1)d$ topological quantum field theory, topological entanglement entropy (TEE) can be computed using the replica and surgery methods. We classify all bipartitions on a torus and propose a general method for calculating their corresponding TEEs. For each bipartition, the TEEs for different ground states are bounded by a topological quantity, termed the intrinsic TEE, which depends solely on the number of entanglement interfaces $ \pi_{\partial A}$, $S_{\text{iTEE}}(A) = - \pi_{\partial A} \ln \mathcal{D}$ with $\mathcal{D}$ being the total quantum dimension. We derive a modified form of strong subadditivity (SSA) for the intrinsic TEE, with the modification depending on the genus $g_X$ of the subregions $X$, $S_{\text{iTEE}}(A) + S_{\text{iTEE}}(B) - S_{\text{iTEE}}(A\cup B) - S_{\text{iTEE}}(A\cap B) \geq -2\ln \mathcal{D} (g_A + g_B - g_{A\cup B} - g_{A\cap B})$. Additionally, we show that SSA for the full TEE holds when the intersection number between torus knots of the subregions is not equal to one. When the intersection number is one, the SSA condition is satisfied if and only if $\sum_a |\psi_a|^2 (\ln S_{0a} - \ln |\psi_a|) + |S\psi_a|^2 (\ln S_{0a} - \ln |S\psi_a|) \geq 2 \ln \mathcal{D}$, with $S$ being the modular $S$-matrix and $\psi_a$ being the probability amplitudes. This condition has been verified for unitary modular categories up to rank $11$, while counterexamples have been found in non-pseudo-unitary modular categories, such as the Yang-Lee anyon.
arXiv
The LIGO, Virgo and KAGRA gravitational wave observatories are currently undertaking their O4 observing run, offering the opportunity to discover new electromagnetic counterparts to gravitational wave events. We examine the capability of the Neil Gehrels Swift Observatory (Swift) to respond to these triggers, primarily binary neutron star mergers, with both the UV/Optical Telescope (UVOT) and the X-ray Telescope (XRT). We simulate Swift's response to a trigger under different strategies using model skymaps, convolving these with the 2MPZ catalogue to produce an ordered list of observing fields, deriving the time taken for Swift to reach the correct field and simulating the instrumental responses to modelled kilonovae and short gamma-ray burst afterglows. We find that UVOT using the $u$ filter with an exposure time of order 120 s is optimal for most follow-up observations and that we are likely to detect counterparts in $\sim6$% of all binary neutron star triggers. We find that the gravitational wave 90% error area and measured distance to the trigger allow us to select optimal triggers to follow up. Focussing on sources less than 300 Mpc away, or 500 Mpc if the error area is less than a few hundred square degrees (distances greater than previously assumed), offers the best opportunity for discovery by Swift, with $\sim5 - 30$% of triggers having detection probabilities $\geq 0.5$. At even greater distances, we can further optimise our follow-up by adopting a longer 250 s or 500 s exposure time.
arXiv
The origin of the binary black hole mergers observed by LIGO-Virgo-KAGRA (LVK) remains an open question. We calculate the merger rate from primordial black holes (PBHs) within the density spike around supermassive black holes (SMBHs) at the center of galaxies. We show that the merger rate within the spike is comparable to that within the wider dark matter halo. We also calculate the extreme mass ratio inspiral (EMRI) signal from PBHs hosted within the density spike spiralling into their host SMBHs due to GW emission. We predict that LISA may detect $\sim10^4$ of these EMRIs with signal-to-noise ratio of 5 within a 4-year observation run, if all dark matter is made up of PBHs. Uncertainties in our rates come from the uncertain mass fraction of PBHs within the dark matter spike, relative to the host central SMBHs, which defines the parameter space LISA can constrain.
arXiv
The large eccentricities of cold Jupiters and the existence of hot Jupiters have long challenged theories of planet formation. A proposed solution to both of these puzzles is high-eccentricity migration, in which an initially cold Jupiter is excited to high eccentricities before being tidally circularized. Secular perturbations from an inclined stellar companion are a potential source of eccentricity oscillations, a phenomenon known as the Eccentric Kozai-Lidov (EKL) mechanism. Previous studies have found that the cold Jupiter eccentricity distribution produced by EKL is inconsistent with observations. However, these studies assumed all planets start on circular orbits. Here, we revisit this question, considering that an initial period of planet-planet scattering on $\sim$Myr timescales likely places planets on slightly eccentric orbits before being modulated by EKL on $\sim$Myr-Gyr timescales. Small initial eccentricities can have a dramatic effect by enabling EKL to act at lower inclinations. We numerically integrate the secular hierarchical three-body equations of motion, including general relativity and tides, for populations of cold giant planets in stellar binaries with varied initial eccentricity distributions. For populations with modest initial mean eccentricities, the simulated eccentricity distribution produced by EKL is statistically consistent with the observed eccentricities of cold single-planet systems. The lower eccentricities in a multi-planet control sample suggest that planetary companions quench stellar EKL. We show that scattering alone is unlikely to reproduce the present-day eccentricity distribution. We also show that the anisotropic inclination distribution produced by EKL may lead radial velocity measurements to underestimate giant planet masses.
arXiv
This paper proposes ProEdit - a simple yet effective framework for high-quality 3D scene editing guided by diffusion distillation in a novel progressive manner. Inspired by the crucial observation that multi-view inconsistency in scene editing is rooted in the diffusion model's large feasible output space (FOS), our framework controls the size of FOS and reduces inconsistency by decomposing the overall editing task into several subtasks, which are then executed progressively on the scene. Within this framework, we design a difficulty-aware subtask decomposition scheduler and an adaptive 3D Gaussian splatting (3DGS) training strategy, ensuring high quality and efficiency in performing each subtask. Extensive evaluation shows that our ProEdit achieves state-of-the-art results in various scenes and challenging editing tasks, all through a simple framework without any expensive or sophisticated add-ons like distillation losses, components, or training procedures. Notably, ProEdit also provides a new way to control, preview, and select the "aggressivity" of editing operation during the editing process.
arXiv
Recently, breakthroughs in video modeling have allowed for controllable camera trajectories in generated videos. However, these methods cannot be directly applied to user-provided videos that are not generated by a video model. In this paper, we present ReCapture, a method for generating new videos with novel camera trajectories from a single user-provided video. Our method allows us to re-generate the reference video, with all its existing scene motion, from vastly different angles and with cinematic camera motion. Notably, using our method we can also plausibly hallucinate parts of the scene that were not observable in the reference video. Our method works by (1) generating a noisy anchor video with a new camera trajectory using multiview diffusion models or depth-based point cloud rendering and then (2) regenerating the anchor video into a clean and temporally consistent reangled video using our proposed masked video fine-tuning technique.
arXiv
This work presents a modification of the self-attention dynamics proposed by Geshkovski et al. (arXiv:2312.10794) to better reflect the practically relevant, causally masked attention used in transformer architectures for generative AI. This modification translates into an interacting particle system that cannot be interpreted as a mean-field gradient flow. Despite this loss of structure, we significantly strengthen the results of Geshkovski et al. (arXiv:2312.10794) in this context: While previous rigorous results focused on cases where all three matrices (Key, Query, and Value) were scaled identities, we prove asymptotic convergence to a single cluster for arbitrary key-query matrices and a value matrix equal to the identity. Additionally, we establish a connection to the classical R\'enyi parking problem from combinatorial geometry to make initial theoretical steps towards demonstrating the existence of meta-stable states.
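Schematically, and omitting the projection onto the sphere used in the original formulation, the causally masked variant of the interacting-particle dynamics can be written as below; the only ingredients assumed here are the softmax normalisation and the restriction of the sum to $j \le i$, so this should be read as an orientation rather than the paper's exact system:
\[
\dot{x}_i(t) \;=\; \sum_{j \le i} \frac{e^{\langle Q x_i(t),\, K x_j(t)\rangle}}{\sum_{k \le i} e^{\langle Q x_i(t),\, K x_k(t)\rangle}}\; V x_j(t),
\qquad i = 1, \dots, n,
\]
with the convergence result above concerning arbitrary $Q, K$ and $V$ equal to the identity.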
arXiv
The classification of topological phases of matter is a fundamental challenge in quantum many-body physics, with applications to quantum technology. Recently, this classification has been extended to the setting of Adaptive Finite-Depth Local Unitary (AFDLU) circuits which allow global classical communication. In this setting, the trivial phase is the collection of all topological states that can be prepared via AFDLU. Here, we propose a complete classification of the trivial phase by showing how to prepare all solvable anyon theories that admit a gapped boundary via AFDLU, extending recent results on solvable groups. Our construction includes non-Abelian anyons with irrational quantum dimensions, such as Ising anyons, and more general acyclic anyons. Specifically, we introduce a sequential gauging procedure, with an AFDLU implementation, to produce a string-net ground state in any topological phase described by a solvable anyon theory with gapped boundary. In addition, we introduce a sequential ungauging and regauging procedure, with an AFDLU implementation, to apply string operators of arbitrary length for anyons and symmetry twist defects in solvable anyon theories. We apply our procedure to the quantum double of the group $S_3$ and to several examples that are beyond solvable groups, including the doubled Ising theory, the $\mathbb{Z}_3$ Tambara-Yamagami string-net, and doubled $SU(2)_4$ anyons.
arXiv
We present new advances in achieving exponential quantum speedups for solving optimization problems by low-depth quantum algorithms. Specifically, we focus on families of combinatorial optimization problems that exhibit symmetry and contain planted solutions. We rigorously prove that the 1-step Quantum Approximate Optimization Algorithm (QAOA) can achieve a success probability of $\Omega(1/\sqrt{n})$, and sometimes $\Omega(1)$, for finding the exact solution in many cases. Furthermore, we construct near-symmetric optimization problems by randomly sampling the individual clauses of symmetric problems, and prove that the QAOA maintains a strong success probability in this setting even when the symmetry is broken. Finally, we construct various families of near-symmetric Max-SAT problems and benchmark state-of-the-art classical solvers, discovering instances where all known classical algorithms require exponential time. Therefore, our results indicate that low-depth QAOA could achieve an exponential quantum speedup for optimization problems.
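For reference, the 1-step ($p = 1$) QAOA state referred to above has the standard form
\[
|\gamma,\beta\rangle \;=\; e^{-i\beta B}\, e^{-i\gamma C}\, |+\rangle^{\otimes n},
\qquad B = \sum_{i=1}^{n} X_i,
\]
where $C$ is the diagonal cost Hamiltonian encoding the optimization problem, and the success probability for a planted solution $z^{*}$ is $|\langle z^{*} | \gamma, \beta\rangle|^{2}$, maximized over the two angles $\gamma, \beta$.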
arXiv
Zero-shot coordination (ZSC) is a popular setting for studying the ability of reinforcement learning (RL) agents to coordinate with novel partners. Prior ZSC formulations assume the $\textit{problem setting}$ is common knowledge: each agent knows the underlying Dec-POMDP, knows others have this knowledge, and so on ad infinitum. However, this assumption rarely holds in complex real-world settings, which are often difficult to fully and correctly specify. Hence, in settings where this common knowledge assumption is invalid, agents trained using ZSC methods may not be able to coordinate well. To address this limitation, we formulate the $\textit{noisy zero-shot coordination}$ (NZSC) problem. In NZSC, agents observe different noisy versions of the ground truth Dec-POMDP, which are assumed to be distributed according to a fixed noise model. Only the distribution of ground truth Dec-POMDPs and the noise model are common knowledge. We show that a NZSC problem can be reduced to a ZSC problem by designing a meta-Dec-POMDP with an augmented state space consisting of all the ground-truth Dec-POMDPs. For solving NZSC problems, we propose a simple and flexible meta-learning method called NZSC training, in which the agents are trained across a distribution of coordination problems - which they only get to observe noisy versions of. We show that with NZSC training, RL agents can be trained to coordinate well with novel partners even when the (exact) problem setting of the coordination is not common knowledge.
arXiv
Let $F$ be a non-archimedean local field of characteristic zero. If $F$ has even residual characteristic, we assume $F/\mathbb{Q}_2$ is unramified. Let $V$ be a depth zero, irreducible, nongeneric supercuspidal representation of $GSp(4, F)$. We calculate the dimensions of the spaces of Siegel-invariant vectors in $V$ of level $\mathfrak{p}^n$ for all $n\geq0$.
arXiv
Misinformation is a complex societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this problem, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of all of the 36 datasets that consist of statements or claims. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as insufficient label quality, spurious correlations, or political bias. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. We discuss alternatives to mitigate this problem. Overall, this guide aims to provide a roadmap for obtaining higher quality data and conducting more effective evaluations, ultimately improving research in misinformation detection. All datasets and other artifacts are available at https://misinfo-datasets.complexdatalab.com/.
arXiv
Time-varying inhomogeneities on stellar surfaces constitute one of the largest sources of radial velocity (RV) error for planet detection and characterization. We show that stellar variations, because they manifest on coherent, rotating surfaces, give rise to changes that are complex but useably compact and coherent in the spectral domain. Methods for disentangling stellar signals in RV measurements benefit from modeling the full domain of spectral pixels. We simulate spectra of spotted stars using starry and construct a simple spectrum projection space that is sensitive to the orientation and size of stellar surface features. Regressing measured RVs in this projection space reduces RV scatter by 60-80% while preserving planet shifts. We note that stellar surface variability signals do not manifest in spectral changes that are purely orthogonal to a Doppler shift or exclusively asymmetric in line profiles; enforcing orthogonality or focusing exclusively on asymmetric features will not make use of all the information present in the spectra. We conclude with a discussion of existing and possible implementations on real data based on the presented compact, coherent framework for stellar signal mitigation.
arXiv
We provide a complete classification of the integrability and nonintegrability of the spin-1 bilinear-biquadratic model with a uniaxial anisotropic field, which includes the Heisenberg model and the Affleck-Kennedy-Lieb-Tasaki model. It is rigorously shown that all systems, except for the known integrable systems, are nonintegrable, meaning that they do not have nontrivial local conserved quantities. In particular, this result guarantees the nonintegrability of the Affleck-Kennedy-Lieb-Tasaki model, which is a fundamental assumption for quantum many-body scarring. Furthermore, we give simple necessary conditions for integrability in an extended model of the bilinear-biquadratic model with anisotropic interactions. Our result has accomplished a breakthrough in nonintegrability proofs by expanding their scope to spin-1 systems.
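For concreteness, the spin-1 bilinear-biquadratic chain with a uniaxial anisotropic field is conventionally written as
\[
H \;=\; \sum_{i} \Big[ \cos\theta\, \big(\mathbf{S}_i \cdot \mathbf{S}_{i+1}\big) \;+\; \sin\theta\, \big(\mathbf{S}_i \cdot \mathbf{S}_{i+1}\big)^{2} \Big] \;+\; D \sum_{i} \big(S_i^{z}\big)^{2},
\]
with the Heisenberg point at $\theta = 0$ and the Affleck-Kennedy-Lieb-Tasaki point at $\tan\theta = 1/3$ (for $D = 0$); the exact parametrization used in the paper may differ by an overall normalization.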
arXiv
There is great interest in fine-tuning frontier large language models (LLMs) to inject new information and update existing knowledge. While commercial LLM fine-tuning APIs from providers such as OpenAI and Google promise flexible adaptation for various applications, the efficacy of fine-tuning remains unclear. In this study, we introduce FineTuneBench, an evaluation framework and dataset for understanding how well commercial fine-tuning APIs can successfully learn new and updated knowledge. We analyze five frontier LLMs with commercially available fine-tuning APIs, including GPT-4o and Gemini 1.5 Pro, on their effectiveness in two settings: (1) ingesting novel information, such as recent news events and new people profiles, and (2) updating existing knowledge, such as updated medical guidelines and code frameworks. Our results reveal substantial shortcomings in all the models' abilities to effectively learn new information through fine-tuning, with an average generalization accuracy of 37% across all models. When updating existing knowledge, such as incorporating medical guideline updates, commercial fine-tuning APIs show even more limited capability (average generalization accuracy of 19%). Overall, fine-tuning GPT-4o mini is the most effective for infusing new knowledge and updating knowledge, followed by GPT-3.5 Turbo and GPT-4o. The fine-tuning APIs for Gemini 1.5 Flash and Gemini 1.5 Pro are unable to learn new knowledge or update existing knowledge. These findings underscore a major shortcoming in using current commercial fine-tuning services to achieve reliable knowledge infusion in common scenarios. We open source the FineTuneBench dataset at https://github.com/kevinwu23/StanfordFineTuneBench.
arXiv
Galaxy mergers are a key driver of galaxy formation and evolution, including the triggering of AGN and star formation to a still unknown degree. We thus investigate the impact of galaxy mergers on star formation and AGN activity using a sample of 3,330 galaxies at $z = [4.5, 8.5]$ from eight JWST fields (CEERS, JADES GOODS-S, NEP-TDF, NGDEEP, GLASS, El-Gordo, SMACS-0723, and MACS-0416), collectively covering an unmasked area of 189 arcmin$^2$. We focus on star formation rate (SFR) enhancement, AGN fraction, and AGN excess in major merger ($\mu > 1/4$) close-pair samples, defined by $\Delta z < 0.3$ and projected separations $r_p < 100$ kpc, compared to non-merger samples. We find that SFR enhancement occurs only at $r_p < 20$ kpc, with values of $0.25 \pm 0.10$ dex and $0.26 \pm 0.11$ dex above the non-merger medians for $z = [4.5, 6.5]$ and $z = [6.5, 8.5]$, respectively. No other statistically significant enhancements in galaxy sSFR or stellar mass are observed at any projected separation or redshift bin. We also compare our observational results with predictions from the SC-SAM simulation and find no evidence of star formation enhancement in the simulations at any separation range. Finally, we examine the AGN fraction and AGN excess, finding that the fraction of AGNs in AGN-galaxy pairs, relative to the total AGN population, is $3.25^{+1.50}_{-1.06}$ times greater than the fraction of galaxy pairs relative to the overall galaxy population at the same redshift. We find that nearly all AGNs have a companion within 100 kpc and observe an excess AGN fraction in close-pair samples compared to non-merger samples. This excess is found to be $1.26 \pm 0.06$ and $1.34 \pm 0.06$ for AGNs identified via the inferred BPT diagram and photometric SED selection, respectively.
arXiv
Using optical technology for current injection and electromagnetic emission simplifies the comparison between materials. Here, we inject current into monolayer graphene and bulk gallium arsenide (GaAs) using two-color quantum interference and detect the emitted electric field by electro-optic sampling. We find the amplitude of emitted terahertz (THz) radiation scales in the same way for both materials even though they differ in dimension, band gap, atomic composition, symmetry and lattice structure. In addition, we observe the same mapping of the current direction to the light characteristics. With no electrodes for injection or detection, our approach will allow electron scattering timescales to be directly measured. We envisage that it will enable exploration of new materials suitable for generating terahertz magnetic fields.
arXiv
This paper investigates the impact of noise in the quantum query model, a fundamental framework for quantum algorithms. We focus on the scenario where the oracle is subject to non-unitary (or irreversible) noise, specifically under the \textit{faulty oracle} model, where the oracle fails with a constant probability and acts as identity. Regev and Schiff (ICALP'08) showed that quantum advantage is lost for the search problem under this noise model. Our main result shows that every quantum query algorithm can be made robust in this noise model with a roughly quadratic blow-up in query complexity, thereby preserving quantum speedup for all problems where the quantum advantage is super-cubic. This is the first non-trivial robustification of quantum query algorithms against an oracle that is noisy.
arXiv
Fine-grained alignment between videos and text is challenging due to complex spatial and temporal dynamics in videos. Existing video-based Large Multimodal Models (LMMs) handle basic conversations but struggle with precise pixel-level grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed for fine-grained pixel-level grounding in videos based on user-provided textual inputs. Our design seamlessly connects three key components: a Large Language Model, a dual vision encoder that emphasizes both spatial and temporal details, and a spatio-temporal decoder for accurate mask generation. This connection is facilitated via tunable V-L and L-V adapters that enable close Vision-Language (VL) alignment. The architecture is trained to synchronize both spatial and temporal elements of video content with textual instructions. To enable fine-grained grounding, we curate a multimodal dataset featuring detailed visually-grounded conversations using a semiautomatic annotation pipeline, resulting in a diverse set of 38k video-QA triplets along with 83k objects and 671k masks. We evaluate VideoGLaMM on three challenging tasks: Grounded Conversation Generation, Visual Grounding, and Referring Video Segmentation. Experimental results show that our model consistently outperforms existing approaches across all three tasks.
arXiv
The generalized hydrodynamics (GHD) equation is the equivalent of the Euler equations of hydrodynamics for integrable models. Systems of hyperbolic equations such as the Euler equations usually develop shocks and are plagued by problems of uniqueness. We establish for the first time the existence and uniqueness of solutions to the full GHD equation and the absence of shocks, from a large class of initial conditions with bounded occupation function. We assume only absolute integrability of the two-body scattering shift. In applications to quantum models of fermionic type, this includes all commonly used physical initial states, such as locally thermal states and zero-entropy states. We show in particular that differentiable initial conditions give differentiable solutions at all times and that weak initial conditions such as the Riemann problem have unique weak solutions which preserve entropy. For this purpose, we write the GHD equation as a new fixed-point problem (announced in a companion paper). We show that the fixed point exists, is unique, and is approached, under an iterative solution procedure, in the Banach topology on functions of momenta.
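For orientation, the Euler-scale GHD equation whose well-posedness is established takes the standard form
\[
\partial_t \rho_p(\theta; x, t) \;+\; \partial_x\!\Big[ v^{\mathrm{eff}}(\theta; x, t)\, \rho_p(\theta; x, t) \Big] \;=\; 0,
\qquad
v^{\mathrm{eff}}(\theta) \;=\; \frac{(E')^{\mathrm{dr}}(\theta)}{(p')^{\mathrm{dr}}(\theta)},
\]
where $\rho_p$ is the quasi-particle density, $E(\theta)$ and $p(\theta)$ are the bare energy and momentum, and the dressing operation $(\cdot)^{\mathrm{dr}}$ involves the two-body scattering shift; conventions vary slightly across the literature.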
arXiv
In this paper, we study the optimal control for an SEIR model adapted to the vaccination strategy of susceptible individuals. There are factors associated with a vaccination campaign that make this strategy not only a public health issue but also an economic one. In this case, optimal control is important as it minimizes implementation costs. We consider the availability of two vaccines with different efficacy levels, and the control indicates when each vaccine should be used. The optimal strategy specifies in all cases how vaccine purchases should be distributed. For similar efficacy values, we perform a sensitivity analysis on parameters that depend on the intrinsic characteristics of the vaccines. Additionally, we investigate the behavior of the number of infections under the optimal vaccination strategy.
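One common way to set up such a problem, shown purely as a hypothetical sketch (the paper's compartments, efficacy terms $e_1, e_2$, and cost weights $A, B_1, B_2$ are assumptions made here), couples a standard SEIR system with two vaccination controls $u_1(t), u_2(t)$ and a quadratic cost functional:
\[
\begin{aligned}
\dot{S} &= -\beta S I - \big(e_1 u_1(t) + e_2 u_2(t)\big) S, &
\dot{E} &= \beta S I - \sigma E, \\
\dot{I} &= \sigma E - \gamma I, &
\dot{R} &= \gamma I + \big(e_1 u_1(t) + e_2 u_2(t)\big) S,
\end{aligned}
\qquad
J[u_1,u_2] = \int_0^T \!\Big( A\, I(t) + B_1 u_1^2(t) + B_2 u_2^2(t) \Big)\, dt,
\]
where $e_1, e_2$ are the efficacies of the two vaccines and $A, B_1, B_2$ weight the burden of infections against the cost of each vaccination campaign.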
arXiv
In this paper we study flow problems on temporal networks, where edge capacities and travel times change over time. We consider a network with $n$ nodes and $m$ edges where the capacity and length of each edge is a piecewise constant function, and use $\mu=\Omega(m)$ to denote the total number of pieces in all of the $2m$ functions. Our goal is to design exact algorithms for various flow problems that run in time polynomial in the parameter $\mu$. Importantly, the algorithms we design are strongly polynomial, i.e. have no dependence on the capacities, flow value, or the time horizon of the flow process, all of which can be exponentially large relative to the other parameters; and return an integral flow when all input parameters are integral. Our main result is an algorithm for checking feasibility of a dynamic transshipment problem on temporal networks -- given multiple sources and sinks with supply and demand values, is it possible to satisfy the desired supplies and demands within a given time horizon? We develop a fast ($O(\mu^3)$ time) algorithm for this feasibility problem when the input network has a certain canonical form, by exploiting the cut structure of the associated time expanded network. We then adapt an approach of Hoppe and Tardos (2000) to show how other flow problems on temporal networks can be reduced to the canonical format. For computing dynamic transshipments on temporal networks, this results in a $O(\mu^7)$ time algorithm, whereas the previous best integral exact algorithm runs in time $\tilde O(\mu^{19})$. We achieve similar improvements for other flow problems on temporal networks.
arXiv
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, particularly those with reproducible data processing pipelines and transparent training protocols, remain limited. The scarcity is due to various challenges, including resource constraints, ethical considerations, and the competitive advantages of keeping models advanced. To address the gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community. Unlike most prior efforts, we release not only model weights and inference code, but also the reproducible training data, complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Through this comprehensive release, we identify the key ingredients for building a top-tier code LLM: (1) code optimized heuristic rules for data cleaning and methods for data deduplication, (2) recall of text corpus related to code and (3) high-quality synthetic data in both annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving as both a powerful model and an open foundation to accelerate research, and enable reproducible advancements in code AI.
arXiv
We study $\varepsilon$-stability in continuous logic. We first consider stability in a model, where we obtain a definability of types result with a better approximation than that in the literature. We also prove forking symmetry for $\varepsilon$-stability and briefly discuss finitely satisfiable types. We then do a short survey of $\varepsilon$-stability in a theory. Finally, we consider the map that takes each formula to its "degree" of stability in a given theory and show that it is a seminorm. All of this is done in the context of a first-order formalism that allows predicates to take values in arbitrary compact metric spaces.
arXiv
Cosmological parameters and dark energy (DE) behavior are generally constrained assuming \textit{a priori} models. We work out a model-independent reconstruction to bound the key cosmological quantities and the DE evolution. Through the model-independent \textit{B\'ezier interpolation} method, we reconstruct the Hubble rate from the observational Hubble data and derive analytic expressions for the distances of galaxy clusters, type Ia supernovae, and uncorrelated baryonic acoustic oscillation (BAO) data. In view of the discrepancy between Sloan Digital Sky Survey (SDSS) and Dark Energy Spectroscopic Instrument (DESI) BAO data, they are kept separate in two distinct analyses. Correlated BAO data are employed to break the baryonic--dark matter degeneracy. All these interpolations enable us to single out and reconstruct the DE behavior with the redshift $z$ in a totally model-independent way. In both analyses, with SDSS-BAO or DESI-BAO data sets, the constraints agree at $1$--$\sigma$ confidence level (CL) with the flat $\Lambda$CDM model. The Hubble constant tension appears solved in favor of the Planck satellite value. The reconstructed DE behavior exhibits deviations at small $z$ ($>1$--$\sigma$ CL), but agrees ($<1$--$\sigma$ CL) with the cosmological constant paradigm at larger $z$. Our method hints for a slowly evolving DE, consistent with a cosmological constant at early times.
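The Bézier interpolation of the Hubble rate typically takes the Bernstein-polynomial form below; the order $n$ and the normalization point $z_m$ (usually the maximum redshift of the sample) are choices of the analysis, and this generic expression is shown only to fix ideas:
\[
H_n(z) \;=\; \sum_{d=0}^{n} \beta_d \binom{n}{d} \left(\frac{z}{z_m}\right)^{d} \left(1 - \frac{z}{z_m}\right)^{n-d},
\]
with the coefficients $\beta_d$ fitted directly to the observational Hubble data rather than derived from an assumed cosmological model.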
arXiv
We consider quantum circuit models where the gates are drawn from arbitrary gate ensembles given by probabilistic distributions over certain gate sets and circuit architectures, which we call stochastic quantum circuits. Of main interest in this work is the speed of convergence of stochastic circuits with different gate ensembles and circuit architectures to unitary t-designs. A key motivation for this theory is the varying preference for different gates and circuit architectures in different practical scenarios. In particular, it provides a versatile framework for devising efficient circuits for implementing $t$-designs and relevant applications including random circuit and scrambling experiments, as well as benchmarking the performance of gates and circuit architectures. We examine various important settings in depth. A key aspect of our study is an "ironed gadget" model, which allows us to systematically evaluate and compare the convergence efficiency of entangling gates and circuit architectures. Particularly notable results include i) gadgets of two-qubit gates with KAK coefficients $\left(\frac{\pi}{4}-\frac{1}{8}\arccos(\frac{1}{5}),\frac{\pi}{8},\frac{1}{8}\arccos(\frac{1}{5})\right)$ (which we call $\chi$ gates) directly form exact 2- and 3-designs; ii) the iSWAP gate family achieves the best efficiency for convergence to 2-designs under mild conjectures with numerical evidence, even outperforming the Haar-random gate, for generic many-body circuits; iii) iSWAP + complete graph achieve the best efficiency for convergence to 2-designs among all graph circuits. A variety of numerical results are provided to complement our analysis. We also derive robustness guarantees for our analysis against gate perturbations. Additionally, we provide cursory analysis on gates with higher locality and found that the Margolus gate outperforms various other well-known gates.
arXiv
New metal-organic frameworks (MOFs) are periodically synthesized all over the world due to the wide range of societally and environmentally relevant applications they possess. However, the mechanisms and thermodynamics associated to MOF self-assembly are poorly understood because of the difficulties in studying such a multi-scale process with molecular-level resolution. In this work, we performed well-tempered metadynamics simulations of the early nucleation and late growth steps of the self-assembly of ZIF-4 using a reactive force field. We found that the formation of building blocks is a complex, multi-step process that involves changes in the coordination of the metal ion. Saturating the ligand coordination of a metal ion is more energetically favorable during growth than during early nucleation. The addition of a fourth ligand is less exergonic than it is for the first three and the associated free energy is highly dependent on the local environment of the undercoordinated metal ion. The stability of this bond depends on the strength of the solvent--metal ion interaction. Incorporating a ligand to a ZIF-1 crystal is less favorable compared to the more stable ZIF-4 polymorph. Milder differences were found when comparing the growth of (100), (010) and (001) ZIF-4 surfaces.
arXiv
The $\sigma_{t}$-irregularity (or sigma total index) is a graph invariant which is defined as $\sigma_{t}(G)=\sum_{\{u,v\}\subseteq V(G)}(d(u)-d(v))^{2},$ where $d(z)$ denotes the degree of $z$. This irregularity measure was proposed by R\' {e}ti [Appl. Math. Comput. 344-345 (2019) 107-115], and recently rediscovered by Dimitrov and Stevanovi\'c [Appl. Math. Comput. 441 (2023) 127709]. In this paper we remark that $\sigma_{t}(G)=n^{2}\cdot Var(G)$, where $Var(G)$ is the degree variance of the graph. Based on this observation, we characterize irregular graphs with maximum $\sigma_{t}$-irregularity. We show that among all connected graphs on $n$ vertices, the split graphs $S_{\lceil\frac{n}{4}\rceil, \lfloor\frac{3n}{4}\rfloor }$ and $S_{\lfloor\frac{n}{4}\rfloor, \lceil\frac{3n}{4}\rceil }$ have the maximum $\sigma_{t}$-irregularity, and among all complete bipartite graphs on $n$ vertices, either the complete bipartite graph $K_{\lfloor\frac{n}{4}(2-\sqrt{2})\rfloor, \lceil\frac{n}{4}(2+\sqrt{2})\rceil }$ or $K_{\lceil\frac{n}{4}(2-\sqrt{2})\rceil, \lfloor\frac{n}{4}(2+\sqrt{2})\rfloor }$ has the maximum sigma total index. Moreover, various upper and lower bounds for $\sigma_{t}$-irregularity are provided; in this direction we give a relation between the graph energy $\mathcal{E}(G)$ and sigma total index $\sigma_{t}(G)$ and give another proof of two results by Dimitrov and Stevanovi\'c. Applying Fiedler's characterization of the largest and the second smallest Laplacian eigenvalue of the graph, we also establish new relationships between $\sigma_{t}$ and $\sigma$. We conclude the paper with two conjectures.
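The identity $\sigma_{t}(G) = n^{2}\cdot Var(G)$ follows from a one-line expansion over all vertex pairs:
\[
\sigma_{t}(G) \;=\; \sum_{\{u,v\}\subseteq V(G)} \big(d(u)-d(v)\big)^{2}
\;=\; \frac{1}{2}\sum_{u\in V}\sum_{v\in V} \big(d(u)-d(v)\big)^{2}
\;=\; n\sum_{u\in V} d(u)^{2} - \Big(\sum_{u\in V} d(u)\Big)^{2}
\;=\; n^{2}\, Var(G),
\]
since $Var(G) = \frac{1}{n}\sum_{u} d(u)^{2} - \big(\frac{1}{n}\sum_{u} d(u)\big)^{2}$.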
arXiv
This paper combines a techno-economic energy system model with an econometric model to maximise electricity price forecasting accuracy. The proposed combination model is tested on the German day-ahead wholesale electricity market. Our paper also benchmarks the results against several econometric alternatives. Lastly, we demonstrate the economic value of improved price estimators maximising the revenue from an electric storage resource. The results demonstrate that our integrated model improves overall forecasting accuracy by 18 %, compared to available literature benchmarks. Furthermore, our robustness checks reveal that a) the Ensemble Deep Neural Network model performs best in our dataset and b) adding output from the techno-economic energy systems model as econometric model input improves the performance of all econometric models. The empirical relevance of the forecast improvement is confirmed by the results of the exemplary storage optimisation, in which the integration of the techno-economic energy system model leads to a revenue increase of up to 10 %.
arXiv
This study investigates sentiment patterns within Spanish political party communications on Twitter by leveraging BETO and RoBERTuito, two pre-trained language models optimized for Spanish text. Using a dataset of tweets from major Spanish political parties (PSOE, PP, Vox, Podemos, and Ciudadanos) spanning 2019 to 2024, this research analyzes sentiment distributions and explores the relationship between sentiment expression and party ideology. The findings indicate that both models consistently identify a predominant Neutral sentiment across all parties, with significant variations in Negative and Positive sentiments that align with ideological distinctions. Specifically, Vox exhibits higher levels of Negative sentiment, while PSOE demonstrates relatively high Positive sentiment, supporting the hypothesis that emotional appeals in political messaging reflect ideological stances. This study underscores the potential of pre-trained language models for non-English sentiment analysis on social media, providing insights into sentiment dynamics that shape public discourse within Spain's multi-party political system.
arXiv
We show that every Born Lie algebra can be obtained by the bicross product construction starting from two pseudo-Riemannian Lie algebras. We then obtain a classification of all Lie algebras up to dimension four and all six-dimensional nilpotent Lie algebras admitting an integrable Born structure. Finally, we study the curvature properties of the pseudo-Riemannian metrics of the integrable Born structures obtained in our classification results.
arXiv
In many repeated auction settings, participants care not only about how frequently they win but also how their winnings are distributed over time. This problem arises in various practical domains where avoiding congested demand is crucial, such as online retail sales and compute services, as well as in advertising campaigns that require sustained visibility over time. We introduce a simple model of this phenomenon, modeling it as a budgeted auction where the value of a win is a concave function of the time since the last win. This implies that for a given number of wins, even spacing over time is optimal. We also extend our model and results to the case when not all wins result in "conversions" (realization of actual gains), and the probability of conversion depends on a context. The goal is to maximize and evenly space conversions rather than just wins. We study the optimal policies for this setting in second-price auctions and offer learning algorithms for the bidders that achieve low regret against the optimal bidding policy in a Bayesian online setting. Our main result is a computationally efficient online learning algorithm that achieves $\tilde O(\sqrt T)$ regret. We achieve this by showing that an infinite-horizon Markov decision process (MDP) with the budget constraint in expectation is essentially equivalent to our problem, even when limiting that MDP to a very small number of states. The algorithm achieves low regret by learning a bidding policy that chooses bids as a function of the context and the system's state, which will be the time elapsed since the last win (or conversion). We show that state-independent strategies incur linear regret even without uncertainty of conversions. We complement this by showing that there are state-independent strategies that, while still having linear regret, achieve a $(1-\frac 1 e)$ approximation to the optimal reward.
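The claim that even spacing is optimal for a fixed number of wins is a direct consequence of concavity (Jensen's inequality). The tiny sketch below checks it numerically for an assumed concave value $v(\Delta) = \sqrt{\Delta}$ of the time $\Delta$ since the last win; the value function, horizon, and number of wins are illustrative choices, not taken from the paper.

```python
import numpy as np

def total_value(gaps, v=np.sqrt):
    """Sum of per-win values when consecutive wins are separated by the given gaps."""
    return sum(v(g) for g in gaps)

T, k = 100.0, 5                       # horizon and number of wins (illustrative)
even = [T / k] * k                    # evenly spaced wins
rng = np.random.default_rng(1)
for _ in range(3):
    w = rng.uniform(0.1, 1.0, size=k)
    uneven = list(T * w / w.sum())    # random gaps with the same total time
    print(round(total_value(even), 3), ">=", round(total_value(uneven), 3))
```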
arXiv
We propose the use of the Extended Kalman Filter (EKF) for online data assimilation and update of a dynamic model, preliminarily identified through the Sparse Identification of Nonlinear Dynamics (SINDy). This data-driven technique may avoid biases due to incorrect modelling assumptions and exploits SINDy to approximate the system dynamics leveraging a predefined library of functions, where active terms are selected and weighted by a sparse set of coefficients. This results in a physically-sound and interpretable dynamic model, which helps reduce the epistemic uncertainty that often affects machine learning approaches. Treating the SINDy model coefficients as random variables, we propose to update them while acquiring (possibly noisy) system measurements, thus enabling the online identification of time-varying systems. These changes can stem from, e.g., varying operational conditions or unforeseen events. The EKF performs model adaptation through joint state-parameter estimation, with the Jacobian matrices required to compute the model sensitivity inexpensively evaluated from the SINDy model formulation. The effectiveness of this approach is demonstrated through three case studies: (i) a Lotka-Volterra model in which all parameters simultaneously evolve during the observation period; (ii) a Selkov model where the system undergoes a bifurcation not seen during the SINDy training; (iii) a MEMS arch exhibiting a 1:2 internal resonance. The ability of the EKF to recover inactive functional terms from the SINDy library, or to discard unnecessary contributions, is also highlighted. Based on the presented applications, this method shows strong promise for handling time-varying nonlinear dynamic systems possibly experiencing bifurcating behaviours.
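As a purely illustrative sketch (not the authors' implementation), a joint state-parameter EKF over a SINDy-style model can be set up as follows: the state vector is augmented with the library coefficients, the discrete-time model is an Euler step through the library, and the Jacobian is assembled analytically from the library terms. The particular library, noise levels, and the Lotka-Volterra-like example are assumptions made here for brevity.

```python
import numpy as np

# SINDy-style library for a 2D state: Theta(x) = [x1, x2, x1*x2]
def theta(x):
    return np.array([x[0], x[1], x[0] * x[1]])

def dtheta_dx(x):
    # Jacobian of the library with respect to the state, shape (3, 2)
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1], x[0]]])

def ekf_step(z, P, y, dt, Q, R, H):
    """One predict/update step of the joint state-parameter EKF.
    z = [x1, x2, xi11..xi23] stacks the state and the 2x3 coefficient matrix."""
    x, Xi = z[:2], z[2:].reshape(2, 3)
    # predict: Euler step x_{k+1} = x_k + dt * Xi @ Theta(x_k), coefficients held constant
    x_pred = x + dt * Xi @ theta(x)
    F = np.eye(z.size)
    F[:2, :2] += dt * Xi @ dtheta_dx(x)            # d x_pred / d x
    F[:2, 2:] = dt * np.kron(np.eye(2), theta(x))  # d x_pred / d Xi (row-major flattening)
    z_pred = np.concatenate([x_pred, Xi.ravel()])
    P_pred = F @ P @ F.T + Q
    # update with measurement y = H z + noise
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - H @ z_pred)
    P_new = (np.eye(z.size) - K @ H) @ P_pred
    return z_new, P_new

# toy usage: observe both state components of a Lotka-Volterra-like system
dt, H = 0.01, np.hstack([np.eye(2), np.zeros((2, 6))])
Q, R = 1e-6 * np.eye(8), 1e-3 * np.eye(2)
z, P = np.concatenate([[1.0, 0.5], [1.0, 0.0, -0.5, 0.0, -1.0, 0.5]]), 0.1 * np.eye(8)
y = np.array([1.01, 0.49])                         # one (synthetic) noisy measurement
z, P = ekf_step(z, P, y, dt, Q, R, H)
print(z[:2])                                       # updated state estimate
```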
arXiv
We study the robust regulation of labour contracts in moral hazard problems. A firm offers a contract to incentivise production by an agent protected by limited liability. A regulator chooses the set of permissible contracts to (i) improve efficiency and (ii) protect the worker. The regulator ignores the agent's productive actions and the firm's costs and evaluates regulation by its worst-case regret. The regret-minimising regulation imposes a linear minimum wage, allowing all contracts above this linear threshold. The slope of the minimum contract balances the worker's protection - by ensuring they receive a minimal share of the production - and the necessary flexibility for incentive provision.
arXiv
The present article is concerned with the nonlinear approximation of functions in the Sobolev space H^q with respect to a tensor-product, or hyperbolic wavelet basis on the unit n-cube. Here, q is a real number, which is not necessarily positive. We derive Jackson and Bernstein inequalities to obtain that the approximation classes contain Besov spaces of hybrid regularity. Especially, we show that all functions that can be approximated by classical wavelets are also approximable by tensor-product wavelets at least at the same rate. In particular, this implies that for nonnegative regularity, the classical Besov spaces of regularity q+sn, integrability and weak index t, with 1/t = s + 1/2, are included in the Besov spaces of hybrid regularity with isotropic regularity q and additional mixed regularity s.
arXiv
As implemented in commercial device-modeling software, the four-state nonradiative multi-phonon model has attracted intensive attention in the past decade for describing the physics of negative bias temperature instability (NBTI) and other reliability issues of Si/SiO$_\text{2}$ MOSFET devices. It was proposed initially based on the assumption that the oxygen vacancy defects (V$_\text{O}$) in the SiO$_\text{2}$ dielectric layer are bistable in the Si-dimer and back-projected structures during carrier capture and emission. Through high-throughput first-principles structural search, we found that V$_\text{O}$ on non-equivalent O sites in amorphous SiO$_\text{2}$ can take 4 types of structural configurations in the neutral state and 7 types of configurations in the +1 charged state after capturing holes, which produce a wide range of charge-state transition levels for trapping holes. This finding contradicts the structural-bistability assumption and makes the four-state model invalid for most O sites. To describe the reliability physics accurately, we propose an all-state model to consider all these structural configurations as well as all the carrier capture/emission transitions and thermal transitions between them. With the all-state model, we show that the V$_\text{O}$ defects play important roles in causing NBTI, which challenges the recent studies that discarded V$_\text{O}$ as a possible hole trap in NBTI. Our systematic calculations on the diversified V$_\text{O}$ properties and the all-state model provide the microscopic foundation for describing the reliability physics of MOSFETs and other transistors accurately.
arXiv
The detection of cosmic antideuterons ($\overline{\rm D}$) at kinetic energies below a few GeV/n could provide a smoking gun signature for dark matter (DM). However, the theoretical uncertainties of coalescence models have represented so far one of the main limiting factors for precise predictions of the $\overline{\rm D}$ flux. In this Letter we present a novel calculation of the $\overline{\rm D}$ source spectra, based on the Wigner formalism, for which we implement the Argonne $v_{18}$ antideuteron wavefunction that does not have any free parameters related to the coalescence process. We show that the Argonne Wigner model excellently reproduces the $\overline{\rm D}$ multiplicity measured by ALEPH at the $Z$-boson pole, which is usually adopted to tune the coalescence models based on different approaches. Our analysis is based on Pythia~8 Monte Carlo event generator and the state-of-the-art Vincia shower algorithm. We succeed, with our model, to reduce the current theoretical uncertainty on the prediction of the $\overline{\rm D}$ source spectra to a few percent, for $\overline{\rm D}$ kinetic energies relevant to DM searches with GAPS and AMS, and for DM masses above a few tens of GeV. This result implies that the theoretical uncertainties due to the coalescence process are no longer the main limiting factor in the predictions. We provide the tabulated source spectra for all the relevant DM annihilation/decay channels and DM masses between 5 GeV and 100 TeV, on the CosmiXs github repository (https://github.com/ajueid/CosmiXs.git).
arXiv
Deep regression models are used in a wide variety of safety-critical applications, but are vulnerable to backdoor attacks. Although many defenses have been proposed for classification models, they are ineffective because they do not account for the unique characteristics of regression models. First, the outputs of regression models are continuous values instead of discretized labels. Thus, the potential infected target of a backdoored regression model has infinitely many possibilities, making it impossible for existing defenses to determine. Second, the backdoor behavior of backdoored deep regression models is triggered by the activation values of all the neurons in the feature space, which makes it difficult to detect and mitigate using existing defenses. To resolve these problems, we propose DRMGuard, the first defense that can identify whether a deep regression model in the image domain is backdoored or not. DRMGuard formulates the optimization problem for reverse engineering based on the unique output-space and feature-space characteristics of backdoored deep regression models. We conduct extensive evaluations on two regression tasks and four datasets. The results show that DRMGuard can consistently defend against various backdoor attacks. We also generalize four state-of-the-art defenses designed for classifiers to regression models, and compare DRMGuard with them. The results show that DRMGuard significantly outperforms all those defenses.
arXiv
We consider the space of all configurations of finitely many (potentially nested) circles in the plane. We prove that this space is aspherical and compute the fundamental group of each of its connected components. It turns out that these fundamental groups are obtained as iterated semidirect products of braid groups, with the structure for each component dictated by a finite rooted tree. These groups can be viewed as "braided" versions of the automorphism groups of such trees. We also discuss connections to statistical mechanics, topological data analysis, and geometric group theory.
arXiv
We present a framework for learning a single policy capable of producing all quadruped gaits and transitions. The framework consists of a policy trained with deep reinforcement learning (DRL) to modulate the parameters of a system of abstract oscillators (i.e., a Central Pattern Generator), whose output is mapped to joint commands through a pattern formation layer that sets the gait style, i.e., body height, swing-foot ground clearance, and foot offset. Different gaits are formed by changing the coupling between oscillators, and the gait can be selected instantaneously by the user at any velocity. With this framework, we systematically investigate which gait should be used at which velocity, and when gait transitions should occur, from a Cost of Transport (COT), i.e., energy-efficiency, point of view. Additionally, we examine how the gait style changes with locomotion speed for each gait in order to maintain the most energy-efficient locomotion. While the currently most popular gait (trot) does not yield the lowest COT, we find that considering co-dependent metrics such as mean base velocity and joint acceleration results in different `optimal' gaits than those that minimize COT alone. We deploy our controller in various hardware experiments, showing all 9 typical quadruped animal gaits, and demonstrate generalization to gaits unseen during training as well as robustness to leg failures. Video results can be found at https://youtu.be/OLoWSX_R868.
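For intuition, here is a minimal sketch (under our own simplifying assumptions, not the paper's controller) of phase-coupled oscillators whose phase biases encode different quadruped gaits; a learned policy would modulate quantities such as the intrinsic frequency and the mapping from phases to foot trajectories.

```python
# Four phase-coupled oscillators, one per leg (order: FR, FL, RR, RL).
import numpy as np

GAIT_PHASES = {            # nominal phase offsets per leg, in fractions of a cycle
    "trot":  np.array([0.0, 0.5, 0.5, 0.0]),
    "pace":  np.array([0.0, 0.5, 0.0, 0.5]),
    "bound": np.array([0.0, 0.0, 0.5, 0.5]),
    "walk":  np.array([0.0, 0.5, 0.75, 0.25]),
}

def step_cpg(theta, omega, gait="trot", k=2.0, dt=0.001):
    """One Euler step of the phase dynamics; a DRL policy could modulate omega."""
    phi = 2 * np.pi * GAIT_PHASES[gait]
    dtheta = np.full(4, omega)
    for i in range(4):
        for j in range(4):
            # Coupling drives theta_i - theta_j toward the gait's phase bias.
            dtheta[i] += k * np.sin(theta[j] - theta[i] - (phi[j] - phi[i]))
    return (theta + dt * dtheta) % (2 * np.pi)

theta = 2 * np.pi * np.random.rand(4)
for _ in range(5000):
    theta = step_cpg(theta, omega=2 * np.pi * 2.0, gait="trot")   # 2 Hz trot
foot_height = 0.05 * np.clip(np.sin(theta), 0.0, None)            # 5 cm swing clearance
print(np.round(theta / (2 * np.pi), 2), foot_height)
```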
arXiv
We present a comprehensive first-principles analysis of the thermoelectric transport properties of hole-doped pyrite FeS$_2$ that includes electron-phonon interactions. This work was motivated by the variations in the magnitude of the thermopower reported in previous experimental and theoretical studies of hole-doped FeS$_2$. Our calculations reveal that hole-doped FeS$_2$ exhibits a large positive room-temperature thermopower across all doping levels, reaching 608 $\mu$V/K at a low hole-doping concentration of 10$^{19}$ cm$^{-3}$. This promising thermopower prompted a comprehensive investigation of the other key parameters governing the thermoelectric figure of merit $ZT$. The calculated electrical conductivity is modest, remaining below 10$^5$ S/m at room temperature for all doping levels, which limits the achievable power factor. Furthermore, the thermal conductivity is found to be phonon-driven, with a high room-temperature lattice thermal conductivity of 40.5 W/mK. Consequently, the calculated $ZT$ remains below 0.1, suggesting that hole-doped FeS$_2$ may not be a viable candidate for thermoelectric applications despite its promising thermopower.
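As a quick sanity check of the kind of numbers involved, the snippet below evaluates $ZT = S^2 \sigma T / (\kappa_{\rm lat} + \kappa_{\rm el})$; the thermopower and lattice thermal conductivity are taken from the values quoted above, while the electrical conductivity at this doping level and the Wiedemann-Franz estimate of the electronic thermal conductivity are our own illustrative assumptions.

```python
# Back-of-the-envelope ZT estimate with partly assumed inputs.
S = 608e-6             # Seebeck coefficient at 1e19 cm^-3, V/K (from the abstract)
sigma = 1.0e4          # electrical conductivity, S/m -- assumed value at this doping
T = 300.0              # temperature, K
kappa_lattice = 40.5   # lattice thermal conductivity, W/(m K) (from the abstract)
kappa_electronic = 2.44e-8 * sigma * T   # Wiedemann-Franz estimate, W/(m K)

power_factor = S**2 * sigma                             # W/(m K^2)
ZT = power_factor * T / (kappa_lattice + kappa_electronic)
print(f"PF = {power_factor * 1e3:.2f} mW/(m K^2), ZT = {ZT:.3f}")  # ZT ~ 0.03 here
```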
arXiv
The advantage of quantum protocols lies in the inherent properties of the shared quantum states. These states are sometimes provided by sources that are not trusted and therefore need to be verified. Finding secure and efficient quantum state verification protocols remains a major challenge, and recent works illustrate trade-offs between efficiency and security for different classes of states in restricted settings. However, whether a universal trade-off exists for all quantum states and all verification strategies has remained unknown. In this work, we instantiate the categorical composable cryptography framework to show a fundamental limit on quantum state verification for all cut-and-choose approaches used to verify arbitrary quantum states. Our findings show that the prevailing cut-and-choose techniques cannot yield quantum state verification protocols that are both efficient and secure.
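As a purely classical toy analogue of the cut-and-choose idea (not the categorical framework used in the paper), the simulation below shows that an adversary who plants a single bad copy among $N$ escapes detection with probability $1/N$, so tightening security by a constant factor requires proportionally more copies.

```python
# Toy cut-and-choose: the verifier tests N-1 of N copies and keeps one untested.
import random

def cheat_succeeds(n_copies: int) -> bool:
    bad = random.randrange(n_copies)    # adversary hides one bad copy
    kept = random.randrange(n_copies)   # verifier keeps one copy untested
    return bad == kept                  # cheating works only if they coincide

for n in (2, 4, 8, 16):
    trials = 100_000
    rate = sum(cheat_succeeds(n) for _ in range(trials)) / trials
    print(f"N = {n:2d}: empirical cheating rate ~ {rate:.3f} (expected {1 / n:.3f})")
```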
arXiv
Let $n$ be a positive integer. The Diophantine equation $n(x_1+x_2+\dots +x_n)=x_1x_2\dots x_n$, $1 \le x_1\le x_2\le \dots \le x_n$ is called Erd\H{o}s's last equation. We prove that $x_n\to \infty $ as $n\to \infty$ and determine all tuples $(n,x_1,\dots ,x_n)$ with $x_n\le 10$.
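For illustration (this is not the proof technique of the paper), a short brute-force search recovers the small solutions of Erdős's last equation for $n = 2, 3$ within a chosen bound on $x_n$.

```python
# Enumerate solutions of n*(x_1+...+x_n) = x_1*...*x_n with 1 <= x_1 <= ... <= x_n,
# bounded by a cap on x_n.
from itertools import combinations_with_replacement
from math import prod

def solutions(n, x_max=20):
    for xs in combinations_with_replacement(range(1, x_max + 1), n):
        if n * sum(xs) == prod(xs):
            yield xs

for n in (2, 3):
    print(n, list(solutions(n)))
# n = 2: (3, 6), (4, 4)
# n = 3: (1, 4, 15), (1, 5, 9), (1, 6, 7), (2, 2, 12), (2, 3, 5), (3, 3, 3)
```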
arXiv
Surveys are an indispensable source of data for applied economic research; however, their reliance on self-reported information can introduce bias, especially if core variables such as personal income are misreported. To assess the extent and impact of this misreporting bias, we compare self-reported wages from the German Socio-Economic Panel (SOEP) with administrative wages from social security records (IEB) for the same individuals. Using a novel and unique data linkage (SOEP-ADIAB), we identify a modest but economically significant reporting bias, with SOEP respondents underreporting their administrative wages by about 7.3%. This misreporting varies systematically with individual, household, and especially job and firm characteristics. When replicating common empirical analyses in which wages serve as either the dependent or the independent variable, we find that misreporting is consequential for some, but not all, estimated relationships. It turns out to be inconsequential for estimating the returns to education, but relevant for analyzing the gender wage gap. In addition, we find that misreporting bias can significantly affect results when the wage is used as the independent variable. Specifically, estimates of the wage-satisfaction relationship are substantially overestimated when based on survey data, although this bias is mitigated when focusing on interpersonal changes. Our findings underscore that survey-based measures of individual wages can significantly bias commonly estimated empirical relationships. They also demonstrate the enormous research potential of linked administrative-survey data.
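To illustrate the mechanism (with synthetic data, not the SOEP/IEB linkage), the simulation below shows how reporting error that is correlated with an outcome-relevant characteristic can inflate the slope of an outcome-on-wage regression, whereas purely classical error would attenuate it; all coefficients are invented for the example.

```python
# Synthetic example: systematic (non-classical) wage misreporting biases the
# estimated wage-satisfaction slope upward relative to administrative wages.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
trait = rng.normal(size=n)                          # unobserved job/firm trait
log_wage_admin = 3.0 + 0.5 * trait + rng.normal(scale=0.3, size=n)
satisfaction = 1.0 + 0.8 * log_wage_admin + rng.normal(scale=0.5, size=n)

# Under-reporting that grows with the same trait (plus the ~7.3% average shift).
log_wage_survey = (log_wage_admin - 0.073 - 0.1 * trait
                   + rng.normal(scale=0.05, size=n))

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

print("slope with admin wages :", round(slope(log_wage_admin, satisfaction), 3))
print("slope with survey wages:", round(slope(log_wage_survey, satisfaction), 3))
```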
arXiv