This paper presents the GRBSN webtool, a public-facing application which
hosts the most complete list of GRB-SN associations to date. In contrast to
other repositories of supernova or gamma-ray burst data, this tool brings
together all of the information required to study a GRB-SN association. GRBSN
allows users to view and interact with plots of the data; search and filter the
whole database; and download all multi-wavelength data related to a GRB-SN
association, including radio, X-ray, optical/NIR photometric and spectroscopic
data. The tool is fully open source and is hosted on a public GitHub
repository, meaning users can upload their own data, flag missing data and
suggest improvements. As the number of confirmed GRB-SN associations increases,
the webtool will provide a robust framework in which to catalogue these
associations and their associated data. The web application is freely available
and publicly accessible at https://grbsn.watchertelescope.ie. | arXiv |
This note discusses some of the aspects of a model for the covariance of
equity returns based on a simple "isotropic" structure in which all pairwise
correlations are taken to be the same value. The effect of the structure on
feasible values for the common correlation of returns and on the "effective
degrees of freedom" within the equity cross-section are discussed, as well as
the impact of this constraint on the asymptotic Normality of portfolio returns.
An eigendecomposition of the covariance matrix is presented and used to
partition variance into that from a common "market" factor and
"non-diversifiable" idiosyncratic risk. A empirical analysis of the recent
history of the returns of S&P 500 Index members is presented and compared to
the expectations from both this model and linear factor models. This analysis
supports the isotropic covariance model and does not seem to provide evidence
in support of linear factor models. Analysis of portfolio selection under
isotropic correlation is presented using mean-variance optimization for both
heteroskedastic and homoskedastic cases. Portfolio selection for negative
exponential utility maximizers is also discussed for the general case of
distributions of returns with elliptical symmetry. The fact that idiosyncratic
risk may not be removed by diversification in a model that the data supports
undermines the basic premises of structures such as the C.A.P.M. and A.P.T. If
the cross-section of equity returns is more accurately described by this
structure then an inevitable consequence is that picking stocks is not a
"pointless" activity, as the returns to residual risk would be non-zero. | arXiv |
The minimum separation between reconnecting vortices in fluids and
superfluids obeys a universal scaling law with respect to time. The
pre-reconnection and the post-reconnection prefactors of this scaling law are
different, a property related to irreversibility and to energy transfer and
dissipation mechanisms. In the present work, we determine the temperature
dependence of these prefactors in superfluid helium from experiments and a
numerical model which fully accounts for the coupled dynamics of the superfluid
vortex lines and the thermal normal fluid component. At all temperatures, we
observe a pre- and post-reconnection asymmetry similar to that observed in
other superfluids and in classical viscous fluids, indicating that vortex
reconnections display a universal behaviour independent of the small-scale
regularising dynamics. We also numerically show that each vortex reconnection
event represents a sudden injection of energy in the normal fluid. Finally we
argue that in a turbulent flow, these punctuated energy injections can sustain
the normal fluid in a perturbed state, provided that the density of superfluid
vortices is large enough. | arXiv |
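For reference, the universal scaling law referred to above is usually written (generic notation, not necessarily the symbols used in this paper) as

$$\delta(t) \simeq A^{\pm}\,\big[\kappa\,|t - t_{\rm rec}|\big]^{1/2},$$

where $\delta$ is the minimum separation between the reconnecting vortices, $t_{\rm rec}$ the reconnection time, $\kappa$ the quantum of circulation, and $A^{-}$, $A^{+}$ the pre- and post-reconnection prefactors; the asymmetry (typically $A^{+} > A^{-}$) is the signature of irreversibility discussed in the abstract.
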
We present a detailed elaboration and the first generally applicable
linearization routines of the \textit{Parameter Space Concept} (PSC) for
determining 1-dimensionally projected structures of $m$ independent scatterers.
This crystal structure determination approach does not rely on Fourier inversion but rather
considers all structure parameter combinations consistent with available
diffraction data in a parameter space of dimension $m$. The method utilizes $m$
structure factor amplitudes or intensities represented by piece-wise analytic
hyper-surfaces to define the acceptable parameter regions. The coordinates of
the point scatterers are then obtained through the intersection of multiple
isosurfaces. This approach allows for the detection of
all possible solutions for the given structure factor amplitudes in a single
derivation. Taking the resonant contrast into account, the spatial resolution
achieved by the presented method may exceed that of traditional Fourier
inversion, and the algorithms can be significantly optimized by exploiting the
symmetry properties of the isosurfaces. The applied 1-dimensional projection
demonstrates the efficiency of the PSC linearization approach based on fewer
reflections than Fourier sums. Monte-Carlo simulations, using projections of
various random two- and three-atom example structures, are
presented to illustrate the universal applicability of the proposed method.
Furthermore, ongoing efforts aim to enhance the efficiency of data handling and
to overcome current constraints, promising further advancements in the
capabilities and accuracy of the PSC framework. | arXiv |
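A brute-force toy illustration of the parameter-space idea (our sketch, not the authors' linearization routines): for a 1D-projected two-atom structure with unit scatterers, each measured amplitude $|F(h)|$ defines an isosurface in the $(x_1, x_2)$ parameter space, and the acceptable region is the intersection of bands around several such isosurfaces.

    import numpy as np

    H = np.array([1, 2, 3])                     # reflections used
    x_true = np.array([0.15, 0.62])             # "unknown" projected two-atom structure
    F_obs = np.abs(np.exp(2j * np.pi * np.outer(H, x_true)).sum(axis=1))   # measured |F(h)|

    # Scan the 2D parameter space (x1, x2) and keep points consistent with all amplitudes.
    grid = np.linspace(0.0, 1.0, 400, endpoint=False)
    X1, X2 = np.meshgrid(grid, grid, indexing="ij")
    accept = np.ones_like(X1, dtype=bool)
    for h, f in zip(H, F_obs):
        F = np.abs(np.exp(2j * np.pi * h * X1) + np.exp(2j * np.pi * h * X2))
        accept &= np.abs(F - f) < 0.05          # band around each isosurface

    # For unit scatterers the amplitudes fix only the difference x2 - x1 (up to sign), so the
    # intersection is the full family of solutions related by origin shift and inversion.
    print(np.unique(np.round((X2[accept] - X1[accept]) % 1.0, 2)))
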
The effect of replacing individual contributions to an empirical energy
function is assessed for halogenated benzenes (X-Bz, X = H, F, Cl, Br) and
chlorinated phenols (Cl-PhOH). Introducing electrostatic models based on
distributed charges (MDCM) instead of usual atom-centered point charges yields
overestimated hydration free energies unless the van der Waals parameters are
reparametrized. Scaling van der Waals ranges by 10 \% to 20 \% for three
Cl-PhOH and most X-Bz yields results within experimental error bars, which is
encouraging, whereas for benzene (H-Bz) point charge-based models are
sufficient. Replacing the bonded terms by a neural network-trained energy
function with either fluctuating charges or MDCM electrostatics also yields
qualitatively correct hydration free energies which still require adaptation of
the van der Waals parameters. The infrared spectroscopy of Cl-PhOH is rather
well predicted by all models although the ML-based energy function performs
somewhat better in the region of the framework modes. It is concluded that
refinements of empirical energy functions for targeted applications are a
meaningful way towards more quantitative simulations. | arXiv |
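For orientation, the scaling of "van der Waals ranges" above can be pictured with the Lennard-Jones form commonly used for the van der Waals term in empirical force fields (a generic convention, not necessarily the exact functional form used in this work): scaling the range parameter $\sigma_{ij}$ by $\lambda \approx 1.1$ to $1.2$ while keeping the well depth $\varepsilon_{ij}$ gives

$$V_{\rm vdW}(r) = 4\varepsilon_{ij}\left[\left(\frac{\lambda\sigma_{ij}}{r}\right)^{12} - \left(\frac{\lambda\sigma_{ij}}{r}\right)^{6}\right],$$

i.e. the repulsive wall and the position of the energy minimum are pushed outwards by 10 to 20 % while the interaction strength is left unchanged.
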
Since the initial discovery of two-dimensional van der Waals (vdW) materials,
significant effort has been made to incorporate three key properties $-$
magnetism, band structure topology, and strong electron correlations $-$ to
leverage emergent quantum phenomena and expand their potential applications.
However, the discovery of a single vdW material that intrinsically hosts all
three ingredients has remained an outstanding challenge. Here we report the
discovery of a Kondo-interacting topological antiferromagnet in the vdW 5$f$
electron system UOTe. It has a high antiferromagnetic (AFM) transition
temperature of 150 K, with a unique AFM configuration that breaks the combined
parity and time reversal ($PT$) symmetry in an even number of layers while
maintaining zero net magnetic moment. Our angle-resolved photoemission
spectroscopy (ARPES) measurements reveal Dirac bands near the Fermi level,
which combined with our theoretical calculations demonstrate UOTe as an AFM
Dirac semimetal. Within the AFM order, we observed the presence of the Kondo
interaction, as evidenced by the emergence of a 5$f$ flat band near the Fermi
level below 100 K and hybridization between the Kondo band and the Dirac band.
Our density functional theory calculations in its bilayer form predict UOTe as
a rare example of a fully-compensated AFM Chern insulator. | arXiv |
This study presents an \textit{ab initio} investigation of the XANES spectra
at the aluminum K edge for three compounds: Al$_2$O$_3$, AlF$_3$ and AlCl$_3$,
where the Al atoms share the same oxidation state~(III) and are coordinated in
an octahedral symmetry. The XANES spectra calculated within the
independent-particle approximation reveal significant differences, including
shifts in the spectrum onset, variations in the spectral shapes, and the
presence of a pre-peak in the case of AlCl$_3$, all in correspondence with the
behavior of the PDOS of the absorbing atom in the different materials. The
origin of the features stems from the specific band structure of each compound.
When electron--hole interactions are taken into account through the solution of
the Bethe-Salpeter equation, a series of dark and bright excitons with large
binding energies and Frenkel character is obtained. The strong excitonic
effects lead to the suppression of the pre-peak in AlCl$_3$ and further
accentuate the differences among the three Al K-edge spectra. | arXiv |
This work focuses on the gradient flow dynamics of a neural network model
that uses correlation loss to approximate a multi-index function on
high-dimensional standard Gaussian data. Specifically, the multi-index function
we consider is a sum of neurons $f^*(x) \!=\! \sum_{j=1}^k \! \sigma^*(v_j^T
x)$ where $v_1, \dots, v_k$ are unit vectors, and $\sigma^*$ lacks the first
and second Hermite polynomials in its Hermite expansion. It is known that, for
the single-index case ($k\!=\!1$), overcoming the search phase requires
polynomial time complexity. We first generalize this result to multi-index
functions characterized by vectors in arbitrary directions. After the search
phase, it is not clear whether the network neurons converge to the index
vectors, or get stuck at a sub-optimal solution. When the index vectors are
orthogonal, we give a complete characterization of the fixed points and prove
that neurons converge to the nearest index vectors. Therefore, using $n \!
\asymp \! k \log k$ neurons ensures finding the full set of index vectors with
gradient flow with high probability over random initialization. When $ v_i^T
v_j \!=\! \beta \! \geq \! 0$ for all $i \neq j$, we prove the existence of a
sharp threshold $\beta_c \!=\! c/(c+k)$ at which the fixed point that computes
the average of the index vectors transitions from a saddle point to a minimum.
Numerical simulations show that using a correlation loss and a mild
overparameterization suffices to learn all of the index vectors when they are
nearly orthogonal, however, the correlation loss fails when the dot product
between the index vectors exceeds a certain threshold. | arXiv |
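A toy numerical sketch of the setting above (our own choices of $\sigma^*$, dimensions and step size, not the paper's experiments): neurons constrained to the unit sphere are trained by projected gradient descent on the empirical correlation loss $-\frac{1}{N}\sum_x f^*(x)\sum_i \sigma^*(w_i^T x)$, with $\sigma^* = \mathrm{He}_3$ so that the first two Hermite coefficients vanish, and orthogonal index vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n_neurons, N, lr, steps = 20, 3, 12, 20000, 0.05, 500

    V = np.eye(d)[:, :k]                    # orthogonal unit index vectors v_1..v_k (columns)
    he3 = lambda u: u**3 - 3*u              # sigma* = He_3: no 1st/2nd Hermite component
    he3p = lambda u: 3*u**2 - 3

    X = rng.standard_normal((N, d))         # standard Gaussian inputs
    y = he3(X @ V).sum(axis=1)              # targets f*(x)

    W = rng.standard_normal((n_neurons, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    for _ in range(steps):
        pre = X @ W.T                                       # (N, n_neurons) pre-activations
        grad = -(he3p(pre) * y[:, None]).T @ X / N          # gradient of the correlation loss
        grad -= (grad * W).sum(axis=1, keepdims=True) * W   # project onto the sphere's tangent
        W -= lr * grad
        W /= np.linalg.norm(W, axis=1, keepdims=True)       # keep neurons on the unit sphere

    print(np.round(np.abs(W @ V).max(axis=1), 2))   # each neuron's overlap with its nearest index vector
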
Local differential privacy (LDP) is increasingly employed in
privacy-preserving machine learning to protect user data before sharing it with
an untrusted aggregator. Most LDP methods assume that users possess only a
single data record, which is a significant limitation since users often gather
extensive datasets (e.g., images, text, time-series data) and frequently have
access to public datasets. To address this limitation, we propose a locally
private sampling framework that leverages both the private and public datasets
of each user. Specifically, we assume each user has two distributions: $p$ and
$q$ that represent their private dataset and the public dataset, respectively.
The objective is to design a mechanism that generates a private sample
approximating $p$ while simultaneously preserving $q$. We frame this objective
as a minimax optimization problem using $f$-divergence as the utility measure.
We fully characterize the minimax optimal mechanisms for general
$f$-divergences provided that $p$ and $q$ are discrete distributions.
Remarkably, we demonstrate that this optimal mechanism is universal across all
$f$-divergences. Experiments validate the effectiveness of our minimax optimal
sampler compared to the state-of-the-art locally private sampler. | arXiv |
We consider a stylized formal model of public transportation, where a set of
agents need to travel along a given road, and there is a bus that runs the
length of this road. Each agent has a left terminal and a right terminal
between which they wish to travel; they can walk all the way, or walk to/from
the nearest stop and use the bus for the rest of their journey. The bus can
make a fixed number of stops, and the planner needs to select locations for
these stops. We study notions of efficiency and fairness for this setting.
First, we give a polynomial-time algorithm for computing a solution that
minimizes the total travel time; our approach can capture further extensions of
the base model, such as more general cost functions or existing infrastructure.
Second, we develop a polynomial-time algorithm that outputs solutions with
provable fairness guarantees (such as a variant of the justified representation
axiom or $2$-approximate core) as long as the agents' costs only depend on the
distance they need to walk. Our simulations indicate that our algorithm almost
always outputs fair solutions, even for parameter regimes that do not admit
theoretical guarantees. | arXiv |
Purpose: Radiotherapy commonly relies on CT, but there is growing interest in
using hybrid PET/MR. Therefore, dedicated hardware setups have been proposed
for PET/MR systems which enable imaging in radiotherapy treatment position.
These radiotherapy setups typically include a flat tabletop, positioning tools
and coil holders specifically tailored to the devices. However, reduced MR
image quality has been reported. Especially in neck and upper thorax,
conventional radiotherapy setups are not optimal as they consist of head-only
coil configurations. The purpose was to develop a novel PET/MR radiotherapy
setup for improved MR image quality in head, neck and thorax and to test
compliance in a multicenter setting. Methods: A novel radiotherapy setup was
designed, prototyped and tested on a 3T PET/MR system in three different
centers. Imaging experiments were conducted in phantoms and healthy volunteers
to compare against a standard radiotherapy setup. Imaging protocols included
T1-, T2-, and diffusion-weighted MR (DWI). Finally, compliance with American
College of Radiology (ACR) and the Quantitative Imaging Biomarker Alliance
(QIBA) acceptance criteria was evaluated. Results: SNR in neck/thorax was
increased by a factor of 1.6 in phantom (p = 0.031) and volunteer images alike.
The new setup passed ACR detectability and QIBA SNR tests, which the standard
setup failed. The new setup passed all but two ACR test criteria in the three
centers, presented repeatability and reproducibility variations of 4.9% and
7.8% and met all QIBA criteria for DWI except ADC precision. Conclusion: The
proposed setup yielded significantly higher SNR, better detectability, and
complied with nearly all ACR and QIBA image quality criteria. It may thus
advance the usage of PET/MR for radiotherapy purposes. | arXiv |
An oblivious subspace embedding is a random $m\times n$ matrix $\Pi$ such
that, for any $d$-dimensional subspace, with high probability $\Pi$ preserves
the norms of all vectors in that subspace within a $1\pm\epsilon$ factor. In
this work, we give an oblivious subspace embedding with the optimal dimension
$m=\Theta(d/\epsilon^2)$ that has a near-optimal sparsity of $\tilde
O(1/\epsilon)$ non-zero entries per column of $\Pi$. This is the first result
to nearly match the conjecture of Nelson and Nguyen [FOCS 2013] in terms of the
best sparsity attainable by an optimal oblivious subspace embedding, improving
on a prior bound of $\tilde O(1/\epsilon^6)$ non-zeros per column [Chenakkod et
al., STOC 2024]. We further extend our approach to the non-oblivious setting,
proposing a new family of Leverage Score Sparsified embeddings with Independent
Columns, which yield faster runtimes for matrix approximation and regression
tasks.
In our analysis, we develop a new method which uses a decoupling argument
together with the cumulant method for bounding the edge universality error of
isotropic random matrices. To achieve near-optimal sparsity, we combine this
general-purpose approach with new trace inequalities that leverage the
specific structure of our subspace embedding construction. | arXiv |
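A small empirical sketch of the object defined above (a generic sparse construction for illustration, not the embedding constructed in the paper): each column of $\Pi$ receives $s$ entries of value $\pm 1/\sqrt{s}$ in distinct random rows, and the distortion over a fixed $d$-dimensional subspace can be read off the extreme singular values of $\Pi U$ for an orthonormal basis $U$ of the subspace.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, eps = 5000, 20, 0.25
    m = int(d / eps**2)                  # embedding dimension ~ d / eps^2
    s = 8                                # non-zeros per column

    # Sparse embedding: each column gets s entries of +-1/sqrt(s) in distinct random rows.
    Pi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

    U, _ = np.linalg.qr(rng.standard_normal((n, d)))     # basis of a fixed d-dim subspace

    # If all singular values of Pi @ U lie roughly in [1-eps, 1+eps], then ||Pi x|| = (1 +- eps)||x||
    # for every x in the subspace.
    sv = np.linalg.svd(Pi @ U, compute_uv=False)
    print(sv.min(), sv.max())
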
In this paper, we establish the existence of a positive density collection of
$d\in\mathbb{N}$ such that the class numbers of $\mathbb{Q}(\sqrt{d}), \
\mathbb{Q}(\sqrt{d+1}),\dots,\mathbb{Q}(\sqrt{d+n})$ are not divisible by $3^k$
for $n=3^{k+1}-5$ for any $k\in\mathbb{N}$. This result constitutes the
indivisibility counterpart of Iizuka's conjecture. For the same choice of $n$,
we prove the existence of a positive density collection of $d$, in the set of
negative integers, such that the class numbers of $\mathbb{Q}(\sqrt{d}), \
\mathbb{Q}(\sqrt{d+1}),\dots,\mathbb{Q}(\sqrt{d+n})$ are not divisible by
$3^{k+1}$. Further, we write the set of all square-free natural numbers as an
increasing sequence $(d_n)$ and prove the existence of a positive density
collection of $i$ in the set of natural numbers such that the class numbers of
the number fields $\mathbb{Q}(\sqrt{d_i}), \ \mathbb{Q}(\sqrt{d_{i+1}}),\
\mathbb{Q}(\sqrt{d_{i+2}}),$ $ \mathbb{Q}(\sqrt{d_{i+3}}),\dots,
\mathbb{Q}(\sqrt{d_{i+n}})$ are not divisible by $3^k$ for $n=3^{k+1}-5$. For
higher degree, we show that a certain limiting proportion of imaginary
biquadratic fields whose class number is not divisible by $3$, among all
imaginary biquadratic fields, is positive. | arXiv |
We study the problem of tolerant testing of stabilizer states. In particular,
we give the first such algorithm that accepts mixed state inputs. Formally,
given a mixed state $\rho$ that either has fidelity at least $\varepsilon_1$
with some stabilizer pure state or fidelity at most $\varepsilon_2$ with all
such states, where $\varepsilon_2 \leq \varepsilon_1^{O(1)}$, our algorithm
distinguishes the two cases with sample complexity
$\text{poly}(1/\varepsilon_1)$ and time complexity $O(n \cdot
\text{poly}(1/\varepsilon_1))$. | arXiv |
We apply the singular sequence method to investigate the finiteness problem
for stationary configurations of the planar five-vortex problem. The initial
step of the singular sequence method involves identifying all two-colored
diagrams. These diagrams represent potential scenarios where finiteness may
fail. We determine all such diagrams for the planar five-vortex problem. | arXiv |
Within the context of the ALICE ITS3 collaboration, a set of MAPS small-scale
test structures were developed using the 65 nm TPSCo CMOS imaging process with
the upgrade of the ALICE inner tracking system as its primary focus. One such
sensor, the Circuit Exploratoire 65 nm (CE-65), and its evolution the CE-65v2,
were developed to explore charge collection properties for varying
configurations including collection layer process (standard, blanket, modified
with gap), pixel pitch (15, 18, \SI{22.5}{\micro\meter}), and pixel geometry
(square vs hexagonal/staggered). In this work the characterisation of the
CE-65v2 chip, based on $^{55}$Fe lab measurements and test beams at CERN SPS,
is presented. Matrix gain uniformity up to the $\mathcal{O}$(5\%) level was
demonstrated for all considered chip configurations. The CE-65v2 chip achieves
a spatial resolution of under \SI{2}{\micro\meter} during beam tests. Process
modifications allowing for faster charge collection and less charge sharing
result in decreased spatial resolution, but a considerably wider range of
operation, with both the \SI{15}{\micro\meter} and \SI{22.5}{\micro\meter}
chips achieving over 99\% efficiency up to a $\sim$180 e$^{-}$ seed threshold.
The results serve to validate the 65 nm TPSCo CMOS process, as well as to
motivate design choices in future particle detection experiments. | arXiv |
We investigate the formation of bound states of non-relativistic dark matter
particles subject to long-range interactions through radiative capture. The
initial scattering and final bound states are described by Coulomb potentials
with different strengths, as relevant for non-abelian gauge interactions or
theories featuring charged scalars. For bound states with generic quantum
numbers $n$ and $\ell$, we provide closed-form expressions for the bound-state
formation (BSF) cross sections of monopole, dipole and quadrupole transitions,
and of arbitrary multipole order when $\ell=n-1$. This allows us to investigate
in detail a strong enhancement of BSF that occurs for initial states in a
repulsive potential. For $\ell=n-1\gg 1$, we show that the BSF cross section
for each single bound state violates the perturbative unitarity bound in the
vicinity of a certain critical initial velocity, and provide an interpretation
in terms of a smooth matching of classical trajectories. When summing the BSF
cross section over all possible bound states in the final state, this leads to
a unitarity violation below a certain velocity, but within the validity range
of the weakly coupled non-relativistic description. We identify an effectively
strong interaction as the origin of this unitarity violation, which is caused
by an "anomalously" large overlap of scattering and bound-state wave functions
in Coulomb potentials of different strength. | arXiv |
Aligning Large Language Models (LLMs) traditionally relies on costly training
and human preference annotations. Self-alignment seeks to reduce these expenses
by enabling models to align themselves. To further lower costs and achieve
alignment without any expensive tuning or annotations, we introduce a new
tuning-free approach for self-alignment, Dynamic Rewarding with Prompt
Optimization (DRPO). Our approach leverages a search-based optimization
framework that allows LLMs to iteratively self-improve and craft the optimal
alignment instructions, all without additional training or human intervention.
The core of DRPO is a dynamic rewarding mechanism, which identifies and
rectifies model-specific alignment weaknesses, allowing LLMs to adapt
efficiently to diverse alignment challenges. Empirical evaluations on eight
recent LLMs, both open- and closed-source, demonstrate that DRPO significantly
enhances alignment performance, with base models outperforming their
SFT/RLHF-tuned counterparts. Moreover, the prompts automatically optimized by
DRPO surpass those curated by human experts, further validating the
effectiveness of our approach. Our findings highlight the great potential of
current LLMs to achieve adaptive self-alignment through inference-time
optimization, complementing tuning-based alignment methods. | arXiv |
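A schematic sketch of a tuning-free, search-based self-alignment loop of the kind described above (our illustration only: `llm` is a placeholder callable mapping a prompt string to a response string, the helper names are hypothetical, and this is not the DRPO implementation):

    def dynamic_reward(llm, prompt, queries):
        # Judge each answer on whatever alignment aspect the judge currently finds weakest.
        total = 0.0
        for q in queries:
            answer = llm(f"{prompt}\n\nUser: {q}\nAssistant:")
            weakness = llm(f"Name the single weakest alignment aspect of this answer:\n{answer}")
            # Assumes the judge call replies with a bare number between 0 and 10.
            total += float(llm(f"Score 0-10 how well the answer handles '{weakness}':\n{answer}"))
        return total / len(queries)

    def search_alignment_prompt(llm, queries, iters=10):
        best = "You are a helpful, honest, harmless assistant."      # seed system prompt
        best_score = dynamic_reward(llm, best, queries)
        for _ in range(iters):
            candidate = llm(f"Improve this system prompt to fix its weakest alignment aspect:\n{best}")
            score = dynamic_reward(llm, candidate, queries)
            if score > best_score:          # hill-climb in prompt space; no weight updates
                best, best_score = candidate, score
        return best
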
In this article, we apply the techniques developed in our previous article
``Local generation of tilings'', in which we introduced two definitions
capturing the intuitive idea that some subshifts admit a procedure that can
generate any tiling while working in a local way. We classify all the Wang
tilesets with two colors in which each tile has an even number of each color. | arXiv |
We provide a new existence result for abstract nonlinear operator systems in
normed spaces, by means of topological methods. The solution is located within
the product of annular regions and conical shells. The theoretical result
possesses a wide range of applicability, which, for concreteness, we illustrate
in the context of systems of nonlinear Poisson equations subject to homogeneous
Dirichlet boundary conditions. For the latter problem we obtain existence and
localization of solutions having all components nontrivial. This is also
illustrated with an explicit example in which we also furnish a numerically
approximated solution, consistent with the theoretical results. | arXiv |
The rovibrational level populations, and subsequent emission in various
astrophysical environments, are driven by inelastic collision processes. The
available rovibrational rate coefficients for water have been calculated using
a number of approximations. We present a numerically exact calculation for the
rovibrational quenching for all water vibrational modes due to collisions with
atomic hydrogen. The scattering theory implements a quantum close-coupling (CC)
method on a high level ab initio six-dimensional (6D) potential energy surface
(PES). Total rovibrational quenching cross sections for excited bending levels
were compared with earlier results on a 4D PES with the rigid-bender
close-coupling (RBCC) approximation. General agreement between 6D-CC and
4D-RBCC calculations are found, but differences are evident including the
energy and amplitude of low-energy orbiting resonances. Quenching cross
sections from the symmetric and asymmetric stretch modes are provided for the
first time. The current 6D-CC calculation provides accurate inelastic data
needed for astrophysical modeling. | arXiv |
In this article, we investigate the possibility of generating all the
configurations of a subshift in a local way. We propose two definitions of
local generation, explore their properties and develop techniques to determine
whether a subshift satisfies these definitions. We illustrate the results with
several examples. | arXiv |
Billiard models of single particles moving freely in two-dimensional regions
enclosed by hard walls have long provided ideal toy models for the
investigation of dynamical systems and chaos. Recently, billiards with
(semi-)permeable walls and internal holes have been used to study open systems.
Here we introduce a billiard model containing an internal region with partial
absorption. The absorption does not change the trajectories, but instead
reduces an intensity variable associated with each trajectory. The value of the
intensity can be tracked as a function of the initial configuration and the
number of reflections from the wall and depicted in intensity landscapes over
the Poincar\'e phase space. This is similar in spirit to escape time diagrams
that are often considered in dynamical systems with holes. We analyse the
resulting intensity landscapes for three different geometries: a circular, an
elliptic, and an oval billiard, all with a centrally placed circular
absorbing region. The intensity landscapes feature increasingly more complex
structures, organised around the sets of points that are a particular number of
iterations away from the absorbing region, and enriched by effects arising from
multiple absorption events for a given trajectory. | arXiv |
Recently, the temporal evolution of the angles characterizing the spatial
configuration of the jet in the supermassive black hole M87$^\ast$ was measured,
exhibiting a precessional pattern around the hole's spin axis. This is attributed
to the frame dragging induced by the fact that the hole's external spacetime is
described by the Kerr metric. Here, it is shown that the Lense-Thirring orbital
precessions of a test particle moving about a rotating massive object,
calculated perturbatively to the first post-Newtonian order, are able to fully
reproduce all the measured features of the jet axis of M87$^\ast$. In
particular, by assuming that the latter is aligned with the angular momentum of
the accretion disk, modelled as an effective particle moving along a circular
orbit, the condition that the absolute value of the predicted Lense-Thirring
precessional frequency of the disk agrees with the measured value of $0.56\pm
0.02$ radians per year of the jet's one is satisfied for a range of physically
meaningful values of the hole's spin parameter, close to unity, and of the
effective disk radius, of the order of just over a dozen gravitational radii.
Relying upon such assumptions and results, it is possible to predict that the
angle between the hole's spin axis and the jet's one stays constant over the
years amounting to $1.16^\circ$, in agreement with its measured value of
$1.25^\circ\pm 0.18^\circ$. Furthermore, also the temporal pattern and the
amplitudes of the time series of the jet's angles are reproduced by the
aforementioned Lense-Thirring precessional model. | arXiv |
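For orientation (a back-of-the-envelope consistency check, not the paper's full calculation), the first post-Newtonian Lense-Thirring precession rate of a circular orbit of radius $r$ about a black hole of mass $M$ and spin parameter $\chi$ is

$$\Omega_{\rm LT} = \frac{2GJ}{c^2 r^3} = \frac{2\chi\,c}{(r/r_g)^3\, r_g}, \qquad J = \chi\,\frac{GM^2}{c}, \quad r_g = \frac{GM}{c^2};$$

with the M87$^\ast$ mass $M \approx 6.5\times 10^9\,M_\odot$ ($r_g \approx 9.6\times 10^9$ km), taking $\chi$ close to unity and $r \approx 15\,r_g$ indeed gives $|\Omega_{\rm LT}| \approx 0.56$ rad yr$^{-1}$, consistent with the effective disk radius of "just over a dozen gravitational radii" quoted above.
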
Changing-look active galactic nuclei (CLAGNs) show the appearance and
disappearance of broad emission lines in their UV/optical spectra on timescales
of months to decades. Here, we investigate how CL transitions depend on several
AGN parameters such as accretion rate, obscuration properties and black hole
mass. We study a sample of 20 nearby optically-identified CLAGNs from the BAT
AGN Spectroscopic Survey (BASS), using quasi-simultaneous optical and X-ray
observations taken in the last $\sim 40$ years. We find that for all CLAGNs,
the transition is accompanied by a change in Eddington ratio. The CL
transitions are not associated with changes in the obscuration properties of
the AGN. CLAGNs are found to have a median Eddington ratio lower than the AGNs
in the BASS sample in which CL transitions were not detected. The median of the
transition Eddington ratio (Eddington ratio at which AGN changes its state) is
found to be $\sim 0.01$ for type 1 $\leftrightarrow$ 1.8/1.9/2 transition,
which is consistent with the hard $\leftrightarrow$ soft state transition in
black hole X-ray binaries. Most CL events are constrained to occur within 3-4
years, which is considerably shorter than the expected viscous timescale in AGN
accretion disks. The transitions of the optical CLAGNs studied here are likely
associated with state changes in the accretion flow, possibly driven by
disk instability. | arXiv |
We study the twisted Kitaev quantum double model within the framework of
Local Topological Order (LTO). We extend its definition to arbitrary 2D
lattices, enabling an explicit characterization of the ground state space
through invariant spaces of monomial representations. We reformulate the LTO
conditions to include general lattices and prove that the twisted model
satisfies all four LTO axioms on any 2D lattice. As a corollary, we show that
its ground state space is a quantum error-correcting code. | arXiv |
Byte-Pair Encoding (BPE) is a widely used method for subword tokenization,
with origins in grammar-based text compression. It is employed in a variety of
language processing tasks such as machine translation or large language model
(LLM) pretraining, to create a token dictionary of a prescribed size. Most
evaluations of BPE to date are empirical, and the reasons for its good
practical performance are not well understood.
In this paper we focus on the optimization problem underlying BPE: finding a
pair encoding that achieves optimal compression utility. We show that this
problem is APX-complete, indicating that it is unlikely to admit a
polynomial-time approximation scheme. This answers, in a stronger form, a
question recently raised by Zouhar et al.
On the positive side, we show that BPE approximates the compression utility
of the optimal pair encoding to a worst-case factor between $0.333$ and
$0.625$. Our results aim to explain the ongoing success of BPE and are, to our
knowledge, the first rigorous guarantees on its compression utility that hold
for all inputs. | arXiv |
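For concreteness, a minimal sketch of the greedy BPE construction analyzed above (a generic textbook formulation, not the paper's notation): repeatedly merge the most frequent adjacent pair of symbols, each merge adding one entry to the dictionary; the compression utility is the resulting reduction in sequence length.

    from collections import Counter

    def bpe(tokens, num_merges):
        # Greedy Byte-Pair Encoding: repeatedly merge the most frequent adjacent pair.
        tokens, merges = list(tokens), []
        for _ in range(num_merges):
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]
            merges.append((a, b))
            out, i = [], 0
            while i < len(tokens):                  # replace occurrences of (a, b) left to right
                if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                    out.append(a + b); i += 2
                else:
                    out.append(tokens[i]); i += 1
            tokens = out
        return tokens, merges

    text = "abracadabra abracadabra"
    tokens, merges = bpe(list(text), num_merges=4)
    print(merges, len(text) - len(tokens))   # merges learned and symbols saved (compression utility)
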
One of the most important and topical challenges of quantum circuits is their
scalability. Rapid Single Flux Quantum (RSFQ) technology is at the forefront of
replacing current standard CMOS-based control architectures for a number of
applications, including quantum computing and quantum sensor arrays. By
condensing the control and readout to SFQ-based on-chip devices that are
directly connected to the quantum systems, it is possible to minimise the total
system overhead, improving scalability and integration. In this work, we
present a novel RSFQ device that generates multi-tone digital signals, based on
complex pulse train sequences using a Circular Shift Register (CSR) and a comb
filter stage. We show that the frequency spectrum of the pulse trains is
dependent on a preloaded pattern on the CSR, as well as on the delay line of
the comb filter stage. By carefully selecting both the pattern and delay, the
desired tones can be isolated and amplified as required. Finally, we propose
architectures where this device can be implemented to control and readout
arrays of quantum devices, such as qubits and single photon detectors. | arXiv |
Advanced Air Mobility aircraft require energy efficient flight plans to be
economically viable. This paper defines minimum energy direct trajectories
between waypoints for Lift+Cruise electric Vertical Take-Off and Landing
(eVTOL) aircraft. Energy consumption is optimized over accelerated and cruise
flight profiles with consideration of mode transitions. Because eVTOL
operations start and end in hover for vertical take-off and landing, hover
waypoints are utilized. Energy consumption is modeled as a function of airspeed
for each flight mode, providing the basis to prove energy optimality for
multi-mode traversal. Wind magnitude and direction dictate feasibility of
straight-line traversal because Lift+Cruise aircraft point into the relative
wind direction while hovering but also have a maximum heading rate constraint.
Energy and power use for an experimentally validated QuadPlane small eVTOL
aircraft are characterized with respect to airspeed and acceleration in all
flight modes. Optimal QuadPlane traversals are presented. Constraints on
acceleration and wind are derived for straight-line QuadPlane traversal.
Results show an optimal QuadPlane $500m$ traversal between hover waypoints
saves $71\%$ energy compared to pure vertical flight traversal for a
representative case study with a direct $4m/s$ crosswind. Energy optimal eVTOL
direct trajectory definition with transitions to and from hover is novel to
this work. Future work should model three-dimensional flight and wind as well
as optimize maneuver primitives when required. | arXiv |
By observing binary black hole (BBH) mergers out to the edge of the Universe,
next-generation (XG) ground-based gravitational-wave (GW) detectors like Cosmic
Explorer and Einstein Telescope will map the BBH merger rate across all of
cosmic history. This merger rate traces the formation rate of their progenitor
stars convolved with a delay time distribution. Given theoretically-motivated
priors on the delay time distribution, we show how XG observations can measure
the BBH progenitor formation rate, probing the star formation rate (SFR) up to
$z > 15$. However, the progenitor formation rate does not directly give a
measurement of the SFR, but rather a combination of the SFR and its metallicity
distribution as a function of redshift. Fortunately, the metallicity-dependence
of BBH formation likely varies as a function of BBH mass and/or formation
channel. We find that if different BBH subpopulations with distinct metallicity
biases can be identified, comparing their rates as a function of redshift
yields a simultaneous measurement of the SFR and its metallicity distribution.
Given optimistic theoretical priors and one year of observation, this may
provide a $\sim10\%$ measurement of the SFR at its peak and a 0.2 dex (0.7 dex)
measurement of the median metallicity out to $z = 10$ ($z = 15$) at 90\%
credibility, although the uncertainties scale with theoretical uncertainties on
BBH delay times and formation efficiencies. | arXiv |
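Schematically, in standard population-synthesis notation (not specific to this paper), the measured merger rate is the progenitor formation rate convolved with the delay-time distribution $p(\tau)$, and the formation rate itself folds the SFR with a metallicity-dependent efficiency:

$$\mathcal{R}_{\rm merge}(t) = \int \mathcal{R}_{\rm form}(t-\tau)\, p(\tau)\, {\rm d}\tau, \qquad \mathcal{R}_{\rm form}(t) \propto \int \psi(Z, t)\,\epsilon(Z)\, {\rm d}Z,$$

where $\psi(Z,t)$ is the star formation rate density per unit metallicity and $\epsilon(Z)$ the BBH formation efficiency; this is why the merger rate alone constrains only the combination of the SFR and its metallicity distribution, and why subpopulations with distinct $\epsilon(Z)$ can break the degeneracy.
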
Quasi-static time series (QSTS) simulations have great potential for
evaluating the grid's ability to accommodate the large-scale integration of
distributed energy resources. However, as grids expand and operate closer to
their limits, iterative power flow solvers, central to QSTS simulations, become
computationally prohibitive and face increasing convergence issues. Neural
power flow solvers provide a promising alternative, speeding up power flow
computations by 3 to 4 orders of magnitude, though they are costly to train. In
this paper, we envision how recently introduced grid foundation models could
improve the economic viability of neural power flow solvers. Conceptually,
these models amortize training costs by serving as a foundation for a range of
grid operation and planning tasks beyond power flow solving, with only minimal
fine-tuning required. We call for collaboration between the AI and power grid
communities to develop and open-source these models, enabling all operators,
even those with limited resources, to benefit from AI without building
solutions from scratch. | arXiv |
The evolution of wireless communication systems will be fundamentally
impacted by an open radio access network (O-RAN), a new concept defining an
intelligent architecture with enhanced flexibility, openness, and the ability
to slice services more efficiently. For all its promises, and like any
technological advancement, O-RAN is not without risks that need to be carefully
assessed and properly addressed to accelerate its wide adoption in future
mobile networks. In this paper, we present an in-depth security analysis of the
O-RAN architecture, discussing the potential threats that may arise in the
different O-RAN architecture layers and their impact on the Confidentiality,
Integrity, and Availability (CIA) triad. We also promote the potential of zero
trust, Moving Target Defense (MTD), blockchain, and large language model (LLM)
technologies in fortifying O-RAN's security posture. Furthermore, we
numerically demonstrate the effectiveness of MTD in empowering robust deep
reinforcement learning methods for dynamic network slice admission control in
the O-RAN architecture. Moreover, we examine the effect of explainable AI (XAI)
based on LLMs in securing the system. | arXiv |
Near-eye display plays an important role in emerging spatial computing
systems, providing a distinctive visual effect of virtual-real fusion. However,
its application for all-day wear is greatly limited by the bulky structure,
energy expenditure, and continuous battery heating. Here, we propose a
lightweight holographic near-eye display system that takes advantage of solar
energy for self-charging. To achieve the collection of solar energy and
near-eye display without crosstalk, holographic optical elements (HOE) are used
to diffract sunlight and signal light into a common waveguide. Then, small-area
solar cells convert the collected solar energy and power the system. Compact
power supply components replace heavy batteries, contributing to the
lightweight design. The simple acquisition of solar energy provides the system
with sustainable self-charging capability. We believe that the lightweight
design and continuous energy input solution will significantly promote the
popularity of near-eye display in our daily lives. | arXiv |
This paper considers the application of Model Predictive Control (MPC) to a
weighted coverage path planning (WCPP) problem. The problem appears in a wide
range of practical applications, such as search and rescue (SAR) missions. The
basic setup is that one (or multiple) agents can move around a given search
space and collect rewards from a given spatial distribution. Unlike an
artificial potential field, each reward can only be collected once. In contrast
to a Traveling Salesman Problem (TSP), the agent moves in a continuous space.
Moreover, the agent is not obliged to cover all locations and/or may return to
previously visited locations. The WCPP problem is tackled by a new Model
Predictive Control (MPC) formulation with so-called Coverage Constraints (CCs).
It is shown that the solution becomes more effective if the solver is
initialized with a TSP-based heuristic. With and without this initialization,
the proposed MPC approach clearly outperforms a naive MPC formulation, as
demonstrated in a small simulation study. | arXiv |
The Segment Anything Model (SAM) and similar models build a family of
promptable foundation models (FMs) for image and video segmentation. The object
of interest is identified using prompts, such as bounding boxes or points. With
these FMs becoming part of medical image segmentation, extensive evaluation
studies are required to assess their strengths and weaknesses in clinical
settings. Since the performance is highly dependent on the chosen prompting
strategy, it is important to investigate different prompting techniques to
define optimal guidelines that ensure effective use in medical image
segmentation. Currently, no dedicated evaluation studies exist specifically for
bone segmentation in CT scans, leaving a gap in understanding the performance
for this task. Thus, we use non-iterative, ``optimal'' prompting strategies
composed of bounding box, points and combinations to test the zero-shot
capability of SAM-family models for bone CT segmentation on three different
skeletal regions. Our results show that the best settings depend on the model
type and size, dataset characteristics and objective to optimize. Overall, SAM
and SAM2 prompted with a bounding box in combination with the center point for
all the components of an object yield the best results across all tested
settings. As the results depend on multiple factors, we provide a guideline for
informed decision-making in 2D prompting with non-interactive, ``optimal''
prompts. | arXiv |
$F(R)$ models for dark energy generally exhibit a weak curvature singularity,
which can be cured by adding an $R^2$ term. This correction allows for a
unified description of primordial and late-time accelerated expansions.
However, most existing models struggle to achieve this, as they become unstable
over certain negative ranges of the Ricci scalar, where either the first or
second derivative of $F(R)$ turns negative. These instabilities may disrupt the
post-inflationary evolution when the Ricci scalar oscillates about the vacuum
state after the $R^2$ inflation. In this work, we introduce a new
model-building to guarantee global stability, i.e., the first and second
derivatives are positive for all real Ricci scalars. By extending the idea from
Appleby and Battye, we demonstrate that viable models can be constructed by
imposing a positive, bounded first derivative of $F(R)$ with a sigmoid shape.
As examples, we first reformulate and generalize the original Appleby-Battye
model. Then, we propose a new dark energy model, which successfully explains
the acceleration of cosmic expansion and passes local gravity tests. | arXiv |
The May 10, 2024 space weather event stands out as the most powerful storm
recorded during the current solar cycle. This study employs a numerical
framework utilizing a semi-empirical coronal model, along with HUXt
(Heliospheric Upwind eXtrapolation with time-dependence) and cone-CME models
for the inner heliosphere, to forecast solar wind velocity and the arrival of
CMEs associated with this event. The simulations were also carried out using
Space Weather Adaptive SimulaTion (SWASTi) and a drag-based model (DBM) for
this complex event of multiple CMEs. Predicted arrival times and velocities
from these models are compared with actual observations at the Sun-Earth L1
point. These simulations reveal that three coronal mass ejections (CMEs)
reached Earth nearly simultaneously, resulting in the extreme space weather
event, followed by the arrival of a few more eruptions. The simulations
accurately predicted arrival times with a discrepancy of approximately 5 hours
or less for these CMEs. Further, the ensemble study of DBM shows the
sensitivity of the CME arrival time to the background solar wind speed and drag
parameters. All three models performed fairly well in reproducing arrival
times close to the observed arrivals of the CMEs responsible for the extreme
geomagnetic storm of May 10, 2024. These rare solar storms offered a unique
opportunity to thoroughly evaluate and validate our advanced models for
predicting their arrival on the Earth. | arXiv |
Vibrating systems can respond to an infinite number of initial conditions and
the overall dynamics of the system can be strongly affected by them. Therefore,
it is of practical importance to have methods by which we can determine the
damping that is in some sense optimal for all initial conditions, or for a
given set of initial conditions. For single and multi degree of freedom
systems, we determine the optimal damping coefficients adapted to different
sets of initial conditions using the known method of minimizing the (zero to
infinity) time integral of the energy of the system, averaged over a set of
initial conditions, and using two new methods that we introduce. One method is
based on determining the damping for which the energy of the system, averaged
over a set of initial conditions, drops the fastest to a given threshold value.
The other method is based on determining the damping that gives minimal average
settling time of the system, where we take that the system settled when its
energy dropped to a given threshold value. We show that the two new methods
give results for optimal damping that are in excellent agreement with each
other, but are significantly different from the results given by the
minimization of the average energy integral. More precisely, for considered
multi degree of freedom systems and sets of initial conditions, the two new
methods give optimal damping coefficients that converge to the critical damping
of the first mode as the target energy threshold decreases. On the other hand,
for these same systems and sets of initial conditions, the method of minimizing
the average energy integral gives optimal damping coefficients which are deep
in the overdamped regime with respect to the first mode. | arXiv |
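A minimal single-degree-of-freedom sketch of two of the criteria compared above (our own parameter choices; it illustrates the criteria only and does not reproduce the multi-degree-of-freedom results): for the oscillator $\ddot{x} + c\dot{x} + x = 0$ we average, over initial conditions on the unit circle in phase space, the zero-to-infinity energy integral and the time for the energy to drop below a threshold, and minimize each average over the damping coefficient $c$.

    import numpy as np
    from scipy.linalg import expm

    def energy_history(c, dt=0.01, t_max=80.0, n_ic=16):
        # Damped oscillator x'' + c x' + x = 0, propagated exactly with the matrix exponential.
        A = np.array([[0.0, 1.0], [-1.0, -c]])
        M = expm(A * dt)                                   # exact one-step propagator
        thetas = np.linspace(0, 2 * np.pi, n_ic, endpoint=False)
        z = np.vstack([np.cos(thetas), np.sin(thetas)])    # unit-circle initial conditions (columns)
        n = int(t_max / dt)
        E = np.empty((n, n_ic))
        for i in range(n):
            z = M @ z
            E[i] = 0.5 * (z[0]**2 + z[1]**2)
        return dt * np.arange(1, n + 1), E

    cs = np.linspace(0.2, 4.0, 39)       # candidate damping coefficients
    threshold = 1e-3                     # t_max is long enough for all runs to cross it
    avg_integral, avg_settle = [], []
    for c in cs:
        t, E = energy_history(c)
        avg_integral.append(np.trapz(E, t, axis=0).mean())              # average energy integral
        avg_settle.append(t[(E < threshold).argmax(axis=0)].mean())     # average settling time

    print("c minimizing the average energy integral:", cs[np.argmin(avg_integral)])
    print("c minimizing the average settling time  :", cs[np.argmin(avg_settle)])
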
Let $M$ be a compact complex manifold, and $D\, \subset\, M$ a reduced normal
crossing divisor on it, such that the logarithmic tangent bundle $TM(-\log D)$
is holomorphically trivial. Let ${\mathbb A}$ denote the maximal connected
subgroup of the group of all holomorphic automorphisms of $M$ that preserve the
divisor $D$. Take a holomorphic Cartan geometry $(E_H,\,\Theta)$ of type $(G,\,
H)$ on $M$, where $H\, \subset\, G$ are complex Lie groups. We prove that
$(E_H,\,\Theta)$ is isomorphic to $(\rho^* E_H,\,\rho^* \Theta)$ for every
$\rho\, \in\, \mathbb A$ if and only if the principal $H$--bundle $E_H$ admits
a logarithmic connection $\Delta$ singular on $D$ such that $\Theta$ is
preserved by the connection $\Delta$. | arXiv |
Associative memory models, such as Hopfield networks and their modern
variants, have garnered renewed interest due to advancements in memory capacity
and connections with self-attention in transformers. In this work, we introduce
a unified framework-Hopfield-Fenchel-Young networks-which generalizes these
models to a broader family of energy functions. Our energies are formulated as
the difference between two Fenchel-Young losses: one, parameterized by a
generalized entropy, defines the Hopfield scoring mechanism, while the other
applies a post-transformation to the Hopfield output. By utilizing Tsallis and
norm entropies, we derive end-to-end differentiable update rules that enable
sparse transformations, uncovering new connections between loss margins,
sparsity, and exact retrieval of single memory patterns. We further extend this
framework to structured Hopfield networks using the SparseMAP transformation,
allowing the retrieval of pattern associations rather than a single pattern.
Our framework unifies and extends traditional and modern Hopfield networks and
provides an energy minimization perspective for widely used
post-transformations like $\ell_2$-normalization and layer normalization, all
through suitable choices of Fenchel-Young losses and by using convex analysis
as a building block. Finally, we validate our Hopfield-Fenchel-Young networks
on diverse memory recall tasks, including free and sequential recall.
Experiments on simulated data, image retrieval, multiple instance learning, and
text rationalization demonstrate the effectiveness of our approach. | arXiv |
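For orientation, a minimal sketch of the modern (softmax) Hopfield retrieval update that this family of energies generalizes; in the Fenchel-Young formulation above, different entropies replace the softmax by sparse transformations such as sparsemax (this code is a generic illustration, not the authors' implementation).

    import numpy as np

    def softmax(z, beta=8.0):
        e = np.exp(beta * (z - z.max()))
        return e / e.sum()

    def hopfield_retrieve(X, q, steps=3, beta=8.0):
        # Modern Hopfield update: q <- X^T softmax(beta * X q), rows of X are stored patterns.
        for _ in range(steps):
            q = X.T @ softmax(X @ q, beta)
        return q

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 16))
    X /= np.linalg.norm(X, axis=1, keepdims=True)     # five unit-norm memory patterns
    query = X[2] + 0.3 * rng.standard_normal(16)      # noisy cue for pattern 2

    out = hopfield_retrieve(X, query)
    print(np.argmax(X @ out))                         # index of the retrieved pattern
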
Assessing the quality of aleatoric uncertainty estimates from uncertainty
quantification (UQ) deep learning methods is important in scientific contexts,
where uncertainty is physically meaningful and important to characterize and
interpret exactly. We systematically compare aleatoric uncertainty measured by
two UQ techniques, Deep Ensembles (DE) and Deep Evidential Regression (DER).
Our method focuses on both zero-dimensional (0D) and two-dimensional (2D) data,
to explore how the UQ methods function for different data dimensionalities. We
investigate uncertainty injected on the input and output variables and include
a method to propagate uncertainty in the case of input uncertainty so that we
can compare the predicted aleatoric uncertainty to the known values. We
experiment with three levels of noise. The aleatoric uncertainty predicted
across all models and experiments scales with the injected noise level.
However, the predicted aleatoric uncertainty $\rm{std}(\sigma_{\rm al})$ is
miscalibrated with respect to the true uncertainty for half of the DE experiments
and almost all of the DER experiments. The predicted uncertainty is the least accurate for
both UQ methods for the 2D input uncertainty experiment and the high-noise
level. While these results do not apply to more complex data, they highlight
that further research on post-facto calibration for these methods would be
beneficial, particularly for high-noise and high-dimensional settings. | arXiv |
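For context, the standard way aleatoric uncertainty is read off a Deep Ensemble of mean-variance networks (a generic recipe, not necessarily the paper's exact pipeline): each member $m$ predicts $(\mu_m, \sigma_m^2)$ for an input, the mean of the predicted variances estimates the aleatoric part, the spread of the means the epistemic part, and their sum is the variance of the moment-matched Gaussian mixture.

    import numpy as np

    # Placeholder per-member predictions for one input x; in practice these come from the networks.
    mu = np.array([1.02, 0.98, 1.05, 0.97, 1.01])           # member means  mu_m(x)
    sigma2 = np.array([0.041, 0.038, 0.045, 0.040, 0.039])  # member variances sigma_m^2(x)

    aleatoric = sigma2.mean()          # average predicted noise variance
    epistemic = mu.var()               # disagreement between ensemble members
    total = aleatoric + epistemic      # mixture (predictive) variance
    print(aleatoric, epistemic, total)
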
We present a semi-algorithm which for any rational function
$r\in\mathbb{K}(x,y)$ and any irreducible polynomial $p\in\mathbb{K}[x,y]$
decides whether the restriction of $r$ to the curve defined by $p$ is the
restriction of an element of $\mathbb{K}(x)+\mathbb{K}(y)$. In case it is, it
finds all such elements. | arXiv |
We arrange the orders in an algebraic number field in a tree. This tree can
be used to enumerate all orders of bounded index in the maximal order as well
as the orders over some given order. | arXiv |
In this paper, we developed a spectral emulator based on the Mapping Nearby
Galaxies at Apache Point Observatory Stellar Library (MaStar) and a grouping
optimization strategy to estimate effective temperature (T_eff), surface
gravity (log g), metallicity ([Fe/H]) and the abundance of alpha elements with
respect to iron ([alpha/Fe]) for O-M-type stars within the Large Sky Area
Multi-Object Fiber Spectroscopic Telescope (LAMOST) low-resolution spectra. The
primary aim is to use a rapid spectral-fitting method, specifically the
spectral emulator with the grouping optimization strategy, to create a
comprehensive catalog for stars of all types within LAMOST, addressing the
shortcomings in parameter estimations for both cold and hot stars present in
the official LAMOST AFGKM-type catalog. This effort is part of our series of
studies dedicated to establishing an empirical spectral library for LAMOST.
Experimental results demonstrate that our method is effectively applicable to
parameter prediction for LAMOST, with the single-machine processing time within
$70$ hr. We observed that the internal error dispersions for T_eff, log g,
[Fe/H], and [alpha/Fe] across different spectral types lie within the ranges of
$15-594$ K, $0.03-0.27$ dex, $0.02-0.10$ dex, and $0.01-0.04$ dex,
respectively, indicating good consistency. A comparative analysis with
external data highlighted deficiencies in the official LAMOST catalog and
issues with MaStar parameters, as well as potential limitations of our method
in processing spectra with strong emission lines and bad pixels. The derived
atmospheric parameters as a part of this work are available at
https://nadc.china-vo.org/res/r101402/ . | arXiv |
We extend the theory of quantum time loops introduced by Greenberger and
Svozil [1] from the scalar situation (where paths have just an associated
complex amplitude) to the general situation where the time-traveling system has
a multi-dimensional underlying Hilbert space. The main mathematical tool which
emerges is the noncommutative M\"{o}bius transformation, and this affords a
formalism similar to the modular structure well known to feedback control
problems. We argue that a sum-over-all-paths approach may be carried out in the
scalar case, but quickly becomes unwieldy in the general case. It is natural to
replace the beamsplitters of [1] with more general components having their own
quantum structure, in which case the theory starts to resemble the quantum
feedback networks theory for open quantum optical models and indeed we exploit
this to look at more realistic physical models of time loops. We analyze some
Grandfather paradoxes in the new setting. | arXiv |
The shuffle product on positive integer points, which corresponds to the
shuffle algebra for multiple zeta values, is extended uniquely to all integer
points, by making the linear operator which decreases the first entry by one a
differential operator. We then show that all convergent integer points form a
subalgebra under this extended shuffle product. By lifting the extended shuffle
product to the locality algebra of Chen symbols, we prove that the multiple
zeta series defines an algebra homomorphism from the subalgebra of convergent
points to real numbers, which shows that the extended shuffle product is a
structure for convergent integer points. | arXiv |
Functional simulation is an essential step in digital hardware design.
Recently, there has been a growing interest in leveraging Large Language Models
(LLMs) for hardware testbench generation tasks. However, the inherent
instability associated with LLMs often leads to functional errors in the
generated testbenches. Previous methods do not incorporate automatic functional
correction mechanisms without human intervention and still suffer from low
success rates, especially for sequential tasks. To address this issue, we
propose CorrectBench, an automatic testbench generation framework with
functional self-validation and self-correction. Utilizing only the RTL
specification in natural language, the proposed approach can validate the
correctness of the generated testbenches with a success rate of 88.85%.
Furthermore, the proposed LLM-based corrector employs bug information obtained
during the self-validation process to perform functional self-correction on the
generated testbenches. The comparative analysis demonstrates that our method
achieves a pass ratio of 70.13% across all evaluated tasks, compared with the
previous LLM-based testbench generation framework's 52.18% and a direct
LLM-based generation method's 33.33%. Specifically in sequential circuits, our
work's performance is 62.18% higher than previous work in sequential tasks and
almost 5 times the pass ratio of the direct method. The codes and experimental
results are open-sourced at the link: https://github.com/AutoBench/CorrectBench | arXiv |
The SPEC CPU2017 benchmark suite is an industry standard for assessing CPU
performance. It adheres strictly to some workload and system configurations
(arbitrary specificity) while leaving other system configurations undefined
(arbitrary ambiguity). This article reveals: (1) Arbitrary specificity proves not
meaningful, obscuring many scenarios, as evidenced by significant performance
variations: a 74.49x performance difference was observed on the same CPU. (2)
Arbitrary ambiguity is unfair as it fails to establish the same configurations
for comparing different CPUs.
We propose an innovative CPU evaluation methodology. It considers all
workload and system configurations valid and mandates each configuration to be
well-defined to avoid arbitrary specificity and ambiguity. To reduce the
evaluation cost, a sampling approach is proposed to select a subset of the
configurations. To expose CPU performance under different scenarios, it treats
all outcomes under each configuration as equally important. Finally, it
utilizes confidence level and confidence interval to report the outcomes to
avoid bias. | arXiv |
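A small sketch of the reporting step proposed above (illustrative numbers only): with outcomes measured under a random sample of equally weighted, well-defined configurations, the score is reported as a mean with a confidence interval rather than under a single arbitrarily chosen configuration.

    import numpy as np

    # Benchmark outcomes of one CPU under a sample of well-defined configurations (made-up values).
    scores = np.array([41.2, 38.7, 55.9, 47.1, 44.8, 60.3, 39.5, 52.0, 43.4, 49.6])

    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(len(scores))    # standard error of the mean
    z = 1.96                                           # ~95% confidence level (normal approximation)
    print(f"score = {mean:.1f}, 95% CI = [{mean - z*sem:.1f}, {mean + z*sem:.1f}]")
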
Many studies have predicted SocioEconomic Position (SEP) for aggregated
spatial units such as villages using satellite data, but SEP prediction at the
household level and other sources of imagery have not been yet explored. We
assembled a dataset of 975 households in a semi-rural district in southern
Mozambique, consisting of self-reported asset, expenditure, and income SEP
data, as well as multimodal imagery including satellite images and a
ground-based photograph survey of 11 household elements. We fine-tuned a
convolutional neural network to extract feature vectors from the images, which
we then used in regression analyses to model household SEP using different sets
of image types. The best prediction performance was found when modeling
asset-based SEP using random forest models with all image types, while the
performance for expenditure- and income-based SEP was lower. Using SHAP, we
observed clear differences between the images with the largest positive and
negative effects, as well as identified the most relevant household elements in
the predictions. Finally, we fitted an additional reduced model using only the
identified relevant household elements, which had only slightly lower
performance compared to models using all images. Our results show how
ground-based household photographs allow us to zoom in from area-level to
individual-household prediction while minimizing the data collection effort by
using explainable machine learning. The developed workflow can be potentially
integrated into routine household surveys, where the collected household
imagery could be used for other purposes, such as refined asset
characterization and environmental exposure assessment. | arXiv |
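A minimal sketch of the modeling pipeline described above, with synthetic stand-ins for the CNN feature vectors and the asset-based SEP target; sklearn's permutation importance is used here as a simpler stand-in for the SHAP attribution reported in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for CNN feature vectors: one 16-dim vector per image type, concatenated
# per household (the paper uses fine-tuned CNN features from 11 photographed
# household elements plus satellite imagery).
n_households, n_image_types, feat_dim = 975, 12, 16
X = rng.normal(size=(n_households, n_image_types * feat_dim))
asset_sep = X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=n_households)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, asset_sep, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out households:", round(model.score(X_te, y_te), 3))

# The paper uses SHAP to attribute predictions to image types; permutation
# importance is a simpler stand-in for ranking blocks of features here.
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
block_importance = imp.importances_mean.reshape(n_image_types, feat_dim).sum(axis=1)
print("most relevant image type (index):", int(np.argmax(block_importance)))
```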
An edge-colored graph is called a \textit{rainbow graph} if all the colors on
its edges are distinct. For a given positive integer $n$ and a family of graphs
$\mathcal{G}$, the anti-Ramsey number $ar(n, \mathcal{G})$ is the smallest
number of colors $r$ required to ensure that, no matter how the edges of the
complete graph $K_n$ are colored using exactly $r$ colors, there will always be
a rainbow copy of some graph $G$ from the family $\mathcal{G}$. A friendship
graph $F_k$ is the graph obtained by combining $k$ triangles that share a
common vertex. In this paper, we determine the anti-Ramsey number $ar(n,
\{F_k\})$ for large values of $n$. Additionally, we determine $ar(n,
\{K_{1,k}, M_k\})$, where $K_{1,k}$ is a star graph with $k+1$ vertices and
$M_k$ is a matching of size $k$. | arXiv |
Inertial waves in convective regions of stars exhibit topological properties
linked to a Chern number of 1. The first of these is a unique, unidirectional,
prograde oscillation mode within the cavity, which propagates at arbitrarily
low frequencies for moderate azimuthal wavenumbers. The second consists of phase
singularities around which the phase winds in Fourier space, with winding
numbers of $\pm 1$ depending on the hemisphere. Phase winding is a collective
effect over waves propagating in all directions that is strongly robust to
noise. This suggests a topology-based method for wave detection in noisy
observational data. | arXiv |
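The phase-winding diagnostic can be illustrated with a generic winding-number estimator; the toy complex field below is an assumption, not the Fourier-space inertial-wave data of the study.

```python
import numpy as np

def winding_number(field, center, radius, n_samples=400):
    """Winding number of the phase of a complex field around `center`,
    estimated by summing wrapped phase increments along a closed circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    pts = center + radius * np.exp(1j * angles)
    phases = np.angle(field(pts))
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi   # wrap increments to (-pi, pi]
    return int(np.round(dphi.sum() / (2.0 * np.pi)))

# Toy field with a single phase singularity of charge +1 at the origin.
f = lambda z: z
print(winding_number(f, center=0.0 + 0.0j, radius=1.0))          # -> 1
print(winding_number(lambda z: np.conj(z), 0.0 + 0.0j, 1.0))     # -> -1
```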
Pick $n$ independent and uniform random points $U_1,\ldots,U_n$ in a compact
convex set $K$ of $\mathbb{R}^d$ with volume 1, and let $P^{(d)}_K(n)$ be the
probability that these points are in convex position. The Sylvester conjecture
in $\mathbb{R}^d$ is that $\min_K P^{(d)}_K(d+2)$ is achieved by the
$d$-dimensional simplices $K$ (only).
In this paper, we focus on a companion model, already studied in the 2D
case, which we define in any dimension $d$: we say that $K$ has $F$ as a flat
floor if $F$ is a subset of $K$, contained in a hyperplane $P$, such that $K$
lies in one of the half-spaces defined by $P$.
We define $Q_K^F(n)$ as the probability that $U_1,\cdots,U_n$ together with
$F$ are in convex position (i.e., the $U_i$ are on the boundary of the convex
hull ${\sf CH}(\{U_1,\cdots,U_n\}\cup F)$). We prove that, for all fixed $F$,
$K\mapsto Q_K^F(2)$ reaches its minimum on the "mountains" with floor $F$
(mountains are convex hull of $F$ union an additional vertex), while the
maximum is not reached, but $K\mapsto Q_K^F(2)$ has values arbitrarily close to
1. If the optimisation is done on the set of $K$ contained in $F\times[0,d]$
(the "subprism case"), then the minimum is also reached by the mountains, and
the maximum by the "prism" $F\times[0,1]$. Since, again, $Q_K^F(2)$ relies on
the expected volume (of ${\sf CH}(\{V_1,V_2\}\cup F)$), this result can be
seen as a proof of the Sylvester problem in the floor case.
In 2D, where $F$ can essentially be the segment $[0,1]$, we give a general
decomposition formula for $Q_K^F(n)$ so as to compute several formulas and bounds
for different $K$. In 3D, we give some bounds for $Q_K^F(n)$ for various floors
$F$ and special cases of $K$. | arXiv |
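A Monte Carlo sketch of $Q_K^F(n)$ in 2D, assuming $K$ is the unit square and $F$ is the bottom edge $[0,1]\times\{0\}$; the convex-position test follows the definition above (every sampled point must be a vertex of ${\sf CH}(\{U_1,\dots,U_n\}\cup F)$). The number of trials is arbitrary.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_position_with_floor(points, floor):
    """True iff every sampled point is a vertex of CH(points U floor)."""
    all_pts = np.vstack([points, floor])
    hull = ConvexHull(all_pts)
    return set(range(len(points))).issubset(set(hull.vertices))

def estimate_Q(n, trials=50_000, seed=1):
    rng = np.random.default_rng(seed)
    floor = np.array([[0.0, 0.0], [1.0, 0.0]])   # F = [0,1] x {0}
    hits = 0
    for _ in range(trials):
        pts = rng.uniform(size=(n, 2))           # K = unit square (the "prism" over F)
        hits += convex_position_with_floor(pts, floor)
    return hits / trials

print("Monte Carlo estimate of Q_K^F(2) for the unit square:", estimate_Q(2))
```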
We analyze all individual cosmic strings of various lengths in a large
ensemble of the global cosmic string networks in the post-inflationary
scenario, obtained from numerical simulations on a discrete lattice with $N^3 =
4096^3$. Strong evidence for a logarithmically growing spectral index of the
string power spectrum during the evolution is newly reported as our main
result. The logarithmic scaling is checked against two different approaches for
generating initial random field configurations, namely fat-string type and
thermal phase transition. We derive the analytic relation between two power
spectra of cosmic strings and axions which should be valid under some
assumptions, and the validity of those assumptions is discussed. We argue that
our analytic result strongly supports the correlated spectra of cosmic strings
and axions. Additionally, we initiate the statistical analysis of the causal
dynamics of the cosmic strings. | arXiv |
The remarkable advances in deep learning have led to the emergence of many
off-the-shelf classifiers, e.g., large pre-trained models. However, since they
are typically trained on clean data, they remain vulnerable to adversarial
attacks. Despite this vulnerability, their superior performance and
transferability make off-the-shelf classifiers still valuable in practice,
demanding further work to provide adversarial robustness for them in a post-hoc
manner. A recently proposed method, denoised smoothing, leverages a denoiser
model in front of the classifier to obtain provable robustness without
additional training. However, the denoiser often creates hallucinations, i.e.,
images that have lost the semantics of their originally assigned class, leading
to a drop in robustness. Furthermore, its noise-and-denoise procedure
introduces a significant distribution shift from the original distribution,
causing the denoised smoothing framework to achieve sub-optimal robustness. In
this paper, we introduce Fine-Tuning with Confidence-Aware Denoised Image
Selection (FT-CADIS), a novel fine-tuning scheme to enhance the certified
robustness of off-the-shelf classifiers. FT-CADIS is inspired by the
observation that the confidence of off-the-shelf classifiers can effectively
identify hallucinated images during denoised smoothing. Based on this, we
develop a confidence-aware training objective to handle such hallucinated
images and improve the stability of fine-tuning from denoised images. In this
way, the classifier can be fine-tuned using only images that are beneficial for
adversarial robustness. We also find that such a fine-tuning can be done by
updating a small fraction of parameters of the classifier. Extensive
experiments demonstrate that FT-CADIS has established the state-of-the-art
certified robustness among denoised smoothing methods across all
$\ell_2$-adversary radii in various benchmarks. | arXiv |
Peer review, as a widely used practice to ensure the quality and integrity of
publications, lacks a well-defined and common mechanism to self-incentivize
virtuous behavior across all the conferences and journals. This is because
information about reviewer efforts and author feedback typically remains local
to a single venue, while the same group of authors and reviewers participate in
the publication process across many venues. Previous attempts to incentivize
the reviewing process assume that the quality of reviews and papers authored
correlate for the same person, or they assume that the reviewers can receive
physical rewards for their work. In this paper, we aim to keep track of
reviewing and authoring efforts by users (who review and author) across
different venues while ensuring self-incentivization. We show that our system,
DecentPeeR, incentivizes reviewers to behave according to the rules, i.e., it
has a unique Nash equilibrium in which virtuous behavior is rewarded. | arXiv |
Uncertainty visualisation is quickly becoming a hot topic in information
visualisation. Existing reviews in the field take the definition and purpose
of an uncertainty visualisation to be self-evident, which results in a large
amount of conflicting information. This conflict largely stems from a conflation
between uncertainty visualisations designed for decision making and those
designed to prevent false conclusions. We coin the term "signal suppression" to
describe a visualisation that is designed for preventing false conclusions, as
the approach demands that the signal (i.e. the collective take away of the
estimates) is suppressed by the noise (i.e. the variance on those estimates).
We argue that the current standards in visualisation suggest that uncertainty
visualisations designed for decision making should not be considered
uncertainty visualisations at all. Therefore, future work should focus on
signal suppression. Effective signal suppression requires us to communicate the
signal and the noise as a single "validity of signal" variable, and doing so
proves to be difficult with current methods. We illustrate current approaches
to uncertainty visualisation by showing how they would change the visual
appearance of a choropleth map. These maps allow us to see why some methods
succeed at signal suppression, while others fall short. Evaluating
visualisations on how well they perform signal suppression also proves to be
difficult, as it involves measuring the effect of noise, a variable we
typically try to ignore. We suggest authors use qualitative studies or compare
uncertainty visualisations to the relevant hypothesis tests. | arXiv |
Larger transformer models always perform better on various tasks but incur
higher costs to scale up the model size. To efficiently enlarge models, the
mixture-of-experts (MoE) architecture is widely adopted; it consists of a
gate network and a series of experts, and keeps the training cost constant by
routing the input data to a fixed number of experts instead of all of them. In existing
large-scale MoE training systems, experts would be distributed among different
GPUs for parallelization, and thus input data requires additional all-to-all
communications to access the target experts and conduct corresponding
computations. However, upon evaluating the training process of three mainstream
MoE models on commonly used GPU clusters, we found that the all-to-all
communication ratio averaged around 45%, which significantly hinders the
efficiency and scalability of training MoE models.
In this paper, we propose LSH-MoE, a communication-efficient MoE training
framework using locality-sensitive hashing (LSH). We first present the problems
of scaling MoE training in existing systems and highlight the potential of
exploiting token similarity to facilitate data compression. Then, we introduce
an efficient LSH-based compression technique, which utilizes the cross-polytope
hashing for rapid clustering and implements a residual-based error compensation
scheme to alleviate the adverse impact of compression. To verify the
effectiveness of our methods, we conduct experiments on both language models
(e.g., RoBERTa, GPT, and T5) and vision models (e.g., Swin) for pre-training
and fine-tuning tasks. The results demonstrate that our method substantially
outperforms its counterparts across different tasks, achieving speedups of 1.28x - 2.2x. | arXiv |
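A minimal sketch of the cross-polytope hashing and token-clustering idea mentioned above; the class and function names are hypothetical, and the residual-based error compensation is only indicated, not reproduced from the paper.

```python
import numpy as np

class CrossPolytopeHash:
    """Minimal cross-polytope LSH: pseudo-rotate, then map to the nearest
    signed standard basis vector (a vertex of the cross-polytope)."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.R = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # pseudo-rotation

    def bucket(self, x):
        y = self.R @ x
        i = int(np.argmax(np.abs(y)))
        return (i, 1 if y[i] >= 0 else -1)   # bucket id: (coordinate, sign)

def compress_tokens(tokens, hasher):
    """Group tokens by bucket; transmit one centroid per bucket plus residuals
    (the residual-based error compensation is only sketched here)."""
    buckets = {}
    for t in tokens:
        buckets.setdefault(hasher.bucket(t), []).append(t)
    centroids = {b: np.mean(v, axis=0) for b, v in buckets.items()}
    residuals = {b: [t - centroids[b] for t in v] for b, v in buckets.items()}
    return centroids, residuals

dim = 64
hasher = CrossPolytopeHash(dim)
tokens = [np.random.default_rng(i).normal(size=dim) for i in range(256)]
centroids, _ = compress_tokens(tokens, hasher)
print(f"{len(tokens)} tokens -> {len(centroids)} centroids sent over all-to-all")
```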
Despite its excellent performance in the microelectronic industry, silicon has
not been able to perform well in the photonic device arena. This is because silicon
has never been a good optical source, mainly due to its indirect band gap
structure. Many device functionalities in silicon have been reported,
with the exception of, until recently, a reliable optical source. Silicon is a
nonlinear material, and its nonlinearities can be exploited to realize various
functionalities. This paper presents a theoretical treatment of generating and
enhancing a third-harmonic field, which may be used for optical sources, crystal
state monitoring and all-optical signal processing applications. | arXiv |
In this paper we investigate the tractability of robust Markov Decision
Processes (RMDPs) under various structural assumptions on the uncertainty set.
Surprisingly, we show that in all generality (i.e. without any assumption on
the instantaneous rewards), s-rectangular and sa-rectangular uncertainty sets
are the only models of uncertainty that are tractable. Our analysis also shows
that existing non-rectangular models, including r-rectangular uncertainty and
new generalizations, are only weakly tractable in that they require an
additional structural assumption that the instantaneous rewards do not depend
on the next state, and in this case they are equivalent to rectangular models,
which severely undermines their significance and usefulness. Interestingly, our
proof techniques rely on identifying a novel simultaneous solvability property,
which we show is at the heart of several important properties of RMDPs,
including the existence of stationary optimal policies and dynamic
programming-based formulations. The simultaneous solvability property enables a
unified approach to studying the tractability of all existing models of
uncertainty, rectangular and non-rectangular alike. | arXiv |
In the distributed localization problem (DLP), n anonymous robots (agents)
A0, A1, ..., A(n-1) begin at arbitrary positions p0, ..., p(n-1) in S, where S
is a Euclidean space. The primary goal in DLP is for agents to reach a
consensus on a unified coordinate system that accurately reflects the relative
positions of all points, p0, ... , p(n-1), in S. Extensive research on DLP has
primarily focused on the feasibility and complexity of achieving consensus when
agents have limited access to inter-agent distances, often due to missing or
imprecise data. In this paper, however, we examine a minimalist,
computationally efficient model of distributed computing in which agents have
access to all pairwise distances, if needed. Specifically, we introduce a novel
variant of population protocols, referred to as the spatial population
protocols model. In this variant each agent can memorise one or a fixed number
of coordinates, and when agents A(i) and A(j) interact, they can not only
exchange their current knowledge but also either determine the distance d(i,j)
between them in S (distance query model) or obtain the vector v(i,j) spanning
points p(i) and p(j) (vector query model).
We examine three DLP scenarios:
- Self-stabilising localisation protocol with distance queries: We propose and
analyse a self-stabilising localisation protocol based on pairwise distance
adjustment (sketched below). We also discuss several hard instances in this scenario, and
suggest possible improvements for the considered protocol;
- Leader-based localisation protocol with distance queries: We propose and
analyse several leader-based protocols which stabilise in o(n) parallel time.
These protocols rely on an efficient solution to multi-contact epidemics; and
- Self-stabilising localisation protocol with vector queries: We propose and
analyse a superfast self-stabilising DLP protocol which stabilises in O(log n)
parallel time. | arXiv |
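A toy sketch of pairwise distance adjustment with distance queries, as referenced in the first scenario above; this is not the analysed protocol, and its convergence and the hard instances mentioned in the abstract are not addressed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 50, 2
true_pos = rng.uniform(size=(n, dim))   # ground truth, used only to answer distance queries
est = rng.uniform(size=(n, dim))        # each agent's current coordinate estimate

def interact(i, j):
    """Agents i and j meet, query d(i, j), and symmetrically stretch/shrink the
    segment between their estimates so its length matches the queried distance."""
    d_true = np.linalg.norm(true_pos[i] - true_pos[j])
    delta = est[i] - est[j]
    d_est = np.linalg.norm(delta)
    if d_est < 1e-12:
        delta, d_est = rng.normal(size=dim), 1.0
    correction = 0.5 * (d_true - d_est) * delta / d_est
    est[i] += correction
    est[j] -= correction

for _ in range(200 * n):                # random pairwise interactions
    i, j = rng.choice(n, size=2, replace=False)
    interact(i, j)

# Estimates can only agree with the truth up to a rigid motion (and reflection),
# so compare pairwise-distance matrices rather than raw coordinates.
err = np.abs(np.linalg.norm(est[:, None] - est[None, :], axis=-1)
             - np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1))
print("mean pairwise-distance error:", float(err.mean()))
```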
In this paper we consider the existence of standing waves for a coupled
system of $k$ equations with Lotka-Volterra type interaction. We prove the
existence of a standing wave solution with all nontrivial components satisfying
a prescribed asymptotic profile. In particular, the last $k-1$ components of
such a solution exhibit a concentrating behavior, while the first one keeps a
quantum nature. We first analyze in detail the case of three equations,
since this is the first case in which the coupling plays a role, in contrast to what
happens when only two densities appear. We also discuss the existence of
solutions of this form for systems with other kinds of couplings, making a
comparison with Lotka-Volterra type systems. | arXiv |
The aim of this article is to give an expository account of the equivalence
between modest sets and partial equivalence relations. Our proof is entirely
self-contained in that we do not assume any knowledge of categorical
realizability. At the heart of the equivalence lies the subquotient
construction on a partial equivalence relation. The subquotient construction
embeds the category of partial equivalence relations into the category of
modest sets. We show that this embedding is a split essentially surjective
functor, and thereby, an equivalence of categories. Our development is both
constructive and predicative, and employs the language of homotopy type theory.
All the mathematics presented in this article has been mechanised in Cubical
Agda. | arXiv |
We present a preliminary laboratory test of a setup designed to measure
Hanbury Brown and Twiss-type intensity correlations from a chaotic light source
using five spectral channels simultaneously. After averaging the zero-delay
correlation peaks from all channels, we obtain an improvement of the
signal-to-noise ratio fairly consistent with theory. The goal is to demonstrate
the feasibility and scalability of this technique to improve the sensitivity of
stellar intensity interferometry using optical telescopes. | arXiv |
Modern imaging technologies are widely based on classical principles of light
or electromagnetic wave propagation. They can be remarkably sophisticated, with
recent successes ranging from single molecule microscopy to imaging far-distant
galaxies. However, new imaging technologies based on quantum principles are
gradually emerging. They can either surpass classical approaches or provide
novel imaging capabilities that would not otherwise be possible. Here we
provide an overview of the most recently developed quantum imaging systems,
highlighting the non-classical properties of sources such as bright squeezed
light, entangled photons, and single-photon emitters that enable their
functionality. We outline potential upcoming trends and the associated
challenges, all driven by a central inquiry, which is to understand whether
quantum light can make visible the invisible. | arXiv |
Fluid antenna system (FAS)/movable antenna (MA) has emerged as a promising
technology to fully exploit the spatial degrees of freedom (DoFs). In this
paper, we propose a new rotatable antenna (RA) model, as a simplified
implementation of six-dimensional movable antenna (6DMA), to improve the
performance of wireless communication systems. Different from conventional
fixed-position antenna (FPA), the proposed RA system can independently and
flexibly change the three-dimensional (3D) orientation of each antenna by
adjusting its declination angles to achieve desired channel realizations.
Specifically, we study an RA-enabled uplink communication system, where the
receive beamforming and the declination angles of all RAs are jointly optimized
to maximize the minimum signal-to-interference-plus-noise ratio (SINR) among
all the users. In the special single-user and free-space propagation setup, the
optimal declination angles are derived in closed form with the maximum-ratio
combining (MRC) beamformer applied at the base station (BS). In the general
multi-user and multi-path setup, we propose an alternating optimization (AO)
algorithm to alternately optimize the receive beamforming and the declination
angles in an iterative manner. Simulation results are provided to demonstrate
that the proposed RA-enabled system can significantly outperform other
benchmark schemes. | arXiv |
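A small sketch of the max-min SINR objective under MRC receive beamforming for one fixed channel realization; in the paper the channel matrix itself depends on the RA declination angles being optimized, which is abstracted away here, and all dimensions are placeholder assumptions.

```python
import numpy as np

def sinr_mrc(H, k, power=1.0, noise=1.0):
    """SINR of user k with maximum-ratio combining: the receive beamformer is
    (proportional to) the user's own channel vector h_k."""
    h_k = H[:, k]
    w = h_k / np.linalg.norm(h_k)                # MRC beamformer
    signal = power * np.abs(w.conj() @ h_k) ** 2
    interference = sum(power * np.abs(w.conj() @ H[:, j]) ** 2
                       for j in range(H.shape[1]) if j != k)
    return signal / (interference + noise)

def min_sinr(H):
    """Max-min objective evaluated for one channel realization H (antennas x users)."""
    return min(sinr_mrc(H, k) for k in range(H.shape[1]))

rng = np.random.default_rng(0)
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
print("minimum user SINR under MRC:", round(float(min_sinr(H)), 3))
```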
We classify all possible occurrences of Kazama-Suzuki duality between the
$N=2$ superconformal algebra $L^{N=2}_c$ and the subregular
$\mathcal{W}$-algebra $\mathcal{W}_{k}(sl_4, f_{\text{sub}})$. We establish a new
Kazama-Suzuki duality between the subregular $\mathcal{W}$-algebra
$\mathcal{W}^k(sl_4, f_{\text{sub}})$ and the $N = 2$ superconformal
algebra $L^{N=2}_{c}$ for $c=-15$. As a consequence of duality, we classify the
irreducible $\mathcal{W}_{k=-1}(sl_4, f_{\text{sub}})$-modules. | arXiv |
Evolutionary competition often occurs simultaneously at multiple levels of
organization, in which traits or behaviors that are costly for an individual
can provide collective benefits to groups to which the individual belongs.
Building off of recent work that has used ideas from game theory to study
evolutionary competition within and among groups, we study a PDE model for
multilevel selection that considers group-level evolutionary dynamics through a
pairwise conflict depending on the strategic composition of the competing
groups. This model allows for incorporation of group-level frequency
dependence, facilitating the exploration of how the form of probabilities for
victory in a group-level conflict can impact the long-time support for
cooperation via multilevel selection. We characterize well-posedness properties
for measure-valued solutions of our PDE model and apply these properties to
show that the population will converge to a delta-function at the all-defector
equilibrium when between-group selection is sufficiently weak. We further
provide necessary conditions for the existence of bounded steady state
densities for the multilevel dynamics of Prisoners' Dilemma and Hawk-Dove
scenarios, using a mix of analytical and numerical techniques to characterize
the relative strength of between-group selection required to ensure the
long-time survival of cooperation via multilevel selection. We also see that
the average payoff at steady state appears to be limited by the average payoff
of the all-cooperator group, even for games in which groups achieve maximal
average payoff at intermediate levels of cooperation, generalizing behavior
that has previously been observed in PDE models of multilevel selection with
frequency-independent group-level competition. | arXiv |
Video generation has emerged as a promising tool for world simulation,
leveraging visual data to replicate real-world environments. Within this
context, egocentric video generation, which centers on the human perspective,
holds significant potential for enhancing applications in virtual reality,
augmented reality, and gaming. However, the generation of egocentric videos
presents substantial challenges due to the dynamic nature of egocentric
viewpoints, the intricate diversity of actions, and the complex variety of
scenes encountered. Existing datasets are inadequate for addressing these
challenges effectively. To bridge this gap, we present EgoVid-5M, the first
high-quality dataset specifically curated for egocentric video generation.
EgoVid-5M encompasses 5 million egocentric video clips and is enriched with
detailed action annotations, including fine-grained kinematic control and
high-level textual descriptions. To ensure the integrity and usability of the
dataset, we implement a sophisticated data cleaning pipeline designed to
maintain frame consistency, action coherence, and motion smoothness under
egocentric conditions. Furthermore, we introduce EgoDreamer, which is capable
of generating egocentric videos driven simultaneously by action descriptions
and kinematic control signals. The EgoVid-5M dataset, associated action
annotations, and all data cleansing metadata will be released for the
advancement of research in egocentric video generation. | arXiv |
A proper vertex coloring of a graph is equitable if the sizes of all color
classes differ by at most $1$. For a list assignment $L$ of $k$ colors to each
vertex of an $n$-vertex graph $G$, an equitable $L$-coloring of $G$ is a proper
coloring of vertices of $G$ from their lists such that no color is used more
than $\lceil n/k\rceil$ times. Call a graph equitably $k$-choosable if it has
an equitable $L$-coloring for every $k$-list assignment $L$. A graph $G$ is
$(a,b)$-sparse if for every $A\subseteq V(G)$, the number of edges in the
subgraph $G[A]$ of $G$ induced by $A$ is at most $a|A|+b$.
Our first main result is that every $(\frac{7}{6},\frac{1}{3})$-sparse graph
with minimum degree at least $2$ is equitably $3$-colorable and equitably
$3$-choosable. This is sharp. Our second main result is that every
$(\frac{5}{4},\frac{1}{2})$-sparse graph with minimum degree at least $2$ is
equitably $4$-colorable and equitably $4$-choosable. This is also sharp.
One of the tools in the proof is the new notion of strongly equitable (SE)
list coloring. This notion is both stronger and more natural than equitable
list coloring; and our upper bounds are for SE list coloring. | arXiv |
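The $(a,b)$-sparsity condition above can be checked by brute force on small graphs; a minimal sketch, exponential in the number of vertices and intended only for illustration.

```python
from itertools import combinations

def is_sparse(vertices, edges, a, b):
    """Brute-force check that G is (a, b)-sparse: for every vertex subset A,
    the number of edges inside A is at most a*|A| + b."""
    for r in range(1, len(vertices) + 1):
        for A in combinations(vertices, r):
            A_set = set(A)
            inside = sum(1 for u, v in edges if u in A_set and v in A_set)
            if inside > a * len(A) + b:
                return False
    return True

# A 6-cycle has minimum degree 2 and every induced subgraph on m < 6 vertices has
# at most m - 1 edges, so it satisfies the (7/6, 1/3)-sparsity condition.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(is_sparse(range(6), cycle6, 7/6, 1/3))   # True
```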
A code ${\mathcal C}$ is a subset of the vertex set of a Hamming graph
$H(n,q)$, and ${\mathcal C}$ is $2$-neighbour-transitive if the automorphism
group $G={\rm Aut}({\mathcal C})$ acts transitively on each of the sets
${\mathcal C}$, ${\mathcal C}_1$ and ${\mathcal C}_2$, where ${\mathcal C}_1$
and ${\mathcal C}_2$ are the (non-empty) sets of vertices that are distances
$1$ and $2$, respectively, (but no closer) to some element of ${\mathcal C}$.
Suppose that ${\mathcal C}$ is a $2$-neighbour-transitive code with minimum
distance at least $5$. For $q=2$, all `minimal' such ${\mathcal C}$ have been
classified. Moreover, it has previously been shown that a subgroup of the
automorphism group of the code induces an affine $2$-transitive group action on
the alphabet of the Hamming graph. The main results of this paper are to show
that this affine $2$-transitive group must be a subgroup of ${\rm A}\Gamma{\rm
L}_1(q)$ and to provide a number of infinite families of examples of such
codes. These examples are described via polynomial algebras related to
representations of certain classical groups. | arXiv |
The recent discovery of an axial amplitude (Higgs) mode in the long-studied
charge density wave (CDW) systems GdTe$_3$ and LaTe$_3$ suggests a heretofore
unidentified hidden order. A theoretical study proposed that the axial Higgs
results from a hidden ferroaxial component of the CDW, which could arise from
non-trivial orbital texture. Here, we report extensive experimental studies on
ErTe$_3$ and HoTe$_3$ that possess a high-temperature CDW similar to other
RTe$_3$ (R = rare earth), along with an additional low-temperature CDW with an
orthogonal ordering vector. Combining Raman spectroscopy with large-angle
convergent beam electron diffraction (LACBED), rotational anisotropy
second-harmonic generation (RA-SHG), and muon-spin relaxation ($\mu$SR), we
provide unambiguous evidence that the high-temperature CDW breaks translation,
rotation, and all vertical and diagonal mirror symmetries, but not
time-reversal or inversion. In contrast, the low-temperature CDW only
additionally breaks translation symmetry. Simultaneously, Raman scattering
shows the high-temperature CDW produces an axial Higgs mode while the
low-temperature mode is scalar. The weak monoclinic structural distortion and
clear axial response in Raman and SHG are consistent with a ferroaxial phase in
\ch{RTe3} driven by coupled orbital and charge orders. Thus, our study provides
a new standard for uncovering unconventional orders and confirms the power of
Higgs modes to reveal them. | arXiv |
Precision medicine leverages patient heterogeneity to estimate individualized
treatment regimens, formalized, data-driven approaches designed to match
patients with optimal treatments. In the presence of competing events, where
multiple causes of failure can occur and one cause precludes others, it is
crucial to assess the risk of the specific outcome of interest, such as one
type of failure over another. This helps clinicians tailor interventions based
on the factors driving that particular cause, leading to more precise treatment
strategies. Currently, no precision medicine methods simultaneously account for
both survival and competing risk endpoints. To address this gap, we develop a
nonparametric individualized treatment regime estimator. Our two-phase method
accounts for both overall survival from all events as well as the cumulative
incidence of a main event of interest. Additionally, we introduce a
multi-utility value function that incorporates both outcomes. We develop random
survival and random cumulative incidence forests to construct individual
survival and cumulative incidence curves. Simulation studies demonstrated that
our proposed method performs well, which we applied to a cohort of peripheral
artery disease patients at high risk for limb loss and mortality. | arXiv |
This paper examines how the mathematicians and astronomers of the Kerala
school tackled the problem of computing the values of the arcsin function. Four
different approaches are discussed, all of which are found in Nilakantha
Somayaji's (1444 - 1545 CE) Tantrasangraha and the roots of all of which can be
traced to ideas originally articulated by Sangamagrama Madhava (c. 1340 - 1425
CE): (i) a simple method when the argument is small; (ii) an iterative method
when the argument is small; (iii) a method based on a lookup table; (iv) a
method when the argument is large. The paper also contains the original
Sanskrit verses describing the various methods and English translations
thereof. Moreover, there is a presentation of a novel method for computing the
circumference of a circle found in Jyeshthadeva's (c. 1500 - 1575 CE)
Yuktibhasha which is based on method (i) for computing the arcsin function. All
methods have been illustrated with numerical examples. A surprising by-product
of the investigation is a totally unexpected appearance of a core integer
sequence, namely, the entry A001764 in the On-Line Encyclopedia of Integer
Sequences, while studying the iterative method for computing the arcsin
function. | arXiv |
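Method (i) above concerns small arguments; a minimal modern rendering using the standard power series for arcsin (the Kerala-school formulation is phrased in terms of arcs, chords and Rsines, so this is only an illustration, not a transcription of the original method).

```python
from math import comb, asin

def arcsin_series(x, terms=12):
    """arcsin(x) for small |x| via its power series:
    arcsin(x) = sum_{n>=0} C(2n, n) / (4^n (2n+1)) * x^(2n+1)."""
    return sum(comb(2 * n, n) / (4 ** n * (2 * n + 1)) * x ** (2 * n + 1)
               for n in range(terms))

x = 0.2
print(arcsin_series(x), asin(x))   # the two values agree to many digits for small x
```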
The relaxed optimal $k$-thresholding pursuit (ROTP) is a recent algorithm for
linear inverse problems. This algorithm is based on the optimal
$k$-thresholding technique which performs vector thresholding and error metric
reduction simultaneously. Although ROTP can be used to solve small to
medium-sized linear inverse problems, the computational cost of this algorithm
is high when solving large-scale problems. By merging the optimal
$k$-thresholding technique and iterative method with memory as well as
optimization with sparse search directions, we propose the so-called dynamic
thresholding algorithm with memory (DTAM), which iteratively and dynamically
selects vector bases to construct the problem solution. At every step, the
algorithm uses more than one, or all, of the iterates generated so far to construct a
new search direction, and solves only the small-sized quadratic subproblems at
every iteration. Thus the computational complexity of DTAM is remarkably lower
than that of ROTP-type methods. It turns out that DTAM can locate the solution
of linear inverse problems if the matrix involved satisfies the restricted
isometry property. Experiments on synthetic data, audio signal reconstruction
and image denoising demonstrate that the proposed algorithm performs comparably
to several mainstream thresholding and greedy algorithms, and it works much
faster than the ROTP-type algorithms, especially when the sparsity level of
the signal is relatively low. | arXiv |
Streaming systems are present throughout modern applications, processing
continuous data in real-time. Existing streaming languages have a variety of
semantic models and guarantees that are often incompatible. Yet all these
languages are considered "streaming" -- what do they have in common? In this
paper, we identify two general yet precise semantic properties: streaming
progress and eager execution. Together, they ensure that streaming outputs are
deterministic and kept fresh with respect to streaming inputs. We formally
define these properties in the context of Flo, a parameterized streaming
language that abstracts over dataflow operators and the underlying structure of
streams. It leverages a lightweight type system to distinguish bounded streams,
which allow operators to block on termination, from unbounded ones.
Furthermore, Flo provides constructs for dataflow composition and nested graphs
with cycles. To demonstrate the generality of our properties, we show how key
ideas from representative streaming and incremental computation systems --
Flink, LVars, and DBSP -- have semantics that can be modeled in Flo and
guarantees that map to our properties. | arXiv |
The main question of this paper is the following: how much cancellation, in
terms of $x$, can the partial sums of a $\pm 1$ multiplicative function $f$
restricted to the $k$-free integers up to $x$ exhibit? Building upon the recent paper
by Q. Liu, Acta Math. Sin. (Engl. Ser.) 39 (2023), no. 12, 2316-2328, we prove
that under the Riemann Hypothesis for quadratic Dirichlet $L$-functions, we can
get $x^{1/(k+1)}$ cancellation when $f$ is a modified quadratic Dirichlet
character, i.e., $f$ is completely multiplicative and for some quadratic
Dirichlet character $\chi$, $f(p)=\chi(p)$ for all but a finite subset of prime
numbers. This improves the conditional results by Aymone, Medeiros and the
author cf. Ramanujan J. 59 (2022), no. 3, 713-728. | arXiv |
There has been a recent surge in interest in quantum foundations coming from
incorporating ideas from general relativity and quantum gravity. In particular,
the field of indefinite causal order has emerged and is now an important
research topic in its own right. Many of the tools that we use in quantum
foundations and information are, however, totally agnostic as to the
underlying spacetime in which the quantum systems live. To give a practical
example, whenever we draw a quantum circuit we are not taking into account the
connectivity of the physical qubits which will realize this circuit. In this
work, we aim to address this limitation. In particular, we show how to extend
the formalism of process theories (a framework to study both quantum and
post-quantum theories) to incorporate a background causal structure arising
from a fixed spacetime. We discuss when processes are embeddable in spacetime
under certain constraints. To this end, we introduce the concept of
implementations of a process, which are decompositions of the process. A
process is then embeddable if one of its implementations can be embedded in
such a way that all the processes are localized and all wires follow time-like
paths. The set of all implementations of a process is a rather unwieldy object
but we show that there exists a subset with useful properties which tells us
everything we need to know about the remaining implementations and the
embeddability of a process. We call this subset the set of minimal
representatives. Future directions include defining and analysing the
compositional structure of the framework more rigorously, extending the
framework to indefinite causal structures, studying exotic causal influence,
and using the minimal representatives to probe the decompositional structure of
quantum theory and beyond. | arXiv |
Modern approaches to perform Bayesian variable selection rely mostly on the
use of shrinkage priors. That said, an ideal shrinkage prior should be adaptive
to different signal levels, ensuring that small effects are ruled out, while
keeping relatively intact the important ones. With this task in mind, we
develop the nonparametric Bayesian Lasso, an adaptive and flexible shrinkage
prior for Bayesian regression and variable selection, particularly useful when
the number of predictors is comparable or larger than the number of available
data points. We build on spike-and-slab Lasso ideas and extend them by placing
a Dirichlet Process prior on the shrinkage parameters. The result is a prior on
the regression coefficients that can be seen as an infinite mixture of Double
Exponential densities, all offering different amounts of regularization,
ensuring a more adaptive and flexible shrinkage. We also develop an efficient
Markov chain Monte Carlo algorithm for posterior inference. Through various
simulation exercises and real-world data analyses, we demonstrate that our
proposed method leads to a better recovery of the true regression coefficients,
a better variable selection, and better out-of-sample predictions, highlighting
the benefits of the nonparametric Bayesian Lasso over existing shrinkage
priors. | arXiv |
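A generative sketch of the prior described above: a truncated stick-breaking Dirichlet process over shrinkage parameters, with each regression coefficient drawn from a double-exponential (Laplace) density at its cluster's scale. The Exponential base measure and all numerical settings are placeholder assumptions; the paper's MCMC sampler is not reproduced.

```python
import numpy as np

def draw_coefficients(p, alpha=1.0, truncation=50, seed=0):
    """Draw p regression coefficients from a truncated stick-breaking DP mixture
    of Laplace densities (the nonparametric Bayesian Lasso prior, sketched)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=truncation)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])   # stick-breaking weights
    w /= w.sum()                                                # renormalize the truncation
    lam = rng.exponential(scale=1.0, size=truncation)           # cluster shrinkage parameters
    z = rng.choice(truncation, size=p, p=w)                     # cluster assignments
    beta = rng.laplace(loc=0.0, scale=1.0 / lam[z])             # coefficients
    return beta, lam[z]

beta, scales = draw_coefficients(p=200)
print("fraction of near-zero coefficients:", float(np.mean(np.abs(beta) < 0.05)))
```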
Let $X$ be a connected normal scheme of finite type over $\mathbf{Z}$, let
$G$ be a connected reductive group over $\mathbf{Q}$, and let
$\{\rho_\ell\colon\pi_1(X[1/\ell])\to G(\mathbf{Q}_\ell)\}_\ell$ be a
Frobenius-compatible collection of continuous homomorphisms indexed by the
primes. Assume $\mathrm{Img}(\rho_\ell)$ is Zariski-dense in
$G_{\mathbf{Q}_\ell}$ for all $\ell$ in a nonempty finite set $\mathcal{R}$. We
prove that, under certain hypotheses on $\mathcal{R}$ (depending only on $G$),
$\mathrm{Img}(\rho_\ell)$ is Zariski-dense in $G_{\mathbf{Q}_\ell}$ for all
$\ell$ in a set of Dirichlet density $1$. As an application, we combine this
result with a version of Hilbert's irreducibility theorem and recent work of
Klevdal--Patrikis to obtain new information about the "canonical" local systems
attached to Shimura varieties not of Abelian type. | arXiv |
A permutation code is a nonlinear code whose codewords are permutations of a
set of symbols. We consider the use of permutation codes in the deletion
channel under the symbol-invariant error model, meaning that the values
of the symbols that are not removed are not affected by the deletion. In 1992,
Levenshtein gave a construction of perfect single-deletion-correcting
permutation codes that attain the maximum code size. Furthermore, he showed in
the same paper that the set of all permutations of a given length can be
partitioned into permutation codes so constructed. This construction relies on
the binary Varshamov-Tenengolts codes. In this paper we give an independent and
more direct proof of Levenshtein's result that does not depend on the
Varshamov-Tenengolts code. Using the new approach, we devise efficient encoding
and decoding algorithms that correct one deletion. | arXiv |
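The binary Varshamov-Tenengolts codes mentioned above admit a short, standard single-deletion decoder; the sketch below illustrates only this binary ingredient, not the paper's direct construction and decoder for permutations.

```python
from itertools import product

def vt_checksum(x):
    return sum(i * b for i, b in enumerate(x, start=1))

def vt_code(n, a=0):
    """VT_a(n): all length-n binary strings with checksum congruent to a mod n+1.
    The codes VT_0(n), ..., VT_n(n) partition {0,1}^n."""
    return [x for x in product([0, 1], repeat=n) if vt_checksum(x) % (n + 1) == a]

def vt_decode_one_deletion(y, n, a=0):
    """Recover the codeword x in VT_a(n) from y, obtained from x by one deletion."""
    w = sum(y)                              # weight of the received string
    D = (a - vt_checksum(y)) % (n + 1)      # checksum deficiency
    y = list(y)
    if D <= w:                              # a 0 was deleted; D ones lie to its right
        ones_seen, pos = 0, len(y)
        for i in range(len(y) - 1, -1, -1):
            if ones_seen == D:
                break
            if y[i] == 1:
                ones_seen += 1
            pos = i
        return tuple(y[:pos] + [0] + y[pos:])
    else:                                   # a 1 was deleted; D - w - 1 zeros lie to its left
        zeros_needed = D - w - 1
        zeros_seen, pos = 0, 0
        while zeros_seen < zeros_needed:
            if y[pos] == 0:
                zeros_seen += 1
            pos += 1
        return tuple(y[:pos] + [1] + y[pos:])

# Exhaustive check for n = 8, a = 0: every single deletion is corrected.
n = 8
ok = all(vt_decode_one_deletion(x[:i] + x[i + 1:], n) == x
         for x in vt_code(n) for i in range(n))
print("all single deletions corrected:", ok)
```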
Out-of-distribution (OOD) detection is essential for ensuring the robustness
of machine learning models by identifying samples that deviate from the
training distribution. While traditional OOD detection has primarily focused on
single-modality inputs, such as images, recent advances in multimodal models
have demonstrated the potential of leveraging multiple modalities (e.g., video,
optical flow, audio) to enhance detection performance. However, existing
methods often overlook intra-class variability within in-distribution (ID)
data, assuming that samples of the same class are perfectly cohesive and
consistent. This assumption can lead to performance degradation, especially
when prediction discrepancies are uniformly amplified across all samples. To
address this issue, we propose Dynamic Prototype Updating (DPU), a novel
plug-and-play framework for multimodal OOD detection that accounts for
intra-class variations. Our method dynamically updates class center
representations for each class by measuring the variance of similar samples
within each batch, enabling adaptive adjustments. This approach allows us to
amplify prediction discrepancies based on the updated class centers, thereby
improving the model's robustness and generalization across different
modalities. Extensive experiments on two tasks, five datasets, and nine base
OOD algorithms demonstrate that DPU significantly improves OOD detection
performance, setting a new state-of-the-art in multimodal OOD detection, with
improvements of up to 80 percent in Far-OOD detection. To facilitate
accessibility and reproducibility, our code is publicly available on GitHub. | arXiv |
The computation of magnetizability tensors using gauge-including atomic
orbitals is discussed in the context of Cholesky decomposition for the
two-electron repulsion integrals with a focus on the involved doubly
differentiated integrals. Three schemes for their handling are suggested: the
first exploits the DF aspect of Cholesky decomposition, the second uses
expressions obtained by differentiating the CD expression for the unperturbed
two electron integrals, while the third addresses the issue that the first two
schemes are not able to represent the doubly differentiated integrals with
arbitrary accuracy. This scheme uses a separate Cholesky decomposition for the
cross terms in the doubly differentiated two-electron integrals. Test
calculations reveal that all three schemes are able to represent the integrals
with similar accuracy and yield indistinguishable results for the values of the
computed magnetizability tensor elements. Thus, we recommend our first scheme
which has the lowest computational cost for routine computations. The
applicability of our CD schemes is further shown in large-scale Hartree-Fock
calculations of the magnetizability tensor of coronene (C24H12) with a doubly
polarized triple-zeta basis consisting of 684 basis functions. | arXiv |
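The underlying numerical tool is the pivoted (incomplete) Cholesky decomposition of a positive semidefinite matrix; a minimal sketch on a random stand-in matrix, not on actual two-electron integrals.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8, max_vectors=None):
    """Pivoted Cholesky decomposition of a symmetric positive semidefinite
    matrix M: returns L with M ~= L @ L.T, stopping once the largest remaining
    diagonal error falls below `tol`."""
    n = M.shape[0]
    max_vectors = n if max_vectors is None else max_vectors
    L = np.zeros((n, max_vectors))
    d = np.array(np.diag(M), dtype=float)        # residual diagonal
    k = 0
    while k < max_vectors and d.max() > tol:
        p = int(np.argmax(d))                    # pivot: largest residual diagonal
        col = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        L[:, k] = col
        d -= col ** 2
        d[d < 0] = 0.0                           # guard against round-off
        k += 1
    return L[:, :k]

# Stand-in for a two-electron-integral matrix: random PSD with limited rank.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 40))
M = A @ A.T                                      # rank-40 PSD matrix
L = pivoted_cholesky(M, tol=1e-10)
print("Cholesky vectors:", L.shape[1], " max error:", float(np.abs(M - L @ L.T).max()))
```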
The outcome of continuously measuring a quantum system is a string of data
whose intricate correlation properties reflect the underlying quantum dynamics.
In this paper we study the role of these correlations in reconstructing the
probabilities of finite sequences of outcomes, the so-called empirical
distributions. Our approach is cast in terms of generic quantum instruments,
and therefore encompasses all types of sequential and continuous quantum
measurements. We also show how this specializes to important cases, such as
quantum jumps. To quantify the precise role of correlations, we introduce a
relative-entropy based measure that quantifies the range of correlations in the
string, and the influence that these correlations have in reconstructing finite
sequences. | arXiv |
Nanophotonic device design aims to optimize photonic structures to meet
specific requirements across various applications. Inverse design has unlocked
non-intuitive, high-dimensional design spaces, enabling the discovery of
high-performance devices beyond heuristic or analytic methods. The adjoint
method, which calculates gradients for all variables using just two
simulations, enables efficient navigation of this complex space. However, many
inverse-designed structures, while numerically plausible, are difficult to
fabricate and sensitive to variations, limiting their practical use. The
discrete nature of the design space, with numerous locally optimal structures, also poses significant
optimization challenges, often causing gradient-based methods to converge on
suboptimal designs. In this work, we formulate inverse design as a
fabrication-restricted, discrete, probabilistic optimization problem and
introduce BOSON-1, an end-to-end, variation-aware subspace optimization
framework to address the challenges of manufacturability, robustness, and
optimizability. To overcome optimization difficulty, we propose dense
target-enhanced gradient flows to mitigate misleading local optima and
introduce a conditional subspace optimization strategy to create
high-dimensional tunnels to escape local optima. Furthermore, we significantly
reduce the runtime associated with optimizing across exponential variation
samples through an adaptive sampling-based robust optimization, ensuring both
efficiency and variation robustness. On three representative photonic device
benchmarks, our proposed inverse design methodology BOSON-1 delivers
fabricable structures and achieves the best convergence and performance under
realistic variations, outperforming prior arts with 74.3% post-fabrication
performance. We open-source our codes at https://github.com/ScopeX-ASU/BOSON. | arXiv |
Let $G=GL_n(K)$ be the general linear group defined over an infinite field
$K$ of positive characteristic $p$ and let $\Delta(\lambda)$ be the Weyl module
of $G$ which corresponds to a partition $\lambda$. In this paper we classify
all homomorphisms $\Delta(\lambda) \to \Delta(\mu)$ when $\lambda=(a,b,1^d)$
and $\mu=(a+d,b)$, $d>1$. In particular, we show that
$Hom_G(\Delta(\lambda),\Delta(\mu))$ is nonzero if and only if $p=2$ and $a$ is
even. In this case, we show that the dimension of the homomorphism space is
equal to 1 and we provide an explicit generator whose description depends on
binary expansions of various integers. We also show that these generators in
general are not compositions of Carter-Payne homomorphisms. | arXiv |
We characterize the power of constant-depth Boolean circuits in generating
uniform symmetric distributions. Let $f\colon\{0,1\}^m\to\{0,1\}^n$ be a
Boolean function where each output bit of $f$ depends only on $O(1)$ input
bits. Assume the output distribution of $f$ on uniform input bits is close to a
uniform distribution $D$ with a symmetric support. We show that $D$ is
essentially one of the following six possibilities: (1) point distribution on
$0^n$, (2) point distribution on $1^n$, (3) uniform over $\{0^n,1^n\}$, (4)
uniform over strings with even Hamming weights, (5) uniform over strings with
odd Hamming weights, and (6) uniform over all strings. This confirms a
conjecture of Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023). | arXiv |
Worldline quantum field theory (WQFT) has proven itself a powerful tool for
classical two-body scattering calculations in general relativity. In this paper
we develop a new worldline action involving bosonic oscillators, which enables
the use of the WQFT formalism to describe massive compact bodies to all orders
in their spins. Inspired by bosonic string theory in the tensionless limit, we
augment traditional trajectory variables with bosonic oscillators capturing the
spin dependence. We show its equivalence to the covariant phase space
description of a spinning body in curved space and clarify the role of the
spin-supplementary condition in a Hamiltonian treatment. Higher-spin
Hamiltonians are classified to linear and quadratic order in curvature.
Finally, perturbative computations at 1PM order for arbitrary powers and
orientations of spin and at 2PM up to quartic spin order are performed,
recovering results from the literature. | arXiv |
In this paper, we present a telegraph diffusion model with variable exponents
for image despeckling. Moving beyond the traditional assumption of a constant
exponent in the telegraph diffusion framework, we explore three distinct
variable exponents for edge detection. All of these depend on the gray level of
the image or its gradient. We rigorously prove the existence and uniqueness of
weak solutions of our model in a functional setting and perform numerical
experiments to assess how well it can despeckle noisy gray-level images. We
consider both a range of natural images contaminated by varying degrees of
artificial speckle noise and synthetic aperture radar (SAR) images. We finally
compare our method with the nonlocal speckle removal technique and find that
our model outperforms the latter at speckle elimination and edge preservation. | arXiv |
This study sought to better understand the causes of price disparity in
cesarean sections, using newly released hospital data. Beginning January 1,
2021, Centers for Medicare and Medicaid Services (CMS) requires hospitals
functioning in the United States to publish online pricing information for
items and services these hospitals provide in a machine-readable format and a
consumer friendly shoppable format. Initial analyses of these data have shown
that the price for a given procedure can differ in a hospital and across
hospitals. The cesarean section (C-section) is one of the most common inpatient
procedures performed across all hospitals in the United States as of 2018. This
preliminary study found that for C-section procedures, pricing varied from as
little as \$162 to as high as \$115,483 for a single procedure. Overall,
indicators for quality and whether or not the hospital was a teaching hospital
were found to be statistically significant, while variables including median
income and the Gini coefficient for wealth inequality were not shown to be
statistically significant. | arXiv |
We study the problem of multi-agent multi-armed bandits with adversarial
corruption in a heterogeneous setting, where each agent accesses a subset of
arms. The adversary can corrupt the reward observations for all agents. Agents
share these corrupted rewards with each other, and the objective is to maximize
the cumulative total reward of all agents (and not be misled by the adversary).
We propose a multi-agent cooperative learning algorithm that is robust to
adversarial corruptions. For this newly devised algorithm, we demonstrate that
an adversary with an unknown corruption budget $C$ only incurs an additive
$O((L / L_{\min}) C)$ term to the standard regret of the model in
non-corruption settings, where $L$ is the total number of agents, and
$L_{\min}$ is the minimum number of agents with mutual access to an arm. As a
side-product, our algorithm also improves the state-of-the-art regret bounds
when reducing to both the single-agent and homogeneous multi-agent scenarios,
tightening multiplicative $K$ (the number of arms) and $L$ (the number of
agents) factors, respectively. | arXiv |
This paper presents for the first time an approach to minimize direct
operational costs (DOC) for all-electric aircraft during the climb phase,
introducing a time-varying cost index (CI). The CI is modeled as a dynamic
parameter commanded by Air Traffic Control (ATC), allowing the aircraft to
maintain a constant airspeed throughout the climb, while respecting the air
traffic regulations. This paper also explores the implications of a
time-varying CI on the determination of optimal airspeed and climbing time for
all-electric aircraft. Additionally, it provides the necessary equations to
calculate both the optimal climb airspeed and climb duration. The proposed
methodology has been validated through a simulated scenario that reflects
actual operational procedures. As a result, optimal values for climb airspeed,
climbing time, and energy consumption have been established, paving the way for
future applications of this methodology to advanced air mobility all-electric
vehicles. | arXiv |
A new model to follow the complete evolution of a drop in Leidenfrost state
is presented in this work. The main ingredients of the phenomenon were
considered, including: 1) the shape and weight of a sessile drop, according to
its size, compared to the capillary length, using the Young-Laplace equation;
2) the evaporation at the entire surface of the drop, due to the heat transfer
across the vapor film, to the proximity of a hot plate and to the diffusion in
air; 3) the velocity, pressure and temperature fields at the vapor film,
between the drop and the hot plate, which are recovered by means of a Hankel
transform method, being valid for any size of drops and any thickness of vapor
films (below the vapor film stability threshold); 4) an estimation of the
thermo-capillary Marangoni convection flow, without simulating numerically the
flow within the drop. The aforementioned features were addressed and
calculated, in order to include their effect within a single non-linear ODE,
describing the temporal evolution of the size of the drop, through the Bond
number. Three dimensionless parameters, relating the thermophysical properties
of the drop fluid and the surrounding air, control the development of the
phenomenon. All those properties were calculated according to the ideal gas
approximation and to widely used empirical correlations, without any fitting
parameter. The model predictions were compared against experimental results,
using different organic and inorganic compounds, for which a good agreement has
been found, when no bounce or rotation of the drop spontaneously occurs. | arXiv |
Careful design of semiconductor manufacturing equipment is crucial for
ensuring the performance, yield, and reliability of semiconductor devices.
Despite this, numerical optimization methods are seldom applied to optimize the
design of such equipment due to the difficulty of obtaining accurate simulation
models. In this paper, we address a practical and industrially relevant
electrostatic chuck (ESC) design optimization problem by proposing a novel
multi-fidelity surrogate modeling approach. The optimization aims to improve
the temperature uniformity of the wafer during the etching process by adjusting
seven parameters associated with the coolant path and embossing. Our approach
combines low-fidelity (LF) and high-fidelity (HF) simulation data to
efficiently predict spatial-field quantities, even with a limited number of
data points. We use proper orthogonal decomposition (POD) to project the
spatially interpolated HF and LF field data onto a shared latent space,
followed by the construction of a multi-fidelity kriging model to predict the
latent variables of the HF output field. In the ESC design problem, with
hundreds of data points or fewer, our approach achieves a more than 10% reduction in
prediction error compared to using kriging models with only HF or LF data.
Additionally, in the ESC optimization problem, our proposed method yields
better solutions with improvements in all of the quantities of interest, while
requiring 20% less data generation cost compared to the HF surrogate modeling
approach. | arXiv |
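A minimal single-fidelity sketch of the POD-plus-kriging surrogate described above, using a synthetic field and sklearn's Gaussian process in place of the paper's multi-fidelity kriging; all dimensions and the test field are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical training data: 30 HF simulations, 7 design parameters,
# each producing a temperature field sampled at 500 wafer locations.
n_samples, n_params, n_field = 30, 7, 500
X = rng.uniform(size=(n_samples, n_params))
grid = np.linspace(0.0, 1.0, n_field)
fields = np.array([np.sin(2 * np.pi * grid * (1 + x[0])) * x[1] + 0.1 * x[2]
                   for x in X])                      # stand-in for simulated fields

# POD: project the centered fields onto their leading spatial modes.
mean_field = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)
r = 5                                                # retained POD modes
modes = Vt[:r]                                       # (r, n_field)
latent = (fields - mean_field) @ modes.T             # (n_samples, r) coefficients

# Kriging (GP) surrogate for each latent coefficient as a function of the design.
gps = [GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
       .fit(X, latent[:, j]) for j in range(r)]

def predict_field(x_new):
    z = np.array([gp.predict(x_new.reshape(1, -1))[0] for gp in gps])
    return mean_field + z @ modes

x_test = rng.uniform(size=n_params)
print("predicted field shape:", predict_field(x_test).shape)
```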
A study of the properties of a hypergraph through the spectrum of its unified
matrix was carried out by the authors in [26]. In this paper, we introduce three
different hypergraph matrices: unified Laplacian matrix, unified signless
Laplacian matrix, and unified normalized Laplacian matrix, all defined using
the unified matrix. We show that these three matrices of a hypergraph are
respectively identical to the Laplacian matrix, signless Laplacian matrix, and
normalized Laplacian matrix of the associated graph. This allows us to use the
spectra of these hypergraph matrices as a means to connect the structural
properties of the hypergraph with those of the associated graph. Additionally,
we introduce certain hypergraph structures and invariants during this process,
and relate them to the eigenvalues of these three matrices. | arXiv |
Recognizing and identifying human locomotion is a critical step to ensuring
fluent control of wearable robots, such as transtibial prostheses. In
particular, classifying the intended locomotion mode and estimating the gait
phase are key. In this work, a novel, interpretable, and computationally
efficient algorithm is presented for simultaneously predicting locomotion mode
and gait phase. Using able-bodied (AB) and transtibial prosthesis (PR) data,
seven locomotion modes are tested including slow, medium, and fast level
walking (0.6, 0.8, and 1.0 m/s), ramp ascent/descent (5 degrees), and stair
ascent/descent (20 cm height). Overall classification accuracy was 99.1$\%$ and
99.3$\%$ for the AB and PR conditions, respectively. The average gait phase
error across all data was less than 4$\%$. Exploiting the structure of the
data, computational efficiency reached 2.91 $\mu$s per time step. The time
complexity of this algorithm scales as $O(N\cdot M)$ with the number of
locomotion modes $M$ and samples per gait cycle $N$. This efficiency and high
accuracy could accommodate a much larger set of locomotion modes ($\sim$ 700 on
Open-Source Leg Prosthesis) to handle the wide range of activities pursued by
individuals during daily living. | arXiv |
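The abstract does not spell out the classifier; the hypothetical template-matching sketch below merely illustrates a per-time-step cost of O(N*M) over M locomotion modes and N gait-phase samples, consistent with the stated complexity.

```python
import numpy as np

rng = np.random.default_rng(0)
M_modes, N_phase, n_features = 7, 100, 4

# Hypothetical templates: for each locomotion mode, the expected sensor reading
# at each of N gait-phase samples (in practice learned from AB/PR training data).
templates = rng.normal(size=(M_modes, N_phase, n_features))

def classify(sample):
    """Return (mode, gait phase in [0, 1)) of the template closest to `sample`.
    One pass over all M*N templates -> O(N*M) per time step."""
    d = np.linalg.norm(templates - sample, axis=-1)    # (M_modes, N_phase)
    mode, phase_idx = np.unravel_index(np.argmin(d), d.shape)
    return int(mode), phase_idx / N_phase

# A reading near a known template should be recognized.
true_mode, true_phase = 3, 42
sample = templates[true_mode, true_phase] + 0.01 * rng.normal(size=n_features)
print(classify(sample))   # -> (3, 0.42)
```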
We investigate the formation history of intrahalo light (IHL) using the
high-resolution (~1 kpc), large-scale (~Gpc) cosmological hydrodynamical
simulation, Horizon Run 5 (HR5). IHL particles are identified by carefully
considering both their binding energies and positions with respect to the tidal
radii of individual galaxies. By analyzing more than 1,200 galaxy groups and
clusters with $\geq 10^{13} M_{\odot}$ and tracing their individual IHL
particles back in time, we classify the origin of each IHL particle at each
epoch based on the status of the originating galaxy into three categories:
brightest halo galaxy (BHG) formation/merger, satellite galaxy stripping, and
pre-processing. Our study reveals that the IHL production through BHG
formation/merger is the predominant production channel, contributing over 60\%
of the total IHL mass across all redshifts. The second most significant IHL
production channel is pre-processing, providing more than 20\% in the final HR5
snapshot. Stripping is negligible at $z>4$ but becomes gradually more important
as halos mature at $z<4$. Finally, we verify that IHL production through the
disruption of dwarf galaxies and in-situ formation is negligible, contributing
less than ~3\% and ~0.5\% to the total IHL production, respectively. | arXiv |
We present several nonlinear wavefront sensing techniques for few-mode
sensors, all of which are empirically calibrated and agnostic to the choice of
wavefront sensor. The first class of techniques involves a straightforward
extension of the linear phase retrieval scheme to higher order; the resulting
Taylor polynomial can then be solved using the method of successive
approximations, though we discuss alternate methods such as homotopy
continuation. In the second class of techniques, a model of the WFS intensity
response is created using radial basis function interpolation. We consider both
forward models, which map phase to intensity and can be solved with nonlinear
least-squares methods such as the Levenberg-Marquardt algorithm, and backward
models, which directly map intensity to phase and do not require a
solver. We provide demonstrations for both types of techniques in simulation
using a quad-cell sensor and a photonic lantern wavefront sensor as examples.
Next, we demonstrate how the nonlinearity of an arbitrary sensor may be studied
using the method of numerical continuation, and apply this technique to both
the quad-cell sensor and a photonic lantern sensor. Finally, we briefly
consider the extension of nonlinear techniques to polychromatic sensors. | arXiv |
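As a rough illustration of the forward-model variant, the sketch below fits an RBF interpolant from phase to intensity on synthetic calibration data and inverts it with SciPy's nonlinear least-squares solver. The quad-cell-like response, the two phase modes, and the probe ranges are assumptions, not the authors' setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def true_response(phi):
    """Stand-in for the real WFS response: 2 phase modes -> 4 normalized intensities."""
    tx, ty = phi[..., 0], phi[..., 1]
    q = np.stack([(1 + np.sin(tx)) * (1 + np.sin(ty)),
                  (1 - np.sin(tx)) * (1 + np.sin(ty)),
                  (1 + np.sin(tx)) * (1 - np.sin(ty)),
                  (1 - np.sin(tx)) * (1 - np.sin(ty))], axis=-1)
    return q / q.sum(axis=-1, keepdims=True)

# Empirical calibration: probe phases, record intensities, fit an RBF forward model.
phases = rng.uniform(-1.0, 1.0, size=(500, 2))
intensities = true_response(phases)
forward = RBFInterpolator(phases, intensities, kernel='thin_plate_spline')

def residuals(phi, measured):
    return forward(phi[None, :])[0] - measured

phi_true = np.array([0.3, -0.5])
measured = true_response(phi_true)
fit = least_squares(residuals, x0=np.zeros(2), args=(measured,))
print(phi_true, fit.x)   # the fit should land close to phi_true
```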
We develop a comprehensive method to construct analytical continuum models
for moir\'e systems directly from first-principle calculations without any
parameter fitting. The core idea of this method is to interpret the terms in
the continuum model as a basis, allowing us to determine model parameters as
coefficients of this basis through Gram-Schmidt orthogonalization. We apply our
method to twisted MoTe$_2$ and WSe$_2$ with twist angles ranging from
2.13$^\circ$ to 3.89$^\circ$, producing continuum models that exhibit excellent
agreement with both energy bands and wavefunctions obtained from
first-principles calculations. We further propose a strategy to integrate out
the higher-energy degrees of freedom to reduce the number of parameters in
the model without sacrificing the accuracy for low-energy bands. Our findings
reveal that smaller twist angles typically require a larger number of
harmonics in the moir\'e potentials to accurately replicate first-principles
results. We provide parameter values for all derived continuum models,
facilitating further robust many-body calculations. Our approach is general and
applicable to any commensurate moir\'e material accessible by first-principles
calculations. | arXiv |
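A minimal sketch of the core idea, assuming synthetic Hermitian "model terms" in place of the actual continuum-model basis and a toy target Hamiltonian in place of the DFT one: orthonormalize the terms under the Frobenius inner product via Gram-Schmidt, project the target onto them, and map the coefficients back to model parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 6

def frob(a, b):
    """Frobenius (Hilbert-Schmidt) inner product <a, b> = Tr(a^dagger b)."""
    return np.trace(a.conj().T @ b)

# Hypothetical symmetric "model terms" and a synthetic target built from them
# plus a small remainder (standing in for a first-principles Hamiltonian).
basis = [rng.standard_normal((dim, dim)) for _ in range(4)]
basis = [(b + b.T) / 2 for b in basis]
true_params = np.array([1.0, -0.4, 0.25, 0.0])
H = sum(p * b for p, b in zip(true_params, basis)) + 1e-3 * np.eye(dim)

# Gram-Schmidt orthonormalization of the terms under the Frobenius inner product.
ortho, R = [], np.zeros((len(basis), len(basis)))
for j, b in enumerate(basis):
    v = b.astype(complex)
    for i, e in enumerate(ortho):
        R[i, j] = frob(e, v).real
        v = v - R[i, j] * e
    R[j, j] = np.sqrt(frob(v, v).real)
    ortho.append(v / R[j, j])

# Coefficients in the orthonormal basis, mapped back to the original terms.
c = np.array([frob(e, H).real for e in ortho])
params = np.linalg.solve(R, c)
print(np.round(params, 4))   # approximately recovers true_params
```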
Quantum memories are a crucial precondition in many protocols for processing
quantum information. A fundamental problem that illustrates this statement is
given by the task of channel discrimination, in which an unknown channel drawn
from a known random ensemble should be determined by applying it a single
time. In this paper, we characterise the quality of channel discrimination
protocols when the quantum memory, quantified by the auxiliary dimension, is
limited. This is achieved by formulating the problem in terms of separable
quantum states with additional affine constraints that are obeyed by all factors
in each separable decomposition. We discuss the computation of upper and
lower bounds to the solutions of such problems which allow for new insights
into the role of memory in channel discrimination. In addition to the
single-copy scenario, this methodological insight allows us to systematically
characterise quantum and classical memories in adaptive channel discrimination
protocols. In particular, our methods enabled us to identify channel
discrimination scenarios where classical or quantum memory is required, and to
reveal the hierarchical and non-hierarchical relationships within adaptive
channel discrimination protocols. | arXiv |
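The sketch below is not the paper's separable-state formulation; it only evaluates the standard Helstrom guessing probability for two fixed probe strategies, one without memory and one using a one-qubit auxiliary system, for an arbitrary pair of example channels. Channels and probes are assumptions and are not optimized.

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a channel given by its Kraus operators to rho."""
    return sum(k @ rho @ k.conj().T for k in kraus)

def helstrom(rho0, rho1, p0=0.5):
    """Optimal single-shot probability of guessing which of two states was prepared."""
    eig = np.linalg.eigvalsh(p0 * rho0 - (1 - p0) * rho1)
    return 0.5 * (1 + np.sum(np.abs(eig)))

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Example channel pair (an assumption, not from the paper): identity vs bit flip.
q = 0.5
chan0 = [I2]
chan1 = [np.sqrt(1 - q) * I2, np.sqrt(q) * X]

# Probe without memory: the |+> state on the channel input alone (arbitrary choice).
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_in = np.outer(plus, plus.conj())
p_no_mem = helstrom(apply_channel(chan0, rho_in), apply_channel(chan1, rho_in))

# Probe with a one-qubit memory: half of a maximally entangled state,
# the channel acting on the first qubit only.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
chan0_ext = [np.kron(k, I2) for k in chan0]
chan1_ext = [np.kron(k, I2) for k in chan1]
p_mem = helstrom(apply_channel(chan0_ext, rho_bell), apply_channel(chan1_ext, rho_bell))

print(f"success probability, no memory:    {p_no_mem:.4f}")
print(f"success probability, qubit memory: {p_mem:.4f}")
```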
We study a class of supersymmetric models where the strong CP problem is
solved through spontaneous CP violation, carried out by a complex scalar field
that determines the Yukawa couplings of the theory. Assuming that one real
component of this field - the CPon - is light, we examine the conditions under
which it provides a viable Dark Matter candidate. The CPon couplings to
fermions are largely determined by the field-dependent Yukawa interactions, and
induce couplings to gauge bosons at 1-loop that are suppressed by a special sum
rule. All couplings are suppressed by an undetermined UV scale, which needs to
exceed $10^{12}$ GeV in order to satisfy constraints from excessive stellar
cooling and rare kaon decays. The CPon mass is bounded from below by fifth-force
experiments and from above by X-ray telescopes looking for CPon decays to
photons, leaving a range roughly between 10 meV and 1 MeV. Everywhere in the
allowed parameter space the CPon can saturate the observed Dark Matter
abundance through an appropriate balance of misalignment and freeze-in
production from heavy SM fermions. | arXiv |