text | split
---|---
Autonomous vehicles require road information for their operation, usually in the form of HD maps. Since offline maps eventually become outdated or may only be
partially available, online HD map construction methods have been proposed to
infer map information from live sensor data. A key issue remains how to exploit
such partial or outdated map information as a prior. We introduce M3TR
(Multi-Masking Map Transformer), a generalist approach for HD map construction
both with and without map priors. We address shortcomings in ground truth
generation for Argoverse 2 and nuScenes and propose the first realistic
scenarios with semantically diverse map priors. Examining various query
designs, we use an improved method for integrating prior map elements into an HD
map construction model, increasing performance by +4.3 mAP. Finally, we show
that training across all prior scenarios yields a single Generalist model,
whose performance is on par with previous Expert models that can handle only
one specific type of map prior. M3TR thus is the first model capable of
leveraging variable map priors, making it suitable for real-world deployment.
Code is available at https://github.com/immel-f/m3tr | arXiv |
In this work, we propose novel offline and online Inverse Differential Game
(IDG) methods for nonlinear Differential Games (DG), which identify the cost
functions of all players from control and state trajectories constituting a
feedback Nash equilibrium. The offline approach computes the sets of all
equivalent cost function parameters that yield the observed trajectories. Our
online method is guaranteed to converge to cost function parameters of the
offline calculated sets. For both methods, we additionally analyze the case where the cost and value functions are not given by known parameterized structures, and approximation structures, such as polynomial basis functions, need to be chosen. Here, we find that an appropriate selection of the cost function structures is required to guarantee a bounded error between the observed trajectories and those resulting from the offline and online IDG solutions. The cost function structures must be aligned with the assumed value function structures such that the coupled Hamilton-Jacobi-Bellman equations can be fulfilled. Finally,
the theoretical results and the effectiveness of our new methods are
illustrated with a numerical example. | arXiv |
In this paper, we investigate the parking process on a uniform random rooted
binary tree with $n$ vertices. Viewing each vertex as a single parking space, a
random number of cars independently arrive at and attempt to park on each
vertex one at a time. If a car attempts to park on an occupied vertex, it
traverses the unique path on the tree towards the root, parking at the first
empty vertex it encounters. If this is not possible, the car exits the tree at
the root.
We shall investigate the limit of the probability of the event that all cars
can park when $\lfloor \alpha n \rfloor$ cars arrive, with $\alpha > 0$. We
find that there is a phase transition at $\alpha_c = 2 - \sqrt{2}$, with this
event having positive limiting probability when $\alpha < \alpha_c$, and the
probability tending to 0 as $n \rightarrow \infty$ for $\alpha > \alpha_c$.
This is analogous to the work done by Goldschmidt and Przykucki
(arXiv:1610.08786) and Goldschmidt and Chen (arXiv:1911.03816), while agreeing
with the general result proven by Curien and H\'enard (arXiv:2205.15932). | arXiv |
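The parking dynamics described above translate directly into a short simulation. The following Python sketch assumes the tree is given as a parent array (with the root as its own parent) and that cars arrive at independently, uniformly chosen vertices; generating a uniform random rooted binary tree itself is outside the scope of this sketch, and the example tree below is only a hypothetical stand-in.

```python
import random

def all_cars_park(parent, num_cars, rng=random):
    """Simulate the parking process on a rooted tree.

    parent[v] is the parent of vertex v; the root r satisfies parent[r] == r.
    Each car arrives at a uniformly chosen vertex (an assumption of this sketch)
    and drives towards the root until it finds an empty vertex.
    Returns True if every car parks, False if some car exits at the root.
    """
    n = len(parent)
    occupied = [False] * n
    for _ in range(num_cars):
        v = rng.randrange(n)           # arrival vertex of this car
        while occupied[v]:
            if parent[v] == v:         # the root is occupied: the car exits
                return False
            v = parent[v]              # drive one step towards the root
        occupied[v] = True             # park at the first empty vertex
    return True

# Example on a path (caterpillar) tree, used only as a placeholder tree,
# with alpha = 0.5 and n = 100 vertices.
n, alpha, trials = 100, 0.5, 1000
parent = [0] + list(range(n - 1))      # parent[0] = 0 (root), parent[v] = v - 1
hits = sum(all_cars_park(parent, int(alpha * n)) for _ in range(trials))
print(f"estimated P(all cars park) = {hits / trials:.3f}")
```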
We study the problem of clock synchronization in a networked system with
arbitrary starts for all nodes. We consider a synchronous network of $n$ nodes,
where each node has a local clock that is an integer counter. Eventually,
clocks must all be equal and increase by one in each round modulo some period
$P$. The purpose of this paper is to study whether clock synchronization can be
achieved with bounded memory, that is, every node maintains a number of states
that does not depend on the network size. In particular, we are interested in
clock synchronization algorithms that work in dynamic networks, i.e., that tolerate communication links continuously failing and coming up.
We first focus on self-stabilizing solutions for clock synchronization, and
prove that there is no such algorithm that is bounded memory, even in the case
of static networks. More precisely, we show a lower bound of $n+1$ on the number of states required at each node to achieve clock synchronization in static strongly connected networks with at most $n$ nodes, and derive a lower bound of $n-2$
rounds on synchronization time, in the worst case. We then prove that, when the
self-stabilizing requirement is removed, the impossibility of clock
synchronization with bounded memory still holds in the dynamic setting: every
solution for the clock synchronization problem in dynamic networks with at most
$n$ nodes requires each node to have $\Omega(\log n)$ states. | arXiv |
We consider the Hospital/Residents (HR) problem in the presence of ties in
preference lists. Among the three notions of stability, viz. weak, strong, and
super stability, we focus on the notion of strong stability. Strong stability
has many desirable properties both theoretically and practically; however, its
existence is not guaranteed.
In this paper, our objective is to optimally increase the quotas of hospitals
to ensure that a strongly stable matching exists in the modified instance.
First, we show that if ties are allowed in residents' preference lists, it may
not be possible to augment the hospital quotas to obtain an instance that
admits a strongly stable matching. When residents' preference lists are strict,
we explore two natural optimization criteria: (i) minimizing the maximum
capacity increase for any hospital (MINMAX), and (ii) minimizing the total
capacity increase across all hospitals (MINSUM). We show that the MINMAX
problem is NP-hard in general. When hospital preference lists can have ties of
length at most $\ell+1$, we give a polynomial-time algorithm that increases
each hospital's quota by at most $\ell$, ensuring the resulting instance admits
a strongly stable matching.
We show that the MINSUM problem admits a polynomial-time algorithm. However,
when each hospital incurs a cost for each capacity increase, the problem
becomes NP-hard, even if the costs are 0 or 1. This also implies that the
problem cannot be approximated within any multiplicative factor. We also consider a
related problem under the MINSUM objective. Given an HR instance and a forced
pair $(r^*,h^*)$, the goal is to decide if it is possible to increase hospital
quotas (if necessary) to obtain a strongly stable matching that matches the
pair $(r^*,h^*)$. We show a polynomial-time algorithm for this problem. | arXiv |
We produce twisted derived equivalences between torsors under abelian
varieties and their moduli spaces of simple semi-homogeneous sheaves. We also
establish the natural converse to this result and show that a large class of
twisted derived equivalences, including all derived equivalences, between
torsors arise in this way. As corollaries, we obtain partial extensions of the
usual derived equivalence criterion for abelian varieties established by Orlov
and Polishchuk. | arXiv |
We consider $\mathbb{Z}_q$-valued clock models on a regular tree, for general
classes of ferromagnetic nearest neighbor interactions which have a discrete
rotational symmetry. It has been proved recently that, at strong enough
coupling, families of homogeneous Markov chain Gibbs states $\mu_A$ coexist
whose single-site marginals concentrate on $A\subset \mathbb{Z}_q$, and which
are not convex combinations of each other [AbHeKuMa24]. In this note, we aim at
a description of the extremal decomposition of $\mu_A$ for $|A|\geq 2$ into all
extremal Gibbs measures, which may be spatially inhomogeneous. First, we show
that in regimes of very strong coupling, $\mu_A$ is not extremal. Moreover,
$\mu_A$ possesses a single-site reconstruction property which holds for spin
values sent from the origin to infinity, when these initial values are chosen
from $A$. As our main result, we show that $\mu_A$ decomposes into uncountably
many extremal inhomogeneous states. The proof is based on multi-site
reconstruction which allows us to derive concentration properties of branch
overlaps. Our method is based on a new good site/bad site decomposition adapted
to the $A$-localization property, together with a coarse graining argument in
local state space. | arXiv |
Many magnetic white dwarfs exhibit a polarised spectrum that periodically
varies as the star rotates because the magnetic field is not symmetric about
the rotation axis. In this work, we report the discovery that while weakly
magnetic white dwarfs of all ages with $M < 1\,M_\odot$ show polarimetric variability
with a period between hours and several days, the large majority of magnetic
white dwarfs in the same mass range with cooling ages older than 2 Gyr and
field strengths > 10 MG show little or no polarimetric variability. This could
be interpreted as extremely slow rotation, but a lack of known white dwarfs
with measured periods longer than two weeks means that we do not see white
dwarfs slowing their rotation. We therefore suggest a different interpretation:
old strongly magnetic white dwarfs do not vary because their fields are roughly
symmetric about the rotation axes. Symmetry may either be a consequence of
field evolution or a physical characteristic intrinsic to the way strong fields
are generated in older stars. Specifically, a strong magnetic field could
distort the shape of a star, forcing the principal axis of maximum inertia away
from the spin axis. Eventually, as a result of energy dissipation, the magnetic
axis will align with the angular momentum axis. We also find that the
higher-mass strongly magnetised white dwarfs, which are likely the products of
the merging of two white dwarfs, may appear as either polarimetrically variable
or constant. This may be the symptom of two different formation channels or the
consequence of the fact that a dynamo operating during a merger may produce
diverse magnetic configurations. Alternatively, the massive white dwarfs with
constant polarisation may be rotating with periods much shorter than the
typical exposure times of the observations. | arXiv |
Super $L_\infty$-algebras unify extended super-symmetry with rational
classifying spaces for higher flux densities: The super-invariant super-fluxes
which control super $p$-branes and their supergravity target super-spaces are,
together with their (non-linear) Bianchi identities, neatly encoded in
(non-abelian) super-$L_\infty$ cocycles. These are the rational shadows of
flux-quantization laws (in ordinary cohomology, K-theory, Cohomotopy, iterated
K-theory, etc).
We first review, in streamlined form while filling some previous gaps,
double-dimensional reduction/oxidation and 10D superspace T-duality along
higher-dimensional super-tori. We do so tangent super-space wise, by viewing it
as an instance of adjunctions (dualities) between super-$L_\infty$-extensions
and -cyclifications, applied to the avatar super-flux densities of 10D
supergravity. In particular, this yields a derivation, at the rational level,
of the traditional laws of "topological T-duality" from the super-$L_\infty$
structure of type II superspace. At this level, we also discuss a higher
categorical analog of T-duality involving M-branes.
Then, by considering super-space T-duality along all 1+9 spacetime dimensions
while retaining the 11th dimension as in F-theory, we find the M-algebra
appearing as the complete brane-charge extension of the fully
T-doubled/correspondence super-spacetime. On this backdrop, we recognize the
"decomposed" M-theory 3-form on the "hidden M-algebra" as an M-theoretic lift
of the Poincar\'e super 2-form that controls superspace T-duality as the
integral kernel of the super Fourier-Mukai transform. This provides the
super-space structure of an M-theory lift of the doubled/correspondence space
geometry, which controls T-duality. | arXiv |
A well-known result of Shalom says that lattices in SO$(n,1)$ are $L^p$
measure equivalent for all $p<n-1$. His proof actually yields the following
stronger statement: the natural coupling resulting from a suitable choice of
fundamental domains from a uniform lattice $\Lambda$ to a uniform one $\Gamma$
is $(L^{\infty},L^p)$. Moreover, the fundamental domain of $\Gamma$ is
contained in a union of finitely many translates of the fundamental domain of
$\Lambda$. The purpose of this note is to prove a converse statement. More
generally, it is proved that if an ME-coupling from a non-hyperbolic group
$\Lambda$ to a hyperbolic group $\Gamma$ is $(L^{\infty},L^p)$ and the
fundamental domain of $\Gamma$ is contained in a union of finitely many
translates of the fundamental domain of $\Lambda$, then $p$ must be less than
some $p_0$ only depending on $\Gamma$. | arXiv |
A central computational task in database theory, finite model theory, and
computer science at large is the evaluation of a first-order sentence on a
finite structure. In the context of this task, the \emph{width} of a sentence,
defined as the maximum number of free variables over all subformulas, has been
established as a crucial measure, where minimizing the width of a sentence (while
retaining logical equivalence) is considered highly desirable. An
undecidability result rules out the possibility of an algorithm that, given a
first-order sentence, returns a logically equivalent sentence of minimum width;
this result motivates the study of width minimization via syntactic rewriting
rules, which is this article's focus. For a number of common rewriting rules
(which are known to preserve logical equivalence), including rules that allow
for the movement of quantifiers, we present an algorithm that, given a positive
first-order sentence $\phi$, outputs the minimum-width sentence obtainable from
$\phi$ via application of these rules. We thus obtain a complete algorithmic
understanding of width minimization up to the studied rules; this result is the
first one -- of which we are aware -- that establishes this type of
understanding in such a general setting. Our result builds on the theory of
term rewriting and establishes an interface among this theory, query
evaluation, and structural decomposition theory. | arXiv |
We evaluate the consistency of hadronic interaction models in the CORSIKA
simulation package with publicly available fluorescence telescope data from the
Pierre Auger Observatory. By comparing the first few central moments of the
extended air shower depth maximum distributions, as extracted from measured
events, to those predicted by the best-fit inferred compositions, we derive a
statistical measure of the consistency of a given hadronic model with data. To
mitigate possible systematic biases, we include all primaries up to iron,
compensate for the differences between the measured and simulated energy
spectra of cosmic rays and account for other known systematic effects.
Additionally, we study the effects of including higher central moments in the
fit and project our results to larger statistics. | arXiv |
An angular analysis of the $B_s^0 \rightarrow \phi e^+e^-$ decay is performed
using the proton-proton collision dataset collected between 2011 and 2018 by
the LHCb experiment, corresponding to an integrated luminosity of $9\,{\rm
fb}^{-1}$ at centre-of-mass energies of 7, 8 and $13\,{\rm TeV}$. The analysis
is performed in the very low dielectron invariant mass-squared region between
$0.0009$ and $0.2615\,{\rm GeV}^2\!/c^4$. The longitudinal polarisation
fraction of the $\phi$ meson is measured to be less than $11.5\%$ at $90\%$
confidence level. The $A_{\mathrm{T}}^{\mathcal{R}e C\!P}$ observable, which is
related to the lepton forward-backward asymmetry, is measured to be $0.116 \pm
0.155 \pm 0.006$, where the first uncertainty is statistical and the second
systematic. The transverse asymmetries, $A_{\mathrm{T}}^{(2)}$ and
$A_{\mathrm{T}}^{\mathcal{I}m C\!P}$, which are sensitive to the virtual
photon polarisation, are found to be $-0.045 \pm 0.235 \pm 0.014$ and $0.002
\pm 0.247 \pm 0.016$, respectively. The results are consistent with Standard
Model predictions. | arXiv |
Large language models (LLMs) and LLM-based Agents have been applied to fix
bugs automatically, demonstrating their capability to address software defects by engaging in development environment interaction, iterative validation, and code modification. However, systematic analysis of these agent and non-agent systems remains limited, particularly regarding performance variations among
top-performing ones. In this paper, we examine seven proprietary and
open-source systems on the SWE-bench Lite benchmark for automated bug fixing.
We first assess each system's overall performance, noting instances solvable by
all or none of these systems, and explore why some instances are uniquely solved
by specific system types. We also compare fault localization accuracy at file
and line levels and evaluate bug reproduction capabilities, identifying
instances solvable only through dynamic reproduction. Through this analysis, we conclude that further optimization is needed in both the LLM itself and the design of the agentic flow to improve the effectiveness of agents in bug fixing. | arXiv |
Can outreach inspire and lead to research and vice versa? In this work, we
introduce our approach to the gamification of research in mathematics and
computer science through three illustrative examples. We discuss our primary
motivations and provide insights into what makes our proposed gamification
effective for three research topics in discrete and computational geometry and
topology: (1) DominatriX, an art gallery problem involving polyominoes with
rooks and queens; (2) Cubical Sliding Puzzles, an exploration of the discrete
configuration spaces of sliding puzzles on the $d$-cube with topological
obstructions; and (3) The Fence Challenge, a participatory isoperimetric
problem based on polyforms. Additionally, we report on the collaborative
development of the game Le Carr\'e du Diable, inspired by The Fence Challenge
and created during the workshop Let's talk about outreach!, held in October
2022 in Les Diablerets, Switzerland. All of our outreach encounters and
creations are designed and curated with an inclusive culture and a strong
commitment to welcoming the most diverse audience possible. | arXiv |
The symmetrized Asymptotic Mean Value Laplacian $\tilde{\Delta}$, obtained as
limit of approximating operators $\tilde{\Delta}_r$, is an extension of the
classical Euclidean Laplace operator to the realm of metric measure spaces. We
show that, as $r \downarrow 0$, the operators $\tilde{\Delta}_r$ eventually
admit isolated eigenvalues defined via a min-max procedure on any compact locally
Ahlfors regular metric measure space. Then we prove $L^2$ and spectral
convergence of $\tilde{\Delta}_r$ to the Laplace--Beltrami operator of a
compact Riemannian manifold, imposing Neumann conditions when the manifold has
a non-empty boundary. | arXiv |
Spatio-temporal predictive learning is a self-supervised learning paradigm
that enables models to identify spatial and temporal patterns by predicting
future frames based on past frames. Traditional methods, which use recurrent
neural networks to capture temporal patterns, have proven their effectiveness
but come with high system complexity and computational demand. Convolutions
could offer a more efficient alternative but are limited by their
characteristic of treating all previous frames equally, resulting in poor
temporal characterization, and by their local receptive field, limiting the
capacity to capture distant correlations among frames. In this paper, we
propose STLight, a novel method for spatio-temporal learning that relies solely
on channel-wise and depth-wise convolutions as learnable layers. STLight
overcomes the limitations of traditional convolutional approaches by
rearranging spatial and temporal dimensions together, using a single
convolution to mix both types of features into a comprehensive spatio-temporal
patch representation. This representation is then processed in a purely
convolutional framework, capable of focusing simultaneously on the interaction
among near and distant patches, and subsequently allowing for efficient
reconstruction of the predicted frames. Our architecture achieves
state-of-the-art performance on STL benchmarks across different datasets and
settings, while significantly improving computational efficiency in terms of
parameters and computational FLOPs. The code is publicly available. | arXiv |
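As a rough illustration of purely convolutional spatio-temporal mixing of the kind described above, the PyTorch sketch below folds the temporal dimension into the channel dimension, applies a depth-wise convolution for per-channel spatial mixing, and then a 1x1 channel-wise convolution for cross-channel/temporal mixing. It is a schematic stand-in under our own assumptions, not the actual STLight architecture or its published hyperparameters.

```python
import torch
import torch.nn as nn

class SpatioTemporalMixer(nn.Module):
    """Toy depth-wise + channel-wise convolutional mixer (illustrative only)."""

    def __init__(self, num_frames: int, channels: int, kernel_size: int = 7):
        super().__init__()
        fused = num_frames * channels                    # time folded into channels
        self.depthwise = nn.Conv2d(fused, fused, kernel_size,
                                   padding=kernel_size // 2, groups=fused)
        self.pointwise = nn.Conv2d(fused, fused, kernel_size=1)  # channel/temporal mixing
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        x = x.reshape(b, t * c, h, w)                    # spatio-temporal patch representation
        x = self.act(self.depthwise(x))                  # spatial mixing, per channel
        x = self.pointwise(x)                            # mixing across channels and frames
        return x.reshape(b, t, c, h, w)

frames = torch.randn(2, 4, 3, 32, 32)                    # (B, T, C, H, W) dummy past frames
mixer = SpatioTemporalMixer(num_frames=4, channels=3)
print(mixer(frames).shape)                               # torch.Size([2, 4, 3, 32, 32])
```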
In many situations humans have to reason with inconsistent knowledge. These
inconsistencies may occur due to not fully reliable sources of information. In
order to reason with inconsistent knowledge, it is not possible to view a set
of premisses as absolute truths as is done in predicate logic. Viewing the set
of premisses as a set of assumptions, however, it is possible to deduce useful
conclusions from an inconsistent set of premisses. In this paper a logic for
reasoning with inconsistent knowledge is described. This logic is a
generalization of the work of N. Rescher [15]. In the logic a reliability
relation is used to choose between incompatible assumptions. These choices are
only made when a contradiction is derived. As long as no contradiction is
derived, the knowledge is assumed to be consistent. This makes it possible to
define an argumentation-based deduction process for the logic. For the logic, a semantics based on the ideas of Y. Shoham [22, 23] is defined. It turns out that this semantics is a preferential semantics according to the definition of S. Kraus, D. Lehmann and M. Magidor [12]. Therefore, the logic is a logic of system P and possesses all the properties of an ideal non-monotonic
logic. | arXiv |
Obtaining quantum-grade rare-earth-doped oxide thin films that can be integrated with optical cavities and microwave resonators is of great interest for the development of scalable quantum devices. Among the different growth
methods, Chemical Vapour Deposition (CVD) offers high flexibility and has
demonstrated the ability to produce oxide films hosting rare-earth ions with
narrow linewidths. However, growing epitaxial films directly on silicon is
challenging by CVD due to the formation of a native amorphous oxide layer at the interface. In this manuscript, we investigate the CVD growth of erbium-doped
yttrium oxide (Er:Y2O3) thin films on different substrates, including silicon,
sapphire, quartz, and yttria-stabilized zirconia (YSZ). Alternatively, growth was
also attempted on an epitaxial Y2O3 template layer on Si (111) prepared by
molecular beam epitaxy (MBE) in order to circumvent the issue of the amorphous
interlayer. We found that the substrate impacts the film morphology and the
crystalline orientations, with different textures observed for the CVD film on
the MBE-oxide/Si template (111) and epitaxial growth on YSZ (001). In terms of
optical properties, Er3+ ions exhibit visible and IR emission features that are
comparable for all samples, indicating a high-quality local crystalline
environment regardless of the substrate. Our approach opens interesting
prospects to integrate such films into scalable devices for optical quantum
technologies. | arXiv |
Top-quark pair production is observed in lead-lead (Pb+Pb) collisions at
$\sqrt{s_\mathrm{NN}}=5.02$ TeV at the Large Hadron Collider with the ATLAS
detector. The data sample was recorded in 2015 and 2018, amounting to an
integrated luminosity of 1.9 nb$^{-1}$. Events with exactly one electron and
one muon and at least two jets are selected. Top-quark pair production is
measured with an observed (expected) significance of 5.0 (4.1) standard
deviations. The measured top-quark pair production cross-section is
$\sigma_{t\bar{t}} =
3.6\;^{+1.0}_{-0.9}\;\mathrm{(stat.)}\;^{+0.8}_{-0.5}\;\mathrm{(syst.)}
~\mathrm{\mu b}$, with a total relative uncertainty of 31%, and is consistent
with theoretical predictions using a range of different nuclear parton
distribution functions. The observation of this process consolidates the
evidence of the existence of all quark flavors in the pre-equilibrium stage of
the quark-gluon plasma at very high energy densities, similar to the conditions
present in the early universe. | arXiv |
Text-to-image generation and text-guided image manipulation have received
considerable attention in the field of image generation tasks. However, the
mainstream evaluation methods for these tasks have difficulty in evaluating
whether all the information from the input text is accurately reflected in the
generated images, and they mainly focus on evaluating the overall alignment
between the input text and the generated images. This paper proposes new
evaluation metrics that assess the alignment between input text and generated
images for every individual object. Firstly, according to the input text,
ChatGPT is utilized to produce questions for the generated images. After that, we use Visual Question Answering (VQA) to measure the relevance of the generated
images to the input text, which allows for a more detailed evaluation of the
alignment compared to existing methods. In addition, we use Non-Reference Image
Quality Assessment (NR-IQA) to evaluate not only the text-image alignment but
also the quality of the generated images. Experimental results show that our
proposed evaluation approach is a superior metric that can simultaneously assess finer text-image alignment and image quality, while allowing the relative weighting of these criteria to be adjusted. | arXiv |
We construct a holographic model to study the striped superconductor on ionic
lattices. This model features a phase diagram with three distinct phases,
namely the charge density wave (CDW) phase, ordinary superconducting phase (SC)
and the striped superconducting phase (SSC). The effect of the ionic lattices
on the phase diagram is investigated in detail. First, due to the periodic
nature of the background, different types of CDW solutions can be found below
the critical temperature. Furthermore, with the increase of the lattice
amplitude, these solutions are locked into different commensurate states. Second,
we find that the critical temperature of CDW phase decreases with the increase
of the lattice amplitude, while that of the SC phase increases. Additionally,
the background solutions are obtained for different phases, and it is verified
that the SSC phase has the lowest free energy among all three phases. | arXiv |
During the past decade, Deep Neural Networks (DNNs) proved their value on a
large variety of subjects. However, despite their high value and public
accessibility, the protection of the intellectual property of DNNs is still an
issue and an emerging research field. Recent works have successfully extracted
fully-connected DNNs using cryptanalytic methods in hard-label settings,
proving that it was possible to copy a DNN with high fidelity, i.e., high
similitude in the output predictions. However, the current cryptanalytic
attacks cannot target complex, i.e., not fully connected, DNNs and are limited
to special cases of neurons present in deep networks.
In this work, we introduce a new end-to-end attack framework designed for
model extraction of embedded DNNs with high fidelity. We describe a new
black-box side-channel attack which splits the DNN into several linear parts for
which we can perform cryptanalytic extraction and retrieve the weights in
hard-label settings. With this method, we are able to adapt cryptanalytic
extraction, for the first time, to non-fully connected DNNs, while maintaining
a high fidelity. We validate our contributions by targeting several
architectures implemented on a microcontroller unit, including a Multi-Layer
Perceptron (MLP) of 1.7 million parameters and a shortened MobileNetv1. Our
framework successfully extracts all of these DNNs with high fidelity (88.4% for
the MobileNetv1 and 93.2% for the MLP). Furthermore, we use the stolen model to
generate adversarial examples and achieve close to white-box performance on the
victim's model (95.8% and 96.7% transfer rate). | arXiv |
Let $\overline X$ be a smooth rigid variety over $C=\mathbb C_p$ admitting a
lift $X$ over $B_{dR}^+$. In this paper, we use the stacky language to prove a
nilpotent $p$-adic Riemann-Hilbert correspondence. After introducing the moduli
stack of $\mathbb B^+_{dR}$-local systems and $t$-connections, we prove that
there is an equivalence of the nilpotent locus of the two stacks: $RH^0:LS^0_X
\to tMIC^0_X$, where $LS^0_X$ is the stack of nilpotent $\mathbb
B^+_{dR}$-local systems on $\overline X_{1,v}$ and $tMIC^0_X$ is the stack of
$\mathcal{O}_X$-bundles with integrable $t$-connection on $X_{et}$. | arXiv |
Multimodal learning, which involves integrating information from various
modalities such as text, images, audio, and video, is pivotal for numerous
complex tasks like visual question answering, cross-modal retrieval, and
caption generation. Traditional approaches rely on modality-specific encoders
and late fusion techniques, which can hinder scalability and flexibility when
adapting to new tasks or modalities. To address these limitations, we introduce
a novel framework that extends the concept of task reformulation beyond natural
language processing (NLP) to multimodal learning. We propose to reformulate
diverse multimodal tasks into a unified next-frame prediction problem, allowing
a single model to handle different modalities without modality-specific
components. This method treats all inputs and outputs as sequential frames in a
video, enabling seamless integration of modalities and effective knowledge
transfer across tasks. Our approach is evaluated on a range of tasks, including
text-to-text, image-to-text, video-to-video, video-to-text, and audio-to-text,
demonstrating the model's ability to generalize across modalities with minimal
adaptation. We show that task reformulation can significantly simplify
multimodal model design across various tasks, laying the groundwork for more
generalized multimodal foundation models. | arXiv |
Various technologies, including computer vision models, are employed for the
automatic monitoring of manual assembly processes in production. These models
detect and classify events such as the presence of components in an assembly
area or the connection of components. A major challenge with detection and
classification algorithms is their susceptibility to variations in
environmental conditions and unpredictable behavior when processing objects
that are not included in the training dataset. As it is impractical to include all possible objects in the training sample, an alternative solution is necessary.
This study proposes a model that simultaneously performs classification and
anomaly detection, employing metric learning to generate vector representations
of images in a multidimensional space, followed by classification using
cross-entropy. For experimentation, a dataset of over 327,000 images was
prepared. Experiments were conducted with various computer vision model
architectures, and the outcomes of each approach were compared. | arXiv |
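The combination of metric learning and cross-entropy classification described above can be sketched generically as follows: a projection head produces normalized embeddings, a linear classifier is trained with cross-entropy on top of them, and at inference the distance to the nearest class centroid serves as an anomaly score. This is a minimal sketch under our own assumptions (backbone features, dimensions, and centroids are hypothetical), not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingClassifier(nn.Module):
    """Joint embedding + classification head (illustrative only)."""

    def __init__(self, backbone_dim: int = 512, embed_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.projector = nn.Linear(backbone_dim, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, features: torch.Tensor):
        z = F.normalize(self.projector(features), dim=-1)   # metric-learning embedding
        return z, self.classifier(z)                        # logits for cross-entropy

def anomaly_scores(embeddings: torch.Tensor, class_centroids: torch.Tensor) -> torch.Tensor:
    # Distance to the nearest (normalized) class centroid: large => likely anomaly.
    d = torch.cdist(embeddings, F.normalize(class_centroids, dim=-1))
    return d.min(dim=1).values

model = EmbeddingClassifier()
z, logits = model(torch.randn(4, 512))     # dummy backbone features
centroids = torch.randn(10, 128)           # would be estimated from training data
print(logits.shape, anomaly_scores(z, centroids))
```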
In the past few years, Artificial Intelligence (AI)-based weather forecasting
methods have widely demonstrated strong competitiveness among the weather
forecasting systems. However, these methods are insufficient for
high-spatial-resolution short-term nowcasting within 6 hours, which is crucial
for warning of short-duration, mesoscale, and small-scale weather events.
Geostationary satellite remote sensing provides detailed, all-day observations at high spatio-temporal resolution, which can address the above limitations of existing methods. Therefore, this paper proposes an advanced data-driven thermal infrared cloud image forecasting model, "DaYu." Unlike existing data-driven
weather forecasting models, DaYu is specifically designed for geostationary
satellite observations, with a temporal resolution of 0.5 hours and a spatial
resolution of ${0.05}^\circ$ $\times$ ${0.05}^\circ$. DaYu is based on a
large-scale transformer architecture, which enables it to capture fine-grained
cloud structures and learn fast-changing spatio-temporal evolution features
effectively. Moreover, its attention mechanism design achieves a balance in
computational complexity, making it practical for applications. DaYu not only achieves accurate forecasts with a correlation coefficient higher than 0.9 up to 3 hours, higher than 0.8 up to 6 hours, and higher than 0.7 up to 12 hours, but also
detects short-duration, mesoscale, and small-scale weather events with enhanced
detail, effectively addressing the shortcomings of existing methods in
providing detailed short-term nowcasting within 6 hours. Furthermore, DaYu has
significant potential in short-term climate disaster prevention and mitigation. | arXiv |
Blazars are a subclass of active galactic nuclei (AGNs) with relativistic
jets pointing toward the observer. They are notable for their flux variability
at all observed wavelengths and timescales. Together with simultaneous
measurements at lower energies, the very-high-energy (VHE) emission observed
during blazar flares may be used to probe the population of accelerated
particles. However, optimally triggering observations of blazar high states can
be challenging. Notable examples include identifying a flaring episode in real
time and predicting VHE flaring activity based on lower energy observables. For
this purpose, we have developed a novel deep learning analysis framework, based
on data-driven anomaly detection techniques. It is capable of detecting various
types of anomalies in real-world, multiwavelength light curves, ranging from
clear high states to subtle correlations across bands. Based on unsupervised
anomaly detection and clustering methods, we differentiate source variability
from noisy background activity, without the need for a labeled training dataset
of flaring states. The framework incorporates measurement uncertainties and is
robust to data quality challenges, such as varying cadences and
observational gaps. We evaluate our approach using both historical data and
simulations of blazar light curves in two energy bands, corresponding to
sources observable with the Fermi Large Area Telescope, and the upcoming
Cherenkov Telescope Array Observatory (CTAO). In a statistical analysis, we
show that our framework can reliably detect known historical flares. | arXiv |
In this paper, we analyse an extension of the children's higher-or-lower
number guessing game with two guessing players, where players alternate
guessing a secret integer between 1 and n, and it is revealed whether these
guesses are higher or lower than the secret number, with the first player to
guess the number being the loser. We describe and prove the solution when both
players are rational, which involves different guessing strategies dependent on
the value of n modulo 4. We then consider the case where one player is not
rational but instead makes all their guesses uniformly at random, while the
other player plays to exploit this. We show that, in this case, the probability
that the exploitative player wins approaches a constant (approximately 0.599)
as n increases, and that the numbers 2 and n-1 are always optimal guesses for
them. | arXiv |
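The exploitative player's optimal win probability is straightforward to compute with a dynamic program for small n. The sketch below makes several assumptions of its own that may differ from the paper's exact setup: the secret is uniform on {1, ..., n}, the random player guesses uniformly among the values still consistent with the higher/lower answers, and the exploitative player moves first.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(m: int, exploiter_to_move: bool) -> float:
    """Probability that the exploitative player eventually wins when m
    consecutive values remain possible (the secret is uniform among them).
    Guessing the secret loses; otherwise the interval shrinks to one side."""
    if m == 0:
        return 0.0  # zero-weight branch, included so the recursion is total
    outcomes = []
    for k in range(1, m + 1):                     # guess the k-th smallest remaining value
        hit = 0.0 if exploiter_to_move else 1.0   # hitting the secret loses for the mover
        left = (k - 1) / m * win_prob(k - 1, not exploiter_to_move)
        right = (m - k) / m * win_prob(m - k, not exploiter_to_move)
        outcomes.append(hit / m + left + right)
    if exploiter_to_move:
        return max(outcomes)                      # optimal guess
    return sum(outcomes) / m                      # uniformly random guess

N = 500
for m in range(1, N + 1):                         # fill the cache bottom-up (keeps recursion shallow)
    win_prob(m, True)
    win_prob(m, False)
print([round(win_prob(n, True), 4) for n in (10, 100, N)])
```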
Important advances have recently been made in the search for materials with
complex multi-phase landscapes that host photoinduced metastable collective
states with exotic functionalities. In almost all cases so far, the desired
phases are accessed by exploiting light-matter interactions via the imaginary
part of the dielectric function through above-bandgap or resonant mode
excitation. Nonresonant Raman excitation of coherent modes has been
experimentally observed and proposed for dynamic material control, but the
resulting atomic excursion has been limited to perturbative levels. Here, we
demonstrate that it is possible to overcome this challenge by employing
nonresonant ultrashort pulses with low photon energies well below the bandgap.
Using mid-infrared pulses, we induce ferroelectric reversal in lithium niobate
and phase switching in tin selenide and characterize the large-amplitude mode
displacements through femtosecond Raman scattering, second harmonic generation,
and x-ray diffraction. This approach, validated by first-principles
calculations, defines a novel method for synthesizing hidden phases with unique
functional properties and manipulating complex energy landscapes at reduced
energy consumption and ultrafast speeds. | arXiv |
Longitudinal analyses are increasingly used in clinical studies as they allow
the study of subtle changes over time within the same subjects. In most of
these studies, it is necessary to align all the images studied to a common
reference by registering them to a template. In the study of white matter using
the recently developed fixel-based analysis (FBA) method, this registration is
important, in particular because the fiber bundle cross-section metric is a
direct measure of this registration. In the vast majority of longitudinal FBA
studies described in the literature, sessions acquired for the same subject are each independently registered to the template. However, it has been shown
in T1-based morphometry that a 2-step registration through an intra-subject
average can be advantageous in longitudinal analyses. In this work, we propose
an implementation of this 2-step registration method in a typical longitudinal
FBA aimed at investigating the evolution of white matter changes in Alzheimer's
disease (AD). We compared at the fixel level the mean absolute effect and
standard deviation yielded by this registration method and by a direct
registration, as well as the results obtained with each registration method for
the study of AD in both fixelwise and tract-based analyses. We found that the
2-step method reduced the variability of the measurements and thus enhanced
statistical power in both types of analyses. | arXiv |
We study internal diffusion limited aggregation on $\mathbb{Z}$, where a
cluster is grown by sequentially adding the first site outside the cluster
visited by each random walk dispatched from the origin. We assume that the
increment distribution $X$ of the driving random walks has $\mathbb{E} X =0$,
but may be neither simple nor symmetric, and can have $\mathbb{E} (X^2) =
\infty$, for example. For the case where $\mathbb{E} (X^2) < \infty$, we prove
that after $m$ walks have been dispatched, all but $o(m)$ sites in the cluster
form an approximately symmetric contiguous block around the origin. This
extends known results for simple random walk. On the other hand, if~$X$ is in
the domain of attraction of a symmetric $\alpha$-stable law, $1 < \alpha <2$,
we prove that the cluster contains a contiguous block of $\delta m +o(m)$
sites, where $0 < \delta < 1$, but, unlike the finite-variance case, one may
not take $\delta=1$. | arXiv |
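The aggregation rule above is easy to simulate. The sketch below uses a particular mean-zero, non-simple, asymmetric increment distribution purely as an example (the paper allows far more general increments) and assumes the cluster starts as {0}.

```python
import random

def step(rng):
    # Mean-zero, non-simple, asymmetric increments: P(-2) = 1/3, P(+1) = 2/3.
    return rng.choice([-2, 1, 1])

def idla_cluster(num_walks, step, rng=random):
    """Grow an internal DLA cluster on the integers.

    Each walk starts at the origin and moves by independent increments until it
    first visits a site outside the current cluster; that site is then added.
    The cluster is assumed to start as {0}.
    """
    cluster = {0}
    for _ in range(num_walks):
        pos = 0
        while pos in cluster:
            pos += step(rng)
        cluster.add(pos)
    return cluster

# Keep m moderate: late walks must escape an interval of width ~ m,
# which takes on the order of m^2 steps for a mean-zero walk.
m = 300
cluster = idla_cluster(m, step)
print(len(cluster), min(cluster), max(cluster))
```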
We study functions $f : [0, 1]^d \rightarrow [0, 1]^d$ that are both monotone
and contracting, and we consider the problem of finding an
$\varepsilon$-approximate fixed point of $f$. We show that the problem lies in
the complexity class UEOPL. We give an algorithm that finds an
$\varepsilon$-approximate fixed point of a three-dimensional monotone
contraction using $O(\log (1/\varepsilon))$ queries to $f$. We also give a
decomposition theorem that allows us to use this result to obtain an algorithm
that finds an $\varepsilon$-approximate fixed point of a $d$-dimensional
monotone contraction using $O((c \cdot \log (1/\varepsilon))^{\lceil d / 3
\rceil})$ queries to $f$ for some constant $c$. Moreover, each step of both of
our algorithms takes time that is polynomial in the representation of $f$.
These results are strictly better than the best-known results for functions
that are only monotone, or only contracting.
All of our results also apply to Shapley stochastic games, which are known to
be reducible to the monotone contraction problem. Thus we put Shapley games in
UEOPL, and we give a faster algorithm for approximating the value of a Shapley
game. | arXiv |
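For context, a contraction alone (ignoring monotonicity) already admits an ε-approximate fixed point via plain Banach iteration with O(log(1/ε)) evaluations of f; the sketch below is this textbook iteration, not the paper's decomposition-based algorithm, and the example map is our own.

```python
import numpy as np

def approx_fixed_point(f, x0, c, eps):
    """Banach iteration for a contraction f with factor c < 1 in the sup norm.

    Using ||x_{k+1} - x*|| <= c / (1 - c) * ||x_{k+1} - x_k||, stopping once the
    step is at most eps * (1 - c) certifies an eps-approximate fixed point, and
    the step shrinks geometrically, so O(log(1/eps)) queries suffice.
    """
    x = np.asarray(x0, dtype=float)
    queries = 0
    while True:
        fx = f(x)
        queries += 1
        if np.max(np.abs(fx - x)) <= eps * (1 - c):
            return fx, queries
        x = fx

# Example: a monotone contraction on [0, 1]^3 with factor 1/2 (illustrative).
f = lambda x: 0.5 * x + 0.25
x_star, used = approx_fixed_point(f, np.zeros(3), c=0.5, eps=1e-6)
print(x_star, used)          # converges to (0.5, 0.5, 0.5)
```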
We evaluate the Green's function for the insertion of the second moment of
the twist-$2$ flavour nonsinglet Wilson operator in a quark $2$-point function
in all three different single scale external momentum configurations at four
loops in the MSbar scheme and the chiral limit. One configuration is where the
operator is inserted at zero momentum while the other two are where a non-zero
momentum flows out through the operator itself with one external quark momentum
nullified. In the latter two configurations mixing of the operator with a total
derivative twist-$2$ operator is included for renormalization group
consistency. In addition we compute the correlation functions of both gauge
invariant operators to four loops in the same scheme. | arXiv |
Although image-based virtual try-on has made considerable progress, emerging
approaches still encounter challenges in producing high-fidelity and robust
fitting images across diverse scenarios. These methods often struggle with
issues such as texture-aware maintenance and size-aware fitting, which hinder
their overall effectiveness. To address these limitations, we propose a novel
garment perception enhancement technique, termed FitDiT, designed for
high-fidelity virtual try-on using Diffusion Transformers (DiT), allocating more
parameters and attention to high-resolution features. First, to further improve
texture-aware maintenance, we introduce a garment texture extractor that
incorporates garment priors evolution to fine-tune garment features, facilitating better capture of rich details such as stripes, patterns, and
text. Additionally, we introduce frequency-domain learning by customizing a
frequency distance loss to enhance high-frequency garment details. To tackle
the size-aware fitting issue, we employ a dilated-relaxed mask strategy that
adapts to the correct length of garments, preventing the generation of garments
that fill the entire mask area during cross-category try-on. Equipped with the
above design, FitDiT surpasses all baselines in both qualitative and
quantitative evaluations. It excels in producing well-fitting garments with
photorealistic and intricate details, while also achieving competitive
inference times of 4.57 seconds for a single 1024x768 image after DiT structure
slimming, outperforming existing methods. | arXiv |
Jointly optimizing power allocation and device association is crucial in
Internet-of-Things (IoT) networks to ensure devices achieve their data
throughput requirements. Device association, which assigns IoT devices to
specific access points (APs), critically impacts resource allocation. Many
existing works often assume all data throughput requirements are satisfied,
which is impractical given resource limitations and diverse demands. When
requirements cannot be met, the system becomes infeasible, causing congestion
and degraded performance. To address this problem, we propose a novel framework
to enhance IoT system robustness by solving two problems: maximizing the number of satisfied IoT devices, and jointly maximizing both the number of satisfied devices and the total network throughput. These objectives often conflict
under infeasible circumstances, necessitating a careful balance. We thus
propose a modified branch-and-bound (BB)-based method to solve the first
problem. An iterative algorithm is proposed for the second problem that
gradually increases the number of satisfied IoT devices and improves the total
network throughput. We employ a logarithmic approximation for a lower bound on
data throughput and design a fixed-point algorithm for power allocation,
followed by a coalition game-based method for device association. Numerical
results demonstrate the efficiency of the proposed algorithm, serving fewer
devices than the BB-based method but with faster running time and higher total
throughput. | arXiv |
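Although the paper's fixed-point power-allocation step is not spelled out in the abstract, its flavor can be illustrated with the classical target-SINR power-control iteration, where each device updates its power to exactly meet its SINR (and hence throughput) target given the other devices' current powers. The channel gains, targets, and noise levels below are assumed values for illustration; this is not the authors' algorithm.

```python
import numpy as np

def power_control(G, gamma, noise, iters=200):
    """Classical fixed-point iteration for target-SINR power control.

    G[i, j] is the channel gain from device j's transmitter to receiver i,
    gamma[i] the SINR target of device i and noise[i] its noise power. The
    update p_i <- gamma_i * (noise_i + sum_{j != i} G[i, j] * p_j) / G[i, i]
    converges to the minimal feasible power vector whenever the targets are feasible.
    """
    p = np.zeros(len(gamma))
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p            # received interference per device
        p = gamma * (noise + interference) / np.diag(G)
    return p

rng = np.random.default_rng(0)
G = rng.uniform(0.01, 0.1, size=(4, 4)) + np.diag(rng.uniform(0.5, 1.0, size=4))
gamma = np.array([1.0, 2.0, 1.5, 1.0])                   # assumed SINR targets
noise = np.full(4, 1e-3)
p = power_control(G, gamma, noise)
sinr = np.diag(G) * p / (noise + G @ p - np.diag(G) * p)
print(p)
print(sinr)                                              # approaches the targets when feasible
```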
Multi-view learning often faces challenges in effectively leveraging images
captured from different angles and locations. This challenge is particularly
pronounced when addressing inconsistencies and uncertainties between views. In
this paper, we propose a novel Multi-View Uncertainty-Weighted Mutual
Distillation (MV-UWMD) method. Our method enhances prediction consistency by
performing hierarchical mutual distillation across all possible view
combinations, including single-view, partial multi-view, and full multi-view
predictions. This introduces an uncertainty-based weighting mechanism through
mutual distillation, allowing effective exploitation of unique information from
each view while mitigating the impact of uncertain predictions. We extend a
CNN-Transformer hybrid architecture to facilitate robust feature learning and
integration across multiple view combinations. We conducted extensive
experiments using a large, unstructured dataset captured from diverse,
non-fixed viewpoints. The results demonstrate that MV-UWMD improves prediction
accuracy and consistency compared to existing multi-view learning approaches. | arXiv |
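A generic form of uncertainty-weighted mutual distillation can be sketched as follows: each view combination's prediction is distilled towards the others, with teachers down-weighted according to their predictive entropy used as a simple uncertainty proxy. This is only a hedged illustration of the general idea; the exact weighting and hierarchy in MV-UWMD are not specified in the abstract and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_mutual_distillation(logits_list):
    """Toy mutual-distillation loss over predictions from several view combinations.

    Teachers are weighted by a softmax over their negative mean predictive entropy,
    so more confident (lower-entropy) predictions contribute more as teachers.
    """
    probs = [F.softmax(l, dim=-1) for l in logits_list]
    entropies = torch.stack(
        [-(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean() for p in probs])
    weights = F.softmax(-entropies, dim=0)        # confident views get larger weight
    loss = logits_list[0].new_zeros(())
    for i, student_logits in enumerate(logits_list):
        for j, teacher_probs in enumerate(probs):
            if i != j:
                loss = loss + weights[j] * F.kl_div(
                    F.log_softmax(student_logits, dim=-1),
                    teacher_probs.detach(), reduction="batchmean")
    return loss

# Three hypothetical view-combination predictions for a batch of 8 samples, 5 classes.
preds = [torch.randn(8, 5, requires_grad=True) for _ in range(3)]
print(uncertainty_weighted_mutual_distillation(preds))
```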
We introduce FedEvPrompt, a federated learning approach that integrates
principles of evidential deep learning, prompt tuning, and knowledge
distillation for distributed skin lesion classification. FedEvPrompt leverages
two sets of prompts: b-prompts (for low-level basic visual knowledge) and
t-prompts (for task-specific knowledge) prepended to frozen pre-trained Vision
Transformer (ViT) models trained in an evidential learning framework to
maximize class evidences. Crucially, knowledge sharing across federation
clients is achieved only through knowledge distillation on attention maps
generated by the local ViT models, ensuring enhanced privacy preservation
compared to traditional parameter or synthetic image sharing methodologies.
FedEvPrompt is optimized within a round-based learning paradigm, where each
round involves training local models followed by sharing attention maps with
all federation clients. Experimental validation conducted in a real distributed
setting, on the ISIC2019 dataset, demonstrates the superior performance of
FedEvPrompt against baseline federated learning algorithms and knowledge
distillation methods, without sharing model parameters. In conclusion,
FedEvPrompt offers a promising approach for federated learning, effectively
addressing challenges such as data heterogeneity, imbalance, privacy
preservation, and knowledge sharing. | arXiv |
Federated domain generalization (FedDG) aims to improve the global model
generalization in unseen domains by addressing data heterogeneity under
privacy-preserving constraints. A common strategy in existing FedDG studies
involves sharing domain-specific knowledge among clients, such as spectrum
information, class prototypes, and data styles. However, this knowledge is
extracted directly from local client samples, and sharing such sensitive
information poses a potential risk of data leakage, which might not fully meet
the requirements of FedDG. In this paper, we introduce prompt learning to adapt
pre-trained vision-language models (VLMs) in the FedDG scenario, and leverage
locally learned prompts as a more secure bridge to facilitate knowledge
transfer among clients. Specifically, we propose a novel FedDG framework
through Prompt Learning and AggregatioN (PLAN), which comprises two training
stages to collaboratively generate local prompts and global prompts at each
federated round. First, each client performs both text and visual prompt
learning using their own data, with local prompts indirectly synchronized by
regarding the global prompts as a common reference. Second, all domain-specific
local prompts are exchanged among clients and selectively aggregated into the
global prompts using lightweight attention-based aggregators. The global
prompts are finally applied to adapt VLMs to unseen target domains. As our PLAN
framework requires training only a limited number of prompts and lightweight
aggregators, it offers notable advantages in computational and communication
efficiency for FedDG. Extensive experiments demonstrate the superior
generalization ability of PLAN across four benchmark datasets. | arXiv |
Quantitative Information Flow (QIF) provides a robust information-theoretical
framework for designing secure systems with minimal information leakage. While
previous research has addressed the design of such systems under hard
constraints (e.g. application limitations) and soft constraints (e.g. utility),
scenarios often arise where the core system's behavior is considered fixed. In
such cases, the challenge is to design a new component for the existing system
that minimizes leakage without altering the original system. In this work we
address this problem by proposing optimal solutions for constructing a new row,
in a known and unmodifiable information-theoretic channel, aiming at minimizing
the leakage. We first model two types of adversaries: an exact-guessing
adversary, aiming to guess the secret in one try, and an s-distinguishing one, which tries to distinguish the secret s from all the other secrets. Then, we discuss design strategies for both fixed and unknown priors by offering, for each adversary, an optimal solution under linear constraints, using Linear Programming. We apply our approach to the problem of website fingerprinting
defense, considering a scenario where a site administrator can modify their own
site but not others. We experimentally evaluate our proposed solutions against
other natural approaches. First, we sample real-world news websites and then,
for both adversaries, we demonstrate that the proposed solutions are effective
in achieving the least leakage. Finally, we simulate an actual attack by
training an ML classifier for the s-distinguishing adversary and show that our
approach decreases the accuracy of the attacker. | arXiv |
Having a better understanding of how locational marginal prices (LMPs) change
helps in price forecasting and market strategy making. This paper investigates
the fundamental distribution of the congestion part of LMPs in high-dimensional
Euclidean space using an unsupervised approach. LMP models based on the
lossless and lossy DC optimal power flow (DC-OPF) are analyzed to show the
overlapping subspace property of the LMP data. The congestion part of LMPs is
spanned by certain row vectors of the power transfer distribution factor (PTDF)
matrix, and the subspace attributes of an LMP vector are found to uniquely
reflect the instantaneous congestion status of all the transmission lines. The
proposed method searches for the basis vectors that span the subspaces of
congestion LMP data in hierarchical ways. In the bottom-up search, the data
belonging to 1-dimensional subspaces are detected, and other data are projected
on the orthogonal subspaces. This procedure is repeated until all the basis
vectors are found or the basis gap appears. Top-down searching is used to
address the basis gap by hyperplane detection with outliers. Once all the basis
vectors are detected, the congestion status can be identified. Numerical
experiments based on the IEEE 30-bus system, IEEE 118-bus system, Illinois
200-bus system, and Southwest Power Pool are conducted to show the performance
of the proposed method. | arXiv |
In 2009, Calegari constructed smooth homotopy 4-spheres from monodromies of
fibered knots. We prove that all these are diffeomorphic to the standard
4-sphere. Our method uses 5-dimensional handlebody techniques and results on
mapping class groups of 3-dimensional handlebodies. As an application, we
present potential counterexamples to the smooth 4-dimensional Schoenflies
conjecture which are related to the work of Casson and Gordon on fibered ribbon
knots. | arXiv |
Physics-Informed Neural Networks (PINNs) have emerged as an influential
technology, merging the swift and automated capabilities of machine learning
with the precision and dependability of simulations grounded in theoretical
physics. PINNs are often employed to solve algebraic or differential equations
to replace some or even all steps of multi-stage computational workflows,
leading to their significant speed-up. However, wide adoption of PINNs is still
hindered by reliability issues, particularly at extreme ends of the input
parameter ranges. In this study, we demonstrate this in the context of a system
of coupled non-linear differential reaction-diffusion and heat transfer
equations related to Fischer-Tropsch synthesis, which are solved by a
finite-difference method with a PINN used in evaluating their source terms. It
is shown that the testing strategies traditionally used to assess the accuracy
of neural networks as function approximators can overlook the peculiarities
which ultimately cause instabilities of the finite-difference solver. We
propose domain-knowledge-based modifications to the PINN architecture
ensuring its correct asymptotic behavior. When combined with an improved
numerical scheme employed as an initial guess generator, the proposed
modifications are shown to recover the overall stability of the simulations,
while preserving the speed-up brought by PINN as the workflow component. We
discuss the possible applications of the proposed hybrid transport equation
solver in the context of chemical reactor simulations. | arXiv |
We report the discovery of a rare isolated group of five dwarf galaxies
located at z = 0.0086 ($D$ = 36 Mpc). All member galaxies are star-forming,
blue, and gas-rich with $g-r$ indices ranging from 0.2 to 0.6 mag, and two of
them show signs of ongoing mutual interaction. The most massive member of the
group has a stellar mass that is half of the Small Magellanic Cloud stellar
mass, and the median stellar mass of the group members is 7.87 $\times$
10$^{7}$ M$_{\odot}$. The derived total dynamical mass of the group is $M_{\rm
dyn}$ = 6.02$\times$10$^{10}$ M$_{\odot}$, whereas its total baryonic mass
(stellar + HI) is 2.6$\times$10$^{9}$ M$_{\odot}$, which gives a dynamical-to-baryonic mass ratio of 23. Interestingly, all galaxies found in the group
are aligned along a straight line in the plane of the sky. The observed spatial
extent of the member galaxies is 154 kpc, and their relative line-of-sight
velocity span is within 75 km s$^{-1}$. Using the spatially resolved optical
spectra provided by DESI EDR, we find that three group members share a common
rotational direction. With these unique properties of the group and its member
galaxies, we discuss the possible importance of such a system in the formation
and evolution of dwarf galaxy groups and in testing the theory of large-scale
structure formation. | arXiv |
Determining the creep compliances of orthotropic composite materials requires
experiments in at least three different uniaxial and biaxial loading
directions. To date, data covering multiple climates and all anatomical
directions are sparse for hygro-responsive materials like Norway spruce.
Consequently, simulation models of wood frequently over-simplify creep, e.g.,
by proportionally scaling missing components or neglecting climatic influences.
To overcome such simplifications, an automated computer-controlled climatized
creep rack was developed that experimentally assesses moisture-dependent
viscoelasticity and mechanosorption in all anatomical directions. The device
simultaneously measures the creep strains of three dogbone tension samples,
three flat compression samples, and six Arcan shear samples via Digital Image
Correlation. This allows for ascertaining the complete orthotropic compliance
tensors while accounting for loading direction asymmetries. This paper explains
the creep rack's structure and demonstrates its use by determining all nine
independent creep compliance components of Norway spruce at 65% relative
humidity. The data shows that loading-asymmetry effects amount to up to 16%.
Furthermore, the resulting creep compliance tensor is not proportional to the
elastic compliance tensor. By clustering the compliance components, we identify
four necessary components to represent the full orthotropy of the compliance
tensor, obtainable from no fewer than two experiments. | arXiv |
Constant-workspace algorithms use a constant number of words in addition to the read-only input to the algorithm, which is stored in an array. In this paper, we
devise algorithms to efficiently compute relative hulls in the plane using a
constant workspace. Specifically, we devise algorithms for the following three
problems: \newline (i) Given two simple polygons $P$ and $Q$ with $P \subseteq
Q$, compute a simple polygon $P'$ with a perimeter of minimum length such that
$P \subseteq P' \subseteq Q$. \newline (ii) Given two simple polygons $P$ and
$Q$ such that $Q$ does not intersect the interior of $P$ but does intersect
the interior of the convex hull of $P$, compute a weakly simple polygon
$P'$ contained in the convex hull of $P$ such that the perimeter of $P'$ is of
minimum length. \newline (iii) Given a set $S$ of points located in a simple
polygon $P$, compute a weakly simple polygon $P' \subseteq P$ with a perimeter
of minimum length such that $P'$ contains all the points in $S$. \newline To
our knowledge, no prior works devised algorithms to compute relative hulls
using a constant workspace and this work is the first such attempt. | arXiv |
Effective usage of approximate circuits for various performance trade-offs
requires accurate computation of error. Several average and worst case error
metrics have been proposed in the literature. We propose a framework for exact
computation of these error metrics, including the error rate (ER), mean
absolute error (MAE), mean squared error (MSE) and the worst-case error (WCE).
We use a combination of SAT and message-passing algorithms. Our algorithm takes
as input the CNF formula for the exact and approximate circuits followed by a
subtractor that finds the difference of the two outputs. This is converted into
a tree, with each vertex of the tree associated with a sub-formula and all
satisfying solutions to it. Once this is done, any probability can be computed
by setting appropriate error bits and using a message passing algorithm on the
tree. Since message-passing is fast, besides ER and MAE, computation of metrics
like MSE is also very efficient. In fact, it is possible to get the entire
probability distribution of the error. Besides standard benchmarks, we could
compute the error metrics exactly for approximate Gaussian and Sobel filters,
which has not been done previously. | arXiv |
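To illustrate only the final step described above — once the distribution of the output error has been recovered from the SAT/message-passing stage — here is a minimal sketch of how the four metrics follow from it; the array names and probability values below are hypothetical, not taken from the paper's tool:

```python
import numpy as np

# Hypothetical distribution P(E = e) of the output error
# E = (approximate output) - (exact output).
errors = np.array([0, 1, -1, 2])
probs = np.array([0.90, 0.05, 0.03, 0.02])       # must sum to 1

er  = probs[errors != 0].sum()                    # error rate: P(E != 0)
mae = np.sum(probs * np.abs(errors))              # mean absolute error
mse = np.sum(probs * errors.astype(float) ** 2)   # mean squared error
wce = np.max(np.abs(errors[probs > 0]))           # worst-case (absolute) error

print(f"ER={er:.3f}, MAE={mae:.3f}, MSE={mse:.3f}, WCE={wce}")
```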
The goal of multi-object tracking (MOT) is to detect and track all objects in
a scene across frames, while maintaining a unique identity for each object.
Most existing methods rely on the spatial motion features and appearance
embedding features of the detected objects in consecutive frames. Effectively
and robustly representing the spatial and appearance features of long
trajectories has become a critical factor affecting the performance of MOT. We
propose a novel approach for appearance and spatial feature representation,
improving upon the clustering association method MOT\_FCG. For spatial motion
features, we propose Diagonal Modulated GIoU, which more accurately represents
the relationship between the position and shape of the objects. For appearance
features, we utilize a dynamic appearance representation that incorporates
confidence information, enabling the trajectory appearance features to be more
robust and global. Based on the baseline model MOT\_FCG, we achieved 76.1 HOTA,
80.4 MOTA and 81.3 IDF1 on the MOT17 validation set, and also achieved
competitive performance on the MOT20 and DanceTrack validation sets. | arXiv |
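For orientation, a minimal sketch of the standard Generalized IoU that spatial association measures of this kind build on; the Diagonal Modulated GIoU proposed above adds a modulation term whose exact form is not reproduced here:

```python
def giou(box_a, box_b):
    """Standard Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing box
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area    # GIoU in (-1, 1]


print(giou((0, 0, 2, 2), (1, 1, 3, 3)))       # partially overlapping boxes
```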
The NSGA-II is the most prominent multi-objective evolutionary algorithm
(cited more than 50,000 times). Very recently, a mathematical runtime analysis
has proven that this algorithm can have enormous difficulties when the number
of objectives is larger than two (Zheng, Doerr. IEEE Transactions on
Evolutionary Computation (2024)). However, this result was shown only for the
OneMinMax benchmark problem, which has the particularity that all solutions are
on the Pareto front, a fact heavily exploited in the proof of this result.
In this work, we show a comparable result for the LeadingOnesTrailingZeroes
benchmark. This popular benchmark problem appears more natural in that most of
its solutions are not on the Pareto front. With a careful analysis of the
population dynamics of the NSGA-II optimizing this benchmark, we manage to show
that when the population grows on the Pareto front, then it does so much faster
by creating known Pareto optima than by spreading out on the Pareto front.
Consequently, already when still a constant fraction of the Pareto front is
unexplored, the crowding distance becomes the crucial selection mechanism, and
thus the same problems arise as in the optimization of OneMinMax. With these
and some further arguments, we show that the NSGA-II, with a population size
at most a constant factor larger than the Pareto front, cannot compute the
Pareto front in less than exponential time. | arXiv |
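Since the crowding distance is the selection mechanism implicated in the argument above, a minimal sketch of its standard textbook computation is included for reference; this is generic NSGA-II machinery, not code from the cited analyses:

```python
import numpy as np

def crowding_distance(front):
    """front: (n, m) array of objective values of one non-dominated front."""
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = np.inf        # boundary points are always preferred
        dist[order[-1]] = np.inf
        span = front[order[-1], j] - front[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):      # gap between each point's neighbours
            dist[order[k]] += (front[order[k + 1], j] - front[order[k - 1], j]) / span
    return dist
```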
We aim to provide more insights into the applicability to solar coronal
seismology of the much-studied discrete leaky modes (DLMs) in classic analyses.
Under linear ideal pressureless MHD, we examine two-dimensional (2D) axial
fundamental kink motions that arise when localized velocity exciters impact
some symmetric slab equilibria. Continuous structuring is allowed for. A 1D
initial value problem (IVP) is formulated in conjunction with an eigenvalue
problem (EVP) for laterally open systems, with no strict boundary conditions
(BCs) at infinity. The IVP is solved by eigenfunction expansion, allowing a
clear distinction between the contributions from proper eigenmodes and improper
continuum eigenmodes. Example solutions are offered for parameters typical of
active region loops. Our solutions show that the system evolves towards long
periodicities due to proper eigenmodes (of order the axial Alfven time),
whereas the interference of the improper continuum may lead to short
periodicities initially (of order the lateral Alfven time). Specializing to the
slab axis, we demonstrate that the proper contribution strengthens with the
density contrast, but may occasionally be stronger for less steep density
profiles. Short periodicities are not guaranteed in the improper contribution,
the details of the initial exciter being key. When identifiable, these
periodicities tend to agree with the oscillation frequencies expected for DLMs,
despite the differences in the BCs between our EVP and classic analyses. The
eigenfunction expansion approach enables all qualitative features to be
interpreted as the interplay between the initial exciter and some response
function, the latter solely determined by the equilibria. Classic theories for
DLMs can find seismological applications, with time-dependent studies offering
additional ways for constraining initial exciters. | arXiv |
We present analytical and numerical calculations for the photon polarization
tensor at finite temperature and density in a constant magnetic field. We first
discuss the tensor decomposition in the presence of the magnetic field which
breaks rotational symmetry. Then, we analytically perform all the momentum
integrations and numerically take the Landau level sum. We present the real and
imaginary parts of the photon polarization tensor as functions of the momenta,
the chemical potential, and the finite temperature. As an application, we
consider the real photon limit and estimate the photon decay rate in the hot
and dense medium. We specifically quantify the difference between the X-mode
and the O-mode with the polarization orthogonal and parallel to the magnetic
field. As long as the magnetic field is weak, the decay rate of the X-mode
photon is larger than that of the O-mode photon, while the O-mode becomes
dominant due to the Landau level suppression of the X-mode at strong magnetic
field. | arXiv |
The installation of high-capacity fast chargers for electric vehicles (EVs)
is posing a significant risk to the distribution grid as the increased demand
from widespread residential EV charging could exceed the technical limits of
the distribution system. Addressing this issue is critical, given that current
infrastructure upgrades to enhance EV hosting capacity are both costly and
time-consuming. Moreover, the inherent uncertainties associated with EV
charging parameters make it challenging for power utilities to accurately
assess the impact of EVs added to specific locations. To address these
knowledge gaps, this study (a) introduces an algorithm to coordinate
residential EV charging, and (b) proposes a comprehensive framework that
evaluates all transformers within a feeder. The proposed method is applied to a
real-world feeder, which includes 120 transformers of varying capacities. The
results demonstrate that this approach effectively manages a substantial number
of EVs without overloading any of the transformers, while also pinpointing
locations that must be prioritized for future upgrades. This framework can
serve as a valuable reference for utilities when conducting distribution system
evaluations for supporting the growing EV penetration. | arXiv |
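The coordination idea can be pictured with a toy greedy scheduler that only admits EV charging while headroom below the transformer rating remains; this is an illustrative sketch under simplified assumptions (one charging slot per EV, constant charging power) and is not the algorithm proposed above:

```python
def schedule_ev_charging(ev_demands_kw, base_load_kw, transformer_limit_kw):
    """Greedy, first-come coordination of EV charging against a transformer limit."""
    schedule, pending = [], list(ev_demands_kw)
    for base in base_load_kw:                 # one entry per time slot
        headroom = transformer_limit_kw - base
        active, deferred = [], []
        for p in pending:                     # admit EVs while headroom remains
            if p <= headroom:
                active.append(p)
                headroom -= p
            else:
                deferred.append(p)
        pending = deferred
        schedule.append(active)
    return schedule, pending                  # pending = EVs left unserved


slots, unserved = schedule_ev_charging([7.2, 11.0, 7.2], [30, 20, 25], 50)
print(slots, unserved)
```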
We explore birational geometry of matroids by investigating automorphisms of
their coarse Bergman fans. Combinatorial Cremona maps provide such
automorphisms of Bergman fans which are not induced by matroid automorphisms.
We investigate the structure of matroids allowing combinatorial Cremona maps
and prove a realizability criterion in the presence of two different Cremonas.
We also prove that for all matroids associated to Coxeter arrangements the
group of coarse automorphisms of the Bergman fan is generated by the matroid
automorphisms and at most one combinatorial Cremona map. | arXiv |
Purpose: Journal Impact Factors and other citation-based indicators are
widely used and abused to help select journals to publish in or to estimate the
value of a published article. Nevertheless, citation rates primarily reflect
scholarly impact rather than other quality dimensions, including societal
impact, originality, and rigour. In contrast, Journal Quality Factors (JQFs)
are average quality score estimates given to a journal's articles by ChatGPT.
Design: JQFs were compared with Polish, Norwegian and Finnish journal ranks and
with journal citation rates for 1,300 journals with 130,000 articles from 2021
in large monodisciplinary journals in the 25 out of 27 Scopus broad fields of
research for which it was possible. Outliers were also examined. Findings: JQFs
correlated positively and mostly strongly (median correlation: 0.641) with
journal ranks in 24 out of the 25 broad fields examined, indicating a nearly
science-wide ability for ChatGPT to estimate journal quality. Journal citation
rates had similarly high correlations with national journal ranks, however, so
JQFs are not a universally better indicator. An examination of journals with
JQFs not matching their journal ranks suggested that abstract styles may affect
the result, such as whether the societal contexts of research are mentioned.
Limitations: Different journal rankings may have given different findings
because there is no agreed meaning for journal quality. Implications: The
results suggest that JQFs are plausible as journal quality indicators in all
fields and may be useful for the (few) research and evaluation contexts where
journal quality is an acceptable proxy for article quality, and especially for
fields like mathematics for which citations are not strong indicators of
quality. Originality: This is the first attempt to estimate academic journal
value with a Large Language Model. | arXiv |
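A minimal sketch of the kind of rank-correlation comparison reported above; the abstract does not state which correlation coefficient was used, so Spearman's rho and the toy per-journal numbers below are assumptions for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical per-journal values: ChatGPT-based average quality scores (JQFs)
# and the corresponding national journal-rank tiers.
jqf_scores    = [3.1, 2.4, 4.0, 3.6, 2.9]
journal_ranks = [2,   1,   4,   3,   2]

rho, p_value = spearmanr(jqf_scores, journal_ranks)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```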
A measurement of the $B^{0}$ meson lifetime and related properties using $B^0
\to J/\psi K^{*0}$ decays in data from 13 TeV proton-proton collisions with an
integrated luminosity of 140 fb$^{-1}$ recorded by the ATLAS detector at the
LHC is presented. The measured effective lifetime is $$ \tau = 1.5053 \pm
0.0012 ~\mathrm{(stat.)} \pm 0.0035 ~\mathrm{(syst.)~ps}. $$ The average decay
width extracted from the effective lifetime, using parameters from external
sources, is $$ \Gamma_d = 0.6639 \pm 0.0005 ~\mathrm{(stat.)} \pm 0.0016
~\mathrm{(syst.)}\pm 0.0038 ~\textrm{(ext.)} \textrm{ ps}^{-1}, $$ where the
uncertainties are statistical, systematic and from external sources. The
earlier ATLAS measurement of $\Gamma_s$ in the $B^0_s \to J/\psi\phi$ decay was
used to derive a value for the ratio of the average decay widths $\Gamma_d$ and
$\Gamma_s$ for $B^{0}$ and $B_s^{0}$ mesons respectively, of $$ \frac{\Gamma_d
}{\Gamma_s } = 0.9905 \pm 0.0022 ~\textrm{(stat.)} \pm 0.0036 ~\textrm{(syst.)}
\pm 0.0057 ~\textrm{(ext.)}. $$ The measured lifetime, average decay width and
decay width ratio are in agreement with theoretical predictions and with
measurements by other experiments. This measurement provides the most precise
result of the effective lifetime of the $B^{0}$ meson to date. | arXiv |
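As a rough cross-check of the quoted numbers, the average decay width is, to leading approximation, the reciprocal of the effective lifetime, $$\Gamma_d \simeq \frac{1}{\tau} = \frac{1}{1.5053~\mathrm{ps}} \approx 0.664~\mathrm{ps}^{-1},$$ consistent with the quoted $0.6639~\mathrm{ps}^{-1}$; the small difference reflects the external-source parameters entering the full extraction.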
Classification in the sense of similarity is an important issue. In this
paper, we study similarity classification in Topological Data Analysis. We
define a pseudometric $d_{S}^{(p)}$ to measure the distance between barcodes
generated by persistent homology groups of topological spaces, and we prove
that our pseudometric $d_{S}^{(2)}$ is a similarity invariant. Thereby, we
establish a connection between Operator Theory and Topological Data Analysis.
We give the calculation formula of the pseudometric $d_{S}^{(2)}$
$(d_{S}^{(1)})$ by arranging all eigenvalues of matrices determined by barcodes
in descending order to get the infimum over all matchings. Since conformal
linear transformation is one representative type of similarity transformations,
we construct comparative experiments on both synthetic datasets and waves from
an online platform to demonstrate that our pseudometric $d_{S}^{(2)}$
$(d_{S}^{(1)})$ is stable under conformal linear transformations, whereas the
bottleneck and Wasserstein distances are not. In particular, our pseudometric
on waves depends only on the waveform and is independent of the frequency
and amplitude. Furthermore, the computation time for $d_{S}^{(2)}$
$(d_{S}^{(1)})$ is significantly less than the computation time for bottleneck
distance and is comparable to the computation time for accelerated Wasserstein
distance between barcodes. | arXiv |
This paper addresses the collision detection problem in population protocols.
The network consists of state machines called agents. At each time step,
exactly one pair of agents is chosen uniformly at random to have an
interaction, changing the states of the two agents. The collision detection
problem involves each agent starting with an input integer between $1$ and $n$,
where $n$ is the number of agents, and requires those agents to determine
whether there are any duplicate input values among all agents. Specifically,
the goal is for all agents to output false if all input values are distinct,
and true otherwise.
In this paper, we present an algorithm that requires a polynomial number of
states per agent and solves the collision detection problem with probability
one in sub-linear parallel time, both with high probability and in expectation.
To the best of our knowledge, this algorithm is the first to solve the
collision detection problem using a polynomial number of states within
sublinear parallel time, affirmatively answering the question raised by Burman,
Chen, Chen, Doty, Nowak, Severson, and Xu [PODC 2021] for the first time. | arXiv |
Autonomous manipulation in everyday tasks requires flexible action generation
to handle complex, diverse real-world environments, such as objects with
varying hardness and softness. Imitation Learning (IL) enables robots to learn
complex tasks from expert demonstrations. However, many existing methods
rely on position/unilateral control, leaving challenges in tasks that require
force information/control, such as carefully grasping fragile or varying-hardness
objects. As the need for diverse controls increases, there is growing demand for
low-cost bimanual robots that support various motor inputs. To address these
challenges, we introduce Bilateral Control-Based Imitation Learning via Action
Chunking with Transformers (Bi-ACT) and "A" "L"ow-cost "P"hysical "Ha"rdware
Considering Diverse Motor Control Modes for Research in Everyday Bimanual
Robotic Manipulation (ALPHA-$\alpha$). Bi-ACT leverages bilateral control to
utilize both position and force information, enhancing the robot's adaptability
to object characteristics such as hardness, shape, and weight. The concept of
ALPHA-$\alpha$ is affordability, ease of use, repairability, ease of assembly,
and diverse control modes (position, velocity, torque), allowing
researchers/developers to freely build control systems using ALPHA-$\alpha$. In
our experiments, we conducted a detailed analysis of Bi-ACT in unimanual
manipulation tasks, confirming its superior performance and adaptability
compared to Bi-ACT without force control. Based on these results, we applied
Bi-ACT to bimanual manipulation tasks. Experimental results demonstrated high
success rates in coordinated bimanual operations across multiple tasks. The
effectiveness of the Bi-ACT and ALPHA-$\alpha$ can be seen through
comprehensive real-world experiments. Video available at:
https://mertcookimg.github.io/alpha-biact/ | arXiv |
This article is concerned with ``up to $C^{2, \alpha}$-regularity results''
about a mixed local-nonlocal nonlinear elliptic equation which is driven by the
superposition of Laplacian and fractional Laplacian operators.
First of all, an estimate on the $L^\infty$ norm of weak solutions is
established for more general cases than the ones present in the literature,
including here critical nonlinearities.
We then prove the interior $C^{1,\alpha}$-regularity and the
$C^{1,\alpha}$-regularity up to the boundary of weak solutions, which extends
previous results by the authors [X. Su, E. Valdinoci, Y. Wei and J. Zhang,
Math. Z. (2022)], where the nonlinearities considered were of subcritical type.
In addition, we establish the interior $C^{2,\alpha}$-regularity of solutions
for all $s\in(0,1)$ and the $C^{2,\alpha}$-regularity up to the boundary for
all $s\in(0,\frac{1}{2})$, with sharp regularity exponents.
For further perusal, we also include a strong maximum principle and some
properties about the principal eigenvalue. | arXiv |
Variable Subset Forecasting (VSF) refers to a unique scenario in multivariate
time series forecasting, where available variables in the inference phase are
only a subset of the variables in the training phase. VSF presents significant
challenges as the entire time series may be missing, and neither inter- nor
intra-variable correlations persist. Such conditions impede the effectiveness
of traditional imputation methods, primarily focusing on filling in individual
missing data points. Inspired by the principle of feature engineering that not
all variables contribute positively to forecasting, we propose Task-Oriented
Imputation for VSF (TOI-VSF), a novel framework that shifts the focus from accurate
data recovery to directly supporting the downstream forecasting task. TOI-VSF
incorporates a self-supervised imputation module, agnostic to the forecasting
model, designed to fill in missing variables while preserving the vital
characteristics and temporal patterns of time series data. Additionally, we
implement a joint learning strategy for imputation and forecasting, ensuring
that the imputation process is directly aligned with and beneficial to the
forecasting objective. Extensive experiments across four datasets demonstrate
the superiority of TOI-VSF, outperforming baseline methods by $15\%$ on
average. | arXiv |
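A minimal sketch of what a joint imputation-plus-forecasting objective of this kind can look like; the module interfaces, masking convention and loss weighting below are assumptions for illustration and not TOI-VSF's exact design:

```python
import torch

def joint_loss(imputer, forecaster, x_obs, mask, y_true, lam=1.0):
    # x_obs: (batch, time, vars) with missing variables zero-filled;
    # mask: same shape, 1 where a variable is observed, 0 where it is missing.
    x_filled = imputer(x_obs, mask)                      # fill in missing variables
    recon = (((x_filled - x_obs) ** 2) * mask).mean()    # self-supervised term on observed entries
    y_hat = forecaster(x_filled)                         # forecast from the imputed series
    forecast = torch.nn.functional.mse_loss(y_hat, y_true)
    return forecast + lam * recon                        # one objective trains both modules
```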
A $k$-star is a complete bipartite graph $K_{1,k}$. A partial $k$-star design
of order $n$ is a pair $(V,\mathcal{A})$ where $V$ is a set of $n$ vertices and
$\mathcal{A}$ is a set of edge-disjoint $k$-stars whose vertex sets are subsets
of $V$. If each edge of the complete graph with vertex set $V$ is in some star
in $\mathcal{A}$, then $(V,\mathcal{A})$ is a (complete) $k$-star design. We
say that $(V,\mathcal{A})$ is completable if there is a $k$-star design
$(V,\mathcal{B})$ such that $\mathcal{A} \subseteq \mathcal{B}$. In this paper
we determine, for all $k$ and $n$, the minimum number of stars in an
uncompletable partial $k$-star design of order $n$. | arXiv |
We present the first HI mass function (HIMF) measurement for the recent FAST
All Sky HI (FASHI) survey and the most complete measurements of HIMF in the
local universe so far by combining the HI catalogues from HI Parkes All Sky
Survey (HIPASS), Arecibo Legacy Fast ALFA (ALFALFA) and FASHI surveys at
redshift 0 < z < 0.05, covering 76% of the entire sky. We adopt the same
methods to estimate distances, calculate sample completeness, and determine the
HIMF for all three surveys. The best-fitting Schechter function for the total
HIMF has a low-mass slope parameter alpha = -1.30 and a knee mass log(Ms) =
9.86 and a normalization phi_s = 0.00658. This gives the cosmic HI abundance
omega_HI= 0.000454. We find that a double Schechter function with the same
slope alpha better describes our HIMF, and the two different knee masses are
log(Ms1) = 9.96 and log(Ms2) = 9.65. We verify that the measured HIMF is
marginally affected by the choice of distance estimates. The effect of cosmic
variance is significantly suppressed by combining the three surveys and it
provides a unique opportunity to obtain an unbiased estimate of the HIMF in the
local universe. | arXiv |
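For reference, the Schechter parametrization quoted above is commonly written in logarithmic-mass form as $$\phi(M_{\rm HI})\,\mathrm{d}\log M_{\rm HI} = \ln(10)\,\phi_{s}\left(\frac{M_{\rm HI}}{M_{s}}\right)^{\alpha+1}\exp\!\left(-\frac{M_{\rm HI}}{M_{s}}\right)\mathrm{d}\log M_{\rm HI},$$ with alpha = -1.30, log(Ms) = 9.86 and phi_s = 0.00658 (the units of phi_s, typically Mpc$^{-3}$ dex$^{-1}$, are not quoted in the abstract); the double Schechter variant sums two such terms with a common alpha and the two knee masses given above.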
Visual navigation takes inspiration from humans, who navigate in previously
unseen environments using vision without detailed environment maps. Inspired by
this, we introduce a novel no-RL, no-graph, no-odometry approach to visual
navigation using feudal learning to build a three tiered agent. Key to our
approach is a memory proxy map (MPM), an intermediate representation of the
environment learned in a self-supervised manner by the high-level manager agent
that serves as a simplified memory, approximating what the agent has seen. We
demonstrate that recording observations in this learned latent space is an
effective and efficient memory proxy that can remove the need for graphs and
odometry in visual navigation tasks. For the mid-level manager agent, we
develop a waypoint network (WayNet) that outputs intermediate subgoals, or
waypoints, imitating human waypoint selection during local navigation. For the
low-level worker agent, we learn a classifier over a discrete action space that
avoids local obstacles and moves the agent towards the WayNet waypoint. The
resulting feudal navigation network offers a novel approach with no RL, no
graph, no odometry, and no metric map; all while achieving SOTA results on the
image goal navigation task. | arXiv |
Today navigation applications (e.g., Waze and Google Maps) enable human users
to learn and share the latest traffic observations, yet such information
sharing simply aids selfish users to predict and choose the shortest paths to
jam each other. Prior routing game studies focus on myopic users in
oversimplified one-shot scenarios to regulate selfish routing via information
hiding or pricing mechanisms. For practical human-in-the-loop learning (HILL)
in repeated routing games, we face non-myopic users with differential past
observations and need new mechanisms (preferably non-monetary) to persuade
users to adhere to the optimal path recommendations. We model the repeated
routing game in a typical parallel transportation network, which generally
contains one deterministic path and $N$ stochastic paths. We first prove that
no matter under the information sharing mechanism in use or the latest routing
literature's hiding mechanism, the resultant price of anarchy (PoA) for
measuring the efficiency loss from social optimum can approach infinity,
indicating an arbitrarily poor exploration-exploitation tradeoff over time. Then we
propose a novel user-differential probabilistic recommendation (UPR) mechanism
to differentiate and randomize path recommendations for users with differential
learning histories. We prove that our UPR mechanism ensures interim individual
rationality for all users and significantly reduces $\text{PoA}=\infty$ to
close-to-optimal $\text{PoA}=1+\frac{1}{4N+3}$, which cannot be further reduced
by any other non-monetary mechanism. In addition to theoretical analysis, we
conduct extensive experiments using real-world datasets to generalize our
routing graphs and validate the close-to-optimal performance of the UPR mechanism. | arXiv |
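To make the guarantee concrete, the quoted bound tightens quickly with the number of stochastic paths: $$\mathrm{PoA}=1+\frac{1}{4N+3}:\qquad N=1 \Rightarrow \tfrac{8}{7}\approx 1.143,\qquad N=10 \Rightarrow \tfrac{44}{43}\approx 1.023,$$ so the worst-case efficiency loss under the UPR mechanism vanishes as $N$ grows.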
Compute-and-forward (CF) is a relaying strategy which allows the relay to
decode a linear combination of the transmitted messages. This work studies the
optimal power allocation problem for the CF scheme in fast fading channels for
maximizing the symmetric computation rate, which is a non-convex optimization
problem with no simple analytical or numerical solutions. In the first part of
the paper, we investigate the problem when there are finitely many channel
states (discrete case). We establish several important properties of the
optimal solutions and show that if all users share the same power allocation
policy (symmetric policy), the optimal solution takes the form of a
water-filling type when the power constraint exceeds a certain threshold.
However, if asymmetric policies are allowed, the optimal solution does not take
this form for any power constraint. We propose a low-complexity order-based
algorithm for both scenarios and compare its performance with baseline
algorithms. In the second part of the paper, we state relevant results when the
channel coefficients are modelled as continuous random variables (continuous
case) and propose a similar low-complexity iterative algorithm for the
symmetric policy scenario. Numerical results are provided for both discrete and
continuous cases. It is shown that in general our proposed algorithm finds good
suboptimal solutions with low complexity, and for some examples considered,
finds an exact optimal solution. | arXiv |
Mutation testing was proposed to identify weaknesses in test suites by
repeatedly generating artificially faulty versions of the software (mutants)
and determining if the test suite is sufficient to detect them (kill them).
When the tests are insufficient, each surviving mutant provides an opportunity
to improve the test suite. We conducted a study and found that many such
surviving mutants (up to 84% for the subjects of our study) are detectable by
simply augmenting existing tests with additional assertions, or assertion
amplification. Moreover, we find that many of these mutants are detectable by
multiple existing tests, giving developers options for how to detect them. To
help with these challenges, we created a technique that performs memory-state
analysis to identify candidate assertions that developers can use to detect the
surviving mutants. Additionally, we build upon prior research that identifies
``crossfiring'' opportunities -- tests that coincidentally kill multiple
mutants. To this end, we developed a theoretical model that describes the
varying granularities at which crossfiring can occur in the existing test suite,
which provide opportunities and options for how to kill surviving mutants. We
operationalize this model to an accompanying technique that optimizes the
assertion amplification of the existing tests to crossfire multiple mutants
with fewer added assertions, optionally concentrated within fewer tests. Our
experiments show that we can kill all surviving mutants that are detectable
with existing test data with only 1.1% of the identified assertion candidates,
and increasing by a factor of 6x, on average, the number of killed mutants from
amplified tests, over tests that do not crossfire. | arXiv |
We study the booklink, a braid-like embedding with local maxima and minima,
and the bridge-braid spectrum of a link, which captures the smallest number of
braid-strands in a booklink with a prescribed number of critical points. This
spectrum spans the gap between the classical bridge and braid indices. We apply
a foliation theory argument to provide a formula for the spectra of both split
and composite links. We then generate a table for the spectra of all prime
knots up to 9 crossings. | arXiv |
Let ${\mathcal M}_g$ be the moduli space of compact connected Riemann
surfaces of genus $g\geq 2$ and let $\widehat{{\mathcal M}_g}$ be its
Deligne-Mumford compactification, which is stratified by the topological type
of the stable Riemann surfaces. We consider the equisymmetric loci in $\mathcal
M_g$ corresponding to Riemann surfaces whose automorphism group is abelian and
determine the topological type of the maximal dimension strata at their
boundary. For the particular cases of the hyperelliptic and the cyclic
$p$-gonal actions, we describe all the topological strata at the boundary in
terms of trees with a fixed number of edges. | arXiv |
For a hypergraph $\mathbb{H}$ on $[n]$, the hypergraphic poset $P_\mathbb{H}$
is the transitive closure of the oriented skeleton of the hypergraphic polytope
$\triangle_\mathbb{H}$ (the Minkowski sum of the standard simplices
$\triangle_H$ for all $H \in \mathbb{H}$). Hypergraphic posets include the weak
order for the permutahedron (when $\mathbb{H}$ is the complete graph on $[n]$)
and the Tamari lattice for the associahedron (when $\mathbb{H}$ is the set of
all intervals of $[n]$), which motivates the study of lattice properties of
hypergraphic posets. In this paper, we focus on interval hypergraphs, where all
hyperedges are intervals of $[n]$. We characterize the interval hypergraphs
$\mathbb{I}$ for which $P_\mathbb{I}$ is a lattice, a distributive lattice, a
semidistributive lattice, and a lattice quotient of the weak order. | arXiv |
Optimal control theory extending from the calculus of variations has not been
used to study the wind turbine power system (WTPS) control problem, which aims
at achieving two targets: (i) maximizing power generation in lower wind speed
conditions; and (ii) maintaining the output power at the rated level in high
wind speed conditions. A lack of an optimal control framework for the WTPS
(i.e., no access to actual optimal control trajectories) reduces optimal
control design potential and prevents competing WTPS control methods from having
a reference control solution for comparison. In fact, the WTPS control
literature often relies on reduced and linearized models of WTPSs, and avoids
the nonsmoothness present in the system during transitions between different
conditions of operation. In this paper, we introduce a novel optimal control
framework for the WTPS control problem. We use in our formulation a recent
accurate, nonlinear differential-algebraic equation (DAE) model of WTPSs, which
we then generalize over all wind speed ranges using non-smooth functions. We
also use developments in nonsmooth optimal control theory to take into account
nonsmoothness present in the system. We implement this new WTPS optimal control
approach to solve the problem numerically, including (i) different wind speed
profiles for testing the system response; (ii) real-world wind data; and (iii)
a comparison with smoothing and naive approaches. Results show the
effectiveness of the proposed approach. | arXiv |
While deep learning has revolutionized computer-aided drug discovery, the AI
community has predominantly focused on model innovation and placed less
emphasis on establishing best benchmarking practices. We posit that without a
sound model evaluation framework, the AI community's efforts cannot reach their
full potential, thereby slowing the progress and transfer of innovation into
real-world drug discovery. Thus, in this paper, we seek to establish a new gold
standard for small molecule drug discovery benchmarking, WelQrate.
Specifically, our contributions are threefold: WelQrate Dataset Collection - we
introduce a meticulously curated collection of 9 datasets spanning 5
therapeutic target classes. Our hierarchical curation pipelines, designed by
drug discovery experts, go beyond the primary high-throughput screen by
leveraging additional confirmatory and counter screens along with rigorous
domain-driven preprocessing, such as Pan-Assay Interference Compounds (PAINS)
filtering, to ensure the high-quality data in the datasets; WelQrate Evaluation
Framework - we propose a standardized model evaluation framework considering
high-quality datasets, featurization, 3D conformation generation, evaluation
metrics, and data splits, which provides a reliable benchmarking for drug
discovery experts conducting real-world virtual screening; Benchmarking - we
evaluate model performance through various research questions using the
WelQrate dataset collection, exploring the effects of different models, dataset
quality, featurization methods, and data splitting strategies on the results.
In summary, we recommend adopting our proposed WelQrate as the gold standard in
small molecule drug discovery benchmarking. The WelQrate dataset collection,
along with the curation codes, and experimental scripts are all publicly
available at WelQrate.org. | arXiv |
We computed for the first time the $\tau$ data-driven Euclidean windows for
the hadronic vacuum polarization contribution to the muon g-2. We showed that
$\tau$-based results agree with the available lattice window evaluations and
with the full result. On the intermediate window, where all lattice evaluations
are rather precise and agree, $\tau$-based results are compatible with them.
This is particularly interesting, given that the disagreement of the $e^+e^-$
data-driven result with the lattice values in this window is the main cause for
their discrepancy, affecting the interpretation of the $a_\mu$ measurement in
terms of possible new physics. | arXiv |
A recent result by Kardo\v{s}, M\'a\v{c}ajov\'a and Zerafa [J. Comb. Theory,
Ser. B. 160 (2023) 1--14] related to the famous Berge-Fulkerson conjecture
implies that given an arbitrary set of pairwise edge-disjoint odd cycles, say
$\mathcal O$, in a bridgeless cubic graph, there exists a $1$-factor
intersecting all cycles in $\mathcal O$ in at least one edge. This remarkable
result opens up natural generalizations in the case of an $r$-regular graph $G$
and a $t$-factor $F$, with $r$ and $t$ being positive integers. In this paper,
we start the study of this problem by proving necessary and sufficient
conditions on $G$, $t$ and $r$ to assure the existence of a suitable $F$ for
any possible choice of the set $\mathcal O$. First of all, we show that $G$
needs to be $2$-connected. Under this additional assumption, we highlight how
the ratio $\frac{t}{r}$ seems to play a crucial role in assuring the existence
of a $t$-factor $F$ with the required properties by proving that $\frac{t}{r}
\geq \frac{1}{3}$ is a further necessary condition. We suspect that this
condition is also sufficient, and we confirm it in the case
$\frac{t}{r}=\frac{1}{3}$, generalizing the case $t=1$ and $r=3$ proved by
Kardo\v{s}, M\'a\v{c}ajov\'a, Zerafa, and in the case $\frac{t}{r}=\frac{1}{2}$
with $t$ even. Finally, we provide further results in the case of cycles of
arbitrary length. | arXiv |
We consider fair resource allocation in sequential decision-making
environments modeled as weakly coupled Markov decision processes, where
resource constraints couple the action spaces of $N$ sub-Markov decision
processes (sub-MDPs) that would otherwise operate independently. We adopt a
fairness definition using the generalized Gini function instead of the
traditional utilitarian (total-sum) objective. After introducing a general but
computationally prohibitive solution scheme based on linear programming, we
focus on the homogeneous case where all sub-MDPs are identical. For this case,
we show for the first time that the problem reduces to optimizing the
utilitarian objective over the class of "permutation invariant" policies. This
result is particularly useful as we can exploit Whittle index policies in the
restless bandits setting while, for the more general setting, we introduce a
count-proportion-based deep reinforcement learning approach. Finally, we
validate our theoretical findings with comprehensive experiments, confirming
the effectiveness of our proposed method in achieving fairness. | arXiv |
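A minimal sketch of the generalized Gini welfare criterion adopted above, contrasted with the utilitarian sum; the particular decreasing weight vector is an assumption for illustration:

```python
import numpy as np

def generalized_gini(utilities, weights=None):
    """Generalized Gini social welfare: non-increasing weights applied to
    utilities sorted in increasing order, so worst-off sub-MDPs count most."""
    u = np.sort(np.asarray(utilities, dtype=float))       # ascending: worst-off first
    if weights is None:
        weights = 1.0 / 2.0 ** np.arange(len(u))           # strictly decreasing weights
    return float(np.dot(weights, u))

# The utilitarian sum is indifferent between the two profiles below (total 10),
# whereas the generalized Gini criterion prefers the more even one.
print(generalized_gini([1.0, 9.0]), generalized_gini([5.0, 5.0]))   # 5.5 vs 7.5
```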
Guessing Codeword Decoding (GCD) is a recently proposed soft-input forward
error correction decoder for arbitrary linear forward error correction codes.
Inspired by recent proposals that leverage binary linear codebook structure to
reduce the number of queries made by Guessing Random Additive Noise Decoding
(GRAND), for binary linear codes that include one full single parity-check
(SPC) bit, we show that it is possible to reduce the number of queries made by
GCD by a factor of up to 2 without impacting decoding precision. The greatest
guesswork reduction is realized at lower SNRs, where the decoder output is
usually correct but guesswork is most burdensome. Codes without an SPC can be
modified to include one by swapping a column of the generator matrix for an
all-ones column to obtain a decoding complexity advantage, and we demonstrate
that this can often be done without losing decoding precision. To practically
avail of the complexity advantage, a noise effect pattern generator capable of
producing sequences for given Hamming weights, such as the one underlying
ORBGRAND, is necessary. | arXiv |
Self-assembly of nanoscale synthetic subunits is a promising bottom-up
strategy for fabrication of functional materials. Here, we introduce a design
principle for DNA origami nanoparticles of 50-nm size, exploiting modularity,
to make a family of versatile subunits that can target an abundant variety of
self-assembled structures. The subunits are based on a core module that remains
constant among all the subunits. Variable bond modules and angle modules are
added to the exterior of the core to control interaction specificity, strength
and structural geometry. A series of subunits with designed bond/angle modules
are demonstrated to self-assemble into a rich variety of structures with
different Gaussian curvatures, exemplified by sheets, spherical shells, and
tubes. The design features flexible joints implemented using single-stranded
angle modules between adjacent subunits whose mechanical properties, such as
bending elastic moduli, are inferred from cryo-EM. Our findings suggest that
incorporating a judicious amount of flexibility in the bond provides error
tolerances in design and fabrication while still guaranteeing target fidelity.
Lastly, while increasing flexibility could introduce greater variability and
potential errors in assembly, these effects can be counterbalanced by
increasing the number of distinct bonds, thereby allowing for precise targeting
of specific structural binding angles within a broad range of configurations. | arXiv |
In a recent paper (Gonzalez et al., 2023), we investigated the motion of
grains within a granular bed sheared by a viscous fluid, and showed how
segregation and hardening occur in the fluid- (bedload) and solid-like (creep)
regions. In this paper, we inquire further into the mechanisms leading to grain
segregation in a bidisperse bed, and how the forces are distributed. For that,
we carried out numerical simulations at the grain scale by using CFD-DEM
(computational fluid dynamics-discrete element method), with which we were able
to track the positions, velocities, forces, and solid contacts undergone by
each grain. We show that during the upward motion of large grains the direct
action of fluid forces is significant in the middle and upper parts of the
bedload layer, while only contact forces are significant in the creep layer and
lower part of the bedload layer. We also show that in all cases the particles
experience a moment about a -45 degree contact point (with respect to the
horizontal plane) when migrating upward, whether entrained by other contacts or
directly by the fluid. In addition, we show the variations in the average
solid-solid contacts, and how forces caused either by solid-solid contacts or
directly by the fluid are distributed within the bed. Our results provide the
relationship between force propagation and reorganization of grains in sheared
beds, explaining mechanisms found, for example, in river beds and landslides. | arXiv |
This study aimed to identify and analyze the characteristics of highly cited
publications in the field of artificial intelligence within the Science
Citation Index Expanded from 1991 to 2022. The assessment focused on documents
that garnered 100 citations or more from the Web of Science Core Collection,
spanning from their publication date to the end of 2022. Various aspects of
these documents were analyzed, encompassing document types, the distribution of
annual production, the average number of citations per publication, Web of
Science categories, and journals. Moreover, the publication performance of
countries, institutions, and authors underwent evaluation through six
publication indicators and associated citation metrics. To facilitate a
comprehensive comparison of the authors' research performance, the Y-index was
employed. The outcomes of the analysis revealed that a majority of the highly
cited articles were published within the Web of Science categories of
"artificial intelligence computer science" and "electrical and electronic
engineering". Notably, the United States exhibited dominance across all six
publication indicators. Within the realm of average citations per publication,
the United Kingdom emerged as a leader for independent articles, first-author
articles, and corresponding-author articles. Exceptionally, the Chinese Academy
of Sciences in China and the Massachusetts Institute of Technology (MIT) in the
USA contributed significantly. The significant impact of highly cited articles
extended to the output of Stanford University in the USA. B.L. Bassler
published the most highly cited articles. Upon employing the Y-index analysis,
J.E.P. Santos was identified as having the highest potential for publication.
In addition to the primary analysis, this study also presented nine classic
articles that have left an indelible mark on artificial intelligence research. | arXiv |
Machine learning (ML) defenses protect against various risks to security,
privacy, and fairness. Real-life models need simultaneous protection against
multiple different risks, which necessitates combining multiple defenses. But
combining defenses with conflicting interactions in an ML model can be
ineffective, incurring a significant drop in the effectiveness of one or more
defenses being combined. Practitioners need a way to determine if a given
combination can be effective. Experimentally identifying effective combinations
can be time-consuming and expensive, particularly when multiple defenses need
to be combined. We need an inexpensive, easy-to-use combination technique to
identify effective combinations.
Ideally, a combination technique should be (a) accurate (correctly identifies
whether a combination is effective or not), (b) scalable (allows combining
multiple defenses), (c) non-invasive (requires no change to the defenses being
combined), and (d) general (is applicable to different types of defenses).
Prior works have identified several ad-hoc techniques but none satisfy all the
requirements above. We propose a principled combination technique, Def\Con, to
identify effective defense combinations. Def\Con meets all requirements,
achieving 90% accuracy on eight combinations explored in prior work and 81% in
30 previously unexplored combinations that we empirically evaluate in this
paper. | arXiv |
We build and discuss a low energy effective field theory for anisotropic
anti-ferromagnets in presence of an external magnetic field. Such an effective
theory is simple yet rich, and features a number of phenomena such as the
appearance of gapped Goldstones, pseudo-Goldstones and a "spin flop" phase
transition, all within the regime of validity of the theory. We also discuss in
detail, the quantization procedure of the free theory in the presence of a
magnetic field, which is made non-trivial by the presence of a single-time
derivative term. This class of materials makes a precious test field for exotic
phenomena in quantum field theory. Moreover, we explicitly perform the matching
of the effective theory to the short distance theory of a specific
anti-ferromagnet, namely, nickel oxide. The latter is particularly relevant in
light of recent proposals of employing this material towards the hunt for light
dark matter. As a byproduct of our study, we also re-evaluate the role played
by discrete symmetries in magnetic materials, presenting it in a way that is
completely consistent with the proper low energy EFT ideology. | arXiv |
Given a source image of a clothed person (an image subject), AI-based
nudification applications can produce nude (undressed) images of that person.
Moreover, not only do such applications exist, but there is ample evidence of
the use of such applications in the real world and without the consent of an
image subject. Still, despite the growing awareness of the existence of such
applications and their potential to violate the rights of image subjects and
cause downstream harms, there has been no systematic study of the nudification
application ecosystem across multiple applications. We conduct such a study
here, focusing on 20 popular and easy-to-find nudification websites. We study
the positioning of these web applications (e.g., finding that most sites
explicitly target the nudification of women, not all people), the features that
they advertise (e.g., ranging from undressing-in-place to the rendering of
image subjects in sexual positions, as well as differing user-privacy options),
and their underlying monetization infrastructure (e.g., credit cards and
cryptocurrencies). We believe this work will empower future, data-informed
conversations -- within the scientific, technical, and policy communities -- on
how to better protect individuals' rights and minimize harm in the face of
modern (and future) AI-based nudification applications. Content warning: This
paper includes descriptions of web applications that can be used to create
synthetic non-consensual explicit AI-created imagery (SNEACI). This paper also
includes an artistic rendering of a user interface for such an application. | arXiv |
The quantum approximate optimization algorithm (QAOA) is a near-term quantum
algorithm aimed at solving combinatorial optimization problems. Since its
introduction, various generalizations have emerged, spanning modifications to
the initial state, phase unitaries, and mixer unitaries. In this work, we
present an analytical study of broad families of QAOA variants. We begin by
examining a family of QAOA with product mixers, which includes single-body
mixers parametrized by multiple variational angles, and derive exact analytical
expressions for the cost expectation on weighted problem graphs in the
single-layer ansatz setting. We then analyze a family of QAOA that employs
many-body Grover-type mixers, deriving analogous analytical expressions for
weighted problem hypergraphs in the setting of arbitrarily many circuit ansatz
layers. For both families, we allow individual phase angles for each node and
edge (hyperedge) in the problem graph (hypergraph). Our results reveal that, in
contrast to product mixers, the Grover mixer is sensitive to contributions from
cycles of all lengths in the problem graph, exhibiting a form of non-locality.
Our study advances the understanding of QAOA's behavior in general scenarios,
providing a foundation for further theoretical exploration. | arXiv |
We present the first set of high-resolution, hydrodynamical cosmological
simulations of galaxy formation in a Fuzzy Dark Matter (FDM) framework. These
simulations were performed with a new version of the GASOLINE2 code, known as
FUZZY-GASOLINE, which can simulate quantum FDM effects alongside a
comprehensive baryonic model that includes metal cooling, star formation,
supernova feedback, and black hole physics, previously used in the NIHAO
simulation suite. Using thirty zoom-in simulations of galaxies with halo masses
in the range $10^9 \lesssim M_{\text{halo}}/M_{\odot} \lesssim 10^{11}$, we
explore how the interplay between FDM quantum potential and baryonic processes
influences dark matter distributions and observable galaxy properties. Our
findings indicate that both baryons and low-mass FDM contribute to core
formation within dark matter profiles, though through distinct mechanisms:
FDM-induced cores emerge in all haloes, particularly within low-mass systems at
high redshift, while baryon-driven cores form within a specific mass range and
at low redshift. Despite these significant differences in dark matter
structure, key stellar observables such as star formation histories and
velocity dispersion profiles remain remarkably similar to predictions from the
Cold Dark Matter (CDM) model, making it challenging to distinguish between CDM
and FDM solely through stellar observations. | arXiv |
The mass of galaxy clusters is a critical quantity for probing cluster
cosmology and testing theories of gravity, but its measurement could be biased
since assumptions are inevitable. In this paper, we employ and compare two mass
proxies for galaxy clusters: thermodynamics of the intracluster medium and
kinematics of member galaxies. We select 22 galaxy clusters from the cluster
catalog in the first SRG/eROSITA All-Sky Survey (eRASS1) that have sufficient
optical and near-infrared observations. We generate multi-band images in the
energy range of (0.3, 7) keV for each cluster, and derive their temperature
profiles, gas mass profiles and hydrostatic mass profiles using a parametric
approach that does not assume dark matter halo models. With spectroscopically
confirmed member galaxies collected from multiple surveys, we numerically solve
the spherical Jeans equation for their dynamical mass profiles. Our results
quantify the correlation between dynamical mass and line-of-sight velocity
dispersion with an rms scatter of 0.14 dex. We find the two mass proxies lead
to roughly the same total mass, with no observed systematic bias. As such, the
$\sigma_8$ tension is not specific to hydrostatic mass or weak lensing shears,
but also appears with galaxy kinematics. We also compare our hydrostatic masses
with the latest weak lensing masses inferred with scaling relations. The
comparison shows the weak lensing mass is significantly higher than our
hydrostatic mass by $\sim$110%. This might explain the significantly larger
value of $\sigma_8$ from the latest measurement using eRASS1 clusters than
almost all previous estimates in the literature. Finally, we test the radial
acceleration relation (RAR) established in disk galaxies. We confirm the
missing baryon problem in the inner region of galaxy clusters using three
independent mass proxies for the first time. | arXiv |
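For reference, the spherical Jeans equation solved for the member-galaxy kinematics takes, in its standard form, $$\frac{\mathrm{d}\!\left(\nu\,\sigma_r^{2}\right)}{\mathrm{d}r}+\frac{2\beta(r)}{r}\,\nu\,\sigma_r^{2}=-\,\nu\,\frac{G\,M(<r)}{r^{2}},$$ where $\nu(r)$ is the tracer (member-galaxy) number density, $\sigma_r(r)$ the radial velocity dispersion, $\beta(r)$ the velocity anisotropy and $M(<r)$ the enclosed dynamical mass; the specific parametrizations adopted in the analysis above are not reproduced here.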
We propose a model for a finite-size particle detector, which allows us to
derive its stress-energy tensor. This tensor is obtained from a covariant
Lagrangian that describes not only the quantum field that models the detector,
$\phi_{\text{d}}$, but also the systems responsible for its localization: a
complex scalar field, $\psi_{\text{c}}$, and a perfect fluid. The local
interaction between the detector and the complex field ensures the square
integrability of the detector modes, while the fluid serves to define the
spatial profile of $\psi_{\text{c}}$, localizing it in space. We then
demonstrate that, under very general conditions, the resulting energy tensor --
incorporating all components of the system -- is physically reasonable and
satisfies the energy conditions. | arXiv |
Given any positive integer $r$, Nahm's problem is to determine all $r\times r$
rational positive definite matrices $A$, $r$-dimensional rational vectors $B$
and rational scalars $C$ such that the rank $r$ Nahm sum associated with
$(A,B,C)$ is modular. Around 2007, Zagier conjectured that if the rank $r$ Nahm
sum for $(A,B,C)$ is modular, then so is the dual Nahm sum associated with
$(A^{-1},A^{-1}B,B^\mathrm{T} A^{-1}B/2-{r}/{24}-C)$. We construct some
explicit rank four Nahm sums which are modular while their duals are not
modular. This provides counterexamples to Zagier's duality conjecture. | arXiv |
One potential route toward fault-tolerant universal quantum computation is to
use non-Abelian topological codes. In this work, we investigate how to achieve
this goal with the quantum double model $\mathcal{D}(S_3)$ -- a specific
non-Abelian topological code. By embedding each on-site Hilbert space into a
qubit-qutrit pair, we give an explicit construction of the circuits for
creating, moving, and locally measuring all non-trivial anyons. We also design
a specialized anyon interferometer to measure the total charge remotely for
well-separated anyons; this avoids fusing them together. These protocols enable
the implementation of a universal gate set proposed by Cui et al. and active
quantum error correction of the circuit-level noise during the computation
process. To further reduce the error rate and facilitate error correction, we
encode each physical degree of freedom of $\mathcal{D}(S_3)$ into a novel,
quantum, error-correcting code, enabling fault-tolerant realization, at the
logical level, of all gates in the anyon manipulation circuits. Our proposal
offers a promising path to realize universal topological quantum computation in
the NISQ era. | arXiv |
In the high energy limit, $s\gg -t$, amplitudes in planar gauge theories
Reggeize, with power law behavior $\big( \frac{s}{-t} \big)^{\alpha(t)}$
governed by the Regge trajectory $\alpha(t)$. Beyond the planar limit this
simplicity is violated by "Regge cuts", for which practical organizational
principles are still being developed. We use a top-down effective field theory
organization based on color projection in the $t$ channel and rapidity
evolution equations for collinear impact factors, to sum large $s\gg -t$
logarithms for Regge cut contributions. The results are matrix equations which
are closed within a given color channel. To illustrate the method we derive in
QCD with $SU(N_c)$ for the first time a closed 6$\times$6 evolution equation
for the "decupletons" in the $\text{10}\oplus\overline{\text{10}}$ Regge color
channel, a 2$\times$2 evolution equation for the "triantapentons" in the
$\text{35}\oplus\overline{\text{35}}$ color channel, and a scalar evolution
equation for the "tetrahexaconton" in the 64 color channel. More broadly, our
approach allows us to describe generic Reggeization phenomena in non-planar
gauge theories, providing valuable data for the all loop structure of
amplitudes beyond the planar limit. | arXiv |
In this article, we introduce and analyse some statistical properties of a
class of models of random landscapes of the form ${\cal H}({\bf
x})=\frac{\mu}{2}{\bf x}^2+\sum_{l=1}^M \phi_l({\bf k}_l\cdot {\bf x}), \, \,
{\bf x}\in \mathbb{R}^N,\,\, \mu>0 $ where both the functions $\phi_l(z)$ and
vectors ${\bf k}_l$ are random. An important example of such landscape
describes superposition of $M$ plane waves with random amplitudes, directions
of the wavevectors, and phases, further confined by a parabolic potential of
curvature $\mu$.
Our main efforts are directed towards analysing the landscape features in the
limit $N\to \infty, M\to \infty$ keeping $\alpha=M/N$ finite. In such a limit
we find (i) the rates of asymptotic exponential growth with $N$ of the mean
number of all critical points and of local minima known as the annealed
complexities and (ii) the expression for the mean value of the deepest
landscape minimum (the ground-state energy). In particular, for the latter we
derive the Parisi-like optimisation functional and analyse conditions for the
optimiser to reflect various phases for different values of $\mu$ and $\alpha$:
replica-symmetric, one-step and full replica symmetry broken, as well as
criteria for continuous, Gardner and random first order transitions between
different phases. | arXiv |
Several statistical models for regression of a function $F$ on $\mathbb{R}^d$
without the statistical and computational curse of dimensionality exist, for
example by imposing and exploiting geometric assumptions on the distribution of
the data (e.g. that its support is low-dimensional), or strong smoothness
assumptions on $F$, or a special structure of $F$. Among the latter, compositional
models, which assume $F=f\circ g$ with $g$ mapping to $\mathbb{R}^r$ with $r\ll d$,
have been studied; these include classical single- and multi-index models and
recent works on neural networks. While the case where $g$ is linear is rather
well-understood, much less is known when $g$ is nonlinear, and in particular
for which $g$'s the curse of dimensionality in estimating $F$, or both $f$ and
$g$, may be circumvented. In this paper, we consider a model
$F(X):=f(\Pi_\gamma X)$, where $\Pi_\gamma:\mathbb{R}^d\to[0,\mathrm{len}_\gamma]$
is the closest-point projection onto the parameter of a regular curve $\gamma:
[0,\mathrm{len}_\gamma]\to\mathbb{R}^d$ and $f:[0,\mathrm{len}_\gamma]\to\mathbb{R}$.
The input data $X$ is not low-dimensional and may be far from $\gamma$,
conditioned only on $\Pi_\gamma(X)$ being well-defined. The distribution of the
data, $\gamma$ and
$f$ are unknown. This model is a natural nonlinear generalization of the
single-index model, which corresponds to $\gamma$ being a line. We propose a
nonparametric estimator, based on conditional regression, and show that under
suitable assumptions, the strongest of which is that $f$ is coarsely monotone,
it can achieve the one-dimensional optimal minimax rate for non-parametric
regression, up to the level of noise in the observations, and be
constructed in time $\mathcal{O}(d^2n\log n)$. All the constants in the
learning bounds, in the minimal number of samples required for our bounds to
hold, and in the computational complexity are at most low-order polynomials in
$d$. | arXiv |
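The following sketch is only an illustration of the model and of conditional regression in general, not the authors' estimator; the helix-like curve, the monotone link $f$, and the noise levels are all assumptions made for this example.

```python
# Illustration of the curve-projection model: data scattered around an assumed
# curve gamma in R^d, closest-point projection onto a discretization of gamma,
# then a simple binned (conditional) regression of Y on the 1-D parameter.
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 2000

t_grid = np.linspace(0.0, 1.0, 500)                    # arclength-like parameter
gamma = np.zeros((t_grid.size, d))                     # an assumed helix-like curve
gamma[:, 0], gamma[:, 1], gamma[:, 2] = np.cos(3 * t_grid), np.sin(3 * t_grid), t_grid

def f(s):                                              # assumed monotone link
    return np.tanh(3.0 * (s - 0.5))

X = gamma[rng.integers(0, t_grid.size, n)] + 0.3 * rng.normal(size=(n, d))
proj_idx = np.argmin(((X[:, None, :] - gamma[None, :, :]) ** 2).sum(-1), axis=1)
s = t_grid[proj_idx]                                   # Pi_gamma(X), the projected parameter
Y = f(s) + 0.1 * rng.normal(size=n)

bins = np.linspace(0.0, 1.0, 26)                       # 25 bins on the parameter range
which = np.digitize(s, bins) - 1
f_hat = np.array([Y[which == b].mean() if np.any(which == b) else np.nan
                  for b in range(bins.size - 1)])      # binned conditional-mean estimate of f
```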
Background: Open-Source Pre-Trained Models (PTMs) and datasets provide
extensive resources for various Machine Learning (ML) tasks, yet these
resources lack a classification tailored to Software Engineering (SE) needs.
Aims: We apply an SE-oriented classification to PTMs and datasets on a popular
open-source ML repository, Hugging Face (HF), and analyze the evolution of PTMs
over time. Method: We conducted a repository mining study. We started with a
systematically gathered database of PTMs and datasets from the HF API. Our
selection was refined by analyzing model and dataset cards and metadata, such
as tags, and confirming SE relevance using Gemini 1.5 Pro. All analyses are
replicable, with a publicly accessible replication package. Results: The most
common SE task among PTMs and datasets is code generation, with a primary focus
on software development and limited attention to software management. Popular
PTMs and datasets mainly target software development. Among ML tasks, text
generation is the most common in SE PTMs and datasets. There has been a marked
increase in PTMs for SE since 2023 Q2. Conclusions: This study underscores the
need for broader task coverage to enhance the integration of ML within SE
practices. | arXiv |
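A minimal sketch of the kind of tag-based SE filtering such a repository-mining pipeline might use; the keyword map and the metadata records below are illustrative placeholders (in the study, the metadata comes from the Hugging Face API and borderline relevance is confirmed with Gemini 1.5 Pro).

```python
# Illustrative tag-based SE classification over model/dataset card metadata.
# The task keywords and the example records are placeholders, not the study's
# actual taxonomy or data; real metadata would come from the Hugging Face API.
SE_TASK_KEYWORDS = {
    "code generation": {"code", "code-generation", "program-synthesis"},
    "defect detection": {"defect", "bug", "vulnerability"},
    "code summarization": {"code-summarization", "docstring"},
}

def classify_se_tasks(card_tags):
    """Return the SE tasks whose keyword sets intersect the card's tags."""
    tags = {t.lower() for t in card_tags}
    return [task for task, kws in SE_TASK_KEYWORDS.items() if tags & kws]

records = [
    {"id": "example/codegen-model", "tags": ["text-generation", "code"]},
    {"id": "example/sentiment-model", "tags": ["text-classification"]},
]
se_records = [(r["id"], classify_se_tasks(r["tags"])) for r in records
              if classify_se_tasks(r["tags"])]
print(se_records)   # [('example/codegen-model', ['code generation'])]
```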
In this article, we study the effect of invisible neutrino decay of the third
neutrino state for accelerator neutrino experiments at two different baselines,
1300 km with a liquid argon time projection chamber (LArTPC) detector (similar
to DUNE) and 2588 km with a water Cherenkov detector (similar to P2O). For such
baselines, the matter effect starts to become important. Our aim is to
ascertain the sensitivity to mass hierarchy and octant of $\theta_{23}$ in
these two experiments in the presence of a decaying neutrino state. We compare
and contrast the results of the two experimental setups. We find that, in
general, the hierarchy sensitivity decreases in the presence of decay. However,
if we consider decay only in the test hierarchy (i.e., the hierarchy opposite to
the true one), then in the 2588 km setup the hierarchy sensitivity for a true
inverted hierarchy (IH) is larger than in the no-decay case. We also study how
the hierarchy sensitivity varies
with true $\theta_{23}$. We find that the dominant muon background in P2O plays
an important role in how the hierarchy sensitivity depends on $\theta_{23}$.
The octant sensitivity for both setups increases in the presence of decay
except for the LArTPC setup in the case of true $\theta_{23}=49^\circ$. To understand
the octant sensitivity results in the two setups, we check the synergy in
sensitivity between electron and muon channels as a function of test
$\theta_{23}$. We also study the degeneracies in the test
$\theta_{23}-\delta_{CP}$ plane and find that combined analysis of the two
setups removes all the degeneracies in the test $\theta_{23}-\delta_{CP}$ plane
at $5\sigma$ significance. | arXiv |
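For orientation only, and as an assumption about conventions rather than an equation taken from this abstract, invisible decay of the third mass eigenstate is commonly parametrized in vacuum by damping its contribution to the oscillation amplitude, \[
P_{\alpha\beta} = \Big| \sum_{i=1}^{3} U_{\beta i}\, U^{*}_{\alpha i}\,
e^{-i m_i^2 L/2E}\, e^{-\alpha_i L/2E} \Big|^2, \qquad
\alpha_3 \equiv \frac{m_3}{\tau_3}, \quad \alpha_1=\alpha_2=0,
\] so that sensitivity to decay enters through the combination $\alpha_3 L/E$; matter effects, which are central at the 1300 km and 2588 km baselines, modify this simple picture.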
The concept of Vapnik-Chervonenkis (VC) density is pivotal across various
mathematical fields, including model theory, discrete geometry, and probability
theory. In this paper, we introduce a topological generalization of VC-density.
Let $Y$ be a topological space and
$\mathcal{X},\mathcal{Z}^{(0)},\ldots,\mathcal{Z}^{(q-1)}$ be families of
subspaces of $Y$. We define a two-parameter family of numbers,
$\mathrm{vcd}^{p,q}_{\mathcal{X},\overline{\mathcal{Z}}}$, which we refer to as
the degree $p$, order $q$ VC-density of the pair \[
(\mathcal{X},\overline{\mathcal{Z}} =
(\mathcal{Z}^{(0)},\ldots,\mathcal{Z}^{(q-1)})). \] The classical notion of
VC-density within this topological framework can be recovered by setting $p=0,
q=1$. For $p=0, q > 0$, we recover Shelah's notion of higher-order VC-density
for $q$-dependent families. Our definition introduces a new notion when $p>0$.
Our main result establishes that in any model of these theories \[
\mathrm{vcd}^{p,q}_{\mathcal{X},\overline{\mathcal{Z}}} \leq (p+q) \dim X. \]
This result generalizes known VC-density bounds in these structures, extending
them in multiple ways, as well as providing a uniform proof paradigm applicable
to all of them. We give examples to show that our bounds are optimal. We
present combinatorial applications of our higher-degree VC-density bounds,
deriving topological analogs of well-known results such as the existence of
$\varepsilon$-nets and the fractional Helly theorem. We show that with certain
restrictions, these results extend to our higher-degree topological setting. | arXiv |
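For readers unfamiliar with the classical notion, one standard combinatorial formulation of VC-density via the shatter function is \[
\pi_{\mathcal{X}}(n) = \max_{A \subseteq Y,\, |A| = n} \big| \{ X \cap A : X \in \mathcal{X} \} \big|,
\qquad
\mathrm{vcd}(\mathcal{X}) = \inf\{ r > 0 : \pi_{\mathcal{X}}(n) = O(n^{r}) \};
\] whether the paper's $p=0$, $q=1$ case matches this or the dual, model-theoretic variant depends on its conventions, so this is included only for orientation. For example, half-spaces in $\mathbb{R}^m$ have VC-density $m$, matching the linear-in-dimension flavor of the bound above.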
We holographically study quantum chaos in hyperscaling-violating Lifshitz
(HVL) theories (with charge). Particularly, we present a detailed computation
of the out-of-time ordered correlator (OTOC) via the shockwave analysis in the
bulk HVL geometry with a planar horizon topology. We also compute the butterfly
velocity ($v_{B}$) using the entanglement wedge reconstruction and find that
the result matches the one obtained from shockwave analysis. Furthermore, we
analyze in detail the behavior of $v_{B}$ with respect to the dynamical
critical exponent ($z$), the hyperscaling-violating parameter ($\theta$), the
charge ($Q$) and the horizon radius ($r_{h}$). Interestingly, we find
non-monotonic behavior of $v_{B}$ with respect to $z$ (in the allowed region
and for certain, but not all, fixed permissible values of $\theta$, $Q$ and
$r_{h}$) and with respect to $\theta$ (in the allowed region and for certain,
but not all, fixed permissible values of $z$, $Q$ and $r_{h}$). Moreover,
$v_{B}$ monotonically decreases with increasing charge (for all fixed,
permissible values of $z$, $\theta$ and $r_{h}$), whereas it monotonically
increases with $r_{h}$ (for all fixed, permissible values of $z$, $\theta$ and
$Q$). Unpacking these features can offer
some valuable insights into the chaotic nature of HVL theories. | arXiv |
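As background, and as a schematic form standard in holographic shockwave computations rather than a result specific to this article, the OTOC of generic local operators in a maximally chaotic thermal state behaves as \[
\frac{\langle W(t,\vec{x})\, V(0)\, W(t,\vec{x})\, V(0) \rangle_{\beta}}{\langle WW \rangle_{\beta}\, \langle VV \rangle_{\beta}}
\approx 1 - \frac{c}{N^{2}}\,
\exp\!\Big[ \lambda_{L} \Big( t - t_{*} - \frac{|\vec{x}|}{v_{B}} \Big) \Big],
\qquad \lambda_{L} = \frac{2\pi}{\beta},
\] so for Einstein-gravity duals, where the Lyapunov exponent saturates the chaos bound, the background data ($z$, $\theta$, $Q$, $r_{h}$) enter chiefly through the butterfly velocity $v_{B}$ and the scrambling time $t_{*}$.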
The ATLAS collaboration at the LHC has published inclusive cross-section
measurements for the single-top and $t\bar{t}$ production modes at center-of-mass
energies of $\sqrt{s} = 5.02, 8.16$, $13$, and $13.6$ TeV. Single-top
measurements are conducted in the $t$-channel and $tW$ channel. In addition to
the nominal cross-section measurements, various measurements of other
interesting observables such as the $V_{tb}$ element of the
Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix, the ratio of the inclusive cross-sections
between $tq$ and $t\overline{q}$, the ratio of inclusive cross-sections between
$t\overline{t}$ and $Z\rightarrow \ell\ell$, and the nuclear modification
factor (defined as the ratio of the inclusive $t\overline{t}$ cross-section in
heavy-ion collisions to the inclusive $t\overline{t}$ cross-section in $pp$
collisions) are also reported. These results are compared to their
corresponding SM predictions, calculated at (N)NLO in QCD. All results are in
good agreement with SM predictions. | arXiv |
During language model decoding, it is known that using higher temperature
sampling gives more creative responses, while lower temperatures are more
factually accurate. However, such models are commonly applied to general
instruction following, which involves both creative and fact-seeking tasks,
using a single fixed temperature across all examples and tokens. In this work,
we introduce Adaptive Decoding, a layer added to the model to select the
sampling temperature dynamically at inference time, at either the token or
example level, in order to optimize performance. To learn its parameters we
introduce Latent Preference Optimization (LPO), a general approach to training
discrete latent variables such as choices of temperature. Our method
outperforms all fixed decoding temperatures across a range of tasks that
require different temperatures, including UltraFeedback, Creative Story
Writing, and GSM8K. | arXiv |
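A minimal PyTorch-style sketch of the idea, as an illustrative stand-in rather than the authors' implementation: a small head maps the decoder hidden state to a distribution over a discrete set of candidate temperatures, one temperature is sampled per token, and the next token is drawn from the correspondingly rescaled softmax; the sampled temperature index is the kind of discrete latent choice that LPO is designed to train.

```python
# Illustrative token-level adaptive temperature selection; the head, the
# candidate temperature grid, and the random stand-in tensors are assumptions
# for this sketch, not the authors' architecture.
import torch
import torch.nn as nn

class AdaptiveDecodingHead(nn.Module):
    """Maps the decoder hidden state to a choice among discrete temperatures."""
    def __init__(self, hidden_dim: int, temperatures=(0.2, 0.6, 1.0, 1.4)):
        super().__init__()
        self.register_buffer("temperatures", torch.tensor(temperatures))
        self.selector = nn.Linear(hidden_dim, len(temperatures))

    def forward(self, hidden: torch.Tensor, token_logits: torch.Tensor):
        # hidden: (batch, hidden_dim); token_logits: (batch, vocab)
        temp_logits = self.selector(hidden)                        # (batch, n_temps)
        temp_idx = torch.distributions.Categorical(logits=temp_logits).sample()
        temp = self.temperatures[temp_idx].unsqueeze(-1)           # (batch, 1)
        probs = torch.softmax(token_logits / temp, dim=-1)         # temperature-scaled sampling
        next_token = torch.multinomial(probs, num_samples=1)       # (batch, 1)
        return next_token, temp_idx   # temp_idx is the discrete latent an LPO-style objective would train

# Random tensors standing in for a language model's hidden states and logits.
head = AdaptiveDecodingHead(hidden_dim=16)
tok, chosen_temp_idx = head(torch.randn(2, 16), torch.randn(2, 50))
```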
We present complete results for the hadronic vacuum polarization (HVP)
contribution to the muon anomalous magnetic moment $a_\mu$ in the short- and
intermediate-distance window regions, which account for roughly 10% and 35% of
the total HVP contribution to $a_\mu$, respectively. In particular, we perform
lattice-QCD calculations for the isospin-symmetric connected and disconnected
contributions, as well as corrections due to strong isospin-breaking. For the
short-distance window observables, we investigate the so-called log-enhancement
effects as well as the significant oscillations associated with staggered
quarks in this region. For the dominant, isospin-symmetric light-quark
connected contribution, we obtain $a^{ll,\,{\mathrm{SD}}}_{\mu}(\mathrm{conn.})
= 48.116(16)(94)[96] \times 10^{-10}$ and
$a^{ll,\,{\mathrm{W}}}_{\mu}(\mathrm{conn.}) = 207.06(17)(63)[66] \times
10^{-10}$. We use Bayesian model averaging combined with a global bootstrap to
fully estimate the covariance matrix between the individual contributions. Our
determinations of the complete window contributions are
$a^{{\mathrm{SD}}}_{\mu} = 69.01(2)(21)[21] \times 10^{-10}$ and
$a^{{\mathrm{W}}}_{\mu} = 236.57(20)(94)[96] \times 10^{-10}$. This work is
part of our ongoing effort to compute all contributions to HVP with an overall
uncertainty at the few permille level. | arXiv |
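For context, the window observables referred to here are conventionally built (following the RBC/UKQCD construction; the parameter values below are the community-standard choices, stated as an assumption rather than quoted from this work) by weighting the time-momentum representation of the HVP correlator $C(t)$ with smoothed step functions, \[
a_{\mu}^{\mathrm{W}} = \sum_{t} w_{t}\, C(t)\, \big[ \Theta(t,t_0,\Delta) - \Theta(t,t_1,\Delta) \big],
\qquad
\Theta(t,t',\Delta) = \tfrac{1}{2}\Big[ 1 + \tanh\frac{t-t'}{\Delta} \Big],
\] with $t_0 = 0.4\,\mathrm{fm}$, $t_1 = 1.0\,\mathrm{fm}$, $\Delta = 0.15\,\mathrm{fm}$, while the short-distance window uses the weight $1-\Theta(t,t_0,\Delta)$. The same weights can be applied to data-driven dispersive evaluations, which is what makes window quantities directly comparable across determinations.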
Supersymmetry (SUSY) addresses several problems of the Standard Model, such
as the naturalness problem and gauge coupling unification, and can provide
cosmologically viable dark matter candidates. SUSY must be broken at high
energy scales with mechanisms like gravity, anomaly, gauge mediation, etc. This
paper revisits the Gauge Mediated SUSY Breaking (GMSB) scenarios in the context
of data from the Large Hadron Collider (LHC) experiment. The ATLAS mono-photon
search with an integrated luminosity of 139 fb$^{-1}$ at the 13 TeV LHC, in the
context of a simplified General Gauge Mediation (GGM) scenario (which is a
phenomenological version of GMSB with an agnostic approach to the nature of the
hidden sector), relies on assumptions that do not hold across the entire
parameter space. We identify a few crucial assumptions regarding the decay
widths of SUSY particles into final states with gravitinos that affect the LHC
limits on the masses of the SUSY particles. Our study aims to reinterpret the
ATLAS constraints on the gluino-NLSP mass plane, considering all possible decay
modes of SUSY particles in a realistic GGM model. | arXiv |
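For orientation, a widely quoted leading-order expression (stated here as background, not taken from this article) for the width of a neutralino NLSP decaying to a photon and a gravitino is \[
\Gamma(\tilde{\chi}_{1}^{0} \to \gamma\, \tilde{G}) \simeq
\kappa_{\gamma}\, \frac{m_{\tilde{\chi}_{1}^{0}}^{5}}{16 \pi F^{2}},
\qquad
\kappa_{\gamma} = \big| N_{11} \cos\theta_{W} + N_{12} \sin\theta_{W} \big|^{2},
\] where $F$ is the effective SUSY-breaking scale felt by the gravitino and $N$ is the neutralino mixing matrix; the size of such widths and the associated branching ratios into gravitino final states are exactly the kind of assumptions whose validity across the GGM parameter space the study examines.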
We present a polynomial-time reduction from max-average constraints to the
feasibility problem for semidefinite programs. This shows that Condon's simple
stochastic games, stochastic mean payoff games, and in particular mean payoff
games and parity games can all be reduced to semidefinite programming. | arXiv |
Specifying all desirable properties of a language model is challenging, but
certain requirements seem essential. Given samples from an unknown language,
the trained model should produce valid strings not seen in training and be
expressive enough to capture the language's full richness. Otherwise,
outputting invalid strings constitutes "hallucination," and failing to capture
the full range leads to "mode collapse." We ask if a language model can meet
both requirements.
We investigate this within a statistical language generation setting, building
on Gold and Angluin. Here, the model receives random samples from a
distribution over an unknown language K, which belongs to a possibly infinite
collection of languages. The goal is to generate unseen strings from K. We say
the model generates from K with consistency and breadth if, as the training
size increases, its set of outputs converges to the set of all unseen strings in K.
Kleinberg and Mullainathan [KM24] asked if consistency and breadth in
language generation are possible. We answer this negatively: for a large class
of language models, including next-token prediction models, this is impossible
for most collections of candidate languages. This contrasts with [KM24]'s
result, showing consistent generation without breadth is possible for any
countable collection of languages. Our finding highlights that generation with
breadth fundamentally differs from generation without breadth.
As a byproduct, we establish near-tight bounds on the number of samples
needed for generation with or without breadth.
Finally, our results offer hope: consistent generation with breadth is
achievable for any countable collection of languages when negative examples
(strings outside K) are available alongside positive ones. This suggests that
post-training feedback, which encodes negative examples, can be crucial in
reducing hallucinations while limiting mode collapse. | arXiv |