text | split |
---|---|
We compute the spectra of anyon quasiparticles in all three super-selection
sectors of the Kitaev model (i.e., visons, fermions and bosons), perturbed by a
Zeeman field away from its exactly solvable limit, to gain insight into the
competition of its non-abelian spin liquid with other nearby phases, such as
the mysterious intermediate state observed in the antiferromagnetic model. Both
for the ferro- and antiferro-magnetic models we find that the fermions and
visons become gapless at nearly identical critical Zeeman couplings. In the
ferromagnetic model this is consistent with a direct transition into a
polarized state. In the anti-ferromagnetic model this implies that previous
theories of the intermediate phase viewed as a spin liquid with a different
fermion Chern number are inadequate, as they presume that the vison gap does
not close. In the antiferromagnetic model we also find that a bosonic
quasiparticle becomes gapless at nearly the same critical field as the fermions
and visons. This boson carries the quantum numbers of an anti-ferromagnetic
order parameter, suggesting that the intermediate phase has spontaneously
broken symmetry with this order. | arXiv |
The fermionic many-body problem in the strong correlation regime is
notoriously difficult to tackle. In a previous work (Phys. Rev. B 101, 045109
(2020)), we have proposed to extend the single-reference coupled-cluster (SRCC)
method to the strong correlation regime using low-rank tensor decompositions
(LRTD) to express the cluster operator, without truncating it with respect to
the number of excitations. For that purpose, we have proposed a new type of
LRTD called ``superpositions of tree-tensor networks'' (STTN), which use the
same set of building blocks to define all the tensors involved in the CC
equations, and combine different ``channels'', i.e. different types of pairing
among excited particles and holes, in the decomposition of a given tensor.
Those two principles are aimed at globally minimizing the total number of free
parameters required to accurately represent the ground state. In this work, we
show that STTN can indeed be compact and accurate representations of strongly
correlated ground states by using them to express the CC cluster operator
amplitudes and wave function coefficients of exact ground states of small
two-dimensional Hubbard clusters, at half-filling, up to three particle-hole
excitations. We show the compactness of STTN by using a number of free
parameters smaller than the number of equations in the CCSD approximation, i.e.
much smaller than the number of fitted tensor elements. We find that, for the
systems considered, the STTN are more accurate as the size of the system
increases and that combining different channels in the decompositions of the
most strongly correlated tensors is crucial to obtain good accuracy. | arXiv |
Physical reasoning is an important skill needed for robotic agents when
operating in the real world. However, solving such reasoning problems often
involves hypothesizing and reflecting over complex multi-body interactions
under the effect of a multitude of physical forces and thus learning all such
interactions poses a significant hurdle for state-of-the-art machine learning
frameworks, including large language models (LLMs). To study this problem, we
propose a new physical reasoning task and a dataset, dubbed TraySim. Our task
involves predicting the dynamics of several objects on a tray that is given an
external impact -- the domino effect of the ensuing object interactions and
their dynamics offers a challenging yet controlled setup, and the reasoning
goal is to infer the stability of the objects after the impact. To
solve this complex physical reasoning task, we present LLMPhy, a zero-shot
black-box optimization framework that leverages the physics knowledge and
program synthesis abilities of LLMs, and synergizes these abilities with the
world models built into modern physics engines. Specifically, LLMPhy uses an
LLM to generate code to iteratively estimate the physical hyperparameters of
the system (friction, damping, layout, etc.) via an implicit
analysis-by-synthesis approach using a (non-differentiable) simulator in the
loop and uses the inferred parameters to imagine the dynamics of the scene
towards solving the reasoning task. To show the effectiveness of LLMPhy, we
present experiments on our TraySim dataset to predict the steady-state poses of
the objects. Our results show that the combination of the LLM and the physics
engine leads to state-of-the-art zero-shot physical reasoning performance,
while demonstrating superior convergence against standard black-box
optimization methods and better estimation of the physical parameters. | arXiv |
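The abstract describes an analysis-by-synthesis loop in which an LLM proposes physical parameters and a non-differentiable simulator scores them. The sketch below is a minimal, hedged illustration of that loop; `propose_fn`, `simulate_fn`, and the parameter names are placeholder stand-ins, not the actual LLMPhy or TraySim interfaces.

```python
import random

def trajectory_mismatch(sim, obs):
    """Sum of squared differences between simulated and observed states."""
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

def optimize_physics(propose_fn, simulate_fn, observed, n_iters=10):
    """Black-box analysis-by-synthesis: propose parameters, simulate, score, repeat."""
    history, best = [], (None, float("inf"))
    for _ in range(n_iters):
        params = propose_fn(history)          # stand-in for the LLM's code/parameter proposal
        error = trajectory_mismatch(simulate_fn(params), observed)
        history.append((params, error))       # feedback the proposer can condition on
        if error < best[1]:
            best = (params, error)
    return best

# Toy stand-ins for demonstration only: the real system would prompt an LLM
# and roll out a physics engine here, then use the inferred parameters to
# "imagine" the post-impact dynamics and read off object stability.
observed = [0.0, 0.5, 1.0]
simulate = lambda p: [p["friction"] * t for t in (0, 1, 2)]
propose = lambda hist: {"friction": random.uniform(0.0, 1.0)}
print(optimize_physics(propose, simulate, observed))
```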
Last passage percolation (LPP) is a model of a directed metric and a
zero-temperature polymer where the main observable is a directed path evolving
in a random environment accruing as energy the sum of the random weights along
itself. When the environment has light tails and a fast decay of correlation,
the fluctuations of LPP are predicted to be explained by the
Kardar-Parisi-Zhang (KPZ) universality theory. However, the KPZ theory is not
expected to apply for many natural environments, particularly "critical" ones
exhibiting a hierarchical structure often leading to logarithmic correlations.
In this article, we initiate a novel study of LPP in such hierarchical
environments by investigating two particularly interesting examples. The first
is an i.i.d. environment but with a power-law distribution with an inverse
quadratic tail decay which is conjectured to be the critical point for the
validity of the KPZ scaling relation. The second is the Branching Random Walk
which is a hierarchical approximation of the two-dimensional Gaussian Free
Field. The second example may be viewed as a high-temperature directed version
of Liouville Quantum Gravity, which is a model of random geometry driven by the
exponential of a logarithmically-correlated field. Due to the underlying
fractal structure, LPP in such environments is expected to exhibit logarithmic
correction terms with novel critical exponents. While discussions about such
critical models appear in the physics literature, precise predictions about
exponents seem to be missing. Developing a framework based on multi-scale
analysis, we obtain bounds on such exponents and prove almost optimal
concentration results in all dimensions for both models. As a byproduct of our
analysis we answer a long-standing question of Martin concerning necessary and
sufficient conditions for the linear growth of the LPP energy in i.i.d.
environments. | arXiv |
Hybrid studies allow investigators to simultaneously study an intervention
effectiveness outcome and an implementation research outcome. In particular,
type 2 hybrid studies support research that places equal importance on both
outcomes rather than focusing on one and secondarily on the other (i.e., type 1
and type 3 studies). Hybrid 2 studies introduce the statistical issue of
multiple testing, complicated by the fact that they are typically also cluster
randomized trials. Standard statistical methods do not apply in this scenario.
Here, we describe the design methodologies available for validly powering
hybrid type 2 studies and producing reliable sample size calculations in a
cluster-randomized design with a focus on binary outcomes. Through a literature
search, 18 publications were identified that included methods relevant to the
design of hybrid 2 studies. Five methods were identified, two of which did not
account for clustering but are extended in this article to do so, namely the
combined outcomes approach and the single 1-degree of freedom combined test.
Procedures for powering hybrid 2 studies using these five methods are described
and illustrated using input parameters inspired by a study from the Community
Intervention to Reduce CardiovascuLar Disease in Chicago (CIRCL-Chicago)
Implementation Research Center. In this illustrative example, the intervention
effectiveness outcome was controlled blood pressure, and the implementation
outcome was reach. The conjunctive test resulted in higher power than the
popular p-value adjustment methods, and the newly extended combined outcomes
and single 1-DF test were found to be the most powerful among all of the tests. | arXiv |
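As a rough illustration of what "powering a hybrid type 2 study with a conjunctive test" means, the sketch below estimates by Monte Carlo the probability that both the effectiveness and the implementation outcome are significant in a cluster-randomized design with binary outcomes. The beta-binomial ICC parametrization, the individual-level z-test, the assumed independence of the two outcomes, and all input values are simplifying assumptions, not the paper's methods or the CIRCL-Chicago inputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_trial(p_ctrl, p_trt, icc, n_clusters, cluster_size):
    """One cluster-randomized binary outcome via a beta-binomial model (assumed ICC form).
    The analysis here is a naive individual-level two-proportion z-test; a real analysis
    would use a cluster-level or mixed-model test."""
    def arm(p):
        a, b = p * (1 - icc) / icc, (1 - p) * (1 - icc) / icc
        probs = rng.beta(a, b, n_clusters)
        return rng.binomial(cluster_size, probs).sum(), n_clusters * cluster_size
    x1, n1 = arm(p_trt)
    x0, n0 = arm(p_ctrl)
    p_pool = (x1 + x0) / (n1 + n0)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
    return stats.norm.sf((x1 / n1 - x0 / n0) / se)   # one-sided p-value

def conjunctive_power(eff, impl, alpha=0.05, n_sims=2000):
    """Power of requiring BOTH outcomes to be significant (outcomes treated as independent)."""
    hits = sum((simulate_trial(**eff) < alpha) and (simulate_trial(**impl) < alpha)
               for _ in range(n_sims))
    return hits / n_sims

# Illustrative inputs only.
eff = dict(p_ctrl=0.30, p_trt=0.40, icc=0.02, n_clusters=15, cluster_size=50)
impl = dict(p_ctrl=0.20, p_trt=0.35, icc=0.02, n_clusters=15, cluster_size=50)
print(conjunctive_power(eff, impl))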
We characterise absolutely dilatable completely positive maps on the space of
all bounded operators on a Hilbert space that are also bimodular over a given
von Neumann algebra as rotations by a suitable unitary on a larger Hilbert
space followed by slicing along the trace of an additional ancilla. We define
the local, quantum and approximately quantum types of absolutely dilatable
maps, according to the type of the admissible ancilla. We show that the local
absolutely dilatable maps admit an exact factorisation through an abelian
ancilla and show that they are limits in the point weak* topology of
conjugations by unitaries in the commutant of the given von Neumann algebra. We
show that the Connes Embedding Problem is equivalent to deciding if all
absolutely dilatable maps are approximately quantum. | arXiv |
Improving hydrocarbon production with hydraulic fracturing from
unconventional reservoirs requires investigating transport phenomena at the
single fracture level. In this study, we simulated geomechanical deformation,
fluid flow, and reactive transport to understand the effect of hydraulic
fracturing treatment on permeability evolution in shale rough-walled fractures.
Using concepts of fractional Brownian motion and surface roughness
characterizations with laser profilometer, we first generated three
rough-walled microfractures consistent with three laboratory experiments (i.e.,
E4, E5 and E6). After that, the generated microfractures were subjected to a
confining pressure in accord with experimental conditions, and geomechanical
deformation was simulated. We used the OpenFOAM software package to simulate
the fluid flow and permeability. By comparing the simulated permeability values
with the experimentally measured ones we found relative errors equal to 28, 15
and 200% respectively for the experiments E4, E5 and E6. After calibration,
however, the relative error dropped below 4%. We next simulated the reactive
transport using the GeoChemFOAM solver and investigated permeability evolution
in the deformed microfractures. We found that after 10 hrs of reactive
transport simulations, permeability increased by 47%, on average, in all cases
studied here. | arXiv |
We demonstrate that Gini coefficients can be used as unified metrics to
evaluate many-versus-many (all-to-all) similarity in vector spaces. Our
analysis of various image datasets shows that images with the highest Gini
coefficients tend to be the most similar to one another, while images with the
lowest Gini coefficients are the least similar. We also show that this
relationship holds true for vectorized text embeddings from various corpora,
highlighting the consistency of our method and its broad applicability across
different types of data. Additionally, we demonstrate that selecting machine
learning training samples that closely match the distribution of the testing
dataset is far more important than ensuring data diversity. Selection of
exemplary and iconic training samples with higher Gini coefficients leads to
significantly better model performance compared to simply having a diverse
training set with lower Gini coefficients. Thus, Gini coefficients can serve as
effective criteria for selecting machine learning training samples, with our
selection method outperforming random sampling methods in very sparse
information settings. | arXiv |
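A minimal sketch of how a Gini coefficient could serve as an all-to-all similarity score over a vector space, in the spirit of the abstract. The cosine-similarity choice and the clipping of negative similarities are assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def gini(values):
    """Gini coefficient of nonnegative values (standard Lorenz-curve form)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    lorenz = np.cumsum(v) / v.sum()
    return (n + 1 - 2 * lorenz.sum()) / n

def all_to_all_gini(embeddings):
    """Gini coefficient of each item's similarities to every other item."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T                      # cosine similarity matrix
    np.fill_diagonal(sims, 0.0)
    sims = np.clip(sims, 0.0, None)     # assumption: treat negative similarities as 0
    return np.array([gini(row) for row in sims])

# Toy usage: per the abstract's claim, high-Gini items are the ones most similar
# to the rest of the collection, and such items make good training samples.
emb = np.random.default_rng(1).normal(size=(100, 32))
scores = all_to_all_gini(emb)
print(scores.argsort()[-5:])            # indices of the five highest-Gini items
```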
In the past few years, the improved sensitivity and cadence of wide-field
optical surveys have enabled the discovery of several afterglows without
associated detected gamma-ray bursts (GRBs). We present the identification,
observations, and multiwavelength modeling of a recent such afterglow
(AT2023lcr), and model three literature events (AT2020blt, AT2021any, and
AT2021lfa) in a consistent fashion. For each event, we consider the following
possibilities as to why a GRB was not observed: 1) the jet was off-axis; 2) the
jet had a low initial Lorentz factor; and 3) the afterglow was the result of an
on-axis classical GRB (on-axis jet with physical parameters typical of the GRB
population), but the emission was undetected by gamma-ray satellites. We
estimate all physical parameters using afterglowpy and Markov Chain Monte Carlo
methods from emcee. We find that AT2023lcr, AT2020blt, and AT2021any are
consistent with on-axis classical GRBs, and AT2021lfa is consistent with both
on-axis low Lorentz factor ($\Gamma_0 \approx 5 - 13$) and off-axis
($\theta_\text{obs}=2\theta_\text{jet}$) high Lorentz factor ($\Gamma_0 \approx
100$) jets. | arXiv |
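The abstract fits afterglow light curves with afterglowpy and emcee. Below is only a schematic emcee setup: the forward model is a placeholder power-law decay rather than an actual afterglowpy call, and the data, priors, and parameter names are illustrative assumptions.

```python
import numpy as np
import emcee

# Illustrative data: times (days), flux densities and errors (arbitrary units)
t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
f_obs = np.array([1.00, 0.52, 0.27, 0.14, 0.07])
f_err = 0.1 * f_obs

def model(theta, t):
    """Placeholder forward model (simple power-law decay). In the actual analysis
    this would be an afterglowpy light-curve evaluation over jet energy, viewing
    angle, opening angle, density, and microphysical parameters."""
    log_f0, alpha = theta
    return 10 ** log_f0 * t ** (-alpha)

def log_prob(theta, t, f, err):
    log_f0, alpha = theta
    if not (-5 < log_f0 < 5 and 0 < alpha < 3):   # assumed flat priors
        return -np.inf
    resid = (f - model(theta, t)) / err
    return -0.5 * np.sum(resid ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([0.0, 1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t_obs, f_obs, f_err))
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))
```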
We present our lattice QCD result for the long-distance part of the hadronic
vacuum polarization contribution, $(a_\mu^{\rm hvp})^{\rm LD}$, to the muon
$g-2$ in the time-momentum representation. This is the numerically dominant,
and at the same time the most challenging part regarding statistical precision.
Our calculation is based on ensembles with dynamical up, down and strange
quarks, employing the O($a$)-improved Wilson fermion action with lattice
spacings ranging from $0.035$ to $0.099$ fm. In order to reduce statistical noise in
the long-distance part of the correlator to the per-mille level, we apply
low-mode averaging and combine it with an explicit spectral reconstruction. Our
result is $(a_\mu^{\rm hvp})^{\rm LD} = 423.2(4.2)_{\rm stat}(3.4)_{\rm
syst}\times 10^{-10}$ in isospin-symmetric QCD, where the pion decay constant
is used to set the energy scale. When combined with our previous results for
the short- and intermediate-distance window observables and after including all
sub-dominant contributions as well as isospin-breaking corrections, we obtain
the total leading-order hadronic vacuum polarization contribution as
$a_\mu^{\rm hvp} = 724.9(5.0)_{\rm stat}(4.9)_{\rm syst}\times 10^{-10}$. Our
result displays a tension of 3.9 standard deviations with the data-driven
estimate published in the 2020 White Paper, but leads to a SM prediction for
the total muon anomalous magnetic moment that agrees with the current
experimental average. | arXiv |
In a recent preprint, we constructed a sesquiharmonic Maass form
$\mathcal{G}$ of weight $\frac{1}{2}$ and level $4N$ with $N$ odd and
squarefree. Extending seminal work by Duke, Imamo\={g}lu, and T\'{o}th,
$\mathcal{G}$ maps to Zagier's non-holomorphic Eisenstein series and a linear
combination of Pei and Wang's generalized Cohen--Eisenstein series under the
Bruinier--Funke operator $\xi_{\frac{1}{2}}$. In this paper, we realize
$\mathcal{G}$ as the output of a regularized Siegel theta lift of $1$ whenever
$N=p$ is an odd prime, building on more general work by Bruinier, Funke and
Imamo\={g}lu. In addition, we supply the computation of the square-indexed
Fourier coefficients of $\mathcal{G}$. This yields explicit identities between
the Fourier coefficients of $\mathcal{G}$ and all quadratic traces of $1$.
Furthermore, we evaluate the Millson theta lift of $1$ and consider spectral
deformations of $1$. | arXiv |
To date there is little publicly available scientific data on Unidentified
Aerial Phenomena (UAP) whose properties and kinematics purportedly reside
outside the performance envelope of known phenomena. To address this
deficiency, the Galileo Project is designing, building, and commissioning a
multi-modal ground-based observatory to continuously monitor the sky and
conduct a rigorous long-term aerial census of all aerial phenomena, including
natural and human-made ones. One of the key instruments is an all-sky infrared
camera array using eight uncooled long-wave infrared FLIR Boson 640 cameras.
Their calibration includes a novel extrinsic calibration method using airplane
positions from Automatic Dependent Surveillance-Broadcast (ADS-B) data. We
establish a first baseline for the system performance over five months of field
operation, using a real-world dataset derived from ADS-B data, synthetic 3-D
trajectories, and a hand-labelled real-world dataset. We report acceptance
rates (e.g. viewable airplanes that are recorded) and detection efficiencies
(e.g. recorded airplanes which are successfully detected) for a variety of
weather conditions, range and aircraft size. We reconstruct $\sim$500,000
trajectories of aerial objects from this commissioning period. A toy outlier
search focused on large sinuosity of the 2-D reconstructed trajectories flags
about 16% of trajectories as outliers. After manual review, 144 trajectories
remain ambiguous: they are likely mundane objects but cannot be elucidated at
this stage of development without distance and kinematics estimation or other
sensor modalities. Our observed count of ambiguous outliers combined with
systematic uncertainties yields an upper limit of 18,271 outliers for the
five-month interval at a 95% confidence level. This likelihood-based method to
evaluate significance is applicable to all of our future outlier searches. | arXiv |
Modern software for propositional satisfiability problems gives a powerful
automated reasoning toolkit, capable of outputting not only a
satisfiable/unsatisfiable signal but also a justification of unsatisfiability
in the form of resolution proof (or a more expressive proof), which is commonly
used for verification purposes. Empirically, modern SAT solvers produce
relatively short proofs; however, there are no inherent guarantees that these
proofs cannot be significantly reduced. This paper proposes a novel
branch-and-bound algorithm for finding the shortest resolution proofs; to this
end, we introduce a layer list representation of proofs that groups clauses by
their level of indirection. As we show, this representation breaks all
permutational symmetries, thereby improving upon the state-of-the-art
symmetry-breaking and informing the design of a novel workflow for proof
minimization. In addition to that, we design pruning procedures that reason on
proof length lower bound, clause subsumption, and dominance. Our experiments
suggest that the proofs from state-of-the-art solvers could be shortened by
30-60% on the instances from SAT Competition 2002 and by 25-50% on small
synthetic formulas. When treated as an algorithm for finding the shortest
proof, our approach solves twice as many instances as the previous work based
on SAT solving and reduces the time to optimality by orders of magnitude for
the instances solved by both approaches. | arXiv |
Tensor parallelism provides an effective way to increase server large
language model (LLM) inference efficiency despite adding an additional
communication cost. However, as server LLMs continue to scale in size, they
will need to be distributed across more devices, magnifying the communication
cost. One way to approach this problem is with quantization, but current
methods for LLMs tend to avoid quantizing the features that tensor parallelism
needs to communicate. Taking advantage of consistent outliers in communicated
features, we introduce a quantization method that reduces communicated values
on average from 16 bits to 4.2 bits while preserving nearly all of the original
performance. For instance, our method maintains around 98.0% and 99.5% of Gemma
2 27B's and Llama 2 13B's original performance, respectively, averaged across
all tasks we evaluated on. | arXiv |
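A hedged sketch of the general idea of outlier-aware quantization for communicated activations: keep a small set of consistently large channels in full precision and quantize the rest to a few bits. The symmetric per-row scaling, the int8 container, and the fixed outlier index set are assumptions, not the paper's exact scheme.

```python
import numpy as np

def quantize_with_outliers(x, outlier_idx, bits=4):
    """Keep pre-identified outlier channels in full precision; quantize the rest
    symmetrically to `bits` bits. The outlier channel set is assumed to be
    consistent across calls, as the abstract observes for communicated features."""
    q_idx = np.setdiff1d(np.arange(x.shape[-1]), outlier_idx)
    vals = x[..., q_idx]
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(vals).max(axis=-1, keepdims=True) / qmax
    q = np.clip(np.round(vals / scale), -qmax - 1, qmax).astype(np.int8)
    return {"q": q, "scale": scale, "outliers": x[..., outlier_idx],
            "q_idx": q_idx, "outlier_idx": outlier_idx, "shape": x.shape}

def dequantize(packed):
    out = np.empty(packed["shape"], dtype=np.float32)
    out[..., packed["q_idx"]] = packed["q"] * packed["scale"]
    out[..., packed["outlier_idx"]] = packed["outliers"]
    return out

x = np.random.default_rng(0).normal(size=(2, 64)).astype(np.float32)
x[:, 7] *= 50.0                                   # a consistently large channel
packed = quantize_with_outliers(x, outlier_idx=np.array([7]))
print(np.abs(dequantize(packed) - x).max())       # reconstruction error stays small
```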
An $r$-graph is called $t$-cancellative if for arbitrary $t+2$ distinct edges
$A_1,\ldots,A_t,B,C$, it holds that $(\cup_{i=1}^t A_i)\cup B\neq (\cup_{i=1}^t
A_i)\cup C$; it is called $t$-union-free if for arbitrary two distinct subsets
$\mathcal{A},\mathcal{B}$, each consisting of at most $t$ edges, it holds that
$\cup_{A\in\mathcal{A}} A\neq \cup_{B\in\mathcal{B}} B$. Let $C_t(n,r)$ and
$U_t(n,r)$ denote the maximum number of edges that can be contained in an
$n$-vertex $t$-cancellative and $t$-union-free $r$-graph, respectively. The
study of $C_t(n,r)$ and $U_t(n,r)$ has a long history, dating back to the
classic works of Erd\H{o}s and Katona, and Erd\H{o}s and Moser in the 1970s. In
2020, Shangguan and Tamo showed that $C_{2(t-1)}(n,tk)=\Theta(n^k)$ and
$U_{t+1}(n,tk)=\Theta(n^k)$ for all $t\ge 2$ and $k\ge 2$. In this paper, we
determine the asymptotics of these two functions up to a lower order term, by
showing that for all $t\ge 2$ and $k\ge 2$,
\begin{align*}
\lim_{n\rightarrow\infty}\frac{C_{2(t-1)}(n,tk)}{n^k}=\lim_{n\rightarrow\infty}\frac{U_{t+1}(n,tk)}{n^k}=\frac{1}{k!}\cdot\frac{1}{\binom{tk-1}{k-1}}.
\end{align*}
Previously, it was only known by a result of F\"uredi in 2012 that
$\lim_{n\rightarrow\infty}\frac{C_{2}(n,4)}{n^2}=\frac{1}{6}$.
To prove the lower bounds of the limits, we utilize a powerful framework
developed recently by Delcourt and Postle, and independently by Glock, Joos,
Kim, K\"uhn, and Lichev, which shows the existence of near-optimal hypergraph
packings avoiding certain small configurations, and to prove the upper bounds,
we apply a novel counting argument that connects $C_{2(t-1)}(n,tk)$ to a
classic result of Kleitman and Frankl on a special case of the famous Erd\H{o}s
Matching Conjecture. | arXiv |
How does social network structure amplify or stifle behavior diffusion?
Existing theory suggests that when social reinforcement makes the adoption of
behavior more likely, it should spread more -- both farther and faster -- on
clustered networks with redundant ties. Conversely, if adoption does not
benefit from social reinforcement, then it should spread more on random
networks without such redundancies. We develop a novel model of behavior
diffusion with tunable probabilistic adoption and social reinforcement
parameters to systematically evaluate the conditions under which clustered
networks better spread a behavior compared to random networks. Using both
simulations and analytical techniques we find precise boundaries in the
parameter space where either network type outperforms the other or performs
equally. We find that in most cases, random networks spread a behavior equally
as far or farther compared to clustered networks despite strong social
reinforcement. While there are regions in which clustered networks better
diffuse contagions with social reinforcement, this only holds when the
diffusion process approaches that of a deterministic threshold model and does
not hold for all socially reinforced behaviors more generally. At best,
clustered networks only outperform random networks by at least a five percent
margin in 18\% of the parameter space, and when social reinforcement is large
relative to the baseline probability of adoption. | arXiv |
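A minimal simulation sketch of the comparison described above: the same tunable adoption process run on a clustered network versus a rewired random network. The Watts-Strogatz construction, the additive reinforcement rule, and all parameter values are assumptions chosen for illustration, not the paper's exact model.

```python
import random
import networkx as nx

def spread(G, p_base, reinforcement, seeds=5, steps=50, rng=random.Random(0)):
    """Probabilistic adoption: once a node has at least one adopted neighbour, it adopts
    with probability p_base + reinforcement * (adopted neighbours - 1), capped at 1.
    (Assumed functional form for the tunable reinforcement parameter.)"""
    adopted = set(rng.sample(list(G.nodes), seeds))
    for _ in range(steps):
        new = set()
        for v in G.nodes:
            if v in adopted:
                continue
            k = sum(1 for u in G.neighbors(v) if u in adopted)
            if k >= 1 and rng.random() < min(1.0, p_base + reinforcement * (k - 1)):
                new.add(v)
        adopted |= new
    return len(adopted) / G.number_of_nodes()

n, k = 1000, 6
clustered = nx.watts_strogatz_graph(n, k, 0.0, seed=1)   # ring lattice: many redundant ties
random_net = nx.watts_strogatz_graph(n, k, 1.0, seed=1)  # fully rewired: no clustering
for name, G in [("clustered", clustered), ("random", random_net)]:
    print(name, spread(G, p_base=0.05, reinforcement=0.10))
```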
The scarcity of comprehensive datasets in the traffic light detection and
recognition domain and the poor performance of state-of-the-art models under
hostile weather conditions present significant challenges. To address these
issues, this paper proposes a novel approach by merging two widely used
datasets, LISA and S2TLD. The merged dataset is further processed to tackle
class imbalance, a common problem in this domain. This merged dataset becomes
our source domain. Synthetic rain and fog are added to the dataset to create
our target domain. We employ Fourier Domain Adaptation (FDA) to create a final
dataset with a minimized domain gap between the two datasets, helping the model
trained on this final dataset adapt to rainy and foggy weather conditions.
Additionally, we explore Semi-Supervised Learning (SSL) techniques to leverage
the available data more effectively. Experimental results demonstrate that
models trained on FDA-augmented images outperform those trained without FDA
across confidence-dependent and independent metrics, like mAP50, mAP50-95,
Precision, and Recall. The best-performing model, YOLOv8, achieved a Precision
increase of 5.1860%, Recall increase of 14.8009%, mAP50 increase of 9.5074%,
and mAP50-95 increase of 19.5035%. On average, percentage increases of 7.6892%
in Precision, 19.9069% in Recall, 15.8506% in mAP50, and 23.8099% in mAP50-95
were observed across all models, highlighting the effectiveness of FDA in
mitigating the impact of adverse weather conditions on model performance. These
improvements pave the way for real-world applications where reliable
performance in challenging environmental conditions is critical. | arXiv |
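For context, the sketch below shows the standard Fourier Domain Adaptation recipe the abstract relies on: swap the low-frequency amplitude spectrum of a source image with that of a target (rainy/foggy) image while keeping the source phase. The band fraction `beta` and the random stand-in images are assumptions for illustration.

```python
import numpy as np

def fda_transfer(src, tgt, beta=0.05):
    """Replace the low-frequency amplitude spectrum of `src` with that of `tgt`
    (per channel), keeping the source phase. `beta` sets the size of the swapped
    low-frequency square (assumed value)."""
    out = np.empty_like(src, dtype=np.float32)
    h, w = src.shape[:2]
    b = int(min(h, w) * beta)
    cy, cx = h // 2, w // 2
    for c in range(src.shape[2]):
        fs = np.fft.fftshift(np.fft.fft2(src[..., c]))
        ft = np.fft.fftshift(np.fft.fft2(tgt[..., c]))
        amp_s, pha_s = np.abs(fs), np.angle(fs)
        amp_s[cy - b:cy + b, cx - b:cx + b] = np.abs(ft)[cy - b:cy + b, cx - b:cx + b]
        fused = amp_s * np.exp(1j * pha_s)
        out[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(fused)))
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
source = rng.uniform(0, 255, (128, 128, 3))   # stand-in for a clear-weather frame
target = rng.uniform(0, 255, (128, 128, 3))   # stand-in for a rainy/foggy frame
print(fda_transfer(source, target).shape)
```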
Podcasts provide highly diverse content to a massive listener base through a
unique on-demand modality. However, limited data has prevented large-scale
computational analysis of the podcast ecosystem. To fill this gap, we introduce
a massive dataset of over 1.1M podcast transcripts that is largely
comprehensive of all English language podcasts available through public RSS
feeds from May and June of 2020. This data is not limited to text, but rather
includes audio features and speaker turns for a subset of 370K episodes, and
speaker role inferences and other metadata for all 1.1M episodes. Using this
data, we also conduct a foundational investigation into the content, structure,
and responsiveness of this ecosystem. Together, our data and analyses open the
door to continued computational research of this popular and impactful medium. | arXiv |
Machine learning models are often trained on sensitive data (e.g., medical
records and race/gender) that is distributed across different "silos" (e.g.,
hospitals). These federated learning models may then be used to make
consequential decisions, such as allocating healthcare resources. Two key
challenges emerge in this setting: (i) maintaining the privacy of each person's
data, even if other silos or an adversary with access to the central server
tries to infer this data; (ii) ensuring that decisions are fair to different
demographic groups (e.g., race/gender). In this paper, we develop a novel
algorithm for private and fair federated learning (FL). Our algorithm satisfies
inter-silo record-level differential privacy (ISRL-DP), a strong notion of
private FL requiring that silo i's sent messages satisfy record-level
differential privacy for all i. Our framework can be used to promote different
fairness notions, including demographic parity and equalized odds. We prove
that our algorithm converges under mild smoothness assumptions on the loss
function, whereas prior work required strong convexity for convergence. As a
byproduct of our analysis, we obtain the first convergence guarantee for
ISRL-DP nonconvex-strongly concave min-max FL. Experiments demonstrate the
state-of-the-art fairness-accuracy tradeoffs of our algorithm across different
privacy levels. | arXiv |
A unified definition for the rotation angle and rotation angular speed of
general beams, including those with orbital angular momentum (OAM), has been
lacking until now. The rotation of a general beam is characterized by observing
the rotational behavior of the directions of the extreme spot sizes during
propagation. We introduce the beam quality $M^2(\psi)$ factor to characterize
the unique beam quality of a general beam across all directions, not limited to
the $x$- or $y$-axes. Besides that, we present the beam center
$s_{\psi}(\psi,z)$, spot size $w_{\psi}(\psi,z)$, waist position, waist radius,
and divergence angle along the direction that forms an angle $\psi$ with the
$x$-axis in the plane perpendicular to the $z$-axis for the general beam.
Furthermore, this paper presents rapid calculation formulas for these
parameters, utilizing the mode expansion method (MEM). Subsequently, we prove
that only two extreme spot sizes exist in a given detection plane and that the angle
between the directions of the maximum and minimum spot sizes is consistently $90^{\circ}$ during
the propagation. We also prove the spot rotation angles converge as $z$
approaches either positive or negative infinity. We first show the extreme spot
sizes, spot rotation angle, and angular speed for the vortex beam. Our formulas
efficiently differentiate between vortex OAM beams and asymmetric OAM beams. | arXiv |
Arch filament systems (AFSs) are chromospheric and coronal manifestations of
emerging magnetic flux. Using high spatial resolution observations taken at a
high cadence by the Extreme Ultraviolet Imager (EUI) on board Solar Orbiter, we
identified small-scale elongated brightenings within the AFSs. These
brightenings appear as bidirectional flows along the threads of AFSs. For our
study, we investigated the coordinated observations of the AFSs acquired by the
EUI and the Atmospheric Imaging Assembly (AIA) on board SDO on 2022 March 4 and
17. We analyzed 15 bidirectional propagating brightenings from EUI 174 {\AA}
images. These brightenings reached propagating speeds of 100-150 km/s. The
event observed on March 17 exhibits blob-like structures, which may be
signatures of plasmoids arising from magnetic reconnection. In this case, we also
observed counterparts in the running difference slit-jaw images in the 1400
{\AA} passbands taken by the Interface Region Imaging Spectrograph (IRIS). Most
events show co-temporal intensity variations in all AIA EUV passbands.
Together, this implies that these brightenings in the AFSs are dominated by
emission from cool plasma with temperatures well below 1 MK. The magnetograms
taken by the Polarimetric and Helioseismic Imager (PHI) on board Solar Orbiter
show signatures of flux emergence beneath the brightenings. This suggests that
the events in the AFSs are triggered by magnetic reconnection that may occur
between the newly emerging magnetic flux and the preexisting magnetic field
structures in the middle of the AFSs. This would also give a natural
explanation for the bidirectional propagation of the brightenings near the apex
of the AFSs. The interaction of the preexisting field and the emerging flux may
be important for mass and energy transfer within the AFSs. | arXiv |
Humans excel at discovering regular structures from limited samples and
applying inferred rules to novel settings. We investigate whether modern
generative models can similarly learn underlying rules from finite samples and
perform reasoning through conditional sampling. Inspired by Raven's Progressive
Matrices task, we designed the GenRAVEN dataset, where each sample consists of
three rows, and one of 40 relational rules governing the object position,
number, or attributes applies to all rows. We trained generative models to
learn the data distribution, where samples are encoded as integer arrays to
focus on rule learning. We compared two generative model families: diffusion
(EDM, DiT, SiT) and autoregressive models (GPT2, Mamba). We evaluated their
ability to generate structurally consistent samples and perform panel
completion via unconditional and conditional sampling. We found diffusion
models excel at unconditional generation, producing more novel and consistent
samples from scratch and memorizing less, but performing less well in panel
completion, even with advanced conditional sampling methods. Conversely,
autoregressive models excel at completing missing panels in a rule-consistent
manner but generate less consistent samples unconditionally. We observe diverse
data scaling behaviors: for both model families, rule learning emerges at a
certain dataset size - around thousands of examples per rule. With more training data,
diffusion models improve both their unconditional and conditional generation
capabilities. However, for autoregressive models, while panel completion
improves with more training data, unconditional generation consistency
declines. Our findings highlight complementary capabilities and limitations of
diffusion and autoregressive models in rule learning and reasoning tasks,
suggesting avenues for further research into their mechanisms and potential for
human-like reasoning. | arXiv |
All but the most massive main-sequence stars are expected to have a rarefied
and hot (million-Kelvin) corona like the Sun. How such a hot corona is formed
and supported has not been completely understood yet, even in the case of the
Sun. Recently, Barbieri et al. (A&A 2024, J. Plasma Phys. 2024) introduced a
new model of a confined plasma atmosphere and applied it to the solar case,
showing that rapid, intense, intermittent and short-lived heating events in the
high chromosphere can drive the coronal plasma into a stationary state with
temperature and density profiles similar to those observed in the solar
atmosphere. In this paper we apply the model to main-sequence stars, showing
that it predicts the presence of a solar-like hot and rarefied corona for all
such stars, regardless of their mass. However, the model is not applicable as
such to the most massive main-sequence stars, because the latter lack the
convective layer generating the magnetic field loop structures supporting a
stationary corona, whose existence is assumed by the model. We also discuss the
role of stellar mass in determining the shape of the temperature and density
profiles. | arXiv |
In an accelerator, the nonlinear behavior near a horizontal resonance line
($n\nu_x$) usually involves the appearance of stable fixed points (SFPs) in the
horizontal phase space, also referred to as transverse resonance island
``buckets'' (TRIBs). Specific conditions are required for TRIBs formation. At
the Cornell Electron Storage Ring, a new method is developed to improve the
dynamic and momentum apertures in a 6-GeV lattice as well as to preserve the
conditions for TRIBs formation. This method reduces the dimension of variables
from 76 sextupoles to 8 group variables and then utilizes the robust conjugate
direction search algorithm in optimization. Created with a few harmonic
sextupoles or octupoles, several knobs that can either rotate the TRIBs in
phase space or adjust the actions of SFPs are discussed and demonstrated by
both tracking simulations and experimental results. In addition, a new scheme
to drive all particles into one single island is described. Possible
applications using TRIBs in accelerators are also discussed. | arXiv |
When unsure about an answer, humans often respond with more words than
necessary, hoping that part of the response will be correct. We observe a
similar behavior in large language models (LLMs), which we term "Verbosity
Compensation" (VC). VC is harmful because it confuses the user's understanding,
leading to low efficiency, and degrades LLM services by increasing the
latency and cost of generating useless tokens. In this paper, we present the
first work that defines and analyzes Verbosity Compensation, explores its
causes, and proposes a simple mitigating approach. We define Verbosity
Compensation as the behavior of generating responses that can be compressed
without information loss when prompted to write concisely. Our experiments,
conducted on five datasets of knowledge and reasoning-based QA tasks with 14
newly developed LLMs, reveal three conclusions. 1) We reveal a pervasive
presence of verbosity compensation across all models and all datasets. Notably,
GPT-4 exhibits a VC frequency of 50.40%. 2) We reveal the large performance gap
between verbose and concise responses, with a notable difference of 27.61% on
the Qasper dataset. We also demonstrate that this difference does not naturally
diminish as LLM capability increases. Both 1) and 2) highlight the urgent need
to mitigate the frequency of VC behavior and disentangle verbosity from
veracity. We propose a simple yet effective cascade algorithm that replaces the
verbose responses with the other model-generated responses. The results show
that our approach effectively alleviates the VC of the Mistral model from
63.81% to 16.16% on the Qasper dataset. 3) We also find that verbose responses
exhibit higher uncertainty across all five datasets, suggesting a strong
connection between verbosity and model uncertainty. Our dataset and code are
available at https://github.com/psunlpgroup/VerbosityLLM. | arXiv |
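A hedged sketch of the cascade idea described above: if a cheap model's answer looks verbose despite being asked for a concise reply, fall back to a stronger model. The word-count heuristic and the model stand-ins are illustrative assumptions, not the paper's exact detection rule or model pairing.

```python
def is_verbose(answer: str, budget_tokens: int = 32) -> bool:
    """Crude proxy: an answer far over budget when a concise reply was requested."""
    return len(answer.split()) > budget_tokens

def cascade(question: str, ask_cheap_model, ask_strong_model) -> str:
    """Replace a verbose response with another model's response."""
    answer = ask_cheap_model(question + " Answer concisely.")
    if is_verbose(answer):
        answer = ask_strong_model(question + " Answer concisely.")
    return answer

# Toy stand-ins for demonstration only.
cheap = lambda q: ("Well, it could be several things, but considering everything, "
                   "perhaps " + "x " * 40)
strong = lambda q: "42"
print(cascade("What is the answer?", cheap, strong))
```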
Significant advances have been made in natural language processing in recent
years. However, our current deep learning approach to language modeling
requires substantial resources in terms of data and computation. One of the
side effects of this data-hungry paradigm is the current schism between
languages, separating those considered high-resource, where most of the
development happens and resources are available, and the low-resource ones,
which struggle to attain the same level of performance and autonomy. This study
aims to introduce a new set of resources to stimulate the future development of
neural text generation in Portuguese. In this work, we document the development
of GigaVerbo, a concatenation of deduplicated Portuguese text corpora amounting
to 200 billion tokens. Via this corpus, we trained a series of
decoder-transformers named Tucano. Our models perform on par with or better than
other Portuguese and multilingual language models of similar size in several
Portuguese benchmarks. The evaluation of our models also reveals that model
performance on many currently available benchmarks used by the Portuguese NLP
community has little to no correlation with the scaling of token ingestion
during training, highlighting the limitations of such evaluations when it comes
to the assessment of Portuguese generative language models. All derivatives of
our study are openly released on GitHub and Hugging Face. See
https://nkluge-correa.github.io/Tucano/ | arXiv |
We examine the problem of assigning teachers to public schools over time when
teachers have tenured positions and can work simultaneously in multiple
schools. To do this, we investigate a dynamic many-to-many school choice
problem where public schools have priorities over teachers and teachers hold
substitutable preferences over subsets of schools. We introduce a new concept
of dynamic stability that recognizes the tenured positions of teachers and we
prove that a dynamically stable matching always exists. We propose the
Tenured-Respecting Deferred Acceptance $(TRDA)$ mechanism, which produces a
dynamically stable matching that is constrained-efficient within the class of
dynamically stable matchings and minimizes unjustified claims. To improve
efficiency beyond this class, we also propose the Tenured-Respecting
Efficiency-Adjusted Deferred Acceptance $(TREADA)$ mechanism, an adaptation of
the Efficiency-Adjusted Deferred Acceptance mechanism to our dynamic context.
We demonstrate that the outcome of the $TREADA$ mechanism Pareto-dominates any
dynamically stable matching and achieves efficiency when all teachers consent.
Additionally, we examine the issue of manipulability, showing that although the
$TRDA$ and $TREADA$ mechanisms can be manipulated, they remain non-obviously
dynamically manipulable under specific conditions on schools' priorities. | arXiv |
Landmark-based navigation (e.g. go to the wooden desk) and relative
positional navigation (e.g. move 5 meters forward) are distinct navigation
challenges solved very differently in existing robotics navigation methodology.
We present a new dataset, OC-VLN, in order to distinctly evaluate grounding
object-centric natural language navigation instructions in a method for
performing landmark-based navigation. We also propose Natural Language grounded
SLAM (NL-SLAM), a method to ground natural language instructions to robot
observations and poses. We actively perform NL-SLAM in order to follow
object-centric natural language navigation instructions. Our methods leverage
pre-trained vision and language foundation models and require no task-specific
training. We construct two strong baselines from state-of-the-art methods on
related tasks, Object Goal Navigation and Vision Language Navigation, and we
show that our approach, NL-SLAM, outperforms these baselines across all our
metrics of success on OC-VLN. Finally, we successfully demonstrate the
effectiveness of NL-SLAM for performing navigation instruction following in the
real world on a Boston Dynamics Spot robot. | arXiv |
With the increase in the number of parameters in large language models, the
process of pre-training and fine-tuning increasingly demands larger volumes of
GPU memory. A significant portion of this memory is typically consumed by the
optimizer state. To overcome this challenge, recent approaches such as low-rank
adaptation (LoRA (Hu et al., 2021)), low-rank gradient projection (GaLore (Zhao
et al., 2024)), and blockwise optimization (BAdam (Luo et al., 2024)) have been
proposed. However, in all these algorithms, the $\textit{effective rank of the
weight updates remains low-rank}$, which can lead to a substantial loss of
information from the gradient. This loss can be critically important,
especially during the pre-training stage. In this paper, we introduce
$\texttt{FRUGAL}$ ($\textbf{F}$ull-$\textbf{R}$ank $\textbf{U}$pdates with
$\textbf{G}$r$\textbf{A}$dient sp$\textbf{L}$itting), a new memory-efficient
optimization framework. $\texttt{FRUGAL}$ leverages gradient splitting to
perform low-dimensional updates using advanced algorithms (such as Adam), while
updates along the remaining directions are executed via state-free methods like
SGD or signSGD (Bernstein et al., 2018). Our framework can be integrated with
various low-rank update selection techniques, including GaLore and BAdam. We
provide theoretical convergence guarantees for our framework when using SGDM
for low-dimensional updates and SGD for state-free updates. Additionally, our
method consistently outperforms concurrent approaches across various fixed
memory budgets, achieving state-of-the-art results in pre-training and
fine-tuning tasks while balancing memory efficiency and performance metrics. | arXiv |
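A minimal numpy sketch of the gradient-splitting idea described above: keep full optimizer state (Adam) only on a low-dimensional subspace of the gradient and update the orthogonal remainder with a state-free rule (signSGD). The fixed random projection, learning rates, and toy objective are simplifying assumptions, not the actual FRUGAL algorithm or its subspace selection.

```python
import numpy as np

rng = np.random.default_rng(0)

class SplitOptimizer:
    """Adam on a fixed random r-dimensional subspace, signSGD on the complement."""
    def __init__(self, dim, r, lr=1e-3, lr_free=1e-4, betas=(0.9, 0.999), eps=1e-8):
        Q, _ = np.linalg.qr(rng.normal(size=(dim, r)))   # orthonormal basis of the subspace
        self.Q, self.lr, self.lr_free = Q, lr, lr_free
        self.b1, self.b2, self.eps = betas[0], betas[1], eps
        self.m, self.v, self.t = np.zeros(r), np.zeros(r), 0   # state lives only in r dims

    def step(self, w, grad):
        self.t += 1
        g_low = self.Q.T @ grad                      # low-dimensional component (stateful update)
        g_res = grad - self.Q @ g_low                # residual component (state-free update)
        self.m = self.b1 * self.m + (1 - self.b1) * g_low
        self.v = self.b2 * self.v + (1 - self.b2) * g_low ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        w = w - self.lr * self.Q @ (m_hat / (np.sqrt(v_hat) + self.eps))
        w = w - self.lr_free * np.sign(g_res)        # signSGD keeps no optimizer state
        return w

# Toy quadratic objective to show the full-rank update runs end to end.
dim, r = 512, 32
w, target = rng.normal(size=dim), rng.normal(size=dim)
opt = SplitOptimizer(dim, r)
for _ in range(200):
    w = opt.step(w, w - target)                      # gradient of 0.5 * ||w - target||^2
print(float(np.linalg.norm(w - target)))
```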
This study introduces a novel self-supervised learning approach for
volumetric segmentation of defect indications captured by phased array
ultrasonic testing data from Carbon Fiber Reinforced Polymers (CFRPs). By
employing this self-supervised method, defect segmentation is achieved
automatically without the need for labelled training data or examples of
defects. The approach has been tested using artificially induced defects,
including back-drilled holes and Polytetrafluoroethylene (PTFE) inserts, to
mimic different defect responses. Additionally, it has been evaluated on
stepped geometries with varying thickness, demonstrating impressive
generalization across various test scenarios. Minimal preprocessing
requirements are needed, with no removal of geometric features or
Time-Compensated Gain (TCG) necessary for applying the methodology. The model's
performance was evaluated for defect detection, in-plane and through-thickness
localisation, as well as defect sizing. All defects were consistently detected
with thresholding, and different processing steps were able to remove false positive
indications for a 100% detection accuracy. Defect sizing aligns with the
industrial standard 6 dB amplitude drop method, with a Mean Absolute Error
(MAE) of 1.41 mm. In-plane and through-thickness localisation yielded
comparable results, with MAEs of 0.37 and 0.26 mm, respectively. Visualisations
are provided to illustrate how this approach can be utilised to generate
digital twins of components. | arXiv |
We consider a population spreading across a finite number of sites.
Individuals can move from one site to another according to a network
(oriented links between the sites) that varies periodically over time. On each
site, the population experiences a growth rate which is also periodically time
varying. Recently, this kind of model has been extensively studied, using
various technical tools to derive precise necessary and sufficient conditions
on the parameters of the system (i.e., the local growth rate on each site, the
time period and the strength of migration between the sites) for the population
to grow. In the present paper, we take a completely different approach: using
elementary comparison results between linear systems, we give a sufficient
condition for the growth of the population. This condition is easy to check and
can be applied in a broad class of examples. In particular, in the case when
all sites are sinks (i.e., in the absence of migration, the population becomes
extinct in each site), we prove that when our condition for growth is satisfied,
the population grows when the time period is large and for values of the
migration strength that are exponentially small with respect to the time
period, which positively answers a conjecture stated by Katriel. | arXiv |
Federated learning (FL) has become one of the key methods for
privacy-preserving collaborative learning, as it enables the transfer of models
without requiring local data exchange. Within the FL framework, an aggregation
algorithm is recognized as one of the most crucial components for ensuring the
efficacy and security of the system. Existing average aggregation algorithms
typically assume that all client-trained data holds equal value or that weights
are based solely on the quantity of data contributed by each client. In
contrast, alternative approaches involve training the model locally after
aggregation to enhance adaptability. However, these approaches fundamentally
ignore the inherent heterogeneity between different clients' data and the
complexity of variations in data at the aggregation stage, which may lead to a
suboptimal global model.
To address these issues, this study proposes a novel dual-criterion weighted
aggregation algorithm involving the quantity and quality of data from the
client node. Specifically, we quantify the data used for training and perform
multiple rounds of local model inference accuracy evaluation on a specialized
dataset to assess the data quality of each client. These two factors are
utilized as weights within the aggregation process, applied through a
dynamically weighted summation of these two factors. This approach allows the
algorithm to adaptively adjust the weights, ensuring that every client can
contribute to the global model, regardless of their data's size or initial
quality. Our experiments show that the proposed algorithm outperforms several
existing state-of-the-art aggregation approaches on both a general-purpose
open-source dataset, CIFAR-10, and a dataset specific to visual obstacle
avoidance. | arXiv |
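A hedged sketch of the dual-criterion aggregation described above: each client's weight combines its normalized data quantity with its normalized evaluation accuracy, and the global model is the weighted sum of client parameters. The convex-combination form, the mixing coefficient `alpha`, and the toy models are assumptions, not the paper's exact weighting rule.

```python
import numpy as np

def aggregate(client_models, data_sizes, eval_accuracies, alpha=0.5):
    """Dual-criterion weighted aggregation (assumed form): weight = alpha * normalized
    data quantity + (1 - alpha) * normalized held-out accuracy, then a weighted sum
    of each parameter tensor across clients."""
    sizes = np.asarray(data_sizes, dtype=float)
    accs = np.asarray(eval_accuracies, dtype=float)
    w = alpha * sizes / sizes.sum() + (1 - alpha) * accs / accs.sum()
    w = w / w.sum()
    return {name: sum(wi * m[name] for wi, m in zip(w, client_models))
            for name in client_models[0]}

# Toy usage: three clients with different data volumes and measured data quality.
clients = [{"layer.weight": np.full((2, 2), float(i))} for i in range(3)]
global_model = aggregate(clients, data_sizes=[100, 400, 250],
                         eval_accuracies=[0.6, 0.8, 0.9])
print(global_model["layer.weight"])
```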
We develop a novel key routing algorithm for quantum key distribution (QKD)
networks that utilizes a distribution of keys between remote, i.e., not
directly connected by a QKD link, nodes through multiple non-overlapping paths.
This approach enhances the security of the QKD network by minimizing potential
vulnerabilities associated with individual trusted nodes. The algorithm ensures
a balanced allocation of the workload across the QKD network links, while
aiming for the target key generation rate between directly connected and remote
nodes. We present the results of testing the algorithm on two QKD network
models consisting of 6 and 10 nodes. The testing demonstrates the ability of
the algorithm to distribute secure keys among the nodes of the network in an
all-to-all manner, ensuring that the information-theoretic security of the keys
between remote nodes is maintained even when one of the trusted nodes is
compromised. These results highlight the potential of the algorithm to improve
the performance of QKD networks. | arXiv |
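A minimal sketch of why distributing a key over multiple non-overlapping paths protects against a single compromised trusted node: split the key into XOR shares, send one share per path, and recombine at the destination. The share scheme and the hardcoded usage are illustrative assumptions; the actual routing algorithm, path selection, and load balancing are not shown.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_paths: int):
    """XOR secret sharing: n-1 uniformly random shares plus one share chosen so the
    XOR of all shares equals the key. Any proper subset of shares is uniformly random,
    so one compromised intermediate node learns nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_paths - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

# Toy usage: a 256-bit key split over three paths between remote nodes. A real router
# would first compute non-overlapping paths in the QKD network graph and forward one
# share along each of them using the pairwise QKD keys of adjacent trusted nodes.
key = secrets.token_bytes(32)
shares = split_key(key, n_paths=3)
print(reconstruct(shares) == key)   # True
```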
Given a digraph $H$, we say a digraph $H^\prime$ is an $H$-subdivision if
$H^\prime$ is obtained from $H$ by replacing one or more arcs from $H$ with
internally vertex-disjoint path(s). In this paper, we prove that for any
digraph $H$ with $h$ arcs and no isolated vertices, there is a constant $C_0>0$
such that the following hold.
$(1)$ For any integer $C\geq C_0$ and every digraph $D$ on $n\geq Ch$
vertices, if the minimum in- and out-degree of $D$ is at least $n/2$, then it
contains an $H$-subdivision covering all vertices of $D$.
$(2)$ For any integer partition $n=n_1+\cdots+n_m$ such that the sum of the
$n_i$ less than $\alpha n$ is no more than $\beta n$, if a digraph $D$ has the
order $n\geq Cm$ and the minimum in- and out-degree at least
$\sum_{i=1}^m\lceil\frac{n_i}{2}\rceil$, then it contains $m$ disjoint
$H$-subdivisions, where the order of these $H$-subdivisions is $n_1, \ldots,
n_m$, respectively.
The result of $(1)$ settles the conjecture raised by Pavez-Sign\'{e}
\cite{Pavez} in a stronger form, and improves upon the result of Lee \cite{Lee}.
Also, the conclusion of $(2)$ partly answers the conjecture of Lee
\cite{Lee1} and generalizes the recent work of Lee \cite{Lee1}. | arXiv |
Let $F$ be any field containing the finite field of order $q$. A
$q$-polynomial $L$ over $F$ is an element of the polynomial ring $F[x]$ with
the property that all powers of $x$ that appear in $L$ with nonzero coefficient
have exponent a power of $q$. It is well known that given any ordinary
polynomial $f$ in $F[x]$, there exists a $q$-polynomial that is divisible by
$f$. We study the smallest degree of such a $q$-polynomial. This is equivalent
to studying the $\mathbb{F}_q$-span of the roots of $f$ in a splitting field.
We relate this quantity to the representation theory of the Galois group of
$f$. As an application we give a simultaneous construction of the binary Golay
code of length 24, and the Steiner system on 24 points. | arXiv |
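A small worked example of the notion studied above (our illustration, not taken from the abstract): for $q=2$, the smallest-degree $q$-polynomial divisible by a given $f$ has degree equal to the size of the $\mathbb{F}_2$-span of the roots of $f$.

```latex
% Illustrative example: take $q=2$ and $f(x)=x^2+x+1\in\mathbb{F}_2[x]$.
% Its roots are the primitive cube roots of unity $\omega,\omega^2\in\mathbb{F}_4$, and their
% $\mathbb{F}_2$-span is $\{0,\,\omega,\,\omega^2,\,\omega+\omega^2=1\}=\mathbb{F}_4$, of size $4$.
% Accordingly, the smallest-degree $2$-polynomial divisible by $f$ has degree $4$:
\[
  x^4 + x \;=\; x\,(x+1)\,(x^2+x+1),
\]
% whose exponents ($4$ and $1$) are powers of $2$ and whose roots are exactly the
% elements of $\mathbb{F}_4$.
```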
An elastic-degenerate (ED) string $T$ is a sequence of $n$ sets
$T[1],\ldots,T[n]$ containing $m$ strings in total whose cumulative length is
$N$. We call $n$, $m$, and $N$ the length, the cardinality and the size of $T$,
respectively. The language of $T$ is defined as $L(T)=\{S_1 \cdots S_n\,:\,S_i
\in T[i]$ for all $i\in[1,n]\}$. ED strings have been introduced to represent a
set of closely-related DNA sequences, also known as a pangenome. The basic
question we investigate here is: Given two ED strings, how fast can we check
whether the two languages they represent have a nonempty intersection? We call
the underlying problem the ED String Intersection (EDSI) problem. For two ED
strings $T_1$ and $T_2$ of lengths $n_1$ and $n_2$, cardinalities $m_1$ and
$m_2$, and sizes $N_1$ and $N_2$, respectively, we show the following:
- There is no $O((N_1N_2)^{1-\epsilon})$-time algorithm, for any constant
$\epsilon>0$, for EDSI even when $T_1$ and $T_2$ are over a binary alphabet,
unless the Strong Exponential-Time Hypothesis is false.
- There is no combinatorial $O((N_1+N_2)^{1.2-\epsilon}f(n_1,n_2))$-time
algorithm, for any constant $\epsilon>0$ and any function $f$, for EDSI even
when $T_1$ and $T_2$ are over a binary alphabet, unless the Boolean Matrix
Multiplication conjecture is false.
- An $O(N_1\log N_1\log n_1+N_2\log N_2\log n_2)$-time algorithm for
outputting a compact (RLE) representation of the intersection language of two
unary ED strings. In the case when $T_1$ and $T_2$ are given in a compact
representation, we show that the problem is NP-complete.
- An $O(N_1m_2+N_2m_1)$-time algorithm for EDSI.
- An $\tilde{O}(N_1^{\omega-1}n_2+N_2^{\omega-1}n_1)$-time algorithm for
EDSI, where $\omega$ is the exponent of matrix multiplication; the $\tilde{O}$
notation suppresses factors that are polylogarithmic in the input size. | arXiv |
Let $s(n)$ denote the number of ones in the binary expansion of the
nonnegative integer $n$. How does $s$ behave under addition of a constant $t$?
In order to study the differences \[s(n+t)-s(n),\] for all $n\ge0$, we consider
the associated characteristic function $\gamma_t$. Our main theorem is a
structural result on the decomposition of $\gamma_t$ into a sum of
\emph{components}. We also study in detail the case that $t$ contains at most
two blocks of consecutive $1$s. The results in this paper are motivated by
\emph{Cusick's conjecture} on the sum-of-digits function. This conjecture is
concerned with the \emph{central tendency} of the corresponding probability
distributions, and is still unsolved. | arXiv |
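A small numeric sketch of the objects named above: the binary sum-of-digits function $s(n)$ and the empirical distribution of the differences $s(n+t)-s(n)$ over $n<N$. The finite-$N$ frequencies are only a stand-in for the limiting densities whose characteristic function $\gamma_t$ the paper studies.

```python
from collections import Counter

def s(n: int) -> int:
    """Number of ones in the binary expansion of n."""
    return bin(n).count("1")

def difference_distribution(t: int, N: int = 1 << 20):
    """Empirical frequencies of s(n + t) - s(n) for n < N."""
    counts = Counter(s(n + t) - s(n) for n in range(N))
    return {k: counts[k] / N for k in sorted(counts)}

# Cusick's conjecture concerns the central tendency of these distributions:
# the density of n with s(n + t) >= s(n) should exceed 1/2 for every t.
for t in (1, 3, 0b110011):          # the last t has two blocks of consecutive 1s
    dist = difference_distribution(t)
    print(t, round(sum(p for k, p in dist.items() if k >= 0), 4))
```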
Existing guarantees for algorithms sampling from nonlogconcave measures on
$\mathbb{R}^d$ are generally inexplicit or unscalable. Even for the class of
measures with logdensities that have bounded Hessians and are strongly concave
outside a Euclidean ball of radius $R$, no available theory is comprehensively
satisfactory with respect to both $R$ and $d$. In this paper, it is shown that
complete polynomial complexity can in fact be achieved if $R\leq c\sqrt{d}$,
whilst an exponential number of point evaluations is generally necessary for
any algorithm as soon as $R\geq C\sqrt{d}$ for constants $C>c>0$. A simple
importance sampler with tail-matching proposal achieves the former, owing to a
blessing of dimensionality. On the other hand, if strong concavity outside a
ball is replaced by a distant dissipativity condition, then sampling guarantees
must generally scale exponentially with $d$ in all parameter regimes. | arXiv |
Energy considerations can significantly affect the behavior of a population
of energy-consuming agents with limited energy budgets, for instance, in the
movement process of people in a city. We consider a population of interacting
agents with an initial energy budget walking on a graph according to an
exploration and return (to home) strategy that is based on the current energy
of the agent. Each move reduces the available energy depending on the flow of
movements and the strength of interactions, and the movement ends when an agent
returns home with a negative energy. We observe that a uniform distribution of
initial energy budgets results in a larger number of visited sites per consumed
energy (efficiency) compared to the case where all agents have the same initial
energy, if return to home is relevant from the beginning of the process. The
uniform energy distribution also reduces the amount of uncertainties in the
total travel times (entropy production) which is more pronounced when the
strength of interactions and exploration play the relevant role in the movement
process. That is, variability in the energies can help to increase the
efficiency and reduce the entropy production, especially in the presence of strong
interactions. | arXiv |
Recommender Systems (RSs) are pivotal in diverse domains such as e-commerce,
music streaming, and social media. This paper conducts a comparative analysis
of prevalent loss functions in RSs: Binary Cross-Entropy (BCE), Categorical
Cross-Entropy (CCE), and Bayesian Personalized Ranking (BPR). Exploring the
behaviour of these loss functions across varying negative sampling settings, we
reveal that BPR and CCE are equivalent when one negative sample is used.
Additionally, we demonstrate that all losses share a common global minimum.
Evaluation of RSs mainly relies on ranking metrics known as Normalized
Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR). We produce
bounds of the different losses for negative sampling settings to establish a
probabilistic lower bound for NDCG. We show that the BPR bound on NDCG is
weaker than that of BCE, contradicting the common assumption that BPR is
superior to BCE in RSs training. Experiments on five datasets and four models
empirically support these theoretical findings. Our code is available at
\url{https://anonymous.4open.science/r/recsys_losses} . | arXiv |
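A small numeric check of the equivalence stated above: with a single negative sample, the BPR loss $-\log\sigma(s_{\text{pos}}-s_{\text{neg}})$ coincides with the two-way softmax cross-entropy (CCE) over the (positive, negative) pair of scores. The score values are arbitrary illustrations.

```python
import numpy as np

def bpr_loss(s_pos, s_neg):
    """BPR: -log sigmoid(positive score - negative score)."""
    return -np.log(1.0 / (1.0 + np.exp(-(s_pos - s_neg))))

def cce_loss(s_pos, s_neg):
    """Categorical cross-entropy over the (positive, negative) pair of logits."""
    logits = np.array([s_pos, s_neg])
    logits -= logits.max()                       # numerical stability
    return -(logits[0] - np.log(np.exp(logits).sum()))

# With one negative sample the two losses coincide, as stated in the abstract.
rng = np.random.default_rng(0)
for s_pos, s_neg in rng.normal(size=(5, 2)):
    print(round(bpr_loss(s_pos, s_neg), 6), round(cce_loss(s_pos, s_neg), 6))
```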
In recent work, Lusztig's positive root vectors (with respect to a
distinguished choice of reduced decomposition of the longest element of the
Weyl group) were shown to give a quantum tangent space for every $A$-series
Drinfeld--Jimbo full quantum flag manifold $\mathcal{O}_q(\mathrm{F}_n)$.
Moreover, the associated differential calculus
$\Omega^{(0,\bullet)}_q(\mathrm{F}_n)$ was shown to have classical dimension,
giving a direct $q$-deformation of the classical anti-holomorphic Dolbeault
complex of $\mathrm{F}_n$. Here we examine in detail the rank two case, namely
the full quantum flag manifold of $\mathcal{O}_q(\mathrm{SU}_3)$. In
particular, we examine the $*$-differential calculus associated to
$\Omega^{(0,\bullet)}_q(\mathrm{F}_3)$ and its non-commutative complex
geometry. We find that the number of almost-complex structures reduces from $8$
(that is $2$ to the power of the number of positive roots of $\frak{sl}_3$) to
$4$ (that is $2$ to the power of the number of simple roots of $\frak{sl}_3$).
Moreover, we show that each of these almost-complex structures is integrable,
which is to say, each of them is a complex structure. Finally, we observe that,
due to non-centrality of all the non-degenerate coinvariant $2$-forms, none of
these complex structures admits a left $\mathcal{O}_q(\mathrm{SU}_3)$-covariant
noncommutative K\"ahler structure. | arXiv |
We unveil a new mechanism of nonreciprocal magneto-transport from cooperative
action of Lorentz force and skew scattering. The significance of this Lorentz
skew scattering mechanism lies in that it dominates both longitudinal and
transverse responses in highly conductive systems, and it exhibits a scaling
behavior distinct from all known mechanisms. At low temperature, it shows a
cubic scaling in linear conductivity, whereas the scaling becomes quartic at
elevated temperature when phonon scattering kicks in. We develop its
microscopic formulation and reveal its close connection with Berry curvature on
Fermi surface. Applying our theory to surface transport in topological
crystalline insulator SnTe and bulk transport in Weyl semimetals leads to
significant results, suggesting a new route to achieve giant transport
nonreciprocity in high-mobility materials with topological band features. | arXiv |
Multi-instance point cloud registration aims to estimate the pose of all
instances of a model point cloud in the whole scene. Existing methods all adopt
the strategy of first obtaining the global correspondence and then clustering
to obtain the pose of each instance. However, due to the cluttered and occluded
objects in the scene, it is difficult to obtain an accurate correspondence
between the model point cloud and all instances in the scene. To this end, we
propose a simple yet powerful 3D focusing-and-matching network for
multi-instance point cloud registration by learning the multiple pair-wise
point cloud registration. Specifically, we first present a 3D multi-object
focusing module to locate the center of each object and generate object
proposals. By using self-attention and cross-attention to associate the model
point cloud with structurally similar objects, we can locate potential matching
instances by regressing object centers. Then, we propose a 3D dual masking
instance matching module to estimate the pose between the model point cloud and
each object proposal. It applies an instance mask and an overlap mask to
accurately predict the pair-wise correspondence. Extensive experiments on two
public benchmarks, Scan2CAD and ROBI, show that our method achieves a new
state-of-the-art performance on the multi-instance point cloud registration
task. Code is available at https://github.com/zlynpu/3DFMNet. | arXiv |
We discuss several aspects of the loss landscape of regularized neural
networks: the structure of stationary points, connectivity of optimal
solutions, path with nonincreasing loss to arbitrary global optimum, and the
nonuniqueness of optimal solutions, by casting the problem into an equivalent
convex problem and considering its dual. Starting from two-layer neural
networks with scalar output, we first characterize the solution set of the
convex problem using its dual and further characterize all stationary points.
With the characterization, we show that the topology of the global optima goes
through a phase transition as the width of the network changes, and construct
counterexamples where the problem may have a continuum of optimal solutions.
Finally, we show that the solution set characterization and connectivity
results can be extended to different architectures, including two-layer
vector-valued neural networks and parallel three-layer neural networks. | arXiv |
Multimodal large language models (MLLMs) have shown impressive capabilities
in document understanding, a rapidly growing research area with significant
industrial demand in recent years. As a multimodal task, document understanding
requires models to possess both perceptual and cognitive abilities. However,
current MLLMs often face conflicts between perception and cognition. Taking a
document VQA task (cognition) as an example, an MLLM might generate answers
that do not match the corresponding visual content identified by its OCR
(perception). This conflict suggests that the MLLM might struggle to establish
an intrinsic connection between the information it "sees" and what it
"understands." Such conflicts challenge the intuitive notion that cognition is
consistent with perception, hindering the performance and explainability of
MLLMs. In this paper, we define the conflicts between cognition and perception
as Cognition and Perception (C&P) knowledge conflicts, a form of multimodal
knowledge conflicts, and systematically assess them with a focus on document
understanding. Our analysis reveals that even GPT-4o, a leading MLLM, achieves
only 68.6% C&P consistency. To mitigate the C&P knowledge conflicts, we propose
a novel method called Multimodal Knowledge Consistency Fine-tuning. This method
first ensures task-specific consistency and then connects the cognitive and
perceptual knowledge. Our method significantly reduces C&P knowledge conflicts
across all tested MLLMs and enhances their performance in both cognitive and
perceptual tasks in most scenarios. | arXiv |
The numerical approximation of low-regularity solutions to the nonlinear
Schr\"odinger equation is notoriously difficult and even more so if
structure-preserving schemes are sought. Recent works have been successful in
establishing symmetric low-regularity integrators for this equation. However,
so far, all prior symmetric low-regularity algorithms are fully implicit, and
therefore require the solution of a nonlinear equation at each time step,
leading to significant numerical cost in the iteration. In this work, we
introduce the first fully explicit (multi-step) symmetric low-regularity
integrators for the nonlinear Schr\"odinger equation. We demonstrate the
construction of an entire class of such schemes which notably can be used to
symmetrise (in explicit form) a large number of existing low-regularity
integrators. We provide rigorous convergence analysis of our schemes and
numerical examples demonstrating both the favourable structure preservation
properties obtained with our novel schemes, and the significant reduction in
computational cost over implicit methods. | arXiv |
To handle the complexities of real-world traffic, learning planners for
self-driving from data is a promising direction. While recent approaches have
shown great progress, they typically assume a setting in which the ground-truth
world state is available as input. However, when deployed, planning needs to be
robust to the long-tail of errors incurred by a noisy perception system, which
is often neglected in evaluation. To address this, previous work has proposed
drawing adversarial samples from a perception error model (PEM) mimicking the
noise characteristics of a target object detector. However, these methods use
simple PEMs that fail to accurately capture all failure modes of detection. In
this paper, we present EMPERROR, a novel transformer-based generative PEM,
apply it to stress-test an imitation learning (IL)-based planner and show that
it imitates modern detectors more faithfully than previous work. Furthermore,
it is able to produce realistic noisy inputs that increase the planner's
collision rate by up to 85%, demonstrating its utility as a valuable tool for a
more complete evaluation of self-driving planners. | arXiv |
We prove that $\alpha$-dissipative solutions to the Cauchy problem of the
Hunter-Saxton equation, where $\alpha \in W^{1, \infty}(\mathbb{R}, [0, 1))$,
can be computed numerically with order $\mathcal{O}(\Delta x^{{1}/{8}}+\Delta
x^{{\beta}/{4}})$ in $L^{\infty}(\mathbb{R})$, provided there exist constants
$C > 0$ and $\beta \in (0, 1]$ such that the initial spatial derivative
$\bar{u}_{x}$ satisfies $\|\bar{u}_x(\cdot + h) - \bar{u}_x(\cdot)\|_2 \leq
Ch^{\beta}$ for all $h \in (0, 2]$. The derived convergence rate is exemplified
by a number of numerical experiments. | arXiv |
Dynamic programming (DP) is a fundamental and powerful algorithmic paradigm
taught in most undergraduate (and many graduate) algorithms classes. DP
problems are challenging for many computer science students because they
require identifying unique problem structures and a refined understanding of
recursion. In this paper, we present dpvis, a Python library that helps
students understand DP through a frame-by-frame animation of dynamic programs.
dpvis can easily generate animations of dynamic programs with as little as two
lines of modifications compared to a standard Python implementation. For each
frame, dpvis highlights the cells that have been read from and written to during
an iteration. Moreover, dpvis allows users to test their understanding by
prompting them with questions about the next operation performed by the
algorithm.
We deployed dpvis as a learning tool in an undergraduate algorithms class,
and report on the results of a survey. The survey results suggest that dpvis is
especially helpful for visualizing the recursive structure of DP. Although some
students struggled with the installation of the tool (which has been simplified
since the reported deployment), essentially all other students found the tool
to be useful for understanding dynamic programs. dpvis is available at
https://github.com/itsdawei/dpvis. | arXiv |
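To illustrate the frame-by-frame idea in plain Python (this sketch only mimics the concept and does not use the actual dpvis API, whose wrapper calls are not spelled out above):

```python
def fib_with_frames(n):
    # Standard bottom-up DP for Fibonacci, recording one "frame" per iteration:
    # which cells were read from and written to, plus a snapshot of the table.
    dp = [0, 1] + [None] * (n - 1)
    frames = []
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
        frames.append({"read": [i - 1, i - 2], "written": [i], "table": dp.copy()})
    return dp[n], frames

value, frames = fib_with_frames(6)
print(value)      # 8
print(frames[0])  # first frame: reads cells 1 and 0, writes cell 2
```

A visualizer such as dpvis would render each recorded frame as one step of the animation, highlighting the read and written cells.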
In many Deep Reinforcement Learning (RL) problems, decisions in a trained
policy vary in significance for the expected safety and performance of the
policy. Since RL policies are very complex, testing efforts should concentrate
on states in which the agent's decisions have the highest impact on the
expected outcome. In this paper, we propose a novel model-based method to
rigorously compute a ranking of state importance across the entire state space.
We then focus our testing efforts on the highest-ranked states. In this paper,
we focus on testing for safety. However, the proposed methods can be easily
adapted to test for performance. In each iteration, our testing framework
computes optimistic and pessimistic safety estimates. These estimates provide
lower and upper bounds on the expected outcomes of the policy execution across
all modeled states in the state space. Our approach divides the state space
into safe and unsafe regions upon convergence, providing clear insights into
the policy's weaknesses. Two important properties characterize our approach.
(1) Optimal Test-Case Selection: At any time in the testing process, our
approach evaluates the policy in the states that are most critical for safety.
(2) Guaranteed Safety: Our approach can provide formal verification guarantees
over the entire state space by sampling only a fraction of the policy. Any
safety properties assured by the pessimistic estimate are formally proven to
hold for the policy. We provide a detailed evaluation of our framework on
several examples, showing that our method discovers unsafe policy behavior with
low testing effort. | arXiv |
We present deep JWST/NIRSpec integral-field spectroscopy (IFS) and ALMA
[CII]$\lambda$158$\mu$m observations of COS-3018, a star-forming galaxy at
z$\sim$6.85, as part of the GA-NIFS programme. Both G395H (R$\sim$ 2700) and
PRISM (R$\sim$ 100) NIRSpec observations revealed that COS-3018 is composed of
three separate components detected in [OIII]$\lambda$5008, which we dub
Main, North and East, with stellar masses of 10$^{9.4 \pm 0.1}$, 10$^{9.2 \pm
0.07}$, 10$^{7.7 \pm 0.15}$ M$_{\odot}$. We detect [OIII]$\lambda$5008,
[OIII]$\lambda\lambda$3727,29 and multiple Balmer lines in all three components
together with [OIII]$\lambda$4363 in the Main and North components. This allows
us to measure ISM temperatures of T$_{e}$ = (1.27$\pm$0.07)$\times 10^4$ K and
T$_{e}$ = (1.6$\pm$0.14)$\times 10^4$ K with densities of $n_{e}$ = 1250$\pm$250 and
$n_{e}$ = 700$\pm$200 cm$^{-3}$, respectively. These deep observations allow us
to measure an average metallicity of 12+log(O/H)=7.9--8.2 for the three
components with the T$_{e}$-method. We do not find any significant evidence of
metallicity gradients between the components. Furthermore, we also detect
[NII]$\lambda$6585, one of the highest redshift detections of this emission
line. We find that in a small, metal-poor clump 0.2 arcsec west of the North
component, N/O is elevated compared to other regions, indicating that nitrogen
enrichment originates from smaller substructures, possibly proto-globular
clusters. [OIII]$\lambda$5008 kinematics show that this system is merging,
which is probably driving the ongoing, luminous starburst. | arXiv |
Given a measure equivalence coupling between two finitely generated groups,
Delabie, Koivisto, Le Ma\^itre and Tessera have found explicit upper bounds on
how integrable the associated cocycles can be. These bounds are optimal in many
cases but the integrability of the cocycles with respect to these critical
thresholds remained unclear. For instance, a cocycle from $\mathbb{Z}^{k+\ell}$
to $\mathbb{Z}^{k}$ can be $\mathrm{L}^p$ for all $p<\frac{k}{k+\ell}$ but not
for $p>\frac{k}{k+\ell}$, and the case $p=\frac{k}{k+\ell}$ was an open
question, which we answer in the negative. Our main result actually yields many
more examples where the integrability threshold given by Delabie-Koivisto-Le
Ma\^itre-Tessera Theorems cannot be reached. | arXiv |
This paper proposes an automated framework for efficient application
profiling and training of Machine Learning (ML) performance models, composed of
two parts: OSCAR-P and aMLLibrary. OSCAR-P is an auto-profiling tool designed
to automatically test serverless application workflows running on multiple
hardware and node combinations in cloud and edge environments. OSCAR-P obtains
relevant profiling information on the execution time of the individual
application components. These data are later used by aMLLibrary to train
ML-based performance models. This makes it possible to predict the performance
of applications on unseen configurations. We test our framework on clusters
with different architectures (x86 and arm64) and workloads, considering
multi-component use-case applications. This extensive experimental campaign
proves the efficiency of OSCAR-P and aMLLibrary, significantly reducing the
time needed for the application profiling, data collection, and data
processing. The preliminary results obtained on the ML performance models
accuracy show a Mean Absolute Percentage Error lower than 30% in all the
considered scenarios. | arXiv |
This work aims at investigating the impact of DNA geometry, compaction and
calculation chain on DNA break and chromosome aberration predictions for high
charge and energy (HZE) ions, using the Monte Carlo codes Geant4-DNA, RITRACKS
and RITCARD. To ensure consistency of ion transport of both codes, we first
compared microdosimetry and nanodosimetry spectra for different ions of
interest in hadrontherapy and space research. The Rudd model was used for the
transport of ions in both models. Developments were made in Geant4 (v11.2) to
include periodic boundary conditions (PBC) to account for electron equilibrium
in small targets. Excellent agreements were found for both microdosimetric and
nanodosimetric spectra for all ion types, with and without PBC. Some
discrepancies remain for low-energy deposition events, likely due to
differences in electron interaction models. The latest results obtained using
the newly available Geant4 example ``dsbandrepair'' will be presented and
compared to DNA break predictions obtained with RITCARD. | arXiv |
We completely classify the asymptotic behavior of the number of alternating
sign matrices classically avoiding a single permutation pattern, in the sense
of [Johansson and Linusson 2007]. In particular, we give a uniform proof of an
exponential upper bound for the number of alternating sign matrices classically
avoiding one of twelve particular patterns, and a super-exponential lower bound
for all other single-pattern avoidance classes. We also show that for any fixed
integer $k$, there is an exponential upper bound for the number of alternating
sign matrices that classically avoid any single permutation pattern and contain
precisely $k$ negative ones. Finally, we prove that there must be at most $3$
negative ones in an alternating sign matrix which classically avoids both
$2143$ and $3412$, and we exactly enumerate the number of them with precisely
$3$ negative ones. | arXiv |
In this paper we prove criteria for convexity and concavity of $f$-potentials
($f$-means, or Kolmogorov means), particular cases of which are the arithmetic,
geometric and harmonic means, the thermodynamic potential (exponential mean), and
the $L^{p}$-norm. Then we compute in quadratures all functions $f$ satisfying
these criteria. | arXiv |
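For reference, the quasi-arithmetic (Kolmogorov) mean generated by a continuous, strictly monotone function $f$ is the standard object behind these potentials (this display is textbook background, not quoted from the paper):
$$ M_f(x_1,\dots,x_n) \;=\; f^{-1}\!\Big(\frac{1}{n}\sum_{i=1}^{n} f(x_i)\Big), $$
with $f(x)=x$, $f(x)=\log x$ and $f(x)=1/x$ giving the arithmetic, geometric and harmonic means, $f(x)=x^{p}$ giving the power ($L^{p}$-type) mean, and $f(x)=e^{tx}$ giving the exponential (thermodynamic) mean.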
Large Language Models (LLMs) often perpetuate biases in pronoun usage,
leading to misrepresentation or exclusion of queer individuals. This paper
addresses the specific problem of biased pronoun usage in LLM outputs,
particularly the inappropriate use of traditionally gendered pronouns ("he,"
"she") when inclusive language is needed to accurately represent all
identities. We introduce a collaborative agent pipeline designed to mitigate
these biases by analyzing and optimizing pronoun usage for inclusivity. Our
multi-agent framework includes specialized agents for both bias detection and
correction. Experimental evaluations using the Tango dataset, a benchmark
focused on gender pronoun usage, demonstrate that our approach significantly
improves inclusive pronoun classification, achieving a 32.6 percentage point
increase over GPT-4o in correctly disagreeing with inappropriate traditionally
gendered pronouns $(\chi^2 = 38.57, p < 0.0001)$. These results accentuate the
potential of agent-driven frameworks in enhancing fairness and inclusivity in
AI-generated content, demonstrating their efficacy in reducing biases and
promoting socially responsible AI. | arXiv |
Emerging distributed generation demands highly reliable and resilient
coordinating control in microgrids. To improve on these aspects, spiking neural
network is leveraged, as a grid-edge intelligence tool to establish a talkative
infrastructure, Spike Talk, expediting coordination in next-generation
microgrids without the need of communication at all. This paper unravels the
physics behind Spike Talk from the perspective of its distributed
infrastructure, which aims to address the Von Neumann Bottleneck. Relying on
inferring information from power flows in tie lines, Spike Talk enables adaptive
and flexible control and coordination on its own, and features synaptic
plasticity that facilitates online and local training. Preliminary
case studies are demonstrated with results, while more extensive validations
are left as future work. | arXiv |
Large language models (LLMs) typically employ greedy decoding or
low-temperature sampling for reasoning tasks, reflecting a perceived trade-off
between diversity and accuracy. We challenge this convention by introducing
top-$n\sigma$, a novel sampling method that operates directly on pre-softmax
logits by leveraging a statistical threshold. Our key insight is that logits
naturally separate into a Gaussian-distributed noisy region and a distinct
informative region, enabling efficient token filtering without complex
probability manipulations. Unlike existing methods (e.g., top-$p$, min-$p$)
that inadvertently include more noise tokens at higher temperatures,
top-$n\sigma$ maintains a stable sampling space regardless of temperature
scaling. We also provide a theoretical analysis of top-$n\sigma$ to better
understand its behavior. The extensive experimental results across four
reasoning-focused datasets demonstrate that our method not only outperforms
existing sampling approaches but also surpasses greedy decoding, while
maintaining consistent performance even at high temperatures. | arXiv |
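A minimal sketch of one plausible reading of the thresholding rule described above (the exact cutoff definition and constants belong to the paper; everything below is illustrative):

```python
import numpy as np

def top_n_sigma_sample_probs(logits, n=1.0, temperature=1.0):
    # Filter directly on the raw (pre-softmax) logits: keep only tokens whose
    # logit lies within n standard deviations of the maximum logit. Because the
    # cutoff is computed before temperature scaling, the retained token set
    # stays the same as the temperature changes.
    logits = np.asarray(logits, dtype=float)
    keep = logits >= logits.max() - n * logits.std()
    scaled = logits / temperature
    weights = np.where(keep, np.exp(scaled - scaled.max()), 0.0)
    return weights / weights.sum()

print(top_n_sigma_sample_probs([9.0, 8.5, 1.0, 0.5, -2.0], n=1.0, temperature=2.0))
# only the first two (informative) tokens keep nonzero probability
```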
One of fascinating phenomena of nature is quantum nonlocality, which is
observed upon measurements on spacelike entangled systems. However, there are
sets of post-quantum models which have stronger correlations than quantum
mechanics, wherein instantaneous communication remains impossible. The set of
almost quantum correlations is one of the post-quantum models: it satisfies all
kinematic axioms of standard quantum correlations except one, while
containing correlations slightly stronger than quantum correlations. There arises
the natural question of whether there is some fundamental principle of nature
which can genuinely characterize quantum correlations. Here, we provide an
answer and close this gap by invoking the isotropy and homogeneity principles
of the flat space as a conclusive and distinguishing criterion to rule out the
almost-quantum correlations model. In particular, to characterize quantum
correlations we impose the isotropy and homogeneity symmetry group structure on
the almost quantum correlations model and request that the joint probability
distributions corresponding to the Born rule remain invariant. We prove that
this condition is sufficient (and necessary) to reduce the almost quantum
correlations model to quantum mechanics in both bipartite and multipartite
systems. | arXiv |
The Activated Random Walk (ARW) model is a promising candidate for
demonstrating self-organized criticality due to its potential for universality.
Recent studies have shown that the ARW model exhibits a well-defined critical
density in one dimension, supporting its universality. In this paper, we extend
these results by demonstrating that the ARW model on $\mathbb{Z}$, with a
single initially active particle and all other particles sleeping, maintains
the same critical density. Our findings relax the previous assumption that
required all particles to be initially active. This provides further evidence
of the ARW model's robustness and universality in depicting self-organized
criticality. | arXiv |
Air-to-air missiles are used on many modern military combat aircraft for
self-defence. It is imperative for the pilots using the weapons that the
missiles hit their target first time. The important goals for a missile control
system to achieve are minimising the time constant, overshoot, and settling
time of the missile dynamics. The combination of high angles of attack,
time-varying mass, thrust, and centre of gravity, actuator delay, and signal
noise create a highly non-linear dynamic system with many uncertainties that is
extremely challenging to control. A robust control system based on saturated
sliding mode control is proposed to overcome the time-varying parameters and
non-linearities. A lag compensator is designed to overcome actuator delay. A
second-order filter is selected to reduce high-frequency measurement noise.
When combined, the proposed solutions can make the system stable despite the
existence of changing mass, centre of gravity, thrust, and sensor noise. The
system was evaluated for desired pitch angles of 0{\deg} to 90{\deg}. The time
constant for the system stayed below 0.27s for all conditions, with
satisfactory performance for both settling time and overshoot. | arXiv |
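A generic saturated sliding-mode control law of the kind described above, given as a hedged sketch (the sliding surface, gains, and boundary-layer width are illustrative placeholders, not the paper's tuned design; the lag compensator and measurement filter are omitted):

```python
import numpy as np

def saturated_smc(pitch_error, pitch_rate_error, lam=5.0, K=2.0, phi=0.1):
    # Sliding surface combining the tracking error and its rate
    s = pitch_rate_error + lam * pitch_error
    # Saturation (instead of a hard sign function) inside a boundary layer of
    # width phi keeps the control bounded and suppresses chattering
    return -K * np.clip(s / phi, -1.0, 1.0)

print(saturated_smc(pitch_error=0.05, pitch_rate_error=-0.2))  # -1.0
```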
Tilt rotor aircraft combine the benefits of both helicopters and fixed wing
aircraft, which makes them popular for a variety of applications, including
Search and Rescue and VVIP transport. However, due to the multiple flight
modes, significant challenges with regards to the control system design are
experienced. The main challenges with VTOL aircraft arise during the dynamic
phase (mode transition), where the aircraft transitions from a hover state to
full forwards flight. In this transition phase the aerodynamic lift and torque
generated by the wing/control surfaces increases and as such, the rotor thrust,
and the tilt rate must be carefully considered, such that the height and
attitude remain invariant during the mode transition. In this paper, a digital
PID controller with the applicable digital filter and data hold functions is
designed so that a successful mode transition between hover and forwards flight
can be achieved. Finally, the presented control system for the tilt-rotor
UAV is demonstrated through simulations by using the MATLAB software suite. The
performance obtained from the simulations confirm the success of the
implemented methods, with full stability in all three degrees of freedom being
demonstrated. | arXiv |
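As a hedged illustration of the discrete controller structure mentioned above (a textbook backward-difference digital PID; the gains, sample time, and the specific filter and data-hold design of the paper are not reproduced):

```python
class DigitalPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt                   # rectangular (zero-order hold) integration
        derivative = (error - self.prev_error) / self.dt   # backward-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = DigitalPID(kp=1.2, ki=0.4, kd=0.05, dt=0.01)
print(pid.step(0.3))  # first control update for a 0.3 rad attitude error
```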
From prehistoric encirclement for hunting to GPS orbiting the earth for
positioning, target encirclement has numerous real world applications. However,
encircling multiple non-cooperative targets in GPS-denied environments remains
challenging. In this work, the encirclement of multiple targets using a minimum of
two tasking agents is considered, where the relative distance measurements
between the agents and the targets can be obtained by using onboard sensors.
Based on the measurements, the center of all the targets is estimated directly
by a fuzzy wavelet neural network (FWNN) and the least squares fit method.
Then, a new distributed anti-synchronization controller (DASC) is designed so
that the two tasking agents are able to encircle all targets while staying
opposite to each other. In particular, the radius of the desired encirclement
trajectory can be dynamically determined to avoid potential collisions between
the two agents and all targets. Based on the Lyapunov stability analysis
method, the convergence proofs of the neural network prediction error, the
target-center position estimation error, and the controller error are provided.
Finally, both numerical simulations and UAV flight experiments
are conducted to demonstrate the validity of the encirclement algorithms. The
flight test videos and other simulation results can be found at
https://youtu.be/B8uTorBNrl4. | arXiv |
The inherent volatility and dynamic fluctuations within the financial stock
market underscore the necessity for investors to employ a comprehensive and
reliable approach that integrates risk management strategies, market trends,
and the movement trends of individual securities. By evaluating specific data,
investors can make more informed decisions. However, the current body of
literature lacks substantial evidence supporting the practical efficacy of
reinforcement learning (RL) agents, as many models have only demonstrated
success in back testing using historical data. This highlights the urgent need
for a more advanced methodology capable of addressing these challenges. There
is a significant disconnect in the effective utilization of financial
indicators to better understand the potential market trends of individual
securities. The disclosure of successful trading strategies is often restricted
within financial markets, resulting in a scarcity of widely documented and
published strategies leveraging RL. Furthermore, current research frequently
overlooks the identification of financial indicators correlated with various
market trends and their potential advantages.
This research endeavors to address these complexities by enhancing the
ability of RL agents to effectively differentiate between positive and negative
buy/sell actions using financial indicators. While we do not address all
concerns, this paper provides deeper insights and commentary on the utilization
of technical indicators and their benefits within reinforcement learning. This
work establishes a foundational framework for further exploration and
investigation of more complex scenarios. | arXiv |
A solid electrolyte with an ionic conductivity comparable to that of a
conventional liquid electrolyte can be used in All Solid State Batteries. The
series Li6.75+xLa3-xSrxZr1.75Ta0.25O12 (x = 0 to 0.20) was synthesized to
improve the ionic conductivity of garnet Li7La3Zr2O12 (LLZO). The structural,
physical and morphological investigations have been carried out for all the
synthesized samples using X-ray diffraction, density measurement and scanning
electron microscopy respectively. The results of electrochemical analysis
showed that the maximum room-temperature ionic conductivity of 3.5 x 10^-4 S/cm
and the minimum activation energy of 0.29 eV are achieved by the 0.05 Sr ceramic
sample. The DC conductivity measurement confirmed the dominance of ionic
conduction in the prepared ceramic samples. The highest ionic conductivity with
the minimum activation energy makes the 0.05 Sr ceramic sample a suitable
choice as solid electrolyte for All Solid State Lithium Ion Batteries. | arXiv |
We investigate the massless scalar perturbations of the
Pleba\'nski-Demia\'nski black hole considering the general case that admits all
nonzero parameters. This case is the most generic black hole spacetime in
general relativity, characterized by mass, spin, acceleration, electric and
magnetic charges, NUT parameter, and cosmological constant. Employing conformal
transformations, we can separate the massless scalar field equation and reduce
the effective potential in the radial perturbation equation into the
P\"oschl--Teller potential in the near-Nariai limit where the event and
cosmo-acceleration horizons are close. This allows us to obtain an exact
analytical solution of the quasinormal frequency, implying that the decay rate
of the field is quantized depending only on the surface gravity of the black
hole. | arXiv |
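For context, the textbook quasinormal spectrum of a P\"oschl--Teller potential $V(x)=V_{0}/\cosh^{2}(\kappa x)$ (quoted here as standard background rather than from the paper) is
$$ \omega_{n} \;=\; \pm\sqrt{V_{0}-\tfrac{\kappa^{2}}{4}} \;-\; i\,\kappa\big(n+\tfrac{1}{2}\big), \qquad n=0,1,2,\dots, $$
so the decay rate $|\mathrm{Im}\,\omega_{n}|=\kappa(n+\tfrac{1}{2})$ is quantized and set only by $\kappa$, which in the near-Nariai limit plays the role of the surface gravity.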
Moist thermodynamics is a fundamental driver of atmospheric dynamics across
all scales, making accurate modeling of these processes essential for reliable
weather forecasts and climate change projections. However, atmospheric models
often make a variety of inconsistent approximations in representing moist
thermodynamics. These inconsistencies can introduce spurious sources and sinks
of energy, potentially compromising the integrity of the models.
Here, we present a thermodynamically consistent and structure-preserving
formulation of the moist compressible Euler equations. When discretised with a
summation-by-parts method, our spatial discretisation conserves mass, water,
entropy, and energy. These properties are achieved by discretising a
skew-symmetric form of the moist compressible Euler equations, using entropy as a
prognostic variable, and the summation-by-parts property of discrete derivative
operators. Additionally, we derive a discontinuous Galerkin spectral element
method with energy and tracer variance stable numerical fluxes, and
experimentally verify our theoretical results through numerical simulations. | arXiv |
We give an algorithm for the fully-dynamic carpooling problem with recourse:
Edges arrive and depart online from a graph $G$ with $n$ nodes according to an
adaptive adversary. Our goal is to maintain an orientation $H$ of $G$ that
keeps the discrepancy, defined as $\max_{v \in V} |\text{deg}_H^+(v) -
\text{deg}_H^-(v)|$, small at all times. We present a simple algorithm and
analysis for this problem with recourse based on cycles that simplifies and
improves on a result of Gupta et al. [SODA '22]. | arXiv |
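A direct computation of the discrepancy defined above, as a small illustrative helper (not the authors' algorithm, which maintains the orientation dynamically):

```python
from collections import defaultdict

def discrepancy(oriented_edges):
    # oriented_edges: list of (u, v) pairs meaning the edge is oriented from u to v
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for u, v in oriented_edges:
        out_deg[u] += 1
        in_deg[v] += 1
    nodes = set(out_deg) | set(in_deg)
    return max(abs(out_deg[v] - in_deg[v]) for v in nodes)

# A directed 4-cycle is perfectly balanced: every vertex has out-degree = in-degree.
print(discrepancy([(0, 1), (1, 2), (2, 3), (3, 0)]))  # 0
```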
We develop a variant of the hypergraph container lemma with non-uniform
conditions on the co-degrees. In particular, an upper bound on the co-degree of
some subset of vertices $T$ is allowed to depend on where the vertices in $T$
live in the hypergraph, rather than having one condition which holds for all
subsets. We use this to extend recent results on nearly-orthogonal sets in
$\mathbb{F}_p^d$. For a field $\mathbb{F}$ and integers $d, k$ and $\ell$, a
set $A \subseteq \mathbb{F}^d$ is called $(k,\ell)$-nearly orthogonal if all
vectors in $A$ are non-self-orthogonal and every $k+1$ vectors in $A$ contain
$\ell + 1$ pairwise orthogonal vectors. Recently, Haviv, Mattheus,
Milojevi\'{c} and Wigderson have improved the lower bound on nearly orthogonal
sets over finite fields, using counting arguments and a hypergraph container
lemma. They showed that for every prime $p$ and an integer $\ell$, there is a
constant $\delta(p,\ell)$ such that for every field $\mathbb{F}$ of
characteristic $p$ and for all integers $d \geq k \geq \ell + 1$,
$\mathbb{F}^d$ contains a $(k,\ell)$-nearly orthogonal set of size $d^{\delta k
/ \log k}$. This nearly matches an upper bound $\binom{d+k}{k}$ coming from
Ramsey theory. Moreover, they proved the same lower bound for the size of a
largest set $A$ where for any two subsets of $A$ of size $k+1$ each, there is a
vector in one of the subsets orthogonal to a vector in the other one. We prove
that essentially the same lower bound holds for the size of a largest set $A
\subseteq \mathbb{F}^d$ with the stronger property that given any family of
subsets $A_1, \ldots, A_{\ell+1} \subset A$, each of size $k+1$, we can find a
vector in each $A_i$ such that they are all pairwise orthogonal. | arXiv |
Auscultation of internal body sounds is essential for diagnosing a range of
health conditions, yet its effectiveness is often limited by clinicians'
expertise and the acoustic constraints of human hearing, restricting its use
across various clinical scenarios. To address these challenges, we introduce
AuscultaBase, a foundational framework aimed at advancing body sound
diagnostics through innovative data integration and contrastive learning
techniques. Our contributions include the following: First, we compile
AuscultaBase-Corpus, a large-scale, multi-source body sound database
encompassing 11 datasets with 40,317 audio recordings and totaling 322.4 hours
of heart, lung, and bowel sounds. Second, we develop AuscultaBase-Model, a
foundational diagnostic model for body sounds, utilizing contrastive learning
on the compiled corpus. Third, we establish AuscultaBase-Bench, a comprehensive
benchmark containing 16 sub-tasks, assessing the performance of various
open-source acoustic pre-trained models. Evaluation results indicate that our
model outperforms all other open-source models in 12 out of 16 tasks,
demonstrating the efficacy of our approach in advancing diagnostic capabilities
for body sound analysis. | arXiv |
DNSSEC, a DNS security extension, is essential to accurately translating
domain names to IP addresses. Digital signatures provide the foundation for
this reliable translation, however, the evolution of 'Quantum Computers' has
made traditional digital signatures vulnerable. In light of this, NIST has
recently selected potential post-quantum digital signatures that can operate on
conventional computers and resist attacks made with Quantum Computers. Since
these post-quantum digital signatures are still in their early stages of
development, replacing pre-quantum digital signature schemes in DNSSEC with
post-quantum candidates is risky until the post-quantum candidates have
undergone a thorough security analysis. Given this, herein, we investigate the
viability of employing 'Double-Signatures' in DNSSEC, combining a post-quantum
digital signature and a classic one. The rationale is that double-signatures
will offer protection against quantum threats on conventional signature schemes
as well as unknown non-quantum attacks on post-quantum signature schemes, hence
even if one fails the other provides security guarantees. However, the
inclusion of two signatures in the DNSSEC response message conflicts
with the maximum allowed size of DNSSEC responses (i.e., 1232 B, a limitation
imposed by the MTU of physical links). To counter this issue, we leverage a way to
do application-layer fragmentation of DNSSEC responses with two signatures. We
implement our solution on top of OQS-BIND and through experiments show that the
addition of two signatures in DNSSEC and application-layer fragmentation of all
relevant resource records and their reassembly does not have any substantial
impact on the efficiency of the resolution process and thus is suitable for the
interim period at least until the quantum computers are fully realized. | arXiv |
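A toy sketch of the application-layer fragmentation idea under the 1232-byte ceiling mentioned above (the per-fragment overhead, wire format, and reassembly logic of the actual OQS-BIND implementation are not reproduced here):

```python
MAX_RESPONSE = 1232  # DNSSEC response-size ceiling in bytes, as cited above

def fragment(payload: bytes, header_overhead: int = 32):
    # Split an oversized response body into chunks that, together with a fixed
    # per-fragment header, stay under the response-size cap.
    chunk = MAX_RESPONSE - header_overhead
    return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

def reassemble(fragments):
    return b"".join(fragments)

response = b"\x00" * 4096          # e.g. a ~4 kB double-signature response
fragments = fragment(response)
assert reassemble(fragments) == response
print(len(fragments), [len(f) for f in fragments])  # 4 fragments of at most 1200 B each
```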
ChatGPT and other large language models (LLMs) promise to revolutionize
software development by automatically generating code from program
specifications. We assess the performance of ChatGPT's GPT-3.5-turbo model on
LeetCode, a popular platform with algorithmic coding challenges for technical
interview practice, across three difficulty levels: easy, medium, and hard. We
test three main hypotheses. First, ChatGPT solves fewer problems as difficulty
rises (Hypothesis 1). Second, prompt engineering improves ChatGPT's
performance, with greater gains on easier problems and diminishing returns on
harder ones (Hypothesis 2). Third, ChatGPT performs better in popular languages
like Python, Java, and C++ than in less common ones like Elixir, Erlang, and
Racket (Hypothesis 3). To investigate these hypotheses, we conduct automated
experiments using Python scripts to generate prompts that instruct ChatGPT to
create Python solutions. These solutions are stored and manually submitted on
LeetCode to check their correctness. For Hypothesis 1, results show the
GPT-3.5-turbo model successfully solves 92% of easy, 79% of medium, and 51% of
hard problems. For Hypothesis 2, prompt engineering yields improvements: 14-29%
for Chain of Thought Prompting, 38-60% by providing failed test cases in a
second feedback prompt, and 33-58% by switching to GPT-4. From a random subset
of problems ChatGPT solved in Python, it also solved 78% in Java, 50% in C++,
and none in Elixir, Erlang, or Racket. These findings generally validate all
three hypotheses. | arXiv |
Large and Small Language Models (LMs) are typically pretrained using
extensive volumes of text, which are sourced from publicly accessible platforms
such as Wikipedia, Book Corpus, or through web scraping. These models, due to
their exposure to a wide range of language data, exhibit impressive
generalization capabilities and can perform a multitude of tasks
simultaneously. However, they often fall short when it comes to domain-specific
tasks due to their broad training data. This paper introduces SecEncoder, a
specialized small language model that is pretrained using security logs.
SecEncoder is designed to address the domain-specific limitations of general
LMs by focusing on the unique language and patterns found in security logs.
Experimental results indicate that SecEncoder outperforms other LMs, such as
BERTlarge, DeBERTa-v3-large and OpenAI's Embedding (textembedding-ada-002)
models, which are pretrained mainly on natural language, across various tasks.
Furthermore, although SecEncoder is primarily pretrained on log data, it
outperforms models pretrained on natural language for a range of tasks beyond
log analysis, such as incident prioritization and threat intelligence document
retrieval. This suggests that domain-specific pretraining with logs can
significantly enhance the performance of LMs in security. These findings pave
the way for future research into security-specific LMs and their potential
applications. | arXiv |
Distributionally robust offline reinforcement learning (RL) aims to find a
policy that performs the best under the worst environment within an uncertainty
set using an offline dataset collected from a nominal model. While recent
advances in robust RL focus on Markov decision processes (MDPs), robust
non-Markovian RL is limited to planning problems where the transitions in the
uncertainty set are known. In this paper, we study the learning problem of
robust offline non-Markovian RL. Specifically, when the nominal model admits a
low-rank structure, we propose a new algorithm, featuring a novel dataset
distillation and a lower confidence bound (LCB) design for robust values under
different types of the uncertainty set. We also derive new dual forms for these
robust values in non-Markovian RL, making our algorithm more amenable to
practical implementation. By further introducing a novel type-I concentrability
coefficient tailored for offline low-rank non-Markovian decision processes, we
prove that our algorithm can find an $\epsilon$-optimal robust policy using
$O(1/\epsilon^2)$ offline samples. Moreover, we extend our algorithm to the
case when the nominal model does not have specific structure. With a new
type-II concentrability coefficient, the extended algorithm also enjoys
polynomial sample efficiency under all different types of the uncertainty set. | arXiv |
Deep Learning Recommendation Model(DLRM)s utilize the embedding layer to
represent various categorical features. Traditional DLRMs adopt a unified
embedding size for all features, leading to suboptimal performance and
redundant parameters. Thus, lots of Automatic Embedding size Search (AES) works
focus on obtaining mixed embedding sizes with strong model performance.
However, previous AES works can hardly address several challenges together: (1)
The search results of embedding sizes are unstable; (2) Recommendation effect
with AES results is unsatisfactory; (3) Memory cost of embeddings is
uncontrollable. To address these challenges, we propose a novel one-shot AES
framework called AdaS&S, in which a supernet encompassing various candidate
embeddings is built and AES is performed as searching network architectures
within it. Our framework contains two main stages: In the first stage, we
decouple training parameters from searching embedding sizes, and propose the
Adaptive Sampling method to yield a well-trained supernet, which further helps
to produce stable AES results. In the second stage, to obtain embedding sizes
that benefit the model performance, we design a reinforcement learning search
process which utilizes the supernet trained previously. Meanwhile, to adapt
the search to a specific resource constraint, we introduce the resource
competition penalty to balance the model effectiveness and memory cost of
embeddings. We conduct extensive experiments on public datasets to show the
superiority of AdaS&S. Our method could improve AUC by about 0.3% while saving
about 20% of model parameters. Empirical analysis also shows that the stability
of searching results in AdaS&S significantly exceeds other methods. | arXiv |
Objective: Ensuring the precision in motion tracking for MRI-guided
Radiotherapy (MRIgRT) is crucial for the delivery of effective treatments. This
study refined the motion tracking accuracy in MRIgRT through the innovation of
an automatic real-time tracking method, leveraging an enhanced
Tracking-Learning-Detection (ETLD) framework coupled with automatic
segmentation. Methods: We developed a novel MRIgRT motion tracking method by
integrating two primary methods: the ETLD framework and an improved Chan-Vese
model (ICV), named ETLD+ICV. The TLD framework was upgraded to suit real-time
cine MRI, including advanced image preprocessing, no-reference image quality
assessment, an enhanced median-flow tracker, and a refined detector with
dynamic search region adjustments. Additionally, ICV was combined for precise
coverage of the target volume, which refined the segmented region frame by
frame using tracking results, with key parameters optimized. Tested on 3.5D MRI
scans from 10 patients with liver metastases, our method ensures precise
tracking and accurate segmentation vital for MRIgRT. Results: An evaluation of
106,000 frames across 77 treatment fractions revealed sub-millimeter tracking
errors of less than 0.8mm, with over 99% precision and 98% recall for all
subjects, underscoring the robustness and efficacy of the ETLD. Moreover, the
ETLD+ICV yielded a dice global score of more than 82% for all subjects,
demonstrating the proposed method's extensibility and precise target volume
coverage. Conclusions: This study successfully developed an automatic real-time
motion tracking method for MRIgRT that markedly surpasses current methods. The
novel method not only delivers exceptional precision in tracking and
segmentation but also demonstrates enhanced adaptability to clinical demands,
positioning it as an indispensable asset in the quest to augment the efficacy
of radiotherapy treatments. | arXiv |
In the past decade, an asymmetry in the large-scale distribution of galaxy
spin directions has been observed in data from all relevant digital sky
surveys, all showing a higher number of galaxies rotating in the opposite
direction relative to the Milky Way as observed from Earth. Additionally, JWST
deep fields have shown that the asymmetry is clear and obvious, and can be
sensed even by the naked human eye. These experiments were performed using two
separate statistical methods: standard binomial distribution and simple
$\chi^2$ statistics. Stiskalek \& Desmond (2024) suggested that the asymmetry
in the distribution of galaxy spin directions is due to the use of binomial or
$\chi^2$ statistics. Instead, they developed a new complex ad-hoc statistical
method that shows random distribution in galaxy spin directions, and
specifically in data from HSC. Source code for the method was also made
available. The primary downside of the new method is that it is not able to
identify asymmetry in the distribution of galaxy spin directions. Even when the
new method is provided with synthetic data with extreme and obvious asymmetry,
it still reports a null-hypothesis Universe with random distribution. That
shows empirically that the method cannot sense asymmetry in the distribution of
the directions of rotation of galaxies. While this further confirms that the
distribution of galaxy spin directions as observed from Earth is not symmetric,
it is not necessarily an indication of an anomaly in the large-scale structure.
The excessive number of galaxies that rotate in the opposite direction relative
to the Milky Way can also be driven by the internal structure of galaxies and
the physics of galaxy rotation. The phenomenon can be related to other puzzling
anomalies such as the H0 tension. Data are publicly available, and no code is
needed to reproduce the results since only conventional statistics is used. | arXiv |
As large language models (LLMs) grow more powerful, ensuring their safety
against misuse becomes crucial. While researchers have focused on developing
robust defenses, no method has yet achieved complete invulnerability to
attacks. We propose an alternative approach: instead of seeking perfect
adversarial robustness, we develop rapid response techniques that aim to block
whole classes of jailbreaks after observing only a handful of attacks. To study
this setting, we develop RapidResponseBench, a benchmark that measures a
defense's robustness against various jailbreak strategies after adapting to a
few observed examples. We evaluate five rapid response methods, all of which
use jailbreak proliferation, where we automatically generate additional
jailbreaks similar to the examples observed. Our strongest method, which
fine-tunes an input classifier to block proliferated jailbreaks, reduces attack
success rate by a factor greater than 240 on an in-distribution set of
jailbreaks and a factor greater than 15 on an out-of-distribution set, having
observed just one example of each jailbreaking strategy. Moreover, further
studies suggest that the quality of the proliferation model and the number of
proliferated examples play a key role in the effectiveness of this defense.
Overall, our results highlight the potential of responding rapidly to novel
jailbreaks to limit LLM misuse. | arXiv |
Large Language Models (LLMs) excel in diverse applications including
generation of code snippets, but often struggle with generating code for
complex Machine Learning (ML) tasks. Although existing LLM single-agent based
systems give varying performance depending on the task complexity, they rely
purely on larger, expensive models such as GPT-4. Our investigation reveals
that no-cost and low-cost models such as Gemini-Pro, Mixtral and CodeLlama
perform far worse than GPT-4 in a single-agent setting. With the motivation of
developing a cost-efficient LLM based solution for solving ML tasks, we propose
an LLM Multi-Agent based system which leverages combination of experts using
profiling, efficient retrieval of past observations, LLM cascades, and
ask-the-expert calls. Through empirical analysis on ML engineering tasks in the
MLAgentBench benchmark, we demonstrate the effectiveness of our system, using
no-cost models, namely Gemini as the base LLM, paired with GPT-4 in cascade and
expert to serve occasional ask-the-expert calls for planning. With 94.2\%
reduction in the cost (from \$0.931 per run cost averaged over all tasks for
GPT-4 single agent system to \$0.054), our system is able to yield better
average success rate of 32.95\% as compared to GPT-4 single-agent system
yielding 22.72\% success rate averaged over all the tasks of MLAgentBench. | arXiv |
We present preliminary results of a Chandra Large Program to monitor the
ultraluminous X-ray source (ULX) populations of three nearby, ULX-rich galaxies
over the course of a year, finding the ULX population to show a variety of
long-term variability behaviours. Of a sample of 36 ULXs, some show persistent
or moderately variable flux, often with a significant relationship between
hardness and luminosity, consistent with a supercritically accreting source
with varying accretion rates. Six show very high-amplitude variability with no
strong relationship between luminosity and hardness, though not all of them
show evidence of any long-term periodicity, nor of the bimodal distribution
indicative of the propeller effect. We find evidence of additional eclipses for
two previously-identified eclipsing ULXs. Additionally, many sources that were
previously identified as ULXs in previous studies were not detected at ULX
luminosities during our monitoring campaign, indicating a large number of
transient ULXs. | arXiv |
We investigate critical points of eigencurves of bivariate matrix pencils
$A+\lambda B +\mu C$. Points $(\lambda,\mu)$ for which $\det(A+\lambda B+\mu
C)=0$ form algebraic curves in $\mathbb C^2$ and we focus on points where
$\mu'(\lambda)=0$. Such points are referred to as zero-group-velocity (ZGV)
points, following terminology from engineering applications. We provide a
general theory for the ZGV points and show that they form a subset (with
equality in the generic case) of the 2D points $(\lambda_0,\mu_0)$, where
$\lambda_0$ is a multiple eigenvalue of the pencil $(A+\mu_0 C)+\lambda B$, or,
equivalently, there exist nonzero $x$ and $y$ such that $(A+\lambda_0 B+\mu_0
C)x=0$, $y^H(A+\lambda_0 B+\mu_0 C)=0$, and $y^HBx=0$.
We introduce three numerical methods for computing 2D and ZGV points. The
first method calculates all 2D (ZGV) points from the eigenvalues of a related
singular two-parameter eigenvalue problem. The second method employs a
projected regular two-parameter eigenvalue problem to compute either all
eigenvalues or only a subset of eigenvalues close to a given target. The third
approach is a locally convergent Gauss--Newton-type method that computes a
single 2D point from an initial approximation; the latter can be provided for all
2D points via the method of fixed relative distance by Jarlebring, Kvaal, and
Michiels. In our numerical examples we use these methods to compute
2D-eigenvalues, solve double eigenvalue problems, determine ZGV points of a
parameter-dependent quadratic eigenvalue problem, evaluate the distance to
instability of a stable matrix, and find critical points of eigencurves of a
two-parameter Sturm-Liouville problem. | arXiv |
The growing usage of Large Language Models (LLMs) highlights the demands and
challenges in scalable LLM inference systems, affecting deployment and
development processes. On the deployment side, there is a lack of comprehensive
analysis on the conditions under which a particular scheduler performs better
or worse, with performance varying substantially across different schedulers,
hardware, models, and workloads. Manually testing each configuration on GPUs
can be prohibitively expensive. On the development side, unpredictable
performance and unknown upper limits can lead to inconclusive trial-and-error
processes, consuming resources on ideas that end up ineffective. To address
these challenges, we introduce INFERMAX, an analytical framework that uses
inference cost models to compare various schedulers, including an optimal
scheduler formulated as a constraint satisfaction problem (CSP) to establish an
upper bound on performance. Our framework offers in-depth analysis and raises
essential questions, challenging assumptions and exploring opportunities for
more efficient scheduling. Notably, our findings indicate that preempting
requests can reduce GPU costs by 30% compared to avoiding preemptions altogether.
We believe our methods and insights will facilitate the cost-effective
deployment and development of scalable, efficient inference systems and pave
the way for cost-based scheduling. | arXiv |
Existing approaches for all-in-one weather-degraded image restoration suffer
from inefficiencies in leveraging degradation-aware priors, resulting in
sub-optimal performance in adapting to different weather conditions. To this
end, we develop an adaptive degradation-aware self-prompting model (ADSM) for
all-in-one weather-degraded image restoration. Specifically, our model employs
the contrastive language-image pre-training model (CLIP) to facilitate the
training of our proposed latent prompt generators (LPGs), which represent three
types of latent prompts to characterize the degradation type, degradation
property and image caption. Moreover, we integrate the acquired
degradation-aware prompts into the time embedding of diffusion model to improve
degradation perception. Meanwhile, we employ the latent caption prompt to guide
the reverse sampling process using the cross-attention mechanism, thereby
guiding the accurate image reconstruction. Furthermore, to accelerate the
reverse sampling procedure of diffusion model and address the limitations of
frequency perception, we introduce a wavelet-oriented noise estimating network
(WNE-Net). Extensive experiments conducted on eight publicly available datasets
demonstrate the effectiveness of our proposed approach in both task-specific
and all-in-one applications. | arXiv |
Traditional compilers, designed for optimizing low-level code, fall short
when dealing with modern, computation-heavy applications like image processing,
machine learning, or numerical simulations. Optimizations should understand the
primitive operations of the specific application domain and thus happen on that
level.
Domain-specific languages (DSLs) fulfill these requirements. However, DSL
compilers reinvent the wheel over and over again as standard optimizations,
code generators, and general infrastructure & boilerplate code must be
reimplemented for each DSL compiler.
This paper presents MimIR, an extensible, higher-order intermediate
representation. At its core, MimIR is a pure type system and, hence, a form of
a typed lambda calculus. Developers can declare the signatures of new
(domain-specific) operations, called "axioms". An axiom can be the declaration
of a function, a type operator, or any other entity with a possibly
polymorphic, polytypic, and/or dependent type. This way, developers can extend
MimIR at any low or high level and bundle these extensions in a plugin. Plugins extend the
compiler and take care of optimizing and lowering the plugins' axioms.
We show the expressiveness and effectiveness of MimIR in three case studies:
Low-level plugins that operate at the same level of abstraction as LLVM, a
regular-expression matching plugin, and plugins for linear algebra and
automatic differentiation. We show that in all three studies, MimIR produces
code that has state-of-the-art performance. | arXiv |
Deceptive patterns (DPs) in digital interfaces manipulate users into making
unintended decisions, exploiting cognitive biases and psychological
vulnerabilities. These patterns have become ubiquitous across various digital
platforms. While efforts to mitigate DPs have emerged from legal and technical
perspectives, a significant gap in usable solutions that empower users to
identify and make informed decisions about DPs in real-time remains. In this
work, we introduce AutoBot, an automated, deceptive pattern detector that
analyzes websites' visual appearances using machine learning techniques to
identify and notify users of DPs in real-time. AutoBot employs a two-staged
pipeline that processes website screenshots, identifying interactable elements
and extracting textual features without relying on HTML structure. By
leveraging a custom language model, AutoBot understands the context surrounding
these elements to determine the presence of deceptive patterns. We implement
AutoBot as a lightweight Chrome browser extension that performs all analyses
locally, minimizing latency and preserving user privacy. Through extensive
evaluation, we demonstrate AutoBot's effectiveness in enhancing users' ability
to navigate digital environments safely while providing a valuable tool for
regulators to assess and enforce compliance with DP regulations. | arXiv |
Current wireless communication technologies are insufficient in the face of
ever-increasing demands. Therefore, novel and high-performance communication
systems are needed. In this paper, a novel high data rate and high-performance
index modulation scheme called double media-based modulation (DMBM) is
proposed. Within the same symbol period, the DMBM system doubles the number of
mirror activation patterns (MAPs) and the number of transmitted symbols
compared to the traditional MBM system. In this way, the spectral efficiency of
the DMBM is doubled and the error performance improves as the number of bits
carried in the indices increases. Performance analysis of the DMBM scheme is
evaluated for $M$-ary quadrature amplitude modulation ($M$-QAM) on Rayleigh
fading channels. The error performance of the proposed DMBM system is compared
with spatial modulation (SM), quadrature SM (QSM), MBM, and double SM (DSM)
techniques. Also, the throughput, complexity, energy efficiency, spectral
efficiency, and capacity analyses for the proposed DMBM system and SM, QSM,
MBM, and DSM systems are presented. All analysis results show that the proposed
DMBM system is superior to the compared systems. | arXiv |
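As a back-of-the-envelope illustration of the rate comparison above, the sketch below assumes the usual MBM relation of m_rf index bits (for 2^{m_rf} mirror activation patterns) plus log2(M) modulation bits per channel use, and simply takes the abstract's statement that DMBM doubles this figure at face value.

```python
# Spectral-efficiency arithmetic (illustrative assumptions, not the paper's
# derivation): bits per channel use for MBM and for the claimed DMBM doubling.
from math import log2


def mbm_rate(M, m_rf):
    """Assumed MBM rate: index bits plus M-QAM modulation bits."""
    return m_rf + log2(M)


def dmbm_rate(M, m_rf):
    """Two symbols and two index selections per symbol period (per the abstract)."""
    return 2 * mbm_rate(M, m_rf)


for M in (4, 16, 64):
    print(f"M={M}: MBM {mbm_rate(M, 4):.1f} bpcu, DMBM {dmbm_rate(M, 4):.1f} bpcu")
```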
In this paper, we introduce a new technique to study the distribution in
residue classes of sets of integers with digit and sum-of-digits restrictions.
From our main theorem, we derive a necessary and sufficient condition for
integers with missing digits to be uniformly distributed in arithmetic
progressions, extending previous results going back to work of Erd\H{o}s,
Mauduit and S\'ark\"ozy. Our approach utilizes Markov chains and does not rely
on Fourier analysis as many results of this nature do. Our results apply more
generally to the class of multiplicatively invariant sets of integers. This
class, defined by Glasscock, Moreira and Richter using symbolic dynamics, is an
integer analogue to fractal sets and includes all missing digits sets. We
address uniform distribution in this setting, partially answering an open
question posed by the same authors. | arXiv |
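As a purely empirical illustration of the kind of question addressed above (and not the paper's Markov-chain argument), one can count integers with restricted digits in each residue class and inspect how close the counts are to uniform.

```python
# Empirical sketch: residue-class counts of integers whose base-b digits are
# drawn from a restricted set, to eyeball uniform distribution mod q.
from itertools import product
from collections import Counter


def residue_counts(digits, length, q, base=10):
    counts = Counter()
    for tup in product(digits, repeat=length):
        n = 0
        for d in tup:
            n = n * base + d
        counts[n % q] += 1
    return counts


# Integers with 8 base-10 digits drawn from {0, 1, 2}, reduced mod 7:
print(residue_counts((0, 1, 2), 8, 7))
```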
Given a simple finite graph $G=(V(G),E(G))$, a vertex subset $D\subseteq
V(G)$ is said to be a dominating set if every vertex $v\in V(G)-D$ is adjacent
to a vertex in $D$. The independent domination number $\gamma_i(G)$ is the
minimum cardinality among all independent dominating sets of $G$. As the
problem of finding the domination number for general graphs is NP-complete, we
focus on the class of $k$-trees. In particular, we determine a tight upper
bound for the independent domination number of $k$-trees for all $k\in
\mathbb{N}$. | arXiv |
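For readers unfamiliar with the definitions, here is a minimal brute-force computation of gamma_i(G) for small graphs. It only illustrates the quantity being bounded and has nothing to do with the paper's k-tree argument.

```python
# Brute-force independent domination number for a small graph given as an
# adjacency dictionary (exponential time; illustration only).
from itertools import combinations


def independent_domination_number(adj):
    """adj maps each vertex to the set of its neighbours."""
    nodes = list(adj)
    for size in range(1, len(nodes) + 1):
        for cand in combinations(nodes, size):
            S = set(cand)
            # independence: no two chosen vertices are adjacent
            if any(v in adj[u] for u, v in combinations(cand, 2)):
                continue
            # domination: every vertex outside S has a neighbour in S
            if all(v in S or adj[v] & S for v in nodes):
                return size
    return 0  # empty graph


# Example: the 4-cycle C4 has gamma_i(C4) = 2.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(independent_domination_number(c4))  # 2
```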
DDO68 is a star-forming (SF) dwarf galaxy residing in a nearby void. Its gas
metallicity is among the lowest known in the local Universe, with parameter
12+log(O/H) in the range of 6.96-7.3 dex. Six of its SF regions are located in
or near the so-called 'Northern Ring', in which the Hubble Space Telescope
(HST) images reveal many luminous young stars. We present for these SF regions
(Knots) the results of optical monitoring in 35 epochs during the years
2016--2023. The data was acquired with the 6m (BTA) and the 1m telescopes of
the Special Astrophysical Observatory and the 2.5m telescope of the MSU
Caucasian Mountain Observatory. We complement the above results with the
archive data from 10 other telescopes for 11 epochs during the years 1988-2013
and with 3 our BTA observations between 2005 and 2015. Our goal is to search
for variability of these Knots and to relate it to the probable light
variations of their brightest stars. One of them, DDO68-V1 (in Knot 3), was
identified in 2008 as a luminous blue variable (LBV) star, born in one of the
lowest-metallicity environments known. For Knot 3, variations of its integrated light in the
previous epochs reached ~0.8 mag. In the period since 2016, the amplitude of
variations of Knot 3 reached ~0.3 mag. For the remaining Knots, the
manifestation of variability is less pronounced owing to their lower
amplitudes. We examine the presence of variability via the chi^{2} criterion
and the Robust Median Statistics and discuss the robustness of the detected
variations. The variability is detected according to both criteria in the
lightcurves of all Knots at a chi^{2} confidence level of alpha = 0.0005. The peak-to-peak
amplitudes of variations are ~0.09, ~0.13, ~0.11, ~0.08 and ~0.16 mag for Knots
1, 2, 4, 5 and 6, respectively. The amplitudes of the related variations of the
brightest supergiants in these regions can reach ~3.0 mag. | arXiv |
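A standard chi-square test against a constant-flux model, of the kind referred to above, can be sketched as follows. The magnitudes and errors in the usage example are made up, and this is not the authors' pipeline.

```python
# Chi-square variability test for a light curve of magnitudes with photometric
# errors, against the hypothesis of a constant source.
import numpy as np
from scipy.stats import chi2


def chi2_variability(mag, err, alpha=0.0005):
    """Return (chi2 statistic, p-value, variable?) for a constant-flux model."""
    mag = np.asarray(mag, dtype=float)
    err = np.asarray(err, dtype=float)
    w = 1.0 / err**2
    m0 = np.sum(w * mag) / np.sum(w)          # weighted mean magnitude
    stat = np.sum(((mag - m0) / err) ** 2)
    dof = mag.size - 1
    p = chi2.sf(stat, dof)
    return stat, p, p < alpha


# Toy usage with made-up magnitudes and 0.02 mag errors:
stat, p, is_var = chi2_variability([18.10, 18.15, 18.02, 18.21, 18.08], [0.02] * 5)
print(f"chi2={stat:.1f}, p={p:.2g}, variable={is_var}")
```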
We show that the CNF satisfiability problem (SAT) can be solved in time
$O^*(1.1199^{(d-2)n})$, where $d$ is either the maximum number of occurrences
of any variable or the average number of occurrences of all variables if no
variable occurs only once. This improves upon the known upper bound of
$O^*(1.1279^{(d-2)n})$ by Wahlstr\"om (SAT 2005) and
$O^*(1.1238^{(d-2)n})$ by Peng and Xiao (IJCAI 2023). For $d\leq 4$, our
algorithm is better than previous results. Our main technical result is an
algorithm that runs in $O^*(1.1199^n)$ for 3-occur-SAT, a restricted instance
of SAT where all variables have at most 3 occurrences. Through deeper case
analysis and a reduction rule that allows us to resolve many variables under a
relatively broad criterion, we are able to circumvent the bottlenecks in
previous algorithms. | arXiv |
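As background for the resolution-based reduction mentioned above (this is the textbook elimination rule, not the paper's specific criterion), resolving a variable replaces all clauses containing it by their non-tautological resolvents; for variables with few occurrences this keeps the formula small.

```python
# Textbook variable elimination by resolution on a CNF formula.
def eliminate_by_resolution(clauses, x):
    """clauses: list of frozensets of nonzero ints (DIMACS-style literals)."""
    pos = [c for c in clauses if x in c]
    neg = [c for c in clauses if -x in c]
    rest = [c for c in clauses if x not in c and -x not in c]
    resolvents = []
    for p in pos:
        for q in neg:
            r = (p - {x}) | (q - {-x})
            if not any(-lit in r for lit in r):   # drop tautological resolvents
                resolvents.append(frozenset(r))
    return rest + resolvents


cnf = [frozenset(s) for s in ({1, 2}, {-1, 3}, {-2, -3})]
print(eliminate_by_resolution(cnf, 1))   # [frozenset({-2, -3}), frozenset({2, 3})]
```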
We use new measurements of the M31 proper motion to examine the Milky Way
(MW) - M31 orbit and angular momentum. For Local Group (LG) mass consistent
with measured values, and assuming the system evolves in isolation, we show a
wide range of orbits is possible. We compare to a sample of LG-like systems in
the Illustris simulation and find that $\sim 13\%$ of these pairs have
undergone a pericentric passage. Using the simulated sample, we examine how
accurately an isolated, two-body model describes the MW-M31 orbit, and show
that $\sim 10\%$ of the analogues in the simulation are well-modeled by such an
orbit. Systems that evolve in isolation by this definition are found to have a
lower rate of major mergers and, in particular, have no major mergers since $z
\approx 0.3$. For all systems, we find an increase in the orbital angular
momentum, which is fairly independent of the merger rate and is possibly
explained by the influence of tidal torques on the LG. Given the likely quiet
recent major merger history of the MW, it is plausible that the isolated
two-body model appropriately describes the orbit, though recent evidence for a
major merger in M31 may complicate this interpretation. | arXiv |
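A minimal sketch of the isolated two-body picture discussed above is given below. The Local Group mass, present-day separation, and velocity components are illustrative round numbers, not values from the paper, and the unit conversions are approximate.

```python
# Point-mass two-body integration of the relative MW-M31 orbit, run backwards
# in time by flipping the sign of today's relative velocity.
import numpy as np

G = 4.498e-6   # gravitational constant in kpc^3 / (Msun * Gyr^2), approximate
M_LG = 4.0e12  # assumed total Local Group mass in Msun


def integrate_orbit(r0, v0, t_end=13.8, dt=1e-3):
    """Leapfrog integration; r0 in kpc, v0 in kpc/Gyr. Returns |r|(t)."""
    r, v = np.array(r0, float), np.array(v0, float)
    seps = []
    for _ in range(int(t_end / dt)):
        a = -G * M_LG * r / np.linalg.norm(r) ** 3
        v += 0.5 * dt * a
        r += dt * v
        a = -G * M_LG * r / np.linalg.norm(r) ** 3
        v += 0.5 * dt * a
        seps.append(np.linalg.norm(r))
    return np.array(seps)


# Present-day separation ~780 kpc; M31 approaches at roughly -110 km/s radially
# (1 km/s ~ 1.023 kpc/Gyr). To run time backwards we flip the velocity sign,
# hence the +112.5 kpc/Gyr radial component and a small tangential component.
traj = integrate_orbit([780.0, 0.0, 0.0], [112.5, -30.0, 0.0])
print("minimum separation over the past 13.8 Gyr:", traj.min(), "kpc")
```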
Audio super-resolution aims to enhance low-resolution signals by creating
high-frequency content. In this work, we modify the architecture of AERO (a
state-of-the-art system for this task) for music super-resolution.
Specifically, we replace its original Attention and LSTM layers with Mamba, a
State Space Model (SSM), across all network layers. Mamba is capable of
effectively substituting the mentioned modules, as it offers a mechanism
similar to that of Attention while also functioning as a recurrent network.
With the proposed AEROMamba, training requires 2-4x less GPU memory, since
Mamba exploits the convolutional formulation and leverages the GPU memory
hierarchy. Additionally, during inference, Mamba operates in constant memory
due to recurrence, avoiding the memory growth associated with Attention. This
results in a 14x speed improvement while using 5x less GPU memory. Subjective listening
tests (0 to 100 scale) show that the proposed model surpasses the AERO model.
In the MUSDB dataset, degraded signals scored 38.22, while AERO and AEROMamba
scored 60.03 and 66.74, respectively. For the PianoEval dataset, scores were
72.92 for degraded signals, 76.89 for AERO, and 84.41 for AEROMamba. | arXiv |
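The recurrent-versus-convolutional duality invoked above can be shown for a toy, time-invariant scalar SSM; Mamba itself uses input-dependent, multi-channel parameters, so this is only a sketch of the underlying idea.

```python
# Dual views of a discretized linear state-space model y = SSM(x):
# a recurrence (constant memory in sequence length) and an equivalent
# convolution (parallelizable over the sequence).
import numpy as np


def ssm_recurrent(x, a, b, c):
    """h_t = a*h_{t-1} + b*x_t ;  y_t = c*h_t."""
    h, ys = 0.0, []
    for xt in x:
        h = a * h + b * xt
        ys.append(c * h)
    return np.array(ys)


def ssm_convolutional(x, a, b, c):
    """Equivalent form: y = conv(x, K) with kernel K_k = c * a**k * b."""
    L = len(x)
    K = c * (a ** np.arange(L)) * b
    return np.array([np.dot(K[:t + 1][::-1], x[:t + 1]) for t in range(L)])


x = np.random.randn(16)
print(np.allclose(ssm_recurrent(x, 0.9, 0.5, 1.2),
                  ssm_convolutional(x, 0.9, 0.5, 1.2)))  # True
```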
We show that any infinite ring has an infinite nonunital compressed commuting
graph. We classify all infinite unital rings with finite unital compressed
commuting graph, using semidirect product of rings as our main tool. As a
consequence we also classify infinite unital rings with only finitely many
unital subrings. | arXiv |
In Special Relativity, massless objects are characterized as either vacuum
states or as radiation propagating at the speed of light. This distinction
extends to General Relativity for asymptotically flat initial data sets (IDS)
\((M^n, g, k)\), where vacuum is represented by slices of Minkowski space, and
radiation is modeled by slices of \(pp\)-wave spacetimes. In contrast, we
demonstrate that asymptotically hyperboloidal IDS with zero mass must embed
isometrically into Minkowski space, with no possible IDS configurations
modeling radiation in this setting. Our result holds under the most general
assumptions. The proof relies on precise decay estimates for spinors on level
sets of spacetime harmonic functions and works in all dimensions. | arXiv |
Advanced simulations of the mechanical behavior of soft tissues frequently
rely on structure-based constitutive models, including smeared descriptions of
collagen fibers. Among them, the so-called Discrete Fiber Dispersion (DFD)
model is based on a discrete integration of the fiber-strain energy over all
the fiber directions. In this paper, we recall the theoretical framework of the
DFD model, including a derivation of the stress and stiffness tensors required
for the finite element implementation. Specifically, their expressions for
incompressible plane stress problems are obtained. The use of a Lebedev
quadrature, built by exploiting the octahedral symmetry, is then proposed,
illustrating the particular choice adopted for the orientation of the
integration points. Next, the convergence of this quadrature scheme is assessed
by means of three numerical benchmark tests, highlighting the advantages with
respect to other angular integration methods available in the literature.
Finally, we propose, as an applicative example, a simulation of Z-plasty, a
technique commonly used in reconstructive skin surgery, considering multiple
geometrical configurations and orientations of the fibers. Results are provided
in terms of key mechanical quantities relevant for the surgical practice. | arXiv |
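A minimal sketch of the discrete fiber-energy integration idea described above follows, with an assumed exponential fiber energy of Holzapfel-Gasser-Ogden type and a crude random direction set standing in for the Lebedev quadrature actually used in the paper.

```python
# Discrete integration of a fiber strain-energy density over unit directions
# (assumed energy form and direction set; illustration of the DFD idea only).
import numpy as np


def fiber_energy(F, dirs, weights, k1=1.0, k2=1.0):
    """Sum_n w_n * psi_f(I4_n), psi_f(I4) = k1/(2 k2) (exp(k2 (I4-1)^2) - 1).

    F       : 3x3 deformation gradient
    dirs    : (N, 3) unit fiber directions
    weights : (N,) quadrature weights summing to 1
    """
    C = F.T @ F                                    # right Cauchy-Green tensor
    I4 = np.einsum('ni,ij,nj->n', dirs, C, dirs)   # squared fiber stretches
    I4 = np.maximum(I4, 1.0)                       # fibers only bear tension
    psi = k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return np.dot(weights, psi)


# Crude direction set: random unit vectors with equal weights (illustration only).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
w = np.full(200, 1.0 / 200)

F = np.diag([1.1, 1.0, 1.0 / 1.1])  # isochoric uniaxial stretch
print(fiber_energy(F, dirs, w))
```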
We derive a complete set of Feynman rules in the general two-Higgs doublet
model effective field theory where the effects of additional new physics are
parametrized by operators up to mass dimension-six. We calculate the physical
Higgs spectrum, contributions to the couplings and masses of electroweak gauge
bosons and fermions, and all contact interactions arising from dimension-six
operators. We also present results in specific limits and types, which include
the $CP$-conserving limit, alignment limit, and the four types of two-Higgs
doublet models: type-I, -II, -X, and -Y. We discuss the differences between the
two-Higgs doublet model effective field theory and the renormalizable two-Higgs
doublet model or the standard model effective field theory. We create a
FeynRules model package for calculating all Feynman rules in the general
effective field theory and its specific limits and types. | arXiv |
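Schematically, the setup described above corresponds to augmenting the renormalizable two-Higgs doublet Lagrangian with dimension-six operators suppressed by a new-physics scale Lambda; this is the generic EFT structure only, not the paper's specific operator basis.

```latex
% Generic structure of the dimension-six expansion assumed above.
\mathcal{L}_{\mathrm{2HDM\,EFT}}
  = \mathcal{L}_{\mathrm{2HDM}}^{(4)}
  + \sum_{i} \frac{C_i}{\Lambda^{2}}\, \mathcal{O}_i^{(6)}
  + \mathcal{O}\!\left(\Lambda^{-4}\right)
```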
Multimodal learning can complete the picture of information extraction by
uncovering key dependencies between data sources. However, current systems fail
to fully leverage multiple modalities for optimal performance. This has been
attributed to modality competition, where modalities strive for training
resources, leaving some underoptimized. We show that current balancing methods
struggle to train multimodal models that surpass even simple baselines, such as
ensembles. This raises the question: how can we ensure that all modalities in
multimodal training are sufficiently trained, and that learning from new
modalities consistently improves performance? This paper proposes the
Multimodal Competition Regularizer (MCR), a new loss component inspired by
mutual information (MI) decomposition designed to prevent the adverse effects
of competition in multimodal training. Our key contributions are: 1)
Introducing game-theoretic principles in multimodal learning, where each
modality acts as a player competing to maximize its influence on the final
outcome, enabling automatic balancing of the MI terms. 2) Refining lower and
upper bounds for each MI term to enhance the extraction of task-relevant unique
and shared information across modalities. 3) Suggesting latent space
permutations for conditional MI estimation, significantly improving
computational efficiency. MCR outperforms all previously suggested training
strategies and is the first to consistently improve multimodal learning beyond
the ensemble baseline, clearly demonstrating that combining modalities leads to
significant performance gains on both synthetic and large real-world datasets. | arXiv |
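The latent-space permutation idea mentioned in contribution 3) is the standard trick of shuffling one modality's latents within a batch to obtain product-of-marginals samples. The sketch below shows it for an unconditional Donsker-Varadhan bound; it is not the paper's MCR loss or its conditional-MI estimator.

```python
# Permutation trick for mutual-information estimation between two modalities'
# latents: a batch permutation of one modality yields marginal samples.
import torch


def dv_mi_bound(critic, z_a, z_b):
    """Donsker-Varadhan lower bound on I(z_a; z_b); z_a, z_b are (B, D) latents."""
    joint = critic(z_a, z_b)                 # scores on joint samples
    perm = torch.randperm(z_b.size(0))
    marginal = critic(z_a, z_b[perm])        # scores on product-of-marginals samples
    log_n = torch.log(torch.tensor(float(marginal.numel())))
    return joint.mean() - (torch.logsumexp(marginal, dim=0) - log_n)


# Toy usage with a bilinear critic:
torch.manual_seed(0)
W = torch.randn(8, 8)
critic = lambda a, b: (a @ W * b).sum(dim=-1)
z = torch.randn(64, 8)
print(dv_mi_bound(critic, z, z + 0.1 * torch.randn(64, 8)))
```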
In this paper we study the partition function of the mass deformed ABJM
theory on a squashed three sphere. In particular, we focus on the case with the
Chern-Simons levels being $\pm 1$ and apply a duality between this theory and
the $\mathcal{N}=4$ $\mathrm{U}\left(N\right)$ super Yang-Mills theory with an
adjoint hypermultiplet and a fundamental hypermultiplet. For a special mass
parameter depending on the squashing parameter, we find that the partition
function can be written as that of an ideal Fermi gas with a non-trivial
density matrix. By studying this density matrix, we analytically derive the all
order perturbative expansion of the partition function in $1/N$, which turns
out to take the form of the Airy function. Our results not only align with
previous findings and conjectures but also lead to a new formula for the
overall constant factor of the partition function. We also study the exact
values of the partition function for small but finite values of $N$. | arXiv |
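For orientation, the "Airy function form" referred to above is conventionally written with three theory-dependent constants A, B and C; the schematic expression below is the standard structure, while their values for this mass and squashing deformation are what the paper determines.

```latex
% Standard Airy form of the perturbative 1/N expansion (schematic only).
Z_{\mathrm{pert}}(N) \;=\; C^{-1/3}\, e^{A}\,
\operatorname{Ai}\!\left[ C^{-1/3}\left(N - B\right) \right]
```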
Quantum mechanics started out as a theory to describe the smallest scales of
energy in Nature. After a hundred years of development, it is now routinely
employed to describe, for example, quantum computers with thousands of qubits.
This tremendous progress turns the debate of foundational questions into a
technological imperative. In what follows we introduce a model of a quantum
measurement process that consistently includes the impact of having access only
to finite resources when describing a macroscopic system, like a measurement
apparatus. Leveraging modern tools from equilibration of closed systems and
typicality, we show how the collapse can be seen as an effective description of
a closed dynamics, of which we do not know all its details. Our model is then
exploited to address the ``Wigner Friend Scenario'', and we observe that an
agreement is reached when both Wigner and his friend acknowledge their
finite-resources perspective and describe the measurement process accordingly. | arXiv |