| text | split |
|---|---|
Causal concept effect estimation is attracting increasing interest in the field
of interpretable machine learning. This general approach explains the behaviors
of machine learning models by estimating the causal effect of
human-understandable concepts, which represent high-level knowledge more
comprehensibly than raw inputs like tokens. However, existing causal concept
effect explanation methods assume complete observation of all concepts involved
within the dataset, which can fail in practice due to incomplete annotations or
missing concept data. We theoretically demonstrate that unobserved concepts can
bias the estimation of the causal effects of observed concepts. To address this
limitation, we introduce the Missingness-aware Causal Concept Explainer (MCCE),
a novel framework specifically designed to estimate causal concept effects when
not all concepts are observable. Our framework learns to account for residual
bias resulting from missing concepts and utilizes a linear predictor to model
the relationships between these concepts and the outputs of black-box machine
learning models. It can offer explanations on both local and global levels. We
conduct validations using a real-world dataset, demonstrating that MCCE
achieves promising performance compared to state-of-the-art explanation methods
in causal concept effect estimation. | arXiv |
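
As a rough illustration of the kind of linear concept-based surrogate described in this abstract, the hypothetical sketch below regresses black-box outputs on observed concept annotations plus extra residual features meant to absorb bias from unobserved concepts. The function names, the choice of residual features, and the toy data are all assumptions made for illustration; this is not the actual MCCE algorithm.

```python
import numpy as np

def fit_concept_explainer(concepts, residual_feats, outputs):
    """Fit a linear surrogate: black-box output ~ observed concepts + residual term.

    concepts       : (n_samples, n_concepts) concept annotations
    residual_feats : (n_samples, d) features intended to absorb bias from unobserved concepts
    outputs        : (n_samples,) black-box model outputs
    Returns per-concept effect estimates (global explanation) and the full coefficient vector.
    """
    X = np.hstack([concepts, residual_feats, np.ones((len(outputs), 1))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    concept_effects = coef[: concepts.shape[1]]  # global effect proxies for observed concepts
    return concept_effects, coef

def local_explanation(concept_effects, concept_vector):
    """Local (per-example) contribution of each observed concept."""
    return concept_effects * concept_vector

# toy usage with synthetic data
rng = np.random.default_rng(0)
C = rng.integers(0, 2, size=(200, 5)).astype(float)   # observed concepts
R = rng.normal(size=(200, 3))                          # stand-in residual features
y = C @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + R @ np.array([0.2, 0.0, -0.1]) + rng.normal(0, 0.1, 200)
effects, _ = fit_concept_explainer(C, R, y)
print(np.round(effects, 2))
```

Here the global explanation is the vector of fitted concept coefficients, while a local explanation is the per-example product of coefficients and concept values.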
In this study, we examine two important new physics scenarios, \textit{i.e.},
the theory of Large Extra Dimension (LED) and the theory of neutrino decay. We
study LED in the context of P2SO, DUNE, and T2HK with emphasis on P2SO, whereas
decay has been studied solely in the context of P2SO. For LED, we find that the
combination of P2SO, DUNE, and T2HK can provide a better bound
than the current one only if all the oscillation parameters are measured with
absolute certainty. However, for decay, one can obtain a better bound with P2SO
than with ESSnuSB and MOMENT, but this bound is weaker than those obtainable
with DUNE and T2HK. Regarding sensitivities to the current unknowns, if
LED exists in nature, its impact on mass ordering, octant, and CP violation is
very mild; however, decay can alter the sensitivities related to CP violation
and octant in a non-trivial way. | arXiv |
We employ techniques from group theory to show that, in many cases, counting
problems on graphs are almost as hard to solve in a small number of instances
as they are in all instances. Specifically, we show the following results.
1. Goldreich (2020) asks if, for every constant $\delta < 1 / 2$, there is an
$\tilde{O} \left( n^2 \right)$-time randomized reduction from computing the
number of $k$-cliques modulo $2$ with a success probability of greater than $2
/ 3$ to computing the number of $k$-cliques modulo $2$ with an error
probability of at most $\delta$.
In this work, we show that for almost all choices of the $\delta 2^{n \choose
2}$ corrupt answers within the average-case solver, we have a reduction taking
$\tilde{O} \left( n^2 \right)$ time and tolerating an error probability of
$\delta$ in the average-case solver for any constant $\delta < 1 / 2$. By
"almost all", we mean that if we choose, with equal probability, any subset $S
\subset \{0,1\}^{n \choose 2}$ with $|S| = \delta2^{n \choose 2}$, then with a
probability of $1-2^{-\Omega \left( n^2 \right)}$, we can use an average-case
solver corrupt on $S$ to obtain a probabilistic algorithm.
2. Inspired by the work of Goldreich and Rothblum (FOCS 2018) on weighted
versions of graph counting problems, we prove that if the Randomized
Exponential Time Hypothesis (RETH) is
true, then for a prime $p = \Theta \left( 2^n \right)$, the problem of counting
the number of unique Hamiltonian cycles modulo $p$ on $n$-vertex directed
multigraphs and the problem of counting the number of unique half-cliques
modulo $p$ on $n$-vertex undirected multigraphs, both require exponential time
to compute correctly on even a $1 / 2^{n/\log n}$-fraction of instances.
Meanwhile, simply printing $0$ on all inputs is correct on at least an $\Omega
\left( 1 / 2^n \right)$-fraction of instances. | arXiv |
By augmenting Large Language Models (LLMs) with external tools, their
capacity to solve complex problems has been significantly enhanced. However,
despite ongoing advancements in the parsing capabilities of LLMs, incorporating
all available tools simultaneously in the prompt remains impractical due to the
vast number of external tools. Consequently, it is essential to provide LLMs
with a precise set of tools tailored to the specific task, considering both
quantity and quality. Current tool retrieval methods primarily focus on
refining the ranking list of tools and directly packaging a fixed number of
top-ranked tools as the tool set. However, these approaches often fail to equip
LLMs with the optimal set of tools prior to execution, since the optimal number
of tools varies across tasks; this mismatch leads to inefficiencies such as
redundant or unsuitable tools, which impede access to the most relevant ones.
This paper addresses the challenge of recommending precise
toolsets for LLMs. We introduce the problem of tool recommendation, define its
scope, and propose a novel Precision-driven Tool Recommendation (PTR) approach.
PTR captures an initial, concise set of tools by leveraging historical tool
bundle usage and dynamically adjusts the tool set by performing tool matching,
culminating in a multi-view-based tool addition. Additionally, we present a new
dataset, RecTools, and a metric, TRACC, designed to evaluate the effectiveness
of tool recommendation for LLMs. We further validate our design choices through
comprehensive experiments, demonstrating promising accuracy across two open
benchmarks and our RecTools dataset. | arXiv |
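
To make the idea of a variable-sized, precision-driven tool set concrete, here is a minimal hypothetical sketch: tools are ranked by embedding similarity to the task and kept only if they pass a similarity threshold, so the size of the recommended set adapts to the task instead of being a fixed top-k. The embeddings, threshold, and function names are illustrative assumptions, not the PTR method itself.

```python
import numpy as np

def recommend_tools(query_vec, tool_vecs, tool_names, threshold=0.35, max_tools=8):
    """Return a variable-sized tool set: tools whose cosine similarity to the
    query exceeds a threshold, capped at max_tools, so the set size adapts
    per task rather than always packaging a fixed top-k."""
    q = query_vec / np.linalg.norm(query_vec)
    T = tool_vecs / np.linalg.norm(tool_vecs, axis=1, keepdims=True)
    sims = T @ q
    order = np.argsort(-sims)
    return [tool_names[i] for i in order[:max_tools] if sims[i] >= threshold]

# toy usage with random embeddings
rng = np.random.default_rng(1)
tools = [f"tool_{i}" for i in range(20)]
tool_vecs = rng.normal(size=(20, 64))
query = tool_vecs[3] + 0.1 * rng.normal(size=64)   # query close to tool_3
print(recommend_tools(query, tool_vecs, tools))
```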
In recent years, attention mechanisms have significantly enhanced the
performance of object detection by focusing on key feature information.
However, prevalent methods still encounter difficulties in effectively
balancing local and global features. This imbalance hampers their ability to
capture both fine-grained details and broader contextual information, two
critical elements for achieving accurate object detection. To address these
challenges, we propose a novel attention mechanism, termed Local-Global
Attention, which is designed to better integrate both local and global
contextual features. Specifically, our approach combines multi-scale
convolutions with positional encoding, enabling the model to focus on local
details while concurrently considering the broader global context.
Additionally, we introduce learnable parameters, which allow the model to
dynamically adjust the relative importance of local and global attention,
depending on the specific requirements of the task, thereby optimizing feature
representations across multiple scales. We have thoroughly evaluated the
Local-Global Attention mechanism on several widely used object detection and
classification datasets. Our experimental results demonstrate that this
approach significantly enhances the detection of objects at various scales,
with particularly strong performance on multi-class and small object detection
tasks. In comparison to existing attention mechanisms, Local-Global Attention
consistently outperforms them across several key metrics, all while maintaining
computational efficiency. | arXiv |
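
A minimal sketch of how a local-global attention block of the kind described above could be wired up (assuming PyTorch): a multi-scale depthwise-convolution branch provides local spatial attention, a pooled channel branch provides global context, and a learnable scalar mixes the two. The module omits the positional encoding mentioned in the abstract and is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalGlobalAttention(nn.Module):
    """Illustrative sketch: local branch uses multi-scale depthwise convolutions,
    global branch uses pooled channel attention; a learnable scalar mixes them."""

    def __init__(self, channels):
        super().__init__()
        # local branch: two depthwise convolutions with different receptive fields
        self.local3 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.local5 = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # global branch: squeeze-and-excite style channel attention
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
                                nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        # learnable mixing weight between local and global attention
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        local = torch.sigmoid(self.local3(x) + self.local5(x))   # spatial (local) attention map
        global_ = self.fc(self.pool(x))                          # channel (global) attention map
        a = torch.sigmoid(self.alpha)                            # keep mixing weight in (0, 1)
        return x * (a * local + (1 - a) * global_)

# toy usage
feat = torch.randn(2, 32, 16, 16)
print(LocalGlobalAttention(32)(feat).shape)   # torch.Size([2, 32, 16, 16])
```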
We say that a function is rare-case hard against a given class of algorithms
(the adversary) if all algorithms in the class can compute the function only on
an $o(1)$-fraction of instances of size $n$ for large enough $n$. Starting from
any NP-complete language, for each $k > 0$, we construct a function that cannot
be computed correctly on even a $1/n^k$-fraction of instances for
polynomial-sized circuit families if NP $\not \subset$ P/POLY and by
polynomial-time algorithms if NP $\not \subset$ BPP; that is, functions that
are rare-case hard against polynomial-time algorithms and polynomial-sized
circuits. The constructed function is a number-theoretic polynomial evaluated
over specific finite fields. For NP-complete languages that admit parsimonious
reductions from all of NP (for example, SAT), the constructed functions are
hard to compute on even a $1/n^k$-fraction of instances by polynomial-time
algorithms and polynomial-sized circuit families simply if $P^{\#P} \not
\subset$ BPP and $P^{\#P} \not \subset$ P/POLY, respectively. We also show that
if the Randomized Exponential Time Hypothesis (RETH) is true, none of these
constructed functions can be computed on even a $1/n^k$-fraction of instances
in subexponential time. These functions are very hard, almost always.
While one may not be able to efficiently compute the values of these
constructed functions, one can verify in polynomial time that an evaluation
$s = f(x)$ is correct, simply by asking a prover to compute $f(y)$ on targeted
queries. | arXiv |
Propensity Score Matching (PSM) stands as a widely embraced method in
comparative effectiveness research. PSM crafts matched datasets, mimicking some
attributes of randomized designs, from observational data. In a valid PSM
design where all baseline confounders are measured and matched, the confounders
would be balanced, allowing the treatment status to be considered as if it were
randomly assigned. Nevertheless, recent research has unveiled a different facet
of PSM, termed "the PSM paradox." As PSM approaches exact matching by
progressively pruning matched sets in order of decreasing propensity score
distance, it can paradoxically lead to greater covariate imbalance, heightened
model dependence, and increased bias, contrary to its intended purpose.
Methods: We used analytic formulas, simulations, and the literature to demonstrate
that this paradox stems from the misuse of metrics for assessing chance
imbalance and bias. Results: Firstly, matched pairs typically exhibit different
covariate values despite having identical propensity scores. However, this
disparity represents a "chance" difference and will average to zero over a
large number of matched pairs. Common distance metrics cannot capture this
``chance" nature in covariate imbalance, instead reflecting increasing
variability in chance imbalance as units are pruned and the sample size
diminishes. Secondly, the largest estimate among numerous fitted models,
because of uncertainty among researchers over the correct model, was used to
determine statistical bias. This cherry-picking procedure ignores the most
significant benefit of a matching design: reducing model dependence through
robustness against model misspecification bias. Conclusions: We conclude that
the PSM paradox is not a legitimate concern and should not stop researchers
from using PSM designs. | arXiv |
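
For readers unfamiliar with the mechanics under discussion, a generic propensity score matching step looks roughly like the sketch below: propensity scores from a logistic regression, followed by 1:1 greedy nearest-neighbour matching with an optional caliper. It is a textbook-style illustration under those assumptions, not the analysis performed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_score_match(X, treated, caliper=None):
    """1:1 greedy nearest-neighbour matching on the estimated propensity score.
    X: (n, p) baseline covariates; treated: (n,) boolean treatment indicator.
    Returns a list of (treated_index, control_index) pairs."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.flatnonzero(treated)
    c_idx = list(np.flatnonzero(~treated))
    pairs = []
    for i in t_idx:
        if not c_idx:
            break
        dists = np.abs(ps[c_idx] - ps[i])
        j = int(np.argmin(dists))
        if caliper is None or dists[j] <= caliper:
            pairs.append((int(i), int(c_idx.pop(j))))
    return pairs

# toy usage: treatment assignment confounded by the first covariate
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))
print(len(propensity_score_match(X, treated, caliper=0.05)), "matched pairs")
```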
Let $\mathcal{H}$ be a hyperplane arrangement in $\mathbb{CP}^n$. We define a
quadratic form $Q$ on $\mathbb{R}^{\mathcal{H}}$ that is entirely determined by
the intersection poset of $\mathcal{H}$. Using the Bogomolov-Gieseker
inequality for parabolic bundles, we show that if $\mathbf{a} \in
\mathbb{R}^{\mathcal{H}}$ is such that the weighted arrangement $(\mathcal{H},
\mathbf{a})$ is \emph{stable}, then $Q(\mathbf{a}) \leq 0$.
As an application, we consider the symmetric case where all the weights are
equal. The inequality $Q(a, \ldots, a) \leq 0$ gives a lower bound for the
total sum of multiplicities of codimension $2$ intersection subspaces of
$\mathcal{H}$. The lower bound is attained when every $H \in \mathcal{H}$
intersects all the other members of $\mathcal{H} \setminus \{H\}$ along
$(1-2/(n+1))|\mathcal{H}| + 1$ codimension $2$ subspaces, extending from $n=2$
to higher dimensions a condition found by Hirzebruch for line arrangements in
the complex projective plane. | arXiv |
The Event Horizon Telescope Collaboration (EHTC) observed the Galactic centre
source Sgr A* and used emission models primarily based on single ion
temperature (1T) general relativistic magnetohydrodynamic (GRMHD) simulations.
This predicted emission is strongly dependent on a modelled prescription of the
ion-to-electron temperature ratio. The two most promising models are
magnetically arrested disk (MAD) states. However, these and nearly all MAD
models exhibit greater light-curve variability at 230 GHz compared to
historical observations. Moreover, no model successfully passes all the
variability and multiwavelength constraints. This limitation possibly stems
from the fact that the actual temperature ratio depends on microphysical
dissipation, radiative processes and other effects not captured in ideal fluid
simulations. Therefore, we investigate the effects of two-temperature (2T)
thermodynamics in MAD GRMHD simulations of Sgr A*, where the temperatures of
both species are evolved more self-consistently. We include Coulomb coupling,
radiative cooling of electrons, and model heating via magnetic reconnection. We
find that the light-curve variability more closely matches historical
observations when we include the 2T treatment and variable adiabatic indices,
compared to 1T simulations. Contrary to the common assumption of neglecting
radiative cooling for the low accretion rates of Sgr A*, we also find that
radiative cooling still affects the accretion flow, reducing the electron
temperature in the inner disk by about 10%, which in turn lowers both the
average flux and variability at 230 GHz by roughly 10%. | arXiv |
Existing claim verification datasets often do not require systems to perform
complex reasoning or effectively interpret multimodal evidence. To address
this, we introduce a new task: multi-hop multimodal claim verification. This
task challenges models to reason over multiple pieces of evidence from diverse
sources, including text, images, and tables, and determine whether the combined
multimodal evidence supports or refutes a given claim. To study this task, we
construct MMCV, a large-scale dataset comprising 16k multi-hop claims paired
with multimodal evidence, generated and refined using large language models,
with additional input from human feedback. We show that MMCV is challenging
even for the latest state-of-the-art multimodal large language models,
especially as the number of reasoning hops increases. Additionally, we
establish a human performance benchmark on a subset of MMCV. We hope this
dataset and its evaluation task will encourage future research in multimodal
multi-hop claim verification. | arXiv |
Since general-purpose computing follows the von Neumann architecture, data
movement between memory and processing elements dictates the processor's
performance. The evolving compute-in-memory (CiM) paradigm tackles this issue
by facilitating simultaneous processing and storage within static random-access
memory (SRAM) elements. Numerous design decisions taken at different levels of
hierarchy affect the figures of merit (FoMs) of SRAM, such as power,
performance, area, and yield. The absence of a rapid assessment mechanism for
the impact of changes at different hierarchy levels on global FoMs poses a
challenge to accurately evaluating innovative SRAM designs. This paper presents
an automation tool designed to optimize the energy and latency of SRAM designs
incorporating diverse implementation strategies for executing logic operations
within the SRAM. The tool structure allows easy comparison across different
array topologies and various design strategies to result in energy-efficient
implementations. Our study involves a comprehensive comparison of more than 6900
distinct design implementation strategies for EPFL combinational benchmark
circuits on the energy-recycling resonant compute-in-memory (rCiM) architecture
designed using TSMC 28 nm technology. When provided with a combinational
circuit, the tool aims to generate an energy-efficient implementation strategy
tailored to the specified input memory and latency constraints. The tool
reduces energy consumption by 80.9% on average across all benchmarks when using
the six-topology implementation compared to the baseline single-macro topology
implementation, by exploiting the parallel processing capability of rCiM for
cache sizes ranging from 4 KB to 192 KB. | arXiv |
Theoretical methods based on the density matrix are powerful tools to
describe open quantum systems. However, such methods are complicated and
intricate to use analytically. Here we present an object-oriented framework
for constructing the equation of motion of the correlation matrix at a given
order in the quantum chain of BBGKY hierarchy used to describe the interaction
of many-particle systems. The algorithm of machine derivation of equations
includes the implementation of the principles of quantum mechanics and operator
algebra. It is based on the description and use of classes in the Python
programming environment. Class objects correspond to the elements of the
equations that are derived: density matrix, correlation matrix, energy
operators, commutators, and several operator indexing systems. The program
contains a special class that allows one to define a statistical ensemble with
an infinite number of subsystems. For all classes, methods implementing the
actions of the operator algebra are specified. The number of subsystems of the
statistical ensemble for the physical problem and the types of subsystems
between which pairwise interactions are possible are specified as an input
data. It is shown that this framework allows one to derive the equations of
motion of the fourth-order correlation matrix in less than a minute. | arXiv |
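
The abstract describes Python classes whose objects stand for the symbols appearing in the BBGKY equations. A highly simplified, hypothetical sketch of that style of design is shown below: operator symbols carry a subsystem index, and a commutator function returns symbolic terms, vanishing when the operators act on different subsystems. The class and attribute names are assumptions and do not reflect the authors' actual framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """A symbolic operator labelled by a name and the subsystem index it acts on."""
    name: str
    subsystem: int

@dataclass(frozen=True)
class Term:
    coeff: complex
    factors: tuple        # ordered tuple of Op, i.e. an operator product

def commutator(a: Op, b: Op):
    """Return [a, b] = ab - ba as a list of Terms; operators acting on different
    subsystems commute, so the commutator vanishes in that case."""
    if a.subsystem != b.subsystem:
        return []
    return [Term(1, (a, b)), Term(-1, (b, a))]

# toy usage: a Hamiltonian-like symbol commuted with a density-matrix symbol
H = Op("H", subsystem=0)
rho = Op("rho", subsystem=0)
for t in commutator(H, rho):
    print(t.coeff, "*", " ".join(op.name for op in t.factors))
```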
According to asymptotically safe gravity, black holes can have
characteristics different from those described according to general relativity.
Particularly, they are more compact, with a smaller event horizon, which in
turn affects the other quantities dependent on it, like the photon ring and the
size of the innermost stable circular orbit. We decided to test the latter by
searching in the literature for observational measurements of the emission from
accretion disks around stellar-mass black holes. All published values of the
radius of the inner accretion disk were made homogeneous by taking into account
the most recent and most reliable values of mass, spin, viewing angle, and
distance from the Earth. We do not find any significant deviation from the
expectations of general relativity. Some doubtful cases can be easily
understood as due to specific states of the object during the observation or
instrumental biases. | arXiv |
The role of internal and environmental factors in the star formation activity
of galaxies is still a matter of debate, particularly at higher redshifts.
Leveraging the most recent release of the COSMOS catalog, COSMOS2020, and
density measurements from our previous study, we disentangle the impact of
environment and stellar mass on the star formation rate (SFR), and specific SFR
(sSFR) of a sample of $\sim 210,000$ galaxies within redshift range $0.4< z <
4$ and present our findings in three cosmic epochs: 1) out to $z\sim 1$, the
average SFR and sSFR decline in extremely dense environments and at the
high-mass end of the distribution, which is mostly due to the presence of the massive
quiescent population; 2) at $1<z<2$, the environmental dependence diminishes,
while mass is still the dominant factor in star formation activity; 3) beyond
$z\sim 2$, our sample is dominated by star-forming galaxies and we observe a
reversal of the trends seen in the local universe: the average SFR increases
with increasing environmental density. Our analysis shows that both
environmental and mass quenching efficiencies increase with stellar mass at all
redshifts, with mass being the dominant quenching factor in massive galaxies
compared to environmental effects. At $2<z<4$, negative values of environmental
quenching efficiency suggest that the fraction of star-forming galaxies in
dense environments exceeds that in less dense regions, likely due to the
greater availability of cold gas, higher merger rates, and tidal effects that
trigger star formation activity. | arXiv |
Technical carbon dioxide removal through bioenergy with carbon capture or
direct air capture plays a role in virtually all climate mitigation scenarios.
Both of these technologies rely on the use of chemical solvents or sorbents in
order to capture CO$_2$. Lately, concerns have surfaced about the cost and
energy implications of producing solvents and sorbents at scale. Here, we show
that the production of chemical sorbents could have significant implications for
system cost, energy use and material use depending on how much they are
consumed. Among the three chemical sorbents investigated, namely
monoethanolamine (MEA) for post-combustion carbon capture, potassium hydroxide
for liquid direct air capture and polyethylenimine-silica (PEI) for solid
sorbent direct air capture, we found that the production of the compound for
solid sorbent direct air capture represents the highest uncertainties for the
system. At the high range of solid sorbent consumption, total energy system
cost increased by up to 6.5\%, while effects for other options were small to
negligible. Scale-up of material production capacities was also substantial for
MEA and PEI. Implications of sorbent consumption for carbon capture
technologies should be considered more thoroughly in scenarios relying on
direct air capture using a solid sorbent. | arXiv |
Magnetic imaging with ultra-high spatial resolution is crucial to exploring
the magnetic textures of emerging quantum materials. We propose a novel
magnetic imaging protocol that achieves Angstrom-scale resolution by combining
spin defects in van der Waals materials and terahertz scattering scanning
near-field optical microscopy (THz s-SNOM). Spin defects in the atomic
monolayer enable the probe-to-sample distance to enter the Angstrom range
where the exchange interactions between the probe and sample spins become
predominant. This exchange interaction leads to energy splitting of the probe
spin on the order of millielectronvolts, corresponding to THz frequencies. With
THz optics and the spin-dependent fluorescence of the probe spin, the
interaction energy can be resolved entirely through optical methods. Our
proposed all-optical magnetic imaging protocol holds significant promise for
investigating magnetic textures in condensed matter physics due to its
excellent compatibility and high spatial resolution. | arXiv |
In this paper, we present a framework for learning the solution map of a
backward parabolic Cauchy problem. The solution depends continuously but
nonlinearly on the final data, source, and force terms, all residing in Banach
spaces of functions. We utilize Fr\'echet space neural networks (Benth et al.
(2023)) to address this operator learning problem. Our approach provides an
alternative to Deep Operator Networks (DeepONets), using basis functions to
span the relevant function spaces rather than relying on finite-dimensional
approximations through censoring. With this method, structural information
encoded in the basis coefficients is leveraged in the learning process. This
results in a neural network designed to learn the mapping between
infinite-dimensional function spaces. Our numerical proof-of-concept
demonstrates the effectiveness of our method, highlighting some advantages over
DeepONets. | arXiv |
This paper compares the HLLEM and HLL-CPS schemes for Euler equations and
proposes improvements for all Mach number flows. Enhancements to the HLLEM
scheme involve adding anti-diffusion terms in the face normal direction and
modifying anti-diffusion coefficients for linearly degenerate waves near
shocks. The HLL-CPS scheme is improved by adjusting anti-diffusion coefficients
for the face normal direction and linearly degenerate waves. Matrix stability,
linear perturbation, and asymptotic analyses demonstrate the robustness of the
proposed schemes and their ability to capture low Mach flow features. Numerical
tests confirm that the schemes are free from shock instabilities at high speeds
and accurately resolve low Mach number flow features. | arXiv |
Turbulent kinetic energy (TKE) is a measure of unsteady loads and is important
for the design of, e.g., propellers or energy-saving devices. While
simulations are often done for a double-body, using a symmetry condition,
experiments and the final product have a free surface. Simulations with and
without free surface are carried out for the Japan Bulk Carrier, comparing TKE
in the vortex cores. The reliability of finding the vortex centers is
discussed. As the fine meshes show an unexpected trend for the TKE, a detailed
investigation is done, mainly to exclude method-related drawbacks from using a
hybrid URANS/LES model. It is found that a shift in vortex-core positions
distorts the results, and the referenced experimental center positions are
themselves questionable. Using a fixed position for all cases improves
comparability and gives a different picture. Thereupon the medium meshes were
enhanced in such a way that one of the refinement boxes was extended further
forward, now showing much better agreement with the fine meshes. TKE is then
portrayed as an integral quantity and shows no significant difference between the
simulations with and without free surface. However, the structure itself is
influenced by the surface in a way which alters local characteristics. | arXiv |
Understanding the text in legal documents can be challenging due to their
complex structure and the inclusion of domain-specific jargon. Laws and
regulations are often crafted in such a manner that engagement with them
requires formal training, potentially leading to vastly different
interpretations of the same texts. Linguistic complexity is an important
contributor to the difficulties experienced by readers. Simplifying texts could
enhance comprehension across a broader audience, not just among trained
professionals. Various metrics have been developed to measure document
readability. Therefore, we adopted a systematic review approach to examine the
linguistic and readability metrics currently employed for legal and regulatory
texts. A total of 3566 initial papers were screened, with 34 relevant studies
found and further assessed. Our primary objective was to identify which current
metrics were applied for evaluating readability within the legal field. Sixteen
different metrics were identified, with the Flesch-Kincaid Grade Level being
the most frequently used method. The majority of studies (73.5%) were found in
the domain of "informed consent forms". From the analysis, it is clear that not
all legal domains are well represented in terms of readability metrics and that
further consensus is needed on which metrics should be applied to legal
documents. | arXiv |
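
For reference, the Flesch-Kincaid Grade Level mentioned above combines average sentence length and average syllables per word as FKGL = 0.39 (words/sentences) + 11.8 (syllables/words) - 15.59. A minimal sketch is given below; the regex-based sentence splitting and vowel-group syllable counting are crude illustrative heuristics, not the tokenization used in any particular study.

```python
import re

def count_syllables(word):
    """Crude syllable count: number of vowel groups (a common heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

print(round(flesch_kincaid_grade(
    "The party of the first part shall indemnify the party of the second part."), 1))
```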
Large language models (LLMs) excel in high-resource languages but face
notable challenges in low-resource languages like Mongolian. This paper
addresses these challenges by categorizing capabilities into language abilities
(syntax and semantics) and cognitive abilities (knowledge and reasoning). To
systematically evaluate these areas, we developed MM-Eval, a specialized
dataset based on Modern Mongolian Language Textbook I and enriched with WebQSP
and MGSM datasets.
Preliminary experiments on models including Qwen2-7B-Instruct, GLM4-9b-chat,
Llama3.1-8B-Instruct, GPT-4, and DeepseekV2.5 revealed that: 1) all models
performed better on syntactic tasks than semantic tasks, highlighting a gap in
deeper language understanding; and 2) knowledge tasks showed a moderate
decline, suggesting that models can transfer general knowledge from
high-resource to low-resource contexts.
The release of MM-Eval, comprising 569 syntax, 677 semantics, 344 knowledge,
and 250 reasoning tasks, offers valuable insights for advancing NLP and LLMs in
low-resource languages like Mongolian. The dataset is available at
https://github.com/joenahm/MM-Eval. | arXiv |
Two families $\mathcal{F}$ and $\mathcal{G}$ are called cross-intersecting if
for every $F\in \mathcal{F}$ and $G\in \mathcal{G}$, the intersection $F\cap G$
is non-empty. For any positive integers $n$ and $k$, let $\binom{[n]}{k}$
denote the family of all $k$-element subsets of $\{1,2,\ldots,n\}$. Let $t, s,
k, n$ be non-negative integers with $k \geq s+1$ and $n \geq 2 k+t$. In 2016,
Frankl proved that if $\mathcal{F} \subseteq\binom{[n]}{k+t}$ and $\mathcal{G}
\subseteq\binom{[n]}{k}$ are cross-intersecting families, and $\mathcal{F}$ is
$(t+1)$-intersecting and $|\mathcal{F}| \geq 1$, then
$|\mathcal{F}|+|\mathcal{G}| \leq\binom{n}{k}-\binom{n-k-t}{k}+1$. Furthermore,
Frankl conjectured that under an additional condition $\binom{[k+t+s]}
{k+t}\subseteq\mathcal{F}$, the following inequality holds: $$
|\mathcal{F}|+|\mathcal{G}|
\leq\binom{k+t+s}{k+t}+\binom{n}{k}-\sum_{i=0}^s\binom{k+t+s}{i}\binom{n-k-t-s}{k-i}.
$$ In this paper, we prove this conjecture. The key ingredient is to establish
a theorem for cross-intersecting families with a restricted universe. Moreover,
we derive an analogous result for this conjecture. | arXiv |
In this work, we propose an all-optical stroboscopic scheme to simulate an
open quantum system. By incorporating the tritter, consisting of a group of
beam splitters, we find the emergence of spontaneous anti-phase synchronization
in the steady state. To better understand the synchronization and entanglement
properties within the system, we utilize the relative error measure and find
the distribution of logarithmic negativity in parameter space shows structures
similar to those of the synchronization measure. Finally, we derive the
adjoint master equation corresponding to the system when the synchronization
condition is satisfied and explain the existence of oscillations. In addition,
we explore the effect of non-Markovianity on synchronization, and we find that
it only slows down the time for the system to reach the steady state but does
not change the synchronization properties of the steady state. Our work
provides a promising scheme for experimental studies focused on synchronization
and other nonequilibrium steady states. | arXiv |
Program decomposition is essential for developing maintainable and efficient
software, yet it remains a challenging skill to teach and learn in introductory
programming courses. What does program decomposition for procedural CS1
programs entail? How can CS1 students improve the decomposition of their
programs? What scaffolded exercises can instructors use to teach program
decomposition skills? We aim to answer all these questions by presenting a
conceptual framework that (1) is grounded in the established code style
principles, (2) provides a systematic approach that can be taught to students
as an actionable strategy to improve the program decomposition of their
programs, and (3) includes scaffolded exercises to be used in classroom
activities. In addition, this systematic approach is automatable and can
further be used to implement visualizers, automated feedback generators and
digital tutors. | arXiv |
We study the nonleptonic decays $\bar{B}_s^0 \to D_s^{(*)+} \pi^-$ and
$\bar{B}^0 \to D^{(*)+} K^-$ within the Weak Effective Theory (WET) up to
mass-dimension six. We revisit the calculation of the hadronic matrix elements
within QCD Factorization including the full set of WET operators. We
recalculate the two-particle contributions to the hard-scattering kernels at
next-to-leading order in $\alpha_s$, confirming recent results in the
literature. We also calculate the three-particle contributions at leading order
in $\alpha_s$, clarifying the procedure, refining the SM results in the
literature, and providing for the first time the complete set of contributions
within the WET. We use these results to perform a global phenomenological study
of the effective couplings, putting bounds on the size of the WET Wilson
coefficients in four distinct fit models. The fits include constraints from the
nonleptonic $B$-meson decay width, which we calculate at the leading order for
the full set of WET operators for the first time. This study is the first one
to account for simultaneous variation of up to six effective couplings. We
identify two distinct modes in all fit models and discuss how future
measurements can be used to distinguish between them. | arXiv |
We have analytically explored both the zero temperature and the finite
temperature scaling theory for the collapse of an attractively interacting 3-D
harmonically trapped Bose gas in a synthetic magnetic field. We have considered
short ranged (contact) attractive inter-particle interactions and Hartree-Fock
approximation for the same. We have separately studied the collapse of both the
condensate and the thermal cloud below and above the condensation point,
respectively. We have obtained an anisotropy, artificial magnetic field, and
temperature dependent critical number of particles for the collapse of the
condensate. We have found a dramatic change in the critical exponent (from
$\alpha=1$ to $0$) of the specific heat ($C_v\propto|T-T_c|^{\alpha}$) when the
thermal cloud is about to collapse with the critical number of particles
($N=N_c$) just below and above the condensation point. All our results are
experimentally testable with present-day experimental setups for ultracold
systems in magneto-optical traps. | arXiv |
Recent research has shown that state-of-the-art (SotA) Automatic Speech
Recognition (ASR) systems, such as Whisper, often exhibit predictive biases
that disproportionately affect various demographic groups. This study focuses
on identifying the performance disparities of Whisper models on Dutch speech
data from the Common Voice dataset and the Dutch National Public Broadcasting
organisation. We analyzed the word error rate, character error rate and a
BERT-based semantic similarity across gender groups. We used the moral
framework of Weerts et al. (2022) to assess quality-of-service harms and
fairness, and to provide a nuanced discussion on the implications of these
biases, particularly for automatic subtitling. Our findings reveal substantial
disparities in word error rate (WER) among gender groups across all model
sizes, with bias identified through statistical testing. | arXiv |
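
As background for the error rates analysed above, word error rate is the word-level Levenshtein distance between reference and hypothesis transcripts normalized by the reference length. A minimal sketch (with a made-up Dutch example) is shown below.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / max(1, len(ref))

print(word_error_rate("de kat zit op de mat", "de kat zat op mat"))  # 2 errors / 6 words
```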
Let $\mathcal{X} = \{ X_{\gamma} \}_{\gamma \in \Gamma}$ be a family of
Banach spaces and let $\mathcal{E}$ be a Banach sequence space defined on
$\Gamma$. The main aim of this work is to investigate the abstract Kadets--Klee
properties, that is, the Kadets--Klee type properties in which the weak
convergence of sequences is replaced by the convergence with respect to some
linear Hausdorff topology, for the direct sum construction $(\bigoplus_{\gamma
\in \Gamma} X_{\gamma})_{\mathcal{E}}$. As we will show, and this seems to be
quite atypical behavior when compared to some other geometric properties, to
lift the Kadets--Klee properties from the components to the whole direct sum, it is
not enough to assume that all involved spaces have the appropriate Kadets--Klee
property. Actually, to complete the picture one must add a dichotomy in the
form of the Schur type properties for $X_{\gamma}$'s supplemented by the
variant of strict monotonicity for $\mathcal{E}$. Back down to earth, this
general machinery naturally provides a blueprint for other topologies like,
for example, the weak topology or the topology of local convergence in measure,
that are perhaps more commonly associated with this type of considerations.
Furthermore, by limiting ourselves to direct sums in which the family
$\mathcal{X}$ is constant, that is, $X_{\gamma} = X$ for all $\gamma \in
\Gamma$ and some Banach space $X$, we return to the well-explored ground of
K{\"o}the--Bochner sequence spaces $\mathcal{E}(X)$. Doing all this, we will
reproduce, but sometimes also improve, essentially all existing results about
the classical Kadets--Klee properties in K{\"o}the--Bochner sequence spaces. | arXiv |
Integrated sensing and communication (ISAC) is envisioned as a key technology
for future sixth-generation (6G) networks. Classical ISAC systems considering
monostatic and/or bistatic settings will inevitably degrade both communication
and sensing performance due to the limited service coverage and easily blocked
transmission paths. Besides, existing ISAC studies usually focus on downlink
(DL) or uplink (UL) communication demands and are unable to serve systematic
DL and UL communication tasks. These challenges can be overcome by a networked
full-duplex (FD) ISAC framework. Moreover, ISAC generally considers the trade-off between
communication and sensing, unavoidably leading to a loss in communication
performance. This shortcoming can be solved by the emerging movable antenna
(MA) technology. In this paper, we utilize the MA to promote communication
capability with guaranteed sensing performance via jointly designing
beamforming, power allocation, receiving filters and MA configuration towards
maximizing sum rate. The optimization problem is highly difficult due to the
unique channel model deriving from the MA. To resolve this challenge, via
leveraging the cutting-edge majorization-minimization (MM) method, we
develop an efficient solution that optimizes all variables via convex
optimization techniques. Extensive simulation results verify the effectiveness
of our proposed algorithms and demonstrate the substantial performance
improvement achieved by deploying MAs in the networked FD ISAC system. | arXiv |
We study in depth those rings $R$ for which there exists a fixed $n\geq 1$
such that $u^n-1$ lies in the subring $\Delta(R)$ of $R$ for every unit $u\in
R$. We succeed in describing, for any $n\geq 1$, all reduced $\pi$-regular
$(2n-1)$-$\Delta$U rings by showing that they satisfy the equation $x^{2n}=x$,
and we prove that the properties of being exchange and clean are equivalent in
the class of $(2n-1)$-$\Delta$U rings. These achievements
considerably extend results established by Danchev (Rend. Sem. Mat. Univ. Pol.
Torino, 2019) and Ko\c{s}an et al. (Hacettepe J. Math. \& Stat., 2020). Some
other closely related results of this branch are also established. | arXiv |
Generative artificial intelligence is now a widely used tool in molecular
science. Despite the popularity of probabilistic generative models, numerical
experiments benchmarking their performance on molecular data are lacking. In
this work, we introduce and explain several classes of generative models,
broadly sorted into two categories: flow-based models and diffusion models. We
select three representative models: Neural Spline Flows, Conditional Flow
Matching, and Denoising Diffusion Probabilistic Models, and examine their
accuracy, computational cost, and generation speed across datasets with tunable
dimensionality, complexity, and modal asymmetry. Our findings are varied, with
no one framework being the best for all purposes. In a nutshell, (i) Neural
Spline Flows do best at capturing mode asymmetry present in low-dimensional
data, (ii) Conditional Flow Matching outperforms other models for
high-dimensional data with low complexity, and (iii) Denoising Diffusion
Probabilistic Models appear to be the best for low-dimensional data with high
complexity. Our datasets include a Gaussian mixture model and the dihedral
torsion angle distribution of the Aib\textsubscript{9} peptide, generated via a
molecular dynamics simulation. We hope our taxonomy of probabilistic generative
frameworks and numerical results may guide model selection for a wide range of
molecular tasks. | arXiv |
In this article we study a reaction-diffusion system with $m$ unknown
concentrations. The non-linearity in our study comes from an underlying
reversible chemical reaction and is triangular in nature. Our objective is to
understand the large-time behaviour of solutions in the presence of
degeneracies. In particular, we treat the cases in which one of the diffusion
coefficients is zero and the others are strictly positive. We prove
convergence-to-equilibrium type results under some conditions on the
stoichiometric coefficients in dimensions $1$, $2$ and $3$, in correspondence
with the existence of classical solutions. For dimensions greater than $3$ we
prove a similar result under a certain closeness condition on the non-zero
diffusion coefficients, with the same condition imposed on the stoichiometric
coefficients. All the constants occurring in the decay estimates are explicit. | arXiv |
Domain generalisation in computational histopathology is challenging because
the images are substantially affected by differences among hospitals due to
factors like fixation and staining of tissue and imaging equipment. We
hypothesise that focusing on nuclei can improve the out-of-domain (OOD)
generalisation in cancer detection. We propose a simple approach to improve OOD
generalisation for cancer detection by focusing on nuclear morphology and
organisation, as these are domain-invariant features critical in cancer
detection. Our approach integrates original images with nuclear segmentation
masks during training, encouraging the model to prioritise nuclei and their
spatial arrangement. Going beyond mere data augmentation, we introduce a
regularisation technique that aligns the representations of masks and original
images. We show, using multiple datasets, that our method improves OOD
generalisation and also leads to increased robustness to image corruptions and
adversarial attacks. The source code is available at
https://github.com/undercutspiky/SFL/ | arXiv |
We study operator algebraic and function theoretic aspects of algebras of
bounded nc functions on subvarieties of the nc domain determined by all levels
of the unit ball of an operator space (nc operator balls). Our main result is
the following classification theorem: under very mild assumptions on the
varieties, two such algebras $H^\infty(\mathfrak{V})$ and
$H^\infty(\mathfrak{W})$ are completely isometrically and weak-* isomorphic if
and only if there is a nc biholomorphism between the varieties. For matrix
spanning homogeneous varieties in injective operator balls, we further sharpen
this equivalence, showing that there exists a linear isomorphism between the
respective balls that maps one variety onto the other; in general, we show, the
homogeneity condition cannot be dropped. We highlight some difficulties and
open problems, contrasting with the well-studied case of the row ball. | arXiv |
A full set of vibrationally-resolved cross sections for electron impact
excitation of NO(X2{\Pi}, v) molecules is calculated from ab initio molecular
dynamics, in the framework of the local-complex-potential approach.
Electron-vibration energy exchanges in non-equilibrium thermodynamic conditions
are studied from a state-to-state model accounting for all electron impact
excitation and de-excitation processes of the nitric oxide vibration manifold,
and it is shown that the calculated vibration relaxation times are in good
agreement with the experimental data. The new vibrational excitation cross
sections are used in a complete electron impact cross section set in order to
obtain non-equilibrium electron energy distribution functions and to calculate
electron transport parameters in NO. It is verified that the new cross sections
bring a significant improvement in the agreement between simulations and experimental swarm
data, providing an additional validation of the calculations, when used within
the complete set of cross sections investigated in this work. | arXiv |
This paper describes an algorithm for reconstructing and identifying a highly
collimated hadronically decaying $\tau$-lepton pair with low transverse
momentum. When two $\tau$-leptons are highly collimated, their visible decay
products might overlap, degrading the reconstruction performance for each of
the $\tau$-leptons. This requires a dedicated treatment that attempts to tag
the $\tau^+\tau^-$ system as a single object. The reconstruction algorithm is based on a large-radius jet
and its associated two leading subjets, and the identification uses a boosted
decision tree to discriminate between signatures from $\tau^+\tau^-$ systems
and those arising from QCD jets. The efficiency of the identification algorithm
is measured in $Z\gamma$ events using proton-proton collision data at
$\sqrt{s}=13$ TeV collected by the ATLAS experiment at the Large Hadron
Collider between 2015 and 2018, corresponding to an integrated luminosity of
139 $\mbox{fb}^{-1}$. The resulting data-to-simulation scale factors are close
to unity with uncertainties ranging from 26% to 37%. | arXiv |
We study the design of iterative combinatorial auctions (ICAs). The main
challenge in this domain is that the bundle space grows exponentially in the
number of items. To address this, several papers have recently proposed machine
learning (ML)-based preference elicitation algorithms that aim to elicit only
the most important information from bidders to maximize efficiency. The SOTA
ML-based algorithms elicit bidders' preferences via value queries (i.e., "What
is your value for the bundle $\{A,B\}$?"). However, the most popular iterative
combinatorial auction in practice elicits information via more practical
\emph{demand queries} (i.e., "At prices $p$, what is your most preferred bundle
of items?"). In this paper, we examine the advantages of value and demand
queries from both an auction design and an ML perspective. We propose a novel
ML algorithm that provably integrates the full information from both query
types. As suggested by our theoretical analysis, our experimental results
verify that combining demand and value queries results in significantly better
learning performance. Building on these insights, we present MLHCA, the most
efficient ICA ever designed. MLHCA substantially outperforms the previous SOTA
in realistic auction settings, delivering large efficiency gains. Compared to
the previous SOTA, MLHCA reduces efficiency loss by up to a factor of 10, and
in the most challenging and realistic domain, MLHCA outperforms the previous
SOTA using 30% fewer queries. Thus, MLHCA achieves efficiency improvements that
translate to welfare gains of hundreds of millions of USD, while also reducing
the cognitive load on the bidders, establishing a new benchmark both for
practicability and for economic impact. | arXiv |
We consider the initial-boundary value problem in the quarter space for the
system of equations of ideal Magneto-Hydrodynamics for compressible fluids with
perfectly conducting wall boundary conditions. On the two parts of the boundary
the solution satisfies different boundary conditions, which make the problem an
initial-boundary value problem with non-uniformly characteristic boundary.
We identify a subspace ${{\mathcal H}}^3(\Omega)$ of the Sobolev space
$H^3(\Omega)$, obtained by addition of suitable boundary conditions on one
portion of the boundary, such that for initial data in ${{\mathcal
H}}^3(\Omega)$ there exists a solution in the same space ${{\mathcal
H}}^3(\Omega)$, for all times in a small time interval. This yields the
well-posedness of the problem combined with a persistence property of full
$H^3$-regularity, although in general we expect a loss of normal regularity
near the boundary. Thanks to the special geometry of the quarter space the
proof easily follows by the "reflection technique". | arXiv |
Starting with an $n$-dimensional oriented Riemannian manifold with a Spin-c
structure, we describe an elliptic system of equations which recover the
Seiberg-Witten equations when $n=3,4$. The equations are for a U(1)-connection
$A$ and spinor $\phi$, as usual, and also an odd degree form $\beta$ (generally
of inhomogeneous degree). From $A$ and $\beta$ we define a Dirac operator
$D_{A,\beta}$ using the action of $\beta$ and $*\beta$ on spinors (with
carefully chosen coefficients) to modify $D_A$. The first equation in our
system is $D_{A,\beta}(\phi)=0$. The left-hand side of the second equation is
the principal part of the Weitzenb\"ock remainder for
$D^*_{A,\beta}D_{A,\beta}$. The equation sets this equal to $q(\phi)$, the
trace-free part of projection against $\phi$, as is familiar from the cases
$n=3,4$. In dimensions $n=4m$ and $n=2m+1$, this gives an elliptic system
modulo gauge. To obtain a system which is elliptic modulo gauge in dimensions
$n=4m+2$, we use two spinors and two connections, and so have two Dirac and two
curvature equations, that are then coupled via the form $\beta$. We also prove
a collection of a priori estimates for solutions to these equations.
Unfortunately they are not sufficient to prove compactness modulo gauge,
instead leaving the possibility that bubbling may occur. | arXiv |
The first measurement of $\phi(1020)$ meson production in fixed-target $p$Ne
collisions at $\sqrt{s_{NN}}=68.5$ GeV is presented. The $\phi(1020)$ mesons
are reconstructed in their $K^{+}K^{-}$ decay in a data sample consisting of
proton collisions on neon nuclei at rest, corresponding to an integrated
luminosity of $21.7 \pm 1.4$ nb$^{-1}$, collected by the LHCb detector at CERN.
The $\phi(1020)$ production cross-section in the centre-of-mass rapidity range
of $-1.8<y^*<0$ and transverse momentum range of $800<p_{T}<6500$ MeV/c is
found to be
$\sigma=182.7\pm2.7~\text{(stat.)}\pm14.1~\text{(syst.)}~\mu$b/nucleon. A
double-differential measurement of the cross-section is also provided in four
regions of rapidity and six regions of transverse momentum of the $\phi(1020)$
meson and compared with the predictions from Pythia and EPOS4, which are found
to underestimate the experimental values. | arXiv |
Event-by-event fluctuations of the event-wise mean transverse momentum,
$\langle p_{\mathrm{T}}\rangle$, of charged particles produced in proton-proton
(pp) collisions at $\sqrt{s}$ = 5.02 TeV, Xe-Xe collisions at
$\sqrt{s_{\mathrm{NN}}}$ = 5.44 TeV, and Pb-Pb collisions at
$\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV are studied using the ALICE detector based
on the integral correlator $\langle\langle \Delta p_{\rm T}\Delta p_{\rm
T}\rangle\rangle $. The correlator strength is found to decrease monotonically
with increasing produced charged-particle multiplicity measured at midrapidity
in all three systems. In Xe-Xe and Pb-Pb collisions, the multiplicity
dependence of the correlator deviates significantly from a simple power-law
scaling as well as from the predictions of the HIJING and AMPT models. The
observed deviation from power-law scaling is expected from transverse radial
flow in semicentral to central Xe-Xe and Pb-Pb collisions. In pp collisions,
the correlation strength is also studied by classifying the events based on the
transverse spherocity, $S_0$, of the particle production at midrapidity, used
as a proxy for the presence of a pronounced back-to-back jet topology.
Low-spherocity (jetty) events feature a larger correlation strength than those
with high spherocity (isotropic). The strength and multiplicity dependence of
jetty and isotropic events are well reproduced by calculations with the PYTHIA
8 and EPOS LHC models. | arXiv |
We propose a novel generalized framework for grant-free random-access (GFRA)
in cell-free massive multiple input multiple-output systems where multiple
geographically separated access points (APs) or base stations (BSs) aim to
detect sporadically active user-equipment (UEs). Unlike a conventional
architecture in which all the active UEs transmit their signature or pilot
sequences of equal length, we admit a flexible pilot length for each UE, which
also enables a seamless integration into conventional grant-based wireless
systems. We formulate the joint UE activity detection and the distributed
channel estimation as a sparse support and signal recovery problem, and
describe a Bayesian learning procedure to solve it. We develop a scheme to fuse
the posterior statistics of the latent variables inferred by each AP to jointly
detect the UEs' activities, and utilize them to further refine the channel
estimates. In addition, we point out an interesting feature that enables this
flexible GFRA framework to encode the information bits from the active UEs. We
numerically evaluate the normalized mean square error and the probability of
missed detection achieved by the Bayesian algorithm and show that
the latent-variable fusion enhances the detection and the channel estimation
performances by a large margin. We also benchmark against a genie-aided
algorithm which has a prior knowledge of the UEs' activities. | arXiv |
The $k$d-tree is one of the most widely used data structures to manage
multi-dimensional data. Due to the ever-growing data volume, it is imperative
to consider parallelism in $k$d-trees. However, we observed challenges in
existing parallel $k$d-tree implementations, for both construction and updates.
The goal of this paper is to develop efficient in-memory $k$d-trees by
supporting high parallelism and cache-efficiency. We propose the Pkd-tree
(Parallel $k$d-tree), a parallel $k$d-tree that is efficient both in theory and
in practice. The Pkd-tree supports parallel tree construction, batch update
(insertion and deletion), and various queries including k-nearest neighbor
search, range query, and range count. We proved that our algorithms have strong
theoretical bounds in work (sequential time complexity), span (parallelism),
and cache complexity. Our key techniques include 1) an efficient construction
algorithm that optimizes work, span, and cache complexity simultaneously, and
2) reconstruction-based update algorithms that guarantee the tree to be
weight-balanced. With the new algorithmic insights and careful engineering
effort, we achieved a highly optimized implementation of the Pkd-tree.
We tested Pkd-tree with various synthetic and real-world datasets, including
both uniform and highly skewed data. We compare the Pkd-tree with
state-of-the-art parallel $k$d-tree implementations. In all tests, the Pkd-tree
achieves better or competitive query performance while being consistently much
faster in construction and updates than all baselines. We have released our code. | arXiv |
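
For orientation, the sketch below shows the textbook sequential median-split construction and nearest-neighbour query that a parallel $k$d-tree builds upon; it illustrates the underlying data structure only and makes no attempt to reproduce the paper's parallel, cache-efficient, or weight-balanced reconstruction algorithms.

```python
import numpy as np

class KDNode:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    """Sequential median-split construction (the textbook baseline)."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    """Recursive nearest-neighbour search with subtree pruning."""
    if node is None:
        return best
    if best is None or np.linalg.norm(query - node.point) < np.linalg.norm(query - best):
        best = node.point
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < np.linalg.norm(query - best):   # the far subtree may still hold a closer point
        best = nearest(far, query, best)
    return best

pts = np.random.default_rng(0).random((1000, 2))
tree = build_kdtree(pts)
q = np.array([0.5, 0.5])
print(nearest(tree, q), pts[np.argmin(np.linalg.norm(pts - q, axis=1))])  # should agree
```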
The dominant accretion process leading to the formation of the terrestrial
planets of the Solar System is a subject of intense scientific debate. Two
radically different scenarios have been proposed. The classic scenario starts
from a disk of planetesimals which, by mutual collisions, produce a set of Moon
to Mars-mass planetary embryos. After the removal of gas from the disk, the
embryos experience mutual giant impacts which, together with the accretion of
additional planetesimals, lead to the formation of the terrestrial planets on a
timescale of tens of millions of years. In the alternative, pebble accretion
scenario, the terrestrial planets grow by accreting sunward-drifting mm-cm
sized particles from the outer disk. The planets all form within the lifetime
of the disk, with the sole exception of Earth, which undergoes a single
post-disk giant impact with Theia (a fifth protoplanet formed by pebble
accretion itself) to form the Moon. To distinguish between these two scenarios,
we revisit all available constraints: compositional (in terms of
nucleosynthetic isotope anomalies and chemical composition), dynamical and
chronological. We find that the pebble accretion scenario is unable to match
these constraints in a self-consistent manner, unlike the classic scenario. | arXiv |
A major challenge in nuclear fusion research is the coherent combination of
data from heterogeneous diagnostics and modelling codes for machine control and
safety as well as physics studies. Measured data from different diagnostics
often provide information about the same subset of physical parameters.
Additionally, information provided by some diagnostics might be needed for the
analysis of other diagnostics. A joint analysis of complementary and redundant
data allows one, e.g., to improve the reliability of parameter estimation, to
increase the spatial and temporal resolution of profiles, to obtain synergistic
effects, to consider diagnostics interdependencies and to find and resolve data
inconsistencies. Physics-based modelling and parameter relationships provide
additional information improving the treatment of ill-posed inversion problems.
A coherent combination of all kinds of available information within a
probabilistic framework allows for improved data analysis results.
The concept of Integrated Data Analysis (IDA) in the framework of Bayesian
probability theory is outlined and contrasted with conventional data analysis.
Components of the probabilistic approach are summarized and specific
ingredients beneficial for data analysis at fusion devices are discussed. | arXiv |
The large sieve is used to estimate the density of integral quadratic
polynomials $Q$, such that there exists an odd degree integral polynomial which
has resultant $\pm 1$ with $Q$. The proof uses properties of cyclotomic
polynomials and the Chebotarev density theorem. Given a monic integral
polynomial $R$ of odd degree, this is used to show that for almost all integral
quadratic polynomials $Q$, there exists a prime $p$ such that $Q$ and $R$ share
a common root in the algebraic closure of the finite field with $p$ elements.
Using recent work of Landesman, an application to the average size of the
$n$-torsion of the class group of quadratic number fields is also given. | arXiv |
We determine the spectroscopic properties of ~1000 ostensibly star-forming
galaxies at redshifts z=4-10 using prism spectroscopy from JWST/NIRSpec. With
rest-wavelength coverage between Lya and [S II] in the optical, we stack
spectra as a function of nebular conditions, and compare UV spectral properties
with stellar age. This reveals UV lines of N III], N IV], C III], C IV, He II,
and O III] in the average high-z galaxy. All UV lines are more intense in
younger starbursts. We measure electron temperatures from the collisionally
excited [O III] line ratios, finding Te=18000-22000 K for the O++ regions. We
also detect a significant nebular Balmer Jump from which we estimate only
Te=8000-13000 K. Accounting for typical temperature offsets between zones
bearing doubly and singly ionized oxygen, these two temperatures remain
discrepant by around 40%. We use the [O III] temperatures to estimate
abundances of carbon, nitrogen, and oxygen. We find that log(C/O) is
consistently ~-1, with no evolution of C/O with metallicity or stellar age. The
average spectra are mildly enhanced in nitrogen, with higher N/O than low-z
starbursts, but are less enhanced than samples of high-z galaxies with visible
UV N III] and N IV]. Whatever processes produce the N-enhancement in the
individual galaxies must also be ongoing, at lower levels, in the median galaxy
in the early Universe. The strongest starbursts are a source of significant
ionizing emission: ionizing photon production efficiencies reach 10^25.7
Hz/erg, and show multiple signatures of high Lyman continuum escape, including
Mg II escape fractions nearing 100%, significant deficits in [S II] emission,
high degrees of ionization, and blue UV colors. | arXiv |
Few-shot class-incremental learning (FSCIL) aims to continually learn new
classes from only a few samples without forgetting previous ones, requiring
intelligent agents to adapt to dynamic environments. FSCIL combines the
characteristics and challenges of class-incremental learning and few-shot
learning: (i) Current classes occupy the entire feature space, which is
detrimental to learning new classes. (ii) The small number of samples in
incremental rounds is insufficient for full training. To address challenge
(i), existing mainstream virtual-class methods attempt to use virtual classes
as placeholders; however, new classes may not necessarily align with the
virtual classes. For challenge (ii), they replace trainable fully
connected layers with Nearest Class Mean (NCM) classifiers based on cosine
similarity, but NCM classifiers do not account for sample imbalance issues. To
address these issues in previous methods, we propose the class-center guided
embedding Space Allocation with Angle-Norm joint classifiers (SAAN) learning
framework, which provides balanced space for all classes and leverages norm
differences caused by sample imbalance to enhance classification criteria.
Specifically, for challenge (i), SAAN divides the feature space into multiple
subspaces and allocates a dedicated subspace for each session by guiding
samples with the pre-set category centers. For challenge (ii), SAAN establishes
a norm distribution for each class and generates angle-norm joint logits.
Experiments demonstrate that SAAN can achieve state-of-the-art performance and
it can be directly embedded into other SOTA methods as a plug-in, further
enhancing their performance. | arXiv |
Solving multiscale diffusion problems is often computationally expensive due
to the spatial and temporal discretization challenges arising from
high-contrast coefficients. To address this issue, a partially explicit
temporal splitting scheme is proposed. By appropriately constructing multiscale
spaces, the spatial multiscale property is effectively managed, and it has been
demonstrated that the temporal step size is independent of the contrast. To
enhance simulation speed, we propose a parallel algorithm for the multiscale
flow problem that leverages the partially explicit temporal splitting scheme.
The idea is first to evolve the partially explicit system using a coarse time
step size, then correct the solution on each coarse time interval with a fine
propagator, for which we consider both the sequential solver and all-at-once
solver. This procedure is then performed iteratively until convergence. We
analyze the stability and convergence of the proposed algorithm. The numerical
experiments demonstrate that the proposed algorithm achieves high numerical
accuracy for high-contrast problems and converges in a relatively small number
of iterations. The number of iterations stays stable as the number of coarse
intervals increases, thus significantly improving computational efficiency
through parallel processing. | arXiv |
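The coarse-sweep-then-fine-correction iteration described above follows a parareal-type structure. The following Python sketch illustrates that generic structure on a stiff scalar ODE; the test problem, the implicit-Euler propagators, and all parameter values are illustrative assumptions, not the paper's partially explicit scheme or multiscale spaces.

import numpy as np

# Parareal-type coarse/fine correction on du/dt = -a*u (a stiff scalar proxy).
a, T, N = 50.0, 1.0, 10        # decay rate, final time, number of coarse intervals
dT = T / N
fine_substeps = 100            # fine sub-steps per coarse interval

def coarse(u, dt):             # cheap propagator: one implicit Euler step
    return u / (1.0 + a * dt)

def fine(u, dt, m):            # accurate propagator: m implicit Euler sub-steps
    for _ in range(m):
        u = u / (1.0 + a * dt / m)
    return u

# Initial sweep with the coarse propagator only
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dT)

# Iterative correction: the fine solves on each coarse interval are independent,
# so in practice they run in parallel across the intervals.
for _ in range(5):
    F = np.array([fine(U[n], dT, fine_substeps) for n in range(N)])
    U_new = np.zeros_like(U)
    U_new[0] = U[0]
    for n in range(N):
        U_new[n + 1] = coarse(U_new[n], dT) + F[n] - coarse(U[n], dT)
    U = U_new

print("approximate u(T):", U[-1], "  exact:", np.exp(-a * T))

The independence of the fine solves across coarse intervals is what makes the parallel speedup possible.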
Global attention has been focused on extreme climatic changes. This paper
investigates the relationship between different phases of solar activity and
extreme precipitation events in Kerala, India. Sunspot number and rainfall data
were analysed over 122 years (1901-2022) on an annual scale. A negative
correlation was observed in the winter and post-monsoon seasons, while positive
correlations were seen in the pre-monsoon and monsoon seasons, all of which
were statistically significant. Using cross-wavelet transform, the temporal
relationship between sunspot number and rainfall values was investigated,
revealing significant cross-power at an 8-12 year scale across all seasons.
Wavelet coherence between the two data sets demonstrated significant
correlation at the 2-4 and 4-8 year scales throughout the four seasons. The
results show that the seasonal rainfall over Kerala is related to solar
activity. The solar phases of Solar Cycles 14-24 were determined for all
seasons, and the years with excessive and insufficient rainfall were
identified. It was observed that the descending phase had an impact on excess
rainfall events during the winter and pre-monsoon seasons, while the ascending
phase notably affected the monsoon and post-monsoon seasons. The study
specifically examined the different magnetic polarities of sunspots in
alternating solar cycles, focusing on even and odd cycles. It was found that
extreme rainfall events were more frequent during the winter and pre-monsoon
seasons in the even cycles, whereas in the odd cycles, they were more prevalent
during the monsoon and post-monsoon seasons. These findings are presented for
the first time and may offer new perspectives on how different phases affect
rainfall. This study suggests a physical link between solar activity and
extreme precipitation in Kerala, which could increase predictability. | arXiv |
The Pandora temporal fault tree (TFT), a notable extension of the fault tree,
introduces temporal gates and temporal laws, enhancing the capability of fault
trees and enabling the modeling of sequence-dependent system failure behavior.
probability in Pandora TFT relies on precise probabilistic information on
component failures. However, obtaining such precise failure data can often be
challenging. The data may be uncertain as historical records are used to derive
failure data for system components. To mitigate this uncertainty, in this
study, we propose a method that integrates fuzzy set theory with Pandora TFT.
This integration aims to enable dynamic analysis of complex systems, even in
cases where quantitative failure data of components is unreliable or imprecise.
The proposed work introduces the development of Fuzzy AND, Fuzzy OR, Fuzzy
PAND, and Fuzzy POR logic gates for Pandora TFT. We also introduce a fuzzy
importance measure for criticality analysis of basic events. All events in our
analysis are assumed to have exponentially distributed failures, with their
failure rates represented as triangular fuzzy numbers. We illustrate the
proposed method through a case study of the Aircraft Fuel Distribution System
(AFDS), highlighting its practical application and effectiveness in analyzing
complex systems. The results are compared with existing results from Petri net
and Bayesian network techniques to validate the findings. | arXiv |
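To make the fuzzy-gate idea concrete, the sketch below shows Fuzzy AND and Fuzzy OR acting on triangular fuzzy failure probabilities represented as (low, mid, high) triples, with exponentially distributed component failures as stated above. The temporal PAND/POR gates and the fuzzy importance measure are omitted, and the gate formulas and example numbers are illustrative assumptions rather than the paper's exact definitions.

from functools import reduce
from math import exp

# Triangular fuzzy numbers are (low, mid, high) triples.
def fuzzy_prob_from_rate(rate_tfn, t):
    # Exponential failure model: P(t) = 1 - exp(-lambda * t), per component.
    return tuple(1 - exp(-lam * t) for lam in rate_tfn)

def fuzzy_and(*probs):
    # Independent events: component-wise product of the triples.
    return tuple(reduce(lambda x, y: x * y, comp) for comp in zip(*probs))

def fuzzy_or(*probs):
    # 1 - prod(1 - p), applied component-wise (monotone in each argument).
    return tuple(1 - reduce(lambda x, y: x * y, [1 - c for c in comp])
                 for comp in zip(*probs))

# Two components with triangular fuzzy failure rates (per hour), at t = 1000 h.
pA = fuzzy_prob_from_rate((1e-4, 2e-4, 3e-4), t=1000.0)
pB = fuzzy_prob_from_rate((5e-5, 1e-4, 2e-4), t=1000.0)
print("Fuzzy AND gate:", fuzzy_and(pA, pB))
print("Fuzzy OR gate :", fuzzy_or(pA, pB))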
The widespread use of social media platforms like Twitter and Facebook has
enabled people of all ages to share their thoughts and experiences, leading to
an immense accumulation of user-generated content. However, alongside the
benefits, these platforms also face the challenge of managing hate speech and
offensive content, which can undermine rational discourse and threaten
democratic values. As a result, there is a growing need for automated methods
to detect and mitigate such content, especially given the complexity of
conversations that may require contextual analysis across multiple languages,
including code-mixed languages like Hinglish, German-English, and Bangla. We
participated in the English task, which requires classifying English tweets
into two categories: Hate and Offensive, and Non Hate-Offensive. In this work,
we experiment with state-of-the-art large language models like GPT-3.5 Turbo
via prompting to classify tweets into Hate and Offensive or Non Hate-Offensive.
We evaluate the performance of this classification model using
Macro-F1 scores across three distinct runs. The Macro-F1 score, which balances
precision and recall across all classes, is used as the primary metric for
model evaluation. The scores obtained are 0.756 for run 1, 0.751 for run 2, and
0.754 for run 3, indicating a high level of performance with minimal variance
among the runs. The results suggest that the model consistently performs well
in terms of precision and recall, with run 1 showing the highest performance.
These findings highlight the robustness and reliability of the model across
different runs. | arXiv |
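For reference, the Macro-F1 metric used above can be computed with scikit-learn as in the minimal sketch below; the label scheme and the toy gold/predicted labels are placeholders, not the shared-task data.

from sklearn.metrics import f1_score

# Toy gold labels and predictions (placeholders, not the shared-task data).
y_true = ["HOF", "NOT", "NOT", "HOF", "NOT", "HOF"]
y_pred = ["HOF", "NOT", "HOF", "HOF", "NOT", "NOT"]

# Macro-F1 averages the per-class F1 scores, weighting both classes equally.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro-F1: {macro_f1:.3f}")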
Interpreting human neural signals to decode static speech intentions such as
text or images and dynamic speech intentions such as audio or video is showing
great potential as an innovative communication tool. Human communication
accompanies various features, such as articulatory movements, facial
expressions, and internal speech, all of which are reflected in neural signals.
However, most studies only generate short or fragmented outputs, while
providing informative communication by leveraging various features from neural
signals remains challenging. In this study, we introduce a dynamic neural
communication method that leverages current computer vision and brain-computer
interface technologies. Our approach captures the user's intentions from neural
signals and decodes visemes in short time steps to produce dynamic visual
outputs. The results demonstrate the potential to rapidly capture and
reconstruct lip movements during natural speech attempts from human neural
signals, enabling dynamic neural communication through the convergence of
computer vision and brain-computer interface.
We investigate the consequences of temporal reflection on wave propagation
and transformation in systems governed by a pseudospin-1/2 Dirac equation.
These systems are spatially uniform but are subject to random temporal
variations in mass, which correspond to the energy gap between the Dirac cones.
By employing the invariant imbedding method on two complementary random models,
we accurately compute all moments of temporal reflectance and derive their
analytical expressions in short- and long-time regimes. In the long-time
regime, the reflectance probability density is a constant equal to one,
indicating uniform probability for any reflectance value. The group velocity of
the wave decays to zero with time, signifying spatial localization induced by
temporal variations. Numerical simulations of a wave pulse show that the
initially narrow pulse evolves into a precisely Gaussian shape over time. In
the long-time regime, the pulse center exhibits spatial localization, while its
width shows ordinary diffusive behavior, increasing without limit. This
behavior is universal, persisting regardless of the initial pulse shape or the
probability distribution of the random mass. Our findings suggest that
insulating behavior can be induced in Dirac materials by random temporal
variations of the medium parameters. We discuss the possibilities of verifying
our predictions in various experimental systems. | arXiv |
Fast radio bursts (FRBs) are radio signals that last milliseconds. They
originate from cosmological distances and have relatively high dispersion
measures (DMs), making them excellent distance indicators. However, many
important questions about FRBs remain to be resolved. With its wide
field of view and excellent sensitivity, CHIME/FRB has discovered more than
half of all known FRBs. As more and more FRBs are located within or connected
with their host galaxies, the study of FRB progenitors is becoming more
important. In this work, we collect the currently available information related
to the host galaxies of FRBs; an MCMC analysis of the limited localized
samples reveals no significant difference in $\mathrm{DM_{host}}$ between
repeaters and non-repeaters. After examining CHIME/FRB samples, we estimate
the volumetric rates of repeaters and non-repeaters, accounting for
$\mathrm{DM_{host}}$ contributions. We compare these event rates with the
rates predicted by proposed origin models and with those of other transient
events. Our results indicate that
$\mathrm{DM_{host}}$ significantly affects volumetric rates and offer insights
into the origin mechanisms of FRB populations. | arXiv |
Efficient multiple setpoint tracking can enable advanced biotechnological
applications, such as maintaining desired population levels in co-cultures for
optimal metabolic division of labor. In this study, we employ reinforcement
learning as a control method for population setpoint tracking in co-cultures,
focusing on policy-gradient techniques where the control policy is
parameterized by neural networks. However, achieving accurate tracking across
multiple setpoints is a significant challenge in reinforcement learning, as the
agent must effectively balance the contributions of various setpoints to
maximize the expected system performance. Traditional return functions, such as
those based on a quadratic cost, often yield suboptimal performance due to
their inability to efficiently guide the agent toward the simultaneous
satisfaction of all setpoints. To overcome this, we propose a novel return
function that rewards the simultaneous satisfaction of multiple setpoints and
diminishes overall reward gains otherwise, accounting for both stage and
terminal system performance. This return function includes parameters to
fine-tune the desired smoothness and steepness of the learning process. We
demonstrate our approach considering an $\textit{Escherichia coli}$ co-culture
in a chemostat with optogenetic control over amino acid synthesis pathways,
leveraging auxotrophies to modulate growth. | arXiv |
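As a rough illustration of a return that rewards simultaneous setpoint satisfaction, the sketch below multiplies smooth per-setpoint kernels so that missing any single setpoint collapses the stage reward; the kernel form and its width/steepness parameters are assumptions for illustration, not the exact return function proposed in the paper.

import numpy as np

# Illustrative stage reward for multi-setpoint tracking: the product of smooth
# per-setpoint kernels rewards *simultaneous* satisfaction, since one poorly
# tracked setpoint collapses the whole product. Kernel shape and parameters
# (width `sigma`, steepness `k`) are assumptions, not the paper's return.
def stage_reward(x, setpoints, sigma=0.1, k=2.0):
    x = np.asarray(x, dtype=float)            # current population levels
    s = np.asarray(setpoints, dtype=float)    # desired population levels
    rel_err = np.abs(x - s) / np.maximum(s, 1e-12)
    per_setpoint = np.exp(-(rel_err / sigma) ** k)   # each factor in (0, 1]
    return float(np.prod(per_setpoint))               # near 1 only if all are met

# Example: two-species co-culture, both targets nearly met vs. one missed.
print(stage_reward([0.49, 0.52], [0.5, 0.5]))   # high reward
print(stage_reward([0.49, 0.80], [0.5, 0.5]))   # collapses toward zero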
Imagine searching a collection of coins for quarters ($0.25$), dimes
($0.10$), nickels ($0.05$), and pennies ($0.01$)-a hybrid foraging task where
observers look for multiple instances of multiple target types. In such tasks,
how do target values and their prevalence influence foraging and eye movement
behaviors (e.g., should you prioritize rare quarters or common nickels)? To
explore this, we conducted human psychophysics experiments, revealing that
humans are proficient reward foragers. Their eye fixations are drawn to regions
with higher average rewards, fixation durations are longer on more valuable
targets, and their cumulative rewards exceed chance, approaching the upper
bound of optimal foragers. To probe these decision-making processes of humans,
we developed a transformer-based Visual Forager (VF) model trained via
reinforcement learning. Our VF model takes a series of targets, their
corresponding values, and the search image as inputs, processes the images
using foveated vision, and produces a sequence of eye movements along with
decisions on whether to collect each fixated item. Our model outperforms all
baselines, achieves cumulative rewards comparable to those of humans, and
approximates human foraging behavior in eye movements and foraging biases
within time-limited environments. Furthermore, stress tests on
out-of-distribution tasks with novel targets, unseen values, and varying set
sizes demonstrate the VF model's effective generalization. Our work offers
valuable insights into the relationship between eye movements and
decision-making, with our model serving as a powerful tool for further
exploration of this connection. All data, code, and models will be made
publicly available. | arXiv |
We consider the simulation of isentropic flow in pipelines and pipe networks.
Standard operating conditions in pipe networks suggest an emphasis on simulating
low Mach and high friction regimes -- however, the system is stiff in these
regimes and conventional explicit approximation techniques prove quite costly
and often impractical. To combat these inefficiencies, we develop a novel
asymptotic-preserving scheme that is uniformly consistent and stable for all
Mach regimes. The proposed method for a single pipeline follows the flux
splitting suggested in [Haack et al., Commun. Comput. Phys., 12 (2012), pp.
955--980], in which the flux is separated into stiff and non-stiff portions
then discretized in time using an implicit-explicit approach. The non-stiff
part is advanced in time by an explicit hyperbolic solver; we opt for the
second-order central-upwind finite volume scheme. The stiff portion is advanced
in time implicitly using an approach based on Rosenbrock-type Runge-Kutta
methods, which ultimately reduces this implicit stage to a discretization of a
linear elliptic equation.
To extend to full pipe networks, the scheme on a single pipeline is paired
with coupling conditions defined at pipe-to-pipe intersections to ensure a
mathematically well-posed problem. We show that the coupling conditions remain
well-posed in the low Mach/high friction limit -- which, when used to define
the ghost cells of each pipeline, results in a method that is accurate across
these intersections in all regimes. The proposed method is tested on several
numerical examples and produces accurate, non-oscillatory results with run
times independent of the Mach number. | arXiv |
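Schematically, the flux-splitting IMEX idea referenced above can be written as follows; the splitting shown is a generic illustration with assumed notation, not the paper's exact discretization:
\[
  \partial_t u + \partial_x F(u) = S(u), \qquad
  F(u) = F_{\mathrm{ex}}(u) + F_{\mathrm{im}}(u),
\]
\[
  \frac{u^{n+1} - u^n}{\Delta t}
  + \partial_x F_{\mathrm{ex}}(u^n)
  + \partial_x F_{\mathrm{im}}(u^{n+1})
  = S(u^{n+1}),
\]
so the non-stiff part is advanced explicitly by the hyperbolic solver while the stiff part leads to an implicit (elliptic-type) solve whose cost does not degrade as the Mach number decreases.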
The observation of a Hall effect, a finite transverse voltage induced by a
longitudinal current, usually requires the breaking of time-reversal symmetry,
for example through the application of an external magnetic field or the
presence of long range magnetic order in a sample. Recently it was suggested
that under certain symmetry conditions, the presence of finite Berry curvatures
in the band structure of a system with time-reversal symmetry but without
inversion symmetry can give rise to a nonlinear Hall effect in the presence of
a probe current. In order to observe the nonlinear Hall effect, one requires a
finite component of a so-called Berry dipole along the direction of the probe
current. We report here measurements of the nonlinear Hall effect in
two-dimensional electron gases fabricated on the surface of KTaO$_3$ with
different surface crystal orientations as a function of the probe current, a
transverse electric field and back gate voltage. For all three crystal
orientations, the transverse electric field modifies the nonlinear Hall effect.
We discuss our results in the context of the current understanding of the
nonlinear Hall effect as well as potential experimental artifacts that may give
rise to the same effects. | arXiv |
Recent advancements in session-based recommendation models using deep
learning techniques have demonstrated significant performance improvements.
While they can enhance model sophistication and improve the relevance of
recommendations, they also make it challenging to implement a scalable
real-time solution. To address this challenge, we propose GRAINRec: a Graph
and Attention Integrated session-based recommendation model that generates
recommendations in real-time. Our scope of work is item recommendations in
online retail where a session is defined as an ordered sequence of digital
guest actions, such as page views or adds to cart. The proposed model generates
recommendations by considering the importance of all items in the session
together, predicting relevant recommendations dynamically as the session
evolves rather than relying on pre-computed recommendations for each item. We
also propose a heuristic approach to implement real-time inferencing that
meets the Target platform's service level agreement (SLA). Evaluation results
of the proposed model show an average improvement of 1.5% across all offline
evaluation metrics. A/B tests run over a two-week period showed a 10% increase
in click-through rate and a 9% increase in attributable demand. Extensive
ablation studies are also conducted to understand our
model performance for different parameters. | arXiv |
Egocentric Hand Object Interaction (HOI) videos provide valuable insights
into human interactions with the physical world, attracting growing interest
from the computer vision and robotics communities. A key task in fully
understanding the geometry and dynamics of HOI scenes is dense pointclouds
sequence reconstruction. However, the inherent motion of both hands and the
camera makes this challenging. Current methods often rely on time-consuming
test-time optimization, making them impractical for reconstructing
internet-scale videos. To address this, we introduce UniHOI, a model that
unifies the estimation of all variables necessary for dense 4D reconstruction,
including camera intrinsics, camera poses, and video depth, for egocentric HOI
scenes in a fast feed-forward manner. We optimize all these variables
end-to-end to improve their consistency in 3D space. Furthermore, our model can
be trained solely on large-scale monocular video datasets, overcoming the
limitation of scarce labeled HOI data. We evaluate UniHOI in both in-domain
and zero-shot generalization settings, surpassing all baselines in pointclouds
sequence reconstruction and long-term 3D scene flow recovery. UniHOI is the
first approach to offer fast, dense, and generalizable monocular egocentric HOI
scene reconstruction in the presence of motion. Code and trained model will be
released in the future. | arXiv |
We introduce a set of useful expressions of Differential Privacy (DP) notions
in terms of the Laplace transform of the privacy loss distribution. Its bare
form expression appears in several related works on analyzing DP, either as an
integral or an expectation. We show that recognizing the expression as a
Laplace transform unlocks a new way to reason about DP properties by exploiting
the duality between time and frequency domains. Leveraging our interpretation,
we connect the $(q, \rho(q))$-R\'enyi DP curve and the $(\epsilon,
\delta(\epsilon))$-DP curve as being the Laplace and inverse-Laplace transforms
of one another. This connection shows that the R\'enyi divergence is
well-defined for complex orders $q = \gamma + i \omega$. Using our Laplace
transform-based analysis, we also prove an adaptive composition theorem for
$(\epsilon, \delta)$-DP guarantees that is exactly tight (i.e., matches even in
constants) for all values of $\epsilon$. Additionally, we resolve an issue
regarding symmetry of $f$-DP on subsampling that prevented equivalence across
all functional DP notions. | arXiv |
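For context, two standard identities from the privacy-loss-distribution literature illustrate the transform viewpoint described above; here $L$ denotes the privacy loss random variable, the notation is assumed, and the expressions are background facts rather than formulas quoted from the paper:
\[
  \delta(\epsilon) \;=\; \mathbb{E}_{L}\!\left[\left(1 - e^{\epsilon - L}\right)_{+}\right],
  \qquad
  e^{(q-1)\rho(q)} \;=\; \mathbb{E}_{L}\!\left[e^{(q-1)L}\right].
\]
Both right-hand sides are expectations of exponentials of $L$, i.e., Laplace-transform-type functionals of the privacy loss distribution, which is the duality between the $(\epsilon, \delta(\epsilon))$- and R\'enyi-DP curves exploited above.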
ABCI 3.0 is the latest version of the ABCI, a large-scale open AI
infrastructure that AIST has been operating since August 2018 and will be fully
operational in January 2025. ABCI 3.0 consists of computing servers equipped
with 6128 NVIDIA H200 GPUs and an all-flash storage system. Its peak
performance is 6.22 exaflops in half precision and 3.0 exaflops in single
precision, which is 7 to 13 times faster than the previous system, ABCI 2.0. It
also more than doubles both storage capacity and theoretical read/write
performance. ABCI 3.0 is expected to accelerate research and development,
evaluation, and workforce development of cutting-edge AI technologies, with a
particular focus on generative AI. | arXiv |
While contrastive pre-training is widely employed, its data efficiency
problem has remained relatively under-explored thus far. Existing methods often
rely on static coreset selection algorithms to pre-identify important data for
training. However, this static nature renders them unable to dynamically track
the data usefulness throughout pre-training, leading to subpar pre-trained
models. To address this challenge, our paper introduces a novel dynamic
bootstrapping dataset pruning method. It involves pruning data preparation
followed by dataset mutation operations, both of which undergo iterative and
dynamic updates. We apply this method to two prevalent contrastive pre-training
frameworks: \textbf{CLIP} and \textbf{MoCo}, representing vision-language and
vision-centric domains, respectively. In particular, we individually pre-train
seven CLIP models on two large-scale image-text pair datasets, and two MoCo
models on the ImageNet dataset, resulting in a total of 16 pre-trained models.
With a data pruning rate of 30-35\% across all 16 models, our method exhibits
only marginal performance degradation (less than \textbf{1\%} on average)
compared to the corresponding models trained on the full datasets across
various downstream datasets, and also surpasses several baselines by a large
performance margin. Additionally, the byproduct of our method, i.e.,
coresets derived from the original datasets after pre-training, also
demonstrates significant superiority in terms of downstream performance over
other static coreset selection approaches. | arXiv |
We present a verifier of quantum programs called AutoQ 2.0. Quantum programs
extend quantum circuits (the domain of AutoQ 1.0) by classical control flow
constructs, which enable users to describe advanced quantum algorithms in a
formal and precise manner. The extension is highly non-trivial, as we needed to
tackle both theoretical challenges (such as the treatment of measurement, the
normalization problem, and lifting techniques for verification of classical
programs with loops to the quantum world), and engineering issues (such as
extending the input format with support for specifying loop invariants). We
have successfully used AutoQ 2.0 to verify two types of advanced quantum
programs that cannot be expressed using only quantum circuits: the
\emph{repeat-until-success} (RUS) algorithm and the weak-measurement-based
version of Grover's search algorithm. AutoQ 2.0 can efficiently verify all our
benchmarks: all RUS algorithms were verified instantly and, for the
weak-measurement-based version of Grover's search, we were able to handle the
case of 100 qubits in $\sim$20 minutes. | arXiv |
Interstellar objects (ISOs), astronomical objects not gravitationally bound
to the sun, could present valuable opportunities to advance our understanding
of the universe's formation and composition. In response to the unpredictable
nature of their discoveries that inherently come with large and rapidly
changing uncertainty in their state, this paper proposes a novel
multi-spacecraft framework for locally maximizing information to be gained
through ISO encounters with formal probabilistic guarantees. Given some
approximated control and estimation policies for fully autonomous spacecraft
operations, we first construct an ellipsoid around its terminal position, where
the ISO would be located with a finite probability. The large state uncertainty
of the ISO is formally handled here through the hierarchical property in
stochastically contracting nonlinear systems. We then propose a method to find
the terminal positions of the multiple spacecraft optimally distributed around
the ellipsoid, which locally maximizes the information we can get from all the
points of interest (POIs). This utilizes a probabilistic information cost
function that accounts for spacecraft positions, camera specifications, and ISO
position uncertainty, where the information is defined as visual data collected
by cameras. Numerical simulations demonstrate the efficacy of this approach
using synthetic ISO candidates generated from quasi-realistic empirical
populations. Our method allows each spacecraft to optimally select its terminal
state and determine the ideal number of POIs to investigate, potentially
enhancing the ability to study these rare and fleeting interstellar visitors
while minimizing resource utilization. | arXiv |
A number of models have been developed for information spread through
networks, often for solving the Influence Maximization (IM) problem. IM is the
task of choosing a fixed number of nodes to "seed" with information in order to
maximize the spread of this information through the network, with applications
in areas such as marketing and public health. Most methods for this problem
rely heavily on the assumption of known strength of connections between network
members (edge weights), which is often unrealistic. In this paper, we develop a
likelihood-based approach to estimate edge weights from the fully and partially
observed information diffusion paths. We also introduce a broad class of
information diffusion models, the general linear threshold (GLT) model, which
generalizes the well-known linear threshold (LT) model by allowing arbitrary
distributions of node activation thresholds. We then show our weight estimator
is consistent under the GLT and some mild assumptions. For the special case of
the standard LT model, we also present a much faster expectation-maximization
approach for weight estimation. Finally, we prove that for the GLT models, the
IM problem can be solved by a natural greedy algorithm with standard optimality
guarantees if all node threshold distributions have concave cumulative
distribution functions. Extensive experiments on synthetic and real-world
networks demonstrate that the flexibility in the choice of threshold
distribution combined with the estimation of edge weights significantly
improves the quality of IM solutions, spread prediction, and the estimates of
the node activation probabilities. | arXiv |
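The greedy algorithm with standard optimality guarantees mentioned above is the textbook greedy seed selection with Monte Carlo spread estimation; a minimal sketch under a linear-threshold-type activation rule is given below. The toy graph, edge weights, and parameter values are made up for illustration, and this is not the authors' implementation.

import random

def lt_spread(in_weights, seeds, rng):
    # One simulation of a linear-threshold-type cascade.
    # in_weights[v] = {u: w_uv}, with the incoming weights of v summing to <= 1.
    thresholds = {v: rng.random() for v in in_weights}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in in_weights:
            if v in active:
                continue
            if sum(w for u, w in in_weights[v].items() if u in active) >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active)

def greedy_im(in_weights, k, mc=200, seed=0):
    # Standard greedy seed selection with Monte-Carlo spread estimation.
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in in_weights:
            if v in seeds:
                continue
            avg = sum(lt_spread(in_weights, seeds | {v}, rng) for _ in range(mc)) / mc
            if avg > best_spread:
                best, best_spread = v, avg
        seeds.add(best)
    return seeds

# Toy 4-node graph with made-up incoming edge weights.
toy = {0: {}, 1: {0: 0.6}, 2: {0: 0.3, 1: 0.4}, 3: {2: 0.8}}
print(greedy_im(toy, k=2))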
We carried out a project involving the systematic analysis of microlensing
data from the Korea Microlensing Telescope Network survey. The aim of this
project is to identify lensing events with complex anomaly features that are
difficult to explain using standard binary-lens or binary-source models. Our
investigation reveals that the light curves of microlensing events
KMT-2021-BLG-0284, KMT-2022-BLG-2480, and KMT-2024-BLG-0412 display highly
complex patterns with three or more anomaly features. These features cannot be
adequately explained by a binary-lens (2L1S) model alone. However, the 2L1S
model can effectively describe certain segments of the light curve. By
incorporating an additional source into the modeling, we identified a
comprehensive model that accounts for all the observed anomaly features.
Bayesian analysis, based on constraints provided by lensing observables,
indicates that the lenses of KMT-2021-BLG-0284 and KMT-2024-BLG-0412 are binary
systems composed of M dwarfs. For KMT-2022-BLG-2480, the primary lens is an
early K-type main-sequence star with an M dwarf companion. The lenses of
KMT-2021-BLG-0284 and KMT-2024-BLG-0412 are likely located in the bulge,
whereas the lens of KMT-2022-BLG-2480 is more likely situated in the disk. In
all events, the binary stars of the sources have similar magnitudes due to a
detection bias favoring binary source events with a relatively bright secondary
source star, which increases detection efficiency. | arXiv |
Retrograde analysis is used in game-playing programs to solve states at the
end of a game, working backwards toward the start of the game. The algorithm
iterates through and computes the perfect-play value for as many states as
resources allow. We introduce setrograde analysis which achieves the same
results by operating on sets of states that have the same game value. The
algorithm is demonstrated by computing exact solutions for Bridge double dummy
card-play. For deals with 24 cards remaining to be played ($10^{27}$ states,
which can be reduced to $10^{15}$ states using preexisting techniques), we
strongly solve all deals. The setrograde algorithm performs a factor of $10^3$
fewer search operations than a standard retrograde algorithm, producing a
database with a factor of $10^4$ fewer entries. For applicable domains, this
allows retrograde searching to reach unprecedented search depths. | arXiv |
In this paper we address the reverse isoperimetric inequality for convex
bodies with uniform curvature constraints in the hyperbolic plane
$\mathbb{H}^2$. We prove that the thick $\lambda$-sausage body, that is, the
convex domain bounded by two equal circular arcs of curvature $\lambda$ and two
equal arcs of hypercircle of curvature $1 / \lambda$, is the unique minimizer
of area among all bodies $K \subset \mathbb{H}^2$ with a given length and with
curvature of $\partial K$ satisfying $1 / \lambda \leq \kappa \leq \lambda$ (in
a weak sense). We call this class of bodies thick $\lambda$-concave bodies, in
analogy to the Euclidean case where a body is $\lambda$-concave if $0 \leq
\kappa \leq \lambda$. The main difficulty in the hyperbolic setting is that the
inner parallel bodies of a convex body are not necessarily convex. To overcome
this difficulty, we introduce an extra assumption of thickness $\kappa \geq
1/\lambda$. In addition, we prove Blaschke's rolling theorem for
$\lambda$-concave bodies under the thickness assumption. That is, we prove that
a ball of curvature $\lambda$ can roll freely inside a thick $\lambda$-concave
body. In striking contrast to the Euclidean setting, Blaschke's rolling theorem
for $\lambda$-concave domains in $\mathbb{H}^2$ does not hold in general, and
thus has not been studied in the literature before. We address this gap and show
that the thickness assumption is necessary and sufficient for such a theorem to
hold. | arXiv |
We propose a quantum error mitigation scheme for single-qubit measurement
errors, particularly suited for one-way quantum computation. Contrary to well
established error mitigation methods for circuit-based quantum computation,
that require to run the circuits several times, our method is capable of
mitigating measurement errors in real-time, during the processing measurements
of the one-way computation. For that, an ancillary qubit register is entangled
with the to-be-measured qubit and additionally measured afterwards. By using a
voting protocol on all measurement outcomes, occurring measurement errors can
be mitigated in real-time while the one-way computation continues. We provide
an analytical expression for the probability of detecting a measurement error
as a function of the error rate and the number of ancilla qubits. From this, we
derive an estimate of the ancilla register size for a given measurement error
rate and a required success probability to detect a measurement error.
Additionally, we also consider the CNOT gate error in our mitigation method and
investigate how this influences the probability of detecting a measurement error.
Finally, we show in proof-of-principle simulations, also considering a hardware
noise model, that our method is capable of reducing the measurement errors
significantly in a one-way quantum computation with only a small number of
ancilla qubits. | arXiv |
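As a rough illustration of how redundancy suppresses measurement errors, the sketch below computes, under a simple independence assumption, the probability that a majority vote over the data-qubit outcome plus n ancilla outcomes is still wrong. This repetition-code-style estimate only conveys the scaling; it is not the analytical expression derived above, which also accounts for CNOT gate errors.

from math import comb

# Each of the (n+1) redundant outcomes (data qubit + n ancillas) is assumed to
# flip independently with probability p; the vote fails only when a strict
# majority flips. This is an illustrative assumption, not the paper's model.
def majority_vote_failure(p, n_ancilla):
    m = n_ancilla + 1                       # total number of outcomes
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(m // 2 + 1, m + 1))

for n in (2, 4, 6):
    print(n, "ancillas:", majority_vote_failure(p=0.05, n_ancilla=n))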
A common approach to detecting weak signals or minute quantities involves
leveraging on the localized spectral features of resonant modes, where sharper
lines (i.e. high Q-factors) enhance transduction sensitivity. However,
maximizing the Q-factor often introduces technical challenges in fabrication
and design. In this work, we propose an alternative strategy to achieve sharper
spectral features by using interference and nonlinearity, all while maintaining
a constant dissipation rate. Using far-infrared thermomechanical detectors as a
test case, we demonstrate that signal transduction along an engineered response
curve slope effectively reduces the detector's noise equivalent power (NEP).
This method, combined with an optimized absorbing layer, achieves sub-pW NEP
for electrical read-out detectors operating in the sub-THz range. | arXiv |
For an atomic domain $D$, the $elasticity$ $\rho(D)$ of $D$ is defined as
$\sup\{r/s: \pi_1\cdots \pi_r = \rho_1 \cdots \rho_s,~ \text{where each $\pi_i,
\rho_j$ is irreducible}\}$; the elasticity provides a concrete measure of the
failure of unique factorization in $D$. Fix a quadratic number field $K$ with
discriminant $\Delta_K$, and for each positive integer $f$, let $\mathcal{O}_f
= \mathbb{Z} + f\mathcal{O}_K$ denote the order of conductor $f$ in $K$.
Results of Halter-Koch imply that $\mathcal{O}_f$ has finite elasticity
precisely when $f$ is $\textit{split-free}$, meaning not divisible by any
rational prime $p$ with $(\Delta_K/p)=1$. When $K$ is imaginary, we show that
for almost all split-free $f$, \[ \rho(\mathcal{O}_f) =
f/(\log{f})^{\frac{1}{2}\log\log\log{f} + \frac{1}{2}C_K+o(1)}, \] for a
constant $C_K$ depending on $K$. When $K$ is real, we prove under the
assumption of the Generalized Riemann Hypothesis that \[ \rho(\mathcal{O}_f)=
(\log{f})^{\frac12 +o(1)} \] for almost all split-free $f$. Underlying these
estimates are new statistical theorems about class groups of orders in
quadratic fields, whose proofs borrow ideas from investigations of Erd\H{o}s,
Hooley, Li, Pomerance, Schmutz, and others into the multiplicative groups
$(\mathbb{Z}/m\mathbb{Z})^\times$. One novelty of the argument is the
development of a weighted version of the Tur\'{a}n--Kubilius inequality to
handle a variety of sums over split-free integers. | arXiv |
We study the metric Steiner tree problem in the sublinear query model. In
this problem, for a set of $n$ points $V$ in a metric space given to us by
means of query access to an $n\times n$ matrix $w$, and a set of terminals
$T\subseteq V$, the goal is to find the minimum-weight subset of the edges that
connects all the terminal vertices.
Recently, Chen, Khanna and Tan [SODA'23] gave an algorithm that uses
$\widetilde{O}(n^{13/7})$ queries and outputs a $(2-\eta)$-estimate of the
metric Steiner tree weight, where $\eta>0$ is a universal constant. A key
component in their algorithm is a sublinear algorithm for a particular set
cover problem where, given a set system $(U, F)$, the goal is to provide a
multiplicative-additive estimate for $|U|-\textsf{SC}(U, F)$. Here $U$ is the
set of elements, $F$ is the collection of sets, and $\textsf{SC}(U, F)$ denotes
the optimal set cover size of $(U, F)$. In particular, their algorithm returns
a $(1/4, \varepsilon\cdot|U|)$-multiplicative-additive estimate for this set
cover problem using $\widetilde{O}(|F|^{7/4})$ membership oracle queries
(querying whether a set $S$ contains an element $e$), where $\varepsilon$ is a fixed
constant.
In this work, we improve the query complexity of $(2-\eta)$-estimating the
metric Steiner tree weight to $\widetilde{O}(n^{5/3})$ by showing a $(1/2,
\varepsilon \cdot |U|)$-estimate for the above set cover problem using
$\widetilde{O}(|F|^{5/3})$ membership queries. To design our set cover
algorithm, we estimate the size of a random greedy maximal matching for an
auxiliary multigraph that the algorithm constructs implicitly, without access
to its adjacency list or matrix. | arXiv |
Fainter standard stars are essential for the calibration of larger
telescopes. This work adds to the CALSPEC (calibration spectra) database 19
faint white dwarfs (WDs) with all-sky coverage and V magnitudes between 16.5
and 18.7. Included for these stars is new UV (ultraviolet) HST (Hubble Space
Telescope) STIS (Space Telescope Imaging Spectrometer) spectrophotometry
between 1150 and 3000~\AA\ with a resolution of $\sim$500. Pure hydrogen WD
models are fit to these UV spectra and to six-band HST/WFC3 (Wide Field Camera
3) photometry at 0.28 to 1.6~\micron\ to construct predicted model SEDs
(spectral energy distributions) covering wavelengths from 900~\AA\ to the JWST
(James Webb Space Telescope) limit of 30~\micron\ using well-established
CALSPEC procedures for producing flux standards with the goal of 1\% accuracy. | arXiv |
The objective assessment of gait kinematics is crucial in evaluating human
movement, informing clinical decisions, and advancing rehabilitation and
assistive technologies. Assessing gait symmetry, in particular, holds
significant importance in clinical rehabilitation, as it reflects the intricate
coordination between nerves and muscles during human walking. In this research,
a dataset has been compiled to improve the understanding of gait kinematics.
The dataset encompasses motion capture data of the walking patterns of eleven
healthy participants who were tasked with completing various activities on a
circular path. These activities included normal walking, walking with a
weighted dominant hand, walking with a braced dominant leg, and walking with
both weight and brace. The walking tasks involving weight and brace were
designed to emulate the asymmetry associated with common health conditions,
shedding light on irregularities in individuals' walking patterns and
reflecting the coordination between nerves and muscles. All tasks were
performed at regular and fast speeds, offering valuable insights into upper and
lower body kinematics. The dataset comprises raw sensor data, providing
information on joint dynamics, angular velocities, and orientation changes
during walking, as well as analyzed data, including processed data, Euler
angles, and joint kinematics spanning various body segments. This dataset will
serve as a valuable resource for researchers, clinicians, and engineers,
facilitating the analysis of gait patterns and extracting relevant indices on
mobility and balance. | arXiv |
As part of the Canadian Hydrogen Intensity Mapping Experiment Fast Radio
Burst (CHIME/FRB) project, we report 41 new Rotation Measures (RMs) from 20
repeating Fast Radio Bursts (FRBs) obtained between 2019 and 2023 for which no
previous RM was determined. We also report 22 additional RM measurements for
eight further repeating FRBs. We observe temporal RM variations in practically
all repeating FRBs. Repeaters appear to be separated into two categories: those
with dynamic and those with stable RM environments, differentiated by the
ratios of RM standard deviations over the averaged RM magnitudes. Sources from
stable RM environments likely have little RM contributions from the
interstellar medium (ISM) of their host galaxies, whereas sources from dynamic
RM environments share some similarities with Galactic pulsars in eclipsing
binaries but appear distinct from Galactic centre solitary pulsars. We observe
a new stochastic, secular, and again stochastic trend in the temporal RM
variation of FRB 20180916B, which does not support binary orbit modulation
being the reason for these RM changes. We highlight two more repeaters that show
RM sign change, namely FRBs 20290929C and 20190303A. We perform an updated
comparison of polarization properties between repeating and non-repeating FRBs,
which show a marginal dichotomy in their distribution of
electron-density-weighted parallel-component line-of-sight magnetic fields. | arXiv |
The Max-Flow/Min-Cut problem is a fundamental tool in graph theory, with
applications in many domains, including data mining, image segmentation,
transportation planning, and many types of assignment problems, in addition to
being an essential building block for many other algorithms. The Ford-Fulkerson
Algorithm for Max-Flow/Min-Cut and its variants are therefore commonly taught
in undergraduate and beginning graduate algorithms classes. However, these
algorithms -- and in particular the so-called residual graphs they utilize --
often pose significant challenges for students.
To help students achieve a deeper understanding, we developed iFlow, an
interactive visualization tool for the Ford-Fulkerson Algorithm and its
variants. iFlow lets users design or import flow networks, and execute the
algorithm by hand. In particular, the user can select an augmentation path and
amount, and then update the residual graph. The user is given detailed feedback
on mistakes, and can also have iFlow auto-complete each step, to use it as a
demonstration tool while still in the initial learning stages. iFlow has been
made publicly available and open-sourced.
We deployed iFlow in an undergraduate algorithms class, and collected
students' self-reported learning benefits via an optional survey. All
respondents considered the tool at least somewhat useful and engaging, with
most rating it either as useful/engaging or very useful/engaging. Students also
generally reported a significant increase in understanding of the algorithm. | arXiv |
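For readers less familiar with the algorithm being visualized, the sketch below is a compact textbook Ford-Fulkerson implementation in which the residual-capacity updates correspond to the step iFlow asks students to perform by hand; it is a standard reference version with a made-up example network, not iFlow's own code.

from collections import defaultdict

def max_flow(capacity, source, sink):
    # Build the residual graph from the original capacities.
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c

    def find_path():
        # DFS for an augmenting path along edges with positive residual capacity.
        stack, parent = [source], {source: None}
        while stack:
            u = stack.pop()
            if u == sink:
                break
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    stack.append(v)
        if sink not in parent:
            return None
        path, v = [], sink
        while v is not None:
            path.append(v)
            v = parent[v]
        return path[::-1]

    flow = 0
    while (path := find_path()) is not None:
        bottleneck = min(residual[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            residual[u][v] -= bottleneck   # forward residual edge shrinks
            residual[v][u] += bottleneck   # backward residual edge grows
        flow += bottleneck
    return flow

# Made-up example network.
net = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(net, "s", "t"))   # 5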
This work explores the implications of the Exclusivity Principle (EP) in the
context of quantum and post-quantum correlations. We first establish a key
technical result demonstrating that given the set of correlations for a
complementary experiment, the EP restricts the maximum set of correlations for
the original experiment to the anti-blocking set. Based on it, we can prove our
central result: if all quantum behaviors are accessible in Nature, the EP
guarantees that no post-quantum behaviors can be realized. This can be seen as
a generalization of the result of [Phys. Rev. A 89, 030101(R)], to a wider
range of scenarios. It also provides novel insights into the structure of
quantum correlations and their limitations. | arXiv |
We study the long-time behaviour of solutions to some classes of fourth-order
nonlinear PDEs with non-monotone nonlinearities, which include the
Landau--Lifshitz--Baryakhtar (LLBar) equation (with all relevant fields and
spin torques) and the convective Cahn--Hilliard/Allen--Cahn (CH-AC) equation
with a proliferation term, in dimensions $d=1,2,3$. Firstly, we show the global
well-posedness, as well as the existence of global and exponential attractors
with finite fractal dimensions for these problems. In the case of the
exchange-dominated LLBar equation and the CH-AC equation without convection, an
estimate for the rate of convergence of the solution to the corresponding
stationary state is given. Finally, we show the existence of a robust family of
exponential attractors when $d\leq 2$. As a corollary, exponential attractor of
the LLBar equation is shown to converge to that of the Landau--Lifshitz--Bloch
equation in the limit of vanishing exchange damping, while exponential
attractor of the convective CH-AC equation is shown to converge to that of the
convective Allen--Cahn equation in the limit of vanishing diffusion
coefficient. | arXiv |
A discrete analog of quantum unique ergodicity was proved for Cayley graphs
of quasirandom groups by Magee, Thomas and Zhao. They show that for large
graphs there exist real orthonormal bases of eigenfunctions of the adjacency
matrix such that quantum probability measures of the eigenfunctions put
approximately the correct proportion of their mass on subsets of the vertices
that are not too small. We investigate this property for Cayley graphs of
cyclic groups (circulant graphs). We observe that there exist sequences of
orthonormal eigenfunction bases which are perfectly equidistributed. However,
for sequences of 4-regular circulant graphs of prime order, we show that there
are no sequences of real orthonormal bases where all sequences of
eigenfunctions equidistribute. To obtain this result, we also prove that, for
large 4-regular circulant graphs of prime order, the maximum multiplicity of
the eigenvalues of the adjacency matrix is two. | arXiv |
Improving hyperspectral image (HSI) semantic segmentation by exploiting
complementary information from a supplementary data type (referred to as
X-modality) is promising but challenging due to differences in imaging sensors,
image content, and resolution. Current techniques struggle to enhance
modality-specific and modality-shared information, as well as to capture
dynamic interaction and fusion between different modalities. In response, this
study proposes CoMiX, an asymmetric encoder-decoder architecture with
deformable convolutions (DCNs) for HSI-X semantic segmentation. CoMiX is
designed to extract, calibrate, and fuse information from HSI and X data. Its
pipeline includes an encoder with two parallel and interacting backbones and a
lightweight all-multilayer perceptron (ALL-MLP) decoder. The encoder consists
of four stages, each incorporating 2D DCN blocks for the X modality to accommodate
geometric variations and 3D DCN blocks for HSIs to adaptively aggregate
spatial-spectral features. Additionally, each stage includes a Cross-Modality
Feature enhancement and eXchange (CMFeX) module and a feature fusion module
(FFM). CMFeX is designed to exploit spatial-spectral correlations from
different modalities to recalibrate and enhance modality-specific and
modality-shared features while adaptively exchanging complementary information
between them. Outputs from CMFeX are fed into the FFM for fusion and passed to
the next stage for further information learning. Finally, the outputs from each
FFM are integrated by the ALL-MLP decoder for final prediction. Extensive
experiments demonstrate that our CoMiX achieves superior performance and
generalizes well to various multimodal recognition tasks. The CoMiX code will
be released. | arXiv |
We revisit the calculation of classical observables from causal response
functions, following up on recent work by Caron-Huot et al. [JHEP 01 (2024)
139]. We derive a formula to compute asymptotic in-in observables from a
particular soft limit of five-point amputated response functions. Using such
formula, we re-derive the formulas by Kosower, Maybee and O'Connell (KMOC) for
the linear impulse and radiated linear momentum of particles undergoing
scattering, and we present an unambiguous calculation of the radiated angular
momentum at leading order. Then, we explore the consequences of manifestly
causal Feynman rules in the calculation of classical observables by employing
the causal (Keldysh) basis in the in-in formalism. We compute the linear
impulse, radiated waveform and its variance at leading and/or next-to-leading
order in the causal basis, and find that all terms singular in the $\hbar \to
0$ limit cancel manifestly at the integrand level. We also find that the
calculations simplify considerably and classical properties such as
factorization of six-point amplitudes are more transparent in the causal basis. | arXiv |
We investigate the use of labelled graphs as a Morita equivalence invariant
for inverse semigroups. We construct a labelled graph from a combinatorial
inverse semigroup $S$ with $0$ admitting a special set of idempotent
$\mathcal{D}$-class representatives and show that $S$ is Morita equivalent to a
labelled graph inverse semigroup. For the inverse hull $S$ of a Markov shift,
we show that the labelled graph determines the Morita equivalence class of $S$
among all other inverse hulls of Markov shifts. | arXiv |
Quark stars are challenging to confirm or exclude observationally because
they can have similar masses and radii as neutron stars. By performing the
first calculation of the non-equilibrium equation of state of decompressed
quark matter at finite temperature, we determine the properties of the ejecta
from binary quark-star or quark star-black hole mergers. We account for all
relevant physical processes during the ejecta evolution, including quark nugget
evaporation and cooling, and weak interactions. We find that these merger
ejecta can differ significantly from those in neutron star mergers, depending
on the binding energy of quark matter. For relatively high binding energies,
quark star mergers are unlikely to produce r-process elements and kilonova
signals. We propose that future observations of binary mergers and kilonovae
could impose stringent constraints on the binding energy of quark matter and
the existence of quark stars. | arXiv |
AstroSage-Llama-3.1-8B is a domain-specialized natural-language AI assistant
tailored for research in astronomy, astrophysics, and cosmology. Trained on the
complete collection of astronomy-related arXiv papers from 2007-2024 along with
millions of synthetically-generated question-answer pairs and other
astronomical literature, AstroSage-Llama-3.1-8B demonstrates remarkable
proficiency on a wide range of questions. AstroSage-Llama-3.1-8B scores 80.9%
on the AstroMLab-1 benchmark, greatly outperforming all models -- proprietary
and open-weight -- in the 8-billion parameter class, and performing on par with
GPT-4o. This achievement demonstrates the potential of domain specialization in
AI, suggesting that focused training can yield capabilities exceeding those of
much larger, general-purpose models. AstroSage-Llama-3.1-8B is freely
available, enabling widespread access to advanced AI capabilities for
astronomical education and research. | arXiv |
As language models grow ever larger, so do their vocabularies. This has
shifted the memory footprint of LLMs during training disproportionately to one
single layer: the cross-entropy in the loss computation. Cross-entropy builds
up a logit matrix with entries for each pair of input tokens and vocabulary
items and, for small models, consumes an order of magnitude more memory than
the rest of the LLM combined. We propose Cut Cross-Entropy (CCE), a method that
computes the cross-entropy loss without materializing the logits for all tokens
into global memory. Rather, CCE only computes the logit for the correct token
and evaluates the log-sum-exp over all logits on the fly. We implement a custom
kernel that performs the matrix multiplications and the log-sum-exp reduction
over the vocabulary in flash memory, making global memory consumption for the
cross-entropy computation negligible. This has a dramatic effect. Taking the
Gemma 2 (2B) model as an example, CCE reduces the memory footprint of the loss
computation from 24 GB to 1 MB, and the total training-time memory consumption
of the classifier head from 28 GB to 1 GB. To improve the throughput of CCE, we
leverage the inherent sparsity of softmax and propose to skip elements of the
gradient computation that have a negligible (i.e., below numerical precision)
contribution to the gradient. Experiments demonstrate that the dramatic
reduction in memory consumption is accomplished without sacrificing training
speed or convergence. | arXiv |
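The identity underlying the method above is that the per-token loss equals the log-sum-exp over the vocabulary minus the logit of the correct token, and the log-sum-exp can be accumulated block by block so the full logit matrix is never materialized. The NumPy sketch below illustrates only this identity with toy sizes; the actual method is a fused flash-style GPU kernel with a custom backward pass, which is not shown here.

import numpy as np

# Blockwise cross-entropy: only (tokens x block) logits exist at any time.
def blockwise_cross_entropy(h, W, y, block=1024):
    n = h.shape[0]
    running_max = np.full(n, -np.inf)
    running_sum = np.zeros(n)
    for start in range(0, W.shape[0], block):
        logits = h @ W[start:start + block].T          # (n, block) only
        blk_max = logits.max(axis=1)
        new_max = np.maximum(running_max, blk_max)
        # Streaming (numerically stable) log-sum-exp update.
        running_sum = (running_sum * np.exp(running_max - new_max)
                       + np.exp(logits - new_max[:, None]).sum(axis=1))
        running_max = new_max
    correct_logit = np.einsum("nd,nd->n", h, W[y])      # only the gold logits
    return np.mean(running_max + np.log(running_sum) - correct_logit)

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 64))        # token embeddings (toy sizes)
W = rng.normal(size=(50000, 64))    # classifier head / vocabulary
y = rng.integers(0, 50000, size=8)  # gold token ids
print(blockwise_cross_entropy(h, W, y))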
One way to define a sub-Riemannian metric is as the limit of a Riemannian
metric. Consider a Riemannian structure depending on a parameter $s$ such that
its limit defines a sub-Riemannian metric when $s \to \infty$, assuming that
the Riemannian geodesic flow is integrable for all $s$. An interesting question
is: Can we determine the integrability of the sub-Riemannian geodesic flow as
the limit of the integrals of motion of the Riemannian geodesic flow? The
paper's main contribution is to provide a positive answer to this question in
the special orthogonal group. Theorem 1.1 states that the Riemannian geodesic
flow is Liouville integrable: the limit of the Manakov integrals suggests the
existence of a Lax pair formulation of the Riemannian geodesic equations, and
the proof of Theorem 1.1 relies on this Lax pair. | arXiv |
We undertake a comprehensive analysis of the supersymmetric partition
function of the $\text{U}(N)_k\times\text{U}(N)_{-k}$ ABJM theory on a Seifert
manifold, evaluating it to all orders in the $1/N$-perturbative expansion up to
exponentially suppressed corrections. Through holographic duality, our
perturbatively exact result is successfully matched with the regularized
on-shell action of a dual Euclidean AdS$_4$-Taub-Bolt background incorporating
4-derivative corrections, and also provides valuable insights into the
logarithmic corrections that emerge from the 1-loop calculations in M-theory
path integrals. In this process, we revisit the Euclidean AdS$_4$-Taub-Bolt
background carefully, elucidating the flat connection in the background
graviphoton field. This analysis unambiguously determines the U(1)$_R$ holonomy
along the Seifert fiber, thereby solidifying the holographic comparison
regarding the partition function on a large class of Seifert manifolds. | arXiv |
In comparative studies of progressive diseases, such as randomized controlled
trials (RCTs), the mean Change From Baseline (CFB) of a continuous outcome at a
pre-specified follow-up time across subjects in the target population is a
standard estimand used to summarize the overall disease progression. Despite
its simplicity in interpretation, the mean CFB may not efficiently capture
important features of the trajectory of the mean outcome relevant to the
evaluation of the treatment effect of an intervention. Additionally, the
estimation of the mean CFB does not use all longitudinal data points. To
address these limitations, we propose a class of estimands called Principal
Progression Rate (PPR). The PPR is a weighted average of local or instantaneous
slope of the trajectory of the population mean during the follow-up. The
flexibility of the weight function allows the PPR to cover a broad class of
intuitive estimands, including the mean CFB, the slope of an ordinary least-squares
fit to the trajectory, and the area under the curve. We showed that properly
chosen PPRs can enhance statistical power over the mean CFB by amplifying the
signal of treatment effect and/or improving estimation precision. We evaluated
different versions of PPRs and the performance of their estimators through
numerical studies. A real dataset was analyzed to demonstrate the advantage of
using an alternative PPR over the mean CFB. | arXiv |
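A plausible general form consistent with the description above is sketched below; the paper's exact definition and normalization may differ, but the listed special cases follow from integration by parts for the stated weights.

```latex
% Assumed general form of the Principal Progression Rate over follow-up [0, T],
% for population mean trajectory mu(t) and weight function w(t):
\[
  \mathrm{PPR}_w \;=\; \int_0^T w(t)\,\mu'(t)\,dt .
\]
% Special cases (up to normalization, via integration by parts):
%   w(t) = 1         =>  PPR_w = mu(T) - mu(0)                   (mean CFB)
%   w(t) \propto t(T - t)  =>  PPR_w \propto OLS slope of mu(t) on [0, T]
%   w(t) = T - t     =>  PPR_w = area under the change-from-baseline curve
```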
Mixture-of-Experts (MoE) architectures have recently gained popularity in
enabling efficient scaling of large language models. However, we uncover a
fundamental tension: while MoEs are designed for selective expert activation,
production serving requires request batching, which forces the activation of
all experts and negates MoE's efficiency benefits during the decode phase. We
present Lynx, a system that enables efficient MoE inference through dynamic,
batch-aware expert selection. Our key insight is that expert importance varies
significantly across tokens and inference phases, creating opportunities for
runtime optimization. Lynx leverages this insight through a lightweight
framework that dynamically reduces active experts while preserving model
accuracy. Our evaluations show that Lynx achieves up to 1.55x reduction in
inference latency while maintaining negligible accuracy loss relative to the
baseline model across complex code generation and mathematical reasoning
tasks. | arXiv |
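One way batch-aware expert selection could work is sketched below: aggregate per-token router weights across a decode batch and activate only the experts that carry most of the routing mass. This is an illustrative assumption, not Lynx's actual policy; the thresholds, capacity, and phase-specific logic are hypothetical.

```python
# Hypothetical sketch of batch-aware dynamic expert selection: keep only the
# experts that matter for the current batch instead of activating all of them.
# The selection rule and `capacity` parameter are assumptions for illustration.
import torch

def select_batch_experts(router_logits, capacity, top_k_per_token=2):
    """router_logits: (batch, num_experts); returns ids of experts to activate."""
    probs = torch.softmax(router_logits, dim=-1)                  # (B, E)
    # Per-token routing as in a standard top-k MoE layer.
    token_topk = probs.topk(top_k_per_token, dim=-1).indices      # (B, k)
    # Batch-level importance: total routing mass each expert would receive.
    importance = torch.zeros(probs.shape[1], device=probs.device)
    importance.scatter_add_(0, token_topk.flatten(),
                            probs.gather(-1, token_topk).flatten())
    # Keep only the `capacity` most important experts for this batch.
    active = importance.topk(min(capacity, probs.shape[1])).indices
    return active
```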
Dust polarization, which comes from the alignment of aspherical grains to
magnetic fields, has been widely employed to study the interstellar medium
(ISM) dust properties. The wavelength dependence of the degree of optical
polarization, known as the Serkowski relation, was a key observational
discovery that advanced grain modeling and alignment theories. However, it was
recently shown that line-of-sight (LOS) variations in the structure of the ISM
or the magnetic field morphology contaminate the constraints extracted from
fits to the Serkowski relation. These cases can be identified by the
wavelength-dependent variability in the polarization angles. We aim to
investigate the extent to which we can constrain the intrinsic dust properties
and alignment efficiency from dust polarization data, by accounting for LOS
variations of the magnetic field morphology. We employed archival data to fit
the Serkowski relation and constrain its free parameters. We explored potential
imprints of LOS variations of the magnetic field morphology in these
constraints. We found that these LOS integration effects contaminate the
majority of the existing dataset, thus biasing the obtained Serkowski
parameters by approximately 10%. The constancy of the polarization angles with
wavelength does not necessarily guarantee the absence of 3D averaging effects.
We examined the efficiency of dust grains in polarizing starlight, as probed by
the ratio of the degree of polarization to dust reddening, E(B-V). We found
that all measurements respect the limit established by polarized dust emission
data. A suppression in polarization efficiencies occurs at E(B-V) close to 0.5
mag, which we attribute to projection effects and which may be unrelated to the
intrinsic alignment of dust grains. | arXiv |
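For reference, the empirical Serkowski relation fitted in such studies has the standard form below; whether K is fixed or treated as a free parameter varies between studies.

```latex
% Standard empirical Serkowski relation for the wavelength dependence of the
% degree of optical starlight polarization:
\[
  p(\lambda) \;=\; p_{\max}\,
  \exp\!\left[-K\,\ln^{2}\!\left(\frac{\lambda_{\max}}{\lambda}\right)\right],
\]
% where p_max is the peak polarization degree, lambda_max the wavelength at
% which it occurs, and K controls the width of the curve.
```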
Exploring nuclear physics through the fundamental constituents of the strong
force -- quarks and gluons -- is a formidable challenge. While numerical
calculations using lattice quantum chromodynamics offer the most promising
approach for this pursuit, practical implementation is arduous, especially due
to the uncontrollable growth of quark-combinatorics, the so-called
Wick-contraction problem of nuclei. We present here two novel methods providing
a state-of-the-art solution to this problem. In the first, we exploit
randomized algorithms inspired by computational number theory to detect and
eliminate redundancies that arise in Wick contraction computations. Our second
method explores facilities for automation of tensor computations -- in terms of
efficient utilization of specialized hardware, algorithmic optimizations, as
well as ease of programming and the potential for automatic code generation --
that are offered by new programming models inspired by applications in machine
learning (e.g., TensorFlow). We demonstrate the efficacy of our methods by
computing two-point correlation functions for Deuteron, Helium-3, Helium-4 and
Lithium-7, achieving at least an order of magnitude improvement over existing
algorithms with efficient implementation on GPU-accelerators. Additionally, we
discover an intriguing characteristic shared by all the nuclei we study:
specific spin-color combinations dominate the correlation functions, hinting at
a potential connection to an as-yet-unidentified symmetry in nuclei. Moreover,
identifying these dominant combinations beforehand can further and
substantially reduce the computing time. Our results, with the efficiency that
we achieved, suggest the
possibility of extending the applicability of our methods for calculating
properties of light nuclei, potentially up to A ~12 and beyond. | arXiv |
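The flavor of both ideas can be conveyed with a toy sketch: redundant contraction terms can be detected by evaluating them on shared random inputs (Schwartz-Zippel-style identity testing), and the surviving terms can be evaluated as tensor contractions in an ML-style framework. This is not the paper's actual algorithm; the index structure, rounding tolerance, and helper names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's actual algorithm): merge Wick-contraction
# terms that evaluate identically on shared random inputs -- with high
# probability they are algebraically identical -- then contract only the
# surviving terms. The toy indices below mimic (spin, color) propagators.
import numpy as np

rng = np.random.default_rng(0)

def reduce_terms(terms, coeffs, shapes, rounds=3):
    """Merge algebraically identical einsum terms by fingerprinting them on
    shared random tensors and summing their coefficients."""
    trials = [[rng.standard_normal(s) for s in shapes] for _ in range(rounds)]
    merged = {}
    for term, c in zip(terms, coeffs):
        key = tuple(round(float(np.einsum(term, *t)), 6) for t in trials)
        rep, acc = merged.get(key, (term, 0.0))
        merged[key] = (rep, acc + c)
    return [(t, c) for t, c in merged.values() if abs(c) > 1e-12]

# Toy usage: two "propagators" carrying (spin, color, spin, color) indices.
shapes = [(4, 3, 4, 3), (4, 3, 4, 3)]
terms = ["abcd,cdab->", "abcd,abcd->", "abcd,cdab->"]  # third duplicates the first
props = [rng.standard_normal(s) for s in shapes]
surviving = reduce_terms(terms, [1.0, -1.0, 1.0], shapes)
correlator = sum(c * np.einsum(t, *props) for t, c in surviving)
```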
Disks around intermediate mass stars called Herbig disks are the formation
sites of giant exoplanets. Obtaining a complete inventory of these disks will
therefore give insights into giant planet formation. However, until now no
complete disk survey has been done on Herbig disks in a single star-forming
region. Orion is the only nearby region with a significant number of Herbig
disks (N=35) to carry out such a survey. Combining new NOEMA observations of 25
Herbig disks with ALMA archival data of 10 Herbig disks results in a complete
sample of all known Herbig disks in Orion. Using uv-plane analysis for the
NOEMA-observed disks, and literature values for the ALMA-observed disks, we
obtain the dust masses of all Herbig disks and derive a cumulative dust mass
distribution. Additionally, six disks with new CO isotopologue detections are
presented, one of which is detected in C17O. We
calculate the external ultraviolet (UV) irradiance on each disk and compare the
dust mass to it. We find a median disk dust mass of 11.7 M_\oplus for the
Herbig disks. Comparing the Herbig disks in Orion to previous surveys for
mainly T Tauri disks in Orion, we find that while ~50% of the Herbig disks have
a mass higher than 10 M_\oplus, this is at most 25% for the T Tauri disks. This
difference is especially striking when considering that the Herbig disks are
around a factor of two older than the T Tauri disks. Comparing to the Herbig
disks observed with ALMA from a previous study, no significant difference is
found between the distributions. We find a steeper relationship (slope of -7.6)
between the dust mass and the external UV irradiation than that of the T Tauri
disks (slope of -1.3). This work shows the importance of complete samples,
underscoring the need for a complete survey of the Herbig disk
population. | arXiv |
Charge density waves (CDW) in single-layer 1$T$-MTe$_2$ (M = Nb, Ta) have
recently attracted considerable attention because of the contrasting structural and physical
behavior with the sulfide and selenide analogues. A first-principles study of
fourteen different 1$T$-type TaTe$_2$ single-layers is reported. The importance
of Te to Ta electron transfer and multicenter metal-metal bonding in
stabilizing different structural modulations is highlighted. Analysis of the
electronic structure of the optimized structures provides a rationale for what
distinguishes 1$T$-TaTe$_2$ from the related disulfide and diselenide, what are
the more stable structural modulations for 1$T$-type TaTe$_2$ single-layers,
the possible role of Fermi surface nesting on some of these CDW instabilities,
how the CDW affects the metallic properties of the non-distorted lattice and
the possibility that some of these CDW phases exhibit exotic properties. All
CDW phases studied exhibit band structures typical of metallic systems,
although some display both very flat and dispersive bands at the Fermi level,
so that Mott effects could develop; one of the (4$\times$4) phases studied exhibits a
Dirac cone at the Fermi level. | arXiv |
We adapt the topological quantum chemistry formalism to layer groups, and
apply it to study the band topology of 8,872 entries from the computational
two-dimensional (2D) materials databases C2DB and MC2D. In our analysis, we
find 4,073 topologically non-trivial or obstructed atomic insulator entries,
including 905 topological insulators, 602 even-electron number topological
semimetals, and 1,003 obstructed atomic insulators. We thus largely expand the
library of known topological or obstructed materials in two dimensions, beyond
the few hundreds known to date. We additionally classify the materials into
four categories: experimentally existing, stable, computationally exfoliated,
and not stable. We present a detailed analysis of the edge states emerging in a
number of selected new materials, and compile a Topological 2D Materials
Database (2D-TQCDB) containing the band structures and detailed topological
properties of all the materials studied in this work. The methodology here
developed is implemented in new programs available to the public, designed to
study the topology of any non-magnetic monolayer or multilayer 2D material. | arXiv |
We show that the anisotropies in the spectrum of gravitational waves induced
by scalar modes after the end of inflation in canonical, single-field models
are completely determined by the tilt of the scalar and tensor power spectra.
The latter contains information about anisotropies produced due to the
propagation of the tensor modes in an inhomogeneous Universe, whereas the
former represents the anisotropies generated at the time of production and
arise only when non-Gaussian corrections to the angular power spectrum are
considered. Our proof takes into account all scalar interactions in the cubic
inflaton Lagrangian. | arXiv |
Several recent works seek to develop foundation models specifically for
medical applications, adapting general-purpose large language models (LLMs) and
vision-language models (VLMs) via continued pretraining on publicly available
biomedical corpora. These works typically claim that such domain-adaptive
pretraining (DAPT) improves performance on downstream medical tasks, such as
answering medical licensing exam questions. In this paper, we compare ten
public "medical" LLMs and two VLMs against their corresponding base models,
arriving at a different conclusion: all medical VLMs and nearly all medical
LLMs fail to consistently improve over their base models in the zero-/few-shot
prompting and supervised fine-tuning regimes for medical question-answering
(QA). For instance, across all tasks and model pairs we consider in the 3-shot
setting, medical LLMs only outperform their base models in 22.7% of cases,
reach a (statistical) tie in 36.8% of cases, and are significantly worse than
their base models in the remaining 40.5% of cases. Our conclusions are based on
(i) comparing each medical model head-to-head, directly against the
corresponding base model; (ii) optimizing the prompts for each model separately
in zero-/few-shot prompting; and (iii) accounting for statistical uncertainty
in comparisons. While these basic practices are not consistently adopted in the
literature, our ablations show that they substantially impact conclusions.
Meanwhile, we find that after fine-tuning on specific QA tasks, medical LLMs
can show performance improvements, but the benefits do not carry over to tasks
based on clinical notes. Our findings suggest that state-of-the-art
general-domain models may already exhibit strong medical knowledge and
reasoning capabilities, and offer recommendations to strengthen the conclusions
of future studies. | arXiv |
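The kind of paired comparison with statistical uncertainty described above can be sketched as follows; the paper's exact statistical test is not specified in the abstract, so the paired bootstrap, significance level, and names below are illustrative assumptions.

```python
# Hypothetical sketch: paired bootstrap over per-question correctness for a
# medical model and its base model on the same QA items, classifying the
# outcome as a win, a loss, or a statistical tie. Details are illustrative.
import numpy as np

def paired_bootstrap(medical_correct, base_correct, n_boot=10_000, alpha=0.05, seed=0):
    """Both inputs: arrays of 0/1 correctness on the same questions."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(medical_correct, float) - np.asarray(base_correct, float)
    n = len(diff)
    # Resample question indices with replacement and average the paired difference.
    boot = diff[rng.integers(0, n, size=(n_boot, n))].mean(axis=1)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    if lo > 0:
        return "medical model significantly better"
    if hi < 0:
        return "base model significantly better"
    return "statistical tie"
```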
When the coupling of a quantum system to its environment is non-negligible,
its steady state is known to deviate from the textbook Gibbs state. The
Bloch-Redfield quantum master equation, one of the most widely adopted
equations to solve the open quantum dynamics, cannot predict all the deviations
of the steady state of a quantum system from the Gibbs state. In this paper,
for a generic spin-boson model, we use a higher-order quantum master equation
(in the system-environment coupling strength) to analytically calculate all the
deviations of the steady state of the quantum system up to second order in the
coupling strength. We also show that this steady state is exactly identical to
the corresponding generalized Gibbs state, the so-called quantum mean force
Gibbs state, at arbitrary temperature. All these calculations are highly
general, making them immediately applicable to a wide class of systems well
modeled by the spin-boson model, ranging from various condensed-phase processes
to quantum thermodynamics. As an example, we use our results to study the
dynamics and the steady state of a double quantum dot system under physically
relevant choices of parameters. | arXiv |
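For reference, the quantum mean force Gibbs state mentioned above is commonly defined as follows; the notation $H_S$, $H_B$, $H_I$ for the system, bath, and interaction Hamiltonians is assumed here.

```latex
% Quantum mean force Gibbs state for total Hamiltonian H_tot = H_S + H_B + H_I
% at inverse temperature beta:
\[
  \rho_{\mathrm{MF}}
  \;=\;
  \frac{\operatorname{Tr}_{B}\!\left[e^{-\beta H_{\mathrm{tot}}}\right]}
       {\operatorname{Tr}_{SB}\!\left[e^{-\beta H_{\mathrm{tot}}}\right]},
  \qquad
  H_{\mathrm{tot}} = H_S + H_B + H_I .
\]
% In the vanishing-coupling limit H_I -> 0 this reduces to the textbook
% Gibbs state e^{-beta H_S} / Tr[e^{-beta H_S}].
```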
French language models, such as CamemBERT, have been widely adopted across
industries for natural language processing (NLP) tasks, with CamemBERT alone
seeing over 4 million downloads per month. However, these models face
challenges due to temporal concept drift, where outdated training data leads to
a decline in performance, especially when encountering new topics and
terminology. This issue emphasizes the need for updated models that reflect
current linguistic trends. In this paper, we introduce two new versions of the
CamemBERT base model, CamemBERTav2 and CamemBERTv2, designed to address these
challenges. CamemBERTav2 is based on the DeBERTaV3 architecture and makes use
of the Replaced Token Detection (RTD) objective for better contextual
understanding, while CamemBERTv2 is built on RoBERTa, which uses the Masked
Language Modeling (MLM) objective. Both models are trained on a significantly
larger and more recent dataset with longer context length and an updated
tokenizer that enhances tokenization performance for French. We evaluate the
performance of these models on both general-domain NLP tasks and
domain-specific applications, such as medical field tasks, demonstrating their
versatility and effectiveness across a range of use cases. Our results show
that these updated models vastly outperform their predecessors, making them
valuable tools for modern NLP systems. All our new models, as well as
intermediate checkpoints, are made openly available on Huggingface. | arXiv |
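Since the checkpoints are released on Hugging Face, loading them follows the standard transformers workflow sketched below. The repository identifier is an assumption for illustration and should be checked against the official model cards; the fill-mask example targets the MLM-trained CamemBERTv2 variant.

```python
# Minimal sketch of loading a released checkpoint with Hugging Face transformers.
# The repository id below is a hypothetical placeholder -- verify the exact
# name on the Hub before use.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "almanach/camembertv2-base"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Quick sanity check with a French fill-mask prompt.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"La capitale de la France est {tokenizer.mask_token}.")[:3])
```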