Cancer is the second leading cause of death, with chemotherapy as one of the
primary forms of treatment. As a result, researchers are turning to drug
combination therapy to decrease drug resistance and increase efficacy. Current
methods of drug combination screening, such as in vivo and in vitro assays, are
inefficient due to steep time and monetary costs. In silico methods have become
increasingly important for screening drugs, but current methods are inaccurate
and generalize poorly to unseen anticancer drugs. In this paper, I employ a
geometric deep-learning model utilizing a graph attention network that is
equivariant to 3D rotations, translations, and reflections, combined with structural
motifs. Additionally, the gene expression of cancer cell lines is utilized to
classify synergistic drug combinations specific to each cell line. I compared
the proposed geometric deep learning framework to current state-of-the-art
(SOTA) methods, and the proposed model architecture achieved greater
performance on all 12 benchmark tasks performed on the DrugComb dataset.
Specifically, the proposed framework outperformed other SOTA methods by an
accuracy difference greater than 28%. Based on these results, I believe that
the equivariant graph attention network's capability of learning geometric data
accounts for the large performance improvements. The model's ability to
generalize to unseen drugs is thought to be due to the structural motifs
providing a better representation of the molecule. Overall, I believe that the
proposed equivariant geometric deep learning framework serves as an effective
tool for virtually screening anticancer drug combinations for further
validation in a wet lab environment. The code for this work is made available
online at: https://github.com/WeToTheMoon/EGAT_DrugSynergy. | arXiv |
We consider the problem of counting the copies of a length-$k$ pattern
$\sigma$ in a sequence $f \colon [n] \to \mathbb{R}$, where a copy is a subset
of indices $i_1 < \ldots < i_k \in [n]$ such that $f(i_j) < f(i_\ell)$ if and
only if $\sigma(j) < \sigma(\ell)$. This problem is motivated by a range of
connections and applications in ranking, nonparametric statistics,
combinatorics, and fine-grained complexity, especially when $k$ is a small
fixed constant.
Recent advances have significantly improved our understanding of counting and
detecting patterns. Guillemot and Marx [2014] demonstrated that the detection
variant is solvable in $O(n)$ time for any fixed $k$. Their proof has laid the
foundations for the discovery of the twin-width, a concept that has notably
advanced parameterized complexity in recent years. Counting, in contrast, is
harder: it has a conditional lower bound of $n^{\Omega(k / \log k)}$
[Berendsohn, Kozma, and Marx 2019] and is expected to be polynomially harder
than detection as early as $k = 4$, given its equivalence to counting
$4$-cycles in graphs [Dudek and Gawrychowski, 2020].
In this work, we design a deterministic near-linear time
$(1+\varepsilon)$-approximation algorithm for counting $\sigma$-copies in $f$
for all $k \leq 5$. Combined with the conditional lower bound for $k=4$, this
establishes the first known separation between approximate and exact algorithms
for pattern counting. Interestingly, our algorithm leverages the Birg\'e
decomposition -- a sublinear tool for monotone distributions widely used in
distribution testing -- which, to our knowledge, has not been applied in a
pattern counting context before. | arXiv |
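As a concrete illustration of the counting problem defined in the abstract above, here is a minimal brute-force reference implementation (O(n^k), purely for checking the definition; it is unrelated to the near-linear approximation algorithm of the paper):

```python
from itertools import combinations

def count_pattern_copies(f, sigma):
    """Count copies of the pattern sigma in the sequence f.

    A copy is a set of indices i_1 < ... < i_k such that
    f(i_j) < f(i_l) if and only if sigma(j) < sigma(l).
    Runs in O(n^k) time, so it is only a reference implementation.
    """
    k = len(sigma)
    count = 0
    for idx in combinations(range(len(f)), k):
        if all((f[idx[j]] < f[idx[l]]) == (sigma[j] < sigma[l])
               for j in range(k) for l in range(k) if j != l):
            count += 1
    return count

# Example: copies of the pattern (1, 3, 2) in a short sequence.
print(count_pattern_copies([2.0, 5.0, 1.0, 4.0, 3.0], (1, 3, 2)))
```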
Following the milestones in large language models (LLMs) and multimodal
models, we have seen a surge in applying LLMs to biochemical tasks. Leveraging
graph features and molecular text representations, LLMs can tackle various
tasks, such as predicting chemical reaction outcomes and describing molecular
properties. However, most current work overlooks the multi-level nature of
graph features. The impact of different feature levels on LLMs and the
importance of each level remain unexplored, and it is possible that different
chemistry tasks require different feature levels. In this work, we first
investigate the effect of feature granularity by fusing GNN-generated feature
tokens, discovering that even reducing all tokens to a single token does not
significantly impact performance. We then explore the effect of various feature
levels on performance, finding that both the quality of LLM-generated molecules
and performance on different tasks benefit from different feature levels. We
conclude with two key insights: (1) current molecular multimodal LLMs (MLLMs)
lack a comprehensive understanding of graph features, and (2) static processing
is not sufficient for hierarchical graph features. Our code will be publicly
available soon. | arXiv |
Large language models (LLMs), such as ChatGPT released by OpenAI, have
attracted significant attention from both industry and academia due to their
demonstrated ability to generate high-quality content for various tasks.
Despite the impressive capabilities of LLMs, there are growing concerns
regarding their potential risks in various fields, such as news, education, and
software engineering. Recently, several commercial and open-source
LLM-generated content detectors have been proposed, which, however, are
primarily designed for detecting natural language content without considering
the specific characteristics of program code. This paper aims to fill this gap
by proposing a novel ChatGPT-generated code detector, CodeGPTSensor, based on a
contrastive learning framework and a semantic encoder built with UniXcoder. To
assess the effectiveness of CodeGPTSensor on differentiating ChatGPT-generated
code from human-written code, we first curate a large-scale Human and Machine
comparison Corpus (HMCorp), which includes 550K pairs of human-written and
ChatGPT-generated code (i.e., 288K Python code pairs and 222K Java code pairs).
Based on the HMCorp dataset, our qualitative and quantitative analysis of the
characteristics of ChatGPT-generated code reveals both the challenges and the
opportunities of distinguishing ChatGPT-generated code from human-written code based
on their representative features. Our experimental results indicate that CodeGPTSensor
can effectively identify ChatGPT-generated code, outperforming all selected
baselines. | arXiv |
In this work, we address the cooperation problem among large language model
(LLM) based embodied agents, where agents must cooperate to achieve a common
goal. Previous methods often execute actions extemporaneously and incoherently,
without long-term strategic and cooperative planning, leading to redundant
steps, failures, and even serious repercussions in complex tasks like
search-and-rescue missions, where discussion and a cooperative plan are crucial.
To solve this issue, we propose Cooperative Plan Optimization (CaPo) to enhance
the cooperation efficiency of LLM-based embodied agents. Inspired by human
cooperation schemes, CaPo improves cooperation efficiency with two phases: 1)
meta-plan generation, and 2) progress-adaptive meta-plan and execution. In the
first phase, all agents analyze the task, discuss, and cooperatively create a
meta-plan that decomposes the task into subtasks with detailed steps, ensuring
a long-term strategic and coherent plan for efficient coordination. In the
second phase, agents execute tasks according to the meta-plan and dynamically
adjust it based on their latest progress (e.g., discovering a target object)
through multi-turn discussions. This progress-based adaptation eliminates
redundant actions, improving the overall cooperation efficiency of agents.
Experimental results on the ThreeDworld Multi-Agent Transport and Communicative
Watch-And-Help tasks demonstrate that CaPo achieves a much higher task completion
rate and efficiency compared with state-of-the-art methods. | arXiv |
An improved bilinear fuzzy genetic algorithm (BFGA) is introduced in this
chapter for the design optimization of steel structures with semi-rigid
connections. Semi-rigid connections provide a compromise between the stiffness
of fully rigid connections and the flexibility of fully pinned connections.
However, designing such structures is challenging due to the nonlinear behavior
of semi-rigid connections. The BFGA is a robust optimization method that
combines the strengths of fuzzy logic and genetic algorithm to handle the
complexity and uncertainties of structural design problems. Compared to the
standard GA, the BFGA is shown to generate high-quality solutions in a reasonable
time. The application of the BFGA is demonstrated through the optimization of
steel structures with semi-rigid connections, considering the weight and
performance criteria. The results show that the proposed BFGA is capable of
finding optimal designs that satisfy all the design requirements and
constraints. The proposed approach provides a promising solution for the
optimization of complex structures with nonlinear behavior. | arXiv |
A wide range of transformer-based language models have been proposed for
information retrieval tasks. However, fine-tuning and inference of these models
is often complex and requires substantial engineering effort. This paper
introduces Lightning IR, a PyTorch Lightning-based framework for fine-tuning
and inference of transformer-based language models for information retrieval.
Lightning IR provides a modular and extensible architecture that supports all
stages of an information retrieval pipeline: from fine-tuning and indexing to
searching and re-ranking. It is designed to be straightforward to use,
scalable, and reproducible. Lightning IR is available as open-source:
https://github.com/webis-de/lightning-ir. | arXiv |
We study the birational geometry of hypersurfaces in products of weighted
projective spaces, extending results previously established by J. C. Ottem. For
most cases where these hypersurfaces are Mori dream spaces, we determine all
relevant cones and characterise their birational models, along with the small
$\mathbf{Q}$-factorial modifications to them. We also provide a presentation of
their Cox ring. Finally, we establish the birational form of the
Kawamata-Morrison cone conjecture for terminal Calabi-Yau hypersurfaces in
Gorenstein products of weighted projective spaces. | arXiv |
Intracerebral hemorrhage (ICH) is the most fatal subtype of stroke and is
characterized by a high incidence of disability. Accurate segmentation of the
ICH region and prognosis prediction are critically important for developing and
refining treatment plans for post-ICH patients. However, existing approaches
address these two tasks independently and predominantly focus on imaging data
alone, thereby neglecting the intrinsic correlation between the tasks and
modalities. This paper introduces a multi-task network, ICH-SCNet, designed for
both ICH segmentation and prognosis classification. Specifically, we integrate
a SAM-CLIP cross-modal interaction mechanism that combines medical text and
segmentation auxiliary information with neuroimaging data to enhance
cross-modal feature recognition. Additionally, we develop an effective feature
fusion module and a multi-task loss function to improve performance further.
Extensive experiments on an ICH dataset reveal that our approach surpasses
other state-of-the-art methods. It excels in the overall performance of
classification tasks and outperforms competing models in all segmentation task
metrics. | arXiv |
With the rapid advancement of neural language models, the deployment of
over-parameterized models has surged, increasing the need for interpretable
explanations comprehensible to human inspectors. Existing post-hoc
interpretability methods, which often focus on unigram features of single input
textual instances, fail to capture the models' decision-making process fully.
Additionally, many methods do not differentiate between decisions based on
spurious correlations and those based on a holistic understanding of the input.
Our paper introduces DISCO, a novel method for discovering global, rule-based
explanations by identifying causal n-gram associations with model predictions.
This method employs a scalable sequence mining technique to extract relevant
text spans from training data, associate them with model predictions, and
conduct causality checks to distill robust rules that elucidate model behavior.
These rules expose potential overfitting and provide insights into misleading
feature combinations. We validate DISCO through extensive testing,
demonstrating its superiority over existing methods in offering comprehensive
insights into complex model behaviors. Our approach successfully identifies all
shortcuts manually introduced into the training data (100% detection rate on
the MultiRC dataset), resulting in an 18.8% regression in model performance --
a capability unmatched by any other method. Furthermore, DISCO supports
interactive explanations, enabling human inspectors to distinguish spurious
causes in the rule-based output. This alleviates the burden of abundant
instance-wise explanations and helps assess the model's risk when encountering
out-of-distribution (OOD) data. | arXiv |
This paper presents the second-placed solution for task 8 and the
participation solution for task 7 of BraTS 2024. The adoption of automated
brain analysis algorithms to support clinical practice is increasing. However,
many of these algorithms struggle with the presence of brain lesions or the
absence of certain MRI modalities. The alterations in the brain's morphology
lead to high variability and thus poor performance of predictive models that
were trained only on healthy brains. The lack of information that is usually
provided by some of the missing MRI modalities also reduces the reliability of
the prediction models trained with all modalities. In order to improve the
performance of these models, we propose the use of conditional 3D wavelet
diffusion models. The wavelet transform enabled full-resolution image training
and prediction on a GPU with 48 GB VRAM, without patching or downsampling,
preserving all information for prediction. For the inpainting task of BraTS
2024, the use of a large and variable number of healthy masks and the stability
and efficiency of the 3D wavelet diffusion model resulted in 0.007, 22.61, and
0.842 on the validation set and 0.07, 22.8, and 0.91 on the testing set (MSE,
PSNR, and SSIM, respectively). The code for these tasks is available at
https://github.com/ShadowTwin41/BraTS_2023_2024_solutions. | arXiv |
The hydrodynamic instabilities of propagating interfaces in Hele-Shaw
channels or porous media under the influence of an imposed flow and
gravitational acceleration are investigated within the framework of Darcy's
law. The stability analysis pertains to an interface between two fluids with
different densities, viscosities, and permeabilities, which can be susceptible
to Darrieus-Landau, Saffman-Taylor, and Rayleigh-Taylor instabilities. A
theoretical analysis, treating the interface as a hydrodynamic discontinuity,
yields a simple dispersion relation between the perturbation growth rate $s$
and its wavenumber $k$ in the form $s=(ak - bk^2)/(1+ck)$, where $a$, $b$ and
$c$ are constants determined by problem parameters. The constant $a$
characterises all three hydrodynamic instabilities, which are long-wave in
nature. In contrast, $b$ and $c$, which characterize the influences of local
curvature and flow strain on interface propagation speed, typically provide
stabilisation at short wavelengths comparable to the interface's diffusive
thickness. The theoretical findings for Darcy's law are compared with a
generalisation of the classical work by Joulin & Sivashinsky, which is based on
an Euler-Darcy model. The comparison provides a conceptual bridge between
predictions based on Darcy's law and those on Euler's equation and offers
valuable insights into the role of confinement on interface instabilities in
Hele-Shaw channels. Numerical analyses of the instabilities are carried out for
premixed flames using a simplified chemistry model and Darcy's law. The
numerical results corroborate the explicit formula with reasonable
accuracy. Time-dependent numerical simulations of unstable premixed flames are
carried out to gain insights into the nonlinear development of these
instabilities. | arXiv |
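For illustration of the dispersion relation $s=(ak - bk^2)/(1+ck)$ quoted in the abstract above, a short numerical sketch follows; the constants $a$, $b$, $c$ used here are arbitrary placeholders rather than values derived from the paper's problem parameters:

```python
import numpy as np

def growth_rate(k, a, b, c):
    """Perturbation growth rate s(k) = (a*k - b*k**2) / (1 + c*k)."""
    return (a * k - b * k**2) / (1.0 + c * k)

# Placeholder constants; in the paper a, b, c are determined by the problem parameters.
a, b, c = 1.0, 0.05, 0.2

k = np.linspace(0.0, 25.0, 2501)
s = growth_rate(k, a, b, c)

k_cutoff = a / b               # s changes sign at k = a/b (long-wave instability band)
k_star = k[np.argmax(s)]       # numerically most unstable wavenumber
print(f"cutoff wavenumber ~ {k_cutoff:.1f}, most unstable k ~ {k_star:.2f}")
```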
Distributed traces contain valuable information but are often massive in
volume, posing a core challenge in tracing framework design: balancing the
tradeoff between preserving essential trace information and reducing trace
volume. To address this tradeoff, previous approaches typically used a '1 or 0'
sampling strategy: retaining sampled traces while completely discarding
unsampled ones. However, based on an empirical study on real-world production
traces, we discover that the '1 or 0' strategy actually fails to effectively
balance this tradeoff.
To achieve a more balanced outcome, we shift the strategy from the '1 or 0'
paradigm to the 'commonality + variability' paradigm. The core of 'commonality
+ variability' paradigm is to first parse traces into common patterns and
variable parameters, then aggregate the patterns and filter the parameters. We
propose a cost-efficient tracing framework, Mint, which implements the
'commonality + variability' paradigm on the agent side to enable the capture of
all requests. Our experiments show that Mint can capture all traces and retain
more trace information while optimizing trace storage (reduced to an average of
2.7%) and network overhead (reduced to an average of 4.2%). Moreover,
experiments also demonstrate that Mint is lightweight enough for production
use. | arXiv |
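A toy sketch of the 'commonality + variability' idea from the abstract above: each entry is parsed into a common pattern plus variable parameters, patterns are aggregated, and parameters are kept aside for filtering. The regex and example entries are illustrative assumptions, not Mint's actual parser:

```python
import re
from collections import Counter

VAR = re.compile(r"\b(?:\d+|[0-9a-f]{8,}|[0-9a-f-]{36})\b")  # numbers, hashes, UUIDs

def parse(entry: str):
    """Split a raw span description into a common pattern and its variable parameters."""
    params = VAR.findall(entry)
    pattern = VAR.sub("<*>", entry)
    return pattern, params

entries = [
    "GET /api/user/1042 took 12 ms",
    "GET /api/user/2291 took 9 ms",
    "GET /api/order/77 took 103 ms",
]

patterns = Counter()
parameters = []
for e in entries:
    pat, par = parse(e)
    patterns[pat] += 1          # aggregate the commonality
    parameters.append(par)      # keep (or filter) the variability

print(patterns)
```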
Large language models (LLMs), with advanced linguistic capabilities, have
been employed in reranking tasks through a sequence-to-sequence approach. In
this paradigm, multiple passages are reranked in a listwise manner and a
textual reranked permutation is generated. However, due to the limited context
window of LLMs, this reranking paradigm requires a sliding window strategy to
iteratively handle larger candidate sets. This not only increases computational
costs but also restricts the LLM from fully capturing all the comparison
information for all candidates. To address these challenges, we propose a novel
self-calibrated listwise reranking method, which aims to leverage LLMs to
produce global relevance scores for ranking. To achieve this, we first propose
the relevance-aware listwise reranking framework, which incorporates explicit
list-view relevance scores to improve reranking efficiency and enable global
comparison across the entire candidate set. Second, to ensure the comparability
of the computed scores, we propose self-calibrated training that uses
point-view relevance assessments generated internally by the LLM itself to
calibrate the list-view relevance assessments. Extensive experiments and
comprehensive analysis on the BEIR benchmark and TREC Deep Learning Tracks
demonstrate the effectiveness and efficiency of our proposed method. | arXiv |
An outerplanar graph is a planar graph that has a planar drawing with all
vertices on the unbounded face. The matching complex of a graph is the
simplicial complex whose faces are subsets of disjoint edges of the graph. In
this paper we prove that the matching complexes of outerplanar graphs are
contractible or homotopy equivalent to a wedge of spheres. This extends known
results about trees and polygonal line tilings. | arXiv |
Robustness is a fundamental aspect for developing safe and trustworthy
models, particularly when they are deployed in the open world. In this work we
analyze the inherent capability of one-stage object detectors to robustly
operate in the presence of out-of-distribution (OoD) data. Specifically, we
propose a novel detection algorithm for detecting unknown objects in image
data, which leverages the features extracted by the model from each sample.
Differently from other recent approaches in the literature, our proposal does
not require retraining the object detector, thereby allowing for the use of
pretrained models. Our proposed OoD detector exploits the application of
supervised dimensionality reduction techniques to mitigate the effects of the
curse of dimensionality on the features extracted by the model. Furthermore, it
utilizes high-resolution feature maps to identify potential unknown objects in
an unsupervised fashion. Our experiments analyze the Pareto trade-off between
the performance in detecting known and unknown objects resulting from different
algorithmic configurations and inference confidence thresholds. We also compare
the performance of our proposed algorithm to that of logits-based post-hoc OoD
methods, as well as possible fusion strategies. Finally, we discuss the
competitiveness of all tested methods against state-of-the-art OoD approaches
for object detection models over the recently published Unknown Object
Detection benchmark. The obtained results verify that the performance of
avant-garde post-hoc OoD detectors can be further improved when combined with
our proposed algorithm. | arXiv |
The challenges in dense ultra-reliable low-latency communication networks to
deliver the required service to multiple devices are addressed by three main
technologies: multiple antennas at the base station (MISO), rate splitting
multiple access (RSMA) with private and common message encoding, and
simultaneously transmitting and reflecting reconfigurable intelligent surfaces
(STAR-RIS). Careful resource allocation, encompassing beamforming and RIS
optimization, is required to exploit the synergy between the three. We propose
an alternating optimization-based algorithm, relying on
minorization-maximization. Numerical results show that the achievable
second-order max-min rates of the proposed scheme outperform the baselines
significantly. MISO, RSMA, and STAR-RIS all contribute to enabling
ultra-reliable low-latency communication (URLLC). | arXiv |
Statistical heterogeneity is a measure of how skewed the samples of a dataset
are. It is a common problem in the study of differential privacy that the usage
of a statistically heterogeneous dataset results in a significant loss of
accuracy. In federated scenarios, statistical heterogeneity is more likely to
happen, and so the above problem is even more pressing. We explore the three
most promising ways to measure statistical heterogeneity and give formulae for
their accuracy, while simultaneously incorporating differential privacy. We
find the optimum privacy parameters via an analytic mechanism, which
incorporates root finding methods. We validate the main theorems and related
hypotheses experimentally, and test the robustness of the analytic mechanism to
different heterogeneity levels. The analytic mechanism in a distributed setting
delivers superior accuracy to all combinations involving the classic mechanism
and/or the centralized setting. None of the measures of statistical heterogeneity
loses significant accuracy when a heterogeneous sample is used. | arXiv |
BosonSampling is a popular candidate for near-term quantum advantage, which
has now been experimentally implemented several times. The original proposal of
Aaronson and Arkhipov from 2011 showed that classical hardness of BosonSampling
is implied by a proof of the "Gaussian Permanent Estimation" conjecture. This
conjecture states that $e^{-n\log{n}-n-O(\log n)}$ additive error estimates to
the output probability of most random BosonSampling experiments are $\#P$-hard.
Proving this conjecture has since become the central question in the theory of
quantum advantage.
In this work we make progress by proving that $e^{-n\log n -n - O(n^\delta)}$
additive error estimates to output probabilities of most random BosonSampling
experiments are $\#P$-hard, for any $\delta>0$. In the process, we circumvent
all known barrier results for proving the hardness of BosonSampling
experiments. This is nearly the robustness needed to prove hardness of
BosonSampling -- the remaining hurdle is now "merely" to show that the
$n^\delta$ in the exponent can be improved to $O(\log n).$ We also obtain an
analogous result for Random Circuit Sampling.
Our result allows us to show, for the first time, a hardness of classical
sampling result for random BosonSampling experiments, under an
anticoncentration conjecture. Specifically, we prove the impossibility of
multiplicative-error sampling from random BosonSampling experiments with
probability $1-e^{-O(n)}$, unless the Polynomial Hierarchy collapses. | arXiv |
We explore the sensitivity of future muon colliders to CP-violating
interactions in the Higgs sector, specifically focusing on the process $\mu^-
\mu^+ \to h \bar{\nu}_{l} \nu_{l}$. Using a model-independent approach within
the framework of the Standard Model Effective Field Theory (SMEFT), we analyze
the contribution of dimension-six operators to Higgs-gauge boson couplings,
emphasizing CP-violating effects. To simulate the process, all signal and
background events are generated through MadGraph. The analysis provides 95\%
confidence level limits on the relevant Wilson coefficients $\tilde{c}_{HB}$,
$\tilde{c}_{HW}$, $\tilde{c}_{\gamma}$, with a comparative discussion of
existing experimental and phenomenological constraints. Our best constraints on
the $\tilde{c}_{HB}$, $\tilde{c}_{HW}$, $\tilde{c}_{\gamma}$ with an integrated
luminosity of 10 ab$^{-1}$ are $[-0.017148;0.018711]$, $[-0.002545;0.002837]$
and $[-0.010613;0.011210]$, respectively. In this context, this study
highlights the capability of future muon collider experiments to probe new
physics in the Higgs sector, potentially offering tighter constraints on
CP-violating Higgs-gauge boson interactions than those provided by current
colliders. | arXiv |
Consider an undirected graph G, representing a social network, where each
node is blue or red, corresponding to positive or negative opinion on a topic.
In the voter model, in discrete time rounds, each node picks a neighbour
uniformly at random and adopts its colour. Despite its significant popularity,
this model does not capture some fundamental real-world characteristics such as
the difference in the strengths of individuals' connections, individuals with a
neutral opinion on a topic, and individuals who are reluctant to update their
opinion. To address these issues, we introduce and study a generalisation of
the voter model. Motivated by campaigning strategies, we study the problem of
selecting a set of seed blue nodes to maximise the expected number of blue
nodes after some rounds. We prove that the problem is NP-hard and provide a
polynomial time approximation algorithm with the best possible approximation
guarantee. Our experiments on real-world and synthetic graph data demonstrate
that the proposed algorithm outperforms other algorithms. We also investigate
the convergence properties of the model. We prove that the process could take
an exponential number of rounds to converge. However, if we limit ourselves to
strongly connected graphs, the convergence time is polynomial and the period
(the number of states in convergence) divides the length of all cycles in the
graph. | arXiv |
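A toy simulation sketch of the kind of generalised voter model described in the abstract above, with edge weights, a neutral state, and per-node reluctance; the paper's exact model, seeding objective, and approximation algorithm are not reproduced, and all parameters below are illustrative assumptions:

```python
import random

# Node state: 'blue', 'red', or 'neutral'. Each node also has a stubbornness
# probability of keeping its current opinion, and edges carry weights.
def step(colors, stubborn, neighbors):
    new = dict(colors)
    for v, nbrs in neighbors.items():
        if not nbrs or random.random() < stubborn[v]:
            continue  # reluctant node keeps its opinion this round
        picks, weights = zip(*nbrs.items())
        new[v] = colors[random.choices(picks, weights=weights)[0]]
    return new

random.seed(0)
neighbors = {0: {1: 2.0, 2: 1.0}, 1: {0: 2.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
colors = {0: "blue", 1: "red", 2: "neutral"}
stubborn = {0: 0.8, 1: 0.1, 2: 0.0}

for _ in range(10):
    colors = step(colors, stubborn, neighbors)
print(colors)
```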
In offline reinforcement learning, a policy is learned using a static dataset
in the absence of costly feedback from the environment. In contrast to the
online setting, only using static datasets poses additional challenges, such as
policies generating out-of-distribution samples. Model-based offline
reinforcement learning methods try to overcome these by learning a model of the
underlying dynamics of the environment and using it to guide policy search. It
is beneficial but, with limited datasets, errors in the model and the issue of
value overestimation among out-of-distribution states can worsen performance.
Current model-based methods apply some notion of conservatism to the Bellman
update, often implemented using uncertainty estimation derived from model
ensembles. In this paper, we propose Constrained Latent Action Policies (C-LAP)
which learns a generative model of the joint distribution of observations and
actions. We cast policy learning as a constrained objective to always stay
within the support of the latent action distribution, and use the generative
capabilities of the model to impose an implicit constraint on the generated
actions, thereby eliminating the need for additional uncertainty penalties
on the Bellman update and significantly decreasing the number of gradient steps
required to learn a policy. We empirically evaluate C-LAP on the D4RL and
V-D4RL benchmark, and show that C-LAP is competitive to state-of-the-art
methods, especially outperforming on datasets with visual observations. | arXiv |
In a directed graph $D$, a vertex subset $S\subseteq V$ is a total dominating
set if every vertex of $D$ has an in-neighbor from $S$. A total dominating set
exists if and only if every vertex has at least one in-neighbor. We call the
orientation of such directed graphs valid. The total domination number of $D$,
denoted by $\gamma_t(D)$, is the size of the smallest total dominating set of
$D$. For an undirected graph $G$, we investigate the upper (or lower)
orientable total domination number of $G$, denoted by $\mathrm{DOM}_t(G)$ (or
$\mathrm{dom}_t(G)$), that is the maximum (or minimum) of the total domination
numbers over all valid orientations of $G$. We characterize those graphs for
which $\mathrm{DOM}_t(G)=|V(G)|-1$, and consequently we show that there exists
a family of graphs for which $\mathrm{DOM}_t(G)$ and $\mathrm{dom}_t(G)$ can be
as far as possible, namely $\mathrm{DOM}_t(G)=|V(G)|-1$ and
$\mathrm{dom}_t(G)=3$. | arXiv |
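A brute-force illustration of the definitions in the abstract above (validity of an orientation and the total domination number), exponential in the number of vertices and intended only as a reference:

```python
from itertools import combinations

def is_valid(n, arcs):
    """An orientation is valid iff every vertex has at least one in-neighbor."""
    return all(any(u == v for (_, u) in arcs) for v in range(n))

def total_domination_number(n, arcs):
    """Smallest S such that every vertex has an in-neighbor in S."""
    in_nbrs = {v: {u for (u, w) in arcs if w == v} for v in range(n)}
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(in_nbrs[v] & set(S) for v in range(n)):
                return size
    return None  # no total dominating set (orientation is not valid)

# Directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0: every vertex has exactly one in-neighbor.
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_valid(4, arcs), total_domination_number(4, arcs))
```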
Predicting temporal progress from visual trajectories is important for
intelligent robots that can learn, adapt, and improve. However, learning such a
progress estimator, or temporal value function, across different tasks and
domains requires both a large amount of diverse data and methods which can
scale and generalize. To address these challenges, we present Generative Value
Learning (GVL), a universal value function estimator that leverages the world
knowledge embedded in vision-language models (VLMs) to predict task progress.
Naively asking a VLM to predict values for a video sequence performs poorly due
to the strong temporal correlation between successive frames. Instead, GVL
poses value estimation as a temporal ordering problem over shuffled video
frames; this seemingly more challenging task encourages VLMs to more fully
exploit their underlying semantic and temporal grounding capabilities to
differentiate frames based on their perceived task progress, consequently
producing significantly better value predictions. Without any robot or task
specific training, GVL can in-context zero-shot and few-shot predict effective
values for more than 300 distinct real-world tasks across diverse robot
platforms, including challenging bimanual manipulation tasks. Furthermore, we
demonstrate that GVL permits flexible multi-modal in-context learning via
examples from heterogeneous tasks and embodiments, such as human videos. The
generality of GVL enables various downstream applications pertinent to
visuomotor policy learning, including dataset filtering, success detection, and
advantage-weighted regression -- all without any model training or finetuning. | arXiv |
Evolutionary Multi-Objective Optimization Algorithms (EMOAs) are widely
employed to tackle problems with multiple conflicting objectives. Recent
research indicates that not all objectives are equally important to the
decision-maker (DM). In the context of interactive EMOAs, preference
information elicited from the DM during the optimization process can be
leveraged to identify and discard irrelevant objectives, a crucial step when
objective evaluations are computationally expensive. However, much of the
existing literature fails to account for the dynamic nature of DM preferences,
which can evolve throughout the decision-making process and affect the
relevance of objectives. This study addresses this limitation by simulating
dynamic shifts in DM preferences within a ranking-based interactive algorithm.
Additionally, we propose methods to discard outdated or conflicting preferences
when such shifts occur. Building on prior research, we also introduce a
mechanism to safeguard relevant objectives that may become trapped in local or
global optima due to the diminished correlation with the DM-provided rankings.
Our experimental results demonstrate that the proposed methods effectively
manage evolving preferences and significantly enhance the quality and
desirability of the solutions produced by the algorithm. | arXiv |
Exact solutions are presented which describe either the evolution of fluid
distributions corresponding to a ghost star (vanishing total mass), or
the evolution of fluid distributions which attain the ghost star
status at some point of their lives. The first two solutions correspond to the
former case: they admit a conformal Killing vector (CKV) and describe the
adiabatic evolution of a ghost star. Two other solutions, corresponding to the
latter case, are found, which describe evolving fluid spheres absorbing energy
from the outside, leading to a vanishing total mass at some point of their
evolution. In this case the fluid is assumed to be expansion-free. In all four
solutions the condition of vanishing complexity factor was imposed. The
physical implications of the results are discussed. | arXiv |
It is well known that multiple Galactic thermal dust emission components may
exist along the line of sight, but a single-component approximation is still
widely used, since a full multi-component estimation requires a large number of
frequency bands that are only available with future experiments. In light of
this, we present a reliable, quantitative, and sensitive criterion to test the
goodness of all kinds of dust emission estimations. This can not only give a
definite answer to the quality of current single-component approximations, but
also help determine the preconditions of future multi-component estimations. Regarding
the former, previous works usually depend on a more complicated model to
improve the single-component dust emission; however, our method is free from
any additional model, and is sensitive enough to directly discover a
substantial discrepancy between the Planck HFI data (100-857 GHz) and
associated single-component dust emission estimations. This is the first time
that the single-component estimation is ruled out by the data itself. For the
latter, a similar procedure will be able to answer two important questions for
estimating the complicated Galactic emissions: the number of necessary
foreground components and their types. | arXiv |
The increasing growth of social media provides us with an instant opportunity
to be informed of the opinions of a large number of politically active
individuals in real-time. We can get an overall idea of the ideologies of these
individuals on governmental issues by analyzing the social media texts.
Nowadays, different kinds of news websites and popular social media such as
Facebook, YouTube, Instagram, etc. are the most popular means of communication
for the mass population. Thus, the political perception of users toward
different parties in the country is reflected in the data collected from these
social sites. In this work, we have extracted three types of features, namely
the stylometric feature, the word-embedding feature, and the TF-IDF feature.
Traditional machine learning classifiers and deep learning models are employed
to identify political ideology from the text. We have compared our methodology
with the research work in different languages. Among them, the word embedding
feature with LSTM outperforms all other models with 88.28% accuracy. | arXiv |
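A minimal sketch of the TF-IDF branch of the feature pipeline described in the abstract above, using scikit-learn with placeholder texts and labels; the stylometric and word-embedding features and the LSTM model are not shown:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["sample political post one", "sample political post two"]  # placeholder corpus
labels = [0, 1]                                                      # placeholder ideology labels

# TF-IDF features fed into a classical classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["another sample post"]))
```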
Recent studies have highlighted the significant potential of Large Language
Models (LLMs) as zero-shot relevance rankers. These methods predominantly
utilize prompt learning to assess the relevance between queries and documents
by generating a ranked list of potential documents. Despite their promise, the
substantial costs associated with LLMs pose a significant challenge for their
direct implementation in commercial search systems. To overcome this barrier
and fully exploit the capabilities of LLMs for text ranking, we explore
techniques to transfer the ranking expertise of LLMs to a more compact model
similar to BERT, using a ranking loss to enable the deployment of less
resource-intensive models. Specifically, we enhance the training of LLMs
through Continued Pre-Training, taking the query as input and the clicked title
and summary as output. We then proceed with supervised fine-tuning of the LLM
using a rank loss, assigning the final token as a representative of the entire
sentence. Given the inherent characteristics of autoregressive language models,
only the final token </s> can encapsulate all preceding tokens. Additionally,
we introduce a hybrid point-wise and margin MSE loss to transfer the ranking
knowledge from LLMs to smaller models like BERT. This method creates a viable
solution for environments with strict resource constraints. Both offline and
online evaluations have confirmed the efficacy of our approach, and our model
has been successfully integrated into a commercial web search engine as of
February 2024. | arXiv |
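A hedged sketch of a hybrid point-wise plus margin-MSE distillation objective of the kind described in the abstract above, written in PyTorch; the tensor names and the weighting factor alpha are assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_pos, student_neg, teacher_pos, teacher_neg, alpha=0.5):
    """Hybrid loss: point-wise MSE on absolute scores plus margin MSE on score gaps.

    student_*: relevance scores from the compact (BERT-like) model.
    teacher_*: relevance scores produced by the LLM ranker.
    """
    pointwise = F.mse_loss(student_pos, teacher_pos) + F.mse_loss(student_neg, teacher_neg)
    margin = F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)
    return alpha * pointwise + (1.0 - alpha) * margin

# Toy scores for a batch of (query, positive doc, negative doc) triples.
s_pos, s_neg = torch.tensor([2.1, 1.4]), torch.tensor([0.3, 0.9])
t_pos, t_neg = torch.tensor([2.5, 1.0]), torch.tensor([0.1, 0.7])
print(distillation_loss(s_pos, s_neg, t_pos, t_neg))
```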
Human understanding of language is robust to different word choices as long as
they represent similar semantic concepts. To what extent does our human
intuition transfer to language models, which represent all subwords as distinct
embeddings? In this work, we take an initial step on measuring the role of
shared semantics among subwords in the encoder-only multilingual language
models (mLMs). To this end, we form "semantic tokens" by merging the
semantically similar subwords and their embeddings, and evaluate the updated
mLMs on 5 heterogeneous multilingual downstream tasks. Results show that
general shared semantics can carry the models a long way in making
predictions, across mLMs with different tokenizers and model sizes. Inspections of
the grouped subwords show that they exhibit a wide range of semantic
similarities, including synonyms and translations across many languages and
scripts. Lastly, we found that the zero-shot results with semantic tokens are on par
with or even better than those of the original models on certain classification tasks,
suggesting that the shared subword-level semantics may serve as the anchors for
cross-lingual transfer. | arXiv |
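An illustrative sketch of forming a "semantic token" by merging the embeddings of semantically similar subwords, as described in the abstract above; the subword grouping and embedding table below are hypothetical, and the paper's actual similarity criterion and models are not reproduced:

```python
import torch

# Toy embedding table: 5 subwords with 4-dimensional embeddings.
embedding = torch.nn.Embedding(5, 4)

# Hypothetical group of semantically similar subword ids (e.g. "dog", "Hund", "chien").
group = torch.tensor([1, 3, 4])

# The merged "semantic token" embedding is the mean of the group's embeddings,
# and every member of the group is remapped onto it.
with torch.no_grad():
    merged = embedding.weight[group].mean(dim=0)
    embedding.weight[group] = merged
print(embedding.weight)
```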
Video-sharing platforms (VSPs) have been increasingly embracing social
features such as likes, comments, and Danmaku to boost user engagement.
However, viewers may post inappropriate content through video commentary to
gain attention or express themselves anonymously and even toxically. For
example, on VSPs that support Danmaku, users may even intentionally create a
"flood" of Danmaku with inappropriate content shown overlain on videos,
disrupting the overall user experience. Despite the prevalence of
inappropriate Danmaku on these VSPs, there is a lack of understanding about the
challenges and limitations of Danmaku content moderation on video-sharing
platforms. To explore how users perceive the challenges and limitations of
current Danmaku moderation methods on VSPs, we conducted probe-based interviews
and co-design activities with 21 active end-users. Our findings reveal that
one-size-fits-all rules, whether set by users or by customizable moderation tools,
cannot accurately match the continuous stream of Danmaku. Additionally, the moderation
requirements of the Danmaku and the definition of offensive content must
dynamically adjust to the video content. Non-intrusive methods should be used
to maintain the coherence of the video browsing experience. Our findings inform
the design of future Danmaku moderation tools on video-sharing platforms. | arXiv |
Query optimization has become a research area where classical algorithms are
being challenged by machine learning algorithms. At the same time, recent
trends in learned query optimizers have shown that it is prudent to take
advantage of decades of database research and augment classical query
optimizers by shrinking the plan search space through different types of hints
(e.g. by specifying the join type, scan type or the order of joins) rather than
completely replacing the classical query optimizer with machine learning
models. It is especially relevant for cases when classical optimizers cannot
fully enumerate all logical and physical plans and, as an alternative, need to
rely on less robust approaches like genetic algorithms. However, even
symbiotically learned query optimizers are hampered by the need for vast
amounts of training data, slow plan generation during inference and unstable
results across various workload conditions. In this paper, we present GenJoin,
a novel learned query optimizer that considers the query optimization problem
as a generative task and is capable of learning from a random set of subplan
hints to produce query plans that outperform the classical optimizer. GenJoin
is the first learned query optimizer that significantly and consistently
outperforms PostgreSQL as well as state-of-the-art methods on two well-known
real-world benchmarks across a variety of workloads using rigorous machine
learning evaluations. | arXiv |
Answering a question raised by V. V. Tkachuk, we present several examples of
$\sigma$-compact spaces, some only consistent and some in ZFC, that are not
countably tight but in which the closure of any discrete subset is countably
tight. In fact, in some of our examples the closures of all discrete subsets
are even first countable. | arXiv |
For a commutative noetherian ring $R$, we classify all the hereditary
cotorsion pairs cogenerated by pure-injective modules of finite injective
dimension. The classification is done in terms of integer-valued functions on
the spectrum of the ring. Each such function gives rise to a system of local
depth conditions which describes the left-hand class in the corresponding
cotorsion pair. Furthermore, we show that these cotorsion pairs correspond by
explicit duality to hereditary Tor-pairs generated by modules of finite flat
dimension. | arXiv |
Forward modeling the galaxy density within the Effective Field Theory of
Large Scale Structure (EFT of LSS) enables field-level analyses that are robust
to theoretical uncertainties. At the same time, they can maximize the
constraining power from galaxy clustering on the scales amenable to
perturbation theory. In order to apply the method to galaxy surveys, the
forward model must account for the full observational complexity of the data.
In this context, a major challenge is the inclusion of redshift space
distortions (RSDs) from the peculiar motion of galaxies. Here, we present
improvements in the efficiency and accuracy of the RSD modeling in the
perturbative LEFTfield forward model. We perform a detailed quantification of
the perturbative and numerical error for the prediction of momentum, velocity
and the redshift-space matter density. Further, we test the recovery of
cosmological parameters at the field level, namely the growth rate $f$, from
simulated halos in redshift space. For a rigorous test and to scan through a
wide range of analysis choices, we fix the linear (initial) density field to
the known ground truth but marginalize over all unknown bias coefficients and
noise amplitudes. With a third-order model for gravity and bias, our results
yield $<1\,\%$ statistical and $<1.5\,\%$ systematic error. The computational
cost of the redshift-space forward model is only $\sim 1.5$ times that of the
rest-frame equivalent, enabling future field-level inference that simultaneously
targets cosmological parameters and the initial matter distribution. | arXiv |
We have calculated the fission fragments' mass distributions for several
isotopes of heavy and super-heavy nuclei from uranium to flerovium within an
improved scission point model. For all considered nuclei, in addition to the
standard mass-asymmetric fission mode, we have found a mass super-asymmetric
mode with a heavy-fragment mass equal to 190. For the actinide nuclei, the
probability of super-asymmetric fission is 6 orders of magnitude smaller
than for standard asymmetric fission. For the superheavy nuclei, this
probability is only 2 orders of magnitude smaller. In all cases, the
super-asymmetric scission shapes are dumbbells with the heavy fragment close to
a sphere. We have estimated the stability of the light fragment with respect to
variation of the neck and found that sequential ternary fission is not
favored energetically. The calculations were carried out with the nuclear shape
described by generalized Cassinian ovals with 6 deformation parameters,
$\alpha, \alpha_1, \alpha_2, \alpha_3, \alpha_4$ and $\alpha_5$. The
configuration at the moment of the neck rupture was defined by fixing
$\alpha=0.98$. This value corresponds to a neck radius $r_{neck}\approx$ 1.5
fm. | arXiv |
As a generalisation of the periodic orbit structure often seen in reflection
or mirror symmetric MHD equilibria, we consider equilibria with other
orientation-reversing symmetries. An example of such a symmetry, which is not
a reflection, is $(x,y,z) \mapsto (-x,-y,-z)$ in $\mathbb{R}^3$. It is shown that,
under any orientation-reversing isometry, if the pressure function is
assumed to have a nested toroidal structure, then all orbits on the tori are
necessarily periodic. The techniques involved are almost entirely topological
in nature and give rise to a handy index describing how a diffeomorphism of
$\mathbb{R}^3$ alters the poloidal and toroidal curves of an invariant embedded
2-torus. | arXiv |
In this communication, we propose a tentative formulation of the fundamental problem
of a measurement process performed by a large structure on a microscopic one. We consider
the example of voting, when an entire society tries to globally measure the opinions
of all social actors in order to elect a delegate. We present a quantum model
to interpret an operational voting system and propose a quantum approach for the
grading step of Range Voting, developed by M. Balinski and R. Laraki in 2007. | arXiv |
The commuting graph ${\Gamma(G)}$ of a group $G$ is the simple undirected
graph with group elements as a vertex set and two elements $x$ and $y$ are
adjacent if and only if $xy=yx$ in $G$. By eliminating the identity element of
$G$ and all the dominant vertices of $\Gamma(G)$, the resulting subgraphs of
$\Gamma(G)$ are $\Gamma^*(G)$ and $\Gamma^{**}(G)$, respectively. In this
paper, we classify all the finite groups $G$ such that the graph $\Delta(G) \in
\{\Gamma(G), \Gamma^*(G), \Gamma^{**}(G)\}$ is the line graph of some graph. We
also classify all the finite groups $G$ whose graph $\Delta(G) \in \{\Gamma(G),
\Gamma^*(G), \Gamma^{**}(G)\}$ is the complement of a line graph. | arXiv |
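A small computational illustration of the definitions in the abstract above for the symmetric group $S_3$, with permutations stored as tuples of images; this only builds $\Gamma(G)$, $\Gamma^*(G)$ and $\Gamma^{**}(G)$ and implies nothing about the classification results:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))          # the symmetric group S_3
identity = tuple(range(3))

# Commuting graph Gamma(G): vertices are group elements, x ~ y iff xy = yx.
def adjacent(x, y):
    return x != y and compose(x, y) == compose(y, x)
edges = {frozenset((x, y)) for x in G for y in G if adjacent(x, y)}

# Dominant vertices of Gamma(G) are adjacent to every other vertex (the centre of G).
dominant = [v for v in G if all(adjacent(v, u) for u in G if u != v)]
gamma_star = [g for g in G if g != identity]           # Gamma*(G): drop the identity
gamma_star_star = [g for g in G if g not in dominant]  # Gamma**(G): drop dominant vertices

print(len(G), len(edges), len(gamma_star), len(gamma_star_star))
```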
Currently artificial intelligence (AI)-enabled chatbots are capturing the
hearts and imaginations of the public at large. Chatbots that users can build
and personalize, as well as pre-designed avatars ready for users' selection,
all of these are on offer in applications to provide social companionship,
friends and even love. These systems, however, have demonstrated challenges on
the privacy and ethics front. This paper takes a critical approach towards
examining the intricacies of these issues within AI companion services. We
chose Replika as a case and employed close reading to examine the service's
privacy policy. We additionally analyze articles from public media about the
company and its practices to gain insight into the trustworthiness and
integrity of the information provided in the policy. The aim is to ascertain
whether seeming General Data Protection Regulation (GDPR) compliance equals
reliability of required information, or whether the area of GDPR compliance in
itself is one riddled with ethical challenges. The paper contributes to a
growing body of scholarship on ethics and privacy related matters in the sphere
of social chatbots. The results reveal that despite privacy notices, data
collection practices might harvest personal data without users' full awareness.
Cross-textual comparison reveals that privacy notice information does not fully
correspond with other information sources. | arXiv |
Interpolatory filters are of great interest in subdivision schemes and
wavelet analysis. Due to the high-order linear-phase moment property,
interpolatory refinement filters are often used to construct wavelets and
framelets with high-order vanishing moments. In this paper, given a general
dilation matrix $\mathsf{M}$, we propose a method that allows us to construct a
dual $\mathsf{M}$-framelet from an arbitrary pair of $\mathsf{M}$-interpolatory
filters such that all framelet generators/high-pass filters (1) have the
interpolatory properties; (2) have high-order vanishing moments. Our method is
easy to implement, as the high-pass filters are either given in explicit
formulas or can be obtained by solving specific linear systems. Motivated by
constructing interpolatory dual framelets, we can further deduce a method to
construct an interpolatory quasi-tight framelet from an arbitrary interpolatory
filter. If, in addition, the refinement filters have symmetry, we will perform
a detailed analysis of the symmetry properties that the high-pass filters can
achieve. We will present several examples to demonstrate our theoretical
results. | arXiv |
For a set $A$ of positive integers with $\gcd(A)=1$, let $\langle A \rangle$
denote the set of all finite linear combinations of elements of $A$ over the
non-negative integers. It is well known that only finitely many positive
integers do not belong to $\langle A \rangle$. The Frobenius number and the
genus associated with the set $A$ are, respectively, the largest integer and the
cardinality of the set of positive integers not representable by $A$. By a generalized Fibonacci
sequence $\{V_n\}_{n \ge 1}$ we mean any sequence of positive integers
satisfying the recurrence $V_n=V_{n-1}+V_{n-2}$ for $n \ge 3$. We study the
problem of determining the Frobenius number and genus for sets
$A=\{V_n,V_{n+d},V_{n+2d},\ldots\}$ for arbitrary $n$, where $d$ is odd or $d=2$. | arXiv |
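A brute-force computation of the Frobenius number and genus for a small generating set, illustrating the definitions in the abstract above; the generalized Fibonacci values below (V_1 = 3, V_2 = 4, n = 2, d = 2, truncated to three generators) are an illustrative choice, not the paper's closed-form results:

```python
from math import gcd
from functools import reduce

def frobenius_and_genus(A):
    """Return (Frobenius number, genus) of the numerical semigroup <A>, gcd(A) = 1."""
    assert reduce(gcd, A) == 1
    bound = min(A) * max(A)   # search bound; exceeds the Frobenius number for this example
    representable = [False] * (bound + 1)
    representable[0] = True
    for n in range(1, bound + 1):
        representable[n] = any(n >= a and representable[n - a] for a in A)
    gaps = [n for n in range(1, bound + 1) if not representable[n]]
    return max(gaps), len(gaps)

# Generalized Fibonacci sequence 3, 4, 7, 11, 18, 29, ... with n = 2 and d = 2:
# A = {V_2, V_4, V_6, ...}, truncated to a few generators for illustration.
print(frobenius_and_genus([4, 11, 29]))
```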
The process of reconstructing quantum states from experimental measurements,
accomplished through quantum state tomography (QST), plays a crucial role in
verifying and benchmarking quantum devices. A key challenge of QST is to find
out how the accuracy of the reconstruction depends on the number of state
copies used in the measurements. When multiple measurement settings are used,
the total number of state copies is determined by multiplying the number of
measurement settings with the number of repeated measurements for each setting.
Due to statistical noise intrinsic to quantum measurements, a large number of
repeated measurements is often used in practice. However, recent studies have
shown that even with single-sample measurements--where only one measurement
sample is obtained for each measurement setting--high accuracy QST can still be
achieved with a sufficiently large number of different measurement settings. In
this paper, we establish a theoretical understanding of the trade-off between
the number of measurement settings and the number of repeated measurements per
setting in QST. Our focus is primarily on low-rank density matrix recovery
using Pauli measurements. We delve into the global landscape underlying the
low-rank QST problem and demonstrate that the joint consideration of
measurement settings and repeated measurements ensures a bounded recovery error
for all second-order critical points, to which optimization algorithms tend to
converge. This finding suggests the advantage of minimizing the number of
repeated measurements per setting when the total number of state copies is held
fixed. Additionally, we prove that the Wirtinger gradient descent algorithm can
converge to the region of second-order critical points with a linear
convergence rate. We have also performed numerical experiments to support our
theoretical findings. | arXiv |
Perfect complementary sequence sets (PCSSs) are widely used in multi-carrier
code-division multiple-access (MC-CDMA) communication system. However, the set
size of a PCSS is upper bounded by the number of row sequences of each
two-dimensional matrix in the PCSS. The quasi-complementary sequence set (QCSS)
was therefore proposed to support more users in MC-CDMA communications. For practical
applications, it is desirable to construct an $(M,K,N,\vartheta_{max})$-QCSS
with $M$ as large as possible and $\vartheta_{max}$ as small as possible, where
$M$ is the number of matrices with $K$ rows and $N$ columns in the set and
$\vartheta_{max}$ denotes its periodic tolerance. There exists a tradoff among
these parameters and constructing QCSSs achieving or nearly achieving the known
correlation lower bound has been an interesting research topic. Up to now, only
a few constructions of asymptotically optimal or near-optimal periodic QCSSs
were reported in the literature. In this paper, we construct five families of
asymptotically optimal or near-optimal periodic QCSSs with large set sizes and
low periodic tolerances. These families of QCSSs have set size $\Theta(q^2)$ or
$\Theta(q^3)$ and flock size $\Theta(q)$, where $q$ is a power of a prime. To
the best of our knowledge, only three known families of periodic QCSSs with set
size $\Theta(q^2)$ and flock size $\Theta(q)$ were constructed and all other
known periodic QCSSs have set sizes much smaller than $\Theta(q^2)$. Our new
constructed periodic QCSSs with set size $\Theta(q^2)$ and flock size
$\Theta(q)$ have better parameters than known ones. They have larger set sizes
or lower periodic tolerances. The periodic QCSSs with set size $\Theta(q^3)$ and
flock size $\Theta(q)$ constructed in this paper have the largest set size
among all known families of asymptotically optimal or near-optimal periodic
QCSSs. | arXiv |
Probing and manipulating the spatiotemporal dynamics of hot carriers in
nanoscale metals is crucial to a plethora of applications ranging from
nonlinear nanophotonics to single molecule photochemistry. The direct
investigation of these highly non-equilibrium carriers requires the
experimental capability of high energy resolution (~ meV) broadband femtosecond
spectroscopy. When considering the ultimate limits of atomic scale structures,
this capability has remained out of reach until date. Using a two color
femtosecond pump-probe spectroscopy, we present here the real-time tracking of
hot carrier dynamics in a well-defined plasmonic picocavity, formed in the
tunnel junction of a scanning tunneling microscope (STM). The excitation of hot
carriers in the picocavity enables ultrafast all optical control over the
broadband (~ eV) anti Stokes electronic resonance Raman scattering (ERRS) and
the four-wave mixing (FWM) signals generated at the atomic length scale. By
mapping the ERRS and FWM signals from a single graphene nanoribbon (GNR), we
demonstrate that both signals are more efficiently generated along the edges of
the GNR: a manifestation of atomic-scale nonlinear optical microscopy. This
demonstration paves the way to the development of novel ultrafast nonlinear
picophotonic platforms, affording unique opportunities in a variety of
contexts, from the direct investigation of non-equilibrium light-matter
interactions in complex quantum materials, to the development of robust
strategies for hot carriers harvesting in single molecules and the next
generation of active metasurfaces with deep-sub-wavelength meta-atoms. | arXiv |
We study timelike supersymmetric solutions of a $D=3, N=4$ gauged
supergravity using Killing spinor bilinears method and prove that AdS$_3$ is
the only solution within this class. We then consider the ungauged version of
this model. It is found that for this type of solutions, the ungauged theory
effectively truncates to a supergravity coupled to a sigma model with a
2-dimensional hyperbolic target space $\mathbb{H}^2$, and all solutions can be
expressed in terms of two arbitrary holomorphic functions. The spacetime metric
is a warped product of the time direction with a 2-dimensional space, and the
warp factor is given in terms of the K\"ahler potential of $\mathbb{H}^2$. We
show that when the holomorphic function that determines the sigma model scalar
fields is not constant, the metric on the sigma model target manifold becomes
part of the spacetime metric. We then look at some special choices for these
holomorphic functions for which the spacetime metric and the Killing spinors
are only radial dependent. We also derive supersymmetric null solutions of the
ungauged model which are pp-waves on the Minkowski spacetime. | arXiv |
We prove that the Kazhdan-Lusztig basis of Specht modules is upper triangular
with respect to all generalized Gelfand-Tsetlin bases constructed from any
multiplicity-free tower of standard parabolic subgroups. | arXiv |
Fine-tuning large language models (LLMs) is essential for enhancing their
performance on specific tasks but is often resource-intensive due to redundant
or uninformative data. To address this inefficiency, we introduce DELIFT (Data
Efficient Language model Instruction Fine-Tuning), a novel algorithm that
systematically optimizes data selection across the three key stages of
fine-tuning: (1) instruction tuning, (2) task-specific fine-tuning (e.g.,
reasoning, question-answering), and (3) continual fine-tuning (e.g.,
incorporating new data versions). Unlike existing methods that focus on
single-stage optimization or rely on computationally intensive gradient
calculations, DELIFT operates efficiently across all stages. Central to our
approach is a pairwise utility metric that quantifies how beneficial a data
sample is for improving the model's responses to other samples, effectively
measuring the informational value relative to the model's current capabilities.
By leveraging different submodular functions applied to this metric, DELIFT
selects diverse and optimal subsets that are useful across all stages of
fine-tuning. Experiments across various tasks and model scales demonstrate that
DELIFT can reduce the fine-tuning data size by up to 70% without compromising
performance, offering significant computational savings and outperforming
existing methods in both efficiency and efficacy. | arXiv |
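A hedged illustration of the selection step described above: a greedy maximizer of a facility-location objective (one common submodular function) applied to a hypothetical pairwise utility matrix. The matrix `utility`, the budget, and all names are assumptions made for this sketch, not DELIFT's actual implementation.

```python
import numpy as np

def greedy_facility_location(utility: np.ndarray, budget: int) -> list[int]:
    """Greedy subset selection maximizing the facility-location objective
    F(S) = sum_j max_{i in S} utility[i, j], a classic submodular function.

    utility[i, j] is a (hypothetical) pairwise score: how much sample i
    helps the model answer sample j. Returns indices of selected samples.
    """
    selected: list[int] = []
    best_cover = np.zeros(utility.shape[1])  # current max coverage per column
    for _ in range(budget):
        # marginal gain of adding each candidate sample
        gains = np.maximum(utility, best_cover).sum(axis=1) - best_cover.sum()
        gains[selected] = -np.inf  # never re-pick an already selected sample
        i = int(np.argmax(gains))
        selected.append(i)
        best_cover = np.maximum(best_cover, utility[i])
    return selected

# toy usage: 5 candidate samples scored against 4 reference samples
rng = np.random.default_rng(0)
U = rng.random((5, 4))
print(greedy_facility_location(U, budget=2))
```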
Vision-language model (VLM) embeddings have been shown to encode biases
present in their training data, such as societal biases that prescribe negative
characteristics to members of various racial and gender identities. VLMs are
being quickly adopted for a variety of tasks ranging from few-shot
classification to text-guided image generation, making debiasing VLM embeddings
crucial. Debiasing approaches that fine-tune the VLM often suffer from
catastrophic forgetting. On the other hand, fine-tuning-free methods typically
utilize a "one-size-fits-all" approach that assumes that correlation with the
spurious attribute can be explained using a single linear direction across all
possible inputs. In this work, we propose Bend-VLM, a nonlinear,
fine-tuning-free approach for VLM embedding debiasing that tailors the
debiasing operation to each unique input. This allows for a more flexible
debiasing approach. Additionally, we do not require knowledge of the set of
inputs prior to inference time, making our method more appropriate for
online, open-set tasks such as retrieval and text-guided image generation. | arXiv |
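For contrast with the input-adaptive approach proposed above, the following is a minimal sketch of the "one-size-fits-all" linear baseline the abstract criticizes: every embedding has the same estimated spurious-attribute direction projected out. The centroid-difference direction estimate and all names are illustrative assumptions, not Bend-VLM itself.

```python
import numpy as np

def linear_debias(embeddings: np.ndarray, spurious_direction: np.ndarray) -> np.ndarray:
    """Project out a single spurious-attribute direction from all embeddings.

    This is the 'one-size-fits-all' linear baseline: the same direction is
    removed for every input, regardless of the input's content.
    """
    v = spurious_direction / np.linalg.norm(spurious_direction)
    return embeddings - np.outer(embeddings @ v, v)

# toy usage: estimate the direction as the difference of two attribute centroids
group_a = np.random.randn(100, 512) + 0.5   # embeddings with attribute A
group_b = np.random.randn(100, 512) - 0.5   # embeddings with attribute B
direction = group_a.mean(axis=0) - group_b.mean(axis=0)
debiased = linear_debias(np.vstack([group_a, group_b]), direction)
```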
Over the years, there has been extensive work on fully dynamic algorithms for
classic graph problems that admit greedy solutions. Examples include
$(\Delta+1)$ vertex coloring, maximal independent set, and maximal matching.
For all three problems, there are randomized algorithms that maintain a valid
solution after each edge insertion or deletion to the $n$-vertex graph by
spending $\mathrm{polylog}(n)$ time, provided that the adversary is oblivious. However,
none of these algorithms work against adaptive adversaries whose updates may
depend on the output of the algorithm. In fact, even breaking the trivial bound
of $O(n)$ against adaptive adversaries remains open for all three problems. For
instance, in the case of $(\Delta+1)$ vertex coloring, the main challenge is
that an adaptive adversary can keep inserting edges between vertices of the
same color, necessitating a recoloring of one of the endpoints. The trivial
algorithm would simply scan all neighbors of one endpoint to find a new
available color (which always exists) in $O(n)$ time.
In this paper, we break this linear barrier for the $(\Delta+1)$ vertex
coloring problem. Our algorithm is randomized, and maintains a valid
$(\Delta+1)$ vertex coloring after each edge update by spending
$\widetilde{O}(n^{8/9})$ time with high probability. | arXiv |
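To make the trivial $O(n)$ baseline mentioned in the abstract concrete, the sketch below scans the neighbors of one endpoint and picks any color in $\{0,\dots,\Delta\}$ they do not use; such a color always exists because the vertex has at most $\Delta$ neighbors. This is only the naive baseline, not the sublinear algorithm of the paper.

```python
def recolor_endpoint(adj: dict[int, set[int]], color: dict[int, int],
                     v: int, delta: int) -> None:
    """Naive (Delta+1)-recoloring: scan v's neighbors and take a free color.

    adj: adjacency sets, color: current coloring, delta: maximum degree.
    Runs in O(deg(v) + Delta) = O(n) time, the trivial bound discussed above.
    """
    used = {color[u] for u in adj[v]}
    for c in range(delta + 1):          # Delta+1 colors guarantee a free one
        if c not in used:
            color[v] = c
            return
    raise AssertionError("unreachable: some color in 0..Delta is always free")

# toy usage: after inserting edge (1, 2) with a color conflict, recolor vertex 2
adj = {1: {2, 3}, 2: {1}, 3: {1}}
color = {1: 0, 2: 0, 3: 1}
recolor_endpoint(adj, color, v=2, delta=2)
print(color[2])  # some color other than 0
```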
Reconfigurable holographic surfaces (RHSs) have been suggested as an
energy-efficient solution for extremely large-scale arrays. By controlling the
amplitude of RHS elements, high-gain directional holographic patterns can be
achieved. However, the complexity of acquiring real-time channel state
information (CSI) for beamforming is exceedingly high, particularly in
large-scale RHS-assisted communications, where users may be distributed in the
near-field region of RHS. This paper proposes a one-shot multi-user beam
training scheme in large-scale RHS-assisted systems applicable to both near and
far fields. The proposed beam training scheme comprises two phases: angle
search and distance search, both conducted simultaneously for all users. For
the angle search, an RHS angular codebook is designed based on holographic
principles so that each codeword covers multiple angles in both near-field and
far-field regions, enabling simultaneous angular search for all users. For the
distance search, we construct distance-adaptive codewords covering all
candidate angles of users in real time by leveraging the additivity of
holographic patterns, which differs from the traditional phased-array case.
Simulation results demonstrate that the proposed scheme achieves higher system
throughput compared to traditional beam training schemes. The beam training
accuracy approaches the upper bound of exhaustive search at a significantly
reduced overhead. | arXiv |
Let us consider the Schr\"{o}dinger operator $\mathcal{L}=-\Delta+V$ on
$\mathbb R^d$ with $d\geq3$, where $\Delta$ is the Laplacian operator on
$\mathbb R^d$ and the nonnegative potential $V$ belongs to certain reverse
H\"{o}lder class $RH_s$ with $s\geq d/2$. In this paper, the authors first
introduce two kinds of function spaces related to the Schr\"{o}dinger operator
$\mathcal{L}$. A real-valued function $f\in L^1_{\mathrm{loc}}(\mathbb R^d)$
belongs to the (BLO) space $\mathrm{BLO}_{\rho,\theta}(\mathbb R^d)$ with
$0\leq\theta<\infty$ if \begin{equation*} \|f\|_{\mathrm{BLO}_{\rho,\theta}}
:=\sup_{\mathcal{Q}}\bigg(1+\frac{r}{\rho(x_0)}\bigg)^{-\theta}\bigg(\frac{1}{|Q(x_0,r)|}
\int_{Q(x_0,r)}\Big[f(x)-\underset{y\in\mathcal{Q}}{\mathrm{ess\,inf}}\,f(y)\Big]\,dx\bigg),
\end{equation*} where the supremum is taken over all cubes
$\mathcal{Q}=Q(x_0,r)$ in $\mathbb R^d$, and $\rho(\cdot)$ is the critical radius
function in the Schr\"{o}dinger context. For $0<\beta<1$, a real-valued
function $f\in L^1_{\mathrm{loc}}(\mathbb R^d)$ belongs to the (Campanato)
space $\mathcal{C}^{\beta,\ast}_{\rho,\theta}(\mathbb R^d)$ with
$0\leq\theta<\infty$ if \begin{equation*}
\|f\|_{\mathcal{C}^{\beta,\ast}_{\rho,\theta}}
:=\sup_{\mathcal{B}}\bigg(1+\frac{r}{\rho(x_0)}\bigg)^{-\theta}
\bigg(\frac{1}{|B(x_0,r)|^{1+\beta/d}}\int_{B(x_0,r)}\Big[f(x)-\underset{y\in\mathcal{B}}{\mathrm{ess\,inf}}\,f(y)\Big]\,dx\bigg),
\end{equation*} where the supremum is taken over all balls
$\mathcal{B}=B(x_0,r)$ in $\mathbb R^d$. Then we establish the corresponding
John--Nirenberg inequality suitable for the space
$\mathrm{BLO}_{\rho,\theta}(\mathbb R^d)$ with $0\leq\theta<\infty$ and
$d\geq3$. Moreover, we give some new characterizations of the BLO and Campanato
spaces related to $\mathcal{L}$ on weighted Lebesgue spaces, which extends
some earlier results. | arXiv |
Adversarial attacks pose significant threats to the reliability and safety of
deep learning models, especially in critical domains such as medical imaging.
This paper introduces a novel framework that integrates conformal prediction
with game-theoretic defensive strategies to enhance model robustness against
both known and unknown adversarial perturbations. We address three primary
research questions: constructing valid and efficient conformal prediction sets
under known attacks (RQ1), ensuring coverage under unknown attacks through
conservative thresholding (RQ2), and determining optimal defensive strategies
within a zero-sum game framework (RQ3). Our methodology involves training
specialized defensive models against specific attack types and employing
maximum and minimum classifiers to aggregate defenses effectively. Extensive
experiments conducted on the MedMNIST datasets, including PathMNIST,
OrganAMNIST, and TissueMNIST, demonstrate that our approach maintains high
coverage guarantees while minimizing prediction set sizes. The game-theoretic
analysis reveals that the optimal defensive strategy often converges to a
singular robust model, outperforming uniform and simple strategies across all
evaluated datasets. This work advances the state-of-the-art in uncertainty
quantification and adversarial robustness, providing a reliable mechanism for
deploying deep learning models in adversarial environments. | arXiv |
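As a generic, hedged illustration of the conformal machinery referenced in RQ1 and RQ2 (not the paper's game-theoretic construction), the sketch below calibrates split-conformal thresholds on attack-specific calibration scores and takes the maximum, most conservative threshold, so that prediction sets keep coverage under the worst considered attack. All names and the toy data are assumptions.

```python
import numpy as np

def conformal_quantile(scores: np.ndarray, alpha: float) -> float:
    """Split-conformal threshold: the ceil((n+1)(1-alpha))/n empirical quantile
    of calibration nonconformity scores (higher score = less conforming)."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(scores)[min(k, n) - 1])

def conservative_prediction_set(probs: np.ndarray,
                                calib_scores_per_attack: list[np.ndarray],
                                alpha: float = 0.1) -> np.ndarray:
    """Build a prediction set that keeps coverage under the worst attack by
    taking the maximum threshold across attack types.
    probs: softmax scores for one test input; nonconformity = 1 - p(label)."""
    q = max(conformal_quantile(s, alpha) for s in calib_scores_per_attack)
    return np.where(1.0 - probs <= q)[0]   # labels whose score passes the threshold

# toy usage with two simulated attack-specific calibration sets
rng = np.random.default_rng(1)
calib = [rng.uniform(0, 1, 500), rng.uniform(0, 1, 500) ** 0.5]  # second attack is harsher
test_probs = np.array([0.7, 0.2, 0.1])
print(conservative_prediction_set(test_probs, calib, alpha=0.1))
```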
The interest in developing small language models (SLM) for on-device
deployment is fast growing. However, the existing SLM design hardly considers
the device hardware characteristics. Instead, this work presents a simple yet
effective principle for SLM design: architecture searching for (near-)optimal
runtime efficiency before pre-training. Guided by this principle, we develop
the PhoneLM SLM family (currently with 0.5B and 1.5B versions), which achieves the
state-of-the-art capability-efficiency tradeoff among models with similar
parameter size. We fully open-source the code, weights, and training datasets
of PhoneLM for reproducibility and transparency, including both base and
instructed versions. We also release a finetuned version of PhoneLM capable of
accurate Android Intent invocation, and an end-to-end Android demo. All
materials are available at https://github.com/UbiquitousLearning/PhoneLM. | arXiv |
We present GazeGen, a user interaction system that generates visual content
(images and videos) for locations indicated by the user's eye gaze. GazeGen
allows intuitive manipulation of visual content by targeting regions of
interest with gaze. Using advanced techniques in object detection and
generative AI, GazeGen performs gaze-controlled image adding/deleting,
repositioning, and surface style changes of image objects, and converts static
images into videos. Central to GazeGen is the DFT Gaze (Distilled and
Fine-Tuned Gaze) agent, an ultra-lightweight model with only 281K parameters,
performing accurate real-time gaze predictions tailored to individual users'
eyes on small edge devices. GazeGen is the first system to combine visual
content generation with real-time gaze estimation, made possible exclusively by
DFT Gaze. This real-time gaze estimation enables various visual content
generation tasks, all controlled by the user's gaze. The input for DFT Gaze is
the user's eye images, while the inputs for visual content generation are the
user's view and the predicted gaze point from DFT Gaze. To achieve efficient
gaze predictions, we derive the small model from a large model (10x larger) via
novel knowledge distillation and personal adaptation techniques. We integrate
knowledge distillation with a masked autoencoder, developing a compact yet
powerful gaze estimation model. This model is further fine-tuned with Adapters,
enabling highly accurate and personalized gaze predictions with minimal user
input. DFT Gaze ensures low-latency and precise gaze tracking, supporting a
wide range of gaze-driven tasks. We validate the performance of DFT Gaze on AEA
and OpenEDS2020 benchmarks, demonstrating low angular gaze error and low
latency on the edge device (Raspberry Pi 4). Furthermore, we describe
applications of GazeGen, illustrating its versatility and effectiveness in
various usage scenarios. | arXiv |
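The compact DFT Gaze model is described as being derived from a 10x larger teacher via knowledge distillation, combined with masked-autoencoder pretraining and Adapter fine-tuning that are not reproduced here. A hedged PyTorch sketch of the distillation step alone, with placeholder networks and an assumed loss weighting, not the authors' code or architecture:

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for the (much larger) teacher and the
# compact student; the real models and the masked-autoencoder pretraining
# are not reproduced here.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 2))
student = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32), nn.ReLU(), nn.Linear(32, 2))
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def distill_step(eye_images: torch.Tensor, gaze_targets: torch.Tensor,
                 alpha: float = 0.5) -> float:
    """One hedged distillation step: match teacher predictions and ground truth."""
    with torch.no_grad():
        teacher_out = teacher(eye_images)
    student_out = student(eye_images)
    loss = alpha * mse(student_out, teacher_out) + (1 - alpha) * mse(student_out, gaze_targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy batch: 8 grayscale 64x64 eye crops with 2D gaze targets
loss = distill_step(torch.randn(8, 1, 64, 64), torch.randn(8, 2))
```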
Sample selection problems arise when treatment affects both the outcome and
the researcher's ability to observe it. This paper generalizes Lee (2009)
bounds for the average treatment effect of a binary treatment to a
continuous/multivalued treatment. We evaluate the Job Corps program to study
the causal effect of training hours on wages. To identify the average treatment
effect of always-takers who are selected regardless of the treatment values, we
assume that if a subject is selected at some sufficient treatment values, then
it remains selected at all treatment values. For example, if program
participants are employed with one month of training, then they remain employed
with any training hours. This sufficient treatment values assumption includes
the monotone assumption on the treatment effect on selection as a special case.
We further allow for the conditional independence assumption and allow subjects with
different pretreatment covariates to have different sufficient treatment
values. The estimation and inference theory utilizes orthogonal moment
functions and cross-fitting for double/debiased machine learning. | arXiv |
Large language models (LLMs) offer promise in generating educational content,
providing instructor feedback, and reducing teacher workload on assessments.
While prior studies have focused on studying LLM-powered learning analytics,
limited research has examined how effective LLMs are in a bilingual context. In
this paper, we study the effectiveness of multilingual large language models
(MLLMs) across monolingual (English-only, Spanish-only) and bilingual
(Spanglish) student writing. We present a learning analytics use case that
details LLM performance in assessing acceptable and unacceptable explanations
of Science and Social Science concepts. Our findings reveal a significant bias
in the grading performance of pre-trained models for bilingual writing compared
to English-only and Spanish-only writing. Following this, we fine-tune
open-source MLLMs including Llama 3.1 and Mistral NeMo using synthetic datasets
generated in English, Spanish, and Spanglish. Our experiments indicate that the
models perform significantly better for all three languages after fine-tuning
with bilingual data. This study highlights the potential of enhancing MLLM
effectiveness to support authentic language practices amongst bilingual
learners. It also aims to illustrate the value of incorporating non-English
languages into the design and implementation of language models in education. | arXiv |
Amazon is the world number one online retailer and has nearly every product a
person could need along with a treasure trove of product reviews to help
consumers make educated purchases. Companies want to find a way to increase
their sales in a very crowded market, and using this data is key. A very good
indicator of how a product is selling is its sales rank, which is calculated
based on all-time sales of a product where recent sales are weighted more than
older sales. Using the data from the Amazon products and reviews we determined
that the most influential factors in determining the sales rank of a product
were the number of products Amazon showed that other customers also bought, the
number of products Amazon showed that customers also viewed, and the price of
the product. These results were consistent for the Digital Music category, the
Office Products category, and the subcategory Holsters under Cell Phones and
Accessories. | arXiv |
We revisit integrity checking in relational and deductive databases with an
approach that tolerates erroneous, inconsistent data. In particular, we relax
the fundamental prerequisite that, in order to apply any method for simplified
integrity checking, all data must initially have integrity. Contrary to a
long-standing belief, integrity in the old state before the update is not
needed for a correct application of simplification methods. Rather, we show
that correct simplifications preserve what was consistent across updates. We
formally characterize this property, that we call inconsistency tolerance, and
state its validity for some well-known methods for integrity checking. | arXiv |
Drought has been perceived as a persistent threat globally and the complex
mechanism of various factors contributing to its emergence makes it more
troublesome to understand. Droughts and their severity trends have been a point
of concern in the USA as well, since the economic impact of droughts has been
substantial, especially in parts that contribute majorly to US agriculture.
California is the biggest agricultural contributor to the United States with
its share amounting to approximately 12% of all US agricultural produce.
Moreover, according to a 20-year average, California ranks fifth on the list of
the highest average percentage of drought-hit regions. Therefore, drought
analysis and drought prediction are of crucial importance for California in
order to mitigate the associated risks. However, the design of a consistent
drought prediction model based on the dynamic relationship of the drought index
remains a challenging task. In the present study, we trained a Voting Ensemble
classifier utilizing a soft voting system and three different Random Forest
models, to predict the presence of drought and also its intensity. In this
paper, we first discuss the trends of droughts and their
intensities in various California counties, review the correlation of
meteorological indicators with drought intensities, and use these
meteorological indicators for drought prediction in order to evaluate their
effectiveness and significance. | arXiv |
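The described model, a soft-voting ensemble of three Random Forests predicting drought presence and intensity, maps directly onto standard scikit-learn components. A hedged sketch with synthetic stand-in features; the study's actual indicators, labels, and hyperparameters will differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

# toy stand-in for meteorological indicators (e.g., precipitation, temperature,
# soil moisture) and drought-intensity labels; the real study uses county-level data
rng = np.random.default_rng(42)
X = rng.random((1000, 6))
y = rng.integers(0, 5, size=1000)        # 0 = no drought, 1-4 = increasing intensity

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# three differently configured Random Forests combined by soft (probability) voting
ensemble = VotingClassifier(
    estimators=[
        ("rf_shallow", RandomForestClassifier(n_estimators=200, max_depth=5, random_state=0)),
        ("rf_default", RandomForestClassifier(n_estimators=300, random_state=1)),
        ("rf_deep", RandomForestClassifier(n_estimators=200, min_samples_leaf=2, random_state=2)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```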
Artificial Intelligence (AI) techniques, especially Large Language Models
(LLMs), have started gaining popularity among researchers and software
developers for generating source code. However, LLMs have been shown to
generate code with quality issues and to incur copyright/licensing
infringements. Therefore, detecting whether a piece of source code is written
by humans or AI has become necessary. This study first presents an empirical
analysis to investigate the effectiveness of the existing AI detection tools in
detecting AI-generated code. The results show that they all perform poorly and
lack sufficient generalizability to be practically deployed. Then, to improve
the performance of AI-generated code detection, we propose a range of
approaches, including fine-tuning the LLMs and machine learning-based
classification with static code metrics or code embedding generated from
Abstract Syntax Tree (AST). Our best model outperforms state-of-the-art
AI-generated code detector (GPTSniffer) and achieves an F1 score of 82.55. We
also conduct an ablation study on our best-performing model to investigate the
impact of different source code features on its performance. | arXiv |
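One of the proposed detection approaches is machine-learning classification over static code metrics. A minimal, hedged sketch of that idea with toy features and data; the paper's feature set, AST embeddings, and fine-tuned LLM detectors are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def static_metrics(code: str) -> list[float]:
    """Toy static code metrics: total length, line count, comment ratio,
    mean line length, and whitespace-token count. Real feature sets
    (and AST-based embeddings) would be far richer."""
    lines = code.splitlines() or [""]
    comments = sum(1 for ln in lines if ln.strip().startswith("#"))
    return [len(code), len(lines), comments / len(lines),
            float(np.mean([len(ln) for ln in lines])), len(code.split())]

# hypothetical labeled corpus: 1 = AI-generated, 0 = human-written
snippets = ["def add(a, b):\n    return a + b\n",
            "# compute sum\nresult = sum(values)\n"] * 50
labels = np.array([1, 0] * 50)

X = np.array([static_metrics(s) for s in snippets])
clf = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```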
We present new ALMA observations of a starburst galaxy at cosmic noon hosting
a radio-loud active galactic nucleus: PKS 0529-549 at $z=2.57$. To investigate
the conditions of its cold interstellar medium, we use ALMA observations which
spatially resolve the [CI] fine-structure lines, [CI] (2-1) and [CI] (1-0), CO
rotational lines, CO (7-6) and CO (4-3), and the rest-frame continuum emission
at 461 and 809 GHz. The four emission lines display different morphologies,
suggesting spatial variation in the gas excitation conditions. The radio jets
have just broken out of the molecular gas but not through the more extended
ionized gas halo. The [CI] (2-1) emission is more extended ($\approx8\,{\rm
kpc}\times5\,{\rm kpc}$) than detected in previous shallower ALMA observations.
The [CI] luminosity ratio implies an excitation temperature of $44\pm16$ K,
similar to the dust temperature. Using the [CI] lines, CO (4-3), and 227 GHz
dust continuum, we infer the mass of molecular gas $M_{\mathrm{mol}}$ using
three independent approaches and typical assumptions in the literature. All
approaches point to a massive molecular gas reservoir of about $10^{11}$
$M_{\odot}$, but the exact values differ by up to a factor of 4. Deep
observations are critical in correctly characterizing the distribution of cold
gas in high-redshift galaxies, and highlight the need to reduce the systematic
uncertainties in inferring accurate molecular gas masses. | arXiv |
Background: The rate of energy production in the hot-CNO cycle and breakout
to the rapid-proton capture process in Type I X-ray bursts is strongly related
to the $^{14}$O($\alpha,p$)$^{17}$F reaction rate. The properties of states in
$^{18}$Ne near $E_x=6.1-6.3$ MeV are important for understanding this reaction
rate.
Experiment: The RESOLUT radioactive-ion beam facility at Florida State
University was used to study $^{18}$Ne resonances around this energy region
using $^{17}$F(p,p)$^{17}$F elastic scattering on a polypropylene target under
inverse kinematics. Scattered protons were detected in a silicon-strip detector
array while recoiling $^{17}$F ions were detected in coincidence in a gas
ionization detector.
Analysis: An $R$-matrix analysis of measured cross sections was conducted
along with a reanalysis of data from previous measurements.
Results: All the data analyzed are well described by a consistent set of
parameters with a $1^-$ assignment for a state at 6.14(1) MeV. A second
comparable solution is also found with a $3^-$ assignment for the 6.14(1) MeV
state. The rate of the $^{14}$O($\alpha$,p)$^{17}$F reaction that is determined
from the two solutions differs by up to an order of magnitude. | arXiv |
We conduct a scoping review of existing approaches for synthetic EHR data
generation, and benchmark major methods with proposed open-source software to
offer recommendations for practitioners. We search three academic databases for
our scoping review. Methods are benchmarked on open-source EHR datasets,
MIMIC-III/IV. Seven existing methods covering major categories and two baseline
methods are implemented and compared. Evaluation metrics concern data fidelity,
downstream utility, privacy protection, and computational cost. 42 studies are
identified and classified into five categories. Seven open-source methods
covering all categories are selected, trained on MIMIC-III, and evaluated on
MIMIC-III or MIMIC-IV for transportability considerations. Among them,
GAN-based methods demonstrate competitive performance in fidelity and utility
on MIMIC-III; rule-based methods excel in privacy protection. Similar findings
are observed on MIMIC-IV, except that GAN-based methods further outperform the
baseline methods in preserving fidelity. A Python package, ``SynthEHRella'', is
provided to integrate various choices of approaches and evaluation metrics,
enabling more streamlined exploration and evaluation of multiple methods. We
found that method choice is governed by the relative importance of the
evaluation metrics in downstream use cases. We provide a decision tree to guide
the choice among the benchmarked methods. Based on the decision tree, GAN-based
methods excel when distributional shifts exist between the training and testing
populations. Otherwise, CorGAN and MedGAN are most suitable for association
modeling and predictive modeling, respectively. Future research should
prioritize enhancing fidelity of the synthetic data while controlling privacy
exposure, and comprehensive benchmarking of longitudinal or conditional
generation methods. | arXiv |
We propose a novel technique for optimizing a modular fault-tolerant quantum
computing architecture, taking into account any desired space-time trade-offs
between the number of physical qubits and the fault-tolerant execution time of
a quantum algorithm. We consider a concept architecture comprising a dedicated
zone as a multi-level magic state factory and a core processor for efficient
logical operations, forming a supply chain network for production and
consumption of magic states. Using a heuristic algorithm, we solve the
multi-objective optimization problem of minimizing space and time subject to a
user-defined error budget for the success of the computation, taking the
performance of various fault-tolerant protocols such as quantum memory, state
preparation, magic state distillation, code growth, and logical operations into
account. As an application, we show that physical quantum resource estimation
reduces to a simple model involving a small number of key parameters, namely,
the circuit volume, the error prefactors ($\mu$) and error suppression rates
($\Lambda$) of the fault-tolerant protocols, and an allowed slowdown factor
($\beta$). We show that, in the proposed architecture, $10^5$--$10^8$ physical
qubits are required for quantum algorithms with $T$-counts in the range
$10^6$--$10^{15}$ and logical qubit counts in the range $10^2$--$10^4$, when
run on quantum computers with quantum memory $\Lambda$ in the range 3--10, for
all slowdown factors $\beta \geq 0.2$. | arXiv |
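The abstract reports that resource estimation reduces to a few parameters (circuit volume, error prefactor $\mu$, suppression rate $\Lambda$, slowdown $\beta$) without giving the model itself. The sketch below is a generic surface-code-style estimate of the same flavor, shown only to illustrate how such parameters might combine; the specific formulas and the error budget are assumptions, not the paper's model.

```python
def estimate_physical_qubits(circuit_volume: float, num_logical: int,
                             mu: float, Lambda: float) -> tuple[int, int]:
    """Generic (assumed) resource model: pick the smallest code distance d such
    that the total failure probability mu * Lambda**(-(d + 1) / 2) * circuit_volume
    stays below a fixed error budget, then count physical qubits as ~2*d^2 per
    logical qubit. This mirrors common surface-code estimates, not the paper's model.
    """
    error_budget = 0.01
    d = 3
    while mu * Lambda ** (-(d + 1) / 2) * circuit_volume > error_budget:
        d += 2  # surface-code distances are odd
    return d, num_logical * 2 * d * d

# toy usage: 10^9 logical-operation volume, 10^3 logical qubits, memory Lambda = 10
d, n_phys = estimate_physical_qubits(1e9, 1_000, mu=0.1, Lambda=10)
print(f"distance {d}, about {n_phys:.2e} physical qubits")
```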
The Ramsey number $R(s,t)$ is the smallest integer $n$ such that all graphs
on $n$ vertices contain a clique of size $s$ or an independent set of size $t$.
$\mathcal{R}(s,t,n)$ is the set of all counterexample graphs without this
property for a given $n$. We prove that if a graph $G_{n+1}$ on $n+1$ vertices has
$\max\{s,t\}+1$ subgraphs in $\mathcal{R}(s,t,n)$, then $G_{n+1}$ is in
$\mathcal{R}(s,t,n+1)$. Based on this, we introduce algorithms for one-vertex
extension and counterexample checking with runtime linearly bound by $s$ and
$t$. We prove the utility of these algorithms by verifying
$\mathcal{R}(4,6,36)$ and $\mathcal{R}(5,5,43)$ are empty given current sets
$\mathcal{R}(4,6,35)$ and $\mathcal{R}(5,5,42)$. | arXiv |
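A hedged sketch of the membership criterion stated above: delete each vertex of $G_{n+1}$ in turn and count how many of the resulting $n$-vertex graphs lie in the known counterexample set; reaching $\max\{s,t\}+1$ hits certifies membership in $\mathcal{R}(s,t,n+1)$. Isomorphism testing uses networkx for clarity, whereas the paper's algorithms have runtime linearly bounded by $s$ and $t$.

```python
import networkx as nx

def certifies_membership(G_next: nx.Graph, known_counterexamples: list[nx.Graph],
                         s: int, t: int) -> bool:
    """Check the sufficient condition from the abstract: G_{n+1} is in
    R(s, t, n+1) if at least max(s, t) + 1 of its one-vertex-deleted
    subgraphs lie in R(s, t, n) (here: the supplied known set)."""
    needed = max(s, t) + 1
    hits = 0
    for v in list(G_next.nodes):
        H = G_next.copy()
        H.remove_node(v)
        if any(nx.is_isomorphic(H, C) for C in known_counterexamples):
            hits += 1
            if hits >= needed:
                return True
    return False

# toy usage with a 5-cycle, the classic counterexample showing R(3, 3) > 5
C5 = nx.cycle_graph(5)
G6 = nx.cycle_graph(6)
print(certifies_membership(G6, [C5], s=3, t=3))  # False: C6 has an independent set of size 3
```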
Graph neural networks (GNNs) provide state-of-the-art results in a wide
variety of tasks which typically involve predicting features at the vertices of
a graph. They are built from layers of graph convolutions which serve as a
powerful inductive bias for describing the flow of information among the
vertices. Often, more than one data modality is available. This work considers
a setting in which several graphs have the same vertex set and a common
vertex-level learning task. This generalizes standard GNN models to GNNs with
several graph operators that do not commute. We call such models graph-tuple
neural networks (GtNNs).
In this work, we develop the mathematical theory to address the stability and
transferability of GtNNs using properties of non-commuting non-expansive
operators. We develop a limit theory of graphon-tuple neural networks and use
it to prove a universal transferability theorem that guarantees that all
graph-tuple neural networks are transferable on convergent graph-tuple
sequences. In particular, there is no non-transferable energy under the
convergence we consider here. Our theoretical results extend well-known
transferability theorems for GNNs to the case of several simultaneous graphs
(GtNNs) and provide a strict improvement on what is currently known even in the
GNN case.
We illustrate our theoretical results with simple experiments on synthetic
and real-world data. To this end, we derive a training procedure that provably
enforces the stability of the resulting model. | arXiv |
Proxima Cen (GJ 551; dM5.5e) is one of only about a dozen fully convective
stars known to have a stellar cycle, and the only one to have long-term X-ray
monitoring. A previous analysis found that X-ray and mid-UV observations,
particularly two epochs of data from Swift, were consistent with a well sampled
7 yr optical cycle seen in ASAS data, but not convincing by themselves. The
present work incorporates several years of new ASAS-SN optical data and an
additional five years of Swift XRT and UVOT observations, with Swift
observations now spanning 2009 to 2021 and optical coverage from late 2000.
X-ray observations by XMM-Newton and Chandra are also included. Analysis of the
combined data, which includes modeling and adjustments for stellar
contamination in the optical and UV, now reveals clear cyclic behavior in all
three wavebands with a period of 8.0 yr. We also show that UV and X-ray
intensities are anti-correlated with optical brightness variations caused by
the cycle and by rotational modulation, discuss possible indications of two
coronal mass ejections, and provide updated results for the previous finding of
a simple correlation between X-ray cycle amplitude and Rossby number over a
wide range of stellar types and ages. | arXiv |
We investigate the time evolution generated by the two-sided chord
Hamiltonian in the double-scaled SYK model, which produces a probability
distribution over operators in the double-scaled algebra. Via the
bulk-to-boundary map, this distribution translates into dynamic profiles of
bulk states within the chord Hilbert space. We derive analytic expressions for
these states, valid across a wide parameter range and at all time scales.
Additionally, we show how distinct semi-classical behaviors emerge by
localizing within specific regions of the energy spectrum in the semi-classical
limit.
We reformulate the doubled Hilbert space formalism as an isometric map
between the one-particle sector of the chord Hilbert space and the doubled
zero-particle sector. Using this map, we obtain analytic results for
correlation functions and examine the dynamical properties of operator Krylov
complexity for chords, establishing an equivalence between the chord number
generating function and the crossed four-point correlation function. We also
consider finite-temperature effects, showing how operator spreading slows as
temperature decreases.
In the semi-classical limit, we apply a saddle point analysis and include the
one-loop determinant to derive the normalized time-ordered four-point
correlation function. The leading correction mirrors the \(1/N\) connected
contribution observed in the large-\(p\) SYK model at infinite temperature.
Finally, we analyze the time evolution of operator Krylov complexity for a
matter chord in the triple-scaled regime, linking it to the renormalized
two-sided length in JT gravity with matter. | arXiv |
In many causal learning problems, variables of interest are often not all
measured over the same observations, but are instead distributed across
multiple datasets with overlapping variables. Tillman et al. (2008) presented
the first algorithm for enumerating the minimal equivalence class of
ground-truth DAGs consistent with all input graphs by exploiting local
independence relations, called ION. In this paper, this problem is formulated
as a more computationally efficient answer set programming (ASP) problem, which
we call ION-C, and solved with the ASP system clingo. The ION-C algorithm was
run on random synthetic graphs with varying sizes, densities, and degrees of
overlap between subgraphs, with overlap having the largest impact on runtime,
number of solution graphs, and agreement within the output set. To validate
ION-C on real-world data, we ran the algorithm on overlapping graphs learned
from data from two successive iterations of the European Social Survey (ESS),
using a procedure for conducting joint independence tests to prevent
inconsistencies in the input. | arXiv |
Let $\mathcal{H}$ be the space of all functions that are analytic in
$\mathbb{D}$. Let $\mathcal{A}$ denote the family of all functions
$f\in\mathcal{H}$ and normalized by the conditions $f(0)=0=f'(0)-1$. In 2011,
Obradovi\'{c} and Ponnusamy introduced the class $\mathcal{M}(\lambda)$ of all
functions $f\in\mathcal{A}$ satisfying the condition
$\left|z^2\left(z/f(z)\right)''+f'(z)\left(z/f(z)\right)^2-1\right|\leq
\lambda$ for $z\in\mathbb{D}$ with $\lambda>0$. We show that the class
$\mathcal{M}(\lambda)$ is preserved under omitted-value transformation, but
this class is not preserved under dilation. In this paper, we investigate the
largest disk in which the property of preservation under dilation of the class
$\mathcal{M}:=\mathcal{M}(1)$ holds. We also address a radius property of the
class $\mathcal{M}(\lambda)$ and a number of associated results pertaining to
$\mathcal{M}$. Furthermore, we examine the largest disks with sharp radius for
which the functions $F$ defined by the relations $g(z)h(z)/z$, $z^2/g(z)$, and
$z^2/\int_0^z (t/g(t))dt$ belong to the class $\mathcal{M}$, where $g$ and $h$
belong to some suitable subclasses of $\mathcal{S}$, the class of univalent
functions from $\mathcal{A}$. In the final analysis, we obtain the sharp Bohr
radius, Bohr-Rogosinski radius and improved Bohr radius for a certain subclass
of starlike functions. | arXiv |
A hypersurface $M^n$ in a real space form ${\bf R}^{n+1}$, $S^{n+1}$, or
$H^{n+1}$ is isoparametric if it has constant principal curvatures. This paper
is a survey of the fundamental work of Cartan and M\"{u}nzner on the theory of
isoparametric hypersurfaces in real space forms, in particular, spheres. This
work is contained in four papers of Cartan published during the period
1938--1940, and two papers of M\"{u}nzner that were published in preprint form
in the early 1970's, and as journal articles in 1980--1981. These papers of
Cartan and M\"{u}nzner have been the foundation of the extensive field of
isoparametric hypersurfaces, and they have all been recently translated into
English by the author. The paper concludes with a brief survey of the recently
completed classification of isoparametric hypersurfaces in spheres. | arXiv |
We explicitly compute the effective action from Open Superstring Field Theory
in the hybrid formalism to quartic order in the $\alpha'\rightarrow 0$ limit,
and show that it reproduces ten-dimensional Super Yang-Mills in terms of
four-dimensional superfields. We also show that in this limit the gauge
transformations coincide with SYM to all orders, which means that the effective
action should reproduce SYM to all orders. | arXiv |
We consider the quantum magic in systems of dense neutrinos undergoing
coherent flavor transformations, relevant for supernova and neutron-star binary
mergers. Mapping the three-flavor-neutrino system to qutrits, the evolution of
quantum magic is explored in the single scattering angle limit for a selection
of initial tensor-product pure states for $N_\nu \le 8$ neutrinos. For
$|\nu_e\rangle^{\otimes N_\nu}$ initial states, the magic, as measured by the
$\alpha=2$ stabilizer Renyi entropy $M_2$, is found to decrease with radial
distance from the neutrino sphere, reaching a value that lies below the maximum
for tensor-product qutrit states. Further, the asymptotic magic per neutrino,
$M_2/N_\nu$, decreases with increasing $N_\nu$. In contrast, the magic evolving
from states containing all three flavors reaches values only possible with
entanglement, with the asymptotic $M_2/N_\nu$ increasing with $N_\nu$. These
results highlight the connection between the complexity in simulating quantum
physical systems and the parameters of the Standard Model. | arXiv |
In this study, we aim to enhance radiology reporting by improving both the
conciseness and structured organization of findings (also referred to as
templating), specifically by organizing information according to anatomical
regions. This structured approach allows physicians to locate relevant
information quickly, increasing the report's utility. We utilize Large Language
Models (LLMs) such as Mixtral, Mistral, and Llama to generate concise,
well-structured reports. Among these, we primarily focus on the Mixtral model
due to its superior adherence to specific formatting requirements compared to
other models. To maintain data security and privacy, we run these LLMs locally
behind our institution's firewall. We leverage the LangChain framework and
apply five distinct prompting strategies to enforce a consistent structure in
radiology reports, aiming to eliminate extraneous language and achieve a high
level of conciseness. We also introduce a novel metric, the Conciseness
Percentage (CP) score, to evaluate report brevity. Our dataset comprises 814
radiology reports authored by seven board-certified body radiologists at our
cancer center. In evaluating the different prompting methods, we discovered
that the most effective approach for generating concise, well-structured
reports involves first instructing the LLM to condense the report, followed by
a prompt to structure the content according to specific guidelines. We assessed
all prompting strategies based on their ability to handle formatting issues,
reduce report length, and adhere to formatting instructions. Our findings
demonstrate that open-source, locally deployed LLMs can significantly improve
radiology report conciseness and structure while conforming to specified
formatting standards. | arXiv |
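The exact definition of the CP score is not given in the abstract; the following is a purely hypothetical illustration of how a length-based brevity metric of this kind could be computed, not the paper's formula.

```python
def conciseness_percentage(original: str, condensed: str) -> float:
    """Hypothetical brevity metric: percentage reduction in word count from the
    original report to the LLM-condensed report. The paper's CP score may be
    defined differently; this is only an illustrative stand-in."""
    orig_len = max(len(original.split()), 1)
    return 100.0 * (1.0 - len(condensed.split()) / orig_len)

original = ("FINDINGS: The liver is normal in size and contour without focal lesion. "
            "There is no intrahepatic or extrahepatic biliary ductal dilatation.")
condensed = "Liver: normal, no focal lesion or biliary dilatation."
print(f"CP = {conciseness_percentage(original, condensed):.1f}%")
```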
Coherent Elastic Neutrino-Nucleus Scattering (CE$\nu$NS) and Elastic Neutrino-Electron
Scattering (E$\nu$ES) data are exploited to constrain "chiral" $U(1)_{X}$
gauged models with light vector mediator mass. These models fall under a
distinct class of new symmetries called Dark Hypercharge Symmetries. A key
feature is the fact that the $Z'$ boson can couple to all Standard Model
fermions at tree level, with the $U(1)_X$ charges determined by the requirement
of anomaly cancellation. Notably, the charges of leptons and quarks can differ
significantly depending on the specific anomaly cancellation solution. As a
result, different models exhibit distinct phenomenological signatures and can
be constrained through various experiments. In this work, we analyze the recent
data from the COHERENT experiment, along with results from Dark Matter (DM)
direct detection experiments such as XENONnT, LUX-ZEPLIN, and PandaX-4T, and
place new constraints on three benchmark models. Additionally, we set
constraints from a performed analysis of TEXONO data and discuss the prospects
of improvement in view of the next-generation DM direct detection DARWIN
experiment. | arXiv |
We report the discovery of the first example of an Einstein zig-zag lens, an
extremely rare lensing configuration. In this system, J1721+8842, six images of
the same background quasar are formed by two intervening galaxies, one at
redshift $z_1 = 0.184$ and a second one at $z_2 = 1.885$. Two out of the six
multiple images are deflected in opposite directions as they pass the first
lens galaxy on one side, and the second on the other side -- the optical paths
forming zig-zags between the two deflectors. In this letter, we demonstrate
that J1721+8842, previously thought to be a lensed dual quasar, is in fact a
compound lens with the more distant lens galaxy also being distorted as an arc
by the foreground galaxy. Evidence supporting this unusual lensing scenario
includes: 1- identical light curves in all six lensed quasar images obtained
from two years of monitoring at the Nordic Optical Telescope; 2- detection of
the additional deflector at redshift $z_2 = 1.885$ in JWST/NIRSpec IFU data;
and 3- a multiple-plane lens model reproducing the observed image positions.
This unique configuration offers the opportunity to combine two major lensing
cosmological probes: time-delay cosmography and dual source-plane lensing since
J1721+8842 features multiple lensed sources forming two distinct Einstein radii
of different sizes, one of which being a variable quasar. We expect tight
constraints on the Hubble constant and the equation of state of dark energy by
combining these two probes on the same system. The $z_2 = 1.885$ deflector, a
quiescent galaxy, is also the highest-redshift strong galaxy-scale lens with a
spectroscopic redshift measurement. | arXiv |
In this letter, we present a new formulation of loss cone theory as a
reaction-diffusion system, which is orbit averaged and accounts for loss cone
events through a sink term. This formulation can recover the standard approach
based on boundary conditions, and is derived from a simple physical model that
overcomes many of the classical theoretical constraints. The relaxed
distribution of disruptive orbits in phase space has a simple analytic form,
and it predicts accurately the pericentre of tidal disruption events at
disruption, better than other available formulas. This formulation of the
problem is particularly suitable for including more physics in tidal
disruptions and the analogous problem of gravitational captures, e.g. strong
scatterings, gravitational waves emission, physical stellar collisions, and
repeating partial disruptions - which can all act on timescales shorter than
two-body relaxation. This allows one to explore, in a simple way, dynamical effects
that might affect tidal disruption event rates, tackling the expected vs
observed rate tension and the over-representation of E+A galaxies. | arXiv |
Quantum materials governed by emergent topological fermions have become a
cornerstone of physics. Dirac fermions in graphene form the basis for moir\'e
quantum matter, and Dirac fermions in magnetic topological insulators enabled
the discovery of the quantum anomalous Hall effect. In contrast, there are few
materials whose electromagnetic response is dominated by emergent Weyl
fermions. Nearly all known Weyl materials are overwhelmingly metallic, and are
largely governed by irrelevant, conventional electrons. Here we theoretically
predict and experimentally observe a semimetallic Weyl ferromagnet in van der
Waals (Cr,Bi)$_2$Te$_3$. In transport, we find a record bulk anomalous Hall
angle $> 0.5$ along with non-metallic conductivity, a regime sharply distinct
from conventional ferromagnets. Together with symmetry analysis, our data
suggest a semimetallic Fermi surface composed of two Weyl points, with a giant
separation $> 75\%$ of the linear dimension of the bulk Brillouin zone, and no
other electronic states. Using state-of-the-art crystal synthesis techniques,
we widely tune the electronic structure, allowing us to annihilate the Weyl
state and visualize a unique topological phase diagram exhibiting broad Chern
insulating, Weyl semimetallic and magnetic semiconducting regions. Our
observation of a semimetallic Weyl ferromagnet offers an avenue toward novel
correlated states and non-linear phenomena, as well as zero-magnetic-field Weyl
spintronic and optical devices. | arXiv |
While many statistical properties of deep random quantum circuits can be
deduced, often rigorously and other times heuristically, by an approximation to
global Haar-random unitaries, the statistics of constant-depth random quantum
circuits are generally less well-understood due to a lack of amenable tools and
techniques. We circumvent this barrier by considering a related constant-time
Brownian circuit model which shares many similarities with constant-depth
random quantum circuits but crucially allows for direct calculations of higher
order moments of its output distribution. Using mean-field (large-n)
techniques, we fully characterize the output distributions of Brownian circuits
at shallow depths and show that they follow a Porter-Thomas distribution, just
like in the case of deep circuits, but with a truncated Hilbert space. The
access to higher order moments allows for studying the expected and typical
Linear Cross-entropy (XEB) benchmark scores achieved by an ideal quantum
computer versus the state-of-the-art classical spoofers for shallow Brownian
circuits. We discover that for these circuits, while the quantum computer
typically scores within a constant factor of the expected value, the classical
spoofer suffers from an exponentially larger variance. Numerical evidence
suggests that the same phenomenon also occurs in constant-depth discrete random
quantum circuits, like those defined over the all-to-all architecture. We
conjecture that the same phenomenon is also true for random brickwork circuits
in high enough spatial dimension. | arXiv |
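For readers unfamiliar with the benchmark discussed above, the linear XEB score of a set of sampled bitstrings is commonly computed as $2^n \langle p_{\mathrm{ideal}}(x) \rangle - 1$. A toy sketch with an explicit small-$n$ ideal distribution, unrelated to the Brownian-circuit calculations themselves:

```python
import numpy as np

def linear_xeb(ideal_probs: np.ndarray, samples: np.ndarray) -> float:
    """Linear cross-entropy benchmark: 2^n times the mean ideal probability of
    the sampled bitstrings, minus 1. Close to 0 for uniform guessing and close
    to 1 for faithful sampling from a Porter-Thomas-like distribution."""
    n = int(np.log2(len(ideal_probs)))
    return 2 ** n * float(np.mean(ideal_probs[samples])) - 1.0

# toy example: a random "ideal" distribution on n = 10 qubits
rng = np.random.default_rng(7)
n = 10
p = rng.exponential(size=2 ** n)
p /= p.sum()                                       # Porter-Thomas-like toy distribution
faithful = rng.choice(2 ** n, size=5000, p=p)      # sampler that follows p
uniform = rng.integers(0, 2 ** n, size=5000)       # uniform "spoofer"
print(linear_xeb(p, faithful), linear_xeb(p, uniform))
```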
The recent experimental observation of quantum anomalous Hall (QAH) effects
in the rhombohedrally stacked pentalayer graphene has motivated theoretical
discussions on the possibility of quantum anomalous Hall crystal (QAHC), a
topological version of Wigner crystal. Conventionally Wigner crystal was
assumed to have a period $a_{\text{crystal}}=1/\sqrt{n}$ locked to the density
$n$. In this work we propose new types of topological Wigner crystals labeled
as QAHC-$z$ with period $a_{\text{crystal}}=\sqrt{z/n}$. In rhombohedrally
stacked graphene aligned with hexagonal boron nitride~(hBN), we find parameter
regimes where QAHC-2 and QAHC-3 have lower energy than the conventional QAHC-1
at total filling $\nu=1$ per moir\'e unit cell. These states all have total
Chern number $C_\mathrm{tot}=1$ and are consistent with the QAH effect observed
in the experiments. The larger period QAHC states have better kinetic energy
due to the unique Mexican-hat dispersion of the pentalayer graphene, which can
compensate for the loss in the interaction energy. Unlike QAHC-1, QAHC-2 and
QAHC-3 also break the moir\'e translation symmetry and are sharply distinct
from a moir\'e band insulator. We also briefly discuss the competition between
integer QAHC and fractional QAHC states at filling $\nu=2/3$. We also
note the importance of the moir\'e potential. A larger moir\'e potential can
greatly change the phase diagram and even favors a QAHC-1 ansatz with $C=2$
Chern band. | arXiv |
We show that all totally positive formal power series with integer
coefficients and constant term $1$ are precisely the rank-generating functions
of Schur-positive upho posets, thereby resolving the main conjecture proposed
by Gao, Guo, Seetharaman, and Seidel. To achieve this, we construct a bijection
between finitary colored upho posets and atomic, left-cancellative,
invertible-free monoids, which restricts to a correspondence between
$\mathbb{N}$-graded colored upho posets and left-cancellative homogeneous
monoids. Furthermore, we introduce semi-upho posets and develop a convolution
operation on colored upho posets with colored semi-upho posets within this
monoid-theoretic framework. | arXiv |
Several recent works seek to develop foundation models specifically for
medical applications, adapting general-purpose large language models (LLMs) and
vision-language models (VLMs) via continued pretraining on publicly available
biomedical corpora. These works typically claim that such domain-adaptive
pretraining (DAPT) improves performance on downstream medical tasks, such as
answering medical licensing exam questions. In this paper, we compare seven
public "medical" LLMs and two VLMs against their corresponding base models,
arriving at a different conclusion: all medical VLMs and nearly all medical
LLMs fail to consistently improve over their base models in the zero-/few-shot
prompting regime for medical question-answering (QA) tasks. For instance,
across the tasks and model pairs we consider in the 3-shot setting, medical
LLMs only outperform their base models in 12.1% of cases, reach a (statistical)
tie in 49.8% of cases, and are significantly worse than their base models in
the remaining 38.2% of cases. Our conclusions are based on (i) comparing each
medical model head-to-head, directly against the corresponding base model; (ii)
optimizing the prompts for each model separately; and (iii) accounting for
statistical uncertainty in comparisons. While these basic practices are not
consistently adopted in the literature, our ablations show that they
substantially impact conclusions. Our findings suggest that state-of-the-art
general-domain models may already exhibit strong medical knowledge and
reasoning capabilities, and offer recommendations to strengthen the conclusions
of future studies. | arXiv |
We define Poisson genericity for infinite sequences in any finite or
countable alphabet with an invariant exponentially-mixing probability measure.
A sequence is Poisson generic if the number of occurrences of blocks of symbols
asymptotically follows a Poisson law as the block length increases. We prove
that almost all sequences are Poisson generic. Our result generalizes Peres and
Weiss' theorem on the Poisson genericity of integer-base numeration systems.
In particular, we obtain that the continued fraction expansions of almost
all real numbers are Poisson generic. | arXiv |
We investigate the task of deterministically condensing randomness from
Online Non-Oblivious Symbol Fixing (oNOSF) sources, a natural model for which
extraction is impossible [AORSV, EUROCRYPT'20]. A $(g,\ell)$-oNOSF source is a
sequence of $\ell$ blocks where at least $g$ of the blocks are good
(independent and have some min-entropy) and the remaining bad blocks are
controlled by an online adversary where each bad block can be arbitrarily
correlated with any block that appears before it.
The existence of condensers was studied in [CGR, FOCS'24]. They proved
condensing impossibility results for various values of $g, \ell$ and showed the
existence of condensers matching the impossibility results in the case when $n$
is extremely large compared to $\ell$.
In this work, we make significant progress on proving the existence of
condensers with strong parameters in almost all parameter regimes, even when
$n$ is a large enough constant and $\ell$ is growing. This almost resolves the
question of the existence of condensers for oNOSF sources, except when $n$ is a
small constant.
We construct the first explicit condensers for oNOSF sources, achieve
parameters that match the existential results of [CGR, FOCS'24], and obtain an
improved construction for transforming low-entropy oNOSF sources into uniform
ones.
We find applications of our results to collective coin flipping and sampling,
well-studied problems in fault-tolerant distributed computing. We use our
condensers to provide simple protocols for these problems.
To understand the case of small $n$, we focus on $n=1$ which corresponds to
online non-oblivious bit-fixing (oNOBF) sources. We initiate a study of a new,
natural notion of influence of Boolean functions which we call online
influence. We establish tight bounds on the total online influence of Boolean
functions, implying extraction lower bounds. | arXiv |
Centralized learning requires data to be aggregated at a central server,
which poses significant challenges in terms of data privacy and bandwidth
consumption. Federated learning presents a compelling alternative, however,
vanilla federated learning methods deployed in robotics aim to learn a single
global model across robots that works ideally for all. But in practice one
model may not be well suited for robots deployed in various environments. This
paper proposes Federated-EmbedCluster (Fed-EC), a clustering-based federated
learning framework that is deployed with vision-based autonomous robot
navigation in diverse outdoor environments. The framework addresses the key
federated learning challenge of deteriorating model performance of a single
global model due to the presence of non-IID data across real-world robots.
Extensive real-world experiments validate that Fed-EC reduces the communication
size by 23x for each robot while matching the performance of centralized
learning for goal-oriented navigation and outperforms local learning. Fed-EC
can transfer previously learnt models to new robots that join the cluster. | arXiv |
Background: Fluorescent Timer proteins, which display time-dependent changes
in their emission spectra, are invaluable for analyzing the temporal dynamics
of cellular events at the single-cell level. We previously developed the
Timer-of-cell-kinetics-and-activity (Tocky) tools, utilizing a specific Timer
protein, Fast-FT, to monitor temporal changes in cellular activities. Despite
their potential, the analysis of Timer fluorescence in flow cytometry is
frequently compromised by variability in instrument settings and the absence of
standardized preprocessing methods. The development and implementation of
effective data preprocessing methods remain to be achieved.
Results: In this study, we introduce an R package that automates the data
preprocessing of Timer fluorescence data from flow cytometry experiments for
quantitative analysis at single-cell level. Our aim is to standardize Timer
data analysis to enhance reproducibility and accuracy across different
experimental setups. The package includes a trigonometric transformation method
to elucidate the dynamics of Fluorescent Timer proteins. We have identified the
normalization of immature and mature Timer fluorescence data as essential for
robust analysis, clarifying how this normalization affects the analysis of
Timer maturation. These preprocessing methods are all encapsulated within the
TockyPrep R package.
Conclusions: TockyPrep is available for distribution via GitHub at
https://github.com/MonoTockyLab/TockyPrep, providing tools for data
preprocessing and basic visualization of Timer fluorescence data. This toolkit
is expected to enhance the utility of experimental systems utilizing
Fluorescent Timer proteins, including the Tocky tools. | arXiv |
The classification of multipartite entanglement is essential as it serves as
a resource for various quantum information processing tasks. This study
concerns a particular class of highly entangled multipartite states, the
so-called absolutely maximally entangled (AME) states. These are characterized
by maximal entanglement across all possible bipartitions. In particular we
analyze the local unitary equivalence among AME states using invariants. One of
our main findings is that the existence of special irredundant orthogonal
arrays implies the existence of an infinite number of equivalence classes of
AME states constructed from these. In particular, we show that there are
infinitely many local unitary inequivalent three-party AME states for local
dimension $d > 2$ and five-party AME states for $d \geq 2$. | arXiv |
Global partisan hostility and polarization has increased, and this
polarization is heightened around presidential elections. Models capable of
generating accurate summaries of diverse perspectives can help reduce such
polarization by exposing users to alternative perspectives. In this work, we
introduce a novel dataset and task for independently summarizing each political
perspective in a set of passages from opinionated news articles. For this task,
we propose a framework for evaluating different dimensions of perspective
summary performance. We benchmark 10 models of varying sizes and architectures
through both automatic and human evaluation. While recent models like GPT-4o
perform well on this task, we find that all models struggle to generate
summaries faithful to the intended perspective. Our analysis of summaries
focuses on how extraction behavior depends on the features of the input
documents. | arXiv |
We explore synergies between the Nancy Grace Roman Space Telescope High
Latitude Wide Area Survey (HLWAS) and CMB experiments, specifically Simons
Observatory (SO) and CMB-Stage4 (S4). Our simulated analyses include weak
lensing, photometric galaxy clustering, CMB lensing, thermal SZ, and
cross-correlations between these probes. While we assume the nominal 16,500
square degree area for SO and S4, we consider multiple survey designs for Roman
that overlap with Rubin Observatory's Legacy Survey of Space and Time (LSST):
the 2000 square degree reference survey using four photometric bands, and two
shallower single-band surveys that cover 10,000 and 18,000 square degree,
respectively. We find a ~2x increase in the dark energy figure of merit when
including CMB-S4 data for all Roman survey designs. We further find a strong
increase in constraining power for the Roman wide survey scenario cases,
despite the reduction in galaxy number density, and the increased systematic
uncertainties assumed due to the single band coverage. Even when tripling the
already worse systematic uncertainties in the Roman wide scenarios, which
reduces the 10,000 square degree FoM from 269 to 178, we find that the larger
survey area is still significantly preferred over the reference survey (FoM
64). We conclude that for the specific analysis choices and metrics of this
paper, a Roman wide survey is unlikely to be systematics-limited (in the sense
that one saturates the improvement that can be obtained by increasing survey
area). We outline several specific implementations of a two-tier Roman survey
(1000 square degree with 4 bands, and a second wide tier in one band) that can
further mitigate the risk of systematics for Roman wide concepts. | arXiv |
Existing benchmarks for evaluating foundation models mainly focus on
single-document, text-only tasks. However, they often fail to fully capture the
complexity of research workflows, which typically involve interpreting
non-textual data and gathering information across multiple documents. To
address this gap, we introduce M3SciQA, a multi-modal, multi-document
scientific question answering benchmark designed for a more comprehensive
evaluation of foundation models. M3SciQA consists of 1,452 expert-annotated
questions spanning 70 natural language processing paper clusters, where each
cluster represents a primary paper along with all its cited documents,
mirroring the workflow of comprehending a single paper by requiring multi-modal
and multi-document data. With M3SciQA, we conduct a comprehensive evaluation of
18 foundation models. Our results indicate that current foundation models still
significantly underperform compared to human experts in multi-modal information
retrieval and in reasoning across multiple scientific documents. Additionally,
we discuss the implications of these findings for the future application of
foundation models to multi-modal scientific literature analysis. | arXiv |
Among all materials, mono-crystalline diamond has one of the highest measured
thermal conductivities, with values above 2000 W/m/K at room temperature. This
stems from momentum-conserving `normal' phonon-phonon scattering processes
dominating over momentum-dissipating `Umklapp' processes, a feature that also
suggests diamond as an ideal platform to experimentally investigate phonon heat
transport phenomena that violate Fourier's law. Here, we introduce dilute
nitrogen-vacancy color centers as in-situ, highly precise spin defect
thermometers to image temperature inhomogeneities in single-crystal diamond
microstructures heated from ambient conditions. We analyze cantilevers with
cross-sections in the range from about 0.2 to 2.6 $\mathrm{\mu m}^2$, observing
a relation between cross-section and heat flux that departs from Fourier's law
predictions. We rationalize this behavior using first-principles simulations
based on the linearized phonon Boltzmann transport equation, and we also
discuss how fabrication-induced impurities influence conduction. Our
temperature-imaging method can be applied to diamond devices of arbitrary
geometry, paving the way for the exploration of unconventional, non-diffusive
heat transport phenomena. | arXiv |
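As a point of reference for the departure from Fourier's law discussed above, the LaTeX snippet below records the diffusive-limit relation and the effective conductivity one would extract from a measured heat flux and temperature drop; this is textbook material, not the first-principles Boltzmann-transport treatment used in the paper.

% Fourier's law (diffusive limit) and the effective conductivity extracted
% from a measured heat flux q and temperature drop \Delta T over a length L.
\begin{align}
  \mathbf{q} &= -\kappa\,\nabla T ,\\
  \kappa_{\mathrm{eff}} &\equiv \frac{q\,L}{\Delta T},
  \qquad \text{diffusive transport} \iff \kappa_{\mathrm{eff}} \text{ independent of cross-section and } L .
\end{align}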
We determine parameters of the renormalization group-consistent
(RG-consistent) three-flavor color-superconducting Nambu-Jona-Lasinio (NJL)
model that are suited to investigating possible compact-star configurations.
Our goal is to provide a viable quark-matter equation of state (EoS) that can
generally be used for hybrid-star constructions. To that end, we mainly focus
on quark-star properties in this work. By varying the vector and diquark
coupling constants, we analyze their impact on the EoS, speed of sound (SoS),
the maximum diquark gap, and the mass-radius relation. In almost all
configurations, a stable color-flavor-locked (CFL) phase appears in the core of
the maximum-mass configurations, typically spanning several kilometers in radius. In
other cases, the star's two-flavor color-superconducting (2SC) branch of the
EoS becomes unstable before reaching the CFL transition density. At
neutron-star densities, the SoS squared reaches up to 0.6 and the CFL diquark
gap up to 250 MeV. We argue that attaching a hadronic EoS at lower densities
via a Maxwell construction does not increase the maximum mass substantially; we
therefore use the 2 solar-mass constraint to select NJL model parameters suited
for the construction of hybrid-star EoS. We construct three example hybrid-star
models, demonstrating that there is room for different color-superconducting
compositions: the hybrid EoS obtained in this way can contain no 2SC matter or
different ratios of 2SC and CFL quark matter in the core. We show that early
hadron-quark transitions are possible and can modify the tidal deformability at
1.4 solar masses. We provide tabulated EoS of the RG-consistent NJL model for
these three parameter sets. We find that these EoS
are consistent with the imposed constraints from astrophysics and perturbative
QCD. They allow for different hybrid-star scenarios with a hadronic EoS that is
soft at low densities. | arXiv |
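For orientation, the LaTeX snippet below states the standard zero-temperature Maxwell-construction matching condition between a hadronic and a quark-matter EoS, together with the speed-of-sound definition used above; these are generic textbook relations, not the specific parameterizations of the RG-consistent NJL model.

% Schematic matching condition of a Maxwell construction between a hadronic
% (H) and a quark-matter (Q) EoS at a critical chemical potential mu_c, and
% the speed of sound; standard definitions, stated here for orientation only.
\begin{align}
  P_{\mathrm{H}}(\mu_c) &= P_{\mathrm{Q}}(\mu_c),
  \qquad T = 0,\ \beta\text{-equilibrium},\\
  c_s^2 &= \frac{\partial P}{\partial \varepsilon},
  \qquad \varepsilon = \mu\, n - P .
\end{align}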
We investigate whether neural networks (NNs) can accurately differentiate
between growth-rate data of the large-scale structure (LSS) of the Universe
simulated via two models: a cosmological constant with cold dark matter
($\Lambda$CDM) model and a tomographic coupled dark energy (CDE) model. We built
an NN classifier and tested its accuracy in distinguishing between cosmological
models. For our dataset, we generated $f\sigma_8(z)$ growth-rate observables
that simulate a realistic Stage IV galaxy survey-like setup for both
$\Lambda$CDM and a tomographic CDE model for various values of the model
parameters. We then optimised and trained our NN with \texttt{Optuna}, aiming
to avoid overfitting and to maximise the accuracy of the trained model. We
conducted our analysis for both a binary classification, comparing between
$\Lambda$CDM and a CDE model where only one tomographic coupling bin is
activated, and a multi-class classification scenario where all the models are
combined. For the case of binary classification, we find that our NN can
confidently (with $>86\%$ accuracy) detect non-zero values of the tomographic
coupling regardless of the redshift range at which coupling is activated and,
at a $100\%$ confidence level, detect the $\Lambda$CDM model. For the
multi-class classification task, we find that the NN performs adequately well
at distinguishing $\Lambda$CDM, a CDE model with low-redshift coupling, and a
model with high-redshift coupling, with 99\%, 79\%, and 84\% accuracy,
respectively. By leveraging the power of machine learning, our pipeline can be
a useful tool for analysing growth-rate data and maximising the potential of
current surveys to probe for deviations from general relativity. | arXiv |
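The following Python sketch illustrates, under stated assumptions, the kind of Optuna-tuned classification loop described above. The synthetic $f\sigma_8(z)$-like data vectors, the MLP architecture, and the hyperparameter ranges are placeholders for illustration; they are not the authors' simulated Stage IV data or trained network.

# A minimal sketch (not the authors' pipeline) of tuning a binary classifier
# with Optuna on synthetic growth-rate-like data vectors. The mock vectors
# stand in for simulated LCDM vs. CDE data; hyperparameter ranges are
# illustrative assumptions.
import numpy as np
import optuna
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_z_bins = 400, 10
X_lcdm = rng.normal(0.45, 0.02, size=(n_samples, n_z_bins))   # mock LCDM data vectors
X_cde = rng.normal(0.47, 0.02, size=(n_samples, n_z_bins))    # mock CDE data vectors
X = np.vstack([X_lcdm, X_cde])
y = np.array([0] * n_samples + [1] * n_samples)

def objective(trial: optuna.Trial) -> float:
    n_units = trial.suggest_int("n_units", 16, 128)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    clf = MLPClassifier(hidden_layer_sizes=(n_units, n_units),
                        learning_rate_init=lr, max_iter=500, random_state=0)
    return cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best accuracy:", study.best_value, "best params:", study.best_params)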
The sum-of-squares hierarchy of semidefinite programs has become a common
tool for algorithm design in theoretical computer science, including problems
in quantum information. In this work we study a connection between a Hermitian
version of the SoS hierarchy (HSoS), related to the quantum de Finetti theorem, and
geometric quantization of compact K\"ahler manifolds (such as complex
projective space $\mathbb{C}P^{d}$, the set of all pure states in a $(d +
1)$-dimensional Hilbert space). We show that previously known HSoS rounding
algorithms can be recast as quantizing an objective function to obtain a
finite-dimensional matrix, finding its top eigenvector, and then (possibly
nonconstructively) rounding it by using a version of the Husimi
quasiprobability distribution. Dually, we recover most known quantum de Finetti
theorems by doing the same steps in the reverse order: a quantum state is first
approximated by its Husimi distribution, and then quantized to obtain a
separable state approximating the original one. In cases when there is a
transitive group action on the manifold we give some new proofs of existing de
Finetti theorems, as well as some applications including a new version of
Renner's exponential de Finetti theorem proven using the Borel--Weil--Bott
theorem, and hardness of approximation results and optimal degree-2 integrality
gaps for the basic SDP relaxation of \textsc{Quantum Max-$d$-Cut} (for
arbitrary $d$). We also describe how versions of these results can be proven
when there is no transitive group action. In these cases we can deduce error
bounds for the HSoS hierarchy on smooth complex projective varieties. | arXiv |
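For orientation, the LaTeX snippet below records the standard Husimi-type construction alluded to above: the quasiprobability measure induced by a permutation-symmetric state and the separable state obtained from it in de Finetti-type arguments. Normalizations are schematic, and this is not a restatement of the paper's precise theorems.

% Husimi measure of an n-copy-symmetric state rho on (C^{d+1})^{\otimes n},
% and the induced separable approximation used in de Finetti-type arguments.
\begin{align}
  d\mu_\rho(\psi) &\propto \langle \psi^{\otimes n} |\, \rho \,| \psi^{\otimes n} \rangle \, d\psi ,
  \qquad \psi \in \mathbb{C}P^{d},\\
  \sigma_k &= \int_{\mathbb{C}P^{d}} |\psi\rangle\langle\psi|^{\otimes k} \, d\mu_\rho(\psi)
  \;\approx\; \operatorname{tr}_{n-k}(\rho).
\end{align}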
Causal knowledge about the relationships among decision variables and a
reward variable in a bandit setting can accelerate the learning of an optimal
decision. Current works often assume that the causal graph is known, although
it may not always be available a priori. Motivated by this challenge, we focus on the
causal bandit problem in scenarios where the underlying causal graph is unknown
and may include latent confounders. While intervention on the parents of the
reward node is optimal in the absence of latent confounders, this is not
necessarily the case in general. Instead, one must consider a set of possibly
optimal arms/interventions, each being a special subset of the ancestors of the
reward node, making causal discovery beyond the parents of the reward node
essential. For regret minimization, we show that discovering the full causal
structure is unnecessary; however, no existing work identifies the necessary
and sufficient components of the causal graph. We formally
characterize the set of necessary and sufficient latent confounders one needs
to detect or learn to ensure that all possibly optimal arms are identified
correctly. We also propose a randomized algorithm for learning the causal graph
with a limited number of samples, providing a sample complexity guarantee for
any desired confidence level. In the causal bandit setup, we propose a
two-stage approach. In the first stage, we learn the induced subgraph on
ancestors of the reward, along with a necessary and sufficient subset of latent
confounders, to construct the set of possibly optimal arms. The regret incurred
during this phase scales polynomially with respect to the number of nodes in
the causal graph. The second phase involves the application of a standard
bandit algorithm, such as the UCB algorithm. We also establish a regret bound
for our two-phase approach, which is sublinear in the number of rounds. | arXiv |
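The Python sketch below illustrates only the second phase under simplifying assumptions: a standard UCB1 run over a hypothetical set of possibly optimal arms returned by phase one. The Bernoulli reward model and arm means are illustrative placeholders, not the paper's causal-bandit construction or regret analysis.

# A minimal sketch of the second phase only: standard UCB1 over a candidate
# set of possibly optimal arms. Rewards and arm means are placeholders.
import numpy as np

def ucb1(arm_means: np.ndarray, horizon: int, rng: np.random.Generator) -> float:
    k = len(arm_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                                        # pull each arm once to initialize
            arm = t - 1
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.binomial(1, arm_means[arm])          # Bernoulli rewards as a stand-in
        counts[arm] += 1
        sums[arm] += reward
        regret += arm_means.max() - arm_means[arm]
    return regret

rng = np.random.default_rng(0)
possibly_optimal_arms = np.array([0.55, 0.60, 0.72])      # hypothetical output of phase one
print("cumulative regret:", ucb1(possibly_optimal_arms, horizon=5000, rng=rng))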
Optical sensing technologies are emerging tools used in cancer surgery to
ensure the complete removal of cancerous tissue. While point-wise
assessment has many potential applications, incorporating automated large area
scanning would enable holistic tissue sampling. However, such scanning tasks
are challenging due to their long-horizon dependencies and the requirement for
fine-grained motion. To address these issues, we introduce Memorized Action
Chunking with Transformers (MACT), an intuitive yet efficient imitation
learning method for tissue surface scanning tasks. It utilizes a sequence of
past images as historical information to predict near-future action sequences.
In addition, hybrid temporal-spatial positional embeddings were employed to
facilitate learning. In various simulation settings, MACT demonstrated
significant improvements in contour scanning and area scanning over the
baseline model. In real-world testing, with only 50 demonstration trajectories,
MACT surpassed the baseline model by achieving a 60-80% success rate on all
scanning tasks. Our findings suggest that MACT is a promising model for
adaptive scanning in surgical settings. | arXiv |
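The minimal PyTorch sketch below conveys the general idea of predicting an action chunk from a short observation history with a transformer encoder; it is a toy stand-in, not the MACT architecture. The feature dimensions, mean pooling, learned positional embedding, and the assumption of pre-extracted image features are all illustrative choices.

# Toy sketch of action chunking from a history of observation features.
# Dimensions and design choices below are illustrative assumptions.
import torch
import torch.nn as nn

class ActionChunkPredictor(nn.Module):
    def __init__(self, feat_dim=128, history_len=8, chunk_len=16, action_dim=6):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, history_len, feat_dim))  # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, chunk_len * action_dim)
        self.chunk_len, self.action_dim = chunk_len, action_dim

    def forward(self, history_feats):                     # (batch, history_len, feat_dim)
        h = self.encoder(history_feats + self.pos)
        out = self.head(h.mean(dim=1))                    # pool over the history tokens
        return out.view(-1, self.chunk_len, self.action_dim)

model = ActionChunkPredictor()
dummy_history = torch.randn(4, 8, 128)                    # pre-extracted image features (placeholder)
print(model(dummy_history).shape)                         # torch.Size([4, 16, 6])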
The minimal unitary representation of $SO(d,2)$ and its deformations describe
all conformally massless fields in $d$-dimensional Minkowskian spacetimes. In
critical dimensions these spacetimes admit extensions with twistorial
coordinates plus a dilatonic coordinate to causal spacetimes coordinatized by
Jordan algebras $J_3^{A}$ of degree three over the four division algebras $A= R
, C , H , O $. We study the minimal unitary representation (minrep) of the
conformal group $E_{7(-25)}$ of the spacetime coordinatized by the exceptional
Jordan algebra $J_3^{O}$. We show that the minrep of $E_{7(-25)}$ decomposes
into infinitely many massless representations of the conformal group
$SO(10,2)$. The corresponding conformal fields transform as symmetric tensors
in the spinor indices of $SO(9,1)$, subject to certain constraints. Even and odd
tensorial fields describe bosonic and fermionic conformal fields, respectively.
Each irrep of $SO(10,2)$ falls into a unitary representation of an $SU(1,1)$
subgroup that commutes with $SO(10,2)$. The noncompact generators in spinor
representation $16 $ of $SO(10)$ interpolate between the bosonic and fermionic
representations and hence act like "bosonic supersymmetry" generators. We also
give the decomposition of the minrep of $E_{7(-25)}$ with respect to the
subgroup $SO^*(12)\times SU(2)$ with $SO^*(12) $ acting as the conformal group
of the spacetime coordinatized by $J_3^{H}$. The group $E_{7(-25)}$ is also the
U-duality group of the exceptional $N=2$ Maxwell-Einstein supergravity in four
dimensions. We discuss the relevance of our results to the composite scenario
that was proposed for the exceptional supergravity in order to accommodate the
families of quarks and leptons of the standard model, as well as to the
proposal that $E_{7(-25)}$ acts as a spectrum-generating symmetry group of the
$5d$ exceptional supergravity. | arXiv |
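For the reader's convenience, the LaTeX snippet below collects, in schematic form, the two subgroup decompositions referred to above; the precise labeling of the representations and their multiplicities are left implicit here and should be taken from the paper itself.

% Schematic branchings of the minimal unitary representation of E7(-25)
% mentioned in the text; representation labels and multiplicities omitted.
\begin{align}
  E_{7(-25)} &\supset SO(10,2) \times SU(1,1):
  & \pi_{\mathrm{min}} &\simeq \bigoplus_{n} \pi^{SO(10,2)}_{n} \otimes \pi^{SU(1,1)}_{n},\\
  E_{7(-25)} &\supset SO^{*}(12) \times SU(2):
  & \pi_{\mathrm{min}} &\simeq \bigoplus_{m} \pi^{SO^{*}(12)}_{m} \otimes \pi^{SU(2)}_{m}.
\end{align}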
We introduce a string-based parametrization for nucleon quark and gluon
generalized parton distributions (GPDs) that is valid for all skewness. Our
approach leverages conformal moments, representing them as the sum of a
spin-$j$ nucleon A-form factor and a skewness-dependent spin-$j$ nucleon
D-form factor, derived from t-channel string exchange in AdS spaces consistent with Lorentz
invariance and unitarity. This model-independent framework, satisfying the
polynomiality condition due to Lorentz invariance, uses Mellin moments from
empirical data to estimate these form factors. With just five Regge slope
parameters, our method accurately produces various nucleon quark GPD types and
symmetric nucleon gluon GPDs through pertinent Mellin-Barnes integrals. Our
isovector nucleon quark GPD agrees with existing lattice data and, by avoiding
the deconvolution problem at any skewness for the first time, promises to
improve the empirical extraction and global analysis of nucleon GPDs in
exclusive processes. | arXiv |
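To make the construction above concrete, the LaTeX snippet below gives a schematic form of the conformal-moment decomposition and the Mellin-Barnes resummation that reconstructs the GPD from its moments; normalizations, the precise skewness power in the D-term, and the kernel $p_j(x,\xi)$ are schematic and should not be read as the paper's exact conventions.

% Schematic conformal-moment decomposition (polynomiality) and Mellin-Barnes
% reconstruction of a quark GPD; conventions are illustrative only.
\begin{align}
  \mathcal{F}_j(\xi,t) &= A_j(t) + \xi^{\,j+1} D_j(t),\\
  H^q(x,\xi,t) &= \frac{1}{2i}\int_{c-i\infty}^{c+i\infty} dj\,
  \frac{p_j(x,\xi)}{\sin(\pi j)}\,\mathcal{F}_j(\xi,t).
\end{align}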
We establish a generalized quantum asymptotic equipartition property (AEP)
beyond the i.i.d. framework where the random samples are drawn from two sets of
quantum states. In particular, under suitable assumptions on the sets, we prove
that all operationally relevant divergences converge to the quantum relative
entropy between the sets. More specifically, both the smoothed min- and
max-relative entropy approach the regularized relative entropy between the
sets. Notably, the asymptotic limit has explicit convergence guarantees and can
be efficiently estimated through convex optimization programs, despite the
regularization, provided that the sets have efficient descriptions.
We give four applications of this result: (i) The generalized AEP directly
implies a new generalized quantum Stein's lemma for conducting quantum
hypothesis testing between two sets of quantum states. (ii) We introduce a
quantum version of adversarial hypothesis testing where the tester plays
against an adversary who possesses internal quantum memory and controls the
quantum device, and show that the optimal error exponent is precisely
characterized by a new notion of quantum channel divergence, named the minimum
output channel divergence. (iii) We derive a relative entropy accumulation
theorem stating that the smoothed min-relative entropy between two sequential
processes of quantum channels can be lower bounded by the sum of the
regularized minimum output channel divergences. (iv) We apply our generalized
AEP to quantum resource theories and provide improved and efficient bounds for
entanglement distillation, magic state distillation, and the entanglement cost
of quantum states and channels.
At a technical level, we establish new additivity and chain rule properties
for the measured relative entropy, which we expect will find further applications. | arXiv |
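As a compact summary of the statement above, the LaTeX snippet below gives a schematic form of the generalized AEP; the smoothing, the regularity assumptions on the sequences of sets $\mathcal{A}_n$ and $\mathcal{B}_n$, and the order of limits are suppressed here and should be taken from the paper itself.

% Schematic form of the generalized quantum AEP between two sequences of
% sets of states; assumptions and smoothing parameters suppressed.
\begin{align}
  \lim_{n\to\infty}\tfrac{1}{n}\, D^{\varepsilon}_{\min}\!\big(\mathcal{A}_n \,\big\|\, \mathcal{B}_n\big)
  \;=\;
  \lim_{n\to\infty}\tfrac{1}{n}\, D^{\varepsilon}_{\max}\!\big(\mathcal{A}_n \,\big\|\, \mathcal{B}_n\big)
  \;=\;
  \lim_{n\to\infty}\tfrac{1}{n}
  \min_{\rho_n\in\mathcal{A}_n,\ \sigma_n\in\mathcal{B}_n} D\!\big(\rho_n \,\big\|\, \sigma_n\big).
\end{align}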