We study the problem of multilateral collaboration among agents with
transferable utilities. Any group of agents can sign a contract consisting of a
primitive contract and monetary transfers among the signatories. We propose a
dynamic auction that finds a stable outcome when primitive contracts are gross
complements for all participants. | arXiv |
This research proposes the development of a next-generation airline
reservation system that incorporates cloud microservices, distributed
artificial intelligence modules, and blockchain technology to improve
efficiency, safety, and customer satisfaction. Traditional reservation systems
encounter issues with system scalability, data integrity, and the level of
service offered to customers; this architecture addresses them through modular
and data-centric design approaches. This allows operations such as
reservations, payments, and customer data management to be performed
separately, increasing system availability by 30% and improving
scalability-related performance by 40%. The system contains AI-driven modules
that use past booking patterns along with customer profiles to estimate demand
and make recommendations, increasing customer engagement by 25%. Moreover,
blockchain provides an incorruptible ledger for all transactions, mitigating
fraud incidences and increasing transparency by 20%. The system was analyzed
using a simulator and machine learning evaluations that rated it against
conventional systems. The results show clear enhancements in transaction
speed: secure data processing rates rose by 35%, and system response time
improved by 15%. The system can also be applied to other high-transaction
industries such as logistics and hospitality. This architectural design
indicates how the use of advanced technologies can revolutionize the airline
reservation sector, with growing effectiveness, improved security, and greater
customer contentment as the main implications. | arXiv |
Spin fluctuations have been proposed as a key mechanism for mediating
superconductivity, particularly in high-temperature superconducting cuprates,
where conventional electron-phonon interactions alone cannot account for the
observed critical temperatures. Traditionally, their role has been analyzed
through tight-binding based model Hamiltonians. In this work we present a
method that combines density functional theory with a momentum- and
frequency-dependent pairing interaction derived from the Fluctuation Exchange
(FLEX) type Random Phase Approximation (FLEX-RPA) to compute Eliashberg
spectral functions $\alpha ^{2}F(\omega )$ which are central to spin
fluctuation theory of superconductivity. We apply our numerical procedure to
study a series of cuprates where our extracted material specific $\alpha
^{2}F(\omega )$ are found to exhibit remarkable similarities, characterized by
a sharp peak in the vicinity of 40-60 meV and a rapid decay at higher
frequencies. Our exact diagonalization of a linearized BCS gap equation
extracts superconducting energy gap functions for realistic Fermi surfaces of
the cuprates and predicts their symmetry to be $d_{x^{2}-y^{2}}$ in all studied
systems. Via a variation of on-site Coulomb repulsion $U$ for the copper
$d$-electrons we show that the range of the experimental values of $T_{c}$
can be reproduced in this approach but is extremely sensitive to the proximity
of the spin density wave instability. These data highlight challenges in
building first-principles theories of high-temperature superconductivity but
offer new insights beyond previous treatments, such as the confirmation of the
usability of approximate BCS-like $T_{c}$ equations, together with the
evaluations of the material specific coupling constant $\lambda $ without
reliance on tight-binding approximations of their electronic structures. | arXiv |
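For orientation, the coupling constant and a BCS-like $T_c$ named above follow from $\alpha^{2}F(\omega)$ by standard quadratures. The sketch below (Python; the toy spectral function, the $\mu^{*}=0.1$ value, and the Allen-Dynes form of the $T_c$ equation are conventional literature choices, not this paper's pipeline) illustrates the computation:

```python
import numpy as np

def lambda_and_tc(omega, a2F, mu_star=0.1):
    """Coupling constant and an Allen-Dynes-style Tc estimate from a
    tabulated Eliashberg spectral function alpha^2 F(omega) in meV."""
    lam = 2.0 * np.trapz(a2F / omega, omega)          # lambda = 2 * int a2F/w dw
    w_log = np.exp((2.0 / lam) * np.trapz(np.log(omega) * a2F / omega, omega))
    # Allen-Dynes form of the McMillan equation (result in meV)
    tc_mev = (w_log / 1.2) * np.exp(-1.04 * (1 + lam) /
                                    (lam - mu_star * (1 + 0.62 * lam)))
    return lam, tc_mev * 11.604                       # 1 meV / k_B ~ 11.604 K

# toy alpha^2F: sharp peak near 50 meV, rapid decay above (as in the abstract)
w = np.linspace(1.0, 200.0, 2000)
a2F = 1.5 * np.exp(-0.5 * ((w - 50.0) / 8.0) ** 2)
lam, tc = lambda_and_tc(w, a2F)
print(f"lambda = {lam:.2f}, Tc ~ {tc:.1f} K")
```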
Deep learning (DL)-based methods have demonstrated remarkable achievements in
addressing orthogonal frequency division multiplexing (OFDM) channel estimation
challenges. However, existing DL-based methods mainly rely on separate real and
imaginary inputs while ignoring the inherent correlation between the two
streams, such as amplitude and phase information that are fundamental in
communication signal processing. This paper proposes AE-DENet, a novel
autoencoder (AE)-based data enhancement network to improve the performance of
existing DL-based channel estimation methods. AE-DENet focuses on enriching the
classic least square (LS) estimation input commonly used in DL-based methods by
employing a learning-based data enhancement method, which extracts interaction
features from the real and imaginary components and fuses them with the
original real/imaginary streams to generate an enhanced input for better
channel inference. Experimental findings in terms of the mean square error
(MSE) results demonstrate that the proposed method enhances the performance of
all state-of-the-art DL-based channel estimators with negligible added
complexity. Furthermore, the proposed approach is shown to be robust to channel
variations and high user mobility. | arXiv |
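A minimal sketch of the fusion idea described above (PyTorch; the layer sizes and the exact fusion scheme are illustrative assumptions, not the published AE-DENet architecture):

```python
import torch
import torch.nn as nn

class ToyDENet(nn.Module):
    """Toy encoder-decoder that extracts joint features from the real and
    imaginary parts of an LS channel estimate and concatenates them with
    the original streams to form an enhanced input for a downstream
    channel estimator."""
    def __init__(self, n_sub=64, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * n_sub, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, 2 * n_sub)

    def forward(self, h_ls):                      # h_ls: (batch, n_sub), complex
        x = torch.cat([h_ls.real, h_ls.imag], dim=-1)
        interaction = self.encoder(x)             # joint real/imag features
        recon = self.decoder(interaction)          # AE reconstruction target
        enhanced = torch.cat([x, interaction], dim=-1)
        return enhanced, recon

h = torch.randn(8, 64, dtype=torch.cfloat)         # toy LS estimates
enhanced, recon = ToyDENet()(h)
print(enhanced.shape)                              # torch.Size([8, 160])
```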
We study Friedel oscillations (FOs) in two-dimensional topological materials
with Mexican hat band dispersion, which attract great interest due to the
combination of their inherent non-trivial features, including the Van Hove
singularity, a doubly connected Fermi surface, non-trivial quantum-geometric
properties, and the presence of states with negative effective mass. These
factors are found to lead to a three-mode structure of the FOs. One of the
modes, arising from electron transitions between the Fermi contours, has an
unexpectedly large amplitude. The evolution of the amplitudes of all modes with
Fermi energy is largely determined by the interplay of three main factors:
intra-contour and inter-contour electron transitions, the quantum metric of the
basis states, and the electron-electron interaction. We traced the role of each
factor in the formation of the FO pattern and identified the corresponding
features of the FO evolution. | arXiv |
The tree edit distance (TED) between two rooted ordered trees with $n$ nodes
labeled from an alphabet $\Sigma$ is the minimum cost of transforming one tree
into the other by a sequence of valid operations consisting of insertions,
deletions and relabeling of nodes. The tree edit distance is a well-known
generalization of string edit distance and has been studied since the 1970s.
Years of steady improvements have led to an $O(n^3)$ algorithm [DMRW 2010].
Fine-grained complexity casts light onto the hardness of TED showing that a
truly subcubic time algorithm for TED implies a truly subcubic time algorithm
for All-Pairs Shortest Paths (APSP) [BGMW 2020]. Therefore, under the popular
APSP hypothesis, a truly subcubic time algorithm for TED cannot exist. However,
unlike many problems in fine-grained complexity for which conditional hardness
based on APSP also comes with equivalence to APSP, whether TED can be reduced
to APSP has remained unknown.
In this paper, we resolve this. Not only do we show that TED is fine-grained
equivalent to APSP, but our reduction is also tight enough that, combined with
the fastest APSP algorithm to date [Williams 2018], it gives the first ever
subcubic time algorithm for TED, running in $n^3/2^{\Omega(\sqrt{\log{n}})}$
time.
We also consider the unweighted tree edit distance problem in which the cost
of each edit is one. For unweighted TED, a truly subcubic algorithm is known
due to Mao [Mao 2022], later improved slightly by D\"{u}rr [D\"{u}rr 2023] to
run in $O(n^{2.9148})$. Their algorithm uses bounded monotone min-plus product
as a crucial subroutine, and the best running time for this product is
$\tilde{O}(n^{\frac{3+\omega}{2}})\leq O(n^{2.6857})$ (where $\omega$ is the
exponent of fast matrix multiplication). In this work, we close this gap and
give an algorithm for unweighted TED that runs in
$\tilde{O}(n^{\frac{3+\omega}{2}})$ time. | arXiv |
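For background, the min-plus (tropical) matrix product underlying both the APSP connection and the bounded monotone subroutine above is the following cubic-time operation (minimal Python sketch; the specialized subcubic algorithms cited in the abstract replace this naive loop):

```python
import numpy as np

def min_plus(A, B):
    """Naive O(n^3) min-plus product: C[i][j] = min_k A[i][k] + B[k][j].
    Repeated squaring of a graph's weight matrix under this product
    yields all-pairs shortest paths."""
    n = A.shape[0]
    C = np.full((n, n), np.inf)
    for i in range(n):
        for k in range(n):
            C[i] = np.minimum(C[i], A[i, k] + B[k])  # vectorized over j
    return C

A = np.array([[0, 3, np.inf],
              [np.inf, 0, 1],
              [2, np.inf, 0]], dtype=float)
print(min_plus(A, A))  # two-hop shortest-path distances
```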
During the COVID-19 crisis, mechanistic models have proven fundamental for
guiding evidence-based decision making. However, time-critical decisions in a
dynamically changing environment restrict the time available for modelers to
gather supporting evidence. As infectious disease dynamics are often
heterogeneous on a spatial or demographic scale, models should be resolved
accordingly. In addition, with a large number of potential interventions, all
scenarios can barely be computed on time, even when using supercomputing
facilities. We suggest combining complex mechanistic models with data-driven
surrogate models to allow for on-the-fly model adaptations by public health
experts. We build upon a spatially and demographically resolved infectious
disease model and train a graph neural network for data sets representing early
phases of the pandemic. The resulting networks reached an execution time of
less than a second, a significant speedup compared to the metapopulation
approach. The suggested approach yields potential for on-the-fly execution and,
thus, integration of disease dynamics models in low-barrier website
applications. For the approach to be used in decision-making, datasets with
larger variance will have to be considered. | arXiv |
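For orientation, one message-passing step of the kind of graph neural network used as a surrogate here looks as follows (plain NumPy sketch; the graph, feature sizes, and single-layer form are illustrative assumptions, not the trained network from the paper):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing step over a metapopulation graph.
    A : (n, n) adjacency (e.g., mobility between regions)
    H : (n, f) node features (e.g., compartment states per region)
    W : (f, f') learnable weights."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    A_norm = A / deg                       # row-normalized neighbor averaging
    return np.maximum(A_norm @ H @ W, 0)   # aggregate, transform, ReLU

rng = np.random.default_rng(0)
A = rng.random((5, 5)); H = rng.random((5, 4)); W = rng.random((4, 8))
print(gnn_layer(A, H, W).shape)  # (5, 8): updated per-region embeddings
```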
When training data are distributed across time or space, covariate shift
across fragments of training data biases cross-validation, compromising model
selection and assessment. We present \textit{Fragmentation-Induced
covariate-shift Remediation} ($FIcsR$), which minimizes an $f$-divergence
between a fragment's covariate distribution and that of the standard
cross-validation baseline. We show an equivalence with popular
importance-weighting methods. The method's numerical solution poses a
computational challenge owing to the overparametrized nature of a neural
network, and we derive a Fisher Information approximation. When accumulated
over fragments, this provides a global estimate of the amount of shift
remediation thus far needed, and we incorporate that as a prior via the
minimization objective. In the paper, we run extensive classification
experiments on multiple data classes, over $40$ datasets, and with data batched
over multiple sequence lengths. We extend the study to the $k$-fold
cross-validation setting through a similar set of experiments. An ablation
study exposes the method to varying amounts of shift and demonstrates slower
degradation with $FIcsR$ in place. The results are promising under all these
conditions, with accuracy improving on the batch and fold state of the art by
more than $5\%$ and $10\%$, respectively. | arXiv |
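The equivalence with importance weighting mentioned above can be illustrated with the standard density-ratio recipe (scikit-learn sketch; this is the generic classifier-based construction, not the paper's $f$-divergence objective or Fisher Information approximation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_fragment, X_baseline):
    """Estimate w(x) = p_baseline(x) / p_fragment(x) via a probabilistic
    classifier: the classic density-ratio trick for covariate shift."""
    X = np.vstack([X_fragment, X_baseline])
    y = np.r_[np.zeros(len(X_fragment)), np.ones(len(X_baseline))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X_fragment)[:, 1]     # P(baseline | x)
    return p / np.clip(1 - p, 1e-6, None)       # odds ratio = density ratio

rng = np.random.default_rng(0)
Xf = rng.normal(0.5, 1.0, size=(200, 3))        # shifted fragment
Xb = rng.normal(0.0, 1.0, size=(200, 3))        # baseline distribution
print(importance_weights(Xf, Xb)[:5])
```

Training on fragments reweighted this way approximates training under the baseline covariate distribution.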
Theoretically, Josephson junction (JJ) arrays can exhibit either a
superconducting or insulating state, separated by a quantum phase transition
(QPT). In this work, we analyzed published data on QPTs in three
one-dimensional arrays and two two-dimensional arrays using a recently
developed phenomenological model of QPTs. The model is based on the insight
that the scaled experimental data depend in a universal way on two
characteristic length scales of the system: the microscopic length scale $L_0$
from which the renormalization group flow starts, and the dephasing length,
$L_{\varphi}(T)$ as given by the distance travelled by system-specific
elementary excitations over the Planckian time. Our analysis reveals that the
data for all five arrays (both 1D and 2D) can be quantitatively and
self-consistently explained within the framework of interacting superconducting
plasmons. In this picture, $L_{\varphi}=v_p\hbar/k_B T$, and $L_0 \approx
\Lambda$, where $v_p$ is the speed of the plasmons and $\Lambda$ is the Coulomb
screening length of the Cooper pairs. We also observe that, in 1D arrays, the
transition is significantly shifted towards the insulating side compared to the
predictions of the sine-Gordon model. Finally, we discuss similarities and
differences with recent microwave studies of extremely long JJ chains, as well
as with the pair-breaking QPT observed in superconducting nanowires and films. | arXiv |
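For orientation, the dephasing length defined above is a one-line computation (Python; the plasmon speed is an arbitrary illustrative value):

```python
from scipy.constants import hbar, k  # J*s, J/K

def dephasing_length(v_p, T):
    """L_phi = v_p * hbar / (k_B * T): distance travelled by an elementary
    excitation over the Planckian time hbar / (k_B * T)."""
    return v_p * hbar / (k * T)

v_p = 1.0e6  # plasmon speed in m/s (illustrative)
for T in (0.05, 0.5, 5.0):  # Kelvin
    print(f"T = {T} K: L_phi = {dephasing_length(v_p, T) * 1e6:.3f} um")
```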
Detecting Beyond Standard Model (BSM) signals in high-energy particle
collisions presents significant challenges due to complex data and the need to
differentiate rare signal events from Standard Model (SM) backgrounds. This
study investigates the efficacy of deep learning models, specifically Deep
Neural Networks (DNNs) and Graph Neural Networks (GNNs), in classifying
particle collision events as either BSM signal or background. The research
utilized a dataset comprising 214,000 SM background and 10,755 BSM events. To
address class imbalance, an undersampling method was employed, resulting in
balanced classes. Three models were developed and compared: a DNN and two GNN
variants with different graph construction methods. All models demonstrated
high performance, achieving Area Under the Receiver Operating Characteristic
curve (AUC) values exceeding $94\%$. While the DNN model slightly outperformed
GNNs across various metrics, both GNN approaches showed comparable results
despite different graph structures. The GNNs' ability to explicitly capture
inter-particle relationships within events highlights their potential for BSM
signal detection. | arXiv |
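A minimal sketch of the undersampling step described above (NumPy; array shapes mirror the quoted event counts, features are random placeholders):

```python
import numpy as np

def undersample(X, y, seed=0):
    """Randomly drop majority-class rows until both classes are balanced.
    y: binary labels (1 = BSM signal, 0 = SM background)."""
    rng = np.random.default_rng(seed)
    sig, bkg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep = rng.choice(bkg, size=len(sig), replace=False)
    idx = rng.permutation(np.r_[sig, keep])
    return X[idx], y[idx]

X = np.random.randn(214_000 + 10_755, 20)            # placeholder features
y = np.r_[np.zeros(214_000), np.ones(10_755)].astype(int)
Xb, yb = undersample(X, y)
print(len(yb), yb.mean())  # 21510 events, 0.5 signal fraction
```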
Imagine having a system to control, knowing only that it belongs to a certain
class of dynamical systems. Would it not be amazing to simply plug in a
controller and have it work as intended? With the rise of in-context learning
and powerful architectures like Transformers, this might be possible, and we
want to show it. In this work, within the model reference framework, we
propose the first in-context learning-based approach to design a unique
contextual controller for an entire class of dynamical systems rather than
focusing on just a single instance. Our promising numerical results show the
possible advantages of the proposed paradigm, paving the way for a shift from
the "one-system-one-controller" control design paradigm to a new
"one-class-one-controller" logic. | arXiv |
Learning natural and diverse behaviors from human motion datasets remains
challenging in physics-based character control. Existing conditional
adversarial models often suffer from tight and biased embedding distributions
where embeddings from the same motion are closely grouped in a small area and
shorter motions occupy even less space. Our empirical observations indicate
that this limits the representational capacity and diversity under each skill.
An ideal latent space should be maximally packed by all motions' embedding
clusters. In this paper, we propose a skill-conditioned controller that learns
diverse skills with expressive variations. Our approach leverages the Neural
Collapse phenomenon, a natural outcome of the classification-based encoder, to
uniformly distribute cluster centers. We additionally propose a novel
Embedding Expansion technique to form stylistic embedding clusters for diverse
skills that are uniformly distributed on a hypersphere, maximizing the
representational area occupied by each skill and minimizing unmapped regions.
This maximally packed and uniformly distributed embedding space ensures that
embeddings within the same cluster generate behaviors conforming to the
characteristics of the corresponding motion clips, yet exhibiting noticeable
variations within each cluster. Compared to existing methods, our controller
not only generates high-quality, diverse motions covering the entire dataset
but also achieves superior controllability, motion coverage, and diversity
under each skill. Both qualitative and quantitative results confirm these
traits, enabling our controller to be applied to a wide range of downstream
tasks and serving as a cornerstone for diverse applications. | arXiv |
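The uniform hypersphere geometry described above can be made concrete with the simplex equiangular tight frame (ETF), the cluster-center configuration that Neural Collapse is known to converge to (NumPy sketch, used here only to illustrate "uniformly distributed on a hypersphere", not as the paper's exact construction):

```python
import numpy as np

def simplex_etf(n_classes, dim):
    """Return n_classes unit vectors in R^dim whose pairwise cosine
    similarity is the minimal possible constant -1/(n_classes - 1)."""
    assert dim >= n_classes
    M = np.eye(n_classes) - np.ones((n_classes, n_classes)) / n_classes
    # embed the centered simplex into dim dimensions via orthonormal columns
    U = np.linalg.qr(np.random.randn(dim, n_classes))[0]
    centers = U @ M
    return centers / np.linalg.norm(centers, axis=0)

C = simplex_etf(n_classes=5, dim=16)
print(np.round(C.T @ C, 3))  # 1 on the diagonal, -0.25 off-diagonal
```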
Federated Learning (FL) enables clients to train a joint model without
disclosing their local data. Instead, they share their local model updates with
a central server that moderates the process and creates a joint model. However,
FL is susceptible to a series of privacy attacks. Recently, the source
inference attack (SIA) has been proposed where an honest-but-curious central
server tries to identify exactly which client owns a specific data record. In
this work, we propose a defense against SIAs by using a trusted shuffler,
without compromising the accuracy of the joint model. We employ a combination
of unary encoding with shuffling, which can effectively blend all clients'
model updates, preventing the central server from inferring information about
each client's model update separately. In order to address the increased
communication cost of unary encoding we employ quantization. Our preliminary
experiments show promising results; the proposed mechanism notably decreases
the accuracy of SIAs without compromising the accuracy of the joint model. | arXiv |
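A toy version of the quantize / unary-encode / shuffle pipeline described above (Python; the level count, value bounds, and aggregation are illustrative assumptions about the mechanism, not its published parameters):

```python
import numpy as np

def unary_encode(value, n_levels=8, lo=-1.0, hi=1.0):
    """Quantize a scalar model-update coordinate into n_levels bins and
    emit a one-hot (unary-style) report."""
    level = int(np.clip((value - lo) / (hi - lo) * (n_levels - 1),
                        0, n_levels - 1))
    out = np.zeros(n_levels, dtype=int)
    out[level] = 1
    return out

rng = np.random.default_rng(0)
updates = rng.uniform(-1, 1, size=10)                # one scalar per client
reports = np.array([unary_encode(u) for u in updates])
rng.shuffle(reports)                                 # trusted shuffler step:
                                                     # destroys client linkage
print(reports.sum(axis=0))  # server sees only a histogram of quantized values
```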
Recovering impact parameter variations in multi-planet systems is an
effective approach for detecting non-transiting planets and refining planetary
mass estimates. Traditionally, two methodologies have been employed: the
Individual Fit, which fits each transit independently to analyze impact
parameter changes, and the Dynamical Fit, which simulates planetary dynamics to
match transit light curves. We introduce a new fitting method, Simultaneous
Impact Parameter Variation Analysis (SIPVA), which outperforms the Individual
Fit and is computationally more efficient than the Dynamical Fit. SIPVA
directly integrates a linear time-dependent model for impact parameters into
the Markov chain Monte Carlo (MCMC) algorithm by fitting all transits
simultaneously. We evaluate SIPVA and the Individual Fit on artificial systems
with varying log-likelihood ratios (LLRs) and find that SIPVA consistently
outperforms the Individual
Fit in recovery rates and accuracy. When applied to selected Kepler planetary
candidates exhibiting significant transit duration variations (TDVs), SIPVA
identifies significant impact parameter trends in 10 out of 16 planets. In
contrast, the Individual Fit does so in only 4. We also employ probabilistic
modeling to calculate the theoretical distribution of planets with significant
impact parameter variations across all observed Kepler systems and compare the
distribution of recovered candidates by the Individual Fit and Dynamical Fit
from previous work with our theoretical distribution. Our findings offer an
alternative framework for analyzing planetary transits, relying solely on
Bayesian inference without requiring prior assumptions about the planetary
system's dynamical architecture. | arXiv |
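A toy version of the linear time-dependent impact-parameter model at the heart of SIPVA (NumPy random-walk Metropolis sampler; the synthetic data, prior, proposal scales, and noise level are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
epochs = np.arange(12, dtype=float)                 # transit epoch numbers
b_true = 0.30 + 0.015 * epochs                      # linear drift b(E) = b0 + bdot*E
b_obs = b_true + rng.normal(0, 0.05, epochs.size)   # noisy per-transit values

def log_post(theta):
    b0, bdot = theta
    if not (0 <= b0 <= 1):                          # flat prior on transiting b0
        return -np.inf
    resid = b_obs - (b0 + bdot * epochs)
    return -0.5 * np.sum((resid / 0.05) ** 2)

theta, chain = np.array([0.5, 0.0]), []
for _ in range(20_000):                             # random-walk Metropolis
    prop = theta + rng.normal(0, [0.02, 0.002])
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
b0_s, bdot_s = np.array(chain[5000:]).T
print(f"b0 = {b0_s.mean():.3f}, db/dE = {bdot_s.mean():.4f}")
```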
This textbook introduces the basic concepts of the theory of causal fermion
systems, a recent approach to the description of fundamental physics. The
theory yields quantum mechanics, general relativity and quantum field theory as
limiting cases and is therefore a candidate for a unified physical theory. From
the mathematical perspective, causal fermion systems provide a general
framework for describing and analyzing non-smooth geometries and "quantum
geometries." The dynamics is described by a novel variational principle, the
causal action principle.
The book includes a detailed summary of the mathematical and physical
preliminaries. It explains the physical concepts behind the causal fermion
system approach from the basics. Moreover, all the mathematical objects and
structures are introduced step by step. The mathematical methods used for the
analysis of causal fermion systems and the causal action principle are
introduced in depth. Many examples and applications are worked out.
The textbook is addressed to master's and graduate students in mathematics or
physics. Furthermore, it serves as a reference work for researchers working in
the field. | arXiv |
Graph Anomaly Detection (GAD) aims to identify uncommon, deviated, or
suspicious objects within graph-structured data. Existing methods generally
focus on a single graph object type (node, edge, graph, etc.) and often
overlook the inherent connections among different object types of graph
anomalies. For instance, a money laundering transaction might involve an
abnormal account and the broader community it interacts with. To address this,
we present UniGAD, the first unified framework for detecting anomalies at node,
edge, and graph levels jointly. Specifically, we develop the Maximum Rayleigh
Quotient Subgraph Sampler (MRQSampler) that unifies multi-level formats by
transferring objects at each level into graph-level tasks on subgraphs. We
theoretically prove that MRQSampler maximizes the accumulated spectral energy
of subgraphs (i.e., the Rayleigh quotient) to preserve the most significant
anomaly information. To further unify multi-level training, we introduce a
novel GraphStitch Network to integrate information across different levels,
adjust the amount of sharing required at each level, and harmonize conflicting
training goals. Comprehensive experiments show that UniGAD outperforms both
existing GAD methods specialized for a single task and graph prompt-based
approaches for multiple tasks, while also providing robust zero-shot task
transferability. All codes can be found at
https://github.com/lllyyq1121/UniGAD. | arXiv |
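For reference, the spectral energy maximized by MRQSampler is the graph Rayleigh quotient; for a signal $x$ on a (sub)graph with Laplacian $L = D - A$ it is a two-line computation (NumPy sketch on an arbitrary toy graph):

```python
import numpy as np

def rayleigh_quotient(A, x):
    """R(x) = (x^T L x) / (x^T x) with L = D - A, the graph Laplacian.
    High values mean x varies sharply across edges (high-frequency,
    anomaly-like energy); low values mean x is smooth on the graph."""
    L = np.diag(A.sum(axis=1)) - A
    return (x @ L @ x) / (x @ x)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
smooth = np.array([1.0, 1.0, 1.0, 1.0])     # constant signal: R = 0
spiky  = np.array([1.0, -1.0, 1.0, -1.0])   # alternating signal: large R
print(rayleigh_quotient(A, smooth), rayleigh_quotient(A, spiky))
```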
As the integration of the Large Language Models (LLMs) into various
applications increases, so does their susceptibility to misuse, raising
significant security concerns. Numerous jailbreak attacks have been proposed to
assess the security defense of LLMs. Current jailbreak attacks mainly rely on
scenario camouflage, prompt obfuscation, prompt optimization, and iterative
prompt optimization to conceal malicious prompts. In particular, sequential
prompt chains in a single query can lead LLMs to focus on certain prompts while
ignoring others, facilitating context manipulation. This paper introduces
SequentialBreak, a novel jailbreak attack that exploits this vulnerability. We
discuss several scenarios, not limited to examples like Question Bank, Dialog
Completion, and Game Environment, where the harmful prompt is embedded within
benign ones that can fool LLMs into generating harmful responses. The distinct
narrative structures of these scenarios show that SequentialBreak is flexible
enough to adapt to various prompt formats beyond those discussed. Extensive
experiments demonstrate that SequentialBreak uses only a single query to
achieve a substantial gain in attack success rate over existing baselines
against both open-source and closed-source models. Through our research, we
highlight the urgent need for more robust and resilient safeguards to enhance
LLM security and prevent potential misuse. All the result files and website
associated with this research are available in this GitHub repository:
https://anonymous.4open.science/r/JailBreakAttack-4F3B/. | arXiv |
We show that two natural and a priori unrelated structures encapsulate the
same data, namely certain commutative and associative product structures and a
class of superintegrable Hamiltonian systems. More precisely, consider a
Euclidean space of dimension at least three, equipped with a commutative and
associative product structure that satisfies certain compatibility conditions.
We prove that such a product structure encapsulates precisely the conditions of
a so-called abundant structure. Such a structure provides the data needed to
construct a family of (maximally) superintegrable Hamiltonian systems of
second order. We prove that all abundant superintegrable Hamiltonian
systems on Euclidean space of dimension at least three arise in this way. As an
example, we present the Smorodinski-Winternitz I Hamiltonian system. | arXiv |
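For concreteness, the Smorodinski-Winternitz I system mentioned above is commonly written (here in its three-dimensional Euclidean form; the normalization of the coupling constants is a conventional choice, assumed for illustration) as

$$H = \frac{1}{2}\left(p_x^2+p_y^2+p_z^2\right) + \frac{\omega^2}{2}\left(x^2+y^2+z^2\right) + \frac{b_1}{x^2} + \frac{b_2}{y^2} + \frac{b_3}{z^2},$$

an isotropic harmonic oscillator with inverse-square terms, whose full set of independent second-order constants of motion makes it maximally superintegrable.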
Radio Frequency Fingerprinting (RFF) techniques allow a receiver to
authenticate a transmitter by analyzing the physical layer of the radio
spectrum. Although the vast majority of scientific contributions focus on
improving the performance of RFF considering different parameters and
scenarios, in this work, we consider RFF as an attack vector to identify and
track a target device.
We propose, implement, and evaluate HidePrint, a solution to prevent tracking
through RFF without affecting the quality of the communication link between the
transmitter and the receiver. HidePrint hides the transmitter's fingerprint
against an illegitimate eavesdropper by injecting controlled noise in the
transmitted signal. We evaluate our solution against state-of-the-art
image-based RFF techniques considering different adversarial models, different
communication links (wired and wireless), and different configurations. Our
results show that the injection of a Gaussian noise pattern with a standard
deviation of (at least) 0.02 prevents device fingerprinting in all the
considered scenarios, thus making the performance of the identification process
indistinguishable from the random guess while affecting the Signal-to-Noise
Ratio (SNR) of the received signal by only 0.1 dB. Moreover, we introduce
selective radio fingerprint disclosure, a new technique that allows the
transmitter to disclose the radio fingerprint to only a subset of intended
receivers. This technique allows the transmitter to regain anonymity, thus
preventing identification and tracking while allowing authorized receivers to
authenticate the transmitter without affecting the quality of the transmitted
signal. | arXiv |
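A minimal numerical illustration of the noise-injection idea (NumPy; unit-power QPSK-like symbols, the $\sigma = 0.02$ value from the abstract, and an assumed pre-existing channel noise level):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
channel_noise = rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n)

def snr_db(sig, noise):
    return 10 * np.log10(np.mean(np.abs(sig) ** 2) / np.mean(np.abs(noise) ** 2))

# HidePrint-style fingerprint masking: controlled Gaussian noise at the TX
sigma = 0.02
mask = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)

print(f"SNR without mask: {snr_db(signal, channel_noise):.2f} dB")
print(f"SNR with mask:    {snr_db(signal, channel_noise + mask):.2f} dB")
```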
We describe a method for constructing $n$-orthogonal coordinate systems in
constant curvature spaces. The construction proposed is a modification of
Krichever's method for producing orthogonal curvilinear coordinate systems in
the $n$-dimensional Euclidean space. To demonstrate how this method works, we
construct examples of orthogonal coordinate systems on the two-dimensional
sphere and the hyperbolic plane, in the case when the spectral curve is
reducible and all irreducible components are isomorphic to a complex projective
line. | arXiv |
Compared to rigid hands, underactuated compliant hands offer greater
adaptability to object shapes, provide stable grasps, and are often more
cost-effective. However, they introduce uncertainties in hand-object
interactions due to their inherent compliance and lack of precise finger
proprioception as in rigid hands. These limitations become particularly
significant when performing contact-rich tasks like insertion. To address these
challenges, additional sensing modalities are required to enable robust
insertion capabilities. This letter explores the essential sensing requirements
for successful insertion tasks with compliant hands, focusing on the role of
visuotactile perception. We propose a simulation-based multimodal policy
learning framework that leverages all-around tactile sensing and an extrinsic
depth camera. A transformer-based policy, trained through a teacher-student
distillation process, is successfully transferred to a real-world robotic
system without further training. Our results emphasize the crucial role of
tactile sensing in conjunction with visual perception for accurate
object-socket pose estimation, successful sim-to-real transfer and robust task
execution. | arXiv |
Sound event localization and detection (SELD) has seen substantial
advancements through learning-based methods. These systems, typically trained
from scratch on specific datasets, have shown considerable generalization
capabilities. Recently, deep neural networks trained on large-scale datasets
have achieved remarkable success in the sound event classification (SEC) field,
prompting an open question of whether these advancements can be extended to
develop general-purpose SELD models. In this paper, leveraging the power of
pre-trained SEC models, we propose pre-trained SELD networks (PSELDNets) on
large-scale synthetic datasets. These synthetic datasets, generated by
convolving sound events with simulated spatial room impulse responses (SRIRs),
contain 1,167 hours of audio clips with an ontology of 170 sound classes. These
PSELDNets are transferred to downstream SELD tasks. When we adapt PSELDNets to
specific scenarios, particularly in low-resource data cases, we introduce a
data-efficient fine-tuning method, AdapterBit. PSELDNets are evaluated on a
synthetic test set using collected SRIRs from the TAU Spatial Room Impulse
Response Database (TAU-SRIR DB) and achieve satisfactory performance. We also
conduct experiments to validate the transferability of PSELDNets to three
publicly
available datasets and our own collected audio recordings. Results demonstrate
that PSELDNets surpass state-of-the-art systems across all publicly available
datasets. Given the need for direction-of-arrival estimation, SELD generally
relies on sufficient multi-channel audio clips. However, incorporating the
AdapterBit, PSELDNets show more efficient adaptability to various tasks using
minimal multi-channel or even just monophonic audio clips, outperforming the
traditional fine-tuning approaches. | arXiv |
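The synthetic-data generation step described above reduces, per sound event and microphone, to convolving a dry event clip with a spatial room impulse response (SciPy sketch; random arrays stand in for real event clips and measured SRIRs):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 24_000
event = np.random.randn(fs)                    # 1 s dry sound event (placeholder)
srir = np.random.randn(4, fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))
# srir: (n_mics, taps) simulated spatial room impulse response with decay

# convolve the event with each microphone's impulse response
spatialized = np.stack([fftconvolve(event, h)[:fs] for h in srir])
print(spatialized.shape)  # (4, 24000): one spatialized channel per microphone
```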
COVID-19 is extremely contagious, and its rapid growth has drawn attention
towards its early diagnosis. Early diagnosis of COVID-19 enables healthcare
professionals and government authorities to break the chain of transmission
and flatten the epidemic curve. With the number of cases accelerating across
the developed world, COVID-19-induced viral pneumonia is a big challenge. The
overlap of COVID-19 cases with viral pneumonia and other lung infections,
combined with limited datasets and long training hours, is a serious problem
to address. A limited amount of data often results in over-fitted models that
do not generalize. To fill this gap, we propose a GAN-based approach to
synthesize images that are later fed into deep learning models to classify
images of COVID-19, Normal, and Viral Pneumonia. Specifically, a customized
Wasserstein GAN is proposed to generate 19% more chest X-ray images compared
to the real images. This expanded dataset is then used to train four deep
learning models: VGG-16, ResNet-50, GoogLeNet and MNAST. The results showed
that the expanded dataset enabled the deep learning models to deliver high
classification accuracies. In particular, VGG-16 achieved the highest accuracy
of 99.17% among the four proposed schemes, while ResNet-50, GoogLeNet and
MNAST delivered testing accuracies of 93.9%, 94.49% and 97.75%, respectively.
The efficiency of these models is then compared with that of state-of-the-art
models on the basis of accuracy. Further, our proposed models can be applied
to address the issue of scant datasets for any image analysis problem. | arXiv |
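For context, a compact version of the Wasserstein GAN training loop (PyTorch; this is the canonical weight-clipping recipe of Arjovsky et al. with illustrative layer sizes, not the paper's customized WGAN):

```python
import torch
import torch.nn as nn

# Tiny WGAN on flattened 64x64 grayscale images; sizes are illustrative.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64*64), nn.Tanh())
C = nn.Sequential(nn.Linear(64*64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

def train_step(real):                          # real: (batch, 64*64) in [-1, 1]
    for _ in range(5):                         # critic steps per generator step
        z = torch.randn(real.size(0), 100)
        loss_c = C(G(z).detach()).mean() - C(real).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in C.parameters():               # enforce (approximate) 1-Lipschitz
            p.data.clamp_(-0.01, 0.01)
    z = torch.randn(real.size(0), 100)
    loss_g = -C(G(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()

print(train_step(torch.rand(8, 64*64) * 2 - 1))
```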
On a connected finite graph, we propose an evolution of weights including
Ollivier's Ricci flow as a special case. During the evolution process, on each
edge, the speed of change of weight is exactly the difference between the
Wasserstein distance of two associated probability measures and a certain
graph distance. Here the probability measure may be chosen as an $\alpha$-lazy
one-step random walk, an $\alpha$-lazy two-step random walk, or a general
probability measure. Based on the ODE theory, we show that the initial value
problem has a unique global solution.
A discrete version of the above evolution is applied to the problem of
community detection. Our algorithm is based on such a discrete evolution, where
probability measures are chosen as $\alpha$-lazy one-step random walk and
$\alpha$-lazy two-step random walk, respectively. Note that the latter measure
has not been used in previous works [2, 16, 20, 23]. Here, as in [20], only one
surgery needs to be performed after the last iteration. Moreover, our algorithm
is much simpler than those of [2, 16, 20], which were all based on Lin-Lu-Yau's
Ricci curvature. The code is available at
https://github.com/mjc191812/Evolution-of-weights-on-a-connected-finite-graph. | arXiv |
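A toy discretization of the weight evolution (Python; assumes networkx and the POT optimal-transport package, and the small graph, step size, $\alpha$, and sign convention below are illustrative assumptions about the scheme, not the paper's algorithm):

```python
import networkx as nx
import numpy as np
import ot  # POT: Python Optimal Transport

def lazy_walk(G, x, alpha=0.5):
    """alpha-lazy one-step random walk measure centered at node x."""
    mu = {v: 0.0 for v in G}
    mu[x] = alpha
    w = {y: G[x][y]["weight"] for y in G[x]}
    total = sum(w.values())
    for y, wy in w.items():
        mu[y] += (1 - alpha) * wy / total
    return np.array([mu[v] for v in G])

def evolve(G, step=0.1, iters=20):
    nodes = list(G)
    for _ in range(iters):
        d = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
        M = np.array([[d[u][v] for v in nodes] for u in nodes])  # cost matrix
        for x, y in G.edges:
            W1 = ot.emd2(lazy_walk(G, x), lazy_walk(G, y), M)
            # dw/dt = W1(mu_x, mu_y) - d(x, y); sign convention assumed
            G[x][y]["weight"] = max(G[x][y]["weight"] + step * (W1 - d[x][y]),
                                    1e-6)

G = nx.karate_club_graph()
nx.set_edge_attributes(G, 1.0, "weight")
evolve(G)
print(sorted(nx.get_edge_attributes(G, "weight").values())[:5])
```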
Purpose: This study aims to assess the accuracy of degree adaptive strategies
in the context of incompressible Navier-Stokes flows using the high order
hybridisable discontinuous Galerkin (HDG) method.
Design/methodology/approach: The work presents a series of numerical examples
to show the inability of standard degree adaptive processes to accurately
capture aerodynamic quantities of interest, in particular the drag. A new
conservative
projection is proposed and the results between a standard degree adaptive
procedure and the adaptive process enhanced with this correction are compared.
The examples involve two transient problems where flow vortices or a gust needs
to be accurately propagated over long distances.
Findings: The loss of accuracy is traced to the projection of the solution
onto polynomials with a lower degree. Due to the coupling of velocity-pressure
in incompressible flows, the violation of the
incompressibility constraint leads to inaccurate pressure fields in the wake
that have a sizeable effect on the drag. The new conservative projection
proposed is found to remove all the numerical artefacts shown by the standard
adaptive process.
Originality/value: This work proposes a new conservative projection for the
degree adaptive process. The projection does not introduce a significant
overhead because it requires solving an element-by-element problem, and only
for those elements where the adaptive process lowers the degree of
approximation. Numerical results show that with the proposed projection
non-physical oscillations in the drag disappear and the results are in good
agreement with reference solutions. | arXiv |
Let $Z$ be an abelian group, $ x \in Z$, and $[x] = \{ y : \langle x \rangle
= \langle y \rangle \}$. A graph is called integral if all its eigenvalues are
integers. It is known that a Cayley graph is integral if and only if its
connection set can be expressed as a union of the sets $[x]$. In this paper,
we determine an algebraic formula for the eigenvalues of the integral Cayley
graph when the connection set is $[x]$. This formula involves an analogue of
the M\"{o}bius function. | arXiv |
Context-free language (CFL) reachability is a standard approach in static
analyses, where the analysis question is phrased as a language reachability
problem on a graph $G$ with respect to a CFL $L$. While CFLs lack the expressiveness needed
for high precision, common formalisms for context-sensitive languages are such
that the corresponding reachability problem is undecidable. Are there useful
context-sensitive language-reachability models for static analysis?
In this paper, we introduce Multiple Context-Free Language (MCFL)
reachability as an expressive yet tractable model for static program analysis.
MCFLs form an infinite hierarchy of mildly context-sensitive languages
parameterized by a dimension $d$ and a rank $r$. We show the utility of MCFL
reachability by developing a family of MCFLs that approximate interleaved Dyck
reachability, a common but undecidable static analysis problem.
We show that MCFL reachability can be computed in $O(n^{2d+1})$ time on a graph
of $n$ nodes when $r=1$, and $O(n^{d(r+1)})$ time when $r>1$. Moreover, we show
that when $r=1$, the membership problem has a lower bound of $n^{2d}$ based on
the Strong Exponential Time Hypothesis, while reachability for $d=1$ has a
lower bound of $n^{3}$ based on the combinatorial Boolean Matrix Multiplication
Hypothesis. Thus, for $r=1$, our algorithm is optimal within a factor $n$ for
all levels of the hierarchy based on $d$.
We implement our MCFL reachability algorithm and evaluate it by
underapproximating interleaved Dyck reachability for a standard taint analysis
for Android. Used alongside existing overapproximate methods, MCFL reachability
discovers all tainted information on 8 out of 11 benchmarks, and confirms
$94.3\%$ of the reachable pairs reported by the overapproximation on the
remaining 3. To our knowledge, this is the first report of high and provable
coverage for this challenging benchmark set. | arXiv |
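For background, the classical CFL-reachability baseline that MCFL reachability generalizes can be computed with the textbook worklist algorithm (Python sketch for a Dyck-1 grammar; the grammar encoding and graph are illustrative):

```python
from collections import deque

def cfl_reachability(nodes, edges, binary, nullable):
    """Worklist CFL reachability for a grammar in (near-)CNF.
    binary maps (B, C) -> set of A for rules A -> B C; nullable lists
    nonterminals with A -> epsilon (seeded as self-loops).
    edges are (u, terminal_label, v). O(n^3) for a fixed grammar."""
    rel, work = set(), deque()
    def add(fact):
        if fact not in rel:
            rel.add(fact); work.append(fact)
    for u, a, v in edges:
        add((u, a, v))
    for u in nodes:
        for A in nullable:
            add((u, A, u))
    while work:
        u, X, v = work.popleft()
        for (w, Y, z) in list(rel):
            if z == u:                       # w -Y-> u, then u -X-> v
                for A in binary.get((Y, X), ()):
                    add((w, A, v))
            if w == v:                       # u -X-> v, then v -Y-> z
                for A in binary.get((X, Y), ()):
                    add((u, A, z))
    return rel

# Dyck-1 grammar: S -> ( T, T -> S ), S -> S S, S -> epsilon
binary = {("(", "T"): {"S"}, ("S", ")"): {"T"}, ("S", "S"): {"S"}}
path = [(0, "(", 1), (1, "(", 2), (2, ")", 3), (3, ")", 4)]
rel = cfl_reachability(range(5), path, binary, nullable=["S"])
print(sorted((u, v) for (u, X, v) in rel if X == "S" and u != v))
# [(0, 4), (1, 3)] -- the balanced-parenthesis pairs
```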
In this paper, we derive a new Kalman filter with probabilistic data
association between measurements and states. We formulate a variational
inference problem to approximate the posterior density of the state conditioned
on the measurement data. We view the unknown data association as a latent
variable and apply Expectation Maximization (EM) to obtain a filter with an
update step in the same form as the Kalman filter but with an expanded
measurement vector of all potential associations. We show that the association
probabilities can
be computed as permanents of matrices with measurement likelihood entries. We
also propose an ambiguity check that associates only a subset of ambiguous
measurements and states probabilistically, thus reducing the association time
and preventing low-probability measurements from harming the estimation
accuracy. Experiments in simulation show that our filter achieves lower
tracking errors than the well-established joint probabilistic data association
filter (JPDAF), while running at a comparable rate. We also demonstrate the
effectiveness of our filter in multi-object tracking (MOT) on multiple
real-world datasets, including MOT17, MOT20, and DanceTrack. We achieve better
higher order tracking accuracy (HOTA) than previous Kalman-filter methods and
remain real-time. Associating only bounding boxes without deep features or
velocities, our method ranks top-10 on both MOT17 and MOT20 in terms of HOTA.
Given offline detections, our algorithm tracks at 250+ fps on a single laptop
CPU. Code is available at https://github.com/hwcao17/pkf. | arXiv |
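The permanent computation mentioned above can be illustrated directly (NumPy; brute force over permutations is exponential, so this toy is for intuition only, and the 3x3 likelihoods are made up):

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Brute-force matrix permanent: like the determinant but with all
    signs +1. With measurement-likelihood entries, each term is the
    likelihood of one complete measurement-to-state association."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# toy 3 states x 3 measurements likelihood matrix
L = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.2, 0.9]])
# probability that measurement 0 is associated with state 0: the entry
# times the permanent of the reduced matrix, normalized by permanent(L)
M = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
p_00 = L[0, 0] * permanent(M) / permanent(L)
print(f"P(state 0 <-> measurement 0) = {p_00:.3f}")
```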
We define and study geometric versions of the Benoist limit cone and matrix
joint spectrum, which we call the translation cone and the joint translation
spectrum, respectively. These new notions allow us to generalize the study of
embeddings into products of rank-one simple Lie groups and to compare group
actions on different metric spaces, quasi-morphisms, Anosov representations and
many other natural objects of study.
We identify the joint translation spectrum with the image of the gradient
function of a corresponding Manhattan manifold: a higher-dimensional version
of the well-known and much-studied Manhattan curve. As a consequence, we
deduce many properties of the spectrum. For example, we show that it is given
by the closure
of the set of all possible drift vectors associated to finitely supported,
symmetric, admissible random walks on the associated group. | arXiv |
Cardinality sketches are compact data structures for representing sets or
vectors, enabling efficient approximation of their cardinality (or the number
of nonzero entries). These sketches are space-efficient, typically requiring
only logarithmic storage relative to input size, and support incremental
updates, allowing for dynamic modifications. A critical property of many
cardinality sketches is composability, meaning that the sketch of a union of
sets can be computed from individual sketches. Existing designs typically
provide strong statistical guarantees, accurately answering an exponential
number of queries in terms of sketch size $k$. However, these guarantees
degrade to quadratic in $k$ when queries are adaptive and may depend on
previous responses.
Prior works on statistical queries (Steinke and Ullman, 2015) and specific
MinHash cardinality sketches (Ahmadian and Cohen, 2024) established that the
quadratic bound on the number of adaptive queries is, in fact, unavoidable. In
this work, we develop a unified framework that generalizes these results across
broad classes of cardinality sketches. We show that any union-composable
sketching map is vulnerable to attack with $\tilde{O}(k^4)$ queries and, if the
sketching map is also monotone (as for MinHash and statistical queries), we
obtain a tight bound of $\tilde{O}(k^2)$ queries. Additionally, we demonstrate
that linear sketches over the reals $\mathbb{R}$ and fields $\mathbb{F}_p$ can
be attacked using $\tilde{O}(k^2)$ adaptive queries, which is optimal and
strengthens some of the recent results by Gribelyuk et al. (2024), which
required a larger polynomial number of rounds for such matrices. | arXiv |
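For background, the MinHash-style sketches discussed above can be illustrated with a k-minimum-values (KMV) sketch, which is union-composable in exactly the sense used here (Python sketch; the hash and parameters are illustrative):

```python
import hashlib

K = 256

def kmv_sketch(items, k=K):
    """k-minimum-values cardinality sketch: keep the k smallest hash
    values, each mapped into [0, 1)."""
    hs = sorted({int(hashlib.sha1(str(x).encode()).hexdigest(), 16) / 2**160
                 for x in items})
    return hs[:k]

def merge(s1, s2, k=K):
    """Union-composability: the sketch of a union is computable from the
    individual sketches."""
    return sorted(set(s1) | set(s2))[:k]

def estimate(sketch, k=K):
    if len(sketch) < k:
        return float(len(sketch))      # saw every distinct value exactly
    return (k - 1) / sketch[k - 1]     # classic KMV estimator

a = kmv_sketch(range(50_000))
b = kmv_sketch(range(25_000, 75_000))
print(round(estimate(merge(a, b))))    # approx 75,000 distinct items
```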
In this paper, we study the partial data inverse problem for nonlinear
magnetic Schr\"odinger equations. We show that the knowledge of the
Dirichlet-to-Neumann map, measured on an arbitrary part of the boundary,
determines the time-dependent linear coefficients, electric and magnetic
potentials, and nonlinear coefficients, provided that the divergence of the
magnetic potential is given. Additionally, we also investigate both the forward
and inverse problems for the linear magnetic Schr\"odinger equation with a
time-dependent leading term. In particular, all coefficients are uniquely
recovered from boundary data. | arXiv |
Transcranial direct current stimulation (tDCS) has emerged as a promising
non-invasive therapeutic intervention for major depressive disorder (MDD), yet
its effects on neural mechanisms remain incompletely understood. This study
investigates the impact of tDCS in individuals with MDD using resting-state EEG
data and network neuroscience to analyze functional connectivity. We examined
power spectral density (PSD) changes and functional connectivity (FC) patterns
across theta, alpha, and beta bands before and after tDCS intervention. A
notable aspect of this research involves the modification of the binarizing
threshold algorithm to assess functional connectivity networks, facilitating a
meaningful comparison at the group level. Our analysis using optimal threshold
binarization techniques revealed significant modifications in network topology,
particularly evident in the beta band, indicative of reduced randomization or
enhanced small-worldness after tDCS. Furthermore, the hubness analysis
identified specific brain regions, notably the dorsolateral prefrontal cortex
(DLPFC) regions across all frequency bands, exhibiting increased functional
connectivity, suggesting their involvement in the antidepressant effects of
tDCS. Notably, tDCS intervention transformed the dispersed high connectivity
into localized connectivity and increased left-sided asymmetry across all
frequency bands. Overall, this study provides valuable insights into the
effects of tDCS on neural mechanisms in MDD, offering a potential direction for
further research and therapeutic development in the field of neuromodulation
for mental health disorders. | arXiv |
Collision-resistant, cryptographic hash (CRH) functions have long been an
integral part of providing security and privacy in modern systems. Certain
constructions of zero-knowledge proof (ZKP) protocols aim to utilize CRH
functions to perform cryptographic hashing. Standard CRH functions, such as
SHA2, are inefficient when employed in the ZKP domain, thus calling for
ZK-friendly hashes, which are CRH functions built with ZKP efficiency in mind.
The most mature ZK-friendly hash, MiMC, presents a block cipher and hash
function with a simple algebraic structure that is well-suited, due to its
achieved security and low complexity, for ZKP applications. Although
ZK-friendly hashes have improved the performance of ZKP generation in software,
the underlying computation of ZKPs, including CRH functions, must be optimized
on hardware to enable practical applications. The challenge we address in this
work is determining how to efficiently incorporate ZK-friendly hash functions,
such as MiMC, into hardware accelerators, thus enabling more practical
applications. In this work, we introduce AMAZE, a highly hardware-optimized
open-source framework for computing the MiMC block cipher and hash function.
Our solution has been primarily directed at resource-constrained edge devices;
consequently, we provide several implementations of MiMC with varying power,
resource, and latency profiles. Our extensive evaluations show that the
AMAZE-powered implementation of MiMC outperforms standard CPU implementations
by more than 13$\times$. In all settings, AMAZE enables efficient ZK-friendly
hashing on resource-constrained devices. Finally, we highlight AMAZE's
underlying open-source arithmetic backend as part of our end-to-end design,
thus allowing developers to utilize the AMAZE framework for custom ZKP
applications. | arXiv |
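For background, the simple algebraic structure of MiMC reduces to iterating a cubing round over a prime field. The sketch below (Python) uses a tiny toy field and a common hash construction; the field, round constants, and compression mode are illustrative assumptions, not AMAZE's production parameters:

```python
import random

# Toy field chosen so that gcd(3, P-1) = 1, making x -> x^3 a permutation;
# production systems use large ZK-friendly prime fields instead.
P = 65537
N_ROUNDS = 11  # roughly ceil(log_3 P) rounds, per the MiMC design
random.seed(0)
CONSTANTS = [0] + [random.randrange(P) for _ in range(N_ROUNDS - 1)]

def mimc_encrypt(x, k):
    """MiMC round function: x <- (x + k + c_i)^3 mod P, plus a final key
    addition -- the low-complexity algebra that makes MiMC ZK-friendly."""
    for c in CONSTANTS:
        x = pow((x + k + c) % P, 3, P)
    return (x + k) % P

def mimc_hash(left, right):
    """A Miyaguchi-Preneel-style 2-to-1 compression built from the cipher
    (one common construction; an assumption, not AMAZE's exact mode)."""
    return (mimc_encrypt(right, left) + left + right) % P

print(mimc_hash(123, 456))
```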
The efficient scheduling of multi-task jobs across multiprocessor systems has
become increasingly critical with the rapid expansion of computational systems.
This challenge, known as Multiprocessor Multitask Scheduling (MPMS), is
essential for optimizing the performance and scalability of applications in
fields such as cloud computing and deep learning. In this paper, we study the
MPMS problem under both deterministic and stochastic models, where each job is
composed of multiple tasks and can only be completed when all its tasks are
finished. We introduce $\mathsf{NP}$-$\mathsf{SRPT}$, a non-preemptive variant
of the Shortest Remaining Processing Time (SRPT) algorithm, designed to
accommodate scenarios with non-preemptive tasks. Our algorithm achieves a
competitive ratio of $\ln \alpha + \beta + 1$ for minimizing response time,
where $\alpha$ represents the ratio of the largest to the smallest job
workload, and $\beta$ captures the ratio of the largest non-preemptive task
workload to the smallest job workload. We further establish that this
competitive ratio is order-optimal when the number of processors is fixed. For
stochastic systems modeled as M/G/N queues, where job arrivals follow a Poisson
process and task workloads are drawn from a general distribution, we prove that
$\mathsf{NP}$-$\mathsf{SRPT}$ achieves asymptotically optimal mean response
time as the traffic intensity $\rho$ approaches $1$, assuming the task size
distribution has finite support. Moreover, the asymptotic optimality extends to
cases with infinite task size distributions under mild probabilistic
assumptions, including the standard M/M/N model. Experimental results validate
the effectiveness of $\mathsf{NP}$-$\mathsf{SRPT}$, demonstrating its
asymptotic optimality in both theoretical and practical settings. | arXiv |
Video geolocalization is a crucial problem in current times. Given just a
video, ascertaining where it was captured from can have a plethora of
advantages. The problem of worldwide geolocalization has been tackled before,
but only using the image modality. Its video counterpart remains relatively
unexplored. Meanwhile, video geolocalization has also garnered some attention
in the recent past, but the existing methods are all restricted to specific
regions. This motivates us to explore the problem of video geolocalization at a
global scale. Hence, we propose a novel problem of worldwide video
geolocalization with the objective of hierarchically predicting the correct
city, state/province, country, and continent, given a video. However, no large
scale video datasets that have extensive worldwide coverage exist, to train
models for solving this problem. To this end, we introduce a new dataset,
CityGuessr68k, comprising 68,269 videos from 166 cities all over the world.
We also propose a novel baseline approach to this problem, by designing a
transformer-based architecture comprising an elegant Self-Cross Attention
module for incorporating scenes as well as a TextLabel Alignment strategy for
distilling knowledge from textlabels in feature space. To further enhance our
location prediction, we also utilize soft-scene labels. Finally we demonstrate
the performance of our method on our new dataset as well as on Mapillary (MSLS).
Our code and datasets are available at: https://github.com/ParthPK/CityGuessr | arXiv |
We use multi-regional input-output analysis to calculate the paid labour,
energy, emissions, and material use required to provide basic needs for all
people. We calculate two different low-consumption scenarios, using the UK as a
case study: (1) a "decent living" scenario, which includes only the bare
necessities, and (2) a "good life" scenario, which is based on the minimum
living standards demanded by UK residents. We compare the resulting footprints
to the current footprint of the UK, and to the footprints of the US, China,
India, and a global average. Labour footprints are disaggregated by sector,
skill level, and region of origin.
We find that both low-consumption scenarios would still require an
unsustainable amount of labour and resources at the global scale. The decent
living scenario would require a 26-hour working week, and on a per capita
basis, 89 GJ of energy use, 5.9 tonnes of emissions, and 5.7 tonnes of used
materials per year. The more socially sustainable good life scenario would
require a 53-hour working week, 165 GJ of energy use, 9.9 tonnes of emissions,
and 11.5 tonnes of used materials per capita. Both scenarios represent
substantial reductions from the UK's current labour footprint of 68 hours per
week, which the UK is only able to sustain by importing a substantial portion
of its labour from other countries. We conclude that reducing consumption to
the level of basic needs is not enough to achieve either social or
environmental sustainability. Dramatic improvements in provisioning systems are
also required. | arXiv |
We consider non-Hermitian tight-binding one-dimensional Hamiltonians and show
that imposing a certain symmetry causes all eigenvalues to pair up and the
corresponding eigenstates to coalesce in pairs. This Pairwise Coalescence (PC)
is an enhanced version of an exceptional point -- the complete spectrum pairs
up, not just one pair of eigenstates. The symmetry is that of reflection
excluding the central two sites, and allowing flipping of non-reciprocal
hoppings (``generalized off-center reflection symmetry''). Two simple examples
of PC exist in the literature -- our construction encompasses these examples
and extends them to a vast class of Hamiltonians. We study several families of
such Hamiltonians, extend to cases of full-spectrum higher-order coalescences,
and show how the PC point corresponds to amplified non-orthogonality of the
eigenstates and enhanced loss of norm in time evolution. | arXiv |
This study focuses on Intelligent Fault Diagnosis (IFD) in rotating machinery
utilizing a single microphone and a data-driven methodology, effectively
diagnosing 42 classes of fault types and severities. The research leverages
sound data from the imbalanced MaFaulDa dataset, aiming to strike a balance
between high performance and low resource consumption. The testing phase
encompassed a variety of configurations, including sampling, quantization,
signal normalization, silence removal, Wiener filtering, data scaling,
windowing, augmentation, and classifier tuning using XGBoost. Through the
analysis of time, frequency, mel-frequency, and statistical features, we
achieved an impressive accuracy of 99.54% and an F-Beta score of 99.52% with
just 6 boosting trees at an 8 kHz, 8-bit configuration. Moreover, when
utilizing only MFCCs along with their first- and second-order deltas, we
recorded an accuracy of 97.83% and an F-Beta score of 97.67%. Lastly, by
implementing a greedy wrapper approach, we obtained a remarkable accuracy of
96.82% and an F-Beta score of 98.86% using 50 selected features, nearly all of
which were first- and second-order deltas of the MFCCs. | arXiv |
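A minimal sketch of the MFCC-plus-deltas front end described above (Python with librosa; the file path, frame parameters, and pooling are illustrative assumptions):

```python
import librosa
import numpy as np

# Load a machine-sound clip at the low-resource 8 kHz configuration and
# build MFCCs with first- and second-order deltas.
y, sr = librosa.load("machine_sound.wav", sr=8000)   # placeholder path
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
d1 = librosa.feature.delta(mfcc, order=1)            # first-order deltas
d2 = librosa.feature.delta(mfcc, order=2)            # second-order deltas
features = np.concatenate([mfcc, d1, d2])            # (39, n_frames)

# summarize frames into one fixed-length vector for the XGBoost classifier
vector = np.concatenate([features.mean(axis=1), features.std(axis=1)])
print(vector.shape)                                  # (78,)
```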
This paper introduces the Smooth Zone Barrier Lyapunov Function (s-ZBLF) for
output and full-state constrained nonlinear control systems. Unlike traditional
BLF methods, where control effort continuously increases as the state moves
toward the constraint boundaries, the s-ZBLF method keeps the control effort
nearly zero near the origin, with a more aggressive increase as the system
approaches the boundary. However, unlike previous works where control effort
was zero within a predefined safe region around the origin, the s-ZBLF
overcomes the disadvantage of discontinuous control activation by providing a
smooth, gradual increase in control effort as the state nears the constraints.
This smooth transition improves continuity in the control response and enhances
stability by reducing chattering. Additionally, the s-ZBLF provides the
advantage of minimal control effort in regions far from the constraints,
reducing energy consumption and actuator wear. Two forms of the s-ZBLF,
logarithmic-based and rational-based, are presented. Theoretical analysis
guarantees that all system states remain within the defined constraints,
ensuring boundedness and stability of the closed-loop system. Simulation
results validate the effectiveness of the proposed method in handling
constrained nonlinear systems. | arXiv |
Coronal Mass Ejections (CMEs) erupting from the host star are expected to
have effects on the atmospheric erosion processes of the orbiting planets. For
planets with a magnetosphere, the embedded magnetic field in the CMEs is
thought to be the most important parameter to affect planetary mass loss. In
this work, we investigate the effect of different magnetic field structures of
stellar CMEs on the atmosphere of a hot Jupiter with a dipolar magnetosphere.
We use a time-dependent 3D radiative magnetohydrodynamics (MHD) atmospheric
escape model that self-consistently models the outflow from the hot Jupiter's
magnetosphere and its interaction with stellar CMEs. For our study, we consider
three configurations of magnetic field embedded in stellar CMEs -- (a)
northward $B_z$ component, (b) southward $B_z$ component, and (c) radial
component. We find that both the CMEs with northward $B_z$ component and
southward $B_z$ component increase the planetary mass-loss rate when the CME
arrives from the stellar side, with the mass-loss rate remaining higher for the
CME with northward $B_z$ component until it arrives at the opposite side.
largest magnetopause is found for the CME with a southward $B_z$ component when
the dipole and the CME magnetic field have the same direction. We also find
that during the passage of a CME, the planetary magnetosphere goes through
three distinct changes - (1) compressed magnetosphere, (2) enlarged
magnetosphere, and (3) relaxed magnetosphere for all three considered CME
configurations. We compute synthetic Ly-$\alpha$ transits at different times
during the passage of the CMEs. The synthetic Ly-$\alpha$ transit absorption
generally increases when the CME is in interaction with the planet for all
three magnetic configurations. The maximum Ly-$\alpha$ absorption is found for
the radial CME case when the magnetosphere is the most compressed. | arXiv |
We introduce stationary generalized Bratteli diagrams $B$ which are
represented as the union of countably many classical Pascal-Bratteli diagrams.
We describe all ergodic invariant measures on $B$. We show that there exist
orders which produce no infinite minimal or maximal paths and the corresponding
Vershik map is a homeomorphism. We also describe orders that generate
uncountably many infinite minimal and uncountably many infinite maximal paths
both on $B$ and on the classical Pascal-Bratteli diagram. | arXiv |
We investigate the locality properties of $T \overline T$-deformed CFTs
within perturbation theory. Up to third order in the deformation parameter, we
find a Hamiltonian operator which solves the flow equation, reproduces the
Zamolodchikov energy spectrum, and is consistent with quasi-locality of the
theory. This Hamiltonian includes terms proportional to the central charge
which have not appeared before and which are necessary to reproduce the correct
spectrum. We show that the Hamiltonian is not uniquely defined since it
contains free parameters, starting at second order, which do not spoil the
above properties. We then use it to determine the full conserved stress tensor.
In our approach, the KdV charges are automatically conserved to all orders but
are not a priori local. Nevertheless, we show that they can be made local to
first order. Our techniques allow us to further comment on the space of
Hamiltonians constructed from products of KdV charges which also flow to local
charges in the deformed theory in the IR. | arXiv |
This work delves into the study of flavor invariants and, in particular, of
invariants capable of detecting CP (Charge-Parity) violation. Through the
mathematical tool of the Hilbert series, we systematically enumerate and
explore flavor invariants that are unchanged under weak basis transformations.
After reviewing the Hilbert series and the flavor invariants of the SM quark
sector, we apply the tool of Hilbert series to the SM extended by a singlet
vector-like quark (VLQ) of down-type. The introduction of these hypothetical
particles leads to a simple extension of the SM that can be motivated by many
problems, including the need for new sources of CP violation to explain the
observed matter-antimatter asymmetry in the universe. We were successful in
calculating the Hilbert series for the VLQ extension in the mass basis of the
VLQ, where the spurion transformations are simpler. Based on the Hilbert
series, we build and enumerate the basic flavor invariants with which all
invariants can be constructed. For a generic basis, where the spurion
transformations involve a larger group, we could only get the first few terms
of the Hilbert series. | arXiv |
We used the Condor Array Telescope to obtain deep imaging observations
through luminance broad-band and He II, [O III], He I, H$\alpha$, [N II], and
[S II] narrow-band filters of an extended region of the M81 Group spanning
$\approx 8 \times 8$ deg$^2$ on the sky centered near M81 and M82. Here we
report aspects of these observations that are specifically related to (1) a
remarkable filament known as the "Ursa Major Arc" that stretches $\approx 30$
deg on the sky roughly in the direction of Ursa Major, (2) a "Giant Shell of
Ionized Gas" that stretches $\approx 0.8$ deg on the sky located $\approx 0.6$
deg NW of M82, and (3) a remarkable network of ionized gaseous filaments
revealed by the new Condor observations that appear to connect the arc, the
shell, and various galaxies of the M81 Group and, by extension, the
group itself. We measure flux ratios between the various ions to help to
distinguish photoionized from shock-ionized gas, and we find that the flux
ratios of the arc and shell are not indicative of shock ionization. This
provides strong evidence against a previous interpretation of the arc as an
interstellar shock produced by an unrecognized supernova. We suggest that all
of these objects, including the arc, are associated with the M81 Group and are
located at roughly the distance $\approx 3.6$ Mpc of M81, that the arc is an
intergalactic filament, and that the objects are associated with the
low-redshift cosmic web. | arXiv |
We used the Condor Array Telescope to obtain deep imaging observations
through the luminance broad-band and He II 468.6 nm, [O III] 500.7 nm, He I
587.5 nm, H$\alpha$, [N II] 658.4 nm, and [S II] 671.6 nm narrow-band filters
of an extended region comprising 13 "Condor fields" spanning $\approx 8 \times
8$ deg$^2$ on the sky centered near M81 and M82. Here we describe the
acquisition and processing of these observations, which together constitute
unique very deep imaging observations of a large portion of the M81 Group
through a complement of broad- and narrow-band filters. The images are
characterized by an intricate web of faint, diffuse continuum emission produced by
starlight scattered from Galactic cirrus, and all prominent cirrus features
identified in the broad-band image can also be identified in the narrow-band
images. We subtracted the luminance image from the narrow-band images to leave
more or less only line emission in the difference images, and we masked regions
of the resulting images around stars at an isophotal limit. The difference
images exhibit extensive extended structures of ionized gas in the direction of
the M81 Group, from known galaxies of the M81 Group, clouds of gas, filamentary
structures, and apparent or possible bubbles or shells. Specifically, the
difference images show a remarkable filament known as the "Ursa Major Arc;" a
remarkable network of criss-crossed filaments between M81 and NGC 2976, some of
which intersect and overlap the Ursa Major Arc; and details of a "giant shell
of ionized gas." | arXiv |
The ability of large language models to generate complex texts allows them to
be widely integrated into many aspects of life, and their output can quickly
fill all network resources. As the impact of LLMs grows, it becomes
increasingly important to develop powerful detectors for the generated text.
Such detectors are essential to prevent the potential misuse of these
technologies and to protect areas such as social media from the negative
effects of false content generated by LLMs. The main goal of LLM-generated
text detection is to determine whether a piece of text was generated by an
LLM, which is fundamentally a binary classification task. In our work, we
evaluate three different classification approaches on open-source datasets:
traditional machine learning techniques such as logistic regression, k-means
clustering, Gaussian Naive Bayes, and support vector machines;
transformer-based methods such as BERT; and, finally, algorithms that use LLMs
themselves to detect LLM-generated text. We focus on model generalization,
potential adversarial attacks, and the accuracy of model evaluation. Finally,
we summarize the current experimental results and propose possible directions
for future research. | arXiv |
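As a minimal illustration of the binary-classification setup described above, the following sketch trains a logistic-regression detector on TF-IDF features; the toy texts and labels are placeholders, since the specific open-source datasets are not named here.

```python
# Hedged sketch: a TF-IDF + logistic-regression baseline for LLM-generated
# text detection (binary classification). Texts/labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

texts = ["a short human-written example"] * 50 + ["a short LLM-generated example"] * 50
labels = [0] * 50 + [1] * 50  # 0 = human, 1 = LLM-generated

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(vectorizer.transform(X_test))))
```

In practice the same pipeline extends to the transformer-based and LLM-based detectors by swapping the feature extractor and classifier.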
Machine learning models have demonstrated substantial performance
enhancements over non-learned alternatives in various fundamental data
management operations, including indexing (locating items in an array),
cardinality estimation (estimating the number of matching records in a
database), and range-sum estimation (estimating aggregate attribute values for
query-matched records). However, real-world systems frequently favor less
efficient non-learned methods due to their ability to offer (worst-case) error
guarantees - an aspect where learned approaches often fall short. The primary
objective of these guarantees is to ensure system reliability, ensuring that
the chosen approach consistently delivers the desired level of accuracy across
all databases. In this paper, we embark on the first theoretical study of such
guarantees for learned methods, presenting the necessary conditions for such
guarantees to hold when using machine learning to perform indexing, cardinality
estimation and range-sum estimation. Specifically, we present the first known
lower bounds on the model size required to achieve the desired accuracy for
these three key database operations. Our results bound the required model size
for given average and worst-case errors in performing database operations,
serving as the first theoretical guidelines governing how model size must
change based on data size to be able to guarantee an accuracy level. More
broadly, our established guarantees pave the way for the broader adoption and
integration of learned models into real-world systems. | arXiv |
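To make the indexing setting above concrete, here is a hedged sketch of a learned index: a linear model predicts a key's position in a sorted array, and the maximum residual observed at build time yields a worst-case search window, illustrating the model-size/error trade-off the paper analyzes. The model choice is an illustrative assumption, not the paper's construction.

```python
# Hedged sketch: a learned index over a sorted array. A linear model predicts
# each key's rank; the maximum training residual gives a worst-case window.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(0).uniform(0, 1e6, size=100_000))
ranks = np.arange(len(keys))

a, b = np.polyfit(keys, ranks, deg=1)           # tiny "model": two parameters
max_err = int(np.ceil(np.abs(a * keys + b - ranks).max())) + 1  # error bound

def lookup(q: float) -> int:
    """Insertion position of q: model prediction + bounded binary search."""
    center = int(round(a * q + b))
    lo = max(0, center - max_err)
    hi = min(len(keys), center + max_err + 1)
    return bisect.bisect_left(keys, q, lo, hi)

i = 1234
assert lookup(keys[i]) == i
print("worst-case search window:", 2 * max_err + 1)
```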
Together with David Schlang we computed the discriminants of the invariant
Hermitian forms for all absolutely irreducible characters of indicator $o$ and
even degree of the ATLAS groups, supplementing the tables of orthogonal
determinants computed in collaboration with Richard Parker, Tobias Braun and
Thomas Breuer. This paper describes the methods used in the unitary case. A
character has a well-defined unitary discriminant if and only if
it is unitary stable, i.e. all irreducible unitary constituents have even
degree. Computations for large degree characters are only possible because of a
new method called {\em unitary condensation}. A suitable automorphism helps to
single out a square class of the real subfield of the character field
consisting of representatives of the discriminant of the invariant Hermitian
forms. This square class can then be determined modulo enough primes. | arXiv |
Voice user interfaces (VUIs) have facilitated the efficient interactions
between humans and machines through spoken commands. Since real-world acoustic
scenes are complex, speech enhancement plays a critical role in robust VUIs.
Transformer and its variants, such as Conformer, have demonstrated cutting-edge
results in speech enhancement. However, both suffer from quadratic
computational complexity with respect to the sequence length, which hampers
their ability to handle long sequences. Recently, a novel state space model
called Mamba, which handles long sequences with linear complexity, has emerged
as a solution to this challenge. In this paper, we propose a novel hybrid
convolution-Mamba backbone, denoted as MambaDC, for speech enhancement. MambaDC
combines the strength of convolutional networks at modeling local interactions
with Mamba's ability to model long-range
global dependencies. We conduct comprehensive experiments within both basic and
state-of-the-art (SoTA) speech enhancement frameworks, on two commonly used
training targets. The results demonstrate that MambaDC outperforms Transformer,
Conformer, and the standard Mamba across all training targets. Built upon the
current advanced framework, the use of MambaDC backbone showcases superior
results compared to existing SoTA systems. This sets the
stage for efficient long-range global modeling in speech enhancement. | arXiv |
Advances in artificial intelligence (AI) have great potential to help address
societal challenges that are both collective in nature and present at national
or trans-national scale. Pressing challenges in healthcare, finance,
infrastructure and sustainability, for instance, might all be productively
addressed by leveraging and amplifying AI for national-scale collective
intelligence. The development and deployment of this kind of AI faces
distinctive challenges, both technical and socio-technical. Here, a research
strategy for mobilising inter-disciplinary research to address these challenges
is detailed and some of the key issues that must be faced are outlined. | arXiv |
We revisit particle creation in strong fields, and backreaction on those
fields, from an amplitudes perspective. We describe the strong field by an
initial coherent state of photons which we explicitly evolve in time, thus
going beyond the background field approximation, and then consider observables
which quantify the effects of backreaction. We present expressions for the
waveform, vacuum persistence probability, and number of produced photons at
next-to-leading order, all of which are impacted by backreaction, along with
the number and statistics of produced pairs. We find that converting between
in-out (amplitude) and in-in (expectation value) expressions requires explicit
resummation of an infinite number of disconnected loop diagrams. | arXiv |
We study a class of dynamical systems generated by random substitutions,
which contains both intrinsically ergodic systems and instances with several
measures of maximal entropy. In this class, we show that the measures of
maximal entropy are classified by invariance under an appropriate symmetry
relation. All measures of maximal entropy are fully supported and they are
generally not Gibbs measures. We prove that there is a unique measure of
maximal entropy if and only if an associated Markov chain is ergodic in inverse
time. This Markov chain has finitely many states and all transition matrices
are explicitly computable. Thereby, we obtain several sufficient conditions for
intrinsic ergodicity that are easy to verify. A practical way to compute the
topological entropy in terms of inflation words is extended from previous work
to a more general geometric setting. | arXiv |
Let $G=(V,E)$ be a simple graph of order $n$. A Majority Roman Dominating
Function (MRDF) on a graph $G$ is a function $f: V\rightarrow\{-1, +1, 2\}$
such that the sum of its function values over at least half of the closed
neighborhoods is at least one, that is, $f(N[v])\geq 1$ for at least half of
the vertices $v\in V$. Moreover, every vertex $u$ with $f(u)=-1$ is adjacent
to at least one vertex $w$ with $f(w)=2$. The Majority Roman Domination number
of a graph $G$, denoted by $\gamma_{MR}(G)$, is the minimum value of
$\sum_{v\in{V(G)}}f(v)$ over all Majority Roman Dominating Functions $f$ of
$G$. In this paper we study properties of Majority Roman Domination in graphs
and obtain lower and upper bounds on the Majority Roman Domination number of
some graphs. | arXiv |
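Since the definition above is fully explicit, a brute-force computation of $\gamma_{MR}(G)$ on small graphs is straightforward; the following sketch (with the path graph $P_5$ as a hypothetical example) enumerates all functions $f: V \to \{-1,+1,2\}$ and checks both conditions.

```python
# Hedged sketch: brute-force gamma_MR(G) on a small graph, directly
# checking the two MRDF conditions from the definition above.
from itertools import product

def gamma_MR(adj):
    """adj: dict vertex -> set of neighbors (simple graph)."""
    V = sorted(adj)
    n = len(V)
    best = None
    for values in product((-1, 1, 2), repeat=n):
        f = dict(zip(V, values))
        # Condition 1: f(N[v]) >= 1 for at least half of the vertices.
        good = sum(1 for v in V
                   if f[v] + sum(f[u] for u in adj[v]) >= 1)
        if 2 * good < n:
            continue
        # Condition 2: every vertex with f = -1 has a neighbor with f = 2.
        if any(f[v] == -1 and all(f[u] != 2 for u in adj[v]) for v in V):
            continue
        weight = sum(values)
        if best is None or weight < best:
            best = weight
    return best

# Hypothetical example: the path P5 on vertices 0..4.
path5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print("gamma_MR(P5) =", gamma_MR(path5))
```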
Multi-object tracking is advancing through two dominant paradigms:
traditional tracking by detection and newly emerging tracking by query. In this
work, we fuse them together and propose the tracking-by-detection-and-query
paradigm, which is achieved by a Learnable Associator. Specifically, the basic
information interaction module and the content-position alignment module are
proposed for thorough information Interaction among object queries. Tracking
results are directly Decoded from these queries. Hence, we name the method
LAID. Compared to tracking-by-query models, LAID achieves competitive tracking
accuracy with notably higher training efficiency. With regard to
tracking-by-detection methods, experimental results on DanceTrack show that
LAID significantly surpasses the state-of-the-art heuristic method by 3.9% on
the HOTA metric and 6.1% on the IDF1 metric. On SportsMOT, LAID also achieves
the best score on the HOTA metric. By offering low training cost, strong
tracking capability, and an elegant end-to-end design all at once, LAID
presents a forward-looking direction for the field. | arXiv |
Recently, the successful synthesis of the pentagonal form of PdTe$_{2}$
monolayer (\emph{p}-PdTe$_{2}$) was reported [Liu~\emph{et al.}, Nature
Materials \textbf{23}, 1339 (2024)]. In this work, we present an extensive
first-principles density-functional theory (DFT) based computational study of
vacancies in this material. Our study covers the evolution of the electronic,
optical, and magnetic properties of various defect configurations and compares
those to the pristine monolayer (\emph{p}-PdTe$_{2}$). We find that V$_{Pd}$
(V$_{Te}$) is the most stable defect in the~\emph{p}-PdTe$_{2}$ monolayer in
the Te-rich (Pd-rich) limit. The defects alter the electronic properties of the
monolayer significantly, leading to changes in their magnetic and optical
properties due to the emergence of midgap impurity states. The defect complex
V$_{Pd+4Te}$ is found to induce spin-polarization in the system with a total
magnetic moment of 1.87 $\mu_{B}$. The obtained diffusion energy barriers of
1.13 eV (in-plane) and 0.063 eV (top-bottom) for V$_{Te}$ indicate that its
migration at room temperature is far more facile in the top-bottom direction,
as also revealed by AIMD simulations. In
order to guide the experimentalists, we also simulated the scanning-tunneling
microscope (STM) images corresponding to all the defect configurations.
Moreover, we computed the electron-beam energies required for creating
mono-vacancies. In the optical absorption spectra of the defective
configurations, finite peaks appear below the band edge that are unique to the
respective defective configuration. We have also computed the excess
polarizability of the defective configurations with respect to the pristine one
and found that maximum changes occur in the infrared and visible regions,
providing insights into the change in their optical response as compared to the
pristine monolayer. | arXiv |
This paper presents ViTOC (Vision Transformer and Object-aware Captioner), a
novel vision-language model for image captioning that addresses the challenges
of accuracy and diversity in generated descriptions. Unlike conventional
approaches, ViTOC employs a dual-path architecture based on Vision Transformer
and object detector, effectively fusing global visual features and local object
information through learnable vectors. The model introduces an innovative
object-aware prompting strategy that significantly enhances its capability in
handling long-tail data. Experiments on the standard COCO dataset demonstrate
that ViTOC outperforms baseline models across all evaluation metrics.
Additionally, we propose a reference-free evaluation method based on CLIP to
further validate the model's effectiveness. By utilizing pretrained visual
model parameters, ViTOC achieves efficient end-to-end training. | arXiv |
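The reference-free evaluation mentioned above can be illustrated with a CLIP-based image-caption similarity score. The sketch below follows the common recipe of taking the cosine similarity of CLIP image and text embeddings; this is an assumption about the general approach, not necessarily the paper's exact protocol.

```python
# Hedged sketch: a CLIP-based, reference-free caption score, computed as the
# cosine similarity between CLIP's image and text embeddings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return float((img_emb * txt_emb).sum())

# Example usage with a hypothetical local image file:
# score = clip_score(Image.open("example.jpg"), "a dog playing in the park")
```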
The ground state properties of strongly rotating bosons confined in an
asymmetric anharmonic potential exhibit a split density distribution. However,
the out-of-equilibrium dynamics of this split structure remain largely
unexplored. Given that rotation is responsible for the breakup of the bosonic
cloud, we investigate the out-of-equilibrium dynamics by abruptly changing the
rotation frequency. Our study offers insights into the dynamics of trapped
Bose-Einstein condensates in both symmetric and asymmetric anharmonic
potentials under different rotation quench scenarios. In the rotationally
symmetric trap, angular momentum is a good quantum number. This makes it
challenging to exchange angular momentum within the system; hence, a rotation
quench has practically no impact on the density distribution. In contrast, the
absence of angular momentum conservation in asymmetric traps results in more
complex dynamics. This allows rotation quenches to either inject into or
extract angular momentum from the system. We observe and analyze these
intricate dynamics both for the mean-field condensed and the many-body
fragmented systems. The dynamical evolution of the condensed system and the
fragmented system exhibits similarities in several observables during small
rotation quenches. However, these similarities diverge notably for larger
quenches. Additionally, we investigate the formation and the impact of the
vortices on the angular momentum dynamics of the evolving split density. All in
all, our findings offer valuable insights into the dynamics of trapped
interacting bosons under different rotation quenches. | arXiv |
Text emotion detection constitutes a crucial foundation for advancing
artificial intelligence from basic comprehension to the exploration of
emotional reasoning. Most existing emotion detection datasets rely on manual
annotations, which are associated with high costs, substantial subjectivity,
and severe label imbalances. This is particularly evident in the inadequate
annotation of micro-emotions and the absence of emotional intensity
representation, which fail to capture the rich emotions embedded in sentences
and adversely affect the quality of downstream task completion. By proposing an
all-labels and training-set label regression method, we map label values to
energy intensity levels, thereby fully leveraging the learning capabilities of
machine models and the interdependencies among labels to uncover multiple
emotions within samples. This led to the establishment of the Emotion
Quantization Network (EQN) framework for micro-emotion detection and
annotation. Using five commonly employed sentiment datasets, we conducted
comparative experiments with various models, validating the broad applicability
of our framework within NLP machine learning models. Based on the EQN
framework, emotion detection and annotation are conducted on the GoEmotions
dataset. A comprehensive comparison with the results from Google literature
demonstrates that the EQN framework possesses a high capability for automatic
detection and annotation of micro-emotions. The EQN framework is the first to
achieve automatic micro-emotion annotation with energy-level scores, providing
strong support for further emotion detection analysis and the quantitative
research of emotion computing. | arXiv |
Online controlled experiments, or A/B tests, are large-scale randomized
trials in digital environments. This paper investigates the estimands of the
difference-in-means estimator in these experiments, focusing on scenarios with
repeated measurements on users. We compare cumulative metrics that use all
post-exposure data for each user to windowed metrics that measure each user
over a fixed time window. We analyze the estimands and highlight trade-offs
between the two types of metrics. Our findings reveal that while cumulative
metrics eliminate the need for pre-defined measurement windows, they result in
estimands that are more intricately tied to the experiment intake and runtime.
This complexity can lead to counter-intuitive practical consequences, such as
decreased statistical power with more observations. However, cumulative metrics
offer earlier results and can quickly detect strong initial signals. We
conclude that neither metric type is universally superior. The optimal choice
depends on the temporal profile of the treatment effect, the distribution of
exposure, and the stopping time of the experiment. This research provides
insights for experimenters to make informed decisions about how to define
metrics based on their specific experimental contexts and objectives. | arXiv |
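As a concrete illustration of the two metric types compared above, the following simulation sketch contrasts a cumulative per-user mean (over all post-exposure days) with a windowed mean (first 7 days after exposure); the effect model and intake process are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: difference-in-means on cumulative vs. windowed per-user
# metrics in a simulated A/B test with staggered intake. Users who enter
# later contribute fewer post-exposure days to the cumulative metric.
import numpy as np

rng = np.random.default_rng(1)
n_users, horizon, window = 10_000, 28, 7
entry_day = rng.integers(0, horizon, size=n_users)   # staggered intake
treated = rng.integers(0, 2, size=n_users).astype(bool)

def user_mean(is_treated, n_days):
    # Illustrative effect: treatment lifts the daily outcome by 0.1 at first,
    # decaying with time since exposure.
    days = np.arange(n_days)
    lift = 0.1 * np.exp(-days / 10.0) if is_treated else 0.0
    return float(np.mean(1.0 + lift + rng.normal(0, 1, n_days)))

cum = np.array([user_mean(t, horizon - e) for t, e in zip(treated, entry_day)])
win = np.array([user_mean(t, window) for t in treated])

print("cumulative diff-in-means:", cum[treated].mean() - cum[~treated].mean())
print("windowed   diff-in-means:", win[treated].mean() - win[~treated].mean())
```

Because the cumulative metric averages over user-specific exposure durations, its estimand mixes the decaying effect with the intake distribution, which is exactly the entanglement the abstract describes.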
This paper investigates the semantic communication and cooperative tracking
control for a UAV swarm comprising a leader UAV and a group of follower UAVs,
all interconnected via unreliable wireless multiple-input-multiple-output
(MIMO) channels. Initially, we develop a dynamic model for the UAV swarm that
accounts for both the internal interactions among the cooperative follower UAVs
and the imperfections inherent in the MIMO channels that interlink the leader
and follower UAVs. Building on this model, we incorporate the power costs of
the UAVs and formulate the communication and cooperative tracking control
challenge as a drift-plus-penalty optimization problem. We then derive a
closed-form optimal solution that maintains a decentralized semantic
architecture, dynamically adjusting to the tracking error costs and local
channel conditions within the swarm. Employing Lyapunov drift analysis, we
establish closed-form sufficient conditions for the stabilization of the UAV
swarm's tracking performance. Numerical results demonstrate the significant
enhancements in our proposed scheme over various state-of-the-art methods. | arXiv |
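For readers unfamiliar with the drift-plus-penalty method invoked above, the generic slot-wise objective from standard Lyapunov optimization (stated here as background, not as the paper's exact formulation) is:

```latex
% Background sketch: generic drift-plus-penalty objective.
% At each slot t, the controller picks its action to minimize a bound on
\min_{\text{action at slot } t}\;
\underbrace{\mathbb{E}\big[\,L(\Theta(t+1)) - L(\Theta(t)) \,\big|\, \Theta(t)\big]}_{\text{Lyapunov drift } \Delta(\Theta(t))}
\;+\; V\,\underbrace{\mathbb{E}\big[\,p(t) \,\big|\, \Theta(t)\big]}_{\text{penalty (e.g., power + tracking cost)}}
% where L(.) is a quadratic Lyapunov function of the system state Theta(t)
% and V >= 0 trades the penalty off against tracking stability.
```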
Generative Artificial Intelligence offers a powerful tool for adversaries who
wish to engage in influence operations, such as the Chinese Spamouflage
operation and the Russian Internet Research Agency effort that both sought to
interfere with recent US election cycles. Therefore, this study seeks to
investigate the propensity of current Generative AI models for producing
harmful disinformation during an election cycle. The probability that different
Generative AI models produced disinformation when given adversarial prompts was
evaluated, along with the associated harm. This allows for the expected harm
for each model to be computed and it was discovered that Copilot and Gemini
tied for the overall safest performance by realizing the lowest expected harm,
while GPT-4o produced the greatest rates of harmful disinformation, resulting
in much higher expected harm scores. The impact of disinformation category was
also investigated and Gemini was safest within the political category of
disinformation, while Copilot was safest for topics related to health.
Moreover, characteristics of adversarial roles were discovered that led to
greater expected harm across all models. Finally, classification models were
developed that predicted disinformation production based on the conditions
considered in this study, which offers insight into factors important for
predicting disinformation production. Based on all of these insights,
recommendations are provided that seek to mitigate factors that lead to harmful
disinformation being produced by Generative AI models. It is hoped that
developers will use these insights to improve future models. | arXiv |
Depth measures quantify central tendency in the analysis of statistical and
geometric data. Selecting a depth measure that is simple and efficiently
computable is often important, e.g., when calculating depth for multiple query
points or when applied to large sets of data. In this work, we introduce
\emph{Hyperplane Distance Depth (HDD)}, which measures the centrality of a
query point $q$ relative to a given set $P$ of $n$ points in $\mathbb{R}^d$,
defined as the sum of the distances from $q$ to all $\binom{n}{d}$ hyperplanes
determined by points in $P$. We present algorithms for calculating the HDD of
an arbitrary query point $q$ relative to $P$ in $O(d \log n)$ time after
preprocessing $P$, and for finding a median point of $P$ in $O(d n^{d^2} \log
n)$ time. We study various properties of hyperplane distance depth and show
that it is convex, symmetric, and vanishing at infinity. | arXiv |
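The definition of HDD above translates directly into a naive $O(\binom{n}{d})$ computation, sketched below for illustration (the paper's preprocessing-based $O(d \log n)$ query algorithm is not reproduced here).

```python
# Hedged sketch: naive O(C(n,d)) Hyperplane Distance Depth. HDD(q; P) sums
# the distances from q to every hyperplane spanned by a d-subset of P.
from itertools import combinations
import numpy as np

def hdd(q, P):
    q = np.asarray(q, dtype=float)
    P = np.asarray(P, dtype=float)
    n, d = P.shape
    total = 0.0
    for idx in combinations(range(n), d):
        S = P[list(idx)]
        if d == 1:
            normal = np.array([1.0])      # in R^1 a "hyperplane" is a point
        else:
            A = S[1:] - S[0]              # directions spanning the hyperplane
            _, _, vh = np.linalg.svd(A)
            normal = vh[-1]               # unit null-space vector = normal
        total += abs(np.dot(q - S[0], normal))
    return total

P = np.random.default_rng(0).normal(size=(8, 2))   # 8 points in the plane
print("HDD of the origin:", hdd([0.0, 0.0], P))    # sums over C(8,2)=28 lines
```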
Determining the atomic-level structure of crystalline solids is critically
important across a wide array of scientific disciplines. The challenges
associated with obtaining samples suitable for single-crystal diffraction,
coupled with the limitations inherent in classical structure determination
methods that primarily utilize powder diffraction for most polycrystalline
materials, underscore an urgent need to develop alternative approaches for
elucidating the structures of commonly encountered crystalline compounds. In
this work, we present an artificial intelligence-directed leapfrog model
capable of accurately determining the structures of both organic and
inorganic-organic hybrid crystalline solids through direct analysis of powder
X-ray diffraction data. This model not only offers a comprehensive solution
that effectively circumvents challenges that are insoluble for conventional
structure-solution methodologies, but also demonstrates
applicability to crystal structures across all conceivable space groups.
Furthermore, it exhibits notable compatibility with routine powder diffraction
data typically generated by standard instruments, featuring rapid data
collection and normal resolution levels. | arXiv |
A charge qubit couples to environmental electric field fluctuations through
its dipole moment, resulting in fast decoherence. We propose the p orbital (pO)
qubit, formed by the single electron, p-like valence states of a five-electron
Si quantum dot, which couples to charge noise through the quadrupole moment. We
demonstrate that the pO qubit offers distinct advantages in quality factor,
gate speed, readout and size. We use a phenomenological, dipole
two-level-fluctuator charge noise model to estimate a $T_2^* \sim 80$ ns. In
conjunction with Rabi frequencies of order 10 GHz, an order of magnitude
improvement in qubit quality factor is expected relative to state-of-the-art
semiconductor spin qubits. The pO qubit features all-electrical control via
modulating the dot's eccentricity. We also show how to perform two-qubit gates
via the $1/r^5$ quadrupole-quadrupole interaction. We find a universal gate set
using gradient ascent based control pulse optimization, subject to 10 GHz
maximum allowable bandwidth and 1 ns pulse times. | arXiv |
In the past two years, large language models (LLMs) have achieved rapid
development and demonstrated remarkable emerging capabilities. Concurrently,
with powerful semantic understanding and reasoning capabilities, LLMs have
significantly empowered the rapid advancement of the recommendation system
field. Specifically, in news recommendation (NR), systems must comprehend and
process a vast amount of clicked news text to infer the probability of
candidate news clicks. This requirement exceeds the capabilities of traditional
NR models but aligns well with the strengths of LLMs. In this paper, we propose
a novel NR algorithm to reshape the news model via LLM Embedding and
Co-Occurrence Pattern (LECOP). On one hand, we fine-tuned an LLM via
contrastive learning on large-scale datasets to encode news, which fully
exploits the semantic information of news to thoroughly identify user
preferences. On the
other hand, we explored multiple co-occurrence patterns to mine collaborative
information. Those patterns include news ID co-occurrence, Item-Item keywords
co-occurrence and Intra-Item keywords co-occurrence. The keywords mentioned
above are all generated by an LLM. As far as we know, this is the first work
to construct such detailed co-occurrence patterns via an LLM to capture
collaborative information. Extensive experiments demonstrate the superior
performance of our proposed novel method. | arXiv |
In this article, we explore the lifetime of localized excitations in
nonlinear lattices, called breathers, when a thermalized lattice is perturbed
with localized energy delivered to a single site. We develop a method to
measure the time it takes for the system to approach equilibrium based on a
single scalar quantity, the participation number, and deduce the value
corresponding to thermal equilibrium. We observe the time to achieve
thermalization as a function of the energy of the excited site. We explore a
variety of different physical system models. The result is that the lifetime of
breathers increases exponentially with the breather energy for all the systems.
These results may provide a method to detect the existence of breathers in real
systems. | arXiv |
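The participation number used above as a thermalization diagnostic has a standard form (our assumption here, since the abstract does not spell it out): $P = (\sum_i e_i)^2 / \sum_i e_i^2$, where $e_i$ is the energy at site $i$; $P \approx 1$ for a single-site excitation and $P$ of order $N$ at equipartition. A minimal sketch:

```python
# Hedged sketch: participation number of a lattice energy distribution.
# P = (sum e_i)^2 / sum e_i^2; P ~ 1 for a localized breather-like state
# and P of order N at thermal equilibrium (equipartition).
import numpy as np

def participation_number(energies):
    e = np.asarray(energies, dtype=float)
    return e.sum() ** 2 / np.sum(e ** 2)

N = 100
localized = np.full(N, 0.01)
localized[N // 2] = 10.0                       # one strongly excited site
thermal = np.random.default_rng(0).exponential(1.0, size=N)  # Gibbs-like

print("localized P:", participation_number(localized))   # close to 1
print("thermal   P:", participation_number(thermal))     # about N/2 here
```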
Exploring the optimal management strategy for nitrogen and irrigation has a
significant impact on crop yield, economic profit, and the environment. To
tackle this optimization challenge, this paper introduces a deployable
\textbf{CR}op Management system \textbf{O}ver all \textbf{P}ossible
\textbf{S}tate availabilities (CROPS). CROPS employs a language model (LM) as a
reinforcement learning (RL) agent to explore optimal management strategies
within the Decision Support System for Agrotechnology Transfer (DSSAT) crop
simulations. A distinguishing feature of this system is that the states used
for decision-making are partially observed through random masking.
Consequently, the RL agent is tasked with two primary objectives: optimizing
management policies and inferring masked states. This approach significantly
enhances the RL agent's robustness and adaptability across various real-world
agricultural scenarios. Extensive experiments on maize crops in Florida, USA,
and Zaragoza, Spain, validate the effectiveness of CROPS. Not only did CROPS
achieve State-of-the-Art (SOTA) results across various evaluation metrics such
as production, profit, and sustainability, but the trained management policies
are also immediately deployable in over ten million real-world contexts.
Furthermore, the pre-trained policies possess a noise resilience property,
which enables them to minimize potential sensor biases, ensuring robustness and
generalizability. Finally, unlike previous methods, the strength of CROPS lies
in its unified and elegant structure, which eliminates the need for pre-defined
states or multi-stage training. These advancements highlight the potential of
CROPS in revolutionizing agricultural practices. | arXiv |
Trapped ions provide a highly controlled platform for quantum sensors,
clocks, simulators, and computers, all of which depend on cooling ions close to
their motional ground state. Existing methods like Doppler, resolved sideband,
and dark resonance cooling balance trade-offs between the final temperature and
cooling rate. A traveling polarization gradient has been shown to cool multiple
modes quickly and in parallel, but utilizing a stable polarization gradient can
achieve lower ion energies, while also allowing more tailorable light-matter
interactions in general. In this paper, we demonstrate cooling of a trapped ion
below the Doppler limit using a phase-stable polarization gradient created
using trap-integrated photonic devices. At an axial frequency of
$2\pi\cdot1.45~ \rm MHz$ we achieve $\langle n \rangle = 1.3 \pm 1.1$ in
$500~\mu \rm s$ and cooling rates of ${\sim}0.3 \, \rm quanta/\mu s$. We
examine ion dynamics under different polarization gradient phases, detunings,
and intensities, showing reasonable agreement between experimental results and
a simple model. Cooling is fast and power-efficient, with improved performance
compared to simulated operation under the corresponding running wave
configuration. | arXiv |
Grey-body factors and quasinormal modes are two distinct characteristics of
radiation near black holes, each associated with different boundary conditions.
Nevertheless, a correspondence exists between them, which we use to calculate
the grey-body factors of three recently constructed quantum-corrected black
hole models. Our findings demonstrate that the grey-body factors are
significantly influenced by the quantum corrections for some of the models
under consideration, and the correspondence holds with reasonable accuracy
across all three models. We confirm that the grey-body factors are less
sensitive to the near-horizon corrections of the spacetime, because the
grey-body factors are reproduced via the correspondence using only the
fundamental mode and the first overtone. | arXiv |
Brown dwarfs with measured dynamical masses and spectra from direct imaging
are benchmarks that anchor substellar atmosphere cooling and evolution models.
We present Subaru SCExAO/CHARIS infrared spectroscopy of HIP 93398 B, a brown
dwarf companion recently discovered by Li et al. 2023 as part of an informed
survey using the Hipparcos-Gaia Catalog of Accelerations. This object was
previously classified as a T6 dwarf based on its luminosity, with its
independently-derived age and dynamical mass in tension with existing models of
brown dwarf evolution. Spectral typing via empirical standard spectra,
temperatures derived by fitting substellar atmosphere models, and J-H, J-K and
H-L' colors all suggest that this object has a substantially higher temperature
and luminosity, consistent with classification as a late-L dwarf near the L/T
transition (T = 1200$^{+140}_{-119}$ K) with moderate to thick clouds possibly
present in its atmosphere. When compared with the latest generation of
evolution models that account for clouds with our revised luminosity and
temperature for the object, the tension between the model-independent mass/age
and model predictions is resolved. | arXiv |
Picture countably many logicians all wearing a hat in one of $\kappa$-many
colours. They each get to look at finitely many other hats and afterwards make
finitely many guesses for their own hat's colour. For which $\kappa$ can the
logicians guarantee that at least one of them guesses correctly? This will be
the archetypical hat problem we analyse and solve here. We generalise this by
varying the amount of logicians as well as the number of allowed guesses and
describe exactly for which combinations the logicians have a winning strategy.
We also solve these hat problems under the additional restriction that their
vision is restrained in terms of a partial order. Picture e.g.~countably many
logicians standing on the real number line and each logician is only allowed to
look at finitely many others in front of them.
In many cases, the least $\kappa$ for which the logicians start losing can be
described by an instance of the free subset property which in turn is connected
to large cardinals. In particular, $\mathrm{ZFC}$ can sometimes not decide
whether or not the logicians can win for every possible set of colours. | arXiv |
We study eigenfunction localization for higher dimensional cat maps, a
popular model of quantum chaos. These maps are given by linear symplectic maps
in ${\mathrm Sp}(2g,\mathbb Z)$, which we take to be ergodic. Under some
natural assumptions, we show that there is a density one sequence of integers
$N$ so that as $N$ tends to infinity along this sequence, all eigenfunctions of
the quantized map at the inverse Planck constant $N$ are uniformly distributed.
For the two-dimensional case ($g=1$), this was proved by P. Kurlberg and Z.
Rudnick (2001). The higher dimensional case offers several new features and
requires a completely different set of tools, including from additive
combinatorics, in particular Bourgain's bound (2005) for Mordell sums, and a
study of tensor product structures for the cat map. | arXiv |
All-dielectric metasurfaces can produce structural colors, but the most
advantageous design criteria are still being investigated. This work
numerically studies how the two-dimensional shape of nanoparticles affects the
colorimetric response under circularly polarized light (CPL) to develop a
sensor distinguishing CPL orientations. Using lossless dielectric materials
(silicon nitride on silicon dioxide), we achieve far-field dichroism by
modifying oblong nanoparticles into L-shaped structures through corner cuts.
This design suppresses one electric dipole under CPL illumination, leading to
differential colorimetric responses. We link these responses to a decoupling
effect in the near-field net electric flux. Our findings provide design
guidelines for all-dielectric, lossless colorimetric sensors of chiral light. | arXiv |
We present high pressure electrical transport, magnetization, and single
crystal X-ray diffraction data on SrCo2P2 single crystals. X-ray diffraction
data show that there is a transition to a collapsed tetragonal structure for p
~> 10 GPa and measurements of resistance show that above ~ 10 GPa, a clear
transition-like feature can be observed at temperatures up to 260 K. Further
magnetization, magnetoresistance and Hall measurements made under pressure all
indicate that this transition is to a ferromagnetic ground state. First
principles-based density functional theory (DFT) calculations also show that
there is a first-order transition between tetragonal and collapsed tetragonal
(cT) phases, with an onset near ~ 10 GPa as well as the appearance of the
ferromagnetic (FM) ordering in the cT phase. Above ~ 30 GPa, the experimental
signatures of the magnetic ordering vanish in a first-order-like manner,
consistent with the theoretical calculation results, indicating that SrCo2P2 is
another example of the avoidance of quantum criticality in ferromagnetic
intermetallic compounds. SrCo2P2 provides clear evidence that the structural,
electronic and magnetic properties associated with the cT transition are
strongly entangled and are not only qualitatively captured by our first
principles-based calculations but are quantitatively reproduced as well. | arXiv |
Consider $E$ a vector bundle over a smooth curve $C$. We compute the
$\delta$-invariant of all ample ($\mathbb{Q}$-) line bundles on $\mathbb{P}(E)$
when $E$ is strictly Mumford semistable. We also investigate the case when one
assumes that the Harder-Narasimhan filtration of $E$ has only one step. | arXiv |
This paper considers real-time control and learning problems for
finite-dimensional linear systems under binary-valued and randomly disturbed
output observations. This has long been regarded as an open problem because the
exact values of the traditional regression vectors used in the construction of
adaptive algorithms are unavailable, as one only has binary-valued output
information. To overcome this difficulty, we consider the adaptive estimation
problem of the corresponding infinite-impulse-response (IIR) dynamical systems,
and apply the double array martingale theory that has not been previously used
in adaptive control. This enables us to establish global convergence results
for both the adaptive prediction regret and the parameter estimation error,
without resorting to such stringent data conditions as persistent excitation
and bounded system signals that have been used in almost all existing related
literature. Based on this, an adaptive control law will be designed that can
effectively combine adaptive learning and feedback control. Finally, we are
able to show that the closed-loop adaptive control system is optimal in the
sense that the long-run average tracking error is minimized almost surely for
any given bounded reference signals. To the best of the authors' knowledge,
this appears to be the first adaptive control result for general linear systems
with general binary sensors and arbitrarily given bounded reference signals. | arXiv |
Let M be a transitive model of set theory and X be a space in the sense of M.
Is there a reasonable way to interpret X as a space in V? A general theory due
to Zapletal provides a natural candidate which behaves well on sufficiently
complete spaces (for instance \v{C}ech complete spaces) but behaves poorly on
more general spaces - for instance, the Zapletal interpretation does not
commute with products. We extend Zapletal's framework to instead interpret
locales, a generalization of topological spaces which focuses on the structure
of open sets. Our extension has a number of desirable properties; for instance,
localic products always interpret as spatial products. We show that a number of
localic notions coincide exactly with properties of their interpretations; for
instance, we show a locale is $T_U$ if and only if all its interpretations are
$T_1$, a locale is $I$-Hausdorff if and only if all its interpretations are
$T_2$, a locale is regular if and only if all its interpretations are $T_3$,
and a locale is compact if and only if all its interpretations are compact. | arXiv |
Seismic data inevitably suffers from random noise and missing traces in field
acquisition. This limits the utilization of seismic data for subsequent imaging
or inversion applications. Recently, dictionary learning has gained remarkable
success in seismic data denoising and interpolation. Variants of the
patch-based learning technique, such as the K-SVD algorithm, have been shown to
improve denoising and interpolation performance compared to the analytic
transform-based methods. However, patch-based learning algorithms work on
overlapping patches of data and do not take the full data into account during
reconstruction. By contrast, the convolutional sparse coding (CSC) model treats signals
globally and, therefore, has shown superior performance over patch-based
methods in several image processing applications. In consequence, we test the
use of CSC model for seismic data denoising and interpolation. In particular,
we use the local block coordinate descent (LoBCoD) algorithm to reconstruct
missing traces and clean seismic data from noisy input. The denoising and
interpolation performance of the LoBCoD algorithm has been compared with that
of K-SVD and orthogonal matching pursuit (OMP) algorithms using synthetic and
field data examples. We use three quality measures to test the denoising
accuracy: the peak signal-to-noise ratio (PSNR), the relative L2-norm of the
error (RLNE), and the structural similarity index (SSIM). We find that LoBCoD
performs better than K-SVD and OMP for all test cases in improving PSNR and
SSIM, and in reducing RLNE. These observations suggest a huge potential of the
CSC model in seismic data denoising and interpolation applications. | arXiv |
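Two of the three quality measures above have simple closed forms worth fixing notation for; the sketch below implements PSNR and RLNE (SSIM, being windowed, is best taken from a library), under the usual definitions, which we assume match the paper's.

```python
# Hedged sketch: denoising quality measures used above, with the usual
# definitions (assumed, not quoted from the paper).
import numpy as np

def psnr(clean, estimate):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean - estimate) ** 2)
    peak = np.max(np.abs(clean))
    return 10.0 * np.log10(peak ** 2 / mse)

def rlne(clean, estimate):
    """Relative L2-norm of the error: ||clean - estimate|| / ||clean||."""
    return np.linalg.norm(clean - estimate) / np.linalg.norm(clean)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 2000)).reshape(40, 50)  # toy section
noisy = clean + 0.3 * rng.normal(size=clean.shape)

print("PSNR (dB):", psnr(clean, noisy))
print("RLNE     :", rlne(clean, noisy))
```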
Multifractality is a concept that helps to compactly grasp the most essential
features of financial dynamics. In its fully developed form, this concept
applies to essentially all mature financial markets and even to more liquid
cryptocurrencies traded on the centralized exchanges. A new element that adds
complexity to cryptocurrency markets is the possibility of decentralized
trading. Based on the extracted tick-by-tick transaction data from the
Universal Router contract of the Uniswap decentralized exchange, from June 6,
2023, to June 30, 2024, the present study, using Multifractal Detrended
Fluctuation Analysis (MFDFA), shows that even though liquidity on these new
exchanges is still much lower than on centralized exchanges, convincing traces
of multifractality are already emerging in this new type of trading as well.
The resulting multifractal spectra are, however, strongly left-side asymmetric,
which indicates that this multifractality comes primarily from large
fluctuations, and
small ones are more of the uncorrelated noise type. What is particularly
interesting here is the fact that multifractality is more developed for time
series representing transaction volumes than rates of return. On the level of
these larger events a trace of multifractal cross-correlations between the two
characteristics is also observed. | arXiv |
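For reference, a minimal MFDFA implementation following the standard algorithm (profile, segmentation, polynomial detrending, $q$-th order fluctuation functions) is sketched below; the parameter choices and toy data are illustrative, not those of the study.

```python
# Hedged sketch: minimal MFDFA. Returns the generalized Hurst exponents h(q),
# whose spread across q indicates multifractality (constant h => monofractal).
import numpy as np

def mfdfa(x, scales, qs, poly_order=2):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                  # step 1: profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = []
        for k in range(n_seg):                         # step 2: segments
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, poly_order), t)
            f2.append(np.mean((seg - fit) ** 2))       # step 3: detrended var.
        f2 = np.asarray(f2)
        for i, q in enumerate(qs):                     # step 4: q-th averages
            if abs(q) < 1e-9:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2)) ** (1.0 / q)
    log_s = np.log(scales)                             # step 5: h(q) slopes
    return np.array([np.polyfit(log_s, np.log(Fq[i]), 1)[0]
                     for i in range(len(qs))])

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=20_000)            # heavy-tailed toy data
scales = np.unique(np.logspace(1.2, 3.0, 12).astype(int))
qs = np.linspace(-4, 4, 9)
print(dict(zip(np.round(qs, 1), np.round(mfdfa(returns, scales, qs), 3))))
```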
Active learning aims to train accurate classifiers while minimizing labeling
costs by strategically selecting informative samples for annotation. This study
focuses on image classification tasks, comparing AL methods on CIFAR10,
CIFAR100, Food101, and the Chest X-ray datasets under varying label noise
rates. We investigate the impact of model architecture by comparing
Convolutional Neural Networks (CNNs) and Vision Transformer (ViT)-based models.
Additionally, we propose a novel deep active learning algorithm, GCI-ViTAL,
designed to be robust to label noise. GCI-ViTAL utilizes prediction entropy and
the Frobenius norm of last-layer attention vectors compared to class-centric
clean set attention vectors. Our method identifies samples that are both
uncertain and semantically divergent from typical images in their assigned
class. This allows GCI-ViTAL to select informative data points even in the
presence of label noise while flagging potentially mislabeled candidates. Label
smoothing is applied to train a model that is not overly confident about
potentially noisy labels. We evaluate GCI-ViTAL under varying levels of
symmetric label noise and compare it to five other AL strategies. Our results
demonstrate that using ViTs leads to significant performance improvements over
CNNs across all AL strategies, particularly in noisy label settings. We also
find that using the semantic information of images as label grounding helps in
training a more robust model under label noise. Notably, we do not perform
extensive hyperparameter tuning, providing an out-of-the-box comparison that
addresses the common challenge practitioners face in selecting models and
active learning strategies without an exhaustive literature review on training
and fine-tuning vision models on real-world application data. | arXiv |
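The acquisition rule described above combines predictive entropy with an attention-divergence term; a hedged sketch of such a score follows, where the tensor shapes, the prototype construction, and the equal weighting are illustrative assumptions based on our reading of the description.

```python
# Hedged sketch: a GCI-ViTAL-style acquisition score combining prediction
# entropy with the Frobenius norm of the deviation of a sample's last-layer
# attention from a class-centric "clean" attention prototype.
import torch

def acquisition_scores(probs, attn, clean_attn_per_class, alpha=0.5):
    """probs: (B, C) softmax outputs; attn: (B, H, T) last-layer attention;
    clean_attn_per_class: (C, H, T) per-class clean-set prototypes."""
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)  # (B,)
    pred = probs.argmax(dim=1)                                          # (B,)
    diff = attn - clean_attn_per_class[pred]                            # (B, H, T)
    frob = diff.flatten(1).norm(dim=1)                                  # (B,)
    return alpha * entropy + (1 - alpha) * frob

B, C, H, T = 8, 10, 12, 197
probs = torch.softmax(torch.randn(B, C), dim=1)
attn = torch.rand(B, H, T)
protos = torch.rand(C, H, T)
scores = acquisition_scores(probs, attn, protos)
print(scores.topk(3).indices)   # indices of the 3 most informative samples
```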
Silicon photonics is a leading platform for realizing the vast numbers of
physical qubits needed for useful quantum information processing because it
leverages mature complementary metal-oxide-semiconductor (CMOS) manufacturing
to integrate on-chip thousands of optical devices for generating and
manipulating quantum states of light. A challenge to the practical operation
and scale-up of silicon quantum-photonic integrated circuits, however, is the
need to control their extreme sensitivity to process and temperature
variations, free-carrier and self-heating nonlinearities, and thermal
crosstalk. To date these challenges have been partially addressed using bulky
off-chip electronics, sacrificing many benefits of a chip-scale platform. Here,
we demonstrate the first electronic-photonic quantum system-on-chip (EPQSoC)
consisting of quantum-correlated photon-pair sources stabilized via on-chip
feedback control circuits, all fabricated in a high-volume 45nm CMOS
microelectronics foundry. We use non-invasive photocurrent sensing in a tunable
microring cavity photon-pair source to actively lock it to a fixed pump laser
while operating in the quantum regime, enabling large scale microring-based
quantum systems. In this first demonstration of such a capability, we achieve a
high CAR of 134 with an ultra-low g(2)(0) of 0.021 at 2.2 kHz off-chip detected
pair rate and 3.3 MHz/mW2 on-chip pair generation efficiency, and over 100 kHz
off-chip detected pair rate at higher pump powers (1.5 MHz on-chip). These
sources maintain stable quantum properties in the presence of temperature
variations, operating reliably in practical settings with many adjacent devices
creating thermal disturbances on the same chip. Such dense electronic-photonic
integration enables implementation and control of quantum-photonic systems at
the scale required for useful quantum information processing with
CMOS-fabricated chips. | arXiv |
Direct imaging observations are biased towards wide-separation, massive
companions that have degenerate formation histories. Although the majority of
exoplanets are expected to form via core accretion, most directly imaged
exoplanets have not been convincingly demonstrated to follow this formation
pathway. We obtained new interferometric observations of the directly imaged
giant planet AF Lep b with the VLTI/GRAVITY instrument. We present three epochs
of 50$\mu$as relative astrometry and the K-band spectrum of the planet for the
first time at a resolution of R=500. Using only these measurements, spanning
less than two months, and the Hipparcos-Gaia Catalogue of Accelerations, we are
able to significantly constrain the planet's orbit; this bodes well for
interferometric observations of planets discovered by Gaia DR4. Including all
available measurements of the planet, we infer an effectively circular orbit
($e<0.02, 0.07, 0.13$ at $1, 2, 3 \sigma$) in spin-orbit alignment with the
host, and we measure a dynamical mass of
$M_\mathrm{p}=3.75\pm0.5\,M_\mathrm{Jup}$. Models of the spectrum of the planet
show that it is metal rich ([M/H]$=0.75\pm0.25$), with a C/O ratio encompassing
the solar value. This ensemble of results show that the planet is consistent
with core accretion formation. | arXiv |
In this work we illustrate a general framework to describe the LHC
phenomenology of extended scalar (and fermion) sectors, with focus on dark
matter (DM) physics, based on an effective field theory (EFT) with non-linearly
realized electroweak symmetry. Generalizing Higgs EFT (HEFT), the setup allows
to include a generic set of new scalar resonances, without the need to specify
their UV origin, that could for example be at the interface of the Standard
Model (SM) and the DM world. In particular, we study the case of fermionic DM
interacting with the SM via two mediators, each of which can possess either CP
property and originate from various electroweak representations in the UV
theory. Besides trilinear interactions between the mediators and DM or SM pairs
(including pairs of gauge field-strength tensors), the EFT contains all further
gauge-invariant operators up to mass dimension $D=5$. While remaining
theoretically consistent, this setup offers enough flexibility to capture the
phenomenology of many benchmark models used to interpret the results of
experimental DM and BSM searches, such as two-Higgs doublet extensions of the
SM or singlet extensions. Furthermore, the presence of two mediators with
potentially sizable couplings allows to account for a broad variety of
interesting collider signatures, as for example detectable mono-$h$ and
mono-$Z$ signals. Correlations can be employed to diagnose the nature of the
new particles. | arXiv |
We introduce "Method Actors" as a mental model for guiding LLM prompt
engineering and prompt architecture. Under this mental model, LLMs should be
thought of as actors; prompts as scripts and cues; and LLM responses as
performances. We apply this mental model to the task of improving LLM
performance at playing Connections, a New York Times word puzzle game that
prior research identified as a challenging benchmark for evaluating LLM
reasoning. Our experiments with GPT-4o show that a "Method Actors" approach can
significantly improve LLM performance over both a vanilla and "Chain of
Thoughts" approach. A vanilla approach solves 27% of Connections puzzles in our
dataset and a "Chain of Thoughts" approach solves 41% of puzzles, whereas our
strongest "Method Actor" approach solves 86% of puzzles. We also test OpenAI's
newest model designed specifically for complex reasoning tasks, o1-preview.
When asked to solve a puzzle all at once, o1-preview solves 79% of Connections
puzzles in our dataset, and when allowed to build puzzle solutions one guess at
a time over multiple API calls, o1-preview solves 100% of the puzzles.
Incorporating a "Method Actor" prompt architecture increases the percentage of
puzzles that o1-preview solves perfectly from 76% to 87%. | arXiv |
Time-dependent Thermoremanent Magnetization (TRM) studies have been
instrumental in probing energy dynamics within the spin glass phase. In this
paper, we will review the evolution of the TRM experiment over the last half
century and discuss some aspects related to how it has been employed in the
understanding of spin glasses. We will also report on recent experiments using
high resolution DC SQUID magnetometry to probe the TRM at temperatures less
than but near to the transition temperature Tc. These experiments have been
performed as a function of waiting time, temperature, and five different
magnetic fields. We find that as the transition temperature is approached from
below, the characteristic time scale of the TRM is suppressed up to several
orders of magnitude in time. In the highest temperature region, we find that
the waiting time effect goes away, and a waiting time independent crossover
line is reached. We also find that increasing the magnetic field, further
suppresses the crossover line. Using a first principles energy argument across
the crossover line, we derive an equation that is an excellent fit to the
crossover lines for all magnetic fields probed. The data show strong evidence
for an H = 0 Oe phase transition. | arXiv |
This article presents a data-driven equation-free modeling of the dynamics of
a hexafloat floating offshore wind turbine based on the Dynamic Mode
Decomposition (DMD). The DMD is here used to provide a modal analysis and
extract knowledge from the dynamic system. A forecasting algorithm for the
motions, accelerations, and forces acting on the floating system, as well as
the height of the incoming waves, the wind speed, and the power extracted by
the wind turbine, is developed by using a methodological extension called
Hankel-DMD, that includes time-delayed copies of the states in an augmented
state vector. All the analyses are performed on experimental data collected
from an operating prototype. The quality of the forecasts obtained varying two
main hyperparameters of the algorithm, namely the number of delayed copies and
the length of the observation time, is assessed using three different error
metrics, each analyzing complementary aspects of the prediction. A statistical
analysis revealed the existence of optimal values for the algorithm
hyperparameters. Results show the approach's capability for short-term future
estimates of the system's state, which can be used for real-time prediction and
control. Furthermore, a novel Stochastic Hankel-DMD formulation is introduced
by considering hyperparameters as stochastic variables. The stochastic version
of the method not only enriches the prediction with its related uncertainty but
is also found to improve the normalized root mean square error up to 10% on a
statistical basis compared to the deterministic counterpart. | arXiv |
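A minimal Hankel-DMD sketch (time-delay embedding followed by standard exact DMD; the signal, number of delays, and rank are placeholders) illustrates the forecasting machinery described above.

```python
# Hedged sketch: Hankel-DMD. Stack time-delayed copies of the state into an
# augmented vector, fit a linear operator A via exact DMD, and forecast by
# iterating A. Signal, delay count, and rank are illustrative choices.
import numpy as np

def hankel_dmd(x, n_delays, rank):
    """x: (n_states, n_samples) time series. Returns the one-step operator A
    and the last augmented state, for forecasting."""
    n, m = x.shape
    # Hankel (time-delay) embedding: row blocks are delayed copies of x.
    H = np.vstack([x[:, i:m - n_delays + i + 1] for i in range(n_delays)])
    X, Y = H[:, :-1], H[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A = Y @ Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T   # exact DMD operator
    return A, H[:, -1]

def forecast(A, z0, steps, n_states):
    z, out = z0.copy(), []
    for _ in range(steps):
        z = A @ z
        out.append(z[-n_states:])       # newest block = current state estimate
    return np.array(out).T

t = np.linspace(0, 40, 2000)
x = np.vstack([np.sin(1.3 * t), np.cos(2.1 * t)])       # toy 2-state "motions"
A, z_last = hankel_dmd(x[:, :1800], n_delays=30, rank=20)
pred = forecast(A, z_last, steps=50, n_states=2)
print("forecast shape:", pred.shape)
```

The stochastic variant described above would correspond to resampling `n_delays` and the observation length from suitable distributions and aggregating the resulting forecasts.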
Weakly collisional plasmas contain a wealth of information about the dynamics
of the plasma in the particle velocity distribution functions, yet our ability
to exploit that information fully remains relatively primitive. Here we
present the fundamentals of a new technique, denoted Plasma Seismology, which
aims to invert the information from measurements of the particle velocity
distribution functions at a single point in space over time to enable the
determination of the electric field variation over an extended spatial region.
The fundamental mathematical tool at the heart of this technique is the
Morrison $G$ Transform. Using kinetic numerical simulations of Langmuir waves
in a Vlasov-Poisson plasma, we demonstrate the application of the standard
Morrison $G$ Transform, which uses measurements of the particle velocity
distribution function over all space at one time to predict the evolution of
the electric field in time. Next, we introduce a modified Morrison $G$
Transform which uses measurements of the particle velocity distribution
function at one point in space over time to determine the spatial variation of
the electric field over an extended spatial region. We discuss the limitations
of this approach, particularly for the numerically challenging case of Langmuir
waves. The application of this technique to Alfven waves in a magnetized plasma
holds the promise to apply the technique to existing spacecraft particle
measurement instrumentation to determine the electric fields over an extended
spatial region away from the spacecraft. | arXiv |
We study higher uniformity properties of the von Mangoldt function $\Lambda$,
the M\"obius function $\mu$, and the divisor functions $d_k$ on short intervals
$(x,x+H]$ for almost all $x \in [X, 2X]$.
Let $\Lambda^\sharp$ and $d_k^\sharp$ be suitable approximants of $\Lambda$
and $d_k$, $G/\Gamma$ a filtered nilmanifold, and $F\colon G/\Gamma \to
\mathbb{C}$ a Lipschitz function. Then our results imply for instance that when
$X^{1/3+\varepsilon} \leq H \leq X$ we have, for almost all $x \in [X, 2X]$, \[
\sup_{g \in \text{Poly}(\mathbb{Z} \to G)} \left| \sum_{x < n \leq x+H}
(\Lambda(n)-\Lambda^\sharp(n)) \overline{F}(g(n)\Gamma) \right| \ll H\log^{-A}
X \] for any fixed $A>0$, and that when $X^{\varepsilon} \leq H \leq X$ we
have, for almost all $x \in [X, 2X]$, \[ \sup_{g \in \text{Poly}(\mathbb{Z} \to
G)} \left| \sum_{x < n \leq x+H} (d_k(n)-d_k^\sharp(n))
\overline{F}(g(n)\Gamma) \right| = o(H \log^{k-1} X). \]
As a consequence, we show that the short interval Gowers norms
$\|\Lambda-\Lambda^\sharp\|_{U^s(X,X+H]}$ and $\|d_k-d_k^\sharp\|_{U^s(X,X+H]}$
are also asymptotically small for any fixed $s$ in the same ranges of $H$. This
in turn allows us to establish the Hardy-Littlewood conjecture and the divisor
correlation conjecture with a short average over one variable.
Our main new ingredients are type $II$ estimates obtained by developing a
"contagion lemma" for nilsequences and then using this to "scale up" an
approximate functional equation for the nilsequence to a larger scale. This
extends an approach developed by Walsh for Fourier uniformity. | arXiv |
Separating disinformation from fact on the web has long challenged both the
search and the reasoning powers of humans. We show that the reasoning power of
large language models (LLMs) and the retrieval power of modern search engines
can be combined to automate this process and explainably verify claims. We
integrate LLMs and search under a multi-hop evidence pursuit strategy. This
strategy generates an initial question from the input claim using a sequence-to-sequence model, searches for and formulates an answer to that question, and iteratively generates follow-up questions with an LLM to pursue the missing evidence. We demonstrate our system on the FEVER 2024 (AVeriTeC)
shared task. Compared to a strategy of generating all the questions at once,
our method obtains .045 higher label accuracy and .155 higher AVeriTeC score
(evaluating the adequacy of the evidence). Through ablations, we show the
importance of various design choices, such as the question generation method,
medium-sized context, reasoning with one document at a time, adding metadata,
paraphrasing, reducing the problem to two classes, and reconsidering the final
verdict. Our submitted system achieves .510 AVeriTeC score on the dev set and
.477 AVeriTeC score on the test set. | arXiv |
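
A minimal sketch of the multi-hop evidence-pursuit loop (control flow only) is shown below. The four helpers are trivial stubs standing in for the components described above (a sequence-to-sequence question generator, a search engine, per-document answering, and an LLM follow-up generator); their names and signatures are our assumptions, not the authors' API.

def generate_initial_question(claim):
    return f"Is it true that {claim}?"

def search(question):
    return {"url": "https://example.org", "text": "stub document"}

def answer_from_doc(question, doc):
    return "stub answer"                      # reasoning over one doc at a time

def generate_followup(claim, evidence, max_hops):
    return None if len(evidence) >= max_hops else f"What supports '{claim}'?"

def verify_claim(claim, max_hops=5):
    evidence = []
    question = generate_initial_question(claim)
    while question is not None:
        doc = search(question)                # retrieve evidence for the question
        answer = answer_from_doc(question, doc)
        evidence.append((question, answer, doc))
        # iteratively pursue whatever evidence is still missing
        question = generate_followup(claim, evidence, max_hops)
    label = "Supported" if evidence else "Not Enough Evidence"
    return label, evidence                    # verdict plus evidence trail

print(verify_claim("the Eiffel Tower is in Paris")[0])
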
We study the performance of a worldwide network made of a European third-generation gravitational-wave (GW) detector together with a 40-km Cosmic
Explorer detector in the US, considering three scenarios for the European
detector: (1) Einstein Telescope (ET) in its 10-km triangle configuration; (2)
ET in its configuration featuring two 15-km L-shaped detectors in different
sites, still taken to have all other ET characteristics (underground, and with
each detector made of a high-frequency interferometer and a cryogenic
low-frequency interferometer); (3) A single L-shaped underground interferometer
with the ET amplitude spectral density, either with 15~km or with 20~km arm
length. Overall, we find that, if a 2L configuration is retained for ET,
the network made by a single-L European underground detector together with
CE-40km could already provide a very interesting intermediate step toward the
construction of a full 2L+CE network, and is in any case superior to a 10-km triangle that is not part of an international network. | arXiv |
Given a database of bit strings $A_1,\ldots,A_m\in \{0,1\}^n$, a fundamental
data structure task is to estimate the distances between a given query $B\in \{0,1\}^n$ and all the strings in the database. In addition, one might further
want to ensure the integrity of the database by releasing these distance
statistics in a secure manner. In this work, we propose differentially private
(DP) data structures for tasks of this type, with a focus on Hamming and edit
distance. On top of the strong privacy guarantees, our data structures are also
time- and space-efficient. In particular, our data structure is $\epsilon$-DP
against any sequence of queries of arbitrary length, and for any query $B$ such
that the maximum distance to any string in the database is at most $k$, we
output $m$ distance estimates. Moreover,
- For Hamming distance, our data structure answers any query in $\widetilde
O(mk+n)$ time and each estimate deviates from the true distance by at most
$\widetilde O(k/e^{\epsilon/\log k})$;
- For edit distance, our data structure answers any query in $\widetilde
O(mk^2+n)$ time and each estimate deviates from the true distance by at most
$\widetilde O(k/e^{\epsilon/(\log k \log n)})$.
For moderate $k$, both data structures support sublinear query operations. We
obtain these results via a novel adaptation of the randomized response technique as a bit-flipping procedure applied to the sketched strings. | arXiv |
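
As a simplified illustration of the bit-flipping idea, the sketch below applies randomized response directly to the stored strings and debiases the resulting Hamming estimate; the paper's actual data structure additionally sketches the strings for time and space efficiency, which we omit here.

import numpy as np

def privatize(a, eps, rng):
    """Flip each bit independently with prob 1/(e^eps + 1) (eps-DP per bit flip)."""
    p = 1.0 / (np.exp(eps) + 1.0)
    flips = rng.random(a.shape) < p
    return np.where(flips, 1 - a, a)

def estimate_hamming(noisy_a, b, eps):
    """Debiased estimate: E[raw mismatches] = n*p + (1 - 2p)*d for true distance d."""
    p = 1.0 / (np.exp(eps) + 1.0)
    raw = np.sum(noisy_a != b)
    return (raw - len(b) * p) / (1.0 - 2.0 * p)

rng = np.random.default_rng(1)
n, eps = 10_000, 2.0
a = rng.integers(0, 2, n)
b = a.copy(); b[:250] ^= 1                  # true Hamming distance is 250
print(estimate_hamming(privatize(a, eps, rng), b, eps))
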
Language model performance depends on identifying the optimal mixture of data
groups to train on (e.g., law, code, math). Prior work has proposed a diverse
set of methods to efficiently learn mixture proportions, ranging from fitting
regression models over training runs to dynamically updating proportions
throughout training. Surprisingly, we find that no existing method consistently
outperforms a simple stratified sampling baseline in terms of average test
perplexity per group. In this paper, we study the cause of this inconsistency
by unifying existing methods into a standard optimization framework. We show
that all methods set proportions to minimize total loss, subject to a
method-specific mixing law -- an assumption on how loss is a function of
mixture proportions. We find that existing parameterizations of mixing laws can
express the true loss-proportion relationship empirically, but the methods
themselves often set the mixing law parameters inaccurately, resulting in poor
and inconsistent performance. Finally, we leverage the insights from our
framework to derive a new online method named Aioli, which directly estimates
the mixing law parameters throughout training and uses them to dynamically
adjust proportions. Empirically, Aioli outperforms stratified sampling on 6 out
of 6 datasets by an average of 0.28 test perplexity points, whereas existing
methods fail to consistently beat stratified sampling, doing up to 6.9 points
worse. Moreover, in a practical setting where proportions are learned on
shorter runs due to computational constraints, Aioli can dynamically adjust
these proportions over the full training run, consistently improving
performance over existing methods by up to 12.01 test perplexity points. | arXiv |
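
To make the framework concrete, the sketch below fits a simple linear mixing law to a history of (proportions, per-group losses) by least squares, then takes an exponentiated-gradient-style step toward proportions with lower predicted total loss. The linear law and the softmax step are illustrative stand-ins; Aioli's actual online parameterization and estimator follow the paper.

import numpy as np

def fit_mixing_law(props, losses):
    """Linear mixing law per group: L_j(p) ~ c_j + a_j . p (least squares)."""
    T, k = losses.shape
    X = np.hstack([np.ones((T, 1)), props])        # (T, k+1) design matrix
    coef, *_ = np.linalg.lstsq(X, losses, rcond=None)
    return coef[0], coef[1:]                       # intercepts (k,), slopes (k, k)

def choose_proportions(props, losses, temp=0.5):
    """Down-weight groups predicted to raise total loss."""
    _, A = fit_mixing_law(props, losses)           # A[i, j] = dL_j / dp_i
    grad = A.sum(axis=1)                           # d(total loss)/dp per group
    w = np.exp(-grad / temp)                       # exponentiated-gradient step
    return w / w.sum()

# Toy demo: 3 groups, synthetic histories of proportions and per-group losses
rng = np.random.default_rng(0)
props = rng.dirichlet(np.ones(3), size=20)         # 20 past mixtures
true_A = np.array([[-1.0, 0.2, 0.1], [0.3, -0.8, 0.0], [0.1, 0.1, -1.2]])
losses = 2.0 + props @ true_A.T + 0.01 * rng.standard_normal((20, 3))
print(choose_proportions(props, losses))
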
When the period of an incommensurate charge density wave (ICDW) approaches an
integer multiple of a lattice vector, the energy gain obtained from locking the
period to the lattice can lead to a fascinating transition into a commensurate
state. This transition actually occurs through an intermediate
near-commensurate (NC) phase, with locally commensurate regions separated by an
ordered array of phase slips of a complex CDW order parameter. TiSe2 is a
paradigmatic CDW system where incommensuration is believed to be induced by
carrier doping, yet its putative NC state has never been imaged, nor its nature established. Here we report the observation of a striking NC state in
ultraclean, slightly doped monolayers of TiSe2, displaying an intricate network
of coherent, unidirectional CDW domain walls over hundreds of nanometers.
Detailed analysis reveals these are not phase slips of a complex CDW, but
rather sign-changing Ising-type domain walls of two coupled real CDWs of
previously known symmetry, consistent with the period doubling nature of the
parent commensurate state. In addition, we observe an unexpected nematic
modulation at the original lattice Bragg peaks which couples to the CDW order
parameters. A Ginzburg-Landau analysis naturally explains the couplings and
relative modulations of all order parameters, unveiling TiSe2 as a rare example
of an NC-CDW of two intertwined real modulations and emergent nematicity. | arXiv |
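
For orientation, a schematic Ginzburg-Landau free energy of the kind such an analysis employs, with $\psi_1,\psi_2$ the two coupled real CDW order parameters and $\eta$ the nematic modulation, might read \[ F = \sum_{i=1,2}\Big(\frac{a_i}{2}\psi_i^2 + \frac{b_i}{4}\psi_i^4\Big) + \frac{g}{2}\psi_1^2\psi_2^2 + \frac{a_\eta}{2}\eta^2 + \lambda\,\eta\,\psi_1\psi_2, \] where the trilinear term lets a nonzero $\eta$ couple the two CDWs and be induced wherever both are present. The invariants retained here are a generic illustration, not the paper's exact symmetry-allowed expansion.
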
One of the fundamental open problems in the field of tensors is the border
Comon's conjecture: given a symmetric tensor $F\in(\mathbb{C}^n)^{\otimes d}$
for $d\geq 3$, its border and symmetric border ranks are equal. In this paper,
we prove the conjecture for large classes of concise tensors in
$(\mathbb{C}^n)^{\otimes d}$ of border rank $n$, i.e., tensors of minimal
border rank. These families include all tame tensors and all tensors whenever
$n\leq d+1$. Our technical tools are border apolarity and border varieties of
sums of powers. | arXiv |
We introduce and study a generalized form of derivations for dendriform
algebras, specifying all admissible parameter values that define these
derivations. Additionally, we present a complete classification of generalized
derivations for two-dimensional left-symmetric dialgebras over the field
$\mathbb{K}$. | arXiv |
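
For context, one widely used parameterized notion, shown here only to illustrate the kind of condition involved (the paper's admissible parameters need not coincide with it), calls a linear map $D$ a generalized $(\alpha,\beta,\gamma)$-derivation of a dendriform algebra $(A,\prec,\succ)$ when, for fixed scalars $\alpha,\beta,\gamma\in\mathbb{K}$ and both products $\ast\in\{\prec,\succ\}$, \[ \gamma\, D(x \ast y) = \alpha\, D(x) \ast y + \beta\, x \ast D(y), \qquad x,y\in A. \]
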
Humans are able to fuse information from both auditory and visual modalities
to help with understanding speech. This is frequently demonstrated through a phenomenon known as the McGurk Effect, during which a listener is presented
with incongruent auditory and visual speech that fuse together into the percept
of an illusory intermediate phoneme. Building on a recent framework that
proposes how to address developmental 'why' questions using artificial neural
networks, we evaluated a set of recent artificial neural networks trained on
audiovisual speech by testing them with audiovisually incongruent words
designed to elicit the McGurk effect. We compared networks trained on clean
speech to those trained on noisy speech, and discovered that training with
noisy speech led to an increase in both visual responses and McGurk responses
across all models. Furthermore, we observed that systematically increasing the
level of auditory noise during ANN training also increased the amount of
audiovisual integration up to a point, but at extreme noise levels, this
integration failed to develop. These results suggest that excessive noise
exposure during critical periods of audiovisual learning may negatively
influence the development of audiovisual speech integration. This work also
demonstrates that the McGurk effect reliably emerges, without explicit training, in the behaviour of both supervised and unsupervised networks. This supports the
notion that artificial neural networks might be useful models for certain
aspects of perception and cognition. | arXiv |
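
A hedged sketch of the kind of McGurk probe used in studies like this one: pair the audio of one phoneme with the video of another and record what the model reports. The stimuli, labels, and scoring follow the classic /ba/ (audio) + /ga/ (video) -> /da/ (fused) design; the toy model below is a made-up stand-in, not any network evaluated in the paper.

import numpy as np

def mcgurk_probe(model, audio_ba, video_ga, labels=("ba", "ga", "da")):
    probs = model(audio_ba, video_ga)          # phoneme distribution
    report = labels[int(np.argmax(probs))]
    return {
        "auditory": report == "ba",            # heard the audio track
        "visual": report == "ga",              # followed the lips
        "mcgurk": report == "da",              # illusory fused percept
    }

# Toy stand-in model: weights audio vs. video cues and leaks probability mass
# to the fused percept when the two streams conflict (illustrative only).
def toy_model(audio, video, audio_trust=0.4):
    conflict = float(not np.allclose(audio, video))
    p = np.array([audio_trust, 1 - audio_trust, 0.0])
    p = (1 - 0.5 * conflict) * p + 0.5 * conflict * np.array([0.0, 0.0, 1.0])
    return p / p.sum()

print(mcgurk_probe(toy_model, np.ones(3), np.zeros(3)))
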
We advance the study of pure de Sitter supergravity by introducing a finite
formulation of unimodular supergravity via the super-St\"uckelberg mechanism.
Building on previous works, we construct a complete four-dimensional action of
spontaneously broken ${\cal N}\!\!=\!\!1$ supergravity to all orders, which
allows for de Sitter solutions. The introduction of finite supergravity
transformations extends the super-St\"uckelberg procedure beyond the second
order, offering a recursive solution to all orders in the goldstino sector.
This work bridges the earlier perturbative approaches and the complete finite
theory, opening new possibilities for de Sitter vacua in supergravity models
and eventually string theory. | arXiv |
Let $p$ be a prime number. We consider diagonal $p$-permutation functors over
a (commutative, unital) ring $\mathsf{R}$ in which all prime numbers different
from $p$ are invertible. We first determine the finite groups $G$ for which the
associated essential algebra $\mathcal{E}_\mathsf{R}(G)$ is nonzero: these are
groups of the form $G=L\rtimes \langle u\rangle$, where $(L,u)$ is a
$D^\Delta$-pair. When $\mathsf{R}$ is an algebraically closed field
$\mathbb{F}$ of characteristic 0 or $p$, this yields a parametrization of the
simple diagonal $p$-permutation functors over $\mathbb{F}$ by triples
$(L,u,W)$, where $(L,u)$ is a $D^\Delta$-pair, and $W$ is a simple
$\mathbb{F}\mathrm{Out}(L,u)$-module. Finally, we describe the evaluations of
the simple functor $\mathsf{S}_{L,u,W}$ parametrized by the triple $(L,u,W)$.
We show in particular that if $G$ is a finite group and $\mathbb{F}$ has
characteristic $p$, the dimension of $\mathsf{S}_{L,1,\mathbb{F}}(G)$ is equal
to the number of conjugacy classes of $p$-regular elements of $G$ with defect
isomorphic to $L$. | arXiv |
Complex systems, such as economic, social, biological, and ecological
systems, usually feature interactions not only between pairwise entities but
also among three or more entities. These multi-entity interactions are known as
higher-order interactions. Hypergraphs, as a mathematical tool, can effectively
characterize higher-order interactions, where nodes denote entities and
hyperedges represent interactions among multiple entities. Meanwhile, all
higher-order interactions can also be projected into a number of lower-order
interactions or even some pairwise interactions. Whether it is necessary to consider all higher-order interactions, and whether they can be replaced by lower-order or even pairwise interactions with little loss, remain controversial issues. If the role of higher-order interactions is insignificant,
the complexity of computation and the difficulty of analysis can be drastically
reduced by projecting higher-order interactions into lower-order or pairwise
interactions. We use link prediction, a fundamental problem in network science,
as the entry point. Specifically, we evaluate the impact of higher-order
interactions on link predictive accuracy to explore the necessity of these
structures. We propose a method to decompose the higher-order structures in a
stepwise way, thereby allowing us to systematically explore the impacts of
structures at different orders on link prediction. The results indicate that in
some networks, incorporating higher-order interactions significantly enhances
the accuracy of link prediction, while in others, the effect is insignificant.
Therefore, we think that the role of higher-order interactions varies in
different types of networks. Overall, since the improvement in predictive
accuracy provided by higher-order interactions is significant in some networks,
we believe that the study of higher-order interactions is both necessary and
valuable. | arXiv |
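
A minimal sketch of stepwise hyperedge decomposition of the kind described above: each hyperedge of size greater than s is replaced by all of its size-s sub-interactions, so link prediction can be compared on the original and progressively flattened structures. The decomposition rule here is our illustrative reading, not necessarily the paper's exact operator.

from itertools import combinations

def decompose_once(hyperedges, s):
    """Replace every hyperedge larger than s by its size-s subsets."""
    out = set()
    for e in hyperedges:
        e = tuple(sorted(e))
        if len(e) <= s:
            out.add(e)
        else:
            out.update(combinations(e, s))
    return out

H = {(1, 2, 3, 4), (2, 3, 5), (5, 6)}
for s in (3, 2):                      # step down one order at a time
    H = decompose_once(H, s)
    print(s, sorted(H))
# at s = 2 the hypergraph has been projected onto pairwise edges only
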
The integration of unmanned platforms equipped with advanced sensors promises
to enhance situational awareness and mitigate the "fog of war" in military
operations. However, managing the vast influx of data from these platforms
poses a significant challenge for Command and Control (C2) systems. This study
presents a novel multi-agent learning framework to address this challenge. Our
method enables autonomous and secure communication between agents and humans,
which in turn allows real-time formation of an interpretable Common
Operational Picture (COP). Each agent encodes its perceptions and actions into
compact vectors, which are then transmitted, received and decoded to form a COP
encompassing the current state of all agents (friendly and enemy) on the
battlefield. Using Deep Reinforcement Learning (DRL), we jointly train COP
models and the agents' action-selection policies. We demonstrate resilience to
degraded conditions such as denied GPS and disrupted communications.
Experimental validation is performed in the StarCraft II simulation environment to evaluate the precision of the COPs and the robustness of the policies. We report less than 5% error in the COPs, with policies that remain resilient under various adversarial conditions. In summary, our contributions include a method for autonomous COP
formation, increased resilience through distributed prediction, and joint
training of COP models and multi-agent RL policies. This research advances
adaptive and resilient C2, facilitating effective control of heterogeneous
unmanned platforms. | arXiv |
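
A hedged PyTorch sketch of the encode/transmit/decode pattern described above: each agent compresses its local observation into a compact vector, and a decoder fuses the received vectors into a COP (predicted state of all agents). The sizes, architectures, and training signal are illustrative assumptions, not the paper's models.

import torch
import torch.nn as nn

OBS_DIM, MSG_DIM, N_AGENTS, STATE_DIM = 32, 8, 4, 16

encoder = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, MSG_DIM))
decoder = nn.Sequential(                       # fuses all messages into a COP
    nn.Linear(N_AGENTS * MSG_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_AGENTS * STATE_DIM),
)

obs = torch.randn(N_AGENTS, OBS_DIM)           # each agent's local perception
msgs = encoder(obs)                            # compact vectors to transmit
cop = decoder(msgs.flatten()).view(N_AGENTS, STATE_DIM)

# In joint training, a COP loss (vs. ground-truth states) would be combined
# with the RL objective so communication and control are learned together:
true_states = torch.randn(N_AGENTS, STATE_DIM)
cop_loss = nn.functional.mse_loss(cop, true_states)
print(cop.shape, float(cop_loss))
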