text | split |
---|---|
Planning with partial observation is a central challenge in embodied AI. A
majority of prior works have tackled this challenge by developing agents that
physically explore their environment to update their beliefs about the world
state. In contrast, humans can $\textit{imagine}$ unseen parts of the world
through a mental exploration and $\textit{revise}$ their beliefs with imagined
observations. Such updated beliefs can allow them to make more informed
decisions, without necessitating the physical exploration of the world at all
times. To achieve this human-like ability, we introduce the $\textit{Generative
World Explorer (Genex)}$, an egocentric world exploration framework that allows
an agent to mentally explore a large-scale 3D world (e.g., urban scenes) and
acquire imagined observations to update its belief. This updated belief will
then help the agent to make a more informed decision at the current step. To
train $\textit{Genex}$, we create a synthetic urban scene dataset, Genex-DB.
Our experimental results demonstrate that (1) $\textit{Genex}$ can generate
high-quality and consistent observations during long-horizon exploration of a
large virtual physical world and (2) the beliefs updated with the generated
observations can inform an existing decision-making model (e.g., an LLM agent)
to make better plans. | arXiv |
I model a rational agent who spends resources between the current time and
some fixed future deadline. Opportunities to spend resources arise randomly
according to a Poisson process, and the quality of each opportunity follows a
uniform distribution. The agent values their current resource stock at exactly
the sum of expected utility from all future spending opportunities. Unlike in
traditional discounted expected utility models, the agent exhibits correlation
aversion, static (but not dynamic) preference reversals, and monotonicity with
respect to payment timing. Connecting the agent's risk and time preference is
intuitive, and doing so leads to a new model of procrastination where the agent
misperceives their general attitude toward spending resources. | arXiv |
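As a worked illustration of the valuation described above (the notation is ours,
not the paper's), the current resource stock is valued as the expected sum of
utilities over the Poisson-distributed number of remaining spending
opportunities: $$ V_t(x) = \mathbb{E}\Big[\sum_{i=1}^{N} u\big(c_i^{*}, q_i\big)\Big], \quad
N \sim \mathrm{Poisson}\big(\lambda\,(T-t)\big), \quad q_i \sim \mathrm{Unif}[0,1], $$
where $T$ is the deadline, $\lambda$ the arrival rate of opportunities, $q_i$ the
quality of the $i$-th opportunity, and $c_i^{*}$ the spending chosen optimally at
that opportunity given the remaining stock $x$.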
A full set of optimized observables is measured in an angular analysis of the
decay B$^0$ $\to$ K$^*$(892)$^0\mu^+\mu^-$ using a sample of proton-proton
collisions at $\sqrt{s}$ = 13 TeV, collected with the CMS detector at the LHC,
corresponding to an integrated luminosity of 140 fb$^{-1}$. The analysis is
performed in six bins of the squared invariant mass of the dimuon system,
$q^2$, over the range $1.1 < q^2 < 16$ GeV$^2$. The results are among
the most precise experimental measurements of the angular observables for this
decay and are compared to a variety of predictions based on the standard model. | arXiv |
Differential equations are derived which show how generalized Euler vector
representations of the Euler rotation axis and angle for a rigid body evolve in
time; the Euler vector is also known as a rotation vector or axis-angle vector.
The solutions can exhibit interesting rotational features in this non-abstract,
visualizable setting, including spinor-like behavior and quasiperiodicity. The
equations are well-behaved at zero, reducing to the simple infinitesimal case
there. One of them is equivalent to a known quaternion differential equation.
The simple geometric derivation does not depend on Euler's rotation theorem,
and yields a proof of Euler's theorem using only infinitesimal motions. With
mild regularity conditions on the angular velocity function, there is a
continuous evolution of the normalized axis and angle for all time. Dynamical
systems properties are discussed, and numerical solutions are used to
investigate them when the angular velocity is itself rotating, and the Euler
vector trajectory traces out a torus-like shape, with a strobe plot that
densely fills in a closed curve. | arXiv |
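For reference, the known quaternion differential equation alluded to above is the
standard attitude-kinematics relation, written here in one common convention (the
paper's precise form may differ): $$ \dot{q}(t) = \tfrac{1}{2}\, \omega(t) \otimes q(t), $$
where $q$ is the unit rotation quaternion, $\omega(t) = (0, \boldsymbol{\omega}(t))$ is the
angular velocity promoted to a pure quaternion, and $\otimes$ denotes quaternion
multiplication; the Euler-vector equations discussed above can be viewed as the
axis-angle counterpart of this relation.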
This paper describes two C++/Open Motion Planning Library implementations of
the recently developed motion planning algorithms HyRRT arXiv:2210.15082v1
[cs.RO] and HySST arXiv:2305.18649v1 [cs.RO]. Specifically, cHyRRT, an
implementation of the HyRRT algorithm, is capable of generating a solution to a
motion planning problem for hybrid systems with probabilistic completeness,
while cHySST, an implementation of the asymptotically near-optimal HySST
algorithm, is capable of computing a trajectory to solve the optimal motion
planning problem for hybrid systems. cHyRRT is suitable for motion planning
problems where an optimal solution is not required, whereas cHySST is suitable
for problems where an optimal solution, among all feasible ones, is preferred.
The structure, components, and usage of the two tools are described. Examples
are included to illustrate the main capabilities of the toolbox. | arXiv |
Federated learning (FL) is an emerging paradigm for training machine learning
models across distributed clients. Traditionally, in FL settings, a central
server assigns training efforts (or strategies) to clients. However, from a
market-oriented perspective, clients may independently choose their training
efforts based on rational self-interest. To explore this, we propose a
potential game framework where each client's payoff is determined by their
individual efforts and the rewards provided by the server. The rewards are
influenced by the collective efforts of all clients and can be modulated
through a reward factor. Our study begins by establishing the existence of Nash
equilibria (NEs), followed by an investigation of uniqueness in homogeneous
settings. We demonstrate a significant improvement in clients' training efforts
at a critical reward factor, identifying it as the optimal choice for the
server. Furthermore, we prove the convergence of the best-response algorithm to
compute NEs for our FL game. Finally, we apply the training efforts derived
from specific NEs to a real-world FL scenario, validating the effectiveness of
the identified optimal reward factor. | arXiv |
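A minimal sketch of the best-response computation mentioned above, under an
assumed illustrative payoff (proportional reward sharing with a quadratic effort
cost); the payoff form, reward factor, and grid search are our assumptions for
illustration, not the paper's specification.

```python
import numpy as np

def best_response_dynamics(n_clients=5, reward_factor=10.0, cost=1.0,
                           tol=1e-8, max_iter=1000):
    """Iterate best responses in an illustrative FL effort game.

    Assumed payoff for client i choosing effort x_i >= 0:
        u_i = reward_factor * x_i / sum_j x_j  -  cost * x_i**2
    (proportional reward sharing with a quadratic effort cost).
    """
    x = np.full(n_clients, 0.1)            # initial efforts
    grid = np.linspace(1e-6, 5.0, 2001)    # candidate efforts for a grid search
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n_clients):
            others = x.sum() - x[i]
            payoff = reward_factor * grid / (grid + others) - cost * grid ** 2
            x[i] = grid[np.argmax(payoff)]  # best response of client i
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

print("Approximate Nash equilibrium efforts:", best_response_dynamics())
```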
High penetration from volatile renewable energy resources in the grid and the
varying nature of loads raise the need for frequent line switching to ensure
the efficient operation of electrical distribution networks. Operators must
ensure maximum load delivery, reduced losses, and operation within voltage
limits. However, computations to decide the optimal feeder configuration are
often computationally expensive and intractable, making it unfavorable for
real-time operations. This is mainly due to the existence of binary variables
in the network reconfiguration optimization problem. To tackle this issue, we
have devised an approach that leverages machine learning techniques to reshape
distribution networks featuring multiple substations. This involves predicting
the substation responsible for serving each part of the network. Hence, it
leaves simpler and more tractable Optimal Power Flow problems to be solved. This
method can produce accurate results in a significantly faster time, as
demonstrated using the IEEE 37-bus distribution feeder. Compared to the
traditional optimization-based approaches, a feasible solution is achieved
approximately ten times faster for all the tested scenarios. | arXiv |
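A minimal sketch of the learning step described above: a classifier predicts
which substation serves each bus, leaving only simpler, binary-free OPF problems
to solve. The features, the random-forest choice, and the synthetic data below
are placeholder assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per (operating condition, bus); features
# might be load level, a voltage estimate, and distances to each substation.
n_samples, n_features, n_substations = 2000, 6, 2
X_train = rng.normal(size=(n_samples, n_features))
y_train = rng.integers(0, n_substations, size=n_samples)   # serving substation

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Operation time: predict the substation assignment for every bus, then solve
# one binary-free OPF per substation with the assignment held fixed.
X_now = rng.normal(size=(37, n_features))                  # e.g. a 37-bus feeder
assignment = clf.predict(X_now)
for s in range(n_substations):
    buses = np.where(assignment == s)[0]
    # solve_continuous_opf(buses)  # placeholder for the remaining OPF solve
    print(f"Substation {s} serves buses: {buses.tolist()}")
```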
Tuning the density of resident electrons or holes in semiconductors provides
crucial insight into the composition of excitonic complexes that are observed
as absorption or photoluminescence resonances in optical studies. Moreover, we
can change the way these resonances shift and broaden in energy by controlling
the quantum numbers of the resident carriers with magnetic fields and doping
levels, and by selecting the quantum numbers of the photoexcited or recombining
electron-hole (e-h) pair through optical polarization. We discuss the roles of
distinguishability and optimality of excitonic complexes, showing them to be
key ingredients that determine the energy shifts and broadening of optical
resonances in charge-tunable semiconductors. A distinguishable e-h pair means
that the electron and hole undergoing photoexcitation or recombination have
quantum numbers that are not shared by any of the resident carriers. An optimal
excitonic complex refers to a complex whose particles come with all available
quantum numbers of the resident carriers. All optical resonances may be
classified as either distinct or indistinct depending on the distinguishability
of the e-h pair, and the underlying excitonic complex can be classified as
either optimal or suboptimal. The universality of these classifications,
inherited from the fundamental Pauli exclusion principle, allows us to
understand how optical resonances shift in energy and whether they should
broaden as doping is increased. This understanding is supported by conclusive
evidence that the decay of optical resonances cannot be simply attributed to
enhanced screening when resident carriers are added to a semiconductor.
Finally, applying the classification scheme in either monolayer or moiré
heterobilayer systems, we relate the energy shift and amplitude of the neutral
exciton resonance to the compressibility of the resident carrier gas. | arXiv |
Blockchain networks are facing increasingly heterogeneous computational
demands, and in response, protocol designers have started building specialized
infrastructure to supply that demand. This paper introduces Resonance: a new
kind of transaction fee mechanism that operates in a general two-sided
marketplace setting with extreme preference heterogeneity on both sides of the
market. We allow users submitting transactions to have arbitrary valuations for
inclusion, nodes responsible for executing transactions to incur arbitrary net
costs for executing any bundle, and further allow for arbitrary constraints in
allocation validity. These constraints, for example, may range from
representing an individual node's specialized hardware constraints to denoting
the fact that transactions may not be executed in parallel across different
nodes if they utilize the same part of the network's state. Transactions may
even require multiple nodes for execution.
Resonance's design utilizes competition among sophisticated brokers to find
idiosyncratic prices. We show that at pure Nash equilibria, Resonance finds an
efficient outcome and minimizes the need for strategization by users and nodes.
It is also budget-balanced, individually rational for all parties, and
computationally tractable. | arXiv |
In this paper we prove that Schr\"{o}dinger's equation with a Hamiltonian of
the form $H=-\Delta+i(A \nabla + \nabla A) + V$, which includes a magnetic
potential $A$, has the same dispersive and solution decay properties as the
free Schr\"{o}dinger equation. In particular, we prove $L^1 \to L^\infty$ decay
and some related estimates for the wave equation.
The potentials $A$ and $V$ are short-range and $A$ has four derivatives, but
they can be arbitrarily large. All results hold in three space dimensions. | arXiv |
A generative adversarial network (GAN) has been a representative backbone
model in generative artificial intelligence (AI) because of its powerful
performance in capturing intricate data-generating processes. However, GAN
training is notorious for its instability, usually
characterized by the occurrence of mode collapse. Through the lens of
gradients' variance, this work particularly analyzes the training instability
and inefficiency in the presence of mode collapse by linking it to
multimodality in the target distribution. To ease the training issues raised
by severe multimodality, we introduce a novel GAN training framework that
leverages a series of tempered distributions produced via convex interpolation.
With our newly developed GAN objective function, the generator can learn all
the tempered distributions simultaneously, conceptually resonating with
parallel tempering in statistics. Our simulation studies demonstrate the
superiority of our approach over existing popular training strategies in both
image and tabular data synthesis. We theoretically analyze that such
significant improvement can arise from reducing the variance of gradient
estimates by using the tempered distributions. Finally, we further develop a
variant of the proposed framework aimed at generating fair synthetic data, which
is one of the growing interests in the field of trustworthy AI. | arXiv |
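One possible reading of the tempering construction above, sketched purely for
intuition: a ladder of intermediate distributions obtained by convexly
interpolating data samples with reference Gaussian noise. The interpolation
rule, the schedule of levels, and the conditioning scheme noted in the comments
are our assumptions, not the paper's objective.

```python
import numpy as np

def tempered_batches(data, n_levels=5, rng=None):
    """Yield (beta, batch) pairs for a ladder of tempered distributions.

    Assumed construction (an illustration, not the paper's definition):
    x_beta = beta * x + (1 - beta) * z with reference Gaussian noise z, so
    beta = 1 recovers the target data and smaller beta gives smoother,
    less multimodal intermediates.
    """
    rng = rng or np.random.default_rng(0)
    for beta in np.linspace(1.0 / n_levels, 1.0, n_levels):
        z = rng.normal(size=data.shape)
        yield beta, beta * data + (1.0 - beta) * z

# Toy bimodal target, mimicking the multimodality discussed above. In a GAN,
# the generator would be conditioned on beta and the discriminator shown the
# matching tempered batch, so all levels are learned simultaneously.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-4, 1, (512, 2)), rng.normal(4, 1, (512, 2))])
for beta, batch in tempered_batches(data):
    print(f"beta={beta:.2f}: mean |x| = {np.abs(batch).mean():.2f}")
```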
In 1975, Erd\H{o}s and Sauer asked to estimate, for any constant $r$, the
maximum number of edges an $n$-vertex graph can have without containing an
$r$-regular subgraph. In a recent breakthrough, Janzer and Sudakov proved that
any $n$-vertex graph with no $r$-regular subgraph has at most $C_r n \log \log
n$ edges, matching an earlier lower bound by Pyber, R\"odl and Szemer\'edi and
thereby resolving the Erd\H{o}s-Sauer problem up to a constant depending on
$r$. We prove that every $n$-vertex graph without an $r$-regular subgraph has
at most $Cr^2 n \log \log n$ edges. This bound is tight up to the value of $C$
for $n\geq n_0(r)$ and hence resolves the Erd\H{o}s-Sauer problem up to an
absolute constant.
Moreover, we obtain similarly tight results for the whole range of possible
values of $r$ (i.e., not just when $r$ is a constant), apart from a small error
term at a transition point near $r\approx \log n$, where, perhaps surprisingly,
the answer changes. More specifically, we show that every $n$-vertex graph with
average degree at least $\min(Cr\log(n/r),Cr^2 \log\log n)$ contains an
$r$-regular subgraph. The bound $Cr\log(n/r)$ is tight for $r\geq \log n$,
while the bound $Cr^2 \log \log n$ is tight for $r<(\log n)^{1-\Omega(1)}$.
These results resolve a problem of R\"odl and Wysocka from 1997 for almost all
values of $r$.
Among other tools, we develop a novel random process that efficiently finds a
very nearly regular subgraph in any almost-regular graph. A key step in our
proof uses this novel random process to show that every $K$-almost-regular
graph with average degree $d$ contains an $r$-regular subgraph for some
$r=\Omega_K(d)$, which is of independent interest. | arXiv |
Quantum computing architectures based on neutral atoms offer large scales and
high-fidelity operations. They can be heterogeneous, with different zones for
storage, entangling operations, and readout. Zoned architectures improve
computation fidelity by shielding idling qubits in storage from side-effect
noise, unlike monolithic architectures where all operations occur in a single
zone. However, supporting these flexible architectures with efficient
compilation remains challenging. In this paper, we propose ZAC, a scalable
compiler for zoned architectures. ZAC minimizes data movement overhead between
zones with qubit reuse, i.e., keeping qubits in the entanglement zone if an
immediate entangling operation is pending. Other innovations include novel data
placement and instruction scheduling strategies in ZAC, a flexible
specification of zoned architectures, and an intermediate representation for
zoned architectures, ZAIR. Our evaluation shows that zoned architectures
equipped with ZAC achieve a 22x improvement in fidelity compared to monolithic
architectures. Moreover, ZAC is shown to have a 10% fidelity gap on average
compared to the ideal solution. This significant performance enhancement
enables more efficient and reliable quantum circuit execution, supporting
advancements in quantum algorithms and applications. ZAC is open source at
https://github.com/UCLA-VAST/ZAC | arXiv |
Motivated by knot theory, it is natural to define the orientation-reversal of
a quandle orbit by inverting all the translations given by elements of that
orbit. In this short note we observe that this natural notion is unsuited to
medial quandles. | arXiv |
We consider the dynamics of a continuously monitored qubit in the limit of
strong measurement rate where the quantum trajectory is described by a
stochastic master equation with Poisson noise. Such limits are expected to give
rise to quantum jumps between the pointer states associated with the
non-demolition measurement. A surprising discovery in earlier work [Tilloy et
al., Phys. Rev. A 92, 052111 (2015)] on quantum trajectories with Brownian
noise was the phenomenon of spikes observed in between the quantum jumps. Here,
we show that spikes are observed also for Poisson noise. We consider three
cases where the non-demolition property is broken by adding, to the basic strong
measurement dynamics, either unitary evolution or thermal noise or additional
measurements. We present a complete analysis of the spike and jump statistics
for all three cases using the fact that the dynamics effectively corresponds to
that of stochastic resetting. We provide numerical results to support our
analytic results. | arXiv |
Coherent perfect absorption (CPA) has been a topic of considerable
contemporary research interest. However, its implementation in practical
applications has been limited, since it has been demonstrated only for plane
waves till now. The issue for beams with finite confinement -- characterized by
a collection of plane waves -- is that complete destructive interference is not
feasible for all the plane waves simultaneously. In this paper, we study the
absorption characteristics of two counter-propagating structured beams, e.g.,
Gaussian and Laguerre-Gaussian (LG) beams with and without orbital angular
momentum respectively, incident normally on a composite slab from both sides by
fulfilling the CPA condition exclusively for the central plane waves. We show
that though perfect absorption is not achievable, there can be a substantial
reduction of the scattered light. We also consider CPA for oblique incidence
and discuss the difficulties. | arXiv |
We investigate prophet inequalities with competitive ratios approaching $1$,
seeking to generalize $k$-uniform matroids. We first show that large girth does
not suffice: for all $k$, there exists a matroid of girth $\geq k$ and a
prophet inequality instance on that matroid whose optimal competitive ratio is
$\frac{1}{2}$. Next, we show $k$-fold matroid unions do suffice: we provide a
prophet inequality with competitive ratio $1-O(\sqrt{\frac{\log k}{k}})$ for
any $k$-fold matroid union. Our prophet inequality follows from an online
contention resolution scheme.
The key technical ingredient in our online contention resolution scheme is a
novel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz
functions over independent items which may be of independent interest. Applied
to our particular setting, our bicriterion concentration inequality yields
"Chernoff-strength" concentration for a $1$-Lipschitz function that is not
(approximately) self-bounding. | arXiv |
We prove upper bounds which are independent of the dimension of the ambient
space, on the number of realizable zero-nonzero patterns as well as sign
conditions (when the field of coefficients is ordered) of a finite set of
polynomials $\mathcal{P}$ restricted to some algebraic subset $V$. Our bounds
(which are tight) depend on the number and the degrees of the polynomials in
$\mathcal{P}$, as well as the degree (of the embedding) of $V$ and the
dimension of $V$, but are independent of the dimension of the space in which
$V$ is embedded. This last feature of our bounds is useful in situations where
the ambient dimension could be much larger than $\dim V$.
We give several applications of our results. We generalize existing results
on bounding the speeds of algebraically defined classes of graphs, as well as
lower bounds in terms of the number of connected components for testing
membership in semi-algebraic sets using algebraic computation trees. Motivated
by quantum complexity theory we introduce a notion of relative rank (additive
as well as multiplicative) in finite dimensional vector spaces and algebras
relative to a fixed algebraic subset in the vector space or algebra -- which
generalizes the classical definition of ranks of tensors. We prove a very
general lower bound on the maximum relative rank of finite subsets relative to
algebraic subsets of bounded degree and dimension which is independent of the
dimension of the vector space or algebra. We show how our lower bound implies a
quantum analog of a classical lower bound result of Shannon for Boolean
circuits -- that almost all Boolean functions require (classical) circuits of
size at least $\Omega(2^n/n)$. | arXiv |
Aligning diffusion models with downstream objectives is essential for their
practical applications. However, standard alignment methods often struggle with
step generalization when directly applied to few-step diffusion models, leading
to inconsistent performance across different denoising step scenarios. To
address this, we introduce Stepwise Diffusion Policy Optimization (SDPO), a
novel alignment method tailored for few-step diffusion models. Unlike prior
approaches that rely on a single sparse reward from only the final step of each
denoising trajectory for trajectory-level optimization, SDPO incorporates dense
reward feedback at every intermediate step. By learning the differences in
dense rewards between paired samples, SDPO facilitates stepwise optimization of
few-step diffusion models, ensuring consistent alignment across all denoising
steps. To promote stable and efficient training, SDPO introduces an online
reinforcement learning framework featuring several novel strategies designed to
effectively exploit the stepwise granularity of dense rewards. Experimental
results demonstrate that SDPO consistently outperforms prior methods in
reward-based alignment across diverse step configurations, underscoring its
robust step generalization capabilities. Code is available at
https://github.com/ZiyiZhang27/sdpo. | arXiv |
The Vol-Det Conjecture, formulated by Champanerkar, Kofman and Purcell,
states that there exists a specific inequality connecting the hyperbolic volume
of an alternating link and its determinant. Among the classes of links for
which this conjecture holds are all knots with at most 16 crossings, 2-bridge
links, and links that are closures of 3-strand braids. In the present paper,
Burton's bound on the number of crossings for which the Vol-Det Conjecture
holds is improved for links with more than eight twists. In addition,
Stoimenow's inequalities between hyperbolic volumes and determinants are
improved for alternating and alternating arborescent links with more than eight
twists. | arXiv |
We prove that for every compact, convex subset $K\subset\mathbb{R}^2$ the
operator system $A(K)$, consisting of all continuous affine functions on $K$,
is hyperrigid in the C*-algebra $C(\mathrm{ex}(K))$. In particular, this result
implies that the weak and strong operator topologies coincide on the set $$
\{ T\in\mathcal{B}(H);\ T\ \mathrm{normal}\ \mathrm{and}\ \sigma(T)\subset
\mathrm{ex}(K) \}.
$$ Our approach relies on geometric properties of $K$ and generalizes
previous results by Brown. | arXiv |
This paper explores quadratic forms over finite fields with associated
Artin-Schreier curves. Specifically, we investigate quadratic forms of $\mathbb
F_{q^n}/\mathbb F_q$ represented by polynomials over $\mathbb F_{q^n}$ with $q$
odd, characterizing them using certain matrices defined by coefficients of the
polynomials. In particular, a comprehensive treatment will be given for those
polynomials whose coefficients all lie in $\mathbb F_q$. Afterwards, the
results on quadratic forms will be applied to get maximal and minimal
Artin-Schreier curves explicitly. | arXiv |
We are interested in the nonlinear damped Klein-Gordon equation \[
\partial_t^2 u+2\alpha \partial_t u-\Delta u+u-|u|^{p-1}u=0 \] on
$\mathbb{R}^d$ for $2\le d\le 5$ and energy sub-critical exponents $2 < p <
\frac{d+2}{d-2}$.
We construct multi-solitons, that is, solutions which behave for large times
as a sum of decoupled solitons, in various configurations with symmetry: this
includes multi-solitons whose soliton centers lie at the vertices of an
expanding regular polygon (with or without a center), of a regular polyhedron
(with a center), or of a higher dimensional regular polytope. We give a precise
description of these multi-solitons: in particular the interaction between
nearest neighbour solitons is asymptotic to $\ln (t)$ as $t \to +\infty$.
We also prove that in any multi-soliton, the solitons can not all share the
same sign.
Both statements generalize and make precise results from \cite{F98}, \cite{Nak}
and are based on the analysis developed in \cite{CMYZ,CMY}. | arXiv |
The personalization techniques of diffusion models succeed in generating
specific concepts but also pose threats to copyright protection and illegal
use. Model Watermarking is an effective method to prevent the unauthorized use
of subject-driven or style-driven image generation, safeguarding concept
copyrights. However, under the goal of concept-oriented protection, current
watermarking schemes typically add watermarks to all images rather than
applying them in a refined manner targeted at specific concepts. Additionally,
the personalization techniques of diffusion models can easily remove
watermarks. Existing watermarking methods struggle to achieve fine-grained
watermark embedding with a few images of specific concept and prevent removal
of watermarks through personalized fine-tuning. Therefore, we introduce a novel
concept-oriented watermarking framework that seamlessly embeds imperceptible
watermarks into the concept of diffusion models. We conduct extensive
experiments and ablation studies to verify our framework. Our code is available
at https://anonymous.4open.science/r/Conceptwm-4EB3/. | arXiv |
We critically review the evidence for time-varying dark energy from recent
Baryon Acoustic Oscillations (BAO) and Supernova (SN) observations. First, we
show that such evidence is present at the 3$\sigma$ level, even without the new
BAO data from the Dark Energy Spectroscopic Instrument (DESI), by instead using
BAO data from the Dark Energy Survey (DES), combined with the DES5Y supernovae
and Planck CMB data. Next, we examine the role of the DES5Y supernova dataset,
showing that the preference for time-varying dark energy is driven by the low
redshift supernovae common to both the DES5Y and Pantheon+ compilations. We
find that combining Pantheon+ and DES5Y supernovae by removing the common
supernovae leads to two different results, depending on whether they are
removed from the DES5Y or the Pantheon+ catalog, leading to stronger or weaker
exclusion of $\Lambda$CDM, at the 3.8$\sigma$ and 2.5$\sigma$ levels,
respectively. These common supernovae have smaller error bars in DES5Y compared
to Pantheon+, and, as recently pointed out, there is an offset in magnitude in
DES5Y between supernovae at $z > 0.1$, where almost all the measurements
taken during the full five years of DES are, and the low-redshift ones ($z <
0.1$), where all the historical set of nearby supernovae lies. We show that
marginalizing over such an offset in DES5Y would lead to significantly weaker
evidence for evolving dark energy. | arXiv |
The abundant chemical compositions of ternary hydrides offer many more
possibilities for exploring high-temperature superconductors at lower pressures.
Here we constructed 115 ternary hydrides on the basis of the elements
substitution using 16 metal elements within 5 reported prototype structures. We
conducted a three-step approach to screen and study these candidate structures
in the aspects of dynamical stability, formation energy and relative enthalpy,
respectively. Based on this approach, we found three meta-stable compounds with
hydrogen clathrate cages in the space group of P-3m1, including Y2CdH18,
Y2InH18 and Ca2SnH18. All of the structures are superconductive under high
pressure with Tc above 110 K, which is above the boiling temperature of liquid
nitrogen. Our study enriches the database of novel
ternary hydrides under high pressure, and provides insight for future
theoretical and experimental research. | arXiv |
The variation of the physical conditions across the three dimensions of our
Galaxy is a major source of complexity for the modelling of the foreground
signal facing the cosmic microwave background (CMB). In the present work, we
demonstrate that the spin-moment expansion formalism provides a powerful
framework to model and understand this complexity, with a special focus on that
arising from variations of the physical conditions along each line-of-sight on
the sky. We perform the first application of the moment expansion to reproduce
a thermal dust model largely used by the CMB community, demonstrating its power
as a minimal tool to compress, understand and model the information contained
within any foreground model. Furthermore, we use this framework to produce new
models of thermal dust emission containing the maximal amount of complexity
allowed by the current data, while remaining compatible with the angular power
spectra observed by the $Planck$ mission. By assessing the impact of these models
on the performance of component separation methodologies, we conclude that the
additional complexity contained within the third dimension could represent a
significant challenge for future CMB experiments and that different component
separation approaches are sensitive to different properties of the moments. | arXiv |
Ashkin-Teller model is a two-layer lattice model where spins in each layer
interact ferromagnetically with strength $J$, and the spin-dipoles (product of
spins) interact with neighbors with strength $\lambda.$ The model exhibits
simultaneous magnetic and electric transitions along a self-dual line on the
$\lambda$-$J$ plane with continuously varying critical exponents. In this
article, we investigate the percolation of geometric clusters of spins and
spin-dipoles denoted respectively as magnetic and electric clusters. We find
that the largest cluster in both cases becomes macroscopic in size and spans
the lattice when interaction exceeds a critical threshold given by the same
self-dual line where magnetic and electric transitions occur. The fractal
dimension of the critical spanning clusters is related to order parameter
exponent $\beta_{m,e}$ as $D_{m,e}=d-\frac{5}{12}\frac{\beta_{m,e}}{\nu},$ where
$d=2$ is the spatial dimension and $\nu$ is the correlation length exponent.
This relation determines all other percolation exponents and their variation
with respect to $\lambda$. We show that for magnetic percolation, the Binder cumulant, as a
function of $\xi_2/L$ with $\xi_2$ being the second-moment correlation length,
remains invariant all along the critical line and matches with that of the
spin-percolation in the usual Ising model. The function also remains invariant
for the electric percolation, forming a new superuniversality class of
percolation transition. | arXiv |
We study the dynamics of synthetic molecules whose architectures are
generated by space transformations from a point group acting on seed
resonators. We show that the dynamical matrix of any such molecule can be
reproduced as the left regular representation of a self-adjoint element from
the stabilized group's algebra. Furthermore, we use elements of representation
theory and K-theory to rationalize the dynamical features supported by such
synthetic molecules up to topological equivalences. These tools enable us to
identify a set of fundamental models which generate by superposition all
possible dynamical matrices up to homotopy equivalences. Interpolations between
these fundamental models give rise to topological spectral flows. | arXiv |
We prove a version of Jordan's classification theorem for finite subgroups of
$\mathrm{GL}_{n}(K)$ that is at the same time quantitatively explicit,
CFSG-free, and valid for arbitrary $K$. This is the first proof to satisfy all
three properties at once. Our overall strategy follows Larsen and Pink [24],
with explicit computations based on techniques developed by the authors and
Helfgott [2, 3], particularly in relation to dimensional estimates. | arXiv |
We consider two supersymmetric M5 brane probe solutions in $\textrm{AdS}_7
\times S^4$ and one in $\textrm{AdS}_4 \times S^7$ that all have the
$\textrm{AdS}_3 \times S^3$ world-volume geometry. The values of the classical
action of the first two M5 probes (with $S^3$ in $\textrm{AdS}_7$ or in $S^4$)
are related to the leading $N^2$ parts in the anomaly b-coefficient in the
(2,0) theory corresponding to a spherical surface defect in symmetric or
antisymmetric $SU(N)$ representations. We present a detailed computation of the
corresponding one-loop M5 brane partition functions finding that they vanish
(in a particular regularization). This implies the vanishing of the order $N^0$
part in the b-anomaly coefficients, in agreement with earlier predictions for
their exact values. It remains, however, a puzzle of how to reproduce the
non-vanishing order $N$ terms in these coefficients within the semiclassical
M5-brane probe setup. | arXiv |
Determining potential probability distributions with a given causal graph is
vital for causality studies. To bypass the difficulty in characterizing latent
variables in a Bayesian network, the nested Markov model provides an elegant
algebraic approach by listing exactly all the equality constraints on the
observed variables. However, this algebraically motivated causal model
comprises distributions outside Bayesian networks, and its physical
interpretation remains vague. In this work, we inspect the nested Markov model
through the lens of generalized probabilistic theory, an axiomatic framework to
describe general physical theories. We prove that all the equality constraints
defining the nested Markov model hold valid theory-independently. Yet, we show
this model generally contains distributions not implementable even within such
relaxed physical theories subject to merely the relativity principles and
mild probabilistic rules. To interpret the origin of such a gap, we establish a
new causal model that defines valid distributions as projected from a
high-dimensional Bell-type causal structure. The new model unveils inequality
constraints induced by relativity principles, or equivalently high-dimensional
conditional independences, which are absent in the nested Markov model.
Nevertheless, we also notice that the restrictions on states and measurements
introduced by the generalized probabilistic theory framework can pose
additional inequality constraints beyond the new causal model. As a by-product,
we discover a new causal structure exhibiting strict gaps between the
distribution sets of a Bayesian network, generalized probabilistic theories,
and the nested Markov model. We anticipate our results will enlighten further
explorations on the unification of algebraic and physical perspectives of
causality. | arXiv |
In this paper, we construct new t-server Private Information Retrieval (PIR)
schemes with communication complexity subpolynomial in the previously best
known, for all but finitely many t. Our results are based on combining
derivatives (in the spirit of Woodruff-Yekhanin) with the Matching Vector based
PIRs of Yekhanin and Efremenko. Previously such a combination was achieved in
an ingenious way by Dvir and Gopi, using polynomials and derivatives over
certain exotic rings, en route to their fundamental result giving the first
2-server PIR with subpolynomial communication.
Our improved PIRs are based on two ingredients:
- We develop a new and direct approach to combine derivatives with Matching
Vector based PIRs. This approach is much simpler than that of Dvir-Gopi: it
works over the same field as the original PIRs, and only uses elementary
properties of polynomials and derivatives.
- A key subproblem that arises in the above approach is a higher-order
polynomial interpolation problem. We show how "sparse S-decoding polynomials",
a powerful tool from the original constructions of Matching Vector PIRs, can be
used to solve this higher-order polynomial interpolation problem using
surprisingly few higher-order evaluations.
Using the known sparse S-decoding polynomials, in combination with our ideas
leads to our improved PIRs. Notably, we get a 3-server PIR scheme with
communication $2^{\tilde{O}((\log n)^{1/3})}$, improving upon the previously
best known communication of $2^{\tilde{O}(\sqrt{\log n})}$ due to Efremenko. | arXiv |
Large double-logarithmic corrections are induced by soft gluon emissions near
threshold in the semi-inclusive $e^+e^-$ annihilation (SIA) distributions, and
must be resummed to all-orders in perturbation theory for reliable theoretical
predictions. Building on strategy developed for threshold resummation for DIS
structure function in momentum space using soft-collinear effective theory
(SCET), we present the explicit formalism for SIA cross section. We then
perform the resummation directly in momentum space for $\gamma^* \to q \bar q$,
$H \to gg$ and $H \to b\bar b$ to N$^4$LL accuracy and demonstrate good
convergence. We anticipate that these results will benefit the extraction of
the light-quark, the heavy-quark as well as the gluon fragmentation functions. | arXiv |
This paper studies the distributed bandit convex optimization problem with
time-varying inequality constraints, where the goal is to minimize network
regret and cumulative constraint violation. To calculate network cumulative
constraint violation, existing distributed bandit online algorithms solving
this problem directly use the clipped constraint function to replace its
original constraint function. However, the use of the clipping operation
renders the Slater condition (i.e., there exists a point that strictly satisfies the
inequality constraints at all iterations) ineffective to achieve reduced
network cumulative constraint violation. To tackle this challenge, we propose a
new distributed bandit online primal-dual algorithm. If local loss functions
are convex, we show that the proposed algorithm establishes sublinear network
regret and cumulative constraint violation bounds. When Slater condition holds,
the network cumulative constraint violation bound is reduced. In addition, if
local loss functions are strongly convex, for the case where strongly convex
parameters are unknown, the network regret bound is reduced. For the case where
strongly convex parameters are known, the network regret and cumulative
constraint violation bounds are further reduced. To the best of our knowledge,
this paper is among the first to establish reduced (network) cumulative
constraint violation bounds for (distributed) bandit convex optimization with
time-varying constraints under Slater condition. Finally, a numerical example
is provided to verify the theoretical results. | arXiv |
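For intuition, a toy single-learner, full-information primal-dual update driven
by the clipped constraint discussed above. This generic sketch is not the
paper's distributed bandit algorithm; the loss, constraint, and step sizes are
illustrative assumptions.

```python
import numpy as np

def online_primal_dual(T=500, eta=0.05, gamma=0.05, seed=0):
    """Toy single-learner online primal-dual method on the box [-1, 1]^2.

    Loss at round t:        f_t(x) = ||x - c_t||^2
    Constraint at round t:  g_t(x) = a_t . x - 1 <= 0      (time-varying)
    The dual variable is driven by the clipped constraint [g_t(x)]_+, the
    quantity used above to measure cumulative constraint violation.
    """
    rng = np.random.default_rng(seed)
    x, lam = np.zeros(2), 0.0
    cum_loss, cum_violation = 0.0, 0.0
    for _ in range(T):
        c_t = rng.uniform(-1, 1, size=2)
        a_t = rng.uniform(-1, 1, size=2)
        g = a_t @ x - 1.0
        cum_loss += np.sum((x - c_t) ** 2)
        cum_violation += max(g, 0.0)
        # Primal step on the Lagrangian f_t(x) + lam * [g_t(x)]_+ .
        grad = 2 * (x - c_t) + (lam * a_t if g > 0 else 0.0)
        x = np.clip(x - eta * grad, -1.0, 1.0)
        # Dual step on the clipped constraint.
        lam = max(0.0, lam + gamma * max(g, 0.0))
    return cum_loss, cum_violation

loss, violation = online_primal_dual()
print(f"cumulative loss ~ {loss:.1f}, cumulative violation ~ {violation:.2f}")
```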
This paper studies observability inequalities for heat equations on both
bounded domains and the whole space $\mathbb{R}^d$. The observation sets are
measured by log-type Hausdorff contents, which are induced by certain log-type
gauge functions closely related to the heat kernel. On a bounded domain, we
derive the observability inequality for observation sets of positive log-type
Hausdorff content. Notably, the aforementioned inequality holds not only for
all sets with Hausdorff dimension $s$ for any $s\in (d-1,d]$, but also for
certain sets of Hausdorff dimension $d-1$. On the whole space $\mathbb{R}^d$,
we establish the observability inequality for observation sets that are thick
at the scale of the log-type Hausdorff content. Furthermore, we prove that for
the 1-dimensional heat equation on an interval, the Hausdorff content we have
chosen is an optimal scale for the observability inequality.
To obtain these observability inequalities, we use the adapted Lebeau-Robbiano
strategy from \cite{Duyckaerts2012resolvent}. For this purpose, we prove the
following results at scale of the log-type Hausdorff content, the former being
derived from the latter: We establish a spectral inequality/a Logvinenko-Sereda
uncertainty principle; we set up a quantitative propagation of smallness of
analytic functions; we build up a Remez' inequality; and more fundamentally, we
provide an upper bound for the log-type Hausdorff content of a set where a
monic polynomial is small, based on an estimate in Lubinsky
\cite{Lubinsky1997small}, which is ultimately traced back to the classical
Cartan Lemma. In addition, we set up a capacity-based slicing lemma (related to
the log-type gauge functions) and establish a quantitative relationship between
Hausdorff contents and capacities. These tools are crucial in the studies of
the aforementioned propagation of smallness in high-dimensional situations. | arXiv |
Bonne and Censor-Hillel (ICALP 2019) initiated the study of distributed
subgraph finding in dynamic networks of limited bandwidth. For the case where
the target subgraph is a clique, they determined the tight bandwidth complexity
bounds in nearly all settings. However, several open questions remain, and very
little is known about finding subgraphs beyond cliques. In this work, we
consider these questions and explore subgraphs beyond cliques.
For finding cliques, we establish an $\Omega(\log \log n)$ bandwidth lower
bound for one-round membership-detection under edge insertions only and an
$\Omega(\log \log \log n)$ bandwidth lower bound for one-round detection under
both edge insertions and node insertions. Moreover, we demonstrate new
algorithms to show that our lower bounds are tight in bounded-degree networks
when the target subgraph is a triangle. Prior to our work, no lower bounds were
known for these problems.
For finding subgraphs beyond cliques, we present a complete characterization
of the bandwidth complexity of the membership-listing problem for every target
subgraph, every number of rounds, and every type of topological change: node
insertions, node deletions, edge insertions, and edge deletions. We also show
partial characterizations for one-round membership-detection and listing. | arXiv |
In this paper, we study the componentwise linearity of symbolic powers of
edge ideals. We propose the conjecture that all symbolic powers of the edge
ideal of a cochordal graph are componentwise linear. This conjecture is
verified for some families of cochordal graphs, including complements of block
graphs and complements of proper interval graphs. As a corollary, Minh's
conjecture is established for such families. Moreover, we show that
$I(G)^{(2)}$ is componentwise linear, for any cochordal graph $G$. | arXiv |
The Schrieffer-Wolff transformation (SWT) is an important perturbative method
in quantum mechanics used to simplify Hamiltonians by decoupling low- and
high-energy subspaces. Existing methods for implementing the SWT often lack
general applicability to arbitrary perturbative systems or fail to provide a
closed-form solution for the SWT generator. In this article, we present a
systematic and unified framework for the SWT that addresses these shortcomings.
Specifically, we derive a closed-form solution for the SWT generator that is
universally applicable to any system that satisfies the conditions required to
be perturbatively treated. Furthermore, we extend this solution to
time-dependent systems with periodic perturbations, covering all frequency
regimes. The effectiveness of this approach is then demonstrated by applying it
to analyze the dispersive shift of an anharmonic resonator coupled to a
two-level system with time-dependent coupling. | arXiv |
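For context, the textbook form of the Schrieffer-Wolff transformation (standard
material, not the closed-form generator derived in the paper): with $H = H_0 + V$,
where $V$ couples the low- and high-energy subspaces, one applies a unitary
$e^{S}$ with anti-Hermitian generator $S$, $$ H_{\mathrm{eff}} = e^{S} H e^{-S}
= H_0 + V + [S, H_0] + [S, V] + \tfrac{1}{2}[S, [S, H_0]] + \cdots, $$ and
chooses $S$ at leading order so that the off-diagonal coupling cancels,
$[S, H_0] = -V$, which leaves the block-diagonal effective Hamiltonian
$H_{\mathrm{eff}} \approx H_0 + \tfrac{1}{2}[S, V]$ to second order.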
The James Webb Space Telescope has uncovered a puzzling population of
UV-faint broad-line active galactic nuclei (AGN), nicknamed ``Little Red Dots''
(LRD) owing to their compact morphology and red rest-frame optical colours.
Interpreted as dust attenuated AGN, their inferred intrinsic luminosities and
supermassive black hole (SMBH) masses rival those of UV-luminous quasars,
although they are $>100$ times more abundant. If LRDs and quasars are members
of the same underlying population, they should inhabit comparable mass dark
matter halos, traced by similar overdensities of galaxies. Otherwise, they
represent distinct populations with different physical properties and formation
histories. Characterizing LRD environments thus provides a critical test of
their nature. Here, we report the discovery of a LRD at $z=7.3$, attenuated by
moderate amounts of dust, $A_V = {3.26}\,\rm{mag}$, with an intrinsic
bolometric luminosity of $10^{46.7}\,\rm{erg}\,\rm{s}^{-1}$ and a SMBH mass of
$7\times10^8\,\rm{M}_\odot$. Most notably, this object is embedded in an
overdensity of eight nearby galaxies, allowing us to calculate the first
spectroscopic estimate of the clustering of galaxies around LRDs. We find a
LRD-galaxy cross-correlation length of $r_0\!=\!9\pm2\,\rm{h}^{-1}\,\rm{cMpc}$,
comparable to that of $z\!\sim\!6$ UV-luminous quasars. The resulting estimate
of their minimum dark matter halo mass of $\log_{10}(M_{\rm{halo,
min}}/\rm{M}_{\odot})= 12.3_{-0.8}^{+0.7}$ indicates that nearly all halos
above this mass must host actively accreting SMBHs at $z\approx7$, in strong
contrast with the far smaller duty cycle of luminous quasars ($<1\%$). Our
results, taken at face value, motivate a picture in which LRDs are the obscured
counterparts of UV-luminous quasars, which provides a natural explanation for
the short UV-luminous lifetimes inferred from both quasar clustering and quasar
proximity zones. | arXiv |
The rapid development of large language models (LLMs) with advanced
programming capabilities has paved the way for innovative approaches in
software testing. Fuzz testing, a cornerstone for improving software
reliability and detecting vulnerabilities, often relies on manually written
fuzz drivers, limiting scalability and efficiency. To address this challenge,
we propose CodeGraphGPT, a novel system that integrates code knowledge graphs
with an LLM-powered intelligent agent to automate the fuzz driver generation
process. By framing fuzz driver creation as a code generation task,
CodeGraphGPT leverages program analysis to construct a knowledge graph of code
repositories, where nodes represent code entities, such as functions or files,
and edges capture their relationships. This enables the system to generate
tailored fuzz drivers and input seeds, resolve compilation errors, and analyze
crash reports, all while adapting to specific API usage scenarios.
Additionally, querying the knowledge graph helps identify precise testing
targets and contextualize the purpose of each fuzz driver within the fuzzing
loop. We evaluated CodeGraphGPT on eight open-source software projects,
achieving an average improvement of 8.73\% in code coverage compared to
state-of-the-art methods. Moreover, it reduced the manual workload in crash
case analysis by 84.4\% and identified 11 real-world bugs, including nine
previously unreported ones. This work highlights how integrating LLMs with code
knowledge graphs enhances fuzz driver generation, offering an efficient
solution for vulnerability detection and software quality improvement. | arXiv |
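A minimal sketch of the code knowledge graph described above, with nodes for
code entities such as files and functions and edges for their relationships,
built with networkx; the node/edge schema and the toy repository are our
assumptions, not CodeGraphGPT's actual representation.

```python
import networkx as nx

# Toy "repository": which functions each file defines, and who calls whom.
defines = {"parser.c": ["parse_header", "parse_body"], "util.c": ["read_bytes"]}
calls = [("parse_header", "read_bytes"), ("parse_body", "read_bytes")]

G = nx.DiGraph()
for path, funcs in defines.items():
    G.add_node(path, kind="file")
    for fn in funcs:
        G.add_node(fn, kind="function")
        G.add_edge(path, fn, rel="defines")
for caller, callee in calls:
    G.add_edge(caller, callee, rel="calls")

# Query the graph to pick a fuzz target and gather context for the driver
# prompt, e.g. everything reachable from a candidate API entry point.
target = "parse_header"
context = nx.descendants(G, target) | {target}
print(f"Fuzz target: {target}; context for the driver prompt: {sorted(context)}")
```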
In this paper we present an approach to reduce hallucinations in Large
Language Models (LLMs) by incorporating Knowledge Graphs (KGs) as an additional
modality. Our method involves transforming input text into a set of KG
embeddings and using an adapter to integrate these embeddings into the language
model space, without relying on external retrieval processes.
To facilitate this, we created WikiEntities, a dataset containing over 3
million Wikipedia texts annotated with entities from Wikidata and their
corresponding embeddings from PyTorch-BigGraph. This dataset serves as a
valuable resource for training Entity Linking models and adapting the described
method to various LLMs using specialized adapters.
Our method does not require fine-tuning of the language models themselves;
instead, we only train the adapter. This ensures that the model's performance
on other tasks is not affected. We trained an adapter for the Mistral 7B, LLaMA
2-7B (chat), and LLaMA 3-8B (instruct) models using this dataset and
demonstrated that our approach improves performance on the HaluEval, True-False
benchmarks and FEVER dataset. The results indicate that incorporating KGs as a
new modality can effectively reduce hallucinations and improve the factual
accuracy of language models, all without the need for external retrieval. | arXiv |
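A minimal PyTorch sketch of the adapter idea described above: pre-computed
entity embeddings (e.g., 200-dimensional PyTorch-BigGraph vectors) are projected
into the language model's embedding space and consumed alongside the token
embeddings. The dimensions and the two-layer architecture are illustrative
assumptions, not the trained adapters from the paper.

```python
import torch
import torch.nn as nn

class KGAdapter(nn.Module):
    """Project knowledge-graph entity embeddings into the LM embedding space."""

    def __init__(self, kg_dim=200, lm_dim=4096, hidden=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(kg_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, lm_dim),
        )

    def forward(self, entity_embs):       # (batch, n_entities, kg_dim)
        return self.proj(entity_embs)     # (batch, n_entities, lm_dim)

# Usage: prepend the projected entity vectors to the prompt's token embeddings
# and feed the result to the (frozen) language model via inputs_embeds.
adapter = KGAdapter()
entity_embs = torch.randn(1, 3, 200)      # 3 linked entities for this input
token_embs = torch.randn(1, 12, 4096)     # embeddings of 12 prompt tokens
inputs_embeds = torch.cat([adapter(entity_embs), token_embs], dim=1)
print(inputs_embeds.shape)                # torch.Size([1, 15, 4096])
```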
Massive Open Online Courses (MOOCs) have greatly contributed to making
education more accessible. However, many MOOCs maintain a rigid,
one-size-fits-all structure that fails to address the diverse needs and
backgrounds of individual learners. Learning path personalization aims to
address this limitation by tailoring sequences of educational content to
optimize individual student learning outcomes. Existing approaches, however,
often require either massive student interaction data or extensive expert
annotation, limiting their broad application. In this study, we introduce a
novel data-efficient framework for learning path personalization that operates
without expert annotation. Our method employs a flexible recommender system
pre-trained with reinforcement learning on a dataset of raw course materials.
Through experiments on semi-synthetic data, we show that this pre-training
stage substantially improves data-efficiency in a range of adaptive learning
scenarios featuring new educational materials. This opens up
new perspectives for the design of foundation models for adaptive learning. | arXiv |
E-commerce app users exhibit behaviors that are inherently logically
consistent. A series of multi-scenario user behaviors interconnect to form the
scene-level all-domain user moveline, which ultimately reveals the user's true
intention. Traditional CTR prediction methods typically focus on the item-level
interaction between the target item and the historically interacted items.
However, the scene-level interaction between the target item and the user
moveline remains underexplored. There are two challenges when modeling the
interaction with preceding all-domain user moveline: (i) Heterogeneity between
items and scenes: Unlike traditional user behavior sequences that utilize items
as carriers, the user moveline utilizes scenes as carriers. The heterogeneity
between items and scenes complicates the process of aligning interactions
within a unified representation space. (ii) Temporal misalignment of linked
scene-level and item-level behaviors: In the preceding user moveline with a
fixed sampling length, certain critical scene-level behaviors are closely
linked to subsequent item-level behaviors. However, it is impossible to
establish a complete temporal alignment that clearly identifies which specific
scene-level behaviors correspond to which item-level behaviors. To address
these challenges and pioneer modeling user intent from the perspective of the
all-domain moveline, we propose All-domain Moveline Evolution Network (AMEN).
AMEN not only transfers interactions between items and scenes to homogeneous
representation spaces, but also introduces a Temporal Sequential Pairwise (TSP)
mechanism to understand the nuanced associations between scene-level and
item-level behaviors, ensuring that the all-domain user moveline differentially
influences CTR predictions for the user's favored and unfavored items. Online A/B
testing demonstrates that our method achieves a +11.6% increase in CTCVR. | arXiv |
While AI models have demonstrated remarkable capabilities in constrained
domains like game strategy, their potential for genuine creativity in
open-ended domains like art remains debated. We explore this question by
examining how AI can transcend human cognitive limitations in visual art
creation. Our research hypothesizes that visual art contains a vast unexplored
space of conceptual combinations, constrained not by inherent incompatibility,
but by cognitive limitations imposed by artists' cultural, temporal,
geographical and social contexts.
To test this hypothesis, we present the Alien Recombination method, a novel
approach utilizing fine-tuned large language models to identify and generate
concept combinations that lie beyond human cognitive availability. The system
models and deliberately counteracts human availability bias, the tendency to
rely on immediately accessible examples, to discover novel artistic
combinations.
This system not only produces combinations that have never been attempted
before within our dataset but also identifies and generates combinations that
are cognitively unavailable to all artists in the domain. Furthermore, we
translate these combinations into visual representations, enabling the
exploration of subjective perceptions of novelty. Our findings suggest that
cognitive unavailability is a promising metric for optimizing artistic novelty,
outperforming mere temperature scaling without additional evaluation
criteria. This approach uses generative models to connect previously
unconnected ideas, providing new insight into the potential of framing
AI-driven creativity as a combinatorial problem. | arXiv |
We establish the profound equivalence between measures of genuine
multipartite entanglement (GME) and their corresponding coherence measures.
Initially we construct two distinct classes of measures for genuine
multipartite entanglement utilizing real symmetric concave functions and the
convex roof technique. We then demonstrate that all coherence measures for any
qudit states, defined through the convex roof approach, are identical to our
two classes of GME measures of the states combined with an incoherent ancilla
under a unitary incoherent operation. This relationship implies that genuine
multipartite entanglement can be generated from the coherence inherent in an
initial state through the unitary incoherent operations. Furthermore, we
explore the interplay between coherence and other forms of genuine quantum
correlations, specifically genuine multipartite steering and genuine
multipartite nonlocality. In the instance of special three-qubit X-states (the
only nonzero elements of an X-state are diagonal or antidiagonal when written in an
orthonormal basis), we find that genuine multipartite steering and nonlocality
are present if and only if the coherence exists in the corresponding qubit
states. | arXiv |
Uncertainty persists over how and why some countries become democratic and
others do not, or why some countries remain democratic and others 'backslide'
toward autocracy. Furthermore, while scholars generally agree on the nature of
'democracy' and 'autocracy', the nature of regimes in-between, and of changes
between them, is much less clear. By applying the spectral
dimensionality-reduction technique Diffusion Map to political-science data from
the V-Dem project for the period 1900 to 2021, we identify a low-dimensional
non-linear manifold on which all electoral regimes move. Using the diffusion
equation from statistical physics, we measure the time scale on which countries
change their degree of electoral quality, freedom of association, and freedom
of expression depending on their position on the manifold. By quantifying the
coefficients of the diffusion equation for each country and over time, we show
that democracies behave like sub-diffusive (i.e. slow spreading) particles and
that autocracies on the verge of collapse behave like super-diffusive (i.e.
fast spreading) particles. We show that regimes in-between exhibit diffusion
dynamics distinct from autocracies and democracies, and an overall higher
instability. Furthermore, we show that a country's position on the manifold and
its dynamics are linked to its propensity for civil conflict. Our study
pioneers the use of statistical physics in the analysis of political regimes.
Our results provide a quantitative foundation for developing theories about
what changes during democratization and democratic backsliding, as well as a
new framework for regime-transformation and risk-of-conflict assessment. | arXiv |
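A minimal sketch of the Diffusion Map embedding used above, implemented directly
with numpy (Gaussian kernel, Markov normalization, spectral decomposition). The
kernel bandwidth and the toy data are illustrative; the paper's preprocessing of
the V-Dem indicators is not reproduced here.

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    """Return the leading non-trivial diffusion coordinates of the rows of X."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / epsilon)               # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)          # row-stochastic Markov matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1) and scale the rest.
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1] ** t

# Two well-separated clusters in 5 dimensions land far apart on the manifold.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 5)), rng.normal(3, 0.3, (50, 5))])
coords = diffusion_map(X)
print(coords.shape)                               # (100, 2)
```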
Given a closed subset $K$ in $\mathbb{R}$, the rational $K$-truncated moment
problem ($K$-RTMP) asks to characterize the existence of a positive Borel
measure $\mu$, supported on $K$, such that a linear functional $\mathcal{L}$,
defined on all rational functions of the form $\frac{f}{q}$, where $q$ is a
fixed polynomial with all real zeros of even order and $f$ is any real
polynomial of degree at most $2k$, is an integration with respect to $\mu$. The
case of a compact set $K$ was solved by Chandler in 1994, but there is no
argument that ensures that $\mu$ vanishes on all real zeros of $q$. An obvious
necessary condition for the solvability of the $K$-RTMP is that $\mathcal{L}$
is nonnegative on every $f$ satisfying $f|_{K}\geq 0$. If $\mathcal{L}$ is
strictly positive on every $0\neq f|_{K}\geq 0$, we add the missing argument
from Chandler's solution and also bound the number of atoms in a minimal
representing measure. We show by an example that nonnegativity of $\mathcal{L}$
is not sufficient and add the missing conditions to the solution. We also solve
the $K$-RTMP for unbounded $K$ and derive the solutions to the strong truncated
Hamburger moment problem and the truncated moment problem on the unit circle as
special cases. | arXiv |
This work investigates the effects of tangent polar activity on the
conformational and dynamic properties of entangled polymer melts through
Langevin molecular dynamics simulations. We examine systems composed of all
self-propelled, monodisperse linear chains, so that constraint release is
considered. The range of activities explored here includes values where the
active reptation theory is applicable, as well as higher activities that
challenge the validity of the theory. Chain conformations exhibit a moderate
increase in coil size, which becomes more pronounced at higher
activity levels. Under these conditions, a local bond alignment along the chain
contour appears together with a non-homogeneous segmental stretching, and
orientation and stretching of the tube. Dynamically, polar activity induces a
molecular-weight-independent diffusion coefficient, a transient superdiffusive
behavior, and an end-to-end relaxation time inversely proportional to the
molecular weight. Finally, our results are summarized in a diagram that
classifies the various regimes of behavior observed in the simulations.
Overall, these findings provide valuable insights into the complex interplay
between activity and entanglements, advancing our understanding of active
polymer systems and their potential applications across various fields. | arXiv |
Depth estimation is an essential task toward full scene understanding since
it allows the projection of rich semantic information captured by cameras into
3D space. While the field has gained much attention recently, datasets for
depth estimation lack scene diversity or sensor modalities. This work presents
the ADUULM-360 dataset, a novel multi-modal dataset for depth estimation. The
ADUULM-360 dataset covers all established autonomous driving sensor modalities:
cameras, lidars, and radars. It comprises a frontal-facing stereo setup, six
surround cameras covering the full 360-degree field of view, two high-resolution long-range
lidar sensors, and five long-range radar sensors. It is also the first depth
estimation dataset that contains diverse scenes in good and adverse weather
conditions. We conduct extensive experiments using state-of-the-art
self-supervised depth estimation methods under different training tasks, such
as monocular training, stereo training, and full surround training. Discussing
these results, we demonstrate common limitations of state-of-the-art methods,
especially in adverse weather conditions, which hopefully will inspire future
research in this area. Our dataset, development kit, and trained baselines are
available at https://github.com/uulm-mrm/aduulm_360_dataset. | arXiv |
Migration is a key ingredient for the formation of close-in super-Earth and
mini-Neptune systems, as it sets in which resonances planets can be trapped.
Slower migration rates result in wider resonance configurations compared to
faster migration rates. We investigate the influence of different migration
rates, set by the disc's viscosity, on the structure of multi-planet systems
growing by pebble accretion via N-body simulations. Planets in low viscosity
environments migrate slower due to partial gap opening. Thus systems formed in
low viscosity environments tend to have planets trapped in wider resonant
configurations (typically 4:3, 3:2 and 2:1), compared to their high viscosity
counterparts (mostly 7:6, 5:4 and 4:3 resonances). After gas disc dissipation,
the damping forces cease and the systems can undergo instabilities, rearranging
their configurations and breaking the resonance chains. The low viscosity discs
naturally account for the resonant chains like Trappist-1, TOI-178 and
Kepler-223, unlike high viscosity simulations which produce relatively more
compact chains. About 95% of our low viscosity resonant chains became unstable,
experiencing giant impacts. Dynamical instabilities in our low viscosity
simulations are more violent than those of high viscosity simulations due to
the effects of leftover external perturbers (P>200 days). About 50% of our
final systems ended with no planets within 200 days, while all our systems have
remaining outer planets. We speculate that this process could be qualitatively
consistent with the lack of inner planets in a large fraction of Sun-like
stars. Systems produced in low viscosity simulations alone do not match the
overall period ratio distribution of observations, but give a better match to
the period distributions of chains, which may suggest that systems of
super-Earths and mini-Neptunes form in natal discs with a diversity of
viscosities. | arXiv |
Recently, the Large High-Altitude Air Shower Observatory (LHAASO)
collaboration has obtained a measurement of the gamma-ray diffuse emission in
the ultra-high energy range, $10-10^3$ TeV, after masking the contribution of
known sources. The measurement appears to be 2-3 times higher than the
gamma-ray signal expected from the hadronic interactions of diffuse cosmic rays
with the interstellar medium, potentially suggesting a contribution from
unresolved sources. However, estimates of the diffuse emission are affected by
large uncertainties that must be accounted for. In this work, we calculate the
hadronic gamma-ray diffuse emission including uncertainties in the gas content
of the Galactic disk, in the energy and spatial distribution of cosmic rays as
well as in the hadronic interaction cross-section. We show that the LHAASO data
above $\sim 30$ TeV are consistent with the gamma-ray diffuse emission model
when all these uncertainties are taken into account. This implies that, with
the current data in this energy range, there is no need to invoke a cosmic ray
spectral variation toward the Galactic center, nor a dominant contribution from
unresolved sources. | arXiv |
A large number of scientific journals and conferences solicit peer reviews from
multiple reviewers for the same submission, aiming to gather a broader range of
perspectives and mitigate individual biases. In this work, we reflect on the
role of diversity in the slate of reviewers assigned to evaluate a submitted
paper as a factor in diversifying perspectives and improving the utility of the
peer-review process. We propose two measures for assessing review utility:
review coverage -- reviews should cover most contents of the paper -- and
review redundancy -- reviews should add information not already present in
other reviews. We hypothesize that reviews from diverse reviewers will exhibit
high coverage and low redundancy. We conduct a causal study of different
measures of reviewer diversity on review coverage and redundancy using
observational data from a peer-reviewed conference with approximately 5,000
submitted papers. Our study reveals disparate effects of different diversity
measures on review coverage and redundancy. We find that assigning a
group of reviewers that are topically diverse, have different seniority levels,
or have distinct publication networks leads to broader coverage of the paper or
review criteria, but we find no evidence of an increase in coverage for
reviewer slates with reviewers from diverse organizations or geographical
locations. Reviewers from different organizations, seniority levels, topics, or
publication networks (all except geographical diversity) lead to a decrease in
redundancy in reviews. Furthermore, publication network-based diversity alone
also helps bring in varying perspectives (that is, low redundancy), even within
specific review criteria. Our study adopts a group decision-making perspective
for reviewer assignments in peer review and suggests dimensions of diversity
that can help guide the reviewer assignment process. | arXiv |
Graph augmentation is a fundamental and well-studied problem that arises in
network optimization. We consider a new variant of this model motivated by
reconfigurable communication networks. In this variant, we consider a given
physical network and the measured communication demands between the nodes. Our
goal is to augment the given physical network with a matching, so that the
shortest path lengths in the augmented network, weighted with the demands, are
minimal. We prove that this problem is NP-hard, even if the physical network is
a cycle. We then use results from demand-aware network design to provide a
constant-factor approximation algorithm for adding a matching in the case where only
a few nodes in the network cause almost all the communication. For general
real-world communication patterns, we design and evaluate a series of
heuristics that can deal with arbitrary graphs as the underlying network
structure. Our algorithms are validated experimentally using real-world traces
(e.g., from Facebook) of data centers. | arXiv |
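As an illustration of the objective described above (a sketch under assumed helper names, not the paper's implementation), the demand-weighted cost of a physical network augmented with a candidate matching can be evaluated as follows:

```python
# Illustrative sketch (hypothetical helper, not the paper's implementation):
# evaluate the demand-weighted sum of shortest path lengths in a physical
# network augmented with a candidate matching of reconfigurable edges.
import networkx as nx

def augmented_cost(physical: nx.Graph, matching: set, demand: dict) -> float:
    """demand maps node pairs (u, v) to measured traffic; matching is a set of
    disjoint node pairs to add as extra edges."""
    augmented = physical.copy()
    augmented.add_edges_from(matching)
    lengths = dict(nx.all_pairs_shortest_path_length(augmented))
    # total communication cost = sum over demands of (demand * hop distance)
    return sum(d * lengths[u][v] for (u, v), d in demand.items())

# Toy example: a 6-cycle with one heavy demand pair; adding the chord (0, 3) helps.
cycle = nx.cycle_graph(6)
demand = {(0, 3): 10.0, (1, 2): 1.0}
print(augmented_cost(cycle, set(), demand))      # no augmentation: 31.0
print(augmented_cost(cycle, {(0, 3)}, demand))   # with matching edge (0, 3): 11.0
```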
Recent advances in Large Language Models (LLMs) have enabled them to overcome
their context window limitations, and demonstrate exceptional retrieval and
reasoning capacities on longer contexts. Question-answering systems augmented
with Long-Context Language Models (LCLMs) can automatically search massive
external data and incorporate it into their contexts, enabling faithful
predictions and reducing issues such as hallucinations and knowledge staleness.
Existing studies targeting LCLMs mainly concentrate on addressing the so-called
lost-in-the-middle problem or improving the inference efficiency, leaving their
privacy risks largely unexplored. In this paper, we aim to bridge this gap and
argue that integrating all information into the long context makes it a
repository of sensitive information, which often contains private data such as
medical records or personal identities. We further investigate the membership
privacy within LCLMs' external contexts, with the aim of determining whether a
given document or sequence is included in the LCLM's context. Our basic idea is
that if a document lies in the context, it will exhibit a low generation loss
or a high degree of semantic similarity to the contents generated by LCLMs. We
propose, for the first time, six membership inference attack (MIA) strategies
tailored for LCLMs and conduct extensive experiments on various popular models.
Empirical results demonstrate that our attacks can accurately infer membership
status in most cases, e.g., 90.66% attack F1-score on Multi-document QA
datasets with LongChat-7b-v1.5-32k, highlighting significant risks of
membership leakage within LCLMs input contexts. Furthermore, we examine the
underlying reasons why LCLMs are susceptible to revealing such membership
information. | arXiv |
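A minimal sketch of the loss-based membership signal described above follows (illustrative only; the model name, the boundary handling, and the threshold are placeholders, not the attacks evaluated in the paper):

```python
# Minimal loss-thresholding sketch of a membership signal (illustrative only;
# model name and threshold are placeholders, not the paper's setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "lmsys/longchat-7b-v1.5-32k"   # placeholder; any causal LM works here
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)

def generation_loss(context: str, candidate_doc: str) -> float:
    """Average token loss of the candidate document conditioned on the context.
    A low loss suggests the document may be present in the LCLM's context.
    Note: the context-length estimate below is approximate at the boundary."""
    ids = tok(context + candidate_doc, return_tensors="pt").input_ids
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    labels = ids.clone()
    labels[:, :ctx_len] = -100          # score only the candidate document tokens
    with torch.no_grad():
        return model(ids, labels=labels).loss.item()

def is_member(context: str, doc: str, threshold: float = 2.0) -> bool:
    return generation_loss(context, doc) < threshold   # threshold is illustrative
```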
We elaborate on and further develop an approach to determining the
teleparallel analogue of spacetimes in General Relativity (GR) by studying the
Teleparallel analogue of pp-Wave (TppW) spacetimes. This relies on using the
fact that these solutions belong to the Vanishing Scalar Invariant (VSI)
subclass for which the explicit forms of the frame and spin-connection are
known. By identifying the pp-wave (ppW) metric within this class, we are able
to use frame-based symmetry methods and the Cartan-Karlhede (CK) algorithm to
determine the necessary form for the frame. Through this analysis we find two
overlooked solutions that are permitted in teleparallel gravity (TPG) and in
GR. | arXiv |
Let $k$ be an uncountable algebraically closed field of positive
characteristic and let $S_0$ be a connected smooth projective surface over $k$.
We extend the theorem on the Gysin kernel from [28, Theorem 5.1] to also be
true over $k$, where it was proved over $\mathbb{C}$. This is done by showing
that almost all results still hold true over $k$ via the same argument or by
using \'{e}tale base arguments and then using a lift with the Comparison theorem
[22, Theorems 21.1 & 20.5] as needed. | arXiv |
In this study, we introduce a simple yet innovative method to trigger
turbulence in a channel flow to achieve statistically stationary flow
conditions. We compare this new method, based on synthetically generated
three-dimensional turbulence, with two other well-established methods, namely
a linear profile and a log-law profile, each superposed with random noise and
descending counter-rotating vortices. We found that synthetically generated
three-dimensional turbulence provides a computationally cheap and effective way
to reduce simulation spin-up to achieve statistically stationary flow
conditions when a precursor turbulent initial condition is not available. At a
one-time cost of less than 1 CPU hour to generate the synthetic turbulent
initial condition, the flow becomes statistically stationary within 3 eddy
turnovers for all the parameters of interest in wall-bounded pressure-driven
channel flow simulations when compared to other alternatives that can take more
than 10 eddy turnovers resulting in substantial savings in the computational
cost. | arXiv |
In this paper, we prove several new infinite families of Ramanujan--like
congruences satisfied by the coefficients of the generating function $U_t(a,q)$
which is an extension of MacMahon's generalized sum-of-divisors function. As a
by-product, we also show that, for all $n\geq 0$, $\overline{B}_3(15n+7)\equiv
0 \pmod{5}$ where $\overline{B}_3(n)$ is the number of almost $3$-regular
overpartitions of $n$. | arXiv |
Bell scenarios are multipartite scenarios that exclude any communication
between parties. This constraint leads to a strict hierarchy of correlation
sets in such scenarios, namely, classical, quantum, and nonsignaling. However,
without any constraints on communication between the parties, they can realize
arbitrary correlations by exchanging only classical systems. Here we consider a
multipartite scenario where the parties can engage in at most a single round of
communication, i.e., each party is allowed to receive a system once, implement
any local intervention on it, and send out the resulting system once. While no
global assumption about causal relations between parties is assumed in this
scenario, we do make a causal assumption local to each party, i.e., the input
received by it causally precedes the output it sends out. We then introduce
antinomicity, a notion of nonclassicality for correlations in such scenarios,
and prove the existence of a strict hierarchy of correlation sets classified by
their antinomicity. Antinomicity serves as a generalization of Bell
nonlocality: when all the parties discard their output systems (i.e., in a
nonsignaling scenario), it is mathematically equivalent to Bell nonlocality.
Like Bell nonlocality, it can be understood as an instance of fine-tuning, one
that is necessary in any classical model of cyclic causation that avoids
time-travel antinomies but allows antinomic correlations. Furthermore,
antinomicity resolves a long-standing puzzle, i.e., the failure of causal
inequality violations as device-independent witnesses of nonclassicality.
Antinomicity implies causal inequality violations, but not conversely. | arXiv |
The rapid advancement of face forgery techniques has introduced a growing
variety of forgeries. Incremental Face Forgery Detection (IFFD), involving
gradually adding new forgery data to fine-tune the previously trained model,
has been introduced as a promising strategy to deal with evolving forgery
methods. However, a naively trained IFFD model is prone to catastrophic
forgetting when new forgeries are integrated, as treating all forgeries as a
single "Fake" class in the Real/Fake classification can cause different
forgery types to override one another, thereby resulting in the forgetting of
unique characteristics from earlier tasks and limiting the model's
effectiveness in learning forgery specificity and generality. In this paper, we
propose to stack the latent feature distributions of previous and new tasks
brick by brick, $\textit{i.e.}$, achieving $\textbf{aligned feature
isolation}$. In this manner, we aim to preserve learned forgery information and
accumulate new knowledge by minimizing distribution overriding, thereby
mitigating catastrophic forgetting. To achieve this, we first introduce Sparse
Uniform Replay (SUR) to obtain the representative subsets that could be treated
as the uniformly sparse versions of the previous global distributions. We then
propose a Latent-space Incremental Detector (LID) that leverages SUR data to
isolate and align distributions. For evaluation, we construct a more advanced
and comprehensive benchmark tailored for IFFD. The leading experimental results
validate the superiority of our method. | arXiv |
From an architectural perspective, with the main goal of reducing the
effective traffic load in the network and thus gaining more operational
efficiency, optical networks have remained essentially the same in the two
decades since the early 2000s, with the success and then dominance of the
optical-bypass mode. In the optical-bypass-enabled network, the add/drop and
cross-connect functions constitute the fundamental operations in handling the
traffic at the optical layer. The underlying principle is that, in
cross-connecting in-transit lightpaths over an intermediate node, such
lightpaths must be guarded from each other in a certain dimension, be it the
time, frequency, or spatial domain, to avoid interference, which is treated as
destructive. In view of the rapid progress in the realm of optical computing,
which enables precisely controlled interference between optical channels for
various computing capabilities, we envision a different perspective that turns
the long-established wisdom of optical-bypass networking around by putting
optical channel interference to good use, resulting in the so-called
optical-computing-enabled network. This paper presents two illustrative
examples based on the optical aggregation and optical XOR operations which have
been progressively maturing and thus, could be feasibly integrated into the
current legacy infrastructure with possibly minimal disruptions. We then
propose a detailed case study formulating and solving the design of
network-coding-enabled optical networks, demonstrating the efficacy of the
optical-computing-enabled network and highlighting the unique challenges tied
to the greater complexity of the network design problems compared to the
optical-bypass counterpart. | arXiv |
Recent advancements in medical image analysis have predominantly relied on
Convolutional Neural Networks (CNNs), achieving impressive performance in chest
X-ray classification tasks, such as the 92% AUC reported by AutoThorax-Net and
the 88% AUC achieved by ChexNet in classification tasks. However, in the medical
field, even small improvements in accuracy can have significant clinical
implications. This study explores the application of Vision Transformers (ViT),
a state-of-the-art architecture in machine learning, to chest X-ray analysis,
aiming to push the boundaries of diagnostic accuracy. I present a comparative
analysis of two ViT-based approaches: one utilizing full chest X-ray images and
another focusing on segmented lung regions. Experiments demonstrate that both
methods surpass the performance of traditional CNN-based models, with the
full-image ViT achieving up to 97.83% accuracy and the lung-segmented ViT
reaching 96.58% accuracy in classification of diseases with three labels, and an
AUC of 94.54% when the number of labels is increased to eight. Notably, the full-image
approach showed superior performance across all metrics, including precision,
recall, F1 score, and AUC-ROC. These findings suggest that Vision Transformers
can effectively capture relevant features from chest X-rays without the need
for explicit lung segmentation, potentially simplifying the preprocessing
pipeline while maintaining high accuracy. This research contributes to the
growing body of evidence supporting the efficacy of transformer-based
architectures in medical image analysis and highlights their potential to
enhance diagnostic precision in clinical settings. | arXiv |
Nuclear magnetic resonance instruments are becoming available to the
do-it-yourself community. The challenges encountered in the endeavor to build a
magnetic resonance imaging instrument from scratch were confronted in a
four-day hackathon at Singapore University of Technology and Design in spring
2024. One day was devoted to educational lectures and three days to system
construction and testing. Seventy young researchers from all parts of the world
formed six teams focusing on magnet, gradient coil, RF coil, console, system
integration, and design, which together produced a working MRI instrument in
three days. The different steps, encountered challenges, and their solutions
are reported. | arXiv |
We describe an algorithm to compute the stable multiplicity of a family of
irreducible representations in the cohomology of ordered configuration space of
the plane. Using this algorithm, we compute the stable multiplicities of all
families of irreducibles given by Young diagrams with $23$ boxes or less up to
cohomological degree $50$. In particular, this determines the stable cohomology
in cohomological degrees $0 \leq i \leq 11$. We prove related qualitative
results and formulate some conjectures. | arXiv |
Amongst the issues plaguing the Standard Model (SM) are questions pertaining
to neutrino masses and mixings, the anomalous magnetic moment of the electron
and muon, and the problem of a suitable dark matter (DM) candidate. All
three issues can be addressed at once by extending the SM with two generations
of vector-like fermions and an inert scalar doublet, all odd under a Z2
symmetry. The light neutrino masses and mixings are generated radiatively while
maintaining consistency with bounds on lepton flavor violation. Loop diagrams
with the very same fields also serve to explain the anomalous magnetic moments.
Similarly, the correct dark matter relic abundance is reproduced without coming
into conflict with direct detection constraints, or those from big bang
nucleosynthesis or cosmic microwave background observations. Finally, prospective
signatures at the LHC are discussed. | arXiv |
Machine learning is an important tool for analyzing high-dimensional
hyperspectral data; however, existing software solutions are either
closed-source or inextensible research products. In this paper, we present
cuvis.ai, an open-source and low-code software ecosystem for data acquisition,
preprocessing, and model training. The package is written in Python and
provides wrappers around common machine learning libraries, allowing both
classical and deep learning models to be trained on hyperspectral data. The
codebase abstracts processing interconnections and data dependencies between
operations to minimize code complexity for users. This software package
instantiates nodes in a directed acyclic graph to handle all stages of a
machine learning ecosystem, from data acquisition, including live or static
data sources, to final class assignment or property prediction. User-created
models contain convenient serialization methods to ensure portability and
increase sharing within the research community. All code and data are available
online: https://github.com/cubert-hyperspectral/cuvis.ai | arXiv |
In this paper, we propose a conceptual framework for personalized
brain-computer interface (BCI) applications, which can offer an enhanced user
experience by customizing services to individual preferences and needs, based
on endogenous electroencephalography (EEG) paradigms including motor imagery
(MI), speech imagery (SI), and visual imagery. The framework includes two
essential components: user identification and intention classification, which
enable personalized services by identifying individual users and recognizing
their intended actions through EEG signals. We validate the feasibility of our
framework using a private EEG dataset collected from eight subjects, employing
the ShallowConvNet architecture to decode EEG features. The experimental
results demonstrate that user identification achieved an average classification
accuracy of 0.995, while intention classification achieved 0.47 accuracy across
all paradigms, with MI demonstrating the best performance. These findings
indicate that EEG signals can effectively support personalized BCI
applications, offering robust identification and reliable intention decoding,
especially for MI and SI. | arXiv |
This paper presents an accelerated spherical K-means clustering algorithm for
large-scale and high-dimensional sparse document data sets. We design an
algorithm working in an architecture-friendly manner (AFM), which is a
procedure of suppressing performance-degradation factors such as the numbers of
instructions, branch mispredictions, and cache misses in CPUs of a modern
computer system. For the AFM operation, we leverage unique universal
characteristics (UCs) of a data-object and a cluster's mean set, which are
skewed distributions on data relationships such as Zipf's law and a
feature-value concentration phenomenon. The UCs indicate that most of the
multiplications for similarity calculations involve terms with high document
frequencies (df), and that most of the similarity between an object feature
vector and a mean feature vector comes from multiplications involving a few
large mean-feature values. Our proposed algorithm applies an inverted-index
data structure to a mean set, extracts the specific region with high-df terms
and high mean-feature values in the mean-inverted index using two newly
introduced structural parameters, and exploits the index divided into three
parts for efficient pruning. The algorithm determines the two structural
parameters by minimizing the approximate number of multiplications, which is
related to the number of instructions, reduces branch mispredictions by sharing
the index structure (including the two parameters) across all the objects, and
suppresses cache misses by keeping the frequently used data in the foregoing
specific region in the caches, thereby operating in the AFM. We experimentally
demonstrate that our algorithm efficiently achieves superior speed performance
in large-scale documents compared with algorithms using the state-of-the-art
techniques. | arXiv |
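To make the inverted-index idea concrete, here is a minimal sketch of assigning a sparse document vector to its nearest mean via a mean-inverted index (illustrative only; it omits the structural parameters and the three-part index split described in the abstract):

```python
# Sketch: assign a sparse document to its nearest mean via a mean-inverted index.
import numpy as np
from collections import defaultdict

def build_mean_index(means):
    """means: list of dicts {term_id: feature_value}, each roughly unit-normalized."""
    index = defaultdict(list)                 # term_id -> [(mean_id, value), ...]
    for j, mean in enumerate(means):
        for term, value in mean.items():
            index[term].append((j, value))
    return index

def assign(doc, index, n_means):
    """doc: dict {term_id: tf-idf value}. Returns the index of the most similar mean."""
    sims = np.zeros(n_means)
    for term, x in doc.items():               # only terms the document contains
        for j, m in index.get(term, ()):      # only means with nonzero weight on the term
            sims[j] += x * m                  # accumulate partial dot products
    return int(np.argmax(sims))

means = [{0: 0.8, 1: 0.6}, {1: 0.6, 2: 0.8}]
index = build_mean_index(means)
print(assign({0: 0.7, 2: 0.714}, index, len(means)))   # -> 1
```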
Quantum secure direct communication (QSDC) enables the message sender to
directly send secure messages to the receiver through the quantum channel
without keys. Device-independent (DI) and measurement-device-independent (MDI)
QSDC protocols can enhance QSDC's practical security in theory. DI QSDC
requires extremely high global detection efficiency and has quite low secure
communication distance. DI and MDI QSDC both require high-quality entanglement.
Current entanglement sources prepare entangled photon pairs with low
efficiency, largely reducing their practical communication efficiency. In the
paper, we propose a single-photon-based receiver-device-independent (RDI) QSDC
protocol. It only relies on the trusted single-photon source, which is nearly
on-demand under current technology, and treats all the receiving devices in
both communication parties as ``black-boxes''. The parties ensure the message
security only from the observed statistics. We develop a numerical method to
simulate its performance in practical noisy communication situations. RDI QSDC
provides the same security level as MDI QSDC. Compared with DI and MDI QSDC,
RDI QSDC has some advantages. First, it uses the single-photon source and
single-photon measurement, which gives it a practical communication efficiency
about 3415 times that of DI QSDC and makes it easy to implement. The whole
protocol is feasible with current technology. Second, it has higher photon-loss
robustness and noise tolerance than DI QSDC, which enables a secure
communication distance about 26 times that of DI QSDC. Based on the above
features, the RDI QSDC protocol makes it possible to achieve highly secure and
highly efficient QSDC in the near future. | arXiv |
It is proved that Chebyshev's method applied to an entire function $f$ is
a rational map if and only if $f(z) = p(z) e^{q(z)}$, for some polynomials $p$
and $q$. These are referred to as rational Chebyshev maps, and their fixed
points are discussed in this article. It is seen that $\infty$ is a parabolic
fixed point with multiplicity one bigger than the degree of $q$. Considering
$q(z)=p(z)^n+c$, where $p$ is a linear polynomial, $n \in \mathbb{N}$ and $c$
is a non-zero constant, we show that Chebyshev's method applied to $pe^q$
is affine conjugate to that applied to $z e^{z^n}$. We denote this by $C_n$.
All the finite extraneous fixed points of $C_n$ are shown to be repelling. The
Julia set $\mathcal{J}(C_n)$ of $C_n$ is found to be preserved under rotations
of order $n$ about the origin. For each $n$, the immediate basin of $0$ is
proved to be simply connected. For all $n \leq 16$, we prove that
$\mathcal{J}(C_n)$ is connected. Newton's method applied to $ze^{z^n}$ is
found to be conjugate to a polynomial, and its dynamics is also completely
determined. | arXiv |
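For reference, a standard form of Chebyshev's root-finding method for a function $f$, the iteration whose rationality and dynamics are analyzed above, is:

```latex
% Standard form of Chebyshev's root-finding method applied to f (for reference):
C_f(z) \;=\; z \;-\; \left(1 + \frac{1}{2}\,\frac{f(z)\,f''(z)}{f'(z)^{2}}\right)\frac{f(z)}{f'(z)} .
```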
Currently, deep learning-based instance segmentation for various applications
(e.g., Agriculture) is predominantly performed using a labor-intensive process
involving extensive field data collection using sophisticated sensors, followed
by careful manual annotation of images, presenting significant logistical and
financial challenges to researchers and organizations. The process also slows
down the model development and training process. In this study, we presented a
novel method for deep learning-based instance segmentation of apples in
commercial orchards that eliminates the need for labor-intensive field data
collection and manual annotation. Utilizing a Large Language Model (LLM), we
synthetically generated orchard images and automatically annotated them using
the Segment Anything Model (SAM) integrated with a YOLO11 base model. This
method significantly reduces reliance on physical sensors and manual data
processing, presenting a major advancement in "Agricultural AI". The synthetic,
auto-annotated dataset was used to train the YOLO11 model for apple instance
segmentation, which was then validated on real orchard images. The results
showed that the automatically generated annotations achieved a Dice Coefficient
of 0.9513 and an IoU of 0.9303, validating the accuracy and overlap of the mask
annotations. All YOLO11 configurations, trained solely on these synthetic
datasets with automated annotations, accurately recognized and delineated
apples, highlighting the method's efficacy. Specifically, the YOLO11m-seg
configuration achieved a mask precision of 0.902 and a mask mAP@50 of 0.833 on
test images collected from a commercial orchard. Additionally, the YOLO11l-seg
configuration outperformed other models in validation on 40 LLM-generated
images, achieving the highest mask precision and mAP@50 metrics.
Keywords: YOLO, SAM, SAMv2, YOLO11, YOLOv11, Segment Anything, YOLO-SAM | arXiv |
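For reference, the two mask-agreement metrics reported above (Dice coefficient and IoU) can be computed from binary masks as follows; this is a generic sketch, not the study's evaluation code:

```python
# Generic Dice coefficient and IoU for binary masks (illustrative only).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# toy 4x4 masks
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice(pred, gt), iou(pred, gt))   # ~0.857, 0.75
```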
A \emph{conforming partition} of a rectilinear $ n $-gon $ P $ is a partition of $
P $ into rectangles without using Steiner points (i.e., all corners of all
rectangles must lie on the boundary of $ P $). The stabbing
number of such a partition is the maximum number of rectangles intersected by
an axis-aligned segment lying in the interior of $ P $. In this paper, we
examine the problem of computing conforming partitions with low stabbing
number. We show that computing a conforming partition with stabbing number at
most~$ 4 $ is $ NP $-hard, which strengthens a previously known hardness result
[Durocher \& Mehrabi, Theor. Comput. Sci. 689: 157-168 (2017)] and eliminates
the possibility for fixed-parameter-tractable algorithms parameterized by the
stabbing number unless $ P = NP $. In contrast, we give (i) an $ O ( n \log n )
$-time\bastien{Reviewer request: changed from "linearithmic".} algorithm to
decide whether a conforming partition with stabbing number~$ 2 $ exists, (ii) a
fixed-parameter-tractable algorithm parameterized by both the stabbing number
and treewidth of the pixelation of the polygon, and (iii) a
fixed-parameter-tractable algorithm parameterized by the stabbing number for
simple polygons in general position. | arXiv |
We consider estimating the shared mean of a sequence of heavy-tailed random
variables taking values in a Banach space. In particular, we revisit and extend
a simple truncation-based mean estimator by Catoni and Giulini. While existing
truncation-based approaches require a bound on the raw (non-central) second
moment of observations, our results hold under a bound on either the central or
non-central $p$th moment for some $p > 1$. In particular, our results hold for
distributions with infinite variance. The main contributions of the paper
follow from exploiting connections between truncation-based mean estimation and
the concentration of martingales in 2-smooth Banach spaces. We prove two types
of time-uniform bounds on the distance between the estimator and unknown mean:
line-crossing inequalities, which can be optimized for a fixed sample size $n$,
and non-asymptotic law of the iterated logarithm type inequalities, which match
the tightness of line-crossing inequalities at all points in time up to a
doubly logarithmic factor in $n$. Our results do not depend on the dimension of
the Banach space, hold under martingale dependence, and all constants in the
inequalities are known and small. | arXiv |
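A minimal finite-dimensional sketch of the truncation idea discussed above follows (a stand-in in $\mathbb{R}^d$ for the Banach-space setting; the threshold value is illustrative, not the paper's tuned constant):

```python
# Minimal sketch of a Catoni--Giulini style truncation estimator in R^d
# (finite-dimensional stand-in for a Banach space; threshold is illustrative).
import numpy as np

def truncated_mean(xs: np.ndarray, threshold: float) -> np.ndarray:
    """Shrink each observation toward the origin when its norm exceeds the
    threshold, then average: x -> x * min(1, threshold / ||x||)."""
    norms = np.linalg.norm(xs, axis=1, keepdims=True)
    scale = np.minimum(1.0, threshold / np.maximum(norms, 1e-12))
    return (xs * scale).mean(axis=0)

rng = np.random.default_rng(0)
# heavy-tailed observations: standard normal plus occasional huge outliers
xs = rng.standard_normal((1000, 3))
xs[::100] *= 1e3
print(truncated_mean(xs, threshold=5.0))   # close to the true mean (0, 0, 0)
print(xs.mean(axis=0))                     # noticeably corrupted by the outliers
```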
In Radio Supernovae (RSNe) a magnetic field of $(B \, \times \, r) \, = \,
10^{16.0 \pm 0.12} \, {\rm Gauss \, \times \, cm}$ is observed; these are the
same numbers for Blue Super Giant (BSG) star explosions as for Red Super Giant
(RSG) star explosions, despite their very different wind properties. The EHT
data for M87 as well for low power radio galaxies all show consistency with
just this value of the quantity $(B \, \times \, r )$, key for angular momentum
and energy transport, and can be derived from the radio jet data. We interpret
this as a property of the near surroundings of a black hole (BH) at near
maximal rotation, independent of BH mass. In the commonly used green onion
model, in which a $2 \, \pi$ flow changes over to a jet flow we interpret this
as a wind emanating from the BH/accretion disk system and its surroundings.
Near the BH collisions in the wind can produce a large fraction of
anti-protons. In this scenario the cosmic ray (CR) population from the wind/jet
is proposed to be visible as protons and anti-protons in the CR data up to EeV
energy, with an $E^{-7/3}$ spectrum. This can be connected to a concept of inner
and outer Penrose zones in the ergo-region. The observed numbers for the
magnetic field imply the Planck time as the governing time scale: A BH rotating
near maximum can accept a proton per log bin of energy in an extended spectrum
with the associated pions every Planck time. | arXiv |
Aggregators of distributed energy resources are increasingly encouraged to
participate in wholesale market bidding. However, the delivery of the power
they are awarded can result in over-voltage or congestion issues within the
distribution network (DN). The opportunity to lease energy storage from the
utility that manages the DN provides the aggregator with a means to mitigate
these issues, while also benefiting the utility in terms of additional lease
revenue. Nevertheless, this leasing opportunity considerably complicates the
aggregator's offer-making process, as it requires the consideration of market
uncertainties, uncertain power injection at DN buses, and the strategic
interactions between the aggregator and the utility. This paper presents a
stochastic Stackelberg game model that effectively captures the interactions
between the aggregator and the utility, ensuring DN security across all
potential uncertainty scenarios. Furthermore, in light of the privacy concerns
of both the aggregator and the utility, two distributed solution methods are
proposed. The first method follows a traditional predict-then-optimize
framework and has been validated to achieve the game equilibrium. The second
method employs an end-to-end framework, which has been empirically shown to
yield superior economic results. Case studies conducted on 69 and 533-bus DNs
illustrate the efficacy of the proposed methods. | arXiv |
Convection in planets and stars is predicted to occur in the "ultimate
regime" of diffusivity-free, rapidly rotating turbulence, in which flows are
characteristically unaffected by viscous and thermal diffusion. Boundary layer
diffusion, however, has historically hindered experimental study of this
regime. Here, we utilize the boundary-independent oscillatory thermal-inertial
mode of rotating convection to realize the diffusivity-free scaling in liquid
metal laboratory experiments. This oscillatory style of convection arises in
rotating liquid metals (low Prandtl number fluids) and is driven by the
temperature gradient in the fluid bulk, thus remaining independent of diffusive
boundary dynamics. We triply verify the existence of the diffusivity-free
regime via measurements of heat transfer efficiency $Nu$, dimensionless flow
velocities $Re$, and internal temperature anomalies $\theta$, all of which are
in quantitative agreement with planar asymptotically-reduced models. Achieving
the theoretical diffusivity-free scalings in desktop-sized laboratory
experiments provides the validation necessary to extrapolate and predict the
convective flows in remote geophysical and astrophysical systems. | arXiv |
Pre-trained vision-language models provide a robust foundation for efficient
transfer learning across various downstream tasks. In the field of video action
recognition, mainstream approaches often introduce additional parameter modules
to capture temporal information. While the increased model capacity brought by
these additional parameters helps better fit the video-specific inductive
biases, existing methods require learning a large number of parameters and are
prone to catastrophic forgetting of the original generalizable knowledge. In
this paper, we propose a simple yet effective Multi-modal Spatio-Temporal
Adapter (MSTA) to improve the alignment between representations in the text and
vision branches, achieving a balance between general knowledge and
task-specific knowledge. Furthermore, to mitigate over-fitting and enhance
generalizability, we introduce a spatio-temporal description-guided consistency
constraint. This constraint involves feeding template inputs (i.e., ``a video
of $\{\textbf{cls}\}$'') into the trainable language branch, while
LLM-generated spatio-temporal descriptions are input into the pre-trained
language branch, enforcing consistency between the outputs of the two branches.
This mechanism prevents over-fitting to downstream tasks and improves the
distinguishability of the trainable branch within the spatio-temporal semantic
space. We evaluate the effectiveness of our approach across four tasks:
zero-shot transfer, few-shot learning, base-to-novel generalization, and
fully-supervised learning. Compared to many state-of-the-art methods, our MSTA
achieves outstanding performance across all evaluations, while using only 2-7\%
of the trainable parameters in the original model. Code will be available at
https://github.com/chenhaoxing/ETL4Video. | arXiv |
Most robotics applications are typically accompanied with safety restrictions
that need to be satisfied with a high degree of confidence even in environments
under uncertainty. Controlling the state distribution of a system and enforcing
such specifications as distribution constraints is a promising approach for
meeting such requirements. In this direction, covariance steering (CS) is an
increasingly popular stochastic optimal control (SOC) framework for designing
safe controllers via explicit constraints on the system covariance.
Nevertheless, a major challenge in applying CS methods to systems with the
nonlinear dynamics and chance constraints common in robotics is that the
approximations needed are conservative and highly sensitive to the point of
approximation. This can cause sequential convex programming methods to converge
to poor local minima or incorrectly report problems as infeasible due to
shifting constraints. This paper presents a novel algorithm for solving
chance-constrained nonlinear CS problems that directly addresses this
challenge. Specifically, we propose an operator-splitting approach that
temporarily separates the main problem into subproblems that can be solved in
parallel. The benefit of this relaxation lies in the fact that it does not
require all iterates to satisfy all constraints simultaneously prior to
convergence, thus enhancing the exploration capabilities of the algorithm for
finding better solutions. Simulation results verify the ability of the proposed
method to find higher quality solutions under stricter safety constraints than
standard methods on a variety of robotic systems. Finally, the applicability of
the algorithm on real systems is confirmed through hardware demonstrations. | arXiv |
In this article, we study the FitzHugh-Nagumo $(1,1)$--fast-slow system where
the vector fields associated to the slow/fast equations come from the reduction
of the Hodgkin-Huxley model for the nerve impulse. After deriving dynamical
properties of the singular and regular cases, we perform a bifurcation analysis
and we investigate how the parameters (of the affine slow equation) impact the
dynamics of the system. The study of codimension one bifurcations and the
numerical locus of canards concludes this case-study. All theoretical results
are numerically illustrated. | arXiv |
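For orientation, a commonly used fast-slow form of the FitzHugh-Nagumo system with an affine slow equation reads as follows (schematic; the exact reduction from the Hodgkin-Huxley model studied in the article may differ):

```latex
% A common fast--slow FitzHugh--Nagumo form with affine slow equation (schematic):
\varepsilon\,\dot{v} \;=\; v - \frac{v^{3}}{3} - w, \qquad
\dot{w} \;=\; a + b\,v + c\,w, \qquad 0 < \varepsilon \ll 1 .
```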
Let $p$ be an odd prime and $k$ be an algebraically closed field with
characteristic $p$. Booher and Cais showed that the $a$-number of a $\mathbb
Z/p \mathbb Z$-Galois cover of curves $\phi: Y \to X$ must be greater than a
lower bound determined by the ramification of $\phi$. In this paper, we provide
evidence that the lower bound is optimal by finding examples of Artin-Schreier
curves whose $a$-number equals this lower bound, for all $p$. Furthermore,
we use formal patching to generate infinite families of Artin-Schreier curves
with $a$-number equal to the lower bound in any characteristic. | arXiv |
Because of the rapid development and increasing public availability of
Generative Artificial Intelligence (GenAI) models and tools, educational
institutions and educators must immediately reckon with the impact of students
using GenAI. There is limited prior research on computing students' use and
perceptions of GenAI. In anticipation of future advances and evolutions of
GenAI, we capture a snapshot of student attitudes towards and uses of yet
emerging GenAI, in a period of time before university policies had reacted to
these technologies. We surveyed all computer science majors in a small
engineering-focused R1 university in order to: (1) capture a baseline
assessment of how GenAI has been immediately adopted by aspiring computer
scientists; (2) describe computing students' GenAI-related needs and concerns
for their education and careers; and (3) discuss GenAI influences on CS
pedagogy, curriculum, culture, and policy. We present an exploratory
qualitative analysis of this data and discuss the impact of our findings on the
emerging conversation around GenAI and education. | arXiv |
In a system of two-dimensional electrons, a combination of broken symmetry,
interactions, and nontrivial topology can conspire to give rise to a nonlinear
transport regime, where electric current density scales as the square of
electric field. This regime has become a venue for exciting discoveries such as
the nonlinear Hall effect and diode-like nonreciprocal transport. However,
interpretation of experimental data is challenging in the nonlinear regime as
DC transport is described by a rank-3 conductivity tensor with 6 free
parameters. Here, we resolve this challenge by analytically solving for the
nonlinear potential distribution across the disk sample for arbitrary linear
and nonlinear conductivity tensors. This allows us to unambiguously extract all
components of the nonlinear tensor from experimental measurement. Using this
novel tool, we identify giant nonlinear Hall effect in Bernal bilayer graphene.
Our methodology provides the first systematic framework for interpreting
nonlinear transport and uncovers a new route towards understanding quasi-2D
materials. | arXiv |
This work investigates the self-organization of multi-agent systems into
closed trajectories, a common requirement in unmanned aerial vehicle (UAV)
surveillance tasks. In such scenarios, smooth, unbiased control signals save
energy and mitigate mechanical strain. We propose a decentralized control
system architecture that produces a globally stable emergent structure from
local observations only; there is no requirement for agents to share a global
plan or follow prescribed trajectories. Central to our approach is the
formulation of an injective virtual embedding induced by rotations from the
actual agent positions. This embedding serves as a structure-preserving map
around which all agents stabilize their relative positions and permits the use
of well-established linear control techniques. We construct the embedding such
that it is topologically equivalent to the desired trajectory (i.e., a
homeomorphism), thereby preserving the stability characteristics. We
demonstrate the versatility of this approach through implementation on a swarm
of Quanser QDrone quadcopters. Results demonstrate that the quadcopters
self-organize into the desired trajectory while maintaining even separation. | arXiv |
For a positive integer $k \ge 1$, a $k$-star ($k^+$-star, $k^-$-star,
respectively) is a connected graph containing a degree-$\ell$ vertex and $\ell$
degree-$1$ vertices, where $\ell = k$ ($\ell \ge k$, $1 \le \ell \le k$,
respectively). The $k^+$-star packing problem is to cover as many vertices of
an input graph $G$ as possible using vertex-disjoint $k^+$-stars in $G$; and
given $k > t \ge 1$, the $k^-/t$-star packing problem is to cover as many
vertices of $G$ as possible using vertex-disjoint $k^-$-stars but no $t$-stars
in $G$. Both problems are NP-hard for any fixed $k \ge 2$. We present a $(1 +
\frac {k^2}{2k+1})$- and a $\frac 32$-approximation algorithms for the
$k^+$-star packing problem when $k \ge 3$ and $k = 2$, respectively, and a $(1
+ \frac 1{t + 1 + 1/k})$-approximation algorithm for the $k^-/t$-star packing
problem when $k > t \ge 2$. They are all local search algorithms and they
improve the best known approximation algorithms for the problems, respectively. | arXiv |
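To make the packing objects concrete, a naive greedy baseline for packing vertex-disjoint $k^+$-stars could look like the sketch below; this is illustrative only and is not the local search algorithm analysed in the abstract:

```python
# Naive greedy baseline for packing vertex-disjoint k+-stars (illustrative;
# NOT the local search algorithm analysed in the paper).
import networkx as nx

def greedy_k_plus_star_packing(G: nx.Graph, k: int):
    covered, stars = set(), []
    # take centres in decreasing degree order while they still have >= k free neighbours
    for center in sorted(G.nodes, key=G.degree, reverse=True):
        if center in covered:
            continue
        free = [v for v in G.neighbors(center) if v not in covered]
        if len(free) >= k:
            stars.append((center, free))      # an l-star with l = len(free) >= k
            covered.add(center)
            covered.update(free)
    return stars, len(covered)                # covered vertices = objective value

G = nx.star_graph(5)                          # one centre with 5 leaves
print(greedy_k_plus_star_packing(G, k=3))     # ([(0, [1, 2, 3, 4, 5])], 6)
```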
We consider the problem of allocating heterogeneous and indivisible goods
among strategic agents, with preferences over subsets of goods, when there is
no medium of exchange. This model captures the well studied problem of fair
allocation of indivisible goods. Serial-quota mechanisms are allocation
mechanisms where there is a predefined order over agents, and each agent in her
turn picks a predefined number of goods from the remaining goods. These
mechanisms are clearly strategy-proof, non-bossy, and neutral. Are there other
mechanisms with these properties? We show that for important classes of strict
ordinal preferences (as lexicographic preferences, and as the class of all
strict preferences), these are the only mechanisms with these properties.
Importantly, unlike previous work, we can prove the claim even for mechanisms
that are not Pareto-efficient. Moreover, we generalize these results to
preferences that are cardinal, including any valuation class that contains
additive valuations. We then derive strong negative implications of this result
on truthful mechanisms for fair allocation of indivisible goods to agents with
additive valuations. | arXiv |
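The mechanism class described above is simple enough to state in a few lines; here is an illustrative sketch of a serial-quota mechanism for strict ordinal preferences (names and the toy instance are ours, not from the paper):

```python
# Sketch of a serial-quota mechanism (illustrative): agents pick, in a fixed
# order, their most preferred remaining goods up to a predefined per-agent quota.
def serial_quota(goods, order, quotas, prefer):
    """order: list of agent ids; quotas[i]: number of goods agent i picks;
    prefer[i]: agent i's strict ranking of all goods (best first)."""
    remaining = set(goods)
    allocation = {}
    for agent in order:
        picks = [g for g in prefer[agent] if g in remaining][: quotas[agent]]
        allocation[agent] = picks
        remaining -= set(picks)
    return allocation

goods = ["a", "b", "c", "d"]
prefer = {1: ["a", "b", "c", "d"], 2: ["a", "c", "b", "d"]}
print(serial_quota(goods, order=[1, 2], quotas={1: 2, 2: 2}, prefer=prefer))
# {1: ['a', 'b'], 2: ['c', 'd']}
```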
We participated in track 2 of the VoiceMOS Challenge 2024, which aimed to
predict the mean opinion score (MOS) of singing samples. Our submission secured
the first place among all participating teams, excluding the official baseline.
In this paper, we further improve our submission and propose a novel
Pitch-and-Spectrum-aware Singing Quality Assessment (PS-SQA) method. The PS-SQA
is designed based on the self-supervised-learning (SSL) MOS predictor,
incorporating singing pitch and spectral information, which are extracted using
pitch histogram and non-quantized neural codec, respectively. Additionally, the
PS-SQA introduces a bias correction strategy to address prediction biases
caused by low-resource training samples, and employs model fusion technology to
further enhance prediction accuracy. Experimental results confirm that our
proposed PS-SQA significantly outperforms all competing systems across all
system-level metrics, demonstrating its strong singing quality assessment
capabilities. | arXiv |
We investigate the degrees of freedom of New General Relativity. This theory
is a three-parameter theory and is classified into nine irreducible types
according to the rotation symmetry of $SO(3)$ on each leaf of ADM-foliation. In
the previous work~[{\it 2410.15056[gr-qc]}], we investigated the degrees of
freedom in the types of NGR that are of interest for describing gravity: Type 2, Type
3, Type 5, and Type 8. In this work, we focus on unveiling those numbers in all
other types to complete the analysis of NGR. After providing the Hamiltonian
formulation of NGR, we perform the analysis on Type 4, Type 7, and Type 9
according to the method that is provided in the previous work~[{\it
2410.15056[gr-qc]}]. We then show that the degrees of freedom of Type 4, Type
7, and Type 9 are five, zero, and three, respectively. Type 4 and Type 9 have
second-class constraint densities only. Type 7 has first-class constraint
densities only, but is over-constrained. In every type, no bifurcation
occurs, unlike Type 8 in the previous work~[2410.15056[gr-qc]]. Finally, we
summarize this work and give a concluding remark for this series of works. | arXiv |
Abundant geomorphological and geochemical evidence of liquid water on the
surface of early Mars during the late Noachian and early Hesperian periods
needs to be reconciled with a fainter young Sun. While a dense CO2 atmosphere
and related warming mechanisms are potential solutions to the early Mars
climate problem, further investigation is warranted. Here, we complete a
comprehensive survey of the warming potential of all known greenhouse gases and
perform detailed calculations for 15 different minor gas species under early
Martian conditions. We find that of these 15 species, H2O2, HNO3, NH3, SO2, and
C2H4 cause significant greenhouse warming at concentrations of ~0.1 ppmv or
greater. However, the most highly effective greenhouse gas species also tend to
be more condensable, soluble and vulnerable to photolytic destruction. To
provide a reference for future atmospheric evolution and photochemical studies,
we have made our warming potential database freely available online. | arXiv |
The accurate segmentation of retinal blood vessels plays a crucial role in
the early diagnosis and treatment of various ophthalmic diseases. Designing a
network model for this task requires meticulous tuning and extensive
experimentation to handle the tiny and intertwined morphology of retinal blood
vessels. To tackle this challenge, Neural Architecture Search (NAS) methods are
developed to fully explore the space of potential network architectures and
pursue the most powerful one. Inspired by neuronal diversity, which is the
biological foundation of all kinds of intelligent behaviors in our brain, this
paper introduces a novel and foundational approach to neural network design,
termed ``neuron programming'', to automatically search neuronal types into a
network to enhance a network's representation ability at the neuronal level,
which is complementary to architecture-level enhancement done by NAS.
Additionally, to mitigate the time and computational intensity of neuron
programming, we develop a hypernetwork that leverages the search-derived
architectural information to predict optimal neuronal configurations.
Comprehensive experiments validate that neuron programming can achieve
competitive performance in retinal blood vessel segmentation, demonstrating the strong
potential of neuronal diversity in medical image analysis. | arXiv |
In a setting where segmentation models have to be built for multiple
datasets, each with its own corresponding label set, a straightforward way is
to learn one model for every dataset and its labels. Alternatively, multi-task
architectures with shared encoders and multiple segmentation heads or shared
weights with compound labels can also be used. This work proposes a
novel label sharing framework where a shared common label space is constructed
and each of the individual label sets are systematically mapped to the common
labels. This transforms multiple datasets with disparate label sets into a
single large dataset with shared labels, and therefore all the segmentation
tasks can be addressed by learning a single model. This eliminates the need for
task specific adaptations in network architectures and also results in
parameter- and data-efficient models. Furthermore, the label sharing framework is
naturally amenable to incremental learning, where segmentations for new
datasets can be easily learnt. We experimentally validate our method on various
medical image segmentation datasets, each involving multi-label segmentation.
Furthermore, we demonstrate the efficacy of the proposed method in terms of
performance and incremental learning ability vis-a-vis alternative methods. | arXiv |
The coefficient algebra of a finite-dimensional Lie algebra with respect to a
faithful representation is defined as the subalgebra generated by all
coefficients of the corresponding characteristic polynomial. We establish a
connection between classical invariant theory and the coefficient algebras of
finite-dimensional complex Lie algebras. Specifically, we prove that with
respect to any symmetric power of the standard representation: (1) the
coefficient algebra of the upper triangular solvable complex Lie algebra is
isomorphic to the algebra of symmetric polynomials; (2) the coefficient algebra
of the general linear complex Lie algebra is the invariant ring of the general
linear group with the conjugacy action on the full space of matrices; and (3)
the coefficient algebra of the special linear complex Lie algebra can be
generated by classical trace functions. As an application, we exactly exhibit
the characteristic polynomial of the special linear complex Lie algebra. | arXiv |
Let $M_n$ be the algebra of $n \times n$ complex matrices and $\mathcal{T}_n
\subseteq M_n$ the corresponding upper-triangular subalgebra. In their
influential work, Petek and \v{S}emrl characterize Jordan automorphisms of
$M_n$ and $\mathcal{T}_n$, when $n \geq 3$, as (injective in the case of
$\mathcal{T}_n$) continuous commutativity and spectrum preserving maps $\phi :
M_n \to M_n$ and $\phi : \mathcal{T}_n \to \mathcal{T}_n$. Recently, in a joint
work with Petek, the authors extended this characterization to the maps $\phi :
\mathcal{A} \to M_n$, where $\mathcal{A}$ is an arbitrary subalgebra of $M_n$
that contains $\mathcal{T}_n$. In particular, any such map $\phi$ is a Jordan
embedding and hence of the form $\phi(X)=TXT^{-1}$ or $\phi(X)=TX^tT^{-1}$, for
some invertible matrix $T\in M_n$. In this paper we further extend the
aforementioned results in the context of structural matrix algebras (SMAs),
i.e. subalgebras $\mathcal{A}$ of $M_n$ that contain all diagonal matrices.
More precisely, we provide both a necessary and sufficient condition for an SMA
$\mathcal{A}\subseteq M_n$ such that any injective continuous commutativity and
spectrum preserving map $\phi: \mathcal{A} \to M_n$ is necessarily a Jordan
embedding. In contrast to the previous cases, such maps $\phi$ no longer need
to be multiplicative/antimultiplicative, nor rank-one preservers. | arXiv |
Expanding reinforcement learning (RL) to offline domains generates promising
prospects, particularly in sectors where data collection poses substantial
challenges or risks. Pivotal to the success of transferring RL offline is
mitigating overestimation bias in value estimates for state-action pairs absent
from data. Whilst numerous approaches have been proposed in recent years, these
tend to focus primarily on continuous or small-scale discrete action spaces.
Factorised discrete action spaces, on the other hand, have received relatively
little attention, despite many real-world problems naturally having
factorisable actions. In this work, we undertake a formative investigation into
offline reinforcement learning in factorisable action spaces. Taking the value
decomposition formulated in DecQN as a foundation, we present the case
for a factorised approach and conduct an extensive empirical evaluation of
several offline techniques adapted to the factorised setting. In the absence of
established benchmarks, we introduce a suite of our own comprising datasets of
varying quality and task complexity. Advocating for reproducible research and
innovation, we make all datasets available for public use alongside our code
base. | arXiv |
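The DecQN-style value decomposition referenced above can be illustrated with a minimal sketch: the joint value of a factorised action is taken as the mean of per-dimension utilities, so greedy selection decomposes across dimensions. This is our reading for illustration (plain NumPy, random stand-in weights), not the authors' benchmark code.

```python
# Sketch of a DecQN-style decomposition: Q(s, a) is the mean of per-dimension
# utilities U_i(s, a_i), so the argmax factorises over action dimensions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_DIMS, N_BINS = 8, 3, 5          # 3 action dimensions, 5 choices each

# Stand-in "utility network": one linear head per action dimension.
W = [rng.normal(size=(N_BINS, STATE_DIM)) for _ in range(N_DIMS)]

def utilities(state):
    """Per-dimension utilities U_i(s, .), shape (N_DIMS, N_BINS)."""
    return np.stack([Wi @ state for Wi in W])

def q_value(state, action):
    """Joint Q(s, a) as the mean of the chosen per-dimension utilities."""
    u = utilities(state)
    return np.mean([u[i, a_i] for i, a_i in enumerate(action)])

def greedy_action(state):
    """Greedy action: argmax independently in each dimension."""
    return utilities(state).argmax(axis=1)

s = rng.normal(size=STATE_DIM)
a = greedy_action(s)
print(a, q_value(s, a))
```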
The prevalence of the artificial intelligence of things calls for more
energy-efficient edge computing paradigms, such as neuromorphic agents
leveraging brain-inspired spiking neural network (SNN) models based on
spatiotemporally sparse binary activations. However, the lack of efficient and
high-accuracy deep SNN learning algorithms hinders their practical edge
deployment under a strictly bounded cost. In this paper, we propose a
spatiotemporal orthogonal propagation (STOP) algorithm to tackle this challenge.
Our algorithm enables fully synergistic learning of synaptic weights as well as
firing thresholds and leakage factors in spiking neurons to improve SNN
accuracy, within a unified temporally-forward trace-based framework that
mitigates the huge memory requirement of storing the neural states of all
time steps in the forward pass. Characteristically, the spatially-backward
neuronal errors and temporally-forward traces propagate orthogonally to and
independently of each other, substantially reducing computational overhead. Our
STOP algorithm obtained high recognition accuracies of 99.53%, 94.84%, 74.92%,
98.26% and 77.10% on the MNIST, CIFAR-10, CIFAR-100, DVS-Gesture and
DVS-CIFAR10 datasets using moderately sized SNNs ranging from LeNet-5 to
ResNet-18. Compared with other deep SNN training methods, our approach is
better suited to edge intelligence scenarios where resources are limited but
high-accuracy in-situ learning is desired. | arXiv |
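The memory argument above rests on carrying traces forward in time instead of storing all neural states. The sketch below illustrates that generic idea with an e-prop-style local update for a single LIF layer; it is our own simplification under assumed dynamics, not the STOP algorithm's actual update rules.

```python
# Forward-in-time, trace-based update for one LIF layer: the eligibility trace is
# updated alongside the forward pass, so no history of neural states is stored.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT, T = 10, 4, 20
W = rng.normal(scale=0.5, size=(N_OUT, N_IN))
leak, theta, lr = 0.9, 1.0, 1e-2

v = np.zeros(N_OUT)           # membrane potentials
trace = np.zeros(N_IN)        # presynaptic eligibility trace, carried forward in time

for t in range(T):
    s_in = (rng.random(N_IN) < 0.3).astype(float)   # random input spikes
    v = leak * v + W @ s_in
    s_out = (v >= theta).astype(float)
    v = np.where(s_out > 0, 0.0, v)                  # hard reset after a spike

    trace = leak * trace + s_in                      # forward trace update
    surrogate = 1.0 / (1.0 + np.abs(v - theta))      # surrogate spike derivative
    error = s_out - 0.1                              # stand-in learning signal (rate target)
    W -= lr * np.outer(error * surrogate, trace)     # local, trace-based weight update
```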
High annotation costs from hiring or crowdsourcing complicate the creation of
large, high-quality datasets needed for training reliable text classifiers.
Recent research suggests using Large Language Models (LLMs) to automate the
annotation process, reducing these costs while maintaining data quality. LLMs
have shown promising results in annotating downstream tasks like hate speech
detection and political framing. Building on these successes, this
study investigates whether LLMs are viable for annotating the complex task of
media bias detection and whether a downstream media bias classifier can be
trained on such data. We create annolexical, the first large-scale dataset for
media bias classification, with over 48,000 synthetically annotated examples. Our
classifier, fine-tuned on this dataset, surpasses all of the annotator LLMs by
5-9 percent in Matthews Correlation Coefficient (MCC) and performs close to or
outperforms the model trained on human-labeled data when evaluated on two media
bias benchmark datasets (BABE and BASIL). This study demonstrates how our
approach significantly reduces the cost of dataset creation in the media bias
domain and, by extension, the development of classifiers, while our subsequent
behavioral stress-testing reveals some of its current limitations and
trade-offs. | arXiv |
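A minimal sketch of the synthetic-annotation workflow described above follows; `query_llm`, the prompt wording, and the binary label scheme are hypothetical stand-ins, not the dataset's actual pipeline.

```python
# Silver-label generation with an LLM, followed by fine-tuning a compact classifier.
from typing import List, Tuple

PROMPT = (
    "Classify the following sentence as 'biased' or 'neutral'. "
    "Answer with a single word.\n\nSentence: {sentence}"
)

def query_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual LLM API call")

def annotate(sentences: List[str]) -> List[Tuple[str, int]]:
    """Return (sentence, label) pairs with 1 = biased, 0 = neutral."""
    dataset = []
    for s in sentences:
        answer = query_llm(PROMPT.format(sentence=s)).strip().lower()
        dataset.append((s, 1 if answer.startswith("bias") else 0))
    return dataset

# The resulting silver-labeled pairs would then feed a standard fine-tuning loop
# for a small encoder classifier evaluated on benchmarks such as BABE and BASIL.
```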
The demand for deploying deep convolutional neural networks (DCNNs) on
resource-constrained devices for real-time applications remains substantial.
However, existing state-of-the-art structured pruning methods often involve
intricate implementations, require modifications to the original network
architectures, and necessitate an extensive fine-tuning phase. To overcome
these challenges, we propose a novel method that, for the first time,
incorporates the concepts of charge and electrostatic force from physics into
the training process of DCNNs. The magnitude of this force is directly
proportional to the product of the charges of the convolution filter and the
source filter, and inversely proportional to the square of the distance between
them. We apply this electrostatic-like force to the convolution filters,
either attracting filters with opposite charges toward non-zero weights or
repelling filters with like charges toward zero weights. Consequently, filters
subject to repulsive forces have their weights reduced to zero, enabling their
removal, while the attractive forces preserve filters with significant weights
that retain information. Unlike conventional methods, our approach is
straightforward to implement, does not require any architectural modifications,
and simultaneously optimizes weights and ranks filter importance, all without
the need for extensive fine-tuning. We validated the efficacy of our method on
modern DCNN architectures using the MNIST, CIFAR, and ImageNet datasets,
achieving competitive performance compared to existing structured pruning
approaches. | arXiv |
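The abstract specifies only the proportionality of the force (product of charges over squared distance), so the sketch below is a rough guess at how such a force could enter a weight update; the charge assignment, the choice of source filter, and the constant K are all assumptions made for illustration.

```python
# Electrostatic-like regularisation sketch: like charges repel filters toward zero
# weights, opposite charges attract them toward a non-zero "source" filter.
import numpy as np

rng = np.random.default_rng(0)
K = 1e-3                                     # assumed force constant
filters = rng.normal(size=(8, 3, 3, 3))      # 8 conv filters of shape 3x3x3
charges = rng.choice([-1.0, 1.0], size=8)    # assumed per-filter charges
source = filters.mean(axis=0)                # assumed source filter
source_charge = 1.0

def electrostatic_step(filters, charges, lr=0.1):
    updated = filters.copy()
    for i, f in enumerate(filters):
        d = np.linalg.norm(f - source) + 1e-8
        force = K * charges[i] * source_charge / d**2
        # Like charges (force > 0) push weights toward zero;
        # opposite charges (force < 0) pull weights toward the source filter.
        direction = -f if force > 0 else (source - f)
        updated[i] = f + lr * abs(force) * direction
    return updated

filters = electrostatic_step(filters, charges)
```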
We report the detection of an extreme stellar prominence eruption on the M
dwarf LAMOST J044431.62+235627.9, observed through time-domain H$\alpha$
spectroscopy with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope
(LAMOST). This prominence eruption was accompanied by a superflare lasting over
160.4 minutes. The H$\alpha$ line profile exhibits significant blue-wing
enhancement during the impulsive phase and near the flare peak, with a
projected bulk blueshift velocity of $-228\pm11$~km~s$^{-1}$ and a maximum
blueshift velocity reaching $-605\pm15$~km~s$^{-1}$. Velocity analysis of the
eruptive prominence at various heights above the stellar surface indicates that
some of the projected ejection velocities along the line of sight exceed the
corresponding escape velocities, suggesting a potential coronal mass ejection
(CME). The equivalent width (EW) of the H$\alpha$ blue-wing enhancement in this
eruption appears to be the largest observed to date and is comparable to the EW
of the H$\alpha$ line profile during the quiescent phase of the host star. We
performed two-cloud modeling of the prominence and the associated flare,
which suggests that the eruptive prominence has a mass ranging from $1.6 \times
10^{19}~\text{g}$ to $7.2 \times 10^{19}~\text{g}$. More importantly, the mass
ratio of the erupting prominence to its host star is the largest among all
reported stellar prominence eruptions/CMEs. | arXiv |
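The comparison between ejection and escape velocities above uses the standard Newtonian escape speed, restated here for reference in our own notation; the stellar mass and radius entering it are not reproduced from the paper.

```latex
% Escape speed at radial distance r from a star of mass M_*:
\[
  v_{\mathrm{esc}}(r) \;=\; \sqrt{\frac{2\,G\,M_{*}}{r}},
\]
% so a projected blueshift of several hundred km/s at a given height can be
% compared directly against this threshold.
```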
Subspace clustering, which seeks to identify subspaces that segment a set of n
data points into k (k << n) groups, has emerged as a powerful tool for analyzing
data from various domains, especially images and videos. Recently, several
studies have demonstrated the great potential of subspace clustering models for
partitioning vertices in attributed graphs, referred to as SCAG. However, these
works either demand significant computational overhead for constructing the n x n
self-expressive matrix, or fail to incorporate graph topology and attribute
data into the subspace clustering framework effectively, and thus, compromise
result quality.
Motivated by this, we present two effective and efficient
algorithms, S2CAG and M-S2CAG, for SCAG computation. In particular, S2CAG
obtains superb performance through three major contributions. First, we
formulate a new objective function for SCAG with a refined representation model
for vertices and two non-trivial constraints. On top of that, an efficient
linear-time optimization solver is developed based on our theoretically
grounded problem transformation and well-thought-out adaptive strategy. We then
conduct an in-depth analysis to disclose the theoretical connection of S2CAG to
conductance minimization, which further inspires the design of M-S2CAG that
maximizes the modularity. Our extensive experiments, comparing S2CAG and
M-S2CAG against 17 competitors over 8 benchmark datasets, exhibit that our
solutions outperform all baselines in terms of clustering quality measured
against the ground truth while delivering high efficiency. | arXiv |
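For context, the n x n self-expressive matrix mentioned above comes from the classic self-expressiveness model; a ridge-regularised variant of that baseline formulation is sketched below in Python. This is the generic baseline whose cost S2CAG is designed to avoid, not S2CAG or M-S2CAG themselves.

```python
# Classic self-expressive affinity construction: solve X ~= X @ C with diag(C) = 0,
# then feed the symmetrised |C| + |C|^T into spectral clustering.
import numpy as np

def self_expressive_affinity(X, lam=1e-2):
    """X: d x n data matrix. Returns an n x n affinity matrix."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)  # ridge solution of min ||X - XC||^2 + lam ||C||^2
    np.fill_diagonal(C, 0.0)                     # a point should not represent itself
    return np.abs(C) + np.abs(C.T)

rng = np.random.default_rng(0)
# Two 1-D subspaces in R^3, 10 points each.
X = np.hstack([np.outer([1, 0, 0], rng.normal(size=10)),
               np.outer([0, 1, 1], rng.normal(size=10))])
A = self_expressive_affinity(X)
print(A.shape)   # (20, 20); spectral clustering on A yields the k groups
```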
By the end of 2023, renewable sources covered 63.4% of the total electric power
demand of Chile, and in line with the global trend, photovoltaic (PV) power
shows the most dynamic increase. Although Chile's Atacama Desert is considered
the sunniest place on Earth, PV power production, even in this area, can be
highly volatile. Successful integration of PV energy into the country's power
grid requires accurate short-term PV power forecasts, which can be obtained
from predictions of solar irradiance and related weather quantities. Nowadays,
in weather forecasting, the state-of-the-art approach is the use of ensemble
forecasts based on multiple runs of numerical weather prediction models.
However, ensemble forecasts still tend to be uncalibrated or biased, thus
requiring some form of post-processing. The present work investigates
probabilistic forecasts of solar irradiance for Regions III and IV in Chile.
To this end, 8-member short-term ensemble forecasts of solar irradiance for
calendar year 2021 are generated using the Weather Research and Forecasting
(WRF) model, which are then calibrated using the benchmark ensemble model
output statistics (EMOS) method based on a censored Gaussian law, and its
machine learning-based distributional regression network (DRN) counterpart.
Furthermore, we propose a neural network-based post-processing method
resulting in improved 8-member ensemble predictions. All forecasts are
evaluated against station observations for 30 locations, and the skill of
post-processed predictions is compared to the raw WRF ensemble. Our case study
confirms that all studied post-processing methods substantially improve both
the calibration of probabilistic forecasts and the accuracy of point forecasts. Among
the methods tested, the corrected ensemble exhibits the best overall
performance. Additionally, the DRN model generally outperforms the
corresponding EMOS approach. | arXiv |
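The censored-Gaussian EMOS benchmark mentioned above can be sketched as follows: location and scale are linked to the ensemble mean and variance, and the predictive distribution is left-censored at zero. The link coefficients below are placeholders; in practice they would be fit, for example by minimising the CRPS over a training period.

```python
# Predictive CDF of a left-censored Gaussian EMOS model for solar irradiance.
import numpy as np
from scipy.stats import norm

def censored_gaussian_cdf(y, ens, a, b, c, d):
    """CDF at values y given an 8-member ensemble `ens` (all mass below 0 sits at 0)."""
    mu = a + b * np.mean(ens)
    sigma = np.sqrt(c + d * np.var(ens))
    y = np.asarray(y, dtype=float)
    cdf = norm.cdf(y, loc=mu, scale=sigma)
    return np.where(y < 0, 0.0, cdf)

ens = np.array([420., 450., 460., 430., 445., 455., 440., 435.])  # W/m^2, illustrative
print(censored_gaussian_cdf([0.0, 400.0, 500.0], ens, a=10., b=0.95, c=25., d=1.0))
```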