text | split |
---|---|
Cell-free (CF) massive multiple-input multiple-output (mMIMO) and
reconfigurable intelligent surface (RIS) are two advanced transceiver
technologies for realizing future sixth-generation (6G) networks. In this
paper, we investigate the joint precoding and access point (AP) selection for
an energy-efficient RIS-aided CF mMIMO system. To address the associated
computational complexity and communication power consumption, we advocate for
user-centric dynamic networks in which each user is served by a subset of APs
rather than by all of them. Based on the user-centric network, we formulate a
joint precoding and AP selection problem to maximize the energy efficiency (EE)
of the considered system. To solve this complex nonconvex problem, we propose
an innovative double-layer multi-agent reinforcement learning (MARL)-based
scheme. Moreover, we propose an adaptive power threshold-based AP selection
scheme to further enhance the EE of the considered system. To reduce the
computational complexity of the RIS-aided CF mMIMO system, we introduce a fuzzy
logic (FL) strategy into the MARL scheme to accelerate convergence. The
simulation results show that the proposed FL-based MARL cooperative
architecture effectively improves EE performance, offering an 85\% enhancement
over the zero-forcing (ZF) method, and converges faster than MARL. It is
important to note that increasing the transmission
power of the APs or the number of RIS elements can effectively enhance the
spectral efficiency (SE) performance, which also leads to an increase in power
consumption, resulting in a non-trivial trade-off between the quality of
service and EE performance. | arXiv |
Transition region explosive events are characterized by non-Gaussian profiles
of the emission lines formed at transition region temperatures, and they are
believed to be manifestations of small-scale reconnection events in the
transition region. We took a 3D self-consistent quiet-Sun model extending from
the upper convection zone to the lower corona calculated using the MURaM code.
We first synthesized the Si IV line profiles from the model and then located
the profiles which show signatures of bi-directional flows. These tend to
appear along network lanes, and most do not reach coronal temperatures. We
isolated two hot (around 1 MK) events and one cool (order of 0.1 MK) event and
examined the magnetic field evolution in and around these selected events.
Furthermore, we investigated why some explosive events reach coronal
temperatures while most remain cool. The field lines around two events
reconnect at small angles, i.e., they undergo component reconnection. The third
case is associated with the relaxation of a highly twisted flux rope. All of
the three events reveal signatures in the synthesized EUI 174 {\AA} images. The
intensity variations in two events are dominated by variations of the coronal
emissions, while the cool component seen in the respective channel contributes
significantly to the intensity variation in one case. Compared to the cool
event, one hot event is embedded in regions with higher magnetic field strength
and heating rates while the densities are comparable, and the other hot event
is heated to coronal temperatures mainly because of the low density.
Small-scale heating events seen in EUV channels of AIA or EUI might be hot or
cool. Our results imply that the major difference between the events in which
coronal counterparts dominate or not is the amount of converted magnetic energy
and/or density in and around the reconnection region. | arXiv |
Recent advances in multimodal Large Language Models (LLMs) have shown great
success in understanding multi-modal content. For video understanding tasks,
training-based video LLMs are difficult to build due to the scarcity of
high-quality, curated video-text paired data. In contrast, paired image-text
data are much easier to obtain, and there is substantial similarity between
images and videos. Consequently, extending image LLMs for video understanding
tasks presents an appealing alternative. Developing effective strategies for
compressing visual tokens from multiple frames is a promising way to leverage
the powerful pre-trained image LLM. In this work, we explore the limitations of
the existing compression strategies for building a training-free video LLM. The
findings lead to our method TS-LLaVA, which constructs visual tokens through a
Thumbnail-and-Sampling strategy. Given a video, we select a few equidistant
frames from all input frames to construct a Thumbnail image as a detailed
visual cue, complemented by Sampled visual tokens from all input frames. Our
method establishes the new state-of-the-art performance among training-free
video LLMs on various benchmarks. Notably, our 34B model outperforms GPT-4V on
the MVBench benchmark, and achieves performance comparable to the 72B
training-based video LLM, Video-LLaMA2, on the challenging MLVU benchmark. Code
is available at https://github.com/tingyu215/TS-LLaVA. | arXiv |
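As a rough illustration of the Thumbnail-and-Sampling idea described in the abstract above, the following Python sketch selects equidistant frames, tiles them into a thumbnail grid, and uniformly samples a fixed token budget from all frames. The 2x2 grid layout, the token counts, and the use of raw pixels as stand-in "visual tokens" are assumptions for illustration only, not the paper's implementation (which tokenizes frames with a pre-trained image encoder).

```python
import numpy as np

def thumbnail_and_sampling(frames, n_thumb=4, n_sampled=256):
    """frames: (T, H, W, C) uint8 video frames.
    Returns (thumbnail_grid, sampled_tokens):
      - a single grid image built from a few equidistant frames, and
      - a fixed budget of tokens sampled uniformly across all frames.
    Raw pixels stand in for encoder tokens; all sizes are illustrative."""
    T, H, W, C = frames.shape
    idx = np.linspace(0, T - 1, n_thumb).round().astype(int)      # equidistant frames
    grid = (frames[idx]                                           # (4, H, W, C) -> 2x2 grid
            .reshape(2, 2, H, W, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(2 * H, 2 * W, C))
    tokens = frames.reshape(T * H * W, C).astype(np.float32)      # stand-in visual tokens
    keep = np.linspace(0, len(tokens) - 1, n_sampled).round().astype(int)
    return grid, tokens[keep]

# Example: 32 frames of a 64x64 RGB clip
clip = np.random.randint(0, 256, size=(32, 64, 64, 3), dtype=np.uint8)
thumb, sampled = thumbnail_and_sampling(clip)
print(thumb.shape, sampled.shape)   # (128, 128, 3) (256, 3)
```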
We prove that for all constants $a\in\mathbb{N}$, $b\in\mathbb{Z}$, $c,d\in\mathbb{R}$, $c\neq 0$,
the fractions $\phi(an+b)/(cn+d)$ lie dense in the interval $]0,D]$
(respectively $[D,0[$ if $c<0$), where $D=a\phi(\gcd(a,b))/(c\gcd(a,b))$. This
interval is the largest possible, since it may happen that isolated fractions
lie outside of the interval: we prove a complete determination of the case
where this happens, which yields an algorithm that calculates the number of $n$
such that $\operatorname{rad}(an+b)\mid g$ for coprime $a,b$ and any $g$. Furthermore, this leads
to an interesting open question which is a generalization of a famous problem
raised by V.~Arnold. For the fractions $\phi(an+b)/\phi(cn+d)$ with constants
$a,c\in\mathbb{N}$, $b,d\in\mathbb{Z}$, we prove that they lie dense in $]0,\infty[$ exactly if
$ad\neq bc$. | arXiv |
In the theory of dynamic programming, an optimal policy is a policy whose
lifetime value dominates that of all other policies at every point in the state
space. This raises a natural question: under what conditions does optimality at
a single state imply optimality at every state? We show that, in a general
setting, the irreducibility of the transition kernel under a feasible policy is
a sufficient condition for extending optimality from one state to all states.
These results have important implications for dynamic optimization algorithms
based on gradient methods, which are routinely applied in reinforcement
learning and other large-scale applications. | arXiv |
Recent advances in generative AI have significantly promoted content
creation and editing, where prevailing studies further extend this exciting
progress to video editing. In doing so, these studies mainly transfer the
inherent motion patterns from the source videos to the edited ones, where
results with inferior consistency to user prompts are often observed, due to
the lack of particular alignments between the delivered motions and edited
contents. To address this limitation, we present a shape-consistent video
editing method, namely StableV2V, in this paper. Our method decomposes the
entire editing pipeline into several sequential procedures, where it edits the
first video frame, then establishes an alignment between the delivered motions
and user prompts, and eventually propagates the edited contents to all other
frames based on such alignment. Furthermore, we curate a testing benchmark,
namely DAVIS-Edit, for a comprehensive evaluation of video editing, considering
various types of prompts and difficulties. Experimental results and analyses
demonstrate the superior performance, visual consistency, and inference
efficiency of our method compared to existing state-of-the-art studies. | arXiv |
Population size estimation is a major challenge in official statistics,
social sciences, and natural sciences. The problem can be tackled by applying
capture-recapture methods, which vary depending on the number of sources used,
particularly on whether a single or multiple sources are involved. This paper
focuses on the first group of methods and introduces a novel R package:
singleRcapture. The package implements state-of-the-art single-source
capture-recapture (SSCR) models (e.g., zero-truncated one-inflated regression)
together with new developments proposed by the authors, and provides a
user-friendly application programming interface (API). This self-contained
package can be used to produce point estimates and their variance and
implements several bootstrap variance estimators or diagnostics to assess
quality and conduct sensitivity analysis. It is intended for users interested
in estimating the size of populations, particularly those that are difficult to
reach or measure, for which information is available only from one source and
dual/multiple system estimation is not applicable. Our package serves to bridge
a significant gap, as the SSCR methods are either not available at all or are
only partially implemented in existing R packages and other open-source
software. Furthermore, since many R users are familiar with countreg or VGAM
packages, we have implemented a lightweight extension called
singleRcaptureExtra which can be used to integrate singleRcapture with these
packages. | arXiv |
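To make the single-source capture-recapture idea concrete, here is a minimal Python sketch of the simplest SSCR estimator: a zero-truncated Poisson fit followed by a Horvitz-Thompson-type population size estimate. This is only the textbook baseline, not the singleRcapture API (which is an R package), and the capture-count data are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Observed capture counts (each unit was seen at least once); illustrative data.
counts = np.array([1, 1, 2, 1, 1, 3, 1, 2, 1, 1, 1, 2, 4, 1, 1])
n_obs = len(counts)

def neg_loglik(lam):
    # Zero-truncated Poisson log-likelihood: P(Y=y | Y>0) = Pois(y; lam) / (1 - exp(-lam))
    return -np.sum(poisson.logpmf(counts, lam) - np.log1p(-np.exp(-lam)))

lam_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded").x
N_hat = n_obs / (1.0 - np.exp(-lam_hat))   # estimated total population, including unseen units
print(f"lambda_hat = {lam_hat:.3f}, N_hat = {N_hat:.1f}")
```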
Quantum Local Area Networks (QLANs) represent a promising building block for
larger-scale quantum networks with the ambitious goal -- over a long time horizon
-- of realizing a Quantum Internet. Surprisingly, the physical topology of a
QLAN can be enriched by a set of artificial links, enabled by shared
multipartite entangled states among the nodes of the network. This novel
concept of artificial topology revolutionizes the possibilities of connectivity
within the local network, enabling an on-demand manipulation of the artificial
network topology. In this paper, we discuss the implementation of the QLAN
model in SeQUeNCe, a discrete-event simulator of quantum networks.
Specifically, we provide an analysis of how network nodes interact, with an
emphasis on the interplay between quantum operations and classical signaling
within the network. Remarkably, through the modeling of a measurement protocol
and a correction protocol, our QLAN model implementation enables the simulation
of the manipulation process of a shared entangled quantum state, and the
subsequent engineering of the entanglement-based connectivity. Our simulations
demonstrate how to obtain different virtual topologies with different
manipulations of the shared resources and with all the possible measurement
outcomes, with an arbitrary number of nodes within the network. | arXiv |
We propose and study a minimalist approach towards synthetic tabular data
generation. The model consists of a minimalistic unsupervised SparsePCA encoder
(with a contingent clustering step or log transformation to handle nonlinearity)
and an XGBoost decoder, which is SOTA for structured data regression and
classification tasks. We study and contrast the methodologies with
(variational) autoencoders in several toy low dimensional scenarios to derive
necessary intuitions. The framework is applied to high dimensional simulated
credit scoring data which parallels real-life financial applications. We
applied the method to robustness testing to demonstrate practical use cases.
The case study result suggests that the method provides an alternative to raw
and quantile perturbation for model robustness testing. We show that the method
is simple, guarantees interpretability all the way through, does not require
extra tuning, and provides unique benefits. | arXiv |
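A rough Python sketch of the encoder-decoder pipeline described above. The per-column XGBoost decoders, the Gaussian fit used to sample new latent codes, and all hyperparameters are assumptions made for illustration; the paper's exact pipeline (e.g., its clustering or log-transform variants) may differ.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                     # stand-in for a real tabular dataset

# Encoder: unsupervised, sparse, interpretable latent codes
encoder = SparsePCA(n_components=5, random_state=0)
Z = encoder.fit_transform(X)

# Decoder: one gradient-boosted regressor per original column (latent code -> feature)
decoders = [XGBRegressor(n_estimators=200, max_depth=4).fit(Z, X[:, j])
            for j in range(X.shape[1])]

# Generate synthetic rows: sample latent codes from a Gaussian fit to Z, then decode
Z_new = rng.multivariate_normal(Z.mean(axis=0), np.cov(Z, rowvar=False), size=500)
X_synth = np.column_stack([d.predict(Z_new) for d in decoders])
print(X_synth.shape)                                # (500, 20)
```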
Future satellite missions are expected to perform all-sky surveys, thus
providing near-infrared spectral data over the entire sky and consequently opening a
new window to investigate the evolution of galaxies. Specifically, the infrared
spectral data facilitate the precise estimation of stellar masses of numerous
low-redshift galaxies. We utilize the synthetic spectral energy distribution
(SED) of 2853 nearby galaxies drawn from the DustPedia (435) and Stripe 82
regions (2418). The stellar mass-to-light ratio ($M_*/L$) estimation accuracy
over a wavelength range of $0.75-5.0$ $\mu$m is computed through the SED
fitting of the multi-wavelength photometric dataset, which has not yet been
intensively explored in previous studies. We find that the scatter in $M_*/L$
is significantly larger in the shorter and longer wavelength regimes due to the
effect of the young stellar population and the dust contribution, respectively.
While the scatter in $M_*/L$ approaches its minimum ($\sim0.10$ dex) at
$\sim1.6$ $\mu$m, it remains sensitive to the adopted star formation history
model. Furthermore, $M_*/L$ demonstrates weak and strong correlations with the
stellar mass and the specific star formation rate (SFR), respectively. Upon
adequately correcting the dependence of $M_*/L$ on the specific SFR, the
scatter in the $M_*/L$ further reduces to $0.02$ dex at $\sim1.6$ $\mu$m. This
indicates that the stellar mass can be estimated with an accuracy of $\sim0.02$
dex with a prior knowledge of SFR, which can be estimated using the infrared
spectra obtained with future survey missions. | arXiv |
The majority of atomic nuclei have deformed shapes and nearly all these
shapes are symmetric with respect to reflection. There are only a few
reflection asymmetric pear-shaped nuclei that have been found in actinide and
lanthanide regions, which have static octupole deformation. These nuclei
possess an intrinsic electric dipole moment due to the shift between the center
of charge and the center of mass. This manifests in the enhancement of the
electric dipole transition rates. In this article, we report on the measurement
of the lifetimes of the high spin levels of the two alternate parity bands in
$^{100}$Ru through the Doppler Shift Attenuation Method. The estimated electric
dipole transition rates have been compared with the calculated transition rates
using the triaxial projected shell model without octupole deformation, and are
found to be an order of magnitude enhanced. Thus, the observation of seven
interleaved electric dipole transitions with enhanced rates establishes
$^{100}$Ru as possibly the first octupole deformed nucleus reported in the A
$\approx$ 100 mass region. | arXiv |
Random walks and related spatial stochastic models have been used in a range
of application areas including animal and plant ecology, infectious disease
epidemiology, developmental biology, wound healing, and oncology. Classical
random walk models assume that all individuals in a population behave
independently, ignoring local physical and biological interactions. This
assumption simplifies the mathematical description of the population
considerably, enabling continuum-limit descriptions to be derived and used in
model analysis and fitting. However, interactions between individuals can have
a crucial impact on population-level behaviour. In recent decades, research has
increasingly been directed towards models that include interactions, including
physical crowding effects and local biological processes such as adhesion,
competition, dispersal, predation and adaptive directional bias. In this
article, we review the progress that has been made with models of interacting
individuals. We aim to provide an overview that is accessible to researchers in
application areas, as well as to specialist modellers. We focus particularly on
derivation of asymptotically exact or approximate continuum-limit descriptions
and simplified deterministic models of mean-field behaviour and resulting
spatial patterns. We provide worked examples and illustrative results of
selected models. We conclude with a discussion of current areas of focus and
future challenges. | arXiv |
We show how curing an anomaly of the twistor uplift of self-dual Yang-Mills
theory implies linear relations among one-loop, $n$-gluon, color-ordered
subamplitudes in QCD, when all $n$ gluon helicities are positive, or when
exactly one is negative. We compute the number of linearly independent
subamplitudes as determined by these relations, in terms of unsigned Stirling
numbers. Then we use a momentum-twistor parametrization to show that there are
no further linear dependencies. | arXiv |
Ultra-high-definition (UHD) image restoration is vital for applications
demanding exceptional visual fidelity, yet existing methods often face a
trade-off between restoration quality and efficiency, limiting their practical
deployment. In this paper, we propose TSFormer, an all-in-one framework that
integrates \textbf{T}rusted learning with \textbf{S}parsification to boost both
generalization capability and computational efficiency in UHD image
restoration. The key is that only a small amount of token movement is allowed
within the model. To efficiently filter tokens, we use Min-$p$ with random
matrix theory to quantify the uncertainty of tokens, thereby improving the
robustness of the model. Our model can process a 4K image in real time (40 fps) with
3.38 M parameters. Extensive experiments demonstrate that TSFormer achieves
state-of-the-art restoration quality while enhancing generalization and
reducing computational demands. In addition, our token filtering method can be
applied to other image restoration models to effectively accelerate inference
and maintain performance. | arXiv |
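As a rough illustration of min-p-style token filtering, the sketch below keeps only tokens whose normalized importance clears a threshold that scales with the most important token. This is a generic thresholding rule, not the paper's random-matrix-theory criterion, and the score array and base threshold are assumptions.

```python
import numpy as np

def min_p_token_mask(scores, p_base=0.1):
    """Keep tokens whose softmax-normalized importance is at least p_base times the
    maximum token importance, so the kept budget adapts to how peaked the score
    distribution is. 'scores' is a per-token importance estimate."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs >= p_base * probs.max()      # boolean mask of tokens to keep

scores = np.random.default_rng(0).normal(size=4096)   # illustrative token scores
mask = min_p_token_mask(scores)
print(f"kept {mask.sum()} of {mask.size} tokens")
```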
We study the problem of maximizing a function that is approximately
submodular under a cardinality constraint. Approximate submodularity implicitly
appears in a wide range of applications as in many cases errors in evaluation
of a submodular function break submodularity. Say that $F$ is
$\varepsilon$-approximately submodular if there exists a submodular function
$f$ such that $(1-\varepsilon)f(S) \leq F(S)\leq (1+\varepsilon)f(S)$ for all
subsets $S$. We are interested in characterizing the query-complexity of
maximizing $F$ subject to a cardinality constraint $k$ as a function of the
error level $\varepsilon>0$. We provide both lower and upper bounds: for
$\varepsilon>n^{-1/2}$ we show an exponential query-complexity lower bound. In
contrast, when $\varepsilon< {1}/{k}$ or under a stronger bounded curvature
assumption, we give constant approximation algorithms. | arXiv |
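For context, the standard greedy baseline for cardinality-constrained maximization is sketched below in Python. It is the natural algorithm whose guarantees degrade under approximate submodularity, not the constant-factor algorithms proposed in the abstract, and the coverage objective in the example is made up.

```python
def greedy_max(F, ground_set, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain.
    F maps a frozenset to a real value; ground_set is an iterable of elements."""
    S = frozenset()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: F(S | {e}) - F(S))
        S = S | {best}
    return S

# Illustrative (truly submodular) coverage objective over sets of integers.
universe_of = {1: {1, 2}, 2: {2, 3}, 3: {4}, 4: {1, 4, 5}}
F = lambda S: len(set().union(*[universe_of[e] for e in S])) if S else 0
print(sorted(greedy_max(F, universe_of.keys(), k=2)))   # -> [2, 4]
```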
In recent years, with the rapid development of augmented reality (AR)
technology, there is an increasing demand for multi-user collaborative
experiences. Unlike for single-user experiences, ensuring the spatial
localization of every user and maintaining synchronization and consistency of
positioning and orientation across multiple users is a significant challenge.
In this paper, we propose a multi-user localization system based on ORB-SLAM2
using monocular RGB images as a development platform based on the Unity 3D game
engine. This system not only performs user localization but also places a
common virtual object on a planar surface (such as a table) in the environment so
that every user holds a proper perspective view of the object. These generated
virtual objects serve as reference points for multi-user position
synchronization. The positioning information is passed among every user's AR
devices via a central server, based on which the relative position and movement
of other users in a specific user's space are presented via virtual avatars,
all with respect to these virtual objects. In addition, we use deep
learning techniques to estimate the depth map of an image from a single RGB
image to solve occlusion problems in AR applications, making virtual objects
appear more natural in AR scenes. | arXiv |
We present a novel digital humanities method for representing our Twitch
chatters as user embeddings created by a large language model (LLM). We cluster
these embeddings automatically using affinity propagation and further narrow
this clustering down through manual analysis. We analyze the chat of one stream
by each Twitch streamer: SmallAnt, DougDoug and PointCrow. Our findings suggest
that each streamer has their own type of chatters; however, two categories
emerge for all of the streamers: supportive viewers, and emoji and reaction
senders. Repetitive message spammers form a shared chatter category for two of
the streamers. | arXiv |
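A minimal Python sketch of the clustering step described above, using scikit-learn's affinity propagation on user embeddings. The synthetic embeddings stand in for the LLM-derived chatter embeddings, which are not reproduced here.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Stand-in for LLM-derived chatter embeddings: a few synthetic "chatter types" plus noise
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 32))
user_embeddings = np.vstack([c + 0.1 * rng.normal(size=(60, 32)) for c in centers])

clustering = AffinityPropagation(random_state=0).fit(user_embeddings)
labels = clustering.labels_                         # cluster id per chatter
exemplars = clustering.cluster_centers_indices_     # one representative chatter per cluster
print(f"{len(exemplars)} automatically discovered chatter clusters")
```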
Non-gravitational forces play surprising and, sometimes, centrally important
roles in shaping the motions and properties of small planetary bodies. In the
solar system, the morphologies of comets, the delivery of meteorites and the
shapes and dynamics of asteroids are all affected by non-gravitational forces.
Around other stars, non-gravitational forces affect the lifetimes of particles
and their rates of radial transport within circumstellar disks. Unlike the
gravitational force, which is a simple function of the well known separations
and masses of bodies, the non-gravitational forces are frequently functions of
poorly known or even unmeasurable physical properties. Here, we present
order-of-magnitude descriptions of non-gravitational forces, with examples of
their application. | arXiv |
Understanding the performance of electrochemical energy storage systems
requires probing the electrochemical properties at each layer and interface
during cell operation. While traditional onboard and operando methods can
measure impedance, voltage, or capacity, they lack spatial resolution to
pinpoint the properties to specific layers and interfaces. In this work, we
describe an approach of using thermal waves to measure entropy change,
transport resistance, and charge-transfer resistance with depth resolution of a
few microns within an electrochemical cell. We achieve this by relating heat
generation at multiple harmonics of an AC current to electrochemical processes
and leveraging frequency dependence of thermal penetration depth for spatial
resolution. We name this technique, frequency-domain spectroscopy of the thermal
signatures of electrochemical processes measured at multiple harmonics of the
alternating current, Multi-harmonic ElectroThermal Spectroscopy (METS).
This technique enables isolation and measurement of solvation entropy at
individual electrode-electrolyte interfaces from the first harmonic (1{\omega})
thermal signature and resolution of the overall interfacial impedance into
charge-transfer and interface transport resistance components from the second
harmonic (2{\omega}) thermal signature. From this, we also demonstrate an
operando measurement of the growth of the solid-electrolyte interphase (SEI)
layer at the lithium-electrolyte interface and show that two chemically similar
electrodes can have significantly different interfacial transport resistance
based on the preparation of the electrodes. Additionally, the method is not
specific to lithium-ion chemistry and can therefore be generalized for all
electrochemical systems of interest. | arXiv |
We establish a fundamental theorem of orders (FTO) which allows us to express
all orders uniquely as an intersection of `irreducible orders' along which the
index and the conductor distribute multiplicatively.
We define a subclass of Irreducible orders named Pseudo maximal orders. We
then consider orders (called Sudo maximal orders) whose decomposition under FTO
contains only Pseudo maximal orders. These rings can be seen as being ``close''
to being maximal (the ring of integers), and thus there is only a limited number
of them with index bounded by $X$. We give an upper bound for this quantity. We then
show that all polynomials which can be sieved using only the Ekedahl sieve
correspond to Sudo Maximal Orders. We use this understanding to get a weighted
count for the number of number fields with fixed degree and bounded
discriminant using the concept of weakly divisible rings. | arXiv |
Alignment of large language models (LLMs) to societal values should account
for pluralistic values from diverse groups. One technique uses in-context
learning for inference-time alignment, but only considers similarity when
drawing few-shot examples, not accounting for cross-group differences in value
prioritization. We propose SPICA, a framework for pluralistic alignment that
accounts for group-level differences during in-context example retrieval. SPICA
introduces three designs to facilitate pluralistic alignment: scenario banks,
group-informed metrics, and in-context alignment prompts. From an evaluation of
SPICA on an alignment task collecting inputs from four demographic groups ($n =
544$), our metrics retrieve in-context examples that more closely match
observed preferences, with the best prompt configuration using multiple
contrastive responses to demonstrate examples. In an end-to-end evaluation ($n
= 80$), we observe that SPICA-aligned models are higher rated than a baseline
similarity-only retrieval approach, with groups seeing up to a +0.16 point
improvement on a 5 point scale. Additionally, gains from SPICA were more
uniform, with all groups benefiting from alignment rather than only some.
Finally, we find that while a group-agnostic approach can effectively align to
aggregated values, it is not most suited for aligning to divergent groups. | arXiv |
A fundamental problem in network experiments is selecting an appropriate
experimental design in order to precisely estimate a given causal effect of
interest. In fact, optimal rates of estimation remain unknown for essentially
all causal effects in network experiments. In this work, we propose a general
approach for constructing experiment designs under network interference with
the goal of precisely estimating a pre-specified causal effect. A central
aspect of our approach is the notion of a conflict graph, which captures the
fundamental unobservability associated with the causal effect and the
underlying network. We refer to our experimental design as the Conflict Graph
Design. In order to estimate effects, we propose a modified Horvitz--Thompson
estimator. We show that its variance under the Conflict Graph Design is bounded
as $O(\lambda(H) / n )$, where $\lambda(H)$ is the largest eigenvalue of the
adjacency matrix of the conflict graph. These rates depend on both the
underlying network and the particular causal effect under investigation. Not
only does this yield the best known rates of estimation for several
well-studied causal effects (e.g. the global and direct effects) but it also
provides new methods for effects which have received less attention from the
perspective of experiment design (e.g. spill-over effects). Our results
corroborate two implicitly understood points in the literature: (1) that in
order to increase precision, experiment designs should be tailored to specific
causal effects of interest and (2) that "more local" effects are easier to
estimate than "more global" effects. In addition to point estimation, we
construct conservative variance estimators which facilitate the construction of
asymptotically valid confidence intervals for the causal effect of interest. | arXiv |
Distributed phased arrays have recently garnered interest in applications
such as satellite communications and high-resolution remote sensing.
High-performance coherent distributed operations such as distributed
beamforming are dependent on the ability to synchronize the spatio-electrical
states of the elements in the array to the order of the operational wavelength,
so that coherent signal summation can be achieved at any arbitrary target
destination. In this paper, we address the fundamental challenge of precise
distributed array element localization to enable coherent operation, even in
complex environments where the array may not be capable of directly estimating
all nodal link distances. We employ a two-way time transfer technique to
synchronize the nodes of the array and perform internode ranging. We implement
the classical multidimensional scaling algorithm to recover a decentralized
array geometry from a set of range estimates. We also establish the incomplete
set of range estimates as a multivariable non-convex optimization problem, and
define the differential evolution algorithm which searches the solution space
to complete the set of ranges. We experimentally demonstrate wireless
localization using a spectrally-sparse pulsed two-tone waveform with 40 MHz
tone separation in a laboratory environment, achieving a mean localization
error vector magnitude of 0.82 mm in an environment with an average link SNR of
34 dB, theoretically supporting distributed beamforming operation up to 24.3
GHz. | arXiv |
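For reference, classical multidimensional scaling recovers node coordinates (up to rotation, reflection, and translation) from a complete matrix of pairwise ranges via double centering and an eigendecomposition; a minimal Python sketch follows. Completing an incomplete range set via differential evolution, as in the paper, is not shown, and the test geometry is made up.

```python
import numpy as np

def classical_mds(D, dim=2):
    """D: (n, n) symmetric matrix of pairwise range estimates.
    Returns (n, dim) node coordinates up to a rigid transform."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]                # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Sanity check on a noiseless 2-D geometry
truth = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8], [1.2, 0.9]])
D = np.linalg.norm(truth[:, None, :] - truth[None, :, :], axis=-1)
print(classical_mds(D).round(3))                   # recovers the shape up to a rigid transform
```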
Recently the galaxy matter density 4-point correlation function has been
used to investigate parity violation in large-scale structure surveys. The
4-point correlation function is the lowest order statistic which is sensitive
to parity violation, since a tetrahedron is the simplest shape that cannot be
superimposed on its mirror image by a rotation. If the parity violation is
intrinsic in nature, this could give us a window into inflationary physics.
However, we need to exhaust all other contaminations before we consider them to
be intrinsic. Even though the standard Newtonian redshift-space distortions are
parity symmetric, the full relativistic picture is not. Therefore, we expect a
parity-odd trispectrum when observing in redshift space. We calculate the
trispectrum with the leading-order relativistic effects and investigate in
detail the parameter space of the trispectrum and the effects of these
relativistic corrections for different parameter values and configurations. We
also look at different surveys and how the evolution and magnification biases
can be affected by different parameter choices. | arXiv |
While state-of-the-art models for breast cancer detection leverage multi-view
mammograms for enhanced diagnostic accuracy, they often focus solely on visual
mammography data. However, radiologists document valuable lesion descriptors
that contain additional information that can enhance mammography-based breast
cancer screening. A key question is whether deep learning models can benefit
from these expert-derived features. To address this question, we introduce a
novel multi-modal approach that combines textual BI-RADS lesion descriptors
with visual mammogram content. Our method employs iterative attention layers to
effectively fuse these different modalities, significantly improving
classification performance over image-only models. Experiments on the CBIS-DDSM
dataset show substantial improvements across all metrics, demonstrating the
contribution of handcrafted features to end-to-end models. | arXiv |
We study the classic single-choice prophet secretary problem through a
resource augmentation lens. Our goal is to bound the $(1-\epsilon)$-competition
complexity for different classes of online algorithms. This metric asks for the
smallest $k$ such that the expected value of the online algorithm on $k$ copies
of the original instance is at least a $(1 - \epsilon)$-approximation to the
expected offline optimum on the original instance (without added copies).
We consider four natural classes of online algorithms: single-threshold,
time-based threshold, activation-based, and general algorithms. We show that
for single-threshold algorithms the $(1-\epsilon)$-competition complexity is
$\Theta(\ln(\frac{1}{\epsilon}))$ (as in the i.i.d. case). Additionally, we
demonstrate that time-based threshold and activation-based algorithms (which
cover all previous approaches for obtaining competitive-ratios for the classic
prophet secretary problem) yield a sub-optimal $(1-\epsilon)$-competition
complexity of
$\Theta\left(\frac{\ln(\frac{1}{\epsilon})}{\ln\ln(\frac{1}{\epsilon})}\right)$,
which is strictly better than the class of single-threshold algorithms.
Finally, we find that the $(1-\epsilon)$-competition complexity of general
adaptive algorithms is $\Theta(\sqrt{\ln(\frac{1}{\epsilon})})$, which is in
sharp contrast to $\Theta(\ln\ln(\frac{1}{\epsilon}))$ in the i.i.d. case. | arXiv |
Scientific Workflow Systems (SWSs) are advanced software frameworks that
drive modern research by orchestrating complex computational tasks and managing
extensive data pipelines. These systems offer a range of essential features,
including modularity, abstraction, interoperability, workflow composition
tools, resource management, error handling, and comprehensive documentation.
Utilizing these frameworks accelerates the development of scientific computing,
resulting in more efficient and reproducible research outcomes. However,
developing a user-friendly, efficient, and adaptable SWS poses several
challenges. This study explores these challenges through an in-depth analysis
of interactions on Stack Overflow (SO) and GitHub, key platforms where
developers and researchers discuss and resolve issues. In particular, we
leverage topic modeling (BERTopic) to understand the topics SWSs developers
discuss on these platforms. We identified 10 topics developers discuss on SO
(e.g., Workflow Creation and Scheduling, Data Structures and Operations,
Workflow Execution) and found that workflow execution is the most challenging.
By analyzing GitHub issues, we identified 13 topics (e.g., Errors and Bug
Fixing, Documentation, Dependencies) and discovered that data structures and
operations is the most difficult. We also found common topics between SO and
GitHub, such as data structures and operations, task management, and workflow
scheduling. Additionally, we categorized each topic by type (How, Why, What,
and Others). We observed that the How type consistently dominates across all
topics, indicating a need for procedural guidance among developers. The
dominance of the How type is also evident in domains like Chatbots and Mobile
development. Our study will guide future research in proposing tools and
techniques to help the community overcome the challenges developers face when
developing SWSs. | arXiv |
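A minimal sketch of the topic-modeling step with BERTopic in Python. The `load_posts` helper is hypothetical (a stand-in for collecting SO question bodies and GitHub issue texts), and the min_topic_size setting is illustrative rather than the study's actual configuration.

```python
from bertopic import BERTopic

# `load_posts` is a hypothetical loader returning a list of SO question bodies and
# GitHub issue texts; BERTopic needs a corpus of many documents to work well.
docs = load_posts()

topic_model = BERTopic(min_topic_size=30)        # illustrative hyperparameter
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head(15))     # inspect the discovered topics
```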
We study the contact geometry of the connected components of the energy
hypersurface, in the symmetric restricted 3-body problem on $\mathbb{S}^2$, for
a specific type of motion of the primaries. In particular, we show that these
components are of contact type for all energies below the first critical value
and slightly above it. We prove that these components, suitably compactified
using a Moser-type regularization, are contactomorphic to $\mathbb{RP}^3$ with
its unique tight contact structure or to the connected sum of two copies of it,
depending on the value of the energy. We exploit Taubes' solution of the
Weinstein conjecture in dimension three, to infer the existence of periodic
orbits in all these cases. | arXiv |
Operating Systems enforce logical isolation using abstractions such as
processes, containers, and isolation technologies to protect a system from
malicious or buggy code. In this paper, we show new types of side channels
through the file system that break this logical isolation. The file system
plays a critical role in the operating system, managing all I/O activities
between the application layer and the physical storage device. We observe that
the file system implementation is shared, leading to timing leakage when using
common I/O system calls. Specifically, we found that modern operating systems
take advantage of any flush operation (which saves cached blocks in memory to
the SSD or disk) to flush all of the I/O buffers, even those used by other
isolation domains. Thus, by measuring the delay of syncfs, the attacker can
infer the I/O behavior of victim programs. We then demonstrate a syncfs covert
channel attack on multiple file systems, including both Linux native file
systems and the Windows file system, achieving a maximum bandwidth of 5 Kbps
with an error rate of 0.15% on Linux and 7.6 Kbps with an error rate of 1.9% on
Windows. In addition, we construct three side-channel attacks targeting both
Linux and Android devices. On Linux devices, we implement a website
fingerprinting attack and a video fingerprinting attack by tracking the write
patterns of temporary buffering files. On Android devices, we design an
application fingerprinting attack that leaks application write patterns during
boot-up. The attacks achieve over 90% F1 score, precision, and recall. Finally,
we demonstrate that these attacks can be exploited across containers by
implementing a container detection technique and a cross-container covert
channel attack. | arXiv |
Giant radio galaxies (GRGs), a minority among the extended-jetted population,
form in a wide range of jet and environmental configurations, complicating the
identification of the growth factors that facilitate their attainment of
megaparsec scales. This study aims to numerically investigate the hypothesized
formation mechanisms of GRGs extending $\gtrsim 1$ Mpc to assess their general
applicability. We employ triaxial ambient medium settings to generate varying
levels of jet frustration and simulate jets with low and high power from
different locations in the environment, formulating five representations. The
emergence of distinct giant phases in all five simulated scenarios suggests
that GRGs may be more common than previously believed, a prediction to be
verified with contemporary radio telescopes. We find that different
combinations of jet morphology, power, and the evolutionary age of the formed
structure hold the potential to elucidate different formation scenarios. The
simulated lobes are overpressured, prompting further investigation into
pressure profiles when jet activity ceases, potentially distinguishing between
relic and active GRGs. We observed a potential phase transition in giant radio
galaxies, marked by differences in lobe expansion speed and pressure variations
compared to their smaller evolutionary phases. This suggests the need for
further investigation across a broader parameter space to determine if GRGs
fundamentally differ from smaller RGs. Axial ratio analysis reveals
self-similar expansion in rapidly propagating jets, with notable deviations
when the jet forms wider lobes. Overall, this study emphasizes that multiple
growth factors at work can better elucidate the current-day population of GRGs,
including scenarios such as the growth of GRGs in dense environments, GRGs of several
megaparsecs, GRG development in low-powered jets, and the formation of X-shaped
GRGs. | arXiv |
The generation of complex, large-scale code projects using generative AI
models presents challenges due to token limitations, dependency management, and
iterative refinement requirements. This paper introduces the See-Saw generative
mechanism, a novel methodology for dynamic and recursive code generation. The
proposed approach alternates between main code updates and dependency
generation to ensure alignment and functionality. By dynamically optimizing
token usage and incorporating key elements of the main code into the generation
of dependencies, the method enables efficient and scalable code generation for
projects requiring hundreds of interdependent files. The mechanism ensures that
all code components are synchronized and functional, enabling scalable and
efficient project generation. Experimental validation demonstrates the method's
capability to manage dependencies effectively while maintaining coherence and
minimizing computational overhead. | arXiv |
Many computational problems can be modelled as the class of all finite
relational structures $\mathbb A$ that satisfy a fixed first-order sentence
$\phi$ hereditarily, i.e., we require that every substructure of $\mathbb A$
satisfies $\phi$. In this case, we say that the class is in HerFO. The problems
in HerFO are always in coNP, and sometimes coNP-complete. HerFO also contains
many interesting computational problems in P, including many constraint
satisfaction problems (CSPs). We show that HerFO captures the class of
complements of CSPs for reducts of finitely bounded structures, i.e., every
such CSP is polynomial-time equivalent to the complement of a problem in HerFO.
However, we also prove that HerFO does not have the full computational power of
coNP: there are problems in coNP that are not polynomial-time equivalent to a
problem in HerFO, unless E=NE. Another main result is a description of the
quantifier-prefixes for $\phi$ such that hereditarily checking $\phi$ is in P;
we show that for every other quantifier-prefix there exists a formula $\phi$
with this prefix such that hereditarily checking $\phi$ is coNP-complete. | arXiv |
Data contamination presents a critical barrier preventing widespread
industrial adoption of advanced software engineering techniques that leverage
code language models (CLMs). This phenomenon occurs when evaluation data
inadvertently overlaps with the public code repositories used to train CLMs,
severely undermining the credibility of performance evaluations. For software
companies considering the integration of CLM-based techniques into their
development pipeline, this uncertainty about true performance metrics poses an
unacceptable business risk. Code refactoring, which comprises code
restructuring and variable renaming, has emerged as a promising measure to
mitigate data contamination. It provides a practical alternative to the
resource-intensive process of building contamination-free evaluation datasets,
which would require companies to collect, clean, and label code created after
the CLMs' training cutoff dates. However, the lack of automated code
refactoring tools and scientifically validated refactoring techniques has
hampered widespread industrial implementation. To bridge the gap, this paper
presents the first systematic study to examine the efficacy of code refactoring
operators at multiple scales (method-level, class-level, and cross-class level)
and in different programming languages. In particular, we develop an
open-sourced toolkit, CODECLEANER, which includes 11 operators for Python, with
nine method-level, one class-level, and one cross-class-level operator. A 65%
drop in overlap ratio is observed when applying all operators in CODECLEANER,
demonstrating their effectiveness in addressing data contamination.
Additionally, we migrate four operators to Java, showing their generalizability
to another language. We make CODECLEANER online available to facilitate further
studies on mitigating CLM data contamination. | arXiv |
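To illustrate what a method-level refactoring operator can look like, here is a small Python sketch that renames local variables with the standard-library ast module. This is a generic example of the operator class, not CODECLEANER's own implementation, and the source snippet is made up.

```python
import ast

class RenameLocals(ast.NodeTransformer):
    """Rewrite selected identifier names, a simple semantics-preserving refactoring."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

src = (
    "def area(width, height):\n"
    "    result = width * height\n"
    "    return result\n"
)
tree = RenameLocals({"result": "v0"}).visit(ast.parse(src))
print(ast.unparse(tree))        # Python 3.9+: prints the refactored source
```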
We employ Maximum Likelihood Estimators to examine the Pantheon+ catalogue of
Type Ia supernovae for large scale anisotropies in the expansion rate of the
Universe. The analyses are carried out in the heliocentric frame, the CMB
frame, as well as the Local Group frame. In all frames, the Hubble expansion
rate in the redshift range 0.023 < z < 0.15 is found to have a statistically
significant dipolar variation exceeding 1.5 km/s/Mpc, i.e. bigger than the
claimed 1% uncertainty in the SH0ES measurement of the Hubble parameter H_0.
The deceleration parameter too has a redshift-dependent dipolar modulation at
>5 sigma significance, consistent with previous findings using the SDSSII/SNLS3
Joint Lightcurve Analysis catalogue. The inferred cosmic acceleration cannot
therefore be due to a Cosmological Constant, but is probably an apparent
(general relativistic) effect due to the anomalous bulk flow in our local
Universe. | arXiv |
We present a unified controllable video generation approach AnimateAnything
that facilitates precise and consistent video manipulation across various
conditions, including camera trajectories, text prompts, and user motion
annotations. Specifically, we carefully design a multi-scale control feature
fusion network to construct a common motion representation for different
conditions. It explicitly converts all control information into frame-by-frame
optical flows. Then we incorporate the optical flows as motion priors to guide
final video generation. In addition, to reduce the flickering issues caused by
large-scale motion, we propose a frequency-based stabilization module. It can
enhance temporal coherence by ensuring the video's frequency domain
consistency. Experiments demonstrate that our method outperforms the
state-of-the-art approaches. For more details and videos, please refer to the
webpage: https://yu-shaonian.github.io/Animate_Anything/. | arXiv |
Given the inherent non-stationarity prevalent in real-world applications,
continual Reinforcement Learning (RL) aims to equip the agent with the
capability to address a series of sequentially presented decision-making tasks.
Within this problem setting, a pivotal challenge revolves around
the \textit{catastrophic forgetting} issue, wherein the agent readily erodes the
decisional knowledge associated with previously encountered tasks when learning
a new one. In recent progress, the \textit{generative
replay} methods have showcased substantial potential by employing generative
models to replay data distribution of past tasks. Compared to storing the data
from past tasks directly, this category of methods circumvents the growing
storage overhead and possible data privacy concerns. However, constrained by
the expressive capacity of generative models, existing \textit{generative
replay} methods face challenges in faithfully reconstructing the data
distribution of past tasks, particularly in scenarios with a myriad of tasks or
high-dimensional data. Inspired by the success of diffusion models in various
generative tasks, this paper introduces a novel continual RL algorithm DISTR
(Diffusion-based Trajectory Replay) that employs a diffusion model to memorize
the high-return trajectory distribution of each encountered task and wakes up
these distributions during policy learning on new tasks. In addition,
considering the impracticality of replaying all past data each time, a
prioritization mechanism is proposed to prioritize the trajectory replay of
pivotal tasks in our method. Empirical experiments on the popular continual RL
benchmark \texttt{Continual World} demonstrate that our proposed method obtains
a favorable balance between \textit{stability} and \textit{plasticity},
surpassing various existing continual RL baselines in average success rate. | arXiv |
Taking inspiration from [1, 21, 24], we develop a general framework to deal
with the model theory of open incidence structures. In this first paper we
focus on the study of systems of points and lines (rank $2$). This has a number
of applications, in particular we show that for any of the following classes
all the non-degenerate free structures are elementarily equivalent, and their
common theory is decidable, strictly stable, and has no prime model: $(k,
n)$-Steiner systems (for $2 \leq k < n$); generalised $n$-gons (for $n \geq
3$); $k$-nets (for $k \geq 3$); affine planes; projective M\"obius, Laguerre
and Minkowski planes. | arXiv |
Consider a trade market with one seller and multiple buyers. The seller aims
to sell an indivisible item and maximize their revenue. This paper focuses on a
simple and popular mechanism--the fixed-price mechanism. Unlike the standard
setting, we assume there is information asymmetry between buyers and the
seller. Specifically, we allow the seller to design information before setting
the fixed price, which implies that we study the mechanism design problem in a
broader space. We call this mechanism space the fixed-price signaling
mechanism.
We assume that buyers' valuation of the item depends on the quality of the
item. The seller can privately observe the item's quality, whereas buyers only
see its distribution. In this case, the seller can influence buyers' valuations
by strategically disclosing information about the item's quality, thereby
adjusting the fixed price. We consider two types of buyers with different
levels of rationality: ex-post individual rational (IR) and ex-interim
individual rational. We show that when the market has only one buyer, the
optimal revenue generated by the fixed-price signaling mechanism is identical
to that of the fixed-price mechanism, regardless of the level of rationality.
Furthermore, when there are multiple buyers in the market and all of them are
ex-post IR, we show that there is no fixed-price mechanism that is obedient for
all buyers. However, if all buyers are ex-interim IR, we show that the optimal
fixed-price signaling mechanism will generate more revenue for the seller than
the fixed-price mechanism. | arXiv |
We conduct a systematic investigation of the role of Hubbard U corrections in
electronic structure calculations of two-dimensional (2D) materials containing
3d transition metals. Specifically, we use density functional theory (DFT) with
the PBE and PBE+U approximations to calculate the crystal structure, band gaps,
and magnetic parameters of 638 monolayers. Based on a comprehensive comparison
to experiments we first establish that inclusion of the U correction worsens
the accuracy for the lattice constant. Consequently, PBE structures are used
for subsequent property evaluations. The band gaps show significant dependence
on the U-parameter. In particular, for 134 (21%) of the materials the U
parameter leads to a metal-insulator transition. For the magnetic materials we
calculate the magnetic moment, magnetic exchange coupling, and magnetic
anisotropy parameters. In contrast to the band gaps, the size of the magnetic
moments shows only weak dependence on U. Both the exchange energies and
magnetic anisotropy parameters are systematically reduced by the U correction.
On this basis we conclude that the Hubbard U correction will lead to lower
predicted Curie temperatures in 2D materials. All the calculated properties are
available in the Computational 2D Materials Database (C2DB). | arXiv |
Cardiovascular magnetic resonance (CMR) imaging is the gold standard for
diagnosing several heart diseases due to its non-invasive nature and proper
contrast. MR imaging is time-consuming because of signal acquisition and image
formation issues. Prolonging the imaging process can result in the appearance
of artefacts in the final image, which can affect the diagnosis. It is possible
to speed up CMR imaging using image reconstruction based on deep learning. For
this purpose, high-quality, clinically interpretable images can be
reconstructed by acquiring highly undersampled k-space data, which is only
partially filled, and using a deep learning model. In this study, we proposed a
stepwise reconstruction approach based on the Patch-GAN structure for highly
undersampled k-space data compatible with the multi-contrast nature, various
anatomical views and trajectories of CMR imaging. The proposed approach was
validated using the CMRxRecon2024 challenge dataset and outperformed previous
studies. The structural similarity index measure (SSIM) values for the first
and second tasks of the challenge are 99.07 and 97.99, respectively. This
approach can accelerate CMR imaging to obtain high-quality images, more
accurate diagnosis and a pleasant patient experience. | arXiv |
We study the following one-dimensional cubic nonlinear Schr\"{o}dinger
system: \[ u_i''+2\Big(\sum_{k=1}^N u_k^2\Big)u_i=-\mu_i u_i \quad \mbox{in}\ \mathbb{R},
\quad i=1, 2, \cdots, N, \] where
$\mu_1\leq\mu_2\leq\cdots\leq\mu_N<0$ and $N\ge 2$. In this paper, we mainly
focus on the case $N=3$ and prove the following results: (i). The solutions of
the system can be completely classified; (ii). Depending on the explicit values
of $\mu_1\leq\mu_2\leq\mu_3<0$, there exist two different classes of normalized
solutions $u=(u_1, u_2, u_3)$ satisfying $\int_{\mathbb{R}}u_i^2\,dx=1$ for all $i=1, 2,
3$, which are completely different from the case $N=2$; (iii). The linearized
operator at any nontrivial solution of the system is non-degenerate. The
conjectures on the explicit classification and nondegeneracy of solutions for
the system are also given for the case $N>3$. These address the questions of
[R. Frank, D. Gontier and M. Lewin, CMP, 2021], where the complete
classification and uniqueness results for the system were already proved for
the case $N=2$. | arXiv |
Chest X-rays (CXRs) often display various diseases with disparate class
frequencies, leading to a long-tailed, multi-label data distribution. In
response to this challenge, we explore the Pruned MIMIC-CXR-LT dataset, a
curated collection derived from the MIMIC-CXR dataset, specifically designed to
represent a long-tailed and multi-label data scenario. We introduce LTCXNet, a
novel framework that integrates the ConvNeXt model, ML-Decoder, and strategic
data augmentation, further enhanced by an ensemble approach. We demonstrate
that LTCXNet improves the performance of CXR interpretation across all classes,
especially enhancing detection in rarer classes like `Pneumoperitoneum' and
`Pneumomediastinum' by 79\% and 48\%, respectively. Beyond performance metrics,
our research extends into evaluating fairness, highlighting that some methods,
while improving model accuracy, could inadvertently harm fairness across
different demographic groups. This work contributes to advancing the
understanding and management of long-tailed, multi-label data distributions in
medical imaging, paving the way for more equitable and effective diagnostic
tools. | arXiv |
A popular poster from Myanmar lists food pairings that should be avoided,
sometimes at all costs. Coconut and honey taken together, for example, are
believed to cause nausea, while pork and curdled milk will induce diarrhea.
Worst of all, according to the poster, many seemingly innocuous combinations
that include jelly and coffee, beef and star fruit, or pigeon and pumpkin, are
likely to kill the unwary consumer. But why are these innocuous combinations
considered dangerous, even fatal? The answer is relevant, not just to food
beliefs, but to social beliefs of many kinds. Here we describe the prevalence
of food combination superstitions, and an opinion formation model simulating
their emergence and fixation. We find that such food norms are influenced, not
just by actual risks, but also by strong forces of cultural learning that can
drive and lock in arbitrary rules, even in the face of contrary evidence. | arXiv |
Various linear complexity models, such as Linear Transformer (LinFormer),
State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace
the conventional softmax attention in Transformer structures. However, the
optimal design of these linear models is still an open question. In this work,
we attempt to answer this question by finding the best linear approximation to
softmax attention from a theoretical perspective. We start by unifying existing
linear complexity models as the linear attention form and then identify three
conditions for the optimal linear attention design: 1) Dynamic memory ability;
2) Static approximation ability; 3) Least parameter approximation. We find that
none of the current linear models meet all three conditions, resulting in
suboptimal performance. Instead, we propose Meta Linear Attention (MetaLA) as a
solution that satisfies these conditions. Our experiments on Multi-Query
Associative Recall (MQAR) task, language modeling, image classification, and
Long-Range Arena (LRA) benchmark demonstrate that MetaLA is more effective than
the existing linear models. | arXiv |
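For reference, the generic "linear attention form" mentioned above replaces the softmax kernel exp(q·k) with a feature map phi, which lets the key-value summary be computed once and reused; a small numpy sketch follows. The ELU-like feature map and the shapes are illustrative, and this is the generic form rather than MetaLA itself.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: O(n^2 d) time and memory in the sequence length n."""
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Linear attention form: the softmax kernel is replaced by phi(q) . phi(k),
    so the (d x d_v) key-value summary is built once -> O(n d^2) time."""
    KV = phi(K).T @ V                 # (d, d_v) summary
    Z = phi(K).sum(axis=0)            # (d,) normalizer
    return (phi(Q) @ KV) / (phi(Q) @ Z)[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(128, 16)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)   # (128, 16) twice
```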
We developed a shoe-mounted gait monitoring system capable of tracking up to
17 gait parameters, including gait length, step time, stride velocity, and
others. The system employs a stereo camera mounted on one shoe to track a
marker placed on the opposite shoe, enabling the estimation of spatial gait
parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel
of the shoe, combined with a custom-designed algorithm, is utilized to measure
temporal gait parameters. Through testing on multiple participants and
comparison with the gait mat, the proposed gait monitoring system exhibited
notable performance, with the accuracy of all measured gait parameters
exceeding 93.61%. The system also demonstrated a low drift of 4.89% during
long-distance walking. A gait identification task conducted on participants
using a trained Transformer model achieved 95.7% accuracy on the dataset
collected by the proposed system, demonstrating that our hardware has the
potential to collect long-sequence gait data suitable for integration with
current Large Language Models (LLMs). The system is cost-effective,
user-friendly, and well-suited for real-life measurements. | arXiv |
Finding ground state solutions of diagonal Hamiltonians is relevant for both
theoretical as well as practical problems of interest in many domains such as
finance, physics and computer science. These problems are typically very hard
to tackle by classical computing and quantum computing could help in speeding
up computations and efficiently tackling larger problems. Here we use imaginary
time evolution through a new block encoding scheme to obtain the ground state
of such problems and apply our method to MaxCut as an illustration. Our method,
which for simplicity we call ITE-BE, requires no variational parameter
optimization as all the parameters in the procedure are expressed as analytical
functions of the couplings of the Hamiltonian. We demonstrate that our method
can be successfully combined with other quantum algorithms such as quantum
approximate optimization algorithm (QAOA). We find that the QAOA ansatz
increases the post-selection success of ITE-BE, and shallow QAOA circuits, when
boosted with ITE-BE, achieve better performance than deeper QAOA circuits. For
the special case of the transverse initial state, we adapt our block encoding
scheme to allow for a deterministic application of the first layer of the
circuit. | arXiv |
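As a purely classical illustration of why imaginary time evolution finds ground states of diagonal Hamiltonians, the sketch below applies $e^{-\tau H}$ to a uniform superposition for a toy MaxCut instance (statevector arithmetic only; it does not reproduce the ITE-BE block encoding or its post-selection):

```python
import itertools
import numpy as np

def maxcut_diagonal(n, edges):
    """Diagonal of the MaxCut cost Hamiltonian H = sum_{(i,j)} z_i z_j
    over all 2^n computational basis states (z in {+1, -1})."""
    diag = np.zeros(2**n)
    for idx, bits in enumerate(itertools.product([1, -1], repeat=n)):
        diag[idx] = sum(bits[i] * bits[j] for i, j in edges)
    return diag

def imaginary_time_evolution(diag, tau, steps):
    """Apply e^{-tau * H} repeatedly to the uniform superposition and
    renormalize; amplitudes concentrate on the ground space of H."""
    psi = np.ones_like(diag) / np.sqrt(len(diag))
    for _ in range(steps):
        psi = np.exp(-tau * diag) * psi
        psi /= np.linalg.norm(psi)
    return psi

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-cycle
diag = maxcut_diagonal(4, edges)
psi = imaginary_time_evolution(diag, tau=0.5, steps=50)
print(np.argmax(np.abs(psi)**2), diag.min())  # a ground-state index and the ground energy
```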
Gamma-ray bursts (GRBs) are intense pulses of high-energy emission associated
with the death of massive stars or the coalescence of compact objects. Their
multi-wavelength observations help verify the reliability of the standard
fireball model. We analyze 14 GRBs observed contemporaneously in gamma-rays by
the \textit{Fermi} Large Area Telescope (LAT), in X-rays by the \textit{Swift}
Telescope, and in the optical bands by \textit{Swift} and many ground-based
telescopes. We study the correlation between the spectral and temporal indices
using closure relations according to the synchrotron forward-shock model in the
stratified medium ($n \propto r^{-k}$) with $k$ ranging from 0 to 2.5. We find
that the model without energy injection is preferred over the one with energy
injection in all the investigated wavelengths. In gamma-rays, we only explored
the $\nu > \max\{\nu_c,\nu_m\}$ (SC/FC) cooling condition (where $\nu_c$ and
$\nu_m$ are the cooling and characteristic frequencies, namely the frequencies
at the spectral break). In the X-ray and optical bands, we explored all the
cooling conditions, including $\nu_m < \nu < \nu_c$ (SC), $\nu_c < \nu < \nu_m$
(FC), and SC/FC, and found a clear preference for SC for X-rays and SC/FC for
optical. Within these cooling conditions, X-rays exhibit the highest rate of
occurrence for the density profile with $k = 0$, while the optical band has the
highest occurrence for $k = 2.5$ when considering no energy injection. Although
we can pinpoint a definite environment for some GRBs, we find degeneracies in
other GRBs. | arXiv |
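A minimal helper for the kind of closure-relation test described above, assuming the common convention $F_\nu \propto t^{-\alpha}\nu^{-\beta}$ and the frequently quoted relation $\alpha = (3\beta - 1)/2$ for $\nu > \max\{\nu_c,\nu_m\}$ without energy injection (illustrative only; the paper's full set of relations and uncertainty treatment is not reproduced):

```python
def closure_consistent(alpha, beta, sig_alpha, sig_beta, n_sigma=1.0):
    """Check the commonly quoted closure relation alpha = (3*beta - 1)/2
    for nu > max(nu_c, nu_m) without energy injection, under the
    convention F_nu ~ t^-alpha nu^-beta."""
    expected = (3.0 * beta - 1.0) / 2.0
    sigma = (sig_alpha**2 + (1.5 * sig_beta)**2) ** 0.5   # propagated uncertainty
    return abs(alpha - expected) <= n_sigma * sigma

# toy usage with invented measurements
print(closure_consistent(alpha=1.25, beta=1.15, sig_alpha=0.1, sig_beta=0.1))  # True
```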
We explore Mahler numbers originating from functions $f(z)$ that satisfy the
functional equation $f(z) = (A(z)f(z^d) + C(z))/B(z)$. A procedure to compute
the irrationality exponents of such numbers is developed using continued
fractions for formal Laurent series, and the form of all such irrationality
exponents is investigated. This serves to extend Dmitry Badziahin's paper, On
the Spectrum of Irrationality Exponents of Mahler Numbers, where he does the
same under the condition that $C(z) = 0$. Furthermore, we cover the required
background of continued fractions in detail for unfamiliar readers. This essay
was submitted as a thesis in the Pure Mathematics Honours program at the
University of Sydney. | arXiv |
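For readers unfamiliar with Mahler functions, a classical example fitting the functional equation above is the infinite product below (it has $C(z) = 0$, i.e. the case already covered by Badziahin, and is quoted purely for illustration):

```latex
% A classical Mahler function of the form f(z) = (A(z) f(z^d) + C(z))/B(z):
T(z) \;=\; \prod_{k \ge 0} \bigl(1 - z^{2^{k}}\bigr)
\qquad\text{satisfies}\qquad
T(z) \;=\; (1 - z)\, T(z^{2}),
% i.e. d = 2, \ A(z) = 1 - z, \ B(z) = 1, \ C(z) = 0.
```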
The management of type 1 diabetes has been revolutionized by the artificial
pancreas system (APS), which automates insulin delivery based on continuous
glucose monitoring (CGM). Conventional closed-loop systems rely on continuous
CGM data, which leads to higher energy consumption at the sensors and increased
data redundancy in the underlying communication network. In contrast, this paper
proposes a self-triggered control mechanism that can potentially achieve lower
latency and energy efficiency. The model for the APS consists of a state and
input-constrained dynamical system affected by exogenous meal disturbances. Our
self-triggered mechanism relies on restricting the state evolution within the
robust control invariant of such a system at all times. To that end, using
tools from reachability, we associate a safe time interval with such invariant
sets, which denotes the maximum time for which the invariant set remains
invariant, even when CGM data are not transmitted continuously. | arXiv |
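A toy version of attaching a "safe time interval" to an invariant set: propagate an interval enclosure of the state under a linear model with bounded disturbance and count the steps it is guaranteed to remain in a box (a crude stand-in for the reachability tools used in the paper; the dynamics, bounds, and box are invented for illustration).

```python
import numpy as np

def safe_steps(A, x0, w_bound, box, max_steps=50):
    """Number of steps the interval enclosure of x_{t+1} = A x_t + w
    (|w_i| <= w_bound, zero input held) is guaranteed to stay inside
    the box [-box, box]^n."""
    lo = hi = np.asarray(x0, dtype=float)
    for t in range(max_steps):
        # interval arithmetic for the linear map plus disturbance
        center = A @ (lo + hi) / 2.0
        radius = np.abs(A) @ (hi - lo) / 2.0 + w_bound
        lo, hi = center - radius, center + radius
        if np.any(lo < -box) or np.any(hi > box):
            return t  # transmit new CGM data before step t + 1
    return max_steps

A = np.array([[0.95, 0.1], [0.0, 0.9]])          # invented stable dynamics
print(safe_steps(A, x0=[0.2, -0.1], w_bound=0.02, box=1.0))
```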
Image restoration models often face the simultaneous interaction of multiple
degradations in real-world scenarios. Existing approaches typically handle
single or composite degradations based on scene descriptors derived from text
or image embeddings. However, due to the varying proportions of different
degradations within an image, these scene descriptors may not accurately
differentiate between degradations, leading to suboptimal restoration in
practical applications. To address this issue, we propose a novel
Transformer-based restoration framework, AllRestorer. In AllRestorer, we enable
the model to adaptively consider all image impairments, thereby avoiding errors
from scene descriptor misdirection. Specifically, we introduce an All-in-One
Transformer Block (AiOTB), which adaptively removes all degradations present in
a given image by modeling the relationships between all degradations and the
image embedding in latent space. To accurately address different variations
potentially present within the same type of degradation and minimize ambiguity,
AiOTB utilizes a composite scene descriptor consisting of both image and text
embeddings to define the degradation. Furthermore, AiOTB includes an adaptive
weight for each degradation, allowing for precise control of the restoration
intensity. By leveraging AiOTB, AllRestorer avoids misdirection caused by
inaccurate scene descriptors, achieving a 5.00 dB increase in PSNR compared to
the baseline on the CDD-11 dataset. | arXiv |
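A toy of the "adaptive weight for each degradation" idea: compare an image embedding against composite (image + text) degradation descriptors and softmax the similarities into restoration-intensity weights (all shapes and names are assumptions, not the AiOTB implementation).

```python
import numpy as np

def degradation_weights(img_emb, img_descs, txt_descs, tau=0.1):
    """Toy adaptive per-degradation weights: compare the image embedding
    against composite (image + text) descriptors and softmax the cosine
    similarities into restoration-intensity weights."""
    comp = (img_descs + txt_descs) / 2.0                      # composite descriptors
    comp /= np.linalg.norm(comp, axis=1, keepdims=True)
    q = img_emb / np.linalg.norm(img_emb)
    sims = comp @ q                                           # cosine similarities
    w = np.exp(sims / tau)
    return w / w.sum()

rng = np.random.default_rng(1)
print(degradation_weights(rng.normal(size=16),
                          rng.normal(size=(3, 16)),
                          rng.normal(size=(3, 16))))          # weights over 3 degradations
```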
The Gutzwiller trace formula establishes a profound connection between the
quantum spectrum and classical periodic orbits. However, its application is
limited by its reliance on the semiclassical saddle point approximation. In
this work, we explore the full quantum version of the trace formula using the
Lefschetz thimble method by incorporating complexified periodic orbits. Upon
complexification, classical real periodic orbits are transformed into cycles on
compact Riemann surfaces. Our key innovation lies in the simultaneous
complexification of the periods of cycles, resulting in a fully quantum trace
formula that accounts for all contributions classified by the homology classes
of the associated Riemann surfaces. This formulation connects the quantum
spectrum to contributions across all complex time directions, encompassing all
relevant homology classes. Our approach naturally unifies and extends two
established methodologies: periodic orbits in real time, as in Gutzwiller's
original work, and quantum tunneling in imaginary time, as in the instanton
method. | arXiv |
In this paper, we analyze the feature-based knowledge distillation for
recommendation from the frequency perspective. By defining knowledge as
different frequency components of the features, we theoretically demonstrate
that regular feature-based knowledge distillation is equivalent to equally
minimizing losses on all knowledge and further analyze how this equal loss
weight allocation method leads to important knowledge being overlooked. In
light of this, we propose to emphasize important knowledge by redistributing
knowledge weights. Furthermore, we propose FreqD, a lightweight knowledge
reweighting method, to avoid the computational cost of calculating losses on
each knowledge. Extensive experiments demonstrate that FreqD consistently and
significantly outperforms state-of-the-art knowledge distillation methods for
recommender systems. Our code is available at
\url{https://anonymous.4open.science/r/FreqKD/} | arXiv |
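A conceptual sketch of frequency-wise reweighting of a feature-distillation loss using a DCT; with equal weights it reduces to plain feature matching, mirroring the equivalence stated above. FreqD itself avoids computing per-component losses, so this is illustrative only, and the weight schedule is an assumption.

```python
import numpy as np
from scipy.fft import dct

def frequency_weighted_kd_loss(student_feat, teacher_feat, weights):
    """Decompose features into frequency components with a DCT along the
    feature dimension and penalize each component with its own weight.
    Equal weights recover the plain feature-matching (MSE) loss."""
    s = dct(student_feat, norm="ortho", axis=-1)
    t = dct(teacher_feat, norm="ortho", axis=-1)
    per_freq = np.mean((s - t) ** 2, axis=0)      # one error term per frequency
    return float(np.sum(weights * per_freq))

rng = np.random.default_rng(0)
s, t = rng.normal(size=(32, 64)), rng.normal(size=(32, 64))
w = np.linspace(2.0, 0.5, 64)                     # emphasize low frequencies (assumption)
print(frequency_weighted_kd_loss(s, t, w))
```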
Federated learning (FL) is vulnerable to model poisoning attacks due to its
distributed nature. The current defenses start from all user gradients (model
updates) in each communication round and solve for the optimal aggregation
gradients (horizontal solution). This horizontal solution will completely fail
when facing large-scale (>50%) model poisoning attacks. In this work, based on
the key insight that the convergence process of the model is a highly
predictable process, we break away from the traditional horizontal solution of
defense and innovatively transform the problem of solving the optimal
aggregation gradients into a vertical solution problem. We propose VERT, which
uses global communication rounds as the vertical axis, trains a predictor using
historical gradients information to predict user gradients, and compares the
similarity with actual user gradients to precisely and efficiently select the
optimal aggregation gradients. In order to reduce the computational complexity
of VERT, we design a low dimensional vector projector to project the user
gradients to a computationally acceptable length, and then perform subsequent
predictor training and prediction tasks. Exhaustive experiments show that VERT
is efficient and scalable, exhibiting excellent large-scale (>=80%) model
poisoning defense effects under different FL scenarios. In addition, we can
design projectors with different structures for different model architectures to
adapt to aggregation servers with different computing power. | arXiv |
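A much-simplified "vertical" selection rule in the spirit described above: predict each user's next gradient from its own history (here by trivial linear extrapolation rather than a trained predictor), project to a low dimension, and keep the users whose actual gradients are most similar to the prediction. All details below are assumptions.

```python
import numpy as np

def vert_select(history, current, proj_dim=32, keep_ratio=0.5, seed=0):
    """history: (rounds, users, dim) past gradients; current: (users, dim).
    Predict each user's gradient by linear extrapolation of its history,
    compare in a random low-dimensional projection, and keep the most
    consistent users for aggregation."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(history.shape[-1], proj_dim)) / np.sqrt(proj_dim)
    pred = 2 * history[-1] - history[-2]            # trivial "predictor"
    pred_p, cur_p = pred @ P, current @ P
    cos = np.sum(pred_p * cur_p, axis=1) / (
        np.linalg.norm(pred_p, axis=1) * np.linalg.norm(cur_p, axis=1) + 1e-12)
    k = max(1, int(keep_ratio * current.shape[0]))
    keep = np.argsort(-cos)[:k]
    return keep, current[keep].mean(axis=0)

rng = np.random.default_rng(1)
hist = rng.normal(size=(3, 10, 128))
cur = hist[-1] + 0.1 * rng.normal(size=(10, 128))
cur[:6] = rng.normal(size=(6, 128)) * 5.0           # 60% poisoned users (toy)
keep, agg = vert_select(hist, cur)
print(sorted(keep))                                  # mostly the honest users 6..9
```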
As the research of Multimodal Large Language Models (MLLMs) becomes popular,
an advancing MLLM model is typically required to handle various textual and
visual tasks (e.g., VQA, Detection, OCR, and ChartQA) simultaneously for
real-world applications. However, due to the significant differences in
representation and distribution among data from various tasks, simply mixing
data of all tasks together leads to the well-known ``multi-task conflict'' issue,
resulting in performance degradation across various tasks. To address this
issue, we propose Awaker2.5-VL, a Mixture of Experts~(MoE) architecture
suitable for MLLM, which acquires the multi-task capabilities through multiple
sparsely activated experts. To speed up the training and inference of
Awaker2.5-VL, each expert in our model is devised as a low-rank adaptation
(LoRA) structure. Extensive experiments on multiple latest benchmarks
demonstrate the effectiveness of Awaker2.5-VL. The code and model weight are
released in our Project Page: https://github.com/MetabrainAGI/Awaker. | arXiv |
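A minimal sketch of the general structure described above, a sparsely gated mixture of LoRA experts over a frozen weight (toy NumPy code; the routing, ranks, and initialization are assumptions, not the released Awaker2.5-VL code):

```python
import numpy as np

class LoRAMoE:
    """Frozen base weight plus sparsely gated low-rank (LoRA) experts:
    y = W x + sum_{e in top-k} g_e(x) * B_e (A_e x)."""
    def __init__(self, d_in, d_out, n_experts=4, rank=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)   # frozen base
        self.A = rng.normal(size=(n_experts, rank, d_in)) * 0.01
        self.B = np.zeros((n_experts, d_out, rank))               # LoRA init: B = 0
        self.router = rng.normal(size=(n_experts, d_in)) * 0.01
        self.top_k = top_k

    def __call__(self, x):
        logits = self.router @ x
        top = np.argsort(-logits)[: self.top_k]                   # sparse activation
        gates = np.exp(logits[top]); gates /= gates.sum()
        y = self.W @ x
        for g, e in zip(gates, top):
            y += g * (self.B[e] @ (self.A[e] @ x))
        return y

layer = LoRAMoE(d_in=16, d_out=16)
print(layer(np.ones(16)).shape)   # (16,)
```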
The discovery of the Dead Sea Scrolls over 60 years ago is widely regarded as
one of the greatest archaeological breakthroughs in modern history. Recent
study of the scrolls presents ongoing computational challenges, including
determining the provenance of fragments, clustering fragments based on their
degree of similarity, and pairing fragments that originate from the same
manuscript -- all tasks that require focusing on individual letter and fragment
shapes. This paper presents a computational method for segmenting ink and
parchment regions in multispectral images of Dead Sea Scroll fragments. Using
the newly developed Qumran Segmentation Dataset (QSD) consisting of 20
fragments, we apply multispectral thresholding to isolate ink and parchment
regions based on their unique spectral signatures. To refine segmentation
accuracy, we introduce an energy minimization technique that leverages ink
contours, which are more distinguishable from the background and less noisy
than inner ink regions. Experimental results demonstrate that this
Multispectral Thresholding and Energy Minimization (MTEM) method achieves
significant improvements over traditional binarization approaches like Otsu and
Sauvola in parchment segmentation and is successful at delineating ink borders,
in distinction from holes and background regions. | arXiv |
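A toy stand-in for the multispectral thresholding step: classify each pixel by its distance to reference spectral signatures for ink, parchment, and background (the signatures here are invented, and the energy-minimization refinement on ink contours is not reproduced).

```python
import numpy as np

def spectral_classify(cube, signatures):
    """cube: (H, W, bands) multispectral image; signatures: dict of
    class name -> (bands,) mean spectrum. Assign each pixel to the
    nearest reference spectrum."""
    names = list(signatures)
    refs = np.stack([signatures[n] for n in names])           # (C, bands)
    d = np.linalg.norm(cube[..., None, :] - refs, axis=-1)    # (H, W, C)
    return np.array(names)[np.argmin(d, axis=-1)]

rng = np.random.default_rng(0)
cube = rng.uniform(size=(4, 4, 6))                            # invented 6-band patch
sigs = {"ink": np.full(6, 0.1), "parchment": np.full(6, 0.6), "background": np.full(6, 0.9)}
print(spectral_classify(cube, sigs))
```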
Large Language Models (LLMs) have revolutionized natural language processing
by unifying tasks into text generation, yet their large parameter sizes and
autoregressive nature limit inference speed. SAM-Decoding addresses this by
introducing a novel retrieval-based speculative decoding method that uses a
suffix automaton for efficient and accurate draft generation. Unlike n-gram
matching used by the existing method, SAM-Decoding finds the longest suffix
match between the generated text and the text corpus, achieving an average time complexity
of $O(1)$ per generation step. SAM-Decoding constructs static and dynamic
suffix automata for the text corpus and input prompts, respectively, enabling
fast and precise draft generation. Meanwhile, it is designed as an approach
that can be combined with existing methods, allowing SAM-Decoding to adaptively
select a draft generation strategy based on the matching length, thus
increasing the inference speed of the LLM. When combined with Token Recycling,
evaluations show SAM-Decoding outperforms existing model-free methods,
achieving a speedup of $2.27\times$ over autoregressive decoding on Spec-Bench.
When combined with EAGLE2, it reaches a speedup of $2.49\times$, surpassing all
current approaches. Our code is available at
https://github.com/hyx1999/SAM-Decoding. | arXiv |
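The core retrieval primitive can be illustrated with a standard suffix automaton built over a corpus and walked along the generated text, so that the length of the longest matching suffix is known at every step (a textbook construction; draft selection and the adaptive combination with other methods are not reproduced).

```python
class SuffixAutomaton:
    """Suffix automaton of the corpus: recognizes every substring."""
    def __init__(self, corpus):
        self.nxt, self.link, self.length = [{}], [-1], [0]
        last = 0
        for ch in corpus:
            cur = len(self.length)
            self.nxt.append({}); self.link.append(-1)
            self.length.append(self.length[last] + 1)
            p = last
            while p != -1 and ch not in self.nxt[p]:
                self.nxt[p][ch] = cur
                p = self.link[p]
            if p == -1:
                self.link[cur] = 0
            else:
                q = self.nxt[p][ch]
                if self.length[p] + 1 == self.length[q]:
                    self.link[cur] = q
                else:                                   # clone state q
                    clone = len(self.length)
                    self.nxt.append(dict(self.nxt[q]))
                    self.link.append(self.link[q])
                    self.length.append(self.length[p] + 1)
                    while p != -1 and self.nxt[p].get(ch) == q:
                        self.nxt[p][ch] = clone
                        p = self.link[p]
                    self.link[q] = clone
                    self.link[cur] = clone
            last = cur

    def longest_suffix_matches(self, text):
        """For each position of `text`, the length of the longest suffix
        ending there that also occurs somewhere in the corpus."""
        v, l, out = 0, 0, []
        for ch in text:
            while v and ch not in self.nxt[v]:
                v = self.link[v]; l = self.length[v]
            if ch in self.nxt[v]:
                v = self.nxt[v][ch]; l += 1
            else:
                v, l = 0, 0
            out.append(l)
        return out

sam = SuffixAutomaton("the quick brown fox jumps over the lazy dog")
print(sam.longest_suffix_matches("a quick brow"))
```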
Latency is a major concern for web rendering engines like those in Chrome,
Safari, and Firefox. These engines reduce latency by using an incremental
layout algorithm to redraw the page when the user interacts with it. In such an
algorithm, elements that change frame-to-frame are marked dirty; only the dirty
elements need be processed to draw the next frame, dramatically reducing
latency. However, the standard incremental layout algorithm must search the
page for dirty elements, accessing a number of auxiliary elements in the
process. These auxiliary elements add cache misses and stalled cycles, and are
responsible for a sizable fraction of all layout latency. We introduce a new,
faster incremental layout algorithm called Spineless Traversal. Spineless
Traversal uses a more computationally demanding priority queue algorithm to
avoid the need to access auxiliary nodes and thus reduces cache traffic and
stalls. This leads to dramatic speedups on the most latency-critical
interactions such as hovering, typing, or animations. Moreover, thanks to
numerous low-level optimizations, we are able to make Spineless Traversal
competitive across the whole spectrum of incremental layout workloads. As a
result, across 2216 benchmarks, Spineless Traversal is faster on 78.2% of the
benchmarks, with a mean speedup of 3.23x concentrated in the most
latency-critical interactions such as hovering, typing, and animations. | arXiv |
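A toy of the priority-queue idea as I read it: keep dirty nodes in a min-heap keyed by pre-order index so the next node to lay out is popped directly, without walking clean ancestors (names and the relayout loop are invented for illustration).

```python
import heapq

class DirtyQueue:
    """Process dirty layout nodes in document (pre-order) order without
    traversing clean ancestors: a min-heap keyed by pre-order index."""
    def __init__(self):
        self.heap, self.in_heap = [], set()

    def mark_dirty(self, node):                  # node = (preorder_index, name)
        if node not in self.in_heap:
            self.in_heap.add(node)
            heapq.heappush(self.heap, node)

    def relayout(self):
        order = []
        while self.heap:
            node = heapq.heappop(self.heap)
            self.in_heap.discard(node)
            order.append(node[1])                # "lay out" the dirty node
        return order

q = DirtyQueue()
q.mark_dirty((17, "#tooltip")); q.mark_dirty((3, "nav a:hover")); q.mark_dirty((17, "#tooltip"))
print(q.relayout())                              # ['nav a:hover', '#tooltip']
```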
As technology advances, conceptualizations of effective strategies for
teaching and learning shift. Due in part to their facilitation of unique
affordances for learning, mobile devices, augmented reality, and games are all
becoming more prominent elements in learning environments. In this work, we
examine mobile augmented reality serious games (MARSGs) as the intersection of
these technology-based experiences and ask to what extent their combination can
yield even greater learning outcomes. We present a PRISMA review of 23 papers
(from 610) spanning the entire literature timeline from 2002 to 2023. Among
these works, there is wide variability in the realized application of game
elements and pedagogical theories underpinning the game experience. For an
educational tool to be effective, it must be designed to facilitate learning
while anchored by pedagogical theory. Given that most MARSG developers are not
pedagogical experts, this review further provides design considerations
regarding which game elements might proffer the best of three major pedagogical
theories for modern learning (cognitive constructivism, social constructivism,
and behaviorism) based on existing applications. We will also briefly touch on
radical constructivism and the instructional elements embedded within MARSGs.
Lastly, this work offers a synthesis of current MARSG findings and extended
future directions for MARSG development. | arXiv |
Generalizations of plain strings have been proposed as a compact way to
represent a collection of nearly identical sequences or to express uncertainty
at specific text positions by enumerating all possibilities. While a plain
string stores a character at each of its positions, generalizations consider a
set of characters (indeterminate strings), a set of strings of equal length
(generalized degenerate strings, or shortly GD strings), or a set of strings of
arbitrary lengths (elastic-degenerate strings, or shortly ED strings). These
generalizations are important for compactly representing such types of data, and
find applications in bioinformatics for representing and maintaining a set of
genetic sequences of the same taxonomy or a multiple sequence alignment. To be
of use, attention has been drawn to answering various query types such as
pattern matching or measuring similarity of ED strings by generalizing
techniques known to plain strings. However, for some types of queries, it has
been shown that a generalization of a polynomial-time solvable query on classic
strings becomes NP-hard on ED strings, e.g., [Russo et al., 2022]. In that light,
we wonder about other types of queries, which are of particular interest to
bioinformatics: the search for the longest repeating factor, unique substrings,
absent words, anti-powers, and longest previous factors. While we obtain a
polynomial time algorithm for the first problem on ED strings, we show that all
others are NP-hard to compute, some of them even under the restriction that the
input can be modelled as an indeterminate or GD string. | arXiv |
The practical applications of Wasserstein distances (WDs) are constrained by
their sample and computational complexities. Sliced-Wasserstein distances
(SWDs) provide a workaround by projecting distributions onto one-dimensional
subspaces, leveraging the more efficient, closed-form WDs for one-dimensional
distributions. However, in high dimensions, most random projections become
uninformative due to the concentration of measure phenomenon. Although several
SWD variants have been proposed to focus on \textit{informative} slices, they
often introduce additional complexity, numerical instability, and compromise
desirable theoretical (metric) properties of SWD. Amidst the growing literature
that focuses on directly modifying the slicing distribution, an approach that often faces its own
challenges, we revisit the classical Sliced-Wasserstein and propose instead to
rescale the 1D Wasserstein to make all slices equally informative. Importantly,
we show that with an appropriate data assumption and notion of \textit{slice
informativeness}, rescaling for all individual slices simplifies to \textbf{a
single global scaling factor} on the SWD. This, in turn, translates to the
standard learning rate search for gradient-based learning in common machine
learning workflows. We perform extensive experiments across various machine
learning tasks showing that the classical SWD, when properly configured, can
often match or surpass the performance of more complex variants. We then answer
the following question: "Is Sliced-Wasserstein all you need for common learning
tasks?" | arXiv |
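A minimal Monte Carlo sliced 2-Wasserstein distance with a single global scaling factor applied to every slice, the structure advocated above (the particular scale shown is an assumption; the paper derives when one global factor suffices).

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=128, scale=1.0, seed=0):
    """Monte Carlo sliced 2-Wasserstein distance between equal-size point
    clouds X, Y of shape (n, d), with one global scaling factor applied
    to every slice."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)     # random unit directions
    xs = np.sort(X @ theta.T, axis=0)                         # 1D projections, sorted
    ys = np.sort(Y @ theta.T, axis=0)
    w2_sq = np.mean((xs - ys) ** 2, axis=0)                   # per-slice squared W2
    return np.sqrt(np.mean(scale**2 * w2_sq))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64)); Y = rng.normal(loc=0.5, size=(500, 64))
print(sliced_wasserstein2(X, Y, scale=np.sqrt(64)))           # scale = sqrt(d) is an assumption
```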
In wireless communications, efficient image transmission must balance
reliability, throughput, and latency, especially under dynamic channel
conditions. This paper presents an adaptive and progressive pipeline for
learned image compression (LIC)-based architectures tailored to such
environments. We investigate two state-of-the-art learning-based models: the
hyperprior model and Vector Quantized Generative Adversarial Network (VQGAN).
The hyperprior model achieves superior compression performance through lossless
compression in the bottleneck but is susceptible to bit errors, necessitating
the use of error correction or retransmission mechanisms. In contrast, the
VQGAN decoder demonstrates robust image reconstruction capabilities even in the
absence of channel coding, enhancing reliability in challenging transmission
scenarios. We propose progressive versions of both models, enabling partial
image transmission and decoding under imperfect channel conditions. This
progressive approach not only maintains image integrity under poor channel
conditions but also significantly reduces latency by allowing immediate partial
image availability. We evaluate our pipeline using the Kodak high-resolution
image dataset under a Rayleigh fading wireless channel model simulating dynamic
conditions. The results indicate that the progressive transmission framework
enhances reliability and latency while maintaining or improving throughput
compared to non-progressive counterparts across various Signal-to-Noise Ratio
(SNR) levels. Specifically, the progressive-hyperprior model consistently
outperforms others in latency metrics, particularly in the 99.9th percentile
waiting time (the maximum waiting time experienced by 99.9% of transmission
instances) across all SNRs, and achieves higher throughput in
low-SNR scenarios, where Adaptive WebP fails. | arXiv |
Iterative methods such as iterative closest point (ICP) for point cloud
registration often suffer from poor local optima (e.g., saddle points) due
to the nature of nonconvex optimization. To address this fundamental challenge,
in this paper we propose learning to form the loss landscape of a deep
iterative method w.r.t. predictions at test time into a convex-like shape
locally around each ground truth given data, namely Deep Loss Convexification
(DLC), thanks to the overparametrization in neural networks. To this end, we
formulate our learning objective based on adversarial training by manipulating
the ground-truth predictions, rather than input data. In particular, we propose
using star-convexity, a family of structured nonconvex functions that are
unimodal on all lines that pass through a global minimizer, as our geometric
constraint for reshaping loss landscapes, leading to (1) extra novel hinge
losses appended to the original loss and (2) near-optimal predictions. We
demonstrate the state-of-the-art performance using DLC with existing network
architectures for the tasks of training recurrent neural networks (RNNs), 3D
point cloud registration, and multimodal image alignment. | arXiv |
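A toy hinge penalty expressing the star-convexity constraint around a ground truth $y^*$ with $f(y^*) \approx 0$, i.e. penalizing violations of $f(y^* + \lambda(y - y^*)) \le \lambda f(y)$ (illustrative only; DLC's actual losses are built from adversarially manipulated ground-truth predictions).

```python
import numpy as np

def star_convexity_hinge(loss_fn, y_star, y, lambdas=(0.25, 0.5, 0.75)):
    """Penalize violations of f(y* + lam*(y - y*)) <= lam * f(y),
    which holds for star-convex f with global minimizer y* and f(y*) = 0."""
    f_y = loss_fn(y)
    penalty = 0.0
    for lam in lambdas:
        interp = y_star + lam * (y - y_star)
        penalty += max(0.0, loss_fn(interp) - lam * f_y)
    return penalty

# toy test-time loss: squared error around y_star = 0
loss = lambda p: float(np.sum(p ** 2))
print(star_convexity_hinge(loss, np.zeros(3), np.array([1.0, -2.0, 0.5])))
# prints 0.0: the quadratic loss already satisfies the constraint
```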
The pseudogap state of high-$T_{\rm c}$ cuprates, known for its partial
gapping of the Fermi surface above the superconducting transition temperature
$T_{\rm c}$, is believed to hold the key to understanding the origin of
Planckian relaxation and quantum criticality. However, the nature of the Fermi
surface in the pseudogap state has remained a fundamental open question. Here,
we report the observation of the Yamaji effect above $T_{\rm c}$ in the single
layer cuprate HgBa$_2$CuO$_{4+\delta}$. This observation is direct evidence of
closed Fermi surface pockets in the normal state of the pseudogap phase. The
small size of the pockets determined from the Yamaji effect (occupying
approximately $1.3\%$ of the Brillouin zone area) is all the more surprising
given the absence of evidence for long-range broken translational symmetry that
can reconstruct the Fermi-surface. | arXiv |
Efforts are needed to identify and measure both communities' exposure to
climate hazards and the social vulnerabilities that interact with these
hazards, but the science of validating hazard vulnerability indicators is still
in its infancy. Progress is needed to improve: 1) the selection of variables
that are used as proxies to represent hazard vulnerability; 2) the
applicability and scale for which these indicators are intended, including
their transnational applicability. We administered an international urban
survey in Buenos Aires, Argentina; Johannesburg, South Africa; London, United
Kingdom; New York City, United States; and Seoul, South Korea in order to
collect data on exposure to various types of extreme weather events,
socioeconomic characteristics commonly used as proxies for vulnerability (i.e.,
income, education level, gender, and age), and additional characteristics not
often included in existing composite indices (i.e., queer identity, disability
identity, non-dominant primary language, and self-perceptions of both
discrimination and vulnerability to flood risk). We then use feature importance
analysis with gradient-boosted decision trees to measure the importance that
these variables have in predicting exposure to various types of extreme weather
events. Our results show that non-traditional variables were more relevant to
self-reported exposure to extreme weather events than traditionally employed
variables such as income or age. Furthermore, differences in variable relevance
across different types of hazards and across urban contexts suggest that
vulnerability indicators need to be fit to context and should not be used in a
one-size-fits-all fashion. | arXiv |
Pressure injury (PI) detection is challenging, especially in dark skin tones,
due to the unreliability of visual inspection. Thermography has been suggested
as a viable alternative as temperature differences in the skin can indicate
impending tissue damage. Although deep learning models have demonstrated
considerable promise toward reliably detecting PI, the existing work fails to
evaluate the performance on darker skin tones and varying data collection
protocols. In this paper, we introduce a new thermal and optical imaging
dataset of 35 participants focused on darker skin tones where temperature
differences are induced through cooling and cupping protocols. We vary the
image collection process to include different cameras, lighting, patient pose,
and camera distance. We compare the performance of a small convolutional neural
network (CNN) trained on either the thermal or the optical images on all skin
tones. Our preliminary results suggest that thermography-based CNN is robust to
data collection protocols for all skin tones. | arXiv |
We address the issue of the exploding computational requirements of recent
state-of-the-art (SOTA) open-set multimodal 3D mapping (dense 3D mapping)
algorithms and present Voxel-Aggregated Feature Synthesis (VAFS), a novel
approach to dense 3D mapping in simulation. Dense 3D mapping involves
segmenting and embedding sequential RGBD frames which are then fused into 3D.
This leads to redundant computation as the differences between frames are small
but all are individually segmented and embedded. This makes dense 3D mapping
impractical for research involving embodied agents in which the environment,
and thus the mapping, must be modified with regularity. VAFS drastically
reduces this computation by using the segmented point cloud computed by a
simulator's physics engine and synthesizing views of each region. This reduces
the number of features to embed from the number of captured RGBD frames to the
number of objects in the scene, effectively allowing a "ground truth" semantic
map to be computed an order of magnitude faster than traditional methods. We
test the resulting representation by assessing the IoU scores of semantic
queries for different objects in the simulated scene, and find that VAFS
exceeds the accuracy and speed of prior dense 3D mapping techniques. | arXiv |
Resilient divertor features connected to open chaotic edge structures in the
Helically Symmetric Experiment (HSX) are investigated. For the first time, an
expanded vessel wall was considered that would give space for implementation of
a physical divertor target structure. The analysis was done for four different
magnetic configurations with very different chaotic plasma edges. A resilient
plasma wall interaction pattern was identified across all configurations. This
manifests as qualitatively very similar footprint behavior across the different
plasma equilibria. Overall, the resilient field lines of interest with high
connection length $L_C$ lie within a helical band along the wall for all
configurations. This resiliency can be used to identify the best location of a
divertor. The details of the magnetic footprint's resilient helical band are
subject to specific field line structures which are linked to the penetration
depth of field lines into the plasma and directly influence the heat and
particle flux patterns. The differences arising from these details are
characterized by introducing a new metric, the minimum radial connection
$\text{min}(\delta_N)$ of a field line from the last closed flux surface. The
relationship, namely the deviation from a scaling law, between
$\text{min}(\delta_N)$ and $L_C$ of the field lines in the plasma edge field
line behavior suggests that the field lines are associated with structures such
as resonant islands, cantori, and turnstiles. This helps determine the relevant
magnetic flux channels based on the radial location of these chaotic edge
structures and the divertor target footprint. These details will need to be
taken into account for resilient divertor design. | arXiv |
In this work we establish under certain hypotheses the $N \to +\infty$
asymptotic expansion of integrals of the form $$\mathcal{Z}_{N,\Gamma}[V] \, =
\, \int_{\Gamma^N} \prod_{ a < b}^{N}(z_a - z_b)^\beta \, \prod_{k=1}^{N}
\mathrm{e}^{ - N \beta V(z_k) } \, \mathrm{d}\mathbf{z}$$ where $V \in
\mathbb{C}[X]$, $\beta \in 2 \mathbb{N}^*$ is an even integer and $\Gamma
\subset \mathbb{C}$ is an unbounded contour such that the integral converges.
For real-valued $V$ of even degree and when $\Gamma = \mathbb{R}$, it is well
known that the large-$N$ expansion is characterised by an equilibrium measure
corresponding to the minimiser of an appropriate energy functional. This method
bears a structural resemblance with the Laplace method. By contrast, in the
complex valued setting we are considering, the analysis structurally resembles
the classical steepest-descent method, and involves finding a critical point
\textit{and} a steepest descent curve, the latter being a deformation of the
original integration contour. More precisely, one minimises a curve-dependent
energy functional with respect to measures on the curve and then maximises the
energy over an appropriate space of curves. Our analysis deals with the one-cut
regime of the associated equilibrium measure. We establish the existence of an
all order asymptotic expansion for $\ln \mathcal{Z}_{N,\Gamma}[V]$ and
explicitly identify the first few terms. | arXiv |
We investigate the dynamic behavior of spin reversal events in the dilute
Ising model, focusing on the influence of static disorder introduced by pinned
spins. Our Monte Carlo simulations reveal that in a homogeneous, defect-free
system, the inter-event time (IET) between local spin flips follows an
exponential distribution, characteristic of Poissonian processes. However, in
heterogeneous systems where defects are present, we observe a significant
departure from this behavior. At high temperatures, the IET exhibits a
power-law distribution resulting from the interplay of spins located in varying
potential environments, where defect density influences reversal probabilities.
At low temperatures, all site classes converge to a unique power-law
distribution, regardless of their potential, leading to distinct critical
exponents for the high- and low-temperature regimes. This transition from
exponential to power-law behavior underscores the critical response features of
magnetic systems with defects, suggesting analogies to glassy dynamics. Our
findings highlight the complex mechanisms governing spin dynamics in disordered
systems, with implications for understanding the universal aspects of
relaxation in glassy materials. | arXiv |
Federated Learning (FL) enables collaborative, personalized model training
across multiple devices without sharing raw data, making it ideal for pervasive
computing applications that optimize user-centric performance in diverse
environments. However, data heterogeneity among clients poses a significant
challenge, leading to inconsistencies among trained client models and reduced
performance. To address this, we introduce the Alignment with Prototypes (ALP)
layers, which align incoming embeddings closer to learnable prototypes through
an optimal transport plan. During local training, the ALP layer updates local
prototypes and aligns embeddings toward global prototypes aggregated from all
clients using our novel FL framework, Federated Alignment (FedAli). For model
inferences, embeddings are guided toward local prototypes to better reflect the
client's local data distribution. We evaluate FedAli on heterogeneous
sensor-based human activity recognition and vision benchmark datasets,
demonstrating that it outperforms existing FL strategies. We publicly release
our source code to facilitate reproducibility and further research. | arXiv |
Price forecasting for used construction equipment is a challenging task due
to spatial and temporal price fluctuations. It is thus of high interest to
automate the forecasting process based on current market data. Even though
applying machine learning (ML) to these data represents a promising approach to
predict the residual value of certain tools, it is hard to implement for small
and medium-sized enterprises due to their insufficient ML expertise. To this
end, we demonstrate the possibility of substituting manually created ML
pipelines with automated machine learning (AutoML) solutions, which
automatically generate the underlying pipelines. We combine AutoML methods with
the domain knowledge of the companies. Based on the CRISP-DM process, we split
the manual ML pipeline into a machine learning and non-machine learning part.
To take all complex industrial requirements into account and to demonstrate the
applicability of our new approach, we designed a novel metric named method
evaluation score, which incorporates the most important technical and
non-technical metrics for quality and usability. Based on this metric, we show
in a case study on the industrial use case of price forecasting that domain
knowledge combined with AutoML can weaken the dependence on ML experts for
innovative small and medium-sized enterprises that are interested in
deploying such solutions. | arXiv |
Rocky planets in our Solar System, namely Mercury, Venus, Earth, Mars, and
the Moon, which is generally added to this group due to its geological
complexity, possess a solid surface and share a common structure divided into
major layers, namely a silicate crust, a silicate mantle, and an iron-rich
core. However, while all terrestrial planets share a common structure, the
thickness of their interior layers, their bulk chemical composition, and
surface expressions of geological processes are often unique to each of them.
In this chapter we provide an overview of the surfaces and interiors of rocky
planets in the Solar System. We list some of the major discoveries in planetary
exploration and discuss how they have helped to answer fundamental questions
about planetary evolution while at the same time opening new avenues. For each
of the major planetary layers, i.e., the surface, the crust and lithosphere,
the mantle, and the core, we review key geological and geophysical processes
that have shaped the planets that we observe today. Understanding the
similarities and differences between the terrestrial planets in the Solar
System will teach us about the diversity of evolutionary paths a planet could
follow, helping us to better understand our own home, the Earth. | arXiv |
We prove that for $1\le p,q\le\infty$ the mixed-norm spaces $L_q(L_p)$ are
mutually non-isomorphic, with the only exception that $L_q(L_2)$ is isomorphic
to $L_q(L_q)$ for all $1<q<\infty$. | arXiv |
Identifying predictive features from high-dimensional datasets is a major
task in biomedical research. However, it is difficult to determine the
robustness of selected features. Here, we investigate the performance of
randomly chosen features, what we term "random feature baselines" (RFBs), in
the context of disease risk prediction from blood plasma proteomics data in the
UK Biobank. We begin with two published case studies predicting diagnosis of
(1) dementia and (2) hip fracture. RFBs perform similarly to published features
of interest (using the same number of proteins, but randomly chosen). We then
measure the performance of RFBs for all 607 disease outcomes in the UK Biobank,
with various numbers of randomly chosen features, as well as all proteins in
the dataset. 114/607 outcomes showed a higher mean AUROC when choosing 5 random
features than using all proteins, and the absolute difference in mean AUC was
0.075. 163 outcomes showed a higher mean AUROC when choosing 1000 random
features than using all proteins, and the absolute difference in mean AUC was
0.03. Incorporating RFBs should become part of ML practice when feature
selection or target discovery is a goal. | arXiv |
In computer vision tasks, the ability to focus on relevant regions within an
image is crucial for improving model performance, particularly when key
features are small, subtle, or spatially dispersed. Convolutional neural
networks (CNNs) typically treat all regions of an image equally, which can lead
to inefficient feature extraction. To address this challenge, I have introduced
Vision Eagle Attention, a novel attention mechanism that enhances visual
feature extraction using convolutional spatial attention. The model applies
convolution to capture local spatial features and generates an attention map
that selectively emphasizes the most informative regions of the image. This
attention mechanism enables the model to focus on discriminative features while
suppressing irrelevant background information. I have integrated Vision Eagle
Attention into a lightweight ResNet-18 architecture, demonstrating that this
combination results in an efficient and powerful model. I have evaluated the
performance of the proposed model on three widely used benchmark datasets:
FashionMNIST, Intel Image Classification, and OracleMNIST, with a primary focus
on image classification. Experimental results show that the proposed approach
improves classification accuracy. Additionally, this method has the potential
to be extended to other vision tasks, such as object detection, segmentation,
and visual tracking, offering a computationally efficient solution for a wide
range of vision-based applications. Code is available at:
https://github.com/MahmudulHasan11085/Vision-Eagle-Attention.git | arXiv |
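A minimal PyTorch sketch of a convolutional spatial-attention block of the kind described above: a small convolution produces a per-pixel map that reweights the feature map (my reading of the mechanism, not the released implementation).

```python
import torch
import torch.nn as nn

class ConvSpatialAttention(nn.Module):
    """Toy convolutional spatial attention: a small conv produces a
    per-pixel attention map in [0, 1] that reweights the input features."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))   # (B, 1, H, W)
        return x * attn                      # emphasize informative regions

x = torch.randn(2, 64, 32, 32)
block = ConvSpatialAttention(64)
print(block(x).shape)                        # torch.Size([2, 64, 32, 32])
```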
This paper introduces a large-scale multi-modal dataset captured in and
around well-known landmarks in Oxford using a custom-built multi-sensor
perception unit as well as a millimetre-accurate map from a Terrestrial LiDAR
Scanner (TLS). The perception unit includes three synchronised global shutter
colour cameras, an automotive 3D LiDAR scanner, and an inertial sensor - all
precisely calibrated. We also establish benchmarks for tasks involving
localisation, reconstruction, and novel-view synthesis, which enable the
evaluation of Simultaneous Localisation and Mapping (SLAM) methods,
Structure-from-Motion (SfM) and Multi-view Stereo (MVS) methods as well as
radiance field methods such as Neural Radiance Fields (NeRF) and 3D Gaussian
Splatting. To evaluate 3D reconstruction the TLS 3D models are used as ground
truth. Localisation ground truth is computed by registering the mobile LiDAR
scans to the TLS 3D models. Radiance field methods are evaluated not only with
poses sampled from the input trajectory, but also from viewpoints that are from
trajectories which are distant from the training poses. Our evaluation
demonstrates a key limitation of state-of-the-art radiance field methods: we
show that they tend to overfit to the training poses/images and do not
generalise well to out-of-sequence poses. They also underperform in 3D
reconstruction compared to MVS systems using the same visual inputs. Our
dataset and benchmarks are intended to facilitate better integration of
radiance field methods and SLAM systems. The raw and processed data, along with
software for parsing and evaluation, can be accessed at
https://dynamic.robots.ox.ac.uk/datasets/oxford-spires/. | arXiv |
We study the binary discrimination problem of identification of boosted $H\to
gg$ decays from massive QCD jets in a systematic expansion in the strong
coupling. Though this decay mode of the Higgs is unlikely to be discovered at
the LHC, we analytically demonstrate several features of the likelihood ratio
for this problem through explicit analysis of signal and background matrix
elements. Through leading order, we prove that, by imposing a constraint on the
jet mass and measuring the energy fraction of the softer subjet, one obtains an
improvement in the signal-to-background ratio that is independent of the jet
kinematics at high boosts and is approximately equal to the inverse of
the strong coupling evaluated at the Higgs mass. At next-to-leading order, we
construct a powerful discrimination observable through a sort of anomaly
detection approach by simply inverting the next-to-leading order $H\to gg$
matrix element with soft gluon emission, which is naturally infrared and
collinear safe. Our analytic conclusions are validated in simulated data from
all-purpose event generators and subsequent parton showering and demonstrate
that the signal-to-background ratio can be improved by a factor of several
hundred at high, but accessible, jet energies at the LHC. | arXiv |
We present a detailed analysis of EELG1002: a $z = 0.8275$ EELG identified
within archival Gemini/GMOS spectroscopy as part of the COSMOS Spectroscopic
Archive. Combining GMOS spectra and available multi-wavelength photometry, we
find EELG1002 is a low-mass ($10^{7 - 8}$ M$_\odot$), compact ($\sim 530$ pc),
and bursty star-forming galaxy with mass doubling timescales of $\sim 5 - 15$
Myr. EELG1002 has record-breaking rest-frame [OIII]+H$\beta$ EW of $\sim 2800 -
3700$\AA~which is $\sim 16 - 35 \times$ higher than typical $z \sim 0.8$ [OIII]
emitters with similar stellar mass and even higher than typical $z > 5$
galaxies. We find no clear evidence of an AGN suggesting the emission lines are
star formation driven. EELG1002 is chemically unevolved (direct $T_e$;
$12+\log_{10} (\textrm{O/H}) \sim 7.5$ consistent with $z > 5$ galaxies at
fixed stellar mass) and may be undergoing a first intense, bursty star
formation phase analogous to conditions expected of galaxies in the early
Universe. We find evidence for a highly energetic ISM ([OIII]/[OII] $\sim 11$)
and hard ionizing radiation field (elevated [NeIII]/[OII] at fixed
[OIII]/[OII]). Coupled with its compact, metal-poor, and actively star-forming
nature, EELG1002 is found to efficiently produce ionizing photons with
$\xi_{ion} \sim 10^{25.70 - 25.75}$ erg$^{-1}$ Hz and may have $\sim 10 - 20\%$
LyC escape fraction suggesting such sources may be important reionization-era
analogs. We find dynamical mass of $\sim 10^9$ M$_\odot$ suggesting copious
amounts of gas to support intense star-formation activity as also suggested by
analogs identified in Illustris-TNG. EELG1002 may be an ideal low-$z$
laboratory of galaxies in the early Universe and demonstrates how archival
datasets can support high-$z$ science and next-generation surveys planned with
\textit{Euclid} and \textit{Roman}. | arXiv |
Nearly all cool, evolved stars are solar-like oscillators, and fundamental
stellar properties can be inferred from these oscillations with
asteroseismology. Scaling relations are commonly used to relate global
asteroseismic properties, the frequency of maximum power $\nu_{max}$ and the
large frequency separation $\Delta \nu$, to stellar properties. Mass, radius,
and age can then be inferred with the addition of stellar spectroscopy. There
is excellent agreement between seismic radii and fundamental data on the lower
red giant branch and red clump. However, the scaling relations appear to
break down in luminous red giant stars. We attempt to constrain the
contributions of the asteroseismic parameters to the observed breakdown. We
test the $\nu_{max}$ and $\Delta \nu$ scaling relations separately, by using
stars of known mass and radius in star clusters and the Milky Way's
high-$\alpha$ sequence. We find evidence that the $\Delta \nu$-scaling relation
contributes to the observed breakdown in luminous giants more than the
$\nu_{max}$ relation. We test different methods of mapping the observed $\Delta
\nu$ to the mean density via a correction factor, $F_{\Delta \nu}$ and find a
$\approx 1 - 3\%$ difference in the radii in the luminous giant regime
depending on the technique used to measure $F_{\Delta \nu}$. The differences
between the radii inferred by these two techniques are too small on the
luminous giant branch to account for the inflated seismic radii observed in
evolved giant stars. Finally, we find that the $F_{\Delta \nu}$ correction is
insensitive to the adopted mixing length, chosen by calibrating the models to
observations of $T_{eff}$. | arXiv |
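For context, the global scaling relations referred to above are commonly written as follows, with the correction factor $F_{\Delta\nu}$ entering the $\Delta\nu$ relation under one common convention (standard textbook forms, not equations taken from the paper):

```latex
\frac{\nu_{\max}}{\nu_{\max,\odot}} \simeq
  \left(\frac{M}{M_{\odot}}\right)
  \left(\frac{R}{R_{\odot}}\right)^{-2}
  \left(\frac{T_{\rm eff}}{T_{{\rm eff},\odot}}\right)^{-1/2},
\qquad
\frac{\Delta\nu}{\Delta\nu_{\odot}} \simeq
  F_{\Delta\nu}\,
  \left(\frac{M}{M_{\odot}}\right)^{1/2}
  \left(\frac{R}{R_{\odot}}\right)^{-3/2},
% which invert to a seismic radius of the form
\frac{R}{R_{\odot}} \simeq
  \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)
  \left(\frac{\Delta\nu}{F_{\Delta\nu}\,\Delta\nu_{\odot}}\right)^{-2}
  \left(\frac{T_{\rm eff}}{T_{{\rm eff},\odot}}\right)^{1/2}.
```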
We explore the possibility that exotic forms of dark matter could expose
humans on Earth or on prolonged space travel to a significant radiation dose.
The radiation exposure from dark matter interacting with nuclei in the human
body is generally assumed to be negligible compared to other sources of
background radiation. However, as we discuss here, current data allow for dark
matter models where this is not necessarily true. In particular, if dark matter
is heavier and more strongly interacting than weakly interacting massive
particle dark matter, it could act as ionizing radiation and deposit a
significant amount of radiation energy in all or part of the human population,
similar to or even exceeding the known radiation exposure from other background
sources. Conversely, the non-observation of such an exposure can be used to
constrain this type of heavier and more strongly interacting dark matter. We
first consider the case where dark matter scatters elastically and identify the
relevant parameter space in a model-independent way. We also discuss how
previous bounds from cosmological probes, as well as atmospheric and
space-based detectors, might be avoided, and how a re-analysis of existing
radiation data, along with a simple experiment monitoring ionizing radiation in
space with a lower detection threshold, could help constrain part of this
parameter space. We finally propose a hypothetical dark matter candidate that
scatters inelastically and argue that, in principle, one per mille of the
Earth's population could attain a significant radiation dose from such a dark
matter exposure in their lifetime. | arXiv |
Quenching of star-formation plays a fundamental role in galaxy evolution.
This process occurs due to the removal of the cold interstellar medium (ISM) or
stabilization against collapse, so that gas cannot be used in the formation of
new stars. In this paper, we study the effect of different mechanisms of ISM
removal. In particular, we revised the well-known Baldwin-Philips-Terlevich
(BPT) and $\mathrm{EW_{H\alpha}}$ vs. $\mathrm{[NII]/H\alpha}$ (WHAN)
emission-line ratio diagnostics, so that we could classify all galaxies, even
those not detected at some emission lines, introducing several new spectral
classes. We use spectroscopic data and several physical parameters of 2409
dusty early-type galaxies in order to find out the dominant ionization source
[active galactic nuclei (AGNs), young massive stars, hot low-mass evolved stars
(HOLMES)] and its effect on the ISM. We find that strong AGNs can play a
significant role in the ISM removal process only for galaxies with ages lower
than $10^{9.4}$ yr, but we cannot rule out the influence of weak AGNs at any
age. For older galaxies, HOLMES/planetary nebulae contribute significantly to
the ISM removal process. Additionally, we provide the BPT and WHAN
classifications not only for the selected sample but also for all 300000
galaxies in the GAMA fields. | arXiv |
The paper is concerned with a scalar conservation law with discontinuous
gradient-dependent flux. Namely, the flux is described by two different
functions $f(u)$ or $g(u)$, when the gradient $u_x$ of the solution is positive
or negative, respectively. We study here the stable case where $f(u)<g(u)$ for
all $u\in {\mathbb R}$, with $f,g$ smooth but possibly not convex. A front
tracking algorithm is introduced, proving that piecewise constant
approximations converge to the trajectories of a contractive semigroup on
$\mathbf{L}^1({\mathbb R})$. In the spatially periodic case, we prove that
semigroup trajectories coincide with the unique limits of a suitable class of
vanishing viscosity approximations. | arXiv |
Single-photon emission from a two-level system offers promising perspectives
for the development of quantum technologies, where multiphoton events are
generally regarded as accidental and undesired, and should be suppressed. In quantum
mechanics, however, multiphoton emission can turn out to be even more
fundamental and interesting than the single-photon emission, since in a
coherently driven system, the multiphoton suppression arises from quantum
interferences between virtual multiphoton fluctuations and the mean field in a
Poisson superposition of all number states. Here, we demonstrate how one can
control the multiphoton dynamics of a two-level system by disrupting these
quantum interferences through a precise and independent homodyne control of the
mean field. We show that, counterintuitively, quantum fluctuations always play
a major qualitative role, even and in fact especially, when their quantitative
contribution is vanishing as compared to that of the mean field. Our findings
provide new insights into the paradoxical character of quantum mechanics and
open pathways for mean-field engineering as a tool for precision multiphoton
control. | arXiv |
Applied macroeconomists frequently use impulse response estimators motivated
by linear models. We study whether the estimands of such procedures have a
causal interpretation when the true data generating process is in fact
nonlinear. We show that vector autoregressions and linear local projections
onto observed shocks or proxies identify weighted averages of causal effects
regardless of the extent of nonlinearities. By contrast, identification
approaches that exploit heteroskedasticity or non-Gaussianity of latent shocks
are highly sensitive to departures from linearity. Our analysis is based on new
results on the identification of marginal treatment effects through weighted
regressions, which may also be of interest to researchers outside
macroeconomics. | arXiv |
We present an approach that can be utilized in order to account for the
covariate shift between two datasets of the same observable with different
distributions, so as to improve the generalizability of a neural network model
trained on in-distribution samples (IDs) when inferring cosmology at the field
level on out-of-distribution samples (OODs) of {\it unknown labels}. We make
use of HI maps from the two simulation suites in CAMELS, IllustrisTNG and
SIMBA. We consider two different techniques, namely adversarial approach and
optimal transport, to adapt a target network whose initial weights are those of
a source network pre-trained on a labeled dataset. Results show that after
adaptation, salient features that are extracted by source and target encoders
are well aligned in the embedding space, indicating that the target encoder has
learned the representations of the target domain via the adversarial training
and optimal transport. Furthermore, in all scenarios considered in our
analyses, the target encoder, which does not have access to any labels
($\Omega_{\rm m}$) during adaptation phase, is able to retrieve the underlying
$\Omega_{\rm m}$ from out-of-distribution maps to a great accuracy of $R^{2}$
score $\ge$ 0.9, comparable to the performance of the source encoder trained in
a supervised learning setup. We further test the viability of the techniques
when only a few out-of-distribution instances are available and find that the
target encoder still reasonably recovers the matter density. Our approach is
critical in extracting information from upcoming large scale surveys. | arXiv |
Galaxies at redshift $z\sim 1-2$ display high star formation rates (SFRs)
with elevated cold gas fractions and column densities. Simulating a
self-regulated ISM in a hydrodynamical, self-consistent context, has proven
challenging due to strong outflows triggered by supernova (SN) feedback. At
sufficiently high gas column densities, and in the absence of magnetic fields,
these outflows prevent a quasi-steady disk from forming at all. To this end, we
present GHOSDT, a suite of magneto-hydrodynamical simulations that implement
ISM physics at high resolution. We demonstrate the importance of magnetic
pressure in the stabilization of gas-rich star-forming disks. We show that a
relation between the magnetic field and gas surface density emerges naturally
from our simulations. We argue that the magnetic field in the dense,
star-forming gas, may be set by the SN-driven turbulent gas motions. When
compared to pure hydrodynamical runs, we find that the inclusion of magnetic
fields increases the cold gas fraction and reduces the disc scale height, both
by up to a factor of $\sim 2$, and reduces the star formation burstiness. In
dense ($n>100\;\rm{cm}^{-3}$) gas, we find steady-state magnetic field
strengths of 10--40 $\mu$G, comparable to those observed in molecular clouds.
Finally, we demonstrate that our simulation framework is consistent with the
Ostriker & Kim (2022) pressure-regulated, feedback-modulated theory of star
formation and stellar feedback. | arXiv |
We study vertical resonant trapping and resonant heating of orbits. These two
processes both lead to the growth of a boxy/peanut-shaped bulge in a typical
$N$-body model. For the first time, we study this by means of the action
variables and resonant angles of the actual orbits that compose the model
itself. We used the resonant angle instead of the frequency ratio, which
allowed us to clearly distinguish between these two processes in numerical
simulations. We show that trapping and heating occur simultaneously, at least
at the stage of a mature bar, that is, some orbits quickly pass through
vertical resonance while at the same time, a substantial number of orbits
remains trapped in this resonance for a long time. Half of all bar orbits spend
more than 2.5 Gyr in vertical resonance over an interval of 4 Gyr. Half of the
orbits trapped into the bar over the last 3 Gyr of simulation remain captured
in vertical resonance for more than 2 Gyr. We conclude that in the later stages
of the bar evolution, the process of vertical trapping dominates in the ongoing
process that causes the boxy/peanut shape of a bar in a typical $N$-body model.
This contradicts the results of several recent works. | arXiv |
Internal crack detection has been a subject of focus in structural health
monitoring. By focusing on crack detection in structural datasets, it is
demonstrated that deep learning (DL) methods can effectively analyze seismic
wave fields interacting with micro-scale cracks, which are beyond the
resolution of conventional visual inspection. This work explores a novel
application of DL-based key point detection technique, where cracks are
localized by predicting the coordinates of four key points that define a
bounding region of the crack. The study not only opens new research directions
for non-visual applications but also effectively mitigates the impact of
imbalanced data, which poses a challenge for previous DL models, as they can be
biased toward predicting the majority class (non-crack regions). Popular DL
techniques, such as the Inception blocks, are used and investigated. The model
shows an overall reduction in loss when applied to micro-scale crack detection
and is reflected in the lower average deviation between the location of actual
and predicted cracks, with an average Intersection over Union (IoU) being 0.511
for all micro cracks (greater than 0.00 micrometers) and 0.631 for larger micro
cracks (greater than 4 micrometers). | arXiv |
Studies investigating the causal effects of spatially varying exposures on
health$\unicode{x2013}$such as air pollution, green space, or
crime$\unicode{x2013}$often rely on observational and spatially indexed data. A
prevalent challenge is unmeasured spatial confounding, where an unobserved
spatially varying variable affects both exposure and outcome, leading to biased
causal estimates and invalid confidence intervals. In this paper, we introduce
a general framework based on instrumental variables (IV) that encompasses and
unites most of the existing methods designed to account for an unmeasured
spatial confounder. We show that a common feature of all existing methods is
their reliance on small-scale variation in exposure, which functions as an IV.
In this framework, we outline the underlying assumptions and the estimation
strategy of each method. Furthermore, we demonstrate that the IV can be used to
identify and estimate the exposure-response curve under more relaxed
assumptions. We conclude by estimating the exposure-response curve between
long-term exposure to fine particulate matter and all-cause mortality among
33,454 zip codes in the United States while adjusting for unmeasured spatial
confounding. | arXiv |
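A toy two-stage illustration of the IV idea described above, where a hypothetical small-scale component of the exposure plays the role of the instrument; this is a simplified linear sketch with simulated data, not the paper's estimator or dataset.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical data-generating process with an unmeasured spatial confounder u.
    u = rng.normal(size=n)                      # smooth, large-scale confounder
    z = rng.normal(size=n)                      # small-scale exposure variation (implicit IV)
    x = u + z + rng.normal(size=n)              # exposure
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # outcome; true causal effect is 2

    # Naive OLS slope of y on x is biased by u.
    beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    # IV (Wald) estimator using the small-scale variation as instrument.
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

    print(round(beta_ols, 2), round(beta_iv, 2))  # roughly 3.0 (biased) vs 2.0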
Different methods can be employed to render virtual reverberation, often
requiring substantial information about the room's geometry and the acoustic
characteristics of the surfaces. However, fully comprehensive approaches that
account for all aspects of a given environment may be computationally costly
and redundant from a perceptual standpoint. For these methods, achieving a
trade-off between perceptual authenticity and model complexity becomes a
relevant challenge.
This study investigates this compromise through the use of geometrical
acoustics to render Ambisonics-based binaural reverberation. Its precision is
determined, among other factors, by its fidelity to the room's geometry and to
the acoustic properties of its materials.
The purpose of this study is to investigate the impact of simplifying the
room geometry and the frequency resolution of absorption coefficients on the
perception of reverberation within a virtual sound scene. Several decimated
models based on a single room were perceptually evaluated using a
multi-stimulus comparison method. Additionally, these differences were
numerically assessed through the calculation of acoustic parameters of the
reverberation.
According to numerical and perceptual evaluations, lowering the frequency
resolution of absorption coefficients can have a significant impact on the
perception of reverberation, while a less notable impact was observed when
decimating the geometry of the model. | arXiv |
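As one example of the "acoustic parameters of the reverberation" that can be compared numerically across decimated models, here is a short sketch of reverberation time estimated from an impulse response via backward (Schroeder) integration; the synthetic exponentially decaying noise below is only a stand-in for the rendered responses, and the fit range is an assumption.

    import numpy as np

    def rt60_from_ir(ir, fs, db_lo=-25.0, db_hi=-5.0):
        """Estimate RT60 from an impulse response using Schroeder backward
        integration and a linear fit on the (db_hi, db_lo) decay range,
        extrapolated to 60 dB."""
        edc = np.cumsum(ir[::-1] ** 2)[::-1]              # energy decay curve
        edc_db = 10.0 * np.log10(edc / edc[0])
        t = np.arange(len(ir)) / fs
        sel = (edc_db <= db_hi) & (edc_db >= db_lo)
        slope, _ = np.polyfit(t[sel], edc_db[sel], 1)     # dB per second
        return -60.0 / slope

    # Synthetic exponentially decaying noise as a hypothetical impulse response.
    fs = 48000
    t = np.arange(0, 1.5, 1.0 / fs)
    ir = np.random.default_rng(1).normal(size=t.size) * np.exp(-3.0 * np.log(10) * t / 0.6)
    print(round(rt60_from_ir(ir, fs), 2))  # close to the 0.6 s used to build it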
Simulations and observations suggest that galaxy interactions may enhance the
star formation rate (SFR) in merging galaxies. One proposed mechanism is the
torque exerted on the gas and stars in the larger galaxy by the smaller galaxy.
We analyze the interaction torques and star formation activity in six galaxies
from the FIRE-2 simulation suite with masses comparable to the Milky Way galaxy
at redshift $z=0$. We trace the halos from $z = 3.6$ to $z=0$, calculating the
torque exerted by the nearby galaxies on the gas in the central galaxy. We
calculate the correlation between the torque and the SFR across the simulations
for various mass ratios. For near-equal-stellar-mass-ratio interactions in the
galaxy sample, occurring between $z=1.2-3.6$, there is a positive and
statistically significant correlation between the torque from nearby galaxies
on the gas of the central galaxies and the SFR. For all other samples, no
statistically significant correlation is found between the torque and the SFR.
Our analysis shows that some, but not all, major interactions cause starbursts
in the simulated Milky Way-mass galaxies, and that most starbursts are not
caused by galaxy interactions. The transition from a `bursty' star-formation
state at high redshift ($z\gtrsim1$) to a `steady' state at later times is independent of
the interaction history of the galaxies, and most of the interactions do not
leave significant imprints on the overall trend of the star formation history
of the galaxies. | arXiv |
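A schematic of the torque bookkeeping described above, assuming the perturbing galaxy is approximated as a point mass acting on the central galaxy's gas particles; the unit choices, the absence of softening, and all arrays are illustrative assumptions rather than the FIRE-2 analysis itself.

    import numpy as np

    G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

    def torque_from_companion(gas_pos, gas_mass, companion_pos, companion_mass):
        """Total torque about the central galaxy's centre exerted on the gas by a
        companion treated as a point mass.
        gas_pos: (N, 3) positions [kpc] relative to the central galaxy's centre,
        gas_mass: (N,) masses [Msun], companion_pos: (3,) [kpc], companion_mass: scalar.
        Returns the torque vector in Msun (km/s)^2."""
        d = companion_pos - gas_pos                        # vectors gas -> companion
        r3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3
        force = G * companion_mass * gas_mass[:, None] * d / r3
        return np.cross(gas_pos, force).sum(axis=0)        # sum of r x F

    # Hypothetical snapshot: 1000 gas particles and one companion at 50 kpc.
    rng = np.random.default_rng(2)
    pos = rng.normal(scale=5.0, size=(1000, 3))
    mass = np.full(1000, 1e5)
    tau = torque_from_companion(pos, mass, np.array([50.0, 0.0, 0.0]), 1e11)
    print(tau)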
False data injection attacks (FDIAs) on smart inverters are a growing concern
linked to increased renewable energy production. While data-based FDIA
detection methods are also actively developed, we show that they remain
vulnerable to impactful and stealthy adversarial examples that can be crafted
using Reinforcement Learning (RL). We propose to include such adversarial
examples in the data-based detection training procedure via a continual adversarial
RL (CARL) approach. This way, one can pinpoint the deficiencies of data-based
detection, thereby offering explainability during its incremental
improvement. We show that a continual learning implementation is subject to
catastrophic forgetting, and additionally show that forgetting can be addressed
by employing a joint training strategy on all generated FDIA scenarios. | arXiv |
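A toy sketch of the joint training strategy mentioned above: every FDIA scenario generated so far is pooled and the data-based detector is refit on the union, so performance on earlier scenarios is retained. The scenario generator and the generic scikit-learn classifier are hypothetical stand-ins, not the paper's CARL pipeline or detector.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    def make_scenario(shift):
        """Hypothetical FDIA scenario: normal measurements vs. a shifted attack pattern."""
        x_norm = rng.normal(0.0, 1.0, size=(500, 8))
        x_attack = rng.normal(shift, 1.0, size=(500, 8))
        x = np.vstack([x_norm, x_attack])
        y = np.r_[np.zeros(500), np.ones(500)]
        return x, y

    scenarios = [make_scenario(s) for s in (3.0, 2.0, 1.2)]  # generated over time

    # Joint training: pool every scenario generated so far and refit the detector.
    x_all = np.vstack([x for x, _ in scenarios])
    y_all = np.concatenate([y for _, y in scenarios])
    detector = LogisticRegression(max_iter=1000).fit(x_all, y_all)
    for i, (x, y) in enumerate(scenarios):
        print(f"scenario {i}: accuracy {detector.score(x, y):.2f}")  # remains high on all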
The current vision-based aphid counting methods in water traps suffer from
undercounts caused by occlusions and low visibility arising from dense
aggregation of insects and other objects. To address this problem, we propose a
novel aphid counting method through interactive stirring actions. We use
interactive stirring to alter the distribution of aphids in the yellow water
trap and capture a sequence of images which are then used for aphid detection
and counting through an optimized small object detection network based on
Yolov5. We also propose a counting confidence evaluation system to evaluate the
confidence of counting results. The final counting result is a weighted sum of
the counting results from all sequence images based on the counting confidence.
Experimental results show that our proposed aphid detection network
significantly outperforms the original Yolov5, with improvements of 33.9% in
AP@0.5 and 26.9% in AP@[0.5:0.95] on the aphid test set. In addition, the aphid
counting test results using our proposed counting confidence evaluation system
show significant improvements over the static counting method, closely aligning
with manual counting results. | arXiv |
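A minimal sketch of the confidence-weighted aggregation described above, assuming each image in the stirred sequence yields a raw aphid count and a confidence score in [0, 1]; the normalization and the sample numbers are illustrative assumptions, since the paper's exact confidence evaluation system is not reproduced here.

    def weighted_count(counts, confidences):
        """Final count as a confidence-weighted sum over the image sequence."""
        total_conf = sum(confidences)
        if total_conf == 0:
            return 0.0
        return sum(c * w for c, w in zip(counts, confidences)) / total_conf

    # Hypothetical per-image detections after successive stirring actions.
    counts = [118, 131, 127, 102]          # raw counts from the detector
    confidences = [0.6, 0.9, 0.85, 0.4]    # counting confidence per image
    print(round(weighted_count(counts, confidences)))  # ~123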
We analyze the recent MIT lattice data for the gravitational form factors
(GFFs) of the pion which extend up to $Q^2= 2~{\rm GeV}^2$ for $m_\pi=170$~MeV.
We show that simple monopole fits comply with the old idea of meson dominance.
We use Chiral Perturbation theory ($\chi$PT) to next-to-leading order (NLO) to
transform the MIT data to the physical world with $m_\pi=140~$MeV and find that
the spin-0 GFF is effectively saturated with the $f_0(600)$ and the spin-2 with
the $f_2(1270)$, with monopole masses $m_\sigma= 630(60)$~MeV and $m_{f_2}=
1270(40)$~MeV. We determine in passing the chiral low energy constants (LECs)
from the MIT lattice data alone,
$$
10^3 \cdot L_{11} (m_\rho^2) = 1.06(15) \, , \qquad
10^3 \cdot L_{12} (m_\rho^2) = -2.2(1) \, , \qquad
10^3 \cdot L_{13} (m_\rho^2) = -0.7(1.1) \, ,
$$
which agree in sign and order of magnitude with the original estimates by
Donoghue and Leutwyler. We also analyze the sum rules based on perturbative QCD
(pQCD) that imply that the corresponding spectral functions are not positive
definite. We show that these sum rules are strongly violated in a variety of
$\pi\pi-K \bar K$ coupled channel Omn\`es-Muskhelishvili calculations. This is
not mended by the inclusion of the pQCD tail, suggesting the need for an extra
negative spectral strength. Using a simple model implementing all sum rules, we
find the expected onset of pQCD at very high momenta. | arXiv |
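For reference, the monopole fits invoked above follow the standard meson-dominance form; a generic sketch (our notation, with $i=0,2$ labeling the spin-0 and spin-2 GFFs, not necessarily the paper's symbols) is
$$
A_i(Q^2) \simeq \frac{A_i(0)}{1 + Q^2/m_i^2}\,, \qquad
m_0 \simeq m_\sigma = 630(60)~\mathrm{MeV}\,, \qquad
m_2 \simeq m_{f_2} = 1270(40)~\mathrm{MeV}\,,
$$
so that each channel is saturated by the lightest meson carrying the corresponding quantum numbers.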
Large language models (LLMs) have significantly advanced the field of
automated code generation. However, a notable research gap exists in the
evaluation of social biases that may be present in the code produced by LLMs.
To address this gap, we propose a novel fairness framework, Solar, to
assess and mitigate the social biases of LLM-generated code. Specifically,
Solar can automatically generate test cases for quantitatively uncovering
social biases of the auto-generated code by LLMs. To quantify the severity of
social biases in generated code, we develop a dataset that covers a diverse set
of social problems. We applied Solar and the crafted dataset to four
state-of-the-art LLMs for code generation. Our evaluation reveals severe bias
in the LLM-generated code from all the subject LLMs. Furthermore, we explore
several strategies for bias mitigation, including Chain-of-Thought (CoT)
prompting, combining positive role-playing with CoT prompting, and iterative
prompting. Our experiments show that iterative prompting can effectively reduce
social bias in LLM-generated code by up to 90%. Solar is highly extensible to
evaluate new social problems. | arXiv |
In machine learning (ML), the inference phase is the process of applying
pre-trained models to new, unseen data with the objective of making
predictions. During the inference phase, end-users interact with ML services to
gain insights, recommendations, or actions based on the input data. For this
reason, serving strategies are nowadays crucial for deploying and managing
models in production environments effectively. These strategies ensure that
models are available, scalable, reliable, and performant for real-world
applications, such as time series forecasting, image classification, natural
language processing, and so on. In this paper, we evaluate the performance of
five widely-used model serving frameworks (TensorFlow Serving, TorchServe,
MLServer, MLflow, and BentoML) under four different scenarios (malware
detection, cryptocoin price forecasting, image classification, and sentiment
analysis). We demonstrate that TensorFlow Serving is able to outperform all the
other frameworks in serving deep learning (DL) models. Moreover, we show that
DL-specific frameworks (TensorFlow Serving and TorchServe) display
significantly lower latencies than the three general-purpose ML frameworks
(BentoML, MLflow, and MLServer). | arXiv |
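To make the serving setup concrete, here is a minimal client-side sketch of querying a model exposed through TensorFlow Serving's standard REST predict endpoint; the host, port, model name, and input shape are placeholders, no server is assumed to be running, and the other frameworks in the comparison expose their own, different APIs.

    import json
    import urllib.request

    # Placeholder endpoint: TensorFlow Serving's REST API exposes
    # /v1/models/<model_name>:predict on port 8501 by default.
    URL = "http://localhost:8501/v1/models/sentiment_model:predict"

    payload = {"instances": [[0.1, 0.2, 0.3, 0.4]]}  # hypothetical preprocessed input
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:        # fails unless a server is running
        predictions = json.loads(resp.read())["predictions"]
    print(predictions)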
We present Y-MAP-Net, a Y-shaped neural network architecture designed for
real-time multi-task learning on RGB images. Y-MAP-Net simultaneously predicts
depth, surface normals, human pose, semantic segmentation and generates
multi-label captions, all from a single network evaluation. To achieve this, we
adopt a multi-teacher, single-student training paradigm, where task-specific
foundation models supervise the network's learning, enabling it to distill
their capabilities into a lightweight architecture suitable for real-time
applications. Y-MAP-Net exhibits strong generalization, simplicity, and
computational efficiency, making it ideal for robotics and other practical
scenarios. To support future research, we will release our code publicly. | arXiv |
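A compact sketch of the multi-teacher, single-student idea described above: frozen task-specific teachers provide targets and the student's per-task heads are trained against them with a summed loss. The tiny linear modules, the task names, and the MSE losses are hypothetical placeholders, not the Y-MAP-Net architecture or its training recipe.

    import torch
    import torch.nn as nn

    # Hypothetical frozen teachers, one per task.
    teachers = {t: nn.Linear(64, 16).eval() for t in ("depth", "normals", "segmentation")}
    for m in teachers.values():
        m.requires_grad_(False)

    # Single student: shared trunk plus one lightweight head per task.
    trunk = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
    heads = nn.ModuleDict({t: nn.Linear(32, 16) for t in teachers})
    opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(100):                  # toy training loop on random "features"
        x = torch.randn(8, 64)               # stand-in for image features
        feats = trunk(x)
        loss = sum(loss_fn(heads[t](feats), teachers[t](x)) for t in teachers)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(float(loss))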
Bitcoin, launched in 2008 by Satoshi Nakamoto, established a new digital
economy where value can be stored and transferred in a fully decentralized
manner, eliminating the need for a central authority. This paper introduces a
large scale dataset in the form of a transactions graph representing
transactions between Bitcoin users along with a set of tasks and baselines. The
graph includes 252 million nodes and 785 million edges, covering a time span of
nearly 13 years and 670 million transactions. Each node and edge is
timestamped. For supervised tasks, we provide two labeled sets: (i) roughly
33,000 nodes labeled by entity type and (ii) nearly 100,000 Bitcoin addresses
labeled with an entity name and an entity type. This is the largest publicly
available dataset of Bitcoin transactions, designed to facilitate advanced research and
exploration in this domain, overcoming the limitations of existing datasets.
Various graph neural network models are trained to predict node labels,
establishing a baseline for future research. In addition, several use cases are
presented to demonstrate the dataset's applicability beyond Bitcoin analysis.
Finally, all data and source code is made publicly available to enable
reproducibility of the results. | arXiv |
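A minimal node-classification baseline of the kind referred to above, sketched with PyTorch Geometric on a tiny random graph; the feature dimension, label count, random edges, and two-layer GraphSAGE model are placeholders, not the released dataset or baselines.

    import torch
    import torch.nn.functional as F
    from torch_geometric.data import Data
    from torch_geometric.nn import SAGEConv

    # Tiny hypothetical stand-in for the transaction graph: 1000 nodes, random edges.
    num_nodes, num_classes = 1000, 5
    edge_index = torch.randint(0, num_nodes, (2, 5000))
    data = Data(x=torch.randn(num_nodes, 16), edge_index=edge_index,
                y=torch.randint(0, num_classes, (num_nodes,)))
    train_mask = torch.rand(num_nodes) < 0.8

    class SageNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = SAGEConv(16, 32)
            self.conv2 = SAGEConv(32, num_classes)

        def forward(self, x, edge_index):
            x = F.relu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    model = SageNet()
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for epoch in range(50):
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[train_mask], data.y[train_mask])
        loss.backward()
        opt.step()
    print(float(loss))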
The recently released model, Claude 3.5 Computer Use, stands out as the first
frontier AI model to offer computer use in public beta as a graphical user
interface (GUI) agent. As an early beta, its capability in complex real-world
environments remains unknown. In this case study exploring Claude 3.5
Computer Use, we curate and organize a collection of carefully designed tasks
spanning a variety of domains and software. Observations from these cases
demonstrate Claude 3.5 Computer Use's unprecedented ability in end-to-end
language-to-desktop actions. Along with this study, we provide an
out-of-the-box agent framework for deploying API-based GUI automation models
with easy implementation. Our case studies aim to lay the groundwork for
understanding the capabilities and limitations of Claude 3.5 Computer Use
through detailed analyses, and to bring to the fore questions about planning,
action, and critic that must
be considered for future improvement. We hope this preliminary exploration will
inspire future research into the GUI agent community. All the test cases in the
paper can be tried through the project:
https://github.com/showlab/computer_use_ootb. | arXiv |
Let $\mathcal{G}$ be the set of all the planar embeddings of a (not
necessarily connected) $n$-vertex graph $G$. We present a bijection $\Phi$ from
$\mathcal{G}$ to the natural numbers in the interval $[0 \dots |\mathcal{G}| -
1]$. Given a planar embedding $\mathcal{E}$ of $G$, we show that
$\Phi(\mathcal{E})$ can be decomposed into a sequence of $O(n)$ natural numbers
each describing a specific feature of $\mathcal{E}$. The function $\Phi$, which
is a ranking function for $\mathcal{G}$, can be computed in $O(n)$ time, while
its inverse unranking function $\Phi^{-1}$ can be computed in $O(n \alpha(n))$
time. The results of this paper are of practical use for generating the planar
embeddings of a graph $G$ uniformly at random or for enumerating such
embeddings with amortized constant delay. They can also be used for counting,
enumerating, or uniformly at random generating constrained planar embeddings of
$G$. | arXiv |
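The decomposition of $\Phi(\mathcal{E})$ into a sequence of natural numbers is reminiscent of a mixed-radix encoding; the toy sketch below ranks and unranks tuples of independent per-feature choices in this spirit, purely as an illustration of the bijection idea and not as the paper's $\Phi$, which must also handle dependencies between embedding features.

    from typing import List

    def rank(choices: List[int], radices: List[int]) -> int:
        """Map a tuple of feature choices (choices[i] in [0, radices[i])) to a
        single natural number in [0, prod(radices))."""
        code = 0
        for c, r in zip(choices, radices):
            assert 0 <= c < r
            code = code * r + c
        return code

    def unrank(code: int, radices: List[int]) -> List[int]:
        """Inverse of rank: recover the per-feature choices from the number."""
        choices = []
        for r in reversed(radices):
            choices.append(code % r)
            code //= r
        return choices[::-1]

    # Toy example: three independent "features" with 2, 3 and 4 possible values.
    radices = [2, 3, 4]
    for code in range(2 * 3 * 4):
        assert rank(unrank(code, radices), radices) == code
    print(unrank(17, radices), rank([1, 1, 1], radices))  # [1, 1, 1] 17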