The back-reaction of a classical gravitational field interacting with quantum
matter fields is described by the semiclassical Einstein equation, which has
the expectation value of the quantum matter fields' stress tensor as a source.
The semiclassical theory may be obtained from the quantum field theory of
gravity interacting with N matter fields in the large N limit. This theory
breaks down when the fields' quantum fluctuations are important. Stochastic
gravity goes beyond the semiclassical limit and allows for a systematic and
self-consistent description of the metric fluctuations induced by these quantum
fluctuations. The correlation functions of the metric fluctuations obtained in
stochastic gravity reproduce the correlation functions in the quantum theory to
leading order in a 1/N expansion. Two main applications of stochastic gravity
are discussed. The first, in cosmology, is to obtain the spectrum of primordial
metric perturbations induced by the inflaton fluctuations, even beyond the
linear approximation. The second, in black hole physics, is to study the
fluctuations of the horizon of an evaporating black hole.
|
We report quantum and semi-classical calculations of spin current and
spin-transfer torque in a free-electron Stoner model for systems where the
magnetization varies continuously in one dimension. Analytic results are
obtained for an infinite spin spiral and numerical results are obtained for
realistic domain wall profiles. The adiabatic limit describes conduction
electron spins that follow the sum of the exchange field and an effective,
velocity-dependent field produced by the gradient of the magnetization in the
wall. Non-adiabatic effects arise for short domain walls but their magnitude
decreases exponentially as the wall width increases. Our results cast doubt on
the existence of a recently proposed non-adiabatic contribution to the
spin-transfer torque due to spin flip scattering.
|
We introduce Dehn invariants as a useful tool in the study of the inflation
of quasiperiodic space tilings. The tilings by ``golden tetrahedra'' are
considered. We discuss how the Dehn invariants can be applied to the study of
inflation properties of the six golden tetrahedra. We also use the geometry of the
faces of the golden tetrahedra to analyze their inflation properties. We give
the inflation rules for decorated Mosseri-Sadoc tiles in the projection class
of tilings ${\cal T}^{(MS)}$. The Dehn invariants of the Mosseri-Sadoc tiles
provide two eigenvectors of the inflation matrix with eigenvalues equal to
$\tau = \frac{1+\sqrt{5}}{2}$ and $-\frac{1}{\tau}$, and allow one to reconstruct
the inflation matrix uniquely.
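The Mosseri-Sadoc inflation matrix itself is not reproduced in this abstract, but the quoted eigenvalues can be checked on a minimal stand-in: the 2x2 Fibonacci substitution matrix (chosen here purely for illustration) has exactly the eigenvalues $\tau$ and $-1/\tau$:

```python
import numpy as np

# Stand-in illustration: the Fibonacci substitution matrix has exactly the
# eigenvalues tau and -1/tau quoted above. (This is NOT the Mosseri-Sadoc
# inflation matrix, which is not given in the abstract.)
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])
eigs = np.linalg.eigvalsh(M)        # ascending: [-1/tau, tau]
tau = (1 + np.sqrt(5)) / 2
assert np.isclose(eigs[1], tau)
assert np.isclose(eigs[0], -1 / tau)
print(eigs)
```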
|
Stochastic variational inference (SVI), the state-of-the-art algorithm for
scaling variational inference to large datasets, is inherently serial.
Moreover, it requires the parameters to fit in the memory of a single
processor; this is problematic when the number of parameters is in the billions. In
this paper, we propose extreme stochastic variational inference (ESVI), an
asynchronous and lock-free algorithm to perform variational inference for
mixture models on massive real world datasets. ESVI overcomes the limitations
of SVI by requiring that each processor only access a subset of the data and a
subset of the parameters, thus providing data and model parallelism
simultaneously. We demonstrate the effectiveness of ESVI by running Latent
Dirichlet Allocation (LDA) on UMBC-3B, a dataset with a vocabulary of 3
million words and 3 billion tokens. In our experiments, we found that ESVI
not only outperforms VI and SVI in wall-clock time, but also achieves a better
quality solution. In addition, we propose a strategy to speed up computation
and save memory when fitting a large number of topics.
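As a minimal sketch of the basic SVI move that ESVI parallelizes (this is not the ESVI algorithm or the paper's update equations; the data, model, and step-size schedule are all illustrative assumptions), the following toy fits a two-component mixture by interpolating global sufficient statistics toward unbiased minibatch estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated 1-D clusters (illustrative, not the paper's data).
data = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
N, K, batch = len(data), 2, 50
means = np.array([-1.0, 1.0])           # current component means
counts = np.ones(K)                      # global "counts" statistic
sums = means * counts

for step in range(200):
    rho = (step + 5) ** -0.7             # decaying step size
    x = rng.choice(data, batch)
    # soft responsibilities under unit-variance components
    logp = -0.5 * (x[:, None] - means[None, :]) ** 2
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # unbiased full-data sufficient statistics estimated from the minibatch
    hat_counts = (N / batch) * r.sum(axis=0)
    hat_sums = (N / batch) * (r * x[:, None]).sum(axis=0)
    # the SVI step: interpolate global statistics toward the estimate
    counts = (1 - rho) * counts + rho * hat_counts
    sums = (1 - rho) * sums + rho * hat_sums
    means = sums / counts

print(sorted(means))                     # recovers the two cluster centres
```

ESVI's contribution, per the abstract, is to run such updates asynchronously and lock-free with each processor touching only a subset of the data and parameters; that machinery is not sketched here.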
|
Neural operator learning as a means of mapping between complex function
spaces has garnered significant attention in the field of computational science
and engineering (CS&E). In this paper, we apply neural operator learning to the
time-of-flight ultrasound computed tomography (USCT) problem. We learn the
mapping between time-of-flight (TOF) data and the heterogeneous sound speed
field using a full-wave solver to generate the training data. This novel
application of operator learning circumvents the need to solve the
computationally intensive iterative inverse problem. The operator learns the
non-linear mapping offline and predicts the heterogeneous sound field with a
single forward pass through the model. This is the first time operator learning
has been used for ultrasound tomography and is the first step in potential
real-time predictions of soft tissue distribution for tumor identification in
breast imaging.
|
We study the charge-dependent azimuthal correlations in relativistic heavy
ion collisions, as motivated by the search for the Chiral Magnetic Effect (CME)
and the investigation of related background contributions. In particular we aim
to understand how these correlations induced by various proposed effects evolve
from collisions with AuAu system to that with UU system. To do that, we
quantify the generation of magnetic field in UU collisions at RHIC energy and
its azimuthal correlation to the matter geometry using event-by-event
simulations. Taking the experimental data for charge-dependent azimuthal
correlations from AuAu collisions and extrapolating to UU with reasonable
assumptions, we examine the resulting correlations to be expected in UU
collisions and compare them with recent STAR measurements. Based on such
analysis we discuss the viability for explaining the data with a combination of
the CME-like and flow-induced contributions.
|
We study the nonlinear Schr\"odinger equation (NLS) with the quadratic
nonlinearity $|u|^2$, posed on the two-dimensional torus $\mathbb{T}^2$. While
the relevant $L^3$-Strichartz estimate is known only with a derivative loss, we
prove local well-posedness of the quadratic NLS in $L^2(\mathbb{T}^2)$, thus
resolving a problem that had remained open for thirty years, since Bourgain (1993). In view of
ill-posedness in negative Sobolev spaces, this result is sharp. We establish a
crucial bilinear estimate by separately studying the non-resonant and nearly
resonant cases. As a corollary, we obtain a tri-linear version of the
$L^3$-Strichartz estimate without any derivative loss.
|
A complex unit gain graph is a graph where each orientation of an edge is
given a complex unit, which is the inverse of the complex unit assigned to the
opposite orientation. We extend some fundamental concepts from spectral graph
theory to complex unit gain graphs. We define the adjacency, incidence and
Laplacian matrices, and study each of them. The main results of the paper are
eigenvalue bounds for the adjacency and Laplacian matrices.
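A minimal numerical sketch of the Hermitian adjacency matrix described above (the graph, the gains, and the degree bound checked here are illustrative assumptions, not the paper's specific results): each edge carries a unit complex gain on one orientation and its inverse, which for unit modulus is the conjugate, on the other, so the matrix is Hermitian and its spectrum is real.

```python
import numpy as np

# Build the adjacency matrix of a small complex unit gain graph:
# A[u, v] is a unit complex gain, A[v, u] its inverse (= conjugate).
rng = np.random.default_rng(1)
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
A = np.zeros((n, n), dtype=complex)
for u, v in edges:
    g = np.exp(1j * rng.uniform(0, 2 * np.pi))   # unit complex gain
    A[u, v] = g
    A[v, u] = np.conj(g)                          # opposite orientation

assert np.allclose(A, A.conj().T)                 # Hermitian, so real spectrum
eigs = np.linalg.eigvalsh(A)
max_deg = int(np.abs(A).sum(axis=1).max())
# A standard Gershgorin-type bound: spectral radius <= maximum degree.
assert np.abs(eigs).max() <= max_deg + 1e-9
print(eigs.round(3))
```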
|
Quantum-access security, where an attacker is granted superposition access to
secret-keyed functionalities, is a fundamental security model and its study has
inspired results in post-quantum security. We revisit, and fill a gap in, the
quantum-access security analysis of the Lamport one-time signature scheme (OTS)
in the quantum random oracle model (QROM) by Alagic et al.~(Eurocrypt 2020). We
then go on to generalize the technique to the Winternitz OTS. Along the way, we
develop a tool for the analysis of hash chains in the QROM based on the
superposition oracle technique by Zhandry (Crypto 2019) which might be of
independent interest.
|
The theory of scarring of eigenfunctions of classically chaotic systems by
short periodic orbits is extended in several ways. The influence of short-time
linear recurrences on correlations and fluctuations at long times is
emphasized. We include the contribution to scarring of nonlinear recurrences
associated with homoclinic orbits, and treat the different scenarios of random
and nonrandom long-time recurrences. The importance of the local classical
structure around the periodic orbit is emphasized, and it is shown that, for an
optimal choice of test basis in phase space, scars must persist in the
semiclassical limit. The crucial role of symmetry is also discussed; together
with the nonlinear recurrences, it gives a much improved account of the
actual strength of scars for given classical orbits and in individual
wavefunctions. Quantitative measures of scarring are provided and comparisons
are made with numerical data.
|
We deduce the existence of a maximal irreducibility measure for a Markov
chain from Zorn's lemma.
|
Let $f:\mathbb{C}^2\to \mathbb{C}^2$ be a polynomial skew product which
leaves invariant an attracting vertical line $ L $. Assume moreover $f$
restricted to $L$ is non-uniformly hyperbolic, in the sense that $f$ restricted
to $L$ satisfies one of the following conditions: 1. $f|_L$ satisfies
Topological Collet-Eckmann and Weak Regularity conditions. 2. The Lyapunov
exponent at every critical value lying in the Julia set of $f|_L$ exists
and is positive, and there is no parabolic cycle. Under one of the above
conditions we show that the Fatou set in the basin of $L$ coincides with the
union of the basins of attracting cycles, and the Julia set in the basin of $L$
has Lebesgue measure zero. As an easy consequence there are no wandering Fatou
components in the basin of $L$.
|
The aim of the present work is to show that the results obtained earlier on
the approximation of distributions of sums of independent summands by
infinitely divisible laws may be transferred to the estimation of the closeness
of distributions on convex polyhedra.
|
In Feynman's lectures there is a remark about the limiting value of the
impedance of an n-section ladder consisting of purely reactive elements
(capacitances and inductances). The remark is that this limiting impedance
$z=\lim_{n\to\infty}z_n$ has a positive real part. He notes that this is
surprising, since the real part of each $z_n$ is zero, and so it would seem
impossible for the limit to have a positive real part. A recent article in this
journal offered an explanation of this paradox based on the fact that realistic
impedances have a non-negative real part, but the authors noted that their
argument was incomplete. We use the same physical idea, but give a simple
argument which shows that the sequence $z_n$ converges like a geometric series.
We also calculate the finite speed at which energy is propagated out into the
infinite ladder.
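A small numerical sketch of the convergence mechanism (our own notation; L, C, the frequency, and the added resistance r are illustrative and not taken from the article): with purely reactive elements every iterate is purely imaginary, but adding an arbitrarily small series resistance makes the ladder recursion converge geometrically to the fixed point, whose real part remains positive inside the pass band.

```python
import cmath

# Series element Z1 = r + i*w*L (small resistance r added), shunt element
# Z2 = 1/(i*w*C).  The n-section input impedance obeys
#   z_{n+1} = Z1 + Z2*z_n / (Z2 + z_n),
# with fixed point z = Z1/2 + sqrt(Z1^2/4 + Z1*Z2).  In the pass band
# w < 2/sqrt(L*C) the fixed point's real part stays positive as r -> 0.
L, C, w, r = 1.0, 1.0, 1.0, 0.01     # illustrative values
Z1 = r + 1j * w * L
Z2 = 1 / (1j * w * C)

z = Z1 + Z2                           # one-section ladder
for _ in range(3000):
    z = Z1 + Z2 * z / (Z2 + z)

z_fixed = Z1 / 2 + cmath.sqrt(Z1 ** 2 / 4 + Z1 * Z2)
assert abs(z - z_fixed) < 1e-6        # geometric convergence to the fixed point
assert z.real > 0.8                   # positive real part survives
print(z)
```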
|
This study addresses the limitations of single-viewpoint observations of
Coronal Mass Ejections (CMEs) by presenting results from a 3D catalog of 360
CMEs during solar cycle 24, fitted using the GCS model. The dataset combines
326 previously analyzed CMEs and 34 newly examined events, categorized by their
source regions into active region (AR) eruptions, active prominence (AP)
eruptions, and prominence eruptions (PE). Estimates of errors are made using a
bootstrapping approach. The findings highlight that the average 3D speed of
CMEs is $\sim$1.3 times greater than the 2D speed. PE CMEs tend to be slow,
with an average speed of 432 km s$^{-1}$. AR and AP speeds are higher, at 723
km s$^{-1}$ and 813 km s$^{-1}$, respectively, with the latter having fewer
slow CMEs. The distinctive behavior of AP CMEs is attributed to factors like
overlying magnetic field distribution or geometric complexities leading to less
accurate GCS fits. A linear fit of projected speed to width gives a gradient of
2 km s$^{-1}$ deg$^{-1}$, which increases to 5 km s$^{-1}$ deg$^{-1}$ when the
GCS-fitted `true' parameters are used. Notably, AR CMEs exhibit a high gradient
of 7 km s$^{-1}$ deg$^{-1}$, while AP CMEs show a gradient of 4 km
s$^{-1}$ deg$^{-1}$. PE CMEs, however, lack a significant speed-width
relationship. We show that fitting multi-viewpoint CME images to a geometrical
model such as GCS is important to study the statistical properties of CMEs, and
can lead to a deeper insight into CME behavior that is essential for improving
future space weather forecasting.
|
We present a new leptogenesis scenario, where the lepton asymmetry is
generated by CP violating decays of heavy electroweak singlet neutrinos via
electromagnetic dipole moment couplings to the ordinary light neutrinos. Akin
to the usual scenario where the decays are mediated through Yukawa
interactions, we show by explicit calculation that the desired
asymmetry can be produced through the interference of the corresponding
tree-level and one-loop decay amplitudes involving the effective dipole moment
operators. We also find that the relationship of the leptogenesis scale to the
light neutrino masses is similar to that for the standard Yukawa-mediated
mechanism.
|
We explore methods to generate quantum coherence through unitary evolutions,
by introducing and studying the coherence generating capacity of Hamiltonians.
This quantity is defined as the maximum derivative of coherence that can be
achieved by a Hamiltonian. By adopting the relative entropy of coherence as our
figure of merit, we evaluate the maximal coherence generating capacity with the
constraint of a bounded Hilbert-Schmidt norm for the Hamiltonian. Our
investigation yields closed-form expressions for both Hamiltonians and quantum
states that induce the maximal derivative of coherence under these conditions.
Specifically, for qubit systems, we solve this problem comprehensively for any
given Hamiltonian, identifying the quantum states that lead to the largest
coherence derivative induced by the Hamiltonian. Our investigation enables a
precise identification of conditions under which quantum coherence is optimally
enhanced, offering valuable insights for the manipulation and control of
quantum coherence in quantum systems.
|
We use SPH simulations with an approximate radiative cooling prescription to
model evolution of a massive and large ($\sim 100$ AU) very young
protoplanetary disc. We also model dust growth and gas-grain dynamics with a
second fluid approach. It is found that the disc fragments onto a large number
of $\sim 10$ Jupiter mass clumps that cool and contract slowly. Some of the
clumps evolve onto eccentric orbits delivering them into the inner tens of AU,
where they are disrupted by tidal forces from the star. Dust grows and
sediments inside the clumps, displaying a very strong segregation, with the
largest particles forming dense cores in the centres. The density of the dust
cores may exceed that of the gas and is limited only by the numerical
constraints, indicating that these cores should collapse into rocky planetary
cores. One particular giant planet embryo migrates inward close enough to be
disrupted at about 10 AU, leaving a self-bound solid core of about 7.5
$\mearth$ mass on a low eccentricity orbit at a radius of $\sim$ 8 AU. These
simulations support the recent suggestions that terrestrial and giant planets
may be the remnants of tidally disrupted giant planet embryos.
|
We study the properties and couplings of hybrid baryons in the large-$N_c$
expansion. These are color-neutral baryon states which contain in addition to
$N_c$ quarks also one constituent gluon. Hybrid baryons with both symmetric and
mixed symmetric orbital wave functions are considered. We introduce a Hartree
description for these states, similar to the one used by Witten for ordinary
baryons. It is shown that the Hartree equations for the $N_c$ ($N_c-1$) quarks
in symmetric (mixed symmetric) states coincide with those in
ordinary baryons in the large-$N_c$ limit. The energy due to the gluon field is
of order $\Lambda_{QCD}$. Under the assumption of color confinement, our
results prove the existence of hybrid baryons made up of heavy quarks in the
large $N_c$ limit and provide a justification for the constituent gluon
picture of these states. The couplings of the hybrid baryons to mesons of
arbitrary spin are computed in the quark model. Using constraints from the
large $N_c$ scaling laws for the meson-baryon scattering amplitudes, we write
down consistency conditions for the meson couplings of the hybrid baryons.
These consistency conditions are solved explicitly with results in agreement
with those in the quark model for the respective couplings.
|
Over the past few years, we have built a system that has exposed large
volumes of Deep-Web content to Google.com users. The content that our system
exposes contributes to more than 1000 search queries per second and spans over
50 languages and hundreds of domains. The Deep Web has long been acknowledged
to be a major source of structured data on the web, and hence accessing
Deep-Web content has long been a problem of interest in the data management
community. In this paper, we report on where we believe the Deep Web provides
value and where it does not. We contrast two very different approaches to
exposing Deep-Web content -- the surfacing approach that we used, and the
virtual integration approach that has often been pursued in the data management
literature. We emphasize where the values of each of the two approaches lie and
caution against potential pitfalls. We outline important areas of future
research and, in particular, emphasize the value that can be derived from
analyzing large collections of potentially disparate structured data on the
web.
|
It is known that many different types of finite random subgraph models
undergo quantitatively similar phase transitions around their percolation
thresholds, and the proofs of these results rely on isoperimetric properties of
the underlying host graph. Recently, the authors showed that such a phase
transition occurs in a large class of regular high-dimensional product graphs,
generalising a classic result for the hypercube.
In this paper we give new isoperimetric inequalities for such regular
high-dimensional product graphs, which generalise the well-known isoperimetric
inequality of Harper for the hypercube, and are asymptotically sharp for a wide
range of set sizes. We then use these isoperimetric properties to investigate
the structure of the giant component $L_1$ in supercritical percolation on
these product graphs, that is, when $p=\frac{1+\epsilon}{d}$, where $d$ is the
degree of the product graph and $\epsilon>0$ is a small enough constant.
We show that typically $L_1$ has edge-expansion $\Omega\left(\frac{1}{d\ln
d}\right)$. Furthermore, we show that $L_1$ likely contains a linear-sized
subgraph with vertex-expansion $\Omega\left(\frac{1}{d\ln d}\right)$. These
results are best possible up to the logarithmic factor in $d$.
Using these likely expansion properties, we determine, up to small
polylogarithmic factors in $d$, the likely diameter of $L_1$ as well as the
typical mixing time of a lazy random walk on $L_1$. Furthermore, we show the
likely existence of a path of length $\Omega\left(\frac{n}{d\ln d}\right)$.
These results not only generalise, but also improve substantially upon the
known bounds in the case of the hypercube, where in particular the likely
diameter and typical mixing time of $L_1$ were previously only known to be
polynomial in $d$.
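The supercritical regime discussed above can be illustrated with a direct toy simulation (the dimension $d$, the constant $\epsilon$, the seed, and the size threshold are illustrative assumptions; this is a sketch, not the paper's proof technique): keep each hypercube edge with probability $p=(1+\epsilon)/d$ and measure the largest component.

```python
import random
from collections import deque

# Toy supercritical percolation on the hypercube Q_d with p = (1+eps)/d.
d, eps = 12, 0.3
n = 1 << d
p = (1 + eps) / d
rng = random.Random(42)

adj = [[] for _ in range(n)]
for v in range(n):
    for i in range(d):
        u = v ^ (1 << i)
        if u > v and rng.random() < p:      # visit each hypercube edge once
            adj[v].append(u)
            adj[u].append(v)

seen = [False] * n
largest = 0
for s in range(n):                           # BFS over all components
    if seen[s]:
        continue
    seen[s] = True
    q, size = deque([s]), 1
    while q:
        v = q.popleft()
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                size += 1
                q.append(u)
    largest = max(largest, size)

print(largest, n)    # a giant component of linear size typically emerges
```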
|
The smallest deformation of the minimal model M(2,3) that can accommodate
Cardy's derivation of the percolation crossing probability is presented. It is
shown that this leads to a consistent logarithmic conformal field theory at
c=0. A simple recipe for computing the associated fusion rules is given. The
differences between this theory and the other recently proposed c=0 logarithmic
conformal field theories are underlined. The discussion also emphasises the
existence of invariant logarithmic couplings that generalise Gurarie's anomaly
number.
|
Nonperturbative effects in c<1 noncritical string theory are studied using
the two-matrix model. Such effects are known to have the form fixed by the
string equations but the numerical coefficients have not been known so far.
Using the method proposed recently, we show that it is possible to determine
the coefficients for (p,q) string theory. We find that they are indeed finite
in the double scaling limit and universal in the sense that they do not depend
on the detailed structure of the potential of the two-matrix model.
|
Today, the audit and diagnosis of the causal relationships between the events
in a trigger-action-based event chain (e.g., why is a light turned on in a
smart home?) in the Internet of Things (IoT) platforms are untrustworthy and
unreliable. The current IoT platforms lack techniques for transparent and
tamper-proof ordering of events due to their device-centric logging mechanism.
In this paper, we develop a framework that facilitates tamper-proof
transparency and event order in an IoT platform by proposing a Blockchain
protocol and adopting the vector clock system, both tailored to
resource-constrained heterogeneous IoT devices. To cope with the
unsuited storage (e.g., ledger) and computing power (e.g., proof of work
puzzle) requirements of the Blockchain in the commercial off-the-shelf IoT
devices, we propose a partial consistent cut protocol and engineer a modular
arithmetic-based lightweight proof of work puzzle, respectively. To the best of
our knowledge, this is the first Blockchain designed for resource-constrained
heterogeneous IoT platforms. Our event ordering protocol based on the vector
clock system is also novel for the IoT platforms. We implement our framework
using an IoT gateway and 30 IoT devices. We experiment with 10 concurrent
trigger-action-based event chains while each chain involves 20 devices, and
each device participates in 5 different chains. The results show that our
framework can order these events in 2.5 seconds while consuming only 140 mJ of
energy per device. The results hence demonstrate the proposed platform as a
practical choice for many IoT applications such as smart home, traffic
monitoring, and crime investigation.
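The vector clock system underlying the event ordering can be sketched in a few lines (an illustrative textbook version, not the paper's tailored protocol; the device names and the trigger-action chain are made up): each device increments its own entry on a local event and merges component-wise maxima on receipt, giving a partial order that recovers causal chains.

```python
# Textbook vector clocks, sketched for a toy trigger-action chain.
class Device:
    def __init__(self, idx, n):
        self.idx, self.clock = idx, [0] * n

    def local_event(self):
        self.clock[self.idx] += 1
        return list(self.clock)

    def send(self):
        return self.local_event()            # stamp the outgoing message

    def receive(self, stamp):
        # merge component-wise maxima, then tick the local entry
        self.clock = [max(a, b) for a, b in zip(self.clock, stamp)]
        self.clock[self.idx] += 1
        return list(self.clock)

def happened_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

n = 3
light, motion, hub = (Device(i, n) for i in range(n))
e0 = light.local_event()                     # unrelated local event
e1 = motion.local_event()                    # motion sensor fires
e2 = hub.receive(motion.send())              # hub processes the trigger
e3 = light.receive(hub.send())               # light turns on
assert happened_before(e1, e3)               # the causal chain is recoverable
assert not happened_before(e0, e1)           # e0 and e1 are concurrent
assert not happened_before(e1, e0)
```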
|
We demonstrate that a single-color ultrashort optical pulse propagating in
air can emit THz radiation along any direction with respect to its propagation
axis. The emission angle can be adjusted by the flying focus technique which
determines the speed and direction of the ionization front. When the ionization
front velocity becomes superluminal, the THz emission corresponds to classical
Cherenkov radiation.
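The classical Cherenkov relation behind the last statement, $\cos\theta = c/(nv)$ for a front moving at superluminal velocity $v > c/n$, can be evaluated directly (the numbers below are illustrative, not the experiment's parameters):

```python
import math

# Cherenkov emission angle for a superluminal ionization front in air.
c = 299_792_458.0      # speed of light in vacuum, m/s
n = 1.0003             # refractive index of air (approximate)
v = 1.05 * c           # illustrative front velocity set by the flying focus
theta = math.degrees(math.acos(c / (n * v)))
print(f"Cherenkov angle: {theta:.1f} degrees")
```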
|
The interaction between light and acoustic phonons is strongly modified in
sub-wavelength confinement, and has led to the demonstration and control of
Brillouin scattering in photonic structures such as nano-scale optical
waveguides and cavities. Besides the small optical mode volume, two physical
mechanisms come into play simultaneously: a volume effect caused by the strain
induced refractive index perturbation (known as photo-elasticity), and a
surface effect caused by the shift of the optical boundaries due to mechanical
vibrations. As a result, proper material and structure engineering allows one to
control each contribution individually. In this paper, we experimentally
demonstrate the perfect cancellation of Brillouin scattering by engineering a
silica nanowire with exactly opposing photo-elastic and moving-boundary
effects. This demonstration provides clear experimental evidence that the
interplay between the two mechanisms is a promising tool to precisely control
the photon-phonon interaction, enhancing or suppressing it.
|
Let $\Sigma_g$ be a compact, connected, orientable surface of genus $g \geq
2$. We ask for a parametrization of the discrete, faithful, totally loxodromic
representations in the deformation space ${\rm Hom}(\pi_1(\Sigma_g), {\rm
SU}(3,1))/{\rm SU}(3,1)$. We show that such a representation, under some
hypothesis, can be determined by $30g-30$ real parameters.
|
Using kinematic properties of handwriting to support the diagnosis of
neurodegenerative disease is a real challenge: non-invasive detection
techniques combined with machine learning approaches promise big steps forward
in this research field. In the literature, the proposed tasks focus on different
cognitive skills to elicit handwriting movements. In particular, the meaning
and phonology of words to copy can compromise writing fluency. In this paper,
we investigated how word semantics and phonology affect the handwriting of
people affected by Alzheimer's disease. To this aim, we used the data from six
handwriting tasks, each requiring copying a word belonging to one of the
following categories: regular (have a predictable phoneme-grapheme
correspondence, e.g., cat), non-regular (have atypical phoneme-grapheme
correspondence, e.g., laugh), and non-word (non-meaningful pronounceable letter
strings that conform to phoneme-grapheme conversion rules). We analyzed the
data using a machine learning approach by implementing four well-known and
widely-used classifiers and feature selection. The experimental results showed
that the feature selection allowed us to derive a different set of highly
distinctive features for each word type. Furthermore, non-regular words needed,
on average, more features but achieved excellent classification performance:
the best result was obtained on a non-regular word, reaching an accuracy close to
90%.
|
The hybrid halide perovskite CH3NH3PbI3 exhibits a complex structural
behaviour, with successive transitions between orthorhombic, tetragonal and
cubic polymorphs at ca. 165 K and 327 K. Herein we report first-principles
lattice dynamics (phonon spectrum) for each phase of CH3NH3PbI3. The
equilibrium structures compare well to solutions of temperature-dependent
powder neutron diffraction. By following the normal modes we calculate infrared
and Raman intensities of the vibrations, and compare them to the measurement of
a single crystal where the Raman laser is controlled to avoid degradation of
the sample. Despite a clear separation in energy between low frequency modes
associated with the inorganic PbI3 network and high-frequency modes of the
organic CH3NH3+ cation, significant coupling between them is found, which
emphasises the interplay between molecular orientation and the corner-sharing
octahedral networks in the structural transformations. Soft modes are found at
the boundary of the Brillouin zone of the cubic phase, consistent with
displacive instabilities and anharmonicity involving tilting of the PbI6
octahedra around room temperature.
|
Quantum Geometry (the modern Loop Quantum Gravity using graphs and
spin-networks instead of the loops) provides microscopic degrees of freedom
that account for the black-hole entropy. However, the procedure for state
counting used in the literature contains an error and the number of the
relevant horizon states is underestimated. In our paper a correct method of
counting is presented. Our results call for a revision of the literature on the
subject. It turns out that the contribution of spins greater than 1/2 to the
entropy is not negligible. Hence, the value of the Barbero-Immirzi parameter
involved in the spectra of all the geometric and physical operators in this
theory differs from that previously derived. Also, the conjectured relation
between Quantum Geometry and the black hole quasi-normal modes should be
revisited.
|
The tau neutrino is the least well measured particle in the Standard Model.
Most notably, the tau neutrino row of the lepton mixing matrix is quite poorly
constrained when unitarity is not assumed. In this paper, we identify data sets
involving tau neutrinos that improve our understanding of the tau neutrino part
of the mixing matrix, in particular $\nu_\tau$ appearance in atmospheric
neutrinos. We present new results on the elements of the tau row leveraging
existing constraints on the electron and muon rows for the cases of unitarity
violation, with and without kinematically accessible steriles. We also show the
expected sensitivity due to upcoming experiments and demonstrate that the tau
neutrino row precision may be comparable to the muon neutrino row in a careful
combined fit.
|
We study the rate of mixing of observables of Z^d-extensions of probability
preserving dynamical systems. We explain how this question is directly linked
to the local limit theorem and establish a rate of mixing for general classes
of observables of the Z^2-periodic Sinai billiard. We compare our approach with
the induction method.
|
We provide new upper bounds for sums of certain arithmetic functions in many
variables at polynomial arguments and, exploiting recent progress on the
mean-value of the Erd\H os-Hooley $\Delta$-function, we derive lower bounds for
the cardinality of those integers not exceeding a given limit that are
expressible as some sums of powers.
|
The Rice-Mele model has two topological and spatially-inversion symmetric
phases, namely the Su-Schrieffer-Heeger (SSH) phase with alternating hopping
only, and the charge-density-wave (CDW) phase with alternating energies only.
The chiral symmetry of the SSH phase is robust in position space, so that it is
preserved in the presence of the ends of a finite system and of textures in the
alternating hopping. However, the chiral symmetry of the CDW phase is
nonsymmorphic, resulting in a breaking of the bulk topology by an end or a
texture in the alternating energies. We consider the presence of solitons
(textures in position space separating two degenerate ground states) in finite
systems with open boundary conditions. We identify the parameter range under
which an atomically-sharp soliton in the CDW phase supports a localized state
which lies within the band gap, and we calculate the expectation value $p_y$ of
the nonsymmorphic chiral operator for this state, and the soliton electric
charge. As the spatial extent of the soliton increases beyond the atomic limit,
the energy level approaches zero exponentially quickly or inversely
proportionally to the width, depending on microscopic details of the soliton
texture. In both cases, the difference of $p_y$ from one is inversely
proportional to the soliton width, while the charge is independent of the
width. We investigate the robustness of the soliton level in the presence of
disorder and sample-to-sample parameter variations, comparing with a single
soliton level in the SSH phase with an odd number of sites.
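A toy numerical check of a sharp-soliton midgap level, done here for the simpler SSH (hopping-texture) case invoked in the comparison above (the hoppings, chain length, and texture placement are illustrative assumptions): a domain wall in the dimerization of an odd-length open chain pins a single level at zero energy inside the bulk gap.

```python
import numpy as np

# Finite SSH chain with a sharp hopping-texture soliton at the centre.
t1, t2, n = 1.0, 0.6, 101            # illustrative hoppings; odd site number
H = np.zeros((n, n))
for i in range(n - 1):                # bond between sites i and i+1
    if i < n // 2:                    # left half: strong-weak pattern
        t = t1 if i % 2 == 0 else t2
    else:                             # right half: pattern flipped (soliton)
        t = t2 if i % 2 == 0 else t1
    H[i, i + 1] = H[i + 1, i] = t

E = np.linalg.eigvalsh(H)
gap_edge = abs(t1 - t2)               # bulk bands end at energies +-|t1 - t2|
midgap = E[np.abs(E) < 0.5 * gap_edge]
print(midgap)                         # a single level pinned at zero energy
```

The exact zero is guaranteed here by the sublattice imbalance of the odd-length bipartite chain; the abstract's analysis of how the CDW-phase level and $p_y$ deviate from this ideal as the soliton widens is not reproduced in this sketch.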
|
Diffusion model-based inverse problem solvers have demonstrated
state-of-the-art performance in cases where the forward operator is known (i.e.
non-blind). However, the applicability of the method to blind inverse problems
has yet to be explored. In this work, we show that we can indeed solve a family
of blind inverse problems by constructing another diffusion prior for the
forward operator. Specifically, parallel reverse diffusion guided by gradients
from the intermediate stages enables joint optimization of both the forward
operator parameters as well as the image, such that both are jointly estimated
at the end of the parallel reverse diffusion procedure. We show the efficacy of
our method on two representative tasks -- blind deblurring, and imaging through
turbulence -- and show that our method yields state-of-the-art performance,
while also being flexible enough to apply to general blind inverse problems
when the functional forms are known.
|
The analytical solution of the neutron transport equation has fascinated
mathematicians and physicists alike since the Milne half-space problem was
introduced in 1921 [1]. Numerous numerical solutions exist, but understandably,
there are only a few analytical solutions, with the prominent one being the
singular eigenfunction expansion (SEE) introduced by Case [2] in 1960. For the
half-space, the method, though yielding an elegant analytical form resulting
from half-range completeness, requires numerical evaluation of complicated
integrals. In addition, one finds closed-form analytical expressions only for
the infinite-medium and half-space cases. One can find the flux in a slab only
iteratively. That is to say, in general one must expend considerable
numerical effort to get highly precise benchmarks from SEE. As a result,
investigators have devised alternative methods based on the SEE, such as the
CN [3], FN [4] and Green's Function Method (GFM) [5]. These
methods take the SEE at their core and construct a numerical method around the
analytical form. The FN method in particular has been most successful in
generating highly precise benchmarks. Until now, no method yielding a precise
numerical solution has been based solely on a fundamental discretization.
Here, we show for the albedo problem with a source on the vacuum boundary of a
homogeneous medium, a precise numerical solution is possible via Lagrange
interpolation over a discrete set of directions.
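The numerical ingredient named above, Lagrange interpolation over a discrete set of directions, can be sketched generically (illustrative only: the nodes and the "angular flux" profile below are hypothetical stand-ins, not the paper's actual transport discretization):

```python
# Illustrative sketch: generic Lagrange interpolation over a small set of
# direction cosines mu_k.  Nodes and test profile are hypothetical.

def lagrange_basis(nodes, k, x):
    """Evaluate the k-th Lagrange basis polynomial at x."""
    val = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            val *= (x - xj) / (nodes[k] - xj)
    return val

def lagrange_interp(nodes, values, x):
    """Interpolate the data (nodes, values) at the point x."""
    return sum(v * lagrange_basis(nodes, k, x) for k, v in enumerate(values))

mu = [-0.9, -0.3, 0.4, 0.8]            # discrete directions in (-1, 1)
f = lambda m: 2.0 * m ** 3 - m + 0.5   # stand-in angular profile (degree 3)
vals = [f(m) for m in mu]
# Four nodes reproduce a degree-3 polynomial exactly at any direction.
```

Interpolating between a small set of collocation directions in this way is what allows a fundamental discretization to deliver an angular flux at arbitrary directions.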
|
We review the class of species sampling models (SSM). In particular, we
investigate the relation between the exchangeable partition probability
function (EPPF) and the predictive probability function (PPF). It is
straightforward to define a PPF from an EPPF, but the converse is not
necessarily true. In this paper we introduce the notion of putative PPFs and
show novel conditions for a putative PPF to define an EPPF. We show that all
possible PPFs in a certain class have to define (unnormalized) probabilities
for cluster membership that are linear in cluster size. We give a new necessary
and sufficient condition for arbitrary putative PPFs to define an EPPF.
Finally, we show posterior inference for a large class of SSMs with a PPF that
is not linear in cluster size and discuss a numerical method to derive its PPF.
|
In the present paper, we suggest a convenient model for the vector
$\rho$-meson longitudinal leading-twist distribution amplitude
$\phi_{2;\rho}^\|$, whose distribution is controlled by a single parameter
$B^\|_{2;\rho}$. By choosing proper chiral current in the correlator, we obtain
new light-cone sum rules (LCSR) for the $B\to\rho$ TFFs $A_1$, $A_2$ and $V$,
in which the $\delta^1$-order $\phi_{2;\rho}^\|$ provides dominant
contributions. Then we make a detailed discussion on the $\phi_{2;\rho}^\|$
properties via those $B\to\rho$ TFFs. A proper choice of $B^\|_{2;\rho}$ can
make all the TFFs agree with the lattice QCD predictions. A prediction of
$|V_{\rm ub}|$ has also been presented by using the extrapolated TFFs, which
indicates that a larger $B^{\|}_{2;\rho}$ leads to a larger $|V_{\rm ub}|$. To
compare with the BABAR data on $|V_{\rm ub}|$, the longitudinal leading-twist
DA $\phi_{2;\rho}^\|$ prefers a doubly-humped behavior.
|
As computing systems become increasingly advanced and users increasingly
engage with technology, security has never been a greater concern. In
malware detection, static analysis, the method of analyzing potentially
malicious files, has been the prominent approach. This approach, however,
quickly falls short as malicious programs become more advanced and learn to
obfuscate their binaries while executing the same malicious functions, making
static analysis extremely difficult for newer variants. The
approach assessed in this paper is a novel dynamic malware analysis method,
which may generalize better than static analysis to newer variants. Inspired by
recent successes in Natural Language Processing (NLP), widely used document
classification techniques were assessed in detecting malware by doing such
analysis on system calls, which contain useful information about the operation
of a program as requests that the program makes of the kernel. Features
considered are extracted from system call traces of benign and malicious
programs, and the task to classify these traces is treated as a binary document
classification task of system call traces. The system call traces were
processed to remove the parameters to only leave the system call function
names. The features were grouped into various n-grams and weighted with Term
Frequency-Inverse Document Frequency. This paper shows that Linear Support
Vector Machines (SVM) optimized by Stochastic Gradient Descent and the
traditional Coordinate Descent on the Wolfe Dual form of the SVM are effective
in this approach, achieving up to 96% accuracy with a 95% recall score.
Additional contributions include the identification of significant system call
sequences that could be avenues for further research.
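The featurization described above (call-name n-grams weighted by TF-IDF) can be sketched in a minimal form; the toy traces, the bigram choice, and the smoothed-IDF formula are illustrative stand-ins, not the paper's exact pipeline:

```python
# Minimal sketch of the featurization step: system-call traces reduced to
# call names, grouped into n-grams, weighted by TF-IDF.  Toy data only.
import math
from collections import Counter

def ngrams(trace, n=2):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def tfidf(traces, n=2):
    """Return one {ngram: weight} dict per trace (raw tf x smoothed idf)."""
    docs = [Counter(ngrams(t, n)) for t in traces]
    N = len(docs)
    df = Counter(g for d in docs for g in d)
    idf = {g: math.log((1 + N) / (1 + df[g])) + 1 for g in df}
    return [{g: c * idf[g] for g, c in d.items()} for d in docs]

benign = ["open", "read", "close"]
malicious = ["open", "read", "mprotect", "execve"]
features = tfidf([benign, malicious])
```

The resulting weight dictionaries would then be vectorized and fed to a linear classifier such as the SVM variants discussed above; note that the bigram shared by both traces receives a lower IDF weight than the trace-specific ones.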
|
In this paper we consider surfaces of class $C^1$ with continuous prescribed
mean curvature in a three-dimensional contact sub-Riemannian manifold and prove
that their characteristic curves are of class $C^2$. This regularity result
also holds for critical points of the sub-Riemannian perimeter under a volume
constraint. All results are valid in the first Heisenberg group $\mathbb{H}^1$.
|
Self-supervised tasks have been utilized to build useful representations that
can be used in downstream tasks when the annotation is unavailable. In this
paper, we introduce a self-supervised video representation learning method
based on the multi-transformation classification to efficiently classify human
actions. Self-supervised learning on various transformations not only provides
richer contextual information but also enables the visual representation more
robust to the transforms. The spatio-temporal representation of the video is
learned in a self-supervised manner by classifying seven different
transformations, i.e., rotation, clip inversion, permutation, split and join
transformation, color switch, frame replacement, and noise addition. First, the seven
different video transformations are applied to video clips. Then the 3D
convolutional neural networks are utilized to extract features for clips and
these features are processed to classify the pseudo-labels. We use the learned
models in pretext tasks as the pre-trained models and fine-tune them to
recognize human actions in the downstream task. We have conducted the
experiments on UCF101 and HMDB51 datasets together with C3D and 3D Resnet-18 as
backbone networks. The experimental results show that our proposed
framework outperforms other SOTA self-supervised action recognition
approaches. The code will be made publicly available.
|
For the Fermi-Pasta-Ulam chain, an effective Hamiltonian is constructed,
describing the motion of approximate, weakly localized discrete breathers
traveling along the chain. The velocity of these moving and localized
vibrations can be estimated analytically as the group velocity of the
corresponding wave packet. The Peierls-Nabarro barrier is estimated for
strongly localized discrete breathers.
|
Supermassive black holes (with $\mathrm{M_{BH} \sim 10^9 M_{\odot}}$) are
observed in the first Gyr of the Universe, and their host galaxies are found to
contain unexpectedly large amounts of dust and metals. In light of the two
empirical facts, we explore the possibility of supercritical accretion and
early black hole growth occurring in dusty environments. We generalise the
concept of photon trapping to the case of dusty gas and analyse the physical
conditions leading to dust photon trapping. Examining the parameter-space
dependence, we find that the dust photon trapping regime can be more easily
realised for larger black hole masses, higher ambient gas densities, and lower
gas temperatures. The trapping of photons within the accretion flow implies
obscured active galactic nuclei (AGNs), while it may allow a rapid black hole
mass build-up at early times. We discuss the potential role of such dust photon
trapping in the supercritical growth of massive black holes in the early
Universe.
|
An inequality proved firstly by Remak and then generalized by Friedman shows
that there are only finitely many number fields with a fixed signature and
whose regulator is less than a prescribed bound. Using this inequality,
Astudillo, Diaz y Diaz, Friedman and Ramirez-Raposo succeeded in detecting all
fields with small regulators having degree less than or equal to 7. In this paper
we show that a certain upper bound for a suitable polynomial, if true, can
improve Remak-Friedman's inequality and allow a classification for some
signatures in degree 8 and better results in degree 5 and 7. The validity of
the conjectured upper bound is extensively discussed.
|
The Fractional Fourier Transform is a ubiquitous signal processing tool in
basic and applied sciences. The Fractional Fourier Transform generalizes every
property and application of the Fourier Transform. Despite the practical
importance of the discrete fractional Fourier transform, its applications in
digital communications have been elusive. The convolution property of the
discrete Fourier transform plays a vital role in designing multi-carrier
modulation systems. Here we report a closed-form affine discrete fractional
Fourier transform and we show the circular convolution property for it. The
proposed approach is versatile and generalizes the discrete Fourier transform
and can find applications in Fourier based signal processing tools.
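For reference, the classical circular convolution property of the ordinary DFT, which the abstract's affine discrete fractional Fourier transform generalizes, can be verified directly (a self-contained sketch; the signals are arbitrary examples):

```python
# Classical DFT circular-convolution property:
#   DFT(x circularly-convolved-with y)[k] = DFT(x)[k] * DFT(y)[k]
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circ_conv(x, y):
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
y = [0.5, 0.0, 1.0, 0.0]
direct = circ_conv(x, y)
via_dft = idft([a * b for a, b in zip(dft(x), dft(y))])
```

It is this property that underlies multi-carrier modulation design, and what the proposed transform extends to the fractional domain.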
|
Given an abelian group $G$, it is natural to ask whether there exists a
permutation $\pi$ of $G$ that "destroys" all nontrivial 3-term arithmetic
progressions (APs), in the sense that $\pi(b) - \pi(a) \neq \pi(c) - \pi(b)$
for every ordered triple $(a,b,c) \in G^3$ satisfying $b-a = c-b \neq 0$. This
question was resolved for infinite groups $G$ by Hegarty, who showed that there
exists an AP-destroying permutation of $G$ if and only if $G/\Omega_2(G)$ has
the same cardinality as $G$, where $\Omega_2(G)$ denotes the subgroup of all
elements in $G$ whose order divides $2$. In the case when $G$ is finite,
however, only partial results have been obtained thus far. Hegarty has
conjectured that an AP-destroying permutation of $G$ exists if $G =
\mathbb{Z}/n\mathbb{Z}$ for all $n \neq 2,3,5,7$, and together with Martinsson,
he has proven the conjecture for all $n > 1.4 \times 10^{14}$. In this paper,
we show that if $p$ is a prime and $k$ is a positive integer, then there is an
AP-destroying permutation of the elementary $p$-group
$(\mathbb{Z}/p\mathbb{Z})^k$ if and only if $p$ is odd and $(p,k) \not\in
\{(3,1),(5,1), (7,1)\}$.
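The defining condition can be checked by brute force on small cyclic groups. The sketch below (not from the paper) searches for an AP-destroying permutation of $\mathbb{Z}/n\mathbb{Z}$:

```python
# Brute-force illustration of the defining property on small groups Z/nZ.
from itertools import permutations

def destroys_all_aps(pi, n):
    """True iff pi(b)-pi(a) != pi(c)-pi(b) mod n whenever b-a = c-b != 0."""
    for a in range(n):
        for d in range(1, n):
            b, c = (a + d) % n, (a + 2 * d) % n
            if (2 * pi[b] - pi[a] - pi[c]) % n == 0:
                return False
    return True

def find_ap_destroyer(n):
    """Return an AP-destroying permutation of Z/nZ, or None if none exists."""
    return next((p for p in permutations(range(n)) if destroys_all_aps(p, n)),
                None)
```

Consistent with the excluded cases above, the search finds no such permutation for $n = 3$ or $n = 5$, while one exists already for $n = 4$.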
|
As the share of renewable generation in large power systems continues to
increase, the operation of power systems becomes increasingly challenging. The
constantly shifting mix of renewable and conventional generation leads to
largely changing dynamics, increasing the risk of blackouts. We propose to
retune the parameters of the already present controllers in the power systems
to account for the seemingly changing operating conditions. To this end, we
present an approach for fast and computationally efficient tuning of parameters
of structured controllers. The goal of the tuning is to shift system poles to a
specified region in the complex plane, e.g. for stabilization or oscillation
damping. The approach exploits singular value optimization in the frequency
domain, which enables scaling to large systems and is not limited to power
systems. The efficiency of the approach is shown on three systems of increasing
size with multiple initial parameterizations.
|
We give an exact solution for the complete distribution of component sizes in
random networks with arbitrary degree distributions. The solution tells us the
probability that a randomly chosen node belongs to a component of size s, for
any s. We apply our results to networks with the three most commonly studied
degree distributions -- Poisson, exponential, and power-law -- as well as to
the calculation of cluster sizes for bond percolation on networks, which
correspond to the sizes of outbreaks of SIR epidemic processes on the same
networks. For the particular case of the power-law degree distribution, we show
that the component size distribution itself follows a power law everywhere
below the phase transition at which a giant component forms, but takes an
exponential form when a giant component is present.
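For the Poisson case, the component-size distribution has a classical closed form: the probability that a randomly chosen node belongs to a component of size $s$ is $e^{-cs}(cs)^{s-1}/s!$ (the Borel distribution) for mean degree $c$. The quick check below is a sketch of this special case, not the general solution of the abstract; below the transition ($c < 1$) the probabilities sum to 1, consistent with the absence of a giant component:

```python
# Borel distribution: component-size probabilities for Poisson(c) degrees.
import math

def pi_s(s, c):
    """P(node lies in a component of size s), computed in log space."""
    return math.exp(-c * s + (s - 1) * math.log(c * s) - math.lgamma(s + 1))

c = 0.5
probs = [pi_s(s, c) for s in range(1, 400)]
total = sum(probs)   # subcritical: no giant component, so this sums to 1
```

Working in log space avoids floating-point overflow of $(cs)^{s-1}$ for large $s$.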
|
We conjecture an exact form for a universal ratio of four-point cluster
connectivities in the critical two-dimensional $Q$-color Potts model. We also
provide analogous results for the limit $Q\rightarrow 1$ that corresponds to
percolation where the observable has a logarithmic singularity. Our conjectures
are tested against Monte Carlo simulations showing excellent agreement for
$Q=1,2,3$.
|
In this paper, we study two challenging and less-touched problems in single
image dehazing, namely, how to make deep learning achieve image dehazing
without training on the ground-truth clean image (unsupervised) and without an
image collection (untrained). An unsupervised neural network avoids the
labor-intensive collection of hazy-clean image pairs, and an untrained model is
a ``real'' single image dehazing approach that removes haze based only on
the observed hazy image itself, with no extra images used. Motivated by the
layer disentanglement idea, we propose a novel method, called you only look
yourself (\textbf{YOLY}) which could be one of the first unsupervised and
untrained neural networks for image dehazing. In brief, YOLY employs three
joint subnetworks to separate the observed hazy image into several latent
layers, \textit{i.e.}, scene radiance layer, transmission map layer, and
atmospheric light layer. After that, these three layers are further composed
into the hazy image in a self-supervised manner. Thanks to the unsupervised and
untrained characteristics of YOLY, our method bypasses the conventional
training paradigm of deep models on hazy-clean pairs or a large-scale dataset,
thus avoiding the labor-intensive data collection and the domain shift issue.
Besides, our method also provides an effective learning-based haze transfer
solution thanks to its layer disentanglement mechanism. Extensive experiments
show the promising performance of our method in image dehazing compared with 14
methods on four databases.
|
Aligning large language model (LLM) behaviour with human intent is critical
for future AI. An important yet often overlooked aspect of this alignment is
perceptual alignment. Perceptual modalities like touch are more
multifaceted and nuanced than other sensory modalities such as vision.
This work investigates how well LLMs align with human touch experiences using
the "textile hand" task. We created a "Guess What Textile" interaction in which
participants were given two textile samples -- a target and a reference -- to
handle. Without seeing them, participants described the differences between
them to the LLM. Using these descriptions, the LLM attempted to identify the
target textile by assessing similarity within its high-dimensional embedding
space. Our results suggest that a degree of perceptual alignment exists, but
that it varies significantly among different textile samples. For example, LLM
predictions are well aligned for silk satin, but not for cotton denim.
Moreover, participants did not perceive the LLM predictions as closely
matching their textile experiences. This is only a first exploration into
perceptual alignment around touch, exemplified through textile hand. We discuss
possible sources of this alignment variance, and how better human-AI perceptual
alignment can benefit future everyday tasks.
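The matching step described above can be illustrated with a toy cosine-similarity sketch; the three-dimensional vectors and the particular textile names are invented stand-ins for the LLM's high-dimensional embedding space:

```python
# Hypothetical sketch: pick the textile whose embedding is most
# cosine-similar to the embedded verbal description.  All vectors invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

textiles = {
    "silk satin":   [0.9, 0.1, 0.2],
    "cotton denim": [0.1, 0.8, 0.4],
    "wool tweed":   [0.2, 0.3, 0.9],
}
description = [0.85, 0.15, 0.25]  # e.g. "smoother and glossier than the reference"

best = max(textiles, key=lambda name: cosine(description, textiles[name]))
```

Alignment variance of the kind reported above would show up here as descriptions whose embeddings land nearest the wrong textile.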
|
A new version of the alternating directions implicit (ADI) iteration for the
solution of large-scale Lyapunov equations is introduced. It generalizes the
hitherto existing iteration, by incorporating tangential directions in the way
they are already available for rational Krylov subspaces. Additionally, first
strategies to adaptively select shifts and tangential directions in each
iteration are presented. Numerical examples emphasize the potential of the new
results.
|
We analyse the entropy properties in the proton-proton 1800 GeV events from
the PYTHIA/JETSET Monte Carlo generator following a recent proposal concerning
the measurement of entropy in multiparticle systems. The dependence on the
number of bins and on the size of the phase-space region is investigated. Our
results may serve as a reference sample for experimental data from
hadron-hadron and heavy ion collisions.
|
We show that the cohomology of a rank 1 local system on the complement of a
projective hyperplane arrangement can be calculated by the Aomoto complex in
certain cases even if the condition on the sum of the residues of connection
due to Esnault et al. is not satisfied. For this we have to study the
localization of Hodge-logarithmic differential forms which are defined by using
an embedded resolution of singularities. As an application we can compute
certain monodromy eigenspaces of the first Milnor cohomology group of the
defining polynomial of the reflection hyperplane arrangement of type $G_{31}$
without using a computer.
|
Graph-walking automata (GWA) traverse graphs by moving between the nodes
following the edges, using a finite-state control to decide where to go next.
It is known that every GWA can be transformed to a GWA that halts on every
input, to a GWA returning to the initial node in order to accept, and to a
reversible GWA. This paper establishes lower bounds on the state blow-up of
these transformations, as well as closely matching upper bounds. It is shown
that making an $n$-state GWA traversing $k$-ary graphs halt on every input
requires at most $2nk+1$ states and at least $2(n-1)(k-3)$ states in the worst
case; making a GWA return to the initial node before acceptance takes at most
$2nk+n$ and at least $2(n-1)(k-3)$ states in the worst case; automata
satisfying both properties at once have at most $4nk+1$ and at least
$4(n-1)(k-3)$ states in the worst case. Reversible automata have at most
$4nk+1$ and at least $4(n-1)(k-3)-1$ states in the worst case.
|
A simple graph $G$ is said to admit an antimagic orientation if there exist
an orientation on the edges of $G$ and a bijection from $E(G)$ to
$\{1,2,\ldots,|E(G)|\}$ such that the vertex sums of vertices are pairwise
distinct, where the vertex sum of a vertex is defined to be the sum of the
labels of the in-edges minus that of the out-edges incident to the vertex. It
was conjectured by Hefetz, M\"{u}tze, and Schwartz~\cite{HMS10} in 2010 that
every connected simple graph admits an antimagic orientation.
In this paper, we prove that the Mycielski construction and the corona
product, applied to graphs satisfying certain conditions, yield graphs
satisfying the above conjecture.
|
We concentrate on reducing the overhead of comparison-based heaps with
optimal worst-case behaviour. The paper is inspired by Strict Fibonacci Heaps
[1], where G. S. Brodal, G. Lagogiannis, and R. E. Tarjan implemented a heap
with the DecreaseKey and Meld interface in asymptotically optimal worst-case
times (based on key comparisons). In the paper [2], these ideas were elaborated
and it was shown that the same asymptotic times could be achieved with a
strategy losing much less information from previous comparisons. Maintaining
the violation lists in these heaps incurs a large overhead. We propose a simple
alternative that reduces this overhead. It allows us to implement fast
amortized Fibonacci heaps in which the user can call some methods in variants
with worst-case time guarantees. If the user does so, the heaps are not
guaranteed to be Fibonacci until an amortized version of a method is called. Of
course, one could call the worst-case versions all the time, but as the
guarantee carries an overhead, calling the amortized versions is the preferred
choice when we are not concerned with the complexity of a single operation.
We show that we can implement the full DecreaseKey-Meld interface, but Meld
is not natural for these heaps, so if Meld is not needed, a much simpler
implementation suffices. As we know of no application requiring Meld, we
concentrate on the no-Meld variant, but we will show that the changes can be
applied to the Meld-supporting variant as well. The papers [1], [2] showed that
the heaps can be implemented on the pointer machine model. For fast practical
implementations we would rather use arrays. Our goal is to reduce the number of
pointer manipulations; maintaining ranks by pointers to rank lists would be
unnecessary overhead.
|
Different models of dark matter can alter the distribution of mass in galaxy
clusters in a variety of ways. However, so can uncertain astrophysical feedback
mechanisms. Here we present a Machine Learning method that ``learns'' how the
impact of dark matter self-interactions differs from that of astrophysical
feedback in order to break this degeneracy and make inferences on dark matter.
We train a Convolutional Neural Network on images of galaxy clusters from
hydro-dynamic simulations. In the idealised case our algorithm is 80% accurate
at identifying if a galaxy cluster harbours collisionless dark matter, dark
matter with ${\sigma}_{\rm DM}/m = 0.1$cm$^2/$g or with ${\sigma}_{DM}/m =
1$cm$^2$/g. Whilst we find adding X-ray emissivity maps does not improve the
performance in differentiating collisional dark matter, it does improve the
ability to disentangle different models of astrophysical feedback. We include
noise to resemble data expected from Euclid and Chandra and find our model has
a statistical error of < 0.01cm$^2$/g and that our algorithm is insensitive to
shape measurement bias and photometric redshift errors. This method represents
a new way to analyse data from upcoming telescopes that is an order of
magnitude more precise and many orders faster, enabling us to explore the dark
matter parameter space like never before.
|
It is known that, for each real number x such that 1,x,x^2 are linearly
independent over Q, the uniform exponent of simultaneous approximation to
(1,x,x^2) by rational numbers is at most (sqrt{5}-1)/2 (approximately 0.618)
and that this upper bound is best possible. In this paper, we study the
analogous problem for Q-linearly independent triples (1,x,x^3), and show that,
for these, the uniform exponent of simultaneous approximation by rational
numbers is at most 2(9+sqrt{11})/35 (approximately 0.7038). We also establish
general properties of the sequence of minimal points attached to such triples
that are valid for smaller values of the exponent.
|
Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces
(RKHS) provide a very general framework for data analysis, modeling and
learning in a variety of fields. Discrete, noisy scattered, direct and indirect
observations can be accommodated with multiple inputs and multiple possibly
correlated outputs and a variety of meaningful structures. The purpose of this
paper is to give a brief overview of the approach and describe and contrast a
series of applications, while noting some recent results.
|
The works of [Cha-DunAlvInoNieCarFieLaw,Cha-Dun] describe upward sweeps in
populations of city-states and attempt to characterize this phenomenon. The
model proposed in [TurKor,Tur] describes how the population, state
resources and internal conflict influence each other over time. We show that
one can obtain an upward sweep in the population by altering particular
parameters of the system of differential equations constituting the model given
in [TurKor,Tur]. Moreover, we show that such a system has an unstable critical
point and propose an approach for determining bifurcation points in the
parameter space of the model.
|
In high dimensional settings where a small number of regressors are expected
to be important, the Lasso estimator can be used to obtain a sparse solution
vector with the expectation that most of the non-zero coefficients are
associated with true signals. While several approaches have been developed to
control the inclusion of false predictors with the Lasso, these approaches are
limited by relying on asymptotic theory, having to empirically estimate terms
based on theoretical quantities, assuming a continuous response class with
Gaussian noise and design matrices, or high computation costs. In this paper we
show how: (1) an existing model (the SQRT-Lasso) can be recast as a method of
controlling the number of expected false positives, (2) a similar estimator
can be used for all other generalized linear model classes, and (3) this approach
can be fit with existing fast Lasso optimization solvers. Our justification for
false positive control using randomly weighted self-normalized sum theory is to
our knowledge novel. Moreover, our estimator's properties hold in finite
samples up to some approximation error which we find in practical settings to
be negligible under a strict mutual incoherence condition.
|
On top of machine learning models, uncertainty quantification (UQ) functions
as an essential layer of safety assurance that could lead to more principled
decision making by enabling sound risk assessment and management. The safety
and reliability improvement of ML models empowered by UQ has the potential to
significantly facilitate the broad adoption of ML solutions in high-stakes
decision settings, such as healthcare, manufacturing, and aviation, to name a
few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods
for ML models with a particular focus on neural networks and the applications
of these UQ methods in tackling engineering design as well as prognostics and
health management problems. Toward this goal, we start with a comprehensive
classification of uncertainty types, sources, and causes pertaining to UQ of ML
models. Next, we provide a tutorial-style description of several
state-of-the-art UQ methods: Gaussian process regression, Bayesian neural
network, neural network ensemble, and deterministic UQ methods focusing on
spectral-normalized neural Gaussian process. Established upon the mathematical
formulations, we subsequently examine the soundness of these UQ methods
quantitatively and qualitatively (by a toy regression example) to highlight their
strengths and shortcomings from different dimensions. Then, we review
quantitative metrics commonly used to assess the quality of predictive
uncertainty in classification and regression problems. Afterward, we discuss
the increasingly important role of UQ of ML models in solving challenging
problems in engineering design and health prognostics. Two case studies with
source codes available on GitHub are used to demonstrate these UQ methods and
compare their performance in the life prediction of lithium-ion batteries at
the early stage and the remaining useful life prediction of turbofan engines.
|
The multiview variety of an arrangement of cameras is the Zariski closure of
the images of world points in the cameras. The prime vanishing ideal of this
complex projective variety is called the multiview ideal. We show that the
bifocal and trifocal polynomials from the cameras generate the multiview ideal
when the foci are distinct. In the computer vision literature, many sets of
(determinantal) polynomials have been proposed to describe the multiview
variety. We establish precise algebraic relationships between the multiview
ideal and these various ideals. When the camera foci are noncoplanar, we prove
that the ideal of bifocal polynomials saturates to give the multiview ideal.
Finally, we prove that all the ideals we consider coincide when dehomogenized,
to cut out the space of finite images.
|
In this paper, we present a relay-selection strategy for multi-way
cooperative multi-antenna systems that are aided by a central processor node,
where a cluster formed by two users is selected to simultaneously transmit to
each other with the help of relays. In particular, we present a novel multi-way
relay selection strategy based on the selection of the best link, exploiting
the use of buffers and physical-layer network coding, that is called Multi-Way
Buffer-Aided Max-Link (MW-Max-Link). We compare the proposed MW-Max-Link to
existing techniques in terms of bit error rate, pairwise error probability, sum
rate and computational complexity. Simulations are then employed to evaluate
the performance of the proposed and existing techniques.
|
RGB-Event based tracking is an emerging research topic, focusing on how to
effectively integrate heterogeneous multi-modal data (synchronized exposure
video frames and asynchronous pulse Event stream). Existing works typically
employ Transformer based networks to handle these modalities and achieve decent
accuracy through input-level or feature-level fusion on multiple datasets.
However, these trackers require significant memory consumption and
computational complexity due to the use of self-attention mechanism. This paper
proposes a novel RGB-Event tracking framework, Mamba-FETrack, based on the
State Space Model (SSM) to achieve high-performance tracking while effectively
reducing computational costs and realizing more efficient tracking.
Specifically, we adopt two modality-specific Mamba backbone networks to extract
the features of RGB frames and Event streams. Then, we also propose to boost
the interactive learning between the RGB and Event features using the Mamba
network. The fused features will be fed into the tracking head for target
object localization. Extensive experiments on FELT and FE108 datasets fully
validated the efficiency and effectiveness of our proposed tracker.
Specifically, our Mamba-based tracker achieves 43.5/55.6 on the SR/PR metric,
while the ViT-S based tracker (OSTrack) obtains 40.0/50.9. The GPU memory cost
of our tracker versus the ViT-S based tracker is 13.98GB versus 15.44GB, a
decrease of about $9.5\%$. The FLOPs and parameters of ours versus ViT-S based
OSTrack are 59G/1076G and 7M/60M, decreases of about $94.5\%$ and $88.3\%$,
respectively. We
hope this work can bring some new insights to the tracking field and greatly
promote the application of the Mamba architecture in tracking. The source code
of this work will be released on
\url{https://github.com/Event-AHU/Mamba_FETrack}.
|
Assume that we observe a sample of size n composed of p-dimensional signals,
each signal having independent entries drawn from a scaled Poisson distribution
with an unknown intensity. We are interested in estimating the sum of the n
unknown intensity vectors, under the assumption that most of them coincide with
a given 'background' signal. The number s of p-dimensional signals different
from the background signal plays the role of sparsity and the goal is to
leverage this sparsity assumption in order to improve the quality of estimation
as compared to the naive estimator that computes the sum of the observed
signals. We first introduce the group hard thresholding estimator and analyze
its mean squared error measured by the squared Euclidean norm. We establish a
nonasymptotic upper bound showing that the risk is at most of the order of
{\sigma}^2 (sp + s^2 sqrt(p)) log^{3/2}(np). We then establish lower bounds on the
minimax risk over a properly defined class of collections of s-sparse signals.
These lower bounds match with the upper bound, up to logarithmic terms, when
the dimension p is fixed or of larger order than s^2. In the case where the
dimension p increases but remains of smaller order than s^2, our results show a
gap between the lower and the upper bounds, which can be up to order sqrt(p).
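A noiseless toy sketch of a group hard thresholding estimator of the type analyzed above (our illustrative reading, with made-up background, signals, and threshold, and without the Poisson noise of the actual model):

```python
# Toy group hard thresholding: keep a signal's deviation from the known
# background only when the deviation's Euclidean norm exceeds a threshold.
import math

def group_ht_sum(signals, background, tau):
    """Estimate the sum of intensity vectors: n * background + large deviations."""
    n = len(signals)
    est = [n * b for b in background]
    for x in signals:
        dev = [xi - bi for xi, bi in zip(x, background)]
        if math.sqrt(sum(d * d for d in dev)) > tau:
            est = [e + d for e, d in zip(est, dev)]
    return est

background = [1.0, 2.0, 1.0]
signals = [background[:] for _ in range(8)]
signals[3] = [5.0, 2.0, 1.0]   # a single sparse departure from the background
est = group_ht_sum(signals, background, tau=0.5)
naive = [sum(col) for col in zip(*signals)]  # naive estimator: plain sum
```

In this noiseless case both estimators recover the true sum exactly; the gain of thresholding appears only once noise is added to the non-departing signals, which the toy omits.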
|
Beamed gamma-ray burst (GRB) sources produce a bow shock in their gaseous
environment. The emitted flux from this bow shock may dominate over the direct
emission from the jet for lines of sight which are outside the angular radius
of the jet emission, theta. The event rate for these lines of sight is
increased by a factor of 260 (theta/5 degrees)^{-2}. For typical GRB
parameters, we find that the bow shock emission from a jet with half-angle of
about 5 degrees is visible out to tens of Mpc in the radio and hundreds of Mpc
in the X-rays. If GRBs are linked to supernovae, studies of peculiar supernovae
in the local universe should reveal this non-thermal bow shock emission for
weeks to months following the explosion.
|
Two types of dynamics, chaotic and monotone, are compared. It is shown that
monotone maps in strongly ordered spaces do not have chaotic attracting sets.
|
We present Voevodsky's construction of a model of univalent type theory in
the category of simplicial sets.
To this end, we first give a general technique for constructing categorical
models of dependent type theory, using universes to obtain coherence. We then
construct a (weakly) universal Kan fibration, and use it to exhibit a model in
simplicial sets. Lastly, we introduce the Univalence Axiom, in several
equivalent formulations, and show that it holds in our model.
As a corollary, we conclude that Martin-L\"of type theory with one univalent
universe (formulated in terms of contextual categories) is at least as
consistent as ZFC with two inaccessible cardinals.
|
This is part of a series of papers describing the new curve integral
formalism for scattering amplitudes of the colored scalar tr$\phi^3$ theory. We
show that the curve integral manifests a very surprising fact about these
amplitudes: the dependence on the number of particles, $n$, and the loop order,
$L$, is effectively decoupled. We derive the curve integrals at tree-level for
all $n$. We then show that, for higher loop-order, it suffices to study the
curve integrals for $L$-loop tadpole-like amplitudes, which have just one
particle per color trace-factor. By combining these tadpole-like formulas with
the tree-level result, we find formulas for the all-$n$ amplitudes at $L$
loops. We illustrate this result by giving explicit curve integrals for all the
amplitudes in the theory, including the non-planar amplitudes, through to two
loops, for all $n$.
|
We investigate the origin of Abelian and non-Abelian type magnetic
instabilities induced by Fermi surface mismatch between the two pairing
fermions in a non-relativistic model. The Abelian type instability occurs only
in the gapless state, and the Meissner mass squared diverges at the
gapless-gapped transition point, while the non-Abelian type instability occurs
in both gapless and gapped states without such a divergence. The non-Abelian
type instability can be cured in the strong coupling region.
|
This work makes a substantial step in the field of split computing, i.e., how
to split a deep neural network to host its early part on an embedded device and
the rest on a server. So far, potential split locations have been identified
exploiting uniquely architectural aspects, i.e., based on the layer sizes.
Under this paradigm, the efficacy of the split in terms of accuracy can be
evaluated only after having performed the split and retrained the entire
pipeline, making an exhaustive evaluation of all the plausible splitting points
prohibitive in terms of time. Here we show that not only does the architecture
of the layers matter, but so does the importance of the neurons contained
therein. A neuron is important if its gradient with respect to the correct class
decision is high. It follows that a split should be applied right after a layer
with a high density of important neurons, in order to preserve the information
flowing until then. Upon this idea, we propose Interpretable Split (I-SPLIT): a
procedure that identifies the most suitable splitting points by providing a
reliable prediction of how well this split will perform in terms of
classification accuracy, before its actual implementation. As a
further major contribution of I-SPLIT, we show that the best choice for the
splitting point on a multiclass categorization problem depends also on which
specific classes the network has to deal with. Exhaustive experiments have been
carried out on two networks, VGG16 and ResNet-50, and three datasets,
Tiny-Imagenet-200, notMNIST, and Chest X-Ray Pneumonia. The source code is
available at https://github.com/vips4/I-Split.
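The neuron-importance criterion can be illustrated on a toy network (a hedged sketch: the two-layer architecture, the weights, and the use of gated pre-activation gradients are illustrative assumptions, not the I-SPLIT implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-layer net: x -> h = relu(W1 @ x) -> scores = W2 @ h
W1 = rng.standard_normal((16, 8)) * 0.5
W2 = rng.standard_normal((3, 16)) * 0.5

def importance_density(x, correct_class):
    """Mean |d score_correct / d pre-activation| over the hidden layer.

    A layer with a high density of important neurons is, per the
    I-SPLIT heuristic, a good place to split the network.
    """
    pre = W1 @ x
    # Gradient of the correct-class score w.r.t. pre-activations;
    # the ReLU gate zeroes the contribution of inactive neurons.
    grad = W2[correct_class] * (pre > 0)
    return np.abs(grad).mean()

x = rng.standard_normal(8)
d = importance_density(x, correct_class=0)
assert d >= 0.0
```

In the real procedure this density would be computed per layer and averaged over a dataset, and the split placed right after a layer where it peaks.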
|
We study the diameter two properties in the spaces $JH$, $JT_\infty$ and
$JH_\infty$. We show that the topological dual space of the previous Banach
spaces fails every diameter two property. However, we prove that $JH$ and
$JH_{\infty}$ satisfy the strong diameter two property, and so the dual norm of
these spaces is octahedral. We also find a closed hyperplane $M$ of $JH_\infty$
whose topological dual space enjoys the $w^*$-strong diameter two property and
such that both $M$ and $M^*$ have octahedral norms.
|
We find a new class of Fuchsian equations that have algebraic geometric
solutions with a parameter belonging to a hyperelliptic curve. Methods for
calculating the algebraic genus of the curve and its branching points are
suggested. Numerous examples are given.
|
We present Warp, a hardware platform to support research in approximate
computing, sensor energy optimization, and energy-scavenged systems. Warp
incorporates 11 state-of-the-art sensor integrated circuits, computation, and
an energy-scavenged power supply, all within a miniature system that is just
3.6 cm x 3.3 cm x 0.5 cm. Warp's sensor integrated circuits together contain a
total of 21 sensors with a range of precisions and accuracies for measuring
eight sensing modalities of acceleration, angular rate, magnetic flux density
(compass heading), humidity, atmospheric pressure (elevation), infrared
radiation, ambient temperature, and color. Warp uses a combination of analog
circuits and digital control to facilitate further tradeoffs between sensor and
communication accuracy, energy efficiency, and performance. This article
presents the design of Warp and presents an evaluation of our hardware
implementation. The results show how Warp's design enables performance and
energy efficiency versus accuracy tradeoffs.
|
HETE-2 has provided new evidence that gamma-ray bursts may evolve with
redshift. We investigate the consequences of this possibility for the unified
jet model of XRFs and GRBs. We find that burst evolution with redshift can be
naturally explained within the unified jet model, and the resulting model
provides excellent agreement with existing HETE-2 and BeppoSAX data sets. In
addition, this evolution model produces reasonable fits to the BATSE peak
photon number flux distribution -- something that cannot be easily done without
redshift evolution.
|
Baikal-GVD is a large ($\sim$1 km$^3$) underwater neutrino telescope
installed in the fresh waters of Lake Baikal. The deep lake water environment
is pervaded by background light, which is detectable by Baikal-GVD's
photosensors. We introduce a neural network for an efficient separation of
these noise hits from the signal ones, stemming from the propagation of
relativistic particles through the detector. The model has a U-net-like
architecture and employs temporal (causal) structure of events. The neural
network's metrics reach up to 99\% signal purity (precision) and 96\% survival
efficiency (recall) on a Monte Carlo simulated dataset. We compare the developed
method with the algorithmic approach to rejecting the noise and discuss other
possible architectures of neural networks, including graph-based ones.
|
Wireless fingerprint-based localization has become one of the most promising
technologies for ubiquitous location-aware computing and intelligent
location-based services. However, due to RF vulnerability to environmental
dynamics over time, continuous radio map updates are time-consuming and
infeasible, resulting in severe accuracy degradation. To address this issue, we
propose a novel approach of robust localization with dynamic adversarial
learning, known as DadLoc, which realizes automatic radio map adaptation by
incorporating multiple robust factors underlying RF fingerprints to learn the
evolving feature representation with the complicated environmental dynamics.
DadLoc performs a finer-grained distribution adaptation with the developed
dynamic adversarial adaptation network and quantifies the contributions of both
global and local distribution adaptation in a dynamics-adaptive manner.
Furthermore, we adopt the strategy of prediction uncertainty suppression to
conduct source-supervised training, target-unsupervised training, and
source-target dynamic adversarial adaptation which can trade off the
environment adaptability and the location discriminability of the learned deep
representation for safe and effective feature transfer across different
environments. With extensive experimental results, the satisfactory accuracy
over other comparative schemes demonstrates that the proposed DadLoc can
facilitate fingerprint-based localization for wide deployments.
|
Strain engineering applied to carbon monosulphide monolayers allows control of
the bandgap, and hence of the electronic and thermoelectric responses. Herein,
we study the semiconductor-metal phase transition of this layered material
driven by strain on the basis of first-principles calculations. We consider
uniaxial and biaxial tensile strain and find highly anisotropic electronic and
thermoelectric responses depending on the
direction of the applied strain. Our results indicate that strain-induced
response could be an effective method to control the electronic response and
the thermoelectric performance.
|
The organic-inorganic hybrid lead trihalide perovskites have been emerging as
the most attractive photovoltaic material. As regulated by Shockley-Queisser
theory, a formidable materials science challenge for the next level improvement
requires further band gap narrowing for broader absorption in solar spectrum,
while retaining or even synergistically prolonging the carrier lifetime, a
critical factor responsible for attaining the near-band gap photovoltage.
Herein, by applying controllable hydrostatic pressure we have achieved
unprecedented simultaneous enhancement in both band gap narrowing and carrier
lifetime prolongation (up to a 70-100% increase) under mild pressures of ~0.3
GPa. The pressure-induced modulation on pure hybrid perovskites without
introducing any adverse chemical or thermal effect clearly demonstrates the
importance of band edges on the photon-electron interaction and maps a
pioneering route towards a further boost in their photovoltaic performance.
|
We propose a novel multi-component system of nonlinear equations that
generalizes the short pulse (SP) equation describing the propagation of
ultra-short pulses in optical fibers. By means of the bilinear formalism
combined with a hodograph transformation, we obtain its multi-soliton solutions
in the form of a parametric representation. Notably, unlike the determinantal
solutions of the SP equation, the proposed system is found to exhibit solutions
expressed in terms of pfaffians. The proof of the solutions is performed within
the framework of an elementary theory of determinants. The reduced 2-component
system deserves special consideration. In particular, we show by establishing
a Lax pair that the system is completely integrable. The properties of
solutions such as loop solitons and breathers are investigated in detail,
confirming their solitonic behavior. A variant of the 2-component system is
also discussed with its multisoliton solutions.
|
We consider the total variation (TV) minimization problem used for
compressive sensing and solve it using the generalized alternating projection
(GAP) algorithm. Extensive results demonstrate the high performance of the
proposed
algorithm on compressive sensing, including two dimensional images,
hyperspectral images and videos. We further derive the Alternating Direction
Method of Multipliers (ADMM) framework with TV minimization for video and
hyperspectral image compressive sensing under the CACTI and CASSI framework,
respectively. Connections between GAP and ADMM are also provided.
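As a minimal illustration of the ADMM/TV machinery, here is a sketch for the simplest 1D denoising instance (the full compressive-sensing setting with the CACTI/CASSI forward models is more involved; the penalty weight and parameters below are assumptions of this sketch):

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """ADMM for min_x 0.5||x - y||^2 + lam * ||Dx||_1 (1D total variation).

    D is the forward-difference operator. Splitting z = Dx gives the
    classic x-update (linear solve), z-update (soft threshold), and
    dual update.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1, n) difference matrix
    A = np.eye(n) + rho * D.T @ D            # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u += Dx - z
    return x

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 5.0, 2.0], 40)       # piecewise-constant signal
y = truth + 0.5 * rng.standard_normal(truth.size)
x = tv_denoise_admm(y, lam=2.0)
assert np.abs(x - truth).mean() < np.abs(y - truth).mean()
```

Replacing the identity data-fit term with a sensing operator turns this into the compressive-sensing problem; the three-step structure of the iteration is unchanged.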
|
The sparse group lasso optimization problem is solved using a coordinate
gradient descent algorithm. The algorithm is applicable to a broad class of
convex loss functions. Convergence of the algorithm is established, and the
algorithm is used to investigate the performance of the multinomial sparse
group lasso classifier. On three different real data examples the multinomial
group lasso clearly outperforms multinomial lasso in terms of achieved
classification error rate and in terms of including fewer features for the
classification. The run-time of our sparse group lasso implementation is of the
same order of magnitude as the multinomial lasso algorithm implemented in the R
package glmnet. Our implementation scales well with the problem size. One of
the high dimensional examples considered is a 50 class classification problem
with 10k features, which amounts to estimating 500k parameters. The
implementation is available as the R package msgl.
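The sparse group lasso penalty admits a simple proximal operator, which the following hedged sketch exploits via proximal gradient descent (the paper uses coordinate gradient descent and the R package msgl; the data, groups, and regularization parameters here are illustrative):

```python
import numpy as np

def sgl_prox(b, groups, t, lam1, lam2):
    """Prox of t*(lam1*||b||_1 + lam2*sum_g ||b_g||_2):
    elementwise soft threshold, then groupwise shrinkage."""
    b = np.sign(b) * np.maximum(np.abs(b) - t * lam1, 0.0)
    out = b.copy()
    for g in groups:
        nrm = np.linalg.norm(b[g])
        out[g] = 0.0 if nrm <= t * lam2 else b[g] * (1 - t * lam2 / nrm)
    return out

def sparse_group_lasso(X, y, groups, lam1=0.1, lam2=0.1, iters=500):
    """Proximal gradient for
    0.5/n ||y - Xb||^2 + lam1 ||b||_1 + lam2 sum_g ||b_g||_2."""
    n, p = X.shape
    t = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # step = 1 / Lipschitz const.
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        b = sgl_prox(b - t * grad, groups, t, lam1, lam2)
    return b

rng = np.random.default_rng(3)
n, p = 100, 12
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
X = rng.standard_normal((n, p))
b_true = np.zeros(p)
b_true[:4] = [3.0, -2.0, 1.5, 2.5]            # only the first group is active
y = X @ b_true + 0.1 * rng.standard_normal(n)
b = sparse_group_lasso(X, y, groups)
assert np.abs(b[4:]).max() < 1e-8             # inactive groups zeroed exactly
```

The composite prox (lasso threshold followed by group shrinkage) is what produces sparsity both within and across groups, mirroring the classifier's feature selection.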
|
This work produces a q-analogue of the Cauchy-Szeg\"o integral representation
that retrieves a holomorphic function in the matrix ball from its values on the
Shilov boundary. Besides that, the Shilov boundary of the quantum matrix ball
is described and the U_q su(m,n)-covariance of the U_q s(u(m)x u(n))-invariant
integral on this boundary is established. The latter result allows one to
obtain a q-analogue for the principal degenerate series of unitary
representations related to the Shilov boundary of the matrix ball.
|
In the present work we elaborate an innovative design for a solar air heater,
justify it by a Computational Fluid Dynamics (CFD) simulation, and implement
and experimentally test a sample. We propose to use this device to maintain
constant ambient conditions for thermal comfort and low energy consumption in
indoor environments, inside greenhouses and passive houses, and to protect
buildings against temperature fluctuations. We tested the
functionality of our sample of the solar air heater for 50 weeks and obtained
an agreement between the results of the numerical simulation, implemented using
OpenFOAM (an open source numerical CFD software) and the experimental results.
|
We show that (central) Cowling-Haagerup constant of discrete quantum groups
is multiplicative, which extends the result of Freslon to general (not
necessarily unimodular) discrete quantum groups. The crucial feature of our
approach is considering algebras $\mathrm{C}(\mathbb{G}),
\operatorname{L}^{\infty}(\mathbb{G})$ as operator modules over
$\operatorname{L}^1(\mathbb{G})$.
|
Randomized smoothing is a popular certified defense against adversarial
attacks. In its essence, we need to solve a problem of statistical estimation
which is usually very time-consuming since we need to perform numerous (usually
$10^5$) forward passes of the classifier for every point to be certified. In
this paper, we review the statistical estimation problems for randomized
smoothing to find out if the computational burden is necessary. In particular,
we consider the (standard) task of adversarial robustness where we need to
decide if a point is robust at a certain radius or not using as few samples as
possible while maintaining statistical guarantees. We present estimation
procedures employing confidence sequences enjoying the same statistical
guarantees as the standard methods, with the optimal sample complexities for
the estimation task and empirically demonstrate their good performance.
Additionally, we provide a randomized version of Clopper-Pearson confidence
intervals resulting in strictly stronger certificates.
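The Clopper-Pearson lower bound at the heart of such certificates can be computed exactly by bisection on the binomial upper tail (a sketch of the standard, non-randomized interval using only the standard library; the randomized variant proposed in the abstract is not reproduced here):

```python
from math import comb

def binom_sf(k, n, p):
    """P[Bin(n, p) >= k], computed exactly."""
    if p <= 0.0:
        return 0.0 if k > 0 else 1.0
    if p >= 1.0:
        return 1.0
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def clopper_pearson_lower(k, n, alpha=1e-3):
    """One-sided Clopper-Pearson lower confidence bound for a binomial
    proportion, found by bisection on the exact upper tail."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_sf(k, n, mid) < alpha:
            lo = mid        # mid is still excluded: true p must be larger
        else:
            hi = mid
    return lo

# Certify robustness when the lower bound on the top-class probability
# exceeds 1/2 (the binary-case smoothing condition).
k, n = 950, 1000
p_lo = clopper_pearson_lower(k, n)
certified = p_lo > 0.5
assert certified
```

Confidence sequences replace the fixed sample size n with a stopping rule, which is how the sample complexity gains described above are obtained.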
|
We have used GALEX observations of the North and South Galactic poles to
study the diffuse ultraviolet background at locations where the Galactic light
is expected to be at a minimum. We find offsets of 230 -- 290 photon units in
the FUV (1531 \AA) and 480 -- 580 photon units in the NUV (2361 \AA). Of this,
approximately 120 photon units can be ascribed to dust scattered light and
another 110 (190 in the NUV) photon units to extragalactic radiation. The
remaining radiation is, as yet, unidentified and amounts to 120 -- 180 photon
units in the FUV and 300 -- 400 photon units in the NUV. We find that
molecular hydrogen fluorescence contributes to the FUV when the 100 $\mu$m
surface brightness is greater than 1.08 MJy sr$^{-1}$.
|
We classify those manifolds mentioned in the title which have finite
topological type. Namely, we show that any such connected manifold M is
isomorphic to a
hyperkaehler quotient of a flat quaternionic vector space by an abelian group.
We also show that a compact connected and simply connected 3-Sasakian
manifold of dimension 4n-1 whose isometry group has rank n+1 is isometric to a
3-Sasakian quotient of a sphere by a torus. As a corollary, a compact connected
quaternion-Kaehler 4n-manifold with positive scalar curvature and isometry
group of rank n+1 is isometric to the quaternionic projective space or the
complex Grassmannian.
|
On an infinite base set X, every ideal of subsets of X can be associated with
the clone of those operations on X which map small sets to small sets. We
continue earlier investigations on the position of such clones in the clone
lattice.
|
We construct an invariant of closed oriented $3$-manifolds using a finite
dimensional, involutory, unimodular and counimodular Hopf algebra $H$. We use
the framework of normal o-graphs introduced by R. Benedetti and C. Petronio, in
which one can represent a branched ideal triangulation via an oriented virtual
knot diagram. We assign a copy of a canonical element of the Heisenberg double
$\mathcal{H}(H)$ of $H$ to each real crossing, which represents a branched
ideal tetrahedron. The invariant takes values in the cyclic quotient
$\mathcal{H}(H)/{[\mathcal{H}(H),\mathcal{H}(H)]}$, which is isomorphic to the
base field. In the construction we use only the canonical element and structure
constants of $H$ and we do not use any representations of $H$. This, together
with the finiteness and locality conditions of the moves for normal o-graphs,
makes the calculation of our invariant rather simple and easy to understand.
When $H$ is the group algebra of a finite group, the invariant counts the
number of group homomorphisms from the fundamental group of the $3$-manifold to
the group.
|
The need for surface-localized thermal processing is increasing strongly,
especially for three-dimensionally (3D) integrated electrical devices. UV laser
annealing (UV-LA) technology addresses this challenge well. In particular,
UV-LA can reduce resistivity by enlarging metallic grains in lines or thin
films, irradiating only the interconnects for short timescales. However, the
risk of failure in electrical performance must be correctly managed, and for
UV-LA it has not yet been studied in depth. In this work microsecond-scale UV-LA
is applied on a stack comparable to an interconnect structure
(dielectric/Cu/Ta/SiO2/Si) in either melt or sub-melt regime for grain growth.
The failure modes such as (i) Cu diffusion into SiO2, (ii) O incorporation into
Cu, and (iii) intermixing between Cu and Ta are investigated.
|
Determining the readability of a text is the first step to its
simplification. In this paper, we present a readability analysis tool capable
of analyzing text written in the Bengali language to provide in-depth
information on its readability and complexity. Despite being the 7th most
spoken language in the world with 230 million native speakers, Bengali suffers
from a lack of fundamental resources for natural language processing.
Readability related research of the Bengali language so far can be considered
to be narrow and sometimes faulty due to the lack of resources. Therefore, we
carefully adapt document-level readability formulas traditionally used in the
U.S. education system to the Bengali language, with a proper age-to-age
comparison. Due to the unavailability of large-scale human-annotated corpora,
we further divide the document-level task into sentence-level and experiment
with neural architectures, which will serve as a baseline for the future works
of Bengali readability prediction. During the process, we present several
human-annotated corpora and dictionaries such as a document-level dataset
comprising 618 documents with 12 different grade levels, a large-scale
sentence-level dataset comprising more than 96K sentences with simple and
complex labels, a consonant conjunct count algorithm and a corpus of 341 words
to validate the effectiveness of the algorithm, a list of 3,396 easy words, and
an updated pronunciation dictionary with more than 67K words. These resources
can be useful for several other tasks of this low-resource language. We make
our Code & Dataset publicly available at
https://github.com/tafseer-nayeem/BengaliReadability for reproducibility.
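As a hedged illustration of one of the released resources, a consonant conjunct counter can be sketched from the Unicode encoding of Bengali (this is not the paper's algorithm, which the abstract does not specify; the consonant range used is an assumption):

```python
# Conjuncts in Bengali are encoded as consonant + virama (hasanta,
# U+09CD) + consonant, so counting viramas flanked by consonants gives
# a simple conjunct count.

BENGALI_CONSONANTS = {chr(c) for c in range(0x0995, 0x09BA)}  # ka..ha block
VIRAMA = "\u09cd"

def count_conjuncts(text):
    count = 0
    for i, ch in enumerate(text):
        if (ch == VIRAMA
                and 0 < i < len(text) - 1
                and text[i - 1] in BENGALI_CONSONANTS
                and text[i + 1] in BENGALI_CONSONANTS):
            count += 1
    return count

# "বিশ্ব" ('world') contains the conjunct শ্ব (sha + virama + ba).
assert count_conjuncts("\u09ac\u09bf\u09b6\u09cd\u09ac") == 1
```

Conjunct density is one of the complexity signals a readability formula for Bengali can draw on, alongside sentence and word length.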
|
I propose a frequency domain adaptation of the Expectation Maximization (EM)
algorithm to group a family of time series into classes of similar dynamic
structure. It does this by viewing the magnitude of the discrete Fourier
transform (DFT) of each signal (or power spectrum) as a probability
density/mass function (pdf/pmf) on the unit circle: signals with similar
dynamics have similar pdfs; distinct patterns have distinct pdfs. An advantage
of this approach is that it does not rely on any parametric form of the dynamic
structure, but can be used for non-parametric, robust and model-free
classification. This new method works for non-stationary signals of similar
shape as well as stationary signals with similar auto-correlation structure.
Applications to neural spike sorting (non-stationary) and pattern-recognition
in socio-economic time series (stationary) demonstrate the usefulness and wide
applicability of the proposed method.
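The key representation, the normalized power spectrum viewed as a pmf, can be sketched as follows (a simplified illustration comparing spectra by total variation distance, rather than the full frequency-domain EM procedure):

```python
import numpy as np

def spectral_pmf(x):
    """Normalized power spectrum of a signal, viewed as a pmf over
    DFT frequencies (the representation described above)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return power / power.sum()

def tv_distance(p, q):
    """Total variation distance between two pmfs."""
    return 0.5 * np.abs(p - q).sum()

t = np.arange(256) / 256.0
rng = np.random.default_rng(4)
a = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(256)
b = np.sin(2 * np.pi * 5 * t + 1.0) + 0.1 * rng.standard_normal(256)  # same dynamics
c = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(256)       # different
pa, pb, pc = map(spectral_pmf, (a, b, c))
# Signals with the same dominant frequency have close pmfs; distinct
# patterns are far apart, regardless of phase.
assert tv_distance(pa, pb) < tv_distance(pa, pc)
```

Because only the spectrum's magnitude is used, the comparison is phase-invariant and requires no parametric model of the dynamics.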
|
Second order circularity, also called properness, for complex random
variables is a well known and studied concept. In the case of quaternion random
variables, some extensions have been proposed, leading to applications in
quaternion signal processing (detection, filtering, estimation). Just like in
the complex case, circularity for a quaternion-valued random variable is
related to the symmetries of its probability density function. As a
consequence, properness of quaternion random variables should be defined with
respect to the most general isometries in $4D$, i.e. rotations from $SO(4)$.
Based on this idea, we propose a new definition of properness, namely the
$(\mu_1,\mu_2)$-properness, for quaternion random variables using invariance
property under the action of the rotation group $SO(4)$. This new definition
generalizes previously introduced properness concepts for quaternion random
variables. A second order study is conducted and symmetry properties of the
covariance matrix of $(\mu_1,\mu_2)$-proper quaternion random variables are
presented. Comparisons with previous definitions are given and simulations
illustrate in a geometric manner the newly introduced concept.
|
Low-mass cluster galaxies are the most common galaxy type in the universe and
are at a cornerstone of our understanding of galaxy formation, cluster
luminosity functions, dark matter and the formation of large scale structure. I
describe in this summary recent observational results concerning the properties
and likely origins of low-mass galaxies in clusters and the implications of
these findings in broader galaxy formation issues.
|
We provide quantitative bounds on the characterisation of multiparticle
separable states by states that have locally symmetric extensions. The bounds
are derived from two-particle bounds and relate to recent studies on quantum
versions of de Finetti's theorem. We discuss algorithmic applications of our
results, in particular a quasipolynomial-time algorithm to decide whether a
multiparticle quantum state is separable or entangled (for constant number of
particles and constant error in the LOCC or Frobenius norm). Our results
provide a theoretical justification for the use of the Search for Symmetric
Extensions as a practical test for multiparticle entanglement.
|
Given a thick subcategory of a triangulated category, we define a
colocalisation and a natural long exact sequence that involves the original
category and its localisation and colocalisation at the subcategory. Similarly,
we construct a natural long exact sequence containing the canonical map between
a homological functor and its total derived functor with respect to a thick
subcategory.
|
Observational evidence shows that low-redshift galaxies are surrounded by
extended haloes of multiphase gas, the so-called 'circumgalactic medium' (CGM).
To study the survival of relatively cool gas (T < 10^5 K) in the CGM, we
performed a set of hydrodynamical simulations of cold (T = 10^4 K) neutral gas
clouds travelling through a hot (T = 2x10^6 K) and low-density (n = 10^-4
cm^-3) coronal medium, typical of Milky Way-like galaxies at large
galactocentric distances (~ 50-150 kpc). We explored the effects of different
initial values of relative velocity and radius of the clouds. Our simulations
were performed on a two-dimensional grid with constant mesh size (2 pc) and
they include radiative cooling, photoionization heating and thermal conduction.
We found that for large clouds (radii larger than 250 pc) the cool gas survives
for a very long time (longer than 250 Myr): although the clouds are partially
destroyed and fragmented into smaller cloudlets along their trajectory, the
total mass of cool gas decreases at very low rates. We found that thermal
conduction plays a significant role: its effect is to hinder formation of
hydrodynamical instabilities at the cloud-corona interface, keeping the cloud
compact and therefore more difficult to destroy. The distributions of column
densities extracted from our simulations are compatible with those observed for
low-temperature ions (e.g. SiII and SiIII) and for high-temperature ions (OVI)
once we take into account that OVI covers much more extended regions than the
cool gas and, therefore, it is more likely to be detected along a generic line
of sight.
|