An \emph{indeterminate string} $x = x[1..n]$ on an alphabet $\Sigma$ is a
sequence of nonempty subsets of $\Sigma$; $x$ is said to be \emph{regular} if
every subset is of size one. A proper substring $u$ of regular $x$ is said to
be a \emph{cover} of $x$ iff for every $i \in 1..n$, an occurrence of $u$ in
$x$ includes $x[i]$. The \emph{cover array} $\gamma = \gamma[1..n]$ of $x$ is
an integer array such that $\gamma[i]$ is the length of the longest cover of
$x[1..i]$.
Fifteen years ago a complex, though nevertheless linear-time, algorithm was
proposed to compute the cover array of regular $x$ based on prior computation
of the border array of $x$. In this paper we first describe a linear-time
algorithm to compute the cover array of regular string $x$ based on the prefix
table of $x$. We then extend this result to indeterminate strings.
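For illustration, here is a minimal Python sketch of the definitions above; it computes the cover array by brute force in quadratic time and is not the linear-time, prefix-table-based algorithm of the paper.

```python
def covers(u: str, x: str) -> bool:
    """True iff u is a cover of x: u is a border of x and every
    position of x lies inside some occurrence of u."""
    m, n = len(u), len(x)
    if m == 0 or m > n or not (x.startswith(u) and x.endswith(u)):
        return False
    covered = 0                       # positions 0..covered-1 are covered so far
    for i in range(n - m + 1):
        if x[i:i + m] == u:
            if i > covered:           # a gap before this occurrence
                return False
            covered = i + m
    return covered == n

def cover_array(x: str) -> list:
    """gamma[i] = length of the longest proper cover of x[:i+1], else 0."""
    gamma = [0] * len(x)
    for i in range(len(x)):
        p = x[:i + 1]
        for m in range(i, 0, -1):     # proper substrings only
            if covers(p[:m], p):
                gamma[i] = m
                break
    return gamma

print(cover_array("abababa"))  # [0, 0, 0, 2, 3, 4, 5]
```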
|
In this article, we propose noncommutative versions of the Tate and Hodge
conjectures. If we consider these conjectures for the dg-category of perfect
complexes over a scheme $X$, then they are equivalent to the classical Tate
and Hodge conjectures for $X$, respectively. We also propose a strategy for
proving these conjectures that utilizes a version of the motivic Bass
conjecture.
|
The Fluctuation Theorem describes the probability ratio of observing
trajectories that satisfy or violate the second law of thermodynamics. It has
been proved in a number of different ways for thermostatted deterministic
nonequilibrium systems. In the present paper we show that the Fluctuation
Theorem is also valid for a class of stochastic nonequilibrium systems. The
Theorem is therefore not reliant on the reversibility or the determinism of the
underlying dynamics. Numerical tests verify the theoretical result.
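For reference, one common statement of the theorem, in notation not used in the abstract (with $\bar{\Sigma}_t$ the time-averaged dissipation function over trajectory segments of duration $t$), is
$$\frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{At},$$
so second-law-violating trajectories ($A<0$) become exponentially rare as $t$ grows.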
|
A data graph is a convenient paradigm for supporting keyword search that
takes into account available semantic structure and not just textual relevance.
However, the problem of constructing data graphs that facilitate both
efficiency and effectiveness of the underlying system has hardly been
addressed. A conceptual model for this task is proposed. Principles for
constructing good data graphs are explained. Transformations for generating
data graphs from relational databases (RDB) and XML are developed. The results
obtained from these
transformations are analyzed. It is shown that XML is a better starting point
for getting a good data graph.
|
We derive strong mixing conditions for many existing discrete-valued time
series models that include exogenous covariates in their dynamics. Our main
contribution is to study how a mixing condition on the covariate process
transfers to a mixing condition for the response. Using a coupling method, we
first derive mixing conditions for some Markov chains in random environments,
which gives a first result for some autoregressive categorical processes with
strictly exogenous regressors. Our result is then extended to some infinite
memory categorical processes. In the second part of the paper, we study
autoregressive models for which the covariates are sequentially exogenous.
Using a general random mapping approach on finite sets, we get explicit mixing
conditions that can be checked for many categorical time series found in the
literature, including multinomial autoregressive processes, ordinal time series
and dynamic multiple choice models. We also study some autoregressive count
time series using a somewhat different contraction argument. Our contribution
fills an important gap for such models, presented here in a more general form,
since such a strong mixing condition is often assumed in recent works but no
general approach has been available to check it.
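As a concrete toy instance of the model class studied here, the following Python sketch simulates a binary autoregression with a strictly exogenous AR(1) covariate; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# Strictly exogenous covariate: a stationary (and strongly mixing) AR(1) process.
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.5 * z[t - 1] + rng.normal()

# Binary autoregressive response with a logit link on the lagged response
# and the covariate -- a simple dynamic categorical model.
y = np.zeros(T, dtype=int)
for t in range(1, T):
    eta = -0.2 + 0.8 * y[t - 1] + 0.5 * z[t]
    y[t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
```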
|
The dynamics of molecular collisions in a macroscopic body are encoded by the
parameter Thermodynamic entropy, a statistical measure of the number of
molecular configurations that correspond to a given macrostate. Directionality
in the flow of energy in macroscopic bodies is described by the Second Law of
Thermodynamics: In isolated systems, that is systems closed to the input of
energy and matter, thermodynamic entropy increases. The dynamics of the lower
level interactions in populations of replicating organisms is encoded by the
parameter Evolutionary entropy, a statistical measure which describes the
number and diversity of metabolic cycles in a population of replicating
organisms. Directionality in the transformation of energy in populations of
organisms is described by the Fundamental Theorem of Evolution: In systems open
to the input of energy and matter, Evolutionary entropy increases, when the
energy source is scarce and diverse, and decreases when the energy source is
abundant and singular. This article shows that as $\rho \to 0$ and $N \to
\infty$, where $\rho$ is the production rate of the external energy source and
$N$ denotes the number of replicating units, evolutionary entropy, an organized
state of energy, and thermodynamic entropy, a randomized state of energy,
coincide. Accordingly, the Fundamental Theorem of Evolution is a
generalization of the Second Law of Thermodynamics.
|
Pronunciation assessment and its application in computer-aided pronunciation
training (CAPT) have seen impressive progress in recent years. With the rapid
growth in language processing and deep learning over the past few years, there
is a need for an updated review. In this paper, we review methods employed in
pronunciation assessment at both the phonemic and prosodic levels. We
categorize the main challenges observed in prominent research trends, and
highlight existing limitations and available resources. This is followed by a
discussion of the
remaining challenges and possible directions for future work.
|
The utilization of online stochastic algorithms is popular in large-scale
learning settings due to their ability to compute updates on the fly, without
the need to store and process data in large batches. When a constant step-size
is used, these algorithms also have the ability to adapt to drifts in problem
parameters, such as data or model properties, and track the optimal solution
with reasonable accuracy. Building on analogies with the study of adaptive
filters, we establish a link between steady-state performance derived under
stationarity assumptions and the tracking performance of online learners under
random walk models. The link allows us to infer the tracking performance from
steady-state expressions directly and almost by inspection.
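The setting can be illustrated with a toy scalar LMS filter tracking a random-walk parameter under a constant step size; everything below (step size, drift and noise levels) is assumed for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, mu = 5000, 0.05                 # horizon and constant step size
w_true, w = 0.0, 0.0               # drifting parameter and online estimate
msd = np.empty(T)
for t in range(T):
    w_true += 0.01 * rng.normal()          # random-walk drift in the target
    x = rng.normal()                        # streaming regressor
    y = w_true * x + 0.1 * rng.normal()     # noisy observation
    w += mu * (y - w * x) * x               # LMS / constant-step SGD update
    msd[t] = (w - w_true) ** 2
print("steady-state tracking MSD ~", msd[T // 2:].mean())
```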
|
The decay properties of long-lived excited states (isomers) can have a
significant impact on the destruction channels of isotopes under stellar
conditions. In sufficiently hot environments, the population of isomers can be
altered via thermal excitation or de-excitation. If the corresponding lifetimes
are of the same order of magnitude as the typical time scales of the
environment, the isomers have to be treated explicitly. We present a
general approach to the treatment of isomers in stellar nucleosynthesis codes
and discuss a few illustrative examples. The corresponding code is available
online at http://exp-astro.de/isomers/
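For orientation, in full thermal equilibrium the isomer population follows the standard Boltzmann ratio (notation ours, not from the abstract), with $J_m$, $J_g$ the isomer and ground-state spins and $E_m$ the isomer excitation energy:
$$\frac{n_m}{n_g} = \frac{2J_m+1}{2J_g+1}\, e^{-E_m/kT};$$
the explicit treatment discussed here is needed precisely when the lifetimes are too long for this equilibrium to be established on the environment's time scale.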
|
Let $\mathcal{X}$ be a complex Banach space and
$A\in\mathcal{L}(\mathcal{X})$ with $\sigma(A)=\{1\}$. We prove that for a
vector $x\in \mathcal{X}$, if $\|(A^{k}+A^{-k})x\|=O(k^N)$ as $k \rightarrow
+\infty$ for some positive integer $N$, then $(A-\mathbf{I})^{N+1}x=0$ when $N$
is even and $(A-\mathbf{I})^{N+2}x=0$ when $N$ is odd. This can be seen as
a new version of the Gelfand-Hille theorem. As a corollary, we also obtain that
for a quasinilpotent operator $Q\in\mathcal{L}(\mathcal{X})$ and a vector
$x\in\mathcal{X}$, if $\|\cos(kQ)x\|=O(k^N)$ as $k \rightarrow +\infty$ for
some positive integer $N$, then $Q^{N+1}x=0$ when $N$ is even and $Q^{N+2}x=0$
when $N$ is odd.
|
Using our recent attempt to formulate the second law of thermodynamics in a
general way in a language with a probability density function, we derive
degenerate vacua. Under the assumption that many coupling constants are
effectively ``dynamical'', in the sense that they are or can be counted as
initial-state conditions, we argue within our model behind the second law that
these coupling constants will adjust to bring the effective cosmological
constants of several vacua, or, what is the same, their energy densities, to
almost the \underline{same} value, essentially zero. Such degeneracy of vacuum
energy densities is what one of us has long studied under the name ``The
multiple point principle'' (MPP).
|
In this paper, we obtain equations of motion for a wide class of
one-dimensional singularities in 2-D ideal hydrodynamics. The simplest of them
are well known as point vortices. More complicated singularities correspond to
vorticity point dipoles. It has been proved that point multipoles of a higher
order (quadrupoles and more) are not the exact solutions of two-dimensional
ideal hydrodynamics. The motion equations for a system of interacting point
vortices and point dipoles have been obtained. It is shown that these equations
are Hamiltonian and possess three integrals of motion in involution. This
implies the complete integrability of a two-particle system consisting of a
point vortex and a point dipole.
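As a point of reference, the classical point-vortex part of such systems can be integrated numerically from the standard complex-plane equations of motion; the sketch below is a generic illustration and does not include the paper's dipole terms.

```python
import numpy as np

# Standard 2-D point-vortex dynamics:
#   conj(dz_k/dt) = (1/(2*pi*i)) * sum_{j != k} Gamma_j / (z_k - z_j)
def vortex_rhs(z, gamma):
    dz = np.zeros_like(z)
    for k in range(len(z)):
        s = sum(gamma[j] / (z[k] - z[j]) for j in range(len(z)) if j != k)
        dz[k] = np.conj(s / (2j * np.pi))
    return dz

z = np.array([1.0 + 0j, -1.0 + 0j])     # two like-signed vortices
gamma = np.array([1.0, 1.0])             # they rotate about their centroid
dt = 0.01
for _ in range(1000):
    z = z + dt * vortex_rhs(z, gamma)    # forward Euler, demo only
```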
|
We strengthen and put in a broader perspective previous results of the first
two authors on colliding permutations. The key to the present approach is a new
non-asymptotic invariant for graphs.
|
Stationary black holes of massless supergravity theories are described by
certain geodesic curves on the target space that is obtained after dimensional
reduction over time. When the target space is a symmetric coset space we make
use of the group-theoretical structure to prove that the second order geodesic
equations are integrable in the sense of Liouville, by explicitly constructing
the correct number of Hamiltonians in involution. This implies that the
Hamilton-Jacobi formalism can be applied, which proves that all such black hole
solutions, including non-extremal solutions, possess a description in terms of
a (fake) superpotential. Furthermore, we improve the existing integration
method by the construction of a Lax integration algorithm that integrates the
second order equations in one step instead of the usual two-step procedure. We
illustrate this technology with a specific example.
|
Big Bang models of the Universe predict rapid domination by curvature, a
paradox known as the flatness problem. Solutions to this problem usually leave
the Universe exactly flat for every practical purpose. Explaining a nearly but
not exactly flat current Universe is a new problem, which we label the
quasi-flatness problem. We show how theories incorporating time-varying
coupling constants could drive the Universe to a late-time near-flat attractor.
A similar problem may be posed with regard to the cosmological constant
$\Lambda$, the quasi-lambda problem, and we exhibit a solution to this problem
as well.
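For context, the flatness problem can be phrased through the curvature term of the Friedmann equation (standard notation, not from the abstract):
$$|\Omega - 1| = \frac{|k|}{a^2 H^2},$$
which grows during decelerated expansion; a quasi-flat Universe therefore needs a mechanism, such as the late-time attractor discussed here, rather than mere initial conditions.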
|
I review the decoherent (or consistent) histories approach to quantum
mechanics, due to Griffiths, to Gell-Mann and Hartle, and to Omnes. This is an
approach to standard quantum theory specifically designed to apply to genuinely
closed systems, up to and including the entire universe. It does not depend on
an assumed separation of classical and quantum domains, on notions of
measurement, or on collapse of the wave function. Its primary aim is to find
sets of histories for closed systems exhibiting negligible interference, and
therefore, to which probabilities may be assigned. Such sets of histories are
called consistent or decoherent, and may be manipulated according to the rules
of ordinary (Boolean) logic. The approach provides a framework from which one
may predict the emergence of an approximately classical domain for macroscopic
systems, together with the conventional Copenhagen quantum mechanics for
microscopic subsystems. In the special case in which the total closed system
naturally separates into a distinguished subsystem coupled to an environment,
the decoherent histories approach is closely related to the quantum state
diffusion approach of Gisin and Percival.
|
The design and optimization of the Electromagnetic Calorimeter (ECAL) are
crucial for the Circular Electron Positron Collider (CEPC) project, a proposed
future Higgs/Z factory. Following the reference design of the International
Large Detector (ILD), a set of silicon-tungsten sampling ECAL geometries are
implemented into the Geant4 simulation, whose performance is then scanned using
the Arbor algorithm. At the single-particle level, the photon energy response
for different ECAL longitudinal structures is analyzed. For bi-particle
samples, the
separation performance with different ECAL transverse cell sizes is
investigated and parametrized. The overall performance is characterized by a
set of physics benchmarks, including $\nu\nu H$ events where Higgs boson decays
into a pair of photons (EM objects) or gluons (jets) and $Z\to\tau^+\tau^-$
events. Based on these results, we propose an optimized ECAL geometry for the
CEPC project.
|
Defect detection is a critical research area in artificial intelligence.
Recently, synthetic data-based self-supervised learning has shown great
potential for this task. Although many sophisticated synthesizing strategies
exist, little research has been done to investigate the robustness of models
when faced with different strategies. In this paper, we focus on this issue and
find that existing methods are highly sensitive to the choice of strategy. To
alleviate this
issue, we present a Discrepancy Aware Framework (DAF), which demonstrates
robust performance consistently with simple and cheap strategies across
different anomaly detection benchmarks. We hypothesize that the high
sensitivity to synthetic data of existing self-supervised methods arises from
their heavy reliance on the visual appearance of synthetic data during
decoding. In contrast, our method leverages an appearance-agnostic cue to guide
the decoder in identifying defects, thereby alleviating its reliance on
synthetic appearance. To this end, inspired by existing knowledge distillation
methods, we employ a teacher-student network, which is trained based on
synthesized outliers, to compute the discrepancy map as the cue. Extensive
experiments on two challenging datasets prove the robustness of our method.
Under simple synthesis strategies, it outperforms existing methods by a large
margin. Furthermore, it achieves state-of-the-art localization performance.
Code is available at: https://github.com/caiyuxuan1120/DAF.
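A minimal sketch of the appearance-agnostic cue described above, assuming teacher/student feature maps of matching shape (names and the cosine-distance choice are ours, not necessarily the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def discrepancy_map(feat_t: torch.Tensor, feat_s: torch.Tensor) -> torch.Tensor:
    """Per-pixel teacher-student discrepancy on (B, C, H, W) feature maps:
    1 - cosine similarity per spatial location; high values flag defects."""
    t = F.normalize(feat_t, dim=1)
    s = F.normalize(feat_s, dim=1)
    return 1.0 - (t * s).sum(dim=1, keepdim=True)   # (B, 1, H, W)
```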
|
We present initial results from an exploratory X-ray monitoring project of
two groups of comparably luminous radio-quiet quasars (RQQs). The first
consists of four sources at 4.10 <= z <= 4.35, monitored by Chandra, and the
second is a comparison sample of three sources at 1.33 <= z <= 2.74, monitored
by Swift. Together with archival X-ray data, the total rest-frame temporal
baseline spans ~2-4 yr and ~5-13 yr for the first and second group,
respectively. Six of these sources show significant X-ray variability over
rest-frame timescales of ~10^2 - 10^3 d; three of these also show significant
X-ray variability on rest-frame timescales of ~1-10 d. The X-ray variability
properties of our variable sources are similar to those exhibited by nearby and
far less luminous active galactic nuclei (AGNs). While we do not directly
detect a trend of increasing X-ray variability with redshift, we do confirm
previous reports of luminous AGNs exhibiting X-ray variability above that
expected from their luminosities, based on simplistic extrapolation from lower
luminosity sources. This result may be attributed to luminous sources at the
highest redshifts having relatively high accretion rates. Complementary
UV-optical monitoring of our sources shows that variations in their
optical-X-ray spectral energy distribution are dominated by the X-ray
variations. We confirm previous reports of X-ray spectral variations in one of
our sources, HS 1700+6416, but do not detect such variations in any of our
other sources in spite of X-ray flux variations of up to a factor of ~4. This
project is designed to provide a basic assessment of the X-ray variability
properties of RQQs at the highest accessible redshifts that will serve as a
benchmark for more systematic monitoring of such sources with future X-ray
missions.
|
We perform three-dimensional gas-dynamical simulations and show that the
asymmetric morphology of the blue and red-shifted components of the outflow at
hundreds of astronomical units (AU) from the massive binary system eta Carinae
can be accounted for by the collision of the free primary stellar wind with
the slowly expanding dense equatorial gas. Owing to the very complicated
structure of the century-old equatorial ejecta, which is not fully spatially
resolved by observations, we limit ourselves to modelling the equatorial dense
gas by one or two dense spherical clouds. As a result, we reproduce the
general qualitative properties of the velocity maps, but not the fine details.
The fine details of the velocity maps can be matched by simply structuring the
dense ejecta in an appropriate way. The blue and red-shifted components are
formed in the post-shock flow of the primary wind, on the two sides of the
equatorial plane, respectively. The fast wind from the secondary star plays no
role in our model, as for most of the orbital period the primary star is closer
to us. The dense clouds are observed to be closer to us than the
binary system is, and so in our model the primary star faces the dense
equatorial ejecta for the majority of the orbital period.
|
A recent technique, proposed to alleviate the ``sign problem disease'', is
discussed in detail. As is well known, the ground state of a given Hamiltonian
$H$ can be obtained by applying the imaginary time propagator $e^{-H \tau}$ to
a given trial state $\psi_T$ for large imaginary time $\tau$ and sampling
statistically the propagated state $ \psi_{\tau} = e^{-H \tau} \psi_T$.
However, the so-called ``sign problem'' may appear in the simulation, and such
statistical propagation would be practically impossible without employing some
approximation such as the well-known ``fixed node'' (FN) approximation. This
method allows one to improve the FN dynamics with a systematic correction
scheme. This is possible by the simple requirement that, after a short
imaginary time propagation via the FN dynamics, a number $p$ of correlation
functions can be further constrained to be {\em exact} by a small perturbation
of the FN propagated state, which is free of the sign problem. By iterating
this scheme the Monte Carlo average sign, which is almost zero when there is a
sign problem, remains stable and finite even for large $\tau$. The proposed
algorithm is tested against exact diagonalization results available on finite
lattices. It is also shown in a few test cases that the dependence of the
results on the few parameters entering the stochastic technique can be very
easily controlled, except in exceptional cases.
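The underlying projection idea, stripped of the Monte Carlo sampling (and hence of the sign problem), can be illustrated deterministically on a small matrix; the sketch below shows only this textbook imaginary-time projection, not the paper's correction scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                       # a random symmetric "Hamiltonian"
psi = rng.normal(size=n)                # trial state psi_T
dtau = 0.05
for _ in range(2000):
    psi = psi - dtau * H @ psi          # first-order step of exp(-H dtau)
    psi /= np.linalg.norm(psi)
print("projected energy:", psi @ H @ psi)
print("exact ground state:", np.linalg.eigvalsh(H)[0])
```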
|
Idiopathic ventricular arrhythmias (IVAs) are abnormal extra heartbeats that
disturb the regular heart rhythm and can become fatal if left untreated.
Cardiac catheter ablation is the standard approach to treat IVAs; however, a
crucial prerequisite for the ablation is the localization of the IVAs' origin.
Current IVA localization techniques are invasive, rely on expert
interpretation, or are inaccurate. In this study, we developed a new
deep-learning algorithm that can automatically identify the origin of IVAs from
ECG signals without the need for expert manual analysis. Our deep learning
algorithm consisted of spatial fusion to extract the most informative
features from multichannel ECG data, temporal modeling to capture
the evolving pattern of the ECG time series, and an attention mechanism to
weigh the most important temporal features and improve the model
interpretability. The algorithm was validated on a 12-lead ECG dataset
collected from 334 patients (230 females) who experienced IVA and successfully
underwent a catheter ablation procedure that determined IVA's exact origins.
The proposed method achieved an area under the curve of 93%, an accuracy of
94%, a sensitivity of 97%, a precision of 95%, and an F1 score of 96% in
locating the origin of IVAs and outperformed existing automatic and
semi-automatic algorithms. The proposed method shows promise toward automatic
and noninvasive evaluation of IVA patients before cardiac catheter ablation.
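A hypothetical sketch of the described pipeline (layer types, sizes, and the number of output classes are assumptions, not the paper's architecture): spatial fusion across the 12 leads, a recurrent temporal model, and attention-weighted pooling.

```python
import torch
import torch.nn as nn

class IVALocator(nn.Module):
    def __init__(self, n_leads=12, hidden=64, n_classes=2):
        super().__init__()
        self.spatial = nn.Conv1d(n_leads, hidden, kernel_size=1)  # fuse leads per time step
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)  # evolving ECG pattern
        self.attn = nn.Linear(hidden, 1)                          # temporal attention
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, n_leads, time)
        h = self.spatial(x).transpose(1, 2)       # (batch, time, hidden)
        h, _ = self.temporal(h)
        w = torch.softmax(self.attn(h), dim=1)    # weights over time steps
        return self.head((w * h).sum(dim=1))      # attention-pooled logits
```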
|
The nuclear level density (NLD) and $\gamma$-ray strength function
($\gamma$SF) of $^{63}\mathrm{Ni}$ have been investigated using the Oslo
method. The extracted NLD is compared with previous measurements using particle
evaporation and those found from neutron resonance spacing. The $\gamma$SF was
found to feature a strong low energy enhancement that could be explained as M1
strength based on large scale shell model calculations. Comparison of
$\gamma$SFs measured with the Oslo method for various $\mathrm{Ni}$ isotopes
reveals systematic changes to the strength below $5$ MeV with increasing mass.
|
Biological cortical networks are potentially fully recurrent networks without
any distinct output layer, where recognition may instead rely on the
distribution of activity across its neurons. Because such biological networks
can have rich dynamics, they are well-designed to cope with dynamical
interactions of the types that occur in nature, while traditional machine
learning networks may struggle to make sense of such data. Here we connected a
simple model neuronal network (based on the 'linear summation neuron model'
(LSM), featuring biologically realistic dynamics and consisting of 10
excitatory and 10 inhibitory neurons, randomly connected) to a robot finger
with multiple types of force sensors interacting with materials of different
levels of compliance. Our aim was to explore the classification performance of
the network. To this end, we compared the performance of the network output
with principal component analysis of statistical features of the sensory data
as well as its mechanical properties. Remarkably, even though the LSM was a very
small and untrained network, and merely designed to provide rich internal
network dynamics while the neuron model itself was highly simplified, we found
that the LSM outperformed these other statistical approaches in terms of
accuracy.
|
We present a status report on our study of long-distance contributions to the
decay amplitudes A(K^0 --> pi pi, I) in the framework of the 1/N expansion. We
argue that a modified prescription for the identification of meson momenta in
the chiral loop corrections has to be used to gain a self-consistent picture
which allows an appropriate matching with the short-distance part. Possible
uncertainties in the analysis of the density-density operators Q_6 and Q_8
which dominate the CP violation parameter epsilon'/epsilon are discussed. As a
first result we present the long-distance 1/N correction to the gluon penguin
operator Q_6 in the chiral limit.
|
Synthetic data has been hailed as the silver bullet for privacy-preserving
data analysis. If a record is not real, then how could it violate a person's
privacy? In addition, deep-learning based generative models are employed
successfully to approximate complex high-dimensional distributions from data
and draw realistic samples from this learned distribution. It is often
overlooked though that generative models are prone to memorising many details
of individual training records and often generate synthetic data that too
closely resembles the underlying sensitive training data, hence violating
strong privacy regulations such as those encountered in health care. Differential
privacy is the well-known state-of-the-art framework for guaranteeing
protection of sensitive individuals' data, allowing aggregate statistics and
even machine learning models to be released publicly without compromising
privacy. However, the training mechanisms often add too much noise during the
training process, and thus severely compromise the utility of these private
models. Even worse, the tight privacy budgets do not allow for many training
epochs so that model quality cannot be properly controlled in practice. In this
paper we explore an alternative approach for privately generating data that
makes direct use of the inherent stochasticity in generative models, e.g.,
variational autoencoders. The main idea is to appropriately constrain the
continuity modulus of the deep models instead of adding another noise mechanism
on top. For this approach, we derive mathematically rigorous privacy guarantees
and illustrate its effectiveness with practical experiments.
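One simple way to constrain a decoder's continuity modulus, shown purely as an assumed illustration (the paper's exact mechanism may differ), is spectral normalization of each layer, which bounds the network's Lipschitz constant:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each linear map is constrained to spectral norm ~1; composed with 1-Lipschitz
# activations, the decoder's overall Lipschitz constant is bounded by ~1.
decoder = nn.Sequential(
    spectral_norm(nn.Linear(16, 128)), nn.ReLU(),
    spectral_norm(nn.Linear(128, 784)), nn.Sigmoid(),
)
```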
|
We have studied how 2- and 3-dimensional systems made up of particles
interacting with finite range, repulsive potentials jam (i.e., develop a yield
stress in a disordered state) at zero temperature and applied stress. For each
configuration, there is a unique jamming threshold, $\phi_c$, at which
particles can no longer avoid each other and the bulk and shear moduli
simultaneously become non-zero. The distribution of $\phi_c$ values becomes
narrower as the system size increases, so that essentially all configurations
jam at the same $\phi$ in the thermodynamic limit. This packing fraction
corresponds to the previously measured value for random close-packing. In fact,
our results provide a well-defined meaning for "random close-packing" in terms
of the fraction of all phase space with inherent structures that jam. The
jamming threshold, Point J, occurring at zero temperature and applied stress
and at the random close-packing density, has properties reminiscent of an
ordinary critical point. As Point J is approached from higher packing
fractions, power-law scaling is found for many quantities. Moreover, near Point
J, certain quantities no longer self-average, suggesting the existence of a
length scale that diverges at J. However, Point J also differs from an ordinary
critical point: the scaling exponents do not depend on dimension but do depend
on the interparticle potential. Finally, as Point J is approached from high
packing fractions, the density of vibrational states develops a large excess of
low-frequency modes. All of these results suggest that Point J may control
behavior in its vicinity, perhaps even at the glass transition.
|
The photodisintegration of the 7Li nucleus through the n6Li channel is
considered on the basis of a potential two-cluster model.
Intercluster-interaction potentials involve forbidden states and reproduce the
phase shifts of low-energy elastic scattering that are obtained by the
resonating-group method. The proposed model is shown to describe the total
cross section for photodisintegration in the energy range under consideration.
|
Powerful current and future cosmological constraints using high precision
measurements of the large-scale structure of galaxies and its weak
gravitational lensing effects rely on accurate characterization of the redshift
distributions of the galaxy samples using only broadband imaging. We present a
framework for constraining both the redshift probability distributions of
galaxy populations and the redshifts of their individual members. We use a
hierarchical Bayesian model (HBM) which provides full posterior distributions
on those redshift probability distributions, and, for the first time, we show
how to combine survey photometry of single galaxies and the information
contained in the galaxy clustering against a well-characterized tracer
population in a robust way. One critical approximation turns the HBM into a
system amenable to efficient Gibbs sampling. We show that in the absence of
photometric information, this method reduces to commonly used clustering
redshift estimators. Using a simple model system, we show how the incorporation
of clustering information with photo-$z$'s tightens redshift posteriors, and
can overcome biases or gaps in the coverage of a spectroscopic prior. The
method enables the full propagation of redshift uncertainties into cosmological
analyses, and uses all the information at hand to reduce those uncertainties
and associated potential biases.
|
Thin film rupture is a type of nonlinear instability that causes the solution
to touch down to zero in finite time. We investigate the finite-time rupture
behavior of a generalized elastohydrodynamic lubrication model. This model
features the interplay between destabilizing disjoining pressure and
stabilizing elastic bending pressure and surface tension. The governing
equation is a sixth-order nonlinear degenerate parabolic partial differential
equation parameterized by exponents in the mobility function and the disjoining
pressure, respectively. Asymptotic self-similar finite-time rupture solutions
governed by a sixth-order leading-order equation are analyzed. In the weak
elasticity limit, transient self-similar dynamics governed by a fourth-order
similarity equation are also identified.
|
The light higgsino-singlino scenario of the NMSSM allows one to combine a
naturally small $\mu$ parameter with a good dark matter relic density. Given
the new 2016 constraints on spin-dependent and spin-independent direct
detection cross sections, we first study which regions in the plane of
chargino and LSP masses below 300 GeV remain viable. Subsequently we
investigate the
impact of searches for charginos and neutralinos at the LHC, and find that the
limits from run I do not rule out any additional region in this plane. Only the
HL-LHC at 3000 fb$^{-1}$ will test parts of this plane corresponding to
higgsino-like charginos heavier than 150 GeV and relatively light singlinos,
but notably the most natural regions with lighter charginos seem to remain
unexplored.
|
We provide an up-to-date view on the knowledge management system ScienceWISE
(SW) and address issues related to the automatic assignment of articles to
research topics. So far, SW has proven to be an effective platform for
managing large volumes of technical articles by means of ontological
concept-based browsing. However, as the publication of research articles
accelerates, the expressivity and the richness of the SW ontology turns into a
double-edged sword: a more fine-grained characterization of articles is
possible, but at the cost of introducing more spurious relations among them. In
this context, the challenge of continuously recommending relevant articles to
users lies in tackling a network partitioning problem, where nodes represent
articles and co-occurring concepts create edges between them. In this paper, we
discuss the three research directions we have taken for solving this issue: i)
the identification of generic concepts to reinforce inter-article similarities;
ii) the adoption of a bipartite network representation to improve scalability;
iii) the design of a clustering algorithm to identify concepts for
cross-disciplinary articles and obtain fine-grained topics for all articles.
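The bipartite representation in (ii) can be sketched with networkx: articles and concepts form the two node sets, and a one-mode projection yields the article-article similarity network (toy data, illustrative only).

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
articles = ["a1", "a2", "a3"]
concepts = ["dark matter", "inflation", "axion"]
B.add_nodes_from(articles, bipartite=0)
B.add_nodes_from(concepts, bipartite=1)
B.add_edges_from([("a1", "dark matter"), ("a1", "axion"),
                  ("a2", "dark matter"), ("a2", "inflation"),
                  ("a3", "inflation")])
# One-mode projection: articles linked by shared concepts, weighted by overlap.
G = bipartite.weighted_projected_graph(B, articles)
print(G.edges(data=True))
```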
|
Network slicing allows network operators to build multiple isolated virtual
networks on a shared physical network to accommodate a wide variety of services
and applications. With network slicing, service providers can provide a
cost-efficient solution towards meeting diverse performance requirements of
deployed applications and services. Despite these benefits, end-to-end
orchestration and management of network slices remains a challenging and
complicated task. In this chapter, we survey the relevant aspects of network
slicing, with the focus on networking technologies such as Software-defined
networking (SDN) and Network Function Virtualization (NFV) in 5G, Fog/Edge and
Cloud Computing platforms. To build the required background, this chapter
begins with a brief overview of 5G, Fog/Edge and Cloud computing, and their
interplay. Then we cover the 5G vision for network slicing and extend it to the
Fog and Cloud computing through surveying the state-of-the-art slicing
approaches in these platforms. We conclude the chapter by discussing future
directions, analyzing gaps, and identifying trends toward the realization of
network slicing.
|
In this paper, the quasinormal modes of gravitational perturbation around a
Schwarzschild black hole surrounded by quintessence were evaluated by using the
third-order WKB approximation. Due to the presence of quintessence, the
gravitational waves damp more slowly.
|
The optical spectra of two dimensional (2D) materials exhibit sharp
absorption peaks that are commonly identified with excitons and trions (or
charged excitons). In this paper, we show that excitons and trions in doped 2D
materials can be described by two coupled Schrodinger-like equations - one
two-body equation for excitons and another four-body equation for trions. In
electron doped 2D materials, a bound trion state is identified with a four-body
bound state of an exciton and an excited conduction band electron-hole pair. In
doped 2D materials, the exciton and trion states are not eigenstates of the
full Hamiltonian, and their respective Schrodinger equations
are coupled due to Coulomb interactions. The strength of this coupling
increases with the doping density. Solutions of these two coupled equations can
quantitatively explain all the prominent features experimentally observed in
the optical absorption spectra of 2D materials including the observation of two
prominent absorption peaks and the variation of their energy splittings and
spectral shapes and strengths with the electron density. The optical
conductivity obtained in our work satisfies the optical conductivity sum rule
exactly. A superposition of exciton and trion states can be used to construct a
solution of the two coupled Schrodinger equations and this solution resembles
the variational exciton-polaron state, thereby establishing the relationship
between our approach and Fermi polaron physics.
|
An approach to hiding objects levitating or flying above a conducting sheet
is suggested in this letter. The proposed device makes use of isotropic
negative-refractive-index materials without extreme material parameters, and
creates an illusion of a remote conducting sheet. Numerical simulations are
performed to investigate the performance of this cloak in two-dimensional (2D)
and three-dimensional (3D) cases.
|
For a normalised analytic function $f$ defined on the open unit disk in the
complex plane, we determine several sufficient conditions for starlikeness in
terms of the quotients $Q_{ST}:=zf'(z)/f(z)$, $Q_{CV}:=1+zf''(z)/f'(z)$ and the
Schwarzian derivative
$Q_{SD}:=z^2\big((f''(z)/f'(z))'-(f''(z)/f'(z))^2/2\big)$. These conditions
were obtained by using the admissibility criteria of starlikeness in the
theory of second order differential subordination.
|
Based on the data of 12-17-crossing knots, we establish three new conjectures
about the hyperbolic volume and knot cohomology:
(1) There exists a constant $a \in \mathbb{R}_{>0}$ such that the percentage of knots
for which the following inequality holds converges to 1 as the crossing number
$c \to \infty$:
$\log r(K) < a \cdot Vol(K)$ for a knot $K$ where $r(K)$ is the total rank of
knot Floer homology (KFH) of $K$ and $Vol(K)$ is the hyperbolic volume of $K$.
(2) There exist constants $a,b\in \mathbb{R}$ such that the percentage of knots for
which the following inequality holds converges to 1 as the crossing number $c
\to \infty$:
$\log \det(K) < a \cdot Vol(K) + b$ for a knot $K$ where $\det(K)$ is the
knot determinant of $K$.
(3) Fix a small cut-off value $d$ of the total rank of KFH and let $f(x)$ be
defined as the fraction of knots whose total rank of knot Floer homology is
less than $d$ among the knots whose hyperbolic volume is less than $x$. Then
for sufficiently large crossing numbers, the following inequality holds:
$f(x)<\frac{L}{1+\exp{(-k \cdot (x-x_0))}} + b$ where $L, x_0, k, b$ are
constants.
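Conjecture (3)'s bound can be fitted to data with a standard nonlinear least-squares routine; the sketch below uses synthetic placeholder data, not the 12-17-crossing knot tables underlying the conjectures.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0, b):
    return L / (1.0 + np.exp(-k * (x - x0))) + b

vol = np.linspace(2.0, 30.0, 50)                 # hyperbolic volumes (placeholder)
frac = logistic(vol, 0.9, 0.4, 12.0, 0.02)       # synthetic "observed" fractions
frac += 0.01 * np.random.default_rng(0).normal(size=vol.size)
params, _ = curve_fit(logistic, vol, frac, p0=[1.0, 0.5, 10.0, 0.0])
print(dict(zip(["L", "k", "x0", "b"], params)))
```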
|
Disentangled representation learning is one of the major goals of deep
learning, and is a key step for achieving explainable and generalizable models.
A well-defined theoretical guarantee is still lacking for VAE-based
unsupervised methods, a popular family of approaches to unsupervised
disentanglement. The group-theory-based definition of representation
disentanglement mathematically connects the data transformations to the
representations using the formalism of groups. In this paper, built on the
group-based definition and inspired by the n-th dihedral group, we first
propose a theoretical framework towards achieving unsupervised representation
disentanglement. We then propose a model, based on existing VAE-based methods,
to tackle the unsupervised learning problem of the framework. In the
theoretical framework, we prove three sufficient conditions, on the model, the
group structure, and the data respectively, in an effort to achieve, in an
unsupervised way, disentangled representations per the group-based definition.
With the first two
of the conditions satisfied and a necessary condition derived for the third
one, we offer additional constraints, from the perspective of the group-based
definition, for the existing VAE-based models. Experimentally, we train 1800
models covering the most prominent VAE-based methods on five datasets to verify
the effectiveness of our theoretical framework. Compared to the original
VAE-based methods, these Groupified VAEs consistently achieve better mean
performance with smaller variances.
|
Physical Unclonable Functions (PUFs) are gaining attention in the
cryptography community because of the ability to efficiently harness the
intrinsic variability in the manufacturing process. However, this means that
they are noisy devices and require error correction mechanisms, e.g., by
employing Fuzzy Extractors (FEs). Recent works demonstrated that applying FEs
for error correction may enable new opportunities to break the PUFs if no
countermeasures are taken. In this paper, we address an attack model on FEs
hardware implementations and provide a solution for early identification of the
timing Side-Channel Attack (SCA) vulnerabilities which can be exploited by
physical fault injection. The significance of this work stems from the fact
that FEs are an essential building block in the implementations of PUF-enabled
devices. The information leaked through the timing side-channel during the
error correction process can reveal the FE input data and can thereby lead to
the disclosure of secrets. It is therefore very important to identify potential
leakages early, at the RTL design stage. Experimental results based on
RTL analysis of several Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon
decoders for PUF-enabled devices with FEs demonstrate the feasibility of the
proposed methodology.
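The generic timing-leak pattern at issue, an early-exit loop whose run time depends on secret data, is illustrated below together with a constant-time variant; this is a schematic example, not taken from the analyzed BCH/Reed-Solomon decoders.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    for x, y in zip(a, b):
        if x != y:            # early exit: timing reveals the first mismatch
            return False
    return len(a) == len(b)

def constant_time_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y          # accumulate differences, no data-dependent branch
    return acc == 0
```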
|
We formulate a continuum linear response theory on the basis of the
Hartree-Fock-Bogoliubov formalism in the coordinate space representation in
order to describe low-lying and high-lying collective excitations which couple
to one-particle and two-particle continuum states. Numerical analysis is done
for the neutron drip-line nucleus $^{24}$O. A low-lying collective mode that
emerges above the continuum threshold with large neutron strength is analyzed.
The collective state is sensitive to the density-dependence of the pairing. The
present theory accurately satisfies the energy-weighted sum rule. This is
guaranteed by treating the pairing self-consistently both in the static HFB and
in the dynamical linear response equation.
|
It is known that the gauge field and its composite operators evolved by the
Yang--Mills gradient flow are ultraviolet (UV) finite without any
multiplicative wave function renormalization. In this paper, we prove that the
gradient flow in the 2D $O(N)$ non-linear sigma model possesses a similar
property: The flowed $N$-vector field and its composite operators are UV finite
without multiplicative wave function renormalization. Our proof in all orders
of perturbation theory uses a $(2+1)$-dimensional field theoretical
representation of the gradient flow, which possesses local gauge invariance
without a gauge field. As an application of the UV finiteness of the gradient
flow,
we construct the energy--momentum tensor in the lattice formulation of the
$O(N)$ non-linear sigma model that automatically restores the correct
normalization and the conservation law in the continuum limit.
|
This paper addresses the limitations of standard uncertainty models, e.g.,
robust (norm-bounded) and stochastic (one fixed distribution, e.g., Gaussian),
and proposes to model uncertainty via Optimal Transport (OT) ambiguity sets.
These constitute a very rich uncertainty model, which enjoys many desirable
geometrical, statistical, and computational properties, and which: (1)
naturally generalizes both robust and stochastic models, and (2) captures many
additional real-world uncertainty phenomena (e.g., black swan events). Our
contributions show that OT ambiguity sets are also analytically tractable: they
propagate easily and intuitively through linear and nonlinear (possibly
corrupted by noise) transformations, and the result of the propagation is again
an OT ambiguity set or can be tightly upper bounded by an OT ambiguity set. In
the context of dynamical systems, our results allow us to consider multiple
sources of uncertainty (e.g., initial condition, additive noise, multiplicative
noise) and to capture in closed-form, via an OT ambiguity set, the resulting
uncertainty in the state at any future time. Our results are actionable,
interpretable, and readily employable in a great variety of computationally
tractable control and estimation formulations. To highlight this, we study
three applications in trajectory planning, consensus algorithms, and least
squares estimation. We conclude the paper with a list of exciting open problems
enabled by our results.
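One instance of such a propagation rule, stated in our notation rather than the paper's: for a linear map $A$ and a type-$p$ Wasserstein ball $B_\varepsilon(\hat{\mathbb{P}})$, the standard Lipschitz scaling of the Wasserstein distance gives
$$\mathbb{P} \in B_\varepsilon(\hat{\mathbb{P}}) \implies A_\#\mathbb{P} \in B_{\|A\|\varepsilon}\big(A_\#\hat{\mathbb{P}}\big),$$
so the image ambiguity set is again contained in an OT ambiguity set.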
|
For a compact CR manifold $(X,T^{1,0}X)$ of dimension $2n+1$, $n\geq 2$,
admitting a $S^1\times T^d$ action, if the lattice point
$(-p_1,\cdots,-p_d)\in\mathbb{Z}^{d}$ is a regular value of the associated CR
moment map $\mu$, then we establish the asymptotic expansion of the torus
equivariant Szeg\H{o} kernel $\Pi^{(0)}_{m,mp_1,\cdots,mp_d}(x,y)$ as $m\to
+\infty$ under certain assumptions of the positivity of Levi form and the torus
action on $Y:=\mu^{-1}(-p_1,\cdots,-p_d)$.
|
Phase synchronisation in multichannel EEG is known as the manifestation of
functional brain connectivity. Traditional phase synchronisation studies are
mostly based on time average synchrony measures hence do not preserve the
temporal evolution of the phase difference. Here we propose a new method to
show the existence of a small set of unique phase synchronised patterns or
"states" in multi-channel EEG recordings, each "state" being stable of the
order of ms, from typical and pathological subjects during face perception
tasks. The proposed methodology bridges the concepts of EEG microstates and
phase synchronisation in time and frequency domain respectively. The analysis
is reported for four groups of children including typical, Autism Spectrum
Disorder (ASD), low and high anxiety subjects - a total of 44 subjects. In all
cases, we observe consistent existence of these states - termed as
synchrostates - within specific cognition related frequency bands (beta and
gamma bands), though the topographies of these synchrostates differ for
different subject groups with different pathological conditions. The
inter-synchrostate switching follows a well-defined sequence capturing the
underlying inter-electrode phase relation dynamics in stimulus- and
person-centric manner. Our study is motivated by the well-known EEG
microstates, which exhibit stable potential maps over the scalp; here we
report an analogous observation of quasi-stable phase synchronised states in
multichannel EEG. The existence of the synchrostates, coupled with their unique
switching sequence characteristics, could open a new direction beyond
contemporary EEG phase synchronisation studies.
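The basic quantity whose temporal evolution is preserved here, the instantaneous inter-channel phase difference, can be computed via the analytic signal; the sketch below uses synthetic two-channel data and is only illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)        # channel 1
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.normal(size=t.size)  # channel 2
phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
phase_diff = np.angle(np.exp(1j * phase_diff))   # wrap to (-pi, pi]
```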
|
A set of independence statements may define the independence structure of
interest in a family of joint probability distributions. This structure is
often captured by a graph that consists of nodes representing the random
variables and of edges that couple node pairs. One important class contains
regression graphs. Regression graphs are a type of so-called chain graph and
describe stepwise processes, in which at each step single or joint responses
are generated given the relevant explanatory variables in their past. For joint
densities that result after possible marginalising or conditioning, we
introduce summary graphs. These graphs reflect the independence structure
implied by the generating process for the reduced set of variables and they
preserve the implied independences after additional marginalising and
conditioning. They can identify generating dependences that remain unchanged
and alert to possibly severe distortions due to direct and indirect
confounding. Operators for matrix representations of graphs are used to derive
these properties of summary graphs and to translate them into special types of
paths in graphs.
|
Surveys open up unbiased discovery space and generate legacy datasets of
long-lasting value. One of the goals of imaging arrays of Cherenkov telescopes
like CTA is to survey areas of the sky for faint very high energy gamma-ray
(VHE) sources, especially sources that would not have drawn attention were it
not for their VHE emission (e.g. the Galactic "dark accelerators"). More than
half the currently known VHE sources are to be found in the Galactic plane.
Using standard techniques, CTA can carry out a survey of the region |l|<60
degrees, |b|<2 degrees in 250 hr (1/4th the available time per year at one
location) down to a uniform sensitivity of 3 mCrab (a "Galactic Plane survey").
CTA could also survey 1/4th of the sky down to a sensitivity of 20 mCrab in 370
hr of observing time (an "all-sky survey"), which complements well the surveys
by the Fermi/LAT at lower energies and extended air shower arrays at higher
energies. Observations in (non-standard) divergent pointing mode may shorten
the "all-sky survey" time to about 100 hr with no loss in survey sensitivity.
We present the scientific rationale for these surveys, their place in the
multi-wavelength context, their possible impact and their feasibility. We find
that the Galactic Plane survey has the potential to detect hundreds of sources.
Implementing such a survey should be a major goal of CTA. Additionally, about a
dozen blazars, or counterparts to Fermi/LAT sources, are expected to be
detected by the all-sky survey, whose prime motivation is the search for
extragalactic "dark accelerators".
|
In many classification tasks designed for AI or humans to solve, gold labels
are typically included within the label space by default, often posed as "which
of the following is correct?" This standard setup has traditionally highlighted
the strong performance of advanced AI, particularly top-performing Large
Language Models (LLMs), in routine classification tasks. However, when the gold
label is intentionally excluded from the label space, it becomes evident that
LLMs still attempt to select from the available label candidates, even when
none are correct. This raises a pivotal question: Do LLMs truly demonstrate
their intelligence in understanding the essence of classification tasks?
In this study, we evaluate both closed-source and open-source LLMs across
representative classification tasks, arguing that the perceived performance of
LLMs is overstated due to their inability to exhibit the expected comprehension
of the task. This paper makes a threefold contribution: i) To our knowledge,
this is the first work to identify the limitations of LLMs in classification
tasks when gold labels are absent. We define this task as Classify-w/o-Gold and
propose it as a new testbed for LLMs. ii) We introduce a benchmark, Know-No,
comprising two existing classification tasks and one new task, to evaluate
Classify-w/o-Gold. iii) This work defines and advocates for a new evaluation
metric, OmniAccuracy, which assesses LLMs' performance in classification tasks
both when gold labels are present and absent.
|
Graph neural networks (GNNs), a type of neural network that can learn from
graph-structured data and learn the representation of nodes through aggregating
neighborhood information, have shown superior performance in various downstream
tasks. However, it is known that the performance of GNNs degrades gradually as
the number of layers increases. In this paper, we evaluate the expressive power
of GNNs from the perspective of subgraph aggregation. We reveal the potential
cause of performance degradation for traditional deep GNNs, i.e., aggregated
subgraph overlap, and we theoretically illustrate the fact that previous
residual-based GNNs exploit the aggregation results of 1 to $k$ hop subgraphs
to improve the effectiveness. Further, we find that the utilization of
different subgraphs by previous models is often inflexible. Based on this, we
propose a sampling-based node-level residual module (SNR) that can achieve a
more flexible utilization of different hops of subgraph aggregation by
introducing node-level parameters sampled from a learnable distribution.
Extensive experiments show that GNNs equipped with our proposed SNR module
outperform a comprehensive set of baselines.
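A hypothetical sketch of a node-level sampled residual connection in the spirit of the SNR module (the distribution and gating choices are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class SampledNodeResidual(nn.Module):
    """Mix previous and aggregated node features with a per-node coefficient
    sampled from a learnable Gaussian (reparameterization trick)."""
    def __init__(self, num_nodes):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_nodes, 1))
        self.log_sigma = nn.Parameter(torch.zeros(num_nodes, 1))

    def forward(self, h_prev, h_agg):             # both: (num_nodes, dim)
        eps = torch.randn_like(self.mu)
        alpha = torch.sigmoid(self.mu + self.log_sigma.exp() * eps)
        return alpha * h_agg + (1 - alpha) * h_prev
```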
|
We analyze how tariff design incentivizes households to invest in residential
photovoltaic and battery systems, and explore selected power sector effects. To
this end, we apply an open-source power system model featuring prosumage agents
to German 2030 scenarios. Results show that lower feed-in tariffs substantially
reduce investments in photovoltaics, yet optimal battery sizing and
self-generation are relatively robust. With increasing fixed parts of retail
tariffs, optimal battery capacities and self-generation are smaller, and
households contribute more to non-energy power sector costs. When choosing
tariff designs, policy makers should not aim to (dis-)incentivize prosumage as
such, but balance effects on renewable capacity expansion and system cost
contribution.
|
This thesis is divided into two parts. The first one is composed of
recollections on operad theory, model categories, simplicial homotopy theory,
rational homotopy theory, Maurer-Cartan spaces, and deformation theory. The
second part deals with the theory of convolution algebras and some of their
applications, as explained below.
Suppose we are given a type of algebras, a type of coalgebras, and a
relationship between those types of algebraic structures (encoded by an operad,
a cooperad, and a twisting morphism respectively). Then, it is possible to
endow the space of linear maps from a coalgebra C to an algebra A with a
natural structure of Lie algebra up to homotopy. We call the resulting homotopy
Lie algebra the convolution algebra of A and C. We study the theory of
convolution algebras and their compatibility with the tools of homotopical
algebra: infinity morphisms and the homotopy transfer theorem. After doing
that, we apply this theory to various domains, such as derived deformation
theory and rational homotopy theory. In the first case, we use the tools we
developed to construct a universal Lie algebra representing the space of
Maurer-Cartan elements, a fundamental object of deformation theory. In the
second case, we generalize a result of Berglund on rational models for mapping
spaces between pointed topological spaces. In the last chapter of this thesis,
we give a new approach to two important theorems in deformation theory: the
Goldman-Millson theorem and the Dolgushev-Rogers theorem.
|
With a new detector setup and the high-resolution performance of the fragment
separator FRS at GSI we discovered 57 new isotopes in the atomic number range
of 60$\leq Z \leq 78$: \nuc{159-161}{Nb}, \nuc{160-163}{Pm}, \nuc{163-166}{Sm},
\nuc{167-168}{Eu}, \nuc{167-171}{Gd}, \nuc{169-171}{Tb}, \nuc{171-174}{Dy},
\nuc{173-176}{Ho}, \nuc{176-178}{Er}, \nuc{178-181}{Tm}, \nuc{183-185}{Yb},
\nuc{187-188}{Lu}, \nuc{191}{Hf}, \nuc{193-194}{Ta}, \nuc{196-197}{W},
\nuc{199-200}{Re}, \nuc{201-203}{Os}, \nuc{204-205}{Ir} and \nuc{206-209}{Pt}.
The new isotopes have been unambiguously identified in reactions with a
$^{238}$U beam impinging on a Be target at 1 GeV/u. The isotopic production
cross-sections for the new isotopes have been measured and compared with
predictions of different model calculations. In general, the ABRABLA and COFRA
models agree with the new data to within a factor of two, whereas the
semiempirical EPAX model deviates much more. Projectile fragmentation is the
dominant reaction creating the new isotopes, whereas fission contributes
significantly only up to about the element holmium.
|
Theoretical emission-line ratios involving Fe XI transitions in the 257-407 A
wavelength range are derived using fully relativistic calculations of radiative
rates and electron impact excitation cross sections. These are subsequently
compared with both long wavelength channel Extreme-Ultraviolet Imaging
Spectrometer (EIS) spectra from the Hinode satellite (covering 245-291 A), and
first-order observations (235-449 A) obtained by the Solar Extreme-ultraviolet
Research Telescope and Spectrograph (SERTS). The 266.39, 266.60 and 276.36 A
lines of Fe XI are detected in two EIS spectra, confirming earlier
identifications of these features, and 276.36 A is found to provide an electron
density diagnostic when ratioed against the 257.55 A transition. Agreement
between theory and observation is found to be generally good for the SERTS data
sets, with discrepancies normally being due to known line blends, while the
257.55 A feature is detected for the first time in SERTS spectra. The most
useful Fe XI electron density diagnostic is found to be the 308.54/352.67
intensity ratio, which varies by a factor of 8.4 between N_e = 10^8 and 10^11
cm^-3, while showing little temperature sensitivity. However, the 349.04/352.67
ratio potentially provides a superior diagnostic, as it involves lines which
are closer in wavelength, and varies by a factor of 14.7 between N_e = 10^8 and
10^11 cm^-3. Unfortunately, the 349.04 A line is relatively weak, and also
blended with the second-order Fe X 174.52 A feature, unless the first-order
instrument response is enhanced.
|
We compute cohomology of the moduli space of genus three curves with level
two structure and some related spaces. In particular, we determine the
cohomology groups of the moduli space of plane quartics with level two
structure as representations of the symplectic group on a six dimensional
vector space over the field of two elements. We also make the analogous
computations for some related spaces such as moduli spaces of genus three
curves with marked points and strata of the moduli space of Abelian
differentials of genus three.
|
Consider a space X whose singular locus, Z=Sing(X), is of positive dimension.
Suppose both Z and X are locally complete intersections at each point. The
transversal type of X along Z is generically constant but at some points of Z
it degenerates. In previous work we introduced (locally) the
discriminant of the transversal type, a subscheme of Z, that reflects these
degenerations whenever the generic transversal type is "ordinary". We have
established the basic local properties of the discriminant.
In the current paper we consider the global case. We compute the equivalence
class of the discriminant in the Picard group, Pic(Z). If $X$ is a
hypersurface, the discriminant is naturally stratified by the singularities of
fibres in the projectivized normal cone $\mathbb{P}N_{X/Z}$.
In this case (under some additional assumptions) we compute the classes of
low codimension strata in the Chow group, $A^2(Z)$.
As immediate applications, we (re)derive the multi-degrees of the classical
discriminant of projective complete intersections and bound the jumps of
multiplicity of X along Z (when the singular locus is one-dimensional).
|
We give a new method to construct isolated left orderings of groups whose
positive cones are finitely generated. Our construction uses an amalgamated
free product of two groups having an isolated ordering. We construct many new
examples of isolated orderings, and give an example of an isolated left
ordering with various properties that previously known isolated orderings do
not have.
|
The transition to a deeply decarbonized energy system requires coordinated
planning of infrastructure investments and operations serving multiple end-uses
while considering technology and policy-enabled interactions across sectors.
Electricity and natural gas (NG), which are vital vectors of today's energy
system, are likely to be coupled in different ways in the future, resulting
from increasing electrification, adoption of variable renewable energy (VRE)
generation in the power sector and policy factors such as cross-sectoral
emissions trading. This paper develops a least-cost investment and operations
model for joint planning of electricity and NG infrastructures that considers a
wide range of available and emerging technology options across the two vectors,
including carbon capture and storage (CCS) equipped power generation,
low-carbon drop-in fuels (LCDF) as well as long-duration energy storage (LDES).
The model incorporates the main operational constraints of both systems and
allows each system to operate under different temporal resolutions consistent
with their typical scheduling timescales. We apply the modeling framework to
evaluate power-NG system outcomes for the U.S. New England region under
different technology, decarbonization-goal, and demand scenarios. Under a
global emissions constraint requiring an 80-95\% emissions reduction relative
to 1990 levels, the least-cost solution disproportionately relies on
using the available emissions budget to serve non-power NG demand and results
in the power sector using only 15-43\% of the emissions budget.
|
A set of positive integers is primitive (or 1-primitive) if no member divides
another. Erd\H{o}s proved in 1935 that the weighted sum $\sum 1/(n \log n)$ for
$n$ ranging over a primitive set $A$ is universally bounded over all choices
for $A$. In 1988 he asked if this universal bound is attained by the set of
prime numbers. One source of difficulty in this conjecture is that $\sum
n^{-\lambda}$ over a primitive set is maximized by the primes if and only if
$\lambda$ is at least the critical exponent $\tau_1 \approx 1.14$.
A set is $k$-primitive if no member divides any product of up to $k$ other
distinct members. One may similarly consider the critical exponent $\tau_k$ for
which the primes are maximal among $k$-primitive sets. In recent work the
authors showed that $\tau_2 < 0.8$, which directly implies the Erd\H{o}s
conjecture for 2-primitive sets. In this article we study the limiting behavior
of the critical exponent, proving that $\tau_k$ tends to zero as $k\to\infty$.
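A minimal Python sketch (illustrative, not from the paper) of the Erd\H{o}s
sum $\sum 1/(n \log n)$, evaluated over the primes up to a cutoff; the full
series over the primes converges slowly, to roughly 1.64.

    import math
    import numpy as np

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        sieve = np.ones(n + 1, dtype=bool)
        sieve[:2] = False
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = False
        return np.flatnonzero(sieve)

    def erdos_sum(A):
        """f(A) = sum over n in A of 1/(n log n)."""
        return sum(1.0 / (n * math.log(n)) for n in A)

    # Partial sum over primes below 10^6; the full series tends to ~1.64.
    print(erdos_sum(primes_up_to(10**6)))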
|
The light-cone Hamiltonians for spin 1 and spin 2 fields, describing both the
pure and the maximally supersymmetric theories, may be expressed as quadratic
forms. In this paper, we show that this feature extends to light-cone higher
spin theories. To first order in the coupling constant, we prove that the
higher spin Hamiltonians, with and without supersymmetry, are quadratic forms.
Scattering amplitude structures emerge naturally in this framework and we
relate the momentum space vertex in a supersymmetric higher spin theory to the
corresponding vertex in the N=4 Yang-Mills theory.
|
It is shown that any generalized Kac-Moody Lie algebra g that has no mutually
orthogonal imaginary simple roots can be written as the vector space direct sum
of a Kac-Moody subalgebra and subalgebras isomorphic to free Lie algebras over
certain modules for the Kac-Moody subalgebra. Also included is a detailed
discussion of Borcherds' construction of the Monster Lie algebra from a vertex
algebra and an elementary proof of Borcherds' theorem relating Lie algebras
with `an almost positive definite bilinear form' to generalized Kac-Moody
algebras. (Preprint version 1996)
|
We study the nuclear enhancement of the transverse momentum imbalance for
back-to-back particle production in both p+A and e+A collisions. Specifically,
we present results for photon+jet and photon+hadron production in p+A
collisions, di-jet and di-hadron production in e+A collisions, and heavy-quark
and heavy-meson pair production in both p+A and e+A collisions. We evaluate the
effect of both initial-state and final-state multiple scattering, which
determine the strength of the nuclear-induced transverse momentum imbalance in
these processes. We give theoretical predictions for the experimentally
relevant kinematic regions in d+Au collisions at RHIC, p+Pb collisions at LHC
and e+A collisions at the future EIC and LHeC.
|
We report a NuSTAR observation of a solar microflare, SOL2015-09-01T04.
Although it was too faint to be observed by the GOES X-ray Sensor, we estimate
the event to be an A0.1 class flare in brightness. This microflare, with only 5
counts per second per detector observed by RHESSI, is fainter than any hard
X-ray (HXR) flare in the existing literature. The microflare occurred during a
solar pointing by the highly sensitive NuSTAR astrophysical observatory, which
used its direct focusing optics to produce detailed HXR microflare spectra and
images. The microflare exhibits HXR properties commonly observed in larger
flares, including a fast rise and more gradual decay, earlier peak time with
higher energy, spatial dimensions similar to the RHESSI microflares, and a
high-energy excess beyond an isothermal spectral component during the impulsive
phase. The microflare is small in emission measure, temperature, and energy,
though not in physical size; observations are consistent with an origin via the
interaction of at least two magnetic loops. We estimate the increase in thermal
energy at the time of the microflare to be 2.4x10^27 ergs. The observation
suggests that flares do indeed scale down to extremely small energies and
retain what we customarily think of as "flarelike" properties.
|
We present a scheme to perform an iterative variational optimization with
infinite projected entangled-pair states (iPEPS), a tensor network ansatz for a
two-dimensional wave function in the thermodynamic limit, to compute the ground
state of a local Hamiltonian. The method is based on a systematic summation of
Hamiltonian contributions using the corner transfer-matrix method. Benchmark
results for challenging problems are presented, including the 2D Heisenberg
model, the Shastry-Sutherland model, and the t-J model, which show that the
variational scheme yields considerably more accurate results than the
previously best imaginary time evolution algorithm, with a similar
computational cost and with a faster convergence towards the ground state.
|
This paper presents a search-based partial motion planner to generate
dynamically feasible trajectories for car-like robots in highly dynamic
environments. The planner searches for smooth, safe, and near-time-optimal
trajectories by exploring a state graph built on motion primitives, which are
generated by discretizing the time dimension and the control space. To enable
fast online planning, we first propose an efficient path searching algorithm
based on the aggregation and pruning of motion primitives. We then propose a
fast collision checking algorithm that takes into account the motions of moving
obstacles. The algorithm linearizes relative motions between the robot and
obstacles and then checks collisions by comparing a point-line distance.
Benefiting from the fast searching and collision checking algorithms, the
planner can effectively and safely explore the state-time space to generate
near-time-optimal solutions. Extensive experiments show that the proposed
method generates feasible trajectories within milliseconds while maintaining a
higher success rate than state-of-the-art methods, demonstrating its
advantages.
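Once relative motions are linearized, the collision check reduces to a
point-to-segment distance test. A minimal sketch under assumed conventions (a
disc-shaped safety region; names are illustrative, not the paper's code):

    import numpy as np

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment from a to b."""
        ab = b - a
        denom = float(np.dot(ab, ab))
        if denom == 0.0:  # degenerate segment
            return float(np.linalg.norm(p - a))
        t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * ab)))

    def collides(robot_pos, obs_start, obs_end, safe_radius):
        """Obstacle travels obs_start -> obs_end during one motion primitive."""
        return point_segment_distance(robot_pos, obs_start, obs_end) < safe_radius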
|
We discuss the notion of a dense cluster with respect to the information
distance and prove that all such clusters have an extractable core that
represents the mutual information shared by the objects in the cluster.
|
We study the AdS/CFT relation between an infinite class of 5-d Ypq
Sasaki-Einstein metrics and the corresponding quiver theories. The long BPS
operators of the field theories are matched to massless geodesics in the
geometries, providing a test of AdS/CFT for these cases. Certain small
fluctuations (in the BMN sense) can also be successfully compared. We then go
further and find, using an appropriate limit, a reduced action, first order in
time derivatives, which describes strings with large R-charge. In the field
theory we consider holomorphic operators with large winding numbers around the
quiver and find, interestingly, that, after certain simplifying assumptions,
they can be described effectively as strings moving in a particular metric.
Although not equal, the metric is similar to the one in the bulk. We find it
encouraging that a string picture emerges directly from the field theory and
discuss possible ways to improve the agreement.
|
Regarding three-dimensional (3D) topological insulators and semimetals as a
stack of constituent 2D topological (or sometimes non-topological) layers is a
useful viewpoint. Primarily, concrete theoretical models of the paradigmatic 3D
topological phases such as Weyl semimetal (WSM), strong and weak topological
insulators (STI/WTI), and Chern insulator (CI), are often constructed in that
way. Secondarily, fabrication of the corresponding 3D topological material is
also done in the same spirit; an epitaxial growth technique is employed, so
the resulting sample takes the form of a thin film. In this paper we calculate
$\mathbb{Z}$- and $\mathbb{Z}_2$-indices and study the evolution of the
topological properties of such thin films of 3D topological systems, making a
comparative study of CI- vs. TI-type models belonging to different symmetry
classes in this respect. Through this comparative study we suggest that WSM is
to CI as STI is to WTI. Finally, to test the robustness of our scenario against
disorder and its relevance to experiments, we have also studied numerically the
two-terminal conductance of the system using the transfer matrix method.
|
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis
with mostly low-resolution (LR) inputs. Our method is built upon Neural
Radiance Fields (NeRF) that predicts per-point density and color with a
multi-layer perceptron. While producing images at arbitrary scales, NeRF
struggles with resolutions that go beyond observed images. Our key insight is
that NeRF benefits from 3D consistency, which means an observed pixel absorbs
information from nearby views. We first exploit it by a supersampling strategy
that shoots multiple rays at each image pixel, which further enforces
multi-view constraint at a sub-pixel level. Then, we show that NeRF-SR can
further boost the performance of supersampling by a refinement network that
leverages the estimated depth at hand to hallucinate details from related
patches on only one HR reference image. Experimental results demonstrate that
NeRF-SR generates high-quality results for novel view synthesis at HR on both
synthetic and real-world datasets without any external information.
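One simple way to realize the supersampling strategy is stratified sub-pixel
jittering; the sketch below (an assumption about the sampling pattern, not the
paper's exact scheme) generates the per-pixel ray offsets:

    import numpy as np

    def subpixel_offsets(s, rng=None):
        """s*s stratified, jittered sample positions inside one pixel."""
        rng = rng or np.random.default_rng(0)
        i, j = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
        cells = np.stack([i.ravel(), j.ravel()], axis=-1)
        return (cells + rng.random((s * s, 2))) / s  # points in [0, 1)^2

    # The supersampled color of pixel (px, py) is the average of the renders
    # of rays cast through (px + du, py + dv) for each offset (du, dv).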
|
Based on first-principles calculations, the evolution of the electronic and
magnetic properties of transition metal dihalides MX$_2$ (M= V, Mn, Fe, Co, Ni;
X = Cl, Br, I) is analyzed from the bulk to the monolayer limit. A variety of
magnetic ground states is obtained as a result of the competition between
direct exchange and superexchange. The results predict that FeX$_2$, NiX$_2$,
CoCl$_2$ and CoBr$_2$ monolayers are ferromagnetic insulators with sizable
magnetocrystalline anisotropies. This makes them ideal candidates for robust
ferromagnetism at the single layer level. Our results also highlight the
importance of spin-orbit coupling to obtain the correct ground state.
|
Patterns by self-organization in nature have garnered significant interest in
a range of disciplines due to their intriguing structures. The snowdrift game
(SDG) is regarded as an anti-coordination game, yet anti-coordination patterns
in it are counterintuitively rare. In this work, we introduce a model called
the Two-Agents, Two-Action Reinforcement Learning Evolutionary Game
($2\times 2$ RLEG), and apply it to the SDG on regular
lattices. We uncover intriguing phenomena in the form of Anti-Coordinated
domains (AC-domains), where different frustration regions are observed and
continuous phase transitions at the boundaries are identified. To understand
the underlying mechanism, we develop a perturbation theory to analyze the
stability of different AC-domains. Our theory accurately partitions the
parameter space into non-anti-coordinated, anti-coordinated, and mixed areas,
and captures their dependence on the learning parameters. Lastly, abnormal
scenarios with a large learning rate and a large discount factor that deviate
from the theory are investigated by examining the growth and nucleation of
AC-domains. Our work provides insights into the emergence of spatial patterns
in nature, and contributes to the development of theory for analysing their
structural complexities.
|
Recently proposed supergravity theories in odd dimensions whose fields are
connection one-forms for the minimal supersymmetric extensions of anti-de
Sitter gravity are discussed. Two essential ingredients are required for this
construction: (1) The superalgebras, which extend the adS algebra for different
dimensions, and (2) the lagrangians, which are Chern-Simons $(2n-1)$-forms. The
first item completes the analysis of van Holten and Van Proeyen, which was
valid for N=1 only. The second ensures that the actions are invariant by
construction under the gauge supergroup and, in particular, under local
supersymmetry. Thus, unlike standard supergravity, the local supersymmetry
algebra closes off-shell and without requiring auxiliary fields.
The superalgebras are constructed for all dimensions and they fall into three
families: $osp(m|N)$ for $D=2,3,4$, mod 8, $osp(N|m)$ for $D=6,7,8$, mod 8, and
$su(m-2,2|N)$ for D=5 mod 4, with $m=2^{[D/2]}$. The lagrangian is constructed
for $D=5, 7$ and 11. In all cases the field content includes the vielbein
($e_{\mu}^{a}$), the spin connection ($\omega_{\mu}^{ab}$), $N$ gravitini
($\psi_{\mu}^{i}$), and some extra bosonic "matter" fields which vary from one
dimension to another.
|
In this paper the spectral analysis of all possible linear congruential
sequences with maximum period is conducted and the best random number
generators are selected among them.
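A linear congruential generator iterates $x_{n+1} = (a x_n + c) \bmod m$; a
maximum-period choice of parameters satisfies the Hull-Dobell conditions. A
small illustrative sketch (the parameters below are the classical glibc-style
ones, used here only as an example):

    import itertools

    def lcg(m, a, c, seed):
        """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    # a-1 is divisible by every prime factor of m (and by 4, since 4 | m), and
    # gcd(c, m) = 1, so the period is the full m = 2^31 (Hull-Dobell).
    gen = lcg(2**31, 1103515245, 12345, seed=42)
    print(list(itertools.islice(gen, 5)))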
|
Eigenmodes of the electromagnetic field with perfectly conducting or
infinitely permeable conditions on the boundary of a D-dimensional spherically
symmetric cavity are derived explicitly. It is shown that there are (D-2)
polarizations for TE modes and one polarization for TM modes, giving rise to a
total of (D-1) polarizations. In the case of a D-dimensional ball, the
eigenfrequencies of the electromagnetic field with perfectly conducting
boundary condition coincide with the eigenfrequencies of gauge one-forms with
relative boundary condition, whereas the eigenfrequencies of the
electromagnetic field with infinitely permeable boundary condition coincide
with the eigenfrequencies of gauge one-forms with absolute boundary condition.
The Casimir energy for a D-dimensional spherical shell
configuration is computed using both cut-off regularization and zeta
regularization. For a double spherical shell configuration, it is shown that
the Casimir energy can be written as a sum of the single spherical shell
contributions and an interacting term, and the latter is free of divergence.
The interacting term always gives rise to an attractive force between the two
spherical shells. Its leading term is the Casimir force acting between two
parallel plates of the same area, as expected by proximity force approximation.
|
We obtain plane fronted gravitational waves (PFGWs) in arbitrary dimension in
Lovelock gravity, to any order in the Riemann tensor. We exhibit pure gravity
as well as Lovelock-Yang-Mills PFGWs. Lovelock-Maxwell and $pp$ waves arise as
particular cases. The electrovac solutions trivially satisfy the
Lovelock-Born-Infeld field equations. The peculiarities that arise in
degenerate Lovelock theories are also analyzed.
|
We study the back reaction of a thermal field in a weak gravitational
background depicting the far-field limit of a black hole enclosed in a box by
the Closed Time Path (CTP) effective action and the influence functional method.
We derive the noise and dissipation kernels of this system in terms of
quantities in quasi-equilibrium, and formally prove the existence of a
Fluctuation-Dissipation Relation (FDR) at all temperatures between the quantum
fluctuations of the thermal radiance and the dissipation of the gravitational
field. This dynamical self-consistent interplay between the quantum field and
the classical spacetime is, we believe, the correct way to treat back-reaction
problems. To emphasize this point we derive an Einstein-Langevin equation which
describes the non-equilibrium dynamics of the gravitational perturbations under
the influence of the thermal field. We show the connection between our method
and the linear response theory (LRT), and indicate how the functional method
can provide more accurate results than prior derivations of FDRs via LRT in the
test-field, static conditions. This method is in principle useful for treating
fully non-equilibrium cases such as back reaction in black hole collapse.
|
We study the virial coefficients B_k of hard spheres in D dimensions by means
of Monte-Carlo integration. We find that B_5 is positive in all dimensions but
that B_6 is negative for all D >= 6. For 7<=k<=17 we compute sets of Ree-Hoover
diagrams and find that either for large D or large k the dominant diagrams are
"loose packed". We use these results to study the radius of convergence and the
validity of the many approximations used for the equations of state for hard
spheres.
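As an illustration of the Monte-Carlo approach (a textbook sketch, not the
authors' Ree-Hoover code), the reduced third virial coefficient of hard spheres
can be estimated by sampling two points uniformly in the exclusion ball of a
third:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_in_ball(n, dim, radius=1.0):
        """Uniform points in a D-ball: random direction times radius*U^(1/D)."""
        x = rng.normal(size=(n, dim))
        x /= np.linalg.norm(x, axis=1, keepdims=True)
        return x * (radius * rng.random(n) ** (1.0 / dim))[:, None]

    def b3_over_b2sq(dim, n=10**6, sigma=1.0):
        """MC estimate of B_3/B_2^2 for hard spheres of diameter sigma."""
        r2 = sample_in_ball(n, dim, sigma)  # overlaps particle 1 at the origin
        r3 = sample_in_ball(n, dim, sigma)
        p = np.mean(np.linalg.norm(r2 - r3, axis=1) < sigma)
        return 4.0 * p / 3.0  # since B_2 = V/2 and B_3 = p*V^2/3

    print(b3_over_b2sq(3))  # ~0.625 = 5/8, the exact three-dimensional value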
|
In this note we present a characterisation of exponentiable approach spaces
in terms of ultrafilter convergence.
|
We construct an equivariant Chern character for an equivariant cohomology
theory on proper equivariant CW-complexes, provided that certain conditions
on the coefficients are satisfied. These conditions are fulfilled if the
coefficients of the equivariant cohomology theory possess a Mackey structure.
Such a structure is present in many interesting examples.
|
In collisionless and weakly collisional plasmas, the particle distribution
function is a rich tapestry of the underlying physics. However, actually
leveraging the particle distribution function to understand the dynamics of a
weakly collisional plasma is challenging. The equation system of relevance, the
Vlasov-Maxwell-Fokker-Planck (VM-FP) system of equations, is difficult to
numerically integrate, and traditional methods such as the particle-in-cell
method introduce counting noise into the distribution function.
In this thesis, we present a new algorithm for the discretization of the VM-FP
system of equations for the study of plasmas in the kinetic regime. Using the
discontinuous Galerkin (DG) finite element method for the spatial
discretization and a third-order strong-stability-preserving Runge-Kutta
method for the time discretization, we obtain an accurate solution for the plasma's
distribution function in space and time.
We both prove the numerical method retains key physical properties of the
VM-FP system, such as the conservation of energy and the second law of
thermodynamics, and demonstrate these properties numerically. These results are
contextualized in the history of the DG method. We discuss the importance of
the algorithm being alias-free, a necessary condition for deriving stable DG
schemes of kinetic equations so as to retain the implicit conservation
relations embedded in the particle distribution function, and the
computationally favorable implementation using a modal, orthonormal basis in
comparison to
traditional DG methods applied in computational fluid dynamics. Finally, we
demonstrate how the high fidelity representation of the distribution function,
combined with novel diagnostics, permits detailed analysis of the energization
mechanisms in fundamental plasma processes such as collisionless shocks.
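For reference, the third-order strong-stability-preserving Runge-Kutta scheme
named above is, in its standard Shu-Osher form (a generic sketch, not the
thesis code):

    def ssp_rk3(f, u, t, dt):
        """One SSP-RK3 step for du/dt = f(t, u) (Shu-Osher form)."""
        u1 = u + dt * f(t, u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))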
|
Affine systems reachability is the basis of many verification methods. With
further computation, methods exist to reason about richer models with inputs,
nonlinear differential equations, and hybrid dynamics. As such, the scalability
of affine systems verification is a prerequisite to scalable analysis for more
complex systems. In this paper, we improve the scalability of affine systems
verification, in terms of the number of dimensions (variables) in the system.
The reachable states of affine systems can be written in terms of the matrix
exponential, and safety checking can be performed at specific time steps with
linear programming. Unfortunately, for large systems with many state variables,
this direct approach requires an intractable amount of memory while using an
intractable amount of computation time. We overcome these challenges by
combining several methods that leverage common problem structure. Memory is
reduced by exploiting initial states that are not full-dimensional and safety
properties (outputs) over a few linear projections of the state variables.
Computation time is saved by using numerical simulations to compute only
projections of the matrix exponential relevant for the verification problem.
Since large systems often have sparse dynamics, we use Krylov-subspace
simulation approaches based on the Arnoldi or Lanczos iterations. Our method
produces accurate counter-examples when properties are violated and, in the
extreme case with sufficient problem structure, can analyze a system with one
billion real-valued state variables.
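The computational pattern described above can be illustrated with SciPy, which
provides the action of the matrix exponential on a vector without forming the
dense exponential (dedicated Arnoldi/Lanczos codes play the same role); all
sizes and matrices below are illustrative:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    n = 10_000                                   # sparse dynamics x' = A x
    A = sp.random(n, n, density=1e-3, format="csc", random_state=0)
    x0 = np.zeros(n); x0[0] = 1.0                # a single initial direction
    C = sp.random(2, n, density=0.01, format="csr", random_state=1)  # outputs

    # States exp(t*A) x0 on a time grid, without ever forming expm(A):
    xs = expm_multiply(A, x0, start=0.0, stop=1.0, num=11)  # shape (11, n)
    outputs = (C @ xs.T).T   # only the low-dimensional projections are kept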
|
We describe the formation of charge- and spin-density patterns induced by
spin-selective photoexcitations of interacting fermionic systems in the
presence of a microstructure. As an example, we consider a one-dimensional
Hubbard-like system with a periodic magnetic microstructure, which has a
uniform charge distribution in its ground state, and in which a long-lived
charge-density pattern is induced by the spin-selective photoexcitation. Using
tensor-network methods, we study the full quantum dynamics in the presence of
electron-electron interactions and identify doublons as the main decay channel
for the induced charge pattern. Our setup is compared to the OISTR mechanism,
in which ultrafast optically induced spin transfer in Heusler and magnetic
compounds is associated with the difference of the local density of states of the
different elements in the alloys. We find that applying a spin-selective
excitation there induces spatially periodic patterns in local observables.
Implications for pump-probe experiments on correlated materials and experiments
with ultracold gases on optical lattices are discussed.
|
We construct a family of right coideal subalgebras of quantum groups, which
have the property that all irreducible representations are one-dimensional, and
which are maximal with this property. The obvious examples for this are the
standard Borel subalgebras expected from Lie theory, but in a quantum group
there are many more. Constructing and classifying them is interesting for
structural reasons, and because they lead to unfamiliar induced (Verma-)modules
for the quantum group. The explicit family we construct in this article
consists of quantum Weyl algebras combined with parts of a standard Borel
subalgebra, and they have a triangular decomposition. Our main result is
proving their Borel subalgebra property. Conversely we prove under some
restrictions a classification result, which characterizes our family. Moreover
we list for Uq(sl4) all possible triangular Borel subalgebras, using our
underlying results and additional by-hand arguments. This gives a good working
example and puts our results into context.
|
We consider escape from a metastable state of a nonlinear oscillator driven
close to triple its eigenfrequency. The oscillator can have three stable states
of period-3 vibrations and a zero-amplitude state. Because of the symmetry of
period-tripling, the zero-amplitude state remains stable as the driving
increases. However, it becomes shallow in the sense that the rate of escape
from this state exponentially increases, while the system still lacks detailed
balance. We find the escape rate and show how it scales with the parameters of
the oscillator and the driving. The results facilitate using nanomechanical,
Josephson-junction based, and other mesoscopic vibrational systems for
studying, in a well-controlled setting, the rates of rare events in systems
lacking detailed balance. They also describe how fluctuations spontaneously
break the time-translation symmetry of a driven oscillator.
|
Quantum Information Processing, which is an exciting area of research at the
intersection of physics and computer science, has great potential for
influencing the future development of information processing systems. The
building of practical, general purpose Quantum Computers may be some years into
the future. However, Quantum Communication and Quantum Cryptography are well
developed. Commercial Quantum Key Distribution systems are easily available and
several QKD networks have been built in various parts of the world. The
security of the protocols used in these implementations relies on
information-theoretic proofs, which may or may not reflect actual system
behaviour. Moreover, testing of implementations cannot guarantee the absence of
bugs and errors. This paper presents a novel framework for modelling and
verifying quantum protocols and their implementations using the proof assistant
Coq. We provide a Coq library for quantum bits (qubits), quantum gates, and
quantum measurement. As a step towards verifying practical quantum
communication and security protocols such as Quantum Key Distribution, we
support multiple qubits, communication and entanglement. We illustrate these
concepts by modelling the Quantum Teleportation Protocol, which communicates
the state of an unknown quantum bit using only a classical channel.
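While the paper's development is in Coq, the primitives it models (qubits,
gates, measurement, entanglement) have a compact state-vector analogue; a
purely illustrative Python sketch:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])           # control = 1st qubit

    zero = np.array([1.0, 0.0])
    psi = np.kron(H @ zero, zero)          # (|00> + |10>)/sqrt(2)
    bell = CNOT @ psi                      # entangled pair (|00> + |11>)/sqrt(2)

    probs = np.abs(bell) ** 2
    probs /= probs.sum()                   # guard against rounding
    outcome = np.random.default_rng(0).choice(4, p=probs)   # measurement
    print(format(outcome, "02b"), probs)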
|
We consider a nonlinear Choquard equation $$
-\Delta u+u= (V * |u|^p )|u|^{p-2}u \qquad \text{in }\mathbb{R}^N, $$ when
the self-interaction potential $V$ is unbounded from below. Under some
assumptions on $V$ and on $p$, covering $p =2$ and $V$ being the one- or
two-dimensional Newton kernel, we prove the existence of a nontrivial
groundstate solution $u\in H^1 (\mathbb{R}^N)\setminus\{0\}$ by solving a
relaxed problem by a constrained minimization and then proving the convergence
of the relaxed solutions to a groundstate of the original equation.
|
Following the recent measurement of the acoustic peak by the BOOMERanG and
MAXIMA experiments in the CMB anisotropy angular power spectrum, many analyses
have found that the geometry of the Universe is very close to flat, but
slightly closed models are favoured. In this paper we will briefly review how
the CMB anisotropies depend on the curvature, explaining the assumptions we
make and showing that this skew towards closed models can be easily
explained by degeneracies in the cosmological parameters. While it is difficult
to give independent constraints on the cosmological constant and/or different
forms of dark energies, we will also show that combining CMB measurements with
other observational data will introduce new and tighter constraints, such as
$\Omega_\Lambda > 0$ at high significance.
|
Few-shot learning for image classification has become a hot topic in
computer vision; it aims at learning quickly from a limited number of labeled
images and generalizing to new tasks. In this paper, motivated by the idea
of Fisher Score, we propose a Discriminative Local Descriptors Attention (DLDA)
model that adaptively selects the representative local descriptors and does not
introduce any additional parameters, while most of the existing local
descriptors based methods utilize the neural networks that inevitably involve
the tedious parameter tuning. Moreover, we modify the traditional $k$-NN
classification model by adjusting the weights of the $k$ nearest neighbors
according to their distances from the query point. Experiments on four
benchmark datasets show that our method not only achieves higher accuracy
compared with state-of-the-art approaches for few-shot learning, but also
possesses lower sensitivity to the choices of $k$.
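A generic sketch of a distance-weighted $k$-NN vote of the kind described (the
paper's precise weighting scheme may differ; names are illustrative):

    import numpy as np

    def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-12):
        """Vote among the k nearest neighbors, weighted by inverse distance."""
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        scores = {}
        for label, dist in zip(y_train[idx], d[idx]):
            scores[label] = scores.get(label, 0.0) + 1.0 / (dist + eps)
        return max(scores, key=scores.get)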
|
Chronic pain is a multi-dimensional experience, and pain intensity plays an
important part, impacting the patients' emotional balance, psychology, and
behaviour. Standard self-reporting tools, such as the Visual Analogue Scale for
pain, fail to capture this burden. Moreover, such tools are susceptible
to a degree of subjectivity, depending on the patient's clear understanding of
how to use them, social biases, and the ability to translate a complex
experience into a scale. To overcome these and other self-reporting challenges,
pain intensity estimation has been previously studied based on facial
expressions, electroencephalograms, brain imaging, and autonomic features.
However, to the best of our knowledge, it has never been attempted to base this
estimation on the patient narratives of the personal experience of chronic
pain, which is what we propose in this work. Indeed, in the clinical assessment
and management of chronic pain, verbal communication is essential to convey
information to physicians that would otherwise not be easily accessible through
standard reporting tools, since language, sociocultural, and psychosocial
variables are intertwined. We show that language features from patient
narratives indeed convey information relevant for pain intensity estimation,
and that our computational models can take advantage of that. Specifically, our
results show that patients with mild pain focus more on the use of verbs,
whilst moderate and severe pain patients focus on adverbs, and nouns and
adjectives, respectively, and that these differences allow for the distinction
between these three pain classes.
|
It has recently been shown that, besides the Schwarzschild black hole
solution, there also exist scalarized black hole solutions in some
Einstein-scalar-Gauss-Bonnet theories. In this paper, we construct analytical
expressions for the metric functions and scalar field configurations for these
scalarized black hole solutions approximately by employing the continued
fraction parametrization method and investigate their thermodynamic stability.
It is found that the horizon entropy of a scalarized black hole is always
smaller than that of a Schwarzschild black hole, which indicates that these
scalarized black holes may decay to Schwarzschild black holes by emission of
scalar waves. This fact also implies the possibility of extracting the energy
of the scalar charges.
|
The positive therapeutic effect of viewing pet images online has been
well-studied. However, it is difficult to obtain large-scale production of such
content since it relies on pet owners to capture photographs and upload them. I
use a Generative Adversarial Network-based framework for the creation of fake
pet images at scale. These images are uploaded on an Instagram account where
they drive user engagement at levels comparable to those seen with images from
accounts with traditional pet photographs, underlining the applicability of the
framework to be used for pet-therapy social media content.
|
We study the dispersionless limit of the recently introduced Toda lattice
hierarchy with constraint of type B (the B-Toda hierarchy) and compare it with
that of the DKP and C-Toda hierarchies. The dispersionless limits of the B-Toda
and C-Toda hierarchies turn out to be the same.
|
Antiprotons in Fermilab's Recycler ring are cooled by a 4.3 MeV, 0.1 - 0.5 A
DC electron beam (as well as by a stochastic cooling system). The unique
combination of the relativistic energy ({\gamma} = 9.49), an Ampere - range DC
beam, and a relatively weak focusing makes the cooling efficiency particularly
sensitive to ion neutralization. A capability to clear ions was recently
implemented by way of interrupting the electron beam for 1-30 $\mu$s with a
repetition rate of up to 40 Hz. The cooling properties of the electron beam
were analyzed with drag rate measurements and showed that accumulated ions
significantly affect the beam optics. For a beam current of 0.3 A, the
longitudinal cooling rate was increased by a factor of ~2 when ions were removed.
|
Models for near-rigid shape matching are typically based on distance-related
features, in order to infer matches that are consistent with the isometric
assumption. However, real shapes from image datasets, even when expected to be
related by "almost isometric" transformations, are actually subject not only to
noise but also, to some limited degree, to variations in appearance and scale.
In this paper, we introduce a graphical model that parameterises appearance,
distance, and angle features and we learn all of the involved parameters via
structured prediction. The outcome is a model for near-rigid shape matching
which is robust in the sense that it is able to capture the possibly limited
but still important scale and appearance variations. Our experimental results
reveal substantial improvements upon recent successful models, while
maintaining similar running times.
|
This paper develops the basic analytical theory related to some recently
introduced crowd dynamics models. Where well posedness was known only locally
in time, it is here extended to all of $\mathbb{R}^+$. The results on the stability
with respect to the equations are improved. Moreover, here the case of several
populations is considered, obtaining the well posedness of systems of multi-D
non-local conservation laws. The basic analytical tools are provided by the
classical Kruzkov theory of scalar conservation laws in several space
dimensions.
|
The next generation of communication is envisioned to be intelligent
communication, which can replace traditional symbolic communication and in
which highly condensed semantic information, considering both source and
channel, is extracted and transmitted with high efficiency. Recent popular
large models such as GPT4 and rapidly advancing learning techniques lay a
solid foundation for intelligent communication and prompt its practical
deployment in the near future. Given the "train once, use widely"
characteristic of such multimodal large language models, we argue that a
pay-as-you-go service mode is suitable in this context, referred to as Large
Model as a Service (LMaaS). However, the trading and pricing problem is quite
complex, with heterogeneous and dynamic customer environments, making the
pricing optimization problem challenging with no off-the-shelf solutions. In this paper,
we aim to fill this gap and formulate the LMaaS market trading as a Stackelberg
game with two steps. In the first step, we optimize the seller's pricing
decision and propose an Iterative Model Pricing (IMP) algorithm that optimizes
the prices of large models iteratively by reasoning customers' future rental
decisions, which is able to achieve a near-optimal pricing solution. In the
second step, we optimize customers' selection decisions by designing a robust
selecting and renting (RSR) algorithm, which is guaranteed to be optimal with
rigorous theoretical proof. Extensive experiments confirm the effectiveness and
robustness of our algorithms.
|
Iwaniec and Sarnak showed that at least 25% of the L-values associated to
holomorphic newforms of fixed even integral weight and level $N \rightarrow
\infty$ do not vanish at the critical point when N is square-free and
$\phi(N)\sim N$. In this paper we extend this result to the case of prime
power level $N=p^{\nu}$, $\nu\geq 2$.
|
Due to their anisotropy, layered materials are excellent candidates for
studying the interplay between the in-plane and out-of-plane entanglement in
strongly correlated systems. A relevant example is provided by 1T-TaS2, which
exhibits a multifaceted electronic and magnetic scenario due to the existence
of several charge density wave (CDW) configurations. It includes quantum hidden
phases, superconductivity and exotic quantum spin liquid (QSL) states, which
are highly dependent on the out-of-plane stacking of the CDW. In this system,
the interlayer stacking of the CDW is crucial for the interpretation of the
underlying electronic and magnetic phase diagram. Here, thin layers of 1T-TaS2
are integrated in vertical van der Waals heterostructures based on few-layer
graphene (FLG) contacts and their electrical transport properties are measured.
Different activation energies in the conductance and a gap at the Fermi level
are clearly observed. Our experimental findings are supported by fully
self-consistent DFT+U calculations, which evidence the presence of an energy
gap in the few-layer limit, not necessarily coming from the formation of
out-of-plane spin-paired bilayers at low temperatures, as previously proposed
for the bulk. These results highlight dimensionality as a key effect for
understanding the properties of 1T-TaS2 and open the door to the possible
experimental realization of low-dimensional QSLs.
|
Learning to capture long-range relations is fundamental to image/video
recognition. Existing CNN models generally rely on increasing depth to model
such relations which is highly inefficient. In this work, we propose the
"double attention block", a novel component that aggregates and propagates
informative global features from the entire spatio-temporal space of input
images/videos, enabling subsequent convolution layers to access features from
the entire space efficiently. The component is designed with a double attention
mechanism in two steps, where the first step gathers features from the entire
space into a compact set through second-order attention pooling and the second
step adaptively selects and distributes features to each location via another
attention. The proposed double attention block is easy to adopt and can be
plugged into existing deep neural networks conveniently. We conduct extensive
ablation studies and experiments on both image and video recognition tasks for
evaluating its performance. On the image recognition task, a ResNet-50 equipped
with our double attention blocks outperforms a much larger ResNet-152
architecture on the ImageNet-1k dataset with over 40% fewer parameters and
fewer FLOPs. On the action recognition task, our proposed model achieves the
state-of-the-art results on the Kinetics and UCF-101 datasets with
significantly higher efficiency than recent works.
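A compact PyTorch sketch of the two-step mechanism (gather by second-order
attention pooling, then distribute), written from the description above and
therefore not guaranteed to match the authors' implementation detail for
detail:

    import torch
    import torch.nn as nn

    class DoubleAttention(nn.Module):
        def __init__(self, c_in, c_m, c_n):
            super().__init__()
            self.A = nn.Conv2d(c_in, c_m, 1)    # feature maps
            self.B = nn.Conv2d(c_in, c_n, 1)    # gathering attention
            self.V = nn.Conv2d(c_in, c_n, 1)    # distributing attention
            self.out = nn.Conv2d(c_m, c_in, 1)

        def forward(self, x):
            b, _, h, w = x.shape
            A = self.A(x).flatten(2)                          # b x c_m x hw
            B = torch.softmax(self.B(x).flatten(2), dim=-1)   # over positions
            V = torch.softmax(self.V(x).flatten(2), dim=1)    # over c_n maps
            G = A @ B.transpose(1, 2)      # step 1: compact global feature set
            Z = G @ V                      # step 2: distribute to each location
            return x + self.out(Z.view(b, -1, h, w))

    # blk = DoubleAttention(64, 32, 32); y = blk(torch.randn(2, 64, 16, 16))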
|
We present simulations of the superradiant dynamics of ensembles of atoms in
the presence of collective and individual atomic decay processes. We unravel
the density matrix with Monte-Carlo wave-functions and identify the quantum
jumps in a reduced Dicke state basis, which reflects the permutation symmetry
of the identical atoms. While the number of density matrix elements in the
Dicke representation increases polynomially with atom number, the quantum jump
dynamics populates only a single Dicke state at a time, and thus efficient
simulations can be carried out for tens of thousands of atoms. The calculated
superradiance pulses from initially excited atoms agree quantitatively with
recent experimental results with strontium atoms but rapid atom loss in these
experiments does not permit steady-state superradiance. By introducing an
incident flux of new atoms, the system can maintain a large average atom
number, and our theoretical calculations predict lasing with millihertz
linewidth despite rapid atom number fluctuations.
|
The quantization of the forced harmonic oscillator is studied with the
quantum variables ($x,\hat v$), with the commutation relation $[x,\hat
v]=i\hbar/m$, using a Schr\"odinger-like equation in these variables and
associating a linear operator to a constant of motion $K(x,v,t)$ of the
classical system. The comparison with the quantization in the space ($x,p$) is
done with the usual Schr\"odinger equation for the Hamiltonian $H(x,p,t)$,
and with the commutation relation $[x,\hat p]=i\hbar$. It is found that for the
non-resonant case, both forms of quantization bring about the same result.
However, for the resonant case, the two quantizations differ: the probability
for the system to be in the excited state has fewer oscillations in the
($x,\hat v$) quantization than in the ($x,\hat p$) quantization, the average
energy of the system is higher in the ($x,\hat p$) quantization than in the
($x,\hat v$) quantization, and the Boltzmann-Shannon entropy is higher in the
($x,\hat p$) quantization than in the ($x,\hat v$) quantization.
|