Ontology embeddings map classes, relations, and individuals in ontologies
into $\mathbb{R}^n$, and within $\mathbb{R}^n$ similarity between entities can
be computed or new axioms inferred. For ontologies in the Description Logic
$\mathcal{EL}^{++}$, several embedding methods have been developed that
explicitly generate models of an ontology. However, these methods suffer from
some limitations; they do not distinguish between statements that are
unprovable and provably false, and therefore they may use entailed statements
as negatives. Furthermore, they do not utilize the deductive closure of an
ontology to identify statements that are inferred but not asserted. We
evaluated a set of embedding methods for $\mathcal{EL}^{++}$ ontologies based
on high-dimensional ball representation of concept descriptions, incorporating
several modifications that aim to make use of the ontology deductive closure.
In particular, we designed novel negative losses that account for both the
deductive closure and different types of negatives. We demonstrate that our
embedding methods improve over the baseline ontology embedding in the task of
knowledge base or ontology completion.
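The ball-based representation referenced above can be sketched concretely. The snippet below implements a standard ELEm-style containment loss for a subsumption axiom C ⊑ D, where each class is a ball (center, radius); the function name and margin parameter are our own illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def subclass_loss(c_sub, r_sub, c_sup, r_sup, margin=0.0):
    # Loss for the axiom C ⊑ D when classes are n-dimensional balls
    # (center, radius): zero iff the ball of C fits inside the ball of D.
    return max(0.0, np.linalg.norm(c_sub - c_sup) + r_sub - r_sup + margin)

# C's ball strictly inside D's ball -> zero loss
assert subclass_loss(np.zeros(3), 0.5, np.array([0.1, 0.0, 0.0]), 1.0) == 0.0
# disjoint balls -> positive loss penalising the violated axiom
assert subclass_loss(np.zeros(3), 0.5, np.array([3.0, 0.0, 0.0]), 1.0) > 0.0
```

A negative loss of the kind discussed in the abstract would instead reward configurations where the ball of C lies outside the ball of D for statements known (or assumed) to be false.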
|
J191213.72-441045.1 is a binary system composed of a white dwarf and an
M-dwarf in a 4.03-hour orbit. It shows emission in radio, optical, and X-ray,
all modulated at the white dwarf spin period of 5.3 min, as well as various
orbital sideband frequencies. As in the prototype of the class of
radio-pulsing white dwarfs, AR Scorpii, the observed pulsed emission seems to
be driven by the binary interaction. In this work, we present an analysis of
far-ultraviolet spectra obtained with the Cosmic Origins Spectrograph at the
Hubble Space Telescope, in which we directly detect the white dwarf in
J191213.72-441045.1. We find that the white dwarf has an effective temperature
of 11485+/-90 K and mass of 0.59+/-0.05 solar masses. We place a tentative
upper limit on the magnetic field of ~50 MG. If the white dwarf is in thermal
equilibrium, its physical parameters would imply that crystallisation has not
started in the core of the white dwarf. Alternatively, the effective
temperature could have been affected by compressional heating, indicating a
past phase of accretion. The relatively low upper limit to the magnetic field
and potential lack of crystallisation that could generate a strong field pose
challenges to pulsar-like models for the system and give preference to
propeller models with a low magnetic field. We also develop a geometric model
of the binary interaction which explains many salient features of the system.
|
Operator inference learns low-dimensional dynamical-system models with
polynomial nonlinear terms from trajectories of high-dimensional physical
systems (non-intrusive model reduction). This work focuses on the large class
of physical systems that can be well described by models with quadratic
nonlinear terms and proposes a regularizer for operator inference that induces
a stability bias onto quadratic models. The proposed regularizer is
physics-informed in the sense that it penalizes quadratic terms with large
norms and so
explicitly leverages the quadratic model form that is given by the underlying
physics. This means that the proposed approach judiciously learns from data and
physical insights combined, rather than from either data or physics alone.
Additionally, a formulation of operator inference is proposed that enforces
model constraints for preserving structure such as symmetry and definiteness in
the linear terms. Numerical results demonstrate that models learned with
operator inference and the proposed regularizer and structure preservation are
accurate and stable even in cases where using no regularization or Tikhonov
regularization leads to models that are unstable.
|
We introduce pretty clean modules, extending the notion of clean modules by
Dress, and show that pretty clean modules are sequentially Cohen-Macaulay. We
also extend a theorem of Dress on shellable simplicial complexes to
multicomplexes.
|
The existence of a paradoxical supersolid phase of matter, possessing the
apparently incompatible properties of crystalline order and superfluidity, was
predicted 50 years ago. Solid helium was the natural candidate, but
supersolidity has not yet been observed there, despite numerous attempts.
Ultracold
quantum gases have recently shown the appearance of the periodic order typical
of a crystal, due to various types of controllable interactions. A crucial
feature of a D-dimensional supersolid is the occurrence of up to D+1 gapless
excitations reflecting the Goldstone modes associated with the spontaneous
breaking of two continuous symmetries: the breaking of phase invariance,
corresponding to the locking of the phase of the atomic wave functions at the
origin of superfluid phenomena, and the breaking of translational invariance
due to the lattice structure of the system. The occurrence of such modes has
been the object of intense theoretical investigations, but their experimental
observation is still missing. Here we demonstrate the supersolid symmetry
breaking through the appearance of two distinct compressional oscillation modes
in a harmonically trapped dipolar Bose-Einstein condensate, reflecting the
gapless Goldstone excitations of the homogeneous system. We observe that the
two modes have different natures, with the higher frequency mode associated
with an oscillation of the periodicity of the emergent lattice and the lower
one characterizing the superfluid oscillations. Our work paves the way to
explore the two quantum phase transitions between the superfluid, supersolid
and solid-like configurations that can be accessed by tuning a single
interaction parameter.
|
We investigate the geometry of almost Robinson manifolds, Lorentzian
analogues of almost Hermitian manifolds, defined by Nurowski and Trautman as
Lorentzian manifolds of even dimension equipped with a totally null complex
distribution of maximal rank. Associated to such a structure, there is a
congruence of null curves, which, in dimension four, is geodesic and
non-shearing if and only if the complex distribution is involutive. Under
suitable conditions, the distribution gives rise to an almost Cauchy-Riemann
structure on the leaf space of the congruence.
We give a comprehensive classification of such manifolds on the basis of
their intrinsic torsion. This includes an investigation of the relation between
an almost Robinson structure and the geometric properties of the leaf space of
its congruence. We also obtain conformally invariant properties of such a
structure, and we finally study an analogue of so-called generalised optical
geometries as introduced by Robinson and Trautman.
|
The O(N) invariant quartic anharmonic oscillator is shown to be exactly
solvable if the interaction parameter satisfies special conditions. The problem
is directly related to that of a quantum double well anharmonic oscillator in
an external field. A finite dimensional matrix equation for the problem is
constructed explicitly, along with analytical expressions for some excited
states in the system. The corresponding Niven equations for determining the
polynomial solutions for the problem are given.
|
This document describes the current understanding of the IVOA controlled
vocabulary for describing astronomical data quantities, called Unified Content
Descriptors (UCDs).
The present document defines a new standard (named UCD1+) improving the first
generation of UCDs (hereafter UCD1). The basic idea is to adopt a new syntax
and vocabulary requiring little effort to adapt software already using UCD1.
This document also addresses the questions of maintenance and evolution of
the UCD1+. Examples of use cases within the VO, and tools for using UCD1+ are
also described.
|
The Galactic plane is a strong hard x-ray emitter and the emission forms a
narrow continuous ridge. The currently known hard x-ray sources are far too few
to explain the ridge x-ray emission, and the fundamental question as to whether
the ridge emission is ultimately resolved into numerous dimmer discrete sources
or truly diffuse emission has not yet been settled. In order to obtain a
decisive answer, using the Chandra X-ray Observatory, we have carried out the
deepest hard x-ray survey of a Galactic plane region which is devoid of known
x-ray point sources. We have detected at least 36 new hard x-ray point sources
in addition to strong diffuse emission within a 17' x 17' field of view. The
surface density of the point sources is comparable to that at high Galactic
latitudes after the effects of Galactic absorption are considered. Therefore,
most of these point sources are probably extragalactic, presumably active
galaxies seen through the Galactic disk. The Galactic ridge hard x-ray emission
is diffuse, which indicates that hot plasma is omnipresent along the Galactic
plane, with an energy density more than one order of magnitude higher than that
of any other substance in interstellar space.
|
We obtain two new Thomae-type transformations for hypergeometric series with
r pairs of numeratorial and denominatorial parameters differing by positive
integers. This is achieved by application of the so-called Beta integral method
developed by Krattenthaler and Rao [Symposium on Symmetries in Science (ed. B.
Gruber), Kluwer (2004)] to two recently obtained Euler-type transformations.
Some special cases are given.
|
Physics beyond the Standard Model with a compressed spectrum is well motivated
both theoretically and phenomenologically, especially in connection with dark
matter phenomenology. In this letter, we
propose a method to tag soft final state particles from a decaying process of a
new particle in this parameter space. By taking a supersymmetric gluino search
as an example, we demonstrate how the Large Hadron Collider experimental
collaborations can improve the sensitivity in these non-trivial search regions.
|
Graph neural networks (GNNs) are a powerful tool for combining imaging and
non-imaging medical information for node classification tasks. Cross-network
node classification extends GNN techniques to account for domain drift,
allowing for node classification on an unlabeled target network. In this paper
we present OTGCN, a powerful, novel approach to cross-network node
classification. This approach leans on concepts from graph convolutional
networks to harness insights from graph data structures while simultaneously
applying strategies rooted in optimal transport to correct for the domain drift
that can occur between samples from different data collection sites. This
blended approach provides a practical solution for scenarios with many distinct
forms of data collected across different locations and equipment. We
demonstrate the effectiveness of this approach at classifying Autism Spectrum
Disorder subjects using a blend of imaging and non-imaging data.
|
This paper studies quantized corrupted sensing where the measurements are
contaminated by unknown corruption and then quantized by a dithered uniform
quantizer. We establish uniform guarantees for Lasso that ensure the accurate
recovery of all signals and corruptions using a single draw of the sub-Gaussian
sensing matrix and uniform dither. For signal and corruption with structured
priors (e.g., sparsity, low-rankness), our uniform error rate for constrained
Lasso typically coincides with the non-uniform one [Sun, Cui and Liu, 2022] up
to logarithmic factors. By contrast, our uniform error rate for unconstrained
Lasso exhibits worse dependence on the structured parameters due to
regularization parameters larger than the ones for non-uniform recovery. For
signal and corruption living in the ranges of some Lipschitz continuous
generative models (referred to as generative priors), we achieve uniform
recovery via constrained Lasso with a measurement number proportional to the
latent dimensions of the generative models. Our treatments of the two kinds of
priors are (nearly) unified and share the common key ingredient of a (global)
quantized product embedding (QPE) property, which states that dithered uniform
quantization (universally) preserves inner products. As a by-product, our QPE
result refines the one in [Xu and Jacques, 2020] under a sub-Gaussian random
matrix, and in this specific instance we are able to sharpen the uniform
error decay rate (for the projected back-projection estimator with signals
in some convex symmetric set) presented therein from $O(m^{-1/16})$ to
$O(m^{-1/8})$.
|
Mapping the recent expansion history of the universe offers the best hope for
uncovering the characteristics of the dark energy believed to be responsible
for the acceleration of the expansion. In determining cosmological and
dark-energy parameters to the percent level, systematic uncertainties impose a
floor on the accuracy more severe than the statistical measurement precision.
We delineate the categorization, simulation, and understanding required to
bound systematics for the specific case of the Type Ia supernova method. Using
simulated data of forthcoming ground-based surveys and the proposed space-based
SNAP mission we present Monte Carlo results for the residual uncertainties on
the cosmological parameter determination. The tight systematics control with
optical and near-infrared observations and the extended redshift reach allow a
space survey to bound the systematics below 0.02 magnitudes at z=1.7. For a
typical SNAP-like supernova survey, this keeps total errors within 15% of the
statistical values and provides estimation of Omega_m to 0.03, w_0 to 0.07, and
w' to 0.3; these can be further improved by incorporating complementary data.
|
Maintaining the position that the wave function $\psi$ provides a complete
description of state, the traditional formalism of quantum mechanics is
augmented by introducing continuous trajectories for particles which are sample
paths of a stochastic process determined (including the underlying probability
space) by $\psi$. In the resulting formalism, problems relating to measurements
and objective reality are solved as in Bohmian mechanics (without sharing its
weak points). The pitfalls of Nelson's stochastic mechanics are also avoided.
|
It is well established that physical quantities like the heavy quark
potentials become temperature independent at sufficiently short distances. As a
first application of this feature we suggest a new order parameter for the
confinement/deconfinement phase transition. Our investigations are based on
recent lattice studies.
|
Recently it was shown that permutation-only multimedia ciphers can be
completely broken in a chosen-plaintext scenario. However, the chosen-plaintext
scenario models a very resourceful adversary and does not hold in many
practical situations. To show that these ciphers are totally broken, we propose
a ciphertext-only attack on them. To that end, we investigate speech
permutation-only ciphers and show that the inherent redundancies of the speech
signal can pave the path for a successful ciphertext-only attack. For this task
different concepts and techniques are merged together. First, the Short Time
Fourier Transform (STFT) is employed to extract regularities of the audio
signal in both time and frequency. It is then shown that ciphertexts can be
considered as a set of scrambled puzzles. Next, different techniques such as
estimation, image processing, branch and bound, and graph theory are fused
together to create and solve these puzzles. After extracting the keys from the
solved puzzles, they are applied to the scrambled signal. Conducted tests show
that the proposed method achieves objective and subjective intelligibility of
87.8% and 92.9%. These scores are 50.9% and 34.6% higher than the scores of the
previous method.
|
The interaction between the surface of a 3D topological insulator and a
skyrmion / anti-skyrmion structure is studied in order to investigate the
possibility of electron confinement due to skyrmion presence. Both hedgehog
(N\'eel) and vortex (Bloch) skyrmions are considered. For the hedgehog skyrmion
the in-plane components cannot be disregarded and their interaction with the
surface state of the TI has to be taken into account. A semi-classical
description of the skyrmion chiral angle is obtained using the variational
principle. It is shown that both the hedgehog and the vortex skyrmion can
induce bound states on the surface of the TI. However, the number and the
properties of these states depend strongly on the skyrmion type and on the
skyrmion topological number $N_{Sk}$. The probability densities of the bound
electrons are also derived where it is shown that they are localized within the
skyrmion region.
|
Geodesics (by definition) have an intrinsic 4-acceleration zero. However,
when expressed in terms of coordinates, the coordinate acceleration $d^2 x^i/d
t^2$ can very easily be non-zero, and the coordinate velocity $d x^i/d t$ can
behave unexpectedly. The situation becomes extremely delicate in the
near-horizon limit---for both astrophysical and idealised black holes---where
an inappropriate choice of coordinates can quite easily lead to significant
confusion. We shall carefully explore the relative merits of
horizon-penetrating versus horizon-non-penetrating coordinates, arguing that in
the near-horizon limit the coordinate acceleration $d^2 x^i/d t^2$ is best
interpreted in terms of horizon-penetrating coordinates.
|
Let $G$ be a finite tree with root $r$ and associate to the internal vertices
of $G$ a collection of transition probabilities for a simple nondegenerate
Markov chain. Embed $G$ into a graph $G^\prime$ constructed by gluing finite
linear chains of length at least 2 to the terminal vertices of $G.$ Then
$G^\prime$ admits distinguished boundary layers and the transition
probabilities associated to the internal vertices of $G$ can be augmented to
define a simple nondegenerate Markov chain $X$ on the vertices of $G^\prime.$
We show that the transition probabilities of $X$ can be recovered from the
joint distribution of first hitting time and first hitting place of $X$ started
at the root $r$ for the distinguished boundary layers of $G^\prime.$
|
With $\mu\to e\gamma$ decay forbidden by multiplicative lepton number
conservation, we study muonium--antimuonium transitions induced by neutral
scalar bosons. Pseudoscalars do not induce conversion for triplet muonium,
while for singlet muonium, pseudoscalar and scalar contributions add
constructively. This is in contrast to the usual case of doubly charged scalar
exchange, where the conversion rate is the same for both singlet and triplet
muonium. Complementary to muonium conversion studies, high energy $\mu^+e^- \to
\mu^- e^+$ and $e^-e^- \to \mu^- \mu^-$ collisions could reveal spectacular
resonance peaks for the cases of neutral and doubly charged scalars,
respectively.
|
We use a new scaling variable $\xi_w$, and add low $Q^2$ modifications to
GRV98 leading order parton distribution functions such that they can be used to
model electron, muon and neutrino inelastic scattering cross sections (and also
photoproduction) at both very low and high energies.
|
In this paper we report the discovery of eclipses in the early-type
Magellanic binary Sk-69 194. We derive an ephemeris for this system, and
present a CCD V light curve together with CCD spectroscopic observations. We
also briefly discuss the nature of the binary components.
|
Network meta-analysis (NMA) is a statistical technique for the comparison of
treatment options. The nodes of the network are the competing treatments and
edges represent comparisons of treatments in trials. Outcomes of Bayesian NMA
include estimates of treatment effects, and the probabilities that each
treatment is ranked best, second best and so on. How exactly network geometry
affects the accuracy and precision of these outcomes is not fully understood.
Here we carry out a simulation study and find that disparity in the number of
trials involving different treatments leads to a systematic bias in estimated
rank probabilities. This bias is associated with an increased variation in the
precision of treatment effect estimates. Using ideas from the theory of complex
networks, we define a measure of `degree irregularity' to quantify asymmetry in
the number of studies involving each treatment. Our simulations indicate that
more regular networks have more precise treatment effect estimates and smaller
bias of rank probabilities. We also find that degree regularity is a better
indicator of NMA quality than both the total number of studies in a network and
the disparity in the number of trials per comparison. These results have
implications for planning future trials. We demonstrate that choosing trials
which reduce the network's irregularity can improve the precision and accuracy
of NMA outcomes.
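The abstract does not give the formula for degree irregularity, but the idea can be illustrated with a simple proxy: the coefficient of variation of node degrees, which is zero exactly for a regular evidence network. The function below is our illustrative stand-in, not necessarily the paper's definition.

```python
import numpy as np

def degree_irregularity(edges, n_treatments):
    # each edge (u, v) is one pairwise comparison in the evidence network;
    # score = std/mean of node degrees (0 for a perfectly regular network)
    deg = np.zeros(n_treatments)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg.std() / deg.mean()

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]   # every treatment compared twice
star = [(0, 1), (0, 2), (0, 3)]           # one treatment dominates the trials
assert degree_irregularity(ring, 4) == 0.0
assert degree_irregularity(star, 4) > 0.5
```

In the spirit of the simulation results above, a trial planner would prefer adding the comparison that most reduces this score.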
|
Many interesting models incorporate scalar fields with non-minimal couplings
to the spacetime Ricci curvature scalar. As is well known, if only one scalar
field is non-minimally coupled, then one may perform a conformal transformation
to a new frame in which both the gravitational portion of the Lagrangian and
the kinetic term for the (rescaled) scalar field assume canonical form. We
examine under what conditions the gravitational and kinetic terms in the
Lagrangian may be brought into canonical form when more than one scalar field
has non-minimal coupling. A particular class of two-field models admits such a
transformation, but models with more than two non-minimally coupled fields in
general do not.
|
Recent observations with the Submillimeter Wave Astronomy Satellite indicate
abundances of gaseous H2O and O2 in dense molecular clouds which are
significantly lower than found in standard homogeneous chemistry models. We
present here results for the thermal and chemical balance of inhomogeneous
molecular clouds exposed to ultraviolet radiation in which the abundances of
H2O and O2 are computed for various density distributions, radiation field
strengths and geometries. It is found that an inhomogeneous density
distribution lowers the column densities of H2O and O2 compared to the
homogeneous case by more than an order of magnitude at the same A_V. O2 is
particularly sensitive to the penetrating ultraviolet radiation, more so than
H2O. The S140 and rho Oph clouds are studied as relevant test cases of
star-forming and quiescent regions. The SWAS results of S140 can be
accommodated naturally in a clumpy model with mean density of 2x10^3 cm-3 and
enhancement I_UV=140 compared with the average interstellar radiation field, in
agreement with observations of [CI] and 13CO of this cloud. Additional
radiative transfer computations suggest that this diffuse H2O component is
warm, ~60-90 K, and can account for the bulk of the 1_10-1_01 line emission
observed by SWAS. The rho Oph model yields consistent O2 abundances but too
much H2O, even for [C]/[O]=0.94, if I_UV < 10 (respectively < 40) for a mean
density of 10^3 (respectively 10^4) cm-3. It is concluded that enhanced
photodissociation
in clumpy regions can explain the low H2O and O2 abundances and emissivities
found in the large SWAS beam for extended molecular clouds, but that additional
freeze-out of oxygen onto grains is needed in dense cold cores.
|
During 2022--2023 Z.-W. Sun posed many conjectures on infinite series with
summands involving generalized harmonic numbers. Motivated by this, we deduce
$31$ series identities involving harmonic numbers, three of which were
previously conjectured by the second author. For example, we obtain that \[
\sum_{k=1}^{\infty} \frac{(-1)^k}{k^2{2k \choose k}{3k \choose k}} \big(
\frac{7 k-2}{2 k-1} H_{k-1}^{(2)}-\frac{3}{4 k^2} \big)=\frac{\pi^4}{720} \]
and \[ \sum_{k=1}^\infty \frac{1}{k^2 {2k \choose k}^2} \left(
\frac{30k-11}{k(2k-1)} (H_{2k-1}^{(3)} + 2 H_{k-1}^{(3)}) + \frac{27}{8k^4}
\right) = 4 \zeta(3)^2, \] where $H_n^{(m)}$ denotes $\sum_{0<j \le n}j^{-m}$.
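Both identities can be checked numerically; the central binomial coefficients make the series converge rapidly, so a few dozen terms suffice:

```python
import math

def H(n, m):
    # generalized harmonic number H_n^{(m)} = sum_{j=1}^{n} j^{-m}
    return sum(j ** -m for j in range(1, n + 1))

S1 = sum((-1) ** k / (k ** 2 * math.comb(2 * k, k) * math.comb(3 * k, k))
         * ((7 * k - 2) / (2 * k - 1) * H(k - 1, 2) - 3 / (4 * k ** 2))
         for k in range(1, 40))
assert abs(S1 - math.pi ** 4 / 720) < 1e-12

S2 = sum(1 / (k ** 2 * math.comb(2 * k, k) ** 2)
         * ((30 * k - 11) / (k * (2 * k - 1))
            * (H(2 * k - 1, 3) + 2 * H(k - 1, 3)) + 27 / (8 * k ** 4))
         for k in range(1, 40))
zeta3 = sum(j ** -3 for j in range(1, 100_000))   # zeta(3), truncated
assert abs(S2 - 4 * zeta3 ** 2) < 1e-8
```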
|
We show that nuclear motion of Rydberg atoms can be induced by resonant
dipole-dipole interactions that trigger the energy transfer between two
energetically close Rydberg states. How and if the atoms move depends on their
initial arrangement as well as on the initial electronic excitation. Using a
mixed quantum/classical propagation scheme we obtain the trajectories and
kinetic energies of atoms, initially arranged in a regular chain and prepared
in excitonic eigenstates. The influence of off-diagonal disorder on the motion
of the atoms is examined and it is shown that irregularity in the arrangement
of the atoms can lead to an acceleration of the nuclear dynamics.
|
We examine sets of lines in PG(d,F) meeting each hyperplane in a generator
set of points. We prove that such a set has to contain at least 1.5d lines if
the field F has more than 1.5d elements, and at least 2d-1 lines if the field F
is algebraically closed. We show that suitable 2d-1 lines constitute such a set
(if |F| > or = 2d-1), proving that the lower bound is tight over algebraically
closed fields. Finally, we show that the strong (s,A) subspace designs
constructed by Guruswami and Kopparty have better (smaller) parameter A than
one would think at first sight.
|
We study the group Tame($\mathbf A^3$) of tame automorphisms of the
3-dimensional affine space, over a field of characteristic zero. We recover, in
a unified and (hopefully) simplified way, previous results of Kuroda,
Shestakov, Umirbaev and Wright, about the theory of reduction and the relations
in Tame($\mathbf A^3$). The novelty in our presentation is the emphasis on a
simply connected 2-dimensional simplicial complex on which Tame($\mathbf A^3$)
acts by isometries.
|
We investigate the relationship between turbulence and feedback in the Orion
A molecular cloud using maps of $^{12}$CO(1-0), $^{13}$CO(1-0) and
C$^{18}$O(1-0) from the CARMA-NRO Orion survey. We compare gas statistics with
the impact of feedback in different parts of the cloud to test whether feedback
changes the structure and kinematics of molecular gas. We use principal
component analysis, the spectral correlation function, and the spatial power
spectrum to characterize the cloud. We quantify the impact of feedback with
momentum injection rates of protostellar outflows and wind-blown shells as well
as the surface density of young stars. We find no correlation between shells or
outflows and any of the gas statistics. However, we find a significant
anti-correlation between young star surface density and the slope of the
$^{12}$CO spectral correlation function, suggesting that feedback may influence
this statistic. While calculating the principal components, we find peaks in
the covariance matrix of our molecular line maps offset by 1-3 km s$^{-1}$
toward several regions of the cloud which may be produced by feedback. We
compare these results to predictions from molecular cloud simulations.
|
Catalytic Janus swimmers demonstrate a diffusio-phoretic motion by
self-generating the gradients of concentrations and electric potential. Recent
work has focused on simplified cases, such as a release of solely one type of
ions or low surface fluxes of ions, with limited theoretical guidance. Here, we
consider the experimentally relevant case of particles that release both types
of ions, and obtain a simple expression for a particle velocity in the limit of
thin electrostatic diffuse layer. Our approximate expression is very accurate
even when ion fluxes and surface potentials are large, and allows one to
interpret a number of intriguing phenomena, such as the reversal of the
direction of the particle motion in response to variations of the salt
concentration or self-diffusiophoresis of uncharged particles.
|
Analysing data from Smoothed Particle Hydrodynamics (SPH) simulations is
about understanding global fluid properties rather than individual fluid
elements. Therefore, in order to properly understand the outcome of such
simulations it is crucial to transition from a particle to a grid based
picture. In this paper we briefly summarise different methods of calculating a
representative volume discretisation from SPH data and propose an improved
version of commonly used techniques. We present a possibility to generate
accurate 2D data directly without the CPU time and memory consuming detour over
a 3D grid. We lay out the importance of an accurate algorithm to conserve
integral fluid properties and to properly treat small scale structures using a
typical galaxy simulation snapshot. For demonstration purposes we additionally
calculate velocity power spectra and as expected find the main differences on
small scales. Finally we propose two new multi-purpose analysis packages which
utilise the new algorithms: Pygad and SPHMapper.
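A minimal version of the particle-to-grid step, with the mass-conservation property the text emphasises, might look as follows. The Gaussian kernel is a stand-in for the SPH smoothing kernel, and the per-particle renormalisation is a simplification of the volume-discretisation schemes the paper compares; names and values are illustrative.

```python
import numpy as np

def sph_to_grid_2d(pos, mass, h, grid_n, box):
    # deposit each particle's mass onto the grid with a Gaussian kernel of
    # width h, renormalised per particle so total mass is conserved exactly
    xs = (np.arange(grid_n) + 0.5) * box / grid_n
    gx, gy = np.meshgrid(xs, xs, indexing='ij')
    grid = np.zeros((grid_n, grid_n))
    for (px, py), m, hi in zip(pos, mass, h):
        w = np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * hi ** 2))
        grid += m * w / w.sum()
    return grid

pos = np.array([[0.3, 0.3], [0.7, 0.6]])   # particle positions in a unit box
mass = np.array([2.0, 1.0])
h = np.array([0.05, 0.10])                  # smoothing lengths
grid = sph_to_grid_2d(pos, mass, h, 64, 1.0)
assert abs(grid.sum() - mass.sum()) < 1e-9  # integral property preserved
```

Generating 2D maps directly, as the paper proposes, amounts to the same deposit on a 2D grid without first building and then projecting a 3D volume.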
|
Recently, Chapman et al. argued that holographic complexities for defects
distinguish action from volume. Motivated by their work, we study complexity of
quantum states in conformal field theory with boundary. In generic
two-dimensional BCFT, we employ path-integral optimization, which gives one of
the field-theoretic definitions of the complexity. We also perform holographic
computations of the complexity in Takayanagi's AdS/BCFT model following the
"complexity $=$ volume" conjecture and the "complexity $=$ action" conjecture.
We find that the increments of the complexity due to the boundary show the same
divergent structures in these models, except for the CA complexity in the
AdS$_3$/BCFT$_2$ model, consistent with the argument by Chapman et al. Thus, we
conclude that the boundary does not distinguish the complexities in general.
|
Control policy learning for modular robot locomotion has previously been
limited to proprioceptive feedback and flat terrain. This paper develops
policies for modular systems with vision traversing more challenging
environments. These modular robots can be reconfigured to form many different
designs, where each design needs a controller to function. Though one could
create a policy for individual designs and environments, such an approach is
not scalable given the wide range of potential designs and environments. To
address this challenge, we create a visual-motor policy that can generalize to
both new designs and environments. The policy itself is modular, in that it is
divided into components, each of which corresponds to a type of module (e.g., a
leg, wheel, or body). The policy components can be recombined during training
to learn to control multiple designs. We develop a deep reinforcement learning
algorithm where visual observations are input to a modular policy interacting
with multiple environments at once. We apply this algorithm to train robots
with combinations of legs and wheels, then demonstrate the policy controlling
real robots climbing stairs and curbs.
|
The maximum utility estimation proposed by Elliott and Lieli (2013) can be
viewed as cost-sensitive binary classification; thus, its in-sample overfitting
issue is similar to that of perceptron learning. A utility-maximizing
prediction rule (UMPR) is constructed to alleviate the in-sample overfitting of
the maximum utility estimation. We establish non-asymptotic upper bounds on the
difference between the maximal expected utility and the generalized expected
utility of the UMPR. Simulation results show that the UMPR with an appropriate
data-dependent penalty achieves larger generalized expected utility than common
estimators in binary classification if the conditional probability of the
binary outcome is misspecified.
|
In this paper we determine the number of general points through which a
Brill--Noether curve of fixed degree and genus in any projective space can be
passed.
|
Aggregation has been an important operation since the early days of
relational databases. Today's Big Data applications bring further challenges
when processing aggregation queries, demanding adaptive aggregation algorithms
that can process large volumes of data relative to a potentially limited memory
budget (especially in multiuser settings). Despite its importance, the design
and evaluation of aggregation algorithms have not received the same attention
that other basic operators, such as joins, have received in the literature. As
a result, when considering which aggregation algorithm(s) to implement in a new
parallel Big Data processing platform (AsterixDB), we faced a lack of "off the
shelf" answers that we could simply read about and then implement based on
prior performance studies.
In this paper we revisit the engineering of efficient local aggregation
algorithms for use in Big Data platforms. We discuss the salient implementation
details of several candidate algorithms and present an in-depth experimental
performance study to guide future Big Data engine developers. We show that the
efficient implementation of the aggregation operator for a Big Data platform is
non-trivial and that many factors, including memory usage, spilling strategy,
and I/O and CPU cost, should be considered. Further, we introduce precise cost
models that can help in choosing an appropriate algorithm based on input
parameters including memory budget, grouping key cardinality, and data skew.
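To make the design space concrete, here is a heavily simplified sketch of one classic candidate, hash-based aggregation with partitioned spilling; the budget is counted in distinct groups rather than bytes, and none of this reflects AsterixDB's actual operator:

```python
def hash_aggregate(rows, budget, num_partitions=4, seed=0):
    """rows: iterable of (key, value) pairs; returns {key: sum of values}.
    `budget` caps the number of in-memory groups (a stand-in for bytes)."""
    table = {}
    spills = [[] for _ in range(num_partitions)]
    for key, val in rows:
        if key in table or len(table) < budget:
            table[key] = table.get(key, 0) + val        # aggregate in memory
        else:                                           # memory full: spill row
            spills[hash((key, seed)) % num_partitions].append((key, val))
    result = dict(table)
    for part in spills:
        if part:
            # spilled keys never appear in `table`, and each key lands in
            # exactly one partition, so a plain update is safe; a new seed
            # re-partitions the keys at the next recursion level
            result.update(hash_aggregate(part, budget, num_partitions, seed + 1))
    return result
```

The recursion mirrors the classic hybrid-hash pattern: each spilled run is small enough (with high probability) to be aggregated within the same budget.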
|
Longitudinal weak gauge boson scattering is well known as a powerful
method to probe the underlying mechanism of the electroweak symmetry breaking
sector of the Standard Model. We point out that longitudinal weak gauge boson
scattering is also sensitive to the gauge sector when the non-Abelian trilinear
and quartic couplings of the Standard Model Z boson are modified due to the
general mixings with another Z' boson in the hidden sector and possibly with
the photon as well. In particular, these mixings can lead to a partially strong
scattering effect in the channels of W^\pm_L W^\pm_L \to W^\pm_L W^\pm_L and
W^\pm_L W^\mp_L \to W^\pm_L W^\mp_L which can be probed at the Large Hadron
Collider. We study this effect in a simple U(1) extension of the Standard Model
recently suggested in the literature that includes both the symmetry breaking
Higgs mechanism as well as the gauge invariant Stueckelberg mass terms for the
two Abelian groups. Other types of Z' models are also briefly discussed.
|
We consider all 1/2 BPS excitations of $AdS \times S$ configurations in both
type IIB string theory and M-theory. In the dual field theories these
excitations are described by free fermions. Configurations which are dual to
arbitrary droplets of free fermions in phase space correspond to smooth
geometries with no horizons. In fact, the ten dimensional geometry contains a
special two dimensional plane which can be identified with the phase space of
the free fermion system. The topology of the resulting geometries depends only
on the topology of the collection of droplets on this plane. These solutions
also give a very explicit realization of the geometric transitions between
branes and fluxes. We also describe all 1/2 BPS excitations of plane wave
geometries. The problem of finding the explicit geometries is reduced to
solving a Laplace (or Toda) equation with simple boundary conditions. We
present a large class of explicit solutions. In addition, we are led to a
rather general class of $AdS_5$ compactifications of M-theory preserving ${\cal
N} =2$ superconformal symmetry. We also find smooth geometries that correspond
to various vacua of the maximally supersymmetric mass-deformed M2 brane theory.
Finally, we present a smooth 1/2 BPS solution of seven dimensional gauged
supergravity corresponding to a condensate of one of the charged scalars.
|
A general discussion is made concerning the ways in which one can get
signatures about a possible liquid-gas phase transition in nuclear matter.
Formulas for the microcanonical temperature, the heat capacity, and the
second-order derivative of the entropy with respect to energy have been deduced
in a general case. These formulas are {\em exact}, simply applicable, and do
not depend on any model assumption. Therefore, they are suitable for
application to experimental data.
The formulas are tested in various situations. It is shown that when the
freeze-out constraint is of the fluctuating-volume type, the deduced formulas
(for the heat capacity and the second-order derivative of the entropy with
respect to energy) signal the spinodal region through specific features.
Finally, the same microcanonical
formulas are deduced for the case when only an incomplete number of fragments
per event is available. These formulas could overcome the freeze-out backtracking
deficiencies.
|
The aim of this paper is to investigate the onset of penetrative convection
in a Darcy-Brinkmann porous medium under the hypothesis of local thermal
non-equilibrium. For the problem at stake, the strong form of the principle of
exchange of stabilities has been proved, i.e. convective motions can occur only
through a secondary stationary motion. We perform linear and nonlinear
stability analyses of the basic state motion, with particular regard to the
behaviour of the stability thresholds with respect to the relevant physical
parameters characterizing the model. The Chebyshev-$\tau$ method and the
shooting method are employed and implemented to solve the differential
eigenvalue problems arising from linear and nonlinear analyses to determine
critical Rayleigh numbers. Numerical simulations prove the stabilising effect
of upper bounding plane temperature, Darcy's number and the interaction
coefficient characterising the local thermal non-equilibrium regime.
|
We study properties of dilute polymer solutions which are known to depend
strongly on polymer elongation. The probability density function (PDF) of
polymer end-to-end extensions $R$ in turbulent flows is examined. We
demonstrate that if the value of the Lyapunov exponent $\lambda$ is smaller
than the inverse molecular relaxation time $1/\tau$ then the PDF has a strong
peak at the equilibrium size $R_0$ and a power tail at $R\gg R_0$. This
confirms and extends the results of \cite{Lumley72}. There is no essential
influence of polymers on the flow in the regime $\lambda\tau<1$. At
$\lambda>1/\tau$ the majority of molecules is stretched to the linear size
$R_{\rm op}\gg R_0$. The value of $R_{\rm op}$ can be much smaller than the
maximal length of the molecules because of back reaction of the polymers on the
flow, which suppresses velocity gradients thus preventing the polymers from
maximal possible stretching.
|
Developing and implementing AI-based solutions help state and federal
government agencies, research institutions, and commercial companies enhance
decision-making processes, automate chain operations, and reduce the
consumption of natural and human resources. At the same time, most AI
approaches used in practice can only be represented as "black boxes" and suffer
from the lack of transparency. This can eventually lead to unexpected outcomes
and undermine trust in such systems. Therefore, it is crucial not only to
develop effective and robust AI systems, but also to make sure their internal
processes are explainable and fair. Our goal in this chapter is to introduce
the topic of designing assurance methods for AI systems with high-impact
decisions using the example of the technology sector of the US economy. We
explain how these fields would benefit from revealing cause-effect
relationships between key metrics in the dataset by presenting a causal
experiment on a technology economics dataset. Several causal inference approaches
and AI assurance techniques are reviewed and the transformation of the data
into a graph-structured dataset is demonstrated.
|
We introduce the poset NC^d_n of all noncrossing partitions such that each
block has cardinality 1 modulo d and each block of the dual partition also has
cardinality 1 modulo d. We obtain the cardinality, the M\"obius function, the
rank numbers, the antipode, and the number of maximal chains. Generalizing work
of Stanley, we give an edge labeling such that the labels of the maximal chains
are exactly the d-parking functions. We also introduce two classes of labeled
trees: the first class is in bijective correspondence with the noncrossing
partitions in NC^d_n and the second class is in bijective correspondence with
the maximal chains.
|
The stopping power of antiprotons in atomic and molecular hydrogen as well as
helium was calculated in an impact-energy range from 1 keV to 6.4 MeV. In the
case of H2 and He the targets were described with a single-active electron
model centered on the target. The collision process was treated with the
close-coupling formulation of the impact-parameter method. An extensive
comparison of the present results with theoretical and experimental literature
data was performed in order to evaluate which of the partly disagreeing
theoretical and experimental data are most reliable. Furthermore, the sizes of
the corrections to the first-order stopping number, the average energy
transferred to the target electrons, and the relative importance of the
excitation and ionization processes for the energy loss of the projectile were
determined. Finally, the stopping powers of the H, H2, and He targets were
directly compared, revealing specific similarities and differences among the
three targets.
|
We investigate Weyl channels that are covariant with respect to the maximum
commutative group of unitary operators. This class includes the quantum
depolarizing channel and the "two-Pauli" channel as well. Then, we show that
our estimation of the output entropy for a tensor product of the phase damping
channel and the identity channel, based upon the decreasing property of the
relative entropy, allows us to prove the additivity conjecture for the minimal
output entropy of the quantum depolarizing channel in any prime dimension and
for the "two-Pauli" channel in the qubit case.
|
A "natural" model for the QCD invariant (running) coupling, free of the IR
singularity, is proposed. It is based upon the hypothesis that a finite gluon
mass $m_{gl}$ exists and, technically, uses an accurate treatment of the
threshold behavior of Feynman diagram contributions. The model correlates with
the unitarity condition.
A quantitative estimate, performed in the one-loop approximation, yields a
reasonable lower bound for this mass $m_{gl} > 150$ MeV and a smooth IR
freezing at the level $\alpha_s(Q^2)\lesssim 1$.
|
Reliable pedestrian crash avoidance mitigation (PCAM) systems are crucial
components of safe autonomous vehicles (AVs). The nature of the
vehicle-pedestrian interaction where decisions of one agent directly affect the
other agent's optimal behavior, and vice versa, is a challenging yet often
neglected aspect of such systems. We address this issue by modeling a Markov
decision process (MDP) for a simulated AV-pedestrian interaction at an unmarked
crosswalk. The AV's PCAM decision policy is learned through deep reinforcement
learning (DRL). Since modeling pedestrians realistically is challenging, we
compare two levels of intelligent pedestrian behavior. While the baseline model
follows a predefined strategy, our advanced pedestrian model is defined as a
second DRL agent. This model captures continuous learning and the uncertainty
inherent in human behavior, making the AV-pedestrian interaction a deep
multi-agent reinforcement learning (DMARL) problem. We benchmark the developed
PCAM systems according to the collision rate and the resulting traffic flow
efficiency with a focus on the influence of observation uncertainty on the
decision-making of the agents. The results show that the AV is able to
completely mitigate collisions under the majority of the investigated
conditions and that the DRL pedestrian model learns an intelligent crossing
behavior.
|
Deep learning has brought great progress for the sequential recommendation
(SR) tasks. With advanced network architectures, sequential recommender models
can be stacked with many hidden layers, e.g., up to 100 layers on real-world
recommendation datasets. Training such a deep network is difficult because it
can be computationally very expensive and time-consuming, especially in
situations where there are tens of billions of user-item interactions. To deal
with such a challenge, we present StackRec, a simple, yet very effective and
efficient training framework for deep SR models by iterative layer stacking.
Specifically, we first offer an important insight that hidden layers/blocks in
a well-trained deep SR model have very similar distributions. Enlightened by
this, we propose the stacking operation on the pre-trained layers/blocks to
transfer knowledge from a shallower model to a deep model, then we perform
iterative stacking so as to yield a much deeper but easier-to-train SR model.
We validate the performance of StackRec by instantiating it with four
state-of-the-art SR models in three practical scenarios with real-world
datasets. Extensive experiments show that StackRec achieves not only comparable
performance, but also substantial acceleration in training time, compared to SR
models that are trained from scratch. Codes are available at
https://github.com/wangjiachun0426/StackRec.
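The stacking operation itself is easy to sketch. The toy residual blocks below are stand-ins for trained neural layers (the real method copies network blocks and then fine-tunes the deeper model):

```python
import copy

class Block:
    """Toy residual block; `scale` stands in for learned weights."""
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, x):
        return x + 0.1 * self.scale * x     # residual-style transformation

def stack(blocks, times=2):
    """Adjacent-block stacking: duplicate each trained block in place,
    yielding a model `times` as deep with identical learned parameters."""
    deeper = []
    for b in blocks:
        deeper.extend(copy.deepcopy(b) for _ in range(times))
    return deeper

def forward(blocks, x):
    for b in blocks:
        x = b(x)
    return x
```

Because neighboring blocks in a well-trained model have similar distributions, the duplicated stack is a warm start that trains much faster than random initialization.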
|
The Tokunaga condition is an algebraic rule that provides a detailed
description of the branching structure in a self-similar tree. Despite a solid
empirical validation and practical convenience, the Tokunaga condition lacks a
theoretical justification. Such a justification is suggested in this work. We
define a geometric branching process $\mathcal{G}(s)$ that generates
self-similar rooted trees. The main result establishes the equivalence between
the invariance of $\mathcal{G}(s)$ with respect to a time shift and a
one-parametric version of the Tokunaga condition. In the parameter region where
the process satisfies the Tokunaga condition (and hence is time invariant),
$\mathcal{G}(s)$ enjoys many of the symmetries observed in a critical binary
Galton-Watson branching process and reproduces the latter for a particular
parameter value.
|
We construct an analytical model, derived from nuclear reaction theory and
having a simple functional form, and demonstrate its quantitative agreement
with measured cross sections for neutron-induced reactions. The neutron-nucleus
total, reaction and scattering cross sections, for energies ranging from 5 to
700 MeV and for several nuclei spanning a wide mass range are estimated.
Systematics of neutron scattering cross sections on various materials for
neutron energies up to several hundred MeV are important for ADSS applications.
The reaction cross sections of neutrons are useful for determining the neutron
induced fission yields in actinides and pre-actinides. The present model based
on nuclear reaction theory provides good estimates of the total cross sections
for neutron-induced reactions.
|
Coulomb interaction has a striking effect on electronic propagation in one
dimensional conductors. The interaction of an elementary excitation with
neighboring conductors favors the emergence of collective modes which
eventually leads to the destruction of the Landau quasiparticle. In this
process, an injected electron tends to fractionalize into separated pulses
carrying a fraction of the electron charge. Here we use two-particle
interferences in the electronic analog of the Hong-Ou-Mandel experiment in a
quantum Hall conductor at filling factor 2 to probe the fate of a single
electron emitted in the outer edge channel and interacting with the inner one.
By studying both channels, we analyze the propagation of the single electron
and the generation of interaction induced collective excitations in the inner
channel. This complementary information reveals the fractionalization process
in time domain and establish its relevance for the destruction of the
quasiparticle which degrades into the collective modes.
|
We present BVR photometric colors of six Uranian and two Neptunian irregular
satellites, collected using the Magellan Observatory (Las Campanas, Chile) and
the Keck Observatory (Mauna Kea, Hawaii). The colors range from neutral to
light red, and like the Jovian and the Saturnian irregulars (Grav et al. 2003)
there is an apparent lack of the extremely red objects found among the Centaurs
and Kuiper belt objects.
The Uranian irregulars can be divided into three possible dynamical families,
but the colors collected show that two of these dynamical families, the Caliban
and Sycorax-clusters, have heterogeneous colors. Of the third possible family,
the 168-degree cluster containing two objects with similar average inclinations
but quite different average semi-major axes, only one object (U XXI Trinculo)
was observed. The heterogeneous colors and the large dispersion of the average
orbital elements lead us to doubt that they are collisional families. We favor
single captures as a more likely scenario. The two Neptunians observed (N II
Nereid and S/2002 N1) both have very similar neutral, sun-like colors. Together
with the high collisional probability between these two objects over the age of
the solar system (Nesvorny et al. 2003, Holman et al. 2004), this suggests that
S/2002 N1 is a fragment of Nereid, broken loose during a collision or cratering
event with an undetermined impactor.
|
The IS discourse on the potential of distributed ledger technology (DLT) in
financial services has grown at a tremendous pace in recent years. Yet,
little has been said about the related implications for the costly and highly
regulated process of compliance reporting. Working with a group of
representatives from industry and regulatory authorities, we employ the design
science research methodology (DSR) in the design, development, and evaluation
of an artefact, enabling the automated collection and enrichment of
transactional data. Our findings indicate that DLT may facilitate the
automation of key compliance processes through the implementation of a
"pull-model", in which regulators can access compliance data in near real-time
to stage aggregate exposures at the supranational level. Generalizing our
preliminary results, we present four propositions on the implications of DLT in
compliance. The findings contribute new practical insights on the topic of
compliance to the growing IS discourse on DLT.
|
Quantum Sensors offer great potential for providing enhanced sensitivity in
high energy physics experiments. In this report we provide a summary of key
quantum sensor technologies - interferometers, optomechanics, and clocks; spin
dependent sensors; superconducting sensors; and quantum calorimeters -
highlighting existing experiments along with areas for development. We also
provide a set of key messages intended to further advance the state of quantum
sensors used for high energy physics specific applications.
|
Recent investigations of final state interactions of W+W- events in e+e-
collisions up to center of mass energies sqrt{S}=189GeV at LEP2 are reviewed.
The data were used to look for color reconnection and Bose-Einstein
correlations between the two color singlets of fully hadronic W events.
|
This paper presents a set of tests and an algorithm for agnostic, data-driven
selection among macroeconomic DSGE models inspired by structure learning
methods for DAGs. As the log-linear state-space solution to any DSGE model is
also a DAG, it is possible to use associated concepts to identify a unique
ground-truth state-space model which is compatible with an underlying DGP,
based on the conditional independence relationships which are present in that
DGP. In order to operationalise search for this ground-truth model, the
algorithm tests feasible analogues of these conditional independence criteria
against the set of combinatorially possible state-space models over observed
variables. This process is consistent in large samples. In small samples the
result may not be unique, so conditional independence tests can be combined
with likelihood maximisation in order to select a single optimal model. The
efficacy of this algorithm is demonstrated for simulated data, and results for
real data are also provided and discussed.
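One building block such a procedure needs is a feasible conditional independence test on observed variables. A standard choice (our assumption, not necessarily the paper's exact test) is the Fisher-z test on partial correlations:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the columns of z."""
    z = np.column_stack([np.ones(len(x)), z])   # include an intercept
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

def fisher_z_independent(x, y, z, crit=1.96):
    """Accept X independent of Y given Z if the Fisher-z statistic is small."""
    r = partial_corr(x, y, z)
    n = len(x)
    k = z.shape[1] if z.ndim > 1 else 1
    stat = np.sqrt(n - k - 3) * 0.5 * np.log((1 + r) / (1 - r))
    return abs(stat) < crit
```

Running such tests over the conditional independence relationships implied by each candidate state-space model is what operationalizes the search for the ground-truth model.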
|
We argue that at low-energies, typical of the resonance region, the
contribution from direct-channel exotic trajectories replaces the Pomeron
exchange, typical of high energies. A dual model realizing this idea is
suggested. While at high energies it matches the Regge pole behavior, dominated
by a Pomeron exchange, at low energies it produces a smooth, structureless
behavior of the total cross section determined by a direct-channel nonlinear
exotic trajectory, dual to the Pomeron exchange.
|
We propose a modification of the standard linear implicit Euler integrator
for the weak approximation of parabolic semilinear stochastic PDEs driven by
additive space-time white noise. The new method can easily be combined with a
finite difference method for the spatial discretization. The proposed method is
shown to have improved qualitative properties compared with the standard
method. First, for any time-step size, the spatial regularity of the solution
is preserved, at all times. Second, the proposed method preserves the Gaussian
invariant distribution of the infinite dimensional Ornstein--Uhlenbeck process
obtained when the nonlinearity is absent, for any time-step size. The weak
order of convergence of the proposed method is shown to be equal to $1/2$ in a
general setting, like for the standard Euler scheme. A stronger weak
approximation result is obtained when considering the approximation of a Gibbs
invariant distribution, when the nonlinearity is a gradient: one obtains an
approximation in total variation distance of order $1/2$, which does not hold
for the standard method. This is the first result of this type in the
literature. A key point in the analysis is the interpretation of the proposed
modified Euler scheme as the accelerated exponential Euler scheme applied to a
modified stochastic evolution equation. Finally, it is shown that the proposed
method can be applied to design an asymptotic preserving scheme for a class of
slow-fast multiscale systems, and to construct a Markov Chain Monte Carlo
method which is well-defined in infinite dimension. We also revisit the
analysis of the standard and the accelerated exponential Euler scheme, and we
prove new results with approximation in the total variation distance, which
serve to illustrate the behavior of the proposed modified Euler scheme.
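The flavor of the invariant-distribution result can be seen already on the scalar Ornstein-Uhlenbeck process dX = -lam*X dt + sigma*dW, a simplified illustration of ours rather than the paper's infinite-dimensional scheme: the linear implicit Euler method underestimates the stationary variance sigma^2/(2*lam) at every step size, while rescaling the noise variance by 1 + lam*dt/2 restores it exactly.

```python
import math

def implicit_euler_var(lam, sigma, dt):
    """Stationary variance of X_{n+1} = (X_n + sigma*dW_n) / (1 + lam*dt),
    solved from v = (v + sigma**2 * dt) / (1 + lam*dt)**2."""
    return sigma**2 * dt / ((1 + lam * dt)**2 - 1)

def modified_sigma(lam, sigma, dt):
    """Noise amplitude rescaled so the implicit Euler chain has the exact
    Ornstein-Uhlenbeck stationary variance sigma**2 / (2*lam)."""
    return sigma * math.sqrt(1 + lam * dt / 2)

lam, sigma, dt = 0.7, 1.3, 0.5          # arbitrary parameters and step size
exact = sigma**2 / (2 * lam)
v_std = implicit_euler_var(lam, sigma, dt)
v_mod = implicit_euler_var(lam, modified_sigma(lam, sigma, dt), dt)
```

Algebraically, sigma^2 (1 + lam*dt/2) dt / (2 lam dt + lam^2 dt^2) = sigma^2/(2 lam), so the modified chain matches the exact invariant variance for any dt, which is the qualitative property the modified scheme achieves in infinite dimension.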
|
We prove a long-standing conjecture of Chudnovsky for very general and
generic points in $\mathbb{P}_k^N$, where $k$ is an algebraically closed field
of characteristic zero, and for any finite set of points lying on a quadric,
without any assumptions on $k$. We also prove that for any homogeneous ideal
$I$ in the homogeneous coordinate ring $R=k[x_0, \ldots, x_N]$, Chudnovsky's
conjecture holds for large enough symbolic powers of $I$.
|
Intelligent processing techniques are increasingly attractive to researchers
due to their ability to deal with key problems in Vehicular Ad hoc Networks
(VANETs). However, several problems in applying intelligent processing
technologies in VANETs remain open. In this paper, the existing applications
are comprehensively reviewed and discussed, and classified into different
categories. Their strategies, advantages/disadvantages, and performances are
elaborated. By generalizing the different tactics used in various applications
for different VANET scenarios and evaluating their performances, several
promising directions for future research have been suggested.
|
High Bandwidth Memory (HBM) provides massive aggregated memory bandwidth by
exposing multiple memory channels to the processing units. To achieve high
performance, an accelerator built on top of an FPGA configured with HBM (i.e.,
FPGA-HBM platform) needs to scale its performance according to the available
memory channels. In this paper, we propose an accelerator for BFS
(Breadth-First Search) algorithm, named ScalaBFS, that builds multiple
processing elements to sufficiently exploit the high bandwidth of HBM to
improve efficiency. We implement the prototype system of ScalaBFS and conduct
BFS on both real-world and synthetic scale-free graphs on a Xilinx Alveo U280
FPGA card. The experimental results show that ScalaBFS scales its
performance almost linearly according to the available memory pseudo channels
(PCs) from the HBM2 subsystem of U280. By fully using the 32 PCs and building
64 processing elements (PEs) on U280, ScalaBFS achieves a performance up to
19.7 GTEPS (Giga Traversed Edges Per Second). When conducting BFS in sparse
real-world graphs, ScalaBFS achieves equivalent GTEPS to Gunrock running on the
state-of-the-art Nvidia V100 GPU that features 64-PC HBM2 (twice the memory bandwidth
than U280).
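The underlying algorithm is ordinary level-synchronous BFS; the software sketch below (our analogue, with the intended hardware mapping noted in comments) shows what each iteration must do, with the frontier split across PEs that stream adjacency data through separate HBM pseudo channels:

```python
def bfs_levels(adj, source):
    """adj: dict vertex -> list of neighbors; returns dict vertex -> level."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:                        # one level-synchronous iteration
        depth += 1
        next_frontier = []
        for u in frontier:                 # in hardware: sliced across PEs,
            for v in adj.get(u, ()):       # each slice read via its own PC
                if v not in level:         # visited set: on-chip bitmap
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level
```

Performance then scales with how many frontier slices (and hence memory channels) can be serviced concurrently, which is the property the paper measures.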
|
We consider a semi-scale invariant version of the Poisson cylinder model
which in a natural way induces a random fractal set. We show that this random
fractal exhibits an existence phase transition for any dimension $d\geq 2,$ and
a connectivity phase transition whenever $d\geq 4.$ We determine the exact
value of the critical point of the existence phase transition, and we show that
the fractal set is almost surely empty at this critical point.
A key ingredient when analysing the connectivity phase transition is to
consider a restriction of the full process onto a subspace. We show that this
restriction results in a fractal ellipsoid model which we describe in detail,
as it is key to obtaining our main results.
In addition we also determine the almost sure Hausdorff dimension of the
fractal set.
|
We present the distributions of elemental abundance ratios using
chemodynamical simulations which include four different neutron capture
processes: magneto-rotational supernovae, neutron star mergers, neutrino driven
winds, and electron capture supernovae. We examine both simple isolated dwarf
disc galaxies and cosmological zoom-in simulations of Milky Way-type galaxies,
and compare the [Eu/Fe] and [Eu/{\alpha}] evolution with recent observations,
including the HERMES-GALAH survey. We find that neither electron-capture
supernovae nor neutrino-driven winds are able to adequately produce heavy
neutron-capture elements such as Eu in quantities to match observations. Both
neutron-star mergers and magneto-rotational supernovae are able to produce
these elements in sufficient quantities. Additionally, we find that the scatter
in [Eu/Fe] and [Eu/{\alpha}] at low metallicity ([Fe/H] < -1) and the [Eu/(Fe,
{\alpha})] against [Fe/H] gradient of the data at high metallicity ([Fe/H] >
-1) are both potential indicators of the dominant r-process site. Using the
distribution in [Eu/(Fe, {\alpha})] - [Fe/H], we predict that neutron star
mergers alone are unable to explain the observed Eu abundances, but may be able
to together with magneto-rotational supernovae.
|
We prove the main Conjecture 4 of our paper arXiv:1403.6061v5, which leads to
the classification of codimension-one degenerations of Kahlerian K3 surfaces
with finite symplectic automorphism groups.
|
We present a set of Bell inequalities for multiqubit quantum systems. These
Bell inequalities are shown to be able to detect multiqubit entanglement better
than previous Bell inequalities such as Werner-Wolf-Zukowski- Brukner ones.
Computable formulas are presented for calculating the maximal violations of
these Bell inequalities for any multiqubit states.
|
A new approach for continuous and non-invasive monitoring of the glucose
concentration in human epidermis has been suggested recently. This method is
based on photoacoustic (PA) analysis of human interstitial fluid. The
measurement can be performed in vitro and in vivo and, therefore, may form the
basis for non-invasive monitoring of the blood sugar level in diabetes
patients. It requires a windowless PA cell with an additional opening that is
pressed onto the human skin. Since signals are weak, advantage is taken of
acoustic resonances of the cell. Recently, a numerical approach based on the
Finite Element (FE) Method has been successfully used for the calculation of
the frequency response function of closed PA cells. This method has now been
adapted to obtain the frequency response of the open cell. Despite the fact
that loss due to sound radiation at the opening is not included, fairly good
agreement with measurement is achieved.
|
Open and globular star clusters have served as benchmarks for the study of
stellar evolution due to their supposed nature as simple stellar populations of
the same age and metallicity. After a brief review of some of the pioneering
work that established the importance of imaging stars in these systems, we
focus on several recent studies that have challenged our fundamental picture of
star clusters. These new studies indicate that star clusters can very well
harbour multiple stellar populations, possibly formed through self-enrichment
processes from the first-generation stars that evolved through
post-main-sequence evolutionary phases. Correctly interpreting stellar
evolution in such systems is tied to our understanding of both
chemical-enrichment mechanisms, including stellar mass loss along the giant
branches, and the dynamical state of the cluster. We illustrate recent imaging,
spectroscopic and theoretical studies that have begun to shed new light on the
evolutionary processes that occur within star clusters.
|
In his previous paper (Math. Res. Letters 7(2000), 123--132) the author
proved that in characteristic zero the jacobian $J(C)$ of a hyperelliptic curve
$C: y^2=f(x)$ has only trivial endomorphisms over an algebraic closure of the
ground field $K$ if the Galois group $Gal(f)$ of the irreducible polynomial
$f(x) \in K[x]$ is either the symmetric group $S_n$ or the alternating group
$A_n$. Here $n>4$ is the degree of $f$.
In the present paper we extend this result to the case of certain ``smaller''
Galois groups. In particular, we treat the case when $n=11$ or 12 and $Gal(f)$
is the Mathieu group $M_{11}$ or $M_{12}$ respectively. The infinite series
$n=2^r+1, Gal(f)=L_2(2^r)$ and $n=2^{4r+2}+1, Gal(f)=Sz(2^{2r+1})$ are also
treated.
|
Bacteria can swim upstream due to hydrodynamic interactions with the fluid
flow in a narrow tube, and pose a clinical threat of urinary tract infection to
patients implanted with catheters. Coatings and structured surfaces have been
proposed as a way to suppress bacterial contamination in catheters. However,
there is no surface structuring or coating approach to date that thoroughly
addresses the contamination problem. Here, based on the physical mechanism of
upstream swimming, we propose a novel geometric design, optimized by an AI
model predicting in-flow bacterial dynamics. The AI method, based on Fourier
neural operator, offers significant speedups over traditional simulation
methods. Using Escherichia coli, we demonstrate the anti-infection mechanism in
quasi-2D microfluidic experiments and evaluate the effectiveness of the design
in 3D-printed prototype catheters under clinical flow rates. Our catheter design
shows 1-2 orders of magnitude improved suppression of bacterial contamination
at the upstream end of the catheter, potentially prolonging the in-dwelling
time for catheter use and reducing the overall risk of catheter-associated
urinary tract infections.
|
We report the discovery of WASP-5b, a Jupiter-mass planet orbiting a 12th-mag
G-type star in the Southern hemisphere. The 1.6-d orbital period places WASP-5b
in the class of very-hot Jupiters and leads to a predicted equilibrium
temperature of 1750 K. WASP-5b is the densest of any known Jovian-mass planet,
being a factor seven denser than TrES-4, which is subject to similar stellar
insolation, and a factor three denser than WASP-4b, which has a similar orbital
period. We present transit photometry and radial-velocity measurements of
WASP-5 (= USNO-B1 0487-0799749), from which we derive the mass, radius and
density of the planet: M_P = 1.58 +0.13 -0.08 M_J, R_P = 1.090 +0.094 -0.058
R_J and Rho_P = 1.22 +0.19 -0.24 Rho_J. The orbital period is P = 1.6284296
+0.0000048 -0.0000037 d and the mid-transit epoch is T_C (HJD) = 2454375.62466
+0.00026 -0.00025.
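As a quick consistency check (our illustration, not part of the paper), the quoted density follows directly from the mass and radius in Jovian units, since rho_P/rho_J = (M_P/M_J) / (R_P/R_J)^3:

```python
# Mean density of WASP-5b relative to Jupiter, from the central values
# quoted in the abstract: M_P = 1.58 M_J, R_P = 1.090 R_J.
M_P = 1.58   # planet mass in Jupiter masses
R_P = 1.090  # planet radius in Jupiter radii

rho_P = M_P / R_P**3  # density in Jovian units
print(f"rho_P = {rho_P:.2f} rho_J")  # consistent with the quoted 1.22 rho_J
```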
|
We discuss a comparative analysis of unimproved and nonperturbatively
improved quenched hadron spectroscopy, on a set of 104 gauge configurations, at
beta=6.2. We also present here our results for meson decay constants, including
the constants f_D and f_Ds in the charm-quark region.
|
A central issue in applying auction theory in practice is the problem of
dealing with budget-constrained agents. A desirable goal in practice is to
design incentive compatible, individually rational, and Pareto optimal auctions
while respecting the budget constraints. Achieving this goal is particularly
challenging in the presence of nontrivial combinatorial constraints over the
set of feasible allocations.
Toward this goal and motivated by AdWords auctions, we present an auction for
{\em polymatroidal} environments satisfying the above properties. Our auction
employs a novel clinching technique with a clean geometric description and only
needs an oracle access to the submodular function defining the polymatroid. As
a result, this auction not only simplifies and generalizes all previous
results, it applies to several new applications including AdWords Auctions,
bandwidth markets, and video on demand. In particular, our characterization of
the AdWords auction as polymatroidal constraints might be of independent
interest. This allows us to design the first mechanism for Ad Auctions taking
into account simultaneously budgets, multiple keywords and multiple slots.
We show that it is impossible to extend this result to generic polyhedral
constraints. This also implies an impossibility result for multi-unit auctions
with decreasing marginal utilities in the presence of budget constraints.
|
The (matricial) solution set of a Linear Matrix Inequality (LMI) is a convex
basic non-commutative semi-algebraic set. The main theorem of this paper is a
converse, a result which has implications for both semidefinite programming and
systems engineering. For p(x) a non-commutative polynomial in free variables x=
(x1, ... xg) we can substitute a tuple of symmetric matrices X= (X1, ... Xg)
for x and obtain a matrix p(X). Assume p is symmetric with p(0) invertible, let
Ip denote the set {X: p(X) is an invertible matrix}, and let Dp denote the
component of Ip containing 0. THEOREM: If the set Dp is uniformly bounded
independent of the size of the matrix tuples, then Dp has an LMI representation
if and only if it is convex. Linear engineering systems problems are called
"dimension free" if they can be stated purely in terms of a signal flow diagram
with L2 performance measures, e.g., H-infinity control. Conjecture: A dimension
free problem can be made convex if and only if it can be made into an LMI. The
theorem here settles the core case affirmatively.
|
Inductive transfer learning has greatly impacted computer vision, but
existing approaches in NLP still require task-specific modifications and
training from scratch. We propose Universal Language Model Fine-tuning
(ULMFiT), an effective transfer learning method that can be applied to any task
in NLP, and introduce techniques that are key for fine-tuning a language model.
Our method significantly outperforms the state-of-the-art on six text
classification tasks, reducing the error by 18-24% on the majority of datasets.
Furthermore, with only 100 labeled examples, it matches the performance of
training from scratch on 100x more data. We open-source our pretrained models
and code.
|
Building on the seminal work of Arkani-Hamed, He, Salvatori and Thomas
(AHST), we explore the positive geometry encoding one-loop scattering
amplitudes for quartic scalar interactions. We define a new class of
combinatorial polytopes that we call pseudo-accordiohedra, whose poset
structures are associated to singularities of the one-loop integrand for scalar
quartic interactions. Pseudo-accordiohedra parametrize a family of projective
forms on the abstract kinematic space defined by AHST and restriction of these
forms to the type-D associahedra can be associated to one-loop integrands for
quartic interactions. The restriction (of the projective form) can also be
thought of as a canonical top form on certain geometric realisations of
pseudo-accordiohedra. Our work explores a large class of geometric realisations
of the type-D associahedra which include all the AHST realisations. These
realisations are based on the pseudo-triangulation model for type-D cluster
algebras discovered by Ceballos and Pilaud.
|
The Any Light Particle Search II (ALPS II) is an experiment currently being
built at DESY in Hamburg, Germany, that will use a light-shining-through-a-wall
(LSW) approach to search for axion-like particles. ALPS II represents a
significant step forward for these types of experiments as it will use 24
superconducting dipole magnets, along with dual, high-finesse, 122 m long
optical cavities. This paper gives the first comprehensive recipe for the
realization of the idea, proposed over 30 years ago, to use optical cavities
before and after the wall to increase the power of the regenerated photon
signal. The experiment is designed to achieve a sensitivity to the coupling
between axion-like particles and photons down to g=2e-11 1/GeV for masses below
0.1 meV, more than three orders of magnitude beyond the sensitivity of previous
laboratory experiments. The layout and main components that define ALPS II are
discussed along with plans for reaching design sensitivity. An accompanying
paper (Hallal et al. [1]) offers a more in-depth description of the heterodyne
detection scheme, the first of two independent detection systems that will be
implemented in ALPS II.
|
Considerable information about the early-stage dynamics of heavy-ion
collisions is encoded in the rapidity dependence of measurements. To leverage
the large amount of experimental data, we perform a systematic analysis using
three-dimensional hydrodynamic simulations of multiple collision systems --
large and small, symmetric and asymmetric. Specifically, we perform fully 3D
multi-stage hydrodynamic simulations initialized by a parameterized model for
rapidity-dependent energy deposition, which we calibrate on the hadron
multiplicity and anisotropic flow coefficients. We utilize Bayesian inference
to constrain properties of the early- and late-time dynamics of the system,
and highlight the impact of enforcing global energy conservation in our 3D
model.
|
We propose new textures for the fermion Yukawa matrices which are
generalizations of the so-called Stech ansatz. We discuss how these textures
can be realized in supersymmetric grand unified models with horizontal symmetry
SU(3)_H among the fermion generations. In this framework the mass and mixing
hierarchy of fermions (including neutrinos) can emerge in a natural way. We
emphasize the central role played by the SU(3)_H adjoint Higgs field which
reduces SU(3)_H to U(2)_H at the GUT scale. A complete SO(10)\times SU(3)_H
model is presented in which the desired Yukawa textures can be obtained by
symmetry reasons. The phenomenological implications of these textures are
thoroughly investigated. Among various realistic possibilities for the Clebsch
factors between the quark and lepton entries, we find three different solutions
which provide excellent fits of the quark masses and CKM mixing angles.
Interestingly, all these solutions predict the correct amount of CP violation
via the CKM mechanism, and, in addition, lead to an appealing pattern of the
neutrino masses and mixing angles. In particular, they all predict nearly
maximal 23 mixing and small 12 mixing in the lepton sector, respectively in the
range needed for the explanation of the atmospheric and solar neutrino anomaly.
|
Social event planning has received a great deal of attention in recent years
where various entities, such as event planners and marketing companies,
organizations, venues, or users in Event-based Social Networks, organize
numerous social events (e.g., festivals, conferences, promotion parties).
Recent studies show that "attendance" is the most common metric used to capture
the success of social events, since the number of attendees has great impact on
the event's expected gains (e.g., revenue, artist/brand publicity). In this
work, we study the Social Event Scheduling (SES) problem which aims at
identifying and assigning social events to appropriate time slots, so that the
number of event attendees is maximized. We show that, even in highly
restricted instances, the SES problem is NP-hard to approximate within any
factor. To solve the SES problem, we design three efficient and scalable
algorithms. These algorithms exploit several novel schemes that we design. We
conduct extensive experiments using several real and synthetic datasets, and
demonstrate that the proposed algorithms perform on average half the
computations compared to the existing solution and, in several cases, are 3-5
times faster.
|
Dmitriy Zhuk has proved that there exist relational structures which admit
near-unanimity polymorphisms, but the minimum arity of such a polymorphism is
large and almost matches the known upper bounds. We present a simplified and
explicit construction of such structures and a detailed, self-contained proof.
|
The Schwarzschild interior solution, or `Schwarzschild star', which describes
a spherically symmetric homogeneous mass with constant energy density, shows a
divergence in pressure when the radius of the star reaches the
Schwarzschild-Buchdahl bound. Recently Mazur and Mottola showed that this
divergence is integrable through the Komar formula, inducing non-isotropic
transverse stresses on a surface of some radius $R_{0}$. When this radius
approaches the Schwarzschild radius $R_{s}=2M$, the interior solution becomes
one of negative pressure evoking a de Sitter spacetime. This gravitational
condensate star, or gravastar, is an alternative solution to the idea of a
black hole as the ultimate state of gravitational collapse. Using Hartle's
model to calculate equilibrium configurations of slowly rotating masses, we
report results of surface and integral properties for a Schwarzschild star in
the little-studied region $R_{s}<R<(9/8)R_{s}$. We find that in the
gravastar limit, the angular velocity of the fluid relative to the local
inertial frame tends to zero, indicating rigid rotation. Remarkably, the
normalized moment of inertia $I/MR^2$ and the mass quadrupole moment $Q$
approach the corresponding values for the Kerr metric to second order in
$\Omega$. These results provide a solution to the problem of the source of a
slowly rotating Kerr black hole.
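The pressure divergence at the Buchdahl bound can be made concrete with the textbook constant-density interior solution, whose central pressure is $p_c/\rho = (1-\sqrt{1-R_s/R})/(3\sqrt{1-R_s/R}-1)$; a minimal numerical sketch (our illustration, separate from the paper's slow-rotation analysis):

```python
import math

def central_pressure_over_density(R_over_Rs):
    """Central pressure p_c/rho for the constant-density Schwarzschild
    interior solution of a star with radius R = R_over_Rs * R_s."""
    f = math.sqrt(1.0 - 1.0 / R_over_Rs)   # sqrt(1 - R_s/R)
    return (1.0 - f) / (3.0 * f - 1.0)

# The denominator vanishes when sqrt(1 - R_s/R) = 1/3, i.e. R = (9/8) R_s,
# so p_c diverges as R approaches the Buchdahl bound from above.
for R in (2.0, 1.5, 1.2, 1.13, 1.126):
    print(f"R = {R:.3f} R_s  ->  p_c/rho = {central_pressure_over_density(R):.3f}")
```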
|
We give a complete classification of meromorphically integrable homogeneous
potentials $V$ of degree $-1$ which are real analytic on $\mathbb{R}^2\setminus
\{0\}$. In the more general case when $V$ is only meromorphic on an open set of
an algebraic variety, we give a classification of all integrable potentials
having a Darboux point $c$ with $V'(c)=-c,\; c_1^2+c_2^2\neq 0$ and
$\hbox{Sp}(\nabla^2 V(c)) \subset\{-1,0,2\}$. We eventually present a
conjecture for the other eigenvalues and the degenerate Darboux point case
$V'(c)=0$.
|
We present a spectral atlas of the post-main-sequence population of the most
massive Galactic globular cluster, omega Centauri. Spectra were obtained of
more than 1500 stars selected as uniformly as possible from across the (B, B-V)
colour-magnitude diagram of the proper motion cluster member candidates of van
Leeuwen et al. (2000). The spectra were obtained with the 2dF multi-fibre
spectrograph at the Anglo Australian Telescope, and cover the approximate range
lambda~3840-4940 Angstroem. We measure the radial velocities, effective
temperatures, metallicities and surface gravities by fitting ATLAS9 stellar
atmosphere models. We analyse the cluster membership and stellar kinematics,
interstellar absorption in the Ca II K line at 3933 Angstroem, the RR Lyrae
instability strip and the extreme horizontal branch, the metallicity spread and
bimodal CN abundance distribution of red giants, nitrogen and s-process
enrichment, carbon stars, pulsation-induced Balmer line emission on the
asymptotic giant branch (AGB), and the nature of the post-AGB and UV-bright
stars. Membership is confirmed for the vast majority of stars, and the radial
velocities clearly show the rotation of the cluster core. We identify
long-period RR Lyrae-type variables with low gravity, and low-amplitude
variables coinciding with warm RR Lyrae stars. A barium enhancement in the
coolest red giants indicates that 3rd dredge-up operates in AGB stars in omega
Cen. This is distinguished from the pre-enrichment by more massive AGB stars,
which is also seen in our data. The properties of the AGB, post-AGB and
UV-bright stars suggest that RGB mass loss may be less efficient at very low
metallicity, [Fe/H]<<-1, increasing the importance of mass loss on the AGB. The
catalogue and spectra are made available via CDS.
|
We have investigated the critical conditions required for a steady propeller
effect for magnetized neutron stars with optically thick, geometrically thin
accretion disks. We have shown through simple analytical calculations that a
steady-state propeller mechanism cannot be sustained at an inner disk radius
where the viscous and magnetic stresses are balanced. The radius calculated by
equating these stresses is usually found to be close to the conventional Alfven
radius for spherical accretion, r_A. Our results show that: (1) a steady
propeller phase can be established with a maximum inner disk radius that is at
least \sim 15 times smaller than r_A depending on the mass-flow rate of the
disk, rotational period and strength of the magnetic dipole field of the star,
(2) the critical accretion rate corresponding to the accretion-propeller
transition is orders of magnitude lower than the rate estimated by equating r_A
to the co-rotation radius. Our results are consistent with the properties of
the transitional millisecond pulsars which show transitions between the
accretion powered X-ray pulsar and the rotational powered radio pulsar states.
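For orientation, the conventional spherical Alfven radius referred to above is r_A = (mu^4 / (2 G M Mdot^2))^(1/7) with dipole moment mu = B R^3; a minimal sketch using illustrative neutron-star parameters (our assumed values, not the paper's fits):

```python
# Spherical Alfven radius in cgs units, with illustrative parameters.
G = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
M = 1.4 * 1.989e33     # stellar mass [g] (assumed 1.4 M_sun)
B = 1e12               # surface dipole field [G] (assumed)
R_star = 1e6           # stellar radius [cm]
Mdot = 1e15            # mass-flow rate [g/s] (assumed)

mu = B * R_star**3                               # magnetic dipole moment [G cm^3]
r_A = (mu**4 / (2 * G * M * Mdot**2))**(1.0 / 7.0)
print(f"r_A ~ {r_A:.2e} cm")                     # of order 10^8-10^9 cm here
```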
|
The task of testing whether quantum theory applies to all physical systems
and all scales requires considering situations where a quantum probe interacts
with another system that need not obey quantum theory in full. Important
examples include the cases where a quantum mass probes the gravitational field,
for which a unique quantum theory of gravity does not yet exist, or a quantum
field, such as light, interacts with a macroscopic system, such as a biological
molecule, which may or may not obey unitary quantum theory. In this context a
class of experiments has recently been proposed, where the non-classicality of
a physical system that need not obey quantum theory (the gravitational field)
can be tested indirectly by detecting whether or not the system is capable of
entangling two quantum probes. Here we illustrate some of the subtleties of the
argument, to do with the role of locality of interactions and of
non-classicality, and perform proof-of-principle experiments illustrating the
logic of the proposals, using a Nuclear Magnetic Resonance quantum
computational platform with four qubits.
|
An objective understanding of media depictions, such as inclusive portrayals
of how much someone is heard and seen on screen in film and television,
requires machines to automatically discern who is talking, when, how, and
where, and when they are not. Speaker activity can be automatically discerned
from the rich multimodal information present in the media content. This is
however a challenging problem due to the vast variety and contextual
variability in the media content, and the lack of labeled data. In this work,
we present a cross-modal neural network for learning visual representations,
which carry implicit information about the spatial location of a speaker
in the visual frames. Avoiding the need for manual annotations of active
speakers in visual frames, which are very expensive to acquire, we present a
weakly supervised system for the task of localizing active speakers in movie
content. We use the learned cross-modal visual representations, and provide
weak supervision from movie subtitles acting as a proxy for voice activity,
thus requiring no manual annotations. We evaluate the performance of the
proposed system on the AVA active speaker dataset and demonstrate the
effectiveness of the cross-modal embeddings for localizing active speakers in
comparison to fully supervised systems. We also demonstrate state-of-the-art
performance for the task of voice activity detection in an audio-visual
framework, especially when speech is accompanied by noise and music.
|
The present paper is devoted to the study of classes of mappings with
unbounded characteristic of quasiconformality. We obtain a result on normal
families of open discrete mappings $f:D\rightarrow {\Bbb C}\setminus\{a, b\}$
of the class $W_{loc}^{1, 1}$ that have finite distortion and omit two fixed
values $a\ne b$ in ${\Bbb C}$, and whose maximal dilatations have a majorant of
the class of finite mean oscillation at every point. In particular, this result
holds for the so-called $Q$-mappings and is an analogue of the classical Montel
theorem for analytic functions.
|
Maps of cosmic structure produced by galaxy surveys are one of the key tools
for answering fundamental questions about the Universe. Accurate theoretical
predictions for these quantities are needed to maximize the scientific return
of these programs. Simulating the Universe by including gravity and
hydrodynamics is one of the most powerful techniques to accomplish this;
unfortunately, these simulations are very expensive computationally.
Alternatively, gravity-only simulations are cheaper, but do not predict the
locations and properties of galaxies in the cosmic web. In this work, we use
convolutional neural networks to paint galaxy stellar masses on top of the dark
matter field generated by gravity-only simulations. Stellar masses of galaxies
are important for galaxy selection in surveys and are thus a key quantity
that needs to be predicted. Our model outperforms the state-of-the-art
benchmark model and allows the generation of fast and accurate models of the
observed galaxy distribution.
|
We prove, by topological methods, new results on the existence of nonzero
positive weak solutions for a class of multi-parameter second order elliptic
systems subject to functional boundary conditions. The setting is fairly
general and covers the case of multi-point, integral and nonlinear boundary
conditions. We also present a non-existence result. We provide some examples to
illustrate the applicability of our theoretical results.
|
Bayesian inverse reinforcement learning (IRL) methods are ideal for safe
imitation learning, as they allow a learning agent to reason about reward
uncertainty and the safety of a learned policy. However, Bayesian IRL is
computationally intractable for high-dimensional problems because each sample
from the posterior requires solving an entire Markov Decision Process (MDP).
While there exist non-Bayesian deep IRL methods, these methods typically infer
point estimates of reward functions, precluding rigorous safety and uncertainty
analysis. We propose Bayesian Reward Extrapolation (B-REX), a highly efficient,
preference-based Bayesian reward learning algorithm that scales to
high-dimensional, visual control tasks. Our approach uses successor feature
representations and preferences over demonstrations to efficiently generate
samples from the posterior distribution over the demonstrator's reward function
without requiring an MDP solver. Using samples from the posterior, we
demonstrate how to calculate high-confidence bounds on policy performance in
the imitation learning setting, in which the ground-truth reward function is
unknown. We evaluate our proposed approach on the task of learning to play
Atari games via imitation learning from pixel inputs, with no access to the
game score. We demonstrate that B-REX learns imitation policies that are
competitive with a state-of-the-art deep imitation learning method that only
learns a point estimate of the reward function. Furthermore, we demonstrate
that samples from the posterior generated via B-REX can be used to compute
high-confidence performance bounds for a variety of evaluation policies. We
show that high-confidence performance bounds are useful for accurately ranking
different evaluation policies when the reward function is unknown. We also
demonstrate that high-confidence performance bounds may be useful for detecting
reward hacking.
|
The Baird counterexample was proposed by Leemon Baird in 1995 and was first
used to show that the Temporal Difference (TD(0)) algorithm diverges on it.
Since then, it has often been used to test and compare off-policy learning
algorithms. Gradient TD algorithms solved the divergence issue of TD on the
Baird counterexample. However, their convergence on this example is still very
slow, and the nature of the slowness is not well understood, e.g., see (Sutton
and Barto 2018).
This note aims to understand, in particular, why TDC is slow on this example,
and provides a debugging analysis of this behavior. Our debugging technique can
be used to study the convergence behavior of two-time-scale stochastic
approximation algorithms. We also provide empirical results of the recent
Impression GTD algorithm on this example, showing that convergence is very
fast, in fact at a linear rate. We conclude that the Baird counterexample is
solved by an algorithm that has a convergence guarantee to the TD solution in
general and a fast convergence rate.
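For readers unfamiliar with the example, the divergence of plain semi-gradient TD(0) on the Baird counterexample can be reproduced in a few lines using DP-style expected updates (a standard textbook sketch of the 7-state, 8-weight setup; this is not the TDC or Impression GTD algorithm discussed here):

```python
# Semi-gradient TD(0) with expected (DP-style) updates on Baird's
# counterexample: 7 states, 8 weights, zero rewards, and a target policy
# under which every transition leads to state 7.
GAMMA, ALPHA = 0.99, 0.01

def features(s):
    """V(s_i) = 2*w_i + w_8 for i = 1..6, and V(s_7) = w_7 + 2*w_8."""
    phi = [0.0] * 8
    if s < 6:
        phi[s], phi[7] = 2.0, 1.0
    else:
        phi[6], phi[7] = 1.0, 2.0
    return phi

w = [1.0] * 8
w[6] = 10.0  # the usual initialization that exhibits divergence

def value(s):
    return sum(f * wi for f, wi in zip(features(s), w))

for sweep in range(1000):
    # Expected TD error for each state: all target-policy transitions go to s_7.
    deltas = [GAMMA * value(6) - value(s) for s in range(7)]
    for s in range(7):
        phi = features(s)
        for j in range(8):
            w[j] += ALPHA * deltas[s] * phi[j]

print(f"max |w_j| after 1000 sweeps: {max(abs(x) for x in w):.3e}")  # diverges
```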
|
Although accelerator technology has matured sufficiently, state-of-the-art
X-ray linacs for radiotherapy and cargo-scanning capture merely 30-50% of the
electrons from a thermionic cathode, requiring a higher cathode current and
leaving uncaptured electrons to cause problems due to back bombardment,
shortening of cathode life, etc. Any solution to increase capture should be
effective, simple, reliable, compact, and low cost in order to be adopted by
industry. To address this, we present the design of a 6 MeV high capture
efficiency S-band electron linac that captures 90% of the initial DC beam. This
linac does not require any extra parts that would increase the cost as the high
efficiency is achieved via a low-field-amplitude in the first bunching cell to
decrease the number of backstreaming electrons, to velocity bunch the electron
beam, and recapture backstreaming electrons. Under the low field amplitude, any
electrons launched at decelerating phases travel backward with low speeds, thus
most of them can catch the next RF cycle, and get re-accelerated/recaptured. As
the electron speed is low, the cell length is also shorter than existing
linacs. Such a short field is achieved by the use of asymmetric cells with
differential coupling to the side-coupled cells. Our novel design has
implications for all commercial high current thermionic gun linacs for
increasing beam current and increasing cathode lifetime.
|
A widely open conjecture proposed by Bollob\'as, Erd\H{o}s, and Tuza in the
early 1990s states that for any $n$-vertex graph $G$, if the independence
number $\alpha(G) = \Omega(n)$, then there is a subset $T \subseteq V(G)$ with
$|T| = o(n)$ such that $T$ intersects all maximum independent sets of $G$. In
this paper, we prove that this conjecture holds for graphs that do not contain
an induced $K_{s,t}$ for fixed $t \ge s$. Our proof leverages the probabilistic
method at an appropriate juncture.
|
We extend classical Maxwell field theory to a first quantized theory of the
photon by deriving a conserved Lorentz four-current whose zero component is a
positive definite number density. Fields are real and their positive (negative)
frequency parts are interpreted as absorption (emission) of a positive energy
photon. With invariant plane wave normalization, the photon position operator
is Hermitian with instantaneously localized eigenvectors that transform as
Lorentz four-vectors. Reality of the fields and wave function ensure causal
propagation and zero net absorption of energy in the absence of charged matter.
The photon probability amplitude is the real part of the projection of the
photon's state vector onto a basis of position eigenvectors and its square
implements the Born rule. Manifest covariance and consistency with quantum
field theory is maintained through use of the electromagnetic four-potential
and the Lorenz gauge.
|
We compute change in entanglement entropy for a single interval in $1+1$
dimensional sine-Gordon model perturbatively in the coupling. The sine-Gordon
perturbation can be thought of as deformation of the free CFT by a primary
operator with dimension $\Delta$. In an independent computation we calculate
holographic entanglement entropy for that interval from three dimensional bulk
AdS which has a massive scalar with its mass satisfying $m^2=
\Delta(\Delta-2)$. We show that the two results match for near-marginal
perturbations up to leading order in the coupling.
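The mass-dimension relation quoted above, $m^2 = \Delta(\Delta-2)$ for a scalar in AdS$_3$, inverts to $\Delta = 1 + \sqrt{1+m^2}$ on the standard branch; a minimal round-trip check (our illustration, not from the paper):

```python
import math

def delta_from_mass2(m2):
    """Larger root of m^2 = Delta*(Delta - 2), the AdS_3 (d = 2) relation."""
    return 1.0 + math.sqrt(1.0 + m2)

def mass2_from_delta(delta):
    return delta * (delta - 2.0)

# Round trip, including the marginal case Delta = 2 <-> massless scalar.
for m2 in (-0.75, 0.0, 3.0):
    d = delta_from_mass2(m2)
    print(f"m^2 = {m2:+.2f}  ->  Delta = {d:.3f}  ->  m^2 = {mass2_from_delta(d):+.2f}")
```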
|
We prove the strong partition property for \delta^2_1 using methods from
inner model theory.
|
What are the chances of an ethical individual rising through the ranks of a
political party or a corporation in the presence of unethical peers? To answer
this question, I consider a four-player two-stage elimination tournament, in
which players are partitioned into those willing to be involved in sabotage
behavior and those who are not. I show that, under certain conditions, the
latter are more likely to win the tournament.
|
We demonstrate that the rotating black holes in an arbitrary number of
dimensions and without any restrictions on their rotation parameters possess
the same `hidden' symmetry as the 4-dimensional Kerr metric. Namely, besides
the spacetime symmetries generated by the Killing vectors they also admit the
(antisymmetric) Killing-Yano and symmetric Killing tensors.
|