text (stringlengths 57–2.88k) | labels (sequencelengths 6–6)
---|---|
Title: Near-IR period-luminosity relations for pulsating stars in $ω$ Centauri (NGC 5139),
Abstract: $\omega$ Centauri (NGC 5139) hosts hundreds of pulsating variable stars of
different types, thus representing a treasure trove for studies of their
corresponding period-luminosity (PL) relations. Our goal in this study is to
obtain the PL relations for RR Lyrae and SX Phoenicis stars in the field of
the cluster, based on high-quality, well-sampled light curves in the
near-infrared (IR). $\omega$ Centauri was observed using VIRCAM mounted on
VISTA. A total of 42 epochs in $J$ and 100 epochs in $K_{\rm S}$ were obtained,
spanning 352 days. Point-spread function photometry was performed using DoPhot
and DAOPHOT in the outer and inner regions of the cluster, respectively. Based
on the comprehensive catalogue of near-IR light curves thus secured, PL
relations were obtained for the different types of pulsators in the cluster,
both in the $J$ and $K_{\rm S}$ bands. This includes the first PL relations in
the near-IR for fundamental-mode SX Phoenicis stars. The near-IR magnitudes and
periods of Type II Cepheids and RR Lyrae stars were used to derive an updated
true distance modulus to the cluster, with a resulting value of $(m-M)_0 =
13.708 \pm 0.035 \pm 0.10$ mag, where the error bars correspond to the adopted
statistical and systematic errors, respectively. Adding the errors in
quadrature, this is equivalent to a heliocentric distance of $5.52\pm 0.27$
kpc. | [
0,
1,
0,
0,
0,
0
] |
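As a quick check on the distance quoted in the abstract above, the true distance modulus converts to a distance via $d\,[\mathrm{pc}] = 10^{(m-M)_0/5 + 1}$. A minimal sketch in plain Python (the input values are taken from the abstract; the error propagation is the standard first-order formula):

```python
import math

mu = 13.708                       # true distance modulus (m - M)_0 [mag]
sigma_stat, sigma_sys = 0.035, 0.10

# Combine statistical and systematic errors in quadrature.
sigma_mu = math.sqrt(sigma_stat**2 + sigma_sys**2)    # ~0.106 mag

# Distance from the distance-modulus definition mu = 5 log10(d[pc]) - 5.
d_pc = 10 ** (mu / 5 + 1)                             # ~5516 pc
# First-order propagation: d(d)/d(mu) = d * ln(10) / 5.
sigma_d_pc = d_pc * math.log(10) / 5 * sigma_mu       # ~270 pc

print(f"d = {d_pc/1e3:.2f} +/- {sigma_d_pc/1e3:.2f} kpc")  # 5.52 +/- 0.27 kpc
```

The result reproduces the heliocentric distance of $5.52 \pm 0.27$ kpc quoted in the abstract.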
Title: Pseudo-deterministic Proofs,
Abstract: We introduce pseudo-deterministic interactive proofs (psdAM): interactive
proof systems for search problems where the verifier is guaranteed with high
probability to produce the same output on different executions. As with
classical interactive proofs, the verifier is a probabilistic polynomial
time algorithm interacting with an untrusted powerful prover.
We view pseudo-deterministic interactive proofs as an extension of the study
of pseudo-deterministic randomized polynomial time algorithms: the goal of the
latter is to find canonical solutions to search problems, whereas the goal of
the former is to prove to a probabilistic polynomial time verifier that a
solution to a search problem is canonical. Alternatively, one may think of the
powerful prover as aiding the probabilistic polynomial time verifier to find
canonical solutions to search problems, with high probability over the
randomness of the verifier. The challenge is that pseudo-determinism should
hold not only with respect to the randomness, but also with respect to the
prover: a malicious prover should not be able to cause the verifier to output a
solution other than the unique canonical one. | [
1,
0,
0,
0,
0,
0
] |
Title: The Maximum Likelihood Degree of Toric Varieties,
Abstract: We study the maximum likelihood degree (ML degree) of toric varieties, known
as discrete exponential models in statistics. By introducing scaling
coefficients to the monomial parameterization of the toric variety, one can
change the ML degree. We show that the ML degree is equal to the degree of the
toric variety for generic scalings, while it drops if and only if the scaling
vector is in the locus of the principal $A$-determinant. We also illustrate how
to compute the ML estimate of a toric variety numerically via homotopy
continuation from a scaled toric variety with low ML degree. Throughout, we
include examples motivated by algebraic geometry and statistics. We compute the
ML degree of rational normal scrolls and a large class of Veronese-type
varieties. In addition, we investigate the ML degree of scaled Segre varieties,
hierarchical loglinear models, and graphical models. | [
0,
0,
1,
1,
0,
0
] |
Title: On the Mechanism of Large Amplitude Flapping of Inverted Foil in a Uniform Flow,
Abstract: An elastic foil interacting with a uniform flow with its trailing edge
clamped, also known as the inverted foil, exhibits a wide range of complex
self-induced flapping regimes such as large amplitude flapping (LAF), deformed
and flipped flapping. Here, we perform three-dimensional numerical experiments
to examine the role of vortex shedding and the vortex-vortex interaction on the
LAF response at Reynolds number Re=30,000. We investigate the dynamics of
the inverted foil in a novel configuration wherein we introduce a fixed
splitter plate at the trailing edge to suppress the vortex shedding from the
trailing edge and to inhibit the interaction between the counter-rotating
vortices. We find that the inhibition of the interaction has an insignificant
effect on the transverse flapping amplitudes, due to a relatively weaker
coupling between the counter-rotating vortices emanating from the leading edge
and trailing edge. However, the inhibition of the trailing edge vortex reduces
the streamwise flapping amplitude, the flapping frequency, and the net strain
energy of the foil. To further generalize our understanding of the LAF, we next
perform low-Reynolds-number (Re$\in[0.1,50]$) simulations for identical
foil properties to assess the impact of vortex shedding on the large amplitude
flapping. Due to the absence of the vortex shedding process in the low-$Re$
regime, the inverted foil no longer exhibits periodic flapping. However, the
flexible foil still loses its stability through divergence instability to
undergo a large static deformation. Finally, we introduce an analogous
analytical model for the LAF based on the dynamics of an elastically mounted
flat plate undergoing flow-induced pitching oscillations in a uniform stream. | [
0,
1,
0,
0,
0,
0
] |
Title: Multidimensional Sampling of Isotropically Bandlimited Signals,
Abstract: A new lower bound on the average reconstruction error variance of
multidimensional sampling and reconstruction is presented. It applies to
sampling on arbitrary lattices in arbitrary dimensions, assuming a stochastic
process with constant, isotropically bandlimited spectrum and reconstruction by
the best linear interpolator. The lower bound is exact for any lattice at
sufficiently high and low sampling rates. The two threshold rates where the
error variance deviates from the lower bound give two optimality criteria for
sampling lattices. It is proved that at low rates, near the first threshold,
the optimal lattice is the dual of the best sphere-covering lattice, which for
the first time establishes a rigorous relation between optimal sampling and
optimal sphere covering. A previously known result is confirmed at high rates,
near the second threshold, namely, that the optimal lattice is the dual of the
best sphere-packing lattice. Numerical results quantify the performance of
various lattices for sampling and support the theoretical optimality criteria. | [
1,
0,
1,
1,
0,
0
] |
Title: Asymptotic structure of almost eigenfunctions of drift Laplacians on conical ends,
Abstract: We use a weighted variant of the frequency functions introduced by Almgren to
prove sharp asymptotic estimates for almost eigenfunctions of the drift
Laplacian associated to the Gaussian weight on an asymptotically conical end.
As a consequence, we obtain a purely elliptic proof of a result of L. Wang on
the uniqueness of self-shrinkers of the mean curvature flow asymptotic to a
given cone. Another consequence is a unique continuation property for
self-expanders of the mean curvature flow that flow from a cone. | [
0,
0,
1,
0,
0,
0
] |
Title: Palomar Optical Spectrum of Hyperbolic Near-Earth Object A/2017 U1,
Abstract: We present optical spectroscopy of the recently discovered hyperbolic
near-Earth object A/2017 U1, taken on 25 Oct 2017 at Palomar Observatory.
Although our data have a very low signal-to-noise ratio, they indicate a very
red surface at optical wavelengths without significant absorption features. | [
0,
1,
0,
0,
0,
0
] |
Title: On the risk of convex-constrained least squares estimators under misspecification,
Abstract: We consider the problem of estimating the mean of a noisy vector. When the
mean lies in a convex constraint set, the least squares projection of the
random vector onto the set is a natural estimator. Properties of the risk of
this estimator, such as its asymptotic behavior as the noise tends to zero,
have been well studied. We instead study the behavior of this estimator under
misspecification, that is, without the assumption that the mean lies in the
constraint set. For appropriately defined notions of risk in the misspecified
setting, we prove a generalization of a low noise characterization of the risk
due to Oymak and Hassibi in the case of a polyhedral constraint set. An
interesting consequence of our results is that the risk can be much smaller in
the misspecified setting than in the well-specified setting. We also discuss
consequences of our result for isotonic regression. | [
0,
0,
1,
1,
0,
0
] |
Title: On the Binary Lossless Many-Help-One Problem with Independently Degraded Helpers,
Abstract: Although the rate region for the lossless many-help-one problem with
independently degraded helpers is already "solved", its solution is given in
terms of a convex closure over a set of auxiliary random variables. Thus, for
any particular such problem, an optimization over the set of auxiliary
random variables is required to truly solve the rate region. Providing the
solution is surprisingly difficult even for an example as basic as binary
sources. In this work, we derive a simple and tight inner bound on the rate
region's lower boundary for the lossless many-help-one problem with
independently degraded helpers when specialized to sources that are binary,
uniformly distributed, and interrelated through symmetric channels. This
scenario finds important applications in emerging cooperative communication
schemes in which the direct-link transmission is assisted via multiple lossy
relaying links. Numerical results indicate that the derived inner bound proves
increasingly tight as the helpers become more degraded. | [
1,
0,
0,
0,
0,
0
] |
Title: Dimensional reduction and the equivariant Chern character,
Abstract: We propose a dimensional reduction procedure in the Stolz--Teichner framework
of supersymmetric Euclidean field theories (EFTs) that is well-suited in the
presence of a finite gauge group or, more generally, for field theories over an
orbifold. As an illustration, we give a geometric interpretation of the Chern
character for manifolds with an action by a finite group. | [
0,
0,
1,
0,
0,
0
] |
Title: Formal Privacy for Functional Data with Gaussian Perturbations,
Abstract: Motivated by the rapid rise in statistical tools in Functional Data Analysis,
we consider the Gaussian mechanism for achieving differential privacy with
parameter estimates taking values in a, potentially infinite-dimensional,
separable Banach space. Using classic results from probability theory, we show
how densities over function spaces can be utilized to achieve the desired
differential privacy bounds. This extends prior results of Hall et al (2013) to
a much broader class of statistical estimates and summaries, including "path
level" summaries, nonlinear functionals, and full function releases. By
focusing on Banach spaces, we provide a deeper picture of the challenges for
privacy with complex data, especially the role regularization plays in
balancing utility and privacy. Using an application to penalized smoothing, we
explicitly highlight this balance in the context of mean function estimation.
Simulations and an application to diffusion tensor imaging are briefly
presented, with extensive additions included in a supplement. | [
0,
0,
1,
1,
0,
0
] |
Title: An algorithm to reconstruct convex polyhedra from their face normals and areas,
Abstract: A well-known result in the study of convex polyhedra, due to Minkowski, is
that a convex polyhedron is uniquely determined (up to translation) by the
directions and areas of its faces. The theorem guarantees existence of the
polyhedron associated to given face normals and areas, but does not provide a
constructive way to find it explicitly. This article provides an algorithm to
reconstruct 3D convex polyhedra from their face normals and areas, based on a
method by Lasserre to compute the volume of a convex polyhedron in
$\mathbb{R}^n$. A Python implementation of the algorithm is available at
this https URL. | [
0,
1,
0,
0,
0,
0
] |
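Minkowski's existence theorem cited in the abstract above requires the area-weighted unit normals to sum to zero, so any reconstruction must start from inputs satisfying this closure condition. A minimal sketch of that check (plain Python/NumPy; the cube data are a hypothetical example, not taken from the paper):

```python
import numpy as np

def minkowski_closure_ok(normals, areas, tol=1e-9):
    """Check Minkowski's necessary condition: sum_i A_i * n_i = 0."""
    normals = np.asarray(normals, dtype=float)
    units = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    resultant = (np.asarray(areas, dtype=float)[:, None] * units).sum(axis=0)
    return np.linalg.norm(resultant) < tol

# Unit cube: six faces, unit areas, axis-aligned normals.
normals = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
areas = [1.0] * 6
print(minkowski_closure_ok(normals, areas))  # True
```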
Title: Grid-converged Solution and Analysis of the Unsteady Viscous Flow in a Two-dimensional Shock Tube,
Abstract: The flow in a shock tube is extremely complex, with dynamic multi-scale
structures of sharp fronts, flow separation, and vortices due to the
interaction of the shock wave, the contact surface, and the boundary layer over
the side wall of the tube. Prediction and understanding of the complex fluid
dynamics is of theoretical and practical importance. It is also an extremely
challenging problem for numerical simulation, especially at relatively high
Reynolds numbers. Daru & Tenaud (Daru, V. & Tenaud, C. 2001 Evaluation of TVD
high resolution schemes for unsteady viscous shocked flows. Computers & Fluids
30, 89-113) proposed a two-dimensional model problem as a numerical test case
for high-resolution schemes to simulate the flow field in a square closed shock
tube. Though many researchers have tried this problem using a variety of
computational methods, there is not yet an agreed-upon grid-converged solution
of the problem at the Reynolds number of 1000. This paper presents a rigorous
grid-convergence study and the resulting grid-converged solutions for this
problem by using a newly-developed, efficient, and high-order gas-kinetic
scheme. Critical data extracted from the converged solutions are documented as
benchmark data. The complex fluid dynamics of the flow at Re = 1000 are
discussed and analysed in detail. Major phenomena revealed by the numerical
computations include the downward concentration of the fluid through the curved
shock, the formation of the vortices, the mechanism of the shock wave
bifurcation, the structure of the jet along the bottom wall, and the
Kelvin-Helmholtz instability near the contact surface. | [
0,
1,
0,
0,
0,
0
] |
Title: Generative Temporal Models with Memory,
Abstract: We consider the general problem of modeling temporal data with long-range
dependencies, wherein new observations are fully or partially predictable based
on temporally-distant, past observations. A sufficiently powerful temporal
model should separate predictable elements of the sequence from unpredictable
elements, express uncertainty about those unpredictable elements, and rapidly
identify novel elements that may help to predict the future. To create such
models, we introduce Generative Temporal Models augmented with external memory
systems. They are developed within the variational inference framework, which
provides both a practical training methodology and methods to gain insight into
the models' operation. We show, on a range of problems with sparse, long-term
temporal dependencies, that these models store information from early in a
sequence, and reuse this stored information efficiently. This allows them to
perform substantially better than existing models based on well-known recurrent
neural networks, like LSTMs. | [
1,
0,
0,
1,
0,
0
] |
Title: Global research collaboration: Networks and partners in South East Asia,
Abstract: This is an empirical paper that addresses the role of bilateral and
multilateral international co-authorships in the six leading science systems
among the ASEAN group of countries (ASEAN6). The paper highlights the different
ways that bilateral and multilateral co-authorships structure global networks
and the collaborations of the ASEAN6. The paper looks at the influence of the
collaboration styles of major collaborating countries of the ASEAN6,
particularly the USA and Japan. It also highlights the role of bilateral and
multilateral co-authorships in the production of knowledge in the leading
specialisations of the ASEAN6. The discussion section offers some tentative
explanations for major dynamics evident in the results and summarises the next
steps in this research. | [
1,
0,
0,
0,
0,
0
] |
Title: Spontaneous and stimulus-induced coherent states of dynamically balanced neuronal networks,
Abstract: How the information microscopically processed by individual neurons is
integrated and used in organising the macroscopic behaviour of an animal is a
central question in neuroscience. Coherence of dynamics over different scales
has been suggested as a clue to the mechanisms underlying this integration.
Balanced excitation and inhibition amplify microscopic fluctuations to a
macroscopic level and may provide a mechanism for generating coherent dynamics
over the two scales. Previous theories of brain dynamics, however, have been
restricted to cases in which population-averaged activities have been
constrained to constant values, that is, to cases with no macroscopic degrees
of freedom. In the present study, we investigate balanced neuronal networks
with a nonzero number of macroscopic degrees of freedom coupled to microscopic
degrees of freedom. In these networks, amplified microscopic fluctuations drive
the macroscopic dynamics, while the macroscopic dynamics determine the
statistics of the microscopic fluctuations. We develop a novel type of
mean-field theory applicable to this class of interscale interactions, for
which an analytical approach has previously been unknown. Irregular macroscopic
rhythms similar to those observed in the brain emerge spontaneously as a result
of such interactions. Microscopic inputs to a small number of neurons
effectively entrain the whole network through the amplification mechanism.
Neuronal responses become coherent as the magnitude of either the balanced
excitation and inhibition or the external inputs is increased. Our mean-field
theory successfully predicts the behaviour of the model. Our numerical results
further suggest that the coherent dynamics can be used for selective read-out
of information. In conclusion, our results show a novel form of neuronal
information processing that bridges different scales, and advance our
understanding of the brain. | [
0,
1,
0,
0,
0,
0
] |
Title: Effect of Blast Exposure on Gene-Gene Interactions,
Abstract: Repeated exposure to low-level blast may initiate a range of adverse health
problems such as traumatic brain injury (TBI). Although many studies have
successfully identified genes associated with TBI, the cellular mechanisms
underpinning TBI are not fully elucidated. In this study, we investigated the
underlying relationships among genes by constructing transcript Bayesian
networks using RNA-seq data. The data for pre- and post-blast transcripts,
which were collected on 33 individuals in an Army training program, combined
with our systems approach, provide a unique opportunity to investigate the effect of
blast-wave exposure on gene-gene interactions. Digging into the networks, we
identified four subnetworks related to immune system and inflammatory process
that are disrupted due to the exposure. Among genes with relatively high fold
change in their transcript expression level, ATP6V1G1, B2M, BCL2A1, PELI,
S100A8, TRIM58 and ZNF654 showed major impact on the dysregulation of the
gene-gene interactions. This study reveals how repeated exposure to traumatic
conditions increases the fold change of transcript expression levels and
hypothesizes new targets for further experimental studies. | [
0,
0,
0,
0,
1,
0
] |
Title: A sparse linear algebra algorithm for fast computation of prediction variances with Gaussian Markov random fields,
Abstract: Gaussian Markov random fields are used in a large number of disciplines in
machine vision and spatial statistics. The models take advantage of sparsity in
matrices introduced through the Markov assumptions, and all operations in
inference and prediction use sparse linear algebra operations that scale well
with dimensionality. Yet, for very high-dimensional models, exact computation
of predictive variances of linear combinations of variables is generally
computationally prohibitive, and approximate methods (generally interpolation
or conditional simulation) are typically used instead. A set of conditions is
established under which the variances of linear combinations of random
variables can be computed exactly using the Takahashi recursions. The ensuing
computational simplification has wide applicability and may be used to enhance
several software packages where model fitting is seated in a maximum-likelihood
framework. The resulting algorithm is ideal for use in a variety of spatial
statistical applications, including \emph{LatticeKrig} modelling, statistical
downscaling, and fixed rank kriging. It can compute hundreds of thousands of
exact predictive variances of linear combinations on a standard desktop with ease,
even when large spatial GMRF models are used. | [
0,
0,
0,
1,
0,
0
] |
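The abstract above concerns exact predictive variances of linear combinations under a GMRF with sparse precision matrix $Q$. The Takahashi recursions are the paper's tool; as a naive baseline illustrating what is being computed, $\mathrm{Var}(a^\top x) = a^\top Q^{-1} a$ can be obtained with a single sparse solve. A sketch (the toy second-order random-walk precision below is an assumption for illustration, not the paper's model):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Toy GMRF: second-order random-walk precision Q (sparse, SPD after a ridge).
n = 1000
D = sp.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))   # second differences
Q = (D.T @ D + 1e-4 * sp.eye(n)).tocsc()

# Variance of a linear combination a'x under x ~ N(0, Q^{-1}):
# Var(a'x) = a' Q^{-1} a, via one sparse solve (no dense inverse formed).
a = np.zeros(n)
a[100:110] = 0.1                                         # a local average
var = a @ spsolve(Q, a)
print(var)
```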
Title: Subspace Clustering of Very Sparse High-Dimensional Data,
Abstract: In this paper we consider the problem of clustering collections of very short
texts using subspace clustering. This problem arises in many applications such
as product categorisation, fraud detection, and sentiment analysis. The main
challenge lies in the fact that the vectorial representation of short texts is
both high-dimensional, due to the large number of unique terms in the corpus,
and extremely sparse, as each text contains a very small number of words with
no repetition. We propose a new, simple subspace clustering algorithm that
relies on linear algebra to cluster such datasets. Experimental results on
identifying product categories from product names obtained from the US Amazon
website indicate that the algorithm can be competitive against state-of-the-art
clustering algorithms. | [
1,
0,
0,
1,
0,
0
] |
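The abstract above does not spell out the proposed algorithm, so the sketch below is only a generic linear-algebra baseline for the same task: embed the sparse, high-dimensional short-text vectors with a truncated SVD and cluster the low-dimensional representation. The corpus, component count, and cluster count are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Hypothetical short-text corpus (the paper's own algorithm is not
# reproduced here; this is a generic baseline for comparison).
texts = ["usb cable 2m", "hdmi cable gold", "espresso machine",
         "coffee grinder burr", "usb adapter pack", "drip coffee maker"]

X = TfidfVectorizer().fit_transform(texts)         # sparse, high-dimensional
Z = TruncatedSVD(n_components=2).fit_transform(X)  # low-dim dense embedding
labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)
print(labels)
```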
Title: Quarnet inference rules for level-1 networks,
Abstract: An important problem in phylogenetics is the construction of phylogenetic
trees. One way to approach this problem, known as the supertree method,
involves inferring a phylogenetic tree with leaves consisting of a set $X$ of
species from a collection of trees, each having leaf-set some subset of $X$. In
the 1980s, certain inference rules were given for when a
collection of 4-leaved trees, one for each 4-element subset of $X$, can all be
simultaneously displayed by a single supertree with leaf-set $X$. Recently, it
has become of interest to extend such results to phylogenetic networks. These
are a generalization of phylogenetic trees which can be used to represent
reticulate evolution (where species can come together to form a new species).
It has been shown that a certain type of phylogenetic network, called a level-1
network, can essentially be constructed from 4-leaved trees. However, the
problem of providing appropriate inference rules for such networks remains
unresolved. Here we show that by considering 4-leaved networks, called
quarnets, as opposed to 4-leaved trees, it is possible to provide such rules.
In particular, we show that these rules can be used to characterize when a
collection of quarnets, one for each 4-element subset of $X$, can all be
simultaneously displayed by a level-1 network with leaf-set $X$. The rules are
an intriguing mixture of tree inference rules, and an inference rule for
building up a cyclic ordering of $X$ from orderings on subsets of $X$ of size
4. This opens up several new directions of research for inferring phylogenetic
networks from smaller ones, which could yield new algorithms for solving the
supernetwork problem in phylogenetics. | [
1,
0,
0,
0,
0,
0
] |
Title: The IRX-Beta Dust Attenuation Relation in Cosmological Galaxy Formation Simulations,
Abstract: We utilise a series of high-resolution cosmological zoom simulations of
galaxy formation to investigate the relationship between the ultraviolet (UV)
slope, beta, and the ratio of the infrared luminosity to UV luminosity (IRX) in
the spectral energy distributions (SEDs) of galaxies. We employ dust radiative
transfer calculations in which the SEDs of the stars in galaxies propagate
through the dusty interstellar medium. Our main goals are to understand the
origin of, and scatter in, the IRX-beta relation; to assess the efficacy of
simplified stellar population synthesis screen models in capturing the
essential physics in the IRX-beta relation; and to understand systematic
deviations from the canonical local IRX-beta relations in particular
populations of high-redshift galaxies. Our main results follow. Galaxies that
have young stellar populations with relatively cospatial UV and IR emitting
regions and a Milky Way-like extinction curve fall on or near the standard
Meurer relation. This behaviour is well captured by simplified screen models.
Scatter in the IRX-beta relation is dominated by three major effects: (i) older
stellar populations drive galaxies below the relations defined for local
starbursts due to a reddening of their intrinsic UV SEDs; (ii) complex
geometries in high-z heavily star forming galaxies drive galaxies toward blue
UV slopes owing to optically thin UV sightlines; (iii) shallow extinction
curves drive galaxies downward in the IRX-beta plane due to lowered NUV/FUV
extinction ratios. We use these features of the UV slopes of galaxies to derive
a fitting relation that reasonably collapses the scatter back toward the
canonical local relation. Finally, we use these results to develop an
understanding for the location of two particularly enigmatic populations of
galaxies in the IRX-beta plane: z~2-4 dusty star forming galaxies, and z>5 star
forming galaxies. | [
0,
1,
0,
0,
0,
0
] |
Title: Strict convexity of the Mabuchi functional for energy minimizers,
Abstract: There are two parts to this paper. First, we discover an explicit formula
for the complex Hessian of the weighted log-Bergman kernel on a parallelogram
domain, and utilise this formula to give a new proof of the strict
convexity of the Mabuchi functional along a smooth geodesic. Second, when a
C^{1,1}-geodesic connects two non-degenerate energy minimizers, we also prove
this strict convexity, by showing that such a geodesic must be non-degenerate
and smooth. | [
0,
0,
1,
0,
0,
0
] |
Title: DNN Filter Bank Cepstral Coefficients for Spoofing Detection,
Abstract: With the development of speech synthesis techniques, automatic speaker
verification systems face the serious challenge of spoofing attack. In order to
improve the reliability of speaker verification systems, we develop a new
filter bank based cepstral feature, deep neural network filter bank cepstral
coefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The
deep neural network filter bank is automatically generated by training a filter
bank neural network (FBNN) using natural and synthetic speech. By adding
restrictions on the training rules, the learned weight matrix of FBNN is
band-limited and sorted by frequency, similar to the normal filter bank. Unlike
the manually designed filter bank, the learned filter bank has different filter
shapes in different channels, which can capture the differences between natural
and synthetic speech more effectively. The experimental results on the ASVspoof
2015 database show that the Gaussian mixture model maximum-likelihood
(GMM-ML) classifier trained by the new feature performs better than the
state-of-the-art linear frequency cepstral coefficients (LFCC) based
classifier, especially in detecting unknown attacks. | [
1,
0,
0,
0,
0,
0
] |
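The learned DNN filter bank above cannot be reproduced from the abstract alone; the sketch below shows the generic filter-bank cepstral pipeline it plugs into (power spectrum, band-limited triangular filters sorted by frequency, log, DCT), with a fixed mel-spaced bank standing in for the learned FBNN. All parameter values are hypothetical defaults:

```python
import numpy as np
from scipy.fftpack import dct

def fbcc(frame, sr=16000, n_filters=20, n_ceps=13):
    """Filter-bank cepstral coefficients for one windowed frame.
    A fixed triangular (mel-spaced) bank stands in for the learned FBNN."""
    n_fft = 512
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2

    # Mel-spaced triangular filters: band-limited and sorted by frequency.
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)

    bank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        bank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        bank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    log_energies = np.log(bank @ power + 1e-10)
    return dct(log_energies, type=2, norm='ortho')[:n_ceps]

print(fbcc(np.random.randn(400)).shape)  # (13,)
```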
Title: Up-down colorings of virtual-link diagrams and the necessity of Reidemeister moves of type II,
Abstract: We introduce an up-down coloring of a virtual-link diagram. The
colorabilities give a lower bound on the minimum number of Reidemeister moves
of type II which are needed between two 2-component virtual-link diagrams. By
using the notion of a quandle cocycle invariant, we determine the necessity of
Reidemeister moves of type II for a pair of diagrams of the trivial
virtual-knot. This implies that for any virtual-knot diagram $D$, there exists
a diagram $D'$ representing the same virtual-knot such that any sequence of
generalized Reidemeister moves between them includes at least one Reidemeister
move of type II. | [
0,
0,
1,
0,
0,
0
] |
Title: Introduction to the Special Issue on Approaches to Control Biological and Biologically Inspired Networks,
Abstract: The emerging field at the intersection of quantitative biology, network
modeling, and control theory has enjoyed significant progress in recent years.
This Special Issue brings together a selection of papers on complementary
approaches to observe, identify, and control biological and biologically
inspired networks. These approaches advance the state of the art in the field
by addressing challenges common to many such networks, including high
dimensionality, strong nonlinearity, uncertainty, and limited opportunities for
observation and intervention. Because these challenges are not unique to
biological systems, it is expected that many of the results presented in these
contributions will also find applications in other domains, including physical,
social, and technological networks. | [
1,
0,
0,
0,
1,
0
] |
Title: Towards Classification of Web ontologies using the Horizontal and Vertical Segmentation,
Abstract: The new era of the Web is known as the semantic Web or the Web of data. The
semantic Web depends on ontologies that are seen as one of its pillars. The
bigger these ontologies are, the greater their potential for exploitation.
However, when these ontologies become too big, other problems may appear, such
as the difficulty of loading large files into memory, the time needed to
download such files, and especially the time needed to reason over them. In
this paper, we discuss approaches for segmenting such big Web ontologies, as
well as the usefulness of doing so. The segmentation method extracts from an
existing ontology a segment that represents a layer or a generation in the
existing ontology, i.e., a horizontal extraction. The extracted segment should
itself be an ontology. | [
1,
0,
0,
0,
0,
0
] |
Title: An Efficient Version of the Bombieri-Vaaler Lemma,
Abstract: In their celebrated paper "On Siegel's Lemma", Bombieri and Vaaler found an
upper bound on the height of integer solutions of systems of linear Diophantine
equations. Calculating the bound directly, however, requires exponential time.
In this paper, we present the bound in a different form that can be computed in
polynomial time. We also give an elementary (and arguably simpler) proof for
the bound. | [
1,
0,
1,
0,
0,
0
] |
Title: Interacting Chaplygin gas revisited,
Abstract: The implications of considering interaction between Chaplygin gas and a
barotropic fluid with constant equation of state have been explored. The unique
feature of this work is that assuming an interaction $Q \propto H\rho_d$,
analytic expressions for the energy density and pressure have been derived in
terms of the Hypergeometric $_2\text{F}_1$ function. It is worthwhile to
mention that an interacting Chaplygin gas model was considered in 2006 by Zhang
and Zhu; nevertheless, analytic solutions for the continuity equations could
not be determined assuming an interaction proportional to $H$ times the sum of
the energy densities of Chaplygin gas and dust. Our model can successfully
explain the transition from the early decelerating phase to the present phase
of cosmic acceleration. Choosing the free parameters of our model
through trial and error shows that recent observational data strongly favor
$w_m=0$ and $w_m=-\frac{1}{3}$ over the $w_m=\frac{1}{3}$ case. Interestingly,
the present model also incorporates the transition of dark energy into the
phantom domain, however, future deceleration is forbidden. | [
0,
1,
0,
0,
0,
0
] |
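For concreteness, the coupled continuity equations usually studied for such an interacting model take the form below. The conventions here are an assumption: $\rho_d$ is the Chaplygin density with pressure $p_d = -A/\rho_d$, $b$ is a dimensionless coupling constant, and the signs fix the direction of energy transfer from dark energy to the barotropic fluid:

```latex
% Interacting Chaplygin gas (density \rho_d, p_d = -A/\rho_d) coupled to a
% barotropic fluid (p_m = w_m \rho_m); conventions are assumptions, not
% copied from the paper.
\dot{\rho}_d + 3H\!\left(\rho_d - \frac{A}{\rho_d}\right) = -Q, \qquad
\dot{\rho}_m + 3H\,(1 + w_m)\,\rho_m = Q, \qquad
Q = 3\,b\,H\,\rho_d .
```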
Title: Focused time-lapse inversion of radio and audio magnetotelluric data,
Abstract: Geoelectrical techniques are widely used to monitor groundwater processes,
while surprisingly few studies have considered audio (AMT) and radio (RMT)
magnetotellurics for such purposes. In this numerical investigation, we analyze
to what extent inversion results based on AMT and RMT monitoring data can be
improved by (1) time-lapse difference inversion; (2) incorporation of
statistical information about the expected model update (i.e., the model
regularization is based on a geostatistical model); (3) using alternative model
norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms
using iteratively reweighted least-squares), (4) constraining model updates to
predefined ranges (i.e., using Lagrange Multipliers to only allow either
increases or decreases of electrical resistivity with respect to background
conditions). To do so, we consider a simple illustrative model and a more
realistic test case related to seawater intrusion. The results are encouraging
and show significant improvements when using time-lapse difference inversion
with non-l2 model norms. Artifacts that may arise when imposing compactness of
regions with temporal changes can be suppressed through inequality constraints
to yield models without oscillations outside the true region of temporal
changes. Based on these results, we recommend approximate l1-norm solutions as
they can resolve both sharp and smooth interfaces within the same model. | [
0,
1,
0,
0,
0,
0
] |
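The l1-norm approximation via iteratively reweighted least squares mentioned above can be illustrated on a generic linear inverse problem. The sketch below is not the authors' inversion code; the regularisation weight, forward operator, and data are all hypothetical:

```python
import numpy as np

def irls_l1(G, d, lam=0.1, n_iter=30, eps=1e-6):
    """Approximate the l1-regularised solution of G m = d by iteratively
    reweighted least squares: |m_i| ~ m_i^2 / |m_i^(k)| at each iteration,
    with eps stabilising near-zero components."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(m), eps)            # l1 reweighting
        m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
    return m

# Sparse "model update" recovered through a generic linear forward operator.
rng = np.random.default_rng(0)
G = rng.standard_normal((80, 40))
m_true = np.zeros(40); m_true[[5, 20]] = [1.0, -2.0]
d = G @ m_true + 0.01 * rng.standard_normal(80)
print(np.round(irls_l1(G, d), 2)[[5, 20]])              # close to [1.0, -2.0]
```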
Title: Deep Exploration via Randomized Value Functions,
Abstract: We study the use of randomized value functions to guide deep exploration in
reinforcement learning. This offers an elegant means for synthesizing
statistically and computationally efficient exploration with common practical
approaches to value function learning. We present several reinforcement
learning algorithms that leverage randomized value functions and demonstrate
their efficacy through computational studies. We also prove a regret bound that
establishes statistical efficiency with a tabular representation. | [
1,
0,
0,
1,
0,
0
] |
Title: SEP-Nets: Small and Effective Pattern Networks,
Abstract: While going deeper has been witnessed to improve the performance of
convolutional neural networks (CNN), going smaller for CNN has received
increasing attention recently due to its attractiveness for mobile/embedded
applications. It remains an active and important topic how to design a small
network while retaining the performance of large and deep CNNs (e.g., Inception
Nets, ResNets). Although there are already intensive studies on compressing the
size of CNNs, the considerable drop of performance is still a key concern in
many designs. This paper addresses this concern with several new contributions.
First, we propose a simple yet powerful method for compressing the size of deep
CNNs based on parameter binarization. The striking difference from most
previous work on parameter binarization/quantization lies in the different
treatment of $1\times 1$ convolutions and $k\times k$ convolutions ($k>1$),
where we only binarize $k\times k$ convolutions into binary patterns. The
resulting networks are referred to as pattern networks. By doing this, we show
that previous deep CNNs such as GoogLeNet and Inception-type Nets can be
compressed dramatically with marginal drop in performance. Second, in light of
the different functionalities of $1\times 1$ (data projection/transformation)
and $k\times k$ convolutions (pattern extraction), we propose a new block
structure codenamed the pattern residual block that adds transformed feature
maps generated by $1\times 1$ convolutions to the pattern feature maps
generated by $k\times k$ convolutions, based on which we design a small network
with $\sim 1$ million parameters. Combining with our parameter binarization, we
achieve better performance on ImageNet than using similar sized networks
including recently released Google MobileNets. | [
1,
0,
0,
0,
0,
0
] |
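The core operation above, binarizing only the $k\times k$ kernels while keeping $1\times 1$ convolutions in full precision, can be sketched with a common sign-and-scale scheme. The exact binarization rule of SEP-Nets may differ, and the tensor shapes here are hypothetical:

```python
import numpy as np

def binarize_kxk(W):
    """Binarize a conv kernel W of shape (out, in, k, k) into binary
    patterns with a per-filter scale: W ~ alpha * sign(W). This is one
    standard scheme; the paper's exact rule may differ."""
    alpha = np.mean(np.abs(W), axis=(1, 2, 3), keepdims=True)  # per-filter scale
    return alpha * np.sign(W)

W3 = np.random.randn(8, 16, 3, 3)    # 3x3 kernels -> binarized patterns
W1 = np.random.randn(8, 16, 1, 1)    # 1x1 kernels -> kept in full precision
W3_bin = binarize_kxk(W3)
print(np.unique(np.sign(W3_bin)))    # [-1.  1.]
```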
Title: Fundamental Limitations of Cavity-assisted Atom Interferometry,
Abstract: Atom interferometers employing optical cavities to enhance the beam splitter
pulses promise significant advances in science and technology, notably for
future gravitational wave detectors. Long cavities, on the scale of hundreds of
meters, have been proposed in experiments aiming to observe gravitational waves
with frequencies below 1 Hz, where laser interferometers, such as LIGO, have
poor sensitivity. Alternatively, short cavities have also been proposed for
enhancing the sensitivity of more portable atom interferometers. We explore the
fundamental limitations of two-mirror cavities for atomic beam splitting, and
establish upper bounds on the temperature of the atomic ensemble as a function
of cavity length and three design parameters: the cavity g-factor, the
bandwidth, and the optical suppression factor of the first and second order
spatial modes. A lower bound to the cavity bandwidth is found which avoids
elongation of the interaction time and maximizes power enhancement. An upper
limit to cavity length is found for symmetric two-mirror cavities, restricting
the practicality of long baseline detectors. For shorter cavities, an upper
limit on the beam size is derived from the geometrical stability of the
cavity. These findings aim to aid the design of current and future
cavity-assisted atom interferometers. | [
0,
1,
0,
0,
0,
0
] |
Title: Markov chain aggregation and its application to rule-based modelling,
Abstract: Rule-based modelling allows one to represent molecular interactions in a compact
and natural way. The underlying molecular dynamics, by the laws of stochastic
chemical kinetics, behaves as a continuous-time Markov chain. However, this
Markov chain enumerates all possible reaction mixtures, rendering the analysis
of the chain computationally demanding and often prohibitive in practice. We
here describe how it is possible to efficiently find a smaller, aggregate
chain, which preserves certain properties of the original one. Formal methods
and lumpability notions are used to define algorithms for automated and
efficient construction of such smaller chains (without ever constructing the
original ones). We here illustrate the method on an example and we discuss the
applicability of the method in the context of modelling large signalling
pathways. | [
1,
0,
0,
0,
1,
0
] |
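The aggregation above rests on lumpability of the underlying Markov chain. A minimal illustration of ordinary lumpability, independent of the rule-based machinery: a partition is lumpable when every state in a block sends the same total probability into each block, in which case a smaller aggregated chain is well defined. The toy matrix is an assumption:

```python
import numpy as np

def lump(P, partition, tol=1e-12):
    """Aggregate transition matrix P over the partition if it is ordinarily
    lumpable: states within a block must have identical block-sums.
    Returns the aggregated chain, or None if the partition is not lumpable."""
    k = len(partition)
    Q = np.zeros((k, k))
    for j, Bj in enumerate(partition):
        col = P[:, Bj].sum(axis=1)          # mass into block j from each state
        for i, Bi in enumerate(partition):
            vals = col[Bi]
            if np.ptp(vals) > tol:          # not constant within block i
                return None
            Q[i, j] = vals[0]
    return Q

P = np.array([[0.0, 0.5, 0.5],
              [0.2, 0.4, 0.4],
              [0.2, 0.4, 0.4]])
print(lump(P, [[0], [1, 2]]))   # aggregates states 1, 2 into one block
```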
Title: Refounding legitimacy towards Aethogenesis,
Abstract: The fusion of humans and technology takes us into an unknown world described
by some authors as populated by quasi-living species that would relegate us -
ordinary humans - to the rank of alienated agents emptied of our identity and
consciousness. I argue instead that our world is woven of simple though
invisible perspectives which - if we become aware of them - may renew our
ability for making judgments and enhance our autonomy. I became aware of these
invisible perspectives by observing and practicing a real time collective net
art experiment called the Poietic Generator. As the perspectives unveiled by
this experiment are invisible, I have called them anoptical perspectives, i.e.,
non-optical, by analogy with the optical perspective of the Renaissance. Later I
have come to realize that these perspectives obtain their cognitive structure
from the political origins of our language. Accordingly it is possible to
define certain cognitive criteria for assessing the legitimacy of the anoptical
perspectives just like some artists and architects of the Renaissance defined
the geometrical criteria that established the legitimacy of the optical one. | [
1,
0,
0,
0,
0,
0
] |
Title: FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation,
Abstract: The time-dependent generator coordinate method (TDGCM) is a powerful method
to study the large amplitude collective motion of quantum many-body systems
such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the
TDGCM leads to a local, time-dependent Schrödinger equation in a
multi-dimensional collective space. In this paper, we present the version 2.0
of the code FELIX that solves the collective Schrödinger equation in a finite
element basis. This new version features: (i) the ability to solve a
generalized TDGCM+GOA equation with a metric term in the collective
Hamiltonian, (ii) support for new kinds of finite elements and different types
of quadrature to compute the discretized Hamiltonian and overlap matrices,
(iii) the possibility to leverage the spectral element scheme, (iv) an explicit
Krylov approximation of the time propagator for time integration instead of the
implicit Crank-Nicolson method implemented in the first version, (v) an
entirely redesigned workflow. We benchmark this release on an analytic problem
as well as on realistic two-dimensional calculations of the low-energy fission
of Pu240 and Fm256. Low to moderate numerical precision calculations are most
efficiently performed with simplex elements with a degree 2 polynomial basis.
Higher precision calculations should instead use the spectral element method
with a degree 4 polynomial basis. We emphasize that in a realistic calculation
of fission mass distributions of Pu240, FELIX-2.0 is about 20 times faster than
its previous release (within a numerical precision of a few percent). | [
0,
1,
0,
0,
0,
0
] |
Title: Scattering in the energy space for Boussinesq equations,
Abstract: In this note we show that all small solutions in the energy space of the
generalized 1D Boussinesq equation must decay to zero as time tends to
infinity, strongly on slightly proper subsets of the space-time light cone. Our
result does not require any assumption on the power of the nonlinearity,
working even for the supercritical range of scattering. No parity assumption on
the initial data is needed. | [
0,
0,
1,
0,
0,
0
] |
Title: A Data-Driven Supply-Side Approach for Measuring Cross-Border Internet Purchases,
Abstract: The digital economy is a highly relevant item on the European Union's policy
agenda. Cross-border internet purchases are part of the digital economy, but
their total value cannot currently be accurately measured or estimated.
Traditional approaches based on consumer surveys or business surveys are shown
to be inadequate for this purpose, due to language bias and sampling issues,
respectively. We address both problems by proposing a novel approach based on
supply-side data, namely tax returns. The proposed data-driven record-linkage
techniques and machine learning algorithms utilize two additional open data
sources: European business registers and internet data. Our main finding is
that the value of total cross-border internet purchases within the European
Union by Dutch consumers was over EUR 1.3 billion in 2016. This is more than 6
times as high as current estimates. Our finding motivates the implementation of
the proposed methodology in other EU member states. Ultimately, it could lead
to more accurate estimates of cross-border internet purchases within the entire
European Union. | [
0,
0,
0,
1,
0,
0
] |
Title: Low Rank Magnetic Resonance Fingerprinting,
Abstract: Purpose: Magnetic Resonance Fingerprinting (MRF) is a relatively new approach
that provides quantitative MRI measures using randomized acquisition.
Extraction of physical quantitative tissue parameters is performed off-line,
without the need for patient presence, based on acquisition with varying
parameters and a dictionary generated according to the Bloch equation
simulations. MRF uses hundreds of radio frequency (RF) excitation pulses for
acquisition, and therefore a high undersampling ratio in the sampling domain
(k-space) is required for reasonable scanning time. This undersampling causes
spatial artifacts that hamper the ability to accurately estimate the tissue's
quantitative values. In this work, we introduce a new approach for quantitative
MRI using MRF, called magnetic resonance Fingerprinting with LOw Rank (FLOR).
Methods: We exploit the low rank property of the concatenated temporal
imaging contrasts, on top of the fact that the MRF signal is sparsely
represented in the generated dictionary domain. We present an iterative scheme
that consists of a gradient step followed by a low rank projection using the
singular value decomposition.
Results: Experimental results consist of retrospective sampling, which allows
comparison to a well-defined reference, and prospective sampling that shows the
performance of FLOR for a real-data sampling scenario. Both experiments
demonstrate improved parameter accuracy compared to other compressed-sensing
and low-rank based methods for MRF at 5% and 9% sampling ratios, for the
retrospective and prospective experiments, respectively.
Conclusions: We have shown through retrospective and prospective experiments
that by exploiting the low rank nature of the MRF signal, FLOR recovers the MRF
temporal undersampled images and provides more accurate parameter maps compared
to previous iterative methods. | [
1,
1,
0,
0,
0,
0
] |
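The iterative scheme described in the Methods paragraph above, a gradient step followed by a low-rank projection via the SVD, can be sketched on a generic matrix-recovery toy problem. FLOR's actual data-consistency step acts in k-space and its dictionary-sparsity component is omitted; the sizes, rank, and step size below are hypothetical:

```python
import numpy as np

def svd_rank_project(X, r):
    """Project X onto the set of rank-<=r matrices (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def low_rank_recover(Y, mask, r, eta=1.0, n_iter=200):
    """Gradient step on the sampled-entry misfit, then low-rank projection.
    A generic stand-in for FLOR's gradient + SVD-projection iteration."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = mask * (X - Y)               # gradient of 0.5||mask*(X-Y)||^2
        X = svd_rank_project(X - eta * grad, r)
    return X

rng = np.random.default_rng(1)
truth = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # rank 5
mask = rng.random((60, 40)) < 0.4                                    # 40% sampled
X_hat = low_rank_recover(mask * truth, mask, r=5)
print(np.linalg.norm(X_hat - truth) / np.linalg.norm(truth))
```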
Title: SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient,
Abstract: In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH),
as well as its practical variant SARAH+, as a novel approach to the finite-sum
minimization problems. Different from the vanilla SGD and other modern
stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple
recursive framework for updating stochastic gradient estimates; when comparing
to SAG/SAGA, SARAH does not require storage of past gradients. The linear
convergence rate of SARAH is proven under a strong convexity assumption. We also
prove a linear convergence rate (in the strongly convex case) for an inner loop
of SARAH, the property that SVRG does not possess. Numerical experiments
demonstrate the efficiency of our algorithm. | [
1,
0,
1,
1,
0,
0
] |
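The defining recursion of SARAH, $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}$ inside an outer loop that starts from a full gradient, can be sketched on a least-squares finite sum. The step size and loop lengths are hypothetical tuning choices:

```python
import numpy as np

def sarah(X, y, eta=0.05, epochs=20, seed=0):
    """SARAH for the finite sum f(w) = (1/n) sum_i 0.5 (x_i'w - y_i)^2.
    Inner recursion: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]   # single-sample gradient
    for _ in range(epochs):
        v = X.T @ (X @ w - y) / n                    # full gradient (outer step)
        w_prev, w = w, w - eta * v
        for _ in range(n):                           # inner loop
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - eta * v
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true
print(np.linalg.norm(sarah(X, y) - w_true))          # small residual error
```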
Title: Spectral and Energy Efficiency of Uplink D2D Underlaid Massive MIMO Cellular Networks,
Abstract: One of the key 5G scenarios is that device-to-device (D2D) and massive
multiple-input multiple-output (MIMO) will coexist. However, interference
in the uplink D2D underlaid massive MIMO cellular networks needs to be
coordinated, due to the vast cellular and D2D transmissions. To this end, this
paper introduces a spatially dynamic power control solution for mitigating the
cellular-to-D2D and D2D-to-cellular interference. In particular, the proposed
D2D power control policy is rather flexible including the special cases of no
D2D links or using maximum transmit power. Under the considered power control,
an analytical approach is developed to evaluate the spectral efficiency (SE)
and energy efficiency (EE) in such networks. Thus, the exact expressions of SE
for a cellular user or D2D transmitter are derived, which quantify the impacts
of key system parameters such as massive MIMO antennas and D2D density.
Moreover, the D2D scale properties are obtained, which provide the sufficient
conditions for achieving the anticipated SE. Numerical results corroborate our
analysis and show that the proposed power control solution can efficiently
mitigate interference between the cellular and D2D tier. The results
demonstrate that there exists an optimal D2D density for maximizing the area
SE of D2D tier. In addition, the achievable EE of a cellular user can be
comparable to that of a D2D user. | [
1,
0,
1,
0,
0,
0
] |
Title: Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations,
Abstract: Neural networks are among the most accurate supervised learning methods in
use today, but their opacity makes them difficult to trust in critical
applications, especially when conditions in training differ from those in test.
Recent work on explanations for black-box models has produced tools (e.g. LIME)
to show the implicit rules behind predictions, which can help us identify when
models are right for the wrong reasons. However, these methods do not scale to
explaining entire datasets and cannot correct the problems they reveal. We
introduce a method for efficiently explaining and regularizing differentiable
models by examining and selectively penalizing their input gradients, which
provide a normal to the decision boundary. We apply these penalties both based
on expert annotation and in an unsupervised fashion that encourages diverse
models with qualitatively different decision boundaries for the same
classification problem. On multiple datasets, we show our approach generates
faithful explanations and models that generalize much better when conditions
differ between training and test. | [
1,
0,
0,
1,
0,
0
] |
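The input-gradient penalty above can be made concrete for a linear model: the input gradient of the decision function $w^\top x$ is $w$ itself, so penalising expert-masked input gradients reduces to a masked ridge term. A minimal sketch (the mask, penalty weight, and data are hypothetical; the paper's method applies to general differentiable models):

```python
import numpy as np

def train_rrr_logreg(X, y, mask, lam=10.0, lr=0.1, n_iter=500):
    """Logistic regression with a 'right for the right reasons' penalty.
    For a linear model the input gradient of w'x is w, so the penalty
    reduces to lam * ||mask * w||^2, where mask = 1 on features an expert
    marked as irrelevant."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(X @ w)))                 # predicted probability
        grad = X.T @ (p - y) / n + 2 * lam * mask * w  # NLL + penalty gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # only features 0, 1 matter
mask = np.array([0, 0, 1, 1, 1.0])           # penalise reliance on 2, 3, 4
print(np.round(train_rrr_logreg(X, y, mask), 2))
```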
Title: Hamiltonian structure of peakons as weak solutions for the modified Camassa-Holm equation,
Abstract: The modified Camassa-Holm (mCH) equation is a bi-Hamiltonian system
possessing $N$-peakon weak solutions, for all $N\geq 1$, in the setting of an
integral formulation which is used in analysis for studying local
well-posedness, global existence, and wave breaking for non-peakon solutions.
Unlike the original Camassa-Holm equation, the two Hamiltonians of the mCH
equation do not reduce to conserved integrals (constants of motion) for
$2$-peakon weak solutions. This perplexing situation is addressed here by
finding an explicit conserved integral for $N$-peakon weak solutions for all
$N\geq 2$. When $N$ is even, the conserved integral is shown to provide a
Hamiltonian structure with the use of a natural Poisson bracket that arises
from reduction of one of the Hamiltonian structures of the mCH equation. But
when $N$ is odd, the Hamiltonian equations of motion arising from the conserved
integral using this Poisson bracket are found to differ from the dynamical
equations for the mCH $N$-peakon weak solutions. Moreover, the lack of
conservation of the two Hamiltonians of the mCH equation when they are reduced
to $2$-peakon weak solutions is shown to extend to $N$-peakon weak solutions
for all $N\geq 2$. The connection between this loss of integrability structure
and related work by Chang and Szmigielski on the Lax pair for the mCH equation
is discussed. | [
0,
1,
0,
0,
0,
0
] |
Title: A variational derivation of the nonequilibrium thermodynamics of a moist atmosphere with rain process and its pseudoincompressible approximation,
Abstract: Irreversible processes play a major role in the description and prediction of
atmospheric dynamics. In this paper, we present a variational derivation of the
evolution equations for a moist atmosphere with rain process and subject to the
irreversible processes of viscosity, heat conduction, diffusion, and phase
transition. This derivation is based on a general variational formalism for
nonequilibrium thermodynamics which extends Hamilton's principle to
incorporates irreversible processes. It is valid for any state equation and
thus also covers the case of the atmosphere of other planets. In this approach,
the second law of thermodynamics is understood as a nonlinear constraint
formulated with the help of new variables, called thermodynamic displacements,
whose time derivative coincides with the thermodynamic force of the
irreversible process. The formulation is written both in the Lagrangian and
Eulerian descriptions and can be directly adapted to oceanic dynamics. We
illustrate the efficiency of our variational formulation as a modeling tool in
atmospheric thermodynamics, by deriving a pseudoincompressible model for moist
atmospheric thermodynamics with general equations of state and subject to the
irreversible processes of viscosity, heat conduction, diffusion, and phase
transition. | [
0,
1,
1,
0,
0,
0
] |
Title: On certain weighted 7-colored partitions,
Abstract: Inspired by Andrews' 2-colored generalized Frobenius partitions, we consider
certain weighted 7-colored partition functions and establish some interesting
Ramanujan-type identities and congruences. Moreover, we provide combinatorial
interpretations of some congruences modulo 5 and 7. Finally, we study the
properties of weighted 7-colored partitions weighted by the parity of certain
partition statistics. | [
0,
0,
1,
0,
0,
0
] |
Title: Now Playing: Continuous low-power music recognition,
Abstract: Existing music recognition applications require a connection to a server that
performs the actual recognition. In this paper we present a low-power music
recognizer that runs entirely on a mobile device and automatically recognizes
music without user interaction. To reduce battery consumption, a small music
detector runs continuously on the mobile device's DSP chip and wakes up the
main application processor only when it is confident that music is present.
Once woken, the recognizer on the application processor is provided with a few
seconds of audio which is fingerprinted and compared to the stored fingerprints
in the on-device fingerprint database of tens of thousands of songs. Our
presented system, Now Playing, has a daily battery usage of less than 1% on
average, respects user privacy by running entirely on-device and can passively
recognize a wide range of music. | [
1,
0,
0,
0,
0,
0
] |
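The on-device lookup stage above can be sketched as nearest-neighbour search over compact binary fingerprints under Hamming distance. How Now Playing actually computes its fingerprints from audio is not described in the abstract and is not reproduced here; sizes and values are hypothetical:

```python
import numpy as np

def hamming_lookup(query, db):
    """Nearest fingerprint by Hamming distance over packed uint8 bits.
    Stands in for the on-device lookup stage only."""
    # XOR against every entry, then count differing bits per entry.
    dists = np.unpackbits(query[None, :] ^ db, axis=1).sum(axis=1)
    best = int(np.argmin(dists))
    return best, int(dists[best])

rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(10_000, 32), dtype=np.uint8)  # 10k 256-bit prints
query = db[4242].copy()
query[0] ^= 0b1010                       # a few bit errors from noisy audio
print(hamming_lookup(query, db))         # (4242, 2)
```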
Title: Improving the Expected Improvement Algorithm,
Abstract: The expected improvement (EI) algorithm is a popular strategy for information
collection in optimization under uncertainty. The algorithm is widely known to
be too greedy, but nevertheless enjoys wide use due to its simplicity and
ability to handle uncertainty and noise in a coherent decision theoretic
framework. To provide rigorous insight into EI, we study its properties in a
simple setting of Bayesian optimization where the domain consists of a finite
grid of points. This is the so-called best-arm identification problem, where
the goal is to allocate measurement effort wisely to confidently identify the
best arm using a small number of measurements. In this framework, one can show
formally that EI is far from optimal. To overcome this shortcoming, we
introduce a simple modification of the expected improvement algorithm.
Surprisingly, this simple change results in an algorithm that is asymptotically
optimal for Gaussian best-arm identification problems, and provably outperforms
standard EI by an order of magnitude. | [
1,
0,
0,
1,
0,
0
] |
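For reference, the textbook Gaussian expected-improvement acquisition over a finite grid of arms looks as follows. The posterior means and standard deviations are hypothetical numbers, and the paper's modified algorithm is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Textbook Gaussian EI for maximisation:
    EI = sigma * (z * Phi(z) + phi(z)), with z = (mu - best) / sigma."""
    z = (mu - best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

# Posterior beliefs over a finite grid of arms (hypothetical numbers).
mu = np.array([1.0, 1.2, 0.8])
sigma = np.array([0.1, 0.5, 1.5])
ei = expected_improvement(mu, sigma, best=mu.max())
print(ei, "-> measure arm", int(np.argmax(ei)))
```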
Title: Definably compact groups definable in real closed fields. I,
Abstract: We study definably compact definably connected groups definable in a
sufficiently saturated real closed field $R$. We introduce the notion of
group-generic point for $\bigvee$-definable groups and show the existence of
group-generic points for definably compact groups definable in a sufficiently
saturated o-minimal expansion of a real closed field. We use this notion along
with some properties of generic sets to prove that for every definably compact
definably connected group $G$ definable in $R$ there are a connected
$R$-algebraic group $H$, a definable injective map $\phi$ from a generic
definable neighborhood of the identity of $G$ into the group $H\left(R\right)$
of $R$-points of $H$ such that $\phi$ acts as a group homomorphism inside its
domain. This result is used in [2] to prove that the o-minimal universal
covering group of an abelian connected definably compact group definable in a
sufficiently saturated real closed field $R$ is, up to locally definable
isomorphisms, an open connected locally definable subgroup of the o-minimal
universal covering group of the $R$-points of some $R$-algebraic group. | [
0,
0,
1,
0,
0,
0
] |
Title: The unreasonable effectiveness of small neural ensembles in high-dimensional brain,
Abstract: Despite the widespread consensus on the brain's complexity, sprouts of the
single neuron revolution emerged in neuroscience in the 1970s. They brought
many unexpected discoveries, including grandmother or concept cells and sparse
coding of information in the brain.
In machine learning, the famous curse of dimensionality long
seemed to be an unsolvable problem. Nevertheless, the idea of the blessing of
dimensionality has gradually become more and more popular. Ensembles of
non-interacting or weakly interacting simple units prove to be an effective
tool for solving essentially multidimensional problems. This approach is
especially useful for one-shot (non-iterative) correction of errors in large
legacy artificial intelligence systems.
These simplicity revolutions in the era of complexity have deep fundamental
reasons grounded in geometry of multidimensional data spaces. To explore and
understand these reasons we revisit the background ideas of statistical
physics. In the course of the 20th century they were developed into the
concentration of measure theory. New stochastic separation theorems reveal the
fine structure of the data clouds.
We review and analyse biological, physical, and mathematical problems at the
core of the fundamental question: how can the high-dimensional brain organise
reliable and fast learning in a high-dimensional world of data by simple tools?
Two critical applications are reviewed to exemplify the approach: one-shot
correction of errors in intellectual systems and emergence of static and
associative memories in ensembles of single neurons. | [
0,
0,
0,
0,
1,
0
] |
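As a toy illustration of the stochastic-separation phenomenon invoked in the abstract above (a sketch under our own assumptions, not the authors' theorem statement), one can check numerically that a random point in a large sample becomes linearly separable from the rest as the dimension grows:

```python
# In high dimension, a point drawn from a uniform sample tends to be
# separable from the rest by the simple functional <x, a> > (1-eps)*<a, a>.
import numpy as np

rng = np.random.default_rng(1)

def fraction_separable(dim, n_points=1000, eps=0.1, trials=200):
    ok = 0
    for _ in range(trials):
        X = rng.uniform(-1, 1, size=(n_points, dim))
        a = X[0]                                # candidate point
        rest = X[1:]
        # a is separated if no other point has a large projection onto it
        if np.all(rest @ a <= (1 - eps) * (a @ a)):
            ok += 1
    return ok / trials

for d in (2, 10, 50, 200):
    print(d, fraction_separable(d))
```

In dimension 2 almost no point is separable this way, while in dimension 200 essentially every point is, which is the "blessing of dimensionality" the abstract describes.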
Title: Threshold Selection for Multivariate Heavy-Tailed Data,
Abstract: Regular variation is often used as the starting point for modeling
multivariate heavy-tailed data. A random vector is regularly varying if and
only if its radial part $R$ is regularly varying and is asymptotically
independent of the angular part $\Theta$ as $R$ goes to infinity. The
conditional limiting distribution of $\Theta$ given that $R$ is large characterizes
the tail dependence of the random vector and hence its estimation is the
primary goal of applications. A typical strategy is to look at the angular
components of the data for which the radial parts exceed some threshold. While
a large class of methods has been proposed to model the angular distribution
from these exceedances, the choice of threshold has been scarcely discussed in
the literature. In this paper, we describe a procedure for choosing the
threshold by formally testing the independence of $R$ and $\Theta$ using a
measure of dependence called distance covariance. We generalize the limit
theorem for distance covariance to our unique setting and propose an algorithm
which selects the threshold for $R$. This algorithm incorporates a subsampling
scheme that is also applicable to weakly dependent data. Moreover, it avoids
the heavy computation in the calculation of the distance covariance, a typical
limitation for this measure. The performance of our method is illustrated on
both simulated and real data. | [
0,
0,
1,
1,
0,
0
] |
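For readers who want to experiment with the dependence measure named above, here is a direct numpy implementation of the (biased) sample distance covariance of Székely et al.; it is a generic sketch, not the paper's subsampled algorithm, and the heavy-tailed toy data are our own.

```python
# Biased sample distance covariance via double-centered distance matrices.
import numpy as np
from scipy.spatial.distance import squareform, pdist

def dcov2(x, y):
    """Squared sample distance covariance between samples x and y."""
    def centered(z):
        d = squareform(pdist(np.atleast_2d(z).reshape(len(z), -1)))
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    return (A * B).mean()

rng = np.random.default_rng(0)
r = rng.pareto(2.0, 500)             # heavy-tailed radial part
theta = rng.uniform(0, 1, 500)       # angular part, independent of r
print(dcov2(r, theta))               # close to 0 under independence
print(dcov2(r, r + theta))           # clearly positive under dependence
```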
Title: An Orchestrated Empirical Study on Deep Learning Frameworks and Platforms,
Abstract: Deep learning (DL) has recently achieved tremendous success in a variety of
cutting-edge applications, e.g., image recognition, speech and natural language
processing, and autonomous driving. Besides the available big data and hardware
evolution, DL frameworks and platforms play a key role to catalyze the
research, development, and deployment of DL intelligent solutions. However, the
difference in computation paradigm, architecture design and implementation of
existing DL frameworks and platforms brings challenges for DL software
development, deployment, maintenance, and migration. Up to the present, there
is still no comprehensive study of how current diverse DL frameworks and
platforms influence the DL software development process.
In this paper, we initiate the first step towards the investigation on how
existing state-of-the-art DL frameworks (i.e., TensorFlow, Theano, and Torch)
and platforms (i.e., server/desktop, web, and mobile) support the DL software
development activities. We perform an in-depth and comparative evaluation on
metrics such as learning accuracy, DL model size, robustness, and performance,
on state-of-the-art DL frameworks across platforms using two popular datasets
MNIST and CIFAR-10. Our study reveals that existing DL frameworks still suffer
from compatibility issues, which becomes even more severe when it comes to
different platforms. We pinpoint the current challenges and opportunities
towards developing high quality and compatible DL systems. To ignite further
investigation along this direction to address urgent industrial demands of
intelligent solutions, we make all of our assembled feasible toolchain and
dataset publicly available. | [
1,
0,
0,
0,
0,
0
] |
Title: On the complexity of topological conjugacy of compact metrizable $G$-ambits,
Abstract: In this note, we analyze the classification problem for compact metrizable
$G$-ambits for a countable discrete group $G$ from the point of view of
descriptive set theory. More precisely, we prove that the topological conjugacy
relation on the standard Borel space of compact metrizable $G$-ambits is Borel
for every countable discrete group $G$. | [
0,
0,
1,
0,
0,
0
] |
Title: Carleman Estimate for Surface in Euclidean Space at Infinity,
Abstract: This paper develops a Carleman-type estimate for immersed surfaces in
Euclidean space at infinity. With this estimate, we obtain a unique
continuation property for harmonic functions on immersed surfaces vanishing at
infinity, which leads to rigidity results in geometry. | [
0,
0,
1,
0,
0,
0
] |
Title: Formalizing Timing Diagram Requirements in Discrete Duration Calculus,
Abstract: Several temporal logics have been proposed to formalise timing diagram
requirements over hardware and embedded controllers. These include LTL,
discrete time MTL and the recent industry standard PSL. However, the succinctness
and visual structure of a timing diagram are not adequately captured by their
formulae. Interval temporal logic QDDC is a highly succinct and visual notation
for specifying patterns of behaviours.
In this paper, we propose a practically useful notation called SeCeCntnl,
which enhances the negation-free fragment of QDDC with features of nominals and
limited liveness. We show that timing diagrams can be naturally
(compositionally) and succinctly formalized in SeCeCntnl as compared with PSL
and MTL. We give a linear time translation from timing diagrams to SeCeCntnl.
As our second main result, we propose a linear time translation of SeCeCntnl
into QDDC. This allows QDDC tools such as DCVALID and DCSynth to be used for
checking consistency of timing diagram requirements as well as for automatic
synthesis of property monitors and controllers. We give examples of a minepump
controller and a bus arbiter to illustrate our tools. Giving a theoretical
analysis, we show that for the proposed SeCeCntnl, the satisfiability and model
checking have elementary complexity as compared to the non-elementary
complexity for the full logic QDDC. | [
1,
0,
0,
0,
0,
0
] |
Title: A New Compton-thick AGN in our Cosmic Backyard: Unveiling the Buried Nucleus in NGC 1448 with NuSTAR,
Abstract: NGC 1448 is one of the nearest luminous galaxies ($L_{8-1000\mu m} > 10^{9} L_{\odot}$) to ours ($z = 0.00390$), and yet the active galactic nucleus
(AGN) it hosts was only recently discovered, in 2009. In this paper, we present
an analysis of the nuclear source across three wavebands: mid-infrared (MIR)
continuum, optical, and X-rays. We observed the source with the Nuclear
Spectroscopic Telescope Array (NuSTAR), and combined these data with archival
Chandra data to perform broadband X-ray spectral fitting ($\approx$0.5-40 keV)
of the AGN for the first time. Our X-ray spectral analysis reveals that the AGN
is buried under a Compton-thick (CT) column of obscuring gas along our
line-of-sight, with a column density of $N_{\rm H}$(los) $\gtrsim$ 2.5 $\times$
10$^{24}$ cm$^{-2}$. The best-fitting torus models measured an intrinsic 2-10
keV luminosity of $L_{2-10\rm{,int}}$ $=$ (3.5-7.6) $\times$ 10$^{40}$ erg
s$^{-1}$, making NGC 1448 one of the lowest luminosity CTAGNs known. In
addition to the NuSTAR observation, we also performed optical spectroscopy for
the nucleus in this edge-on galaxy using the European Southern Observatory New
Technology Telescope. We re-classify the optical nuclear spectrum as a Seyfert
on the basis of the Baldwin-Philips-Terlevich diagnostic diagrams, thus
identifying the AGN at optical wavelengths for the first time. We also present
high spatial resolution MIR observations of NGC 1448 with Gemini/T-ReCS, in
which a compact nucleus is clearly detected. The absorption-corrected 2-10 keV
luminosity measured from our X-ray spectral analysis agrees with that predicted
from the optical [OIII]$\lambda$5007\AA\ emission line and the MIR 12$\mu$m
continuum, further supporting the CT nature of the AGN. | [
0,
1,
0,
0,
0,
0
] |
Title: Blowup constructions for Lie groupoids and a Boutet de Monvel type calculus,
Abstract: We present natural and general ways of building Lie groupoids, by using the
classical procedures of blowups and of deformations to the normal cone. Our
constructions are seen to recover many known ones involved in index theory. The
deformation and blowup groupoids obtained give rise to several extensions of
$C^*$-algebras and to full index problems. We compute the corresponding
K-theory maps. Finally, the blowup of a manifold sitting in a transverse way in
the space of objects of a Lie groupoid leads to a calculus, quite similar to
the Boutet de Monvel calculus for manifolds with boundary. | [
0,
0,
1,
0,
0,
0
] |
Title: Simulated JWST/NIRISS Transit Spectroscopy of Anticipated TESS Planets Compared to Select Discoveries from Space-Based and Ground-Based Surveys,
Abstract: The Transiting Exoplanet Survey Satellite (TESS) will embark in 2018 on a
2-year wide-field survey mission, discovering over a thousand terrestrial,
super-Earth and sub-Neptune-sized exoplanets potentially suitable for follow-up
observations using the James Webb Space Telescope (JWST). This work aims to
understand the suitability of anticipated TESS planet discoveries for
atmospheric characterization by JWST's Near InfraRed Imager and Slitless
Spectrograph (NIRISS) by employing a simulation tool to estimate the
signal-to-noise (S/N) achievable in transmission spectroscopy. We applied this
tool to Monte Carlo predictions of the TESS expected planet yield and then
compared the S/N for anticipated TESS discoveries to our estimates of S/N for
18 known exoplanets. We analyzed the sensitivity of our results to planetary
composition, cloud cover, and presence of an observational noise floor. We
found that several hundred anticipated TESS discoveries with radii from 1.5 to
2.5 times the Earth's radius will produce S/N higher than currently known
exoplanets in this radius regime, such as K2-3b or K2-3c. In the terrestrial
planet regime, we found that only a few anticipated TESS discoveries will
result in higher S/N than currently known exoplanets, such as the TRAPPIST-1
planets, GJ1132b, and LHS1140b. However, we emphasize that this outcome is
based upon Kepler-derived occurrence rates, and that co-planar compact
multi-planet systems (e.g., TRAPPIST-1) may be under-represented in the
predicted TESS planet yield. Finally, we apply our calculations to estimate the
required magnitude of a JWST follow-up program devoted to mapping the
transition region between hydrogen-dominated and high molecular weight
atmospheres. We find that a modest observing program of between 60 to 100 hours
of charged JWST time can define the nature of that transition (e.g., step
function versus a power law). | [
0,
1,
0,
0,
0,
0
] |
Title: DeepTFP: Mobile Time Series Data Analytics based Traffic Flow Prediction,
Abstract: Traffic flow prediction is an important research issue for avoiding traffic
congestion in transportation systems. Congestion can be avoided by predicting
traffic flow in advance and then planning transportation accordingly.
Achieving traffic flow prediction is challenging as the prediction is affected
by many complex factors such as inter-region traffic, vehicles' relations, and
sudden events. However, as the mobile data of vehicles has been widely
collected by sensor-embedded devices in transportation systems, it is possible
to predict the traffic flow by analysing mobile data. This study proposes a
deep learning based prediction algorithm, DeepTFP, to collectively predict the
traffic flow on each and every traffic road of a city. This algorithm uses
three deep residual neural networks to model temporal closeness, period, and
trend properties of traffic flow. Each residual neural network consists of a
branch of residual convolutional units. DeepTFP aggregates the outputs of the
three residual neural networks to optimize the parameters of a time series
prediction model. Contrast experiments on mobile time series data from the
transportation system of England demonstrate that the proposed DeepTFP
outperforms the Long Short-Term Memory (LSTM) architecture based method in
prediction accuracy. | [
1,
0,
0,
0,
0,
0
] |
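The three-branch residual design described in the abstract above can be sketched in a few lines of PyTorch. Channel counts, depth and the 1-D convolution setting are our assumptions, not DeepTFP's actual configuration.

```python
# Sketch of a residual convolutional unit and the three-branch fusion
# (closeness, period, trend); all sizes are invented for illustration.
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    def __init__(self, channels=32, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # residual (identity) connection

# Three branches, one per temporal property, merged by summation.
branches = nn.ModuleList([nn.Sequential(*[ResUnit() for _ in range(4)])
                          for _ in range(3)])
inputs = [torch.randn(8, 32, 24) for _ in range(3)]  # (batch, channel, time)
fused = sum(branch(x) for branch, x in zip(branches, inputs))
print(fused.shape)
```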
Title: The Momentum Distribution of Liquid $^4$He,
Abstract: We report high-resolution neutron Compton scattering measurements of liquid
$^4$He under saturated vapor pressure. There is excellent agreement between the
observed scattering and ab initio predictions of its lineshape. Quantum Monte
Carlo calculations predict that the Bose condensate fraction is zero in the
normal fluid, builds up rapidly just below the superfluid transition
temperature, and reaches a value of approximately $7.5\%$ below 1 K. We also
used model fit functions to obtain from the scattering data empirical estimates
for the average atomic kinetic energy and Bose condensate fraction. These
quantities are also in excellent agreement with ab initio calculations. The
convergence between the scattering data and Quantum Monte Carlo calculations is
strong evidence for a Bose broken symmetry in superfluid $^4$He. | [
0,
1,
0,
0,
0,
0
] |
Title: Self-similar minimizers of a branched transport functional,
Abstract: We solve here completely an irrigation problem from a Dirac mass to the
Lebesgue measure. The functional we consider is a two dimensional analog of a
functional previously derived in the study of branched patterns in type-I
superconductors. The minimizer we obtain is a self-similar tree. | [
0,
0,
1,
0,
0,
0
] |
Title: S-OHEM: Stratified Online Hard Example Mining for Object Detection,
Abstract: One of the major challenges in object detection is to propose detectors with
highly accurate localization of objects. The online sampling of high-loss
region proposals (hard examples) uses the multitask loss with equal weight
settings across all loss types (e.g., classification and localization, rigid and
non-rigid categories) and ignores the influence of different loss distributions
throughout the training process, which we find essential to the training
efficacy. In this paper, we present the Stratified Online Hard Example Mining
(S-OHEM) algorithm for training detectors with higher efficiency and accuracy.
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling
technique, to choose the training examples according to this influence during
hard example mining, and thus enhance the performance of object detectors. We
show through systematic experiments that S-OHEM yields an average precision
(AP) improvement of 0.5% on rigid categories of PASCAL VOC 2007 for both the
IoU threshold of 0.6 and 0.7. For KITTI 2012, both results of the same metric
are 1.6%. Regarding the mean average precision (mAP), a relative increase of
0.3% and 0.5% (1% and 0.5%) is observed for VOC07 (KITTI12) using the same set
of IoU thresholds. Also, S-OHEM is easy to integrate with existing region-based
detectors and is capable of acting with post-recognition level regressors. | [
1,
0,
0,
0,
0,
0
] |
Title: Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions,
Abstract: Embedding complex objects as vectors in low dimensional spaces is a
longstanding problem in machine learning. We propose in this work an extension
of that approach, which consists in embedding objects as elliptical probability
distributions, namely distributions whose densities have elliptical level sets.
We endow these measures with the 2-Wasserstein metric, with two important
benefits: (i) For such measures, the squared 2-Wasserstein metric has a closed
form, equal to a weighted sum of the squared Euclidean distance between means
and the squared Bures metric between covariance matrices. The latter is a
Riemannian metric between positive semi-definite matrices, which turns out to
be Euclidean on a suitable factor representation of such matrices, which is
valid on the entire geodesic between these matrices. (ii) The 2-Wasserstein
distance boils down to the usual Euclidean metric when comparing Diracs, and
therefore provides a natural framework to extend point embeddings. We show that
for these reasons Wasserstein elliptical embeddings are more intuitive and
yield tools that are better behaved numerically than the alternative choice of
Gaussian embeddings with the Kullback-Leibler divergence. In particular, and
unlike previous work based on the KL geometry, we learn elliptical
distributions that are not necessarily diagonal. We demonstrate the advantages
of elliptical embeddings by using them for visualization, to compute embeddings
of words, and to reflect entailment or hypernymy. | [
0,
0,
0,
1,
0,
0
] |
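The closed form cited in point (i) above is easy to state in code: the squared 2-Wasserstein distance between two Gaussians is the squared Euclidean distance between the means plus the squared Bures metric between the covariances. A small numerical check (our example values):

```python
# Closed-form squared 2-Wasserstein distance between Gaussian measures.
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, C1, m2, C2):
    rC1 = sqrtm(C1)
    bures2 = np.trace(C1 + C2 - 2 * sqrtm(rC1 @ C2 @ rC1))  # squared Bures
    return np.sum((m1 - m2) ** 2) + np.real(bures2)

m1, m2 = np.zeros(2), np.array([1.0, 0.0])
C1 = np.eye(2)
C2 = np.array([[2.0, 0.3], [0.3, 1.0]])
print(w2_gaussian(m1, C1, m2, C2))
```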
Title: Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines,
Abstract: This paper describes our participation in Task 5 track 2 of SemEval 2017 to
predict the sentiment of financial news headlines for a specific company on a
continuous scale between -1 and 1. We tackled the problem using a number of
approaches, utilising a Support Vector Regression (SVR) and a Bidirectional
Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM
model over the SVR and came fourth in the track. We report a number of
different evaluations using a finance-specific word embedding model and reflect
on the effects of using different evaluation metrics. | [
1,
0,
0,
0,
0,
0
] |
Title: Exploring light mediators with low-threshold direct detection experiments,
Abstract: We explore the potential of future cryogenic direct detection experiments to
determine the properties of the mediator that communicates the interactions
between dark matter and nuclei. Due to their low thresholds and large
exposures, experiments like CRESST-III, SuperCDMS SNOLAB and EDELWEISS-III will
have excellent capability to reconstruct mediator masses in the MeV range for a
large class of models. Combining the information from several experiments
further improves the parameter reconstruction, even when taking into account
additional nuisance parameters related to background uncertainties and the dark
matter velocity distribution. These observations may offer the intriguing
possibility of studying dark matter self-interactions with direct detection
experiments. | [
0,
1,
0,
0,
0,
0
] |
Title: Surface Networks,
Abstract: We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely the Graph Neural Networks (GNNs) and its spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing their modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions --- this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs. | [
1,
0,
0,
1,
0,
0
] |
Title: Driving an Ornstein--Uhlenbeck Process to Desired First-Passage Time Statistics,
Abstract: First-passage time (FPT) of an Ornstein-Uhlenbeck (OU) process is of immense
interest in a variety of contexts. This paper considers an OU process with two
boundaries, one of which is absorbing while the other one could be either
reflecting or absorbing, and studies the control strategies that can lead to
desired FPT moments. Our analysis shows that the FPT distribution of an OU
process is scale invariant with respect to the drift parameter, i.e., the drift
parameter just controls the mean FPT and does not affect the shape of the
distribution. This allows one to independently control the mean and coefficient of
variation (CV) of the FPT. We show that increasing the threshold may
increase or decrease the CV of the FPT, depending upon whether or not one of the
thresholds is reflecting. We also explore the effect of control parameters on
the FPT distribution, and find parameters that minimize the distance between
the FPT distribution and a desired distribution. | [
0,
0,
1,
0,
0,
0
] |
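A crude Monte-Carlo sketch of the quantity studied above: simulate an OU process dX = (mu - X) dt + sigma dW up to an absorbing threshold and estimate the mean and CV of the first-passage time. The parameter values and the Euler-Maruyama discretization are ours, purely for illustration.

```python
# Euler-Maruyama estimate of OU first-passage-time statistics.
import numpy as np

def ou_fpt(mu=1.0, sigma=0.5, threshold=0.8, x0=0.0, dt=1e-3,
           n_paths=500, t_max=50.0, seed=0):
    rng = np.random.default_rng(seed)
    fpts = []
    for _ in range(n_paths):
        x, t = x0, 0.0
        while x < threshold and t < t_max:
            x += (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        fpts.append(t)
    fpts = np.array(fpts)
    return fpts.mean(), fpts.std() / fpts.mean()   # mean FPT and its CV

mean_fpt, cv = ou_fpt()
print(f"mean FPT = {mean_fpt:.3f}, CV = {cv:.3f}")
```

Re-running with a rescaled drift leaves the estimated CV essentially unchanged, consistent with the scale invariance claimed in the abstract.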
Title: The self-consistent Dyson equation and self-energy functionals: failure or new opportunities?,
Abstract: Perturbation theory using self-consistent Green's functions is one of the
most widely used approaches to study many-body effects in condensed matter. On
the basis of general considerations and by performing analytical calculations
for the specific example of the Hubbard atom, we discuss some key features of
this approach. We show that when the domain of the functionals that are used to
realize the map between the non-interacting and the interacting Green's
functions is properly defined, there exists a class of self-energy functionals
for which the self-consistent Dyson equation has only one solution, which is
the physical one. We also show that manipulation of the perturbative expansion
of the interacting Green's function may lead to a wrong self-energy as
functional of the interacting Green's function, at least for some regions of
the parameter space. These findings confirm and explain numerical results of
Kozik et al. for the widely used skeleton series of Luttinger and Ward [Phys.
Rev. Lett. 114, 156402]. Our study shows that it is important to distinguish
between the maps between sets of functions and the functionals that realize
those maps. We demonstrate that the self-consistent Green's functions approach
itself is not problematic, whereas the functionals that are widely used may
have a limited range of validity. | [
0,
1,
0,
0,
0,
0
] |
Title: Sparse and Smooth Prior for Bayesian Linear Regression with Application to ETEX Data,
Abstract: Sparsity of the solution of a linear regression model is a common
requirement, and many prior distributions have been designed for this purpose.
A combination of the sparsity requirement with smoothness of the solution is
also common in application, however, with considerably fewer existing prior
models. In this paper, we compare two prior structures, the Bayesian fused
lasso (BFL) and least-squares with adaptive prior covariance matrix (LS-APC).
Since only a variational solution was published for the latter, we derive a Gibbs
sampling algorithm for its inference and Bayesian model selection. The method
is designed for high dimensional problems, therefore, we discuss numerical
issues associated with evaluation of the posterior. In simulation, we show that
the LS-APC prior achieves results comparable to that of the Bayesian Fused
Lasso for piecewise constant parameter and outperforms the BFL for parameters
of more general shapes. Another advantage of the LS-APC priors is revealed in
a real application to the estimation of the release profile of the European Tracer
Experiment (ETEX). Specifically, the LS-APC model provides more conservative
uncertainty bounds when the regressor matrix is not informative. | [
0,
0,
0,
1,
0,
0
] |
Title: Certifying Some Distributional Robustness with Principled Adversarial Training,
Abstract: Neural networks are vulnerable to adversarial examples and researchers have
proposed many heuristic attack and defense mechanisms. We address this problem
through the principled lens of distributionally robust optimization, which
guarantees performance under adversarial input perturbations. By considering a
Lagrangian penalty formulation of perturbing the underlying data distribution
in a Wasserstein ball, we provide a training procedure that augments model
parameter updates with worst-case perturbations of training data. For smooth
losses, our procedure provably achieves moderate levels of robustness with
little computational or statistical cost relative to empirical risk
minimization. Furthermore, our statistical guarantees allow us to efficiently
certify robustness for the population loss. For imperceptible perturbations,
our method matches or outperforms heuristic approaches. | [
1,
0,
0,
1,
0,
0
] |
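The Lagrangian inner step described above can be sketched as gradient ascent on the penalized objective loss(x') - gamma * ||x' - x||^2. The PyTorch fragment below is a hedged approximation of that idea; the model, gamma, step count and learning rate are placeholders, not the paper's certified procedure.

```python
# Approximate worst-case Wasserstein perturbation of an input batch.
import torch

def worst_case_perturbation(model, loss_fn, x, y, gamma=1.0,
                            steps=15, lr=0.1):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = loss_fn(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv += lr * grad            # ascend the penalized objective
    return x_adv.detach()

model = torch.nn.Sequential(torch.nn.Linear(10, 2))
loss_fn = torch.nn.CrossEntropyLoss()
x, y = torch.randn(4, 10), torch.tensor([0, 1, 0, 1])
x_adv = worst_case_perturbation(model, loss_fn, x, y)
loss = loss_fn(model(x_adv), y)           # train on the perturbed batch
loss.backward()
```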
Title: The minimal hidden computer needed to implement a visible computation,
Abstract: Master equations are commonly used to model the dynamics of physical systems.
Surprisingly, many deterministic maps $x \rightarrow f(x)$ cannot be
implemented by any master equation, even approximately. This raises the
question of how they arise in real-world systems like digital computers. We
show that any deterministic map over some "visible" states can be implemented
with a master equation--but only if additional "hidden" states are dynamically
coupled to those visible states. We also show that any master equation
implementing a given map can be decomposed into a sequence of "hidden"
timesteps, demarcated by changes in what transitions are allowed under the rate
matrix. Often there is a real-world cost for each additional hidden state, and
for each additional hidden timestep. We derive the associated "space/time"
tradeoff between the numbers of hidden states and of hidden timesteps needed to
implement any given $f(x)$. | [
1,
1,
0,
0,
0,
0
] |
Title: A multi-task convolutional neural network for mega-city analysis using very high resolution satellite imagery and geospatial data,
Abstract: Mega-city analysis with very high resolution (VHR) satellite images has been
drawing increasing interest in the fields of city planning and social
investigation. It is known that accurate land-use, urban density, and
population distribution information is the key to mega-city monitoring and
environmental studies. Therefore, how to generate land-use, urban density, and
population distribution maps at a fine scale using VHR satellite images has
become a hot topic. Previous studies have focused solely on individual tasks
with elaborate hand-crafted features and have ignored the relationship between
different tasks. In this study, we aim to propose a universal framework which
can: 1) automatically learn the internal feature representation from the raw
image data; and 2) simultaneously produce fine-scale land-use, urban density,
and population distribution maps. For the first target, a deep convolutional
neural network (CNN) is applied to learn the hierarchical feature
representation from the raw image data. For the second target, a novel
CNN-based universal framework is proposed to process the VHR satellite images
and generate the land-use, urban density, and population distribution maps. To
the best of our knowledge, this is the first CNN-based mega-city analysis
method which can process a VHR remote sensing image with such a large data
volume. A VHR satellite image (1.2 m spatial resolution) of the center of Wuhan
covering an area of 2606 km$^2$ was used to evaluate the proposed method. The
experimental results confirm that the proposed method can achieve a promising
accuracy for land-use, urban density, and population distribution maps. | [
1,
0,
0,
0,
0,
0
] |
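A minimal sketch of the universal multi-task framework described above: one shared convolutional backbone with three heads for the land-use, urban density and population maps. All layer sizes are invented for illustration.

```python
# Shared-encoder multi-task CNN with three per-pixel prediction heads.
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, n_landuse=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.landuse = nn.Conv2d(64, n_landuse, 1)  # per-pixel classes
        self.density = nn.Conv2d(64, 1, 1)          # urban density map
        self.population = nn.Conv2d(64, 1, 1)       # population map

    def forward(self, x):
        f = self.backbone(x)
        return self.landuse(f), self.density(f), self.population(f)

model = MultiTaskCNN()
out = model(torch.randn(1, 3, 64, 64))
print([o.shape for o in out])
```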
Title: Secure Search on the Cloud via Coresets and Sketches,
Abstract: \emph{Secure Search} is the problem of retrieving from a database table (or
any unsorted array) the records matching specified attributes, as in SQL SELECT
queries, but where the database and the query are encrypted. Secure search has
been the leading example for practical applications of Fully Homomorphic
Encryption (FHE) starting in Gentry's seminal work; however, to the best of our
knowledge all state-of-the-art secure search algorithms to date are realized by
a polynomial of degree $\Omega(m)$ for $m$ the number of records, which is
typically too slow in practice even for moderate size $m$.
In this work we present the first algorithm for secure search that is
realized by a polynomial of degree polynomial in $\log m$. We implemented our
algorithm in an open-source library based on the HElib implementation of the
Brakerski-Gentry-Vaikuntanathan FHE scheme, and ran experiments on Amazon's
EC2 cloud. Our experiments show that we can retrieve the first match in a
database of millions of entries in less than an hour using a single machine;
the time decreased almost linearly with the number of machines.
Our result utilizes a new paradigm of employing coresets and sketches, which
are modern data summarization techniques common in computational geometry and
machine learning, to enhance the efficiency of homomorphic encryption. As a
central tool we design a novel sketch that returns the first positive entry in
a (not necessarily sparse) array; this sketch may be of independent interest. | [
1,
0,
0,
0,
0,
0
] |
Title: LATTES: a novel detector concept for a gamma-ray experiment in the Southern hemisphere,
Abstract: The Large Array Telescope for Tracking Energetic Sources (LATTES), is a novel
concept for an array of hybrid EAS detectors, composed of a Resistive
Plate Counter array coupled to a Water Cherenkov Detector, planned to cover
gamma rays from less than 100 GeV up to 100 TeV. This experiment, to be
installed at high altitude in South America, could cover the existing gap in
sensitivity between satellite and ground arrays.
The low energy threshold, large duty cycle and wide field of view of LATTES
makes it a powerful tool to detect transient phenomena and perform long term
observations of variable sources. Moreover, given its characteristics, it would
be fully complementary to the planned Cherenkov Telescope Array (CTA) as it
would be able to issue alerts.
In this talk, a description of its main features and capabilities, as well as
results on its expected performance, and sensitivity, will be presented. | [
0,
1,
0,
0,
0,
0
] |
Title: Hyperbolicity as an obstruction to smoothability for one-dimensional actions,
Abstract: Ghys and Sergiescu proved in the $80$s that Thompson's group $T$, and hence
$F$, admits actions by $C^{\infty}$ diffeomorphisms of the circle. They proved
that the standard actions of these groups are topologically conjugate to a
group of $C^\infty$ diffeomorphisms. Monod defined a family of groups of
piecewise projective homeomorphisms, and Lodha-Moore defined finitely
presentable groups of piecewise projective homeomorphisms. These groups are of
particular interest because they are nonamenable and contain no free subgroup.
In contrast to the result of Ghys-Sergiescu, we prove that the groups of Monod
and Lodha-Moore are not topologically conjugate to a group of $C^1$
diffeomorphisms.
Furthermore, we show that the group of Lodha-Moore has no nonabelian $C^1$
action on the interval. We also show that many of Monod's groups $H(A)$, for
instance when $A$ is such that $\mathsf{PSL}(2,A)$ contains a rational
homothety $x\mapsto \tfrac{p}{q}x$, do not admit a $C^1$ action on the
interval. The obstruction comes from the existence of hyperbolic fixed points
for $C^1$ actions. With slightly different techniques, we also show that some
groups of piecewise affine homeomorphisms of the interval or the circle are not
smoothable. | [
0,
0,
1,
0,
0,
0
] |
Title: Near-perfect spin filtering and negative differential resistance in an Fe(II)S complex,
Abstract: Density functional theory and nonequilibrium Green's function calculations
have been used to explore spin-resolved transport through the high-spin state
of an iron(II)sulfur single molecular magnet. Our results show that this
molecule exhibits near-perfect spin filtering, where the spin-filtering
efficiency is above 99%, as well as significant negative differential
resistance centered at a low bias voltage. The rise in the spin-up conductivity
up to the bias voltage of 0.4 V is dominated by a conductive lowest unoccupied
molecular orbital, and this is accompanied by a slight increase in the magnetic
moment of the Fe atom. The subsequent drop in the spin-up conductivity is
because the conductive channel moves to the highest occupied molecular orbital
which has a lower conductance contribution. This is accompanied by a drop in
the magnetic moment of the Fe atom. These two exceptional properties, and the
fact that the onset of negative differential resistance occurs at low bias
voltage, suggests the potential of the molecule in nanoelectronic and
nanospintronic applications. | [
0,
1,
0,
0,
0,
0
] |
Title: Structural, elastic, electronic, and bonding properties of intermetallic Nb3Pt and Nb3Os compounds: a DFT study,
Abstract: Theoretical investigation of structural, elastic, electronic and bonding
properties of A-15 Nb-based intermetallic compounds Nb3B (B = Pt, Os) have been
performed using first principles calculations based on the density functional
theory (DFT). Optimized cell parameters are found to be in good agreement with
available experimental and theoretical results. The elastic constants at zero
pressure and temperature are calculated and the anisotropic behaviors of the
compounds are studied. Both the compounds are mechanically stable and ductile
in nature. Other elastic properties such as Pugh's ratio, Cauchy pressure,
machinability index are derived for the first time. Nb3Os is expected to have
good lubricating properties compared to Nb3Pt. The electronic band structure
and energy density of states (DOS) have been studied with and without
spin-orbit coupling (SOC). The band structures of both the compounds are spin
symmetric. Electronic band structure and DOS reveal that both the compounds are
metallic and the conductivity mainly arises from the Nb 4d states. The Fermi
surface features have been studied for the first time. The Fermi surfaces of
Nb3B contain both hole- and electron-like sheets which change as one replaces
Pt with Os. The electronic charge density distribution shows that Nb3Pt and
Nb3Os both have a mixture of ionic and covalent bonding. The charge transfer
between atomic species in these compounds has been explained by the Mulliken
bond population analysis. | [
0,
1,
0,
0,
0,
0
] |
Title: Clustering and Model Selection via Penalized Likelihood for Different-sized Categorical Data Vectors,
Abstract: In this study, we consider unsupervised clustering of categorical vectors
that can be of different sizes using mixture models. We use likelihood maximization to
estimate the parameters of the underlying mixture model and a penalization
technique to select the number of mixture components. Regardless of the true
distribution that generated the data, we show that an explicit penalty, known
up to a multiplicative constant, leads to a non-asymptotic oracle inequality
with the Kullback-Leibler divergence on the two sides of the inequality. This
theoretical result is illustrated by a document clustering application. To this
aim, a novel robust expectation-maximization algorithm is proposed to estimate
the mixture parameters that best represent the different topics. Slope
heuristics are used to calibrate the penalty and to select a number of
clusters. | [
0,
0,
1,
1,
0,
0
] |
Title: Topology reveals universal features for network comparison,
Abstract: The topology of any complex system is key to understanding its structure and
function. Fundamentally, algebraic topology guarantees that any system
represented by a network can be understood through its closed paths. The length
of each path provides a notion of scale, which is vitally important in
characterizing dominant modes of system behavior. Here, by combining topology
with scale, we prove the existence of universal features which reveal the
dominant scales of any network. We use these features to compare several
canonical network types in the context of a social media discussion which
evolves through the sharing of rumors, leaks and other news. Our analysis
enables for the first time a universal understanding of the balance between
loops and tree-like structure across network scales, and an assessment of how
this balance interacts with the spreading of information online. Crucially, our
results allow networks to be quantified and compared in a purely model-free way
that is theoretically sound, fully automated, and inherently scalable. | [
1,
0,
1,
1,
0,
0
] |
Title: Gated Recurrent Networks for Seizure Detection,
Abstract: Recurrent Neural Networks (RNNs) with sophisticated units that implement a
gating mechanism have emerged as a powerful technique for modeling sequential
signals such as speech or electroencephalography (EEG). The latter is the focus
of this paper. A significant big data resource, known as the TUH EEG Corpus
(TUEEG), has recently become available for EEG research, creating a unique
opportunity to evaluate these recurrent units on the task of seizure detection.
In this study, we compare two types of recurrent units: long short-term memory
units (LSTM) and gated recurrent units (GRU). These are evaluated using a
state-of-the-art hybrid architecture that integrates Convolutional Neural Networks
(CNNs) with RNNs. We also investigate a variety of initialization methods and
show that initialization is crucial since poorly initialized networks cannot be
trained. Furthermore, we explore regularization of these convolutional gated
recurrent networks to address the problem of overfitting. Our experiments
revealed that convolutional LSTM networks can achieve significantly better
performance than convolutional GRU networks. The convolutional LSTM
architecture with proper initialization and regularization delivers 30%
sensitivity at 6 false alarms per 24 hours. | [
0,
0,
0,
1,
0,
0
] |
Title: Non-convex Conditional Gradient Sliding,
Abstract: We investigate a projection-free method, namely conditional gradient sliding
(CGS), on batched, stochastic and finite-sum non-convex problems. CGS is a smart
combination of Nesterov's accelerated gradient method and Frank-Wolfe (FW)
method, and outperforms FW in the convex setting by saving gradient
computations. However, the study of CGS in the non-convex setting is limited.
In this paper, we propose the non-convex conditional gradient sliding (NCGS)
which surpasses the non-convex Frank-Wolfe method in batched, stochastic and
finite-sum setting. | [
0,
0,
1,
0,
0,
0
] |
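For context on the building block named above, here is a plain Frank-Wolfe (conditional gradient) iteration over the probability simplex, where a linear minimization oracle replaces projection; the quadratic toy objective is our own, not one of the paper's benchmarks.

```python
# Plain Frank-Wolfe over the probability simplex.
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=200):
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle on simplex
        gamma = 2.0 / (t + 2.0)        # standard step size
        x = (1 - gamma) * x + gamma * s
    return x

A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 1.0])
grad = lambda x: A @ x - b             # gradient of 0.5 x^T A x - b^T x
x0 = np.ones(3) / 3
print(frank_wolfe_simplex(grad, x0))
```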
Title: Copolar convexity,
Abstract: We introduce a new operation, copolar addition, on unbounded convex subsets
of the positive orthant of real euclidean space and establish convexity of the
covolumes of the corresponding convex combinations. The proof is based on a
technique of geodesics of plurisubharmonic functions. As an application, we
show that there are no relative extremal functions inside a non-constant
geodesic curve between two toric relative extremal functions. | [
0,
0,
1,
0,
0,
0
] |
Title: Coexistence of quantum and classical flows in quantum turbulence in the $T=0$ limit,
Abstract: Tangles of quantized vortex line of initial density ${\cal L}(0) \sim 6\times
10^3$\,cm$^{-2}$ and variable amplitude of fluctuations of flow velocity $U(0)$
at the largest length scale were generated in superfluid $^4$He at $T=0.17$\,K,
and their free decay ${\cal L}(t)$ was measured. If $U(0)$ is small, the excess
random component of vortex line length firstly decays as ${\cal L} \propto
t^{-1}$ until it becomes comparable with the structured component responsible
for the classical velocity field, and the decay changes to ${\cal L} \propto
t^{-3/2}$. The latter regime always ultimately prevails, provided the classical
description of $U$ holds. A quantitative model of coexisting cascades of
quantum and classical energies describes all regimes of the decay. | [
0,
1,
0,
0,
0,
0
] |
Title: Four-dimensional Lens Space Index from Two-dimensional Chiral Algebra,
Abstract: We study the supersymmetric partition function on $S^1 \times L(r, 1)$, or
the lens space index of four-dimensional $\mathcal{N}=2$ superconformal field
theories and their connection to two-dimensional chiral algebras. We primarily
focus on free theories as well as Argyres-Douglas theories of type $(A_1, A_k)$
and $(A_1, D_k)$. We observe that in specific limits, the lens space index is
reproduced in terms of the (refined) character of an appropriately twisted
module of the associated two-dimensional chiral algebra or a generalized vertex
operator algebra. The particular twisted module is determined by the choice of
discrete holonomies for the flavor symmetry in four dimensions. | [
0,
0,
1,
0,
0,
0
] |
Title: Wave propagation modelling in various microearthquake environments using a spectral-element method,
Abstract: Simulation of wave propagation in a microearthquake environment is often
challenging due to small-scale structural and material heterogeneities. We
simulate wave propagation in three different real microearthquake environments
using a spectral-element method. In the first example, we compute the full
wavefield in 2D and 3D models of an underground ore mine, namely the Pyhaesalmi
mine in Finland. In the second example, we simulate wave propagation in a
homogeneous velocity model including the actual topography of an unstable rock
slope at Aaknes in western Norway. Finally, we compute the full wavefield for a
weakly anisotropic cylindrical sample at laboratory scale, which was used for
an acoustic emission experiment under triaxial loading. We investigate the
characteristic features of wave propagation in those models and compare
synthetic waveforms with observed waveforms wherever possible. We illustrate
the challenges associated with the spectral-element simulation in those models. | [
0,
1,
0,
0,
0,
0
] |
Title: Fast Snapshottable Concurrent Braun Heaps,
Abstract: This paper proposes a new concurrent heap algorithm, based on a stateless
shape property, which efficiently maintains balance during insert and removeMin
operations implemented with hand-over-hand locking. It also provides an O(1)
linearizable snapshot operation based on lazy copy-on-write semantics. Such
snapshots can be used to provide consistent views of the heap during iteration,
as well as to make speculative updates (which can later be dropped).
The simplicity of the algorithm allows it to be easily proven correct, and
the choice of shape property provides priority queue performance which is
competitive with highly optimized skiplist implementations (and has stronger
bounds on worst-case time complexity).
A Scala reference implementation is provided. | [
1,
0,
0,
0,
0,
0
] |
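The abstract's stateless shape property is the Braun invariant: at every node, size(left) equals size(right) or size(right) + 1. A tiny purely functional insert (our Python transcription; the paper's reference implementation is in Scala and concurrent) shows how inserting into the right subtree and swapping subtrees preserves it:

```python
# Purely functional Braun min-heap insert; heap is None or (value, left, right).
def insert(heap, x):
    if heap is None:
        return (x, None, None)
    v, left, right = heap
    small, large = (x, v) if x <= v else (v, x)
    # Insert into the old right subtree and swap subtrees, which keeps
    # the two subtree sizes within one of each other.
    return (small, insert(right, large), left)

h = None
for k in [5, 3, 8, 1, 9, 2]:
    h = insert(h, k)
print(h[0])   # 1: the minimum sits at the root
```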
Title: Geometric clustering in normed planes,
Abstract: Given two sets of points $A$ and $B$ in a normed plane, we prove that there
are two linearly separable sets $A'$ and $B'$ such that $\mathrm{diam}(A')\leq
\mathrm{diam}(A)$, $\mathrm{diam}(B')\leq \mathrm{diam}(B)$, and $A'\cup
B'=A\cup B.$ This extends a result for the Euclidean distance to symmetric
convex distance functions. As a consequence, some Euclidean $k$-clustering
algorithms are adapted to normed planes, for instance, those that minimize the
maximum, the sum, or the sum of squares of the $k$ cluster diameters. The
2-clustering problem when two different bounds are imposed on the diameters is
also solved. The Hershberger-Suri data structure for managing ball hulls can
be useful in this context. | [
0,
0,
1,
0,
0,
0
] |
Title: Laplacian networks: growth, local symmetry and shape optimization,
Abstract: Inspired by river networks and other structures formed by Laplacian growth,
we use the Loewner equation to investigate the growth of a network of thin
fingers in a diffusion field. We first review previous contributions to
illustrate how this formalism reduces the network's expansion to three rules,
which respectively govern the velocity, the direction, and the nucleation of
its growing branches. This framework allows us to establish the mathematical
equivalence between three formulations of the direction rule, namely geodesic
growth, growth that maintains local symmetry and growth that maximizes flux
into tips for a given amount of growth. Surprisingly, we find that this growth
rule may result in a network different from the static configuration that
optimizes flux into tips. | [
0,
1,
0,
0,
0,
0
] |
Title: Automatic Vector-based Road Structure Mapping Using Multi-beam LiDAR,
Abstract: In this paper, we studied a SLAM method for vector-based road structure
mapping using multi-beam LiDAR. We propose to use the polyline as the primary
mapping element instead of grid cell or point cloud, because the vector-based
representation is precise and lightweight, and it can directly generate
vector-based High-Definition (HD) driving map as demanded by autonomous driving
systems. We explored: 1) the extraction and vectorization of road structures
based on local probabilistic fusion. 2) the efficient vector-based matching
between frames of road structures. 3) the loop closure and optimization based
on the pose-graph. In this study, we took a specific road structure, the road
boundary, as an example. We applied the proposed matching method in three
different scenes and achieved an average absolute matching error of 0.07. We
further applied the mapping system to an urban road with a length of 860
meters and achieved an average global accuracy of 0.466 m without the help of
high precision GPS. | [
1,
0,
0,
0,
0,
0
] |
Title: Schwarzian derivatives, projective structures, and the Weil-Petersson gradient flow for renormalized volume,
Abstract: To a complex projective structure $\Sigma$ on a surface, Thurston associates
a locally convex pleated surface. We derive bounds on the geometry of both in
terms of the norms $\|\phi_\Sigma\|_\infty$ and $\|\phi_\Sigma\|_2$ of the
quadratic differential $\phi_\Sigma$ of $\Sigma$ given by the Schwarzian
derivative of the associated locally univalent map. We show that these give a
unifying approach that generalizes a number of important, well known results
for convex cocompact hyperbolic structures on 3-manifolds, including bounds on
the Lipschitz constant for the nearest-point retraction and the length of the
bending lamination. We then use these bounds to begin a study of the
Weil-Petersson gradient flow of renormalized volume on the space $CC(N)$ of
convex cocompact hyperbolic structures on a compact manifold $N$ with
incompressible boundary, leading to a proof of the conjecture that the
renormalized volume has infimum given by one-half the simplicial volume of
$DN$, the double of $N$. | [
0,
0,
1,
0,
0,
0
] |
Title: Inferring Structural Characteristics of Networks with Strong and Weak Ties from Fixed-Choice Surveys,
Abstract: Knowing the structure of an offline social network facilitates a variety of
analyses, including studying the rate at which infectious diseases may spread
and identifying a subset of actors to immunize in order to reduce, as much as
possible, the rate of spread. Offline social network topologies are typically
estimated by surveying actors and asking them to list their neighbours. While
identifying close friends and family (i.e., strong ties) can typically be done
reliably, listing all of one's acquaintances (i.e., weak ties) is subject to
error due to respondent fatigue. This issue is commonly circumvented through
the use of so-called "fixed choice" surveys where respondents are asked to name
a fixed, small number of their weak ties (e.g., two or ten). Of course, the
resulting crude observed network will omit many ties, and using this crude
network to infer properties of the network, such as its degree distribution or
clustering coefficient, will lead to biased estimates. This paper develops
estimators, based on the method of moments, for a number of network
characteristics including those related to the first and second moments of the
degree distribution as well as the network size, using fixed-choice survey
data. Experiments with simulated data illustrate that the proposed estimators
perform well across a variety of network topologies and measurement scenarios,
and the resulting estimates are significantly more accurate than those obtained
directly using the crude observed network, which are commonly used in the
literature. We also describe a variation of the Jackknife procedure that can be
used to obtain an estimate of the estimator variance. | [
1,
1,
0,
0,
0,
0
] |
Title: A Method Of Detecting Gravitational Wave Based On Time-frequency Analysis And Convolutional Neural Networks,
Abstract: This work investigated the detection of gravitational wave (GW) from
simulated damped sinusoid signals contaminated with Gaussian noise. We proposed
to treat it as a classification problem with one class bearing our special
attention. The two successive steps of the proposed scheme are as follows:
first, decompose the data using a wavelet packet and represent the GW signal
and noise using the derived decomposition coefficients; second, detect the
existence of GW using a convolutional neural network (CNN). To reflect our
special attention on searching GW signals, the performance is evaluated using
not only the traditional classification accuracy (correct ratio), but also
receiver operating characteristic (ROC) curve, and experiments show excelllent
performances on both evaluation measures. The generalization of a proposed
searching scheme on GW model parameter and possible extensions to other data
analysis tasks are crucial for a machine learning based approach. On this
aspect, experiments shows that there is no significant difference between GW
model parameters on identification performances by our proposed scheme.
Therefore, the proposed scheme has excellent generalization and could be used
to search for non-trained and un-known GW signals or glitches in the future GW
astronomy era. | [
0,
1,
0,
0,
0,
0
] |
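The first step of the two-step scheme above can be sketched with the PyWavelets package (our choice; the paper does not name its implementation). The wavelet, decomposition depth and signal parameters below are illustrative.

```python
# Wavelet-packet decomposition of a noisy damped sinusoid into coefficient
# vectors that a CNN would then classify; assumes PyWavelets is installed.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.exp(-4 * t) * np.sin(2 * np.pi * 60 * t)   # damped sinusoid
data = signal + 0.5 * rng.standard_normal(t.size)      # add Gaussian noise

wp = pywt.WaveletPacket(data=data, wavelet="db4", maxlevel=4)
coeffs = np.concatenate([node.data for node in wp.get_level(4, "natural")])
print(coeffs.shape)   # feature vector to feed into a CNN classifier
```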
Title: Phonemic and Graphemic Multilingual CTC Based Speech Recognition,
Abstract: Training automatic speech recognition (ASR) systems requires large amounts of
data in the target language in order to achieve good performance. Whereas large
training corpora are readily available for languages like English, there exists
a long tail of languages which suffer from a lack of resources. One method
to handle data sparsity is to use data from additional source languages and
build a multilingual system. Recently, ASR systems based on recurrent neural
networks (RNNs) trained with connectionist temporal classification (CTC) have
gained substantial research interest. In this work, we extended our previous
approach towards training CTC-based systems multilingually. Our systems feature
a global phone set, based on the joint phone sets of each source language. We
evaluated the use of different language combinations as well as the addition of
Language Feature Vectors (LFVs). As a contrastive experiment, we built systems
based on graphemes as well. Systems having a multilingual phone set are known
to suffer in performance compared to their monolingual counterparts. With our
proposed approach, we could reduce the gap between these mono- and multilingual
setups, using either graphemes or phonemes. | [
1,
0,
0,
0,
0,
0
] |
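For reference, the CTC criterion these systems are trained with is available directly in PyTorch; the sketch below (with an invented global phone set size) shows the expected tensor shapes.

```python
# Minimal use of PyTorch's CTC loss; sizes are illustrative only.
import torch
import torch.nn as nn

T, N, C = 50, 4, 40          # time steps, batch, phone set incl. blank
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, 12), dtype=torch.long)  # phone indices
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```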
Title: Model-Based Clustering of Time-Evolving Networks through Temporal Exponential-Family Random Graph Models,
Abstract: Dynamic networks are a general language for describing time-evolving complex
systems, and discrete time network models provide an emerging statistical
technique for various applications. It is a fundamental research question to
detect the community structure in time-evolving networks. However, due to
significant computational challenges and difficulties in modeling communities
of time-evolving networks, little progress has been made in the current
literature on effectively finding communities in time-evolving networks. In this work, we
propose a novel model-based clustering framework for time-evolving networks
based on discrete time exponential-family random graph models. To choose the
number of communities, we use conditional likelihood to construct an effective
model selection criterion. Furthermore, we propose an efficient variational
expectation-maximization (EM) algorithm to find approximate maximum likelihood
estimates of network parameters and mixing proportions. By using variational
methods and minorization-maximization (MM) techniques, our method has appealing
scalability for large-scale time-evolving networks. The power of our method is
demonstrated in simulation studies and empirical applications to international
trade networks and the collaboration networks of a large American research
university. | [
0,
0,
0,
1,
0,
0
] |
Title: Epi-two-dimensional fluid flow: a new topological paradigm for dimensionality,
Abstract: While a variety of fundamental differences are known to separate
two-dimensional (2D) and three-dimensional (3D) fluid flows, it is not well
understood how they are related. Conventionally, dimensional reduction is
justified by an \emph{a priori} geometrical framework; i.e., 2D flows occur
under some geometrical constraint such as shallowness. However, deeper inquiry
into 3D flow often finds the presence of local 2D-like structures without such
a constraint, where 2D-like behavior may be identified by the integrability of
vortex lines or vanishing local helicity. Here we propose a new paradigm of
flow structure by introducing an intermediate class, termed epi-2-dimensional
flow, and thereby build a topological bridge between 2D and 3D flows. The
epi-2D property is local, and is preserved in fluid elements obeying ideal
(inviscid and barotropic) mechanics; a local epi-2D flow may be regarded as a
`particle' carrying a generalized enstrophy as its charge. A finite viscosity
may cause `fusion' of two epi-2D particles, generating helicity from their
charges giving rise to 3D flow. | [
0,
1,
0,
0,
0,
0
] |
Title: Statistical Properties of Loss Rate Estimators in Tree Topology (2),
Abstract: Four types of explicit estimators are proposed here to estimate the loss
rates of the links in a network with the tree topology and all of them are
derived by the maximum likelihood principle. One of the four is developed from
an estimator that was used but neglected because it was suspected to have a
higher variance. All of the estimators are proved to be either unbiased or
asymptotic unbiased. In addition, a set of formulae are derived to compute the
efficiencies and variances of the estimates obtained by the estimators. One of
the formulae shows that if a path is divided into two segments, the variance of
the estimates obtained for the pass rate of a segment is equal to the variance
of the pass rate of the path divided by the square of the pass rate of the
other segment. A number of theorems and corollaries are derived from the
formulae that can be used to evaluate the performance of an estimator. Using
the theorems and corollaries, we find that the estimators derived from the
neglected one are the best for networks with the tree topology in terms of
efficiency and computational complexity. | [
1,
0,
0,
0,
0,
0
] |
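The segment-variance statement in the abstract above can be written compactly. With a path pass rate $a = a_1 a_2$ split into segments with pass rates $a_1$ and $a_2$, the claimed relation reads:

```latex
% Restating the formula from the abstract: the variance of the estimate
% $\hat a_1$ of one segment's pass rate equals the variance of the path
% estimate $\hat a$ divided by the square of the other segment's pass rate,
\[
  \operatorname{var}(\hat a_1) \;=\; \frac{\operatorname{var}(\hat a)}{a_2^{\,2}},
\]
% and symmetrically for $\hat a_2$ with $a_1^2$ in the denominator.
```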
Title: Moonshine: Distilling with Cheap Convolutions,
Abstract: Many engineers wish to deploy modern neural networks in memory-limited
settings; but the development of flexible methods for reducing memory use is in
its infancy, and there is little knowledge of the resulting cost-benefit. We
propose structural model distillation for memory reduction using a strategy
that produces a student architecture that is a simple transformation of the
teacher architecture: no redesign is needed, and the same hyperparameters can
be used. Using attention transfer, we provide Pareto curves/tables for
distillation of residual networks with four benchmark datasets, indicating the
memory versus accuracy payoff. We show that substantial memory savings are
possible with very little loss of accuracy, and confirm that distillation
provides student network performance that is better than training that student
architecture directly on data. | [
1,
0,
0,
1,
0,
0
] |
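The attention-transfer loss mentioned above (in the style of Zagoruyko and Komodakis, on which Moonshine builds) matches normalized spatial attention maps of teacher and student activations; a short PyTorch sketch, with shapes of our choosing:

```python
# Attention-transfer distillation loss between teacher and student features.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # (N, C, H, W) -> (N, H*W): channel-wise sum of squares, L2-normalized
    a = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def at_loss(student_feat, teacher_feat):
    return (attention_map(student_feat)
            - attention_map(teacher_feat)).pow(2).sum(dim=1).mean()

s = torch.randn(8, 16, 32, 32)   # student activations (fewer channels)
t = torch.randn(8, 64, 32, 32)   # teacher activations
print(at_loss(s, t).item())
```

Because the attention map sums over channels, the student and teacher may have different widths, which is what makes the "cheap convolution" student architectures possible.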
Title: A New Wiretap Channel Model and its Strong Secrecy Capacity,
Abstract: In this paper, a new wiretap channel model is proposed, where the legitimate
transmitter and receiver communicate over a discrete memoryless channel. The
wiretapper has perfect access to a fixed-length subset of the transmitted
codeword symbols of her choosing. Additionally, she observes the remainder of
the transmitted symbols through a discrete memoryless channel. This new model
subsumes the classical wiretap channel and wiretap channel II with noisy main
channel as its special cases. The strong secrecy capacity of the proposed
channel model is identified. Achievability is established by solving a dual
secret key agreement problem in the source model, and converting the solution
to the original channel model using probability distribution approximation
arguments. In the dual problem, a source encoder and decoder, who observe
random sequences independent and identically distributed according to the input
and output distributions of the legitimate channel in the original problem,
communicate a confidential key over a public error-free channel using a single
forward transmission, in the presence of a compound wiretapping source who has
perfect access to the public discussion. The security of the key is guaranteed
for the exponentially many possibilities of the subset chosen by the wiretapper,
by deriving a lemma which provides a doubly-exponential convergence rate for the
probability that, for a fixed choice of the subset, the key is uniform and
independent from the public discussion and the wiretapping source's
observation. The converse is derived by using Sanov's theorem to upper bound
the secrecy capacity of the new wiretap channel model by the secrecy capacity
when the tapped subset is randomly chosen by nature. | [
1,
0,
0,
0,
0,
0
] |
Title: Energy fluxes and spectra for turbulent and laminar flows,
Abstract: Two well-known turbulence models to describe the inertial and dissipative
ranges simultaneously are by Pao~[Phys. Fluids {\bf 8}, 1063 (1965)] and
Pope~[{\em Turbulent Flows.} Cambridge University Press, 2000]. In this paper,
we compute energy spectrum $E(k)$ and energy flux $\Pi(k)$ using spectral
simulations on grids up to $4096^3$, and show consistency between the numerical
results and predictions by the aforementioned models. We also construct a model
for laminar flows that predicts $E(k)$ and $\Pi(k)$ to be of the form
$\exp(-k)$, and verify the model predictions using numerical simulations. The
shell-to-shell energy transfers for the turbulent flows are {\em forward and
local} for both inertial and dissipative range, but those for the laminar flows
are {\em forward and nonlocal}. | [
0,
1,
0,
0,
0,
0
] |
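For context, Pao's model referenced in this abstract is commonly written in the following standard form, quoted from the general turbulence literature rather than from the paper itself ($K$ is the Kolmogorov constant, $\eta$ the Kolmogorov length, $\epsilon$ the dissipation rate):

```latex
% Standard textbook form of Pao's model; not quoted from this paper.
E(k) = K\,\epsilon^{2/3} k^{-5/3}
       \exp\!\left[-\tfrac{3K}{2}\,(k\eta)^{4/3}\right],
\qquad
\Pi(k) = \epsilon\,
         \exp\!\left[-\tfrac{3K}{2}\,(k\eta)^{4/3}\right].
```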
Title: Generalised Discount Functions applied to a Monte-Carlo AImu Implementation,
Abstract: In recent years, work has been done to develop the theory of General
Reinforcement Learning (GRL). However, there are few examples demonstrating
these results in a concrete way. In particular, there are no examples
demonstrating the known results regarding generalised discounting. We have
added to the GRL simulation platform AIXIjs the functionality to assign an
agent arbitrary discount functions, and an environment which can be used to
determine the effect of discounting on an agent's policy. Using this, we
investigate how geometric, hyperbolic and power discounting affect an informed
agent in a simple MDP. We experimentally reproduce a number of theoretical
results, and discuss some related subtleties. It was found that the agent's
behaviour matched theoretical expectations, provided that appropriate
parameters were chosen for the Monte-Carlo Tree Search (MCTS) planning
algorithm. | [
1,
0,
0,
0,
0,
0
] |
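The three discount families compared in this abstract are easy to state concretely. The sketch below gives illustrative parameterisations only; the exact forms and parameter values used in AIXIjs may differ:

```python
import numpy as np

# Illustrative discount functions; parameter values are made up.
def geometric(t, gamma=0.95):
    # Weight gamma**t: constant per-step discounting.
    return gamma ** t

def hyperbolic(t, kappa=1.0):
    # Weight 1/(1 + kappa*t): discount rate declines over time.
    return 1.0 / (1.0 + kappa * t)

def power(t, beta=1.5):
    # Weight (t+1)**(-beta): heavy-tailed, scale-free discounting.
    return (t + 1.0) ** (-beta)

t = np.arange(50)
for d in (geometric, hyperbolic, power):
    print(d.__name__, np.round(d(t)[:5], 3))
```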
Title: One year of monitoring the Vela pulsar using a Phased Array Feed,
Abstract: We have observed the Vela pulsar for one year using a Phased Array Feed (PAF)
receiver on the 12-metre antenna of the Parkes Test-Bed Facility. These
observations have allowed us to investigate the stability of the PAF
beam-weights over time, to demonstrate that pulsars can be timed over long
periods using PAF technology and to detect and study the most recent glitch
event that occurred on 12 December 2016. The beam-weights are shown to be
stable to 1% on time scales of the order of three weeks. We discuss the
implications of this for monitoring pulsars using PAFs on single dish
telescopes. | [
0,
1,
0,
0,
0,
0
] |
Title: Robust Guaranteed-Cost Adaptive Quantum Phase Estimation,
Abstract: Quantum parameter estimation plays a key role in many fields like quantum
computation, communication and metrology. Optimal estimation allows one to
achieve the most precise parameter estimates, but requires accurate knowledge
of the model. Any inevitable uncertainty in the model parameters may heavily
degrade the quality of the estimate. It is therefore desired to make the
estimation process robust to such uncertainties. Robust estimation was
previously studied for a varying phase, where the goal was to estimate the
phase at some time in the past, using the measurement results from both before
and after that time, within a fixed time interval up to the current time. Here, we
consider a robust guaranteed-cost filter yielding robust estimates of a varying
phase in real time, where the current phase is estimated using only past
measurements. Our filter minimizes the largest (worst-case) variance in the
allowable range of the uncertain model parameter(s) and this determines its
guaranteed cost. It outperforms in the worst case the optimal Kalman filter
designed for the model with no uncertainty, which corresponds to the center of
the possible range of the uncertain parameter(s). Moreover, unlike the Kalman
filter, our filter in the worst case always performs better than the best
achievable variance for heterodyne measurements, which we consider as the
tolerable threshold for our system. Furthermore, we consider effective quantum
efficiency and effective noise power, and show that our filter provides the
best results by these measures in the worst case. | [
1,
0,
1,
0,
0,
0
] |
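The minimax idea behind the guaranteed cost can be illustrated on a toy scalar model. The sketch below evaluates the steady-state estimation variance of a scalar phase model over an uncertain damping range and takes the worst case; the model, parameter values, and function names are illustrative assumptions, not the paper's actual quantum phase model:

```python
import numpy as np

def steady_state_var(lam, q=1.0, r=0.1):
    # Positive root of the scalar algebraic Riccati equation
    # -2*lam*P - P**2/r + q = 0 for the toy model
    # dphi = -lam*phi dt + process noise, observed with noise variance r.
    return r * (-lam + np.sqrt(lam**2 + q / r))

lams = np.linspace(0.5, 2.0, 100)               # uncertain damping range
guaranteed_cost = steady_state_var(lams).max()  # worst case over the range
print(f"guaranteed cost (worst-case variance): {guaranteed_cost:.4f}")
```

A robust filter is then designed to minimise this worst-case value rather than the variance at the nominal centre of the range.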