text (string, lengths 57 to 2.88k) | labels (sequence, length 6)
---|---|
Title: Anticipating Persistent Infection,
Abstract: We explore the emergence of persistent infection in a closed region where the
disease progression of the individuals is given by the SIRS model, with an
individual becoming infected on contact with another infected individual within
a given range. We focus on the role of synchronization in the persistence of
contagion. Our key result is that a higher degree of synchronization, both
globally in the population and locally in the neighborhoods, hinders
persistence of infection. Importantly, we find that early short-time asynchrony
appears to be a consistent precursor to future persistence of infection, and
can potentially provide valuable early warnings for sustained contagion in a
population patch. Thus transient synchronization can help anticipate the
long-term persistence of infection. Further we demonstrate that when the range
of influence of an infected individual is wider, one obtains lower persistent
infection. This counter-intuitive observation can also be understood through
the relation of synchronization to infection burn-out. | [
0,
0,
0,
0,
1,
0
] |
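The SIRS dynamics and range-limited contact rule in the abstract above can be made concrete with a minimal grid simulation. The sketch below is a generic SIRS-on-a-lattice loop; the lattice size, infection range, and state durations are assumed illustrative values, not the paper's, and the update rule is only a plausible reading of "infected on contact within a given range".

```python
import numpy as np

# Minimal SIRS simulation on a 2D grid (illustrative sketch, not the paper's model).
# States: 0 = Susceptible, 1 = Infected, 2 = Recovered (temporarily immune).
# An S site becomes infected if any infected site lies within Chebyshev range R.
rng = np.random.default_rng(0)
L, R = 50, 1                 # lattice size and infection range (assumed values)
T_I, T_R = 4, 9              # infectious and refractory durations (assumed values)
state = np.zeros((L, L), dtype=int)
clock = np.zeros((L, L), dtype=int)
seed = rng.integers(0, L, size=(20, 2))
state[seed[:, 0], seed[:, 1]] = 1          # seed a few infected individuals

def neighbourhood_infected(state, R):
    """Boolean mask: is any infected site within range R of each cell (periodic)?"""
    inf = (state == 1)
    out = np.zeros_like(inf)
    for dx in range(-R, R + 1):
        for dy in range(-R, R + 1):
            out |= np.roll(np.roll(inf, dx, axis=0), dy, axis=1)
    return out

for t in range(200):
    exposed = neighbourhood_infected(state, R)
    clock[state > 0] += 1
    new_inf = (state == 0) & exposed              # S -> I on contact
    to_R = (state == 1) & (clock >= T_I)          # I -> R after T_I steps
    to_S = (state == 2) & (clock >= T_I + T_R)    # R -> S after the refractory period
    state[new_inf], clock[new_inf] = 1, 0
    state[to_R] = 2
    state[to_S], clock[to_S] = 0, 0
    if t % 50 == 0:
        print(t, "infected fraction:", (state == 1).mean())
```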
Title: The first global-scale 30 m resolution mangrove canopy height map using Shuttle Radar Topography Mission data,
Abstract: No high-resolution canopy height map exists for global mangroves. Here we
present the first global mangrove height map at a consistent 30 m pixel
resolution derived from digital elevation model data collected through the
Shuttle Radar Topography Mission. Additionally, we refined the current global mangrove
area maps by discarding the non-mangrove areas that are included in current
mangrove maps. | [
0,
1,
0,
0,
0,
0
] |
Title: A Survey of Active Attacks on Wireless Sensor Networks and their Countermeasures,
Abstract: Wireless Sensor Networks (WSNs) have recently emerged as a key technology
and can be deployed in crucial settings such as battlefields, commercial
applications, habitat monitoring, buildings, smart homes, traffic surveillance,
and many other places. One of the foremost difficulties that WSNs face today is
protection from serious attacks. Deploying sensor nodes in an unattended
environment leaves the network vulnerable to a variety of strong attacks, while
the intrinsic memory and power restrictions of sensor nodes make traditional
security arrangements impractical. The combination of sensing capability,
wireless communication, and processing power makes these networks attractive
targets for abuse and exposes them to a wide range of security risks. This
paper describes four
basic security threats and many active attacks on WSN with their possible
countermeasures proposed by different research scholars. | [
1,
0,
0,
0,
0,
0
] |
Title: A Deterministic Nonsmooth Frank Wolfe Algorithm with Coreset Guarantees,
Abstract: We present a new Frank-Wolfe (FW) type algorithm that is applicable to
minimization problems with a nonsmooth convex objective. We provide convergence
bounds and show that the scheme yields so-called coreset results for various
Machine Learning problems including 1-median, Balanced Development, Sparse PCA,
Graph Cuts, and the $\ell_1$-norm-regularized Support Vector Machine (SVM)
among others. This means that the algorithm provides approximate solutions to
these problems in time complexity bounds that are not dependent on the size of
the input problem. Our framework, motivated by a growing body of work on
sublinear algorithms for various data analysis problems, is entirely
deterministic and makes no use of smoothing or proximal operators. Apart from
these theoretical results, we show experimentally that the algorithm is very
practical and in some cases also offers significant computational advantages on
large problem instances. We provide an open source implementation that can be
adapted for other problems that fit the overall structure. | [
1,
0,
0,
1,
0,
0
] |
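For readers unfamiliar with the Frank-Wolfe template this paper builds on, the sketch below shows the classical smooth Frank-Wolfe iteration on the probability simplex. It is only the baseline scheme: the paper's deterministic nonsmooth variant and its coreset analysis are not reproduced here, and the objective and step rule are assumed for illustration.

```python
import numpy as np

# Classical Frank-Wolfe on the probability simplex for a smooth quadratic
# f(x) = 0.5 * ||A x - b||^2 (baseline illustration; the paper treats the
# nonsmooth case, whose linear-minimization oracle and step rule differ).
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)

def grad(x):
    return A.T @ (A @ x - b)

x = np.full(10, 1.0 / 10)             # start at the simplex barycentre
for k in range(200):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0             # linear minimization oracle over the simplex: a vertex
    gamma = 2.0 / (k + 2)             # standard step size
    x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```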
Title: The problem of boundary conditions for the shallow water equations (Russian),
Abstract: The choice of boundary conditions is discussed for the numerical
integration of the shallow water equations on a substantially irregular relief.
In modeling unsteady surface water flows, a dynamic boundary separates the wet
region from the dry bottom. The situation is complicated by the emergence of
sub- and supercritical flow regimes in problems of seasonal floodplain flooding,
flash floods, and tsunami landfalls. An analysis of various methods of imposing
conditions on the physical quantities of the fluid at the computational boundary
shows the advantages of waterfall-type conditions in the presence of strong
inhomogeneities of the landforms. When a waterfall lies on the border of the
computational domain and the relief is heterogeneous in the vicinity of the
boundary, a region of critical flow with a hydraulic jump may form, which
greatly weakens the effect of the waterfall on the flow pattern upstream. | [
0,
1,
0,
0,
0,
0
] |
Title: Time-Assisted Authentication Protocol,
Abstract: Authentication is the first step toward establishing a service provider and
customer (C-P) association. In a mobile network environment, a lightweight and
secure authentication protocol is one of the most significant factors to
enhance the degree of service persistence. This work presents a secure and
lightweight keying and authentication protocol suite termed TAP (Time-Assisted
Authentication Protocol). TAP improves the security of protocols with the
assistance of time-based encryption keys and scales down the authentication
complexity by issuing a re-authentication ticket. While moving across the
network, a mobile customer node sends a re-authentication ticket to establish
new sessions with service-providing nodes. Consequently, this reduces the
communication and computational complexity of the authentication process. In
the keying protocol suite, a key distributor controls the key generation
arguments and time factors, while other participants independently generate a
keychain based on key generation arguments. We undertake a rigorous security
analysis and prove the security strength of TAP using CSP and rank function
analysis. | [
1,
0,
0,
0,
0,
0
] |
Title: Who is the infector? Epidemic models with symptomatic and asymptomatic cases,
Abstract: What role do asymptomatically infected individuals play in the transmission
dynamics? There are many diseases, such as norovirus and influenza, where some
infected hosts show symptoms of the disease while others are asymptomatically
infected, i.e. do not show any symptoms. The current paper considers a class of
epidemic models following an SEIR (Susceptible $\to$ Exposed $\to$ Infectious
$\to$ Recovered) structure that allows for both symptomatic and asymptomatic
cases. The following question is addressed: what fraction $\rho$ of those
individuals getting infected are infected by symptomatic (asymptomatic) cases?
This is a more complicated question than the related question for the beginning
of the epidemic: what fraction of the expected number of secondary cases of a
typical newly infected individual, i.e. what fraction of the basic reproduction
number $R_0$, is caused by symptomatic individuals? The latter fraction only
depends on the type-specific reproduction numbers, while the former fraction
$\rho$ also depends on timing and hence on the probabilistic distributions of
latent and infectious periods of the two types (not only their means). Bounds
on $\rho$ are derived for the situation where these distributions (and even
their means) are unknown. Special attention is given to the class of Markov
models and the class of continuous-time Reed-Frost models as two classes of
distribution functions. We show how these two classes of models can exhibit
very different behaviour. | [
0,
0,
0,
0,
1,
0
] |
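To make the "fraction of $R_0$ caused by symptomatic individuals" concrete, one hedged illustration: in a simple two-type decomposition (the notation $p$, $R_s$, $R_a$ is ours, not the paper's), where a newly infected individual becomes symptomatic with probability $p$ and the type-specific reproduction numbers are $R_s$ and $R_a$,

```latex
% Two-type decomposition of R_0 (illustrative notation, not the paper's):
% fraction of early secondary cases attributable to symptomatic infectors.
R_0 = p\,R_s + (1-p)\,R_a,
\qquad
\text{symptomatic share of } R_0 \;=\; \frac{p\,R_s}{p\,R_s + (1-p)\,R_a}.
```

The fraction $\rho$ studied in the paper differs from this early-epidemic quantity because, as the abstract notes, it also depends on the timing distributions of the latent and infectious periods.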
Title: Action-dependent Control Variates for Policy Optimization via Stein's Identity,
Abstract: Policy gradient methods have achieved remarkable successes in solving
challenging reinforcement learning problems. However, they still often suffer
from large variance in policy gradient estimation, which leads to poor sample
efficiency during training. In this work, we propose a control variate method
to effectively reduce variance for policy gradient methods. Motivated by
Stein's identity, our method extends the previous control
variate methods used in REINFORCE and advantage actor-critic by introducing
more general action-dependent baseline functions. Empirical studies show that
our method significantly improves the sample efficiency of the state-of-the-art
policy gradient approaches. | [
1,
0,
0,
1,
0,
0
] |
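The variance-reduction idea behind control variates for score-function gradients can be seen in a few lines of Monte Carlo. The sketch below uses only a constant baseline on a toy Gaussian "policy", a deliberately simplified stand-in; the paper's Stein-identity construction yields richer action-dependent baselines.

```python
import numpy as np

# Score-function (REINFORCE-style) gradient of E_{a~N(mu,1)}[f(a)] w.r.t. mu,
# with and without a constant baseline as a control variate. Generic variance
# reduction illustration; the paper's Stein-identity baselines are action-dependent.
rng = np.random.default_rng(2)
mu, n = 0.5, 100_000
f = lambda a: (a - 2.0) ** 2                      # arbitrary "return" function
a = rng.normal(mu, 1.0, size=n)
score = a - mu                                    # d/dmu log N(a; mu, 1)

baseline = f(rng.normal(mu, 1.0, size=n)).mean()  # estimated on an independent batch
plain = score * f(a)                              # vanilla estimator samples
reduced = score * (f(a) - baseline)               # baseline leaves the expectation unchanged

# True gradient: d/dmu [ (mu - 2)^2 + 1 ] = 2 (mu - 2) = -3
print("plain  : mean %.3f  var %.1f" % (plain.mean(), plain.var()))
print("reduced: mean %.3f  var %.1f" % (reduced.mean(), reduced.var()))
```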
Title: Even denominator fractional quantum Hall states at an isospin transition in monolayer graphene,
Abstract: Magnetic fields quench the kinetic energy of two dimensional electrons,
confining them to highly degenerate Landau levels. In the absence of disorder,
the ground state at partial Landau level filling is determined only by Coulomb
interactions, leading to a variety of correlation-driven phenomena. Here, we
realize a quantum Hall analog of the Néel-to-valence bond solid transition
within the spin- and sublattice- degenerate monolayer graphene zero energy
Landau level by experimentally controlling substrate-induced sublattice
symmetry breaking. The transition is marked by unusual isospin transitions in
odd denominator fractional quantum Hall states for filling factors $\nu$ near
charge neutrality, and the unexpected appearance of incompressible even
denominator fractional quantum Hall states at $\nu=\pm1/2$ and $\pm1/4$
associated with pairing between composite fermions on different carbon
sublattices. | [
0,
1,
0,
0,
0,
0
] |
Title: ReBNet: Residual Binarized Neural Network,
Abstract: This paper proposes ReBNet, an end-to-end framework for training
reconfigurable binary neural networks on software and developing efficient
accelerators for execution on FPGA. Binary neural networks offer an intriguing
opportunity for deploying large-scale deep learning models on
resource-constrained devices. Binarization reduces the memory footprint and
replaces the power-hungry matrix-multiplication with light-weight XnorPopcount
operations. However, binary networks suffer from a degraded accuracy compared
to their fixed-point counterparts. We show that state-of-the-art methods for
improving the accuracy of binary networks significantly increase the
implementation cost and complexity. To compensate for the degraded accuracy
while adhering to the simplicity of binary networks, we devise the first
reconfigurable scheme that can adjust the classification accuracy based on the
application. Our proposition improves the classification accuracy by
representing features with multiple levels of residual binarization. Unlike
previous methods, our approach does not exacerbate the area cost of the
hardware accelerator. Instead, it provides a tradeoff between throughput and
accuracy while the area overhead of multi-level binarization is negligible. | [
1,
0,
0,
0,
0,
0
] |
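The phrase "multiple levels of residual binarization" can be illustrated directly: each level binarizes the residual error left by the previous levels and scales it by a single factor. The sketch below is a numerical illustration of that approximation only; the training procedure and the FPGA accelerator described in the abstract are not reproduced, and the per-level mean-absolute scaling is an assumed choice.

```python
import numpy as np

# Residual binarization: approximate a real tensor by a sum of scaled binary
# tensors, each level binarizing the residual left by the previous levels
# (sketch of the idea in the abstract; exact training details are in the paper).
def residual_binarize(x, levels=3):
    approx = np.zeros_like(x)
    residual = x.copy()
    for _ in range(levels):
        gamma = np.abs(residual).mean()       # per-level scaling factor (assumed rule)
        b = np.sign(residual)
        approx += gamma * b
        residual -= gamma * b
    return approx

rng = np.random.default_rng(3)
w = rng.normal(size=(4, 4)).astype(np.float32)
for m in (1, 2, 3):
    err = np.linalg.norm(w - residual_binarize(w, m)) / np.linalg.norm(w)
    print(f"{m} level(s): relative error {err:.3f}")
```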
Title: Prime geodesic theorem for the modular surface,
Abstract: Under the generalized Lindelöf hypothesis, the exponent in the error term
of the prime geodesic theorem for the modular surface is reduced to
$\frac{5}{8}+\varepsilon $ outside a set of finite logarithmic measure. | [
0,
0,
1,
0,
0,
0
] |
Title: The Molecular Gas Environment in the 20 km s$^{-1}$ Cloud in the Central Molecular Zone,
Abstract: We recently reported a population of protostellar candidates in the 20 km
s$^{-1}$ cloud in the Central Molecular Zone of the Milky Way, traced by H$_2$O
masers in gravitationally bound dense cores. In this paper, we report
high-angular-resolution ($\sim$3'') molecular line studies of the environment
of star formation in this cloud. Maps of various molecular line transitions as
well as the continuum at 1.3 mm are obtained using the Submillimeter Array.
Five NH$_3$ inversion lines and the 1.3 cm continuum are observed with the Karl
G. Jansky Very Large Array. The interferometric observations are complemented
with single-dish data. We find that the CH$_3$OH, SO, and HNCO lines, which are
usually shock tracers, are better correlated spatially with the compact dust
emission from dense cores among the detected lines. These lines also show
enhancement in intensities with respect to SiO intensities toward the compact
dust emission, suggesting the presence of slow shocks or hot cores in these
regions. We find gas temperatures of $\gtrsim$100 K at 0.1-pc scales based on
RADEX modelling of the H$_2$CO and NH$_3$ lines. Although no strong
correlations between temperatures and linewidths/H$_2$O maser luminosities are
found, in high-angular-resolution maps we notice several candidate shock heated
regions offset from any dense cores, as well as signatures of localized heating
by protostars in several dense cores. Our findings suggest that at 0.1-pc
scales in this cloud star formation and strong turbulence may together affect
the chemistry and temperature of the molecular gas. | [
0,
1,
0,
0,
0,
0
] |
Title: Incompressible fluid problems on embedded surfaces: Modeling and variational formulations,
Abstract: Governing equations of motion for a viscous incompressible material surface
are derived from the balance laws of continuum mechanics. The surface is
treated as a time-dependent smooth orientable manifold of codimension one in an
ambient Euclidean space. We use elementary tangential calculus to derive the
governing equations in terms of exterior differential operators in Cartesian
coordinates. The resulting equations can be seen as the Navier-Stokes equations
posed on an evolving manifold. We consider a splitting of the surface
Navier-Stokes system into coupled equations for the tangential and normal
motions of the material surface. We then restrict ourselves to the case of a
geometrically stationary manifold of codimension one embedded in $\Bbb{R}^n$.
For this case, we present new well-posedness results for the simplified surface
fluid model consisting of the surface Stokes equations. Finally, we propose and
analyze several alternative variational formulations for this surface Stokes
problem, including constrained and penalized formulations, which are convenient
for Galerkin discretization methods. | [
0,
1,
1,
0,
0,
0
] |
Title: Geohyperbolic Routing and Addressing Schemes,
Abstract: The key requirement to routing in any telecommunication network, and
especially in Internet-of-Things (IoT) networks, is scalability. Routing must
route packets between any source and destination in the network without
incurring unmanageable routing overhead that grows quickly with increasing
network size and dynamics. Here we present an addressing scheme and a coupled
network topology design scheme that guarantee essentially optimal routing
scalability. The forwarding information base (FIB) sizes are as small as they can be, equal to the number of
adjacencies a node has, while the routing control overhead is minimized as
nearly zero routing control messages are exchanged even upon catastrophic
failures in the network. The key new ingredient is the addressing scheme, which
is purely local, based only on geographic coordinates of nodes and a centrality
measure, and does not require any sophisticated non-local computations or
global network topology knowledge for network embedding. The price paid for
these benefits is that network topology cannot be arbitrary but should follow a
specific design, resulting in Internet-like topologies. The proposed schemes
can be most easily deployed in overlay networks, and also in other network
deployments, where geolocation information is available, and where network
topology can grow following the design specifications. | [
1,
1,
0,
0,
0,
0
] |
Title: Conical: an extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions,
Abstract: Conical functions appear in a large number of applications in physics and
engineering. In this paper we describe an extension of our module CONICAL for
the computation of conical functions. Specifically, the module now includes a
routine for computing the function ${\rm R}^{m}_{-\frac{1}{2}+i\tau}(x)$, a
real-valued numerically satisfactory companion of the function ${\rm
P}^m_{-\tfrac12+i\tau}(x)$ for $x>1$. In this way, a natural basis for solving
Dirichlet problems bounded by conical domains is provided. | [
1,
0,
1,
0,
0,
0
] |
Title: Online to Offline Conversions, Universality and Adaptive Minibatch Sizes,
Abstract: We present an approach towards convex optimization that relies on a novel
scheme which converts online adaptive algorithms into offline methods. In the
offline optimization setting, our derived methods are shown to obtain
favourable adaptive guarantees which depend on the harmonic sum of the queried
gradients. We further show that our methods implicitly adapt to the objective's
structure: in the smooth case fast convergence rates are ensured without any
prior knowledge of the smoothness parameter, while still maintaining guarantees
in the non-smooth setting. Our approach has a natural extension to the
stochastic setting, resulting in a lazy version of SGD (stochastic GD), where
minibatches are chosen \emph{adaptively} depending on the magnitude of the
gradients, thus providing a principled approach towards choosing minibatch
sizes. | [
1,
0,
1,
1,
0,
0
] |
Title: RVP-FLMS : A Robust Variable Power Fractional LMS Algorithm,
Abstract: In this paper, we propose an adaptive framework for the variable power of the
fractional least mean square (FLMS) algorithm. The proposed algorithm, named
robust variable power FLMS (RVP-FLMS), dynamically adapts the fractional power
of the FLMS to achieve a high convergence rate with low steady-state error. For
the evaluation purpose, the problems of system identification and channel
equalization are considered. The experiments clearly show that the proposed
approach achieves better convergence rate and lower steady-state error compared
to the FLMS. The MATLAB code for the related simulation is available online at
this https URL. | [
0,
0,
1,
1,
0,
0
] |
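The abstract presupposes the LMS recursion that FLMS extends. The sketch below shows only the standard LMS system-identification loop for orientation; the fractional-order correction term and the proposed variable-power adaptation are not reproduced, and the filter, step size, and noise level are assumed values.

```python
import numpy as np

# Standard LMS system identification (baseline recursion; FLMS adds a
# fractional-order correction term and RVP-FLMS adapts its power, neither
# of which is reproduced here).
rng = np.random.default_rng(4)
h_true = np.array([0.8, -0.4, 0.2, 0.1])      # unknown FIR system (assumed)
N, mu = 5000, 0.01
x = rng.normal(size=N)
d = np.convolve(x, h_true, mode="full")[:N] + 0.01 * rng.normal(size=N)

w = np.zeros_like(h_true)
for n in range(len(h_true), N):
    u = x[n - len(h_true) + 1:n + 1][::-1]    # most recent inputs, newest first
    e = d[n] - w @ u                          # a-priori error
    w = w + mu * e * u                        # LMS weight update
print("estimated system:", np.round(w, 3))
```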
Title: Dichotomy for Digraph Homomorphism Problems (two algorithms),
Abstract: Update : An issue has been found in the correctness of our algorithm, and we
are working to resolve the issue. Until a resolution is found, we retract our
main claim that our approach gives a combinatorial solution to the CSP
conjecture. We remain hopeful that we can resolve the issues. We thank Ross
Willard for carefully checking the algorithm and pointing out the mistake in
the version of this manuscript. We briefly explain one issue at the beginning
of the text, and leave the rest of the manuscript intact for the moment. Ross
Willard is posting a more involved description of a counter-example to the
algorithm in the present manuscript. We have an updated manuscript that
corrects some issues while still not arriving at a full solution; we will keep
this private as long as unresolved issues remain.
Previous abstract : We consider the problem of finding a homomorphism from an
input digraph G to a fixed digraph H. We show that if H admits a
weak-near-unanimity polymorphism $\phi$ then deciding whether G admits a
homomorphism to H (HOM(H)) is polynomial time solvable. This confirms the
conjecture of Maroti and McKenzie, and consequently implies the validity of the
celebrated dichotomy conjecture due to Feder and Vardi. We transform the
problem into an instance of the list homomorphism problem where initially all
the lists are full (contain all the vertices of H). Then we use the
polymorphism $\phi$ as a guide to reduce the lists to singleton lists, which
yields a homomorphism if one exists. | [
1,
0,
0,
0,
0,
0
] |
Title: Independence in generic incidence structures,
Abstract: We study the theory $T_{m,n}$ of existentially closed incidence structures
omitting the complete incidence structure $K_{m,n}$, which can also be viewed
as existentially closed $K_{m,n}$-free bipartite graphs. In the case $m = n =
2$, this is the theory of existentially closed projective planes. We give an
$\forall\exists$-axiomatization of $T_{m,n}$, show that $T_{m,n}$ does not have
a countable saturated model when $m,n\geq 2$, and show that the existence of a
prime model for $T_{2,2}$ is equivalent to a longstanding open question about
finite projective planes. Finally, we analyze model theoretic notions of
complexity for $T_{m,n}$. We show that $T_{m,n}$ is NSOP$_1$, but not simple
when $m,n\geq 2$, and we show that $T_{m,n}$ has weak elimination of
imaginaries but not full elimination of imaginaries. These results rely on
combinatorial characterizations of various notions of independence, including
algebraic independence, Kim independence, and forking independence. | [
0,
0,
1,
0,
0,
0
] |
Title: On the link between atmospheric cloud parameters and cosmic rays,
Abstract: We attempt to investigate the behavior of cosmic rays with respect to the
scaling features of their time series. Our analysis is based on cosmic ray
observations made at four neutron monitor stations in Athens (Greece), Jung
(Switzerland) and Oulu (Finland), for the period 2000 to early 2017. Each of
these datasets was analyzed by using the Detrended Fluctuation Analysis (DFA)
and Multifractal Detrended Fluctuation Analysis (MF-DFA) in order to
investigate intrinsic properties, like self-similarity and the spectrum of
singularities. The main result obtained is that the cosmic rays time series at
all the neutron monitor stations exhibit positive long-range correlations (of
1/f type) with multifractal behavior. On the other hand, we try to investigate
the possible existence of similar scaling features in the time series of other
meteorological parameters which are closely associated with the cosmic rays,
such as parameters describing physical properties of clouds. | [
0,
1,
0,
0,
0,
0
] |
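Since DFA is central to the analysis above but not restated, here is a compact sketch of first-order DFA (integrated profile, windowed linear detrending, RMS fluctuation versus scale). It is run on synthetic white noise rather than the neutron-monitor data, and the multifractal MF-DFA generalization is omitted.

```python
import numpy as np

# First-order Detrended Fluctuation Analysis (DFA1) sketch: the slope of
# log F(s) vs log s estimates the scaling exponent (illustration on synthetic
# noise, not the neutron-monitor series used in the paper).
def dfa(x, scales):
    y = np.cumsum(x - x.mean())                        # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

rng = np.random.default_rng(5)
x = rng.normal(size=2 ** 14)                           # white noise: expected alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("estimated scaling exponent:", round(alpha, 2))
```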
Title: To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference,
Abstract: The recent advances in deep neural networks (DNNs) make them attractive for
embedded systems. However, it can take a long time for DNNs to make an
inference on resource-constrained computing devices. Model compression
techniques can address the computation issue of deep inference on embedded
devices. These techniques are highly attractive, as they do not rely on
specialized hardware or computation offloading, which is often infeasible due to
privacy concerns or high latency. However, it remains unclear how model
compression techniques perform across a wide range of DNNs. To design efficient
embedded deep learning solutions, we need to understand their behaviors. This
work develops a quantitative approach to characterize model compression
techniques on a representative embedded deep learning architecture, the NVIDIA
Jetson Tx2. We perform extensive experiments by considering 11 influential
neural network architectures from the image classification and the natural
language processing domains. We experimentally show how two mainstream
compression techniques, data quantization and pruning, perform on these network
architectures, and the implications of compression techniques for model
storage size, inference time, energy consumption and performance metrics. We
demonstrate that there are opportunities to achieve fast deep inference on
embedded systems, but one must carefully choose the compression settings. Our
results provide insights on when and how to apply model compression techniques
and guidelines for designing efficient embedded deep learning systems. | [
1,
0,
0,
0,
0,
0
] |
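As a deliberately simplified illustration of the two compression techniques the study characterizes, the sketch below applies symmetric 8-bit post-training quantization and magnitude pruning to a single weight matrix. Real deployments (for example on the Jetson TX2) quantize per layer or per channel and typically fine-tune after pruning; the sparsity level and scale rule here are assumed.

```python
import numpy as np

# Two toy model-compression steps on a single weight matrix: symmetric int8
# quantization and magnitude pruning (illustration only; production pipelines
# quantize per layer/channel and usually fine-tune after pruning).
rng = np.random.default_rng(6)
W = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

# --- data quantization: float32 -> int8 with a single symmetric scale ---
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale

# --- magnitude pruning: zero out the 70% smallest-magnitude weights ---
threshold = np.quantile(np.abs(W), 0.70)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

print("max quantization error:", np.abs(W - W_deq).max())
print("sparsity after pruning:", (W_pruned == 0).mean())
```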
Title: Non-asymptotic theory for nonparametric testing,
Abstract: We consider nonparametric testing in a non-asymptotic framework. Our
statistical guarantees are exact in the sense that Type I and II errors are
controlled for any finite sample size. Meanwhile, one proposed test is shown to
achieve minimax optimality in the asymptotic sense. An important consequence of
this non-asymptotic theory is a new and practically useful formula for
selecting the optimal smoothing parameter in nonparametric testing. The leading
example in this paper is smoothing spline models under Gaussian errors. The
results obtained therein can be further generalized to the kernel ridge
regression framework under possibly non-Gaussian errors. Simulations
demonstrate that our proposed test improves over the conventional asymptotic
test when sample size is small to moderate. | [
0,
0,
1,
1,
0,
0
] |
Title: Classification of out-of-time-order correlators,
Abstract: The space of n-point correlation functions, for all possible time-orderings
of operators, can be computed by a non-trivial path integral contour, which
depends on how many time-ordering violations are present in the correlator.
These contours, which have come to be known as timefolds, or out-of-time-order
(OTO) contours, are a natural generalization of the Schwinger-Keldysh contour
(which computes singly out-of-time-ordered correlation functions). We provide a
detailed discussion of such higher OTO functional integrals, explaining their
general structure, and the myriad ways in which a particular correlation
function may be encoded in such contours. Our discussion may be seen as a
natural generalization of the Schwinger-Keldysh formalism to higher OTO
correlation functions. We provide explicit illustration for low point
correlators (n=2,3,4) to exemplify the general statements. | [
0,
1,
0,
0,
0,
0
] |
Title: Determinants of public cooperation in multiplex networks,
Abstract: Synergies between evolutionary game theory and statistical physics have
significantly improved our understanding of public cooperation in structured
populations. Multiplex networks, in particular, provide the theoretical
framework within network science that allows us to mathematically describe the
rich structure of interactions characterizing human societies. While research
has shown that multiplex networks may enhance the resilience of cooperation,
the interplay between the overlap in the structure of the layers and the
control parameters of the corresponding games has not yet been investigated.
With this aim, we consider here the public goods game on a multiplex network,
and we unveil the role of the number of layers and the overlap of links, as
well as the impact of different synergy factors in different layers, on the
onset of cooperation. We show that enhanced public cooperation emerges only
when a significant edge overlap is combined with at least one layer being able
to sustain some cooperation by means of a sufficiently high synergy factor. In
the absence of either of these conditions, the evolution of cooperation in
multiplex networks is determined by the bounds of traditional network
reciprocity with no enhanced resilience. These results caution against overly
optimistic predictions that the presence of multiple social domains may in
itself promote cooperation, and they help us better understand the complexity
behind prosocial behavior in layered social systems. | [
1,
1,
0,
0,
0,
0
] |
Title: Variational Encoding of Complex Dynamics,
Abstract: Often the analysis of time-dependent chemical and biophysical systems
produces high-dimensional time-series data for which it can be difficult to
interpret which individual features are most salient. While recent work from
our group and others has demonstrated the utility of time-lagged co-variate
models to study such systems, linearity assumptions can limit the compression
of inherently nonlinear dynamics into just a few characteristic components.
Recent work in the field of deep learning has led to the development of
variational autoencoders (VAE), which are able to compress complex datasets
into simpler manifolds. We present the use of a time-lagged VAE, or variational
dynamics encoder (VDE), to reduce complex, nonlinear processes to a single
embedding with high fidelity to the underlying dynamics. We demonstrate how the
VDE is able to capture nontrivial dynamics in a variety of examples, including
Brownian dynamics and atomistic protein folding. Additionally, we demonstrate a
method for analyzing the VDE model, inspired by saliency mapping, to determine
what features are selected by the VDE model to describe dynamics. The VDE
presents an important step in applying techniques from deep learning to more
accurately model and interpret complex biophysics. | [
0,
1,
0,
1,
0,
0
] |
Title: Accelerating Imitation Learning with Predictive Models,
Abstract: Sample efficiency is critical in solving real-world reinforcement learning
problems, where agent-environment interactions can be costly. Imitation
learning from expert advice has proved to be an effective strategy for reducing
the number of interactions required to train a policy. Online imitation
learning, which interleaves policy evaluation and policy optimization, is a
particularly effective technique with provable performance guarantees. In this
work, we seek to further accelerate the convergence rate of online imitation
learning, thereby making it more sample efficient. We propose two model-based
algorithms inspired by Follow-the-Leader (FTL) with prediction: MoBIL-VI based
on solving variational inequalities and MoBIL-Prox based on stochastic
first-order updates. These two methods leverage a model to predict future
gradients to speed up policy learning. When the model oracle is learned online,
these algorithms can provably accelerate the best known convergence rate up to
an order. Our algorithms can be viewed as a generalization of stochastic
Mirror-Prox (Juditsky et al., 2011), and admit a simple constructive FTL-style
analysis of performance. | [
0,
0,
0,
1,
0,
0
] |
Title: Neural Networks for Predicting Algorithm Runtime Distributions,
Abstract: Many state-of-the-art algorithms for solving hard combinatorial problems in
artificial intelligence (AI) include elements of stochasticity that lead to
high variations in runtime, even for a fixed problem instance. Knowledge about
the resulting runtime distributions (RTDs) of algorithms on given problem
instances can be exploited in various meta-algorithmic procedures, such as
algorithm selection, portfolios, and randomized restarts. Previous work has
shown that machine learning can be used to individually predict mean, median
and variance of RTDs. To establish a new state-of-the-art in predicting RTDs,
we demonstrate that the parameters of an RTD should be learned jointly and that
neural networks can do this well by directly optimizing the likelihood of an
RTD given runtime observations. In an empirical study involving five algorithms
for SAT solving and AI planning, we show that neural networks predict the true
RTDs of unseen instances better than previous methods, and can even do so when
only few runtime observations are available per training instance. | [
1,
0,
0,
0,
0,
0
] |
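The training objective "directly optimizing the likelihood of an RTD given runtime observations" can be illustrated without a neural network by fitting a parametric runtime distribution by maximum likelihood. The sketch below fits a lognormal RTD with SciPy on synthetic runtimes; the paper instead predicts the distribution parameters from instance features, and the choice of a lognormal family here is an assumption for illustration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Maximum-likelihood fit of a lognormal runtime distribution to observed
# runtimes (illustrates the training objective; the paper lets a neural
# network output the distribution parameters per problem instance).
rng = np.random.default_rng(7)
runtimes = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # synthetic observations

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                  # keep sigma positive
    return -stats.lognorm.logpdf(runtimes, s=sigma, scale=np.exp(mu)).sum()

result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print("fitted mu, sigma:", round(mu_hat, 2), round(sigma_hat, 2))
```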
Title: Small Hankel operators on generalized Fock spaces,
Abstract: We consider Fock spaces $F^{p,\ell}_{\alpha}$ of entire functions on
${\mathbb C}$ associated to the weights $e^{-\alpha |z|^{2\ell}}$, where
$\alpha>0$ and $\ell$ is a positive integer. We compute explicitly the
corresponding Bergman kernel associated to $F^{2,\ell}_{\alpha}$ and, using an
adequate factorization of this kernel, we characterize the boundedness and the
compactness of the small Hankel operator $\mathfrak{h}^{\ell}_{b,\alpha}$ on
$F^{p,\ell}_{\alpha}$. Moreover, we also determine when
$\mathfrak{h}^{\ell}_{b,\alpha}$ is a Hilbert-Schmidt operator on
$F^{2,\ell}_{\alpha}$. | [
0,
0,
1,
0,
0,
0
] |
Title: Efficient enumeration of solutions produced by closure operations,
Abstract: In this paper we address the problem of generating all elements obtained by
the saturation of an initial set by some operations. More precisely, we prove
that we can generate the closure of a boolean relation (a set of boolean
vectors) by polymorphisms with a polynomial delay. Therefore we can compute
with polynomial delay the closure of a family of sets by any set of "set
operations" (union, intersection, symmetric difference, subsets, supersets,
$\dots$). To do so, we study the $Membership_{\mathcal{F}}$ problem: for a set
of operations $\mathcal{F}$, decide whether an element belongs to the closure
by $\mathcal{F}$ of a family of elements. In the boolean case, we prove that
$Membership_{\mathcal{F}}$ is in P for any set of boolean operations
$\mathcal{F}$. When the input vectors are over a domain larger than two
elements, we prove that the generic enumeration method fails, since
$Membership_{\mathcal{F}}$ is NP-hard for some $\mathcal{F}$. We also study the
problem of generating minimal or maximal elements of closures and prove that
some of them are related to well known enumeration problems such as the
enumeration of the circuits of a matroid or the enumeration of maximal
independent sets of a hypergraph. This article improves on previous works of
the same authors. | [
1,
0,
0,
0,
0,
0
] |
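A small illustration of the closure-by-set-operations setting: starting from a few sets, repeatedly apply union and intersection until nothing new appears. This brute-force fixed point only shows what the closure is; it is not the polynomial-delay enumeration algorithm the paper develops, and the starting family is an arbitrary example.

```python
from itertools import combinations

# Brute-force saturation of a family of sets under union and intersection
# (fixed-point illustration of the closure being generated; the paper's point
# is to enumerate such closures with polynomial delay, which this is not).
family = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
changed = True
while changed:
    changed = False
    for a, b in combinations(list(family), 2):
        for new in (a | b, a & b):
            if new not in family:
                family.add(new)
                changed = True
print(sorted(sorted(s) for s in family))
```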
Title: Exact Dimensionality Selection for Bayesian PCA,
Abstract: We present a Bayesian model selection approach to estimate the intrinsic
dimensionality of a high-dimensional dataset. To this end, we introduce a novel
formulation of the probabilistic principal component analysis model based on a
normal-gamma prior distribution. In this context, we exhibit a closed-form
expression of the marginal likelihood which allows us to infer an optimal number
of components. We also propose a heuristic based on the expected shape of the
marginal likelihood curve in order to choose the hyperparameters. In
non-asymptotic frameworks, we show on simulated data that this exact
dimensionality selection approach is competitive with both Bayesian and
frequentist state-of-the-art methods. | [
0,
0,
1,
1,
0,
0
] |
Title: Exponential convergence of testing error for stochastic gradient methods,
Abstract: We consider binary classification problems with positive definite kernels and
square loss, and study the convergence rates of stochastic gradient methods. We
show that while the excess testing loss (squared loss) converges slowly to zero
as the number of observations (and thus iterations) goes to infinity, the
testing error (classification error) converges exponentially fast if low-noise
conditions are assumed. | [
1,
0,
0,
1,
0,
0
] |
Title: What makes a gesture a gesture? Neural signatures involved in gesture recognition,
Abstract: Previous work in the area of gesture production has made the assumption that
machines can replicate "human-like" gestures by connecting a bounded set of
salient points in the motion trajectory. Those inflection points were
hypothesized to also display cognitive saliency. The purpose of this paper is
to validate that claim using electroencephalography (EEG). That is, this paper
attempts to find neural signatures of gestures (also referred to as placeholders)
in human cognition, which facilitate the understanding, learning and repetition
of gestures. Further, it is discussed whether there is a direct mapping between
the placeholders and kinematic salient points in the gesture trajectories.
These are expressed as relationships between inflection points in the gestures'
trajectories with oscillatory mu rhythms (8-12 Hz) in the EEG. This is achieved
by correlating fluctuations in mu power during gesture observation with salient
motion points found for each gesture. Peaks in the EEG signal at central
electrodes (motor cortex) and occipital electrodes (visual cortex) were used to
isolate the salient events within each gesture. We found that a linear model
predicting mu peaks from motion inflections fits the data well. Increases in
EEG power were detected 380 and 500ms after inflection points at occipital and
central electrodes, respectively. These results suggest that coordinated
activity in visual and motor cortices is sensitive to motion trajectories
during gesture observation, and it is consistent with the proposal that
inflection points operate as placeholders in gesture recognition. | [
1,
0,
0,
0,
0,
0
] |
Title: DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples,
Abstract: Recent studies have shown that deep neural networks (DNN) are vulnerable to
adversarial samples: maliciously-perturbed samples crafted to yield incorrect
model outputs. Such attacks can severely undermine DNN systems, particularly in
security-sensitive settings. It was observed that an adversary could easily
generate adversarial samples by making a small perturbation on irrelevant
feature dimensions that are unnecessary for the current classification task. To
overcome this problem, we introduce a defensive mechanism called DeepCloak. By
identifying and removing unnecessary features in a DNN model, DeepCloak limits
the capacity an attacker can use to generate adversarial samples and therefore
increases the robustness against such inputs. Compared with other defensive
approaches, DeepCloak is easy to implement and computationally efficient.
Experimental results show that DeepCloak can increase the performance of
state-of-the-art DNN models against adversarial samples. | [
1,
0,
0,
0,
0,
0
] |
Title: Chaotic dynamics of movements, stochastic instability and the hypothesis of N.A. Bernstein about "repetition without repetition",
Abstract: Tremor was recorded in two groups of subjects (15 people in each
group) with different levels of physical fitness, at rest and under a static
load of 3 N. Each subject was tested in 15 series (number of series N=15) in
both states (with and without physical load), and each series contained 15
samples (n=15) of tremorogram measurements (500 elements per sample, the
registered coordinate x1(t) of the finger position relative to an eddy-current
sensor). Using the non-parametric Wilcoxon test, a pairwise comparison was made
for each series of the experiment, yielding 15 tables in which the results of
the pairwise comparisons of tremorograms are presented as 15x15 matrices. The
average number of matching pairs of samples (<k>) and the standard deviation
{\sigma} were calculated for all 15 matrices, both without load and under the
physical load of 3 N; the number k of matching pairs of tremorogram samples
almost doubled under the static load. For all samples a special quasi-attractor
was calculated, whose area distinguishes the loaded from the unloaded
condition. All samples exhibit a stochastically unstable state. | [
0,
0,
0,
0,
1,
0
] |
Title: Deep Learning for Real Time Crime Forecasting,
Abstract: Accurate real time crime prediction is a fundamental issue for public safety,
but remains a challenging problem for the scientific community. Crime
occurrences depend on many complex factors. Compared to many predictable
events, crime is sparse. At different spatio-temporal scales, crime
distributions display dramatically different patterns. These distributions are
of very low regularity in both space and time. In this work, we adapt the
state-of-the-art deep learning spatio-temporal predictor, ST-ResNet [Zhang et
al, AAAI, 2017], to collectively predict crime distribution over the Los
Angeles area. Our models are two-staged. First, we preprocess the raw crime
data. This includes regularization in both space and time to enhance
predictable signals. Second, we adapt hierarchical structures of residual
convolutional units to train multi-factor crime prediction models. Experiments
over a half year period in Los Angeles reveal highly accurate predictive power
of our models. | [
1,
0,
0,
1,
0,
0
] |
Title: Scaling of the Detonation Product State with Reactant Kinetic Energy,
Abstract: This submissions has been withdrawn by arXiv administrators because the
submitter did not have the right to agree to our license. | [
0,
1,
0,
0,
0,
0
] |
Title: Almost automorphic functions on the quantum time scale and applications,
Abstract: In this paper, we first propose two types of concepts of almost automorphic
functions on the quantum time scale. Secondly, we study some basic properties
of almost automorphic functions on the quantum time scale. Then, we introduce a
transformation between functions defined on the quantum time scale and
functions defined on the set of generalized integer numbers, by using this
transformation we give equivalent definitions of almost automorphic functions
on the quantum time scale. Finally, as an application of our results, we
establish the existence of almost automorphic solutions of linear and
semilinear dynamic equations on the quantum time scale. | [
0,
0,
1,
0,
0,
0
] |
Title: Stochastic Game in Remote Estimation under DoS Attacks,
Abstract: This paper studies remote state estimation under denial-of-service (DoS)
attacks. A sensor transmits its local estimate of an underlying physical
process to a remote estimator via a wireless communication channel. A DoS
attacker is capable of interfering with the channel and degrading the remote
estimation accuracy. Considering the tactical jamming strategies played by the attacker,
the sensor adjusts its transmission power. This interactive process between the
sensor and the attacker is studied in the framework of a zero-sum stochastic
game. To derive their optimal power schemes, we first discuss the existence of
stationary Nash equilibrium (SNE) for this game. We then present the monotone
structure of the optimal strategies, which helps reduce the computational
complexity of the stochastic game algorithm. Numerical examples are provided to
illustrate the obtained results. | [
1,
0,
0,
0,
0,
0
] |
Title: Casimir-Polder size consistency -- a constraint violated by some dispersion theories,
Abstract: A key goal in quantum chemistry methods, whether ab initio or otherwise, is
to achieve size consistency. In this manuscript we formulate the related idea
of "Casimir-Polder size consistency" that manifests in long-range dispersion
energetics. We show that local approximations in time-dependent density
functional theory dispersion energy calculations violate the consistency
condition because of incorrect treatment of highly non-local "xc kernel"
physics, by up to 10% in our tests on closed-shell atoms. | [
0,
1,
0,
0,
0,
0
] |
Title: A combinatorial proof of Bass's determinant formula for the zeta function of regular graphs,
Abstract: We give an elementary combinatorial proof of Bass's determinant formula for
the zeta function of a finite regular graph. This is done by expressing the
number of non-backtracking cycles of a given length in terms of Chebyshev
polynomials in the eigenvalues of the adjacency operator of the graph. | [
1,
0,
0,
0,
0,
0
] |
Title: Photometric Redshifts for Hyper Suprime-Cam Subaru Strategic Program Data Release 1,
Abstract: Photometric redshifts are a key component of many science objectives in the
Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). In this paper, we
describe and compare the codes used to compute photometric redshifts for
HSC-SSP, how we calibrate them, and the typical accuracy we achieve with the
HSC five-band photometry (grizy). We introduce a new point estimator based on
an improved loss function and demonstrate that it works better than other
commonly used estimators. We find that our photo-z's are most accurate at
0.2<~zphot<~1.5, where we can straddle the 4000A break. We achieve
sigma(d_zphot/(1+zphot))~0.05 and an outlier rate of about 15% for galaxies
down to i=25 within this redshift range. If we limit to a brighter sample of
i<24, we achieve sigma~0.04 and ~8% outliers. Our photo-z's should thus enable
many science cases for HSC-SSP. We also characterize the accuracy of our
redshift probability distribution function (PDF) and discover that some codes
over/under-estimate the redshift uncertainties, which have implications for
N(z) reconstruction. Our photo-z products for the entire area in the Public
Data Release 1 are publicly available, and both our catalog products (such as
point estimates) and full PDFs can be retrieved from the data release site,
this https URL. | [
0,
1,
0,
0,
0,
0
] |
Title: A note on some constants related to the zeta-function and their relationship with the Gregory coefficients,
Abstract: In this paper new series for the first and second Stieltjes constants (also
known as generalized Euler's constant), as well as for some closely related
constants are obtained. These series contain rational terms only and involve
the so-called Gregory coefficients, which are also known as (reciprocal)
logarithmic numbers, Cauchy numbers of the first kind and Bernoulli numbers of
the second kind. In addition, two interesting series with rational terms are
given for Euler's constant and the constant ln(2*pi), and yet another
generalization of Euler's constant is proposed and various formulas for the
calculation of these constants are obtained. Finally, in the paper, we mention
that almost all the constants considered in this work admit simple
representations via the Ramanujan summation. | [
0,
0,
1,
0,
0,
0
] |
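One classical example of a rational-term series involving Gregory coefficients, stated here only for context (it is the well-known Fontana-Mascheroni series for Euler's constant, not one of the paper's new series):

```latex
% Fontana--Mascheroni series: Euler's constant via Gregory coefficients G_n
% (G_1 = 1/2, G_2 = -1/12, G_3 = 1/24, ...), a classical identity cited for context.
\gamma \;=\; \sum_{n=1}^{\infty} \frac{|G_n|}{n}
       \;=\; \frac{1}{2} + \frac{1}{24} + \frac{1}{72} + \frac{19}{2880} + \cdots
```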
Title: Spin - Phonon Coupling in Nickel Oxide Determined from Ultraviolet Raman Spectroscopy,
Abstract: Nickel oxide (NiO) has been studied extensively for various applications
ranging from electrochemistry to solar cells [1,2]. In recent years, NiO
attracted much attention as an antiferromagnetic (AF) insulator material for
spintronic devices [3-10]. Understanding the spin - phonon coupling in NiO is a
key to its functionalization, and enabling AF spintronics' promise of
ultra-high-speed and low-power dissipation [11,12]. However, despite its status
as an exemplary AF insulator and a benchmark material for the study of
correlated electron systems, little is known about the spin - phonon
interaction, and the associated energy dissipation channel, in NiO. In
addition, there is a long-standing controversy over the large discrepancies
between the experimental and theoretical values for the electron, phonon, and
magnon energies in NiO [13-23]. This gap in knowledge is explained by NiO
optical selection rules, high Néel temperature and dominance of the magnon band
in the visible Raman spectrum, which precludes a conventional approach for
investigating such interaction. Here we show that by using ultraviolet (UV)
Raman spectroscopy one can extract the spin - phonon coupling coefficients in
NiO. We established that unlike in other materials, the spins of Ni atoms
interact more strongly with the longitudinal optical (LO) phonons than with the
transverse optical (TO) phonons, and produce opposite effects on the phonon
energies. The peculiarities of the spin - phonon coupling are consistent with
the trends given by density functional theory calculations. The obtained
results shed light on the nature of the spin - phonon coupling in AF insulators
and may help in developing innovative spintronic devices. | [
0,
1,
0,
0,
0,
0
] |
Title: On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis,
Abstract: Text preprocessing is often the first step in the pipeline of a Natural
Language Processing (NLP) system, with potential impact in its final
performance. Despite its importance, text preprocessing has not received much
attention in the deep learning literature. In this paper we investigate the
impact of simple text preprocessing decisions (particularly tokenizing,
lemmatizing, lowercasing and multiword grouping) on the performance of a
standard neural text classifier. We perform an extensive evaluation on standard
benchmarks from text categorization and sentiment analysis. While our
experiments show that a simple tokenization of input text is generally
adequate, they also highlight significant degrees of variability across
preprocessing techniques. This reveals the importance of paying attention to
this usually-overlooked step in the pipeline, particularly when comparing
different models. Finally, our evaluation provides insights into the best
preprocessing practices for training word embeddings. | [
1,
0,
0,
0,
0,
0
] |
Title: On the zeros of the Riemann $\Xi(z)$ function,
Abstract: The Riemann $\Xi(z)$ function (even in $z$) admits a Fourier transform of an
even kernel $\Phi(t)=4e^{9t/2}\theta''(e^{2t})+6e^{5t/2}\theta'(e^{2t})$. Here
$\theta(x):=\theta_3(0,ix)$ and $\theta_3(0,z)$ is a Jacobi theta function, a
modular form of weight $\frac{1}{2}$. (A) We discover a family of functions
$\{\Phi_n(t)\}_{n\geqslant 2}$ whose Fourier transform on compact support
$(-\frac{1}{2}\log n, \frac{1}{2}\log n)$, $\{F(n,z)\}_{n\geqslant2}$,
converges to $\Xi(z)$ uniformly in the critical strip $S_{1/2}:=\{|\Im(z)|<
\frac{1}{2}\}$. (B) Based on this we then construct another family of functions
$\{H(14,n,z)\}_{n\geqslant 2}$ and show that it uniformly converges to $\Xi(z)$
in the critical strip $S_{1/2}$. (C) Based on this we construct another family
of functions $\{W(n,z)\}_{n\geqslant 8}:=\{H(14,n,2z/\log n)\}_{n\geqslant 8}$
and show that if all the zeros of $\{W(n,z)\}_{n\geqslant 8}$ in the critical
strip $S_{1/2}$ are real, then all the zeros of $\{H(14,n,z)\}_{n\geqslant 8}$
in the critical strip $S_{1/2}$ are real. (D) We then show that
$W(n,z)=U(n,z)-V(n,z)$ and $U(n,z^{1/2})$ and $V(n,z^{1/2})$ have only real,
positive and simple zeros. And there exists a positive integer $N\geqslant 8$
such that for all $n\geqslant N$, the zeros of $U(n,x^{1/2})$ are strictly
left-interlacing with those of $V(n,x^{1/2})$. Using an entire function
equivalent to Hermite-Kakeya Theorem for polynomials we show that $W(n\geqslant
N,z^{1/2})$ has only real, positive and simple zeros. Thus $W(n\geqslant N,z)$
has only real and simple zeros. (E) Using a corollary of Hurwitz's theorem in
complex analysis we prove that $\Xi(z)$ has no zeros in
$S_{1/2}\setminus\mathbb{R}$, i.e., $S_{1/2}\setminus \mathbb{R}$ is a
zero-free region for $\Xi(z)$. Since all the zeros of $\Xi(z)$ are in
$S_{1/2}$, all the zeros of $\Xi(z)$ are in $\mathbb{R}$, i.e., all the zeros
of $\Xi(z)$ are real. | [
0,
0,
1,
0,
0,
0
] |
Title: The boundary value problem for Yang--Mills--Higgs fields,
Abstract: We show the existence of Yang--Mills--Higgs (YMH) fields over a Riemann
surface with boundary where a free boundary condition is imposed on the section
and a Neumann boundary condition on the connection. In technical terms, we
study the convergence and blow-up behavior of a sequence of Sacks-Uhlenbeck
type $\alpha$-YMH fields as $\alpha\to 1$. For $\alpha>1$, each $\alpha$-YMH
field is shown to be smooth up to the boundary under some gauge transformation.
This is achieved by showing a regularity theorem for more general coupled
systems, which extends the classical results of Ladyzhenskaya-Ural'ceva and
Morrey. | [
0,
0,
1,
0,
0,
0
] |
Title: Iterated failure rate monotonicity and ordering relations within Gamma and Weibull distributions,
Abstract: Stochastic ordering of distributions of random variables may be defined by
the relative convexity of the tail functions. This has been extended to higher
order stochastic orderings, by iteratively reassigning tail-weights. The actual
verification of those stochastic orderings is not simple, as this depends on
inverting distribution functions for which there may be no explicit expression.
The iterative definition of distributions, of course, contributes to make that
verification even harder. We have a look at the stochastic ordering,
introducing a method that allows for explicit usage, applying it to the Gamma
and Weibull distributions, giving a complete description of the order relations
within each of those families. | [
0,
0,
1,
1,
0,
0
] |
Title: Quantum depletion of a homogeneous Bose-Einstein condensate,
Abstract: We have measured the quantum depletion of an interacting homogeneous
Bose-Einstein condensate, and confirmed the 70-year-old theory of N.N.
Bogoliubov. The observed condensate depletion is reversibly tuneable by
changing the strength of the interparticle interactions. Our atomic homogeneous
condensate is produced in an optical-box trap, the interactions are tuned via a
magnetic Feshbach resonance, and the condensed fraction probed by coherent
two-photon Bragg scattering. | [
0,
1,
0,
0,
0,
0
] |
Title: A Deep Generative Model for Graphs: Supervised Subset Selection to Create Diverse Realistic Graphs with Applications to Power Networks Synthesis,
Abstract: Creating and modeling real-world graphs is a crucial problem in various
applications of engineering, biology, and social sciences; however, learning
the distributions of nodes/edges and sampling from them to generate realistic
graphs is still challenging. Moreover, generating a diverse set of synthetic
graphs that all imitate a real network is not addressed. In this paper, the
novel problem of creating diverse synthetic graphs is solved. First, we devise
the deep supervised subset selection (DeepS3) algorithm: given a ground-truth
set of data points, DeepS3 selects a diverse subset of all items (i.e. data
points) that best represent the items in the ground-truth set. Furthermore, we
propose the deep graph representation recurrent network (GRRN) as a novel
generative model that learns a probabilistic representation of a real weighted
graph. Training the GRRN, we generate a large set of synthetic graphs that are
likely to follow the same features and adjacency patterns as the original one.
Incorporating GRRN with DeepS3, we select a diverse subset of generated graphs
that best represent the behaviors of the real graph (i.e. our ground-truth). We
apply our model to the novel problem of power grid synthesis, where a synthetic
power network is created with the same physical/geometric properties as a real
power system without revealing the real locations of the substations (nodes)
and the lines (edges), since such data is confidential. Experiments on the
Synthetic Power Grid Data Set show accurate synthetic networks that follow
similar structural and spatial properties as the real power grid. | [
1,
0,
0,
1,
0,
0
] |
Title: Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training,
Abstract: While strong progress has been made in image captioning over the last years,
machine and human captions are still quite distinct. A closer look reveals that
this is due to the deficiencies in the generated word distribution, vocabulary
size, and strong bias in the generators towards frequent captions. Furthermore,
humans -- rightfully so -- generate multiple, diverse captions, due to the
inherent ambiguity in the captioning task which is not considered in today's
systems.
To address these challenges, we change the training objective of the caption
generator from reproducing groundtruth captions to generating a set of captions
that is indistinguishable from human generated captions. Instead of
handcrafting such a learning target, we employ adversarial training in
combination with an approximate Gumbel sampler to implicitly match the
generated distribution to the human one. While our method achieves comparable
performance to the state-of-the-art in terms of the correctness of the
captions, we generate a set of diverse captions, that are significantly less
biased and match the word statistics better in several aspects. | [
1,
0,
0,
0,
0,
0
] |
Title: GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework,
Abstract: There is a pressing need to build an architecture that could subsume
discretized (binary and ternary) neural networks under a unified framework that
achieves both higher performance and
less overhead. To this end, two fundamental issues are yet to be addressed. The
first one is how to implement the back propagation when neuronal activations
are discrete. The second one is how to remove the full-precision hidden weights
in the training phase to break the bottlenecks of memory/computation
consumption. To address the first issue, we present a multi-step neuronal
activation discretization method and a derivative approximation technique that
enable implementing the back propagation algorithm on discrete DNNs. While
for the second issue, we propose a discrete state transition (DST) methodology
to constrain the weights in a discrete space without saving the hidden weights.
Through this way, we build a unified framework that subsumes the binary or
ternary networks as its special cases, and under which a heuristic algorithm is
provided at the website this https URL. More
particularly, we find that when both the weights and activations become ternary
values, the DNNs can be reduced to sparse binary networks, termed as gated XNOR
networks (GXNOR-Nets) since only the event of non-zero weight and non-zero
activation enables the control gate to start the XNOR logic operations in the
original binary networks. This promises the event-driven hardware design for
efficient mobile intelligence. We achieve advanced performance compared with
state-of-the-art algorithms. Furthermore, the computational sparsity and the
number of states in the discrete space can be flexibly modified to make it
suitable for various hardware platforms. | [
1,
0,
0,
1,
0,
0
] |
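A minimal sketch of ternary quantization in the spirit of the abstract above: a dead-zone threshold maps real-valued weights to {-1, 0, +1}. The threshold `delta` and the straight-through-style training remark are illustrative assumptions, not the paper's exact discrete state transition (DST) rule.

```python
import numpy as np

def ternarize(w, delta=0.05):
    """Map real-valued weights to {-1, 0, +1} using a dead-zone threshold.

    Values with |w| <= delta become 0; the sign is kept otherwise. During
    training, the discrete forward pass is typically paired with a smooth
    surrogate derivative (e.g. identity inside a clipping window).
    """
    q = np.sign(w)
    q[np.abs(w) <= delta] = 0.0
    return q

weights = np.array([0.30, -0.02, -0.70, 0.04, 0.00, -0.40])
print(ternarize(weights))   # [ 1.  0. -1.  0.  0. -1.]
```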
Title: Thermoelectric Transport Coefficients from Charged Solv and Nil Black Holes,
Abstract: In the present work we study charged black hole solutions of the
Einstein-Maxwell action that have Thurston geometries on its near horizon
region. In particular we find solutions with charged Solv and Nil geometry
horizons. We also find Nil black holes with hyperscaling violation. For all our
solutions we compute the thermoelectric DC transport coefficients of the
corresponding dual field theory. We find that the Solv and Nil black holes
without hyperscaling violation are dual to metals while those with hyperscaling
violation are dual to insulators. | [
0,
1,
0,
0,
0,
0
] |
Title: A unified method for maximal truncated Calderón-Zygmund operators in general function spaces by sparse domination,
Abstract: In this note we give simple proofs of several results involving maximal
truncated Calderón-Zygmund operators in the general setting of rearrangement
invariant quasi-Banach function spaces by sparse domination. Our techniques
allow us to track the dependence of the constants in weighted norm
inequalities; additionally, our results hold in $\mathbb{R}^n$ as well as in
many spaces of homogeneous type. | [
0,
0,
1,
0,
0,
0
] |
Title: Evaluating Quality of Chatbots and Intelligent Conversational Agents,
Abstract: Chatbots are one class of intelligent, conversational software agents
activated by natural language input (which can be in the form of text, voice,
or both). They provide conversational output in response, and if commanded, can
sometimes also execute tasks. Although chatbot technologies have existed since
the 1960s and have influenced user interface development in games since the
early 1980s, chatbots are now easier to train and implement. This is due to
plentiful open source code, widely available development platforms, and
implementation options via Software as a Service (SaaS). In addition to
enhancing customer experiences and supporting learning, chatbots can also be
used to engineer social harm - that is, to spread rumors and misinformation, or
attack people for posting their thoughts and opinions online. This paper
presents a literature review of quality issues and attributes as they relate to
the contemporary issue of chatbot development and implementation. Finally,
quality assessment approaches are reviewed, and a quality assessment method
based on these attributes and the Analytic Hierarchy Process (AHP) is proposed
and examined. | [
1,
0,
0,
0,
0,
0
] |
Title: Simulation study of signal formation in position sensitive planar p-on-n silicon detectors after short range charge injection,
Abstract: Segmented silicon detectors (micropixel and microstrip) are the main type of
detectors used in the inner trackers of Large Hadron Collider (LHC) experiments
at CERN. Due to the high luminosity and eventual high fluence, detectors with
fast response to fit the short shaping time of 20 ns and sufficient radiation
hardness are required.
Measurements carried out at the Ioffe Institute have shown a reversal of the
pulse polarity in the detector response to short-range charge injection. Since
the measured negative signal is about 30-60% of the peak positive signal, the
effect strongly reduces the charge collection efficiency (CCE) even in non-irradiated detectors. For further
investigation of the phenomenon the measurements have been reproduced by TCAD
simulations.
As for the measurements, the simulation study was applied for the p-on-n
strip detectors similar in geometry to those developed for the ATLAS experiment
and for the Ioffe Institute designed p-on-n strip detectors with each strip
having a window in the metallization covering the p$^+$ implant, allowing the
generation of electron-hole pairs under the strip implant. Red laser scans
across the strips and the interstrip gap with varying laser diameters and
Si-SiO$_2$ interface charge densities were carried out. The results verify the
experimentally observed negative response along the scan in the interstrip gap.
When the laser spot is positioned on the strip p$^+$ implant the negative
response vanishes and the collected charge at the active strip proportionally
increases.
The simulation results offer a further insight and understanding of the
influence of the oxide charge density in the signal formation. The observed
effects and details of the detector response for different charge injection
positions are discussed in the context of Ramo's theorem. | [
0,
1,
0,
0,
0,
0
] |
Title: Re-DPoctor: Real-time health data releasing with w-day differential privacy,
Abstract: Wearable devices enable users to collect health data and share them with
healthcare providers for improved health service. Since health data contain
privacy-sensitive information, unprotected data release system may result in
privacy leakage problem. Most of the existing work uses differential privacy for
private data release. However, they have limitations in healthcare scenarios
because they do not consider the unique features of health data being collected
from wearables, such as continuous real-time collection and pattern
preservation. In this paper, we propose Re-DPoctor, a real-time health data
releasing scheme with $w$-day differential privacy where the privacy of health
data collected from any consecutive $w$ days is preserved. We improve utility
by using a specially-designed partition algorithm to protect the health data
patterns. Meanwhile, we improve privacy preservation by applying a newly proposed
adaptive sampling technique and budget allocation method. We prove that
Re-DPoctor satisfies $w$-day differential privacy. Experiments on real health
data demonstrate that our method achieves better utility with strong privacy
guarantee than existing state-of-the-art methods. | [
1,
0,
0,
0,
0,
0
] |
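To make the privacy-budget arithmetic concrete, here is a baseline sketch that perturbs one day's statistic with Laplace noise under an evenly split w-day budget. Re-DPoctor's adaptive sampling, partitioning and budget allocation are not reproduced; the sensitivity and epsilon values are assumptions.

```python
import numpy as np

def laplace_release(daily_value, sensitivity, epsilon_total, w,
                    rng=np.random.default_rng(0)):
    """Release one day's statistic under an evenly split w-day budget.

    Each day spends epsilon_total / w, so any window of w consecutive days
    consumes at most epsilon_total. Re-DPoctor allocates this budget
    adaptively; an even split is the simplest baseline.
    """
    eps_day = epsilon_total / w
    noise = rng.laplace(loc=0.0, scale=sensitivity / eps_day)
    return daily_value + noise

steps_today = 8450          # e.g. a daily step count from a wearable
print(laplace_release(steps_today, sensitivity=1.0, epsilon_total=1.0, w=7))
```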
Title: Efficient Computation of the Stochastic Behavior of Partial Sum Processes,
Abstract: In this paper the computational aspects of probability calculations for
dynamical partial sum expressions are discussed. Such dynamical partial sum
expressions have many important applications, and examples are provided in the
fields of reliability, product quality assessment, and stochastic control.
While these probability calculations are ostensibly of a high dimension, and
consequently intractable in general, it is shown how a recursive integration
methodology can be implemented to obtain exact calculations as a series of
two-dimensional calculations. The computational aspects of the implementation of
this methodology, with the adoption of Fast Fourier Transforms, are discussed. | [
0,
0,
0,
1,
0,
0
] |
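A discrete analogue of the recursive approach described above: the distribution of a partial sum of i.i.d. discrete increments is obtained by repeated convolution of the increment pmf, which is exactly the step that FFT-based implementations accelerate. The toy pmf is an assumption.

```python
import numpy as np

def partial_sum_pmf(increment_pmf, n):
    """Distribution of S_n = X_1 + ... + X_n for i.i.d. discrete increments.

    Each step is one convolution of the running pmf with the increment pmf;
    for long series this is the operation FFT-based methods speed up.
    """
    pmf = np.array(increment_pmf, dtype=float)
    total = pmf.copy()
    for _ in range(n - 1):
        total = np.convolve(total, pmf)
    return total

# increment takes values 0, 1, 2 with probabilities 0.2, 0.5, 0.3
print(partial_sum_pmf([0.2, 0.5, 0.3], n=3))   # pmf of S_3 on support 0..6
```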
Title: A Physicist's view on Chopin's Études,
Abstract: We propose the use of specific dynamical processes and more in general of
ideas from Physics to model the evolution in time of musical structures. We
apply this approach to two Études by F. Chopin, namely op.10 n.3 and op.25
n.1, proposing some original description based on concepts of symmetry
breaking/restoration and quantum coherence, which could be useful for
interpretation. In this analysis, we take advantage of colored musical scores,
obtained by implementing Scriabin's color code for sounds to musical notation. | [
0,
1,
0,
0,
0,
0
] |
Title: A mean-field approach to Kondo-attractive-Hubbard model,
Abstract: With the purpose of investigating coexistence between magnetic order and
superconductivity, we consider a model in which conduction electrons interact
with each other, via an attractive Hubbard on-site coupling $U$, and with local
moments on every site, via a Kondo-like coupling, $J$. The model is solved on a
simple cubic lattice through a Hartree-Fock approximation, within a
`semi-classical' framework which allows spiral magnetic modes to be stabilized.
For a fixed electronic density, $n_c$, the small $J$ region of the ground state
($T=0$) phase diagram displays spiral antiferromagnetic (SAFM) states for small
$U$. Upon increasing $U$, a state with coexistence between superconductivity
(SC) and SAFM sets in; further increase in $U$ turns the spiral mode into a
Néel antiferromagnet. The large $J$ region is a (singlet) Kondo phase. At
finite temperatures, and in the region of coexistence, thermal fluctuations
suppress the different ordered phases in succession: the SAFM phase at lower
temperatures and SC at higher temperatures; also, reentrant behaviour is found
to be induced by temperature. Our results provide a qualitative description of
the competition between local moment magnetism and superconductivity in the
borocarbides family. | [
0,
1,
0,
0,
0,
0
] |
Title: Quantum models as classical cellular automata,
Abstract: A synopsis is offered of the properties of discrete and integer-valued, hence
"natural", cellular automata (CA). A particular class comprises the
"Hamiltonian CA" with discrete updating rules that resemble Hamilton's
equations. The resulting dynamics is linear like the unitary evolution
described by the Schrödinger equation. Employing Shannon's Sampling Theorem,
we construct an invertible map between such CA and continuous quantum
mechanical models which incorporate a fundamental discreteness scale $l$.
Consequently, there is a one-to-one correspondence of quantum mechanical and CA
conservation laws. We discuss the important issue of linearity, recalling that
nonlinearities imply nonlocal effects in the continuous quantum mechanical
description of intrinsically local discrete CA - requiring locality entails
linearity. The admissible CA observables and the existence of solutions of the
$l$-dependent dispersion relation for stationary states are mentioned, besides
the construction of multipartite CA obeying the Superposition Principle. We
point out problems when trying to match the deterministic CA here to those
envisioned in 't Hooft's CA Interpretation of Quantum Mechanics. | [
0,
1,
1,
0,
0,
0
] |
Title: K-closedness of weighted Hardy spaces on the two-dimensional torus,
Abstract: It is proved that, under certain restrictions on weights, a pair of weighted
Hardy spaces on the two-dimensional torus is K-closed in the pair of the
corresponding weighted Lebesgue spaces. By now, K-closedness of Hardy spaces on
the two-dimensional torus was considered either in the case of no weights or in
the case of weights that split into a product of two functions of one variable
(the so-called "split weights"). Here the case of certain nonsplit weights is
studied. | [
0,
0,
1,
0,
0,
0
] |
Title: Non-Negative Matrix Factorization Test Cases,
Abstract: Non-negative matrix factorization (NMF) is a problem with many
applications, ranging from facial recognition to document clustering. However,
due to the variety of algorithms that solve NMF, the randomness involved in
these algorithms, and the somewhat subjective nature of the problem, there is
no clear "correct answer" to any particular NMF problem, and as a result, it
can be hard to test new algorithms. This paper suggests some test cases for NMF
algorithms derived from matrices with enumerable exact non-negative
factorizations and perturbations of these matrices. Three algorithms using
widely divergent approaches to NMF all give similar solutions over these test
cases, suggesting that these test cases could be used as test cases for
implementations of these existing NMF algorithms as well as potentially new NMF
algorithms. This paper also describes how the proposed test cases could be used
in practice. | [
1,
0,
1,
0,
0,
0
] |
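A minimal sketch of the testing idea in the abstract above: build a matrix with a known exact non-negative factorization, perturb it slightly, and check how well an NMF solver reconstructs it. scikit-learn's NMF is used only as a stand-in; the paper's specific algorithms and test matrices are not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
W_true = rng.uniform(size=(20, 3))
H_true = rng.uniform(size=(3, 15))
X_exact = W_true @ H_true                                    # known exact rank-3 factorization
X_noisy = X_exact + 0.01 * rng.uniform(size=X_exact.shape)   # perturbed copy

for name, X in [("exact", X_exact), ("perturbed", X_noisy)]:
    model = NMF(n_components=3, init="random", random_state=0, max_iter=2000)
    W = model.fit_transform(X)
    H = model.components_
    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"{name}: relative reconstruction error = {err:.4f}")
```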
Title: Inequalities for the fundamental Robin eigenvalue of the Laplacian for box-shaped domains,
Abstract: This document consists of two papers, both submitted, and supplementary
material. The submitted papers are here given as Parts I and II.
Part I establishes results, used in Part II, 'on functions and inverses, both
positive, decreasing and convex'.
Part II uses results from Part I to establish 'inequalities for the
fundamental Robin eigenvalue for the Laplacian on N-dimensional boxes'. | [
0,
0,
1,
0,
0,
0
] |
Title: Continuum percolation theory of epimorphic regeneration,
Abstract: A biophysical model of epimorphic regeneration based on a continuum
percolation process of fully penetrable disks in two dimensions is proposed.
All cells within a randomly chosen disk of the regenerating organism are
assumed to receive a signal in the form of a circular wave as a result of the
action/reconfiguration of neoblasts and neoblast-derived mesenchymal cells in
the blastema. These signals trigger the growth of the organism, whose cells
read, on a faster time scale, the electric polarization state responsible for
their differentiation and the resulting morphology. In the long time limit, the
process leads to a morphological attractor that depends on experimentally
accessible control parameters governing the blockage of cellular gap junctions
and, therefore, the connectivity of the multicellular ensemble. When this
connectivity is weakened, positional information is degraded leading to more
symmetrical structures. This general theory is applied to the specifics of
planaria regeneration. Computations and asymptotic analyses made with the model
show that it correctly describes a significant subset of the most prominent
experimental observations, notably anterior-posterior polarization (and its
loss) or the formation of four-headed planaria. | [
0,
1,
0,
0,
0,
0
] |
Title: Multiplicity of solutions for a nonhomogeneous quasilinear elliptic problem with critical growth,
Abstract: It is established some existence and multiplicity of solution results for a
quasilinear elliptic problem driven by $\Phi$-Laplacian operator. One of these
solutions is built as a ground state solution. In order to prove our main
results we apply the Nehari method combined with the concentration compactness
theorem in an Orlicz-Sobolev framework. One of the difficulties in dealing with
this kind of operator is the lost of homogeneity properties. | [
0,
0,
1,
0,
0,
0
] |
Title: Mahonian STAT on rearrangement class of words,
Abstract: In 2000, Babson and Steingrímsson generalized the notion of permutation
patterns to the so-called vincular patterns, and they showed that many Mahonian
statistics can be expressed as sums of vincular pattern occurrence statistics.
STAT is one such Mahonian statistic discovered by them. In 2016, Kitaev and
the third author introduced a words analogue of STAT and proved a joint
equidistribution result involving two sextuple statistics on the whole set of
words with fixed length and alphabet. Moreover, their computer experiments
hinted at a finer involution on $R(w)$, the rearrangement class of a given word
$w$. We construct such an involution in this paper, which yields a comparable
joint equidistribution between two sextuple statistics over $R(w)$. Our
involution builds on Burstein's involution and Foata-Schützenberger's
involution that utilizes the celebrated RSK algorithm. | [
1,
0,
0,
0,
0,
0
] |
Title: A bibliometric approach to Systematic Mapping Studies: The case of the evolution and perspectives of community detection in complex networks,
Abstract: Critical analysis of the state of the art is a necessary task when
identifying new research lines worthwhile to pursue. To such an end, all the
available work related to the field of interest must be taken into account. The
key point is how to organize, analyze, and make sense of the huge amount of
scientific literature available today on any topic. To tackle this problem, we
present here a bibliometric approach to Systematic Mapping Studies (SMS). Thus,
a modified SMS protocol is used, relying on the scientific references' metadata to
extract, process and interpret the wealth of information contained in nowadays
research literature. As a test case, the procedure is applied to determine the
current state and perspectives of community detection in complex networks. Our
results show that community detection is still an active, far from exhausted, and
developing field. In addition, we find that, by far, the most exploited
methods are those related to determining hierarchical community structures. On
the other hand, the results show that fuzzy clustering techniques, despite
their interest, are underdeveloped, as is the adaptation of existing
algorithms to parallel or, more specifically, distributed computational
systems. | [
1,
1,
0,
0,
0,
0
] |
Title: SPEW: Synthetic Populations and Ecosystems of the World,
Abstract: Agent-based models (ABMs) simulate interactions between autonomous agents in
constrained environments over time. ABMs are often used for modeling the spread
of infectious diseases. In order to simulate disease outbreaks or other
phenomena, ABMs rely on "synthetic ecosystems," or information about agents and
their environments that is representative of the real world. Previous
approaches for generating synthetic ecosystems have some limitations: they are
not open-source, cannot be adapted to new or updated input data sources, and do
not allow for alternative methods for sampling agent characteristics and
locations. We introduce a general framework for generating Synthetic
Populations and Ecosystems of the World (SPEW), implemented as an open-source R
package. SPEW allows researchers to choose from a variety of sampling methods
for agent characteristics and locations when generating synthetic ecosystems
for any geographic region. SPEW can produce synthetic ecosystems for any agent
(e.g. humans, mosquitoes, etc), provided that appropriate data is available. We
analyze the accuracy and computational efficiency of SPEW given different
sampling methods for agent characteristics and locations and provide a suite of
diagnostics to screen our synthetic ecosystems. SPEW has generated over five
billion human agents across approximately 100,000 geographic regions in about
70 countries, available online. | [
0,
1,
0,
1,
0,
0
] |
Title: Comparison of the h-index for Different Fields of Research Using Bootstrap Methodology,
Abstract: An important disadvantage of the h-index is that typically it cannot take
into account the specific field of research of a researcher. Usually sample
point estimates of the average and median h-index values for the various fields
are reported that are highly variable and dependent of the specific samples and
it would be useful to provide confidence intervals of prediction accuracy. In
this paper we apply the non-parametric bootstrap technique for constructing
confidence intervals for the h-index for different fields of research. In this
way no specific assumptions about the distribution of the empirical h-index are
required, nor are large samples, since the methodology is based on
resampling from the initial sample. The results of the analysis showed
important differences between the various fields. The performance of the
bootstrap intervals for the mean and median h-index for most fields seems to be
rather satisfactory as revealed by the performed simulation. | [
1,
0,
0,
1,
0,
0
] |
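A small sketch of the percentile bootstrap used above, applied to a toy sample of h-indices for one field; the sample values, number of resamples and confidence level are assumptions.

```python
import numpy as np

def bootstrap_ci(values, stat=np.mean, n_boot=10_000, alpha=0.05,
                 rng=np.random.default_rng(0)):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the observed h-indices with replacement, so no distributional
    assumption and no large-sample approximation is needed.
    """
    values = np.asarray(values)
    stats = np.array([stat(rng.choice(values, size=values.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

h_indices = [3, 5, 7, 8, 12, 15, 18, 22, 27, 40]   # toy sample for one field
print("mean h-index 95% CI:", bootstrap_ci(h_indices, np.mean))
print("median h-index 95% CI:", bootstrap_ci(h_indices, np.median))
```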
Title: Impact of Intervals on the Emotional Effect in Western Music,
Abstract: Every art form ultimately aims to invoke an emotional response over the
audience, and music is no different. While the precise perception of music is a
highly subjective topic, there is an agreement in the "feeling" of a piece of
music in broad terms. Based on this observation, in this study, we aimed to
determine the emotional feeling associated with short passages of music;
specifically by analyzing the melodic aspects. We have used the dataset put
together by Eerola et al., which is comprised of labeled short passages of film
music. Our initial survey of the dataset indicated that labels other than "happy" and
"sad" do not possess a melodic structure. We transcribed the main melody
of the happy and sad tracks and used the intervals between the notes to
classify them. Our experiments have shown that treating a melody as a
bag-of-intervals does not possess any predictive power whatsoever, whereas
counting intervals with respect to the key of the melody yielded a classifier
with 85% accuracy. | [
1,
0,
0,
0,
1,
0
] |
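To make the feature construction above concrete, the sketch below counts melodic intervals measured from the key's tonic (the representation the abstract reports as predictive), as opposed to a bag of consecutive intervals. The toy melody and MIDI encoding are assumptions, and no classifier is included.

```python
from collections import Counter

def intervals_from_key(melody_midi, tonic_midi):
    """Count each note's pitch-class interval above the tonic (0..11).

    This is the 'intervals with respect to the key' representation; a plain
    bag of consecutive intervals would instead use differences between
    successive notes and, per the study, carries little predictive power.
    """
    return Counter((note - tonic_midi) % 12 for note in melody_midi)

# toy melody in C major (tonic = MIDI 60): C E G E D C
melody = [60, 64, 67, 64, 62, 60]
print(intervals_from_key(melody, tonic_midi=60))   # Counter({0: 2, 4: 2, 7: 1, 2: 1})
```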
Title: The Long Term Fréchet distribution: Estimation, Properties and its Application,
Abstract: In this paper a new long-term survival distribution is proposed. The so
called long term Fréchet distribution allows us to fit data where a part of
the population is not susceptible to the event of interest. This model may be
used, for example, in clinical studies where a portion of the population can be
cured during a treatment. An account of the mathematical properties of
the new distribution, such as its moments and survival properties, is given, and
the maximum likelihood estimators (MLEs) for the parameters are presented. A
numerical simulation is carried out in order to verify the performance of the
MLEs. Finally, an important application related to the leukemia-free survival
times of transplant patients is discussed to illustrate our proposed
distribution. | [
0,
0,
1,
1,
0,
0
] |
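A sketch of the standard long-term (cure-fraction) survival form S(t) = p + (1 - p) S0(t) with a Fréchet baseline, using SciPy's invweibull (the Fréchet distribution). The parameter values are assumptions, and this is not the paper's exact parameterization or estimation procedure.

```python
import numpy as np
from scipy.stats import invweibull   # the Fréchet distribution in SciPy

def long_term_frechet_sf(t, cure_prob, shape, scale):
    """Population survival with a cured (non-susceptible) fraction.

    S(t) = p + (1 - p) * S0(t), where S0 is a Fréchet survival function,
    so S(t) tends to p as t grows instead of dropping to zero.
    """
    return cure_prob + (1.0 - cure_prob) * invweibull.sf(t, shape, scale=scale)

t = np.array([0.5, 1.0, 2.0, 5.0, 20.0])
print(long_term_frechet_sf(t, cure_prob=0.3, shape=2.0, scale=1.5))
```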
Title: Active Expansion Sampling for Learning Feasible Domains in an Unbounded Input Space,
Abstract: Many engineering problems require identifying feasible domains under implicit
constraints. One example is finding acceptable car body styling designs based
on constraints like aesthetics and functionality. Current active-learning based
methods learn feasible domains for bounded input spaces. However, we usually
lack prior knowledge about how to set those input variable bounds. Bounds that
are too small will fail to cover all feasible domains; while bounds that are
too large will waste query budget. To avoid this problem, we introduce Active
Expansion Sampling (AES), a method that identifies (possibly disconnected)
feasible domains over an unbounded input space. AES progressively expands our
knowledge of the input space, and uses successive exploitation and exploration
stages to switch between learning the decision boundary and searching for new
feasible domains. We show that AES has a misclassification loss guarantee
within the explored region, independent of the number of iterations or labeled
samples. Thus it can be used for real-time prediction of samples' feasibility
within the explored region. We evaluate AES on three test examples and compare
AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm
and the straddle heuristic -- that operate over fixed input variable bounds. | [
1,
0,
0,
1,
0,
0
] |
Title: On diagrams of simplified trisections and mapping class groups,
Abstract: A simplified trisection is a trisection map on a 4-manifold such that, in its
critical value set, there is no double point and cusps only appear in triples
on innermost fold circles. We give a necessary and sufficient condition for a
3-tuple of systems of simple closed curves in a surface to be a diagram of a
simplified trisection in terms of mapping class groups. As an application of
this criterion, we show that trisections of spun 4-manifolds due to Meier are
diffeomorphic (as trisections) to simplified ones. Baykur and Saeki recently
gave an algorithmic construction of a simplified trisection from a directed
broken Lefschetz fibration. We also give an algorithm to obtain a diagram of a
simplified trisection derived from their construction. | [
0,
0,
1,
0,
0,
0
] |
Title: The Gaia-ESO Survey: Dynamical models of flattened, rotating globular clusters,
Abstract: We present a family of self-consistent axisymmetric rotating globular cluster
models which are fitted to spectroscopic data for NGC 362, NGC 1851, NGC 2808,
NGC 4372, NGC 5927 and NGC 6752 to provide constraints on their physical and
kinematic properties, including their rotation signals. They are constructed by
flattening Modified Plummer profiles, which have the same asymptotic behaviour
as classical Plummer models, but can provide better fits to young clusters due
to a slower turnover in the density profile. The models are in dynamical
equilibrium as they depend solely on the action variables. We employ a fully
Bayesian scheme to investigate the uncertainty in our model parameters
(including mass-to-light ratios and inclination angles) and evaluate the
Bayesian evidence ratio for rotating to non-rotating models. We find convincing
levels of rotation only in NGC 2808. In the other clusters, there is just a
hint of rotation (in particular, NGC 4372 and NGC 5927), as the data quality
does not allow us to draw strong conclusions. Where rotation is present, we
find that it is confined to the central regions, within radii of $R \leq 2
r_h$. As part of this work, we have developed a novel q-Gaussian basis
expansion of the line-of-sight velocity distributions, from which general
models can be constructed via interpolation on the basis coefficients. | [
0,
1,
0,
0,
0,
0
] |
Title: The rationality problem for forms of $\overline{M_{0, n}}$,
Abstract: Let $X$ be a del Pezzo surface of degree $5$ defined over a field $F$. A
theorem of Yu. I. Manin and P. Swinnerton-Dyer asserts that every del Pezzo
surface of degree $5$ is rational. In this paper we generalize this result as
follows. Recall that del Pezzo surfaces of degree $5$ over a field $F$ are
precisely the twisted $F$-forms of the moduli space $\overline{M_{0, 5}}$ of
stable curves of genus $0$ with $5$ marked points. Suppose $n \geq 5$ is an
integer, and $F$ is an infinite field of characteristic $\neq 2$. It is easy to
see that every twisted $F$-form of $\overline{M_{0, n}}$ is unirational over
$F$. We show that
(a) if $n$ is odd, then every twisted $F$-form of $\overline{M_{0, n}}$ is
rational over $F$.
(b) If $n$ is even, there exists a field extension $F/k$ and a twisted
$F$-form $X$ of $\overline{M_{0, n}}$ such that $X$ is not retract rational
over $F$. | [
0,
0,
1,
0,
0,
0
] |
Title: Regularization Learning Networks: Deep Learning for Tabular Datasets,
Abstract: Despite their impressive performance, Deep Neural Networks (DNNs) typically
underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning
tasks. We propose that applying a different regularization coefficient to each
weight might boost the performance of DNNs by allowing them to make more use of
the more relevant inputs. However, this will lead to an intractable number of
hyperparameters. Here, we introduce Regularization Learning Networks (RLNs),
which overcome this challenge by introducing an efficient hyperparameter tuning
scheme which minimizes a new Counterfactual Loss. Our results show that RLNs
significantly improve DNNs on tabular datasets, and achieve comparable results
to GBTs, with the best performance achieved with an ensemble that combines GBTs
and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of
the network edges and 82% of the input features, thus providing more
interpretable models and reveal the importance that the network assigns to
different inputs. RLNs could efficiently learn a single network in datasets
that comprise both tabular and unstructured data, such as in the setting of
medical imaging accompanied by electronic health records. An open source
implementation of RLN can be found at
this https URL. | [
0,
0,
0,
1,
0,
0
] |
Title: Numerical dimension and locally ample curves,
Abstract: In the paper \cite{Lau16}, it was shown that the restriction of a
pseudoeffective divisor $D$ to a subvariety $Y$ with nef normal bundle is
pseudoeffective. Assuming the normal bundle is ample and that $D|_Y$ is not
big, we prove that the numerical dimension of $D$ is bounded above by that of
its restriction, i.e. $\kappa_{\sigma}(D)\leq \kappa_{\sigma}(D|_Y)$. The main
motivation is to study the cycle classes of "positive" curves: we show that the
cycle class of a curve with ample normal bundle lies in the interior of the
cone of curves, and the cycle class of an ample curve lies in the interior of
the cone of movable curves. We do not impose any condition on the singularities
on the curve or the ambient variety. For locally complete intersection curves
in a smooth projective variety, this is the main result of Ottem \cite{Ott16}.
The main tool in this paper is the theory of $q$-ample divisors. | [
0,
0,
1,
0,
0,
0
] |
Title: Underscreening in concentrated electrolytes,
Abstract: Screening of a surface charge by electrolyte and the resulting interaction
energy between charged objects is of fundamental importance in scenarios from
bio-molecular interactions to energy storage. The conventional wisdom is that
the interaction energy decays exponentially with object separation and the
decay length is a decreasing function of ion concentration; the interaction is
thus negligible in a concentrated electrolyte. Contrary to this conventional
wisdom, we have shown by surface force measurements that the decay length is an
increasing function of ion concentration and Bjerrum length for concentrated
electrolytes. In this paper we report surface force measurements to test
directly the scaling of the screening length with Bjerrum length. Furthermore,
we identify a relationship between the concentration dependence of this
screening length and empirical measurements of activity coefficient and
differential capacitance. The dependence of the screening length on the ion
concentration and the Bjerrum length can be explained by a simple scaling
conjecture based on the physical intuition that solvent molecules, rather than
ions, are charge carriers in a concentrated electrolyte. | [
0,
1,
0,
0,
0,
0
] |
Title: Data-driven Probabilistic Atlases Capture Whole-brain Individual Variation,
Abstract: Probabilistic atlases provide essential spatial contextual information for
image interpretation, Bayesian modeling, and algorithmic processing. Such
atlases are typically constructed by grouping subjects with similar demographic
information. Importantly, use of the same scanner minimizes inter-group
variability. However, generalizability and spatial specificity of such
approaches is more limited than one might like. Inspired by Commowick's
"Frankenstein's creature paradigm", which builds a personal specific anatomical
atlas, we propose a data-driven framework to build a personal specific
probabilistic atlas under the large-scale data scheme. The data-driven
framework clusters regions with similar features using a point distribution
model to learn different anatomical phenotypes. Regional structural atlases and
corresponding regional probabilistic atlases are used as indices and targets in
the dictionary. By indexing the dictionary, the whole brain probabilistic
atlases adapt to each new subject quickly and can be used as spatial priors for
visualization and processing. The novelties of this approach are (1) it
provides a new perspective of generating personal specific whole brain
probabilistic atlases (132 regions) under a data-driven scheme across sites. (2)
The framework employs the large amount of heterogeneous data (2349 images). (3)
The proposed framework achieves low computational cost since only one affine
registration and Pearson correlation operation are required for a new subject.
Our method matches individual regions better with higher Dice similarity value
when testing the probabilistic atlases. Importantly, the advantage of the
large-scale scheme is demonstrated by the better performance of using
large-scale training data (1888 images) than a smaller training set (720 images). | [
0,
0,
0,
1,
1,
0
] |
Title: Learning to Communicate: A Machine Learning Framework for Heterogeneous Multi-Agent Robotic Systems,
Abstract: We present a machine learning framework for multi-agent systems to learn both
the optimal policy for maximizing the rewards and the encoding of the high
dimensional visual observation. The encoding is useful for sharing local visual
observations with other agents under communication resource constraints. The
actor-encoder encodes the raw images and chooses an action based on local
observations and messages sent by the other agents. The machine learning agent
generates not only an actuator command to the physical device, but also a
communication message to the other agents. We formulate a reinforcement
learning problem, which extends the action space to consider the communication
action as well. The feasibility of the reinforcement learning framework is
demonstrated using a 3D simulation environment with two collaborating agents.
The environment provides realistic visual observations to be used and shared
between the two agents. | [
1,
0,
0,
0,
0,
0
] |
Title: Optimal investment-consumption problem post-retirement with a minimum guarantee,
Abstract: We study the optimal investment-consumption problem for a member of defined
contribution plan during the decumulation phase. For a fixed annuitization
time, to achieve higher final annuity, we consider a variable consumption rate.
Moreover, to eliminate the ruin possibilities and having a minimum guarantee
for the final annuity, we consider a safety level for the wealth process which
consequently yields a Hamilton-Jacobi-Bellman (HJB) equation on a bounded
domain. We apply the policy iteration method to find approximations of solution
of the HJB equation. Finally, we give the simulation results for the optimal
investment-consumption strategies, optimal wealth process and the final annuity
for different ranges of admissible consumptions. Furthermore, by calculating
the present market value of the future cash flows before and after the
annuitization, we compare the results for different consumption policies. | [
0,
0,
0,
0,
0,
1
] |
Title: Parametrices for the light ray transform on Minkowski spacetime,
Abstract: We consider restricted light ray transforms arising from an inverse problem
of finding cosmic strings. We construct a relative left parametrix for the
transform on two-tensors, which recovers the space-like and some light-like
singularities of the two-tensor. | [
0,
0,
1,
0,
0,
0
] |
Title: A General Sequential Delay-Doppler Estimation Scheme for Sub-Nyquist Pulse-Doppler Radar,
Abstract: Sequential estimation of the delay and Doppler parameters for sub-Nyquist
radars by analog-to-information conversion (AIC) systems has received wide
attention recently. However, the estimation methods reported are AIC-dependent
and have poor performance for off-grid targets. This paper develops a general
estimation scheme in the sense that it is applicable to all AICs regardless
whether the targets are on or off the grids. The proposed scheme estimates the
delay and Doppler parameters sequentially, in which the delay estimation is
formulated into a beamspace direction-of-arrival problem and the Doppler
estimation is translated into a line spectrum estimation problem. Then the
well-known spatial and temporal spectrum estimation techniques are used to
provide efficient and high-resolution estimates of the delay and Doppler
parameters. In addition, sufficient conditions on the AIC to guarantee the
successful estimation of off-grid targets are provided, while the existing
conditions are mostly related to the on-grid targets. Theoretical analyses and
numerical experiments show the effectiveness and the correctness of the
proposed scheme. | [
1,
0,
1,
0,
0,
0
] |
Title: Assimilated LVEF: A Bayesian technique combining human intuition with machine measurement for sharper estimates of left ventricular ejection fraction and stronger association with outcomes,
Abstract: The cardiologist's main tool for measuring systolic heart failure is left
ventricular ejection fraction (LVEF). Trained cardiologists report both a
visual and a machine-guided measurement of LVEF, but only use the machine-guided
measurement in analysis. We use a Bayesian technique to combine visual and
machine-guided estimates from the PARTNER-IIA Trial, a cohort of patients with
aortic stenosis at moderate risk treated with bioprosthetic aortic valves, and
find our combined estimate reduces measurement errors and improves the
association between LVEF and a 1-year composite endpoint. | [
0,
0,
0,
1,
0,
0
] |
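As a toy illustration of combining a visual and a machine-guided reading of the same quantity, the sketch below uses inverse-variance (precision-weighted) fusion of two noisy estimates. The LVEF values and standard deviations are assumptions, and the trial's actual Bayesian model is richer than this.

```python
def combine_estimates(x1, var1, x2, var2):
    """Inverse-variance (precision-weighted) fusion of two noisy readings.

    The combined estimate has smaller variance than either input, which is
    the sense in which pooling visual and machine-guided LVEF sharpens it.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# visual LVEF 55% (sd 5) and machine-guided LVEF 60% (sd 4), both in percent
print(combine_estimates(55.0, 5.0**2, 60.0, 4.0**2))   # roughly (58.0, 9.8)
```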
Title: DBSCAN: Optimal Rates For Density Based Clustering,
Abstract: We study the problem of optimal estimation of the density cluster tree under
various assumptions on the underlying density. Building up from the seminal
work of Chaudhuri et al. [2014], we formulate a new notion of clustering
consistency which is better suited to smooth densities, and derive minimax
rates of consistency for cluster tree estimation for Holder smooth densities of
arbitrary degree \alpha. We present a computationally efficient, rate optimal
cluster tree estimator based on a straightforward extension of the popular
density-based clustering algorithm DBSCAN by Ester et al. [1996]. The procedure
relies on a kernel density estimator with an appropriate choice of the kernel
and bandwidth to produce a sequence of nested random geometric graphs whose
connected components form a hierarchy of clusters. The resulting optimal rates
for cluster tree estimation depend on the degree of smoothness of the
underlying density and, interestingly, match minimax rates for density
estimation under the supremum norm. Our results complement and extend the
analysis of the DBSCAN algorithm in Sriperumbudur and Steinwart [2012].
Finally, we consider level set estimation and cluster consistency for densities
with jump discontinuities, where the sizes of the jumps and the distance among
clusters are allowed to vanish as the sample size increases. We demonstrate
that our DBSCAN-based algorithm remains minimax rate optimal in this setting as
well. | [
0,
0,
1,
1,
0,
0
] |
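A rough sketch of the connection exploited above: sweeping DBSCAN's scale parameter produces a sequence of density clusterings whose connected components can be stacked into an empirical cluster tree. scikit-learn's DBSCAN is used, and the synthetic data, eps grid and min_samples are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# two Gaussian blobs plus uniform background noise
X = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),
               rng.normal(3.0, 0.3, size=(100, 2)),
               rng.uniform(-2.0, 5.0, size=(40, 2))])

# sweeping the scale parameter yields coarser-to-finer density clusterings,
# whose connected components stack into an empirical cluster tree
for eps in (0.15, 0.30, 0.60):
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps={eps}: {n_clusters} clusters, "
          f"{np.sum(labels == -1)} noise points")
```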
Title: Minimal Sum Labeling of Graphs,
Abstract: A graph $G$ is called a sum graph if there is a so-called sum labeling of
$G$, i.e. an injective function $\ell: V(G) \rightarrow \mathbb{N}$ such that
for every $u,v\in V(G)$ it holds that $uv\in E(G)$ if and only if there exists
a vertex $w\in V(G)$ such that $\ell(u)+\ell(v) = \ell(w)$. We say that sum
labeling $\ell$ is minimal if there is a vertex $u\in V(G)$ such that
$\ell(u)=1$. In this paper, we show that if we relax the conditions (either
allow non-injective labelings or consider graphs with loops) then there are sum
graphs without a minimal labeling, which partially answers the question posed
by Miller, Ryan and Smyth in 1998. | [
1,
0,
0,
0,
0,
0
] |
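A small checker for the sum-labeling condition stated in the abstract above, together with the simplest example (one edge plus one isolated vertex); the example graph and labels are illustrative.

```python
from itertools import combinations

def is_sum_labeling(vertices, edges, label):
    """Check the sum-graph condition: uv is an edge iff
    label(u) + label(v) equals the label of some vertex w."""
    if len({label[v] for v in vertices}) != len(vertices):
        return False                      # labeling must be injective
    label_set = {label[v] for v in vertices}
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(vertices, 2):
        has_edge = frozenset((u, v)) in edge_set
        if has_edge != (label[u] + label[v] in label_set):
            return False
    return True

# a single edge a-b plus an isolated vertex c
vertices = ["a", "b", "c"]
edges = [("a", "b")]
label = {"a": 1, "b": 2, "c": 3}
print(is_sum_labeling(vertices, edges, label))   # True; minimal since label 1 is used
```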
Title: Provably efficient RL with Rich Observations via Latent State Decoding,
Abstract: We study the exploration problem in episodic MDPs with rich observations
generated from a small number of latent states. Under certain identifiability
assumptions, we demonstrate how to estimate a mapping from the observations to
latent states inductively through a sequence of regression and clustering
steps---where previously decoded latent states provide labels for later
regression problems---and use it to construct good exploration policies. We
provide finite-sample guarantees on the quality of the learned state decoding
function and exploration policies, and complement our theory with an empirical
evaluation on a class of hard exploration problems. Our method exponentially
improves over $Q$-learning with naïve exploration, even when $Q$-learning has
cheating access to latent states. | [
1,
0,
0,
1,
0,
0
] |
Title: Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis,
Abstract: Generating versatile and appropriate synthetic speech requires control over
the output expression separate from the spoken text. Important non-textual
speech variation is seldom annotated, in which case output control must be
learned in an unsupervised fashion. In this paper, we perform an in-depth study
of methods for unsupervised learning of control in statistical speech
synthesis. For example, we show that popular unsupervised training heuristics
can be interpreted as variational inference in certain autoencoder models. We
additionally connect these models to VQ-VAEs, another, recently-proposed class
of deep variational autoencoders, which we show can be derived from a very
similar mathematical argument. The implications of these new probabilistic
interpretations are discussed. We illustrate the utility of the various
approaches with an application to acoustic modelling for emotional speech
synthesis, where the unsupervised methods for learning expression control
(without access to emotional labels) are found to give results that in many
aspects match or surpass the previous best supervised approach. | [
1,
0,
0,
1,
0,
0
] |
Title: AI Safety Gridworlds,
Abstract: We present a suite of reinforcement learning environments illustrating
various safety properties of intelligent agents. These problems include safe
interruptibility, avoiding side effects, absent supervisor, reward gaming, safe
exploration, as well as robustness to self-modification, distributional shift,
and adversaries. To measure compliance with the intended safe behavior, we
equip each environment with a performance function that is hidden from the
agent. This allows us to categorize AI safety problems into robustness and
specification problems, depending on whether the performance function
corresponds to the observed reward function. We evaluate A2C and Rainbow, two
recent deep reinforcement learning agents, on our environments and show that
they are not able to solve them satisfactorily. | [
1,
0,
0,
0,
0,
0
] |
Title: Maximality of Galois actions for abelian varieties,
Abstract: Let $\{\rho_\ell\}_\ell$ be the system of $\ell$-adic representations arising
from the $i$th $\ell$-adic cohomology of a complete smooth variety $X$ defined
over a number field $K$. Denote the image of $\rho_\ell$ by $\Gamma_\ell$ and
its Zariski closure, which is a linear algebraic group over $\mathbb{Q}_\ell$,
by $\mathbf{G}_\ell$. We prove that $\mathbf{G}_\ell^{red}$, the quotient of
$\mathbf{G}_\ell^\circ$ by its unipotent radical, is unramified over a totally
ramified extension of $\mathbb{Q}_\ell$ for all sufficiently large $\ell$. We
give a sufficient condition on $\{\rho_\ell\}_\ell$ such that for all
sufficiently large $\ell$, $\Gamma_\ell$ is in some sense maximal compact in
$\mathbf{G}_\ell(\mathbb{Q}_\ell)$. Since the condition is satisfied when $X$
is an abelian variety by the Tate conjecture, we obtain maximality of Galois
actions for abelian varieties. | [
0,
0,
1,
0,
0,
0
] |
Title: An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science,
Abstract: We apply The Tractor image modeling code to improve upon existing multi-band
photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS).
SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 micron
over five well-studied deep fields spanning 18 square degrees. In concert with
data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to
provide a census of the properties of massive galaxies out to z ~ 5. To
accomplish this, we are using The Tractor to perform "forced photometry." This
technique employs prior measurements of source positions and surface brightness
profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic
Observations (VIDEO) survey to model and fit the fluxes at lower-resolution
bands. We discuss our implementation of The Tractor over a square degree test
region within the XMM-LSS field with deep imaging in 12 NIR/optical bands. Our
new multi-band source catalogs offer a number of advantages over traditional
position-matched catalogs, including 1) consistent source cross-identification
between bands, 2) de-blending of sources that are clearly resolved in the
fiducial band but blended in the lower-resolution SERVS data, 3) a higher
source detection fraction in each band, 4) a larger number of candidate
galaxies in the redshift range 5 < z < 6, and 5) a statistically significant
improvement in the photometric redshift accuracy as evidenced by the
significant decrease in the fraction of outliers compared to spectroscopic
redshifts. Thus, forced photometry using The Tractor offers a means of
improving the accuracy of multi-band extragalactic surveys designed for galaxy
evolution studies. We will extend our application of this technique to the full
SERVS footprint in the future. | [
0,
1,
0,
0,
0,
0
] |
Title: SurfClipse: Context-Aware Meta Search in the IDE,
Abstract: Despite various debugging supports of the existing IDEs for programming
errors and exceptions, software developers often look at web for working
solutions or any up-to-date information. Traditional web search does not
consider the context of the problems that they search solutions for, and thus
it often does not help much in problem solving. In this paper, we propose a
context-aware meta search tool, SurfClipse, that analyzes an encountered
exception and its context in the IDE, and recommends not only suitable search
queries but also relevant web pages for the exception (and its context). The
tool collects results from three popular search engines and a programming Q & A
site against the exception in the IDE, refines the results for relevance
against the context of the exception, and then ranks them before
recommendation. It provides two working modes, interactive and proactive, to
meet the versatile needs of the developers, and one can browse the result pages
using a customized embedded browser provided by the tool.
Tool page: www.usask.ca/~masud.rahman/surfclipse | [
1,
0,
0,
0,
0,
0
] |
Title: Chomp on numerical semigroups,
Abstract: We consider the two-player game chomp on posets associated to numerical
semigroups and show that the analysis of strategies for chomp is strongly
related to classical properties of semigroups. We characterize which player
has a winning strategy for symmetric semigroups, semigroups of maximal
embedding dimension and several families of numerical semigroups generated by
arithmetic sequences. Furthermore, we show that which player wins on a given
numerical semigroup is a decidable question. Finally, we extend several of our
results to the more general setting of subsemigroups of $\mathbb{N} \times T$,
where $T$ is a finite abelian group. | [
1,
0,
1,
0,
0,
0
] |
Title: Deep Asymmetric Multi-task Feature Learning,
Abstract: We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can
learn deep representations shared across multiple tasks while effectively
preventing negative transfer that may happen in the feature sharing process.
Specifically, we introduce an asymmetric autoencoder term that allows reliable
predictors for the easy tasks to have high contribution to the feature learning
while suppressing the influences of unreliable predictors for more difficult
tasks. This allows the learning of less noisy representations, and enables
unreliable predictors to exploit knowledge from the reliable predictors via the
shared latent features. Such asymmetric knowledge transfer through shared
features is also more scalable and efficient than inter-task asymmetric
transfer. We validate our Deep-AMTFL model on multiple benchmark datasets for
multitask learning and image classification, on which it significantly
outperforms existing symmetric and asymmetric multitask learning models, by
effectively preventing negative transfer in deep feature learning. | [
1,
0,
0,
1,
0,
0
] |
Title: When is the mode functional the Bayes classifier?,
Abstract: In classification problems, the mode of the conditional probability
distribution, i.e., the most probable category, is the Bayes classifier under
zero-one or misclassification loss. Under any other cost structure, the mode
fails to persist. | [
0,
0,
1,
1,
0,
0
] |
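A tiny numerical illustration of the stated fact: under zero-one loss the expected-cost minimizer is the mode of p(y|x), while an asymmetric cost matrix shifts the minimizer away from the mode. The probabilities and cost matrix below are assumptions.

```python
import numpy as np

p = np.array([0.45, 0.35, 0.20])          # conditional class probabilities p(y | x)

# zero-one loss: expected cost of predicting class k is 1 - p[k],
# so the minimizer is the mode of the conditional distribution
print("0-1 loss prediction:", int(np.argmax(p)))          # class 0

# asymmetric costs: cost[k, y] of predicting k when the truth is y
cost = np.array([[0.0, 1.0, 10.0],
                 [1.0, 0.0, 1.0],
                 [5.0, 1.0, 0.0]])
expected = cost @ p                        # expected cost of each prediction
print("cost-sensitive prediction:", int(np.argmin(expected)))   # no longer the mode
```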
Title: CredSaT: Credibility Ranking of Users in Big Social Data incorporating Semantic Analysis and Temporal Factor,
Abstract: The widespread use of big social data has pointed the research community in
several significant directions. In particular, the notion of social trust has
attracted a great deal of attention from information processors (computer
scientists) and information consumers (formal organizations). This is evident in
various applications such as recommendation systems, viral marketing and
expertise retrieval. Hence, it is essential to have frameworks that can
temporally measure users' credibility in all domains categorised under big
social data. This paper presents CredSaT (Credibility incorporating Semantic
analysis and Temporal factor): a fine-grained user credibility analysis
framework for big social data. A novel metric that includes both new and
current features, as well as the temporal factor, is harnessed to establish the
credibility ranking of users. Experiments on a real-world dataset demonstrate the
effectiveness and applicability of our model in indicating highly domain-based
trustworthy users. Further, CredSaT shows the capacity to capture spammers
and other anomalous users. | [
1,
0,
0,
0,
0,
0
] |
Title: On singular Finsler foliation,
Abstract: In this paper we introduce the concept of singular Finsler foliation, which
generalizes the concepts of Finsler actions, Finsler submersions and (regular)
Finsler foliations. We show that if $\mathcal{F}$ is a singular Finsler
foliation on a Randers manifold $(M,Z)$ with Zermelo data $(\mathtt{h},W),$
then $\mathcal{F}$ is a singular Riemannian foliation on the Riemannian
manifold $(M,\mathtt{h} )$. As a direct consequence we infer that the regular
leaves are equifocal submanifolds (a generalization of isoparametric
submanifolds) when the wind $W$ is an infinitesimal homothety of $\mathtt{h}$
(e.g., when $W$ is a Killing vector field or $M$ has constant Finsler curvature).
We also present a slice theorem that relates local singular Finsler
foliations on Finsler manifolds with singular Finsler foliations on Minkowski
spaces. | [
0,
0,
1,
0,
0,
0
] |
Title: Prototyping and Experimentation of a Closed-Loop Wireless Power Transmission with Channel Acquisition and Waveform Optimization,
Abstract: A systematic design of adaptive waveform for Wireless Power Transfer (WPT)
has recently been proposed and shown through simulations to lead to significant
performance benefits compared to traditional non-adaptive and heuristic
waveforms. In this study, we design the first prototype of a closed-loop
wireless power transfer system with adaptive waveform optimization based on
Channel State Information acquisition. The prototype consists of three
important blocks, namely the channel estimator, the waveform optimizer, and the
energy harvester. Software Defined Radio (SDR) prototyping tools are used to
implement a wireless power transmitter and a channel estimator, and a voltage
doubler rectenna is designed to work as an energy harvester. A channel adaptive
waveform with 8 sinewaves is shown through experiments to improve the average
harvested DC power at the rectenna output by 9.8% to 36.8% over a non-adaptive
design with the same number of sinewaves. | [
1,
0,
0,
0,
0,
0
] |
Title: Large-Scale Cox Process Inference using Variational Fourier Features,
Abstract: Gaussian process modulated Poisson processes provide a flexible framework for
modelling spatiotemporal point patterns. So far this had been restricted to one
dimension, binning to a pre-determined grid, or small data sets of up to a few
thousand data points. Here we introduce Cox process inference based on Fourier
features. This sparse representation induces global rather than local
constraints on the function space and is computationally efficient. This allows
us to formulate a grid-free approximation that scales well with the number of
data points and the size of the domain. We demonstrate that this allows MCMC
approximations to the non-Gaussian posterior. We also find that, in practice,
Fourier features have more consistent optimization behavior than previous
approaches. Our approximate Bayesian method can fit over 100,000 events with
complex spatiotemporal patterns in three dimensions on a single GPU. | [
0,
0,
0,
1,
0,
0
] |
Title: Buildings-to-Grid Integration Framework,
Abstract: This paper puts forth a mathematical framework for Buildings-to-Grid (BtG)
integration in smart cities. The framework explicitly couples power grid and
building's control actions and operational decisions, and can be utilized by
buildings and power grids operators to simultaneously optimize their
performance. Simplified dynamics of building clusters and building-integrated
power networks with algebraic equations are presented---both operating at
different time-scales. A model predictive control (MPC)-based algorithm that
formulates the BtG integration and accounts for the time-scale discrepancy is
developed. The formulation captures dynamic and algebraic power flow
constraints of power networks and is shown to be numerically advantageous. The
paper analytically establishes that the BtG integration yields a reduced total
system cost in comparison with decoupled designs where grid and building
operators determine their controls separately. The developed framework is
tested on standard power networks that include thousands of buildings modeled
using industrial data. Case studies demonstrate building energy savings and
significant frequency regulation, while these findings carry over in network
simulations with nonlinear power flows and mismatch in building model
parameters. Finally, simulations indicate that the performance does not
significantly worsen when there is uncertainty in the forecasted weather and
base load conditions. | [
1,
0,
1,
0,
0,
0
] |