title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Topological Representation of the Transit Sets of k-Point Crossover Operators | $k$-point crossover operators and their recombination sets are studied from
different perspectives. We show that transit functions of $k$-point crossover
generate, for all $k>1$, the same convexity as the interval function of the
underlying graph. This settles in the negative an open problem by Mulder about
whether the geodesic convexity of a connected graph $G$ is uniquely determined
by its interval function $I$. The conjecture of Gitchoff and Wagner that for
each transit set $R_k(x,y)$ distinct from a hypercube there is a unique pair of
parents from which it is generated is settled affirmatively. Along the way we
characterize transit functions whose underlying graphs are Hamming graphs, and
those with underlying partial cube graphs. For general values of $k$ it is
shown that the transit sets of $k$-point crossover operators are the subsets
with maximal Vapnik-Chervonenkis dimension. Moreover, the transit sets of
$k$-point crossover on binary strings form the topes of a uniform oriented matroid of
VC-dimension $k+1$. The Topological Representation Theorem for oriented
matroids therefore implies that $k$-point crossover operators can be
represented by pseudosphere arrangements. This provides the tools necessary to
study the special case $k=2$ in detail.
| 1 | 0 | 1 | 0 | 0 | 0 |
Human-Robot Collaboration: From Psychology to Social Robotics | With the advances in robotic technology, research in human-robot
collaboration (HRC) has gained in importance. For robots to interact with
humans autonomously, they need active decision making that takes human partners
into account. However, state-of-the-art research in HRC often assumes a
leader-follower division, in which one agent leads the interaction. We believe
that this is caused by the lack of a reliable representation of the human and
the environment to allow autonomous decision making. This problem can be
overcome by an embodied approach to HRC which is inspired by psychological
studies of human-human interaction (HHI). In this survey, we review
neuroscientific and psychological findings of the sensorimotor patterns that
govern HHI and view them in a robotics context. Additionally, we study the
advances made by the robotics community in the direction of embodied HRC. We
focus on the mechanisms that are required for active, physical human-robot
collaboration. Finally, we discuss the similarities and differences in the two
fields of study which pinpoint directions of future research.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enhancing TCP End-to-End Performance in Millimeter-Wave Communications | Recently, millimeter-wave (mmWave) communications have received great
attention due to the availability of large spectrum resources. Nevertheless,
their impact on TCP performance has been overlooked; it has been observed that
TCP performance collapse occurs owing to the significant difference in signal
quality between LOS and NLOS links. We propose a novel TCP design for mmWave
communications, a mmWave performance enhancing proxy (mmPEP), which not only
overcomes TCP performance collapse but also exploits the properties of mmWave
channels. The base station installs the TCP proxy to operate two
functionalities, called Ack management and batch retransmission. Specifically,
the proxy sends an early Ack to the server so that the server does not decrease
its sending rate even in the NLOS state. In addition, when a packet loss is
detected, the proxy retransmits not only the lost packets but also a certain
number of following packets expected to be lost as well. It is verified by ns-3
simulation that, compared with the benchmark, mmPEP enhances the end-to-end
rate and packet delivery ratio by maintaining a high sending rate while
decreasing the loss recovery time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Adaptive recurrence quantum entanglement distillation for two-Kraus-operator channels | Quantum entanglement serves as a valuable resource for many important quantum
operations. A pair of entangled qubits can be shared between two agents by
first preparing a maximally entangled qubit pair at one agent, and then sending
one of the qubits to the other agent through a quantum channel. In this
process, the deterioration of entanglement is inevitable since the noise
inherent in the channel contaminates the qubit. To address this challenge,
various quantum entanglement distillation (QED) algorithms have been developed.
Among them, recurrence algorithms have advantages in terms of implementability
and robustness. However, the efficiency of recurrence QED algorithms has not
been investigated thoroughly in the literature. This paper puts forth two
recurrence QED algorithms that adapt to the quantum channel to tackle the
efficiency issue. The proposed algorithms have guaranteed convergence for
quantum channels with two Kraus operators, which include phase-damping and
amplitude-damping channels. Analytical results show that the convergence speed
of these algorithms is improved from linear to quadratic and one of the
algorithms achieves the optimal speed. Numerical results confirm that the
proposed algorithms significantly improve the efficiency of QED.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimised surface-electrode ion-trap junctions for experiments with cold molecular ions | We discuss the design and optimisation of two types of junctions between
surface-electrode radiofrequency ion-trap arrays that enable the integration of
experiments with sympathetically cooled molecular ions on a monolithic chip
device. A detailed description of a multi-objective optimisation procedure
applicable to an arbitrary planar junction is presented, and the results for a
cross junction between four quadrupoles as well as a quadrupole-to-octupole
junction are discussed. Based on these optimised functional elements, we
propose a multi-functional ion-trap chip for experiments with translationally
cold molecular ions at temperatures in the millikelvin range. This study opens
the door to extending complex chip-based trapping techniques to
Coulomb-crystallised molecular ions with potential applications in mass
spectrometry, spectroscopy, controlled chemistry and quantum technology.
| 0 | 1 | 0 | 0 | 0 | 0 |
A simple anisotropic three-dimensional quantum spin liquid with fracton topological order | We present a three-dimensional cubic lattice spin model, anisotropic in the
$\hat{z}$ direction, that exhibits fracton topological order. The latter is a
novel type of topological order characterized by the presence of immobile
pointlike excitations, named fractons, residing at the corners of an operator
with two-dimensional support. Like other recent fracton models, ours exhibits a
subextensive ground state degeneracy: On an $L_x\times L_y\times L_z$
three-torus, it has a $2^{2L_z}$ topological degeneracy, and an additional
non-topological degeneracy equal to $2^{L_xL_y-2}$. The fractons can be
combined into composite excitations that move either in a straight line along
the $\hat{z}$ direction, or freely in the $xy$ plane at a given height $z$.
While our model draws inspiration from the toric code, we demonstrate that it
cannot be adiabatically connected to a layered toric code construction.
Additionally, we investigate the effects of imposing open boundary conditions
on our system. We find zero energy modes on the surfaces perpendicular to
either the $\hat{x}$ or $\hat{y}$ directions, and their absence on the surfaces
normal to $\hat{z}$. This result can be explained using the properties of the
two kinds of composite two-fracton mobile excitations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Onsets and Frames: Dual-Objective Piano Transcription | We advance the state of the art in polyphonic piano music transcription by
using a deep convolutional and recurrent neural network which is trained to
jointly predict onsets and frames. Our model predicts pitch onset events and
then uses those predictions to condition framewise pitch predictions. During
inference, we restrict the predictions from the framewise detector by not
allowing a new note to start unless the onset detector also agrees that an
onset for that pitch is present in the frame. We focus on improving onsets and
offsets together instead of either in isolation as we believe this correlates
better with human musical perception. Our approach results in over 100%
relative improvement in note F1 score (with offsets) on the MAPS dataset.
Furthermore, we extend the model to predict relative velocities from the
normalized audio, which results in more natural-sounding transcriptions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Modified mean curvature flow of entire locally Lipschitz radial graphs in hyperbolic space | In a previous joint work of Xiao and the second author, the modified mean
curvature flow (MMCF) in hyperbolic space $\mathbb{H}^{n+1}$: $$\frac{\partial
\mathbf{F}}{\partial t} = (H-\sigma)\,\nu\,, \quad \sigma\in (-n,n)$$ was
first introduced and the flow starting from an entire Lipschitz continuous
radial graph with uniform local ball condition on the asymptotic boundary was
shown to exist for all time and converge to a complete hypersurface of constant
mean curvature with prescribed asymptotic boundary at infinity. In this paper,
we remove the uniform local ball condition on the asymptotic boundary of the
initial hypersurface, and prove that the MMCF starting from an entire locally
Lipschitz continuous radial graph exists and stays radially graphic for all
time.
| 0 | 0 | 1 | 0 | 0 | 0 |
Calibration of atomic trajectories in a large-area dual-atom-interferometer gyroscope | We propose and demonstrate a method for calibrating atomic trajectories in a
large-area dual-atom-interferometer gyroscope. The atom trajectories are
monitored by modulating and delaying the Raman transition, and they are
precisely calibrated by controlling the laser orientation and the bias magnetic
field. To improve the immunity to the gravity effect and the common phase
noise, the symmetry and the overlapping of two large-area atomic interference
loops are optimized by calibrating the atomic trajectories and by aligning the
Raman-laser orientations. The dual-atom-interferometer gyroscope is applied in
the measurement of the Earth's rotation. The sensitivity is $1.2\times10^{-6}$
rad/s/$\sqrt{\mathrm{Hz}}$, and the long-term stability is $6.2\times10^{-8}$
rad/s at an averaging time of $2000$ s.
| 0 | 1 | 0 | 0 | 0 | 0 |
Self-adjoint and skew-symmetric extensions of the Laplacian with singular Robin boundary condition | We study the Laplacian in a smooth bounded domain, with a varying Robin
boundary condition singular at one point. The associated quadratic form is not
semi-bounded from below, and the corresponding Laplacian is not self-adjoint;
its residual spectrum covers the whole complex plane. We describe its
self-adjoint extensions and exhibit a physically relevant skew-symmetric one.
We approximate the boundary condition, giving rise to a family of self-adjoint
operators, and we describe their eigenvalues by the method of matched
asymptotic expansions. These eigenvalues acquire a strange behaviour when the
small perturbation parameter $\varepsilon>0$ tends to zero, namely they become
almost periodic in the logarithmic scale $|\ln \varepsilon|$ and, in this way,
"wander" along the real axis at a speed $O(\varepsilon^{-1})$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Experimental observation of fractional topological phases with photonic qudits | Geometrical and topological phases play a fundamental role in quantum theory.
Geometric phases have been proposed as a tool for implementing unitary gates
for quantum computation. A fractional topological phase has been recently
discovered for bipartite systems. The dimension of the Hilbert space determines
the topological phase of entangled qudits under local unitary operations. Here
we investigate fractional topological phases acquired by photonic entangled
qudits. Photon pairs prepared as spatial qudits are operated inside a Sagnac
interferometer and the two-photon interference pattern reveals the topological
phase as fringe shifts when local operations are performed. Dimensions $d = 2,
3$ and $4$ were tested, showing the expected theoretical values.
| 0 | 1 | 0 | 0 | 0 | 0 |
Homotopy groups of generic leaves of logarithmic foliations | We study the homotopy groups of generic leaves of logarithmic foliations on
complex projective manifolds. We exhibit a relation between the homotopy groups
of a generic leaf and of the complement of the polar divisor of the logarithmic
foliation.
| 0 | 0 | 1 | 0 | 0 | 0 |
On van Kampen-Flores, Conway-Gordon-Sachs and Radon theorems | We exhibit relations between van Kampen-Flores, Conway-Gordon-Sachs and Radon
theorems, by presenting direct proofs of some implications between them. The
key idea is an interesting relation between the van Kampen and the
Conway-Gordon-Sachs numbers for restrictions of a map of the $(d+2)$-simplex to
$\mathbb R^d$ to the $(d+1)$-face and to the $[d/2]$-skeleton.
| 1 | 0 | 1 | 0 | 0 | 0 |
Superzeta functions, regularized products, and the Selberg zeta function on hyperbolic manifolds with cusps | Let $\Lambda = \{\lambda_{k}\}$ denote a sequence of complex numbers and
assume that the counting function $\#\{\lambda_{k} \in \Lambda : |
\lambda_{k}| < T\} = O(T^{n})$ for some integer $n$. By Hadamard's theorem, we
can construct an entire function $f$ of order at most $n$ such that $\Lambda$
is the divisor of $f$. In this article we prove, under reasonably general
conditions, that the superzeta function $\mathcal{Z}_{f}(s,z)$ associated to
$\Lambda$ admits a meromorphic continuation. Furthermore, we describe the
relation between the regularized product of the sequence $z-\Lambda$ and the
function $f$ constructed as a Weierstrass product. In the case that $f$ admits
a Dirichlet series expansion in some right half-plane, we derive the
meromorphic continuation in $s$ of $\mathcal{Z}_{f}(s,z)$ as an integral
transform of $f'/f$. We apply these results to obtain superzeta product
evaluations of the Selberg zeta function associated to finite-volume hyperbolic
manifolds with cusps.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fine Selmer Groups and Isogeny Invariance | We investigate fine Selmer groups for elliptic curves and for Galois
representations over a number field. More specifically, we discuss Conjecture
A, which states that the fine Selmer group of an elliptic curve over the
cyclotomic extension is a finitely generated $\mathbb{Z}_p$-module. The
relationship between this conjecture and Iwasawa's classical $\mu=0$ conjecture
is clarified. We also present some partial results towards the question of whether
Conjecture A is invariant under isogenies.
| 0 | 0 | 1 | 0 | 0 | 0 |
Persistence paths and signature features in topological data analysis | We introduce a new feature map for barcodes that arise in persistent homology
computation. The main idea is to first realize each barcode as a path in a
convenient vector space, and to then compute its path signature which takes
values in the tensor algebra of that vector space. The composition of these two
operations - barcode to path, path to tensor series - results in a feature map
that has several desirable properties for statistical learning, such as
universality and characteristicness, and achieves state-of-the-art results on
common classification benchmarks.
| 0 | 0 | 0 | 1 | 0 | 0 |
New Integral representations for the Fox-Wright functions and their applications | Our aim in this paper is to derive several new integral representations of
the Fox-Wright functions. In particular, we give new Laplace and Stieltjes
transforms for these special functions under a special restriction on the
parameters. From the positivity conditions for the weight in these
representations, we find sufficient conditions to be imposed on the parameters
of the Fox-Wright functions for them to be completely monotonic. As
applications, we show that a class of functions related to the Fox H-functions
is positive definite, and we investigate a class of Fox H-functions that is
non-negative. Moreover, we extend Luke's inequalities and establish new
Turán-type inequalities for the Fox-Wright function. Finally, by appealing to
each of Luke's inequalities, two sets of two-sided bounding inequalities for
the generalized Mathieu-type series are proved.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ultra-Fast Reactive Transport Simulations When Chemical Reactions Meet Machine Learning: Chemical Equilibrium | During reactive transport modeling, the computational cost associated with
chemical reaction calculations is often 10-100 times higher than that of
transport calculations. Most of these costs result from chemical equilibrium
calculations that are performed at least once in every mesh cell and at every
time step of the simulation. Calculating chemical equilibrium is an iterative
process, where each iteration is in general so computationally expensive that
even if every calculation converged in a single iteration, the resulting
speedup would not be significant. Thus, rather than proposing a fast-converging
numerical method for solving chemical equilibrium equations, we present a
machine learning method that enables new equilibrium states to be quickly and
accurately estimated, whenever a previous equilibrium calculation with similar
input conditions has been performed. We demonstrate the use of this smart
chemical equilibrium method in a reactive transport modeling example and show
that, even at early simulation times, the majority of all equilibrium
calculations are quickly predicted and, after some time steps, the
machine-learning-accelerated chemical solver has been fully trained to rapidly
perform all subsequent equilibrium calculations, resulting in speedups of
almost two orders of magnitude. We remark that our new on-demand machine
learning method can be applied to any case in which a massive number of
sequential/parallel evaluations of a computationally expensive function $f$
needs to be done, $y=f(x)$. Note that, in contrast to traditional machine
learning algorithms, our on-demand training approach does not require a
statistics-based training phase before the actual simulation of interest
commences. The introduced on-demand training scheme requires, however, the
first-order derivatives $\partial f/\partial x$ for later smart predictions.
| 0 | 1 | 0 | 1 | 0 | 0 |
Pseudo-edge unfoldings of convex polyhedra | A pseudo-edge graph of a convex polyhedron K is a 3-connected embedded graph
in K whose vertices coincide with those of K, whose edges are distance
minimizing geodesics, and whose faces are convex. We construct a convex
polyhedron K in Euclidean 3-space with a pseudo-edge graph E with respect to
which K is not unfoldable. The proof is based on a result of Pogorelov on
convex caps with prescribed curvature, and an unfoldability criterion for
almost flat convex caps due to Tarasov. Our example, which has 340 vertices,
significantly simplifies an earlier construction by Tarasov, and confirms that
Dürer's conjecture does not hold for pseudo-edge unfoldings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Structured Text Representations | In this paper, we focus on learning structure-aware document representations
from data without recourse to a discourse parser or additional annotations.
Drawing inspiration from recent efforts to empower neural networks with a
structural bias, we propose a model that can encode a document while
automatically inducing rich structural dependencies. Specifically, we embed a
differentiable non-projective parsing algorithm into a neural model and use
attention mechanisms to incorporate the structural biases. Experimental
evaluation across different tasks and datasets shows that the proposed model
achieves state-of-the-art results on document modeling tasks while inducing
intermediate structures which are both interpretable and meaningful.
| 1 | 0 | 0 | 0 | 0 | 0 |
Semi-Parametric Empirical Best Prediction for small area estimation of unemployment indicators | The Italian National Institute for Statistics regularly provides estimates of
unemployment indicators using data from the Labor Force Survey. However, direct
estimates of unemployment incidence cannot be released for Local Labor Market
Areas. These are unplanned domains defined as clusters of municipalities; many
are out-of-sample areas and the majority are characterized by a small sample
size, which renders direct estimates inadequate. The Empirical Best Predictor
represents an appropriate, model-based, alternative. However, for non-Gaussian
responses, its computation and the computation of the analytic approximation to
its Mean Squared Error require the solution of (possibly) multiple integrals
that, generally, do not have a closed form. To solve this issue, Monte Carlo
methods and parametric bootstrap are common choices, even though the
computational burden is non-trivial. In this paper, we propose a
Semi-Parametric Empirical Best Predictor for a (possibly) non-linear mixed
effect model by leaving the distribution of the area-specific random effects
unspecified and estimating it from the observed data. This approach is known to
lead to a discrete mixing distribution which helps avoid unverifiable
parametric assumptions and heavy integral approximations. We also derive a
second-order, bias-corrected, analytic approximation to the corresponding Mean
Squared Error. Finite sample properties of the proposed approach are tested via
a large scale simulation study. Furthermore, the proposal is applied to
unit-level data from the 2012 Italian Labor Force Survey to estimate
unemployment incidence for 611 Local Labor Market Areas using auxiliary
information from administrative registers and the 2011 Census.
| 0 | 0 | 0 | 1 | 0 | 0 |
The asymptotic coarse-graining formulation of slender-rods, bio-filaments and flagella | The inertialess fluid-structure interactions of active and passive
inextensible filaments and slender rods are ubiquitous in nature, from the
dynamics of semi-flexible polymers and cytoskeletal filaments to cellular
mechanics and flagella. The coupling between the geometry of deformation and
the physical interactions governing the dynamics of bio-filaments is complex.
Governing equations negotiate elastohydrodynamical interactions with
non-holonomic constraints arising from the filament inextensibility. Such
elastohydrodynamic systems are structurally convoluted and prone to numerical
errors, thus requiring penalization methods and high-order spatiotemporal
propagators. The asymptotic coarse-graining formulation presented here exploits
the momentum balance in the asymptotic limit of small rod-like elements which
are integrated semi-analytically. This greatly simplifies the
elastohydrodynamic interactions and overcomes previous numerical instability.
The resulting matricial system is straightforward and intuitive to implement,
and allows for fast and efficient computation, more than a hundred times
faster than previous schemes. Only basic knowledge of systems of linear
equations is required, and implementation is achieved with any solver of
choice. Generalisations for complex interactions of multiple rods, Brownian
polymer dynamics, active filaments and non-local hydrodynamics are also
straightforward. We demonstrate these in four examples commonly found in
biological systems, including the dynamics of filaments and flagella. Three of
these systems are novel in the literature. We additionally provide a Matlab
code that can be used as a basis for further generalisations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Performance of two-dimensional tidal turbine arrays in free surface flow | Encouraged by recent studies on the performance of tidal turbine arrays, we
extend the classical momentum actuator disc theory to include the free surface
effects and allow the vertical arrangement of turbines. Most existing
literature concerns one-dimensional arrays with a single turbine in the
vertical direction, while the arrays in this work are two-dimensional (with
turbines in both the vertical and lateral directions) and also partially block
a channel whose width is far larger than its height. The vertical mixing of
array-scale flow is assumed to take place much faster than the lateral mixing.
This assumption has been verified by numerical simulations. Fixing the total
turbine area and utilized width, a comparison between two-dimensional and
traditional one-dimensional arrays is carried out. The results suggest that
two-dimensional arrangements of smaller turbines are preferred to
one-dimensional arrays from both the power coefficient and efficiency
perspectives. When channel dynamics are considered, the power increase is
partly offset, depending on the parameters of the channel, but the optimal
arrangement is unchanged. Furthermore, we consider how to arrange a finite
number of turbines in a channel and show that an optimal distribution of
turbines in the two directions exists. Finally, the scenario of arranging
turbines in infinite flow, which is the limiting case of small blockage, is
analysed. A new maximum power coefficient of 0.869 occurs when $Fr=0.2$,
greatly increasing the peak power compared with existing results.
| 0 | 1 | 0 | 0 | 0 | 0 |
Elicitability and its Application in Risk Management | Elicitability is a property of $\mathbb{R}^k$-valued functionals defined on a
set of distribution functions. These functionals represent statistical
properties of a distribution, for instance its mean, variance, or median. They
are called elicitable if there exists a scoring function such that the expected
score under a distribution takes its unique minimum at the functional value of
this distribution. If such a scoring function exists, it is called strictly
consistent for the functional. Motivated by the recent findings of Fissler and
Ziegel concerning higher order elicitability, this thesis reviews the most
important results, examples, and applications which are found in the relevant
literature. Moreover, we also contribute our own examples and findings in order
to give the reader a well-founded overview of the topic as well as of the most
used tools and techniques. We include necessary and sufficient conditions for
strictly consistent scoring functions, several elicitable as well as
non-elicitable functionals and the use of elicitability in forecast comparison,
regression, and estimation. Special emphasis is placed on quantitative risk
management and the result that Value at Risk and Expected Shortfall are jointly
elicitable.
| 0 | 0 | 1 | 1 | 0 | 0 |
Diffusion of particles with short-range interactions | A system of interacting Brownian particles subject to short-range repulsive
potentials is considered. A continuum description in the form of a nonlinear
diffusion equation is derived systematically in the dilute limit using the
method of matched asymptotic expansions. Numerical simulations are performed to
compare the results of the model with those of the commonly used mean-field and
Kirkwood-superposition approximations, as well as with Monte Carlo simulation
of the stochastic particle system, for various interaction potentials. Our
approach works best for very repulsive short-range potentials, while the
mean-field approximation is suitable for long-range interactions. The Kirkwood
superposition approximation provides an accurate description for both short-
and long-range potentials, but is considerably more computationally intensive.
| 0 | 1 | 0 | 0 | 0 | 0 |
Birman-Murakami-Wenzl type algebras for arbitrary Coxeter systems | In this paper we first present a Birman-Murakami-Wenzl type algebra for every
Coxeter system of rank 2 (corresponding to dihedral groups). We prove that
these algebras are semisimple for generic parameters, that they have natural
cellular structures, and we classify their irreducible representations. Among
them there is one serving as a generalization of the Lawrence-Krammer
representation, with quite a neat shape and the "correct" dimension. We
conjecture that they are isomorphic to the generalized Lawrence-Krammer
representations defined by I. Marin as monodromy of certain KZ connections. We
prove these representations are irreducible for generic parameters, and find a
quite neat invariant bilinear form on them. Based on the above constructions
for rank 2, we introduce a Birman-Murakami-Wenzl type algebra for an arbitrary
Coxeter system. For every Coxeter system, the introduced algebra is a quotient
of the group algebra of the Artin group (associated with this Coxeter system),
having the corresponding Hecke algebra as a quotient. The simple generators of
the Artin group have annihilating polynomials of degree 3 in this algebra.
Protein Classification using Machine Learning and Statistical Techniques: A Comparative Analysis | In the recent era, prediction of the enzyme class of an unknown protein is
one of the challenging tasks in bioinformatics. As the number of proteins
increases day by day, the prediction of enzyme class offers new opportunities
to bioinformatics scholars. The prime objective of this article is to apply
machine learning classification techniques for feature selection and
prediction, and to identify an appropriate classification technique for
function prediction. In this article, seven different classification
techniques, namely CRT, QUEST, CHAID, C5.0, ANN (Artificial Neural Network),
SVM and Bayesian, have been applied to 4368 protein records extracted from the
UniProtKB databank and categorized into six different classes. The protein
data are high-dimensional sequence data containing a maximum of 48 features.
To handle the high-dimensional sequential protein data with the different
classification techniques, SPSS has been used as the experimental tool. The
different classification techniques give different results for every model and
show that the data are imbalanced for classes C4, C5 and C6. The imbalanced
data affect the performance of the models: in these three classes the
precision and recall values are very low or negligible. The experimental
results highlight that the C5.0 classification technique is best suited for
protein feature classification and prediction, giving 95.56% accuracy together
with high precision and recall values. Finally, we conclude that the selected
features can be used for function prediction.
| 0 | 0 | 0 | 0 | 1 | 0 |
The Emergence of Consensus: A Primer | The origin of population-scale coordination has puzzled philosophers and
scientists for centuries. Recently, game theory, evolutionary approaches and
complex systems science have provided quantitative insights on the mechanisms
of social consensus. However, the literature is vast and widely scattered
across fields, making it hard for the single researcher to navigate it. This
short review aims to provide a compact overview of the main dimensions over
which the debate has unfolded and to discuss some representative examples. It
focuses on those situations in which consensus emerges 'spontaneously' in
the absence of centralised institutions and covers topics that include the
macroscopic consequences of the different microscopic rules of behavioural
contagion, the role of social networks, and the mechanisms that prevent the
formation of a consensus or alter it after it has emerged. Special attention is
devoted to the recent wave of experiments on the emergence of consensus in
social systems.
| 1 | 1 | 0 | 0 | 0 | 0 |
ProSLAM: Graph SLAM from a Programmer's Perspective | In this paper we present ProSLAM, a lightweight stereo visual SLAM system
designed with simplicity in mind. Our work stems from the experience gathered
by the authors while teaching SLAM to students and aims at providing a highly
modular system that can be easily implemented and understood. Rather than
focusing on the well known mathematical aspects of Stereo Visual SLAM, in this
work we highlight the data structures and the algorithmic aspects that one
needs to tackle during the design of such a system. We implemented ProSLAM
using the C++ programming language in combination with a minimal set of
well-known external libraries. In addition to an open-source implementation, we
provide several code snippets that address the core aspects of our approach
directly in this paper. The results of a thorough validation performed on
standard benchmark datasets show that our approach achieves accuracy comparable
to state of the art methods, while requiring substantially less computational
resources.
| 1 | 0 | 0 | 0 | 0 | 0 |
Right Amenability And Growth Of Finitely Right Generated Left Group Sets | We introduce right generating sets, Cayley graphs, growth functions, types
and rates, and isoperimetric constants for left homogeneous spaces equipped
with coordinate systems; characterise right amenable finitely right generated
left homogeneous spaces with finite stabilisers as those whose isoperimetric
constant is $0$; and prove that finitely right generated left homogeneous
spaces with finite stabilisers of sub-exponential growth are right amenable, in
particular, quotient sets of groups of sub-exponential growth by finite
subgroups are right amenable.
| 0 | 0 | 1 | 0 | 0 | 0 |
Convex Hull of the Quadratic Branch AC Power Flow Equations and Its Application in Radial Distribution Networks | A branch flow model (BFM) is used to formulate the AC power flow in general
networks. For each branch/line, the BFM contains a nonconvex quadratic
equality. A mathematical formulation of its convex hull is proposed, which is
the tightest convex relaxation of this quadratic equation. The convex hull
formulation consists of a second order cone inequality and a linear inequality
within the physical bounds of power flows. The convex hull formulation is
analytically proved and geometrically validated. An optimal scheduling problem
of distributed energy storage (DES) in radial distribution systems with high
penetration of photovoltaic resources is investigated in this paper. To capture
the performance of both the battery and converter, a second-order DES model is
proposed. Following the convex hull of the quadratic branch flow equation, the
convex hull formulation of the nonconvex constraint in the DES model is also
derived. The proposed convex hull models are used to generate a tight convex
relaxation of the DES optimal scheduling (DESOS) problem. The proposed approach
is tested on several radial systems. A discussion on the extension to meshed
networks is provided.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-Relevance Transfer Learning | Transfer learning aims to facilitate learning tasks in a label-scarce target
domain by leveraging knowledge from a related source domain with plenty of
labeled data. Oftentimes we have multiple domains with little or no labeled
data as targets waiting to be solved. Most existing efforts tackle
target domains separately by modeling the `source-target' pairs without
exploring the relatedness between them, which would cause loss of crucial
information, thus failing to achieve optimal capability of knowledge transfer.
In this paper, we propose a novel and effective approach called Multi-Relevance
Transfer Learning (MRTL) for this purpose, which can simultaneously transfer
different kinds of knowledge from the source and exploit the latent factors
shared between target domains. Specifically, we formulate the problem as an
optimization task based on a collective nonnegative matrix tri-factorization
framework. The proposed approach achieves both source-target transfer and
target-target leveraging by sharing multiple decomposed latent subspaces.
Further, an alternating minimization learning algorithm is developed with a
convergence guarantee. Empirical study validates the performance and
effectiveness of MRTL compared to the state-of-the-art methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
When confidence and competence collide: Effects on online decision-making discussions | Group discussions are a way for individuals to exchange ideas and arguments
in order to reach better decisions than they could on their own. One of the
premises of productive discussions is that better solutions will prevail, and
that the idea selection process is mediated by the (relative) competence of the
individuals involved. However, since people may not know their actual
competence on a new task, their behavior is influenced by their self-estimated
competence --- that is, their confidence --- which can be misaligned with their
actual competence.
Our goal in this work is to understand the effects of confidence-competence
misalignment on the dynamics and outcomes of discussions. To this end, we
design a large-scale natural setting, in the form of an online team-based
geography game, that allows us to disentangle confidence from competence and
thus separate their effects.
We find that in task-oriented discussions, the more-confident individuals
have a larger impact on the group's decisions even when these individuals are
at the same level of competence as their teammates. Furthermore, this
unjustified role of confidence in the decision-making process often leads teams
to under-perform. We explore this phenomenon by investigating the effects of
confidence on conversational dynamics.
| 1 | 1 | 0 | 0 | 0 | 0 |
NIP formulas and Baire 1 definability | In this short note, using results of Bourgain, Fremlin, and Talagrand
\cite{BFT}, we show that for a countable structure $M$, a saturated elementary
extension $M^*$ of $M$ and a formula $\phi(x,y)$ the following are equivalent:
(i) $\phi(x,y)$ is NIP on $M$ (in the sense of Definition 2.1).
(ii) Whenever $p(x)\in S_\phi(M^*)$ is finitely satisfiable in $M$, it is
Baire 1 definable over $M$ (in the sense of Definition 2.5).
| 0 | 0 | 1 | 0 | 0 | 0 |
Lee-Carter method for forecasting mortality for Peruvian Population | In this article, we have modeled mortality rates of Peruvian female and male
populations during the period of 1950-2017 using the Lee-Carter (LC) model. The
stochastic mortality model was introduced by Lee and Carter (1992) and has been
used by many authors for fitting and forecasting the human mortality rates. The
Singular Value Decomposition (SVD) approach is used for estimation of the
parameters of the LC model. Utilizing the best-fitted autoregressive
integrated moving average (ARIMA) model, we forecast the values of the
time-dependent parameter of the LC model for the next thirty years. The
forecasted values of life expectancy at different age groups, with $95\%$
confidence intervals, are also reported for the next thirty years. In this
research we use
the data, obtained from the Peruvian National Institute of Statistics (INEI).
| 0 | 0 | 0 | 0 | 0 | 1 |
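The SVD estimation step described in the abstract above can be sketched in a few lines (illustrative Python on synthetic data; the variable names and the rank-1 test matrix are assumptions for the example, not the paper's Peruvian data):

```python
import numpy as np

def fit_lee_carter(log_m):
    """Fit the Lee-Carter model log m(x, t) = a_x + b_x * k_t via SVD.

    log_m is an (ages x years) matrix of log central mortality rates.
    Returns a_x, b_x (normalised to sum to 1) and k_t (summing to 0).
    """
    a_x = log_m.mean(axis=1)                # age effect: row averages
    centred = log_m - a_x[:, None]          # remove the age pattern
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    b_x, k_t = U[:, 0], s[0] * Vt[0, :]     # leading singular vectors
    scale = b_x.sum()                       # identification constraints
    b_x, k_t = b_x / scale, k_t * scale
    return a_x, b_x, k_t - k_t.mean()

# synthetic example with exact rank-1 structure, recovered exactly
ages, years = 10, 30
a = np.linspace(-8.0, -1.0, ages)
b = np.full(ages, 1.0 / ages)
k = np.linspace(5.0, -5.0, years)
a_hat, b_hat, k_hat = fit_lee_carter(a[:, None] + np.outer(b, k))
```

The recovered `k_hat` series is what would then be projected forward with the best-fitted ARIMA model, as the abstract describes.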
Kernel Recursive ABC: Point Estimation with Intractable Likelihood | We propose a novel approach to parameter estimation for simulator-based
statistical models with intractable likelihood. Our proposed method involves
recursive application of kernel ABC and kernel herding to the same observed
data. We provide a theoretical explanation regarding why the approach works,
showing (for the population setting) that, under a certain assumption, point
estimates obtained with this method converge to the true parameter, as
recursion proceeds. We have conducted a variety of numerical experiments,
including parameter estimation for a real-world pedestrian flow simulator, and
show that in most cases our method outperforms existing approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
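The kernel-ABC building block that the recursion above reuses can be sketched as follows (illustrative Python with a toy one-dimensional simulator; the kernel choice, bandwidth, and regularisation constant `lam` are assumptions for the example, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, ell=1.0):
    """Gaussian kernel matrix between two 1-D arrays of summaries."""
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2.0 * ell ** 2))

def kernel_abc_mean(theta, s_sim, s_obs, lam=1e-3):
    """Kernel-ABC posterior-mean estimate with weights w = (G + n*lam*I)^-1 k."""
    n = len(theta)
    G = gauss_kernel(s_sim, s_sim)
    k = gauss_kernel(s_sim, np.array([s_obs]))[:, 0]
    w = np.linalg.solve(G + n * lam * np.eye(n), k)
    return float(w @ theta)

# toy problem: the simulator's summary is roughly the parameter itself,
# so the estimate for the observed summary 2.0 should land near 2
theta = rng.uniform(0.0, 4.0, 200)
s_sim = theta + rng.normal(0.0, 0.1, 200)
est = kernel_abc_mean(theta, s_sim, 2.0)
```

In the recursive scheme of the paper, an estimate of this kind would be fed back as the sampling distribution for the next round; here only the single kernel-ABC step is shown.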
More declarative tabling in Prolog using multi-prompt delimited control | Several Prolog implementations include a facility for tabling, an alternative
resolution strategy which uses memoisation to avoid redundant recomputation.
Until relatively recently, tabling has required either low-level support in the
underlying Prolog engine, or extensive program transformation (de Guzman et
al., 2008). An alternative approach is to augment Prolog with low-level
support for continuation-capturing control operators, particularly
delimited continuations, which have been investigated in the field of
functional programming and found to be capable of supporting a wide variety of
computational effects within an otherwise declarative language.
This technical report describes an implementation of tabling in SWI Prolog
based on delimited control operators for Prolog recently introduced by
Schrijvers et al. (2013). In comparison with a previous implementation of
tabling for SWI Prolog using delimited control (Desouter et al., 2015), this
approach, based on the functional memoising parser combinators of Johnson
(1995), stays closer to the declarative core of Prolog, requires less code, and
is able to deliver solutions from systems of tabled predicates incrementally
(as opposed to finding all solutions before delivering any to the rest of the
program).
A collection of benchmarks shows that a small number of carefully targeted
optimisations yields performance within a factor of about 2 of the optimised
version of Desouter et al.'s system currently included in SWI Prolog.
| 1 | 0 | 0 | 0 | 0 | 0 |
Emergent topology and dynamical quantum phase transitions in two-dimensional closed quantum systems | We introduce the notion of a dynamical topological order parameter (DTOP)
that characterises dynamical quantum phase transitions (DQPTs) occurring in the
subsequent temporal evolution of "two dimensional" closed quantum systems,
following a quench (or ramping) of a parameter of the Hamiltonian; this
generalizes the notion of DTOP introduced in Budich and Heyl, Phys. Rev. B 93,
085416 (2016) for one-dimensional situations. This DTOP is obtained from the
"gauge-invariant" Pancharatnam phase extracted from the Loschmidt overlap,
i.e., the modulus of the overlap between the initially prepared state and its
time evolved counterpart reached following a temporal evolution generated by
the time-independent final Hamiltonian. This generic proposal is illustrated
considering DQPTs occurring in the subsequent temporal evolution following a
sudden quench of the staggered mass of the topological Haldane model on a
hexagonal lattice; the DTOP stays fixed at zero or unity and makes a
discontinuous jump between these two values at the critical times at which
DQPTs occur.
| 0 | 1 | 0 | 0 | 0 | 0 |
A pictorial introduction to differential geometry, leading to Maxwell's equations as three pictures | In this article we present pictorially the foundation of differential
geometry which is a crucial tool for multiple areas of physics, notably general
and special relativity, but also mechanics, thermodynamics and solving
differential equations. As all the concepts are presented as pictures, there
are no equations in this article. As such this article may be read by
pre-university students who enjoy physics, mathematics and geometry. However,
it will also greatly aid the intuition of undergraduate and masters students
taking general relativity and similar courses. It concentrates on the tools
needed to understand Maxwell's equations thus leading to the goal of presenting
Maxwell's equations as 3 pictures.
| 0 | 1 | 1 | 0 | 0 | 0 |
Regularization by noise in (2x 2) hyperbolic systems of conservation law | In this paper we study a non-strictly hyperbolic system of conservation laws
under stochastic perturbation. We show the existence and uniqueness of the
solution, without assuming $BV$-regularity of the initial conditions. The
proofs are based on the concept of entropy solution and on the method of
characteristics (under the influence of noise). This is the first result on
regularization by noise in hyperbolic systems of conservation laws.
| 0 | 0 | 1 | 0 | 0 | 0 |
Large deviations of a tracer in the symmetric exclusion process | The one-dimensional symmetric exclusion process, the simplest interacting
particle process, is a lattice-gas made of particles that hop symmetrically on
a discrete line respecting hard-core exclusion. The system is prepared on the
infinite lattice with a step initial profile with average densities $\rho_{+}$
and $\rho_{-}$ on the right and on the left of the origin. When $\rho_{+} =
\rho_{-}$, the gas is at equilibrium and undergoes stationary fluctuations.
When these densities are unequal, the gas is out of equilibrium and will remain
so forever. A tracer, or a tagged particle, is initially located at the
boundary between the two domains; its position $X_t$ is a random observable in
time, that carries information on the non-equilibrium dynamics of the whole
system. We derive an exact formula for the cumulant generating function and the
large deviation function of $X_t$, in the long time limit, and deduce the full
statistical properties of the tracer's position. The equilibrium fluctuations
of the tracer's position, when the density is uniform, are obtained as an
important special case.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unitary Representations with non-zero Dirac cohomology for complex $E_6$ | This paper classifies the equivalence classes of irreducible unitary
representations with nonvanishing Dirac cohomology for complex $E_6$. This is
achieved by using our finiteness result, and by improving the computing method.
| 0 | 0 | 1 | 0 | 0 | 0 |
Revisiting Lie integrability by quadratures from a geometric perspective | After a short review of the classical Lie theorem, a finite dimensional Lie
algebra of vector fields is considered and the most general conditions under
which the integral curves of one of the fields can be obtained by quadratures
in a prescribed way will be discussed, determining also the number of
quadratures needed to integrate the system. The theory will be illustrated
with examples, and an extension of the theorem in which the Lie algebras are
replaced by certain distributions will also be presented.
| 0 | 1 | 1 | 0 | 0 | 0 |
Intersubband polarons in oxides | Intersubband (ISB) polarons result from the interaction of an ISB transition
and the longitudinal optical (LO) phonons in a semiconductor quantum well (QW).
Their observation requires a very dense two dimensional electron gas (2DEG) in
the QW and a polar or highly ionic semiconductor. Here we show that in
ZnO/MgZnO QWs the strength of such a coupling can be as high as 1.5 times the
LO-phonon frequency due to the very dense 2DEG achieved and the large
difference between the static and high-frequency dielectric constants in ZnO.
The ISB polaron is observed optically in multiple QW structures with 2DEG
densities ranging from $5\times 10^{12}$ to $5\times 10^{13}$ cm$^{-2}$, where
an unprecedented regime is reached in which the frequency of the upper ISB
polaron branch is three times larger than that of the bare ISB transition. This
study opens new prospects for the exploitation of oxides in phenomena
occurring in the ultrastrong coupling regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
Control Variates for Stochastic Gradient MCMC | It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly
with dataset size. A popular class of methods for solving this issue is
stochastic gradient MCMC. These methods use a noisy estimate of the gradient of
the log posterior, which reduces the per iteration computational cost of the
algorithm. Despite this, there are a number of results suggesting that
stochastic gradient Langevin dynamics (SGLD), probably the most popular of
these methods, still has computational cost proportional to the dataset size.
We suggest an alternative log posterior gradient estimate for stochastic
gradient MCMC, which uses control variates to reduce the variance. We analyse
SGLD using this gradient estimate, and show that, under log-concavity
assumptions on the target distribution, the computational cost required for a
given level of accuracy is independent of the dataset size. Next we show that a
different control variate technique, known as zero variance control variates,
can be applied to SGMCMC algorithms for free. This post-processing step
improves the inference of the algorithm by reducing the variance of the MCMC
output. Zero variance control variates rely on the gradient of the log
posterior; we explore how the variance reduction is affected by replacing this
with the noisy gradient estimate calculated by SGMCMC.
| 1 | 0 | 0 | 1 | 0 | 0 |
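The control-variate gradient estimate described in the abstract above can be sketched on a toy model (illustrative Python; a one-parameter Gaussian likelihood with known unit variance is an assumption chosen so the effect is easy to verify, for this model the per-datum gradient differences are constant across data and the estimator is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=1000)     # observations, unit-variance Gaussian
N = x.size

def grad_i(theta, xi):
    """Per-datum gradient of log N(xi | theta, 1) with respect to theta."""
    return xi - theta

def full_grad(theta):
    """Full-data log-likelihood gradient (expensive in general)."""
    return np.sum(x - theta)

theta_hat = x.mean()                    # centring point (the mode here)
g_hat = full_grad(theta_hat)            # computed once, up front

def cv_grad(theta, idx):
    """Control-variate estimate of the full-data gradient from minibatch idx.

    For this Gaussian toy the per-datum differences are constant in the data,
    so the estimate is exact; in general its variance shrinks as theta
    approaches theta_hat.
    """
    corr = (N / len(idx)) * np.sum(grad_i(theta, x[idx])
                                   - grad_i(theta_hat, x[idx]))
    return g_hat + corr
```

In an SGLD loop, `cv_grad` would replace the plain minibatch gradient estimate; the one-off cost is the single full-data gradient `g_hat` at the centring point.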
Augmented Reality for Depth Cues in Monocular Minimally Invasive Surgery | One of the major challenges in Minimally Invasive Surgery (MIS) such as
laparoscopy is the lack of depth perception. In recent years, laparoscopic
scene tracking and surface reconstruction has been a focus of investigation to
provide rich additional information to aid the surgical process and compensate
for the depth perception issue. However, robust 3D surface reconstruction and
augmented reality with depth perception on the reconstructed scene are yet to
be reported. This paper presents our work in this area. First, we adopt a
state-of-the-art visual simultaneous localization and mapping (SLAM) framework
- ORB-SLAM - and extend the algorithm for use in MIS scenes for reliable
endoscopic camera tracking and salient point mapping. We then develop a robust
global 3D surface reconstruction framework based on the sparse point clouds
extracted from the SLAM framework. Our approach is to combine an outlier
removal filter within a Moving Least Squares smoothing algorithm and then
employ Poisson surface reconstruction to obtain smooth surfaces from the
unstructured sparse point cloud. Our proposed method has been quantitatively
evaluated compared with ground-truth camera trajectories and the organ model
surface we used to render the synthetic simulation videos. In vivo laparoscopic
videos used in the tests have demonstrated the robustness and accuracy of our
proposed framework on both camera tracking and surface reconstruction,
illustrating the potential of our algorithm for depth augmentation and
depth-corrected augmented reality in MIS with monocular endoscopes.
| 1 | 0 | 0 | 0 | 0 | 0 |
General-purpose Tagging of Freesound Audio with AudioSet Labels: Task Description, Dataset, and Baseline | This paper describes Task 2 of the DCASE 2018 Challenge, titled
"General-purpose audio tagging of Freesound content with AudioSet labels". This
task was hosted on the Kaggle platform as "Freesound General-Purpose Audio
Tagging Challenge". The goal of the task is to build an audio tagging system
that can recognize the category of an audio clip from a subset of 41 diverse
categories drawn from the AudioSet Ontology. We present the task, the dataset
prepared for the competition, and a baseline system.
| 1 | 0 | 0 | 1 | 0 | 0 |
Semantical Equivalence of the Control Flow Graph and the Program Dependence Graph | The program dependence graph (PDG) represents data and control dependence
between statements in a program. This paper presents an operational semantics
of program dependence graphs. Since PDGs exclude the artificial ordering of
statements present in sequential programs, executions of PDGs are not unique.
However, we identified a class of PDGs that have unique final states of
executions, called deterministic PDGs. We prove that the operational semantics
of control flow graphs is equivalent to that of deterministic PDGs. The class
of deterministic PDGs properly includes the PDGs obtained from well-structured
programs. Thus, our operational semantics of PDGs is more general than that of
PDGs for well-structured programs, which is already established in the
literature.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the smallest non-trivial quotients of mapping class groups | We prove that the smallest non-trivial quotient of the mapping class group of
a connected orientable surface of genus at least 3 without punctures is
$\mathrm{Sp}_{2g}(2)$, thus confirming a conjecture of Zimmermann. In the
process, we generalise Korkmaz's results on $\mathbb{C}$-linear representations
of mapping class groups to projective representations over any field.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Gronwall inequality for a general Caputo fractional operator | In this paper we present a new type of fractional operator, which is a
generalization of the Caputo and Caputo--Hadamard fractional derivative
operators. We study some properties of the operator, namely we prove that it is
the inverse operation of a generalized fractional integral. A relation between
this operator and one of Riemann--Liouville type is established. We end with a
Gronwall-type fractional inequality, which is useful for comparing solutions
of fractional differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
First-principles insights into ultrashort laser spectroscopy of molecular nitrogen | In this research, we employ accurate time-dependent density functional
calculations for ultrashort laser spectroscopy of the nitrogen molecule. Laser
pulses with different frequencies, intensities, and durations are applied to
the molecule and the resulting photoelectron spectra are analyzed. It is
argued that the relative orientation of the molecule in the laser pulse
significantly influences the orbital character of the emitted photoelectrons.
Moreover, the duration of the laser pulse is also found to be very effective
in controlling the orbital resolution and intensity of photoelectrons.
Angular-resolved distributions of photoelectrons are computed at different
pulse frequencies and recording times. By exponentially increasing the laser
pulse intensity, the theoretical threshold of two-photon absorption in the
nitrogen molecule is determined.
| 0 | 1 | 0 | 0 | 0 | 0 |
Borel class and Cartan involution | In this note we prove that the Borel class of representations of 3-manifold
groups to PGL(n,C) is preserved under Cartan involution up to sign. For
representations to PGL(3,C) this is implied by a more general result of E.
Falbel and Q. Wang, however our proof appears to be much shorter for that
special case.
| 0 | 0 | 1 | 0 | 0 | 0 |
Network Dimensions in the Getty Provenance Index | In this article we make a case for a systematic application of complex
network science to study art market history and more general collection
dynamics. We reveal social, temporal, spatial, and conceptual network
dimensions, i.e. network node and link types, previously implicit in the Getty
Provenance Index (GPI). As a pioneering art history database active since the
1980s, the GPI provides online access to source material relevant for research
in the history of collecting and art markets. Based on a subset of the GPI, we
characterize an aggregate of more than 267,000 sales transactions connected to
roughly 22,000 actors in four countries over 20 years at daily resolution from
1801 to 1820. Striving towards a deeper understanding on multiple levels we
disambiguate social dynamics of buying, brokering, and selling, while observing
a general broadening of the market, where large collections are split into
smaller lots. Temporally, we find annual market cycles that are shifted by
country and obviously favor international exchange. Spatially, we differentiate
near-monopolies from regions driven by competing sub-centers, while uncovering
asymmetries of international market flux. Conceptually, we track dynamics of
artist attribution that clearly behave like product categories in a very slow
supermarket. Taken together, we introduce a number of meaningful network
perspectives dealing with historical art auction data, beyond the analysis of
social networks within a single market region. The results presented here have
inspired a Linked Open Data conversion of the GPI, which is currently in
process and will allow further analysis by a broad set of researchers.
| 0 | 1 | 0 | 0 | 0 | 0 |
Maximum-order Complexity and Correlation Measures | We estimate the maximum-order complexity of a binary sequence in terms of its
correlation measures. Roughly speaking, we show that any sequence with small
correlation measure up to a sufficiently large order $k$ cannot have very small
maximum-order complexity.
| 0 | 0 | 1 | 0 | 0 | 0 |
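The correlation measure of order $k$ invoked in the abstract above can be computed by brute force for short sequences, which makes the definition concrete (illustrative Python; exponential cost, suitable only for small $N$):

```python
from itertools import combinations

def correlation_measure(seq, k):
    """Brute-force correlation measure of order k of a +/-1 sequence.

    C_k(E_N) = max over lag tuples d_1 < ... < d_k and window lengths M of
    |sum_{n=0}^{M-1} e_{n+d_1} * ... * e_{n+d_k}|  (0-based indexing).
    """
    N = len(seq)
    best = 0
    for lags in combinations(range(N), k):
        span = lags[-1]
        s = 0
        for n in range(N - span):        # grow the window one term at a time
            prod = 1
            for d in lags:
                prod *= seq[n + d]
            s += prod
            best = max(best, abs(s))     # track the best partial sum
    return best
```

A constant sequence has large correlation measure of order 2 (every product is $+1$), which is the kind of structure that, by the result above, forces the maximum-order complexity to be non-trivial.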
On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
a 30$\times$ speedup in our experiments), as well as the recommendation
performance. Theoretical analysis is also provided for both the computational
cost and the convergence. We believe the study of sampling strategies has
further implications for general graph-based loss functions, and would also
enable more research under the neural network-based recommendation
framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
Non-integrable dynamics of matter-wave solitons in a density-dependent gauge theory | We study interactions between bright matter-wave solitons which acquire
chiral transport dynamics due to an optically-induced density-dependent gauge
potential. Through numerical simulations, we find that the collision dynamics
feature several non-integrable phenomena, from inelastic collisions including
population transfer and radiation losses to short-lived bound states and
soliton fission. An effective quasi-particle model for the interaction between
the solitons is derived by means of a variational approximation, which
demonstrates that the inelastic nature of the collision arises from a coupling
of the gauge field to velocities of the solitons. In addition, we derive a set
of interaction potentials which show that the influence of the gauge field
appears as a short-range potential, that can give rise to both attractive and
repulsive interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Geometric theories of patch and Lawson topologies | We give geometric characterisations of patch and Lawson topologies in the
context of predicative point-free topology using the constructive notion of
located subset. We present the patch topology of a stably locally compact
formal topology by a geometric theory whose models are the points of the given
topology that are located, and the Lawson topology of a continuous lattice by a
geometric theory whose models are the located subsets of the given lattice. We
also give a predicative presentation of the frame of perfect nuclei on a stably
locally compact formal topology, and show that it is essentially the same as
our geometric presentation of the patch topology. Moreover, the construction of
Lawson topologies naturally induces a monad on the category of compact regular
formal topologies, which is shown to be isomorphic to the Vietoris monad.
| 1 | 0 | 1 | 0 | 0 | 0 |
Active galactic nuclei in the era of the Imaging X-ray Polarimetry Explorer | In about four years, the National Aeronautics and Space Administration (NASA)
will launch a small explorer mission named the Imaging X-ray Polarimetry
Explorer (IXPE). IXPE is a satellite dedicated to the observation of X-ray
polarization from bright astronomical sources in the 2-8 keV energy range.
Using Gas Pixel Detectors (GPD), the mission will allow for the first time to
acquire X-ray polarimetric imaging and spectroscopy of about a hundred of
sources during its first two years of operation. Among them are the most
powerful sources of light in the Universe: active galactic nuclei (AGN). In
this proceedings, we summarize the scientific exploration we aim to achieve in
the field of AGN using IXPE, describing the main discoveries that this new
generation of X-ray polarimeters will be able to make. Among these discoveries,
we expect to detect indisputable signatures of strong gravity, quantifying the
amount and importance of scattering off distant cold material through the iron
K_alpha line observed at 6.4 keV. IXPE will also be able to probe the
morphology of parsec-scale AGN regions, the magnetic field strength and
direction in quasar jets, and, among the most important results, deliver an
independent measurement of the spin of black holes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exact completion and constructive theories of sets | In the present paper we use the theory of exact completions to study
categorical properties of small setoids in Martin-Löf type theory and, more
generally, of models of the Constructive Elementary Theory of the Category of
Sets, in terms of properties of their subcategories of choice objects (i.e.
objects satisfying the axiom of choice). Because of these intended
applications, we deal with categories that lack equalisers and just have weak
ones, but whose objects can be regarded as collections of global elements. In
this context, we study the internal logic of the categories involved, and
employ this analysis to give a sufficient condition for the local cartesian
closure of an exact completion. Finally, we apply these results to show when an
exact completion produces a model of CETCS.
| 0 | 0 | 1 | 0 | 0 | 0 |
Proceedings 5th Workshop on Horn Clauses for Verification and Synthesis | Many Program Verification and Synthesis problems of interest can be modeled
directly using Horn clauses and many recent advances in the CLP and CAV
communities have centered around efficiently solving problems presented as Horn
clauses.
The HCVS series of workshops aims to bring together researchers working in
the communities of Constraint/Logic Programming (e.g., ICLP and CP),
Program Verification (e.g., CAV, TACAS, and VMCAI), and Automated Deduction
(e.g., CADE, IJCAR), on the topic of Horn clause based analysis, verification,
and synthesis.
Horn clauses for verification and synthesis have been advocated by these
communities in different times and from different perspectives and HCVS is
organized to stimulate interaction and a fruitful exchange and integration of
experiences.
| 1 | 0 | 0 | 0 | 0 | 0 |
Conditional Neural Processes | Deep neural networks excel at function approximation, yet they are typically
trained from scratch for each new function. On the other hand, Bayesian
methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly
infer the shape of a new function at test time. Yet GPs are computationally
expensive, and it can be hard to design appropriate priors. In this paper we
propose a family of neural models, Conditional Neural Processes (CNPs), that
combine the benefits of both. CNPs are inspired by the flexibility of
stochastic processes such as GPs, but are structured as neural networks and
trained via gradient descent. CNPs make accurate predictions after observing
only a handful of training data points, yet scale to complex functions and
large datasets. We demonstrate the performance and versatility of the approach
on a range of canonical machine learning tasks, including regression,
classification and image completion.
| 0 | 0 | 0 | 1 | 0 | 0 |
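The encode-aggregate-decode structure described in the abstract above can be sketched as a forward pass (illustrative numpy with random untrained weights; the layer sizes and representation dimension are arbitrary choices for the example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, z):
    """Tiny two-layer MLP with ReLU; params = [(W1, b1), (W2, b2)]."""
    (W1, b1), (W2, b2) = params
    h = np.maximum(z @ W1 + b1, 0.0)
    return h @ W2 + b2

def init(din, dh, dout):
    return [(rng.normal(0, 0.1, (din, dh)), np.zeros(dh)),
            (rng.normal(0, 0.1, (dh, dout)), np.zeros(dout))]

d_r = 8
enc = init(2, 32, d_r)       # encoder h: (x_i, y_i) -> r_i
dec = init(1 + d_r, 32, 2)   # decoder g: (x*, r) -> (mu, log_sigma)

def cnp_forward(xc, yc, xt):
    """Encode the context set, mean-aggregate, decode at target inputs."""
    r_i = mlp(enc, np.column_stack([xc, yc]))   # per-point representations
    r = r_i.mean(axis=0)                        # permutation-invariant summary
    z = np.column_stack([xt, np.tile(r, (len(xt), 1))])
    out = mlp(dec, z)
    mu, log_sigma = out[:, 0], out[:, 1]
    return mu, np.exp(log_sigma)

mu, sigma = cnp_forward(np.linspace(0, 1, 5),
                        np.sin(np.linspace(0, 1, 5)),
                        np.linspace(0, 1, 7))
```

Training would maximise the Gaussian log-likelihood of held-out target points under `(mu, sigma)` by gradient descent; the mean aggregation is what gives the model its order-invariance over context points.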
ORBIT: Ordering Based Information Transfer Across Space and Time for Global Surface Water Monitoring | Many earth science applications require data at both high spatial and
temporal resolution for effective monitoring of various ecosystem resources.
Due to practical limitations in sensor design, there is often a trade-off in
different resolutions of spatio-temporal datasets and hence a single sensor
alone cannot provide the required information. Various data fusion methods have
been proposed in the literature that mainly rely on individual timesteps when
both datasets are available to learn a mapping between feature values at
different resolutions using local relationships between pixels. Earth
observation data is often plagued with spatially and temporally correlated
noise, outliers and missing data due to atmospheric disturbances which pose a
challenge in learning the mapping from a local neighborhood at individual
timesteps. In this paper, we aim to exploit time-independent global
relationships between pixels for robust transfer of information across
different scales. Specifically, we propose a new framework, ORBIT (Ordering
Based Information Transfer) that uses relative ordering constraint among pixels
to transfer information across both time and scales. The effectiveness of the
framework is demonstrated for global surface water monitoring using both
synthetic and real-world datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Importance of Constraint Smoothness for Parameter Estimation in Computational Cognitive Modeling | Psychiatric neuroscience is increasingly aware of the need to define
psychopathology in terms of abnormal neural computation. The central tool in
this endeavour is the fitting of computational models to behavioural data. The
most prominent example of this procedure is fitting reinforcement learning (RL)
models to decision-making data collected from mentally ill and healthy subject
populations. These models are generative models of the decision-making data
themselves, and the parameters we seek to infer can be psychologically and
neurobiologically meaningful. Currently, the gold standard approach to this
inference procedure involves Monte-Carlo sampling, which is robust but
computationally intensive---rendering additional procedures, such as
cross-validation, impractical. Searching for point estimates of model
parameters using optimization procedures remains a popular and interesting
option. On a novel testbed simulating parameter estimation from a common RL
task, we investigated the effects of smooth vs. boundary constraints on
parameter estimation using interior point and deterministic direct search
algorithms for optimization. Ultimately, we show that the use of boundary
constraints can lead to substantial truncation effects. Our results discourage
the use of boundary constraints for these applications.
| 0 | 0 | 0 | 1 | 1 | 0 |
Kinetic Effects in Dynamic Wetting | The maximum speed at which a liquid can wet a solid is limited by the need to
displace gas lubrication films in front of the moving contact line. The
characteristic height of these films is often comparable to the mean free path
in the gas so that hydrodynamic models do not adequately describe the flow
physics. This Letter develops a model which incorporates kinetic effects in the
gas, via the Boltzmann equation, and can predict experimentally-observed
increases in the maximum speed of wetting when (a) the liquid's viscosity is
varied, (b) the ambient gas pressure is reduced or (c) the meniscus is
confined.
| 0 | 1 | 0 | 0 | 0 | 0 |
The spread of low-credibility content by social bots | The massive spread of digital misinformation has been identified as a major
global risk and has been alleged to influence elections and threaten
democracies. Communication, cognitive, social, and computer scientists are
engaged in efforts to study the complex causes for the viral diffusion of
misinformation online and to develop solutions, while search and social media
platforms are beginning to deploy countermeasures. With few exceptions, these
efforts have been mainly informed by anecdotal evidence rather than systematic
data. Here we analyze 14 million messages spreading 400 thousand articles on
Twitter during and following the 2016 U.S. presidential campaign and election.
We find evidence that social bots played a disproportionate role in amplifying
low-credibility content. Accounts that actively spread articles from
low-credibility sources are significantly more likely to be bots. Automated
accounts are particularly active in amplifying content in the very early
spreading moments, before an article goes viral. Bots also target users with
many followers through replies and mentions. Humans are vulnerable to this
manipulation, retweeting bots who post links to low-credibility content.
Successful low-credibility sources are heavily supported by social bots. These
results suggest that curbing social bots may be an effective strategy for
mitigating the spread of online misinformation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Beltrami vector fields with an icosahedral symmetry | A vector field is called a Beltrami vector field, if $B\times(\nabla\times
B)=0$. In this paper we construct two unique Beltrami vector fields
$\mathfrak{I}$ and $\mathfrak{Y}$, such that
$\nabla\times\mathfrak{I}=\mathfrak{I}$,
$\nabla\times\mathfrak{Y}=\mathfrak{Y}$, and such that both have an
orientation-preserving icosahedral symmetry. Both of them have an additional
symmetry with respect to a non-trivial automorphism of the number field
$\mathbb{Q}(\,\sqrt{5}\,)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A model for Faraday pilot waves over variable topography | Couder and Fort discovered that droplets walking on a vibrating bath possess
certain features previously thought to be exclusive to quantum systems. These
millimetric droplets synchronize with their Faraday wavefield, creating a
macroscopic pilot-wave system. In this paper we exploit the fact that the waves
generated are nearly monochromatic and propose a hydrodynamic model capable of
quantitatively capturing the interaction between bouncing drops and a variable
topography. We show that our reduced model is able to reproduce some important
experiments involving the drop-topography interaction, such as non-specular
reflection and single-slit diffraction.
| 0 | 1 | 0 | 0 | 0 | 0 |
Regular Separability of Well Structured Transition Systems | We investigate the languages recognized by well-structured transition systems
(WSTS) with upward and downward compatibility. Our first result shows that,
under very mild assumptions, every two disjoint WSTS languages are regular
separable: There is a regular language containing one of them and being
disjoint from the other. As a consequence, if a language as well as its
complement are both recognized by WSTS, then they are necessarily regular. In
particular, no subclass of WSTS languages beyond the regular languages is
closed under complement. Our second result shows that for Petri nets, the
complexity of the backwards coverability algorithm yields a bound on the size
of the regular separator. We complement it by a lower bound construction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamical Stochastic Higher Spin Vertex Models | We introduce a new family of integrable stochastic processes, called
\textit{dynamical stochastic higher spin vertex models}, arising from fused
representations of Felder's elliptic quantum group $E_{\tau, \eta}
(\mathfrak{sl}_2)$. These models simultaneously generalize the stochastic
higher spin vertex models, studied by Corwin-Petrov and Borodin-Petrov, and are
dynamical in the sense of Borodin's recent stochastic interaction round-a-face
models.
We provide explicit contour integral identities for observables of these
models (when run under specific types of initial data) that characterize the
distributions of their currents. Through asymptotic analysis of these
identities in a special case, we evaluate the scaling limit for the current of
a dynamical version of a discrete-time partial exclusion process. In
particular, we show that its scaling exponent is $1 / 4$ and that its one-point
marginal converges (in a sense of moments) to that of a non-trivial random
variable, which we determine explicitly.
| 0 | 1 | 1 | 0 | 0 | 0 |
Risk-Averse Matchings over Uncertain Graph Databases | A large number of applications such as querying sensor networks, and
analyzing protein-protein interaction (PPI) networks, rely on mining uncertain
graph and hypergraph databases. In this work we study the following problem:
given an uncertain, weighted (hyper)graph, how can we efficiently find a
(hyper)matching with high expected reward, and low risk?
This problem naturally arises in the context of several important
applications, such as online dating, kidney exchanges, and team formation. We
introduce a novel formulation for finding matchings with maximum expected
reward and bounded risk under a general model of uncertain weighted
(hyper)graphs that we introduce in this work. Our model generalizes
probabilistic models used in prior work, and captures both continuous and
discrete probability distributions, thus allowing us to handle privacy-related
applications that inject appropriately distributed noise to (hyper)edge
weights. Given that our optimization problem is NP-hard, we turn our attention
to designing efficient approximation algorithms. For the case of uncertain
weighted graphs, we provide a $\frac{1}{3}$-approximation algorithm, and a
$\frac{1}{5}$-approximation algorithm with near optimal run time. For the case
of uncertain weighted hypergraphs, we provide a
$\Omega(\frac{1}{k})$-approximation algorithm, where $k$ is the rank of the
hypergraph (i.e., any hyperedge includes at most $k$ nodes), that runs in
almost (modulo log factors) linear time.
We complement our theoretical results by testing our approximation algorithms
on a wide variety of synthetic experiments, where we observe in a controlled
setting interesting findings on the trade-off between reward, and risk. We also
provide an application of our formulation for providing recommendations of
teams that are likely to collaborate, and have high impact.
| 1 | 0 | 0 | 0 | 0 | 0 |
Higher Order Context Transformations | The context transformation and generalized context transformation methods,
which we introduced recently, were able to reduce zero order entropy by exchanging
digrams, and as a consequence, they were removing mutual information between
consecutive symbols of the input message. These transformations were intended
to be used as a preprocessor for zero-order entropy coding algorithms like
Arithmetic or Huffman coding, since Arithmetic coding in particular can achieve
a compression rate close to the Shannon entropy.
This paper introduces a novel algorithm based on the concept of generalized
context transformation, that allows transformation of words longer than simple
digrams. The higher order contexts are exploited using recursive form of a
generalized context transformation. It is shown that the zero order entropy of
transformed data drops significantly, but on the other hand, the overhead given
by a description of individual transformations increases and it has become a
limiting factor in a successful transformation of smaller files.
| 1 | 0 | 1 | 0 | 0 | 0 |
DeepAPT: Nation-State APT Attribution Using End-to-End Deep Neural Networks | In recent years numerous advanced malware, aka advanced persistent threats
(APT) are allegedly developed by nation-states. The task of attributing an APT
to a specific nation-state is extremely challenging for several reasons. Each
nation-state has usually more than a single cyber unit that develops such
advanced malware, rendering traditional authorship attribution algorithms
useless. Furthermore, those APTs use state-of-the-art evasion techniques,
making feature extraction challenging. Finally, the dataset of such available
APTs is extremely small.
In this paper we describe how deep neural networks (DNN) could be
successfully employed for nation-state APT attribution. We use sandbox reports
(recording the behavior of the APT when run dynamically) as raw input for the
neural network, allowing the DNN to learn high-level feature abstractions of
the APTs themselves. Using a test set of 1,000 Chinese- and Russian-developed APTs,
we achieved an accuracy rate of 94.6%.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cathode signal in a TPC directional detector: implementation and validation measuring the drift velocity | Low-pressure gaseous TPCs are well suited detectors to correlate the
directions of nuclear recoils to the galactic Dark Matter (DM) halo. Indeed, in
addition to providing a measure of the energy deposition due to the elastic
scattering of a DM particle on a nucleus in the target gas, they allow for the
reconstruction of the track of the recoiling nucleus. In order to exclude the
background events originating from radioactive decays on the surfaces of the
detector materials within the drift volume, efforts are ongoing to precisely
localize the nuclear recoil track in the drift volume along the axis
perpendicular to the cathode plane. We report here the implementation of the
measure of the signal induced on the cathode by the motion of the primary
electrons toward the anode in a MIMAC chamber. As a validation, we performed an
independent measurement of the drift velocity of the electrons in the
considered gas mixture, correlating in time the cathode signal with the measure
of the arrival times of the electrons on the anode.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast Reconstruction of High-qubit Quantum States via Low Rate Measurements | Due to the exponential complexity of the resources required by quantum state
tomography (QST), people are interested in approaches towards identifying
quantum states which require less effort and time. In this paper, we provide a
tailored and efficient method for reconstructing mixed quantum states up to
$12$ (or even more) qubits from an incomplete set of observables subject to
noises. Our method is applicable to any pure or nearly pure state $\rho$, and
can be extended to many states of interest in quantum information processing,
such as multi-particle entangled $W$ state, GHZ state and cluster states that
are matrix product operators of low dimensions. The method applies the quantum
density matrix constraints to a quantum compressive sensing optimization
problem, and exploits a modified Quantum Alternating Direction Multiplier
Method (Quantum-ADMM) to accelerate the convergence. Our algorithm takes $8$,
$35$ and $226$ seconds, respectively, to reconstruct superposition state density
matrices of $10$, $11$ and $12$ qubits with acceptable fidelity, using less than $1\%$
of expectation-value measurements. To our knowledge, it is the fastest such
reconstruction achievable on a standard desktop computer. We further discuss applications
of this method using experimental data of mixed states obtained in an ion trap
experiment of up to $8$ qubits.
| 1 | 0 | 1 | 0 | 0 | 0 |
Inferring network connectivity from event timing patterns | Reconstructing network connectivity from the collective dynamics of a system
typically requires access to its complete continuous-time evolution, although
this is often experimentally inaccessible. Here we propose a theory for
revealing physical connectivity of networked systems only from the event time
series their intrinsic collective dynamics generate. Representing the patterns
of event timings in an event space spanned by inter-event and cross-event
intervals, we reveal which other units directly influence the inter-event times
of any given unit. For illustration, we linearize an event space mapping
constructed from the spiking patterns in model neural circuits to reveal the
presence or absence of synapses between any pair of neurons as well as whether
the coupling acts in an inhibiting or activating (excitatory) manner. The
proposed model-independent reconstruction theory is scalable to larger networks
and may thus play an important role in the reconstruction of networks from
biology to social science and engineering.
| 0 | 0 | 0 | 1 | 1 | 0 |
FADE: Fast and Asymptotically efficient Distributed Estimator for dynamic networks | Consider a set of agents that wish to estimate a vector of parameters of
their mutual interest. For this estimation goal, agents can sense and
communicate. When sensing, an agent measures (in additive gaussian noise)
linear combinations of the unknown vector of parameters. When communicating, an
agent can broadcast information to a few other agents, by using the channels
that happen to be randomly at its disposal at the time.
To coordinate the agents towards their estimation goal, we propose a novel
algorithm called FADE (Fast and Asymptotically efficient Distributed
Estimator), in which agents collaborate at discrete time-steps; at each
time-step, agents sense and communicate just once, while also updating their
own estimate of the unknown vector of parameters.
FADE enjoys five attractive features: first, it is an intuitive estimator,
simple to derive; second, it withstands dynamic networks, that is, networks
whose communication channels change randomly over time; third, it is strongly
consistent in that, as time-steps play out, each agent's local estimate
converges (almost surely) to the true vector of parameters; fourth, it is both
asymptotically unbiased and efficient, which means that, across time, each
agent's estimate becomes unbiased and the mean-square error (MSE) of each
agent's estimate vanishes to zero at the same rate of the MSE of the optimal
estimator at an almighty central node; fifth, and most importantly, when
compared with a state-of-art consensus+innovation (CI) algorithm, it yields
estimates with outstandingly lower mean-square errors, for the same number of
communications -- for example, in a sparsely connected network model with 50
agents, we find through numerical simulations that the reduction can be
dramatic, reaching several orders of magnitude.
| 1 | 0 | 0 | 0 | 0 | 0 |
TextRank Based Search Term Identification for Software Change Tasks | During maintenance, software developers deal with a number of software change
requests. Each of those requests is generally written using natural language
texts, and it involves one or more domain related concepts. A developer needs
to map those concepts to exact source code locations within the project in
order to implement the requested change. This mapping generally starts with a
search within the project that requires one or more suitable search terms.
Studies suggest that the developers often perform poorly in coming up with good
search terms for a change task. In this paper, we propose and evaluate a novel
TextRank-based technique that automatically identifies and suggests search
terms for a software change task by analyzing its task description. Experiments
with 349 change tasks from two subject systems and comparison with one of the
latest and closely related state-of-the-art approaches show that our technique
is highly promising in terms of suggestion accuracy, mean average precision and
recall.
| 1 | 0 | 0 | 0 | 0 | 0 |
Natural Time Analysis of Seismicity in California: The epicenter of an impending mainshock | Upon employing the analysis in a new time domain, termed natural time, it has
been recently demonstrated that a remarkable change of seismicity emerges
before major mainshocks in California. What constitutes this change is that the
fluctuations of the order parameter of seismicity exhibit a clearly detectable
minimum. This is identified by using a natural time window sliding event by
event through the time series of the earthquakes in a wide area and comprising
a number of events that would occur on average within a few months or so.
Here, we suggest a method to estimate the epicentral area of an impending
mainshock by an additional study of this minimum using an area window sliding
through the wide area. We find that when this area window surrounds (or is
adjacent to) the future epicentral area, the minimum of the order parameter
fluctuations in this area appears at a date very close to the one at which the
minimum is observed in the wide area. The method is applied here to major
earthquakes that occurred in California during the recent decades including the
largest one, i.e., the 1992 Landers earthquake.
| 0 | 1 | 0 | 0 | 0 | 0 |
Atomistic study of hardening mechanism in Al-Cu nanostructure | Nanostructures have the immense potential to supplant the traditional
metallic structure as they show enhanced mechanical properties through strain
hardening. In this paper, the effect of grain size on the hardening mechanism
of Al-Cu nanostructure is elucidated by molecular dynamics simulation. Al-Cu
(50-54% Cu by weight) nanostructure having an average grain size of 4.57 to
7.26 nm are investigated for tensile simulation at different strain rate using
embedded atom method (EAM) potential at temperatures of 50-500K. It is found
that the failure mechanism of the nanostructure is governed by the temperature,
grain size as well as strain rate effect. At the high temperature of 300-500K,
the failure strength of Al-Cu nanostructure increases with the decrease of
average grain size following Hall-Petch relation. Dislocation motions are
hindered significantly when the grain size is decreased which play a vital role
on the hardening of the nanostructure. The failure is always found to initiate
at a particular Al grain due to its weak link and propagates through grain
boundary (GB) sliding, diffusion, dislocation nucleation and propagation. We
also visualize the dislocation density at different grain size to show how the
dislocation affects the material properties at the nanoscale. These results
will further aid investigation on the deformation mechanism of nanostructure.
| 0 | 1 | 0 | 0 | 0 | 0 |
Maximum redshift of gravitational wave merger events | Future generation of gravitational wave detectors will have the sensitivity
to detect gravitational wave events at redshifts far beyond any detectable
electromagnetic sources. We show that if the observed event rate is greater
than one event per year at redshifts z > 40, then the probability distribution
of primordial density fluctuations must be significantly non-Gaussian or the
events originate from primordial black holes. The nature of the excess events
can be determined from the redshift distribution of the merger rate.
| 0 | 1 | 0 | 0 | 0 | 0 |
The altmetric performance of publications authored by Brazilian researchers: analysis of CNPq productivity scholarship holders | The present work seeks to analyse the altmetric performance of Brazilian
publications authored by researchers who are productivity scholarship holders
(PQ) of the National Council of Scientific and Technological Development
(CNPq). Within the scope of this research, we considered the PQs active in
October 2017 (n = 14,609). The scientific production registered on
Lattes was collected via GetLattesData and filtered by articles from academic
journals published between 2016 and October 2017 that hold the Digital Object
Identifier (n = 99064). The online attention data are analysed according to
their distribution by density and variation; language of the publication and
field of knowledge; and by average performance of the type of source that has
provided its altmetric values. The density evidences the long tail behavior of
the variable, with most of the articles having an altmetric score of 0, while a
few articles have a high index. The average of the online attention indicates a
better performance of articles written in English and belonging to the Health
and Biological Sciences field of knowledge. As for the sources, there was a
good performance from Mendeley, followed by Twitter and a low coverage from
Facebook.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Impact of Information Dissemination on Vaccinating Epidemics in Multiplex Networks | The impact of information dissemination on epidemic control essentially
affects individual behaviors. Among the information-driven behaviors,
vaccination is determined by the cost-related factors, and the correlation with
information dissemination is not clear yet. To this end, we present a model to
integrate the information-epidemic spread process into an evolutionary
vaccination game in multiplex networks, and explore how the spread of
information on epidemic influences the vaccination behavior. We propose a
two-layer coupled susceptible-alert-infected-susceptible (SAIS) model on a
multiplex network, where the strength coefficient is defined to characterize
the tendency and intensity of information dissemination. By means of the
evolutionary game theory, we get the equilibrium vaccination level (the
evolutionary stable strategy) for the vaccination game. After exploring the
influence of the strength coefficient on the equilibrium vaccination level, we
reach a counter-intuitive conclusion that more information transmission cannot
promote vaccination. Specifically, when the vaccination cost is within a
certain range, increasing information dissemination even leads to a decline in
the equilibrium vaccination level. Moreover, we study the influence of the
strength coefficient on the infection density and social cost, and unveil the
role of information dissemination in controlling the epidemic with numerical
simulations.
| 0 | 0 | 0 | 0 | 1 | 0 |
Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory | As artificial intelligence is increasingly affecting all parts of society and
life, there is growing recognition that human interpretability of machine
learning models is important. It is often argued that accuracy or other similar
generalization performance metrics must be sacrificed in order to gain
interpretability. Such arguments, however, fail to acknowledge that the overall
decision-making system is composed of two entities: the learned model and a
human who fuses together model outputs with his or her own information. As
such, the relevant performance criteria should be for the entire system, not
just for the machine learning component. In this work, we characterize the
performance of such two-node tandem data fusion systems using the theory of
distributed detection. In doing so, we work in the population setting and model
interpretable learned models as multi-level quantizers. We prove that under our
abstraction, the overall system of a human with an interpretable classifier
outperforms one with a black box classifier.
| 0 | 0 | 0 | 1 | 0 | 0 |
Online Estimation and Adaptive Control for a Class of History Dependent Functional Differential Equations | This paper presents sufficient conditions for the convergence of online
estimation methods and the stability of adaptive control strategies for a class
of history dependent, functional differential equations. The study is motivated
by the increasing interest in estimation and control techniques for robotic
systems whose governing equations include history dependent nonlinearities. The
functional differential equations in this paper are constructed using integral
operators that depend on distributed parameters. As a consequence the resulting
estimation and control equations are examples of distributed parameter systems
whose states and distributed parameters evolve in finite and infinite
dimensional spaces, respectively. Well-posedness, existence, and uniqueness
are discussed for the class of fully actuated robotic systems with history
dependent forces in their governing equation of motion. By deriving rates of
approximation for the class of history dependent operators in this paper,
sufficient conditions are derived that guarantee that finite dimensional
approximations of the online estimation equations converge to the solution of
the infinite dimensional, distributed parameter system. The convergence and
stability of a sliding mode adaptive control strategy for the history
dependent, functional differential equations is established using Barbalat's
lemma.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Kinematics of the Permitted C II $λ$ 6578 Line in a Large Sample of Planetary Nebulae | We present spectroscopic observations of the C II $\lambda$6578 permitted
line for 83 lines of sight in 76 planetary nebulae at high spectral resolution,
most of them obtained with the Manchester Echelle Spectrograph on the 2.1\,m
telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro
Mártir. We study the kinematics of the C II $\lambda$6578 permitted line with
respect to other permitted and collisionally-excited lines. Statistically, we
find that the kinematics of the C II $\lambda$6578 line are not those expected
if this line arises from the recombination of C$^{2+}$ ions or the fluorescence
of C$^+$ ions in ionization equilibrium in a chemically-homogeneous nebular
plasma, but instead its kinematics are those appropriate for a volume more
internal than expected. The planetary nebulae in this sample have well-defined
morphology and are restricted to a limited range in H$\alpha$ line widths (no
large values) compared to their counterparts in the Milky Way bulge, both of
which could be interpreted as the result of young nebular shells, an inference
that is also supported by nebular modeling. Concerning the long-standing
discrepancy between chemical abundances inferred from permitted and
collisionally-excited emission lines in photoionized nebulae, our results imply
that multiple plasma components occur commonly in planetary nebulae.
| 0 | 1 | 0 | 0 | 0 | 0 |
From parabolic-trough to metasurface-concentrator | Metasurfaces are promising tools towards novel designs for flat optics
applications. As such their quality and tolerance to fabrication imperfections
need to be evaluated with specific tools. However, most such tools rely on the
geometrical optics approximation and are not straightforwardly applicable to
metasurfaces. In this Letter, we introduce and evaluate, for metasurfaces,
parameters such as the intercept factor and the slope error usually defined for
solar concentrators in the realm of ray-optics. After proposing definitions
valid in physical optics, we put forward an approach to calculate them. As
examples, we design three different concentrators based on three specific unit
cells and assess them numerically. The concept allows for the comparison of the
efficiency of the metasurfaces, their sensitivities to fabrication
imperfections and will be critical for practical systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Correspondence Theorem between Holomorphic Discs and Tropical Discs on K3 Surfaces | We prove that the open Gromov-Witten invariants on K3 surfaces satisfy the
Kontsevich-Soibelman wall-crossing formula. On one hand, this gives a
geometric interpretation of the slab functions in the Gross-Siebert program. On the
other hand, the open Gromov-Witten invariants coincide with the weighted
counting of tropical discs. This is an analog of the corresponding theorem on
toric varieties \cite{M2}\cite{NS} but on compact Calabi-Yau surfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A review and comparative study on functional time series techniques | This paper reviews the main estimation and prediction results derived in the
context of functional time series, when Hilbert and Banach spaces are
considered, specially, in the context of autoregressive processes of order one
(ARH(1) and ARB(1) processes, for H and B being a Hilbert and Banach space,
respectively). Particularly, we pay attention to the estimation and prediction
results, and statistical tests, derived in both parametric and non-parametric
frameworks. A comparative study between different ARH(1) prediction approaches
is developed in the simulation study undertaken.
| 0 | 0 | 1 | 1 | 0 | 0 |
Surface depression with double-angle geometry during the discharge of close-packed grains from a silo | When rough grains in standard packing conditions are discharged from a silo,
a conical depression with a single slope is formed at the surface. We observed
that the increase of the volume fraction generates a more complex depression
characterized by two angles of discharge: a lower angle close to the one
measured for standard packing and a considerably larger upper angle. The change
in slope appears at the boundary between a densely packed stagnant region at
the periphery and the central flowing channel formed over the aperture. Since
the material in the latter zone is always fluidized, the flow rate is
unaffected by the initial packing of the bed. On the other hand, the contrast
between both angles is markedly smaller when smooth particles of the same size
and density are used, which reveals that high volume fraction and friction must
combine to produce the observed geometry. Our results show that the surface
profile helps to identify by simple visual inspection the packing conditions of
a granular bed, and this can be useful to prevent undesirable collapses during
silo discharge in industry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Answer Set Programming for Non-Stationary Markov Decision Processes | Non-stationary domains, where unforeseen changes happen, present a challenge
for agents to find an optimal policy for a sequential decision making problem.
This work investigates a solution to this problem that combines Markov Decision
Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming
(ASP) in a method we call ASP(RL). In this method, Answer Set Programming is
used to find the possible trajectories of an MDP, from where Reinforcement
Learning is applied to learn the optimal policy of the problem. Results show
that ASP(RL) is capable of efficiently finding the optimal solution of an MDP
representing non-stationary domains.
| 1 | 0 | 0 | 0 | 0 | 0 |
Space-Bounded OTMs and REG$^{\infty}$ | An important theorem in classical complexity theory is that LOGLOGSPACE=REG,
i.e. that languages decidable with double-logarithmic space bound are regular.
We consider a transfinite analogue of this theorem. To this end, we introduce
deterministic ordinal automata (DOAs) and show that they satisfy many of the
basic statements of the theory of deterministic finite automata and regular
languages. We then consider languages decidable by an ordinal Turing machine
(OTM), introduced by P. Koepke in 2005, and show that if the working space of
an OTM is of strictly smaller cardinality than the input length for all
sufficiently long inputs, the language so decided is also decidable by a DOA.
| 0 | 0 | 1 | 0 | 0 | 0 |
Linear-time approximation schemes for planar minimum three-edge connected and three-vertex connected spanning subgraphs | We present the first polynomial-time approximation schemes, i.e., (1 +
{\epsilon})-approximation algorithm for any constant {\epsilon} > 0, for the
minimum three-edge connected spanning subgraph problem and the minimum
three-vertex connected spanning subgraph problem in undirected planar graphs.
Both the approximation schemes run in linear time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards Wi-Fi AP-Assisted Content Prefetching for On-Demand TV Series: A Reinforcement Learning Approach | The emergence of smart Wi-Fi APs (Access Point), which are equipped with huge
storage space, opens a new research area on how to utilize these resources at
the edge network to improve users' quality of experience (QoE) (e.g., a short
startup delay and smooth playback). One important research interest in this
area is content prefetching, which predicts and accurately fetches contents
ahead of users' requests to shift the traffic away during peak periods.
However, in practice, the different video watching patterns among users, and
the varying network connection status lead to the time-varying server load,
which eventually makes the content prefetching problem challenging. To
understand this challenge, this paper first performs a large-scale measurement
study on users' AP connection and TV series watching patterns using
real-traces. Then, based on the obtained insights, we formulate the content
prefetching problem as a Markov Decision Process (MDP). The objective is to
strike a balance between the increased prefetching and storage cost incurred by
incorrect prediction and the reduced content download delay because of
successful prediction. A learning-based approach is proposed to solve this
problem and another three algorithms are adopted as baselines. In particular,
first, we investigate the performance lower bound by using a random algorithm,
and the upper bound by using an ideal offline approach. Then, we present a
heuristic algorithm as another baseline. Finally, we design a reinforcement
learning algorithm that is more practical to deploy in an online manner. Through
extensive trace-based experiments, we demonstrate the performance gain of our
design. Remarkably, our learning-based algorithm achieves a better precision
and hit ratio (e.g., 80%) with about 70% (resp. 50%) cost saving compared to
the random (resp. heuristic) algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Schematic Polymorphism in the Abella Proof Assistant | The Abella interactive theorem prover has proven to be an effective vehicle
for reasoning about relational specifications. However, the system has a
limitation that arises from the fact that it is based on a simply typed logic:
formalizations that are identical except in the respect that they apply to
different types have to be repeated at each type. We develop an approach that
overcomes this limitation while preserving the logical underpinnings of the
system. In this approach object constructors, formulas and other relevant
logical notions are allowed to be parameterized by types, with the
interpretation that they stand for the (infinite) collection of corresponding
constructs that are obtained by instantiating the type parameters. The proof
structures that we consider for formulas that are schematized in this fashion
are limited to ones whose type instances are valid proofs in the simply typed
logic. We develop schematic proof rules that ensure this property, a task that
is complicated by the fact that type information influences the notion of
unification that plays a key role in the logic. Our ideas, which have been
implemented in an updated version of the system, accommodate schematic
polymorphism both in the core logic of Abella and in the executable
specification logic that it embeds.
| 1 | 0 | 0 | 0 | 0 | 0 |
Representations of weakly multiplicative arithmetic matroids are unique | An arithmetic matroid is weakly multiplicative if the multiplicity of at
least one of its bases is equal to the product of the multiplicities of its
elements. We show that if such an arithmetic matroid can be represented by an
integer matrix, then this matrix is uniquely determined. This implies that the
integer cohomology ring of a centred toric arrangement whose arithmetic matroid
is weakly multiplicative is determined by its poset of layers. This partially
answers a question asked by Callegaro-Delucchi.
| 0 | 0 | 1 | 0 | 0 | 0 |
Memory Efficient Max Flow for Multi-label Submodular MRFs | Multi-label submodular Markov Random Fields (MRFs) have been shown to be
solvable using max-flow based on an encoding of the labels proposed by
Ishikawa, in which each variable $X_i$ is represented by $\ell$ nodes (where
$\ell$ is the number of labels) arranged in a column. However, this method in
general requires $2\,\ell^2$ edges for each pair of neighbouring variables.
This makes it inapplicable to realistic problems with many variables and
labels, due to excessive memory requirement. In this paper, we introduce a
variant of the max-flow algorithm that requires much less storage.
Consequently, our algorithm makes it possible to optimally solve multi-label
submodular problems involving large numbers of variables and labels on a
standard computer.
| 1 | 0 | 0 | 0 | 0 | 0 |
Double Covers of Cartan Modular Curves | We present a strategy to obtain explicit equations for the modular double
covers associated respectively to both a split and a non-split Cartan subgroup
of $\text{GL}_2(\mathbb F_{p})$ with $p$ prime. Then we apply it successfully
to the level $13$ case.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer | In this paper, we present a simple analysis of {\bf fast rates} with {\it
high probability} of {\bf empirical minimization} for {\it stochastic composite
optimization} over a finite-dimensional bounded convex set with exponential
concave loss functions and an arbitrary convex regularization. To the best of
our knowledge, this result is the first of its kind. As a byproduct, we can
directly obtain the fast rate with {\it high probability} for exponential
concave empirical risk minimization with and without any convex regularization,
which not only extends existing results of empirical risk minimization but also
provides a unified framework for analyzing exponential concave empirical risk
minimization with and without {\it any} convex regularization. Our proof is
very simple, exploiting only the covering number of a finite-dimensional
bounded set and a concentration inequality of random vectors.
| 0 | 0 | 0 | 1 | 0 | 0 |
Generation of surface plasmon-polaritons by edge effects | By using numerical and analytical methods, we describe the generation of
fine-scale lateral electromagnetic waves, called surface plasmon-polaritons
(SPPs), on atomically thick, metamaterial conducting sheets in two spatial
dimensions (2D). Our computations capture the two-scale character of the total
field and reveal how each edge of the sheet acts as a source of an SPP that may
dominate the diffracted field. We use the finite element method to numerically
implement a variational formulation for a weak discontinuity of the tangential
magnetic field across a hypersurface. An adaptive, local mesh refinement
strategy based on a posteriori error estimators is applied to resolve the
pronounced two-scale character of wave propagation and radiation over the
metamaterial sheet. We demonstrate by numerical examples how a singular
geometry, e.g., sheets with sharp edges, and sharp spatial changes in the
associated surface conductivity may significantly influence surface plasmons in
nanophotonics.
| 0 | 1 | 1 | 0 | 0 | 0 |
PowerAI DDL | As deep neural networks become more complex and input datasets grow larger,
it can take days or even weeks to train a deep neural network to the desired
accuracy. Therefore, distributed Deep Learning at a massive scale is a critical
capability, since it offers the potential to reduce the training time from
weeks to hours. In this paper, we present a software-hardware co-optimized
distributed Deep Learning system that can achieve near-linear scaling up to
hundreds of GPUs. The core algorithm is a multi-ring communication pattern that
provides a good tradeoff between latency and bandwidth and adapts to a variety
of system configurations. The communication algorithm is implemented as a
library for easy use. This library has been integrated into Tensorflow, Caffe,
and Torch. We train Resnet-101 on Imagenet 22K with 64 IBM Power8 S822LC
servers (256 GPUs) in about 7 hours to 33.8 % validation
accuracy. Microsoft's ADAM and Google's DistBelief results did not reach 30 %
validation accuracy for Imagenet 22K. Compared to Facebook AI Research's recent
paper on 256 GPU training, we use a different communication algorithm, and our
combined software and hardware system offers better communication overhead for
Resnet-50. A PowerAI DDL enabled version of Torch completed 90 epochs of
training on Resnet 50 for 1K classes in 50 minutes using 64 IBM Power8 S822LC
servers (256 GPUs).
| 1 | 0 | 0 | 0 | 0 | 0 |