text (string, lengths 57 to 2.88k) | labels (sequence, length 6) |
---|---|
Title: The unsaturated flow in porous media with dynamic capillary pressure,
Abstract: In this paper we consider a degenerate pseudoparabolic equation for the
wetting saturation of an unsaturated two-phase flow in porous media with
a dynamic capillary pressure-saturation relationship in which the relaxation
parameter depends on the saturation. Following the approach given in [12], the
existence of a weak solution is proved using Galerkin approximation and
regularization techniques. A priori estimates needed for passing to the limit
when the regularization parameter goes to zero are obtained by using
appropriate test functions, motivated by the fact that the considered PDE allows a
natural generalization of the classical Kullback entropy. Finally, special
care is taken in obtaining an estimate of the mixed derivative term by
combining the information from the capillary pressure with the obtained a priori
estimates on the saturation. | [
0,
0,
1,
0,
0,
0
] |
Title: Dissociation of one-dimensional matter-wave breathers due to quantum many-body effects,
Abstract: We use the ab initio Bethe Ansatz dynamics to predict the dissociation of
one-dimensional cold-atom breathers that are created by a quench from a
fundamental soliton. We find that the dissociation is a robust quantum
many-body effect, while in the mean-field (MF) limit the dissociation is
forbidden by the integrability of the underlying nonlinear Schrödinger
equation. The analysis demonstrates the possibility to observe quantum
many-body effects without leaving the MF range of experimental parameters. We
find that the dissociation time is of the order of a few seconds for a typical
atomic-soliton setting. | [
0,
1,
0,
0,
0,
0
] |
Title: Human Eye Visual Hyperacuity: A New Paradigm for Sensing?,
Abstract: The human eye appears to be using a low number of sensors for image
capturing. Furthermore, regarding the physical dimensions of cones (the
photoreceptors responsible for sharp central vision), we may realize
that these sensors are of a relatively small size and area. Nonetheless, the
eye is capable of obtaining high-resolution images due to visual hyperacuity and
presents an impressive sensitivity and dynamic range when set against
conventional digital cameras of similar characteristics. This article is based
on the hypothesis that the human eye may be benefiting from diffraction to
improve both image resolution and the acquisition process. The developed method
intends to explain and simulate visual hyperacuity using MATLAB software:
the introduction of a controlled diffraction pattern at an initial stage
enables the use of a reduced number of sensors for capturing the image and
makes possible subsequent processing to improve the final image resolution.
The results have been compared with the outcome of an equivalent system in the
absence of diffraction, achieving promising results. The main conclusion of
this work is that diffraction could be helpful for capturing images or signals
when only a small number of sensors is available, which is far from being a
resolution-limiting factor. | [
1,
0,
0,
0,
0,
0
] |
Title: Structured Matrix Estimation and Completion,
Abstract: We study the problem of matrix estimation and matrix completion under a
general framework. This framework includes several important models as special
cases such as the Gaussian mixture model, mixed membership model, bi-clustering
model, and dictionary learning. We consider the optimal convergence rates in a
minimax sense for estimation of the signal matrix under the Frobenius norm and
under the spectral norm. As a consequence of our general result we obtain
minimax optimal rates of convergence for various special models. | [
0,
0,
1,
1,
0,
0
] |
Title: Performance evaluation of PSD for silicon ECAL,
Abstract: We are developing position sensitive silicon detectors (PSD) which have an
electrode at each of four corners so that the incident position of a charged
particle can be obtained using signals from the electrodes. It is expected that
the position resolution of the electromagnetic calorimeter (ECAL) of the ILD
detector will be improved by introducing PSDs into the detection layers. In this
study, we irradiated collimated laser beams onto the surface of the PSD, varying
the incident position. We found that the incident position can be well
reconstructed from the signals if a high resistance is implemented in the p+
layer. We also tried to observe the signal of particles by placing a radioactive
source on the PSD sensor. | [
0,
1,
0,
0,
0,
0
] |
Title: Online Improper Learning with an Approximation Oracle,
Abstract: We revisit the question of reducing online learning to approximate
optimization of the offline problem. In this setting, we give two algorithms
with near-optimal performance in the full information setting: they guarantee
optimal regret and require only poly-logarithmically many calls to the
approximation oracle per iteration. Furthermore, these algorithms apply to the
more general improper learning problems. In the bandit setting, our algorithm
also significantly improves the best previously known oracle complexity while
maintaining the same regret. | [
0,
0,
0,
1,
0,
0
] |
Title: TC^0 circuits for algorithmic problems in nilpotent groups,
Abstract: Recently, Macdonald et. al. showed that many algorithmic problems for
finitely generated nilpotent groups including computation of normal forms, the
subgroup membership problem, the conjugacy problem, and computation of subgroup
presentations can be done in Logspace. Here we follow their approach and show
that all these problems are complete for the uniform circuit class TC^0 -
uniformly for all r-generated nilpotent groups of class at most c for fixed r
and c. In order to solve these problems in TC^0, we show that the unary version
of the extended gcd problem (compute greatest common divisors and express them
as linear combinations) is in TC^0. Moreover, if we allow a certain binary
representation of the inputs, then the word problem and computation of normal
forms are still in uniform TC^0, while all the other problems we examine are
shown to be TC^0-Turing reducible to the binary extended gcd problem. | [
1,
0,
1,
0,
0,
0
] |
Title: Alperin-McKay natural correspondences in solvable and symmetric groups for the prime $p=2$,
Abstract: Let $G$ be a finite solvable or symmetric group and let $B$ be a $2$-block of
$G$. We construct a canonical correspondence between the irreducible characters
of height zero in $B$ and those in its Brauer first main correspondent. For
symmetric groups our bijection is compatible with restriction of characters. | [
0,
0,
1,
0,
0,
0
] |
Title: Reversible temperature exchange upon thermal contact,
Abstract: According to a well-known principle of thermodynamics, the transfer of heat
between two bodies is reversible when their temperatures are infinitesimally
close. As we demonstrate, a little-known alternative exists: two bodies with
temperatures different by an arbitrary amount can completely exchange their
temperatures in a reversible way if split into infinitesimal parts that are
brought into thermal contact sequentially. | [
0,
1,
0,
0,
0,
0
] |
Title: On consequences of measurements of turbulent Lewis number from observations,
Abstract: Almost all parameterizations of turbulence in NWP models and GCMs make the
assumption of equality of exchange coefficients for heat $K_h$ and water $K_w$.
However, large uncertainties exist in old papers published in the 1950s, 1960s
and 1970s, where the turbulent Lewis number Le_t $= K_h / K_w$ has been
evaluated from observations and then set to Le_t$=1$.
The aim of this note is: 1) to follow the recommendations of Richardson
(1919), who suggested using the moist-air entropy as a variable on which the
turbulence is acting; 2) to compute a new exchange coefficient $K_s$ for the
moist-air entropy; 3) to determine the values of the new entropy-Lewis number
Le_ts $= K_s / K_w$ from observations (Météopole-Flux and Cabauw masts) and
from LES and SCM outputs for the IHOP case (Couvreux et al., 2005).
It is shown that values of Le_ts significantly different from $1$ are
frequently observed and may have large consequences for the way the turbulent
fluxes are computed in NWP models and GCMs. | [
0,
1,
0,
0,
0,
0
] |
Title: Complex Networks Analysis for Software Architecture: a Hibernate Call Graph Study,
Abstract: Recent advancements in complex network analysis are encouraging and may
provide useful insights when applied in the software engineering domain, revealing
properties and structures that cannot be captured by traditional metrics. In
this paper, we analyzed the topological properties of the Hibernate library, a
well-known Java-based software system, through the extraction of its static call graph.
The results reveal a complex network with small-world and scale-free
characteristics while displaying a strong propensity for forming communities. | [
1,
0,
0,
0,
0,
0
] |
Title: The Impact of Local Geometry and Batch Size on Stochastic Gradient Descent for Nonconvex Problems,
Abstract: In several experimental reports on nonconvex optimization problems in machine
learning, stochastic gradient descent (SGD) was observed to prefer minimizers
with flat basins in comparison to more deterministic methods, yet there is very
little rigorous understanding of this phenomenon. In fact, the lack of such
work has led to an unverified, but widely-accepted stochastic mechanism
describing why SGD prefers flatter minimizers to sharper minimizers. However,
as we demonstrate, the stochastic mechanism fails to explain this phenomenon.
Here, we propose an alternative deterministic mechanism that can accurately
explain why SGD prefers flatter minimizers to sharper minimizers. We derive
this mechanism based on a detailed analysis of a generic stochastic quadratic
problem, which generalizes known results for classical gradient descent.
Finally, we verify the predictions of our deterministic mechanism on two
nonconvex problems. | [
0,
0,
0,
1,
0,
0
] |
Title: Cosmology and the Origin of the Universe: Historical and Conceptual Perspectives,
Abstract: From a modern perspective cosmology is a historical science in so far that it
deals with the development of the universe since its origin some 14 billion
years ago. The origin itself may not be subject to scientific analysis and
explanation. Nonetheless, there are theories that claim to explain the ultimate
origin or "creation" of the universe. As shown by the history of cosmological
thought, the very concept of "origin" is problematic and can be understood in
different ways. While it is normally understood as a temporal concept, cosmic
origin is not temporal by necessity. The universe can be assigned an origin
even though it has no definite age. In order to clarify the question, a review of
earlier ideas will be helpful, with these ideas coming not only from astronomy but
also from philosophy and theology. | [
0,
1,
0,
0,
0,
0
] |
Title: Semisimple and separable algebras in multi-fusion categories,
Abstract: We give a classification of semisimple and separable algebras in a
multi-fusion category over an arbitrary field, in analogy to the Wedderburn-Artin
theorem in classical algebras. It turns out that, if the multi-fusion category
admits a semisimple Drinfeld center, the only obstruction to the separability
of a semisimple algebra arises from inseparable field extensions as in
classical algebras. Among other results, we show that a division algebra is separable
if and only if it has a nonvanishing dimension. | [
0,
0,
1,
0,
0,
0
] |
Title: Ensemble dependence of fluctuations and the canonical/micro-canonical equivalence of ensembles,
Abstract: We study the equivalence of microcanonical and canonical ensembles in
continuous systems, in the sense of the convergence of the corresponding Gibbs
measures. This is obtained by proving a local central limit theorem and a local
large deviations principle. As an application we prove a formula due to
Lebowitz-Percus-Verlet. It gives mean square fluctuations of an extensive
observable, like the kinetic energy, in a classical microcanonical ensemble at
fixed energy. | [
0,
1,
1,
0,
0,
0
] |
Title: Algorithms for Positive Semidefinite Factorization,
Abstract: This paper considers the problem of positive semidefinite factorization (PSD
factorization), a generalization of exact nonnegative matrix factorization.
Given an $m$-by-$n$ nonnegative matrix $X$ and an integer $k$, the PSD
factorization problem consists in finding, if possible, symmetric $k$-by-$k$
positive semidefinite matrices $\{A^1,...,A^m\}$ and $\{B^1,...,B^n\}$ such
that $X_{i,j}=\text{trace}(A^iB^j)$ for $i=1,...,m$, and $j=1,...,n$. PSD
factorization is NP-hard. In this work, we introduce several local optimization
schemes to tackle this problem: a fast projected gradient method and two
algorithms based on the coordinate descent framework. The main application of
PSD factorization is the computation of semidefinite extensions, that is, the
representations of polyhedrons as projections of spectrahedra, for which the
matrix to be factorized is the slack matrix of the polyhedron. We compare the
performance of our algorithms on this class of problems. In particular, we
compute the PSD extensions of size $k=1+ \lceil \log_2(n) \rceil$ for the
regular $n$-gons when $n=5$, $8$ and $10$. We also show how to generalize our
algorithms to compute the square root rank (which is the size of the factors in
a PSD factorization where all factor matrices $A^i$ and $B^j$ have rank one)
and completely PSD factorizations (which is the special case where the input
matrix is symmetric and equality $A^i=B^i$ is required for all $i$). | [
1,
0,
1,
0,
0,
0
] |
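The abstract above describes local optimization schemes for PSD factorization. As a rough illustration of the problem setup only, the Python sketch below runs a naive alternating projected-gradient descent on the squared residual sum_{ij} (trace(A^i B^j) - X_{ij})^2; it is not one of the paper's algorithms, the step size, iteration count, and input matrix are hypothetical, and convergence to an exact factorization is not guaranteed.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    S = 0.5 * (M + M.T)
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T

def psd_factorize(X, k, iters=500, step=1e-2, seed=0):
    """Naive alternating projected gradient descent on
    sum_ij (trace(A_i B_j) - X_ij)^2 with all A_i, B_j kept PSD."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = [project_psd(rng.normal(size=(k, k))) for _ in range(m)]
    B = [project_psd(rng.normal(size=(k, k))) for _ in range(n)]
    for _ in range(iters):
        R = np.array([[np.trace(A[i] @ B[j]) for j in range(n)]
                      for i in range(m)]) - X                     # residuals
        for i in range(m):
            A[i] = project_psd(A[i] - step * sum(2 * R[i, j] * B[j] for j in range(n)))
        for j in range(n):
            B[j] = project_psd(B[j] - step * sum(2 * R[i, j] * A[i] for i in range(m)))
    R = np.array([[np.trace(A[i] @ B[j]) for j in range(n)] for i in range(m)]) - X
    return A, B, float((R ** 2).sum())

# Small random nonnegative matrix as a stand-in input.
X = np.abs(np.random.default_rng(1).normal(size=(4, 5)))
A, B, err = psd_factorize(X, k=3)
print("final squared residual:", err)
```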
Title: Remarks about Synthetic Upper Ricci Bounds for Metric Measure Spaces,
Abstract: We discuss various characterizations of synthetic upper Ricci bounds for
metric measure spaces in terms of heat flow, entropy and optimal transport. In
particular, we present a characterization in terms of semiconcavity of the
entropy along certain Wasserstein geodesics which is stable under convergence
of mm-spaces. And we prove that a related characterization is equivalent to an
asymptotic lower bound on the growth of the Wasserstein distance between heat
flows. For weighted Riemannian manifolds, the crucial result will be a precise
uniform two-sided bound for \begin{eqnarray*}\frac{d}{dt}\Big|_{t=0}W\big(\hat
P_t\delta_x,\hat P_t\delta_y\big)\end{eqnarray*} in terms of the mean value of
the Bakry-Emery Ricci tensor $\mathrm{Ric}+\mathrm{Hess}\, f$ along the
minimizing geodesic from $x$ to $y$ and an explicit correction term depending
on the bound for the curvature along this curve. | [
0,
0,
1,
0,
0,
0
] |
Title: Structured Deep Hashing with Convolutional Neural Networks for Fast Person Re-identification,
Abstract: Given a pedestrian image as a query, the purpose of person re-identification
is to identify the correct match from a large collection of gallery images
depicting the same person captured by disjoint camera views. The critical
challenge is how to construct a robust yet discriminative feature
representation to capture the compounded variations in pedestrian appearance.
To this end, deep learning methods have been proposed to extract hierarchical
features against extreme variability of appearance. However, existing methods
in this category generally neglect the efficiency in the matching stage whereas
the searching speed of a re-identification system is crucial in real-world
applications. In this paper, we present a novel deep hashing framework with
Convolutional Neural Networks (CNNs) for fast person re-identification.
Technically, we simultaneously learn both CNN features and hash functions/codes
to get robust yet discriminative features and similarity-preserving hash codes.
Thereby, person re-identification can be resolved by efficiently computing and
ranking the Hamming distances between images. A structured loss function
defined over positive pairs and hard negatives is proposed to formulate a novel
optimization problem so that fast convergence and a more stable optimized
solution can be obtained. Extensive experiments on two benchmarks, CUHK03
\cite{FPNN} and Market-1501 \cite{Market1501}, show that the proposed deep
architecture outperforms state-of-the-art methods. | [
1,
0,
0,
0,
0,
0
] |
Title: Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses,
Abstract: Recently, methods have been proposed that perform texture synthesis and style
transfer by using convolutional neural networks (e.g. Gatys et al.
[2015,2016]). These methods are exciting because they can in some cases create
results with state-of-the-art quality. However, in this paper, we show these
methods also have limitations in texture quality, stability, requisite
parameter tuning, and lack of user controls. This paper presents a multiscale
synthesis pipeline based on convolutional neural networks that ameliorates
these issues. We first give a mathematical explanation of the source of
instabilities in many previous approaches. We then address these instabilities
by using histogram losses to synthesize textures that better statistically
match the exemplar. We also show how to integrate localized style losses in our
multiscale framework. These losses can improve the quality of large features,
improve the separation of content and style, and offer artistic controls such
as paint by numbers. We demonstrate that our approach offers improved quality,
convergence in fewer iterations, and more stability during optimization. | [
1,
0,
0,
0,
0,
0
] |
Title: Dynamical and Topological Aspects of Consensus Formation in Complex Networks,
Abstract: The present work analyses a particular scenario of consensus formation, where
the individuals navigate across an underlying network defining the topology of
the walks. The consensus, associated with a given opinion coded as a simple
message, is generated by interactions during the agents' walks and manifests
itself in the collapse of the various opinions into a single one. We analyze
how the topology of the underlying networks and the rules of interaction
between the agents promote or inhibit the emergence of this consensus. We find
that non-linear interaction rules are required to form consensus and that
consensus is more easily achieved in networks whose degree distribution is
narrower. | [
1,
1,
0,
0,
0,
0
] |
Title: Measurement of authorship by publications: a normative approach,
Abstract: Administrators in all academic organizations across the world have to deal
with the unenviable task of comparing researchers on the basis of their
academic contributions. This job is further complicated by the need for
comparing single-author publications with joint-author publications.
Unfortunately, however, there is no reasonably established consensus on the
method of arriving at such comparisons, which typically involve trading off
accomplishments in teaching, grant writing and academic publication. In this
paper, we focus on the particular dimension of academic publication, and
analyze this issue from a more fundamental perspective than addressed by the
popular $h$-index (which may lead to unfair and counter-intuitive comparisons
in certain situations). In particular, we undertake an axiomatic analysis of
{\it all} possible ways to measure academic authorship for a given dataset of
research articles and find that an egalitarian $e$-index is the \textit{only}
method which satisfies the axioms of anonymity, monotonicity, and efficiency.
This index divides authorship of joint projects equally and sums across all
publications of an author. Thus, our index provides a method to prorate
authorship for multi-author projects, and thereby, delivers more balanced
author comparisons. | [
1,
1,
0,
0,
0,
0
] |
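The egalitarian e-index described in the abstract above (equal division of authorship credit per paper, summed over an author's publications) can be computed directly; the toy data below is hypothetical and only illustrates the arithmetic.

```python
def e_index(author, papers):
    """Egalitarian e-index: every paper contributes 1/(number of authors)
    to each of its authors; an author's index is the sum over all of
    his or her papers."""
    return sum(1.0 / len(p) for p in papers if author in p)

# Hypothetical toy data: each paper is represented by the set of its authors.
papers = [
    {"alice"},                  # solo paper: contributes 1 to alice
    {"alice", "bob"},           # contributes 1/2 to each author
    {"alice", "bob", "carol"},  # contributes 1/3 to each author
]

print(e_index("alice", papers))  # 1 + 1/2 + 1/3 ≈ 1.83
print(e_index("bob", papers))    # 1/2 + 1/3 ≈ 0.83
```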
Title: Ordering Garside groups,
Abstract: We introduce a condition on Garside groups that we call Dehornoy structure.
An iteration of such a structure leads to a left order on the group. We show
conditions for a Garside group to admit a Dehornoy structure, and we apply
these criteria to prove that the Artin groups of type A and $I_2(m)$, m $\ge$ 4,
have Dehornoy structures. We show that the left orders on the Artin groups of
type A obtained from their Dehornoy structures are the Dehornoy orders. In the
case of the Artin groups of type $I_2(m)$, m $\ge$ 4, we show that the left
orders derived from their Dehornoy structures coincide with the orders obtained
from embeddings of the groups into braid groups. | [
0,
0,
1,
0,
0,
0
] |
Title: Self-Taught Support Vector Machine,
Abstract: In this paper, a new approach, called self-taught learning, is proposed for the
classification of a target task using limited labeled target data as well as
enormous amounts of unlabeled source data. The target and source data can be drawn from
different distributions. In previous approaches, the covariate shift assumption
is considered, where the marginal distributions p(x) change over domains and the
conditional distributions p(y|x) remain the same. In our approach, we propose a
new objective function which simultaneously learns a common space T(.) in which
the conditional distributions over domains p(T(x)|y) remain the same and learns
robust SVM classifiers for the target task using both source and target data in the
new representation. Hence, in the proposed objective function, the hidden label
of the source data is also incorporated. We applied the proposed approach to the
Caltech-256 and MSRC+LMO datasets and compared the performance of our algorithm to
the available competing methods. Our method achieves superior performance compared to
successful existing algorithms. | [
1,
0,
0,
1,
0,
0
] |
Title: Continuum of classical-field ensembles from canonical to grand canonical and the onset of their equivalence,
Abstract: The canonical and grand-canonical ensembles are two usual marginal cases for
ultracold Bose gases, but real collections of experimental runs commonly have
intermediate properties. Here we study the continuum of intermediate cases, and
look into the appearance of ensemble equivalence as interaction rises for
mesoscopic 1d systems. We demonstrate how at sufficient interaction strength
the distributions of condensate and excited atoms become practically identical
regardless of the ensemble used. Importantly, we find that features that are
fragile in the ideal gas and appear only in a strict canonical ensemble can
become robust in all ensembles when interactions become strong. As evidence,
the steep cliff in the distribution of the number of excited atoms is
preserved. To make this study, a straightforward approach for generating
canonical and intermediate classical field ensembles using a modified
stochastic Gross-Pitaevskii equation (SGPE) is developed. | [
0,
1,
0,
0,
0,
0
] |
Title: Extending Partial Representations of Unit Circular-arc Graphs,
Abstract: The partial representation extension problem, introduced by Klavík et al.
(2011), generalizes the recognition problem. In this short note we show that
this problem is NP-complete for unit circular-arc graphs. | [
1,
0,
0,
0,
0,
0
] |
Title: Lipschitz continuity of quasiconformal mappings and of the solutions to second order elliptic PDE with respect to the distance ratio metric,
Abstract: The main aim of this paper is to study the Lipschitz continuity of certain
$(K, K')$-quasiconformal mappings with respect to the distance ratio metric,
and the Lipschitz continuity of the solution of a quasilinear differential
equation with respect to the distance ratio metric. | [
0,
0,
1,
0,
0,
0
] |
Title: On Estimation of Conditional Modes Using Multiple Quantile Regressions,
Abstract: We propose an estimation method for the conditional mode when the
conditioning variable is high-dimensional. In the proposed method, we first
estimate the conditional density by solving quantile regressions multiple
times. We then estimate the conditional mode by finding the maximum of the
estimated conditional density. The proposed method has two advantages in that
it is computationally stable because it has no initial parameter dependencies,
and it is statistically efficient with a fast convergence rate. Synthetic and
real-world data experiments demonstrate the better performance of the proposed
method compared to other existing ones. | [
0,
0,
1,
1,
0,
0
] |
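The two-step procedure described in the abstract above (estimate conditional quantiles at many levels, convert them into a conditional density, then maximize) can be sketched as follows with scikit-learn's linear QuantileRegressor on synthetic data; the paper's actual estimator, model class, and tuning are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)

# Synthetic data: y | x is Gaussian with mean 2*x, so the conditional mode at
# a point x0 equals 2*x0.
n = 2000
X = rng.uniform(-1, 1, size=(n, 1))
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Step 1: estimate many conditional quantiles q_tau(x).
taus = np.linspace(0.05, 0.95, 19)
models = [QuantileRegressor(quantile=t, alpha=0.0).fit(X, y) for t in taus]

# Step 2: at a query point x0, turn the quantile curve into a crude density
# estimate f ~ d(tau)/d(q) and take the maximizer as the conditional mode.
x0 = np.array([[0.5]])
q = np.array([m.predict(x0)[0] for m in models])          # estimated quantiles
dens = np.diff(taus) / np.maximum(np.diff(q), 1e-12)      # density between quantiles
mode_est = 0.5 * (q[:-1] + q[1:])[np.argmax(dens)]        # midpoint of densest interval

print("estimated conditional mode:", mode_est, "(true value: 1.0)")
```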
Title: Kondo destruction in a quantum paramagnet with magnetic frustration,
Abstract: We report results of isothermal magnetotransport and susceptibility
measurements at elevated magnetic fields B down to very low temperatures T on
high-quality single crystals of the frustrated Kondo-lattice system CePdAl.
They reveal a B*(T) line within the paramagnetic part of the phase diagram.
This line denotes a thermally broadened 'small'-to-'large' Fermi surface
crossover which substantially narrows upon cooling. At B_0* = B*(T=0) = (4.6
+/- 0.1) T, this B*(T) line merges with two other crossover lines, viz. Tp(B)
below and T_FL(B) above B_0*. Tp characterizes a frustration-dominated
spin-liquid state, while T_FL is the Fermi-liquid temperature associated with
the lattice Kondo effect. Non-Fermi-liquid phenomena which are commonly
observed near a 'Kondo destruction' quantum critical point cannot be resolved
in CePdAl. Our observations reveal a rare case where Kondo coupling,
frustration and quantum criticality are closely intertwined. | [
0,
1,
0,
0,
0,
0
] |
Title: Deep supervised learning using local errors,
Abstract: Error backpropagation is a highly effective mechanism for learning
high-quality hierarchical features in deep networks. Updating the features or
weights in one layer, however, requires waiting for the propagation of error
signals from higher layers. Learning using delayed and non-local errors makes
it hard to reconcile backpropagation with the learning mechanisms observed in
biological neural networks as it requires the neurons to maintain a memory of
the input long enough until the higher-layer errors arrive. In this paper, we
propose an alternative learning mechanism where errors are generated locally in
each layer using fixed, random auxiliary classifiers. Lower layers could thus
be trained independently of higher layers and training could either proceed
layer by layer, or simultaneously in all layers using local error information.
We address biological plausibility concerns such as weight symmetry
requirements and show that the proposed learning mechanism based on fixed,
broad, and random tuning of each neuron to the classification categories
outperforms the biologically-motivated feedback alignment learning technique on
the MNIST, CIFAR10, and SVHN datasets, approaching the performance of standard
backpropagation. Our approach highlights a potential biological mechanism for
the supervised, or task-dependent, learning of feature hierarchies. In
addition, we show that it is well suited for learning deep networks in custom
hardware where it can drastically reduce memory traffic and data communication
overheads. | [
1,
0,
0,
1,
0,
0
] |
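The local-error idea in the abstract above, in which each layer is trained through its own fixed, random auxiliary classifier and no error signal crosses layer boundaries, can be sketched in PyTorch as below; the layer sizes and the single training step on random data are hypothetical stand-ins, not the authors' architecture or experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_classes, batch = 10, 32

class LocallyTrainedLayer(nn.Module):
    """A hidden layer whose weights are updated only from a local error
    produced by a fixed, random (untrained) auxiliary classifier."""
    def __init__(self, d_in, d_out, n_classes):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.aux = nn.Linear(d_out, n_classes)   # fixed random readout
        for p in self.aux.parameters():
            p.requires_grad_(False)

    def forward(self, x, target):
        h = F.relu(self.fc(x))
        local_loss = F.cross_entropy(self.aux(h), target)
        # Detach the output so no gradient flows back into earlier layers.
        return h.detach(), local_loss

layers = nn.ModuleList([
    LocallyTrainedLayer(784, 256, n_classes),
    LocallyTrainedLayer(256, 128, n_classes),
])
head = nn.Linear(128, n_classes)   # final (trained) classifier
opt = torch.optim.Adam(
    [p for p in list(layers.parameters()) + list(head.parameters()) if p.requires_grad],
    lr=1e-3,
)

# One training step on random stand-in data (MNIST-shaped inputs).
x = torch.randn(batch, 784)
y = torch.randint(0, n_classes, (batch,))

opt.zero_grad()
total_loss = 0.0
for layer in layers:
    x, loss = layer(x, y)          # x is detached, so each loss stays local
    total_loss = total_loss + loss
total_loss = total_loss + F.cross_entropy(head(x), y)
total_loss.backward()
opt.step()
print(float(total_loss))
```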
Title: Theoretical and Computational Guarantees of Mean Field Variational Inference for Community Detection,
Abstract: The mean field variational Bayes method is becoming increasingly popular in
statistics and machine learning. Its iterative Coordinate Ascent Variational
Inference algorithm has been widely applied to large scale Bayesian inference.
See Blei et al. (2017) for a recent comprehensive review. Despite the
popularity of the mean field method, there exists remarkably little fundamental
theoretical justification. To the best of our knowledge, the iterative
algorithm has never been investigated for any high dimensional and complex
model. In this paper, we study the mean field method for community detection
under the Stochastic Block Model. For an iterative Batch Coordinate Ascent
Variational Inference algorithm, we show that it has a linear convergence rate
and converges to the minimax rate within $\log n$ iterations. This complements
the results of Bickel et al. (2013) which studied the global minimum of the
mean field variational Bayes and obtained asymptotic normal estimation of
global model parameters. In addition, we obtain similar optimality results for
Gibbs sampling and an iterative procedure to calculate maximum likelihood
estimation, which can be of independent interest. | [
0,
0,
1,
1,
0,
0
] |
Title: The Chandra Deep Field South as a test case for Global Multi Conjugate Adaptive Optics,
Abstract: The era of the next generation of giant telescopes requires not only the
advent of new technologies but also the development of novel methods, in order
to exploit fully the extraordinary potential they are built for. Global Multi
Conjugate Adaptive Optics (GMCAO) pursues this approach, with the goal of
achieving good performance over a field of view of a few arcmin and an increase
in sky coverage. In this article, we show the gain offered by this technique for
an astrophysical application, taking the photometric survey strategy applied
to the Chandra Deep Field South as a case study. We simulated a close-to-real
observation of a 500 x 500 arcsec^2 extragalactic deep field with a 40-m class
telescope that implements GMCAO. We analysed mock K-band images of 6000
high-redshift (up to z = 2.75) galaxies therein as if they were real to recover
the initial input parameters. We attained 94.5 per cent completeness for source
detection with SEXTRACTOR. We also measured the morphological parameters of all
the sources with the two-dimensional fitting tool GALFIT. The agreement we
found between recovered and intrinsic parameters demonstrates that GMCAO is a
reliable approach for assisting extremely large telescope (ELT) observations of
extragalactic interest. | [
0,
1,
0,
0,
0,
0
] |
Title: Probabilistic Causal Analysis of Social Influence,
Abstract: Mastering the dynamics of social influence requires separating, in a database
of information propagation traces, the genuine causal processes from temporal
correlation, i.e., homophily and other spurious causes. However, most studies
to characterize social influence, and, in general, most data-science analyses
focus on correlations, statistical independence, or conditional independence.
Only recently, there has been a resurgence of interest in "causal data
science", e.g., grounded on causality theories. In this paper we adopt a
principled causal approach to the analysis of social influence from
information-propagation data, rooted in the theory of probabilistic causation.
Our approach consists of two phases. In the first one, in order to avoid the
pitfalls of misinterpreting causation when the data spans a mixture of several
subtypes ("Simpson's paradox"), we partition the set of propagation traces into
groups, in such a way that each group is as internally consistent as possible in
terms of the hierarchical structure of information propagation. To achieve this
goal, we borrow the notion of "agony" and define the Agony-bounded Partitioning
problem, which we prove to be hard, and for which we develop two efficient
algorithms with approximation guarantees. In the second phase, for each group
from the first phase, we apply a constrained MLE approach to ultimately learn a
minimal causal topology. Experiments on synthetic data show that our method is
able to retrieve the genuine causal arcs w.r.t. a ground-truth generative
model. Experiments on real data show that, by focusing only on the extracted
causal structures instead of the whole social graph, the effectiveness of
predicting influence spread is significantly improved. | [
1,
0,
0,
1,
0,
0
] |
Title: Advanced reduced-order models for moisture diffusion in porous media,
Abstract: It is of great concern to produce numerically efficient methods for moisture
diffusion through porous media, capable of accurately calculating the moisture
distribution with reduced computational effort. In this way, model reduction
methods are promising approaches to bring a solution to this issue since they
do not degrade the physical model and provide a significant reduction of
computational cost. Therefore, this article explores in detail the
capabilities of two model-reduction techniques - the Spectral Reduced-Order
Model (Spectral-ROM) and the Proper Generalised Decomposition (PGD) - to
numerically solve moisture diffusive transfer through porous materials. Both
approaches are applied to three different problems to provide clear examples of
the construction and use of these reduced-order models. The methodology of both
approaches is explained extensively so that the article can be used as a
numerical benchmark by anyone interested in building a reduced-order model for
diffusion problems in porous materials. Linear and non-linear unsteady
behaviors of unidimensional moisture diffusion are investigated. The last case
focuses on solving a parametric problem in which the solution depends on space,
time and the diffusivity properties. Results have highlighted that both methods
provide accurate solutions and enable the order of the model to be reduced
significantly, to around ten times lower than that of the large original model. They
also allow an efficient computation of the physical phenomena with an error lower than
10^{-2} when compared to a reference solution. | [
1,
1,
1,
0,
0,
0
] |
Title: Gapless quantum spin chains: multiple dynamics and conformal wavefunctions,
Abstract: We study gapless quantum spin chains with spin 1/2 and 1: the Fredkin and
Motzkin models. Their entangled groundstates are known exactly but not their
excitation spectra. We first express the groundstates in the continuum which
allows for the calculation of spin and entanglement properties in a unified
fashion. Doing so, we uncover an emergent conformal-type symmetry, thus
consolidating the connection to a widely studied family of Lifshitz quantum
critical points in 2d. We then obtain the low lying excited states via
large-scale DMRG simulations and find that the dynamical exponent is z = 3.2 in
both cases. Other excited states show a different z, indicating that these
models have multiple dynamics. Moreover, we modify the spin-1/2 model by adding
a ferromagnetic Heisenberg term, which changes the entire spectrum. We track
the resulting non-trivial evolution of the dynamical exponents using DMRG.
Finally, we exploit an exact map from the quantum Hamiltonian to the
non-equilibrium dynamics of a classical spin chain to shed light on the quantum
dynamics. | [
0,
1,
0,
0,
0,
0
] |
Title: Real-time public transport service-level monitoring using passive WiFi: a spectral clustering approach for train timetable estimation,
Abstract: A new area in which passive WiFi analytics have promise for delivering value
is the real-time monitoring of public transport systems. One example is
determining the true (as opposed to the published) timetable of a public
transport system in real-time. In most cases, there are no other
publicly-available sources for this information. Yet, it is indispensable for
the real-time monitoring of public transport service levels. Furthermore, this
information, if accurate and temporally fine-grained, can be used for very
low-latency incident detection. In this work, we propose using spectral
clustering based on trajectories derived from passive WiFi traces of users of a
public transport system to infer the true timetable and two key performance
indicators of the transport service, namely public transport vehicle headway
and in-station dwell time. By detecting anomalous dwell times or headways, we
demonstrate that a fast and accurate real-time incident-detection procedure can
be obtained. The method we introduce makes use of the advantages of the
high-frequency WiFi data, which provides very low-latency,
universally-accessible information, while minimizing the impact of the noise in
the data. | [
1,
0,
0,
0,
0,
0
] |
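As a rough illustration of the clustering step in the abstract above, the scikit-learn sketch below applies spectral clustering to a precomputed similarity matrix between simulated passenger trajectories and reads off a per-run timetable estimate; the trajectory construction, similarity kernel, and KPI computations are hypothetical simplifications of the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)

# Hypothetical data: each trajectory is the vector of times (in minutes) at
# which a device was seen at stations A, B, C. Three simulated train runs.
departures = np.array([0.0, 10.0, 20.0])            # true run start times
traj = np.vstack([d + np.array([0, 3, 7]) + rng.normal(0, 0.3, 3)
                  for d in departures for _ in range(20)])

# Similarity between trajectories: Gaussian kernel on Euclidean distance.
d2 = ((traj[:, None, :] - traj[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / (2 * 2.0 ** 2))

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)

# Estimated "timetable": median observation times at each station per cluster.
for k in range(3):
    print(f"run {k}:", np.median(traj[labels == k], axis=0).round(1))
```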
Title: Finding Network Motifs in Large Graphs using Compression as a Measure of Relevance,
Abstract: We introduce a new method for finding network motifs: interesting or
informative subgraph patterns in a network. Current methods for finding motifs
rely on the frequency of the motif: specifically, subgraphs are motifs when
their frequency in the data is high compared to the expected frequency under a
null model. To compute this expectation, the search for motifs is normally
repeated on as many as 1000 random graphs sampled from the null model; a
prohibitively expensive step. We use ideas from the Minimum Description Length
(MDL) literature to define a new measure of motif relevance, and a new
algorithm for detecting motifs. Our method allows motif analysis to scale to
networks with billions of links, while still resulting in informative motifs. | [
1,
0,
0,
0,
0,
0
] |
Title: A Bag-of-Words Equivalent Recurrent Neural Network for Action Recognition,
Abstract: The traditional bag-of-words approach has found a wide range of applications
in computer vision. The standard pipeline consists of a generation of a visual
vocabulary, a quantization of the features into histograms of visual words, and
a classification step for which usually a support vector machine in combination
with a non-linear kernel is used. Given large amounts of data, however, the
model suffers from a lack of discriminative power. This applies particularly
for action recognition, where the vast amount of video features needs to be
subsampled for unsupervised visual vocabulary generation. Moreover, the kernel
computation can be very expensive on large datasets. In this work, we propose a
recurrent neural network that is equivalent to the traditional bag-of-words
approach but enables the application of discriminative training. The model
further allows the kernel computation to be incorporated into the neural network
directly, solving the complexity issue and allowing the complete
classification system to be represented within a single network. We evaluate our method on four
recent action recognition benchmarks and show that the conventional model as
well as sparse coding methods are outperformed. | [
1,
0,
0,
0,
0,
0
] |
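For reference, the conventional bag-of-words pipeline that the abstract above reformulates as a recurrent network can be sketched as follows (k-means visual vocabulary, histogram quantization, SVM classification) on random stand-in descriptors; the proposed network equivalent itself is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def bow_histogram(local_features, kmeans):
    """Quantize a video's local descriptors into a normalized histogram
    of visual words."""
    words = kmeans.predict(local_features)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Stand-in data: 40 "videos", each with 200 local descriptors of dim 16,
# drawn from two slightly shifted distributions (two action classes).
videos, labels = [], []
for c in (0, 1):
    for _ in range(20):
        videos.append(rng.normal(loc=c * 0.5, scale=1.0, size=(200, 16)))
        labels.append(c)

# Visual vocabulary learned from a subsample of all descriptors.
all_desc = np.vstack(videos)
kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(
    all_desc[rng.choice(len(all_desc), 2000, replace=False)])

X = np.array([bow_histogram(v, kmeans) for v in videos])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```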
Title: Limit on graviton mass from galaxy cluster Abell 1689,
Abstract: To date, the only limit on graviton mass using galaxy clusters was obtained
by Goldhaber and Nieto in 1974, using the fact that the orbits of galaxy
clusters are bound and closed, and extend up to 580 kpc. From positing that
only a Newtonian potential gives rise to such stable bound orbits, a limit on
the graviton mass $m_g<10^{-29}$ eV was obtained (PRD 9,1119, 1974). Recently,
it has been shown that one can obtain closed bound orbits for Yukawa potential
(arXiv:1705.02444), thus invalidating the main \emph{ansatz} used in Goldhaber
and Nieto to obtain the graviton mass bound. In order to obtain a revised
estimate using galaxy clusters, we use dynamical mass models of the Abell 1689
(A1689) galaxy cluster to check their compatibility with a Yukawa gravitational
potential. We assume mass models for the gas, dark matter, and galaxies for
A1689 from arXiv:1703.10219 and arXiv:1610.01543, who used this cluster to test
various alternate gravity theories, which dispense with the need for dark
matter. We quantify the deviations in the acceleration profile using these mass
models assuming a Yukawa potential and that obtained assuming a Newtonian
potential by calculating the $\chi^2$ residuals between the two profiles. Our
estimated bound on the graviton mass ($m_g$) is thereby given by, $m_g < 1.37
\times 10^{-29}$ eV or in terms of the graviton Compton wavelength of,
$\lambda_g>9.1 \times 10^{19}$ km at 90\% confidence level. | [
0,
1,
0,
0,
0,
0
] |
Title: Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks,
Abstract: Artificial neural networks (ANNs) may not be worth their computational/memory
costs when used in mobile phones or embedded devices. Parameter-pruning
algorithms combat these costs, with some algorithms capable of removing over
90% of an ANN's weights without harming the ANN's performance. Removing weights
from an ANN is a form of regularization, but existing pruning algorithms do not
significantly improve generalization error. We show that pruning ANNs can
improve generalization if pruning targets large weights instead of small
weights. Applying our pruning algorithm to an ANN leads to a higher image
classification accuracy on CIFAR-10 data than applying the popular regularizer
dropout. The pruning couples this higher accuracy with an 85% reduction of the
ANN's parameter count. | [
0,
0,
0,
1,
0,
0
] |
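Below is a minimal NumPy sketch of the contrast drawn in the abstract above, pruning large-magnitude weights versus the conventional small-magnitude choice; the weight matrix and pruning fraction are hypothetical, and the retraining step of a full pruning pipeline is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))   # stand-in weight matrix of one layer

def prune(weights, fraction, target="large"):
    """Zero out `fraction` of the weights by magnitude: either the largest
    ones (the strategy advocated in the abstract) or the smallest ones
    (the conventional choice). Returns the pruned weights and the kept mask."""
    mag = np.abs(weights)
    if target == "large":
        mask = mag <= np.quantile(mag, 1.0 - fraction)  # drop the top `fraction`
    else:
        mask = mag >= np.quantile(mag, fraction)        # drop the bottom `fraction`
    return weights * mask, mask

W_pruned, mask = prune(W, fraction=0.85, target="large")
print("weights kept:", int(mask.sum()), "of", W.size)
```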
Title: The Augustin Center and The Sphere Packing Bound For Memoryless Channels,
Abstract: For any channel with a convex constraint set and finite Augustin capacity,
existence of a unique Augustin center and associated Erven-Harremoes bound are
established. Augustin-Legendre capacity, center, and radius are introduced and
proved to be equal to the corresponding Renyi-Gallager entities. Sphere packing
bounds with polynomial prefactors are derived for codes on two families of
channels: (possibly non-stationary) memoryless channels with multiple additive
cost constraints and stationary memoryless channels with convex constraints on
the empirical distribution of the input codewords. | [
1,
0,
0,
0,
0,
0
] |
Title: SuperMinHash - A New Minwise Hashing Algorithm for Jaccard Similarity Estimation,
Abstract: This paper presents a new algorithm for calculating hash signatures of sets
which can be directly used for Jaccard similarity estimation. The new approach
is an improvement over the MinHash algorithm, because it has a better runtime
behavior and the resulting signatures allow a more precise estimation of the
Jaccard index. | [
1,
0,
0,
0,
0,
0
] |
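For context, the classical MinHash scheme that SuperMinHash improves on can be sketched as follows: the fraction of positions where two signatures agree is an unbiased estimator of the Jaccard index. The SuperMinHash algorithm itself is not reproduced here, and the hash family below is a simple textbook choice.

```python
import random

P = (1 << 61) - 1  # a large Mersenne prime for universal hashing

def make_hash_funcs(num_hashes=256, seed=42):
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(num_hashes)]

def minhash_signature(items, hash_funcs):
    """Classical MinHash: for each hash function h(x) = (a*x + b) mod P,
    keep the minimum value over the set's elements."""
    return [min((a * x + b) % P for x in items) for a, b in hash_funcs]

def estimate_jaccard(sig_a, sig_b):
    """Fraction of signature positions that agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set(range(0, 1000))
B = set(range(500, 1500))     # true Jaccard = 500 / 1500 ≈ 0.333

funcs = make_hash_funcs()
print(estimate_jaccard(minhash_signature(A, funcs), minhash_signature(B, funcs)))
```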
Title: On catastrophic forgetting and mode collapse in Generative Adversarial Networks,
Abstract: Generative Adversarial Networks (GAN) are one of the most prominent tools for
learning complicated distributions. However, problems such as mode collapse and
catastrophic forgetting, prevent GAN from learning the target distribution.
These problems are usually studied independently from each other. In this
paper, we show that both problems are present in GAN and their combined effect
makes the training of GAN unstable. We also show that methods such as gradient
penalties and momentum based optimizers can improve the stability of GAN by
effectively preventing these problems from happening. Finally, we study a
mechanism for mode collapse to occur and propagate in feedforward neural
networks. | [
0,
0,
0,
1,
0,
0
] |
Title: Metamodel Construction for Sensitivity Analysis,
Abstract: We propose to estimate a metamodel and the sensitivity indices of a complex
model m in the Gaussian regression framework. Our approach combines methods for
sensitivity analysis of complex models and statistical tools for sparse
non-parametric estimation in multivariate Gaussian regression model. It rests
on the construction of a metamodel for approximating the Hoeffding-Sobol
decomposition of m. This metamodel belongs to a reproducing kernel Hilbert
space constructed as a direct sum of Hilbert spaces leading to a functional
ANOVA decomposition. The estimation of the metamodel is carried out via a
penalized least-squares minimization allowing the selection of the subsets of
variables that contribute to predicting the output. This allows the
sensitivity indices of m to be estimated. We establish an oracle-type inequality for the risk
of the estimator, describe the procedure for estimating the metamodel and the
sensitivity indices, and assess the performances of the procedure via a
simulation study. | [
0,
0,
1,
1,
0,
0
] |
Title: Dynamics of the brain extracellular matrix governed by interactions with neural cells,
Abstract: Neuronal and glial cells release diverse proteoglycans and glycoproteins,
which aggregate in the extracellular space and form the extracellular matrix
(ECM) that may in turn regulate major cellular functions. Brain cells also
release extracellular proteases that may degrade the ECM, and both synthesis
and degradation of ECM are activity-dependent. In this study we introduce a
mathematical model describing population dynamics of neurons interacting with
ECM molecules over extended timescales. It is demonstrated that depending on
the prevalent biophysical mechanism of ECM-neuronal interactions, different
dynamical regimes of ECM activity can be observed, including bistable states
with stable stationary levels of ECM molecule concentration, spontaneous ECM
oscillations, and coexistence of ECM oscillations and a stationary state,
allowing dynamical switches between activity regimes. | [
0,
0,
0,
0,
1,
0
] |
Title: AutoPerf: A Generalized Zero-Positive Learning System to Detect Software Performance Anomalies,
Abstract: We present AutoPerf, a generalized software performance regression diagnosis
system. AutoPerf uses autoencoders, an unsupervised learning technique, and
hardware performance counters to learn the performance signatures of a program.
It then uses this knowledge to identify when newer versions of the program
suffer from performance regressions, while simultaneously providing root cause
analysis to help programmers debug the program's performance.
AutoPerf is the first zero-positive learning performance regression diagnosis
system. It trains entirely in the negative (non-anomalous) space to learn
positive (anomalous) behaviors. We demonstrate AutoPerf's generality against
three different types of performance regressions: (i) true sharing cache
contention, (ii) false sharing cache contention, and (iii) NUMA latencies
across 15 real world performance regressions and 7 open source programs. On
average, AutoPerf exhibits only 3.7% profiling overhead and diagnoses more
regressions than prior state-of-the-art approaches. | [
1,
0,
0,
0,
0,
0
] |
Title: Building a bridge between Classical and Quantum Mechanics,
Abstract: The way Quantum Mechanics (QM) is introduced to people used to Classical
Mechanics (CM) is by a complete change of the general methodology, despite QM
historically stemming from CM as a means to explain experimental results.
Therefore, it is desirable to build a bridge from CM to QM.
This paper presents a generalization of CM to QM. It starts from the
generalization of a point-like object and naturally arrives at the quantum
state vector of quantum systems in the complex valued Hilbert space, its time
evolution and quantum representation of a measurement apparatus of any size.
Each time a generalization is performed, there is a possibility to develop a
new theory by giving up the simplest generalizations. It is shown that a
measurement apparatus is a special case of a general quantum object. An example
of a measurement apparatus of an intermediate size is considered in the end. | [
0,
1,
0,
0,
0,
0
] |
Title: Size-Independent Sample Complexity of Neural Networks,
Abstract: We study the sample complexity of learning neural networks, by providing new
bounds on their Rademacher complexity assuming norm constraints on the
parameter matrix of each layer. Compared to previous work, these complexity
bounds have improved dependence on the network depth, and under some additional
assumptions, are fully independent of the network size (both depth and width).
These results are derived using some novel techniques, which may be of
independent interest. | [
1,
0,
0,
1,
0,
0
] |
Title: Copula Variational Bayes inference via information geometry,
Abstract: Variational Bayes (VB), also known as independent mean-field approximation,
has become a popular method for Bayesian network inference in recent years. Its
applications are vast, e.g. in neural networks, compressed sensing, and clustering,
to name just a few. In this paper, the independence constraint in VB will
be relaxed to a conditional constraint class, called copula in statistics.
Since a joint probability distribution always belongs to a copula class, the
novel copula VB (CVB) approximation is a generalized form of VB. Via
information geometry, we will see that CVB algorithm iteratively projects the
original joint distribution to a copula constraint space until it reaches a
local minimum of the Kullback-Leibler (KL) divergence. In this way, all mean-field
approximations, e.g. iterative VB, Expectation-Maximization (EM), Iterated
Conditional Mode (ICM) and k-means algorithms, are special cases of CVB
approximation.
For a generic Bayesian network, an augmented hierarchy form of CVB will also
be designed. While mean-field algorithms can only return a locally optimal
approximation for a correlated network, the augmented CVB network, which is an
optimally weighted average of a mixture of simpler network structures, can
potentially achieve the globally optimal approximation for the first time. Via
simulations of Gaussian mixture clustering, the classification accuracy of
CVB will be shown to be far superior to that of state-of-the-art VB, EM and
k-means algorithms. | [
0,
0,
0,
1,
0,
0
] |
Title: Effect of increasing disorder on domains of the two-dimensional Coulomb glass,
Abstract: We have studied a two dimensional lattice model of Coulomb glass for a wide
range of disorders at $T\sim 0$. The system was first annealed using Monte
Carlo simulation. Further minimization of the total energy of the system was
done using the algorithm of Baranovskii et al., followed by cluster flipping to obtain
the pseudo ground states. We have shown that the energy required to create a
domain of linear size L in d dimensions is proportional to $L^{d-1}$. Using
Imry-Ma arguments given for the random field Ising model, one gets a critical
dimension $d_{c}\geq 2$ for the Coulomb glass. The investigation of domains in the
transition region shows a discontinuity in the staggered magnetization which is an
indication of a first-order type transition from the charge-ordered phase to the
disordered phase. The structure and nature of random field fluctuations of the
second largest domain in the Coulomb glass are inconsistent with the assumptions of
Imry and Ma, as was also reported for the random field Ising model. The study of
domains showed that in the transition region there were mostly two large
domains and as disorder was increased, the two large domains remained but there
were a large number of small domains. We have also studied the properties of
the second largest domain as a function of disorder. We furthermore analysed
the effect of disorder on the density of states and showed a transition from a
hard gap at low disorders to a soft gap at higher disorders. At $W=2$, we have
analysed the soft gap in detail and found that the density of states deviates
slightly ($\delta\approx 1.293 \pm 0.027$) from the linear behaviour in two
dimensions. Analysis of local minima shows that the pseudo ground states have
similar structure. | [
0,
1,
0,
0,
0,
0
] |
Title: A dynamic network model with persistent links and node-specific latent variables, with an application to the interbank market,
Abstract: We propose a dynamic network model where two mechanisms control the
probability of a link between two nodes: (i) the existence or absence of this
link in the past, and (ii) node-specific latent variables (dynamic fitnesses)
describing the propensity of each node to create links. Assuming a Markov
dynamics for both mechanisms, we propose an Expectation-Maximization algorithm
for model estimation and inference of the latent variables. The estimated
parameters and fitnesses can be used to forecast the presence of a link in the
future. We apply our methodology to the e-MID interbank network for which the
two linkage mechanisms are associated with two different trading behaviors in
the process of network formation, namely preferential trading and trading
driven by node-specific characteristics. The empirical results allow us to
recognise preferential lending in the interbank market and indicate how a
method that does not account for time-varying network topologies tends to
overestimate preferential linkage. | [
1,
0,
0,
1,
0,
1
] |
Title: Predicting stock market movements using network science: An information theoretic approach,
Abstract: A stock market is considered one of the most complex systems, which
consists of many components whose prices move up and down without a
clear pattern. The complex nature of a stock market challenges us to make
reliable predictions of its future movements. In this paper, we aim at building
a new method to forecast the future movements of Standard & Poor's 500 Index
(S&P 500) by constructing time-series complex networks of S&P 500 underlying
companies by connecting them with links whose weights are given by the mutual
information of 60-minute price movements of the pairs of the companies with the
consecutive 5,340 minutes of price records. We showed that the changes in the
strength distributions of the networks provide important information on the
network's future movements. We built several metrics using the strength
distributions and network measurements such as centrality, and we combined the
best two predictors by performing a linear combination. We found that the
combined predictor and the changes in S&P 500 show a quadratic relationship,
and it allows us to predict the amplitude of the one step future change in S&P
500. The result showed significant fluctuations in S&P 500 Index when the
combined predictor was high. In terms of making the actual index predictions,
we built ARIMA models. We found that adding the network measurements into the
ARIMA models improves the model accuracy. These findings are useful for
financial market policy makers as an indicator based on which they can
intervene in the markets before the markets make a drastic change, and for
quantitative investors to improve their forecasting models. | [
1,
1,
0,
0,
0,
0
] |
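The network-construction step described in the abstract above (link weights given by the mutual information of discretized price movements, then node strengths) can be sketched as follows on stand-in return series; the real data handling, window lengths, and ARIMA forecasting stage are not reproduced.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Stand-in data: 5 "stocks", 500 hourly returns, with stocks 0 and 1 sharing
# a common factor so their mutual information is high.
common = rng.normal(size=500)
returns = rng.normal(size=(5, 500))
returns[0] += 2 * common
returns[1] += 2 * common

def discretize(x, bins=8):
    """Map a return series to equal-frequency bins for MI estimation."""
    return np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))

binned = np.array([discretize(r) for r in returns])

# Weighted adjacency matrix of the mutual-information network.
n = len(binned)
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        W[i, j] = W[j, i] = mutual_info_score(binned[i], binned[j])

strength = W.sum(axis=1)   # node strength = sum of incident link weights
print(np.round(strength, 3))
```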
Title: Bi-monotonic independence for pairs of algebras,
Abstract: In this article, the notion of bi-monotonic independence is introduced as an
extension of monotonic independence to the two-faced framework for a family of
pairs of algebras in a non-commutative space. The associated cumulants are
defined and a moment-cumulant formula is derived in the bi-monotonic setting.
In general the bi-monotonic product of states is not a state and the
bi-monotonic convolution of probability measures on the plane is not a
probability measure. This provides an additional example of how positivity need
not be preserved under conditional bi-free convolutions. | [
0,
0,
1,
0,
0,
0
] |
Title: Evidence of Significant Energy Input in the Late Phase of a Solar Flare from NuSTAR X-Ray Observations,
Abstract: We present observations of the occulted active region AR12222 during the
third {\em NuSTAR} solar campaign on 2014 December 11, with concurrent {\em
SDO/}AIA and {\em FOXSI-2} sounding rocket observations. The active region
produced a medium size solar flare one day before the observations, at
$\sim18$UT on 2014 December 10, with the post-flare loops still visible at the
time of {\em NuSTAR} observations. The time evolution of the source emission in
the {\em SDO/}AIA $335\textrm{\AA}$ channel reveals the characteristics of an
extreme-ultraviolet late phase event, caused by the continuous formation of new
post-flare loops that arch higher and higher in the solar corona. The spectral
fitting of {\em NuSTAR} observations yields an isothermal source, with
temperature $3.8-4.6$ MK, emission measure $0.3-1.8 \times 10^{46}\textrm{
cm}^{-3}$, and density estimated at $2.5-6.0 \times 10^8 \textrm{ cm}^{-3}$.
The observed AIA fluxes are consistent with the derived {\em NuSTAR}
temperature range, favoring temperature values in the range $4.0-4.3$ MK. By
examining the post-flare loops' cooling times and energy content, we estimate
that at least 12 sets of post-flare loops were formed and subsequently cooled
between the onset of the flare and {\em NuSTAR} observations, with their total
thermal energy content an order of magnitude larger than the energy content at
flare peak time. This indicates that the standard approach of using only the
flare peak time to derive the total thermal energy content of a flare can lead
to a large underestimation of its value. | [
0,
1,
0,
0,
0,
0
] |
Title: Gender Bias in Sharenting: Both Men and Women Mention Sons More Often Than Daughters on Social Media,
Abstract: Gender inequality starts before birth. Parents tend to prefer boys over
girls, which is manifested in reproductive behavior, marital life, and parents'
pastimes and investments in their children. While social media and sharing
information about children (so-called "sharenting") have become an integral
part of parenthood, it is not well-known if and how gender preference shapes
online behavior of users. In this paper, we investigate public mentions of
daughters and sons on social media. We use data from a popular social
networking site on public posts from 635,665 users. We find that both men and
women mention sons more often than daughters in their posts. We also find that
posts featuring sons get more "likes" on average. Our results indicate that
girls are underrepresented in parents' digital narratives about their children.
This gender imbalance may send a message that girls are less important than
boys, or that they deserve less attention, thus reinforcing gender inequality. | [
1,
0,
0,
0,
0,
0
] |
Title: Spatial disease mapping using Directed Acyclic Graph Auto-Regressive (DAGAR) models,
Abstract: Hierarchical models for regionally aggregated disease incidence data commonly
involve region specific latent random effects that are modelled jointly as
having a multivariate Gaussian distribution. The covariance or precision matrix
incorporates the spatial dependence between the regions. Common choices for the
precision matrix include the widely used intrinsic conditional autoregressive
model, which is singular, and its nonsingular extension which lacks
interpretability. We propose a new parametric model for the precision matrix
based on a directed acyclic graph representation of the spatial dependence. Our
model guarantees positive definiteness and, hence, in addition to being a valid
prior for regional spatially correlated random effects, can also directly model
the outcome from dependent data like images and networks. Theoretical and
empirical results demonstrate the interpretability of parameters in our model.
Our precision matrix is sparse and the model is highly scalable for large
datasets. We also derive a novel order-free version which remedies the
dependence of directed acyclic graphs on the ordering of the regions by
averaging over all possible orderings. The resulting precision matrix is
available in closed form. We demonstrate the superior performance of our models
over competing models using simulation experiments and a public health
application. | [
0,
0,
0,
1,
0,
0
] |
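As a generic sketch in the spirit of the entry above, the code below builds a DAG-autoregressive precision matrix Q = (I - B)^T F (I - B), where B encodes directed dependence on lower-ordered neighbours and F is diagonal, so Q is positive definite by construction. The toy adjacency and placeholder weights are not the DAGAR coefficients derived in the paper.
```python
# Generic DAG-autoregressive precision construction (placeholder weights only).
import numpy as np

rho = 0.5
neighbors = {0: [], 1: [0], 2: [0, 1], 3: [1, 2], 4: [2, 3]}   # toy region ordering/adjacency
k = len(neighbors)

B = np.zeros((k, k))
F = np.eye(k)
for i, nbrs in neighbors.items():
    for j in nbrs:                        # only edges j -> i with j < i (a DAG)
        B[i, j] = rho / max(len(nbrs), 1)
    F[i, i] = 1.0 + len(nbrs)             # placeholder conditional precision

Q = (np.eye(k) - B).T @ F @ (np.eye(k) - B)
print("eigenvalues of Q (all positive):", np.round(np.linalg.eigvalsh(Q), 3))
```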
Title: Bag-of-Words Method Applied to Accelerometer Measurements for the Purpose of Classification and Energy Estimation,
Abstract: Accelerometer measurements are the prime type of sensor information
most people think of when seeking to measure physical activity. On the market, there are
many fitness measuring devices which aim to track calories burned and steps
counted through the use of accelerometers. These measurements, though good
enough for the average consumer, are noisy and unreliable in terms of the
precision of measurement needed in a scientific setting. The contribution of
this paper is an innovative and highly accurate regression method which uses an
intermediary two-stage classification step to better direct the regression of
energy expenditure values from accelerometer counts.
We show that through an additional unsupervised layer of intermediate feature
construction, we can leverage latent patterns within accelerometer counts to
provide better grounds for activity classification than expert-constructed
timeseries features. For this, our approach utilizes a mathematical model
originating in natural language processing, the bag-of-words model, that has in
the past years been appearing in diverse disciplines outside of the natural
language processing field such as image processing. Further emphasizing the
natural language connection to stochastics, we use a Gaussian mixture model to
learn the dictionary upon which the bag-of-words model is built. Moreover, we
show that with the addition of these features, we are able to improve regression
root mean-squared error of energy expenditure by approximately 1.4 units over
existing state-of-the-art methods. | [
1,
0,
0,
1,
0,
0
] |
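As a hedged illustration of the pipeline sketched in the entry above -- a GMM-learned dictionary feeding bag-of-words features into a regressor -- the toy code below uses synthetic accelerometer counts; the window length, number of components, and the Ridge regressor are arbitrary choices, not the authors'.
```python
# Toy GMM dictionary + bag-of-words features + regression (synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
counts = rng.poisson(lam=20, size=(200, 60)).astype(float)   # 200 one-minute windows
energy = counts.mean(axis=1) * 0.05 + rng.normal(scale=0.1, size=200)  # synthetic target

# 1) Learn the "dictionary": a GMM over per-second count values.
gmm = GaussianMixture(n_components=8, random_state=0).fit(counts.reshape(-1, 1))

# 2) Encode each window as a normalized histogram of component assignments.
words = gmm.predict(counts.reshape(-1, 1)).reshape(counts.shape)
bow = np.stack([np.bincount(row, minlength=8) / row.size for row in words])

# 3) Regress energy expenditure on the bag-of-words features.
model = Ridge().fit(bow, energy)
print("train R^2:", round(model.score(bow, energy), 3))
```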
Title: Investigation of the commensurate magnetic structure in heavy fermion CePt2In7 using magnetic resonant X-ray diffraction,
Abstract: We investigated the magnetic structure of the heavy fermion compound
CePt$_2$In$_7$ below $T_N~=5.34(2)$ K using magnetic resonant X-ray diffraction
at ambient pressure. The magnetic order is characterized by a commensurate
propagation vector ${k}_{1/2}~=~\left( \frac{1}{2} , \frac{1}{2},
\frac{1}{2}\right)$ with spins lying in the basal plane. Our measurements did
not reveal the presence of an incommensurate order propagating along the high
symmetry directions in reciprocal space but cannot exclude other incommensurate
modulations or weak scattering intensities. The observed commensurate order can
be described equivalently by either a single-${k}$ structure or by a
multi-${k}$ structure. Furthermore we explain how a commensurate-only ordering
may explain the broad distribution of internal fields observed in nuclear
quadrupolar resonance experiments (Sakai et al. 2011, Phys. Rev. B 83 140408)
that was previously attributed to an incommensurate order. We also report
powder X-ray diffraction showing that the crystallographic structure of
CePt$_2$In$_7$ changes monotonically with pressure up to $P~=~7.3$ GPa at room
temperature. The determined bulk modulus $B_0~=~81.1(3)$ GPa is similar to the
ones of the Ce-115 family. Broad diffraction peaks confirm the presence of
pronounced strain in polycrystalline samples of CePt$_2$In$_7$. We discuss how
strain effects can lead to different electronic and magnetic properties between
polycrystalline and single crystal samples. | [
0,
1,
0,
0,
0,
0
] |
Title: Constant-Time Predictive Distributions for Gaussian Processes,
Abstract: One of the most compelling features of Gaussian process (GP) regression is
its ability to provide well-calibrated posterior distributions. Recent advances
in inducing point methods have sped up GP marginal likelihood and posterior
mean computations, leaving posterior covariance estimation and sampling as the
remaining computational bottlenecks. In this paper we address these
shortcomings by using the Lanczos algorithm to rapidly approximate the
predictive covariance matrix. Our approach, which we refer to as LOVE (LanczOs
Variance Estimates), substantially improves time and space complexity. In our
experiments, LOVE computes covariances up to 2,000 times faster and draws
samples 18,000 times faster than existing methods, all without sacrificing
accuracy. | [
0,
0,
0,
1,
0,
0
] |
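The entry above hinges on using the Lanczos algorithm to approximate the predictive covariance. The sketch below shows the general flavor of a Lanczos-based low-rank surrogate for the kernel solve; it is not the LOVE algorithm itself, the kernel, sizes, and iteration count are made up, and no reorthogonalization is performed.
```python
# Flavor of a Lanczos-based surrogate for the GP predictive covariance solve.
import numpy as np

def lanczos(A_mv, b, k):
    """k steps of plain Lanczos on the symmetric operator A_mv, started from b."""
    n = b.size
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q, q_prev = b / np.linalg.norm(b), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A_mv(q) - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w -= alpha[j] * q
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 1))
Xstar = rng.uniform(size=(5, 1))
kern = lambda A, B: np.exp(-0.5 * (A - B.T) ** 2 / 0.1 ** 2)
K = kern(X, X) + 1e-2 * np.eye(500)          # training kernel plus noise
Ks = kern(X, Xstar)

Q, T = lanczos(lambda v: K @ v, Ks.mean(axis=1), k=30)
# Rank-30 surrogate Q T^{-1} Q^T for K^{-1}, reused for every test covariance.
K_inv_Ks = Q @ np.linalg.solve(T, Q.T @ Ks)
cov_approx = kern(Xstar, Xstar) - Ks.T @ K_inv_Ks
cov_exact = kern(Xstar, Xstar) - Ks.T @ np.linalg.solve(K, Ks)
print("max abs error of the approximation:", np.abs(cov_approx - cov_exact).max())
```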
Title: Analysis of Thompson Sampling for Gaussian Process Optimization in the Bandit Setting,
Abstract: We consider the global optimization of a function over a continuous domain.
At every evaluation attempt, we can observe the function at a chosen point in
the domain and we reap the reward of the value observed. We assume that drawing
these observations are expensive and noisy. We frame it as a continuum-armed
bandit problem with a Gaussian Process prior on the function. In this regime,
most algorithms have been developed to minimize some form of regret. Contrary
to this popular norm, in this paper, we study the convergence of the sequential
point $\boldsymbol{x}^t$ to the global optimizer $\boldsymbol{x}^*$ for the
Thompson Sampling approach. Under some assumptions and regularity conditions,
we show an exponential rate of convergence to the true optimal. | [
0,
0,
0,
1,
0,
0
] |
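A minimal sketch of the Thompson Sampling loop analyzed in the entry above, assuming a hypothetical one-dimensional objective, an RBF kernel, and a finite candidate grid; none of these choices come from the paper.
```python
# Thompson Sampling for a GP bandit: sample one posterior draw per round and
# evaluate the objective at that draw's maximizer.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * np.cos(5 * x)            # unknown objective (assumed)
grid = np.linspace(0, 2, 200).reshape(-1, 1)

X = list(grid[rng.integers(len(grid), size=2)])              # two random starting points
y = [f(x[0]) + rng.normal(scale=0.1) for x in X]

for t in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=0.1 ** 2).fit(X, y)
    draw = gp.sample_y(grid, random_state=rng.integers(1 << 31)).ravel()
    x_next = grid[draw.argmax()]                             # x^t: maximizer of the draw
    X.append(x_next)
    y.append(f(x_next[0]) + rng.normal(scale=0.1))

print("best observed x:", X[int(np.argmax(y))][0])
```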
Title: Dependency Graph Approach for Multiprocessor Real-Time Synchronization,
Abstract: Over the years, many multiprocessor locking protocols have been designed and
analyzed. However, the performance of these protocols highly depends on how the
tasks are partitioned and prioritized and how the resources are shared locally
and globally. This paper answers a few fundamental questions when real-time
tasks share resources in multiprocessor systems. We explore the fundamental
difficulty of the multiprocessor synchronization problem and show that a very
simplified version of this problem is ${\mathcal NP}$-hard in the strong sense
regardless of the number of processors and the underlying scheduling paradigm.
Therefore, the allowance of preemption or migration does not reduce the
computational complexity. For the positive side, we develop a dependency-graph
approach, that is specifically useful for frame-based real-time tasks, in which
all tasks have the same period and release their jobs always at the same time.
We present a series of algorithms with speedup factors between $2$ and $3$
under semi-partitioned scheduling. We further explore methodologies and
tradeoffs of preemptive against non-preemptive scheduling algorithms and
partitioned against semi-partitioned scheduling algorithms. The approach is
extended to periodic tasks under certain conditions. | [
1,
0,
0,
0,
0,
0
] |
Title: On topological cyclic homology,
Abstract: Topological cyclic homology is a refinement of Connes--Tsygan's cyclic
homology which was introduced by Bökstedt--Hsiang--Madsen in 1993 as an
approximation to algebraic $K$-theory. There is a trace map from algebraic
$K$-theory to topological cyclic homology, and a theorem of
Dundas--Goodwillie--McCarthy asserts that this induces an equivalence of
relative theories for nilpotent immersions, which gives a way for computing
$K$-theory in various situations. The construction of topological cyclic
homology is based on genuine equivariant homotopy theory, the use of explicit
point-set models, and the elaborate notion of a cyclotomic spectrum.
The goal of this paper is to revisit this theory using only
homotopy-invariant notions. In particular, we give a new construction of
topological cyclic homology. This is based on a new definition of the
$\infty$-category of cyclotomic spectra: We define a cyclotomic spectrum to be
a spectrum $X$ with $S^1$-action (in the most naive sense) together with
$S^1$-equivariant maps $\varphi_p: X\to X^{tC_p}$ for all primes $p$. Here
$X^{tC_p}=\mathrm{cofib}(\mathrm{Nm}: X_{hC_p}\to X^{hC_p})$ is the Tate
construction. On bounded below spectra, we prove that this agrees with previous
definitions. As a consequence, we obtain a new and simple formula for
topological cyclic homology.
In order to construct the maps $\varphi_p: X\to X^{tC_p}$ in the example of
topological Hochschild homology we introduce and study Tate diagonals for
spectra and Frobenius homomorphisms of commutative ring spectra. In particular
we prove a version of the Segal conjecture for the Tate diagonals and relate
these Frobenius homomorphisms to power operations. | [
0,
0,
1,
0,
0,
0
] |
Title: Local Partition in Rich Graphs,
Abstract: Local graph partitioning is a key graph mining tool that allows researchers
to identify small groups of interrelated nodes (e.g. people) and their
connective edges (e.g. interactions). Because local graph partitioning is
primarily focused on the network structure of the graph (vertices and edges),
it often fails to consider the additional information contained in the
attributes. In this paper we propose---(i) a scalable algorithm to improve
local graph partitioning by taking into account both the network structure of
the graph and the attribute data and (ii) an application of the proposed local
graph partitioning algorithm (AttriPart) to predict the evolution of local
communities (LocalForecasting). Experimental results show that our proposed
AttriPart algorithm finds up to 1.6$\times$ denser local partitions, while
running approximately 43$\times$ faster than traditional local partitioning
techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm shows
a significant improvement in the number of nodes and edges correctly predicted
over baseline methods. | [
1,
0,
0,
0,
0,
0
] |
Title: A Transient Queueing Analysis under Time-varying Arrival and Service Rates for Enabling Low-Latency Services,
Abstract: Understanding the detailed queueing behavior of a networking session is
critical in enabling low-latency services over the Internet. Especially when
the packet arrival and service rates at the queue of a link vary over time and
moreover when the session is short-lived, analyzing the corresponding queue
behavior as a function of time, which involves a transient analysis, becomes
extremely challenging. In this paper, we propose and develop a new analytical
framework that anatomizes the transient queue behavior under time-varying
arrival and service rates even under unstable conditions. Our framework is
capable of answering key questions in designing low-latency services such as
the time-dependent probability distribution of the queue length; the
instantaneous or time-averaged violation probability that the queue length
exceeds a certain threshold; and the fraction of time during an interval $[0,
t]$ at which the queue length exceeds a certain threshold. We validate our
framework by comparing its prediction results over time with the statistical
simulation results and confirm that our analysis is accurate enough. Our
extensive demonstrations on the efficacy of the analytical framework in
designing low-latency services reveal that its prediction ability for the
transient queue behavior in diverse time-varying packet arrival and service
patterns can be of a high practical value. | [
1,
0,
0,
0,
0,
0
] |
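The entry above concerns time-dependent queue-length distributions under time-varying rates. The code below is not the paper's analytical framework; it is a brute-force numerical integration of the truncated M(t)/M(t)/1 master equations that one might use as a baseline, with made-up rate functions.
```python
# Numerical baseline: integrate the birth-death master equations with a
# time-varying arrival rate and report a queue-length violation probability.
import numpy as np
from scipy.integrate import solve_ivp

lam = lambda t: 1.0 + 0.8 * np.sin(0.5 * t)      # time-varying arrival rate (assumed)
mu = lambda t: 1.2                                # service rate (assumed constant)
N = 80                                            # truncate the queue length at N

def master(t, p):
    dp = np.zeros_like(p)
    for n in range(N + 1):
        dp[n] -= (lam(t) * (n < N) + mu(t) * (n > 0)) * p[n]
        if n > 0:
            dp[n] += lam(t) * p[n - 1]
        if n < N:
            dp[n] += mu(t) * p[n + 1]
    return dp

p0 = np.zeros(N + 1); p0[0] = 1.0                 # start with an empty queue
sol = solve_ivp(master, (0, 20), p0, t_eval=np.linspace(0, 20, 5))

for t, p in zip(sol.t, sol.y.T):
    print(f"t={t:5.1f}  P(queue length > 10) = {p[11:].sum():.4f}")
```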
Title: SalientDSO: Bringing Attention to Direct Sparse Odometry,
Abstract: Although cluttered indoor scenes have a lot of useful high-level semantic
information which can be used for mapping and localization, most Visual
Odometry (VO) algorithms rely on the usage of geometric features such as
points, lines and planes. Lately, driven by this idea, the joint optimization
of semantic labels and obtaining odometry has gained popularity in the robotics
community. The joint optimization is good for accurate results but is generally
very slow. At the same time, in the vision community, direct and sparse
approaches for VO have struck the right balance between speed and accuracy.
We merge the successes of these two communities and present a way to
incorporate semantic information in the form of visual saliency to Direct
Sparse Odometry - a highly successful direct sparse VO algorithm. We also
present a framework to filter the visual saliency based on scene parsing. Our
framework, SalientDSO, relies on the widely successful deep learning based
approaches for visual saliency and scene parsing which drives the feature
selection for obtaining highly-accurate and robust VO even in the presence of
as few as 40 point features per frame. We provide extensive quantitative
evaluation of SalientDSO on the ICL-NUIM and TUM monoVO datasets and show that
we outperform DSO and ORB-SLAM - two very popular state-of-the-art approaches
in the literature. We also collect and publicly release a CVL-UMD dataset which
contains two indoor cluttered sequences on which we show qualitative
evaluations. To our knowledge this is the first paper to use visual saliency
and scene parsing to drive the feature selection in direct VO. | [
1,
0,
0,
0,
0,
0
] |
Title: Asynchronous Decentralized Parallel Stochastic Gradient Descent,
Abstract: Most commonly used distributed machine learning systems are either
synchronous or centralized asynchronous. Synchronous algorithms like
AllReduce-SGD perform poorly in a heterogeneous environment, while asynchronous
algorithms using a parameter server suffer from 1) communication bottleneck at
parameter servers when workers are many, and 2) significantly worse convergence
when the traffic to parameter server is congested. Can we design an algorithm
that is robust in a heterogeneous environment, while being communication
efficient and maintaining the best-possible convergence rate? In this paper, we
propose an asynchronous decentralized stochastic gradient descent algorithm
(AD-PSGD) satisfying all above expectations. Our theoretical analysis shows
AD-PSGD converges at the optimal $O(1/\sqrt{K})$ rate as SGD and has linear
speedup w.r.t. number of workers. Empirically, AD-PSGD outperforms the best of
decentralized parallel SGD (D-PSGD), asynchronous parallel SGD (A-PSGD), and
standard data parallel SGD (AllReduce-SGD), often by orders of magnitude in a
heterogeneous environment. When training ResNet-50 on ImageNet with up to 128
GPUs, AD-PSGD converges (w.r.t epochs) similarly to the AllReduce-SGD, but each
epoch can be up to 4-8X faster than its synchronous counterparts in a
network-sharing HPC environment. To the best of our knowledge, AD-PSGD is the
first asynchronous algorithm that achieves a similar epoch-wise convergence
rate as AllReduce-SGD, at an over 100-GPU scale. | [
1,
0,
0,
1,
0,
0
] |
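As a toy, serial simulation of the update pattern described in the entry above (one worker takes a local SGD step, then gossip-averages with a random neighbor), assuming a ring topology and a synthetic least-squares objective; it is not an asynchronous or multi-GPU implementation.
```python
# Serial toy of decentralized gossip SGD on a least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
d, n_workers = 5, 8
w_true = rng.normal(size=d)
params = [np.zeros(d) for _ in range(n_workers)]
ring = {i: [(i - 1) % n_workers, (i + 1) % n_workers] for i in range(n_workers)}

def stoch_grad(w):
    x = rng.normal(size=d)                       # one random sample
    return (w @ x - w_true @ x) * x              # gradient of 0.5*(w.x - y)^2

for step in range(4000):
    i = rng.integers(n_workers)                  # worker that "wakes up"
    params[i] -= 0.05 * stoch_grad(params[i])    # local SGD step
    j = rng.choice(ring[i])                      # gossip with one ring neighbor
    avg = 0.5 * (params[i] + params[j])
    params[i], params[j] = avg.copy(), avg.copy()

print("error of averaged model:", np.linalg.norm(np.mean(params, axis=0) - w_true))
```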
Title: Seamless Resources Sharing in Wearable Networks by Application Function Virtualization,
Abstract: The prevalence of smart wearable devices is increasing exponentially and we
are witnessing a wide variety of fascinating new services that leverage the
capabilities of these wearables. Wearables are truly changing the way mobile
computing is deployed and mobile applications are being developed. It is
possible to leverage the capabilities such as connectivity, processing, and
sensing of wearable devices in an adaptive manner for efficient resource usage
and information accuracy within the personal area network. We show, however,
that application developers are not yet taking advantage of these cross-device
capabilities, instead using wearables as passive sensors or simple end
displays to provide notifications to the user. We thus design AFV (Application
Function Virtualization), an architecture enabling automated dynamic function
virtualization and scheduling across devices in a personal area network,
simplifying the development of the apps that are adaptive to context changes.
AFV provides a simple set of APIs hiding complex architectural tasks from app
developers whilst continuously monitoring the user, device and network context,
to enable the adaptive invocation of functions across devices. We show the
feasibility of our design by implementing AFV on Android, and the benefits for
the user in terms of resource efficiency, especially in saving energy
consumption, and quality of experience with multiple use cases. | [
1,
0,
0,
0,
0,
0
] |
Title: Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration,
Abstract: Q-Ensembles are a model-free approach where input images are fed into
different Q-networks and exploration is driven by the assumption that
uncertainty is proportional to the variance of the output Q-values obtained.
They have been shown to perform relatively well compared to other exploration
strategies. Further, model-based approaches, such as encoder-decoder models
have been used successfully for next frame prediction given previous frames.
This paper proposes to integrate the model-free Q-ensembles and model-based
approaches with the hope of compounding the benefits of both and achieving
superior exploration as a result. Results show that a model-based trajectory
memory approach when combined with Q-ensembles produces superior performance
when compared to only using Q-ensembles. | [
0,
0,
0,
1,
0,
0
] |
Title: Creating a Cybersecurity Concept Inventory: A Status Report on the CATS Project,
Abstract: We report on the status of our Cybersecurity Assessment Tools (CATS) project
that is creating and validating a concept inventory for cybersecurity, which
assesses the quality of instruction of any first course in cybersecurity. In
fall 2014, we carried out a Delphi process that identified core concepts of
cybersecurity. In spring 2016, we interviewed twenty-six students to uncover
their understandings and misconceptions about these concepts. In fall 2016, we
generated our first assessment tool--a draft Cybersecurity Concept Inventory
(CCI), comprising approximately thirty multiple-choice questions. Each question
targets a concept; incorrect answers are based on observed misconceptions from
the interviews. This year we are validating the draft CCI using cognitive
interviews, expert reviews, and psychometric testing. In this paper, we
highlight our progress to date in developing the CCI.
The CATS project provides infrastructure for a rigorous evidence-based
improvement of cybersecurity education. The CCI permits comparisons of
different instructional methods by assessing how well students learned the core
concepts of the field (especially adversarial thinking), where instructional
methods refer to how material is taught (e.g., lab-based, case-studies,
collaborative, competitions, gaming). Specifically, the CCI is a tool that will
enable researchers to scientifically quantify and measure the effect of their
approaches to, and interventions in, cybersecurity education. | [
1,
0,
0,
0,
0,
0
] |
Title: Critical Learning Periods in Deep Neural Networks,
Abstract: Critical periods are phases in the early development of humans and animals
during which experience can irreversibly affect the architecture of neuronal
networks. In this work, we study the effects of visual stimulus deficits on the
training of artificial neural networks (ANNs). Introducing well-characterized
visual deficits, such as cataract-like blurring, in the early training phase of
a standard deep neural network causes a permanent performance loss that closely
mimics critical period behavior in humans and animal models. Deficits that do
not affect low-level image statistics, such as vertical flipping of the images,
have no lasting effect on the ANNs' performance and can be rapidly overcome
with further training. In addition, the deeper the ANN is, the more pronounced
the critical period. To better understand this phenomenon, we use Fisher
Information as a measure of the strength of the network's connections during
the training. Our information-theoretic analysis suggests that the first few
epochs are critical for the creation of strong connections across different
layers, optimal for processing the input data distribution. Once such strong
connections are created, they do not appear to change during additional
training. These findings suggest that the initial rapid learning phase of ANN
training, under-scrutinized compared to its asymptotic behavior, plays a key
role in defining the final performance of networks. Our results also show how
critical periods are not restricted to biological systems, but can emerge
naturally in learning systems, whether biological or artificial, due to
fundamental constrains arising from learning dynamics and information
processing. | [
1,
0,
0,
1,
0,
0
] |
Title: Whole-Body Nonlinear Model Predictive Control Through Contacts for Quadrupeds,
Abstract: In this work we present a whole-body Nonlinear Model Predictive Control
approach for Rigid Body Systems subject to contacts. We use a full dynamic
system model which also includes explicit contact dynamics. Therefore, contact
locations, sequences and timings are not prespecified but optimized by the
solver. Yet, thorough numerical and software engineering allows for running the
nonlinear Optimal Control solver at rates up to 190 Hz on a quadruped for a
time horizon of half a second. This outperforms the state of the art by at
least one order of magnitude. Hardware experiments in form of periodic and
non-periodic tasks are applied to two quadrupeds with different actuation
systems. The obtained results underline the performance, transferability and
robustness of the approach. | [
1,
0,
0,
0,
0,
0
] |
Title: An informative path planning framework for UAV-based terrain monitoring,
Abstract: Unmanned aerial vehicles (UAVs) represent a new frontier in a wide range of
monitoring and research applications. To fully leverage their potential, a key
challenge is planning missions for efficient data acquisition in complex
environments. To address this issue, this article introduces a general
informative path planning (IPP) framework for monitoring scenarios using an
aerial robot. The approach is capable of mapping either discrete or continuous
target variables on a terrain using variable-resolution data received from
probabilistic sensors. During a mission, the terrain maps built online are used
to plan information-rich trajectories in continuous 3-D space by optimizing
initial solutions obtained by a coarse grid search. Extensive simulations show
that our approach is more efficient than existing methods. We also demonstrate
its real-time application on a photorealistic mapping scenario using a publicly
available dataset. | [
1,
0,
0,
0,
0,
0
] |
Title: Machine Learning on Sequential Data Using a Recurrent Weighted Average,
Abstract: Recurrent Neural Networks (RNN) are a type of statistical model designed to
handle sequential data. The model reads a sequence one symbol at a time. Each
symbol is processed based on information collected from the previous symbols.
With existing RNN architectures, each symbol is processed using only
information from the previous processing step. To overcome this limitation, we
propose a new kind of RNN model that computes a recurrent weighted average
(RWA) over every past processing step. Because the RWA can be computed as a
running average, the computational overhead scales like that of any other RNN
architecture. The approach essentially reformulates the attention mechanism
into a stand-alone model. The performance of the RWA model is assessed on the
variable copy problem, the adding problem, classification of artificial
grammar, classification of sequences by length, and classification of the MNIST
images (where the pixels are read sequentially one at a time). On almost every
task, the RWA model is found to outperform a standard LSTM model. | [
1,
0,
0,
1,
0,
0
] |
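A schematic numpy rendering of the running-average recurrence described in the entry above; the weight shapes, initialization, and gating details are placeholders rather than the published model.
```python
# Running-average recurrence: attention-weighted average over all past steps,
# maintained with O(1) state via a running numerator and denominator.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, T = 3, 8, 20
Wu = rng.normal(scale=0.1, size=(d_hid, d_in))
Wg = rng.normal(scale=0.1, size=(d_hid, d_in + d_hid))
Wa = rng.normal(scale=0.1, size=(d_hid, d_in + d_hid))

x_seq = rng.normal(size=(T, d_in))
h = np.zeros(d_hid)
num = np.zeros(d_hid)           # running numerator:   sum_i z_i * exp(a_i)
den = np.zeros(d_hid)           # running denominator: sum_i exp(a_i)

for x in x_seq:
    xh = np.concatenate([x, h])
    z = (Wu @ x) * np.tanh(Wg @ xh)     # candidate contribution of this step
    a = Wa @ xh                         # unnormalized attention score
    num += z * np.exp(a)
    den += np.exp(a)
    h = np.tanh(num / den)              # weighted average over every past step

print("final hidden state:", np.round(h, 3))
```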
Title: Feynman-Kac equation for anomalous processes with space- and time-dependent forces,
Abstract: Functionals of a stochastic process Y(t) model many physical time-extensive
observables, e.g. particle positions, local and occupation times or accumulated
mechanical work. When Y(t) is a normal diffusive process, their statistics are
obtained as the solution of the Feynman-Kac equation. This equation provides
the crucial link between the expected values of diffusion processes and the
solutions of deterministic second-order partial differential equations. When
Y(t) is an anomalous diffusive process, generalizations of the Feynman-Kac
equation that incorporate power-law or more general waiting time distributions
of the underlying random walk have recently been derived. A general
representation of such waiting times is provided in terms of a Lévy process
whose Laplace exponent is related to the memory kernel appearing in the
generalized Feynman-Kac equation. The corresponding anomalous processes have
been shown to capture nonlinear mean square displacements exhibiting crossovers
between different scaling regimes, which have been observed in biological
systems like migrating cells or diffusing macromolecules in intracellular
environments. However, the case where both space- and time-dependent forces
drive the dynamics of the generalized anomalous process has not been solved
yet. Here, we present the missing derivation of the Feynman-Kac equation in
such general case by using the subordination technique. Furthermore, we discuss
its extension to functionals explicitly depending on time, which are relevant
for the stochastic thermodynamics of anomalous diffusive systems. Exact results
on the work fluctuations of a simple non-equilibrium model are obtained. In
this paper we also provide a pedagogical introduction to Lévy processes,
semimartingales and their associated stochastic calculus, which underlie the
mathematical formulation of anomalous diffusion as a subordinated process. | [
0,
1,
1,
0,
0,
0
] |
Title: An Online Ride-Sharing Path Planning Strategy for Public Vehicle Systems,
Abstract: As efficient traffic-management platforms, public vehicle (PV) systems are
envisioned to be a promising approach to solving traffic congestion and
pollution in future smart cities. PV systems provide online/dynamic
peer-to-peer ride-sharing services with the goal of serving a sufficient number
of customers with the minimum number of vehicles at the lowest possible cost. A key
component of the PV system is the online ride-sharing scheduling strategy. In
this paper, we propose an efficient path planning strategy that focuses on a
limited potential search area for each vehicle by filtering out the requests
that violate passenger service quality level, so that the global search is
reduced to local search. We analyze the performance of the proposed solution
such as reduction ratio of computational complexity. Simulations based on the
Manhattan taxi data set show that, the computing time is reduced by 22%
compared with the exhaustive search method under the same service quality
performance. | [
1,
0,
0,
0,
0,
0
] |
Title: Ask the Right Questions: Active Question Reformulation with Reinforcement Learning,
Abstract: We frame Question Answering (QA) as a Reinforcement Learning task, an
approach that we call Active Question Answering. We propose an agent that sits
between the user and a black box QA system and learns to reformulate questions
to elicit the best possible answers. The agent probes the system with,
potentially many, natural language reformulations of an initial question and
aggregates the returned evidence to yield the best answer. The reformulation
system is trained end-to-end to maximize answer quality using policy gradient.
We evaluate on SearchQA, a dataset of complex questions extracted from
Jeopardy!. The agent outperforms a state-of-the-art base model, playing the
role of the environment, and other benchmarks. We also analyze the language
that the agent has learned while interacting with the question answering
system. We find that successful question reformulations look quite different
from natural language paraphrases. The agent is able to discover non-trivial
reformulation strategies that resemble classic information retrieval techniques
such as term re-weighting (tf-idf) and stemming. | [
1,
0,
0,
0,
0,
0
] |
Title: Time consistency for scalar multivariate risk measures,
Abstract: In this paper we present results on dynamic multivariate scalar risk
measures, which arise in markets with transaction costs and systemic risk. Dual
representations of such risk measures are presented. These are then used to
obtain the main results of this paper on time consistency; namely, an
equivalent recursive formulation of multivariate scalar risk measures to
multiportfolio time consistency. We are motivated to study time consistency of
multivariate scalar risk measures as the superhedging risk measure in markets
with transaction costs (with a single eligible asset) (Jouini and Kallal
(1995), Roux and Zastawniak (2016), Loehne and Rudloff (2014)) does not satisfy
the usual scalar concept of time consistency. In fact, as demonstrated in
(Feinstein and Rudloff (2018)), scalar risk measures with the same
scalarization weight at all times would not be time consistent in general. The
deduced recursive relation for the scalarizations of multiportfolio time
consistent set-valued risk measures provided in this paper requires
consideration of the entire family of scalarizations. In this way we develop a
direct notion of a "moving scalarization" for scalar time consistency that
corroborates recent research on scalarizations of dynamic multi-objective
problems (Karnam, Ma, and Zhang (2017), Kovacova and Rudloff (2018)). | [
0,
0,
0,
0,
0,
1
] |
Title: A Continuous Beam Steering Slotted Waveguide Antenna Using Rotating Dielectric Slabs,
Abstract: The design, simulation and measurement of a beam steerable slotted waveguide
antenna operating in X band are presented. The proposed beam steerable antenna
consists of a standard rectangular waveguide (RWG) section with longitudinal
slots in the broad wall. The beam steering in this configuration is achieved by
rotating two dielectric slabs inside the waveguide and consequently changing
the phase of the slots excitations. In order to confirm the usefulness of this
concept, a non-resonant 20-slot waveguide array antenna with an element spacing
of d = 0.58{\lambda}0 has been designed, built and measured. A 14 deg beam
scanning from near broadside ({\theta} = 4 deg) toward end-fire ({\theta} = 18
deg) direction is observed. The gain varies from 18.33 dB to 19.11 dB which
corresponds to the radiation efficiencies between 95% and 79%. The side-lobe
level is -14 dB at the design frequency of 9.35 GHz. The simulated co-polarized
realized gain closely matches the fabricated prototype patterns. | [
0,
1,
0,
0,
0,
0
] |
Title: Developing an edge computing platform for real-time descriptive analytics,
Abstract: The Internet of Mobile Things encompasses stream data being generated by
sensors, network communications that pull and push these data streams, as well
as running processing and analytics that can effectively leverage actionable
information for transportation planning, management, and business advantage.
Edge computing emerges as a new paradigm that decentralizes the communication,
computation, control and storage resources from the cloud to the edge of the
network. This paper proposes an edge computing platform where mobile edge nodes
are physical devices deployed on a transit bus where descriptive analytics is
used to uncover meaningful patterns from real-time transit data streams. An
application experiment is used to evaluate the advantages and disadvantages of
our proposed platform to support descriptive analytics at a mobile edge node
and generate actionable information to transit managers. | [
1,
0,
0,
0,
0,
0
] |
Title: Space Telescope and Optical Reverberation Mapping Project. VII. Understanding the UV anomaly in NGC 5548 with X-Ray Spectroscopy,
Abstract: During the Space Telescope and Optical Reverberation Mapping Project (STORM)
observations of NGC 5548, the continuum and emission-line variability became
de-correlated during the second half of the 6-month long observing campaign.
Here we present Swift and Chandra X-ray spectra of NGC 5548 obtained as a part
of the campaign. The Swift spectra show that excess flux (relative to a
power-law continuum) in the soft X-ray band appears before the start of the
anomalous emission-line behavior, peaks during the period of the anomaly, and
then declines. This is a model-independent result suggesting that the soft
excess is related to the anomaly. We divide the Swift data into on- and
off-anomaly spectra to characterize the soft excess via spectral fitting. The
cause of the spectral differences is likely due to a change in the intrinsic
spectrum rather than being due to variable obscuration or partial covering. The
Chandra spectra have lower signal-to-noise ratios, but are consistent with
Swift data. Our preferred model of the soft excess is emission from an
optically thick, warm Comptonizing corona, the effective optical depth of which
increases during the anomaly. This model simultaneously explains all the three
observations: the UV emission line flux decrease, the soft-excess increase, and
the emission line anomaly. | [
0,
1,
0,
0,
0,
0
] |
Title: Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge,
Abstract: We present the grasping system and design approach behind Cartman, the
winning entrant in the 2017 Amazon Robotics Challenge. We investigate the
design processes leading up to the final iteration of the system and describe
the emergent solution by comparing it with key robotics design aspects.
Following our experience, we propose a new design aspect, precision vs.
redundancy, that should be considered alongside the previously proposed design
aspects of modularity vs. integration, generality vs. assumptions, computation
vs. embodiment and planning vs. feedback. We present the grasping system behind
Cartman, the winning robot in the 2017 Amazon Robotics Challenge. The system
makes strong use of redundancy in design by implementing complementary tools, a
suction gripper and a parallel gripper. This multi-modal end-effector is
combined with three grasp synthesis algorithms to accommodate the range of
objects provided by Amazon during the challenge. We provide a detailed system
description and an evaluation of its performance before discussing the broader
nature of the system with respect to the key aspects of robotic design as
initially proposed by the winners of the first Amazon Picking Challenge. To
address the principal nature of our grasping system and the reason for its
success, we propose an additional robotic design aspect `precision vs.
redundancy'. The full design of our robotic system, including the end-effector,
is open sourced and available at
this http URL | [
1,
0,
0,
0,
0,
0
] |
Title: Coping with Construals in Broad-Coverage Semantic Annotation of Adpositions,
Abstract: We consider the semantics of prepositions, revisiting a broad-coverage
annotation scheme used for annotating all 4,250 preposition tokens in a 55,000
word corpus of English. Attempts to apply the scheme to adpositions and case
markers in other languages, as well as some problematic cases in English, have
led us to reconsider the assumption that a preposition's lexical contribution
is equivalent to the role/relation that it mediates. Our proposal is to embrace
the potential for construal in adposition use, expressing such phenomena
directly at the token level to manage complexity and avoid sense proliferation.
We suggest a framework to represent both the scene role and the adposition's
lexical function so they can be annotated at scale---supporting automatic,
statistical processing of domain-general language---and sketch how this
representation would inform a constructional analysis. | [
1,
0,
0,
0,
0,
0
] |
Title: A Framework for Generalizing Graph-based Representation Learning Methods,
Abstract: Random walks are at the heart of many existing deep learning algorithms for
graph data. However, such algorithms have many limitations that arise from the
use of random walks, e.g., the features resulting from these methods are unable
to transfer to new nodes and graphs as they are tied to node identity. In this
work, we introduce the notion of attributed random walks which serves as a
basis for generalizing existing methods such as DeepWalk, node2vec, and many
others that leverage random walks. Our proposed framework enables these methods
to be more widely applicable for both transductive and inductive learning as
well as for use on graphs with attributes (if available). This is achieved by
learning functions that generalize to new nodes and graphs. We show that our
proposed framework is effective with an average AUC improvement of 16.1% while
requiring on average 853 times less space than existing methods on a variety of
graphs from several domains. | [
1,
0,
0,
1,
0,
0
] |
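A toy sketch of the attributed-random-walk idea from the entry above: walks record node attribute types instead of node identities, so the resulting sequences can transfer across nodes and graphs. The graph, attributes, and walk length are invented for illustration.
```python
# Attributed random walks over a toy graph: record attribute types, not node ids.
import random

random.seed(0)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}    # toy adjacency list
attr_type = {0: "A", 1: "B", 2: "A", 3: "C"}           # node attribute labels

def attributed_walk(start, length=5):
    walk, node = [attr_type[start]], start
    for _ in range(length):
        node = random.choice(adj[node])
        walk.append(attr_type[node])                   # record the type, not the id
    return walk

walks = [attributed_walk(n) for n in adj for _ in range(3)]
print(walks[0])   # e.g. a type sequence like ['A', 'B', 'A', ...] usable by skip-gram models
```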
Title: A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and Support Vector Machine (SVM) for Intrusion Detection in Network Traffic Data,
Abstract: Gated Recurrent Unit (GRU) is a recently-developed variation of the long
short-term memory (LSTM) unit, both of which are types of recurrent neural
network (RNN). Through empirical evidence, both models have been proven to be
effective in a wide variety of machine learning tasks such as natural language
processing (Wen et al., 2015), speech recognition (Chorowski et al., 2015), and
text classification (Yang et al., 2016). Conventionally, like most neural
networks, both of the aforementioned RNN variants employ the Softmax function
as its final output layer for its prediction, and the cross-entropy function
for computing its loss. In this paper, we present an amendment to this norm by
introducing linear support vector machine (SVM) as the replacement for Softmax
in the final output layer of a GRU model. Furthermore, the cross-entropy
function shall be replaced with a margin-based function. While there have been
similar studies (Alalshekmubarak & Smith, 2013; Tang, 2013), this proposal is
primarily intended for binary classification on intrusion detection using the
2013 network traffic data from the honeypot systems of Kyoto University.
Results show that the GRU-SVM model performs relatively higher than the
conventional GRU-Softmax model. The proposed model reached a training accuracy
of ~81.54% and a testing accuracy of ~84.15%, while the latter was able to
reach a training accuracy of ~63.07% and a testing accuracy of ~70.75%. In
addition, the juxtaposition of these two final output layers indicate that the
SVM would outperform Softmax in prediction time - a theoretical implication
which was supported by the actual training and testing time in the study. | [
1,
0,
0,
1,
0,
0
] |
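A minimal PyTorch sketch of the head swap described in the entry above -- a linear layer trained with a margin-based (squared hinge) loss on top of a GRU instead of softmax with cross-entropy; the data, sizes, and hyperparameters are synthetic and not those of the study.
```python
# GRU with a linear SVM head trained by an L2 (squared hinge) margin loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(64, 10, 8)                           # 64 sequences, 10 steps, 8 features
y = torch.where(x.mean(dim=(1, 2)) > 0, 1.0, -1.0)   # labels in {-1, +1} for the hinge loss

gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
svm_head = nn.Linear(16, 1)                          # linear SVM: one score per sequence
opt = torch.optim.Adam(list(gru.parameters()) + list(svm_head.parameters()), lr=1e-2)

for epoch in range(50):
    _, h_n = gru(x)                                  # h_n: (1, batch, hidden)
    scores = svm_head(h_n.squeeze(0)).squeeze(-1)
    hinge = torch.clamp(1 - y * scores, min=0) ** 2  # squared hinge (L2-SVM) loss
    loss = hinge.mean() + 1e-3 * svm_head.weight.pow(2).sum()  # margin loss + SVM weight penalty
    opt.zero_grad(); loss.backward(); opt.step()

acc = ((scores.detach() > 0).float() * 2 - 1).eq(y).float().mean()
print("train accuracy:", float(acc))
```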
Title: Spelling Correction as a Foreign Language,
Abstract: In this paper, we reformulated the spell correction problem as a machine
translation task under the encoder-decoder framework. This reformulation
enabled us to use a single model for solving the problem that is traditionally
formulated as learning a language model and an error model. This model employs
multi-layer recurrent neural networks as an encoder and a decoder. We
demonstrate the effectiveness of this model using an internal dataset, where
the training data is automatically obtained from user logs. The model offers
competitive performance as compared to the state of the art methods but does
not require any feature engineering nor hand tuning between models. | [
1,
0,
0,
0,
0,
0
] |
Title: Intrinsic p-type W-based transition metal dichalcogenide by substitutional Ta-doping,
Abstract: Two-dimensional (2D) transition metal dichalcogenides (TMDs) have recently
emerged as promising candidates for future electronics and optoelectronics.
While most of TMDs are intrinsic n-type semiconductors due to electron donating
which originates from chalcogen vacancies, obtaining intrinsic high-quality
p-type semiconducting TMDs has been challenging. Here, we report an
experimental approach to obtain intrinsic p-type Tungsten (W)-based TMDs by
substitutional Ta-doping. The obtained few-layer Ta-doped WSe2 (Ta0.01W0.99Se2)
field-effect transistor (FET) devices exhibit competitive p-type performances,
including ~10^6 current on/off at room temperature. We also demonstrate high
quality van der Waals (vdW) p-n heterojunctions based on Ta0.01W0.99Se2/MoS2
structure, which exhibit nearly ideal diode characteristics (with an ideality
factor approaching 1 and a rectification ratio up to 10^5) and excellent
photodetecting performance. Our study suggests that substitutional Ta-doping
holds great promise to realize intrinsic p-type W-based TMDs for future
electronic and photonic applications. | [
0,
1,
0,
0,
0,
0
] |
Title: Synchronisation of Partial Multi-Matchings via Non-negative Factorisations,
Abstract: In this work we study permutation synchronisation for the challenging case of
partial permutations, which plays an important role for the problem of matching
multiple objects (e.g. images or shapes). The term synchronisation refers to
the property that the set of pairwise matchings is cycle-consistent, i.e. in
the full matching case all compositions of pairwise matchings over cycles must
be equal to the identity. Motivated by clustering and matrix factorisation
perspectives of cycle-consistency, we derive an algorithm to tackle the
permutation synchronisation problem based on non-negative factorisations. In
order to deal with the inherent non-convexity of the permutation
synchronisation problem, we use an initialisation procedure based on a novel
rotation scheme applied to the solution of the spectral relaxation. Moreover,
this rotation scheme facilitates a convenient Euclidean projection to obtain a
binary solution after solving our relaxed problem. In contrast to
state-of-the-art methods, our approach is guaranteed to produce
cycle-consistent results. We experimentally demonstrate the efficacy of our
method and show that it achieves better results compared to existing methods. | [
0,
0,
0,
1,
0,
0
] |
Title: Associated varieties and Higgs branches (a survey),
Abstract: Associated varieties of vertex algebras are analogue of the associated
varieties of primitive ideals of the universal enveloping algebras of
semisimple Lie algebras. They not only capture some of the important properties
of vertex algebras but also have interesting relationship with the Higgs
branches of four-dimensional $N=2$ superconformal field theories (SCFTs). As a
consequence, one can deduce the modular invariance of Schur indices of 4d $N=2$
SCFTs from the theory of vertex algebras. | [
0,
0,
1,
0,
0,
0
] |
Title: Optimization Design of Decentralized Control for Complex Decentralized Systems,
Abstract: A new method is developed to deal with the problem that a complex
decentralized control system needs to keep centralized control performance. The
systematic procedure emphasizes quickly finding the decentralized
subcontrollers that match the closed-loop performance and robustness
characteristics of the centralized controller. The procedure is distinguished by
the use of a genetic algorithm (GA) to optimize the design of the centralized
H-infinity controller K(s) and the decentralized engine subcontroller KT(s), and
by the fact that only one interface variable needs to satisfy the decentralized
control system requirement according to the proposed selection principle. The
optimization design is motivated by implementation issues, where it is desirable
to reduce the time spent in the trial-and-error process and to accurately find
the best decentralized subcontrollers. The
method is applied to decentralized control system design for a short takeoff
and landing fighter. By comparing the simulation results of the decentralized
control system with those of the centralized control system, we validate that
the decentralized control attains the performance and robustness of the
centralized control. | [
1,
0,
0,
0,
0,
0
] |
Title: SAM: Semantic Attribute Modulation for Language Modeling and Style Variation,
Abstract: This paper presents a Semantic Attribute Modulation (SAM) for language
modeling and style variation. The semantic attribute modulation includes
various document attributes, such as titles, authors, and document categories.
We consider two types of attributes, (title attributes and category
attributes), and a flexible attribute selection scheme by automatically scoring
them via an attribute attention mechanism. The semantic attributes are embedded
into the hidden semantic space as the generation inputs. With the attributes
properly harnessed, our proposed SAM can generate interpretable texts with
regard to the input attributes. Qualitative analysis, including word semantic
analysis and attention values, shows the interpretability of SAM. On several
typical text datasets, we empirically demonstrate the superiority of the
Semantic Attribute Modulated language model with different combinations of
document attributes. Moreover, we present a style variation for the lyric
generation using SAM, which shows a strong connection between the style
variation and the semantic attributes. | [
1,
0,
0,
1,
0,
0
] |
Title: A Flexible Procedure for Mixture Proportion Estimation in Positive--Unlabeled Learning,
Abstract: Positive--unlabeled (PU) learning considers two samples, a positive set P
with observations from only one class and an unlabeled set U with observations
from two classes. The goal is to classify observations in U. Class mixture
proportion estimation (MPE) in U is a key step in PU learning. Blanchard et al.
[2010] showed that MPE in PU learning is a generalization of the problem of
estimating the proportion of true null hypotheses in multiple testing problems.
Motivated by this idea, we propose reducing the problem to one dimension via
construction of a probabilistic classifier trained on the P and U data sets
followed by application of a one--dimensional mixture proportion method from
the multiple testing literature to the observation class probabilities. The
flexibility of this framework lies in the freedom to choose the classifier and
the one--dimensional MPE method. We prove consistency of two mixture proportion
estimators using bounds from empirical process theory, develop tuning parameter
free implementations, and demonstrate that they have competitive performance on
simulated waveform data and a protein signaling problem. | [
0,
0,
0,
1,
0,
0
] |
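A sketch of the two-step recipe in the entry above: collapse P and U to one dimension with a probabilistic classifier, then apply a one-dimensional proportion estimator to the scores. The simple ratio estimator in step 2 is a stand-in chosen for brevity, not one of the estimators analyzed in the paper.
```python
# Two-step PU mixture proportion estimation on synthetic one-dimensional data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, size=(500, 1))                  # positive component of U
neg = rng.normal(loc=-1.0, size=(1500, 1))                # negative component of U
P = rng.normal(loc=2.0, size=(500, 1))                    # labeled positive set
U = np.vstack([pos, neg])                                 # unlabeled set, true alpha = 0.25

# Step 1: probabilistic classifier "P vs U" gives a one-dimensional score.
X = np.vstack([P, U]); s = np.r_[np.ones(len(P)), np.zeros(len(U))]
clf = LogisticRegression().fit(X, s)
score_P, score_U = clf.predict_proba(P)[:, 1], clf.predict_proba(U)[:, 1]

# Step 2: one-dimensional mixture proportion estimate from the scores.
alpha_hat = score_U.mean() / score_P.mean()
print("estimated positive proportion in U:", round(alpha_hat, 3), "(true 0.25)")
```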
Title: The Application of SNiPER to the JUNO Simulation,
Abstract: JUNO is a multipurpose neutrino experiment which is designed to determine
neutrino mass hierarchy and precisely measure oscillation parameters. As one of
the important systems, the JUNO offline software is being developed using the
SNiPER software. In this proceeding, we focus on the requirements of JUNO
simulation and present the working solution based on the SNiPER.
The JUNO simulation framework is in charge of managing event data, detector
geometries and materials, physics processes, simulation truth information etc.
It glues physics generator, detector simulation and electronics simulation
modules together to achieve a full simulation chain. In the implementation of
the framework, many attractive characteristics of the SNiPER have been used,
such as dynamic loading, flexible flow control, multiple event management and
Python binding. Furthermore, additional efforts have been made to make both
detector and electronics simulation flexible enough to accommodate and optimize
different detector designs.
For the Geant4-based detector simulation, each sub-detector component is
implemented as a SNiPER tool which is a dynamically loadable and configurable
plugin. So it is possible to select the detector configuration at runtime. The
framework provides the event loop to drive the detector simulation and
interacts with the Geant4 which is implemented as a passive service. All levels
of user actions are wrapped into different customizable tools, so that user
functions can be easily extended by just adding new tools. The electronics
simulation has been implemented by following an event driven scheme. The SNiPER
task component is used to simulate data processing steps in the electronics
modules. The electronics and trigger are synchronized by triggered events
containing possible physics signals. | [
0,
1,
0,
0,
0,
0
] |
Title: A Modified Sigma-Pi-Sigma Neural Network with Adaptive Choice of Multinomials,
Abstract: Sigma-Pi-Sigma neural networks (SPSNNs) as a kind of high-order neural
networks can provide more powerful mapping capability than the traditional
feedforward neural networks (Sigma-Sigma neural networks). In the existing
literature, in order to reduce the number of the Pi nodes in the Pi layer, a
special multinomial P_s is used in SPSNNs. Each monomial in P_s is linear with
respect to each particular variable sigma_i when the other variables are taken
as constants. Therefore, the monomials like sigma_i^n or sigma_i^n sigma_j with
n>1 are not included. This choice may be somehow intuitive, but is not
necessarily the best. We propose in this paper a modified Sigma-Pi-Sigma neural
network (MSPSNN) with an adaptive approach to find a better multinomial for a
given problem. To elaborate, we start from a complete multinomial with a given
order. Then we employ a regularization technique in the learning process for
the given problem to reduce the number of monomials used in the multinomial,
and end up with a new SPSNN involving the same number of monomials (= the
number of nodes in the Pi-layer) as in P_s. Numerical experiments on some
benchmark problems show that our MSPSNN behaves better than the traditional
SPSNN with P_s. | [
0,
0,
0,
1,
0,
0
] |
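A rough analogy (not the authors' training algorithm) for the pruning idea in the entry above: start from a complete set of order-2 monomials of the Sigma outputs and let an L1 penalty decide which monomials the Pi layer keeps.
```python
# L1-based selection of monomials from a complete order-2 multinomial.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
sigma = rng.normal(size=(300, 4))                        # outputs of the first Sigma layer
target = sigma[:, 0] * sigma[:, 1] + sigma[:, 2] ** 2    # depends on only two monomials

poly = PolynomialFeatures(degree=2, include_bias=False)
monomials = poly.fit_transform(sigma)                    # complete order-2 multinomial
fit = Lasso(alpha=0.05).fit(monomials, target)

kept = [name for name, w in zip(poly.get_feature_names_out(), fit.coef_) if abs(w) > 1e-3]
print("monomials kept by the L1 penalty:", kept)
```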
Title: Proceedings of the 2017 AdKDD & TargetAd Workshop,
Abstract: Proceedings of the 2017 AdKDD and TargetAd Workshop held in conjunction with
the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining Halifax,
Nova Scotia, Canada. | [
1,
0,
0,
0,
0,
0
] |
Title: On the R-superlinear convergence of the KKT residues generated by the augmented Lagrangian method for convex composite conic programming,
Abstract: Due to the possible lack of primal-dual-type error bounds, the superlinear
convergence for the Karush-Kuhn-Tucker (KKT) residues of the sequence generated
by augmented Lagrangian method (ALM) for solving convex composite conic
programming (CCCP) has long been an outstanding open question. In this paper,
we aim to resolve this issue by first conducting convergence rate analysis for
the ALM with Rockafellar's stopping criteria under only a mild quadratic growth
condition on the dual of CCCP. More importantly, by further assuming that the
Robinson constraint qualification holds, we establish the R-superlinear
convergence of the KKT residues of the iterative sequence under
easy-to-implement stopping criteria {for} the augmented Lagrangian subproblems.
Equipped with this discovery, we gain insightful interpretations on the
impressive numerical performance of several recently developed semismooth
Newton-CG based ALM solvers for solving linear and convex quadratic
semidefinite programming. | [
0,
0,
1,
0,
0,
0
] |
Title: It's Like Python But: Towards Supporting Transfer of Programming Language Knowledge,
Abstract: Expertise in programming traditionally assumes a binary novice-expert divide.
Learning resources typically target programmers who are learning programming
for the first time, or expert programmers for that language. An
underrepresented, yet important group of programmers are those that are
experienced in one programming language, but desire to author code in a
different language. For this scenario, we postulate that an effective form of
feedback is presented as a transfer from concepts in the first language to the
second. Current programming environments do not support this form of feedback.
In this study, we apply the theory of learning transfer to teach a language
that programmers are less familiar with--such as R--in terms of a programming
language they already know--such as Python. We investigate learning transfer
using a new tool called Transfer Tutor that presents explanations for R code in
terms of the equivalent Python code. Our study found that participants
leveraged learning transfer as a cognitive strategy, even when unprompted.
Participants found Transfer Tutor to be useful across a number of affordances
like stepping through and highlighting facts that may have been missed or
misunderstood. However, participants were reluctant to accept facts without
code execution or sometimes had difficulty reading explanations that are
verbose or complex. These results provide guidance for future designs and
research directions that can support learning transfer when learning new
programming languages. | [
1,
0,
0,
0,
0,
0
] |
Title: How do Mixture Density RNNs Predict the Future?,
Abstract: Gaining a better understanding of how and what machine learning systems learn
is important to increase confidence in their decisions and catalyze further
research. In this paper, we analyze the predictions made by a specific type of
recurrent neural network, mixture density RNNs (MD-RNNs). These networks learn
to model predictions as a combination of multiple Gaussian distributions,
making them particularly interesting for problems where a sequence of inputs
may lead to several distinct future possibilities. An example is learning
internal models of an environment, where different events may or may not occur,
but where the average over different events is not meaningful. By analyzing the
predictions made by trained MD-RNNs, we find that their different Gaussian
components have two complementary roles: 1) Separately modeling different
stochastic events and 2) Separately modeling scenarios governed by different
rules. These findings increase our understanding of what is learned by
predictive MD-RNNs, and open up new research directions for further
understanding how we can benefit from their self-organizing model
decomposition. | [
1,
0,
0,
1,
0,
0
] |
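A schematic numpy illustration of a mixture density output head like the one discussed in the entry above: a hidden state is mapped to K Gaussian components, and the loss is the negative log-likelihood of the observed value under the mixture. All shapes and weights are arbitrary.
```python
# Mixture density output head on top of an (assumed) RNN hidden state.
import numpy as np

rng = np.random.default_rng(0)
K, d_hid = 3, 16
h = rng.normal(size=d_hid)                     # hidden state from the RNN
W_pi, W_mu, W_s = (rng.normal(scale=0.1, size=(K, d_hid)) for _ in range(3))

logits = W_pi @ h
pi = np.exp(logits - logits.max()); pi /= pi.sum()   # mixture weights (softmax)
mu = W_mu @ h                                        # component means
sigma = np.exp(W_s @ h)                              # positive component scales

def mdn_nll(y):
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(np.dot(pi, comp))                 # negative log-likelihood under the mixture

print("NLL of y=0.1:", round(mdn_nll(0.1), 3))
```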
Title: Robust Recovery of Missing Data in Electricity Distribution Systems,
Abstract: The advanced operation of future electricity distribution systems is likely
to require significant observability of the different parameters of interest
(e.g., demand, voltages, currents, etc.). Ensuring completeness of data is,
therefore, paramount. In this context, an algorithm for recovering missing
state variable observations in electricity distribution systems is presented.
The proposed method exploits the low rank structure of the state variables via
a matrix completion approach while incorporating prior knowledge in the form of
second order statistics. Specifically, the recovery method combines nuclear
norm minimization with Bayesian estimation. The performance of the new
algorithm is compared to the information-theoretic limits and tested through
simulations using real data of an urban low voltage distribution system. The
impact of the prior knowledge is analyzed when a mismatched covariance is used
and for a Markovian sampling that introduces structure in the observation
pattern. Numerical results demonstrate that the proposed algorithm is robust
and outperforms existing state of the art algorithms. | [
1,
0,
0,
0,
0,
0
] |
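A minimal soft-impute-style singular-value-thresholding sketch of low-rank matrix completion, loosely related to the entry above; the Bayesian prior-information component the abstract describes is omitted, and the data are synthetic.
```python
# Low-rank matrix completion by iterative singular-value soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 60))     # rank-3 "state variable" matrix
mask = rng.random(M.shape) < 0.5                            # 50% of entries observed

X, tau = np.zeros_like(M), 5.0
for _ in range(300):
    X[mask] = M[mask]                                       # enforce the observed entries
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - tau, 0)) @ Vt            # shrink singular values

rel_err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on missing entries:", round(rel_err, 3))
```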
Title: Numerical analysis of a nonlinear free-energy diminishing Discrete Duality Finite Volume scheme for convection diffusion equations,
Abstract: We propose a nonlinear Discrete Duality Finite Volume scheme to approximate
the solutions of drift diffusion equations. The scheme is built to preserve the
energy / energy dissipation relation at the discrete level, even on severely
distorted meshes. This relation is of paramount importance for capturing the
long-time behavior of the problem in an accurate way. To enforce it, the linear
convection diffusion equation is rewritten in a nonlinear form before being
discretized. We establish the existence of positive solutions to the scheme.
Based on compactness arguments, the convergence of the approximate solution
towards a weak solution is established. Finally, we provide numerical evidences
of the good behavior of the scheme when the discretization parameters tend to 0
and when time goes to infinity. | [
0,
0,
1,
0,
0,
0
] |
Title: A momentum conserving $N$-body scheme with individual timesteps,
Abstract: $N$-body simulations study the dynamics of $N$ particles under the influence
of mutual long-distant forces such as gravity. In practice, $N$-body codes will
violate Newton's third law if they use either an approximate Poisson solver or
individual timesteps. In this study, we construct a novel $N$-body scheme by
combining a fast multipole method (FMM) based Poisson solver and a time
integrator using a hierarchical Hamiltonian splitting (HHS) technique. We test
our implementation for collision-less systems using several problems in
galactic dynamics. As a result of the momentum conserving nature of these two
key components, the new $N$-body scheme is also momentum conserving. Moreover,
we can fully utilize the $\mathcal O(\textit N)$ complexity of FMM with the
integrator. With the restored force symmetry, we can improve both angular
momentum conservation and energy conservation substantially. The new scheme
will be suitable for many applications in galactic dynamics and structure
formation. Our implementation, in the code Taichi, is publicly available at
this https URL. | [
0,
1,
0,
0,
0,
0
] |
Title: Accurate Real Time Localization Tracking in A Clinical Environment using Bluetooth Low Energy and Deep Learning,
Abstract: Deep learning has started to revolutionize several different industries, and
the applications of these methods in medicine are now becoming more
commonplace. This study focuses on investigating the feasibility of tracking
patients and clinical staff wearing Bluetooth Low Energy (BLE) tags in a
radiation oncology clinic using artificial neural networks (ANNs) and
convolutional neural networks (CNNs). The performance of these networks was
compared to relative received signal strength indicator (RSSI) thresholding and
triangulation. By utilizing temporal information, a combined CNN+ANN network
was capable of correctly identifying the location of the BLE tag with an
accuracy of 99.9%. It outperformed a CNN model (accuracy = 94%), a thresholding
model employing majority voting (accuracy = 95%), and a triangulation
classifier utilizing majority voting (accuracy = 95%). Future studies will seek
to deploy this affordable real time location system in hospitals to improve
clinical workflow, efficiency, and patient safety. | [
1,
1,
0,
0,
0,
0
] |