title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---|
Logarithmic singularities and quantum oscillations in magnetically doped topological insulators | We report magnetotransport measurements on magnetically doped
(Bi,Sb)$_2$Te$_3$ films grown by molecular beam epitaxy. In Hall-bar devices, a
logarithmic dependence on temperature and bias voltage is observed in both the
longitudinal and anomalous Hall resistance. The interplay of disorder and
electron-electron interactions is found to explain quantitatively the observed
logarithmic singularities and is a dominant scattering mechanism in these
samples. Submicron scale devices exhibit intriguing quantum oscillations at
high magnetic fields with dependence on bias voltage. The observed quantum
oscillations can be attributed to bulk and surface transport.
| 0 | 1 | 0 | 0 | 0 | 0 |
A General Algorithm to Calculate the Inverse Principal $p$-th Root of Symmetric Positive Definite Matrices | We address the general mathematical problem of computing the inverse $p$-th
root of a given matrix in an efficient way. A new method to construct iteration
functions that allow calculating arbitrary $p$-th roots and their inverses of
symmetric positive definite matrices is presented. We show that the order of
convergence is at least quadratic and that adaptively adjusting a parameter $q$
always leads to an even faster convergence. In this way, a better performance
than with previously known iteration schemes is achieved. The efficiency of the
iterative functions is demonstrated for various matrices with different
densities, condition numbers and spectral radii.
| 0 | 0 | 1 | 0 | 0 | 0 |
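The iteration functions themselves are not given in the abstract; for orientation, a generic Newton-type fixed-point scheme for the inverse p-th root of an SPD matrix (a textbook iteration, not the authors' method) can be sketched as:

```python
import numpy as np

def inv_pth_root(A, p, iters=60):
    """Approximate A^(-1/p) for a symmetric positive definite matrix A.

    Uses the generic fixed-point iteration
        X_{k+1} = X_k ((p+1) I - A X_k^p) / p,
    started from a small multiple of the identity so that every iterate
    commutes with A. This is a standard textbook scheme, shown only for
    orientation; it is not the construction proposed in the paper.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = I / np.trace(A)  # small enough to land in the basin of attraction
    for _ in range(iters):
        X = X @ ((p + 1) * I - A @ np.linalg.matrix_power(X, p)) / p
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # SPD test matrix
X = inv_pth_root(A, p=2)
# X approximates A^(-1/2), so X @ X @ A should be close to the identity.
```

Because the initial guess is a multiple of the identity, all iterates are polynomials in A, and convergence can be checked eigenvalue by eigenvalue.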
Latent Mixture Modeling for Clustered Data | This article proposes a mixture modeling approach to estimating cluster-wise
conditional distributions in clustered (grouped) data. We adapt the
mixture-of-experts model to the latent distributions, and propose a model in
which each cluster-wise density is represented as a mixture of latent experts
with cluster-wise mixing proportions following a Dirichlet distribution. The
model parameters are estimated by maximizing the marginal likelihood function
using a newly developed Monte Carlo Expectation-Maximization algorithm. We also
extend the model such that the distribution of cluster-wise mixing proportions
depends on some cluster-level covariates. The finite sample performance of the
proposed model is compared with some existing mixture modeling approaches as
well as a linear mixed model through simulation studies. The proposed model
is also illustrated with the posted land price data in Japan.
| 0 | 0 | 0 | 1 | 0 | 0 |
Fast Switching Dual Fabry-Perot-Cavity-based Optical Refractometry for Assessment of Gas Refractivity and Density - Estimates of Its Precision, Accuracy, and Temperature Dependence | Dual Fabry-Perot-Cavity-based Optical Refractometry (DFCB-OR) have been shown
to have excellent potential for characterization of gases, in particular their
refractivity and density. However, its performance has in practice been found
to be limited by drifts. To remedy this, drift-free DFCB-OR (DF-DFCB-OR) has
recently been proposed. Suggested methodologies for realization of a specific
type of DF-DFCB-OR, termed Fast Switching DFCB-OR (FS-DFCB-OR), have been
presented in an accompanying work. This paper scrutinizes the performance and
the limitations of both DF- and FS-DFCB-OR for assessments of refractivity and
gas density, in particular their precision, accuracy, and temperature
dependence. It is shown that both refractivity and gas density can be assessed
by FS-DFCB-OR with a precision in the 10$^{-9}$ range under STP conditions. It
is demonstrated that the absolute accuracy is mainly limited by the accuracy by
which the instantaneous deformation of the cavity or the higher order virial
coefficients can be assessed. It is also shown that the internal accuracy, i.e.
the accuracy by which the system can be characterized with respect to an
internal standard, can be several orders of magnitude better than the absolute.
It is concluded that the temperature dependence of FS-DFCB-OR is exceptionally
small, typically in the 10$^{-8}$ to 10$^{-7}$/°C range, and primarily caused by
thermal expansion of the FPC-spacer material. Finally, this paper discusses
how to design an FS-DFCB-OR system for optimal performance and
epitomizes the conclusions of this and our accompanying works regarding both
DF- and FS-DFCB-OR in terms of performance and provides an outlook for both
techniques. Our works can serve as a basis for future realizations of
instrumentation for assessments of gas refractivity and density that can fully
benefit from the extraordinary potential of FPC-OR.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dixmier traces and residues on weak operator ideals | We develop the theory of modulated operators in general principal ideals of
compact operators. For Laplacian modulated operators we establish Connes' trace
formula in its local Euclidean model and a global version thereof. It expresses
Dixmier traces in terms of the vector-valued Wodzicki residue. We demonstrate
the applicability of our main results in the context of log-classical
pseudo-differential operators, studied by Lesch, and a class of operators
naturally appearing in noncommutative geometry.
| 0 | 0 | 1 | 0 | 0 | 0 |
Investigating early-type galaxy evolution with a multiwavelength approach. II. The UV structure of 11 galaxies with Swift-UVOT | GALEX detected a significant fraction of early-type galaxies showing Far-UV
bright structures. These features suggest the occurrence of recent star
formation episodes. We aim at understanding their evolutionary path[s] and the
mechanisms at the origin of their UV-bright structures. We investigate with a
multi-lambda approach 11 early-types selected because of their nearly passive
stage of evolution in the nuclear region. This paper, the second of a series,
focuses on the comparison between UV features detected by Swift-UVOT, tracing
recent star formation, and the galaxy optical structure mapping older stellar
populations. We performed their UV surface photometry and used BVRI photometry
from other sources. Our integrated magnitudes have been analyzed and compared
with corresponding values in the literature. We characterize the overall galaxy
structure by best-fitting the UV and optical luminosity profiles with a single
Sersic law. NGC 1366, NGC 1426, NGC 3818, NGC 3962 and NGC 7192 show
featureless luminosity profiles. Excluding NGC 1366, which has a clear edge-on
disk (n~1-2), and NGC 3818, the remaining three have Sersic indices n~3-4 in
optical and a lower index in the UV. Bright ring/arm-like structures are
revealed by UV images and luminosity profiles of NGC 1415, NGC 1533, NGC 1543,
NGC 2685, NGC 2974 and IC 2006. The ring/arm-like structures are different from
galaxy to galaxy. Sersic indices of UV profiles for those galaxies are in the
range n=1.5-3 both in S0s and in Es. In our sample optical Sersic indices are
usually larger than the UV ones. (M2-V) color profiles are bluer in
ring/arm-like structures with respect to the galaxy body. The lower values of
Sersic indices in the UV bands with respect to optical ones, suggesting the
presence of a disk, indicate that the role of dissipation cannot be
neglected in the recent evolutionary phases of these early-type galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
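A single-Sersic fit of the kind used for these luminosity profiles can be sketched on synthetic data. The function names and the crude grid search below are ours, and b_n ≈ 2n - 1/3 is a standard approximation to the Sersic normalization:

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness law I(r) = I_e * exp(-b_n ((r/r_e)^(1/n) - 1))."""
    b_n = 2.0 * n - 1.0 / 3.0  # common approximation, adequate for n >~ 0.5
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def fit_sersic_index(r, I, I_e, r_e):
    """Grid-search the Sersic index n with amplitude and scale held fixed."""
    grid = np.arange(0.5, 6.01, 0.01)
    errs = [np.sum((np.log(I) - np.log(sersic(r, I_e, r_e, n))) ** 2)
            for n in grid]
    return grid[int(np.argmin(errs))]

r = np.linspace(0.1, 10.0, 200)
I = sersic(r, I_e=100.0, r_e=2.0, n=4.0)  # synthetic de Vaucouleurs-like profile
n_hat = fit_sersic_index(r, I, I_e=100.0, r_e=2.0)
```

Real profile fits vary all three parameters jointly; the one-dimensional search here only illustrates how the index n shapes the profile.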
Lazy Automata Techniques for WS1S | We present a new decision procedure for the logic WS1S. It originates from
the classical approach, which first builds an automaton accepting all models of
a formula and then tests whether its language is empty. The main novelty is to
test the emptiness on the fly, while constructing a symbolic, term-based
representation of the automaton, and prune the constructed state space from
parts irrelevant to the test. The pruning is done by a generalization of two
techniques used in antichain-based language inclusion and universality checking
of finite automata: subsumption and early termination. The richer structure of
the WS1S decision problem allows us, however, to elaborate on these techniques
in novel ways. Our experiments show that the proposed approach can in many
cases significantly outperform the classical decision procedure (implemented in
the MONA tool) as well as recently proposed alternatives.
| 1 | 0 | 0 | 0 | 0 | 0 |
Joint distribution of conjugate algebraic numbers: a random polynomial approach | Given a polynomial $q(z):=a_0+a_1z+\dots+a_nz^n$ and a vector of positive
weights $\mathbf{w}=(w_0, w_1,\dots,w_n)$, define the $\mathbf{w}$-weighted
$l_p$-norm of $q$ as $$ l_{p,\mathbf{w}}[q]:=\left(\sum_{k=0}^{n}|w_k
a_k|^p\right)^{1/p},\quad p\in[1,\infty]. $$ Define the $\mathbf{w}$-weighted
$l_p$-norm of an algebraic number to be the $\mathbf{w}$-weighted $l_p$-norm of
its minimal polynomial. For non-negative integers $k,l$ such that $k+2l\leq n$
and a Borel subset $B\subset \mathbb{R}^k\times\mathbb{C}_+^l$ denote by
$\Phi_{p,\mathbf{w},k,l}(Q,B)$ the number of ordered $(k+l)$-tuples in $B$ of
conjugate algebraic numbers of degree $n$ and $\mathbf{w}$-weighted $l_p$-norm
at most $ Q$. We show that $$ \lim_{ Q\to\infty}\frac{\Phi_{p,\mathbf{w},k,l}(
Q,B)}{
Q^{n+1}}=\frac{\mathrm{Vol}_{n+1}(\mathbb{B}_{p,\mathbf{w}}^{n+1})}{2\zeta(n+1)}\int_B
\rho_{p,\mathbf{w},k,l}(\mathbf{x},\mathbf{z})\rm d \mathbf{x}\rm d \mathbf{z},
$$ where $\mathrm{Vol}_{n+1}(\mathbb{B}_{p,\mathbf{w}}^{n+1})$ is the volume of
the unit $\mathbf{w}$-weighted $l_p$-ball and $\rho_{p,\mathbf{w},k,l}$ shall
denote the correlation function of $k$ real and $l$ complex zeros of the random
polynomial $\sum_{k=0}^n \frac{\eta_k}{w_k} z^k$ for i.i.d. random variables
$\eta_k$ with density $c_p e^{-|t|^p}$ for $p<\infty$ and with constant
density on $[-1,1]$ for $p=\infty$, respectively. We give an explicit formula for
$\rho_{p,\mathbf{w},k,l}$ which in the case $k+2l=n$ simplifies to $$
\rho_{p,\mathbf{w},n-2l,l}=\frac{2}{(n+1)\mathrm{Vol}_{n+1}(\mathbb{B}_{p,\mathbf{w}}^{n+1})}\,\frac{\sqrt{|\mathrm{D}[q]|}\phantom{1^n}}{(l_{p,\mathbf{w}}[q])^{n+1}},
$$ where $q$ is the monic polynomial whose zeros are the arguments of the
correlation function $\rho_{p,\mathbf{w},n-2l,l}$ and $\mathrm{D}[q]$ denotes
its discriminant.
| 0 | 0 | 1 | 0 | 0 | 0 |
On wrapping the Kalman filter and estimating with the SO(2) group | This paper analyzes directional tracking in 2D with the extended Kalman
filter on Lie groups (LG-EKF). The study stems from the problem of tracking
objects moving in 2D Euclidean space, with the observer measuring direction
only, thus rendering the measurement space and object position on the
circle---a non-Euclidean geometry. The problem is further complicated if we
need to include higher-order dynamics in the state space, like angular velocity,
which is a Euclidean variable. The LG-EKF offers a solution to this issue by
modeling the state space as a Lie group or combination thereof, e.g., SO(2) or
its combinations with Rn. In the present paper, we first derive the LG-EKF on
SO(2) and subsequently show that this derivation, based on the mathematically
grounded framework of filtering on Lie groups, yields the same result as
heuristically wrapping the angular variable within the EKF framework. This
result applies only to the SO(2) and SO(2)xRn LG-EKFs and is not intended to be
extended to other Lie groups or combinations thereof. In the end, we showcase
the SO(2)xR2 LG-EKF, as an example of a constant angular acceleration model, on
the problem of speaker tracking with a microphone array for which real-world
experiments are conducted and accuracy is evaluated with ground truth data
obtained by a motion capture system.
| 1 | 0 | 0 | 0 | 0 | 0 |
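The equivalence claimed above hinges on the heuristic wrapping step; a minimal scalar illustration of wrapping within an EKF update (names and the toy numbers are ours, not the paper's) is:

```python
import math

def wrap(angle):
    """Map an angle to the interval [-pi, pi)."""
    return (angle + math.pi) % (2.0 * math.pi) - math.pi

def ekf_angle_update(theta_pred, P_pred, z, R):
    """Scalar EKF measurement update for a direction (angle) state.

    The only non-Euclidean ingredient is wrapping the innovation so the
    filter corrects through the shorter arc of the circle.
    """
    innovation = wrap(z - theta_pred)   # wrapped residual on the circle
    S = P_pred + R                      # innovation covariance (H = 1)
    K = P_pred / S                      # Kalman gain
    theta = wrap(theta_pred + K * innovation)
    P = (1.0 - K) * P_pred
    return theta, P

# A prediction near +pi and a measurement near -pi are in fact close on the
# circle; the wrapped update keeps the estimate near +/-pi instead of
# dragging it through zero.
theta, P = ekf_angle_update(theta_pred=3.1, P_pred=0.5, z=-3.1, R=0.5)
```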
Towards a Context-Aware IDE-Based Meta Search Engine for Recommendation about Programming Errors and Exceptions | Study shows that software developers spend about 19% of their time looking
for information in the web during software development and maintenance.
Traditional web search forces them to leave the working environment (e.g., IDE)
and look for information in the web browser. It also does not consider the
context of the problems that the developers search solutions for. The frequent
switching between the web browser and the IDE is both time-consuming and
distracting, and the keyword-based traditional web search often does not help
much in problem solving. In this paper, we propose an Eclipse IDE-based web
search solution that exploits the APIs provided by three popular web search
engines -- Google, Yahoo, and Bing -- and a popular programming Q & A site, Stack
Overflow, and captures the content relevance, context relevance, popularity and
search engine confidence of each candidate result against the encountered
programming problems. Experiments with 75 programming errors and exceptions
using the proposed approach show that including different types of context
information associated with a given exception can enhance the recommendation
accuracy for that exception. Experiments both with two existing approaches
and existing web search engines confirm that our approach can perform better
than them in terms of recall, mean precision and other performance measures
with little computational cost.
| 1 | 0 | 0 | 0 | 0 | 0 |
Infrared Flares from M Dwarfs: a Hindrance to Future Transiting Exoplanet Studies | Many current and future exoplanet missions are pushing to infrared (IR)
wavelengths where the flux contrast between the planet and star is more
favorable (Deming et al. 2009), and the impact of stellar magnetic activity is
decreased. Indeed, a recent analysis of starspots and faculae found these forms
of stellar activity do not substantially impact the transit signatures or
science potential for FGKM stars with JWST (Zellem et al. 2017). However, this
is not true in the case of flares, which I demonstrate in this note can be a
hindrance to transit studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topic Identification for Speech without ASR | Modern topic identification (topic ID) systems for speech use automatic
speech recognition (ASR) to produce speech transcripts, and perform supervised
classification on such ASR outputs. However, under resource-limited conditions,
the manually transcribed speech required to develop standard ASR systems can be
severely limited or unavailable. In this paper, we investigate alternative
unsupervised solutions to obtaining tokenizations of speech in terms of a
vocabulary of automatically discovered word-like or phoneme-like units, without
depending on the supervised training of ASR systems. Moreover, using automatic
phoneme-like tokenizations, we demonstrate that a convolutional neural network
based framework for learning spoken document representations provides
competitive performance compared to a standard bag-of-words representation, as
evidenced by comprehensive topic ID evaluations on both single-label and
multi-label classification tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
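The unsupervised tokenization route can be mimicked on synthetic feature vectors: cluster frames into automatically discovered "units", then represent each document as a bag of unit counts. This is an illustrative stand-in (plain k-means on toy data); real systems discover word-like or phoneme-like units from audio:

```python
import numpy as np

def bag_of_units(frames_per_doc, n_units=8, n_iter=20, rng=None):
    """Tokenize frame vectors into discovered units via k-means, then count
    units per document, giving a bag-of-words over automatic units."""
    rng = np.random.default_rng(rng)
    all_frames = np.vstack(frames_per_doc)
    # Plain Lloyd's k-means on the pooled frames.
    centers = all_frames[rng.choice(len(all_frames), n_units, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((all_frames[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_units):
            if np.any(labels == k):
                centers[k] = all_frames[labels == k].mean(axis=0)
    bags = []
    for doc in frames_per_doc:
        lab = np.argmin(((doc[:, None] - centers) ** 2).sum(-1), axis=1)
        bags.append(np.bincount(lab, minlength=n_units))
    return np.array(bags)

rng = np.random.default_rng(0)
# Six synthetic "documents" of 100 four-dimensional frames, two latent topics.
docs = [rng.normal(loc=i % 2, size=(100, 4)) for i in range(6)]
B = bag_of_units(docs, n_units=4, rng=1)
```

The resulting count matrix B is exactly the kind of representation a downstream topic-ID classifier would consume in place of ASR transcripts.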
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals | Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky,
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic, LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96% that compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
| 1 | 0 | 0 | 0 | 0 | 0 |
Zonotope hit-and-run for efficient sampling from projection DPPs | Determinantal point processes (DPPs) are distributions over sets of items
that model diversity using kernels. Their applications in machine learning
include summary extraction and recommendation systems. Yet, the cost of
sampling from a DPP is prohibitive in large-scale applications, which has
triggered an effort towards efficient approximate samplers. We build a novel
MCMC sampler that combines ideas from combinatorial geometry, linear
programming, and Monte Carlo methods to sample from DPPs with a fixed sample
cardinality, also called projection DPPs. Our sampler leverages the ability of
the hit-and-run MCMC kernel to efficiently move across convex bodies. Previous
theoretical results yield a fast mixing time of our chain when targeting a
distribution that is close to a projection DPP, but not a DPP in general. Our
empirical results demonstrate that this extends to sampling projection DPPs,
i.e., our sampler is more sample-efficient than previous approaches which in
turn translates to faster convergence when dealing with costly-to-evaluate
functions, such as summary extraction in our experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
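The hit-and-run kernel the sampler builds on can be illustrated on the simplest convex body, an axis-aligned box. This is generic MCMC machinery, not the paper's zonotope construction:

```python
import numpy as np

def hit_and_run_box(lo, hi, n_steps=1000, rng=None):
    """Uniform sampling from the box [lo, hi] with the hit-and-run kernel.

    Each step: draw a uniformly random direction, compute the chord where
    the line through the current point meets the body, then jump to a
    uniform point on that chord.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = (lo + hi) / 2.0
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        # Solve lo <= x + t*d <= hi coordinate-wise for the chord [t_min, t_max].
        with np.errstate(divide="ignore"):
            t1 = (lo - x) / d
            t2 = (hi - x) / d
        t_min = np.max(np.minimum(t1, t2))
        t_max = np.min(np.maximum(t1, t2))
        x = x + rng.uniform(t_min, t_max) * d
        samples.append(x.copy())
    return np.array(samples)

S = hit_and_run_box(lo=[0, 0], hi=[1, 2], n_steps=2000, rng=0)
```

For a zonotope the chord computation requires linear programming rather than the closed form above, which is exactly where the paper's combinatorial-geometry machinery enters.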
Brownian motion: from kinetics to hydrodynamics | Brownian motion has served as a pilot of studies in diffusion and other
transport phenomena for over a century. The foundation of Brownian motion, laid
by Einstein, has been recognized as incomplete since the late 1960s because it
fails to take important hydrodynamic effects into
account. The hydrodynamic effects yield a time dependence of the diffusion
coefficient, and this extends the ordinary hydrodynamics. However, the time
profile of the diffusion coefficient across the kinetic and hydrodynamic
regions is still absent, which prohibits a complete description of Brownian
motion in the entire course of time. Here we close this gap. We manage to
separate the diffusion process into two parts: a kinetic process governed by
the kinetics based on the molecular chaos approximation and a hydrodynamic process
described by linear hydrodynamics. We find the analytical solution of vortex
backflow of hydrodynamic modes triggered by a tagged particle. Coupling it to
the kinetic process we obtain explicit expressions of the velocity
autocorrelation function and the time profile of diffusion coefficient. This
leads to an accurate account of both kinetic and hydrodynamic effects. Our
theory is applicable to fluid and Brownian particles, even irregularly shaped
objects, in very general environments ranging from dilute gases to dense
liquids. The analytical results are in excellent agreement with numerical
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Natural Time, Nowcasting and the Physics of Earthquakes: Estimation of Seismic Risk to Global Megacities | This paper describes the use of the idea of natural time to propose a new
method for characterizing the seismic risk to the world's major cities at risk
of earthquakes. Rather than focus on forecasting, which is the computation of
probabilities of future events, we define the term seismic nowcasting, which is
the computation of the current state of seismic hazard in a defined geographic
region.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile | Owing to their connection with generative adversarial networks (GANs),
saddle-point problems have recently attracted considerable interest in machine
learning and beyond. By necessity, most theoretical guarantees revolve around
convex-concave (or even linear) problems; however, making theoretical inroads
towards efficient GAN training depends crucially on moving beyond this classic
framework. To make piecemeal progress along these lines, we analyze the
behavior of mirror descent (MD) in a class of non-monotone problems whose
solutions coincide with those of a naturally associated variational inequality
- a property which we call coherence. We first show that ordinary, "vanilla" MD
converges under a strict version of this condition, but not otherwise; in
particular, it may fail to converge even in bilinear models with a unique
solution. We then show that this deficiency is mitigated by optimism: by taking
an "extra-gradient" step, optimistic mirror descent (OMD) converges in all
coherent problems. Our analysis generalizes and extends the results of
Daskalakis et al. (2018) for optimistic gradient descent (OGD) in bilinear
problems, and makes concrete headway for establishing convergence beyond
convex-concave games. We also provide stochastic analogues of these results,
and we validate our analysis by numerical experiments in a wide array of GAN
models (including Gaussian mixture models, as well as the CelebA and CIFAR-10
datasets).
| 0 | 0 | 0 | 1 | 0 | 0 |
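The bilinear failure mode and the extra-gradient fix can be reproduced in a few lines, with plain Euclidean gradient steps standing in for mirror descent (the saddle objective f(x, y) = x*y is our toy example, not the paper's):

```python
import math

def gda_step(x, y, eta):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    return x - eta * y, y + eta * x

def extragradient_step(x, y, eta):
    """Extra-gradient step: evaluate gradients at a look-ahead half-step."""
    xh, yh = x - eta * y, y + eta * x   # look-ahead point
    return x - eta * yh, y + eta * xh   # update with look-ahead gradients

eta = 0.1
x1 = y1 = x2 = y2 = 1.0
for _ in range(2000):
    x1, y1 = gda_step(x1, y1, eta)
    x2, y2 = extragradient_step(x2, y2, eta)

norm_gda = math.hypot(x1, y1)  # spirals outward: diverges
norm_eg = math.hypot(x2, y2)   # contracts to the saddle at the origin
```

A direct computation shows the vanilla iterates grow by a factor of sqrt(1 + eta^2) per step, while the extra-gradient iterates shrink by sqrt((1 - eta^2)^2 + eta^2) < 1, matching the abstract's claim that optimism rescues convergence in bilinear models.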
Design of an Autonomous Precision Pollination Robot | Precision robotic pollination systems can not only fill the gap of declining
natural pollinators, but can also surpass them in efficiency and uniformity,
helping to feed the fast-growing human population on Earth. This paper presents
the design and ongoing development of an autonomous robot named "BrambleBee",
which aims at pollinating bramble plants in a greenhouse environment. Partially
inspired by the ecology and behavior of bees, BrambleBee employs
state-of-the-art localization and mapping, visual perception, path planning,
motion control, and manipulation techniques to create an efficient and robust
autonomous pollination system.
| 1 | 0 | 0 | 0 | 0 | 0 |
A sufficiently complicated noded Schottky group of rank three | The theoretical existence of non-classical Schottky groups is due to Marden.
Explicit examples of such groups are known only in rank two, the first
one by Yamamoto in 1991 and later by Williams in 2009. In 2006, Maskit and
the author provided a theoretical method to obtain examples of non-classical
Schottky groups in any rank. The method assumes the knowledge of some algebraic
limits of Schottky groups, called sufficiently complicated noded Schottky
groups, whose existence was stated. In this paper we provide an explicit
construction of a sufficiently complicated noded Schottky group of rank three
and explain how to construct explicit non-classical Schottky groups of
rank three.
| 0 | 0 | 1 | 0 | 0 | 0 |
DONUT: CTC-based Query-by-Example Keyword Spotting | Keyword spotting--or wakeword detection--is an essential feature for
hands-free operation of modern voice-controlled devices. With such devices
becoming ubiquitous, users might want to choose a personalized custom wakeword.
In this work, we present DONUT, a CTC-based algorithm for online
query-by-example keyword spotting that enables custom wakeword detection. The
algorithm works by recording a small number of training examples from the user,
generating a set of label sequence hypotheses from these training examples, and
detecting the wakeword by aggregating the scores of all the hypotheses given a
new audio recording. Our method combines the generalization and
interpretability of CTC-based keyword spotting with the user-adaptation and
convenience of a conventional query-by-example system. DONUT has low
computational requirements and is well-suited for both learning and inference
on embedded systems without requiring private user data to be uploaded to the
cloud.
| 1 | 0 | 0 | 0 | 0 | 0 |
Emulation of the space radiation environment for materials testing and radiobiological experiments | Radiobiology studies on the effects of galactic cosmic ray radiation utilize
mono-energetic single-ion particle beams, where the projected doses for
exploration missions are delivered using highly acute exposures. This methodology
does not replicate the multi-ion species and energies found in the space
radiation environment, nor does it reflect the low dose rate found in
interplanetary space. In radiation biology studies, as well as in the
assessment of health risk to astronaut crews, the differences in the biological
effectiveness of different ions are primarily attributed to differences in the
linear energy transfer of the radiation spectrum. Here we show that the linear
energy transfer spectrum of the intravehicular environment of, e.g.,
spaceflight vehicles can be accurately generated experimentally by perturbing
the intrinsic properties of hydrogen-rich crystalline materials in order to
instigate specific nuclear spallation and fragmentation processes when placed
in an accelerated mono-energetic heavy ion beam. Modifications to the internal
geometry and chemical composition of the materials allow for the shaping of the
emerging field to specific spectra that closely resemble the intravehicular
field. Our approach can also be utilized to emulate the external galactic
cosmic ray field, the planetary surface spectrum (e.g., Mars), and the local
radiation environment of orbiting satellites. This provides the first instance
of a true ground-based analog for characterizing the effects of space
radiation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Convex Relaxations for Pose Graph Optimization with Outliers | Pose Graph Optimization involves the estimation of a set of poses from
pairwise measurements and provides a formalization for many problems arising in
mobile robotics and geometric computer vision. In this paper, we consider the
case in which a subset of the measurements fed to pose graph optimization is
spurious. Our first contribution is to develop robust estimators that can cope
with heavy-tailed measurement noise, hence increasing robustness to the
presence of outliers. Since the resulting estimators require solving nonconvex
optimization problems, we further develop convex relaxations that approximately
solve those problems via semidefinite programming. We then provide conditions
under which the proposed relaxations are exact. Contrarily to existing
approaches, our convex relaxations do not rely on the availability of an
initial guess for the unknown poses, hence they are more suitable for setups in
which such a guess is not available (e.g., multi-robot localization, recovery
after localization failure). We tested the proposed techniques in extensive
simulations, and we show that some of the proposed relaxations are indeed tight
(i.e., they solve the original nonconvex problem exactly) and ensure
accurate estimation in the face of a large number of outliers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasi-Frobenius-splitting and lifting of Calabi-Yau varieties in characteristic $p$ | Extending the notion of Frobenius-splitting, we prove that every finite
height Calabi-Yau variety defined over an algebraically closed field of
positive characteristic can be lifted to the ring of Witt vectors of length
two.
| 0 | 0 | 1 | 0 | 0 | 0 |
Consistency and Asymptotic Normality of Latent Block Model Estimators | The Latent Block Model (LBM) is a model-based method to cluster simultaneously
the $d$ columns and $n$ rows of a data matrix. Parameter estimation in LBM is a
difficult and multifaceted problem. Although various estimation strategies have
been proposed and are now well understood empirically, theoretical guarantees
about their asymptotic behavior are rather sparse. We show here that under some
mild conditions on the parameter space, and in an asymptotic regime where
$\log(d)/n$ and $\log(n)/d$ tend to $0$ when $n$ and $d$ tend to $+\infty$, (1)
the maximum-likelihood estimate of the complete model (with known labels) is
consistent and (2) the log-likelihood ratios are equivalent under the complete
and observed (with unknown labels) models. This equivalence allows us to
transfer the asymptotic consistency to the maximum likelihood estimate under
the observed model. Moreover, the variational estimator is also consistent.
| 0 | 0 | 1 | 1 | 0 | 0 |
Linking Generative Adversarial Learning and Binary Classification | In this note, we point out a basic link between generative adversarial (GA)
training and binary classification -- any powerful discriminator essentially
computes an (f-)divergence between real and generated samples. The result,
repeatedly re-derived in decision theory, has implications for GA Networks
(GANs), providing an alternative perspective on training f-GANs by designing
the discriminator loss function.
| 1 | 0 | 0 | 1 | 0 | 0 |
High Speed Elephant Flow Detection Under Partial Information | In this paper we introduce a new framework to detect elephant flows at very
high speed rates and under uncertainty. The framework provides exact
mathematical formulas to compute the detection likelihood and introduces a new
flow reconstruction lemma under partial information. These theoretical results
lead to the design of BubbleCache, a new elephant flow detection algorithm
designed to operate near the optimal tradeoff between computational scalability
and accuracy by dynamically tracking the traffic's natural cutoff sampling
rate. We demonstrate on a real world 100 Gbps network that the BubbleCache
algorithm helps reduce the computational cost by a factor of 1000 and the
memory requirements by a factor of 100 while detecting the top flows on the
network with very high probability.
| 1 | 0 | 0 | 0 | 0 | 0 |
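The BubbleCache algorithm itself is not reproduced here; the underlying idea that sampling preserves the heavy tail of the flow-size distribution can be sketched generically (flow names, rates, and thresholds below are synthetic):

```python
import random
from collections import Counter

def detect_elephants(packets, sample_rate, threshold):
    """Flag flows whose *sampled* packet count exceeds a threshold.

    Sampling at rate p cuts per-packet work by roughly a factor 1/p, while
    elephant flows, by definition, still surface with high probability.
    This is a generic sampled heavy-hitter sketch, not BubbleCache itself.
    """
    counts = Counter()
    for flow_id in packets:
        if random.random() < sample_rate:
            counts[flow_id] += 1
    return {f for f, c in counts.items() if c >= threshold}

random.seed(7)
# Synthetic traffic: two elephants and many single-packet mice.
packets = ["elephant-1"] * 50_000 + ["elephant-2"] * 30_000
packets += [f"mouse-{i}" for i in range(20_000)]
random.shuffle(packets)

found = detect_elephants(packets, sample_rate=0.01, threshold=50)
```

At a 1% sampling rate the elephants are expected to appear hundreds of times in the sample, while each mouse contributes at most one sampled packet, so the threshold separates them cleanly.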
Scalable k-Means Clustering via Lightweight Coresets | Coresets are compact representations of data sets such that models trained on
a coreset are provably competitive with models trained on the full data set. As
such, they have been successfully used to scale up clustering models to massive
data sets. While existing approaches generally only allow for multiplicative
approximation errors, we propose a novel notion of lightweight coresets that
allows for both multiplicative and additive errors. We provide a single
algorithm to construct lightweight coresets for k-means clustering as well as
soft and hard Bregman clustering. The algorithm is substantially faster than
existing constructions, embarrassingly parallel, and the resulting coresets are
smaller. We further show that the proposed approach naturally generalizes to
statistical k-means clustering and that, compared to existing results, it can
be used to compute smaller summaries for empirical risk minimization. In
extensive experiments, we demonstrate that the proposed algorithm outperforms
existing data summarization strategies in practice.
| 1 | 0 | 0 | 1 | 0 | 0 |
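The lightweight-coreset construction is simple enough to state in full: sample points with probability half-uniform and half-proportional to squared distance from the data mean, then reweight by the inverse sampling probability. The sketch below follows that recipe; variable names are ours:

```python
import numpy as np

def lightweight_coreset(X, m, rng=None):
    """Sample a lightweight coreset of m weighted points from X (n x d).

    Sampling distribution: q(x) = 1/(2n) + d(x, mean)^2 / (2 * sum of all
    squared distances); points are drawn i.i.d. from q and each receives
    importance weight 1 / (m * q(x)).
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    dist2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / n + 0.5 * dist2 / dist2.sum()
    idx = rng.choice(n, size=m, p=q)
    weights = 1.0 / (m * q[idx])
    return X[idx], weights

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
C, w = lightweight_coreset(X, m=500, rng=1)
```

The weights make the coreset an unbiased estimator of sums over the full data set: in expectation they add up to n, so weighted k-means costs on (C, w) track the full-data costs.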
Scaling relations in the diffusive infiltration in fractals | In a recent work on fluid infiltration in a Hele-Shaw cell with the
pore-block geometry of Sierpinski carpets (SCs), the area filled by the
invading fluid was shown to scale as F~t^n, with n<1/2, thus providing a
macroscopic realization of anomalous diffusion [Filipovitch et al, Water
Resour. Res. 52 5167 (2016)]. The results agree with simulations of a diffusion
equation with constant pressure at one of the borders of those fractals, but
the exponent n is very different from the anomalous exponent nu=1/D_W of single
particle diffusion in the same fractals (D_W is the random walk dimension).
Here we use a scaling approach to show that those exponents are related as
n=nu(D_F-D_B), where D_F and D_B are the fractal dimensions of the bulk and of
the border from which diffusing particles come, respectively. This relation is
supported by accurate numerical estimates in two SCs and in two generalized
Menger sponges (MSs), in which we performed simulations of single particle
random walks (RWs) with a rigid impermeable border and of a diffusive
infiltration model in which that border is permanently filled with diffusing
particles. This study includes one MS whose external border is also fractal.
The exponent relation is also consistent with the recent simulational and
experimental results on fluid infiltration in SCs, and explains the approximate
quadratic dependence of n on D_F in these fractals. We also show that the
mean-square displacement of single particle RWs has log-periodic oscillations,
whose periods are similar for fractals with the same scaling factor in the
generator (even with different embedding dimensions), which is consistent with
the discrete scale invariance scenario. The roughness of a diffusion front
defined in the infiltration problem also shows this type of oscillation, which
is enhanced in fractals with narrow channels between large lacunas.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Sequential MCMC for Combined State and Parameter Estimation | In the case of a linear state space model, we implement an MCMC sampler with
two phases. In the learning phase, a self-tuning sampler is used to learn the
parameter mean and covariance structure. In the estimation phase, the parameter
mean and covariance structure informs the proposal mechanism and is also used
in a delayed-acceptance algorithm. Information on the resulting state of the
system is given by a Gaussian mixture. In on-line mode, the algorithm is
adaptive and uses a sliding window approach to accelerate sampling speed and to
maintain appropriate acceptance rates. We apply the algorithm to joint state
and parameter estimation in the case of irregularly sampled GPS time series
data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Information sensitivity functions to assess parameter information gain and identifiability of dynamical systems | A new class of functions, called the `Information sensitivity functions'
(ISFs), which quantify the information gain about the parameters through the
measurements/observables of a dynamical system, is presented. These functions
can be easily computed through classical sensitivity functions alone and are
based on Bayesian and information-theoretic approaches. While marginal
information gain is quantified by decrease in differential entropy,
correlations between arbitrary sets of parameters are assessed through mutual
information. For individual parameters these information gains are also
presented as marginal posterior variances, and, to assess the effect of
correlations, as conditional variances when other parameters are given. The
easy-to-interpret ISFs can be used to a) identify time-intervals or regions in
dynamical system behaviour where information about the parameters is
concentrated; b) assess the effect of measurement noise on the information gain
for the parameters; c) assess whether sufficient information in an experimental
protocol (input, measurements, and their frequency) is available to identify
the parameters; d) assess correlation in the posterior distribution of the
parameters to identify the sets of parameters that are likely to be
indistinguishable; and e) assess identifiability problems for particular sets
of parameters.
| 0 | 0 | 0 | 1 | 0 | 0 |
Combinatorial cost: a coarse setting | The main inspiration for this paper is a paper by Elek where he introduces
combinatorial cost for graph sequences. We show that having cost equal to 1 and
hyperfiniteness are coarse invariants. We also show `cost-1' for box spaces
behaves multiplicatively when taking subgroups. We show that graph sequences
coming from Farber sequences of a group have property A if and only if the
group is amenable. The same is true for hyperfiniteness. This generalises a
theorem by Elek. Furthermore we optimise this result when Farber sequences are
replaced by sofic approximations. In doing so we introduce a new concept:
property almost-A.
| 0 | 0 | 1 | 0 | 0 | 0 |
Uncharted Forest: A Technique for Exploratory Data Analysis | Exploratory data analysis is crucial for developing and understanding
classification models from high-dimensional datasets. We explore the utility of
a new unsupervised tree ensemble called uncharted forest for visualizing class
associations, sample-sample associations, class heterogeneity, and
uninformative classes for provenance studies. The uncharted forest algorithm
can be used to partition data using random selections of variables and metrics
based on statistical spread. After each tree is grown, a tally of the samples
that arrive at every terminal node is maintained. Those tallies are stored in
a single sample association matrix, and a likelihood measure for each sample being
partitioned with one another can be made. That matrix may be readily viewed as
a heat map, and the probabilities can be quantified via new metrics that
account for class or cluster membership. We display the advantages and
limitations of using this technique by applying it to two classification
datasets and three provenance study datasets. Two of the metrics presented in
this paper are also compared with widely used metrics from two algorithms that
have variance-based clustering mechanisms.
| 0 | 0 | 0 | 1 | 0 | 0 |
Surface thermophysical properties investigation of the potentially hazardous asteroid (99942) Apophis | In this work, we investigate the surface thermophysical properties (thermal
emissivity, thermal inertia, roughness fraction and geometric albedo) of
asteroid (99942) Apophis, using the currently available thermal infrared
observations of CanariCam on Gran Telescopio CANARIAS and far-infrared data by
PACS of Herschel, on the basis of the Advanced thermophysical model. We show
that the thermal emissivity of Apophis should be wavelength dependent from
$8.70~\mu m$ to $160~\mu m$, and the maximum emissivity may arise around
$20~\mu m$ similar to that of Vesta. Moreover, we further derive the thermal
inertia, roughness fraction, geometric albedo and effective diameter of Apophis
within a possible 1$\sigma$ scale of
$\Gamma=100^{+100}_{-52}\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\rm r}=0.78\sim1.0$,
$p_{\rm v}=0.286^{+0.030}_{-0.026}$, $D_{\rm eff}=378^{+19}_{-25}\rm~m$, and
3$\sigma$ scale of $\Gamma=100^{+240}_{-100}\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\rm
r}=0.2\sim1.0$, $p_{\rm v}=0.286^{+0.039}_{-0.029}$, $D_{\rm
eff}=378^{+27}_{-29}\rm~m$. The derived low thermal inertia but high roughness
fraction may imply that Apophis could have regolith on its surface, and that
less regolith migration has happened in comparison with asteroid Itokawa.
Our results show that small-size asteroids could also have fine regolith on the
surface, and further infer that Apophis may have been delivered from the Main
Belt by the Yarkovsky effect.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reconfigurable cluster state generation in specially poled nonlinear waveguide arrays | We present a new approach for generating cluster states on-chip, with the
state encoded in the spatial component of the photonic wavefunction. We show
that for spatial encoding, a change of measurement basis can improve the
practicality of cluster state algorithm implementation, and demonstrate this by
simulating Grover's search algorithm. Our state generation scheme involves
shaping the wavefunction produced by spontaneous parametric down-conversion in
on-chip waveguides using specially tailored nonlinear poling patterns.
Furthermore the form of the cluster state can be reconfigured quickly by
driving different waveguides in the array.
| 0 | 1 | 0 | 0 | 0 | 0 |
Whole planet coupling between climate, mantle, and core: Implications for the evolution of rocky planets | Earth's climate, mantle, and core interact over geologic timescales. Climate
influences whether plate tectonics can take place on a planet, with cool
climates being favorable for plate tectonics because they enhance stresses in
the lithosphere, suppress plate boundary annealing, and promote hydration and
weakening of the lithosphere. Plate tectonics plays a vital role in the
long-term carbon cycle, which helps to maintain a temperate climate. Plate
tectonics provides long-term cooling of the core, which is vital for generating
a magnetic field, and the magnetic field is capable of shielding atmospheric
volatiles from the solar wind. Coupling between climate, mantle, and core can
potentially explain the divergent evolution of Earth and Venus. As Venus lies
too close to the sun for liquid water to exist, there is no long-term carbon
cycle, and thus the climate is extremely hot. Therefore plate tectonics cannot
operate and a long-lived core dynamo cannot be sustained due to insufficient
core cooling. On planets within the habitable zone where liquid water is
possible, a wide range of evolutionary scenarios can take place depending on
initial atmospheric composition, bulk volatile content, or the timing of when
plate tectonics initiates, among other factors. Many of these evolutionary
trajectories would render the planet uninhabitable. However, there is still
significant uncertainty over the nature of the coupling between climate,
mantle, and core. Future work is needed to constrain potential evolutionary
scenarios and the likelihood of an Earth-like evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solutions of the Helmholtz equation given by solutions of the eikonal equation | We find the form of the refractive index such that a solution, $S$, of the
eikonal equation yields an exact solution, $\exp ({\rm i} k_{0} S)$, of the
corresponding Helmholtz equation.
| 0 | 1 | 0 | 0 | 0 | 0 |
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation | Semantic Textual Similarity (STS) measures the meaning similarity of
sentences. Applications include machine translation (MT), summarization,
generation, question answering (QA), short answer grading, semantic search,
dialog and conversational systems. The STS shared task is a venue for assessing
the current state-of-the-art. The 2017 task focuses on multilingual and
cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE)
data. The task obtained strong participation from 31 teams, with 17
participating in all language tracks. We summarize performance and review a
selection of well-performing methods. Analysis highlights common errors,
providing insight into the limitations of existing models. To support ongoing
work on semantic representations, the STS Benchmark is introduced as a new
shared training and evaluation set carefully selected from the corpus of
English STS shared task data (2012-2017).
| 1 | 0 | 0 | 0 | 0 | 0 |
Modelling Luminous-Blue-Variable Isolation | Observations show that luminous blue variables (LBVs) are far more dispersed
than massive O-type stars, and Smith & Tombleson suggested that these large
separations are inconsistent with a single-star evolution model of LBVs.
Instead, they suggested that the large distances are most consistent with
binary evolution scenarios. To test these suggestions, we modelled young
stellar clusters and their passive dissolution, and we find that, indeed, the
standard single-star evolution model is mostly inconsistent with the observed
LBV environments. If LBVs are single stars, then the lifetimes inferred from
their luminosity and mass are far too short to be consistent with their extreme
isolation. This implies that there is either an inconsistency in the
luminosity-to-mass mapping or the mass-to-age mapping. In this paper, we
explore binary solutions that modify the mass-to-age mapping and are consistent
with the isolation of LBVs. For the binary scenarios, our crude models suggest
that LBVs are rejuvenated stars. They are either the result of mergers or they
are mass gainers and received a kick when the primary star exploded. In the
merger scenario, if the primary is about 19 solar masses, then the binary has
enough time to wander far afield, merge and form a rejuvenated star. In the
mass-gainer and kick scenario, we find that LBV isolation is consistent with a
wide range of kick velocities, anywhere from 0 to ~105 km/s. In either
scenario, binarity seems to play a major role in the isolation of LBVs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Service adoption spreading in online social networks | The collective behaviour of people adopting an innovation, product or online
service is commonly interpreted as a spreading phenomenon throughout the fabric
of society. This process is arguably driven by social influence, social
learning and by external effects like media. Observations of such processes
date back to the seminal studies by Rogers and Bass, and their mathematical
modelling has taken two directions: One paradigm, called simple contagion,
identifies adoption spreading with an epidemic process. The other one, named
complex contagion, is concerned with behavioural thresholds and successfully
explains the emergence of large cascades of adoption resulting in a rapid
spreading often seen in empirical data. The observation of real world adoption
processes has become easier lately due to the availability of large digital
social network and behavioural datasets. This has allowed simultaneous study of
network structures and dynamics of online service adoption, shedding light on
the mechanisms and external effects that influence the temporal evolution of
behavioural or innovation adoption. These advancements have induced the
development of more realistic models of social spreading phenomena, which in
turn have provided remarkably good predictions of various empirical adoption
processes. In this chapter we review recent data-driven studies addressing
real-world service adoption processes. Our studies provide the first detailed
empirical evidence of a heterogeneous threshold distribution in adoption. We
also describe the modelling of such phenomena with formal methods and
data-driven simulations. Our objective is to understand the effects of
identified social mechanisms on service adoption spreading, and to provide
potential new directions and open questions for future research.
| 1 | 1 | 0 | 0 | 0 | 0 |
The Amplitude-Phase Decomposition for the Magnetotelluric Impedance Tensor | The Phase Tensor (PT) marked a breakthrough in understanding and analysis of
electric galvanic distortion but does not contain any impedance amplitude
information and therefore cannot quantify resistivity without complementary
data. We formulate a complete impedance tensor decomposition into the PT and a
new Amplitude Tensor (AT) that is shown to be complementary and mathematically
independent to the PT. We show that for the special cases of 1D and 2D models,
the geometric AT parameters (strike and skew angles) converge to PT parameters
and the singular values of the AT correspond to the impedance amplitudes of the
transverse electric and transverse magnetic modes. In all cases, we show that
the AT contains both galvanic and inductive amplitudes, the latter of which is
argued to be physically related to the inductive information of the PT. The
geometric parameters of the inductive AT and the PT represent the same geometry
of the subsurface conductivity distribution that is affected by induction
processes, and therefore we hypothesise that geometric PT parameters can be
used to approximate the inductive AT. Then, this hypothesis leads to the
estimation of the galvanic AT which is equal to the galvanic electric
distortion tensor at the lowest measured period. This estimation of the
galvanic distortion departs from the common assumption to consider 1D or 2D
regional structures and can be applied for general 3D subsurfaces. We
demonstrate exemplarily with an explicit formulation how our hypothesis can be
used to recover the galvanic electric anisotropic distortion for 2D
subsurfaces, which was, until now, believed to be indeterminable for 2D data.
Moreover, we illustrate the AT as a mapping tool and we compare it to the PT
with both synthetic and real data examples. Lastly, we argue that the AT can
provide important non-redundant amplitude information to PT inversions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Real-time Convolutional Neural Networks for Emotion and Gender Classification | In this paper we propose and implement a general convolutional neural network
(CNN) building framework for designing real-time CNNs. We validate our models
by creating a real-time vision system which accomplishes the tasks of face
detection, gender classification and emotion classification simultaneously in
one blended step using our proposed CNN architecture. After presenting the
details of the training procedure setup we proceed to evaluate on standard
benchmark sets. We report accuracies of 96% in the IMDB gender dataset and 66%
in the FER-2013 emotion dataset. Along with this we also introduce the very
recent real-time enabled guided back-propagation visualization technique.
Guided back-propagation uncovers the dynamics of the weight changes and
evaluates the learned features. We argue that the careful implementation of
modern CNN architectures, the use of the current regularization methods and the
visualization of previously hidden features are necessary in order to reduce
the gap between slow performances and real-time architectures. Our system has
been validated by its deployment on a Care-O-bot 3 robot used during
RoboCup@Home competitions. All our code, demos and pre-trained architectures
have been released under an open-source license in our public repository.
| 1 | 0 | 0 | 0 | 0 | 0 |
Compound-Specific Chlorine Isotope Analysis of Organochlorines Using Gas Chromatography-Double Focus Magnetic-Sector High Resolution Mass Spectrometry | Compound-specific chlorine isotope analysis (CSIA-Cl) is a practicable and
high-performance approach for quantification of transformation processes and
pollution source apportionment of chlorinated organic compounds. This study
developed a CSIA-Cl method for perchlorethylene (PCE) and trichloroethylene
(TCE) using gas chromatography-double focus magnetic-sector high resolution
mass spectrometry (GC-DFS-HRMS) with a bracketing injection mode. The achieved
highest precision for PCE was 0.021% (standard deviation of isotope ratios),
and that for TCE was 0.025%. When one standard was used as the external
isotopic standard for another of the same analyte, the lowest standard
deviations of relative isotope-ratio variations ({\delta}37Cl') between the two
corresponding standards were 0.064% and 0.080% for PCE and TCE, respectively.
As a result, the critical {\delta}37Cl' values for differentiating two isotope ratios
are 0.26% and 0.32% for PCE and TCE, respectively, which are comparable with
those in some reported studies using GC-quadrupole MS (GC-qMS). The lower limit
of detection for CSIA-Cl of PCE was 0.1 ug/mL (0.1 ng on column), and that for
TCE was determined to be 1.0 ug/mL (1.0 ng on column). Two isotope ratio
calculation schemes, i.e., a scheme using complete molecular-ion isotopologues
and another one using a pair of neighboring isotopologues, were evaluated in
terms of precision and accuracy. The complete-isotopologue scheme showed
evidently higher precision and was deduced to be more capable of reflecting
trueness in comparison with the isotopologue-pair scheme. The CSIA-Cl method
developed in this study will be conducive to future studies concerning
transformation processes and source apportionment of PCE and TCE, and light the
way to method development of CSIA-Cl for more organochlorines.
| 0 | 1 | 0 | 0 | 0 | 0 |
Realization of an atomically thin mirror using monolayer MoSe2 | The advent of new materials such as van der Waals heterostructures propels new
research directions in condensed matter physics and enables development of
novel devices with unique functionalities. Here, we show experimentally that a
monolayer of MoSe2 embedded in a charge controlled heterostructure can be used
to realize an electrically tunable atomically-thin mirror, that effects 90%
extinction of an incident field that is resonant with its exciton transition.
The corresponding maximum reflection coefficient of 45% is only limited by the
ratio of the radiative decay rate to the linewidth of exciton transition and is
independent of incident light intensity up to 400 Watts/cm2. We demonstrate
that the reflectivity of the mirror can be drastically modified by applying a
gate voltage that modifies the monolayer charge density. Our findings could
find applications ranging from fast programmable spatial light modulators to
suspended ultra-light mirrors for optomechanical devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Equivariant mirror symmetry for the weighted projective line | In this paper, we establish equivariant mirror symmetry for the weighted
projective line. This extends the results by B. Fang, C.C. Liu and Z. Zong,
where the projective line was considered [{\it Geometry \& Topology}
24:2049-2092, 2017]. More precisely, we prove the equivalence of the
$R$-matrices for A-model and B-model at large radius limit, and establish
isomorphism for $R$-matrices for general radius. We further demonstrate that
the graph sums of higher genus cases are the same for both models, hence
establish equivariant mirror symmetry for the weighted projective line.
| 0 | 0 | 1 | 0 | 0 | 0 |
Precise Recovery of Latent Vectors from Generative Adversarial Networks | Generative adversarial networks (GANs) transform latent vectors into visually
plausible images. It is generally thought that the original GAN formulation
gives no out-of-the-box method to reverse the mapping, projecting images back
into latent space. We introduce a simple, gradient-based technique called
stochastic clipping. In experiments, for images generated by the GAN, we
precisely recover their latent vector pre-images 100% of the time. Additional
experiments demonstrate that this method is robust to noise. Finally, we show
that even for unseen images, our method appears to recover unique encodings.
| 1 | 0 | 0 | 1 | 0 | 0 |
The dependence of protostar formation on the geometry and strength of the initial magnetic field | We report results from twelve simulations of the collapse of a molecular
cloud core to form one or more protostars, comprising three field strengths
(mass-to-flux ratios, {\mu}, of 5, 10, and 20) and four field geometries (with
values of the angle between the field and rotation axes, {\theta}, of 0°,
20°, 45°, and 90°), using a smoothed particle
magnetohydrodynamics method. We find that the values of both parameters have a
strong effect on the resultant protostellar system and outflows. This ranges
from the formation of binary systems when {\mu} = 20 to strikingly differing
outflow structures for differing values of {\theta}, in particular highly
suppressed outflows when {\theta} = 90°. Misaligned magnetic fields can
also produce warped pseudo-discs where the outer regions align perpendicular to
the magnetic field but the innermost region re-orientates to be perpendicular
to the rotation axis. We follow the collapse to sizes comparable to those of
first cores and find that none of the outflow speeds exceed 8 km s$^{-1}$.
These results may place constraints on both observed protostellar outflows, and
also on which molecular cloud cores may eventually form either single stars or
binaries: a sufficiently weak magnetic field may allow for disc fragmentation,
whilst conversely the greater angular momentum transport of a strong field may
inhibit disc fragmentation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Current-Voltage Characteristics of Weyl Semimetal Semiconducting Devices, Veselago Lenses and Hyperbolic Dirac Phase | The current-voltage characteristics of a new range of devices built around
Weyl semimetals have been predicted using the Landauer formalism. The potential
step and barrier have been reconsidered for three-dimensional Weyl
semimetals, with analogies to the two-dimensional material graphene and to
optics. With the use of our results we also show how a Veselago lens can be
made from Weyl semimetals, e.g. from NbAs and NbP. Such a lens may have many
practical applications and can be used as a probing tip in a scanning tunneling
microscope (STM). The ballistic character of Weyl fermion transport inside the
semimetal tip, combined with the ideal focusing of the Weyl fermions (by
Veselago lens) on the surface of the tip may create a very narrow electron beam
from the tip to the surface of the studied material. With a Weyl semimetal
probing tip the resolution of the present STMs can be improved significantly,
and one may image not only individual atoms but also individual electron
orbitals or chemical bonding and therewith to resolve the long-term issue of
chemical and hydrogen bond formation. We show that applying a pressure to the
Weyl semimetal, which has no centre of spatial inversion, one may model matter at
extreme conditions such as those arising in the vicinity of a black hole. As
the materials Cd3As2 and Na3Bi show an asymmetry in their Dirac cones, a
scaling factor was used to model this asymmetry. The scaling factor created
additional regions of no propagation and condensed the appearance of
resonances. We argue that under an external pressure there may arise a
topological phase transition in Weyl semimetals, where the electron transport
changes character and becomes anisotropic. There a hyperbolic Dirac phase
occurs, where there is strong light absorption and photo-current generation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Safer Classification by Synthesis | The discriminative approach to classification using deep neural networks has
become the de-facto standard in various fields. Complementing recent
reservations about safety against adversarial examples, we show that
conventional discriminative methods can easily be fooled to provide incorrect
labels with very high confidence to out of distribution examples. We posit that
a generative approach is the natural remedy for this problem, and propose a
method for classification using generative models. At training time, we learn a
generative model for each class, while at test time, given an example to
classify, we query each generator for its most similar generation, and select
the class corresponding to the most similar one. Our approach is general and
can be used with expressive models such as GANs and VAEs. At test time, our
method accurately "knows when it does not know," and provides resilience to out
of distribution examples while maintaining competitive performance for standard
examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
A geometrical analysis of global stability in trained feedback networks | Recurrent neural networks have been extensively studied in the context of
neuroscience and machine learning due to their ability to implement complex
computations. While substantial progress in designing effective learning
algorithms has been achieved in the last years, a full understanding of trained
recurrent networks is still lacking. Specifically, the mechanisms that allow
computations to emerge from the underlying recurrent dynamics are largely
unknown. Here we focus on a simple, yet underexplored computational setup: a
feedback architecture trained to associate a stationary output to a stationary
input. As a starting point, we derive an approximate analytical description of
global dynamics in trained networks which assumes uncorrelated connectivity
weights in the feedback and in the random bulk. The resulting mean-field theory
suggests that the task admits several classes of solutions, which imply
different stability properties. Different classes are characterized in terms of
the geometrical arrangement of the readout with respect to the input vectors,
defined in the high-dimensional space spanned by the network population. We
find that such an approximate theoretical approach can be used to understand how
standard training techniques implement the input-output task in finite-size
feedback networks. In particular, our simplified description captures the local
and the global stability properties of the target solution, and thus predicts
training performance.
| 0 | 0 | 0 | 0 | 1 | 0 |
Submolecular-resolution non-invasive imaging of interfacial water with atomic force microscopy | Scanning probe microscopy (SPM) has been extensively applied to probe
interfacial water in many interdisciplinary fields but the disturbance of the
probes on the hydrogen-bonding structure of water has remained an intractable
problem. Here we report submolecular-resolution imaging of the water clusters
on a NaCl(001) surface within the nearly non-invasive region by a qPlus-based
noncontact atomic force microscopy. Comparison with theoretical simulations
reveals that the key lies in probing the weak high-order electrostatic force
between the quadrupole-like CO-terminated tip and the polar water molecules at
large tip-water distances. This interaction allows the imaging and structural
determination of the weakly bonded water clusters and even of their metastable
states without inducing any disturbance. This work may open up new possibilities
of studying the intrinsic structure and electrostatics of ice or water on bulk
insulating surfaces, ion hydration and biological water with atomic precision.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity | SGD (Stochastic Gradient Descent) is a popular algorithm for large scale
optimization problems due to its low iterative cost. However, SGD cannot
achieve the linear convergence rate of FGD (Full Gradient Descent) because of the
inherent gradient variance. To attack the problem, mini-batch SGD was proposed
to get a trade-off in terms of convergence rate and iteration cost. In this
paper, a general CVI (Convergence-Variance Inequality) equation is presented to
state formally the interaction of convergence rate and gradient variance. Then
a novel algorithm named SSAG (Stochastic Stratified Average Gradient) is
introduced to reduce gradient variance based on two techniques, stratified
sampling and averaging over iterations that is a key idea in SAG (Stochastic
Average Gradient). Furthermore, SSAG can achieve linear convergence rate of
$\mathcal {O}((1-\frac{\mu}{8CL})^k)$ at smaller storage and iterative costs,
where $C\geq 2$ is the category number of training data. This convergence rate
depends mainly on the variance between classes, but not on the variance within
the classes. In the case of $C\ll N$ ($N$ is the training data size), SSAG's
convergence rate is much better than SAG's convergence rate of $\mathcal
{O}((1-\frac{\mu}{8NL})^k)$. Our experimental results show SSAG outperforms SAG
and many other algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
First observation of Ce volume collapse in CeN | On the occasion of the 80th anniversary of the first observation of Ce volume
collapse in CeN, a remembrance of the implications of that transcendent event is
presented, along with a review of the knowledge of Ce physical properties
available at that time. This anniversary coincides with the first
proposal of Ce as a mixed-valence element, motivating a brief review of how the
valence instability of Ce has been investigated since that time.
| 0 | 1 | 0 | 0 | 0 | 0 |
Experimentation with MANETs of Smartphones | Mobile AdHoc NETworks (MANETs) have been identified as a key emerging
technology for scenarios in which IEEE 802.11 or cellular communications are
either infeasible, inefficient, or cost-ineffective. Smartphones are the most
adequate network nodes in many of these scenarios, but it is not
straightforward to build a network with them. We extensively survey existing
possibilities to build applications on top of ad-hoc smartphone networks for
experimentation purposes, and introduce a taxonomy to classify them. We present
AdHocDroid, an Android package that creates an IP-level MANET of (rooted)
Android smartphones, and make it publicly available to the community.
AdHocDroid supports standard TCP/IP applications, providing a real smartphone
IEEE 802.11 MANET and the capability to easily change the routing protocol. We
tested our framework on several smartphones and a laptop. We validate the MANET
running off-the-shelf applications, and report on an experimental performance
evaluation, including network metrics and battery discharge rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interaction between cluster synchronization and epidemic spread in community networks | In the real world, there is a significant relation between human behaviors and
epidemic spread. In particular, the reactions among individuals in different
communities to epidemics may be different, which leads to cluster
synchronization of human behaviors. So, a mathematical model that embeds
community structures, behavioral evolution and epidemic transmission is
constructed to study the interaction between cluster synchronization and
epidemic spread. The epidemic threshold of the model is obtained by using the
Gersgorin Lemma and dynamical systems theory. By applying the Lyapunov stability
method, stability analyses of cluster synchronization and of the spreading
dynamics are presented. Then, some numerical simulations are performed to
illustrate and complement our theoretical results. As far as we know, this work
is the first one to address the interplay between cluster synchronization and
epidemic transmission in community networks, so it may deepen the understanding
of the impact of cluster behaviors on infectious disease dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Focusing on a Probability Element: Parameter Selection of Message Importance Measure in Big Data | Message importance measure (MIM) is applicable to characterize the importance
of information in the scenario of big data, similar to entropy in information
theory. In fact, MIM with a variable parameter can affect the
characterization of a distribution. Furthermore, by choosing an appropriate
parameter of MIM, it is possible to emphasize the message importance of a
certain probability element in a distribution. Therefore, parametric MIM can
play a vital role in anomaly detection of big data by focusing on probability
of an anomalous event. In this paper, we propose a parameter selection method
of MIM focusing on a probability element and then present its major properties.
In addition, we discuss the parameter selection with prior probability, and
investigate its applicability in a statistical processing model of big data for
the anomaly detection problem.
| 1 | 0 | 1 | 0 | 0 | 0 |
Deep Multi-camera People Detection | This paper addresses the problem of multi-view people occupancy map
estimation. Existing solutions for this problem either operate per-view, or
rely on a background subtraction pre-processing. Both approaches lessen the
detection performance as scenes become more crowded. The former does not
exploit joint information, whereas the latter deals with ambiguous input due to
the foreground blobs becoming more and more interconnected as the number of
targets increases.
Although deep learning algorithms have proven to excel at remarkably many
computer vision tasks, such methods have not yet been applied to this problem,
in large part due to the lack of large-scale multi-camera data-sets.
The core of our method is an architecture which makes use of monocular
pedestrian data-sets, available at a larger scale than the multi-view ones,
applies parallel processing to the multiple video streams, and jointly utilises
them. Our end-to-end deep learning method outperforms existing methods by large
margins on the commonly used PETS 2009 data-set. Furthermore, we make publicly
available a new three-camera HD data-set. Our source code and trained models
will be made available under an open-source license.
| 1 | 0 | 0 | 0 | 0 | 0 |
SMAGEXP: a galaxy tool suite for transcriptomics data meta-analysis | Background: With the proliferation of available microarray and high throughput
sequencing experiments in the public domain, the use of meta-analysis methods
increases. In these experiments, where the sample size is often limited,
meta-analysis offers the possibility to considerably enhance the statistical
power and give more accurate results. For those purposes, it combines either
effect sizes or results of single studies in an appropriate manner. The R packages
metaMA and metaRNASeq perform meta-analysis on microarray and NGS data,
respectively. They are not interchangeable as they rely on statistical modeling
specific to each technology.
Results: SMAGEXP (Statistical Meta-Analysis for Gene EXPression) integrates
metaMA and metaRNAseq packages into Galaxy. We aim to propose a unified way to
carry out meta-analysis of gene expression data, while taking care of their
specificities. We have developed this tool suite to analyse microarray data
from the Gene Expression Omnibus (GEO) database or custom data from Affymetrix
microarrays. These data are then combined to carry out a meta-analysis using the
metaMA package. SMAGEXP also allows combining raw read counts from Next
Generation Sequencing (NGS) experiments using DESeq2 and the metaRNASeq package. In
both cases, key values, independent from the technology type, are reported to
judge the quality of the meta-analysis. These tools are available on the Galaxy
main tool shed. Source code, help and installation instructions are available
on github.
Conclusion: The use of Galaxy offers an easy-to-use gene expression
meta-analysis tool suite based on the metaMA and metaRNASeq packages.
| 0 | 0 | 0 | 1 | 1 | 0 |
Big Data Fusion to Estimate Fuel Consumption: A Case Study of Riyadh | Falling oil revenues and rapid urbanization are putting a strain on the
budgets of oil producing nations which often subsidize domestic fuel
consumption. A direct way to decrease the impact of subsidies is to reduce fuel
consumption by reducing congestion and car trips. While fuel consumption models
have started to incorporate data sources from ubiquitous sensing devices, the
opportunity is to develop comprehensive models at urban scale leveraging
sources such as Global Positioning System (GPS) data and Call Detail Records.
We combine these big data sets in a novel method to model fuel consumption
within a city and estimate how it may change due to different scenarios. To do
so we calibrate a fuel consumption model for use on any car fleet fuel economy
distribution and apply it in Riyadh, Saudi Arabia. The model proposed, based on
speed profiles, is then used to test the effects on fuel consumption of
reducing flow, both randomly and by targeting the most fuel inefficient trips
in the city. The estimates considerably improve baseline methods based on
average speeds, showing the benefits of the information added by the GPS data
fusion. The presented method can be adapted to also measure emissions. The
results constitute a clear application of data analysis tools to help decision
makers compare policies aimed at achieving economic and environmental goals.
| 1 | 0 | 0 | 0 | 0 | 0 |
Person Following by Autonomous Robots: A Categorical Overview | A wide range of human-robot collaborative applications in industry, search
and rescue operations, healthcare, and social interactions require an
autonomous robot to follow its human companion. Different operating mediums and
applications pose diverse challenges by adding constraints on the choice of
sensors, the degree of autonomy, and the dynamics of the person-following robot.
Researchers have addressed these challenges in many ways and contributed to the
development of a large body of literature. This paper provides a comprehensive
overview of the literature by categorizing different aspects of
person-following by autonomous robots. Also, the corresponding operational
challenges are identified based on various design choices for ground,
underwater, and aerial scenarios. In addition, state-of-the-art methods for
perception, planning, control, and interaction are elaborately discussed, and
their feasibilities are evaluated in terms of standard operational and
performance metrics. Furthermore, several prospective application areas are
identified, and open problems are highlighted for future research.
| 1 | 0 | 0 | 0 | 0 | 0 |
Structures, phase transitions, and magnetic properties of Co3Si from first-principles calculations | Co3Si was recently reported to exhibit remarkable magnetic properties in the
nanoparticle form [Appl. Phys. Lett. 108, 152406 (2016)], yet a better
understanding of this material is still needed. Here we report a study on the
crystal structures of Co3Si using an adaptive genetic algorithm, and discuss its
electronic and magnetic properties from first-principles calculations. Several
competing phases of Co3Si have been revealed from our calculations. We show
that the hexagonal Co3Si structure reported in experiments has a lower energy in
the non-magnetic state than in the ferromagnetic state at zero temperature. The
ferromagnetic state of the hexagonal structure is dynamically unstable with
imaginary phonon modes and transforms to a new orthorhombic structure, which is
confirmed by our structure searches to have the lowest energy for both Co3Si
and Co3Ge. Magnetic properties of the experimental hexagonal structure and the
lowest-energy structures obtained from our structure searches are investigated
in detail.
| 0 | 1 | 0 | 0 | 0 | 0 |
Rigid realizations of modular forms in Calabi--Yau threefolds | We construct examples of modular rigid Calabi--Yau threefolds, which give a
realization of some new weight 4 cusp forms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Steady-state analysis of single exponential vacation in a $PH/MSP/1/\infty$ queue using roots | We consider an infinite-buffer single-server queue where inter-arrival times
are phase-type ($PH$), the service is provided according to a Markovian service
process ($MSP$), and the server may take single, exponentially distributed
vacations when the queue is empty. The proposed analysis is based on roots of
the associated characteristic equation of the vector-generating function (VGF)
of system-length distribution at a pre-arrival epoch. Also, we obtain the
steady-state system-length distribution at an arbitrary epoch along with some
important performance measures such as the mean number of customers in the
system and the mean system sojourn time of a customer. We then
establish heavy- and light-traffic approximations as well as an approximation
for the tail probabilities at the pre-arrival epoch based on one root of the
characteristic equation. Finally, we present numerical results in the form
of tables to show the effect of model parameters on the performance measures.
| 1 | 0 | 1 | 0 | 0 | 0 |
The agreement distance of rooted phylogenetic networks | The minimal number of rooted subtree prune and regraft (rSPR) operations
needed to transform one phylogenetic tree into another one induces a metric on
phylogenetic trees - the rSPR-distance. The rSPR-distance between two
phylogenetic trees $T$ and $T'$ can be characterised by a maximum agreement
forest; a forest with a minimal number of components that covers both $T$ and
$T'$. The rSPR operation has recently been generalised to phylogenetic networks
with, among others, the subnetwork prune and regraft (SNPR) operation. Here, we
introduce maximum agreement graphs as explicit representations of the
differences between two phylogenetic networks, thus generalising maximum agreement
forests. We show that maximum agreement graphs induce a metric on phylogenetic
networks - the agreement distance. While this metric does not characterise the
distances induced by SNPR and other generalisations of rSPR, we prove that it
still bounds these distances with constant factors.
| 0 | 0 | 0 | 0 | 1 | 0 |
Optimization of the Waiting Time for H-R Coordination | An analytical model of Human-Robot (H-R) coordination is presented for a
Human-Robot system executing a collaborative task in which a high level of
synchronization among the agents is desired. The influencing parameters and
decision variables that affect the waiting time of the collaborating agents
were analyzed. The performance of the model was evaluated based on the costs of
the waiting times of each of the agents at the pre-defined spatial point of
handover. The model was tested for two cases of dynamic H-R coordination
scenarios. Results indicate that this analytical model can be used as a tool
for designing an H-R system that optimizes the agent waiting time thereby
increasing the joint-efficiency of the system and making coordination fluent
and natural.
| 1 | 0 | 0 | 0 | 0 | 0 |
Introducing AIC model averaging in ecological niche modeling: a single-algorithm multi-model strategy to account for uncertainty in suitability predictions | Aim: The Akaike Information Criterion (AIC) is widely used in science to make
predictions about complex phenomena based on an entire set of models weighted
by Akaike weights. This approach (AIC model averaging; hereafter AvgAICc) is
often preferable to alternatives based on the selection of a single model.
Surprisingly, AvgAICc has not yet been introduced in ecological niche modeling
(ENM). We aimed to introduce AvgAICc in the context of ENM to serve both as an
optimality criterion in analyses that tune up model parameters and as a
multi-model prediction strategy.
Innovation: Results from the AvgAICc approach differed from those of
alternative approaches with respect to model complexity, contribution of
environmental variables, and predicted amount and geographic location of
suitable conditions for the focal species. Two theoretical properties of the
AvgAICc approach might justify that future studies will prefer its use over
alternative approaches: (1) it is not limited to make predictions based on a
single model, but it also uses secondary models that might have important
predictive power absent in a given single model favored by alternative
optimality criteria; (2) it balances goodness of fit and model accuracy, this
being of critical importance in applications of ENM that require model
transference.
Main conclusions: Our introduction of the AvgAICc approach in ENM; its
theoretical properties, which are expected to confer advantages over
alternative approaches; and the differences we found when comparing the
AvgAICc approach with alternative ones; should eventually lead to a wider use
of the AvgAICc approach. Our work should also promote further methodological
research comparing properties of the AvgAICc approach with respect to those of
alternative procedures.
| 0 | 0 | 0 | 0 | 1 | 0 |
Extrasolar Planets and Their Host Stars | In order to understand the exoplanet, you need to understand its parent star.
Astrophysical parameters of extrasolar planets are directly and indirectly
dependent on the properties of their respective host stars. These host stars
are very frequently the only visible component in the systems. This book
describes our work in the field of characterization of exoplanet host stars
using interferometry to determine angular diameters, trigonometric parallax to
determine physical radii, and SED fitting to determine effective temperatures
and luminosities. The interferometry data are based on our decade-long survey
using the CHARA Array. We describe our methods and give an update on the status
of the field, including a table with the astrophysical properties of all stars
with high-precision interferometric diameters out to 150 pc (status Nov 2016).
In addition, we elaborate in more detail on a number of particularly
significant or important exoplanet systems, particularly with respect to (1)
insights gained from transiting exoplanets, (2) the determination of system
habitable zones, and (3) the discrepancy between directly determined and
model-based stellar radii. Finally, we discuss current and future work
including the calibration of semi-empirical methods based on interferometric
data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling polypharmacy side effects with graph convolutional networks | The use of drug combinations, termed polypharmacy, is common to treat
patients with complex diseases and co-existing conditions. However, a major
consequence of polypharmacy is a much higher risk of adverse side effects for
the patient. Polypharmacy side effects emerge because of drug-drug
interactions, in which activity of one drug may change if taken with another
drug. The knowledge of drug interactions is limited because these complex
relationships are rare, and are usually not observed in relatively small
clinical testing. Discovering polypharmacy side effects thus remains an
important challenge with significant implications for patient mortality. Here,
we present Decagon, an approach for modeling polypharmacy side effects. The
approach constructs a multimodal graph of protein-protein interactions,
drug-protein target interactions, and the polypharmacy side effects, which are
represented as drug-drug interactions, where each side effect is an edge of a
different type. Decagon is developed specifically to handle such multimodal
graphs with a large number of edge types. Our approach develops a new graph
convolutional neural network for multirelational link prediction in multimodal
networks. Decagon predicts the exact side effect, if any, through which a given
drug combination manifests clinically. Decagon accurately predicts polypharmacy
side effects, outperforming baselines by up to 69%. We find that it
automatically learns representations of side effects indicative of
co-occurrence of polypharmacy in patients. Furthermore, Decagon models
particularly well side effects with a strong molecular basis, while on
predominantly non-molecular side effects, it achieves good performance because
of effective sharing of model parameters across edge types. Decagon creates
opportunities to use large pharmacogenomic and patient data to flag and
prioritize side effects for follow-up analysis.
| 0 | 0 | 0 | 1 | 1 | 0 |
Musical intervals under 12-note equal temperament: a geometrical interpretation | Musical intervals in multiples of semitones under 12-note equal temperament,
or more specifically pitch-class subsets of assigned cardinality ($n$-chords)
are conceived as positive integer points within an Euclidean $n$-space. The
number of distinct $n$-chords is inferred from combinatorics with the extension
to $n=0$, involving an Euclidean 0-space. The number of repeating $n$-chords,
or points which are turned into themselves during a circular permutation,
$T_n$, of their coordinates, is inferred from algebraic considerations.
Finally, the total number of $n$-chords and the number of $T_n$ set classes are
determined. Palindrome and pseudo palindrome $n$-chords are defined and
included among repeating $n$-chords, with regard to an equivalence relation,
$T_n/T_nI$, where reflection is added to circular permutation. In this respect,
the number of $T_n$ set classes is inferred concerning palindrome and pseudo
palindrome $n$-chords and the remaining $n$-chords. The above results are
reproduced within the framework of a geometrical interpretation, where positive
integer points related to $n$-chords of cardinality, $n$, belong to a regular
inclined $n$-hedron, $\Psi_{12}^n$, the vertexes lying on the coordinate axes
of a Cartesian orthogonal reference frame at a distance, $x_i=12$, $1\le i\le
n$, from the origin. Considering $\Psi_{12}^n$ as special cases of lattice
polytopes, the number of related nonnegative integer points is also determined
for completeness. A comparison is performed with the results inferred from
group theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
Determining rough first order perturbations of the polyharmonic operator | We show that the knowledge of the Dirichlet-to-Neumann map for rough $A$ and $q$
in $(-\Delta)^m +A\cdot D +q$ for $m \geq 2$ for a bounded domain in
$\mathbb{R}^n$, $n \geq 3$ determines $A$ and $q$ uniquely. The unique
identifiability is proved using properties of products of functions in Sobolev
spaces and constructing complex geometrical optics solutions with sufficient
decay of remainder terms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Data Science: A Three Ring Circus or a Big Tent? | This is part of a collection of discussion pieces on David Donoho's paper 50
Years of Data Science, appearing in Volume 26, Issue 4 of the Journal of
Computational and Graphical Statistics (2017).
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Control of Partially Observable Piecewise Deterministic Markov Processes | In this paper we consider a control problem for a Partially Observable
Piecewise Deterministic Markov Process of the following type: After the jump of
the process the controller receives a noisy signal about the state and the aim
is to control the process continuously in time in such a way that the expected
discounted cost of the system is minimized. We solve this optimization problem
by reducing it to a discrete-time Markov Decision Process. This includes the
derivation of a filter for the unobservable state. Imposing sufficient
continuity and compactness assumptions we are able to prove the existence of
optimal policies and show that the value function satisfies a fixed point
equation. A generic application is given to illustrate the results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Moment-based parameter estimation in binomial random intersection graph models | Binomial random intersection graphs can be used as parsimonious statistical
models of large and sparse networks, with one parameter for the average degree
and another for transitivity, the tendency of neighbours of a node to be
connected. This paper discusses the estimation of these parameters from a
single observed instance of the graph, using moment estimators based on
observed degrees and frequencies of 2-stars and triangles. The observed data
set is assumed to be a subgraph induced by a set of $n_0$ nodes sampled from
the full set of $n$ nodes. We prove the consistency of the proposed estimators
by showing that the relative estimation error is small with high probability
for $n_0 \gg n^{2/3} \gg 1$. As a byproduct, our analysis confirms that the
empirical transitivity coefficient of the graph is with high probability close
to the theoretical clustering coefficient of the model.
| 1 | 0 | 1 | 1 | 0 | 0 |
Efficient Compression and Indexing of Trajectories | We present a new compressed representation of free trajectories of moving
objects. It combines a partial-sums-based structure that retrieves in constant
time the position of the object at any instant, with a hierarchical
minimum-bounding-boxes representation that allows determining if the object is
seen in a certain rectangular area during a time period. Combined with spatial
snapshots at regular intervals, the representation is shown to outperform
classical ones by orders of magnitude in space, and also to outperform previous
compressed representations in time performance, when using the same amount of
space.
| 1 | 0 | 0 | 0 | 0 | 0 |
Denoising Linear Models with Permuted Data | The multivariate linear regression model with shuffled data and additive
Gaussian noise arises in various correspondence estimation and matching
problems. Focusing on the denoising aspect of this problem, we provide a
characterization of the minimax error rate that is sharp up to logarithmic
factors. We also analyze the performance of two versions of a computationally
efficient estimator, and establish their consistency for a large range of input
parameters. Finally, we provide an exact algorithm for the noiseless problem
and demonstrate its performance on an image point-cloud matching task. Our
analysis also extends to datasets with outliers.
| 0 | 0 | 1 | 1 | 0 | 0 |
Collisions in shape memory alloys | We present here a model for instantaneous collisions in a solid made of shape
memory alloys (SMA) by means of a predictive theory which is based on the
introduction not only of macroscopic velocities and temperature, but also of
microscopic velocities responsible for the austenite-martensite phase changes.
Assuming time discontinuities for velocities, volume fractions and temperature,
and applying the principles of thermodynamics for non-smooth evolutions
together with constitutive laws typical of SMA, we end up with a system of
nonlinearly coupled elliptic equations for which we prove an existence and
uniqueness result in the 2 and 3 D cases. Finally, we also present numerical
results for a SMA 2D solid subject to an external percussion by an hammer
stroke.
| 0 | 0 | 1 | 0 | 0 | 0 |
Big Data Meets HPC Log Analytics: Scalable Approach to Understanding Systems at Extreme Scale | Today's high-performance computing (HPC) systems are heavily instrumented,
generating logs containing information about abnormal events, such as critical
conditions, faults, errors and failures, system resource utilization, and about
the resource usage of user applications. These logs, once fully analyzed and
correlated, can produce detailed information about system health and the root
causes of failures, and reveal an application's interactions with the system,
providing valuable insights to domain scientists and system administrators.
However, processing HPC logs requires a deep understanding of hardware and
software components at multiple layers of the system stack. Moreover, most log
data is unstructured and voluminous, making it more difficult for system users
and administrators to manually inspect the data. With rapid increases in the
scale and complexity of HPC systems, log data processing is becoming a big data
challenge. This paper introduces an HPC log data analytics framework that is
based on a distributed NoSQL database technology, which provides scalability
and high availability, and the Apache Spark framework for rapid in-memory
processing of the log data. The analytics framework enables the extraction of a
range of information about the system so that system administrators and end
users alike can obtain necessary insights for their specific needs. We describe
our experience with using this framework to glean insights from the log data
about system behavior from the Titan supercomputer at the Oak Ridge National
Laboratory.
| 1 | 0 | 0 | 0 | 0 | 0 |
Clustering Spectrum of scale-free networks | Real-world networks often have power-law degrees and scale-free properties
such as ultra-small distances and ultra-fast information spreading. In this
paper, we study a third universal property: three-point correlations that
suppress the creation of triangles and signal the presence of hierarchy. We
quantify this property in terms of $\bar c(k)$, the probability that two
neighbors of a degree-$k$ node are neighbors themselves. We investigate how the
clustering spectrum $k\mapsto\bar c(k)$ scales with $k$ in the hidden variable
model and show that $\bar c(k)$ follows a {\it universal curve} that consists of
three $k$-ranges where $\bar c(k)$ remains flat, starts declining, and
eventually settles on a power law $\bar c(k)\sim k^{-\alpha}$ with $\alpha$
depending on the power law of the degree distribution. We test these results
against ten contemporary real-world networks and explain analytically why the
universal curve properties only reveal themselves in large networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
Multiplex model of mental lexicon reveals explosive learning in humans | Word similarities affect language acquisition and use in a multi-relational
way barely accounted for in the literature. We propose a multiplex network
representation of this mental lexicon of word similarities as a natural
framework for investigating large-scale cognitive patterns. Our representation
accounts for semantic, taxonomic, and phonological interactions and it
identifies a cluster of words which are used with greater frequency, are
identified, memorised, and learned more easily, and have more meanings than
expected at random. This cluster emerges around age 7 through an explosive
transition not reproduced by null models. We relate this explosive emergence to
polysemy -- redundancy in word meanings. Results indicate that the word cluster
acts as a core for the lexicon, increasing both lexical navigability and
robustness to linguistic degradation. Our findings provide quantitative
confirmation of existing conjectures about core structure in the mental lexicon
and the importance of integrating multi-relational word-word interactions in
psycholinguistic frameworks.
| 1 | 1 | 0 | 0 | 0 | 0 |
Weighted $L_{p,q}$-estimates for higher order elliptic and parabolic systems with BMO coefficients on Reifenberg flat domains | We prove weighted $L_{p,q}$-estimates for divergence type higher order
elliptic and parabolic systems with irregular coefficients on Reifenberg flat
domains. In particular, in the parabolic case the coefficients do not have any
regularity assumptions in the time variable. As functions of the spatial
variables, the leading coefficients are permitted to have small mean
oscillations. The weights are in the class of Muckenhoupt weights $A_p$. We
also prove the solvability in weighted Sobolev spaces for the systems in the
whole space, on a half space, and on bounded Reifenberg flat domains.
| 0 | 0 | 1 | 0 | 0 | 0 |
Various sharp estimates for semi-discrete Riesz transforms of the second order | We give several sharp estimates for a class of combinations of second order
Riesz transforms on Lie groups ${G}={G}_{x} \times {G}_{y}$ that are multiply
connected, composed of a discrete abelian component ${G}_{x}$ and a connected
component ${G}_{y}$ endowed with a biinvariant measure. These estimates include
new sharp $L^p$ estimates via Choi type constants, depending upon the
multipliers of the operator. They also include weak-type, logarithmic and
exponential estimates. We give an optimal $L^q \to L^p$ estimate as well.
It was shown recently by Arcozzi, Domelevo and Petermichl that such second
order Riesz transforms applied to a function may be written as conditional
expectation of a simple transformation of a stochastic integral associated with
the function.
The proofs of our theorems combine this stochastic integral representation
with a number of deep estimates for pairs of martingales under strong
differential subordination by Choi, Banuelos and Osekowski.
When two continuous directions are available, sharpness is shown via the
laminates technique. We show that sharpness is preserved in the discrete case
using the Lax-Richtmyer theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Linear Spectral Estimators and an Application to Phase Retrieval | Phase retrieval refers to the problem of recovering real- or complex-valued
vectors from magnitude measurements. The best-known algorithms for this problem
are iterative in nature and rely on so-called spectral initializers that
provide accurate initialization vectors. We propose a novel class of estimators
suitable for general nonlinear measurement systems, called linear spectral
estimators (LSPEs), which can be used to compute accurate initialization
vectors for phase retrieval problems. The proposed LSPEs not only provide
accurate initialization vectors for noisy phase retrieval systems with
structured or random measurement matrices, but also enable the derivation of
sharp and nonasymptotic mean-squared error bounds. We demonstrate the efficacy
of LSPEs on synthetic and real-world phase retrieval problems, and show that
our estimators significantly outperform existing methods for structured
measurement systems that arise in practice.
| 0 | 0 | 0 | 1 | 0 | 0 |
A geometric perspective on the method of descent | We derive a representation formula for the tensorial wave equation $\Box_{g}
\phi^I=F^I$ in globally hyperbolic Lorentzian spacetimes $(\mathcal{M}^{2+1}, g)$ by
giving a geometric formulation of the method of descent which is applicable for
any dimension.
| 0 | 0 | 1 | 0 | 0 | 0 |
Detecting Changes in Hidden Markov Models | We consider the problem of sequential detection of a change in the
statistical behavior of a hidden Markov model. By adopting a worst-case
analysis with respect to the time of change and by taking into account the data
that can be accessed by the change-imposing mechanism we offer alternative
formulations of the problem. For each formulation we derive the optimum
Shewhart test that maximizes the worst-case detection probability while
guaranteeing infrequent false alarms.
| 0 | 0 | 1 | 1 | 0 | 0 |
Towards an Empirical Study of Affine Types for Isolated Actors in Scala | LaCasa is a type system and programming model to enforce the object
capability discipline in Scala, and to provide affine types. One important
application of LaCasa's type system is software isolation of concurrent
processes. Isolation is important for several reasons including security and
data-race freedom. Moreover, LaCasa's affine references enable efficient,
by-reference message passing while guaranteeing a "deep-copy" semantics. This
deep-copy semantics enables programmers to seamlessly port concurrent programs
running on a single machine to distributed programs running on large-scale
clusters of machines.
This paper presents an integration of LaCasa with actors in Scala,
specifically, the Akka actor-based middleware, one of the most widely-used
actor systems in industry. The goal of this integration is to statically ensure
the isolation of Akka actors. Importantly, we present the results of an
empirical study investigating the effort required to use LaCasa's type system
in existing open-source Akka-based systems and applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
FPGA Architecture for Deep Learning and its application to Planetary Robotics | Autonomous control systems onboard planetary rovers and spacecraft benefit
from having cognitive capabilities like learning so that they can adapt to
unexpected situations in-situ. Q-learning is a form of reinforcement learning
and it has been effective in solving certain classes of learning problems.
However, embedded systems onboard planetary rovers and spacecraft rarely
implement learning algorithms due to the constraints faced in the field, like
processing power, chip size, convergence rate and costs due to the need for
radiation hardening. These challenges present a compelling need for a portable,
low-power, area-efficient hardware accelerator to make learning algorithms
practical onboard space hardware. This paper presents an FPGA implementation of
Q-learning with Artificial Neural Networks (ANN). This method matches the
massive parallelism inherent in neural network software with the fine-grain
parallelism of an FPGA hardware thereby dramatically reducing processing time.
Mars Science Laboratory currently uses Xilinx-Space-grade Virtex FPGA devices
for image processing, pyrotechnic operation control and obstacle avoidance. We
simulate and program our architecture on a Xilinx Virtex 7 FPGA. The
architectural implementations for a single-neuron Q-learning accelerator and a
more complex Multilayer Perceptron (MLP) Q-learning accelerator have been
demonstrated. The results show up to a 43-fold speed-up on Virtex 7 FPGAs compared to a
conventional Intel i5 2.3 GHz CPU. Finally, we simulate the proposed
architecture using the Symphony simulator and compiler from Xilinx, and
evaluate the performance and power consumption.
| 1 | 1 | 0 | 0 | 0 | 0 |
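The update rule at the heart of the accelerator is standard Q-learning; a tabular software reference might look like the following (the FPGA design in the abstract replaces the table with an ANN, and the toy chain environment here is an illustrative assumption):

```python
import numpy as np

def q_learning(n_states, n_actions, step, alpha=0.1, gamma=0.9,
               eps=0.1, episodes=500, seed=0):
    """Tabular Q-learning reference using the update
        Q[s,a] += alpha * (r + gamma * max_a' Q[s',a'] - Q[s,a]).
    `step(s, a)` returns (next_state, reward, done)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = int(rng.integers(n_actions))   # explore
            else:
                a = int(Q[s].argmax())             # exploit
            s2, r, done = step(s, a)
            target = r + gamma * Q[s2].max() * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

# Toy chain 0-1-2-3-4: action 1 moves right, action 0 moves left;
# reaching state 4 yields reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 4), s2 == 4

Q = q_learning(5, 2, step)
policy = Q.argmax(axis=1)   # learned greedy policy: move right everywhere
```

The inner update is a single multiply-accumulate per state-action visit, which is exactly the kind of fine-grained arithmetic that maps well onto FPGA fabric.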
Boundary Layer Problems in the Viscosity-Diffusion Vanishing Limits for the Incompressible MHD Systems | In this paper, we study boundary layer problems for the incompressible MHD
systems in the presence of physical boundaries with the standard Dirichlet
boundary conditions and small generic viscosity and diffusion coefficients. We
identify a non-trivial class of initial data for which we can establish the
uniform stability of the Prandtl-type boundary layers and prove rigorously
that the solutions to the viscous and diffusive incompressible MHD systems
converge strongly to the superposition of the solution to the ideal MHD
systems with a Prandtl-type boundary layer corrector. One of the main
difficulties is to deal with the effect of the difference between viscosity and
diffusion coefficients and to control the singular boundary layers resulting
from the Dirichlet boundary conditions for both the velocity and the magnetic
fields. One key observation here is that for the class of initial data we
identify, there exist cancellations between the boundary layers of the
velocity field and those of the magnetic field, so that one can use an elaborate
energy method to take advantage of this special structure. In addition, in the
case of fixed positive viscosity, we also establish the stability of diffusive
boundary layer for the magnetic field and convergence of solutions in the limit
of zero magnetic diffusion for general initial data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Why Adaptively Collected Data Have Negative Bias and How to Correct for It | From scientific experiments to online A/B testing, the previously observed
data often affects how future experiments are performed, which in turn affects
which data will be collected. Such adaptivity introduces complex correlations
between the data and the collection procedure. In this paper, we prove that
when the data collection procedure satisfies natural conditions, then sample
means of the data have systematic \emph{negative} biases. As an example,
consider an adaptive clinical trial where additional data points are more
likely to be tested for treatments that show initial promise. Our surprising
result implies that the average observed treatment effects would underestimate
the true effects of each treatment. We quantitatively analyze the magnitude and
behavior of this negative bias in a variety of settings. We also propose a
novel debiasing algorithm based on selective inference techniques. In
experiments, our method can effectively reduce bias and estimation error.
| 1 | 0 | 0 | 1 | 0 | 0 |
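The mechanism is easy to reproduce in simulation: when an arm that happens to look bad stops receiving samples, its low sample mean is frozen, while lucky streaks get averaged away by further data. A minimal two-arm sketch; the allocation rule and constants are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def adaptive_trial(rng, rounds=20):
    """Two arms, both with true mean 0. Each round, the arm whose current
    sample mean is higher receives the next observation; return arm 0's
    final sample mean."""
    s = [rng.normal(), rng.normal()]  # running sums (one initial draw each)
    n = [1, 1]                        # sample counts
    for _ in range(rounds):
        k = 0 if s[0] / n[0] >= s[1] / n[1] else 1  # favor the promising arm
        s[k] += rng.normal()
        n[k] += 1
    return s[0] / n[0]

rng = np.random.default_rng(0)
bias = float(np.mean([adaptive_trial(rng) for _ in range(20000)]))
# Although both true means are 0, the average observed mean is negative:
# low means freeze (no more data), high means get averaged back down.
```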
Machine learning of neuroimaging to diagnose cognitive impairment and dementia: a systematic review and comparative analysis | INTRODUCTION: Advanced machine learning methods might help to identify
dementia risk from neuroimaging, but their accuracy to date is unclear.
METHODS: We systematically reviewed the literature, 2006 to late 2016, for
machine learning studies differentiating healthy ageing through to dementia of
various types, assessing study quality, and comparing accuracy at different
disease boundaries.
RESULTS: Of 111 relevant studies, most assessed Alzheimer's disease (AD) vs
healthy controls, used ADNI data, support vector machines and only T1-weighted
sequences. Accuracy was highest for differentiating AD from healthy controls,
and poor for differentiating healthy controls vs MCI vs AD, or MCI converters
vs non-converters. Accuracy increased using combined data types, but not by
data source, sample size or machine learning method.
DISCUSSION: Machine learning does not differentiate clinically-relevant
disease categories yet. More diverse datasets, combinations of different types
of data, and close clinical integration of machine learning would help to
advance the field.
| 0 | 0 | 0 | 0 | 1 | 0 |
Exact Diffusion for Distributed Optimization and Learning --- Part II: Convergence Analysis | Part I of this work [2] developed the exact diffusion algorithm to remove the
bias that is characteristic of distributed solutions for deterministic
optimization problems. The algorithm was shown to be applicable to a larger set
of combination policies than earlier approaches in the literature. In
particular, the combination matrices are not required to be doubly stochastic,
a requirement that would impose stringent conditions on the graph topology and
communication protocol. In this Part II, we examine the convergence and stability properties
of exact diffusion in some detail and establish its linear convergence rate. We
also show that it has a wider stability range than the EXTRA consensus
solution, meaning that it is stable for a wider range of step-sizes and can,
therefore, attain faster convergence rates. Analytical examples and numerical
simulations illustrate the theoretical findings.
| 0 | 0 | 1 | 0 | 0 | 0 |
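The adapt-correct-combine recursion from Part I can be sketched as follows; the quadratic local costs and ring weights are illustrative assumptions, chosen so that the exact aggregate minimizer (the average of the local data) is known in closed form:

```python
import numpy as np

def exact_diffusion(grads, A, mu=0.2, iters=1000):
    """Adapt-correct-combine sketch of exact diffusion:
        psi_i = w_{i-1} - mu * grad(w_{i-1})   (adapt)
        phi_i = psi_i + w_{i-1} - psi_{i-1}    (correct)
        w_i   = A_bar @ phi_i                  (combine), A_bar = (I + A)/2
    grads: per-agent gradient functions; A: symmetric doubly stochastic."""
    N = len(grads)
    A_bar = (np.eye(N) + A) / 2
    w = np.zeros(N)
    psi_prev = w - mu * np.array([g(wk) for g, wk in zip(grads, w)])
    w = A_bar @ psi_prev              # first step carries no correction
    for _ in range(iters):
        psi = w - mu * np.array([g(wk) for g, wk in zip(grads, w)])
        phi = psi + w - psi_prev
        w = A_bar @ phi
        psi_prev = psi
    return w

# Agent k holds J_k(w) = 0.5 * (w - x_k)^2; the aggregate minimizer is
# the average of the x_k (here 4.0), which plain diffusion reaches only
# up to an O(mu) bias.
x = np.array([1.0, 2.0, 3.0, 10.0])
grads = [lambda w, xk=xk: w - xk for xk in x]
A = np.array([[0.5, 0.25, 0.0, 0.25],    # ring, symmetric doubly stochastic
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
w = exact_diffusion(grads, A)
```

The "correct" step is what removes the bias: it accumulates the difference between consecutive adapt steps, so the consensus fixed point is the exact minimizer rather than a step-size-dependent approximation.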
Approximating meta-heuristics with homotopic recurrent neural networks | Many combinatorial optimisation problems are NP-hard, i.e., they
cannot be solved in polynomial time. One
such problem is finding the shortest route between two nodes on a graph.
Meta-heuristic algorithms such as $A^{*}$ along with mixed-integer programming
(MIP) methods are often employed for these problems. Our work demonstrates that
it is possible to approximate solutions generated by a meta-heuristic algorithm
using a deep recurrent neural network. We compare different methodologies based
on reinforcement learning (RL) and recurrent neural networks (RNNs) to gauge
their respective quality of approximation. We show the viability of recurrent
neural network solutions on a graph that has over 300 nodes and argue that a
sequence-to-sequence network rather than other recurrent networks has improved
approximation quality. Additionally, we argue that homotopy continuation --
which increases the chances of hitting an extremum -- further improves the estimate
generated by a vanilla RNN.
| 1 | 0 | 0 | 1 | 0 | 0 |
Real embedding and equivariant eta forms | In 1993, Bismut and Zhang established a mod Z embedding formula of
Atiyah-Patodi-Singer reduced eta invariants. In this paper, we explain the
hidden mod Z term as a spectral flow and extend this embedding formula to the
equivariant family case. In this case, the spectral flow is generalized to the
equivariant Chern character of some equivariant Dai-Zhang higher spectral flow.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hierarchical Block Sparse Neural Networks | Sparse deep neural networks (DNNs) are efficient in both memory and compute
when compared to dense DNNs. But due to irregularity in computation of sparse
DNNs, their efficiencies are much lower than those of dense DNNs on regular
parallel hardware such as TPUs. This inefficiency leads to poor or no performance
benefits for sparse DNNs. The performance issue of sparse DNNs can be alleviated
by bringing structure to the sparsity and leveraging it to improve runtime
efficiency. But such structural constraints often lead to suboptimal
accuracies. In this work, we jointly address both accuracy and performance of
sparse DNNs using our proposed class of sparse neural networks called HBsNN
(Hierarchical Block sparse Neural Networks). For a given sparsity, HBsNN models
achieve better runtime performance than unstructured sparse models and better
accuracy than highly structured sparse models.
| 0 | 0 | 0 | 1 | 0 | 0 |
Are crossing dependencies really scarce? | The syntactic structure of a sentence can be modelled as a tree, where
vertices correspond to words and edges indicate syntactic dependencies. It has
been claimed recurrently that the number of edge crossings in real sentences is
small. However, a baseline or null hypothesis has been lacking. Here we
quantify the amount of crossings of real sentences and compare it to the
predictions of a series of baselines. We conclude that crossings are really
scarce in real sentences. Their scarcity is unexpected given the hubiness of the
trees. Indeed, real sentences are close to linear trees, where the potential
number of crossings is maximized.
| 1 | 1 | 0 | 0 | 0 | 0 |
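Counting crossings in a dependency tree is straightforward once edges are given as pairs of word positions: two edges cross when their spans interleave. A small standalone sketch using the standard definition:

```python
def count_crossings(edges):
    """Count crossing pairs among dependency edges: edges (i, j) and
    (k, l) over word positions cross iff their spans interleave, i.e.
    exactly one endpoint of one edge lies strictly inside the other."""
    spans = [tuple(sorted(e)) for e in edges]
    crossings = 0
    for a in range(len(spans)):
        for b in range(a + 1, len(spans)):
            (i, j), (k, l) = spans[a], spans[b]
            if i < k < j < l or k < i < l < j:
                crossings += 1
    return crossings

# A linear tree over positions 1-2-3-4 has no crossings, while the
# edges (1,3) and (2,4) interleave and cross once.
linear = count_crossings([(1, 2), (2, 3), (3, 4)])
crossed = count_crossings([(1, 3), (2, 4)])
```

Nested edges such as (1,4) and (2,3) do not cross under this definition, which is why tree shape (hubiness) matters for how many crossings are even possible.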
Theory of Compact Hausdorff Shape | In this paper, we aim to establish a new shape theory, compact Hausdorff
shape (CH-shape), for general Hausdorff spaces. We use the "internal" method and
direct system approach on the homotopy category of compact Hausdorff spaces.
Such a construction preserves most of the good properties of H-shape given by Rubin
and Sanders. Most importantly, we develop the entire homology
theory for CH-shape, including exactness, dual to the result of
Mardešić and Segal.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Proof of the Herschel-Maxwell Theorem Using the Strong Law of Large Numbers | In this article, we use the strong law of large numbers to give a proof of
the Herschel-Maxwell theorem, which characterizes the normal distribution as
the distribution of the components of a spherically symmetric random vector,
provided they are independent. We present shorter proofs under additional
moment assumptions, and include a remark, which leads to another strikingly
short proof of Maxwell's characterization using the central limit theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Heating and cooling of coronal loops with turbulent suppression of parallel heat conduction | Using the "enthalpy-based thermal evolution of loops" (EBTEL) model, we
investigate the hydrodynamics of the plasma in a flaring coronal loop in which
heat conduction is limited by turbulent scattering of the electrons that
transport the thermal heat flux. The EBTEL equations are solved analytically in
each of the two (conduction-dominated and radiation-dominated) cooling phases.
Comparison of the results with typical observed cooling times in solar flares
shows that the turbulent mean free path $\lambda_T$ lies in a range
corresponding to a regime in which classical (collision-dominated) conduction
plays at most a limited role. We also consider the magnitude and duration of
the heat input that is necessary to account for the enhanced values of
temperature and density at the beginning of the cooling phase and for the
observed cooling times. We find through numerical modeling that in order to
produce a peak temperature $\simeq 1.5 \times 10^7$~K and a 200~s cooling time
consistent with observations, the flare heating profile must extend over a
significant period of time; in particular, its lingering role must be taken
into consideration in any description of the cooling phase. Comparison with
observationally-inferred values of post-flare loop temperatures, densities, and
cooling times thus leads to useful constraints on both the magnitude and
duration of the magnetic energy release in the loop, as well as on the value of
the turbulent mean free path $\lambda_T$.
| 0 | 1 | 0 | 0 | 0 | 0 |
How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights | Particle filters are a popular and flexible class of numerical algorithms to
solve a large class of nonlinear filtering problems. However, standard particle
filters with importance weights have been shown to require a sample size that
increases exponentially with the dimension D of the state space in order to
achieve a certain performance, which precludes their use in very
high-dimensional filtering problems. Here, we focus on the dynamic aspect of
this curse of dimensionality (COD) in continuous time filtering, which is
caused by the degeneracy of importance weights over time. We show that the
degeneracy occurs on a time-scale that decreases with increasing D. In order to
soften the effects of weight degeneracy, most particle filters use particle
resampling and improved proposal functions for the particle motion. We explain
why neither of the two can prevent the COD in general. In order to address this
fundamental problem, we investigate an existing filtering algorithm based on
optimal feedback control that sidesteps the use of importance weights. We use
numerical experiments to show that this Feedback Particle Filter (FPF) by Yang
et al. (2013) does not exhibit a COD.
| 0 | 0 | 1 | 1 | 0 | 0 |
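The weight-degeneracy side of the COD is easy to see numerically: a single importance-weighting step of a bootstrap filter in dimension D collapses the effective sample size as D grows. A sketch in which the Gaussian prior/likelihood and constants are illustrative assumptions:

```python
import numpy as np

def ess_one_step(D, n_particles=1000, seed=0):
    """Effective sample size after a single importance-weighting step of a
    bootstrap filter: particles from the prior N(0, I_D), observation
    y = x_true + N(0, I_D) noise, Gaussian likelihood weights."""
    rng = np.random.default_rng(seed)
    x_true = rng.standard_normal(D)
    y = x_true + rng.standard_normal(D)
    particles = rng.standard_normal((n_particles, D))
    logw = -0.5 * np.sum((y - particles) ** 2, axis=1)
    logw -= logw.max()            # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)   # ESS = 1 / sum of squared normalized weights

ess = {D: ess_one_step(D) for D in (1, 5, 20, 50)}
# ESS collapses from hundreds of effective particles toward 1 as D grows.
```

Resampling resets this collapse but cannot prevent it within a step, which is the motivation for weight-free schemes such as the Feedback Particle Filter discussed in the abstract.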
Motion and Cooperative Transportation Planning for Multi-Agent Systems under Temporal Logic Formulas | This paper presents a hybrid control framework for the motion planning of a
multi-agent system including N robotic agents and M objects, under high level
goals expressed as Linear Temporal Logic (LTL) formulas. In particular, we
design control protocols that allow the transition of the agents as well as the
cooperative transportation of the objects by the agents, among predefined
regions of interest in the workspace. This allows us to abstract the coupled
behavior of the agents and the objects as a finite transition system and to
design a high-level multi-agent plan that satisfies the agents' and the
objects' specifications, given as temporal logic formulas. Simulation results
verify the proposed framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Realistic finite temperature simulations of magnetic systems using quantum statistics | We have performed realistic atomistic simulations at finite temperatures
using Monte Carlo and atomistic spin dynamics simulations incorporating quantum
(Bose-Einstein) statistics. The description is much improved at low
temperatures compared to classical (Boltzmann) statistics normally used in
these kinds of simulations, while at higher temperatures the classical
statistics are recovered. This corrected low-temperature description is
reflected in both magnetization and the magnetic specific heat, the latter
allowing for improved modeling of the magnetic contribution to free energies. A
central property in the method is the magnon density of states at finite
temperatures and we have compared several different implementations for
obtaining it. The method has no restrictions regarding chemical and magnetic
order of the considered materials. This is demonstrated by applying the method
to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co
random alloys and the ferrimagnetic system GdFe$_3$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effective Tensor Sketching via Sparsification | In this paper, we investigate effective sketching schemes via sparsification
for high-dimensional multilinear arrays, or tensors. More specifically, we
propose a novel tensor sparsification algorithm that retains a subset of the
entries of a tensor in a judicious way, and prove that it can attain a given
level of approximation accuracy in terms of tensor spectral norm with a much
smaller sample complexity when compared with existing approaches. In
particular, we show that for a $k$th order $d\times\cdots\times d$ cubic tensor
of {\it stable rank} $r_s$, the sample size requirement for achieving a
relative error $\varepsilon$ is, up to a logarithmic factor, of the order
$r_s^{1/2} d^{k/2} /\varepsilon$ when $\varepsilon$ is relatively large, and of the order
$r_s d /\varepsilon^2$, which is essentially optimal, when $\varepsilon$ is
sufficiently small. It is especially noteworthy that the sample size
requirement for achieving a high accuracy is of an order independent of $k$. To
further demonstrate the utility of our techniques, we also study how higher
order singular value decomposition (HOSVD) of large tensors can be efficiently
approximated via sparsification.
| 1 | 0 | 0 | 1 | 0 | 0 |
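A generic element-wise sparsification scheme of the kind the abstract studies can be sketched as follows. This is an illustrative sketch, not the paper's exact sampling rule, and since the tensor spectral norm is NP-hard to compute, the demo checks the error on the spectral norm of a matrix unfolding as a proxy:

```python
import numpy as np

def sparsify(T, sample_frac=0.3, seed=0):
    """Element-wise tensor sparsification sketch: keep entry T[i] with
    probability p_i proportional to T[i]^2 (capped at 1) and rescale kept
    entries by 1/p_i, so the sparsified tensor is unbiased for T."""
    rng = np.random.default_rng(seed)
    p = np.minimum(T ** 2 / (T ** 2).sum() * (sample_frac * T.size), 1.0)
    keep = rng.random(T.shape) < p
    S = np.zeros_like(T)
    S[keep] = T[keep] / p[keep]
    return S

# Low-stable-rank test tensor: a rank-one 3-way tensor plus small noise.
rng = np.random.default_rng(1)
d = 30
u, v, w = (rng.standard_normal(d) for _ in range(3))
T = np.einsum('i,j,k->ijk', u, v, w) + 0.01 * rng.standard_normal((d, d, d))
S = sparsify(T)
frac_kept = np.count_nonzero(S) / T.size
# Proxy error check: spectral norm of the mode-1 unfolding.
err = np.linalg.norm((T - S).reshape(d, -1), 2) / np.linalg.norm(T.reshape(d, -1), 2)
```

Sampling proportionally to squared magnitude equalizes the per-entry variance contribution, which is why the retained fraction can be small while the unfolding's spectral error stays moderate.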